tonic v0.2.2
API Reference
===
Modules
---
[Tonic](Tonic.html)
A DSL for conveniently loading binary data/files
[Tonic.Types](Tonic.Types.html)
Exceptions
---
[Tonic.MarkNotFound](Tonic.MarkNotFound.html)
[Tonic.NotEmpty](Tonic.NotEmpty.html)
tonic v0.2.2 Tonic
===
A DSL for conveniently loading binary data/files.
The DSL is designed to closely represent the structure of the actual binary data
layout. So it aims to be easy to read, and easy to change.
The DSL defines functionality to represent types, endianness, groups, chunks,
repeated data, branches, and optional segments. The majority of these functions
can be further extended to customize the behaviour and result.
The default behaviour of these operations is to remove the data that was read from the
current binary data and append the value to the current block. By default a value is
returned as a tagged tuple `{ :name, value }` if a name is supplied, or simply as
`value` if no name is supplied. The default return value behaviour can be
overridden by passing in a function.
The most common types are defined in [`Tonic.Types`](Tonic.Types.html) for convenience. These are
common integer and floating point types, and strings. The behaviour of these types can
be further customized when they are used; otherwise, new types can be defined using the [`type/2`](#type/2)
function.
To use the DSL, call `use Tonic` in your module and include any additional type modules
you may require. Then you are free to write the DSL directly inside the module. Certain
options may be passed to the library on `use`, to indicate additional behaviours. The
currently supported options are:
`optimize:` can be passed `true` to enable all optimizations, or a keyword list
enabling specific optimizations. Enabling optimizations may make debugging
trickier, so it is best to enable them only after the spec has been tested. The current
specific optimizations include:
```
:reduce #Enables the code reduction optimization, so the generated code is reduced as much as possible.
```
Example
---
```
defmodule PNG do
    use Tonic, optimize: true

    endian :big
    repeat :magic, 8, :uint8
    repeat :chunks do
        uint32 :length
        string :type, length: 4
        chunk get(:length) do
            on get(:type) do
                "IHDR" ->
                    uint32 :width
                    uint32 :height
                    uint8 :bit_depth
                    uint8 :colour_type
                    uint8 :compression_type
                    uint8 :filter_method
                    uint8 :interlace_method
                "gAMA" ->
                    uint32 :gamma, fn { name, value } -> { name, value / 100000 } end
                "cHRM" ->
                    group :white_point do
                        uint32 :x, fn { name, value } -> { name, value / 100000 } end
                        uint32 :y, fn { name, value } -> { name, value / 100000 } end
                    end
                    group :red do
                        uint32 :x, fn { name, value } -> { name, value / 100000 } end
                        uint32 :y, fn { name, value } -> { name, value / 100000 } end
                    end
                    group :green do
                        uint32 :x, fn { name, value } -> { name, value / 100000 } end
                        uint32 :y, fn { name, value } -> { name, value / 100000 } end
                    end
                    group :blue do
                        uint32 :x, fn { name, value } -> { name, value / 100000 } end
                        uint32 :y, fn { name, value } -> { name, value / 100000 } end
                    end
                "iTXt" ->
                    string :keyword, ?\0
                    string :text
                _ -> repeat :uint8
            end
        end
        uint32 :crc
    end
end
#Example load result:
#{{:magic, [137, 80, 78, 71, 13, 10, 26, 10]},
# {:chunks,
# [{{:length, 13}, {:type, "IHDR"}, {:width, 48}, {:height, 40},
# {:bit_depth, 8}, {:colour_type, 6}, {:compression_type, 0},
# {:filter_method, 0}, {:interlace_method, 0}, {:crc, 3095886193}},
# {{:length, 4}, {:type, "gAMA"}, {:gamma, 0.45455}, {:crc, 201089285}},
# {{:length, 32}, {:type, "cHRM"}, {:white_point, {:x, 0.3127}, {:y, 0.329}},
# {:red, {:x, 0.64}, {:y, 0.33}}, {:green, {:x, 0.3}, {:y, 0.6}},
# {:blue, {:x, 0.15}, {:y, 0.06}}, {:crc, 2629456188}},
# {{:length, 345}, {:type, "iTXt"}, {:keyword, "XML:com.adobe.xmp"},
# {:text,
# <<0, 0, 0, 0, 60, 120, 58, 120, 109, 112, 109, 101, 116, 97, 32, 120, 109, 108, 110, 115, 58, 120, 61, 34, 97, 100, 111, 98, 101, 58, 110, 115, 58, 109, 101, 116, 97, 47, 34, ...>>},
# {:crc, 1287792473}},
# {{:length, 1638}, {:type, "IDAT"},
# [88, 9, 237, 216, 73, 143, 85, 69, 24, 198, 241, 11, 125, 26, 68, 148, 25,
# 109, 4, 154, 102, 114, 192, 149, 70, 137, 137, 209, 152, 152, 24, 19, 190,
# 131, 75, 22, 234, 55, 224, 59, ...], {:crc, 2269121590}},
# {{:length, 0}, {:type, "IEND"}, [], {:crc, 2923585666}}]}}
```
Summary
===
[Types](#types)
---
[ast()](#t:ast/0)
[block(body)](#t:block/1)
[callback()](#t:callback/0)
[endianness()](#t:endianness/0)
[length()](#t:length/0)
[signedness()](#t:signedness/0)
[Functions](#functions)
---
[chunk(length, block)](#chunk/2)
Extract a chunk of data for processing
[empty!()](#empty!/0)
Assert that all the data has been loaded from the current data. If there is still data,
the [`Tonic.NotEmpty`](Tonic.NotEmpty.html) exception will be raised
[endian(endianness)](#endian/1)
Sets the default endianness used by types where endianness is not specified
[get(name_or_fun)](#get/1)
Get the loaded value by using either a name to lookup the value, or a function to manually
look it up
[get(name, fun)](#get/2)
Get the loaded value with name, and pass the value into a function
[group(block)](#group/1)
Group the load operations
[group(name, block)](#group/2)
Group the load operations, wrapping them with the given name
[group(name, fun, block)](#group/3)
Group the load operations, wrapping them with the given name and passing the result to a callback
[load(data, module)](#load/2)
Loads the binary data using the spec from a given module
[load_file(file, module)](#load_file/2)
Loads the file data using the spec from a given module
[on(condition, list)](#on/2)
Executes the given load operations of a particular clause that matches the condition
[optional(type)](#optional/1)
Optionally execute the given load operations
[repeat(type)](#repeat/1)
Repeat the given load operations until it reaches the end
[repeat(name, type)](#repeat/2)
Repeat the given load operations until it reaches the end or for length
[repeat(name, length, type)](#repeat/3)
Repeat the given load operations for length
[repeat(name, length, fun, block)](#repeat/4)
Repeats the load operations for length, passing the result to a callback
[skip(type)](#skip/1)
Skip the given load operations
[spec()](#spec/0)
Execute the current spec again
[spec(module)](#spec/1)
Execute the provided spec
[type(name, type)](#type/2)
Declare a new type as an alias of another type or of a function
[type(name, type, endianness)](#type/3)
Declare a new type as an alias of another type with an overriding (fixed) endianness
[type(name, type, size, signedness)](#type/4)
Declare a new type for a binary type of size with signedness (if used)
[type(name, type, size, signedness, endianness)](#type/5)
Declare a new type for a binary type of size with signedness (if used) and an overriding (fixed) endianness
Types
===
```
ast() :: [Macro.t](https://hexdocs.pm/elixir/Macro.html#t:t/0)()
```
```
block(body) :: [{:do, body}]
```
```
callback() :: ({[any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} -> [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())
```
```
endianness() :: :little | :big | :native
```
```
length() :: [non_neg_integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)() | ([list](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() -> [boolean](https://hexdocs.pm/elixir/typespecs.html#built-in-types)())
```
```
signedness() :: :signed | :unsigned
```
Functions
===
(macro)
```
chunk([length](#t:length/0)(), [block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Extract a chunk of data for processing.
Executes the load operations only on the given chunk.
**`chunk([length](#t:length/0), [block(any)](#t:block/1)) :: [ast](#t:ast/0)`**
Uses the block as the load operation on the chunk of length.
Example
---
```
chunk 4 do
    uint8 :a
    uint8 :b
end

chunk 4 do
    repeat :uint8
end
```
(macro)
Assert that all the data has been loaded from the current data. If there is still data,
the [`Tonic.NotEmpty`](Tonic.NotEmpty.html) exception will be raised.
Example
---
```
int8 :a
empty!() #check that there is no data left
```
(macro)
```
endian([endianness](#t:endianness/0)()) :: [ast](#t:ast/0)()
```
Sets the default endianness used by types where endianness is not specified.
Examples
---
```
endian :little
uint32 :value #little endian

endian :big
uint32 :value #big endian

endian :little
uint32 :value, :big #big endian
```
(macro)
```
get([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [ast](#t:ast/0)()
```
```
get(([list](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() -> [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Get the loaded value by using either a name to lookup the value, or a function to manually
look it up.
**`get(atom) :: any`**
Using a name for the lookup will cause it to search for that matched name in the current
loaded data scope and containing scopes (but not separate branched scopes). If the name
is not found, an exception will be raised [`Tonic.MarkNotFound`](Tonic.MarkNotFound.html).
**`get(fun) :: any`**
Using a function for the lookup will cause it to pass the current state to the function,
where the function can return the value you want to get.
Examples
---
```
uint8 :length
repeat get(:length), :uint8

uint8 :length
repeat get(fn [[{ :length, length }]] -> length end), :uint8
```
(macro)
```
get([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), ([list](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() -> [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Get the loaded value with name, and pass the value into a function.
Examples
---
```
uint8 :length
repeat get(:length, fn length -> length - 1 end)
```
(macro)
```
group([block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Group the load operations.
Examples
---
```
group do
    uint8 :a
    uint8 :b
end
```
(macro)
```
group([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Group the load operations, wrapping them with the given name.
Examples
---
```
group :values do
    uint8 :a
    uint8 :b
end
```
(macro)
```
group([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [callback](#t:callback/0)(), [block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Group the load operations, wrapping them with the given name and passing the result to a callback.
Examples
---
```
group :values, fn { _, value } -> value end do
    uint8 :a
    uint8 :b
end
```
```
load([bitstring](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: {[any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [bitstring](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}
```
Loads the binary data using the spec from a given module.
The return value consists of the loaded values and the remaining data that wasn't read.
```
load_file([Path.t](https://hexdocs.pm/elixir/Path.html#t:t/0)(), [module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: {[any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [bitstring](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}
```
Loads the file data using the spec from a given module.
The return value consists of the loaded values and the remaining data that wasn't read.
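For illustration, a minimal usage sketch of calling these from outside a spec module (`PNG` refers to the example module defined above; `data` and the file name are hypothetical placeholders):
```
{ values, remaining } = Tonic.load(data, PNG)
{ values, remaining } = Tonic.load_file("image.png", PNG)
```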
(macro)
```
on([term](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [{:do, [{:->, [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}]}]) :: [ast](#t:ast/0)()
```
Executes the given load operations of a particular clause that matches the condition.
Examples
---
```
uint8 :type

on get(:type) do
    1 -> uint32 :value
    2 -> float32 :value
end
```
(macro)
```
optional([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [ast](#t:ast/0)()
```
```
optional([block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Optionally execute the given load operations.
Usually if the current data does not match what is trying to be loaded, a match error
will be raised and the data will not be loaded successfully. Using `optional` is a way
to avoid that. If there is a match error the load operations it attempted to execute
will be skipped, and it will continue on with the rest of the data spec. If there
isn't a match error then the load operations that were attempted will be combined with
the current loaded data.
**`optional(atom) :: [ast](#t:ast/0)`**
Optionally load the given type.
**`optional([block(any)](#t:block/1)) :: [ast](#t:ast/0)`**
Optionally load the given block.
Example
---
```
optional :uint8

optional do
    uint8 :a
    uint8 :b
end
```
(macro)
```
repeat([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [ast](#t:ast/0)()
```
```
repeat([block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Repeat the given load operations until it reaches the end.
**`repeat(atom) :: [ast](#t:ast/0)`**
Uses the type as the load operation to be repeated.
**`repeat([block(any)](#t:block/1)) :: [ast](#t:ast/0)`**
Uses the block as the load operation to be repeated.
Examples
---
```
repeat :uint8

repeat do
    uint8 :a
    uint8 :b
end
```
(macro)
```
repeat([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [ast](#t:ast/0)()
```
```
repeat([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
```
repeat([length](#t:length/0)(), [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [ast](#t:ast/0)()
```
```
repeat([length](#t:length/0)(), [block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Repeat the given load operations until it reaches the end or for length.
**`repeat(atom, atom) :: [ast](#t:ast/0)`**
Uses the type as the load operation to be repeated. And wraps the output with the given
name.
**`repeat(atom, [block(any)](#t:block/1)) :: [ast](#t:ast/0)`**
Uses the block as the load operation to be repeated. And wraps the output with the given
name.
**`repeat([length](#t:length/0), atom) :: [ast](#t:ast/0)`**
Uses the type as the load operation to be repeated. And repeats for length.
**`repeat([length](#t:length/0), [block(any)](#t:block/1)) :: [ast](#t:ast/0)`**
Uses the block as the load operation to be repeated. And repeats for length.
Examples
---
```
repeat :values, :uint8

repeat :values do
    uint8 :a
    uint8 :b
end

repeat 4, :uint8

repeat fn _ -> false end, :uint8

repeat 2 do
    uint8 :a
    uint8 :b
end

repeat fn _ -> false end do
    uint8 :a
    uint8 :b
end
```
(macro)
```
repeat([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [length](#t:length/0)(), [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [ast](#t:ast/0)()
```
```
repeat([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [length](#t:length/0)(), [block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Repeat the given load operations for length.
**`repeat(atom, [length](#t:length/0), atom) :: [ast](#t:ast/0)`**
Uses the type as the load operation to be repeated. And wraps the output with the given
name. Repeats for length.
**`repeat(atom, [length](#t:length/0), [block(any)](#t:block/1)) :: [ast](#t:ast/0)`**
Uses the block as the load operation to be repeated. And wraps the output with the given
name. Repeats for length.
Examples
---
```
repeat :values, 4, :uint8

repeat :values, fn _ -> false end, :uint8

repeat :values, 4 do
    uint8 :a
    uint8 :b
end

repeat :values, fn _ -> false end do
    uint8 :a
    uint8 :b
end
```
(macro)
```
repeat([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [length](#t:length/0)(), [callback](#t:callback/0)(), [block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Repeats the load operations for length, passing the result to a callback.
Examples
---
```
repeat :values, 4, fn result -> result end do
    uint8 :a
    uint8 :b
end

repeat :values, 4, fn { name, value } -> value end do
    uint8 :a
    uint8 :b
end

repeat :values, fn _ -> false end, fn result -> result end do
    uint8 :a
    uint8 :b
end
```
(macro)
```
skip([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [ast](#t:ast/0)()
```
```
skip([block](#t:block/1)([any](https://hexdocs.pm/elixir/typespecs.html#basic-types)())) :: [ast](#t:ast/0)()
```
Skip the given load operations.
Executes the load operations but doesn't return the loaded data.
**`skip(atom) :: [ast](#t:ast/0)`**
Skip the given type.
**`skip([block(any)](#t:block/1)) :: [ast](#t:ast/0)`**
Skip the given block.
Example
---
```
skip :uint8

skip do
    uint8 :a
    uint8 :b
end
```
(macro)
```
spec() :: [ast](#t:ast/0)()
```
Execute the current spec again.
Examples
---
```
defmodule Recursive do
    use Tonic

    uint8
    optional do: spec
end
# Tonic.load <<1, 2, 3>>, Recursive
# => {{1, {2, {3}}}, ""}
```
(macro)
```
spec([module](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: [ast](#t:ast/0)()
```
Execute the provided spec.
Examples
---
```
defmodule Foo do
    use Tonic

    uint8
    spec Bar
end

defmodule Bar do
    use Tonic

    uint8 :a
    uint8 :b
end
# Tonic.load <<1, 2, 3>>, Foo
# => {{1, {{:a, 2}, {:b, 3}}}, ""}
```
(macro)
```
type([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [ast](#t:ast/0)()
```
```
type([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), ([bitstring](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [endianness](#t:endianness/0)() -> {[any](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [bitstring](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()})) ::
[ast](#t:ast/0)()
```
Declare a new type as an alias of another type or of a function.
**`type(atom, atom) :: [ast](#t:ast/0)`**
Create the new type as an alias of another type.
**`type(atom, (bitstring, atom, [endianness](#t:endianness/0) -> { any, bitstring })) :: [ast](#t:ast/0)`**
Implement the type as a function.
Examples
---
```
type :myint8, :int8

type :myint8, fn data, name, _ ->
    <<value :: integer-size(8)-signed, data :: bitstring>> = data
    { { name, value }, data }
end

type :myint16, fn
    data, name, :little ->
        <<value :: integer-size(16)-signed-little, data :: bitstring>> = data
        { { name, value }, data }
    data, name, :big ->
        <<value :: integer-size(16)-signed-big, data :: bitstring>> = data
        { { name, value }, data }
    data, name, :native ->
        <<value :: integer-size(16)-signed-native, data :: bitstring>> = data
        { { name, value }, data }
end
```
(macro)
```
type([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [endianness](#t:endianness/0)()) :: [ast](#t:ast/0)()
```
Declare a new type as an alias of another type with an overriding (fixed) endianness.
Examples
---
```
type :mylittleint16, :int16, :little
```
(macro)
```
type([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [non_neg_integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [signedness](#t:signedness/0)()) :: [ast](#t:ast/0)()
```
Declare a new type for a binary type of size with signedness (if used).
Examples
---
```
type :myint16, :integer, 16, :signed
```
(macro)
```
type([atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [atom](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [non_neg_integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)(), [signedness](#t:signedness/0)(), [endianness](#t:endianness/0)()) :: [ast](#t:ast/0)()
```
Declare a new type for a binary type of size with signedness (if used) and an overriding (fixed) endianness.
Examples
---
```
type :mylittleint16, :integer, 16, :signed, :little
```
tonic v0.2.2 Tonic.Types
===
Summary
===
[Functions](#functions)
---
[bit()](#bit/0)
Read a single bit boolean value
[bit(label)](#bit/1)
[bit(label, endianness_or_fun)](#bit/2)
[bit(label, endianness, fun)](#bit/3)
[bit(currently_loaded, data, name, endianness)](#bit/4)
[float32()](#float32/0)
Read a 32-bit floating point
[float32(label)](#float32/1)
[float32(label, endianness_or_fun)](#float32/2)
[float32(label, endianness, fun)](#float32/3)
[float32(currently_loaded, data, name, atom)](#float32/4)
[float64()](#float64/0)
Read a 64-bit floating point
[float64(label)](#float64/1)
[float64(label, endianness_or_fun)](#float64/2)
[float64(label, endianness, fun)](#float64/3)
[float64(currently_loaded, data, name, atom)](#float64/4)
[int16()](#int16/0)
Read a 16-bit signed integer
[int16(label)](#int16/1)
[int16(label, endianness_or_fun)](#int16/2)
[int16(label, endianness, fun)](#int16/3)
[int16(currently_loaded, data, name, atom)](#int16/4)
[int32()](#int32/0)
Read a 32-bit signed integer
[int32(label)](#int32/1)
[int32(label, endianness_or_fun)](#int32/2)
[int32(label, endianness, fun)](#int32/3)
[int32(currently_loaded, data, name, atom)](#int32/4)
[int64()](#int64/0)
Read a 64-bit signed integer
[int64(label)](#int64/1)
[int64(label, endianness_or_fun)](#int64/2)
[int64(label, endianness, fun)](#int64/3)
[int64(currently_loaded, data, name, atom)](#int64/4)
[int8()](#int8/0)
Read an 8-bit signed integer
[int8(label)](#int8/1)
[int8(label, endianness_or_fun)](#int8/2)
[int8(label, endianness, fun)](#int8/3)
[int8(currently_loaded, data, name, atom)](#int8/4)
[string(name \\ [], options \\ [])](#string/2)
Read a string
[uint16()](#uint16/0)
Read a 16-bit unsigned integer
[uint16(label)](#uint16/1)
[uint16(label, endianness_or_fun)](#uint16/2)
[uint16(label, endianness, fun)](#uint16/3)
[uint16(currently_loaded, data, name, atom)](#uint16/4)
[uint32()](#uint32/0)
Read a 32-bit unsigned integer
[uint32(label)](#uint32/1)
[uint32(label, endianness_or_fun)](#uint32/2)
[uint32(label, endianness, fun)](#uint32/3)
[uint32(currently_loaded, data, name, atom)](#uint32/4)
[uint64()](#uint64/0)
Read a 64-bit unsigned integer
[uint64(label)](#uint64/1)
[uint64(label, endianness_or_fun)](#uint64/2)
[uint64(label, endianness, fun)](#uint64/3)
[uint64(currently_loaded, data, name, atom)](#uint64/4)
[uint8()](#uint8/0)
Read an 8-bit unsigned integer
[uint8(label)](#uint8/1)
[uint8(label, endianness_or_fun)](#uint8/2)
[uint8(label, endianness, fun)](#uint8/3)
[uint8(currently_loaded, data, name, atom)](#uint8/4)
Functions
===
(macro)
Read a single bit boolean value.
(macro)
(macro)
(macro)
(macro)
Read a 32-bit floating point.
(macro)
(macro)
(macro)
(macro)
Read a 64-bit floating point.
(macro)
(macro)
(macro)
(macro)
Read a 16-bit signed integer.
(macro)
(macro)
(macro)
(macro)
Read a 32-bit signed integer.
(macro)
(macro)
(macro)
(macro)
Read a 64-bit signed integer.
(macro)
(macro)
(macro)
(macro)
Read an 8-bit signed integer.
(macro)
(macro)
(macro)
(macro)
Read a string.
By default it will read until the end of the data. Otherwise a length value can be specified with `length: 10`,
or it can read up to a terminator `?\n` or `terminator: ?\n`, or both limits can be applied.
The string can have its trailing characters stripped by using `strip: ?\n` or `strip: "\n"`.
Examples
---
```
string :read_to_end
string :read_8_chars, length: 8
string :read_till_nul, 0
string :read_till_newline, ?\n
string :read_till_newline_or_8_chars, length: 8, terminator: ?\n
string :read_to_end_remove_newline, strip: ?\n
```
(macro)
Read a 16-bit unsigned integer.
(macro)
(macro)
(macro)
(macro)
Read a 32-bit unsigned integer.
(macro)
(macro)
(macro)
(macro)
Read a 64-bit unsigned integer.
(macro)
(macro)
(macro)
(macro)
Read an 8-bit unsigned integer.
(macro)
(macro)
(macro)
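These readers follow the same calling conventions shown in the `Tonic` examples above: an optional label, an optional endianness, and an optional callback. A brief illustrative sketch (the field names here are hypothetical):
```
endian :little

uint16 :count                 #little endian (module default)
int32 :offset, :big           #per-field endianness override
float32 :scale, fn { name, value } -> { name, value * 2 } end
```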
tonic v0.2.2 Tonic.MarkNotFound exception
===
tonic v0.2.2 Tonic.NotEmpty exception
===
PyFR Documentation
Release 1.15.0
Imperial College London
Sep 30, 2022
PyFR 1.15.0 is an open-source flow solver that uses the high-order flux reconstruction method. For more information on the PyFR project visit our website, or to ask a question visit our forum.
1 Installation
1.1 Quick-start
PyFR 1.15.0 can be installed using pip and virtualenv, as shown in the quick-start guides below.
1.1.1 macOS
It is assumed that the Xcode Command Line Tools and Homebrew are already installed. Follow the steps below to set up the OpenMP backend on macOS:
1. Install MPI:
brew install mpi4py
2. Install METIS and set the library path:
brew install metis
export PYFR_METIS_LIBRARY_PATH=/opt/homebrew/lib/libmetis.dylib
3. Download and install libxsmm and set the library path:
git clone git@github.com:libxsmm/libxsmm.git
cd libxsmm
make -j4 STATIC=0 BLAS=0
export PYFR_XSMM_LIBRARY_PATH=`pwd`/lib/libxsmm.dylib
4. Make a venv and activate it:
python3.10 -m venv pyfr-venv
source pyfr-venv/bin/activate
5. Install PyFR:
pip install pyfr
6. Add the following to your Configuration File (.ini):
[backend-openmp]
cc = gcc-12
Note the version of the compiler, which must support the openmp flag. This has been tested on macOS 12.5 with an Apple M1 Max.
1.1.2 Ubuntu
Follow the steps below to set up the OpenMP backend on Ubuntu:
1. Install Python and MPI:
sudo apt install python3 python3-pip libopenmpi-dev openmpi-bin
pip3 install virtualenv
2. Install METIS:
sudo apt install metis libmetis-dev
3. Download and install libxsmm and set the library path:
git clone git@github.com:libxsmm/libxsmm.git
cd libxsmm
make -j4 STATIC=0 BLAS=0
export PYFR_XSMM_LIBRARY_PATH=`pwd`/lib/libxsmm.so
4. Make a virtualenv and activate it:
python3 -m virtualenv pyfr-venv
source pyfr-venv/bin/activate
5. Install PyFR:
pip install pyfr
This has been tested on Ubuntu 20.04.
1.2 Compiling from source
PyFR can be obtained here. To install the software from source, use the provided setup.py installer or add the root PyFR directory to PYTHONPATH using:
user@computer ~/PyFR$ export PYTHONPATH=.:$PYTHONPATH
When installing from source, we strongly recommend using pip and virtualenv to manage the Python dependencies.
1.2.1 Dependencies
PyFR 1.15.0 has a hard dependency on Python 3.9+ and the following Python packages:
1. gimmik >= 3.0
2. h5py >= 2.10
3. mako >= 1.0.0
4. mpi4py >= 3.0
5. numpy >= 1.20
6. platformdirs >= 2.2.0
7. pytools >= 2016.2.1
Note that due to a bug in NumPy, PyFR is not compatible with 32-bit Python distributions.
1.2.1.1 CUDA Backend
The CUDA backend targets NVIDIA GPUs with a compute capability of 3.0 or greater. The backend requires:
1. CUDA >= 11.4
1.2.1.2 HIP Backend
The HIP backend targets AMD GPUs which are supported by the ROCm stack. The backend requires:
1. ROCm >= 5.2.0
2. rocBLAS >= 2.41.0
1.2.1.3 OpenCL Backend
The OpenCL backend targets a range of accelerators including GPUs from AMD, Intel, and NVIDIA. The backend requires:
1. OpenCL >= 2.1
2. Optionally CLBlast
Note that when running on NVIDIA GPUs the OpenCL backend may terminate with a segmentation fault after the simulation has finished. This is due to a long-standing bug in how the NVIDIA OpenCL implementation handles sub-buffers. As it occurs during the termination phase—after all data has been written out to disk—the issue does not impact the functionality or correctness of PyFR.
1.2.1.4 OpenMP Backend
The OpenMP backend targets multi-core x86-64 and ARM CPUs. The backend requires:
1. GCC >= 12.0 or another C compiler with OpenMP 5.1 support
2. libxsmm >= commit 0db15a0da13e3d9b9e3d57b992ecb3384d2e15ea compiled as a shared library (STATIC=0) with BLAS=0.
In order for PyFR to find libxsmm it must be located in a directory which is on the library search path. Alternatively, the path can be specified explicitly by exporting the environment variable PYFR_XSMM_LIBRARY_PATH=/path/to/libxsmm.so.
1.2.1.5 Parallel
To partition meshes for running in parallel it is also necessary to have one of the following partitioners installed:
1. METIS >= 5.0
2. SCOTCH >= 6.0
In order for PyFR to find these libraries they must be located in a directory which is on the library search path. Alternatively, the paths can be specified explicitly by exporting the environment variables PYFR_METIS_LIBRARY_PATH=/path/to/libmetis.so and/or PYFR_SCOTCH_LIBRARY_PATH=/path/to/libscotch.so.
2 User Guide
For information on how to install PyFR see Installation.
2.1 Running PyFR
PyFR 1.15.0 uses three distinct file formats:
1. .ini — configuration file
2. .pyfrm — mesh file
3. .pyfrs — solution file
The following commands are available from the pyfr program:
1. pyfr import — convert a Gmsh .msh file into a PyFR .pyfrm file.
Example:
pyfr import mesh.msh mesh.pyfrm
2. pyfr partition — partition an existing mesh and associated solution files.
Example:
pyfr partition 2 mesh.pyfrm solution.pyfrs .
3. pyfr run — start a new PyFR simulation. Example:
pyfr run mesh.pyfrm configuration.ini
4. pyfr restart — restart a PyFR simulation from an existing solution file. Example:
pyfr restart mesh.pyfrm solution.pyfrs
5. pyfr export — convert a PyFR .pyfrs file into an unstructured VTK .vtu or .pvtu file. If a -k flag is
provided with an integer argument then .pyfrs elements are converted to high-order VTK cells which are
exported, where the order of the VTK cells is equal to the value of the integer argument. Example:
pyfr export -k 4 mesh.pyfrm solution.pyfrs solution.vtu
If a -d flag is provided with an integer argument then .pyfrs elements are subdivided into linear VTK cells
which are exported, where the number of sub-divisions is equal to the value of the integer argument. Example:
pyfr export -d 4 mesh.pyfrm solution.pyfrs solution.vtu
If no flags are provided then .pyfrs elements are converted to high-order VTK cells which are exported, where
the order of the cells is equal to the order of the solution data in the .pyfrs file.
2.1.1 Running in Parallel
PyFR can be run in parallel. To do so prefix pyfr with mpiexec -n <cores/devices>. Note that the mesh must be pre-partitioned, and the number of cores or devices must be equal to the number of partitions.
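For example, to run a simulation on a mesh that has been partitioned into four pieces (the partition count here is purely illustrative):
mpiexec -n 4 pyfr run mesh.pyfrm configuration.ini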
2.2 Configuration File (.ini)
The .ini configuration file parameterises the simulation. It is written in the INI format. Parameters are grouped into sections. The roles of each section and their associated parameters are described below. Note that both ; and # may be used as comment characters. Additionally, all parameter values support environment variable expansion.
2.2.1 Backends
The backend sections detail how the solver will be configured for a range of different hardware platforms. If a hardware specific backend section is omitted, then PyFR will fall back to built-in default settings.
2.2.1.1 [backend]
Parameterises the backend with
1. precision — number precision:
single | double
2. rank-allocator — MPI rank allocator:
linear | random
3. collect-wait-times — whether to track MPI request wait times or not:
True | False
4. collect-wait-times-len — Size of the wait time history buffer:
int
Example:
[backend]
precision = double
rank-allocator = linear
2.2.1.2 [backend-cuda]
Parameterises the CUDA backend with
1. device-id — method for selecting which device(s) to run on:
int | round-robin | local-rank | uuid
2. mpi-type — type of MPI library that is being used:
standard | cuda-aware
3. cflags — additional NVIDIA realtime compiler (nvrtc) flags:
string
Example:
[backend-cuda]
device-id = round-robin
mpi-type = standard
2.2.1.3 [backend-hip]
Parameterises the HIP backend with
1. device-id — method for selecting which device(s) to run on:
int | local-rank | uuid
2. mpi-type — type of MPI library that is being used:
standard | hip-aware
Example:
[backend-hip]
device-id = local-rank
mpi-type = standard
2.2.1.4 [backend-opencl]
Parameterises the OpenCL backend with
1. platform-id — for selecting platform id:
int | string
2. device-type — for selecting what type of device(s) to run on:
all | cpu | gpu | accelerator
3. device-id — for selecting which device(s) to run on:
int | string | local-rank | uuid
4. gimmik-max-nnz — cutoff for GiMMiK in terms of the number of non-zero entries in a constant matrix:
int
Example:
[backend-opencl]
platform-id = 0
device-type = gpu
device-id = local-rank
gimmik-max-nnz = 512
2.2.1.5 [backend-openmp]
Parameterises the OpenMP backend with
1. cc — C compiler:
string
2. cflags — additional C compiler flags:
string
3. alignb — alignment requirement in bytes; must be a power of two and at least 32:
int
4. schedule — OpenMP loop scheduling scheme:
static | dynamic | dynamic, n | guided | guided, n
where n is a positive integer.
Example:
[backend-openmp]
cc = gcc
2.2.2 Systems
These sections of the input file set up and control the physical system being solved, as well as characteristics of the spatial and temporal schemes to be used.
2.2.2.1 [constants]
Sets constants used in the simulation
1. gamma — ratio of specific heats for euler | navier-stokes:
float
2. mu — dynamic viscosity for navier-stokes:
float
3. nu — kinematic viscosity for ac-navier-stokes:
float
4. Pr — Prandtl number for navier-stokes:
float
PyFR Documentation, Release 1.15.0
5. cpTref — product of specific heat at constant pressure and reference temperature for navier-stokes with
Sutherland’s Law:
float
6. cpTs — product of specific heat at constant pressure and Sutherland temperature for navier-stokes with
Sutherland’s Law:
float
7. ac-zeta — artificial compressibility factor for ac-euler | ac-navier-stokes
float
Other constants may be set by the user, which can then be used throughout the .ini file.
Example:
[constants]
; PyFR Constants
gamma = 1.4
mu = 0.001
Pr = 0.72

; User Defined Constants
V_in = 1.0
P_out = 20.0
2.2.2.2 [solver]
Parameterises the solver with
1. system — governing system:
euler | navier-stokes | ac-euler | ac-navier-stokes
where
navier-stokes requires
• viscosity-correction — viscosity correction:
none | sutherland
• shock-capturing — shock capturing scheme:
none | artificial-viscosity
2. order — order of polynomial solution basis:
int
3. anti-alias — type of anti-aliasing:
flux | surf-flux | flux, surf-flux
Example:
[solver]
system = navier-stokes
order = 3
anti-alias = flux
viscosity-correction = none
shock-capturing = artificial-viscosity
2.2.2.3 [solver-time-integrator]
Parameterises the time-integration scheme used by the solver with
1. formulation — formulation:
std | dual
where
std requires
• scheme — time-integration scheme
euler | rk34 | rk4 | rk45 | tvd-rk3
• tstart — initial time
float
• tend — final time
float
• dt — time-step
float
• controller — time-step controller
none | pi
where
pi only works with rk34 and rk45 and requires
– atol — absolute error tolerance
float
– rtol — relative error tolerance
float
– errest-norm — norm to use for estimating the error
uniform | l2
– safety-fact — safety factor for step size adjustment (suitable range 0.80-0.95)
float
– min-fact — minimum factor by which the time-step can change between iterations
(suitable range 0.1-0.5)
float
– max-fact — maximum factor by which the time-step can change between iterations
(suitable range 2.0-6.0)
float
– dt-max — maximum permissible time-step
float
dual requires
• scheme — time-integration scheme
backward-euler | sdirk33 | sdirk43
• pseudo-scheme — pseudo time-integration scheme
euler | rk34 | rk4 | rk45 | tvd-rk3 | vermeire
• tstart — initial time
float
• tend — final time
float
• dt — time-step
float
• controller — time-step controller
none
• pseudo-dt — pseudo time-step
float
• pseudo-niters-max — maximum number of iterations
int
• pseudo-niters-min — minimum number of iterations
int
• pseudo-resid-tol — pseudo residual tolerance
float
• pseudo-resid-norm — pseudo residual norm
uniform | l2
• pseudo-controller — pseudo time-step controller
none | local-pi
where
local-pi only works with rk34 and rk45 and requires
– atol — absolute error tolerance
float
– safety-fact — safety factor for pseudo time-step size adjustment (suitable range
0.80-0.95)
float
– min-fact — minimum factor by which the local pseudo time-step can change be-
tween iterations (suitable range 0.98-0.998)
float
– max-fact — maximum factor by which the local pseudo time-step can change be-
tween iterations (suitable range 1.001-1.01)
float
– pseudo-dt-max-mult — maximum permissible local pseudo time-step given as a
multiplier of pseudo-dt (suitable range 2.0-5.0)
float
Example:
[solver-time-integrator]
formulation = std
scheme = rk45
controller = pi
tstart = 0.0
tend = 10.0
dt = 0.001
atol = 0.00001
rtol = 0.00001
errest-norm = l2
safety-fact = 0.9
min-fact = 0.3
max-fact = 2.5
2.2.2.4 [solver-dual-time-integrator-multip]
Parameterises multi-p for dual time-stepping with
1. pseudo-dt-fact — factor by which the pseudo time-step size changes between multi-p levels:
float
2. cycle — nature of a single multi-p cycle:
[(order,nsteps), (order,nsteps), ... (order,nsteps)]
where order in the first and last bracketed pair must be the overall polynomial order used for the
simulation, and order can only change by one between subsequent bracketed pairs.
Example:
[solver-dual-time-integrator-multip]
pseudo-dt-fact = 2.3
cycle = [(3, 1), (2, 1), (1, 1), (0, 2), (1, 1), (2, 1), (3, 3)]
2.2.2.5 [solver-interfaces]
Parameterises the interfaces with
1. riemann-solver — type of Riemann solver:
rusanov | hll | hllc | roe | roem
where
hll | hllc | roe | roem do not work with ac-euler | ac-navier-stokes
2. ldg-beta — beta parameter used for LDG:
float
3. ldg-tau — tau parameter used for LDG:
float
Example:
[solver-interfaces]
riemann-solver = rusanov
ldg-beta = 0.5
ldg-tau = 0.1
2.2.2.6 [solver-source-terms]
Parameterises solution, space (x, y, [z]), and time (t) dependent source terms with
1. rho — density source term for euler | navier-stokes:
string
2. rhou — x-momentum source term for euler | navier-stokes :
string
3. rhov — y-momentum source term for euler | navier-stokes :
string
4. rhow — z-momentum source term for euler | navier-stokes :
string
5. E — energy source term for euler | navier-stokes :
string
6. p — pressure source term for ac-euler | ac-navier-stokes:
string
7. u — x-velocity source term for ac-euler | ac-navier-stokes:
string
8. v — y-velocity source term for ac-euler | ac-navier-stokes:
string
9. w — w-velocity source term for ac-euler | ac-navier-stokes:
string
Example:
[solver-source-terms]
rho = t
rhou = x*y*sin(y)
rhov = z*rho
rhow = 1.0
E = 1.0/(1.0+x)
2.2.2.7 [solver-artificial-viscosity]
Parameterises artificial viscosity for shock capturing with
1. max-artvisc — maximum artificial viscosity:
float
2. s0 — sensor cut-off:
float
3. kappa — sensor range:
float
Example:
[solver-artificial-viscosity]
max-artvisc = 0.01
s0 = 0.01
kappa = 5.0
2.2.2.8 [soln-filter]
Parameterises an exponential solution filter with
1. nsteps — apply filter every nsteps:
int
2. alpha — strength of filter:
float
3. order — order of filter:
int
4. cutoff — cutoff frequency below which no filtering is applied:
int
Example:
[soln-filter]
nsteps = 10
alpha = 36.0
order = 16
cutoff = 1
2.2.3 Boundary and Initial Conditions
These sections allow users to set the boundary and initial conditions of calculations.
2.2.3.1 [soln-bcs-name]
Parameterises constant, or if available space (x, y, [z]) and time (t) dependent, boundary condition labelled name in the
.pyfrm file with
1. type — type of boundary condition:
ac-char-riem-inv | ac-in-fv | ac-out-fp | char-riem-inv | no-slp-adia-wall
| no-slp-isot-wall | no-slp-wall | slp-adia-wall | slp-wall | sub-in-frv |
sub-in-ftpttang | sub-out-fp | sup-in-fa | sup-out-fn
where
ac-char-riem-inv only works with ac-euler | ac-navier-stokes and requires
• ac-zeta — artificial compressibility factor for boundary (increasing ac-zeta makes the bound-
ary less reflective allowing larger deviation from the target state)
float
• niters — number of Newton iterations
int
• p — pressure
float | string
• u — x-velocity
float | string
• v — y-velocity
float | string
• w — z-velocity
float | string
ac-in-fv only works with ac-euler | ac-navier-stokes and requires
• u — x-velocity
float | string
• v — y-velocity
float | string
• w — z-velocity
float | string
ac-out-fp only works with ac-euler | ac-navier-stokes and requires
• p — pressure
float | string
char-riem-inv only works with euler | navier-stokes and requires
• rho — density
float | string
• u — x-velocity
float | string
• v — y-velocity
float | string
• w — z-velocity
float | string
• p — static pressure
float | string
no-slp-adia-wall only works with navier-stokes
no-slp-isot-wall only works with navier-stokes and requires
• u — x-velocity of wall
float
• v — y-velocity of wall
float
• w — z-velocity of wall
float
• cpTw — product of specific heat capacity at constant pressure and temperature of wall
float
no-slp-wall only works with ac-navier-stokes and requires
• u — x-velocity of wall
float
• v — y-velocity of wall
float
• w — z-velocity of wall
float
slp-adia-wall only works with euler | navier-stokes
slp-wall only works with ac-euler | ac-navier-stokes
sub-in-frv only works with navier-stokes and requires
• rho — density
float | string
• u — x-velocity
float | string
• v — y-velocity
float | string
• w — z-velocity
float | string
sub-in-ftpttang only works with navier-stokes and requires
• pt — total pressure
float
• cpTt — product of specific heat capacity at constant pressure and total temperature
float
• theta — azimuth angle (in degrees) of inflow measured in the x-y plane relative to the positive
x-axis
float
• phi — inclination angle (in degrees) of inflow measured relative to the positive z-axis
float
sub-out-fp only works with navier-stokes and requires
• p — static pressure
float | string
sup-in-fa only works with euler | navier-stokes and requires
• rho — density
float | string
• u — x-velocity
float | string
• v — y-velocity
float | string
• w — z-velocity
float | string
• p — static pressure
float | string
sup-out-fn only works with euler | navier-stokes
Example:
[soln-bcs-bcwallupper]
type = no-slp-isot-wall
cpTw = 10.0
u = 1.0
Simple periodic boundary conditions are supported; however, their behaviour is not controlled through the .ini file, instead it is handled at the mesh generation stage. Two faces may be tagged with periodic_x_l and periodic_x_r, where x is a unique identifier for the pair of boundaries. Currently, only periodicity in a single cardinal direction is supported, for example, the planes (x,y,0) and (x,y,10).
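As an illustrative sketch (not part of the PyFR documentation itself), such a tagging can be achieved in Gmsh by assigning physical names to the two faces, for example:
Physical Surface("periodic_0_l") = {1};
Physical Surface("periodic_0_r") = {2};
where 0 is the identifier for the pair and the surface numbers are placeholders for the actual face tags in the mesh.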
2.2.3.2 [soln-ics]
Parameterises space (x, y, [z]) dependent initial conditions with
1. rho — initial density distribution for euler | navier-stokes:
string
2. u — initial x-velocity distribution for euler | navier-stokes | ac-euler | ac-navier-stokes:
string
3. v — initial y-velocity distribution for euler | navier-stokes | ac-euler | ac-navier-stokes:
string
4. w — initial z-velocity distribution for euler | navier-stokes | ac-euler | ac-navier-stokes:
string
5. p — initial static pressure distribution for euler | navier-stokes | ac-euler | ac-navier-stokes:
string
Example:
[soln-ics]
rho = 1.0
u = x*y*sin(y)
v = z
w = 1.0
p = 1.0/(1.0+x)
2.2.4 Nodal Point Sets
Solution point sets must be specified for each element type that is used and flux point sets must be specified for each interface type that is used. If anti-aliasing is enabled then quadrature point sets for each element and interface type that is used must also be specified. For example, a 3D mesh comprised only of prisms requires a solution point set for prism elements and flux point sets for quadrilateral and triangular interfaces.
2.2.4.1 [solver-interfaces-line{-mg-porder}]
Parameterises the line interfaces, or if -mg-porder is suffixed the line interfaces at multi-p level order, with
1. flux-pts — location of the flux points on a line interface:
gauss-legendre | gauss-legendre-lobatto
2. quad-deg — degree of quadrature rule for anti-aliasing on a line interface:
int
3. quad-pts — name of quadrature rule for anti-aliasing on a line interface:
gauss-legendre | gauss-legendre-lobatto
Example:
[solver-interfaces-line]
flux-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre
2.2.4.2 [solver-interfaces-tri{-mg-porder}]
Parameterises the triangular interfaces, or if -mg-porder is suffixed the triangular interfaces at multi-p level order, with
1. flux-pts — location of the flux points on a triangular interface:
williams-shunn
2. quad-deg — degree of quadrature rule for anti-aliasing on a triangular interface:
int
3. quad-pts — name of quadrature rule for anti-aliasing on a triangular interface:
williams-shunn | witherden-vincent
Example:
[solver-interfaces-tri]
flux-pts = williams-shunn
quad-deg = 10
quad-pts = williams-shunn
2.2.4.3 [solver-interfaces-quad{-mg-porder}]
Parameterises the quadrilateral interfaces, or if -mg-porder is suffixed the quadrilateral interfaces at multi-p level order,
with
1. flux-pts — location of the flux points on a quadrilateral interface:
gauss-legendre | gauss-legendre-lobatto
2. quad-deg — degree of quadrature rule for anti-aliasing on a quadrilateral interface:
int
3. quad-pts — name of quadrature rule for anti-aliasing on a quadrilateral interface:
gauss-legendre | gauss-legendre-lobatto | witherden-vincent
Example:
[solver-interfaces-quad]
flux-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre
2.2.4.4 [solver-elements-tri{-mg-porder}]
Parameterises the triangular elements, or if -mg-porder is suffixed the triangular elements at multi-p level order, with
1. soln-pts — location of the solution points in a triangular element:
williams-shunn
2. quad-deg — degree of quadrature rule for anti-aliasing in a triangular element:
int
3. quad-pts — name of quadrature rule for anti-aliasing in a triangular element:
williams-shunn | witherden-vincent
Example:
[solver-elements-tri]
soln-pts = williams-shunn
quad-deg = 10
quad-pts = williams-shunn
2.2.4.5 [solver-elements-quad{-mg-porder}]
Parameterises the quadrilateral elements, or if -mg-porder is suffixed the quadrilateral elements at multi-p level order,
with
1. soln-pts — location of the solution points in a quadrilateral element:
gauss-legendre | gauss-legendre-lobatto
2. quad-deg — degree of quadrature rule for anti-aliasing in a quadrilateral element:
int
3. quad-pts — name of quadrature rule for anti-aliasing in a quadrilateral element:
gauss-legendre | gauss-legendre-lobatto | witherden-vincent
Example:
[solver-elements-quad]
soln-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre
2.2.4.6 [solver-elements-hex{-mg-porder}]
Parameterises the hexahedral elements, or if -mg-porder is suffixed the hexahedral elements at multi-p level order, with
1. soln-pts — location of the solution points in a hexahedral element:
gauss-legendre | gauss-legendre-lobatto
2. quad-deg — degree of quadrature rule for anti-aliasing in a hexahedral element:
int
3. quad-pts — name of quadrature rule for anti-aliasing in a hexahedral element:
gauss-legendre | gauss-legendre-lobatto | witherden-vincent
Example:
[solver-elements-hex]
soln-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre
2.2.4.7 [solver-elements-tet{-mg-porder}]
Parameterises the tetrahedral elements, or if -mg-porder is suffixed the tetrahedral elements at multi-p level order, with
1. soln-pts — location of the solution points in a tetrahedral element:
shunn-ham
2. quad-deg — degree of quadrature rule for anti-aliasing in a tetrahedral element:
int
3. quad-pts — name of quadrature rule for anti-aliasing in a tetrahedral element:
shunn-ham | witherden-vincent
Example:
[solver-elements-tet]
soln-pts = shunn-ham
quad-deg = 10
quad-pts = shunn-ham
2.2.4.8 [solver-elements-pri{-mg-porder}]
Parameterises the prismatic elements, or if -mg-porder is suffixed the prismatic elements at multi-p level order, with
1. soln-pts — location of the solution points in a prismatic element:
williams-shunn~gauss-legendre | williams-shunn~gauss-legendre-lobatto
2. quad-deg — degree of quadrature rule for anti-aliasing in a prismatic element:
int
3. quad-pts — name of quadrature rule for anti-aliasing in a prismatic element:
williams-shunn~gauss-legendre | williams-shunn~gauss-legendre-lobatto | witherden-vincent
Example:
[solver-elements-pri]
soln-pts = williams-shunn~gauss-legendre
quad-deg = 10
quad-pts = williams-shunn~gauss-legendre
2.2.4.9 [solver-elements-pyr{-mg-porder}]
Parameterises the pyramidal elements, or if -mg-porder is suffixed the pyramidal elements at multi-p level order, with
1. soln-pts — location of the solution points in a pyramidal element:
gauss-legendre | gauss-legendre-lobatto
2. quad-deg — degree of quadrature rule for anti-aliasing in a pyramidal element:
int
3. quad-pts — name of quadrature rule for anti-aliasing in a pyramidal element:
witherden-vincent
Example:
[solver-elements-pyr]
soln-pts = gauss-legendre
quad-deg = 10
quad-pts = witherden-vincent
2.2.5 Plugins
Plugins allow for powerful additional functionality to be swapped in and out. It is possible to load multiple instances of the same plugin by appending a tag, for example:
[soln-plugin-writer]
...
[soln-plugin-writer-2]
...
[soln-plugin-writer-three]
...
2.2.5.1 [soln-plugin-writer]
Periodically write the solution to disk in the pyfrs format. Parameterised with
1. dt-out — write to disk every dt-out time units:
float
2. basedir — relative path to directory where outputs will be written:
string
3. basename — pattern of output names:
string
4. post-action — command to execute after writing the file:
string
5. post-action-mode — how the post-action command should be executed:
blocking | non-blocking
6. region — region to be written, specified as either the entire domain using *, a combination of the geometric
shapes specified in Regions, or a sub-region of elements that have faces on a specific domain boundary via the
name of the domain boundary:
* | shape(args, ...) | string
Example:
[soln-plugin-writer]
dt-out = 0.01
basedir = .
basename = files-{t:.2f}
post-action = echo "Wrote file {soln} at time {t} for mesh {mesh}."
post-action-mode = blocking
region = box((-5, -5, -5), (5, 5, 5))
2.2.5.2 [soln-plugin-fluidforce-name]
Periodically integrates the pressure and viscous stress on the boundary labelled name and writes out the resulting force and moment (if requested) vectors to a CSV file. Parameterised with
1. nsteps — integrate every nsteps:
int
2. file — output file path; should the file already exist it will be appended to:
string
3. header — if to output a header row or not:
boolean
4. morigin — origin used to compute moments (optional):
(x, y, [z])
Example:
[soln-plugin-fluidforce-wing]
nsteps = 10
file = wing-forces.csv
header = true
morigin = (0.0, 0.0, 0.5)
2.2.5.3 [soln-plugin-nancheck]
Periodically checks the solution for NaN values. Parameterised with
1. nsteps — check every nsteps:
int
Example:
[soln-plugin-nancheck]
nsteps = 10
2.2.5.4 [soln-plugin-residual]
Periodically calculates the residual and writes it out to a CSV file. Parameterised with
1. nsteps — calculate every nsteps:
int
2. file — output file path; should the file already exist it will be appended to:
string
3. header — if to output a header row or not:
boolean
Example:
[soln-plugin-residual]
nsteps = 10
file = residual.csv
header = true
2.2.5.5 [soln-plugin-dtstats]
Write time-step statistics out to a CSV file. Parameterised with
1. flushsteps — flush to disk every flushsteps:
int
2. file — output file path; should the file already exist it will be appended to:
string
3. header — if to output a header row or not:
boolean
Example:
[soln-plugin-dtstats]
flushsteps = 100
file = dtstats.csv
header = true
2.2.5.6 [soln-plugin-pseudostats]
Write pseudo-step convergence history out to a CSV file. Parameterised with
1. flushsteps — flush to disk every flushsteps:
int
2. file — output file path; should the file already exist it will be appended to:
string
3. header — if to output a header row or not:
boolean
Example:
[soln-plugin-pseudostats]
flushsteps = 100
file = pseudostats.csv
header = true
2.2.5.7 [soln-plugin-sampler]
Periodically samples specific points in the volume and writes them out to a CSV file. The point location process automatically takes advantage of scipy.spatial.cKDTree where available. Parameterised with
1. nsteps — sample every nsteps:
int
2. samp-pts — list of points to sample:
[(x, y), (x, y), ...] | [(x, y, z), (x, y, z), ...]
3. format — output variable format:
primitive | conservative
4. file — output file path; should the file already exist it will be appended to:
string
5. header — if to output a header row or not:
boolean
Example:
[soln-plugin-sampler]
nsteps = 10
samp-pts = [(1.0, 0.7, 0.0), (1.0, 0.8, 0.0)]
format = primitive
file = point-data.csv
header = true
2.2.5.8 [soln-plugin-tavg]
Time average quantities. Parameterised with
1. nsteps — accumulate the average every nsteps time steps:
int
2. dt-out — write to disk every dt-out time units:
float
3. tstart — time at which to start accumulating average data:
float
4. mode — output file accumulation mode:
continuous | windowed
Windowed outputs averages over each dt-out period, whereas continuous outputs averages over all dt-out periods thus far completed within a given invocation of PyFR. The default is windowed.
5. basedir — relative path to directory where outputs will be written:
string
6. basename — pattern of output names:
string
7. precision — output file number precision:
single | double
8. region — region to be written, specified as either the entire domain using *, a combination of the geometric
shapes specified in Regions, or a sub-region of elements that have faces on a specific domain boundary via the
name of the domain boundary:
* | shape(args, ...) | string
9. avg-name — expression to time average, written as a function of the primitive variables and gradients thereof;
multiple expressions, each with their own name, may be specified:
string
10. fun-avg-name — expression to compute at file output time, written as a function of any ordinary average terms;
multiple expressions, each with their own name, may be specified:
string
As fun-avg terms are evaluated at write time, these are only indirectly affected by the averaging mode.
Example:
[soln-plugin-tavg]
nsteps = 10
dt-out = 2.0
mode = windowed
basedir = .
basename = files-{t:06.2f}
avg-u = u
avg-v = v
avg-uu = u*u
avg-vv = v*v
avg-uv = u*v
fun-avg-upup = uu - u*u
fun-avg-vpvp = vv - v*v
fun-avg-upvp = uv - u*v
fun-avg-urms = sqrt(uu - u*u + vv - v*v)
2.2.5.9 [soln-plugin-integrate]
Integrate quantities over the computational domain. Parameterised with:
1. nsteps — calculate the integral every nsteps time steps:
int
2. file — output file path; should the file already exist it will be appended to:
string
3. header — if to output a header row or not:
boolean
4. quad-deg — degree of quadrature rule (optional):
int
5. quad-pts-{etype} — name of quadrature rule (optional):
string
6. region — region to integrate, specified as either the entire domain using * or a combination of the geometric
shapes specified in Regions:
* | shape(args, ...)
7. int-name — expression to integrate, written as a function of the primitive variables and gradients thereof, the
physical coordinates [x, y, [z]] and/or the physical time [t]; multiple expressions, each with their own name, may
be specified:
string
Example:
[soln-plugin-integrate]
nsteps = 50
file = integral.csv
header = true
quad-deg = 9
vor1 = (grad_w_y - grad_v_z)
vor2 = (grad_u_z - grad_w_x)
vor3 = (grad_v_x - grad_u_y)
int-E = rho*(u*u + v*v + w*w)
int-enst = rho*(%(vor1)s*%(vor1)s + %(vor2)s*%(vor2)s + %(vor3)s*%(vor3)s)
2.2.6 Regions
Certain plugins are capable of performing operations on a subset of the elements inside the domain. One means of constructing these element subsets is through parameterised regions. Note that an element is considered part of a region if any of its nodes are found to be contained within the region. Supported regions:
Rectangular cuboid box(x0, x1)
A rectangular cuboid defined by two diametrically opposed vertices. Valid in both 2D and 3D.
Conical frustum conical_frustum(x0, x1, r0, r1)
A conical frustum whose end caps are at x0 and x1 with radii r0 and r1, respectively. Only valid in 3D.
Cone cone(x0, x1, r)
A cone of radius r whose centre-line is defined by x0 and x1. Equivalent to conical_frustum(x0, x1, r,
0). Only valid in 3D.
Cylinder cylinder(x0, x1, r)
A circular cylinder of radius r whose centre-line is defined by x0 and x1. Equivalent to conical_frustum(x0,
x1, r, r). Only valid in 3D.
Cartesian ellipsoid ellipsoid(x0, a, b, c)
An ellipsoid centred at x0 with Cartesian coordinate axes whose extents in the x, y, and z directions are given by
a, b, and c, respectively. Only valid in 3D.
Sphere sphere(x0, r)
A sphere centred at x0 with a radius of r. Equivalent to ellipsoid(x0, r, r, r). Only valid in 3D.
Region expressions can also be added and subtracted together arbitrarily. For example box((-10, -10, -10), (10,
10, 10)) - sphere((0, 0, 0), 3) will result in a cube-shaped region with a sphere cut out of the middle.
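Such an expression can be passed directly to the region key of any plugin which accepts one. A minimal sketch based on the [soln-plugin-tavg] options above (all other values illustrative):
[soln-plugin-tavg]
nsteps = 10
dt-out = 2.0
basedir = .
basename = avg-{t:06.2f}
region = box((-10, -10, -10), (10, 10, 10)) - sphere((0, 0, 0), 3)
avg-u = u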
2.2.7 Additional Information
The INI file format is very versatile. A feature that can be useful in defining initial conditions is the substitution feature and this is demonstrated in the [soln-plugin-integrate] example.
To prevent situations where you have solutions files for unknown configurations, the contents of the .ini file are added as an attribute to .pyfrs files. These files use the HDF5 format and can be straightforwardly probed with tools such as h5dump.
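For example, assuming the configuration is stored under /config alongside the /stats object described in the performance tuning chapter (the dataset name is an assumption), it might be extracted with:
h5dump -d /config -b --output=config.ini soln.pyfrs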
In several places within the .ini file expressions may be used. As well as the constant pi, expressions containing the following functions are supported (a short sketch follows the list):
1. +, -, *, / — basic arithmetic
2. sin, cos, tan — basic trigonometric functions (radians)
3. asin, acos, atan, atan2 — inverse trigonometric functions
4. exp, log — exponential and the natural logarithm
5. tanh — hyperbolic tangent
6. pow — power, note ** is not supported
7. sqrt — square root
8. abs — absolute value
9. min, max — two variable minimum and maximum functions, arguments can be arrays
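As a sketch of how such expressions can be combined, an initial-condition section might read (the [soln-ics] section follows the conventions used elsewhere in the configuration file; the particular expressions are illustrative only):
[soln-ics]
rho = 1.0
u = 0.1*sin(2*pi*x)*cos(2*pi*y)
v = -0.1*cos(2*pi*x)*sin(2*pi*y)
p = 1.0 + 0.05*exp(-(x*x + y*y))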
CHAPTER THREE
DEVELOPER GUIDE
3.1 A Brief Overview of the PyFR Framework
3.1.1 Where to Start
The symbolic link pyfr.scripts.pyfr points to the script pyfr.scripts.main, which is where it all starts! Specifically, the function process_run calls the function _process_common, which in turn calls the function get_solver, returning an Integrator – a composite of a Controller and a Stepper. The Integrator has a method named run, which is then called to run the simulation.
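A minimal sketch of this call chain, assuming the internal interfaces keep the names used in PyFR 1.15.0 (this is not the actual contents of pyfr.scripts.main, and the file names are illustrative):
# Sketch only: mirrors the process_run -> _process_common -> get_solver chain described above
from pyfr.backends import get_backend
from pyfr.inifile import Inifile
from pyfr.rank_allocator import get_rank_allocation
from pyfr.readers.native import NativeReader
from pyfr.solvers import get_solver

mesh = NativeReader('mesh.pyfrm')          # PyFR mesh file
cfg = Inifile.load('config.ini')           # configuration file
backend = get_backend('openmp', cfg)       # any available backend name
rallocs = get_rank_allocation(mesh, cfg)   # MPI rank allocation
solver = get_solver(backend, rallocs, mesh, None, cfg)  # returns an Integrator
solver.run()                               # advance to the final time given in cfg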
3.1.2 Controller
A Controller acts to advance the simulation in time. Specifically, a Controller has a method named advance_to which advances a System to a specified time. There are three types of physical-time Controller available in PyFR 1.15.0:
StdNoneController
class pyfr.integrators.std.controllers.StdNoneController(*args, **kwargs)
_accept_step(dt, idxcurr, err=None)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_check_abort()
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_plugins(initsoln)
_get_reduction_kerns(*rs, **kwargs)
_reject_step(dt, idxold, err=None)
advance_to(t)
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
controller_name = 'none'
property controller_needs_errest
formulation = 'std'
static get_plugin_data_prefix(name, suffix)
property grad_soln
property nsteps
run()
property soln
step(t, dt)
StdPIController
class pyfr.integrators.std.controllers.StdPIController(*args, **kwargs)
_accept_step(dt, idxcurr, err=None)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_check_abort()
_errest(rcurr, rprev, rerr)
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_plugins(initsoln)
_get_reduction_kerns(*rs, **kwargs)
_reject_step(dt, idxold, err=None)
advance_to(t)
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
controller_name = 'pi'
property controller_needs_errest
formulation = 'std'
static get_plugin_data_prefix(name, suffix)
property grad_soln
property nsteps
run()
property soln
step(t, dt)
DualNoneController
class pyfr.integrators.dual.phys.controllers.DualNoneController(*args, **kwargs)
_accept_step(idxcurr)
_check_abort()
_get_plugins(initsoln)
advance_to(t)
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
controller_name = 'none'
formulation = 'dual'
static get_plugin_data_prefix(name, suffix)
property grad_soln
property nsteps
property pseudostepinfo
run()
property soln
step(t, dt)
property system
Types of physical-time Controller are related via the following inheritance diagram:
(Inheritance diagram: BaseCommon, BaseIntegrator, BaseStdIntegrator, BaseStdController, BaseDualIntegrator and BaseDualController relate StdNoneController, StdPIController and DualNoneController.)
There are two types of pseudo-time Controller available in PyFR 1.15.0:
DualNonePseudoController
class pyfr.integrators.dual.pseudo.pseudocontrollers.DualNonePseudoController(*args, **kwargs)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_reduction_kerns(*rs, **kwargs)
property _pseudo_stepper_regidx
_resid(rcurr, rold, dt_fac)
property _source_regidx
property _stage_regidx
property _stepper_regidx
_update_pseudostepinfo(niters, resid)
aux_nregs = 0
convmon(i, minniters, dt_fac=1)
formulation = 'dual'
init_stage(currstg, stepper_coeffs, dt)
obtain_solution(bcoeffs)
pseudo_advance(tcurr)
pseudo_controller_name = 'none'
pseudo_controller_needs_lerrest = False
store_current_soln()
DualPIPseudoController
class pyfr.integrators.dual.pseudo.pseudocontrollers.DualPIPseudoController(*args, **kwargs)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_reduction_kerns(*rs, **kwargs)
property _pseudo_stepper_regidx
_resid(rcurr, rold, dt_fac)
property _source_regidx
property _stage_regidx
property _stepper_regidx
_update_pseudostepinfo(niters, resid)
aux_nregs = 0
convmon(i, minniters, dt_fac=1)
formulation = 'dual'
init_stage(currstg, stepper_coeffs, dt)
localerrest(errbank)
obtain_solution(bcoeffs)
pseudo_advance(tcurr)
pseudo_controller_name = 'local-pi'
pseudo_controller_needs_lerrest = True
store_current_soln()
Types of pseudo-time Controller are related via the following inheritance diagram:
(Inheritance diagram: BaseCommon, BaseDualPseudoIntegrator and BaseDualPseudoController relate DualNonePseudoController and DualPIPseudoController.)
3.1.3 Stepper
A Stepper acts to advance the simulation by a single time-step. Specifically, a Stepper has a method named step which advances a System by a single time-step. There are eight types of Stepper available in PyFR 1.15.0:
StdEulerStepper
class pyfr.integrators.std.steppers.StdEulerStepper(backend, systemcls, rallocs, mesh, initsoln, cfg)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_check_abort()
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_plugins(initsoln)
_get_reduction_kerns(*rs, **kwargs)
property _stepper_nfevals
advance_to(t)
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
property controller_needs_errest
formulation = 'std'
static get_plugin_data_prefix(name, suffix)
property grad_soln
property nsteps
run()
property soln
step(t, dt)
stepper_has_errest = False
stepper_name = 'euler'
stepper_nregs = 2
stepper_order = 1
StdRK4Stepper
class pyfr.integrators.std.steppers.StdRK4Stepper(backend, systemcls, rallocs, mesh, initsoln, cfg)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_check_abort()
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_plugins(initsoln)
_get_reduction_kerns(*rs, **kwargs)
property _stepper_nfevals
advance_to(t)
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
property controller_needs_errest
formulation = 'std'
static get_plugin_data_prefix(name, suffix)
property grad_soln
property nsteps
run()
property soln
step(t, dt)
stepper_has_errest = False
stepper_name = 'rk4'
stepper_nregs = 3
stepper_order = 4
StdRK34Stepper
class pyfr.integrators.std.steppers.StdRK34Stepper(*args, **kwargs)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_check_abort()
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_plugins(initsoln)
_get_reduction_kerns(*rs, **kwargs)
_get_rkvdh2_kerns(stage, r1, r2, rold=None, rerr=None)
property _stepper_nfevals
a = [0.32416573882874605, 0.5570978645055429, -0.08605491431272755]
advance_to(t)
b = [0.10407986927510238, 0.6019391368822611, 2.9750900268840206,
-2.681109033041384]
bhat = [0.3406814840808433, 0.09091523008632837, 2.866496742725443,
-2.298093456892615]
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
property controller_needs_errest
formulation = 'std'
static get_plugin_data_prefix(name, suffix)
property grad_soln
property nsteps
run()
property soln
step(t, dt)
property stepper_has_errest
stepper_name = 'rk34'
property stepper_nregs
stepper_order = 3
StdRK45Stepper
class pyfr.integrators.std.steppers.StdRK45Stepper(*args, **kwargs)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_check_abort()
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_plugins(initsoln)
_get_reduction_kerns(*rs, **kwargs)
_get_rkvdh2_kerns(stage, r1, r2, rold=None, rerr=None)
property _stepper_nfevals
a = [0.22502245872571303, 0.5440433129514047, 0.14456824349399464,
0.7866643421983568]
advance_to(t)
b = [0.05122930664033915, 0.3809548257264019, -0.3733525963923833,
0.5925012850263623, 0.34866717899927996]
bhat = [0.13721732210321927, 0.19188076232938728, -0.2292067211595315,
0.6242946765438954, 0.27581396018302956]
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
property controller_needs_errest
formulation = 'std'
static get_plugin_data_prefix(name, suffix)
property grad_soln
property nsteps
run()
property soln
step(t, dt)
property stepper_has_errest
stepper_name = 'rk45'
property stepper_nregs
stepper_order = 4
StdTVDRK3Stepper
class pyfr.integrators.std.steppers.StdTVDRK3Stepper(backend, systemcls, rallocs, mesh, initsoln, cfg)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_check_abort()
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_plugins(initsoln)
_get_reduction_kerns(*rs, **kwargs)
property _stepper_nfevals
advance_to(t)
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
property controller_needs_errest
formulation = 'std'
static get_plugin_data_prefix(name, suffix)
property grad_soln
property nsteps
run()
property soln
step(t, dt)
stepper_has_errest = False
stepper_name = 'tvd-rk3'
stepper_nregs = 3
stepper_order = 3
DualBackwardEulerStepper
class pyfr.integrators.dual.phys.steppers.DualBackwardEulerStepper(*args, **kwargs)
_check_abort()
_finalize_step()
_get_plugins(initsoln)
a = [[1]]
advance_to(t)
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
formulation = 'dual'
fsal = True
static get_plugin_data_prefix(name, suffix)
property grad_soln
nstages = 1
property nsteps
property pseudostepinfo
run()
property soln
property stage_nregs
step(t, dt)
stepper_name = 'backward-euler'
stepper_nregs = 1
property system
SDIRK33Stepper
class pyfr.integrators.dual.phys.steppers.SDIRK33Stepper(*args, **kwargs)
_al = 0.43586652150845906
_at = 0.11327896981804066
_check_abort()
_finalize_step()
_get_plugins(initsoln)
a = [[0.43586652150845906], [0.28206673924577047, 0.43586652150845906],
[1.2084966491760103, -0.6443631706844692, 0.43586652150845906]]
advance_to(t)
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
formulation = 'dual'
fsal = True
static get_plugin_data_prefix(name, suffix)
property grad_soln
nstages = 3
property nsteps
property pseudostepinfo
run()
property soln
property stage_nregs
step(t, dt)
stepper_name = 'sdirk33'
stepper_nregs = 1
property system
SDIRK43Stepper
class pyfr.integrators.dual.phys.steppers.SDIRK43Stepper(*args, **kwargs)
_a_lam = 1.0685790213016289
_b_rlam = 0.1288864005157204
_check_abort()
_finalize_step()
_get_plugins(initsoln)
a = [[1.0685790213016289], [-0.5685790213016289, 1.0685790213016289],
[2.1371580426032577, -3.2743160852065154, 1.0685790213016289]]
advance_to(t)
b = [0.1288864005157204, 0.7422271989685592, 0.1288864005157204]
call_plugin_dt(dt)
property cfgmeta
collect_stats(stats)
formulation = 'dual'
fsal = False
static get_plugin_data_prefix(name, suffix)
property grad_soln
nstages = 3
property nsteps
property pseudostepinfo
run()
property soln
property stage_nregs
step(t, dt)
stepper_name = 'sdirk43'
stepper_nregs = 1
property system
Types of Stepper are related via the following inheritance diagram:
(Inheritance diagram: BaseCommon, BaseIntegrator, BaseStdIntegrator, BaseStdStepper, BaseDualIntegrator, BaseDualStepper and BaseDIRKStepper relate StdEulerStepper, StdRK4Stepper, SDIRK33Stepper and the other Stepper classes.)
3.1.4 PseudoStepper
A PseudoStepper acts to advance the simulation by a single pseudo-time-step. They are used to converge implicit Stepper time-steps via a dual time-stepping formulation. There are six types of PseudoStepper available in PyFR 1.15.0:
DualDenseRKPseudoStepper
class pyfr.integrators.dual.pseudo.pseudosteppers.DualDenseRKPseudoStepper(*args, **kwargs)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_reduction_kerns(*rs, **kwargs)
property _pseudo_stepper_regidx
_rhs_with_dts(t, uin, fout)
property _source_regidx
property _stage_regidx
property _stepper_regidx
aux_nregs = 0
collect_stats(stats)
formulation = 'dual'
init_stage(currstg, stepper_coeffs, dt)
property ntotiters
obtain_solution(bcoeffs)
property pseudo_stepper_nfevals
step(t)
store_current_soln()
DualRK4PseudoStepper
class pyfr.integrators.dual.pseudo.pseudosteppers.DualRK4PseudoStepper(backend, systemcls, rallocs, mesh, initsoln, cfg, stepper_nregs, stage_nregs, dt)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_reduction_kerns(*rs, **kwargs)
property _pseudo_stepper_regidx
_rhs_with_dts(t, uin, fout)
property _source_regidx
property _stage_regidx
property _stepper_regidx
aux_nregs = 0
collect_stats(stats)
formulation = 'dual'
init_stage(currstg, stepper_coeffs, dt)
property ntotiters
obtain_solution(bcoeffs)
pseudo_stepper_has_lerrest = False
pseudo_stepper_name = 'rk4'
property pseudo_stepper_nfevals
pseudo_stepper_nregs = 3
pseudo_stepper_order = 4
step(t)
store_current_soln()
DualTVDRK3PseudoStepper
class pyfr.integrators.dual.pseudo.pseudosteppers.DualTVDRK3PseudoStepper(backend, systemcls, rallocs, mesh, initsoln, cfg, stepper_nregs, stage_nregs, dt)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_reduction_kerns(*rs, **kwargs)
property _pseudo_stepper_regidx
_rhs_with_dts(t, uin, fout)
property _source_regidx
property _stage_regidx
property _stepper_regidx
aux_nregs = 0
collect_stats(stats)
formulation = 'dual'
init_stage(currstg, stepper_coeffs, dt)
property ntotiters
obtain_solution(bcoeffs)
pseudo_stepper_has_lerrest = False
pseudo_stepper_name = 'tvd-rk3'
property pseudo_stepper_nfevals
pseudo_stepper_nregs = 3
pseudo_stepper_order = 3
step(t)
store_current_soln()
DualEulerPseudoStepper
class pyfr.integrators.dual.pseudo.pseudosteppers.DualEulerPseudoStepper(backend, systemcls, rallocs, mesh, initsoln, cfg, stepper_nregs, stage_nregs, dt)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_reduction_kerns(*rs, **kwargs)
property _pseudo_stepper_regidx
_rhs_with_dts(t, uin, fout)
property _source_regidx
property _stage_regidx
property _stepper_regidx
aux_nregs = 0
collect_stats(stats)
formulation = 'dual'
init_stage(currstg, stepper_coeffs, dt)
property ntotiters
obtain_solution(bcoeffs)
pseudo_stepper_has_lerrest = False
pseudo_stepper_name = 'euler'
property pseudo_stepper_nfevals
pseudo_stepper_nregs = 2
pseudo_stepper_order = 1
step(t)
store_current_soln()
DualRK34PseudoStepper
class pyfr.integrators.dual.pseudo.pseudosteppers.DualRK34PseudoStepper(*args, **kwargs)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_reduction_kerns(*rs, **kwargs)
_get_rkvdh2pseudo_kerns(stage, r1, r2, rold, rerr=None)
property _pseudo_stepper_regidx
_rhs_with_dts(t, uin, fout)
property _source_regidx
property _stage_regidx
property _stepper_regidx
a = [0.32416573882874605, 0.5570978645055429, -0.08605491431272755]
aux_nregs = 0
b = [0.10407986927510238, 0.6019391368822611, 2.9750900268840206,
-2.681109033041384]
bhat = [0.3406814840808433, 0.09091523008632837, 2.866496742725443,
-2.298093456892615]
collect_stats(stats)
formulation = 'dual'
init_stage(currstg, stepper_coeffs, dt)
property ntotiters
obtain_solution(bcoeffs)
property pseudo_stepper_has_lerrest
pseudo_stepper_name = 'rk34'
property pseudo_stepper_nfevals
property pseudo_stepper_nregs
pseudo_stepper_order = 3
step(t)
store_current_soln()
DualRK45PseudoStepper
class pyfr.integrators.dual.pseudo.pseudosteppers.DualRK45PseudoStepper(*args, **kwargs)
_add(*args, subdims=None)
_addv(consts, regidxs, subdims=None)
_get_axnpby_kerns(*rs, subdims=None)
_get_gndofs()
_get_reduction_kerns(*rs, **kwargs)
_get_rkvdh2pseudo_kerns(stage, r1, r2, rold, rerr=None)
property _pseudo_stepper_regidx
_rhs_with_dts(t, uin, fout)
property _source_regidx
property _stage_regidx
property _stepper_regidx
a = [0.22502245872571303, 0.5440433129514047, 0.14456824349399464,
0.7866643421983568]
aux_nregs = 0
b = [0.05122930664033915, 0.3809548257264019, -0.3733525963923833,
0.5925012850263623, 0.34866717899927996]
bhat = [0.13721732210321927, 0.19188076232938728, -0.2292067211595315,
0.6242946765438954, 0.27581396018302956]
collect_stats(stats)
formulation = 'dual'
init_stage(currstg, stepper_coeffs, dt)
property ntotiters
obtain_solution(bcoeffs)
property pseudo_stepper_has_lerrest
pseudo_stepper_name = 'rk45'
property pseudo_stepper_nfevals
property pseudo_stepper_nregs
pseudo_stepper_order = 4
step(t)
store_current_soln()
Note that DualDenseRKPseudoStepper includes families of PseudoStepper whose coefficients are read from .txt files named thus:
{scheme name}-s{stage count}-p{temporal order}-sp{optimal spatial polynomial order}.txt
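For instance, a hypothetical scheme called opt-rk with four stages, third-order temporal accuracy and an optimal spatial polynomial order of five would have its coefficients read from opt-rk-s4-p3-sp5.txt.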
Types of PseudoStepper are related via the following inheritance diagram:
(Inheritance diagram: BaseDualPseudoIntegrator and BaseDualPseudoStepper relate DualDenseRKPseudoStepper, DualEulerPseudoStepper and the other PseudoStepper classes.)
3.1.5 System
A System holds information/data for the system, including Elements, Interfaces, and the Backend with which the simulation is to run. A System has a method named rhs, which obtains the divergence of the flux (the ‘right-hand-side’) at each solution point. The method rhs invokes various kernels which have been pre-generated and loaded into queues. A System also has a method named _gen_kernels which acts to generate all the kernels required by a particular System.
A kernel is an instance of a ‘one-off’ class with a method named run that implements the required kernel functionality.
Individual kernels are produced by a kernel provider. PyFR 1.15.0 has various types of kernel provider. A Pointwise Kernel Provider produces point-wise kernels such as Riemann solvers and flux functions etc. These point-wise kernels are specified using an in-built platform-independent templating language derived from Mako, henceforth referred to as PyFR-Mako. There are four types of System available in PyFR 1.15.0:
ACEulerSystem
class pyfr.solvers.aceuler.system.ACEulerSystem(backend, rallocs, mesh, initsoln, nregs, cfg)
_compute_grads_graph(t, uinbank)
_gen_kernels(nregs, eles, iint, mpiint, bcint)
_gen_mpireqs(mpiint)
_get_kernels(uinbank, foutbank)
_kdeps(kdict, kern, *dnames)
_load_bc_inters(rallocs, mesh, elemap)
_load_eles(rallocs, mesh, initsoln, nregs, nonce)
_load_int_inters(rallocs, mesh, elemap)
_load_mpi_inters(rallocs, mesh, elemap)
_nonce_seq = count(0)
_prepare_kernels(t, uinbank, foutbank)
_rhs_graphs(uinbank, foutbank)
bbcinterscls
alias of ACEulerBaseBCInters
compute_grads(t, uinbank)
ele_scal_upts(idx)
elementscls
alias of ACEulerElements
filt(uinoutbank)
intinterscls
alias of ACEulerIntInters
mpiinterscls
alias of ACEulerMPIInters
name = 'ac-euler'
rhs(t, uinbank, foutbank)
rhs_wait_times()
ACNavierStokesSystem
class pyfr.solvers.acnavstokes.system.ACNavierStokesSystem(backend, rallocs, mesh, initsoln, nregs, cfg)
_compute_grads_graph(uinbank)
_gen_kernels(nregs, eles, iint, mpiint, bcint)
_gen_mpireqs(mpiint)
_get_kernels(uinbank, foutbank)
_kdeps(kdict, kern, *dnames)
_load_bc_inters(rallocs, mesh, elemap)
_load_eles(rallocs, mesh, initsoln, nregs, nonce)
_load_int_inters(rallocs, mesh, elemap)
_load_mpi_inters(rallocs, mesh, elemap)
_nonce_seq = count(0)
_prepare_kernels(t, uinbank, foutbank)
_rhs_graphs(uinbank, foutbank)
bbcinterscls
alias of ACNavierStokesBaseBCInters
compute_grads(t, uinbank)
ele_scal_upts(idx)
elementscls
alias of ACNavierStokesElements
filt(uinoutbank)
intinterscls
alias of ACNavierStokesIntInters
mpiinterscls
alias of ACNavierStokesMPIInters
name = 'ac-navier-stokes'
rhs(t, uinbank, foutbank)
rhs_wait_times()
EulerSystem
class pyfr.solvers.euler.system.EulerSystem(backend, rallocs, mesh, initsoln, nregs, cfg)
_compute_grads_graph(t, uinbank)
_gen_kernels(nregs, eles, iint, mpiint, bcint)
_gen_mpireqs(mpiint)
_get_kernels(uinbank, foutbank)
_kdeps(kdict, kern, *dnames)
_load_bc_inters(rallocs, mesh, elemap)
_load_eles(rallocs, mesh, initsoln, nregs, nonce)
_load_int_inters(rallocs, mesh, elemap)
_load_mpi_inters(rallocs, mesh, elemap)
_nonce_seq = count(0)
_prepare_kernels(t, uinbank, foutbank)
_rhs_graphs(uinbank, foutbank)
bbcinterscls
alias of EulerBaseBCInters
compute_grads(t, uinbank)
ele_scal_upts(idx)
elementscls
alias of EulerElements
filt(uinoutbank)
intinterscls
alias of EulerIntInters
mpiinterscls
alias of EulerMPIInters
name = 'euler'
rhs(t, uinbank, foutbank)
rhs_wait_times()
NavierStokesSystem
class pyfr.solvers.navstokes.system.NavierStokesSystem(backend, rallocs, mesh, initsoln, nregs, cfg)
_compute_grads_graph(uinbank)
_gen_kernels(nregs, eles, iint, mpiint, bcint)
_gen_mpireqs(mpiint)
_get_kernels(uinbank, foutbank)
_kdeps(kdict, kern, *dnames)
_load_bc_inters(rallocs, mesh, elemap)
_load_eles(rallocs, mesh, initsoln, nregs, nonce)
_load_int_inters(rallocs, mesh, elemap)
_load_mpi_inters(rallocs, mesh, elemap)
_nonce_seq = count(0)
_prepare_kernels(t, uinbank, foutbank)
_rhs_graphs(uinbank, foutbank)
bbcinterscls
alias of NavierStokesBaseBCInters
compute_grads(t, uinbank)
ele_scal_upts(idx)
elementscls
alias of NavierStokesElements
filt(uinoutbank)
intinterscls
alias of NavierStokesIntInters
mpiinterscls
alias of NavierStokesMPIInters
name = 'navier-stokes'
rhs(t, uinbank, foutbank)
rhs_wait_times()
Types of System are related via the following inheritance diagram:
(Inheritance diagram: BaseSystem, BaseAdvectionSystem and BaseAdvectionDiffusionSystem relate ACEulerSystem, ACNavierStokesSystem, EulerSystem and NavierStokesSystem.)
3.1.6 Elements
An Elements holds information/data for a group of elements. There are four types of Elements available in PyFR 1.15.0:
ACEulerElements
class pyfr.solvers.aceuler.elements.ACEulerElements(basiscls, eles, cfg)
property _mesh_regions
property _ploc_in_src_exprs
property _pnorm_fpts
property _scratch_bufs
_slice_mat(mat, region, ra=None, rb=None)
property _smats_djacs_mpts
property _soln_in_src_exprs
property _src_exprs
property _srtd_face_fpts
static con_to_pri(convs, cfg)
convarmap = {2: ['p', 'u', 'v'], 3: ['p', 'u', 'v', 'w']}
curved_smat_at(name)
dualcoeffs = {2: ['u', 'v'], 3: ['u', 'v', 'w']}
formulations = ['dual']
get_ploc_for_inter(eidx, fidx)
get_pnorms(eidx, fidx)
get_pnorms_for_inter(eidx, fidx)
get_scal_fpts_for_inter(eidx, fidx)
get_vect_fpts_for_inter(eidx, fidx)
opmat(expr)
ploc_at(name, side=None)
ploc_at_np(name)
property plocfpts
static pri_to_con(pris, cfg)
privarmap = {2: ['p', 'u', 'v'], 3: ['p', 'u', 'v', 'w']}
property qpts
rcpdjac_at(name, side=None)
rcpdjac_at_np(name)
set_backend(*args, **kwargs)
set_ics_from_cfg()
set_ics_from_soln(solnmat, solncfg)
sliceat()
smat_at_np(name)
property upts
visvarmap = {2: [('velocity', ['u', 'v']), ('pressure', ['p'])], 3: [('velocity',
['u', 'v', 'w']), ('pressure', ['p'])]}
ACNavierStokesElements
class pyfr.solvers.acnavstokes.elements.ACNavierStokesElements(basiscls, eles, cfg)
property _mesh_regions
property _ploc_in_src_exprs
property _pnorm_fpts
property _scratch_bufs
_slice_mat(mat, region, ra=None, rb=None)
property _smats_djacs_mpts
property _soln_in_src_exprs
property _src_exprs
property _srtd_face_fpts
static con_to_pri(convs, cfg)
convarmap = {2: ['p', 'u', 'v'], 3: ['p', 'u', 'v', 'w']}
curved_smat_at(name)
dualcoeffs = {2: ['u', 'v'], 3: ['u', 'v', 'w']}
formulations = ['dual']
get_artvisc_fpts_for_inter(eidx, fidx)
get_ploc_for_inter(eidx, fidx)
get_pnorms(eidx, fidx)
get_pnorms_for_inter(eidx, fidx)
get_scal_fpts_for_inter(eidx, fidx)
get_vect_fpts_for_inter(eidx, fidx)
static grad_con_to_pri(cons, grad_cons, cfg)
opmat(expr)
ploc_at(name, side=None)
ploc_at_np(name)
property plocfpts
static pri_to_con(pris, cfg)
privarmap = {2: ['p', 'u', 'v'], 3: ['p', 'u', 'v', 'w']}
property qpts
rcpdjac_at(name, side=None)
rcpdjac_at_np(name)
set_backend(*args, **kwargs)
set_ics_from_cfg()
set_ics_from_soln(solnmat, solncfg)
sliceat()
smat_at_np(name)
property upts
visvarmap = {2: [('velocity', ['u', 'v']), ('pressure', ['p'])], 3: [('velocity',
['u', 'v', 'w']), ('pressure', ['p'])]}
EulerElements
class pyfr.solvers.euler.elements.EulerElements(basiscls, eles, cfg)
property _mesh_regions
property _ploc_in_src_exprs
property _pnorm_fpts
property _scratch_bufs
_slice_mat(mat, region, ra=None, rb=None)
property _smats_djacs_mpts
property _soln_in_src_exprs
property _src_exprs
property _srtd_face_fpts
static con_to_pri(cons, cfg)
convarmap = {2: ['rho', 'rhou', 'rhov', 'E'], 3: ['rho', 'rhou', 'rhov', 'rhow',
'E']}
curved_smat_at(name)
dualcoeffs = {2: ['rho', 'rhou', 'rhov', 'E'], 3: ['rho', 'rhou', 'rhov', 'rhow',
'E']}
formulations = ['std', 'dual']
get_ploc_for_inter(eidx, fidx)
get_pnorms(eidx, fidx)
get_pnorms_for_inter(eidx, fidx)
get_scal_fpts_for_inter(eidx, fidx)
get_vect_fpts_for_inter(eidx, fidx)
opmat(expr)
ploc_at(name, side=None)
ploc_at_np(name)
property plocfpts
static pri_to_con(pris, cfg)
privarmap = {2: ['rho', 'u', 'v', 'p'], 3: ['rho', 'u', 'v', 'w', 'p']}
property qpts
rcpdjac_at(name, side=None)
rcpdjac_at_np(name)
set_backend(*args, **kwargs)
set_ics_from_cfg()
set_ics_from_soln(solnmat, solncfg)
sliceat()
smat_at_np(name)
property upts
visvarmap = {2: [('density', ['rho']), ('velocity', ['u', 'v']), ('pressure',
['p'])], 3: [('density', ['rho']), ('velocity', ['u', 'v', 'w']), ('pressure',
['p'])]}
NavierStokesElements
class pyfr.solvers.navstokes.elements.NavierStokesElements(basiscls, eles, cfg)
property _mesh_regions
property _ploc_in_src_exprs
property _pnorm_fpts
property _scratch_bufs
_slice_mat(mat, region, ra=None, rb=None)
property _smats_djacs_mpts
property _soln_in_src_exprs
property _src_exprs
property _srtd_face_fpts
static con_to_pri(cons, cfg)
convarmap = {2: ['rho', 'rhou', 'rhov', 'E'], 3: ['rho', 'rhou', 'rhov', 'rhow',
'E']}
curved_smat_at(name)
dualcoeffs = {2: ['rho', 'rhou', 'rhov', 'E'], 3: ['rho', 'rhou', 'rhov', 'rhow',
'E']}
formulations = ['std', 'dual']
get_artvisc_fpts_for_inter(eidx, fidx)
get_ploc_for_inter(eidx, fidx)
get_pnorms(eidx, fidx)
get_pnorms_for_inter(eidx, fidx)
get_scal_fpts_for_inter(eidx, fidx)
get_vect_fpts_for_inter(eidx, fidx)
static grad_con_to_pri(cons, grad_cons, cfg)
opmat(expr)
ploc_at(name, side=None)
ploc_at_np(name)
property plocfpts
static pri_to_con(pris, cfg)
privarmap = {2: ['rho', 'u', 'v', 'p'], 3: ['rho', 'u', 'v', 'w', 'p']}
property qpts
rcpdjac_at(name, side=None)
rcpdjac_at_np(name)
set_backend(*args, **kwargs)
set_ics_from_cfg()
set_ics_from_soln(solnmat, solncfg)
shockvar = 'rho'
sliceat()
smat_at_np(name)
property upts
visvarmap = {2: [('density', ['rho']), ('velocity', ['u', 'v']), ('pressure',
['p'])], 3: [('density', ['rho']), ('velocity', ['u', 'v', 'w']), ('pressure',
['p'])]}
Types of Elements are related via the following inheritance diagram:
(Inheritance diagram: BaseElements, BaseAdvectionElements, BaseAdvectionDiffusionElements, BaseACFluidElements and BaseFluidElements relate ACEulerElements, ACNavierStokesElements, EulerElements and NavierStokesElements.)
3.1.7 Interfaces
An Interfaces holds information/data for a group of interfaces. There are eight types of (non-boundary) Interfaces available in PyFR 1.15.0:
ACEulerIntInters
class pyfr.solvers.aceuler.inters.ACEulerIntInters(*args, **kwargs)
_const_mat(inter, meth)
_gen_perm(lhs, rhs)
_get_perm_for_view(inter, meth)
_scal_view(inter, meth)
_scal_xchg_view(inter, meth)
_set_external(name, spec, value=None)
_vect_view(inter, meth)
_vect_xchg_view(inter, meth)
_view(inter, meth, vshape=())
_xchg_view(inter, meth, vshape=())
prepare(t)
ACEulerMPIInters
class pyfr.solvers.aceuler.inters.ACEulerMPIInters(*args, **kwargs)
BASE_MPI_TAG = 2314
_const_mat(inter, meth)
_get_perm_for_view(inter, meth)
_scal_view(inter, meth)
_scal_xchg_view(inter, meth)
_set_external(name, spec, value=None)
_vect_view(inter, meth)
_vect_xchg_view(inter, meth)
_view(inter, meth, vshape=())
_xchg_view(inter, meth, vshape=())
prepare(t)
ACNavierStokesIntInters
class pyfr.solvers.acnavstokes.inters.ACNavierStokesIntInters(be, lhs, rhs, elemap, cfg)
_const_mat(inter, meth)
_gen_perm(lhs, rhs)
_get_perm_for_view(inter, meth)
_scal_view(inter, meth)
_scal_xchg_view(inter, meth)
_set_external(name, spec, value=None)
_vect_view(inter, meth)
_vect_xchg_view(inter, meth)
_view(inter, meth, vshape=())
_xchg_view(inter, meth, vshape=())
prepare(t)
ACNavierStokesMPIInters
class pyfr.solvers.acnavstokes.inters.ACNavierStokesMPIInters(be, lhs, rhsrank, rallocs, elemap, cfg)
BASE_MPI_TAG = 2314
_const_mat(inter, meth)
_get_perm_for_view(inter, meth)
_scal_view(inter, meth)
_scal_xchg_view(inter, meth)
_set_external(name, spec, value=None)
_vect_view(inter, meth)
_vect_xchg_view(inter, meth)
_view(inter, meth, vshape=())
_xchg_view(inter, meth, vshape=())
prepare(t)
EulerIntInters
class pyfr.solvers.euler.inters.EulerIntInters(*args, **kwargs)
_const_mat(inter, meth)
_gen_perm(lhs, rhs)
_get_perm_for_view(inter, meth)
_scal_view(inter, meth)
_scal_xchg_view(inter, meth)
_set_external(name, spec, value=None)
_vect_view(inter, meth)
_vect_xchg_view(inter, meth)
_view(inter, meth, vshape=())
_xchg_view(inter, meth, vshape=())
prepare(t)
EulerMPIInters
class pyfr.solvers.euler.inters.EulerMPIInters(*args, **kwargs)
BASE_MPI_TAG = 2314
_const_mat(inter, meth)
_get_perm_for_view(inter, meth)
_scal_view(inter, meth)
_scal_xchg_view(inter, meth)
_set_external(name, spec, value=None)
_vect_view(inter, meth)
_vect_xchg_view(inter, meth)
_view(inter, meth, vshape=())
_xchg_view(inter, meth, vshape=())
prepare(t)
NavierStokesIntInters
class pyfr.solvers.navstokes.inters.NavierStokesIntInters(be, lhs, rhs, elemap, cfg)
_const_mat(inter, meth)
_gen_perm(lhs, rhs)
_get_perm_for_view(inter, meth)
_scal_view(inter, meth)
_scal_xchg_view(inter, meth)
_set_external(name, spec, value=None)
_vect_view(inter, meth)
_vect_xchg_view(inter, meth)
_view(inter, meth, vshape=())
_xchg_view(inter, meth, vshape=())
prepare(t)
NavierStokesMPIInters
class pyfr.solvers.navstokes.inters.NavierStokesMPIInters(be, lhs, rhsrank, rallocs, elemap, cfg)
BASE_MPI_TAG = 2314
_const_mat(inter, meth)
_get_perm_for_view(inter, meth)
_scal_view(inter, meth)
_scal_xchg_view(inter, meth)
_set_external(name, spec, value=None)
_vect_view(inter, meth)
_vect_xchg_view(inter, meth)
_view(inter, meth, vshape=())
_xchg_view(inter, meth, vshape=())
prepare(t)
Types of (non-boundary) Interfaces are related via the following inheritance diagram:
(Inheritance diagram: BaseInters, TplargsMixin, BaseAdvectionIntInters, BaseAdvectionDiffusionIntInters, BaseAdvectionMPIInters and BaseAdvectionDiffusionMPIInters relate the ACEuler, ACNavierStokes, Euler and NavierStokes interior and MPI interface classes.)
3.1.8 Backend
A Backend holds information/data for a backend. There are four types of Backend available in PyFR 1.15.0:
CUDABackend
class pyfr.backends.cuda.base.CUDABackend(cfg)
_malloc_impl(nbytes)
alias(obj, aobj)
blocks = False
commit()
const_matrix(initval, dtype=None, tags={})
graph()
kernel(name, *args, **kwargs)
property lookup
malloc(obj, extent)
matrix(ioshape, initval=None, extent=None, aliases=None, tags={})
matrix_slice(mat, ra, rb, ca, cb)
name = 'cuda'
ordered_meta_kernel(kerns)
run_graph(graph, wait=False)
run_kernels(kernels, wait=False)
unordered_meta_kernel(kerns)
view(matmap, rmap, cmap, rstridemap=None, vshape=(), tags={})
xchg_matrix(ioshape, initval=None, extent=None, aliases=None, tags={})
xchg_matrix_for_view(view, tags={})
xchg_view(matmap, rmap, cmap, rstridemap=None, vshape=(), tags={})
HIPBackend
class pyfr.backends.hip.base.HIPBackend(cfg)
_malloc_impl(nbytes)
alias(obj, aobj)
blocks = False
commit()
const_matrix(initval, dtype=None, tags={})
graph()
kernel(name, *args, **kwargs)
property lookup
malloc(obj, extent)
matrix(ioshape, initval=None, extent=None, aliases=None, tags={})
matrix_slice(mat, ra, rb, ca, cb)
name = 'hip'
ordered_meta_kernel(kerns)
run_graph(graph, wait=False)
run_kernels(kernels, wait=False)
unordered_meta_kernel(kerns)
view(matmap, rmap, cmap, rstridemap=None, vshape=(), tags={})
xchg_matrix(ioshape, initval=None, extent=None, aliases=None, tags={})
xchg_matrix_for_view(view, tags={})
xchg_view(matmap, rmap, cmap, rstridemap=None, vshape=(), tags={})
OpenCLBackend
class pyfr.backends.opencl.base.OpenCLBackend(cfg)
_malloc_impl(nbytes)
alias(obj, aobj)
blocks = False
commit()
const_matrix(initval, dtype=None, tags={})
graph()
kernel(name, *args, **kwargs)
property lookup
malloc(obj, extent)
matrix(ioshape, initval=None, extent=None, aliases=None, tags={})
matrix_slice(mat, ra, rb, ca, cb)
name = 'opencl'
ordered_meta_kernel(kerns)
run_graph(graph, wait=False)
run_kernels(kernels, wait=False)
unordered_meta_kernel(kerns)
view(matmap, rmap, cmap, rstridemap=None, vshape=(), tags={})
xchg_matrix(ioshape, initval=None, extent=None, aliases=None, tags={})
xchg_matrix_for_view(view, tags={})
xchg_view(matmap, rmap, cmap, rstridemap=None, vshape=(), tags={})
OpenMPBackend
class pyfr.backends.openmp.base.OpenMPBackend(cfg)
_malloc_impl(nbytes)
alias(obj, aobj)
blocks = True
commit()
const_matrix(initval, dtype=None, tags={})
graph()
kernel(name, *args, **kwargs)
property krunner
property lookup
malloc(obj, extent)
matrix(ioshape, initval=None, extent=None, aliases=None, tags={})
matrix_slice(mat, ra, rb, ca, cb)
name = 'openmp'
ordered_meta_kernel(kerns)
run_graph(graph, wait=False)
run_kernels(kernels, wait=False)
unordered_meta_kernel(kerns)
view(matmap, rmap, cmap, rstridemap=None, vshape=(), tags={})
xchg_matrix(ioshape, initval=None, extent=None, aliases=None, tags={})
xchg_matrix_for_view(view, tags={})
xchg_view(matmap, rmap, cmap, rstridemap=None, vshape=(), tags={})
Types of Backend are related via the following inheritance diagram:
(Inheritance diagram: CUDABackend, HIPBackend, OpenCLBackend and OpenMPBackend all derive from BaseBackend.)
3.1.9 Pointwise Kernel Provider
A Pointwise Kernel Provider produces point-wise kernels. Specifically, a Pointwise Kernel Provider has a method named register, which adds a new method to an instance of a Pointwise Kernel Provider. This new method, when called, returns a kernel. A kernel is an instance of a ‘one-off’ class with a method named run that implements the required kernel functionality. The kernel functionality itself is specified using PyFR-Mako. Hence, a Pointwise Kernel Provider also has a method named _render_kernel, which renders PyFR-Mako into low-level platform-specific code.
The _render_kernel method first sets the context for Mako (i.e. details about the Backend etc.) and then uses Mako to begin rendering the PyFR-Mako specification. When Mako encounters a pyfr:kernel an instance of a Kernel Generator is created, which is used to render the body of the pyfr:kernel. There are four types of Pointwise Kernel Provider available in PyFR 1.15.0:
CUDAPointwiseKernelProvider
class pyfr.backends.cuda.provider.CUDAPointwiseKernelProvider(backend)
_build_arglst(dims, argn, argt, argdict)
_build_kernel(name, src, argtypes)
_instantiate_kernel(dims, fun, arglst, argmv)
_render_kernel(name, mod, extrns, tplargs)
kernel_generator_cls
alias of CUDAKernelGenerator
register(mod)
HIPPointwiseKernelProvider
class pyfr.backends.hip.provider.HIPPointwiseKernelProvider(*args, **kwargs)
_build_arglst(dims, argn, argt, argdict)
_build_kernel(name, src, argtypes)
_instantiate_kernel(dims, fun, arglst, argmv)
_render_kernel(name, mod, extrns, tplargs)
kernel_generator_cls = None
register(mod)
OpenCLPointwiseKernelProvider
class pyfr.backends.opencl.provider.OpenCLPointwiseKernelProvider(backend)
_build_arglst(dims, argn, argt, argdict)
_build_kernel(name, src, argtypes)
_build_program(src)
_instantiate_kernel(dims, fun, arglst, argmv)
_render_kernel(name, mod, extrns, tplargs)
kernel_generator_cls
alias of OpenCLKernelGenerator
register(mod)
OpenMPPointwiseKernelProvider
class pyfr.backends.openmp.provider.OpenMPPointwiseKernelProvider(*args, **kwargs)
_build_arglst(dims, argn, argt, argdict)
_build_function(name, src, argtypes, restype=None)
_build_kernel(name, src, argtypes)
_build_library(src)
_get_arg_cls(argtypes)
_instantiate_kernel(dims, fun, arglst, argmv)
_render_kernel(name, mod, extrns, tplargs)
kernel_generator_cls = None
register(mod)
Types of Pointwise Kernel Provider are related via the following inheritance diagram:
(Inheritance diagram: the pointwise kernel providers for each backend derive from BaseKernelProvider and BasePointwiseKernelProvider together with their backend-specific kernel provider classes; the associated Kernel and MetaKernel classes for each backend are also shown.)
3.1.10 Kernel Generator
A Kernel Generator renders the PyFR-Mako in a pyfr:kernel into low-level platform-specific code. Specifically, a Kernel Generator has a method named render, which applies Backend specific regex and adds Backend specific ‘boiler plate’ code to produce the low-level platform-specific source – which is compiled, linked, and loaded. There are four types of Kernel Generator available in PyFR 1.15.0:
CUDAKernelGenerator
class pyfr.backends.cuda.generator.CUDAKernelGenerator(*args, **kwargs)
_deref_arg(arg)
_deref_arg_array_1d(arg)
_deref_arg_array_2d(arg)
_deref_arg_view(arg)
_render_body(body)
_render_spec()
argspec()
ldim_size(name, *factor)
needs_ldim(arg)
render()
HIPKernelGenerator
class pyfr.backends.hip.generator.HIPKernelGenerator(*args, **kwargs)
_deref_arg(arg)
_deref_arg_array_1d(arg)
_deref_arg_array_2d(arg)
_deref_arg_view(arg)
_render_body(body)
_render_spec()
argspec()
block1d = None
block2d = None
ldim_size(name, *factor)
needs_ldim(arg)
render()
OpenCLKernelGenerator
class pyfr.backends.opencl.generator.OpenCLKernelGenerator(*args, **kwargs)
_deref_arg(arg)
_deref_arg_array_1d(arg)
_deref_arg_array_2d(arg)
_deref_arg_view(arg)
_render_body(body)
_render_spec()
argspec()
ldim_size(name, *factor)
needs_ldim(arg)
render()
OpenMPKernelGenerator
class pyfr.backends.openmp.generator.OpenMPKernelGenerator(name, ndim, args, body, fpdtype)
_deref_arg(arg)
_deref_arg_array_1d(arg)
_deref_arg_array_2d(arg)
_deref_arg_view(arg)
_render_args(argn)
_render_body(body)
argspec()
ldim_size(name, *factor)
needs_ldim(arg)
render()
Types of Kernel Generator are related via the following inheritance diagram:
(Inheritance diagram: the backend-specific Kernel Generator classes, e.g. CUDAKernelGenerator, OpenCLKernelGenerator and OpenMPKernelGenerator, derive from BaseKernelGenerator.)
3.2 PyFR-Mako
3.2.1 PyFR-Mako Kernels
PyFR-Mako kernels are specifications of point-wise functionality that can be invoked directly from within PyFR. They are opened with a header of the form:
<%pyfr:kernel name='kernel-name' ndim='data-dimensionality'
              [argument-name='argument-intent argument-attribute argument-data-type' ...]>
where
1. kernel-name — name of kernel
string
2. data-dimensionality — dimensionality of data
int
3. argument-name — name of argument
string
4. argument-intent — intent of argument
in | out | inout
5. argument-attribute — attribute of argument
mpi | scalar | view
6. argument-data-type — data type of argument
string
and are closed with a footer of the form:
</%pyfr:kernel>
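For illustration, a hypothetical kernel (not one shipped with PyFR) which scales every field variable by a scalar factor might read as follows, where nvars is assumed to be supplied through the template arguments:
<%pyfr:kernel name='scale' ndim='2'
              fac='scalar fpdtype_t'
              u='inout fpdtype_t[${str(nvars)}]'>
% for i in range(nvars):
    u[${i}] *= fac;
% endfor
</%pyfr:kernel>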
3.2.2 PyFR-Mako Macros
PyFR-Mako macros are specifications of point-wise functionality that cannot be invoked directly from within PyFR, but can be embedded into PyFR-Mako kernels. PyFR-Mako macros can be viewed as building blocks for PyFR-Mako kernels. They are opened with a header of the form:
<%pyfr:macro name='macro-name' params='[parameter-name, ...]'>
where
1. macro-name — name of macro
string
2. parameter-name — name of parameter
string
and are closed with a footer of the form:
</%pyfr:macro>
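For illustration, a hypothetical macro (not one shipped with PyFR) which clamps a value to a range might read:
<%pyfr:macro name='clamp' params='x, lo, hi, out'>
    out = max(lo, min(x, hi));
</%pyfr:macro>
It could then be invoked from within a kernel body, using the expansion syntax described below, as ${pyfr.expand('clamp', 'p', '0.0', '1.0', 'pc')}; where p and pc are variables of the enclosing kernel.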
PyFR-Mako macros are embedded within a kernel using an expression of the following form:
${pyfr.expand('macro-name', ['parameter-name', ...])};
where
1. macro-name — name of the macro
string
2. parameter-name — name of parameter
string
3.2.3 Syntax
3.2.3.1 Basic Functionality
Basic functionality can be expressed using a restricted subset of the C programming language. Specifically, use of the following is allowed:
1. +,-,*,/ — basic arithmetic
2. sin, cos, tan — basic trigonometric functions
3. exp — exponential
4. pow — power
5. fabs — absolute value
6. output = ( condition ? satisfied : unsatisfied ) — ternary if
7. min — minimum
8. max — maximum
However, conditional if statements, as well as for/while loops, are not allowed.
3.2.3.2 Expression Substitution
Mako expression substitution can be used to facilitate PyFR-Mako kernel specification. A Python expression prescribed thus ${expression} is substituted for its result when the PyFR-Mako kernel specification is interpreted at runtime.
Example:
E = s[${ndims - 1}]
3.2.3.3 Conditionals
Mako conditionals can be used to facilitate PyFR-Mako kernel specification. Conditionals are opened with % if condition: and closed with % endif. Note that such conditionals are evaluated when the PyFR-Mako kernel specification is interpreted at runtime; they are not embedded into the low-level kernel.
Example:
% if ndims == 2:
fout[0][1] += t_xx; fout[1][1] += t_xy;
fout[0][2] += t_xy; fout[1][2] += t_yy;
fout[0][3] += u*t_xx + v*t_xy + ${-c['mu']*c['gamma']/c['Pr']}*T_x;
fout[1][3] += u*t_xy + v*t_yy + ${-c['mu']*c['gamma']/c['Pr']}*T_y;
% endif
3.2.3.4 Loops
Mako loops can be used to facilitate PyFR-Mako kernel specification. Loops are opened with % for condition: and closed with % endfor. Note that such loops are unrolled when the PyFR-Mako kernel specification is interpreted at runtime; they are not embedded into the low-level kernel.
Example:
% for i in range(ndims):
rhov[${i}] = s[${i + 1}];
v[${i}] = invrho*rhov[${i}];
% endfor
CHAPTER FOUR
PERFORMANCE TUNING
The following sections contain best practices for tuning the performance of PyFR. Note, however, that it is typically not worth pursuing the advice in this section until a simulation is working acceptably and generating the desired results.
4.1 OpenMP Backend
4.1.1 AVX-512
When running on an AVX-512 capable CPU, Clang and GCC will, by default, only make use of 256-bit vectors. Given that the kernels in PyFR benefit meaningfully from longer vectors it is desirable to override this behaviour. This can be accomplished through the cflags key as:
[backend-openmp]
cflags = -mprefer-vector-width=512
4.1.2 Cores vs. threads
PyFR does not typically derive any benefit from SMT. As such, the number of OpenMP threads should be chosen to be equal to the number of physical cores.
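For example, on a machine with 16 physical cores per process the thread count can be pinned through the standard OpenMP environment variable (assuming the default OpenMP runtime behaviour; file names illustrative):
OMP_NUM_THREADS=16 pyfr run -b openmp mesh.pyfrm config.ini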
4.1.3 Loop Scheduling
By default PyFR employs static scheduling for loops, with work being split evenly across cores. For systems with a single type of core this is usually the right choice. However, on heterogeneous systems it typically results in load imbalance. Here, it can be beneficial to experiment with the guided and dynamic loop schedules as:
[backend-openmp]
schedule = dynamic, 5
4.1.4 MPI processes vs. OpenMP threads
When using the OpenMP backend it is recommended to employ one MPI rank per NUMA zone. For most systems each socket represents its own NUMA zone. Thus, on a two socket system it is suggested to run PyFR with two MPI ranks, with each process being bound to a single socket. The specifics of how to accomplish this depend on both the job scheduler and MPI distribution.
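As a sketch, with Open MPI (other MPI distributions and job schedulers use different flags) one rank per socket on a two socket node might be requested as (file names illustrative):
mpiexec -n 2 --map-by socket --bind-to socket pyfr run -b openmp mesh.pyfrm config.ini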
4.2 CUDA Backend
4.2.1 CUDA-aware MPI
PyFR is capable of taking advantage of CUDA-aware MPI. This enables CUDA device pointers to be passed directly to MPI routines. Under the right circumstances this can result in improved performance for simulations which are near the strong scaling limit. Assuming mpi4py has been built against an MPI distribution which is CUDA-aware this functionality can be enabled through the mpi-type key as:
[backend-cuda]
mpi-type = cuda-aware
4.3 HIP Backend
4.3.1 HIP-aware MPI
PyFR is capable of taking advantage of HIP-aware MPI. This enables HIP device pointers to be passed directly to MPI routines. Under the right circumstances this can result in improved performance for simulations which are near the strong scaling limit. Assuming mpi4py has been built against an MPI distribution which is HIP-aware this functionality can be enabled through the mpi-type key as:
[backend-hip]
mpi-type = hip-aware
4.4 Partitioning
4.4.1 METIS vs SCOTCH
The partitioning module in PyFR includes support for both METIS and SCOTCH. Both usually result in high-quality decompositions. However, for long running simulations on complex geometries it may be worth partitioning a grid with both and observing which decomposition performs best.
4.4.2 Mixed grids
When running PyFR in parallel on mixed element grids it is necessary to take some additional care when partitioning the grid. A good domain decomposition is one where each partition contains the same amount of computational work.
For grids with a single element type the amount of computational work is very well approximated by the number of elements assigned to a partition. Thus the goal is simply to ensure that all of the partitions have roughly the same number of elements. However, when considering mixed grids this relationship begins to break down since the computational cost of one element type can be appreciably more than that of another.
Thus in order to obtain a good decomposition it is necessary to assign a weight to each type of element in the domain.
Element types which are more computationally intensive should be assigned a larger weight than those that are less intensive. Unfortunately, the relative cost of different element types depends on a variety of factors, including:
• The polynomial order.
• If anti-aliasing is enabled in the simulation, and if so, to what extent.
• The hardware which the simulation will be run on.
Weights can be specified when partitioning the mesh as -e shape:weight. For example, if on a particular system a quadrilateral is found to be 50% more expensive than a triangle this can be specified as:
pyfr partition -e quad:3 -e tri:2 ...
If precise profiling data is not available regarding the performance of each element type in a given configuration a helpful rule of thumb is to under-weight the dominant element type in the domain. For example, if a domain is 90%
tetrahedra and 10% prisms then, absent any additional information about the relative performance of tetrahedra and prisms, a safe choice is to assume the prisms are appreciably more expensive than the tetrahedra.
4.4.3 Detecting load imbalances
PyFR includes code for monitoring the amount of time each rank spends waiting for MPI transfers to complete. This can be used, among other things, to detect load imbalances. Such imbalances are typically observed on mixed-element grids with an incorrect weighting factor. Wait time tracking can be enabled as:
[backend]
collect-wait-times = true
with the resulting statistics being recorded in the [backend-wait-times] section of the /stats object which is included in all PyFR solution files. This can be extracted as:
h5dump -d /stats -b --output=stats.ini soln.pyfrs
Note that the number of graphs depends on the system, and not all graphs initiate MPI requests. The average amount of time each rank spends waiting for MPI requests per right hand side evaluation can be obtained by vertically summing all of the -median fields together.
There exists an inverse relationship between the amount of computational work a rank has to perform and the amount of time it spends waiting for MPI requests to complete. Hence, ranks which spend comparatively less time waiting than their peers are likely to be overloaded, whereas those which spend comparatively more time waiting are likely to be underloaded. This information can then be used to explicitly re-weight the partitions and/or the per-element weights.
4.5 Scaling
The general recommendation when running PyFR in parallel is to aim for a parallel efficiency of ε ≃ 0.8, with the parallel efficiency being defined as:
ε = T_1 / (N T_N)
where N is the number of ranks, T_1 is the simulation time with one rank, and T_N is the simulation time with N ranks.
This represents a reasonable trade-off between the overall time-to-solution and efficient resource utilisation.
4.6 Parallel I/O
PyFR incorporates support for parallel file I/O via HDF5 and will use it automatically where available. However, for this to work several prerequisites must be satisfied:
• HDF5 must be explicitly compiled with support for parallel I/O.
• The mpi4py Python module must be compiled against the same MPI distribution as HDF5. A version mismatch
here can result in subtle and difficult to diagnose errors.
• The h5py Python module must be built with support for parallel I/O.
After completing this process it is highly recommended to verify everything is working by trying the h5py parallel HDF5 example.
4.7 Plugins
A common source of performance issues is running plugins too frequently. Given the time steps taken by PyFR are typically much smaller than those associated with the underlying physics there is seldom any benefit to running integration and/or time average accumulation plugins more frequently than once every 50 steps. Further, when running with adaptive time stepping there is no need to run the NaN check plugin. For simulations with fixed time steps, it is not recommended to run the NaN check plugin more frequently than once every 10 steps.
4.8 Start-up Time
The start-up time required by PyFR can be reduced by ensuring that Python is compiled from source with profile guided optimisations (PGO), which can be enabled by passing --enable-optimizations to the configure script.
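For example, when building CPython from source this corresponds to the standard build sequence:
./configure --enable-optimizations
make
make install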
It is also important that NumPy be configured to use an optimised BLAS/LAPACK distribution. Further details can be found in the NumPy building from source guide.
If the point sampler plugin is being employed with a large number of sample points it is further recommended to install SciPy.
CHAPTER FIVE
EXAMPLES
Test cases are available in the PyFR-Test-Cases repository (https://github.com/PyFR/PyFR-Test-Cases). It is important to note, however, that these examples are all relatively small 2D simulations and, as such, are not suitable for scalability or performance studies.
5.1 Euler Equations
5.1.1 2D Euler Vortex
Proceed with the following steps to run a parallel 2D Euler vortex simulation on a structured mesh:
1. Navigate to the PyFR-Test-Cases/2d-euler-vortex directory:
cd PyFR-Test-Cases/2d-euler-vortex
2. Run pyfr to convert the Gmsh mesh file into a PyFR mesh file called 2d-euler-vortex.pyfrm:
pyfr import 2d-euler-vortex.msh 2d-euler-vortex.pyfrm
3. Run pyfr to partition the PyFR mesh file into two pieces:
pyfr partition 2 2d-euler-vortex.pyfrm .
4. Run pyfr to solve the Euler equations on the mesh, generating a series of PyFR solution files called
2d-euler-vortex*.pyfrs:
mpiexec -n 2 pyfr run -b cuda -p 2d-euler-vortex.pyfrm 2d-euler-vortex.ini
5. Run pyfr on the solution file 2d-euler-vortex-100.0.pyfrs converting it into an unstructured VTK file
called 2d-euler-vortex-100.0.vtu:
pyfr export 2d-euler-vortex.pyfrm 2d-euler-vortex-100.0.pyfrs 2d-euler-vortex-100.0.vtu
6. Visualise the unstructured VTK file in ParaView.
Fig. 1: Colour map of density distribution at 100 time units.
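Since PyFR solution files are HDF5 containers, the intermediate .pyfrs files can also be inspected directly, for instance with h5py; the exact dataset layout depends on the PyFR version, so the sketch below simply lists whatever the file contains.

import h5py

# Print the name of every group and dataset in the solution file
with h5py.File('2d-euler-vortex-100.0.pyfrs', 'r') as f:
    f.visit(print)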
5.2 Compressible Navier–Stokes Equations
5.2.1 2D Couette Flow
Proceed with the following steps to run a serial 2D Couette flow simulation on a mixed unstructured mesh:
1. Navigate to the PyFR-Test-Cases/2d-couette-flow directory:
cd PyFR-Test-Cases/2d-couette-flow
2. Run pyfr to convert the Gmsh mesh file into a PyFR mesh file called 2d-couette-flow.pyfrm:
pyfr import 2d-couette-flow.msh 2d-couette-flow.pyfrm
3. Run pyfr to solve the Navier-Stokes equations on the mesh, generating a series of PyFR solution files called
2d-couette-flow-*.pyfrs:
pyfr run -b cuda -p 2d-couette-flow.pyfrm 2d-couette-flow.ini
4. Run pyfr on the solution file 2d-couette-flow-040.pyfrs converting it into an unstructured VTK file called
2d-couette-flow-040.vtu:
pyfr export 2d-couette-flow.pyfrm 2d-couette-flow-040.pyfrs 2d-couette-flow-040.vtu
5. Visualise the unstructured VTK file in ParaView.
Fig. 2: Colour map of steady-state density distribution.
5.3 Incompressible Navier–Stokes Equations
5.3.1 2D Incompressible Cylinder Flow
Proceed with the following steps to run a serial 2D incompressible cylinder flow simulation on a mixed unstructured mesh:
1. Navigate to the PyFR-Test-Cases/2d-inc-cylinder directory:
cd PyFR-Test-Cases/2d-inc-cylinder
2. Run pyfr to convert the Gmsh mesh file into a PyFR mesh file called 2d-inc-cylinder.pyfrm:
pyfr import 2d-inc-cylinder.msh 2d-inc-cylinder.pyfrm
3. Run pyfr to solve the incompressible Navier-Stokes equations on the mesh, generating a series of PyFR solution
files called 2d-inc-cylinder-*.pyfrs:
pyfr run -b cuda -p 2d-inc-cylinder.pyfrm 2d-inc-cylinder.ini
4. Run pyfr on the solution file 2d-inc-cylinder-75.00.pyfrs converting it into an unstructured VTK file
called 2d-inc-cylinder-75.00.vtu:
pyfr export 2d-inc-cylinder.pyfrm 2d-inc-cylinder-75.00.pyfrs 2d-inc-cylinder-75.00.vtu
5. Visualise the unstructured VTK file in ParaView.
Fig. 3: Colour map of velocity magnitude distribution at 75 time units.
qpts (pyfr.solvers.navstokes.elements.NavierStokesElements
property), 49 property), 58 pseudo_stepper_nfevals
R
(pyfr.integrators.dual.pseudo.pseudosteppers.DualRK4PseudoStepper
property), 45 rcpdjac_at() (pyfr.solvers.aceuler.elements.ACEulerElements pseudo_stepper_nfevals method), 55
(pyfr.integrators.dual.pseudo.pseudosteppers.DualTVDRK3PseudoStepper
rcpdjac_at() (pyfr.solvers.acnavstokes.elements.ACNavierStokesElement
property), 46 method), 56 pseudo_stepper_nregs rcpdjac_at() (pyfr.solvers.euler.elements.EulerElements
(pyfr.integrators.dual.pseudo.pseudosteppers.DualEulerPseudoStepper
attribute), 47 rcpdjac_at() (pyfr.solvers.navstokes.elements.NavierStokesElements pseudo_stepper_nregs method), 58
(pyfr.integrators.dual.pseudo.pseudosteppers.DualRK34PseudoStepper
rcpdjac_at_np() (pyfr.solvers.aceuler.elements.ACEulerElements
property), 48 method), 55 pseudo_stepper_nregs rcpdjac_at_np() (pyfr.solvers.acnavstokes.elements.ACNavierStokesElem
(pyfr.integrators.dual.pseudo.pseudosteppers.DualRK45PseudoStepper
property), 49 rcpdjac_at_np() (pyfr.solvers.euler.elements.EulerElements pseudo_stepper_nregs method), 57
(pyfr.integrators.dual.pseudo.pseudosteppers.DualRK4PseudoStepper
rcpdjac_at_np() (pyfr.solvers.navstokes.elements.NavierStokesElements
attribute), 45 method), 58 pseudo_stepper_nregs register() (pyfr.backends.cuda.provider.CUDAPointwiseKernelProvider
(pyfr.integrators.dual.pseudo.pseudosteppers.DualTVDRK3PseudoStepper
attribute), 46 register() (pyfr.backends.hip.provider.HIPPointwiseKernelProvider pseudo_stepper_order method), 68
(pyfr.integrators.dual.pseudo.pseudosteppers.DualEulerPseudoStepper
register() (pyfr.backends.opencl.provider.OpenCLPointwiseKernelProvi
attribute), 47 method), 68 pseudo_stepper_order register() (pyfr.backends.openmp.provider.OpenMPPointwiseKernelPro
(pyfr.integrators.dual.pseudo.pseudosteppers.DualRK34PseudoStepper
attribute), 48 render() (pyfr.backends.cuda.generator.CUDAKernelGenerator pseudo_stepper_order method), 70
(pyfr.integrators.dual.pseudo.pseudosteppers.DualRK45PseudoStepper
render() (pyfr.backends.hip.generator.HIPKernelGenerator
attribute), 49 method), 70 pseudo_stepper_order render() (pyfr.backends.opencl.generator.OpenCLKernelGenerator
(pyfr.integrators.dual.pseudo.pseudosteppers.DualRK4PseudoStepper
attribute), 45 render() (pyfr.backends.openmp.generator.OpenMPKernelGenerator pseudo_stepper_order method), 71
(pyfr.integrators.dual.pseudo.pseudosteppers.DualTVDRK3PseudoStepper
rhs() (pyfr.solvers.aceuler.system.ACEulerSystem
attribute), 46 method), 50 pseudostepinfo (pyfr.integrators.dual.phys.controllers.DualNoneController
rhs() (pyfr.solvers.acnavstokes.system.ACNavierStokesSystem
property), 33 method), 51 pseudostepinfo (pyfr.integrators.dual.phys.steppers.DualBackwardEulerStepper
property), 41 rhs() (pyfr.solvers.navstokes.system.NavierStokesSystem pseudostepinfo (pyfr.integrators.dual.phys.steppers.SDIRK33Stepper method), 53
property), 42 rhs_wait_times() (pyfr.solvers.aceuler.system.ACEulerSystem pseudostepinfo (pyfr.integrators.dual.phys.steppers.SDIRK43Stepper method), 51
property), 43 rhs_wait_times() (pyfr.solvers.acnavstokes.system.ACNavierStokesSyste Q rhs_wait_times() (pyfr.solvers.euler.system.EulerSystem qpts (pyfr.solvers.aceuler.elements.ACEulerElements method), 52
property), 54 rhs_wait_times() (pyfr.solvers.navstokes.system.NavierStokesSystem qpts (pyfr.solvers.acnavstokes.elements.ACNavierStokesElements method), 53
property), 56 run() (pyfr.integrators.dual.phys.controllers.DualNoneController qpts (pyfr.solvers.euler.elements.EulerElements prop- method), 33
erty), 57
PyFR Documentation, Release 1.15.0 run() (pyfr.integrators.dual.phys.steppers.DualBackwardEulerStepper
set_ics_from_cfg() (pyfr.solvers.euler.elements.EulerElements
method), 41 method), 57 run() (pyfr.integrators.dual.phys.steppers.SDIRK33Stepperset_ics_from_cfg() (pyfr.solvers.navstokes.elements.NavierStokesEleme
method), 42 method), 59 run() (pyfr.integrators.dual.phys.steppers.SDIRK43Stepperset_ics_from_soln()
method), 43 (pyfr.solvers.aceuler.elements.ACEulerElements run() (pyfr.integrators.std.controllers.StdNoneController method), 55
method), 32 set_ics_from_soln()
run() (pyfr.integrators.std.controllers.StdPIController (pyfr.solvers.acnavstokes.elements.ACNavierStokesElements
method), 33 method), 56 run() (pyfr.integrators.std.steppers.StdEulerStepper set_ics_from_soln()
method), 37 (pyfr.solvers.euler.elements.EulerElements run() (pyfr.integrators.std.steppers.StdRK34Stepper method), 57
method), 38 set_ics_from_soln()
run() (pyfr.integrators.std.steppers.StdRK45Stepper (pyfr.solvers.navstokes.elements.NavierStokesElements
method), 39 method), 59 run() (pyfr.integrators.std.steppers.StdRK4Stepper shockvar (pyfr.solvers.navstokes.elements.NavierStokesElements
method), 37 attribute), 59 run() (pyfr.integrators.std.steppers.StdTVDRK3Stepper sliceat() (pyfr.solvers.aceuler.elements.ACEulerElements
method), 40 method), 55 run_graph() (pyfr.backends.cuda.base.CUDABackend sliceat() (pyfr.solvers.acnavstokes.elements.ACNavierStokesElements
method), 64 method), 56 run_graph() (pyfr.backends.hip.base.HIPBackend sliceat() (pyfr.solvers.euler.elements.EulerElements
method), 65 method), 57 run_graph() (pyfr.backends.opencl.base.OpenCLBackend sliceat() (pyfr.solvers.navstokes.elements.NavierStokesElements
method), 66 method), 59 run_graph() (pyfr.backends.openmp.base.OpenMPBackend smat_at_np() (pyfr.solvers.aceuler.elements.ACEulerElements
method), 66 method), 55 run_kernels() (pyfr.backends.cuda.base.CUDABackend smat_at_np() (pyfr.solvers.acnavstokes.elements.ACNavierStokesElement
method), 64 method), 56 run_kernels() (pyfr.backends.hip.base.HIPBackend smat_at_np() (pyfr.solvers.euler.elements.EulerElements
method), 65 method), 57 run_kernels() (pyfr.backends.opencl.base.OpenCLBackend smat_at_np() (pyfr.solvers.navstokes.elements.NavierStokesElements
method), 66 method), 59 run_kernels() (pyfr.backends.openmp.base.OpenMPBackend soln (pyfr.integrators.dual.phys.controllers.DualNoneController
method), 66 property), 33
soln (pyfr.integrators.dual.phys.steppers.DualBackwardEulerStepper SDIRK33Stepper (class in soln (pyfr.integrators.dual.phys.steppers.SDIRK33Stepper
pyfr.integrators.dual.phys.steppers), 41 property), 42 SDIRK43Stepper (class in soln (pyfr.integrators.dual.phys.steppers.SDIRK43Stepper
pyfr.integrators.dual.phys.steppers), 42 property), 43
soln (pyfr.integrators.std.controllers.StdNoneController set_backend() (pyfr.solvers.aceuler.elements.ACEulerElements
method), 55 property), 32
(pyfr.integrators.std.controllers.StdPIController set_backend() (pyfr.solvers.acnavstokes.elements.ACNavierStokesElements
soln
method), 56 property), 33 set_backend() (pyfr.solvers.euler.elements.EulerElements soln (pyfr.integrators.std.steppers.StdEulerStepper prop-
method), 57 erty), 37 set_backend() (pyfr.solvers.navstokes.elements.NavierStokesElements
method), 59 erty), 38 set_ics_from_cfg() (pyfr.solvers.aceuler.elements.ACEulerElements prop-
method), 55 erty), 39 set_ics_from_cfg() (pyfr.solvers.acnavstokes.elements.ACNavierStokesElements
method), 56 erty), 37
PyFR Documentation, Release 1.15.0 soln (pyfr.integrators.std.steppers.StdTVDRK3Stepper step() (pyfr.integrators.std.steppers.StdTVDRK3Stepper
property), 40 method), 40 stage_nregs (pyfr.integrators.dual.phys.steppers.DualBackwardEulerStepper
stepper_has_errest (pyfr.integrators.std.steppers.StdEulerStepper
property), 41 attribute), 37 stage_nregs (pyfr.integrators.dual.phys.steppers.SDIRK33Stepper
property), 42 property), 38 stage_nregs (pyfr.integrators.dual.phys.steppers.SDIRK43Stepper
property), 43 property), 39 StdEulerStepper (class in stepper_has_errest (pyfr.integrators.std.steppers.StdRK4Stepper
pyfr.integrators.std.steppers), 36 attribute), 37 StdNoneController (class in stepper_has_errest (pyfr.integrators.std.steppers.StdTVDRK3Stepper
pyfr.integrators.std.controllers), 31 attribute), 40 StdPIController (class in stepper_name (pyfr.integrators.dual.phys.steppers.DualBackwardEulerSte
pyfr.integrators.std.controllers), 32 attribute), 41 StdRK34Stepper (class in pyfr.integrators.std.steppers), stepper_name (pyfr.integrators.dual.phys.steppers.SDIRK33Stepper StdRK45Stepper (class in pyfr.integrators.std.steppers), stepper_name (pyfr.integrators.dual.phys.steppers.SDIRK43Stepper StdRK4Stepper (class in pyfr.integrators.std.steppers), stepper_name (pyfr.integrators.std.steppers.StdEulerStepper StdTVDRK3Stepper (class in stepper_name (pyfr.integrators.std.steppers.StdRK34Stepper
pyfr.integrators.std.steppers), 40 attribute), 38 step() (pyfr.integrators.dual.phys.controllers.DualNoneController
method), 33 attribute), 40 step() (pyfr.integrators.dual.phys.steppers.DualBackwardEulerStepper
method), 41 attribute), 37 step() (pyfr.integrators.dual.phys.steppers.SDIRK33Stepper stepper_name (pyfr.integrators.std.steppers.StdTVDRK3Stepper
method), 42 attribute), 40 step() (pyfr.integrators.dual.phys.steppers.SDIRK43Stepper stepper_nregs (pyfr.integrators.dual.phys.steppers.DualBackwardEulerSt
method), 43 attribute), 41 step() (pyfr.integrators.dual.pseudo.pseudosteppers.DualDenseRKPseudoStepper
method), 44 attribute), 42 step() (pyfr.integrators.dual.pseudo.pseudosteppers.DualEulerPseudoStepper
method), 47 attribute), 43 step() (pyfr.integrators.dual.pseudo.pseudosteppers.DualRK34PseudoStepper
stepper_nregs (pyfr.integrators.std.steppers.StdEulerStepper
method), 48 attribute), 37 step() (pyfr.integrators.dual.pseudo.pseudosteppers.DualRK45PseudoStepper
method), 49 property), 39 step() (pyfr.integrators.dual.pseudo.pseudosteppers.DualRK4PseudoStepper
method), 45 property), 40 step() (pyfr.integrators.dual.pseudo.pseudosteppers.DualTVDRK3PseudoStepper
method), 46 attribute), 38 step() (pyfr.integrators.std.controllers.StdNoneController stepper_nregs (pyfr.integrators.std.steppers.StdTVDRK3Stepper
method), 32 attribute), 40 step() (pyfr.integrators.std.controllers.StdPIController stepper_order (pyfr.integrators.std.steppers.StdEulerStepper
method), 33 attribute), 37 step() (pyfr.integrators.std.steppers.StdEulerStepper stepper_order (pyfr.integrators.std.steppers.StdRK34Stepper
method), 37 attribute), 39 step() (pyfr.integrators.std.steppers.StdRK34Stepper stepper_order (pyfr.integrators.std.steppers.StdRK45Stepper
method), 38 attribute), 40 step() (pyfr.integrators.std.steppers.StdRK45Stepper stepper_order (pyfr.integrators.std.steppers.StdRK4Stepper
method), 39 attribute), 38 step() (pyfr.integrators.std.steppers.StdRK4Stepper stepper_order (pyfr.integrators.std.steppers.StdTVDRK3Stepper
method), 37 attribute), 40
PyFR Documentation, Release 1.15.0 store_current_soln() V
(pyfr.integrators.dual.pseudo.pseudocontrollers.DualNonePseudoController
view() (pyfr.backends.cuda.base.CUDABackend
method), 35 method), 64 store_current_soln() view() (pyfr.backends.hip.base.HIPBackend method), 65
(pyfr.integrators.dual.pseudo.pseudocontrollers.DualPIPseudoController
view() (pyfr.backends.opencl.base.OpenCLBackend
method), 35 method), 66 store_current_soln() view() (pyfr.backends.openmp.base.OpenMPBackend
(pyfr.integrators.dual.pseudo.pseudosteppers.DualDenseRKPseudoStepper
method), 44 visvarmap (pyfr.solvers.aceuler.elements.ACEulerElements store_current_soln() attribute), 55
(pyfr.integrators.dual.pseudo.pseudosteppers.DualEulerPseudoStepper
visvarmap (pyfr.solvers.acnavstokes.elements.ACNavierStokesElements
method), 47 attribute), 56 store_current_soln() visvarmap (pyfr.solvers.euler.elements.EulerElements
(pyfr.integrators.dual.pseudo.pseudosteppers.DualRK34PseudoStepper
method), 48 visvarmap (pyfr.solvers.navstokes.elements.NavierStokesElements store_current_soln() attribute), 59
(pyfr.integrators.dual.pseudo.pseudosteppers.DualRK45PseudoStepper
method), 49 X store_current_soln()
xchg_matrix() (pyfr.backends.cuda.base.CUDABackend
(pyfr.integrators.dual.pseudo.pseudosteppers.DualRK4PseudoStepper
method), 45
xchg_matrix() (pyfr.backends.hip.base.HIPBackend store_current_soln()
(pyfr.integrators.dual.pseudo.pseudosteppers.DualTVDRK3PseudoStepper
xchg_matrix() (pyfr.backends.opencl.base.OpenCLBackend
method), 46 system (pyfr.integrators.dual.phys.controllers.DualNoneController
xchg_matrix() (pyfr.backends.openmp.base.OpenMPBackend
property), 33 system (pyfr.integrators.dual.phys.steppers.DualBackwardEulerStepper
xchg_matrix_for_view()
property), 41
(pyfr.backends.cuda.base.CUDABackend system (pyfr.integrators.dual.phys.steppers.SDIRK33Stepper
property), 42
xchg_matrix_for_view()
system (pyfr.integrators.dual.phys.steppers.SDIRK43Stepper
(pyfr.backends.hip.base.HIPBackend method),
property), 43
xchg_matrix_for_view()
U (pyfr.backends.opencl.base.OpenCLBackend unordered_meta_kernel() method), 66
(pyfr.backends.cuda.base.CUDABackend xchg_matrix_for_view()
method), 64 (pyfr.backends.openmp.base.OpenMPBackend unordered_meta_kernel() method), 66
(pyfr.backends.hip.base.HIPBackend method), xchg_view() (pyfr.backends.cuda.base.CUDABackend unordered_meta_kernel() xchg_view() (pyfr.backends.hip.base.HIPBackend
(pyfr.backends.opencl.base.OpenCLBackend method), 65
method), 66 xchg_view() (pyfr.backends.opencl.base.OpenCLBackend unordered_meta_kernel() method), 66
(pyfr.backends.openmp.base.OpenMPBackend xchg_view() (pyfr.backends.openmp.base.OpenMPBackend
method), 66 method), 66 upts (pyfr.solvers.aceuler.elements.ACEulerElements
property), 55 upts (pyfr.solvers.acnavstokes.elements.ACNavierStokesElements
property), 56 upts (pyfr.solvers.euler.elements.EulerElements prop-
erty), 57 upts (pyfr.solvers.navstokes.elements.NavierStokesElements
property), 59 |
pexpect | readthedoc | Python | Pexpect 4.8 documentation
---
Pexpect version 4.8[¶](#pexpect-version-version)
===
Pexpect makes Python a better tool for controlling other applications.
Pexpect is a pure Python module for spawning child applications;
controlling them; and responding to expected patterns in their output.
Pexpect works like Don Libes’ Expect. Pexpect allows your script to spawn a child application and control it as if a human were typing commands.
Pexpect can be used for automating interactive applications such as ssh, ftp, passwd, telnet, etc. It can be used to automate setup scripts for duplicating software package installations on different servers. It can be used for automated software testing. Pexpect is in the spirit of Don Libes’ Expect, but Pexpect is pure Python. Unlike other Expect-like modules for Python, Pexpect does not require TCL or Expect nor does it require C extensions to be compiled. It should work on any platform that supports the standard Python pty module. The Pexpect interface was designed to be easy to use.
Contents:
Installation[¶](#installation)
---
Pexpect is on PyPI, and can be installed with standard tools:
```
pip install pexpect
```
Or:
```
easy_install pexpect
```
### Requirements[¶](#requirements)
This version of Pexpect requires Python 3.3 or above, or Python 2.7.
As of version 4.0, Pexpect can be used on Windows and POSIX systems. However,
[`pexpect.spawn`](index.html#pexpect.spawn) and [`pexpect.run()`](index.html#pexpect.run) are only available on POSIX,
where the [`pty`](https://docs.python.org/3/library/pty.html#module-pty) module is present in the standard library. See
[Pexpect on Windows](index.html#windows) for more information.
API Overview[¶](#api-overview)
---
Pexpect can be used for automating interactive applications such as ssh, ftp,
mencoder, passwd, etc. The Pexpect interface was designed to be easy to use.
Here is an example of Pexpect in action:
```
# This connects to the openbsd ftp site and
# downloads the recursive directory listing.
import pexpect
child = pexpect.spawn('ftp ftp.openbsd.org')
child.expect('Name .*: ')
child.sendline('anonymous')
child.expect('Password:')
child.sendline('<EMAIL>')
child.expect('ftp> ')
child.sendline('lcd /tmp')
child.expect('ftp> ')
child.sendline('cd pub/OpenBSD')
child.expect('ftp> ')
child.sendline('get README')
child.expect('ftp> ')
child.sendline('bye')
```
Obviously you could write an ftp client using Python’s own [`ftplib`](https://docs.python.org/3/library/ftplib.html#module-ftplib) module,
but this is just a demonstration. You can use this technique with any application.
This is especially handy if you are writing automated test tools.
There are two important methods in Pexpect – [`expect()`](index.html#pexpect.spawn.expect) and
[`send()`](index.html#pexpect.spawn.send) (or [`sendline()`](index.html#pexpect.spawn.sendline) which is like [`send()`](index.html#pexpect.spawn.send) with a linefeed). The [`expect()`](index.html#pexpect.spawn.expect)
method waits for the child application to return a given string. The string you specify is a regular expression, so you can match complicated patterns. The
[`send()`](index.html#pexpect.spawn.send) method writes a string to the child application.
From the child’s point of view it looks just like someone typed the text from a terminal. After each call to [`expect()`](index.html#pexpect.spawn.expect) the `before` and `after`
properties will be set to the text printed by the child application. The `before`
property will contain all text up to the expected string pattern. The `after`
string will contain the text that was matched by the expected pattern.
The match property is set to the [re match object](http://docs.python.org/3/library/re#match-objects).
An example of Pexpect in action may make things more clear. This example uses ftp to login to the OpenBSD site; list files in a directory; and then pass interactive control of the ftp session to the human user:
```
import pexpect
child = pexpect.spawn('ftp ftp.openbsd.org')
child.expect ('Name .*: ')
child.sendline ('anonymous')
child.expect ('Password:')
child.sendline ('<EMAIL>')
child.expect ('ftp> ')
child.sendline ('ls /pub/OpenBSD/')
child.expect ('ftp> ')
print(child.before) # Print the result of the ls command.
child.interact() # Give control of the child to the user.
```
### Special EOF and TIMEOUT patterns[¶](#special-eof-and-timeout-patterns)
There are two special patterns to match the End Of File ([`EOF`](index.html#pexpect.EOF))
or a Timeout condition ([`TIMEOUT`](index.html#pexpect.TIMEOUT)). You can pass these patterns to [`expect()`](index.html#pexpect.spawn.expect). These patterns are not regular expressions. Use them like predefined constants.
If the child has died and you have read all the child’s output then ordinarily
[`expect()`](index.html#pexpect.spawn.expect) will raise an [`EOF`](index.html#pexpect.EOF) exception.
You can read everything up to the EOF without generating an exception by passing the EOF pattern to [`expect()`](index.html#pexpect.spawn.expect). In this case everything the child has output will be available in the `before` property.
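For example, a minimal sketch of draining a short-lived command this way (the `ls` invocation is only illustrative):
```
import pexpect
child = pexpect.spawn('ls -l /tmp')
child.expect(pexpect.EOF) # matches end-of-file instead of raising
print(child.before) # everything the child printed
```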
The pattern given to [`expect()`](index.html#pexpect.spawn.expect) may be a regular expression or it may also be a list of regular expressions. This allows you to match multiple optional responses. The [`expect()`](index.html#pexpect.spawn.expect) method returns the index of the pattern that was matched. For example, say you wanted to login to a server. After entering a password you could get various responses from the server – your password could be rejected; or you could be allowed in and asked for your terminal type; or you could be let right in and given a command prompt.
The following code fragment gives an example of this:
```
child.expect('password:')
child.sendline(my_secret_password)
# We expect any of these three patterns...
i = child.expect (['Permission denied', 'Terminal type', '[#\$] '])
if i==0:
print('Permission denied on host. Can\'t login')
child.kill(0)
elif i==1:
print('Login OK... need to send terminal type.')
child.sendline('vt100')
child.expect('[#\$] ')
elif i==2:
print('Login OK.')
print('Shell command prompt', child.after)
```
If nothing matches an expected pattern then [`expect()`](index.html#pexpect.spawn.expect) will eventually raise a [`TIMEOUT`](index.html#pexpect.TIMEOUT) exception. The default time is 30 seconds, but you can change this by passing a timeout argument to
[`expect()`](index.html#pexpect.spawn.expect):
```
# Wait no more than 2 minutes (120 seconds) for password prompt.
child.expect('password:', timeout=120)
```
### Find the end of line – CR/LF conventions[¶](#find-the-end-of-line-cr-lf-conventions)
Pexpect matches regular expressions a little differently than what you might be used to.
The `$` pattern for end of line match is useless. The `$`
matches the end of string, but Pexpect reads from the child one character at a time, so each character looks like the end of a line. Pexpect can’t do a look-ahead into the child’s output stream. In general you would have this situation when using regular expressions with any stream.
Note
Pexpect does have an internal buffer, so reads are faster than one character at a time, but from the user’s perspective the regex pattern matching still happens one character at a time.
The best way to match the end of a line is to look for the newline: `"\r\n"`
(CR/LF). Yes, that does appear to be DOS-style. It may surprise some UNIX people to learn that terminal TTY device drivers (dumb, vt100, ANSI, xterm, etc.) all use the CR/LF combination to signify the end of line. Pexpect uses a Pseudo-TTY device to talk to the child application, so when the child app prints `"\n"`
you actually see `"\r\n"`.
UNIX uses just linefeeds to end lines of text, but not when it comes to TTY devices! TTY devices are more like the Windows world. Each line of text ends with a CR/LF combination. When you intercept data from a UNIX command from a TTY device you will find that the TTY device outputs a CR/LF combination. A UNIX command may only write a linefeed (`\n`), but the TTY device driver converts it to CR/LF. This means that your terminal will see lines end with CR/LF (hex `0D 0A`). Since Pexpect emulates a terminal, to match ends of lines you have to expect the CR/LF combination:
```
child.expect('\r\n')
```
If you just need to skip past a new line then `expect('\n')` by itself will work, but if you are expecting a specific pattern before the end of line then you need to explicitly look for the `\r`. For example the following expects a word at the end of a line:
```
child.expect('\w+\r\n')
```
But the following would fail:
```
child.expect('\w+\n')
```
And as explained before, trying to use `$` to match the end of line would not work either:
```
child.expect ('\w+$')
```
So if you need to explicitly look for the END OF LINE, you want to look for the CR/LF combination – not just the LF and not the $ pattern.
This problem is not limited to Pexpect. This problem happens any time you try to perform a regular expression match on a stream. Regular expressions need to look ahead. With a stream it is hard to look ahead because the process generating the stream may not be finished. There is no way to know if the process has paused momentarily or is finished and waiting for you. Pexpect must implicitly always do a non-greedy (minimal) match at the end of the input.
Pexpect compiles all regular expressions with the [`re.DOTALL`](https://docs.python.org/3/library/re.html#re.DOTALL) flag.
With the [`DOTALL`](https://docs.python.org/3/library/re.html#re.DOTALL) flag, a `"."` will match a newline.
### Beware of + and * at the end of patterns[¶](#beware-of-and-at-the-end-of-patterns)
Remember that any time you try to match a pattern that needs look-ahead, you will always get a minimal match (non-greedy). For example, the following will always return just one character:
```
child.expect ('.+')
```
This example will match successfully, but will always return no characters:
```
child.expect ('.*')
```
Generally any star * expression will match as little as possible.
One thing you can do is to try to force a non-ambiguous character at the end of your `\d+` pattern. Expect that character to delimit the string. For example, you might try making the end of your pattern be `\D+` instead of `\D*`. Number digits alone would not satisfy the `(\d+)\D+`
pattern. You would need some numbers and at least one non-number at the end.
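As a sketch of that idea (the pattern and the surrounding `child` object are only illustrative), anchoring the digits with a trailing non-digit forces the match to wait until the whole number has arrived:
```
child.expect(r'(\d+)\D+') # completes only once a non-digit follows the number
number = int(child.match.group(1)) # group(1) is bytes in the default (bytes) mode
```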
### Debugging[¶](#debugging)
If you get the string value of a [`pexpect.spawn`](index.html#pexpect.spawn) object you will get lots of useful debugging information. For debugging it’s very useful to use the following pattern:
```
try:
i = child.expect ([pattern1, pattern2, pattern3, etc])
except:
print("Exception was thrown")
print("debug information:")
print(str(child))
```
It is also useful to log the child’s input and output to a file or the screen. The following will turn on logging and send output to stdout (the screen):
```
child = pexpect.spawn(foo)
child.logfile = sys.stdout.buffer
```
The sys.stdout.buffer object is available since Python 3. With Python 2, one has to assign just sys.stdout instead.
### Exceptions[¶](#exceptions)
[`EOF`](index.html#pexpect.EOF)
Note that two flavors of EOF Exception may be thrown. They are virtually identical except for the message string. For practical purposes you should have no need to distinguish between them, but they do give a little extra information about what type of platform you are running. The two messages are:
* “End Of File (EOF) in read(). Exception style platform.”
* “End Of File (EOF) in read(). Empty string style platform.”
Some UNIX platforms will throw an exception when you try to read from a file descriptor in the EOF state. Other UNIX platforms instead quietly return an empty string to indicate that the EOF state has been reached.
If you wish to read up to the end of the child’s output without generating an
[`EOF`](index.html#pexpect.EOF) exception then use the `expect(pexpect.EOF)` method.
[`TIMEOUT`](index.html#pexpect.TIMEOUT)
The [`expect()`](index.html#pexpect.spawn.expect) and [`read()`](index.html#pexpect.spawn.read) methods will also timeout if the child does not generate any output for a given amount of time. If this happens they will raise a [`TIMEOUT`](index.html#pexpect.TIMEOUT) exception.
You can have these methods ignore timeout and block indefinitely by passing
`None` for the timeout parameter:
```
child.expect(pexpect.EOF, timeout=None)
```
### Pexpect on Windows[¶](#pexpect-on-windows)
New in version 4.0: Windows support
Pexpect can be used on Windows to wait for a pattern to be produced by a child process, using [`pexpect.popen_spawn.PopenSpawn`](index.html#pexpect.popen_spawn.PopenSpawn), or a file descriptor,
using [`pexpect.fdpexpect.fdspawn`](index.html#pexpect.fdpexpect.fdspawn).
[`pexpect.spawn`](index.html#pexpect.spawn) and [`pexpect.run()`](index.html#pexpect.run) are *not* available on Windows,
as they rely on Unix pseudoterminals (ptys). Cross platform code must not use these.
`PopenSpawn` is not a direct replacement for `spawn`. Many programs only offer interactive behaviour if they detect that they are running in a terminal.
When run by `PopenSpawn`, they may behave differently.
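A minimal sketch of waiting on a console program with `PopenSpawn` (the `ping` command and its flags are purely illustrative):
```
import pexpect
from pexpect.popen_spawn import PopenSpawn

child = PopenSpawn('ping -n 3 localhost')
child.expect(pexpect.EOF)
print(child.before.decode())
```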
See also
[winpexpect](https://pypi.python.org/pypi/winpexpect) and [wexpect](https://gist.github.com/anthonyeden/8488763)
Two unmaintained pexpect-like modules for Windows, which work with a hidden console.
API documentation[¶](#api-documentation)
---
### Core pexpect components[¶](#module-pexpect)
Pexpect is a Python module for spawning child applications and controlling them automatically. Pexpect can be used for automating interactive applications such as ssh, ftp, passwd, telnet, etc. It can be used to automate setup scripts for duplicating software package installations on different servers. It can be used for automated software testing. Pexpect is in the spirit of Don Libes’ Expect, but Pexpect is pure Python. Other Expect-like modules for Python require TCL and Expect or require C extensions to be compiled. Pexpect does not use C, Expect, or TCL extensions. It should work on any platform that supports the standard Python pty module. The Pexpect interface focuses on ease of use so that simple tasks are easy.
There are two main interfaces to the Pexpect system; these are the function,
run() and the class, spawn. The spawn class is more powerful. The run()
function is simpler than spawn, and is good for quickly calling a program. When you call the run() function it executes a given program and then returns the output. This is a handy replacement for os.system().
For example:
```
pexpect.run('ls -la')
```
The spawn class is the more powerful interface to the Pexpect system. You can use this to spawn a child program then interact with it by sending input and expecting responses (waiting for patterns in the child’s output).
For example:
```
child = pexpect.spawn('scp foo <EMAIL>:.')
child.expect('Password:')
child.sendline(mypassword)
```
This works even for commands that ask for passwords or other input outside of the normal stdio streams. For example, ssh reads input directly from the TTY device which bypasses stdin.
Credits: <NAME>, <NAME>, <NAME>, <NAME>,
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>,
<NAME>, <NAME>, <NAME>, <NAME>,
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Let me know if I forgot anyone.
Pexpect is free, open source, and all that good stuff.
<http://pexpect.sourceforge.net/>
PEXPECT LICENSE
> This license is approved by the OSI and FSF as GPL-compatible.
> <http://opensource.org/licenses/isc-license.txt>
> Copyright (c) 2012, <NAME> <[<EMAIL>](mailto:<EMAIL>)>
> PERMISSION TO USE, COPY, MODIFY, AND/OR DISTRIBUTE THIS SOFTWARE FOR ANY
> PURPOSE WITH OR WITHOUT FEE IS HEREBY GRANTED, PROVIDED THAT THE ABOVE
> COPYRIGHT NOTICE AND THIS PERMISSION NOTICE APPEAR IN ALL COPIES.
> THE SOFTWARE IS PROVIDED “AS IS” AND THE AUTHOR DISCLAIMS ALL WARRANTIES
> WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
> MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
> ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
> WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
> ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
> OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
#### spawn class[¶](#spawn-class)
*class* `pexpect.``spawn`(*command*, *args=[]*, *timeout=30*, *maxread=2000*, *searchwindowsize=None*, *logfile=None*, *cwd=None*, *env=None*, *ignore_sighup=False*, *echo=True*, *preexec_fn=None*, *encoding=None*, *codec_errors='strict'*, *dimensions=None*, *use_poll=False*)[[source]](_modules/pexpect/pty_spawn.html#spawn)[¶](#pexpect.spawn)
This is the main class interface for Pexpect. Use this class to start and control child applications.
`__init__`(*command*, *args=[]*, *timeout=30*, *maxread=2000*, *searchwindowsize=None*, *logfile=None*, *cwd=None*, *env=None*, *ignore_sighup=False*, *echo=True*, *preexec_fn=None*, *encoding=None*, *codec_errors='strict'*, *dimensions=None*, *use_poll=False*)[[source]](_modules/pexpect/pty_spawn.html#spawn.__init__)[¶](#pexpect.spawn.__init__)
This is the constructor. The command parameter may be a string that includes a command and any arguments to the command. For example:
```
child = pexpect.spawn('/usr/bin/ftp')
child = pexpect.spawn('/usr/bin/ssh <EMAIL>')
child = pexpect.spawn('ls -latr /tmp')
```
You may also construct it with a list of arguments like so:
```
child = pexpect.spawn('/usr/bin/ftp', [])
child = pexpect.spawn('/usr/bin/ssh', ['<EMAIL>'])
child = pexpect.spawn('ls', ['-latr', '/tmp'])
```
After this the child application will be created and will be ready to talk to. For normal use, see expect() and send() and sendline().
Remember that Pexpect does NOT interpret shell meta characters such as redirect, pipe, or wild cards (`>`, `|`, or `*`). This is a common mistake. If you want to run a command and pipe it through another command then you must also start a shell. For example:
```
child = pexpect.spawn('/bin/bash -c "ls -l | grep LOG > logs.txt"')
child.expect(pexpect.EOF)
```
The second form of spawn (where you pass a list of arguments) is useful in situations where you wish to spawn a command and pass it its own argument list. This can make syntax more clear. For example, the following is equivalent to the previous example:
```
shell_cmd = 'ls -l | grep LOG > logs.txt'
child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
child.expect(pexpect.EOF)
```
The maxread attribute sets the read buffer size. This is the maximum number of bytes that Pexpect will try to read from a TTY at one time. Setting the maxread size to 1 will turn off buffering. Setting the maxread value higher may help performance in cases where large amounts of output are read back from the child. This feature is useful in conjunction with searchwindowsize.
When the keyword argument *searchwindowsize* is None (default), the full buffer is searched at each iteration of receiving incoming data.
The default number of bytes scanned at each iteration is very large and may be reduced to collaterally reduce search cost. After
[`expect()`](#pexpect.spawn.expect) returns, the full buffer attribute remains up to size *maxread* irrespective of *searchwindowsize* value.
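For example, a sketch of a spawn tuned for a child that produces a lot of output (`some_command` is a placeholder):
```
import pexpect

# Read up to 8 KiB per read, but only scan the trailing 4 KiB for patterns.
child = pexpect.spawn('some_command', maxread=8192, searchwindowsize=4096)
```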
When the keyword argument `timeout` is specified as a number,
(default: *30*), then [`TIMEOUT`](#pexpect.TIMEOUT) will be raised after the value specified has elapsed, in seconds, for any of the [`expect()`](#pexpect.spawn.expect)
family of method calls. When None, TIMEOUT will not be raised, and
[`expect()`](#pexpect.spawn.expect) may block indefinitely until match.
The logfile member turns on or off logging. All input and output will be copied to the given file object. Set logfile to None to stop logging. This is the default. Set logfile to sys.stdout to echo everything to standard output. The logfile is flushed after each write.
Example log input and output to a file:
```
child = pexpect.spawn('some_command')
fout = open('mylog.txt','wb')
child.logfile = fout
```
Example log to stdout:
```
# In Python 2:
child = pexpect.spawn('some_command')
child.logfile = sys.stdout
# In Python 3, we'll use the ``encoding`` argument to decode data
# from the subprocess and handle it as unicode:
child = pexpect.spawn('some_command', encoding='utf-8')
child.logfile = sys.stdout
```
The logfile_read and logfile_send members can be used to separately log the input from the child and output sent to the child. Sometimes you don’t want to see everything you write to the child. You only want to log what the child sends back. For example:
```
child = pexpect.spawn('some_command')
child.logfile_read = sys.stdout
```
You will need to pass an encoding to spawn in the above code if you are using Python 3.
To separately log output sent to the child use logfile_send:
```
child.logfile_send = fout
```
If `ignore_sighup` is True, the child process will ignore SIGHUP signals. The default is False from Pexpect 4.0, meaning that SIGHUP will be handled normally by the child.
The delaybeforesend helps overcome a weird behavior that many users were experiencing. The typical problem was that a user would expect() a
“Password:” prompt and then immediately call sendline() to send the password. The user would then see that their password was echoed back to them. Passwords don’t normally echo. The problem is caused by the fact that most applications print out the “Password” prompt and then turn off stdin echo, but if you send your password before the application turned off echo, then you get your password echoed.
Normally this wouldn’t be a problem when interacting with a human at a real keyboard. If you introduce a slight delay just before writing then this seems to clear up the problem. This was such a common problem for many users that I decided that the default pexpect behavior should be to sleep just before writing to the child application. 1/20th of a second (50 ms) seems to be enough to clear up the problem. You can set delaybeforesend to None to return to the old behavior.
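A small sketch of restoring the old behaviour (`some_command` is a placeholder):
```
import pexpect

child = pexpect.spawn('some_command')
child.delaybeforesend = None # disable the default 50 ms pause before each send
```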
Note that spawn is clever about finding commands on your path.
It uses the same logic that “which” uses to find executables.
If you wish to get the exit status of the child you must call the close() method. The exit or signal status of the child will be stored in self.exitstatus or self.signalstatus. If the child exited normally then exitstatus will store the exit return code and signalstatus will be None. If the child was terminated abnormally with a signal then signalstatus will store the signal value and exitstatus will be None:
```
child = pexpect.spawn('some_command')
child.close()
print(child.exitstatus, child.signalstatus)
```
If you need more detail you can also read the self.status member which stores the status returned by os.waitpid. You can interpret this using os.WIFEXITED/os.WEXITSTATUS or os.WIFSIGNALED/os.WTERMSIG.
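Building on the snippet above, a sketch of decoding self.status with the os helpers (child.status is set once the child has been closed or waited on):
```
import os

if os.WIFEXITED(child.status):
    print('exit code:', os.WEXITSTATUS(child.status))
elif os.WIFSIGNALED(child.status):
    print('killed by signal:', os.WTERMSIG(child.status))
```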
The echo attribute may be set to False to disable echoing of input.
As a pseudo-terminal, all input echoed by the “keyboard” (send()
or sendline()) will be repeated to output. For many cases, it is not desirable to have echo enabled, and it may be later disabled using setecho(False) followed by waitnoecho(). However, for some platforms such as Solaris, this is not possible, and should be disabled immediately on spawn.
If preexec_fn is given, it will be called in the child process before launching the given command. This is useful to e.g. reset inherited signal handlers.
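For instance, a sketch of resetting an inherited handler before the command runs (the helper name and `some_command` are placeholders):
```
import signal
import pexpect

def reset_sigpipe():
    # Runs in the child after fork(), before the command is exec'd.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

child = pexpect.spawn('some_command', preexec_fn=reset_sigpipe)
```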
The dimensions attribute specifies the size of the pseudo-terminal as seen by the subprocess, and is specified as a two-entry tuple (rows,
columns). If this is unspecified, the defaults in ptyprocess will apply.
The use_poll attribute enables using select.poll() over select.select()
for socket handling. This is handy if your system could have more than 1024 file descriptors.
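A brief sketch combining both options (`some_command` is a placeholder):
```
import pexpect

# A 40-row by 132-column pseudo-terminal, with select.poll() used internally.
child = pexpect.spawn('some_command', dimensions=(40, 132), use_poll=True)
```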
`expect`(*pattern*, *timeout=-1*, *searchwindowsize=-1*, *async_=False*, ***kw*)[¶](#pexpect.spawn.expect)
This seeks through the stream until a pattern is matched. The pattern is overloaded and may take several types. The pattern can be a StringType, EOF, a compiled re, or a list of any of those types.
Strings will be compiled to re types. This returns the index into the pattern list. If the pattern was not a list this returns index 0 on a successful match. This may raise exceptions for EOF or TIMEOUT. To avoid the EOF or TIMEOUT exceptions add EOF or TIMEOUT to the pattern list. That will cause expect to match an EOF or TIMEOUT condition instead of raising an exception.
If you pass a list of patterns and more than one matches, the first match in the stream is chosen. If more than one pattern matches at that point, the leftmost in the pattern list is chosen. For example:
```
# the input is 'foobar'
index = p.expect(['bar', 'foo', 'foobar'])
# returns 1('foo') even though 'foobar' is a "better" match
```
Please note, however, that buffering can affect this behavior, since input arrives in unpredictable chunks. For example:
```
# the input is 'foobar'
index = p.expect(['foobar', 'foo'])
# returns 0('foobar') if all input is available at once,
# but returns 1('foo') if parts of the final 'bar' arrive late
```
When a match is found for the given pattern, the class instance attribute *match* becomes an re.MatchObject result. Should an EOF or TIMEOUT pattern match, then the match attribute will be an instance of that exception class. The pairing before and after class instance attributes are views of the data preceding and following the matching pattern. On general exception, class attribute
*before* is all data received up to the exception, while *match* and
*after* attributes are value None.
When the keyword argument timeout is -1 (default), then TIMEOUT will raise after the default value specified by the class timeout attribute. When None, TIMEOUT will not be raised and may block indefinitely until match.
When the keyword argument searchwindowsize is -1 (default), then the value specified by the class maxread attribute is used.
A list entry may be EOF or TIMEOUT instead of a string. This will catch these exceptions and return the index of the list entry instead of raising the exception. The attribute ‘after’ will be set to the exception type. The attribute ‘match’ will be None. This allows you to write code like this:
```
index = p.expect(['good', 'bad', pexpect.EOF, pexpect.TIMEOUT])
if index == 0:
do_something()
elif index == 1:
do_something_else()
elif index == 2:
do_some_other_thing()
elif index == 3:
do_something_completely_different()
```
instead of code like this:
```
try:
index = p.expect(['good', 'bad'])
if index == 0:
do_something()
elif index == 1:
do_something_else()
except EOF:
do_some_other_thing()
except TIMEOUT:
do_something_completely_different()
```
These two forms are equivalent. It all depends on what you want. You can also just expect the EOF if you are waiting for all output of a child to finish. For example:
```
p = pexpect.spawn('/bin/ls')
p.expect(pexpect.EOF)
print(p.before)
```
If you are trying to optimize for speed then see expect_list().
On Python 3.4, or Python 3.3 with asyncio installed, passing
`async_=True` will make this return an [`asyncio`](https://docs.python.org/3/library/asyncio.html#module-asyncio) coroutine,
which you can yield from to get the same result that this method would normally give directly. So, inside a coroutine, you can replace this code:
```
index = p.expect(patterns)
```
With this non-blocking form:
```
index = yield from p.expect(patterns, async_=True)
```
`expect_exact`(*pattern_list*, *timeout=-1*, *searchwindowsize=-1*, *async_=False*, ***kw*)[¶](#pexpect.spawn.expect_exact)
This is similar to expect(), but uses plain string matching instead of compiled regular expressions in ‘pattern_list’. The ‘pattern_list’
may be a string; a list or other sequence of strings; or TIMEOUT and EOF.
This call might be faster than expect() for two reasons: string searching is faster than RE matching and it is possible to limit the search to just the end of the input buffer.
This method is also useful when you don’t want to have to worry about escaping regular expression characters that you want to match.
Like [`expect()`](#pexpect.spawn.expect), passing `async_=True` will make this return an asyncio coroutine.
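A small sketch, assuming `child` wraps an interactive shell; the literal strings need no regex escaping:
```
child.sendline('echo "cost: $1.50 (net)"')
# '$', '(' and ')' are matched literally here.
i = child.expect_exact(['cost: $1.50 (net)', pexpect.EOF, pexpect.TIMEOUT])
```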
`expect_list`(*pattern_list*, *timeout=-1*, *searchwindowsize=-1*, *async_=False*, ***kw*)[¶](#pexpect.spawn.expect_list)
This takes a list of compiled regular expressions and returns the index into the pattern_list that matched the child output. The list may also contain EOF or TIMEOUT(which are not compiled regular expressions). This method is similar to the expect() method except that expect_list() does not recompile the pattern list on every call. This may help if you are trying to optimize for speed, otherwise just use the expect() method. This is called by expect().
Like [`expect()`](#pexpect.spawn.expect), passing `async_=True` will make this return an asyncio coroutine.
`compile_pattern_list`(*patterns*)[¶](#pexpect.spawn.compile_pattern_list)
This compiles a pattern-string or a list of pattern-strings.
Patterns must be a StringType, EOF, TIMEOUT, SRE_Pattern, or a list of those. Patterns may also be None which results in an empty list (you might do this if waiting for an EOF or TIMEOUT condition without expecting any pattern).
This is used by expect() when calling expect_list(). Thus expect() is nothing more than:
```
cpl = self.compile_pattern_list(pl)
return self.expect_list(cpl, timeout)
```
If you are using expect() within a loop it may be more efficient to compile the patterns first and then call expect_list().
This avoids calls in a loop to compile_pattern_list():
```
cpl = self.compile_pattern_list(my_pattern)
while some_condition:
...
i = self.expect_list(cpl, timeout)
...
```
`send`(*s*)[[source]](_modules/pexpect/pty_spawn.html#spawn.send)[¶](#pexpect.spawn.send)
Sends string `s` to the child process, returning the number of bytes written. If a logfile is specified, a copy is written to that log.
The default terminal input mode is canonical processing unless set otherwise by the child process. This allows backspace and other line processing to be performed prior to transmitting to the receiving program. As this is buffered, there is a limited size of such buffer.
On Linux systems, this is 4096 (defined by N_TTY_BUF_SIZE). All other systems honor the POSIX.1 definition PC_MAX_CANON – 1024 on OSX, 256 on OpenSolaris, and 1920 on FreeBSD.
This value may be discovered using fpathconf(3):
```
>>> from os import fpathconf
>>> print(fpathconf(0, 'PC_MAX_CANON'))
256
```
On such a system, only 256 bytes may be received per line. Any subsequent bytes received will be discarded. BEL (`'\a'`) is then sent to output if IMAXBEL (termios.h) is set by the tty driver.
This is usually enabled by default. Linux does not honor this as an option – it behaves as though it is always set on.
Canonical input processing may be disabled altogether by executing a shell, then stty(1), before executing the final program:
```
>>> bash = pexpect.spawn('/bin/bash', echo=False)
>>> bash.sendline('stty -icanon')
>>> bash.sendline('base64')
>>> bash.sendline('x' * 5000)
```
`sendline`(*s=''*)[[source]](_modules/pexpect/pty_spawn.html#spawn.sendline)[¶](#pexpect.spawn.sendline)
Wraps send(), sending string `s` to child process, with
`os.linesep` automatically appended. Returns number of bytes written. Only a limited number of bytes may be sent for each line in the default terminal mode, see docstring of [`send()`](#pexpect.spawn.send).
`write`(*s*)[[source]](_modules/pexpect/pty_spawn.html#spawn.write)[¶](#pexpect.spawn.write)
This is similar to send() except that there is no return value.
`writelines`(*sequence*)[[source]](_modules/pexpect/pty_spawn.html#spawn.writelines)[¶](#pexpect.spawn.writelines)
This calls write() for each element in the sequence. The sequence can be any iterable object producing strings, typically a list of strings. This does not add line separators. There is no return value.
`sendcontrol`(*char*)[[source]](_modules/pexpect/pty_spawn.html#spawn.sendcontrol)[¶](#pexpect.spawn.sendcontrol)
Helper method that wraps send() with mnemonic access for sending a control character to the child (such as Ctrl-C or Ctrl-D). For example, to send Ctrl-G (ASCII 7, bell, `'\a'`):
```
child.sendcontrol('g')
```
See also, sendintr() and sendeof().
`sendeof`()[[source]](_modules/pexpect/pty_spawn.html#spawn.sendeof)[¶](#pexpect.spawn.sendeof)
This sends an EOF to the child. This sends a character which causes the pending parent output buffer to be sent to the waiting child program without waiting for end-of-line. If it is the first character of the line, the read() in the user program returns 0, which signifies end-of-file. This means to work as expected a sendeof() has to be called at the beginning of a line. This method does not send a newline.
It is the responsibility of the caller to ensure the eof is sent at the beginning of a line.
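A minimal sketch using cat, which exits when it reads end-of-file:
```
import pexpect

child = pexpect.spawn('cat')
child.sendline('hello')
child.expect('hello') # the line comes back from cat
child.sendeof() # EOF at the start of a new line; cat exits
child.expect(pexpect.EOF)
```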
`sendintr`()[[source]](_modules/pexpect/pty_spawn.html#spawn.sendintr)[¶](#pexpect.spawn.sendintr)
This sends a SIGINT to the child. It does not require the SIGINT to be the first character on a line.
`read`(*size=-1*)[¶](#pexpect.spawn.read)
This reads at most “size” bytes from the file (less if the read hits EOF before obtaining size bytes). If the size argument is negative or omitted, read all data until EOF is reached. The bytes are returned as a string object. An empty string is returned when EOF is encountered immediately.
`readline`(*size=-1*)[¶](#pexpect.spawn.readline)
This reads and returns one entire line. The newline at the end of line is returned as part of the string, unless the file ends without a newline. An empty string is returned if EOF is encountered immediately.
This looks for a newline as a CR/LF pair (`\r\n`) even on UNIX because this is what the pseudotty device returns. So contrary to what you may expect you will receive newlines as `\r\n`.
If the size argument is 0 then an empty string is returned. In all other cases the size argument is ignored, which is not standard behavior for a file-like object.
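For example, a sketch that reads one line of a directory listing (the command is only illustrative); note the CR/LF ending:
```
import pexpect

child = pexpect.spawn('ls -1 /tmp')
first = child.readline() # e.g. b'somefile\r\n'
```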
`read_nonblocking`(*size=1*, *timeout=-1*)[[source]](_modules/pexpect/pty_spawn.html#spawn.read_nonblocking)[¶](#pexpect.spawn.read_nonblocking)
This reads at most size characters from the child application. It includes a timeout. If the read does not complete within the timeout period then a TIMEOUT exception is raised. If the end of file is read then an EOF exception will be raised. If a logfile is specified, a copy is written to that log.
If timeout is None then the read may block indefinitely.
If timeout is -1 then the self.timeout value is used. If timeout is 0 then the child is polled and if there is no data immediately ready then this will raise a TIMEOUT exception.
The timeout refers only to the amount of time to read at least one character. This is not affected by the ‘size’ parameter, so if you call read_nonblocking(size=100, timeout=30) and only one character is available right away then one character will be returned immediately.
It will not wait for 30 seconds for another 99 characters to come in.
On the other hand, if there are bytes available to read immediately,
all those bytes will be read (up to the buffer size). So, if the buffer size is 1 megabyte and there is 1 megabyte of data available to read, the buffer will be filled, regardless of timeout.
This is a wrapper around os.read(). It uses select.select() or select.poll() to implement the timeout.
`eof`()[[source]](_modules/pexpect/pty_spawn.html#spawn.eof)[¶](#pexpect.spawn.eof)
This returns True if the EOF exception was ever raised.
`interact`(*escape_character='\x1d'*, *input_filter=None*, *output_filter=None*)[[source]](_modules/pexpect/pty_spawn.html#spawn.interact)[¶](#pexpect.spawn.interact)
This gives control of the child process to the interactive user (the human at the keyboard). Keystrokes are sent to the child process, and the stdout and stderr output of the child process is printed. This simply echoes the child stdout and child stderr to the real stdout and it echoes the real stdin to the child stdin. When the user types the escape_character this method will return None. The escape_character will not be transmitted. The default for escape_character is entered as `Ctrl - ]`, the very same as BSD telnet. To prevent escaping, escape_character may be set to None.
If a logfile is specified, then the data sent and received from the child process in interact mode is duplicated to the given log.
You may pass in optional input and output filter functions. These functions should take a bytes array and return a bytes array too. Even with `encoding='utf-8'` support, `interact()` will always pass input_filter and output_filter bytes. You may need to wrap your function to decode and encode back to UTF-8.
The output_filter will be passed all the output from the child process.
The input_filter will be passed all the keyboard input from the user.
The input_filter is run BEFORE the check for the escape_character.
Note that if you change the window size of the parent the SIGWINCH signal will not be passed through to the child. If you want the child window size to change when the parent’s window size changes then do something like the following example:
```
import pexpect, struct, fcntl, termios, signal, sys

def sigwinch_passthrough(sig, data):
s = struct.pack("HHHH", 0, 0, 0, 0)
a = struct.unpack('hhhh', fcntl.ioctl(sys.stdout.fileno(),
termios.TIOCGWINSZ , s))
if not p.closed:
p.setwinsize(a[0],a[1])
# Note this 'p' is global and used in sigwinch_passthrough.
p = pexpect.spawn('/bin/bash')
signal.signal(signal.SIGWINCH, sigwinch_passthrough)
p.interact()
```
`logfile`[¶](#pexpect.spawn.logfile)
`logfile_read`[¶](#pexpect.spawn.logfile_read)
`logfile_send`[¶](#pexpect.spawn.logfile_send)
Set these to a Python file object (or [`sys.stdout`](https://docs.python.org/3/library/sys.html#sys.stdout)) to log all communication, data read from the child process, or data sent to the child process.
Note
With [`spawn`](#pexpect.spawn) in bytes mode, the log files should be open for writing binary data. In unicode mode, they should be open for writing unicode text. See [Handling unicode](#unicode).
##### Controlling the child process[¶](#controlling-the-child-process)
*class* `pexpect.``spawn`[[source]](_modules/pexpect/pty_spawn.html#spawn)
`kill`(*sig*)[[source]](_modules/pexpect/pty_spawn.html#spawn.kill)[¶](#pexpect.spawn.kill)
This sends the given signal to the child application. In keeping with UNIX tradition it has a misleading name. It does not necessarily kill the child unless you send the right signal.
`terminate`(*force=False*)[[source]](_modules/pexpect/pty_spawn.html#spawn.terminate)[¶](#pexpect.spawn.terminate)
This forces a child process to terminate. It starts nicely with SIGHUP and SIGINT. If “force” is True then moves onto SIGKILL. This returns True if the child was terminated. This returns False if the child could not be terminated.
`isalive`()[[source]](_modules/pexpect/pty_spawn.html#spawn.isalive)[¶](#pexpect.spawn.isalive)
This tests if the child process is running or not. This is non-blocking. If the child was terminated then this will read the exitstatus or signalstatus of the child. This returns True if the child process appears to be running or False if not. It can take literally SECONDS for Solaris to return the right status.
`wait`()[[source]](_modules/pexpect/pty_spawn.html#spawn.wait)[¶](#pexpect.spawn.wait)
This waits until the child exits. This is a blocking call. This will not read any data from the child, so this will block forever if the child has unread output and has terminated. In other words, the child may have printed output then called exit(), but, the child is technically still alive until its output is read by the parent.
This method is non-blocking if [`wait()`](#pexpect.spawn.wait) has already been called previously or [`isalive()`](#pexpect.spawn.isalive) method returns False. It simply returns the previously determined exit status.
`close`(*force=True*)[[source]](_modules/pexpect/pty_spawn.html#spawn.close)[¶](#pexpect.spawn.close)
This closes the connection with the child application. Note that calling close() more than once is valid. This emulates standard Python behavior with files. Set force to True if you want to make sure that the child is terminated (SIGKILL is sent if the child ignores SIGHUP and SIGINT).
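A minimal sketch combining `isalive()`, `close()` and the exit status attributes (the child command is illustrative):
```
import pexpect

# Stop a child and inspect how it ended.
child = pexpect.spawn('/bin/cat')      # cat blocks waiting for input
print(child.isalive())                 # True while the child is running
child.close(force=True)                # escalate to SIGKILL if needed
print(child.exitstatus, child.signalstatus)  # one is set, the other is None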
`getwinsize`()[[source]](_modules/pexpect/pty_spawn.html#spawn.getwinsize)[¶](#pexpect.spawn.getwinsize)
This returns the terminal window size of the child tty. The return value is a tuple of (rows, cols).
`setwinsize`(*rows*, *cols*)[[source]](_modules/pexpect/pty_spawn.html#spawn.setwinsize)[¶](#pexpect.spawn.setwinsize)
This sets the terminal window size of the child tty. This will cause a SIGWINCH signal to be sent to the child. This does not change the physical window size. It changes the size reported to TTY-aware applications like vi or curses – applications that respond to the SIGWINCH signal.
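A minimal sketch of reading and changing the child's terminal size:
```
import pexpect

# Inspect and change the child's terminal size.
child = pexpect.spawn('/bin/bash')
print(child.getwinsize())     # e.g. (24, 80) -> (rows, cols)
child.setwinsize(40, 120)     # child receives SIGWINCH and now sees 40x120
print(child.getwinsize())
child.close()
```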
`getecho`()[[source]](_modules/pexpect/pty_spawn.html#spawn.getecho)[¶](#pexpect.spawn.getecho)
This returns the terminal echo mode. This returns True if echo is on or False if echo is off. Child applications that are expecting you to enter a password often set ECHO False. See waitnoecho().
Not supported on platforms where `isatty()` returns False.
`setecho`(*state*)[[source]](_modules/pexpect/pty_spawn.html#spawn.setecho)[¶](#pexpect.spawn.setecho)
This sets the terminal echo mode on or off. Note that anything the child sent before the echo will be lost, so you should be sure that your input buffer is empty before you call setecho(). For example, the following will work as expected:
```
p = pexpect.spawn('cat')    # Echo is on by default.
p.sendline('1234')          # We expect to see this twice from the child...
p.expect(['1234'])          # ... once from the tty echo...
p.expect(['1234'])          # ... and again from cat itself.
p.setecho(False)            # Turn off tty echo.
p.sendline('abcd')          # We will see this only once (echoed by cat).
p.sendline('wxyz')          # We will see this only once (echoed by cat).
p.expect(['abcd'])
p.expect(['wxyz'])
```
The following WILL NOT WORK because the lines sent before the setecho will be lost:
```
p = pexpect.spawn('cat')
p.sendline('1234')
p.setecho(False)            # Turn off tty echo.
p.sendline('abcd')          # We will see this only once (echoed by cat).
p.sendline('wxyz')          # We will see this only once (echoed by cat).
p.expect(['1234'])
p.expect(['1234'])
p.expect(['abcd'])
p.expect(['wxyz'])
```
Not supported on platforms where `isatty()` returns False.
`waitnoecho`(*timeout=-1*)[[source]](_modules/pexpect/pty_spawn.html#spawn.waitnoecho)[¶](#pexpect.spawn.waitnoecho)
This waits until the terminal ECHO flag is set False. This returns True if the echo mode is off. This returns False if the ECHO flag was not set False before the timeout. This can be used to detect when the child is waiting for a password. Usually a child application will turn off echo mode when it is waiting for the user to enter a password. For example, instead of expecting the “password:” prompt you can wait for the child to set ECHO off:
```
p = pexpect.spawn('ssh <EMAIL>')
p.waitnoecho()
p.sendline(mypassword)
```
If timeout==-1 then this method will use the value in self.timeout.
If timeout==None then this method will block until the ECHO flag is False.
`pid`[¶](#pexpect.spawn.pid)
The process ID of the child process.
`child_fd`[¶](#pexpect.spawn.child_fd)
The file descriptor used to communicate with the child process.
##### Handling unicode[¶](#handling-unicode)
By default, [`spawn`](#pexpect.spawn) is a bytes interface: its read methods return bytes,
and its write/send and expect methods expect bytes. If you pass the *encoding*
parameter to the constructor, it will instead act as a unicode interface:
strings you send will be encoded using that encoding, and bytes received will be decoded before returning them to you. In this mode, patterns for
[`expect()`](#pexpect.spawn.expect) and [`expect_exact()`](#pexpect.spawn.expect_exact) should also be unicode.
Changed in version 4.0: [`spawn`](#pexpect.spawn) provides both the bytes and unicode interfaces. In Pexpect 3.x, the unicode interface was provided by a separate `spawnu` class.
For backwards compatibility, some Unicode is allowed in bytes mode: the send methods will encode arbitrary unicode as UTF-8 before sending it to the child process, and its expect methods can accept ascii-only unicode strings.
Note
Unicode handling with pexpect works the same way on Python 2 and 3, despite the difference in names. I.e.:
* Bytes mode works with `str` on Python 2, and [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes) on Python 3,
* Unicode mode works with `unicode` on Python 2, and [`str`](https://docs.python.org/3/library/stdtypes.html#str) on Python 3.
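A minimal sketch of unicode mode (the child command is illustrative): with an encoding set, send/expect work with `str` rather than `bytes`.
```
import pexpect

child = pexpect.spawn('cat', encoding='utf-8')
child.sendline('héllo')
child.expect('héllo')      # unicode pattern; the tty echo provides the match
print(child.after)
child.close()
```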
#### run function[¶](#run-function)
`pexpect.``run`(*command*, *timeout=30*, *withexitstatus=False*, *events=None*, *extra_args=None*, *logfile=None*, *cwd=None*, *env=None*, ***kwargs*)[[source]](_modules/pexpect/run.html#run)[¶](#pexpect.run)
This function runs the given command; waits for it to finish; then returns all output as a string. STDERR is included in output. If the full path to the command is not given then the path is searched.
Note that lines are terminated by a CR/LF (`\r\n`) combination even on UNIX-like systems because this is the standard for pseudo-ttys. If you set
‘withexitstatus’ to true, then run will return a tuple of (command_output,
exitstatus). If ‘withexitstatus’ is false then this returns just command_output.
The run() function can often be used instead of creating a spawn instance.
For example, the following code uses spawn:
```
from pexpect import *
child = spawn('scp foo <EMAIL>:.')
child.expect('(?i)password')
child.sendline(mypassword)
```
The previous code can be replaced with the following:
```
from pexpect import *
run('scp foo <EMAIL>:.', events={'(?i)password': mypassword})
```
**Examples**
Start the apache daemon on the local machine:
```
from pexpect import *
run("/usr/local/apache/bin/apachectl start")
```
Check in a file using SVN:
```
from pexpect import *
run("svn ci -m 'automatic commit' my_file.py")
```
Run a command and capture exit status:
```
from pexpect import *
(command_output, exitstatus) = run('ls -l /bin', withexitstatus=1)
```
The following will run SSH and execute ‘ls -l’ on the remote machine. The password ‘secret’ will be sent if the ‘(?i)password’ pattern is ever seen:
```
run("ssh [email protected] 'ls -l'",
events={'(?i)password':'secret\n'})
```
This will start mencoder to rip a video from DVD. This will also display progress ticks every 5 seconds as it runs. For example:
```
from pexpect import *
def print_ticks(d):
    print(d['event_count'], end=' ')
run("mencoder dvd://1 -o video.avi -oac copy -ovc copy",
events={TIMEOUT:print_ticks}, timeout=5)
```
The ‘events’ argument should be either a dictionary or a tuple list that contains patterns and responses. Whenever one of the patterns is seen in the command output, run() will send the associated response string.
So, run() in the above example can also be written as:
> run("mencoder dvd://1 -o video.avi -oac copy -ovc copy",
> events=[(TIMEOUT, print_ticks)], timeout=5)
Use a tuple list for events if the command output requires delicate control over which pattern should be matched, since the tuple list is passed to the expect() call as its pattern list, with the order of patterns preserved.
Note that you should put newlines in your string if Enter is necessary.
Like the example above, the responses may also contain a callback, either a function or method. It should accept a dictionary value as an argument.
The dictionary contains all the locals from the run() function, so you can access the child spawn object or any other variable defined in run()
(event_count, child, and extra_args are the most useful). A callback may return True to stop the current run process. Otherwise run() continues until the next event. A callback may also return a string which will be sent to the child. ‘extra_args’ is not used directly by run(); it provides a way to pass data to a callback function through run(), via the locals dictionary passed to the callback.
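A minimal sketch of a callback event (the command and callback name are illustrative): returning True from the callback stops run().
```
import pexpect

# Stop run() after three TIMEOUT ticks by returning True from the callback.
def stop_after_three(d):
    # d holds run()'s locals: d['child'], d['event_count'], d['extra_args'], ...
    return d['event_count'] >= 3

output = pexpect.run('ping -c 10 localhost',
                     events={pexpect.TIMEOUT: stop_after_three}, timeout=1)
```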
Like [`spawn`](#pexpect.spawn), passing *encoding* will make it work with unicode instead of bytes. You can pass *codec_errors* to control how errors in encoding and decoding are handled.
#### Exceptions[¶](#exceptions)
*class* `pexpect.``EOF`(*value*)[[source]](_modules/pexpect/exceptions.html#EOF)[¶](#pexpect.EOF)
Raised when EOF is read from a child.
This usually means the child has exited.
*class* `pexpect.``TIMEOUT`(*value*)[[source]](_modules/pexpect/exceptions.html#TIMEOUT)[¶](#pexpect.TIMEOUT)
Raised when a read time exceeds the timeout.
*class* `pexpect.``ExceptionPexpect`(*value*)[[source]](_modules/pexpect/exceptions.html#ExceptionPexpect)[¶](#pexpect.ExceptionPexpect)
Base class for all exceptions raised by this module.
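A minimal sketch of handling these exceptions explicitly instead of letting them propagate (the pattern is illustrative):
```
import pexpect

child = pexpect.spawn('ls -l /tmp')
try:
    child.expect('no-such-pattern', timeout=2)
except pexpect.EOF:
    print('child exited before the pattern appeared')
except pexpect.TIMEOUT:
    print('pattern was not seen before the timeout')
```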
#### Utility functions[¶](#utility-functions)
`pexpect.``which`(*filename*, *env=None*)[[source]](_modules/pexpect/utils.html#which)[¶](#pexpect.which)
This takes a given filename; tries to find it in the environment path;
then checks if it is executable. This returns the full path to the filename if found and executable. Otherwise this returns None.
`pexpect.``split_command_line`(*command_line*)[[source]](_modules/pexpect/utils.html#split_command_line)[¶](#pexpect.split_command_line)
This splits a command line into a list of arguments. It splits arguments on spaces, but handles embedded quotes, doublequotes, and escaped characters. It’s impossible to do this with a regular expression, so I wrote a little state machine to parse the command line.
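A minimal sketch of the two helpers:
```
import pexpect

print(pexpect.which('ls'))    # e.g. '/bin/ls', or None if not found/executable
print(pexpect.split_command_line("svn ci -m 'automatic commit' my_file.py"))
# -> ['svn', 'ci', '-m', 'automatic commit', 'my_file.py']
```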
### fdpexpect - use pexpect with a file descriptor[¶](#module-pexpect.fdpexpect)
This is like pexpect, but it will work with any file descriptor that you pass it. You are responsible for opening and closing the file descriptor.
This allows you to use Pexpect with sockets and named pipes (FIFOs).
PEXPECT LICENSE
> This license is approved by the OSI and FSF as GPL-compatible.
> <http://opensource.org/licenses/isc-license.txt>
> Copyright (c) 2012, <NAME> <[<EMAIL>](mailto:noah%40noah.org)>
> PERMISSION TO USE, COPY, MODIFY, AND/OR DISTRIBUTE THIS SOFTWARE FOR ANY
> PURPOSE WITH OR WITHOUT FEE IS HEREBY GRANTED, PROVIDED THAT THE ABOVE
> COPYRIGHT NOTICE AND THIS PERMISSION NOTICE APPEAR IN ALL COPIES.
> THE SOFTWARE IS PROVIDED “AS IS” AND THE AUTHOR DISCLAIMS ALL WARRANTIES
> WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
> MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
> ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
> WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
> ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
> OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
#### fdspawn class[¶](#fdspawn-class)
*class* `pexpect.fdpexpect.``fdspawn`(*fd*, *args=None*, *timeout=30*, *maxread=2000*, *searchwindowsize=None*, *logfile=None*, *encoding=None*, *codec_errors='strict'*, *use_poll=False*)[[source]](_modules/pexpect/fdpexpect.html#fdspawn)[¶](#pexpect.fdpexpect.fdspawn)
Bases: `pexpect.spawnbase.SpawnBase`
This is like pexpect.spawn but allows you to supply your own open file descriptor. For example, you could use it to read through a file looking for patterns, or to control a modem or serial device.
`__init__`(*fd*, *args=None*, *timeout=30*, *maxread=2000*, *searchwindowsize=None*, *logfile=None*, *encoding=None*, *codec_errors='strict'*, *use_poll=False*)[[source]](_modules/pexpect/fdpexpect.html#fdspawn.__init__)[¶](#pexpect.fdpexpect.fdspawn.__init__)
This takes a file descriptor (an int) or an object that supports the fileno() method (returning an int). All Python file-like objects support fileno().
`isalive`()[[source]](_modules/pexpect/fdpexpect.html#fdspawn.isalive)[¶](#pexpect.fdpexpect.fdspawn.isalive)
This checks if the file descriptor is still valid. If [`os.fstat()`](https://docs.python.org/3/library/os.html#os.fstat)
does not raise an exception then we assume it is alive.
`close`()[[source]](_modules/pexpect/fdpexpect.html#fdspawn.close)[¶](#pexpect.fdpexpect.fdspawn.close)
Close the file descriptor.
Calling this method a second time does nothing, but if the file descriptor was closed elsewhere, [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) will be raised.
`expect`()[¶](#pexpect.fdpexpect.fdspawn.expect)
`expect_exact`()[¶](#pexpect.fdpexpect.fdspawn.expect_exact)
`expect_list`()[¶](#pexpect.fdpexpect.fdspawn.expect_list)
As [`pexpect.spawn`](index.html#pexpect.spawn).
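A minimal sketch of using fdspawn with an already-open file descriptor (the log path is illustrative; any object with a fileno() method works):
```
import pexpect
from pexpect import fdpexpect

with open('/var/log/syslog', 'rb') as f:
    child = fdpexpect.fdspawn(f)
    index = child.expect([b'error', pexpect.EOF])
    print('found "error"' if index == 0 else 'reached end of file')
```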
### popen_spawn - use pexpect with a piped subprocess[¶](#module-pexpect.popen_spawn)
Provides an interface like pexpect.spawn using subprocess.Popen.
#### PopenSpawn class[¶](#popenspawn-class)
*class* `pexpect.popen_spawn.``PopenSpawn`(*cmd*, *timeout=30*, *maxread=2000*, *searchwindowsize=None*, *logfile=None*, *cwd=None*, *env=None*, *encoding=None*, *codec_errors='strict'*, *preexec_fn=None*)[[source]](_modules/pexpect/popen_spawn.html#PopenSpawn)[¶](#pexpect.popen_spawn.PopenSpawn)
`__init__`(*cmd*, *timeout=30*, *maxread=2000*, *searchwindowsize=None*, *logfile=None*, *cwd=None*, *env=None*, *encoding=None*, *codec_errors='strict'*, *preexec_fn=None*)[[source]](_modules/pexpect/popen_spawn.html#PopenSpawn.__init__)[¶](#pexpect.popen_spawn.PopenSpawn.__init__)
Initialize self. See help(type(self)) for accurate signature.
`send`(*s*)[[source]](_modules/pexpect/popen_spawn.html#PopenSpawn.send)[¶](#pexpect.popen_spawn.PopenSpawn.send)
Send data to the subprocess’ stdin.
Returns the number of bytes written.
`sendline`(*s=''*)[[source]](_modules/pexpect/popen_spawn.html#PopenSpawn.sendline)[¶](#pexpect.popen_spawn.PopenSpawn.sendline)
Wraps send(), sending string `s` to child process, with os.linesep automatically appended. Returns number of bytes written.
`write`(*s*)[[source]](_modules/pexpect/popen_spawn.html#PopenSpawn.write)[¶](#pexpect.popen_spawn.PopenSpawn.write)
This is similar to send() except that there is no return value.
`writelines`(*sequence*)[[source]](_modules/pexpect/popen_spawn.html#PopenSpawn.writelines)[¶](#pexpect.popen_spawn.PopenSpawn.writelines)
This calls write() for each element in the sequence.
The sequence can be any iterable object producing strings, typically a list of strings. This does not add line separators. There is no return value.
`kill`(*sig*)[[source]](_modules/pexpect/popen_spawn.html#PopenSpawn.kill)[¶](#pexpect.popen_spawn.PopenSpawn.kill)
Sends a Unix signal to the subprocess.
Use constants from the [`signal`](https://docs.python.org/3/library/signal.html#module-signal) module to specify which signal.
`sendeof`()[[source]](_modules/pexpect/popen_spawn.html#PopenSpawn.sendeof)[¶](#pexpect.popen_spawn.PopenSpawn.sendeof)
Closes the stdin pipe from the writing end.
`wait`()[[source]](_modules/pexpect/popen_spawn.html#PopenSpawn.wait)[¶](#pexpect.popen_spawn.PopenSpawn.wait)
Wait for the subprocess to finish.
Returns the exit code.
`expect`()[¶](#pexpect.popen_spawn.PopenSpawn.expect)
`expect_exact`()[¶](#pexpect.popen_spawn.PopenSpawn.expect_exact)
`expect_list`()[¶](#pexpect.popen_spawn.PopenSpawn.expect_list)
As [`pexpect.spawn`](index.html#pexpect.spawn).
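A minimal sketch of PopenSpawn (the child command is illustrative): it is pipe-based, so no pty and no tty echo are involved.
```
import pexpect
from pexpect.popen_spawn import PopenSpawn

child = PopenSpawn('cat', encoding='utf-8', timeout=5)
child.sendline('hello')
child.sendeof()             # close stdin so cat exits
child.expect(pexpect.EOF)
print(child.before)         # 'hello' followed by a line ending
```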
### replwrap - Control read-eval-print-loops[¶](#module-pexpect.replwrap)
Generic wrapper for read-eval-print-loops, a.k.a. interactive shells
New in version 3.3.
*class* `pexpect.replwrap.``REPLWrapper`(*cmd_or_spawn*, *orig_prompt*, *prompt_change*, *new_prompt='[PEXPECT_PROMPT>'*, *continuation_prompt='[PEXPECT_PROMPT+'*, *extra_init_cmd=None*)[[source]](_modules/pexpect/replwrap.html#REPLWrapper)[¶](#pexpect.replwrap.REPLWrapper)
Wrapper for a REPL.
Parameters:
* **cmd_or_spawn** – This can either be an instance of [`pexpect.spawn`](index.html#pexpect.spawn) in which a REPL has already been started, or a str command to start a new REPL process.
* **orig_prompt** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The prompt to expect at first.
* **prompt_change** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – A command to change the prompt to something more unique. If this is `None`, the prompt will not be changed. This will be formatted with the new and continuation prompts as positional parameters, so you can use `{}` style formatting to insert them into the command.
* **new_prompt** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The more unique prompt to expect after the change.
* **extra_init_cmd** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Commands to do extra initialisation, such as disabling pagers.
`run_command`(*command*, *timeout=-1*, *async_=False*)[[source]](_modules/pexpect/replwrap.html#REPLWrapper.run_command)[¶](#pexpect.replwrap.REPLWrapper.run_command)
Send a command to the REPL, wait for and return output.
Parameters:
* **command** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The command to send. Trailing newlines are not needed. This should be a complete block of input that will trigger execution; if a continuation prompt is found after sending input, [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) will be raised.
* **timeout** ([*int*](https://docs.python.org/3/library/functions.html#int)) – How long to wait for the next prompt. -1 means the default from the [`pexpect.spawn`](index.html#pexpect.spawn) object (default 30 seconds). None means to wait indefinitely.
* **async_** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – On Python 3.4, or Python 3.3 with asyncio installed, passing `async_=True` will make this return an [`asyncio`](https://docs.python.org/3/library/asyncio.html#module-asyncio) Future, which you can yield from to get the same result that this method would normally give directly.
`pexpect.replwrap.``PEXPECT_PROMPT`[¶](#pexpect.replwrap.PEXPECT_PROMPT)
A string that can be used as a prompt, and is unlikely to be found in output.
Using the objects above, it is easy to wrap a REPL. For instance, to use a Python shell:
```
py = REPLWrapper("python", ">>> ", "import sys; sys.ps1={!r}; sys.ps2={!r}")
py.run_command("4+7")
```
Convenience functions are provided for Python and bash shells:
`pexpect.replwrap.``python`(*command='python'*)[[source]](_modules/pexpect/replwrap.html#python)[¶](#pexpect.replwrap.python)
Start a Python shell and return a [`REPLWrapper`](#pexpect.replwrap.REPLWrapper) object.
`pexpect.replwrap.``bash`(*command='bash'*)[[source]](_modules/pexpect/replwrap.html#bash)[¶](#pexpect.replwrap.bash)
Start a bash shell and return a [`REPLWrapper`](#pexpect.replwrap.REPLWrapper) object.
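A minimal sketch using the bash convenience wrapper:
```
from pexpect import replwrap

bash = replwrap.bash()
print(bash.run_command('echo $((6 * 7))'))   # '42' followed by a newline
```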
### pxssh - control an SSH session[¶](#pxssh-control-an-ssh-session)
Note
*pxssh* is a screen-scraping wrapper around the SSH command on your system.
In many cases, you should consider using
[Paramiko](https://github.com/paramiko/paramiko) or
[RedExpect](https://github.com/Red-M/RedExpect) instead.
Paramiko is a Python module which speaks the SSH protocol directly, so it doesn’t have the extra complexity of running a local subprocess.
RedExpect is very similar to pxssh, except that it reads and writes directly into an SSH session, all driven from Python with the SSH protocol handled in C. Additionally, it is written for communicating with SSH servers that are not just Linux machines, which makes it extremely fast in comparison to Paramiko, and it already has the familiar expect API. In most cases RedExpect and pxssh code should be fairly interchangeable.
This class extends pexpect.spawn to specialize setting up SSH connections.
This adds methods for login, logout, and expecting the shell prompt.
*class* `pexpect.pxssh.``ExceptionPxssh`(*value*)[[source]](_modules/pexpect/pxssh.html#ExceptionPxssh)[¶](#pexpect.pxssh.ExceptionPxssh)
Raised for pxssh exceptions.
#### pxssh class[¶](#pxssh-class)
*class* `pexpect.pxssh.``pxssh`(*timeout=30*, *maxread=2000*, *searchwindowsize=None*, *logfile=None*, *cwd=None*, *env=None*, *ignore_sighup=True*, *echo=True*, *options={}*, *encoding=None*, *codec_errors='strict'*, *debug_command_string=False*, *use_poll=False*)[[source]](_modules/pexpect/pxssh.html#pxssh)[¶](#pexpect.pxssh.pxssh)
This class extends pexpect.spawn to specialize setting up SSH connections. This adds methods for login, logout, and expecting the shell prompt. It does various tricky things to handle many situations in the SSH login process. For example, if the session is your first login, then pxssh automatically accepts the remote certificate; or if you have public key authentication setup then pxssh won’t wait for the password prompt.
pxssh uses the shell prompt to synchronize output from the remote host. In order to make this more robust it sets the shell prompt to something more unique than just $ or #. This should work on most Bourne/Bash or Csh style shells.
Example that runs a few commands on a remote server and prints the result:
```
from pexpect import pxssh
import getpass
try:
    s = pxssh.pxssh()
    hostname = input('hostname: ')
    username = input('username: ')
    password = getpass.getpass('password: ')
    s.login(hostname, username, password)
    s.sendline('uptime')   # run a command
    s.prompt()             # match the prompt
    print(s.before)        # print everything before the prompt.
    s.sendline('ls -l')
    s.prompt()
    print(s.before)
    s.sendline('df')
    s.prompt()
    print(s.before)
    s.logout()
except pxssh.ExceptionPxssh as e:
    print("pxssh failed on login.")
    print(e)
```
Example showing how to specify SSH options:
```
from pexpect import pxssh
s = pxssh.pxssh(options={
    "StrictHostKeyChecking": "no",
    "UserKnownHostsFile": "/dev/null"})
...
```
Note that if you have ssh-agent running while doing development with pxssh then this can lead to a lot of confusion. Many X display managers (xdm,
gdm, kdm, etc.) will automatically start a GUI agent. You may see a GUI dialog box popup asking for a password during development. You should turn off any key agents during testing. The ‘force_password’ attribute will turn off public key authentication. This will only work if the remote SSH server is configured to allow password logins. Example of using ‘force_password’
attribute:
```
s = pxssh.pxssh()
s.force_password = True
hostname = input('hostname: ')
username = input('username: ')
password = getpass.getpass('password: ')
s.login(hostname, username, password)
```
debug_command_string is only for the test suite to confirm that the string generated for SSH is correct, using this will not allow you to do anything other than get a string back from pxssh.pxssh.login().
`__init__`(*timeout=30*, *maxread=2000*, *searchwindowsize=None*, *logfile=None*, *cwd=None*, *env=None*, *ignore_sighup=True*, *echo=True*, *options={}*, *encoding=None*, *codec_errors='strict'*, *debug_command_string=False*, *use_poll=False*)[[source]](_modules/pexpect/pxssh.html#pxssh.__init__)[¶](#pexpect.pxssh.pxssh.__init__)
This is the constructor. The command parameter may be a string that includes a command and any arguments to the command. For example:
```
child = pexpect.spawn('/usr/bin/ftp')
child = pexpect.spawn('/usr/bin/ssh <EMAIL>')
child = pexpect.spawn('ls -latr /tmp')
```
You may also construct it with a list of arguments like so:
```
child = pexpect.spawn('/usr/bin/ftp', [])
child = pexpect.spawn('/usr/bin/ssh', ['<EMAIL>'])
child = pexpect.spawn('ls', ['-latr', '/tmp'])
```
After this the child application will be created and will be ready to talk to. For normal use, see expect() and send() and sendline().
Remember that Pexpect does NOT interpret shell meta characters such as redirect, pipe, or wild cards (`>`, `|`, or `*`). This is a common mistake. If you want to run a command and pipe it through another command then you must also start a shell. For example:
```
child = pexpect.spawn('/bin/bash -c "ls -l | grep LOG > logs.txt"')
child.expect(pexpect.EOF)
```
The second form of spawn (where you pass a list of arguments) is useful in situations where you wish to spawn a command and pass it its own argument list. This can make syntax more clear. For example, the following is equivalent to the previous example:
```
shell_cmd = 'ls -l | grep LOG > logs.txt'
child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
child.expect(pexpect.EOF)
```
The maxread attribute sets the read buffer size. This is the maximum number of bytes that Pexpect will try to read from a TTY at one time. Setting the maxread size to 1 will turn off buffering. Setting the maxread value higher may help performance in cases where large amounts of output are read back from the child. This feature is useful in conjunction with searchwindowsize.
When the keyword argument *searchwindowsize* is None (default), the full buffer is searched at each iteration of receiving incoming data.
The default number of bytes scanned at each iteration is very large and may be reduced to collaterally reduce search cost. After
[`expect()`](index.html#pexpect.fdpexpect.fdspawn.expect) returns, the full buffer attribute remains up to size *maxread* irrespective of *searchwindowsize* value.
When the keyword argument `timeout` is specified as a number (default: *30*), then `TIMEOUT` will be raised after that many seconds have elapsed for any of the [`expect()`](index.html#pexpect.fdpexpect.fdspawn.expect) family of method calls. When None, TIMEOUT will not be raised, and [`expect()`](index.html#pexpect.fdpexpect.fdspawn.expect) may block indefinitely until a match.
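A minimal sketch of tuning these constructor arguments together (the command is illustrative):
```
import pexpect

# Tune buffering, search window and timeout at construction time.
child = pexpect.spawn('some_command', maxread=8192,
                      searchwindowsize=200, timeout=60)
```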
The logfile member turns on or off logging. All input and output will be copied to the given file object. Set logfile to None to stop logging. This is the default. Set logfile to sys.stdout to echo everything to standard output. The logfile is flushed after each write.
Example log input and output to a file:
```
child = pexpect.spawn('some_command')
fout = open('mylog.txt','wb')
child.logfile = fout
```
Example log to stdout:
```
# In Python 2:
child = pexpect.spawn('some_command')
child.logfile = sys.stdout
# In Python 3, we'll use the ``encoding`` argument to decode data
# from the subprocess and handle it as unicode:
child = pexpect.spawn('some_command', encoding='utf-8')
child.logfile = sys.stdout
```
The logfile_read and logfile_send members can be used to separately log the input from the child and output sent to the child. Sometimes you don’t want to see everything you write to the child. You only want to log what the child sends back. For example:
```
child = pexpect.spawn('some_command')
child.logfile_read = sys.stdout
```
You will need to pass an encoding to spawn in the above code if you are using Python 3.
To separately log output sent to the child use logfile_send:
```
child.logfile_send = fout
```
If `ignore_sighup` is True, the child process will ignore SIGHUP signals. The default is False from Pexpect 4.0, meaning that SIGHUP will be handled normally by the child.
The delaybeforesend helps overcome a weird behavior that many users were experiencing. The typical problem was that a user would expect() a
“Password:” prompt and then immediately call sendline() to send the password. The user would then see that their password was echoed back to them. Passwords don’t normally echo. The problem is caused by the fact that most applications print out the “Password” prompt and then turn off stdin echo, but if you send your password before the application turned off echo, then you get your password echoed.
Normally this wouldn’t be a problem when interacting with a human at a real keyboard. If you introduce a slight delay just before writing then this seems to clear up the problem. This was such a common problem for many users that I decided that the default pexpect behavior should be to sleep just before writing to the child application. 1/20th of a second (50 ms) seems to be enough to clear up the problem. You can set delaybeforesend to None to return to the old behavior.
Note that spawn is clever about finding commands on your path.
It uses the same logic that “which” uses to find executables.
If you wish to get the exit status of the child you must call the close() method. The exit or signal status of the child will be stored in self.exitstatus or self.signalstatus. If the child exited normally then exitstatus will store the exit return code and signalstatus will be None. If the child was terminated abnormally with a signal then signalstatus will store the signal value and exitstatus will be None:
```
child = pexpect.spawn('some_command')
child.close()
print(child.exitstatus, child.signalstatus)
```
If you need more detail you can also read the self.status member which stores the status returned by os.waitpid. You can interpret this using os.WIFEXITED/os.WEXITSTATUS or os.WIFSIGNALED/os.WTERMSIG.
The echo attribute may be set to False to disable echoing of input.
As a pseudo-terminal, all input echoed by the “keyboard” (send()
or sendline()) will be repeated to output. For many cases, it is not desirable to have echo enabled, and it may be later disabled using setecho(False) followed by waitnoecho(). However, for some platforms such as Solaris, this is not possible, and should be disabled immediately on spawn.
If preexec_fn is given, it will be called in the child process before launching the given command. This is useful to e.g. reset inherited signal handlers.
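A minimal sketch of preexec_fn (the command is illustrative): restore default SIGINT handling in the child before it execs the new command.
```
import signal
import pexpect

def restore_sigint():
    signal.signal(signal.SIGINT, signal.SIG_DFL)

child = pexpect.spawn('some_command', preexec_fn=restore_sigint)
```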
The dimensions attribute specifies the size of the pseudo-terminal as seen by the subprocess, and is specified as a two-entry tuple (rows,
columns). If this is unspecified, the defaults in ptyprocess will apply.
The use_poll attribute enables using select.poll() over select.select() for socket handling. This is handy if your system could have more than 1024 file descriptors.
`PROMPT`[¶](#pexpect.pxssh.pxssh.PROMPT)
The regex pattern to search for to find the prompt. If you call [`login()`](#pexpect.pxssh.pxssh.login)
with `auto_prompt_reset=False`, you must set this attribute manually.
`force_password`[¶](#pexpect.pxssh.pxssh.force_password)
If this is set to `True`, public key authentication is disabled, forcing the server to ask for a password. Note that the sysadmin can disable password logins, in which case this won’t work.
`options`[¶](#pexpect.pxssh.pxssh.options)
The dictionary of user-specified SSH options, e.g. `options = dict(StrictHostKeyChecking="no", UserKnownHostsFile="/dev/null")`
`login`(*server*, *username=None*, *password=''*, *terminal_type='ansi'*, *original_prompt='[#$]'*, *login_timeout=10*, *port=None*, *auto_prompt_reset=True*, *ssh_key=None*, *quiet=True*, *sync_multiplier=1*, *check_local_ip=True*, *password_regex='(?i)(?:password:)|(?:passphrase for key)'*, *ssh_tunnels={}*, *spawn_local_ssh=True*, *sync_original_prompt=True*, *ssh_config=None*, *cmd='ssh'*)[[source]](_modules/pexpect/pxssh.html#pxssh.login)[¶](#pexpect.pxssh.pxssh.login)
This logs the user into the given server.
It uses ‘original_prompt’ to try to find the prompt right after login.
When it finds the prompt it immediately tries to reset the prompt to something more easily matched. The default ‘original_prompt’ is very optimistic and is easily fooled. It’s more reliable to try to match the original prompt as exactly as possible to prevent false matches by server strings such as the “Message Of The Day”. On many systems you can disable the MOTD on the remote server by creating a zero-length file called `~/.hushlogin` on the remote server. If a prompt cannot be found then this will not necessarily cause the login to fail. In the case of a timeout when looking for the prompt we assume that the original prompt was so weird that we could not match it, so we use a few tricks to guess when we have reached the prompt. Then we hope for the best and blindly try to reset the prompt to something more unique. If that fails then login() raises an [`ExceptionPxssh`](#pexpect.pxssh.ExceptionPxssh) exception.
In some situations it is not possible or desirable to reset the original prompt. In this case, pass `auto_prompt_reset=False` to inhibit setting the prompt to the UNIQUE_PROMPT. Remember that pxssh uses a unique prompt in the [`prompt()`](#pexpect.pxssh.pxssh.prompt) method. If the original prompt is not reset then this will disable the [`prompt()`](#pexpect.pxssh.pxssh.prompt) method unless you manually set the [`PROMPT`](#pexpect.pxssh.pxssh.PROMPT) attribute.
Set `password_regex` if there is a MOTD message with password in it.
Changing this is like playing in traffic, don’t (p)expect it to match straight away.
If you need to connect to another SSH server from your original SSH connection, set `spawn_local_ssh` to False and this will use your current session to do so. Setting this option to False without an active session will trigger an error.
Set `ssh_key` to a file path to an SSH private key to use that SSH key for the session authentication.
Set `ssh_key` to True to force passing the current SSH authentication socket to the desired `hostname`.
Set `ssh_config` to a file path string of an SSH client config file to pass that file to the client to handle itself. You may set any options you wish in here, however doing so will require you to post extra information that you may not want to if you run into issues.
Alter the `cmd` to change the ssh client used, or to prepend it with network namespaces. For example `cmd="ip netns exec vlan2 ssh"` to execute the ssh in the network namespace named `vlan2`.
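A minimal sketch of logging in with a private key and a customised ssh command (host, user and key path are illustrative):
```
from pexpect import pxssh

s = pxssh.pxssh()
s.login('example.com', 'user',
        ssh_key='/home/user/.ssh/id_ed25519',
        cmd='ssh -4')
s.logout()
```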
`logout`()[[source]](_modules/pexpect/pxssh.html#pxssh.logout)[¶](#pexpect.pxssh.pxssh.logout)
Sends exit to the remote shell.
If there are stopped jobs then this automatically sends exit twice.
`prompt`(*timeout=-1*)[[source]](_modules/pexpect/pxssh.html#pxssh.prompt)[¶](#pexpect.pxssh.pxssh.prompt)
Match the next shell prompt.
This is little more than a short-cut to the [`expect()`](index.html#pexpect.spawn.expect)
method. Note that if you called [`login()`](#pexpect.pxssh.pxssh.login) with
`auto_prompt_reset=False`, then before calling [`prompt()`](#pexpect.pxssh.pxssh.prompt) you must set the [`PROMPT`](#pexpect.pxssh.pxssh.PROMPT) attribute to a regex that it will use for matching the prompt.
Calling [`prompt()`](#pexpect.pxssh.pxssh.prompt) will erase the contents of the `before`
attribute even if no prompt is ever matched. If timeout is not given or it is set to -1 then self.timeout is used.
Returns: True if the shell prompt was matched, False if the timeout was reached.
`sync_original_prompt`(*sync_multiplier=1.0*)[[source]](_modules/pexpect/pxssh.html#pxssh.sync_original_prompt)[¶](#pexpect.pxssh.pxssh.sync_original_prompt)
This attempts to find the prompt. Basically, press enter and record the response; press enter again and record the response; if the two responses are similar then assume we are at the original prompt.
This can be a slow function. Worst case with the default sync_multiplier can take 12 seconds. Low latency connections are more likely to fail with a low sync_multiplier. Best case sync time gets worse with a high sync multiplier (500 ms with default).
`set_unique_prompt`()[[source]](_modules/pexpect/pxssh.html#pxssh.set_unique_prompt)[¶](#pexpect.pxssh.pxssh.set_unique_prompt)
This sets the remote prompt to something more unique than `#` or `$`.
This makes it easier for the [`prompt()`](#pexpect.pxssh.pxssh.prompt) method to match the shell prompt unambiguously. This method is called automatically by the [`login()`](#pexpect.pxssh.pxssh.login)
method, but you may want to call it manually if you somehow reset the shell prompt. For example, if you ‘su’ to a different user then you will need to manually reset the prompt. This sends shell commands to the remote host to set the prompt, so this assumes the remote host is ready to receive commands.
Alternatively, you may use your own prompt pattern. In this case you should call [`login()`](#pexpect.pxssh.pxssh.login) with `auto_prompt_reset=False`; then set the
[`PROMPT`](#pexpect.pxssh.pxssh.PROMPT) attribute to a regular expression. After that, the
[`prompt()`](#pexpect.pxssh.pxssh.prompt) method will try to match your prompt pattern.
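A minimal sketch of keeping the original prompt and matching it manually (host, credentials and prompt regex are illustrative):
```
from pexpect import pxssh

s = pxssh.pxssh()
s.login('example.com', 'user', 'password', auto_prompt_reset=False)
s.PROMPT = r'\$ $'          # regex matching the existing shell prompt
s.sendline('uptime')
s.prompt()
print(s.before)
s.logout()
```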
The modules `pexpect.screen` and `pexpect.ANSI` have been deprecated in Pexpect version 4. They were separate from the main use cases for Pexpect, and there are better maintained Python terminal emulator packages, such as
[pyte](https://pypi.python.org/pypi/pyte).
These modules are still present for now, but we don’t advise using them in new code.
Examples[¶](#examples)
---
Under the distribution tarball directory you should find an “examples” directory.
This is the best way to learn to use Pexpect. See the descriptions of Pexpect Examples.
[topip.py](https://github.com/pexpect/pexpect/blob/master/examples/topip.py)
This runs netstat on a local or remote server. It calculates some simple statistical information on the number of external inet connections. This can be used to detect if one IP address is taking up an excessive number of connections. It can also send an email alert if a given IP address exceeds a threshold between runs of the script. This script can be used as a drop-in Munin plugin or it can be used stand-alone from cron. I used this on a busy web server that would sometimes get hit with denial of service attacks. This made it easy to see if a script was opening many connections at once. A typical browser would open fewer than 10 connections at once. A script might open over 100 simultaneous connections.
[hive.py](https://github.com/pexpect/pexpect/blob/master/examples/hive.py)
This script creates SSH connections to a list of hosts that you provide.
Then you are given a command line prompt. Each shell command that you enter is sent to all the hosts. The response from each host is collected and printed. For example, you could connect to a dozen different machines and reboot them all at once.
[script.py](https://github.com/pexpect/pexpect/blob/master/examples/script.py)
This implements a command similar to the classic BSD “script” command.
This will start a subshell and log all input and output to a file.
This demonstrates the [`interact()`](index.html#pexpect.spawn.interact) method of Pexpect.
[ftp.py](https://github.com/pexpect/pexpect/blob/master/examples/ftp.py)
This demonstrates an FTP “bookmark”. This connects to an ftp site;
does a few ftp tasks; and then gives the user interactive control over the session. In this case the “bookmark” is to a directory on the OpenBSD ftp server. It puts you in the i386 packages directory. You can easily modify this for other sites. This demonstrates the
[`interact()`](index.html#pexpect.spawn.interact) method of Pexpect.
[monitor.py](https://github.com/pexpect/pexpect/blob/master/examples/monitor.py)
This runs a sequence of commands on a remote host using SSH. It runs simple system checks such as uptime and free to monitor the state of the remote host.
[passmass.py](https://github.com/pexpect/pexpect/blob/master/examples/passmass.py)
This will login to each given server and change the password of the given user. This demonstrates scripting logins and passwords.
[python.py](https://github.com/pexpect/pexpect/blob/master/examples/python.py)
This starts the python interpreter and prints the greeting message backwards. It then gives the user interactive control of Python. It’s pretty useless!
[ssh_tunnel.py](https://github.com/pexpect/pexpect/blob/master/examples/ssh_tunnel.py)
This starts an SSH tunnel to a remote machine. It monitors the connection and restarts the tunnel if it goes down.
[uptime.py](https://github.com/pexpect/pexpect/blob/master/examples/uptime.py)
This will run the uptime command and parse the output into variables.
This demonstrates using a single regular expression to match the output of a command and capturing different variables in match groups.
The grouping regular expression handles a wide variety of different uptime formats.
FAQ[¶](#faq)
---
**Q: Where can I get help with pexpect? Is there a mailing list?**
A: You can use the [pexpect tag on Stackoverflow](http://stackoverflow.com/questions/tagged/pexpect)
to ask questions specifically related to Pexpect. For more general Python support, there’s the [python-list](https://mail.python.org/mailman/listinfo/python-list) mailing list, and the [#python](https://www.python.org/community/irc/)
IRC channel. Please refrain from using github for general python or systems scripting support.
**Q: Why don’t shell pipe and redirect (| and >) work when I spawn a command?**
A: Remember that Pexpect does NOT interpret shell meta characters such as redirect, pipe, or wild cards (`>`, `|`, or `*`). That’s done by a shell not the command you are spawning. This is a common mistake. If you want to run a command and pipe it through another command then you must also start a shell.
For example:
```
child = pexpect.spawn('/bin/bash -c "ls -l | grep LOG > log_list.txt"')
child.expect(pexpect.EOF)
```
The second form of spawn (where you pass a list of arguments) is useful in situations where you wish to spawn a command and pass it its own argument list.
This can make syntax more clear. For example, the following is equivalent to the previous example:
```
shell_cmd = 'ls -l | grep LOG > log_list.txt'
child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
child.expect(pexpect.EOF)
```
**Q: The `before` and `after` properties sound weird.**
A: This is how the -B and -A options in grep work, so that made it easier for me to remember. Whatever makes my life easier is what’s best.
Originally I was going to model Pexpect after Expect, but then I found that I didn’t actually like the way Expect did some things. It was more confusing. The after property can be a little confusing at first,
because it will actually include the matched string. The after means after the point of match, not after the matched string.
**Q: Why not just use Expect?**
A: I love it. It’s great. It has bailed me out of some real jams, but I wanted something that would do 90% of what I need from Expect; be 10% of the size; and allow me to write my code in Python instead of TCL.
Pexpect is not nearly as big as Expect, but Pexpect does everything I have ever used Expect for.
**Q: Why not just use a pipe (popen())?**
A: A pipe works fine for getting the output to non-interactive programs.
If you just want to get the output from ls, uname, or ping then this works. Pipes do not work very well for interactive programs and pipes will almost certainly fail for most applications that ask for passwords such as telnet, ftp, or ssh.
There are two reasons for this.
* First an application may bypass stdout and print directly to its controlling TTY. Something like SSH will do this when it asks you for a password. This is why you cannot redirect the password prompt because it does not go through stdout or stderr.
* The second reason is that most applications are built using the C Standard IO Library (anything that uses `#include <stdio.h>`). One of the features of the stdio library is that it buffers all input and output. Normally output is line buffered when a program is printing to a TTY (your terminal screen). Every time the program prints a line-feed the currently buffered data will get printed to your screen. The problem comes when you connect a pipe. The stdio library is smart and can tell that it is printing to a pipe instead of a TTY. In that case it switches from line buffered mode to block buffered mode. In this mode the currently buffered data is flushed when the buffer is full. This causes most interactive programs to deadlock. Block buffering is more efficient when writing to disks and pipes. Take the situation where a program prints a message `"Enter your user name:\n"` and then waits for you to type something. In block buffered mode, the stdio library will not put the message into the pipe even though a linefeed is printed. The result is that you never receive the message, yet the child application will sit and wait for you to type a response. Don’t confuse the stdio lib’s buffer with the pipe’s buffer. The pipe buffer is another area that can cause problems. You could flush the input side of a pipe, whereas you have no control over the stdio library buffer.
More information: the Standard IO library has three states for a
`FILE *`. These are: _IOFBF for block buffered; _IOLBF for line buffered;
and _IONBF for unbuffered. The STDIO lib will use block buffering when talking to a block file descriptor such as a pipe. This is usually not helpful for interactive programs. Short of recompiling your program to include fflush() everywhere or recompiling a custom stdio library there is not much a controlling application can do about this if talking over a pipe.
The program may have put data in its output that remains unflushed because the output buffer is not full; then the program will go and deadlock while waiting for input – because you never send it any because you are still waiting for its output (still stuck in the STDIO’s output buffer).
The “answer” is to use a pseudo-tty. A TTY device will force line buffering (as opposed to block buffering). Line buffering means that you will get each line when the child program sends a line feed. This corresponds to the way most interactive programs operate – send a line of output then wait for a line of input.
I put “answer” in quotes because it’s an ugly solution and because there is no POSIX standard for pseudo-TTY devices (even though they have a TTY standard…). What would make more sense to me would be to have some way to set a mode on a file descriptor so that it will tell the STDIO to be line-buffered. I have investigated, and I don’t think there is a way to set the buffered state of a child process. The STDIO Library does not maintain any external state in the kernel or whatnot, so I don’t think there is any way for you to alter it. I’m not quite sure how this line-buffered/block-buffered state change happens internally in the STDIO library. I think the STDIO lib looks at the file descriptor and decides to change behavior based on whether it’s a TTY or a block file
(see isatty()).
I hope that this qualifies as helpful. Don’t use a pipe to control another application.
**Q: Can I do screen scraping with this thing?**
A: That depends. If your application just does line-oriented output then this is easy. If a program emits many terminal sequences, from video attributes to screen addressing, such as programs using curses, then it may become very difficult to ascertain what text is displayed on a screen.
We suggest using the [pyte](https://github.com/selectel/pyte) library to screen-scrape. The module `pexpect.ANSI` released with previous versions of pexpect is now marked deprecated and may be removed in the future.
**Q: I get strange behavior with pexpect and gevent**
A: Pexpect uses fork(2), exec(2), select(2), waitpid(2), and implements its own selector in its expect family of calls. Pexpect has been known to misbehave when paired with gevent. A solution might be to isolate your pexpect-dependent code from any frameworks that manipulate event selection behavior by running it in another process entirely.
Common problems[¶](#common-problems)
---
### Threads[¶](#threads)
On Linux (RH 8) you cannot spawn a child from a different thread and pass the handle back to a worker thread. The child is successfully spawned but you can’t interact with it. The only way to make it work is to spawn and interact with the child all in the same thread. [<NAME>]
### Timing issue with send() and sendline()[¶](#timing-issue-with-send-and-sendline)
This problem has been addressed and should not affect most users.
It is sometimes possible to read an echo of the string sent with
[`send()`](index.html#pexpect.spawn.send) and [`sendline()`](index.html#pexpect.spawn.sendline). If you call
[`send()`](index.html#pexpect.spawn.send) and then immediately call [`readline()`](index.html#pexpect.spawn.readline),
you may get part of your output echoed back. You may read back what you just wrote even if the child application does not explicitly echo it. Timing is critical. This could be a security issue when talking to an application that asks for a password; otherwise, this does not seem like a big deal. But why do TTYs do this?
People usually report this when they are trying to control SSH or some other login. For example, if your code looks something like this:
```
child.expect ('[pP]assword:')
child.sendline (my_password)
```
1. SSH prints “password:” prompt to the user.
2. SSH turns off echo on the TTY device.
3. SSH waits for user to enter a password.
When scripting with Pexpect what can happen is that Pexpect will respond to the
“password:” prompt before SSH has had time to turn off TTY echo. In other words,
Pexpect sends the password between steps 1. and 2., so the password gets echoed back to the TTY. I would call this an SSH bug.
Pexpect now automatically adds a short delay before sending data to a child process. This more closely mimics what happens in the usual human-to-app interaction. The delay can be tuned with the `delaybeforesend` attribute of objects of the spawn class. In general, this fixes the problem for everyone and so this should not be an issue for most users. For some applications you might wish to turn it off:
```
child = pexpect.spawn ("ssh <EMAIL>")
child.delaybeforesend = None
```
### Truncated output just before child exits[¶](#truncated-output-just-before-child-exits)
So far I have seen this only on older versions of Apple’s MacOS X. If the child application quits it may not flush its output buffer. This means that your Pexpect application will receive an EOF even though it should have received a little more data before the child died. This is not generally a problem when talking to interactive child applications. One example where it is a problem is when trying to read output from a program like *ls*. You may receive most of the directory listing, but the last few lines will get lost before you receive an EOF.
The reason for this is that *ls* runs; completes its task; and then exits. The buffer is not flushed before exit so the last few lines are lost. The following example demonstrates the problem:
```
child = pexpect.spawn('ls -l')
child.expect(pexpect.EOF)
print(child.before)
```
### Controlling SSH on Solaris[¶](#controlling-ssh-on-solaris)
Pexpect does not yet work perfectly on Solaris. One common problem is that SSH sometimes will not allow TTY password authentication. For example, you may expect SSH to ask you for a password using code like this:
```
child = pexpect.spawn('ssh <EMAIL>')
child.expect('password')
child.sendline('mypassword')
```
You may see the following error come back from a spawned child SSH:
```
Permission denied (publickey,keyboard-interactive).
```
This means that SSH thinks it can’t access the TTY to ask you for your password.
The only solution I have found is to use public key authentication with SSH.
This bypasses the need for a password. I’m not happy with this solution. The problem is due to poor support for Solaris Pseudo TTYs in the Python Standard Library.
### child does not receive full input, emits BEL[¶](#child-does-not-receive-full-input-emits-bel)
You may notice when running for example cat(1) or base64(1), when sending a very long input line, that it is not fully received, and the BEL character (`'\a'`) may be found in the output.
By default the child terminal matches the parent, which is often in “canonical mode processing”. You may wish to disable this mode. The exact limit of a line varies by operating system, and details of disabling canonical mode may be found in the docstring of [`send()`](index.html#pexpect.spawn.send).
History[¶](#history)
---
### Releases[¶](#releases)
#### Version 4.8[¶](#version-4-8)
* Returned behavior of searchwindowsize to that in 4.3 and earlier (searches are only done within the search window) ([PR #579](https://github.com/pexpect/pexpect/pull/579/)).
* Fixed a bug truncating `before` attribute after a timeout ([PR #579](https://github.com/pexpect/pexpect/pull/579/)).
* Fixed a bug where a search could be less than `searchwindowsize` if it was increased between calls ([PR #579](https://github.com/pexpect/pexpect/pull/579/)).
* Minor test cleanups to improve portability ([PR #580](https://github.com/pexpect/pexpect/pull/580/)) ([PR #581](https://github.com/pexpect/pexpect/pull/581/))
([PR #582](https://github.com/pexpect/pexpect/pull/582/)) ([PR #583](https://github.com/pexpect/pexpect/pull/583/)) ([PR #584](https://github.com/pexpect/pexpect/pull/584/)) ([PR #585](https://github.com/pexpect/pexpect/pull/585/)).
* Disable chaining of timeout and EOF exceptions ([PR #606](https://github.com/pexpect/pexpect/pull/606/)).
* Allow traceback included snippet length to be configured via
`str_last_chars` rather than always 100 ([PR #598](https://github.com/pexpect/pexpect/pull/598/)).
* Python 3 warning added to interact.py ([PR #537](https://github.com/pexpect/pexpect/pull/537/)).
* Several doc updates.
#### Version 4.7[¶](#version-4-7)
* The [`pxssh.login()`](index.html#pexpect.pxssh.pxssh.login) method now no longer requires a username if an ssh config is provided and will raise an error if neither are provided.
([PR #562](https://github.com/pexpect/pexpect/pull/562/)).
* The [`pxssh.login()`](index.html#pexpect.pxssh.pxssh.login) method now supports providing your own `ssh`
command via the `cmd` parameter.
([PR #528](https://github.com/pexpect/pexpect/pull/528/)) ([PR #563](https://github.com/pexpect/pexpect/pull/563/)).
* [`pxssh`](index.html#pexpect.pxssh.pxssh) now supports the `use_poll` parameter which is passed into `pexpect.spawn()`
([PR #542](https://github.com/pexpect/pexpect/pull/542/)).
* Minor bug fix with `ssh_config`.
([PR #498](https://github.com/pexpect/pexpect/pull/498/)).
* `replwrap.run_command()` now has async support via an `async_` parameter.
([PR #501](https://github.com/pexpect/pexpect/pull/501/)).
* `pexpect.spawn()` will now read additional bytes if able up to a buffer limit.
([PR #304](https://github.com/pexpect/pexpect/pull/304/)).
#### Version 4.6[¶](#version-4-6)
* The [`pxssh.login()`](index.html#pexpect.pxssh.pxssh.login) method now supports an `ssh_config` parameter,
which can be used to specify a file path to an SSH config file
([PR #490](https://github.com/pexpect/pexpect/pull/490/)).
* Improved compatibility for the `crlf` parameter of [`PopenSpawn`](index.html#pexpect.popen_spawn.PopenSpawn)
([PR #493](https://github.com/pexpect/pexpect/pull/493/))
* Fixed an issue in read timeout handling when using [`spawn`](index.html#pexpect.spawn) and
[`fdspawn`](index.html#pexpect.fdpexpect.fdspawn) with the `use_poll` parameter ([PR #492](https://github.com/pexpect/pexpect/pull/492/)).
#### Version 4.5[¶](#version-4-5)
* [`spawn`](index.html#pexpect.spawn) and [`fdspawn`](index.html#pexpect.fdpexpect.fdspawn) now have a `use_poll` parameter.
If this is True, they will use [`select.poll()`](https://docs.python.org/3/library/select.html#select.poll) instead of [`select.select()`](https://docs.python.org/3/library/select.html#select.select).
`poll()` allows file descriptors above 1024, but it must be explicitly enabled due to compatibility concerns ([PR #474](https://github.com/pexpect/pexpect/pull/474/)).
* The [`pxssh.login()`](index.html#pexpect.pxssh.pxssh.login) method has several new and changed options:
+ The option `password_regex` allows changing
the password prompt regex, for servers that include `password:` in a banner
before reaching a prompt ([PR #468](https://github.com/pexpect/pexpect/pull/468/)).
+ [`login()`](index.html#pexpect.pxssh.pxssh.login) now allows for setting up SSH tunnels to be requested once
logged in to the remote server. This option is `ssh_tunnels` ([PR #473](https://github.com/pexpect/pexpect/pull/473/)).
The structure should be like this:
```
{
'local': ['2424:localhost:22'], # Local SSH tunnels
'remote': ['2525:localhost:22'], # Remote SSH tunnels
'dynamic': [8888], # Dynamic/SOCKS tunnels
}
```
+ The option `spawn_local_ssh=False` allows subsequent logins from the
remote session and treats the session as if it was local ([PR #472](https://github.com/pexpect/pexpect/pull/472/)).
+ Setting `sync_original_prompt=False` will prevent changing the prompt to
something unique, in case the remote server is sensitive to new lines at login
([PR #468](https://github.com/pexpect/pexpect/pull/468/)).
+ If `ssh_key=True` is passed, the SSH client forces forwarding the authentication
agent to the remote server instead of providing a key ([PR #473](https://github.com/pexpect/pexpect/pull/473/)).
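For illustration, here is a minimal sketch of the `password_regex` option described above; the host name, credentials and pattern are hypothetical placeholders:
```
from pexpect import pxssh

s = pxssh.pxssh()
# A stricter pattern avoids matching the word "password" in a login banner.
# 'server.example.com', 'user' and 'secret' are placeholders.
s.login('server.example.com', 'user', 'secret',
        password_regex=r'(?i)password for .*:')
s.sendline('uptime')
s.prompt()
print(s.before.decode())
s.logout()
```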
#### Version 4.4[¶](#version-4-4)
* [`PopenSpawn`](index.html#pexpect.popen_spawn.PopenSpawn) now has a `preexec_fn` parameter, like [`spawn`](index.html#pexpect.spawn)
and [`subprocess.Popen`](https://docs.python.org/3/library/subprocess.html#subprocess.Popen), for a function to be called in the child process before executing the new command. As with `Popen`, this works only on POSIX, and can cause issues if your application also uses threads; a short sketch follows this list
([PR #460](https://github.com/pexpect/pexpect/pull/460/)).
* Significant performance improvements when processing large amounts of data
([PR #464](https://github.com/pexpect/pexpect/pull/464/)).
* Ensure that `spawn.closed` gets set by [`close()`](index.html#pexpect.spawn.close), and improve an example for passing `SIGWINCH` through to a child process ([PR #466](https://github.com/pexpect/pexpect/pull/466/)).
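A minimal sketch of the `preexec_fn` hook mentioned in the first item; the callable runs in the forked child before the command is executed (POSIX only), and the command and setup function here are illustrative:
```
import os
from pexpect.popen_spawn import PopenSpawn

def child_setup():
    # Runs in the child process before 'cat' starts.
    os.setsid()

child = PopenSpawn('cat', preexec_fn=child_setup)
child.sendline('hello')
child.expect('hello')
```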
#### Version 4.3.1[¶](#version-4-3-1)
* When launching bash for [`pexpect.replwrap`](index.html#module-pexpect.replwrap), load the system `bashrc`
from a couple of different common locations ([PR #457](https://github.com/pexpect/pexpect/pull/457/)), and then unset the `PROMPT_COMMAND` environment variable, which can interfere with the prompt we’re expecting ([PR #459](https://github.com/pexpect/pexpect/pull/459/)).
#### Version 4.3[¶](#version-4-3)
* The `async=` parameter to integrate with asyncio has become `async_=`
([PR #431](https://github.com/pexpect/pexpect/pull/431/)), as *async* became a reserved keyword in Python 3.7.
Pexpect will still recognise `async` as an alternative spelling.
* Similarly, the module `pexpect.async` became `pexpect._async`
([PR #450](https://github.com/pexpect/pexpect/pull/450/)). This module is not part of the public API.
* Fix problems with asyncio objects closing file descriptors during garbage collection ([#347](https://github.com/pexpect/pexpect/issues/347/), [PR #376](https://github.com/pexpect/pexpect/pull/376/)).
* Set the `.pid` attribute of a [`PopenSpawn`](index.html#pexpect.popen_spawn.PopenSpawn) object ([PR #417](https://github.com/pexpect/pexpect/pull/417/)).
* Fix passing Windows paths to [`PopenSpawn`](index.html#pexpect.popen_spawn.PopenSpawn) ([PR #446](https://github.com/pexpect/pexpect/pull/446/)).
* [`PopenSpawn`](index.html#pexpect.popen_spawn.PopenSpawn) on Windows can pass string commands through to `Popen`
without splitting them into a list ([PR #447](https://github.com/pexpect/pexpect/pull/447/)).
* Stop `shlex` trying to read from stdin when [`PopenSpawn`](index.html#pexpect.popen_spawn.PopenSpawn) is passed `cmd=None` ([#433](https://github.com/pexpect/pexpect/issues/433/), [PR #434](https://github.com/pexpect/pexpect/pull/434/)).
* Ensure that an error closing a Pexpect spawn object raises a Pexpect error,
rather than a Ptyprocess error ([#383](https://github.com/pexpect/pexpect/issues/383/), [PR #386](https://github.com/pexpect/pexpect/pull/386/)).
* Cleaned up invalid backslash escape sequences in strings ([PR #430](https://github.com/pexpect/pexpect/pull/430/),
[PR #445](https://github.com/pexpect/pexpect/pull/445/)).
* The pattern for a password prompt in [`pexpect.pxssh`](index.html#module-pexpect.pxssh) changed from
`password` to `password:` ([PR #452](https://github.com/pexpect/pexpect/pull/452/)).
* Correct docstring for using unicode with spawn ([PR #395](https://github.com/pexpect/pexpect/pull/395/)).
* Various other improvements to documentation.
#### Version 4.2.1[¶](#version-4-2-1)
* Fix to allow running `env` in replwrap-ed bash.
* Raise more informative exception from pxssh if it fails to connect.
* Change `passmass` example to not log passwords entered.
#### Version 4.2[¶](#version-4-2)
* Change: When an `env` parameter is specified to the [`spawn`](index.html#pexpect.spawn) or
`run` family of calls containing a value for `PATH`, its value is used to discover the target executable from a relative path, rather than the current process’s environment `PATH`. This mirrors the behavior of
`subprocess.Popen()` in the standard library ([#348](https://github.com/pexpect/pexpect/issues/348/)); a small sketch follows this list.
* Regression: Re-introduce capability for `read_nonblocking()` in class
`fdspawn` as previously supported in version 3.3 ([#359](https://github.com/pexpect/pexpect/issues/359/)).
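A small sketch of the `env`/`PATH` behaviour described in the first item above; the tool name and directories are hypothetical:
```
import pexpect

# 'mytool' is resolved against the PATH supplied here, not against the
# parent process's PATH, mirroring subprocess.Popen.
child = pexpect.spawn('mytool', env={'PATH': '/opt/mytool/bin:/usr/bin'})
child.expect(pexpect.EOF)
```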
#### Version 4.0[¶](#version-4-0)
* Integration with [`asyncio`](https://docs.python.org/3/library/asyncio.html#module-asyncio): passing `async=True` to [`expect()`](index.html#pexpect.spawn.expect),
[`expect_exact()`](index.html#pexpect.spawn.expect_exact) or [`expect_list()`](index.html#pexpect.spawn.expect_list) will make them return a coroutine. You can get the result using `yield from`, or wrap it in an
[`asyncio.Task`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Task). This allows the event loop to do other things while waiting for output that matches a pattern. A minimal sketch follows this list.
* Experimental support for Windows (with some caveats)—see [Pexpect on Windows](index.html#windows).
* Enhancement: allow methods to be used as callbacks in the `events` argument of
[`pexpect.run()`](index.html#pexpect.run) ([#176](https://github.com/pexpect/pexpect/issues/176/)).
* It is now possible to call [`wait()`](index.html#pexpect.spawn.wait) multiple times, or after a process is already determined to be terminated without raising an exception
([PR #211](https://github.com/pexpect/pexpect/pull/211/)).
* New [`pexpect.spawn`](index.html#pexpect.spawn) keyword argument, `dimensions=(rows, columns)`
allows setting terminal screen dimensions before launching a program
([#122](https://github.com/pexpect/pexpect/issues/122/)).
* Fix a regression that prevented executable but unreadable files from being found when not specified by an absolute path, such as
/usr/bin/sudo ([#104](https://github.com/pexpect/pexpect/issues/104/)).
* Fixed regression when executing pexpect with some prior releases of the multiprocessing module where stdin has been closed ([#86](https://github.com/pexpect/pexpect/issues/86/)).
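A minimal sketch of the asyncio integration described in the first item of this list, using the `async_=` spelling adopted in version 4.3 (in 4.0 it was spelled `async=`):
```
import asyncio
import pexpect

async def main():
    child = pexpect.spawn('python3', encoding='utf-8')
    # expect() returns a coroutine here instead of blocking the event loop.
    await child.expect('>>> ', async_=True)
    child.sendline('6 * 7')
    await child.expect('>>> ', async_=True)
    print(child.before)  # output produced before the second prompt

asyncio.run(main())
```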
##### Backwards incompatible changes[¶](#backwards-incompatible-changes)
* Deprecated `pexpect.screen` and `pexpect.ANSI`. Please use other packages such as [pyte](https://pypi.python.org/pypi/pyte) to emulate a terminal.
* Removed the independent top-level modules (`pxssh fdpexpect FSM screen ANSI`)
which were installed alongside Pexpect. These were moved into the Pexpect package in 3.0, but the old names were left as aliases.
* Child processes created by Pexpect no longer ignore SIGHUP by default: the
`ignore_sighup` parameter of [`pexpect.spawn`](index.html#pexpect.spawn) defaults to False. To get the old behaviour, pass `ignore_sighup=True`.
#### Version 3.3[¶](#version-3-3)
* Added a mechanism to wrap REPLs, or shells, in an object which can conveniently be used to send commands and wait for the output ([`pexpect.replwrap`](index.html#module-pexpect.replwrap)); a minimal example appears after this list.
* Fixed issue where pexpect would attempt to execute a directory because it has the ‘execute’ bit set ([#37](https://github.com/pexpect/pexpect/issues/37/)).
* Removed the `pexpect.psh` module. This was never documented, and we found no evidence that people use it. The new [`pexpect.replwrap`](index.html#module-pexpect.replwrap) module provides a more flexible alternative.
* Fixed `TypeError: got <type 'str'> ('\r\n') as pattern` in `spawnu.readline()`
method ([#67](https://github.com/pexpect/pexpect/issues/67/)).
* Fixed issue where EOF was not correctly detected in [`interact()`](index.html#pexpect.spawn.interact), causing a repeating loop of output on Linux, and blocking before EOF on BSD and Solaris ([#49](https://github.com/pexpect/pexpect/issues/49/)).
* Several Solaris (SmartOS) bugfixes, preventing [`IOError`](https://docs.python.org/3/library/exceptions.html#IOError) exceptions, especially when used with cron(1) ([#44](https://github.com/pexpect/pexpect/issues/44/)).
* Added new keyword argument `echo=True` for `spawn`. On SVR4-like systems, the method `isatty()` will always return *False*: the child pty does not appear as a terminal. Therefore, [`setecho()`](index.html#pexpect.spawn.setecho), [`getwinsize()`](index.html#pexpect.spawn.getwinsize),
[`setwinsize()`](index.html#pexpect.spawn.setwinsize), and [`waitnoecho()`](index.html#pexpect.spawn.waitnoecho) are not supported on those platforms.
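The `pexpect.replwrap` mechanism added in this release (first item above) can be used as in this minimal sketch:
```
from pexpect import replwrap

# Wrap a bash shell; run_command() sends a command and returns everything
# the shell printed before its prompt reappeared.
bash = replwrap.bash()
print(bash.run_command('echo hello'))
print(bash.run_command('uname -s'))
```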
After this, we intend to start working on a bigger refactoring of the code, to be released as Pexpect 4. There may be more bugfix 3.x releases, however.
#### Version 3.2[¶](#version-3-2)
* Fix exception handling from [`select.select()`](https://docs.python.org/3/library/select.html#select.select) on Python 2 ([PR #38](https://github.com/pexpect/pexpect/pull/38/)).
This was accidentally broken in the previous release when it was fixed for Python 3.
* Removed a workaround for `TIOCSWINSZ` on very old systems, which was causing issues on some BSD systems ([PR #40](https://github.com/pexpect/pexpect/pull/40/)).
* Fixed an issue with exception handling in [`pxssh`](index.html#module-pexpect.pxssh) ([PR #43](https://github.com/pexpect/pexpect/pull/43/)).
The documentation for [`pxssh`](index.html#module-pexpect.pxssh) was improved.
#### Version 3.1[¶](#version-3-1)
* Fix an issue that prevented importing pexpect on Python 3 when `sys.stdout`
was reassigned ([#30](https://github.com/pexpect/pexpect/issues/30/)).
* Improve prompt synchronisation in [`pxssh`](index.html#module-pexpect.pxssh) ([PR #28](https://github.com/pexpect/pexpect/pull/28/)).
* Fix pickling exception instances ([PR #34](https://github.com/pexpect/pexpect/pull/34/)).
* Fix handling exceptions from [`select.select()`](https://docs.python.org/3/library/select.html#select.select) on Python 3 ([PR #33](https://github.com/pexpect/pexpect/pull/33/)).
The examples have also been cleaned up somewhat - this will continue in future releases.
#### Version 3.0[¶](#version-3-0)
The new major version number doesn’t indicate any deliberate API incompatibility.
We have endeavoured to avoid breaking existing APIs. However, pexpect is under new maintenance after a long dormancy, so some caution is warranted.
* A new [unicode API](index.html#unicode) was introduced.
* Python 3 is now supported, using a single codebase.
* Pexpect now requires at least Python 2.6 or 3.2.
* The modules other than pexpect, such as [`pexpect.fdpexpect`](index.html#module-pexpect.fdpexpect) and
[`pexpect.pxssh`](index.html#module-pexpect.pxssh), were moved into the pexpect package. For now, wrapper modules are installed to the old locations for backwards compatibility (e.g.
`import pxssh` will still work), but these will be removed at some point in the future.
* Ignoring `SIGHUP` is now optional - thanks to <NAME> for the patch.
We also now have [docs on ReadTheDocs](https://pexpect.readthedocs.io/),
and [continuous integration on Travis CI](https://travis-ci.org/pexpect/pexpect).
#### Version 2.4[¶](#version-2-4)
* Fix a bug regarding making the pty the controlling terminal when the process spawning it is not actually a terminal (such as from cron).
#### Version 2.3[¶](#version-2-3)
* Fixed OSError exception when a pexpect object is cleaned up. Previously, you might have seen this exception:
```
Exception exceptions.OSError: (10, 'No child processes')
in <bound method spawn.__del__ of <pexpect.spawn instance at 0xd248c>> ignored
```
You should not see that anymore. Thanks to <NAME>.
* Added support for buffering reads. This greatly improves speed when trying to match long output from a child process. When you create an instance of the spawn object you can then set a buffer size. For now you MUST do the following to turn on buffering – it may be on by default in a future version:
```
child = pexpect.spawn ('my_command')
child.maxread=1000 # Sets buffer to 1000 characters.
```
* I made a subtle change to the way TIMEOUT and EOF exceptions behave.
Previously you could either expect these states in which case pexpect will not raise an exception, or you could just let pexpect raise an exception when these states were encountered. If you expected the states then the `before` property was set to everything before the state was encountered, but if you let pexpect raise the exception then
`before` was not set. Now, the `before` property will get set either way you choose to handle these states.
* The spawn object now provides iterators for a *file-like interface*.
This makes Pexpect a more complete file-like object. You can now write code like this:
```
child = pexpect.spawn ('ls -l')
for line in child:
print line
```
* `write()` and `writelines()` no longer return a value. Use `send()` if you need that functionality. I did this to make the spawn object more closely match a file-like object.
* Added the attribute `exitstatus`. This will give the exit code returned by the child process. This will be set to `None` while the child is still alive. When `isalive()` returns 0 then `exitstatus` will be set.
* Made a few more tweaks to `isalive()` so that it will operate more consistently on different platforms. Solaris is the most difficult to support.
* You can now put `TIMEOUT` in a list of expected patterns. This is just like putting `EOF` in the pattern list. Expecting a `TIMEOUT` may not be used as often as `EOF`, but this makes Pexpect more consistent (see the sketch at the end of this section).
* Thanks to a suggestion and sample code from <NAME> I added the ability for Pexpect to operate on a file descriptor that is already open. This means that Pexpect can be used to control streams such as those from serial port devices. Now,
you just pass the integer file descriptor as the “command” when constructing a spawn object. For example, on a Linux box with a modem on ttyS1:
```
fd = os.open("/dev/ttyS1", os.O_RDWR|os.O_NONBLOCK|os.O_NOCTTY)
m = pexpect.spawn(fd) # Note integer fd is used instead of usual string.
m.send("+++")              # Escape sequence
m.send("ATZ0\r")           # Reset modem to profile 0
rval = m.expect(["OK", "ERROR"])
```
* `read()` was renamed to `read_nonblocking()`. Added a new `read()` method that matches the file-like object interface. In general, you should not notice the difference except that `read()` no longer allows you to directly set the timeout value. I hope this will not affect any existing code. Switching to
`read_nonblocking()` should fix existing code.
* Changed the name of `set_echo()` to `setecho()`.
* Changed the name of `send_eof()` to `sendeof()`.
* Modified `kill()` so that it checks to make sure the pid `isalive()`.
* modified `spawn()` (really called from `__spawn()`) so that it does not raise an exception if `setwinsize()` fails. Some platforms such as Cygwin do not like setwinsize. This was a constant problem and since it is not a critical feature I decided to just silence the error. Normally I don’t like to do that, but in this case I’m making an exception.
* Added a method `close()` that does what you think. It closes the file descriptor of the child application. It makes no attempt to actually kill the child or wait for its status.
* Add variables `__version__` and `__revision__` (from cvs) to the pexpect modules. This is mainly helpful to me so that I can make sure that I’m testing with the right version instead of one already installed.
* `log_open()` and `log_close()` have been removed. Now use `setlog()`.
The `setlog()` method takes a file object. This is far more flexible than the previous log method. Each time data is written to the file object it will be flushed. To turn logging off simply call `setlog()` with None.
* renamed the `isAlive()` method to `isalive()` to match the more typical naming style in Python. Also the technique used to detect child process status has been drastically modified. Previously I did some funky stuff with signals which caused indigestion in other Python modules on some platforms. It was a big headache. It still is, but I think it works better now.
* attribute `matched` renamed to `after`
* new attribute `match`
* The `expect_eof()` method is gone. You can now simply use the
`expect()` method to look for EOF.
* **Pexpect works on OS X**, but the nature of the quirks causes many of the tests to fail. See the known bugs (Incomplete Child Output). The problem is more than minor, but Pexpect is still more than useful for most tasks.
* **Solaris**: For some reason, the *second* time a pty file descriptor is created and deleted it never gets returned for use. It does not affect the first time or the third time or any time after that. It’s only the second time. This is weird… This could be a file descriptor leak, or it could be some peculiarity of how Solaris recycles them. I thought it was a UNIX requirement for the OS to give you the lowest available file descriptor number. In any case,
this should not be a problem unless you create hundreds of pexpect instances…
It may also be a pty module bug.
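As a sketch of expecting `TIMEOUT` in a pattern list (mentioned earlier in this section), with a hypothetical command name:
```
import pexpect

child = pexpect.spawn('slow_tool --start')  # hypothetical command
index = child.expect(['ready> ', pexpect.TIMEOUT, pexpect.EOF], timeout=5)
if index == 0:
    child.sendline('status')
elif index == 1:
    print('no prompt within 5 seconds, handled without an exception')
else:
    print('the child exited before printing a prompt')
```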
### Moves and forks[¶](#moves-and-forks)
* Pexpect development used to be hosted on Sourceforge.
* In 2011, <NAME> forked pexpect as ‘pexpect-u’, to support Python 3. He later decided he had taken the wrong approach with this.
* In 2012, <NAME>, the original author of Pexpect, moved the project to Github, but was still too busy to develop it much.
* In 2013, <NAME> and <NAME> forked Pexpect again, intending to call the new fork Pexpected. Noah Spurrier agreed to let them use the name Pexpect, so Pexpect versions 3 and above are based on this fork, which now lives [here on Github](https://github.com/pexpect/pexpect).
Pexpect is developed [on Github](http://github.com/pexpect/pexpect). Please report [issues](https://github.com/pexpect/pexpect/issues) there as well.
showdown | rust | Rust | Crate showdown
===
Pokémon Showdown client.
Stability
---
This crate is not stable, not even close. The APIs of this crate are heavily experimented on, and there isn’t going to be depreciation period for removed features. Don’t use this crate if you aren’t prepared for constant breakage.
Re-exports
---
* `pub use time;`
* `pub use url;`
Modules
---
* message
Structs
---
* Error: A specialized `Result` type for Showdown client operations.
* RoomId
* SendMessage
* Stream: Message stream.
Functions
---
* fetch_server_url: Requires `native-tls`, `native-tls-vendored` or `rustls-tls` feature.
Type Aliases
---
* Result
Struct showdown::Error
===
```
pub struct Error(/* private fields */);
```
A specialized `Result` type for Showdown client operations.
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, request: &mut Request<'a>)
This is a nightly-only experimental API (`error_generic_member_access`). Provides type-based access to context intended for error reports.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> ToString for Twhere
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct showdown::RoomId
===
```
pub struct RoomId<'a>(pub &'a str);
```
Tuple Fields
---
`0: &'a str`Implementations
---
### impl RoomId<'_>
#### pub const LOBBY: RoomId<'static>
Trait Implementations
---
### impl<'a> Clone for RoomId<'a>
#### fn clone(&self) -> RoomId<'a>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl<'a> RefUnwindSafe for RoomId<'a>
### impl<'a> Send for RoomId<'a>
### impl<'a> Sync for RoomId<'a>
### impl<'a> Unpin for RoomId<'a>
### impl<'a> UnwindSafe for RoomId<'a>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct showdown::SendMessage
===
```
pub struct SendMessage(/* private fields */);
```
Implementations
---
### impl SendMessage
#### pub fn global_command(command: impl Display) -> Self
Creates a global command.
##### Example
```
use futures::{SinkExt, StreamExt};
use showdown::message::{Kind, QueryResponse};
use showdown::{Result, RoomId, SendMessage, Stream};
#[tokio::main]
async fn main() -> Result<()> {
let mut stream = Stream::connect("showdown").await?;
stream.send(SendMessage::global_command("cmd rooms")).await?;
while let Some(received) = stream.next().await {
if let Kind::QueryResponse(QueryResponse::Rooms(rooms)) = received?.kind() {
assert!(rooms
.official
.iter()
.any(|room| room.title == "Tournaments"));
return Ok(());
}
}
panic!("Server didn't provide a list of rooms")
}
```
#### pub fn chat_message(room_id: RoomId<'_>, message: impl Display) -> Self
Creates a chat room message.
#### pub fn chat_command(room_id: RoomId<'_>, command: impl Display) -> Self
Creates a command that executes in a chat room.
##### Examples
```
use futures::{SinkExt, StreamExt};
use showdown::message::{Kind, QueryResponse};
use showdown::{Result, RoomId, SendMessage, Stream};
#[tokio::main]
async fn main() -> Result<()> {
let mut stream = Stream::connect("showdown").await?;
stream.send(SendMessage::global_command("join lobby")).await?;
stream.send(SendMessage::chat_command(RoomId::LOBBY, "roomdesc")).await?;
while let Some(message) = stream.next().await {
if let Kind::Html(html) = message?.kind() {
assert!(html.contains("Relax here amidst the chaos."));
return Ok(());
}
}
panic!("Server didn't provide a room description")
}
```
#### pub fn broadcast_command(room_id: RoomId<'_>, command: impl Display) -> Self
Trait Implementations
---
### impl Clone for SendMessage
#### fn clone(&self) -> SendMessage
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn eq(&self, other: &SendMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Sink<SendMessage> for Stream
#### type Error = Error
The type of value produced by the sink when an error occurs.
#### fn poll_ready(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<()>>
Attempts to prepare the `Sink` to receive a value.
Each call to this function must be preceded by a successful call to
`poll_ready` which returned `Poll::Ready(Ok(()))`.
### impl StructuralEq for SendMessage
### impl StructuralPartialEq for SendMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for SendMessage
### impl Send for SendMessage
### impl Sync for SendMessage
### impl Unpin for SendMessage
### impl UnwindSafe for SendMessage
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct showdown::Stream
===
```
pub struct Stream { /* private fields */ }
```
Message stream.
Examples
---
```
use futures::{SinkExt, StreamExt};
use showdown::message::{Kind, UpdateUser};
use showdown::{Result, RoomId, Stream};
#[tokio::main]
async fn main() -> Result<()> {
let mut stream = Stream::connect("showdown").await?;
let message = stream.next().await.unwrap()?;
match message.kind() {
Kind::UpdateUser(UpdateUser {
username,
named: false,
..
}) => {
assert!(username.starts_with(" Guest "));
}
_ => panic!(),
}
Ok(())
}
```
Implementations
---
### impl Stream
#### pub async fn connect(name: &str) -> Result<Self>
Connects to a named Showdown server.
Returns a structure implementing `Sink` and
`Stream` traits, which can be used to send and receive messages respectively.
It’s possible to use `StreamExt::split`
to split the returned structure into separate `Sink` and `Stream` objects.
Requires `native-tls`, `native-tls-vendored` or `rustls-tls` feature.
##### Examples
```
use showdown::{Result, Stream};
#[tokio::main]
async fn main() {
assert!(Stream::connect("showdown").await.is_ok());
assert!(Stream::connect("fakestofservers").await.is_err());
}
```
#### pub async fn connect_to_url(url: &Url) -> Result<Self>
Connects to a URL.
This URL is provided by
`fetch_server_url`
function.
##### Examples
```
use showdown::{fetch_server_url, Result, Stream};
#[tokio::main]
async fn main() -> Result<()> {
let url = fetch_server_url("smogtours").await?;
assert_eq!(url.as_str(), "ws://sim3.psim.us:8002/showdown/websocket");
Stream::connect_to_url(&url).await?;
Ok(())
}
```
Trait Implementations
---
### impl Debug for Stream
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### type Error = Error
The type of value produced by the sink when an error occurs.
#### fn poll_ready(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<()>>
Attempts to prepare the `Sink` to receive a value.
Each call to this function must be preceded by a successful call to
`poll_ready` which returned `Poll::Ready(Ok(()))`.
#### type Item = Result<Message, Error>
Values yielded by the stream.
#### fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>
Attempt to pull out the next value of this stream, registering the current task for wakeup if the value is not yet available, and returning
`None` if the stream is exhausted.
Returns the bounds on the remaining length of the stream.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Stream
### impl Send for Stream
### impl Sync for Stream
### impl Unpin for Stream
### impl !UnwindSafe for Stream
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T, Item> SinkExt<Item> for Twhere
T: Sink<Item> + ?Sized,
#### fn with<U, Fut, F, E>(self, f: F) -> With<Self, Item, U, Fut, F>where
F: FnMut(U) -> Fut,
Fut: Future<Output = Result<Item, E>>,
E: From<Self::Error>,
Self: Sized,
Composes a function *in front of* the sink.
F: FnMut(U) -> St,
St: Stream<Item = Result<Item, Self::Error>>,
Self: Sized,
Composes a function *in front of* the sink.
F: FnOnce(Self::Error) -> E,
Self: Sized,
Transforms the error returned by the sink.#### fn sink_err_into<E>(self) -> SinkErrInto<Self, Item, E>where
Self: Sized,
Self::Error: Into<E>,
Map this sink’s error to a different error type using the `Into` trait.
Self: Sized,
Adds a fixed-size buffer to the current sink.
Self: Unpin,
Close the sink.#### fn fanout<Si>(self, other: Si) -> Fanout<Self, Si>where
Self: Sized,
Item: Clone,
Si: Sink<Item, Error = Self::Error>,
Fanout items to multiple sinks.
Self: Unpin,
Flush the sink, processing all pending items.
Self: Unpin,
A future that completes after the given item has been fully processed into the sink, including flushing.
Self: Unpin,
A future that completes after the given item has been received by the sink.
St: TryStream<Ok = Item, Error = Self::Error> + Stream + Unpin + ?Sized,
Self: Unpin,
A future that completes after the given stream has been fully processed into the sink, including flushing.
Si2: Sink<Item, Error = Self::Error>,
Self: Sized,
Wrap this sink in an `Either` sink, making it the left-hand variant of that `Either`.
Si1: Sink<Item, Error = Self::Error>,
Self: Sized,
Wrap this stream in an `Either` stream, making it the right-hand variant of that `Either`.
&mut self,
cx: &mut Context<'_>
) -> Poll<Result<(), Self::Error>>where
Self: Unpin,
A convenience method for calling [`Sink::poll_ready`] on `Unpin`
sink types.#### fn start_send_unpin(&mut self, item: Item) -> Result<(), Self::Error>where
Self: Unpin,
A convenience method for calling [`Sink::start_send`] on `Unpin`
sink types.#### fn poll_flush_unpin(
&mut self,
cx: &mut Context<'_>
) -> Poll<Result<(), Self::Error>>where
Self: Unpin,
A convenience method for calling [`Sink::poll_flush`] on `Unpin`
sink types.#### fn poll_close_unpin(
&mut self,
cx: &mut Context<'_>
) -> Poll<Result<(), Self::Error>>where
Self: Unpin,
A convenience method for calling [`Sink::poll_close`] on `Unpin`
sink types.### impl<T> StreamExt for Twhere
T: Stream + ?Sized,
#### fn next(&mut self) -> Next<'_, Self>where
Self: Unpin,
Creates a future that resolves to the next item in the stream.
Self: Sized + Unpin,
Converts this stream into a future of `(next_item, tail_of_stream)`.
If the stream terminates, then the next item is `None`.
F: FnMut(Self::Item) -> T,
Self: Sized,
Maps this stream’s items to a different type, returning a new stream of the resulting type.
Self: Sized,
Creates a stream which gives the current iteration count as well as the next value.
F: FnMut(&Self::Item) -> Fut,
Fut: Future<Output = bool>,
Self: Sized,
Filters the values produced by this stream according to the provided asynchronous predicate.
F: FnMut(Self::Item) -> Fut,
Fut: Future<Output = Option<T>>,
Self: Sized,
Filters the values produced by this stream while simultaneously mapping them to a different type according to the provided asynchronous closure.
F: FnMut(Self::Item) -> Fut,
Fut: Future,
Self: Sized,
Computes from this stream’s items new items of a different type using an asynchronous closure.
C: Default + Extend<Self::Item>,
Self: Sized,
Transforms a stream into a collection, returning a future representing the result of that computation.
FromA: Default + Extend<A>,
FromB: Default + Extend<B>,
Self: Sized + Stream<Item = (A, B)>,
Converts a stream of pairs into a future, which resolves to pair of containers.
Self: Sized,
Self::Item: Extend<<Self::Item as IntoIterator>::Item> + IntoIterator + Default,
Concatenate all items of a stream into a single extendable destination, returning a future representing the end result.
Self: Sized,
Drives the stream to completion, counting the number of items.
Self: Sized + Clone,
Repeats a stream endlessly.
F: FnMut(T, Self::Item) -> Fut,
Fut: Future<Output = T>,
Self: Sized,
Execute an accumulating asynchronous computation over a stream,
collecting all the values into one final result.
F: FnMut(Self::Item) -> Fut,
Fut: Future<Output = bool>,
Self: Sized,
Execute predicate over asynchronous stream, and return `true` if any element in stream satisfied a predicate.
F: FnMut(Self::Item) -> Fut,
Fut: Future<Output = bool>,
Self: Sized,
Execute predicate over asynchronous stream, and return `true` if all element in stream satisfied a predicate.
Self::Item: Stream,
Self: Sized,
Flattens a stream of streams into just one continuous stream.
self,
limit: impl Into<Option<usize>>
) -> FlattenUnorderedWithFlowController<Self, ()>where
Self::Item: Stream + Unpin,
Self: Sized,
Flattens a stream of streams into just one continuous stream. Polls inner streams produced by the base stream concurrently.
F: FnMut(Self::Item) -> U,
U: Stream,
Self: Sized,
Maps a stream like `StreamExt::map` but flattens nested `Stream`s.
self,
limit: impl Into<Option<usize>>,
f: F
) -> FlatMapUnordered<Self, U, F>where
U: Stream + Unpin,
F: FnMut(Self::Item) -> U,
Self: Sized,
Maps a stream like `StreamExt::map` but flattens nested `Stream`s and polls them concurrently, yielding items in any order, as they made available.
F: FnMut(&mut S, Self::Item) -> Fut,
Fut: Future<Output = Option<B>>,
Self: Sized,
Combinator similar to `StreamExt::fold` that holds internal state and produces a new stream.
F: FnMut(&Self::Item) -> Fut,
Fut: Future<Output = bool>,
Self: Sized,
Skip elements on this stream while the provided asynchronous predicate resolves to `true`.
F: FnMut(&Self::Item) -> Fut,
Fut: Future<Output = bool>,
Self: Sized,
Take elements from this stream while the provided asynchronous predicate resolves to `true`.
Fut: Future,
Self: Sized,
Take elements from this stream until the provided future resolves.
F: FnMut(Self::Item) -> Fut,
Fut: Future<Output = ()>,
Self: Sized,
Runs this stream to completion, executing the provided asynchronous closure for each element on the stream.
self,
limit: impl Into<Option<usize>>,
f: F
) -> ForEachConcurrent<Self, Fut, F>where
F: FnMut(Self::Item) -> Fut,
Fut: Future<Output = ()>,
Self: Sized,
Runs this stream to completion, executing the provided asynchronous closure for each element on the stream concurrently as elements become available.
Self: Sized,
Creates a new stream of at most `n` items of the underlying stream.
Self: Sized,
Creates a new stream which skips `n` items of the underlying stream.
Self: Sized,
Fuse a stream such that `poll_next` will never again be called once it has finished. This method can be used to turn any `Stream` into a `FusedStream`.
Borrows a stream, rather than consuming it.
Self: Sized + UnwindSafe,
Catches unwinding panics while polling the stream.
self
) -> Pin<Box<dyn Stream<Item = Self::Item> + Send + 'a, Global>>where
Self: Sized + Send + 'a,
Wrap the stream in a Box, pinning it.
Self: Sized + 'a,
Wrap the stream in a Box, pinning it.
Self::Item: Future,
Self: Sized,
An adaptor for creating a buffered list of pending futures.
Self::Item: Future,
Self: Sized,
An adaptor for creating a buffered list of pending futures (unordered).
St: Stream,
Self: Sized,
An adapter for zipping two streams together.
St: Stream<Item = Self::Item>,
Self: Sized,
Adapter for chaining two streams.
Self: Sized,
Creates a new stream which exposes a `peek` method.
Self: Sized,
An adaptor for chunking up items of the stream inside a vector.
Self: Sized,
An adaptor for chunking up ready items of the stream inside a vector.
S: Sink<Self::Ok, Error = Self::Error>,
Self: TryStream + Sized,
A future that completes after the given stream has been fully processed into the sink and the sink has been flushed and closed.
Self: Sink<Item> + Sized,
Splits this `Stream + Sink` object into separate `Sink` and `Stream`
objects.
F: FnMut(&Self::Item),
Self: Sized,
Do something with each item of this stream, afterwards passing it on.
B: Stream<Item = Self::Item>,
Self: Sized,
Wrap this stream in an `Either` stream, making it the left-hand variant of that `Either`.
B: Stream<Item = Self::Item>,
Self: Sized,
Wrap this stream in an `Either` stream, making it the right-hand variant of that `Either`.
Self: Unpin,
A convenience method for calling [`Stream::poll_next`] on `Unpin`
stream types.#### fn select_next_some(&mut self) -> SelectNextSome<'_, Self>where
Self: Unpin + FusedStream,
Returns a `Future` that resolves when the next item in this stream is ready.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<S, T, E> TryStream for Swhere
S: Stream<Item = Result<T, E>> + ?Sized,
#### type Ok = T
The type of successful values yielded by this future#### type Error = E
The type of failures yielded by this future#### fn try_poll_next(
self: Pin<&mut S>,
cx: &mut Context<'_>
) -> Poll<Option<Result<<S as TryStream>::Ok, <S as TryStream>::Error>>Poll this `TryStream` as if it were a `Stream`.
S: TryStream + ?Sized,
#### fn err_into<E>(self) -> ErrInto<Self, E>where
Self: Sized,
Self::Error: Into<E>,
Wraps the current stream in a new stream which converts the error type into the one provided.
Self: Sized,
F: FnMut(Self::Ok) -> T,
Wraps the current stream in a new stream which maps the success value using the provided closure.
Self: Sized,
F: FnMut(Self::Error) -> E,
Wraps the current stream in a new stream which maps the error value using the provided closure.
F: FnMut(Self::Ok) -> Fut,
Fut: TryFuture<Error = Self::Error>,
Self: Sized,
Chain on a computation for when a value is ready, passing the successful results to the provided closure `f`.
F: FnMut(Self::Error) -> Fut,
Fut: TryFuture<Ok = Self::Ok>,
Self: Sized,
Chain on a computation for when an error happens, passing the erroneous result to the provided closure `f`.
F: FnMut(&Self::Ok),
Self: Sized,
Do something with the success value of this stream, afterwards passing it on.
F: FnMut(&Self::Error),
Self: Sized,
Do something with the error value of this stream, afterwards passing it on.
Self: Sized,
Wraps a [`TryStream`] into a type that implements
`Stream`
Self: Unpin,
Creates a future that attempts to resolve the next item in the stream.
If an error is encountered before the next item, the error is returned instead.
F: FnMut(Self::Ok) -> Fut,
Fut: TryFuture<Ok = (), Error = Self::Error>,
Self: Sized,
Attempts to run this stream to completion, executing the provided asynchronous closure for each element on the stream.
F: FnMut(&Self::Ok) -> Fut,
Fut: TryFuture<Ok = bool, Error = Self::Error>,
Self: Sized,
Skip elements on this stream while the provided asynchronous predicate resolves to `true`.
F: FnMut(&Self::Ok) -> Fut,
Fut: TryFuture<Ok = bool, Error = Self::Error>,
Self: Sized,
Take elements on this stream while the provided asynchronous predicate resolves to `true`.
self,
limit: impl Into<Option<usize>>,
f: F
) -> TryForEachConcurrent<Self, Fut, F>where
F: FnMut(Self::Ok) -> Fut,
Fut: Future<Output = Result<(), Self::Error>>,
Self: Sized,
Attempts to run this stream to completion, executing the provided asynchronous closure for each element on the stream concurrently as elements become available, exiting as soon as an error occurs.
C: Default + Extend<Self::Ok>,
Self: Sized,
Attempt to transform a stream into a collection,
returning a future representing the result of that computation.
Self: Sized,
An adaptor for chunking up successful items of the stream inside a vector.
Fut: Future<Output = bool>,
F: FnMut(&Self::Ok) -> Fut,
Self: Sized,
Attempt to filter the values produced by this stream according to the provided asynchronous closure.
Fut: TryFuture<Ok = Option<T>, Error = Self::Error>,
F: FnMut(Self::Ok) -> Fut,
Self: Sized,
Attempt to filter the values produced by this stream while simultaneously mapping them to a different type according to the provided asynchronous closure.
self,
limit: impl Into<Option<usize>>
) -> TryFlattenUnordered<Self>where
Self::Ok: TryStream + Unpin,
<Self::Ok as TryStream>::Error: From<Self::Error>,
Self: Sized,
Flattens a stream of streams into just one continuous stream. Produced streams will be polled concurrently and any errors will be passed through without looking at them.
If the underlying base stream returns an error, it will be **immediately** propagated.
Self::Ok: TryStream,
<Self::Ok as TryStream>::Error: From<Self::Error>,
Self: Sized,
Flattens a stream of streams into just one continuous stream.
F: FnMut(T, Self::Ok) -> Fut,
Fut: TryFuture<Ok = T, Error = Self::Error>,
Self: Sized,
Attempt to execute an accumulating asynchronous computation over a stream, collecting all the values into one final result.
Self: Sized,
Self::Ok: Extend<<Self::Ok as IntoIterator>::Item> + IntoIterator + Default,
Attempt to concatenate all items of a stream into a single extendable destination, returning a future representing the end result.
Self::Ok: TryFuture<Error = Self::Error>,
Self: Sized,
Attempt to execute several futures from a stream concurrently (unordered).
Self::Ok: TryFuture<Error = Self::Error>,
Self: Sized,
Attempt to execute several futures from a stream concurrently.
&mut self,
cx: &mut Context<'_>
) -> Poll<Option<Result<Self::Ok, Self::Error>>>where
Self: Unpin,
A convenience method for calling [`TryStream::try_poll_next`] on `Unpin`
stream types.#### fn into_async_read(self) -> IntoAsyncRead<Self>where
Self: Sized + TryStreamExt<Error = Error>,
Self::Ok: AsRef<[u8]>,
Adapter that converts this stream into an `AsyncBufRead`.
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Function showdown::fetch_server_url
===
```
pub async fn fetch_server_url(name: &str) -> Result<Url>
```
Requires `native-tls`, `native-tls-vendored` or `rustls-tls` feature.
Type Alias showdown::Result
===
```
pub type Result<T> = Result<T, Error>;
```
Aliased Type
---
```
enum Result<T> {
Ok(T),
Err(Error),
}
```
Variants
---
### Ok(T)
Contains the success value
### Err(Error)
Contains the error value
github.com/twpayne/go-vfs | go | Go | README
[¶](#section-readme)
---
### go-vfs
[![PkgGoDev](https://pkg.go.dev/badge/github.com/twpayne/go-vfs)](https://pkg.go.dev/github.com/twpayne/go-vfs)
[![Report Card](https://goreportcard.com/badge/github.com/twpayne/go-vfs)](https://goreportcard.com/report/github.com/twpayne/go-vfs)
Package `vfs` provides an abstraction of the `os` and `ioutil` packages that is easy to test.
#### Key features
* File system abstraction layer for commonly-used `os` and `ioutil` functions from the standard library.
* Powerful and easy-to-use declarative testing framework, `vfst`. You declare the desired state of the filesystem after your code has run, and `vfst` tests that the filesystem matches that state. For a quick tour of `vfst`'s features,
see [the examples in the documentation](https://godoc.org/github.com/twpayne/go-vfs/vfst#pkg-examples).
* Compatibility with
[`github.com/spf13/afero`](https://github.com/spf13/afero) and
[`github.com/src-d/go-billy`](https://github.com/src-d/go-billy).
#### Quick start
`vfs` provides implementations of the `FS` interface:
```
// An FS is an abstraction over commonly-used functions in the os and ioutil
// packages.
type FS interface {
Chmod(name string, mode os.FileMode) error
Chown(name string, uid, gid int) error
Chtimes(name string, atime, mtime time.Time) error
Create(name string) (*os.File, error)
FileSeparator() rune
Glob(pattern string) ([]string, error)
Lchown(name string, uid, gid int) error
Lstat(name string) (os.FileInfo, error)
Mkdir(name string, perm os.FileMode) error
Open(name string) (*os.File, error)
OpenFile(name string, flag int, perm os.FileMode) (*os.File, error)
RawPath(name string) (string, error)
ReadDir(dirname string) ([]os.FileInfo, error)
ReadFile(filename string) ([]byte, error)
Readlink(name string) (string, error)
Remove(name string) error
RemoveAll(name string) error
Rename(oldpath, newpath string) error
Stat(name string) (os.FileInfo, error)
Symlink(oldname, newname string) error
Truncate(name string, size int64) error
WriteFile(filename string, data []byte, perm os.FileMode) error
}
```
To use `vfs`, you write your code to use the `FS` interface, and then use
`vfst` to test it.
`vfs` also provides functions `MkdirAll` (equivalent to `os.MkdirAll`),
`Contains` (an improved `filepath.HasPrefix`), and `Walk` (equivalent to
`filepath.Walk`) that operate on an `FS`.
The implementations of `FS` provided are:
* `OSFS` which calls the underlying `os` and `ioutil` functions directly.
* `PathFS` which transforms all paths to provide a poor-man's `chroot`.
* `ReadOnlyFS` which prevents modification of the underlying FS.
* `TestFS` which assists running tests on a real filesystem but in a temporary directory that is easily cleaned up. It uses `OSFS` under the hood.
Example usage:
```
// writeConfigFile is the function we're going to test. It can make arbitrary
// changes to the filesystem through fs.
func writeConfigFile(fs vfs.FS) error {
return fs.WriteFile("/home/user/app.conf", []byte(`app config`), 0644)
}
// TestWriteConfigFile is our test function.
func TestWriteConfigFile(t *testing.T) {
// Create and populate a temporary directory with a home directory.
fs, cleanup, err := vfst.NewTestFS(map[string]interface{}{
"/home/user/.bashrc": "# contents of user's .bashrc\n",
})
// Check that the directory was populated successfully.
if err != nil {
t.Fatalf("vfsTest.NewTestFS(_) == _, _, %v, want _, _, <nil>", err)
}
// Ensure that the temporary directory is removed.
defer cleanup()
// Call the function we want to test.
if err := writeConfigFile(fs); err != nil {
t.Error(err)
}
// Check properties of the filesystem after our function has modified it.
vfst.RunTest(t, fs, "app_conf",
vfst.PathTest("/home/user/app.conf",
vfst.TestModeIsRegular,
vfst.TestModePerm(0644),
vfst.TestContentsString("app config"),
),
)
}
```
#### `github.com/spf13/afero` compatibility
There is a compatibility shim for
[`github.com/spf13/afero`](https://github.com/spf13/afero) in
[`github.com/twpayne/go-vfsafero`](https://github.com/twpayne/go-vfsafero). This allows you to use `vfst` to test existing code that uses
[`afero.FS`](https://godoc.org/github.com/spf13/afero#Fs). See [the documentation](https://godoc.org/github.com/twpayne/go-vfsafero) for an example.
#### `github.com/src-d/go-billy` compatibility
There is a compatibility shim for
[`github.com/src-d/go-billy`](https://github.com/src-d/go-billy) in
[`github.com/twpayne/go-vfsbilly`](https://github.com/twpayne/go-vfsbilly). This allows you to use `vfst` to test existing code that uses
[`billy.Filesystem`](https://godoc.org/github.com/src-d/go-billy#Filesystem).
See [the documentation](https://godoc.org/github.com/twpayne/go-vfsbilly) for an example.
#### Motivation
`vfs` was inspired by
[`github.com/spf13/afero`](https://github.com/spf13/afero). So, why not use
`afero`?
* `afero` has several critical bugs in its in-memory mock filesystem implementation `MemMapFs`, to the point that it is unusable for non-trivial test cases. `vfs` does not attempt to implement an in-memory mock filesystem,
and instead only provides a thin layer around the standard library's `os` and
`ioutil` packages, and as such should have fewer bugs.
* `afero` does not support creating or reading symbolic links, and its
`LstatIfPossible` interface is clumsy to use as it is not part of the
`afero.Fs` interface. `vfs` provides out-of-the-box support for symbolic links with all methods in the `FS` interface.
* `afero` has been effectively abandoned by its author, and a "friendly fork"
([`github.com/absfs/afero`](https://github.com/absfs/afero)) has not seen much activity. `vfs`, by providing much less functionality than `afero`, should be smaller and easier to maintain.
#### License
MIT
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package vfs provides an abstraction of the os and ioutil packages that is easy to test.
### Index [¶](#pkg-index)
* [Variables](#pkg-variables)
* [func Contains(fs Stater, p, prefix string) (bool, error)](#Contains)
* [func MkdirAll(fs MkdirStater, path string, perm os.FileMode) error](#MkdirAll)
* [func Walk(fs LstatReadDirer, path string, walkFn filepath.WalkFunc) error](#Walk)
* [func WalkSlash(fs LstatReadDirer, path string, walkFn filepath.WalkFunc) error](#WalkSlash)
* [type FS](#FS)
* [type LstatReadDirer](#LstatReadDirer)
* [type MkdirStater](#MkdirStater)
* [type PathFS](#PathFS)
* + [func NewPathFS(fs FS, path string) *PathFS](#NewPathFS)
* + [func (p *PathFS) Chmod(name string, mode os.FileMode) error](#PathFS.Chmod)
+ [func (p *PathFS) Chown(name string, uid, gid int) error](#PathFS.Chown)
+ [func (p *PathFS) Chtimes(name string, atime, mtime time.Time) error](#PathFS.Chtimes)
+ [func (p *PathFS) Create(name string) (*os.File, error)](#PathFS.Create)
+ [func (p *PathFS) Glob(pattern string) ([]string, error)](#PathFS.Glob)
+ [func (p *PathFS) Join(op, name string) (string, error)](#PathFS.Join)
+ [func (p *PathFS) Lchown(name string, uid, gid int) error](#PathFS.Lchown)
+ [func (p *PathFS) Lstat(name string) (os.FileInfo, error)](#PathFS.Lstat)
+ [func (p *PathFS) Mkdir(name string, perm os.FileMode) error](#PathFS.Mkdir)
+ [func (p *PathFS) Open(name string) (*os.File, error)](#PathFS.Open)
+ [func (p *PathFS) OpenFile(name string, flag int, perm os.FileMode) (*os.File, error)](#PathFS.OpenFile)
+ [func (p *PathFS) PathSeparator() rune](#PathFS.PathSeparator)
+ [func (p *PathFS) RawPath(path string) (string, error)](#PathFS.RawPath)
+ [func (p *PathFS) ReadDir(dirname string) ([]os.FileInfo, error)](#PathFS.ReadDir)
+ [func (p *PathFS) ReadFile(filename string) ([]byte, error)](#PathFS.ReadFile)
+ [func (p *PathFS) Readlink(name string) (string, error)](#PathFS.Readlink)
+ [func (p *PathFS) Remove(name string) error](#PathFS.Remove)
+ [func (p *PathFS) RemoveAll(name string) error](#PathFS.RemoveAll)
+ [func (p *PathFS) Rename(oldpath, newpath string) error](#PathFS.Rename)
+ [func (p *PathFS) Stat(name string) (os.FileInfo, error)](#PathFS.Stat)
+ [func (p *PathFS) Symlink(oldname, newname string) error](#PathFS.Symlink)
+ [func (p *PathFS) Truncate(name string, size int64) error](#PathFS.Truncate)
+ [func (p *PathFS) WriteFile(filename string, data []byte, perm os.FileMode) error](#PathFS.WriteFile)
* [type ReadOnlyFS](#ReadOnlyFS)
* + [func NewReadOnlyFS(fs FS) *ReadOnlyFS](#NewReadOnlyFS)
* + [func (r *ReadOnlyFS) Chmod(name string, mode os.FileMode) error](#ReadOnlyFS.Chmod)
+ [func (r *ReadOnlyFS) Chown(name string, uid, gid int) error](#ReadOnlyFS.Chown)
+ [func (r *ReadOnlyFS) Chtimes(name string, atime, mtime time.Time) error](#ReadOnlyFS.Chtimes)
+ [func (r *ReadOnlyFS) Create(name string) (*os.File, error)](#ReadOnlyFS.Create)
+ [func (r *ReadOnlyFS) Glob(pattern string) ([]string, error)](#ReadOnlyFS.Glob)
+ [func (r *ReadOnlyFS) Lchown(name string, uid, gid int) error](#ReadOnlyFS.Lchown)
+ [func (r *ReadOnlyFS) Lstat(name string) (os.FileInfo, error)](#ReadOnlyFS.Lstat)
+ [func (r *ReadOnlyFS) Mkdir(name string, perm os.FileMode) error](#ReadOnlyFS.Mkdir)
+ [func (r *ReadOnlyFS) Open(name string) (*os.File, error)](#ReadOnlyFS.Open)
+ [func (r *ReadOnlyFS) OpenFile(name string, flag int, perm os.FileMode) (*os.File, error)](#ReadOnlyFS.OpenFile)
+ [func (r *ReadOnlyFS) PathSeparator() rune](#ReadOnlyFS.PathSeparator)
+ [func (r *ReadOnlyFS) RawPath(path string) (string, error)](#ReadOnlyFS.RawPath)
+ [func (r *ReadOnlyFS) ReadDir(dirname string) ([]os.FileInfo, error)](#ReadOnlyFS.ReadDir)
+ [func (r *ReadOnlyFS) ReadFile(filename string) ([]byte, error)](#ReadOnlyFS.ReadFile)
+ [func (r *ReadOnlyFS) Readlink(name string) (string, error)](#ReadOnlyFS.Readlink)
+ [func (r *ReadOnlyFS) Remove(name string) error](#ReadOnlyFS.Remove)
+ [func (r *ReadOnlyFS) RemoveAll(name string) error](#ReadOnlyFS.RemoveAll)
+ [func (r *ReadOnlyFS) Rename(oldpath, newpath string) error](#ReadOnlyFS.Rename)
+ [func (r *ReadOnlyFS) Stat(name string) (os.FileInfo, error)](#ReadOnlyFS.Stat)
+ [func (r *ReadOnlyFS) Symlink(oldname, newname string) error](#ReadOnlyFS.Symlink)
+ [func (r *ReadOnlyFS) Truncate(name string, size int64) error](#ReadOnlyFS.Truncate)
+ [func (r *ReadOnlyFS) WriteFile(filename string, data []byte, perm os.FileMode) error](#ReadOnlyFS.WriteFile)
* [type Stater](#Stater)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
```
var OSFS = &osfs{}
```
OSFS is the FS that calls os and ioutil functions directly.
```
var SkipDir = [filepath](/path/filepath).[SkipDir](/path/filepath#SkipDir)
```
SkipDir is filepath.SkipDir.
### Functions [¶](#pkg-functions)
####
func [Contains](https://github.com/twpayne/go-vfs/blob/v1.7.2/contains.go#L18) [¶](#Contains)
added in v1.0.6
```
func Contains(fs [Stater](#Stater), p, prefix [string](/builtin#string)) ([bool](/builtin#bool), [error](/builtin#error))
```
Contains returns true if p is reachable by traversing through prefix. prefix must exist, but p may not. It is an expensive but accurate alternative to the deprecated filepath.HasPrefix.
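A minimal sketch (the paths are illustrative; the prefix must exist, the checked path need not):
```go
package main

import (
	"fmt"

	vfs "github.com/twpayne/go-vfs"
)

func main() {
	// Check whether the path lies within the prefix, resolving symlinks.
	inside, err := vfs.Contains(vfs.OSFS, "/tmp/project/notes.txt", "/tmp")
	fmt.Println(inside, err)
}
```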
####
func [MkdirAll](https://github.com/twpayne/go-vfs/blob/v1.7.2/mkdirall.go#L15) [¶](#MkdirAll)
```
func MkdirAll(fs [MkdirStater](#MkdirStater), path [string](/builtin#string), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
MkdirAll is equivalent to os.MkdirAll but operates on fs.
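A minimal sketch creating a nested directory on the real filesystem via OSFS (path and mode are illustrative):
```go
package main

import (
	vfs "github.com/twpayne/go-vfs"
)

func main() {
	// Create the whole directory chain, like os.MkdirAll, but through the FS abstraction.
	if err := vfs.MkdirAll(vfs.OSFS, "/tmp/go-vfs-example/a/b", 0o755); err != nil {
		panic(err)
	}
}
```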
####
func [Walk](https://github.com/twpayne/go-vfs/blob/v1.7.2/walk.go#L56) [¶](#Walk)
```
func Walk(fs [LstatReadDirer](#LstatReadDirer), path [string](/builtin#string), walkFn [filepath](/path/filepath).[WalkFunc](/path/filepath#WalkFunc)) [error](/builtin#error)
```
Walk is the equivalent of filepath.Walk but operates on fs. Entries are returned in lexicographical order.
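A usage sketch (not from the package docs; the root path is illustrative) that prints regular files and skips directories named ".git" with vfs.SkipDir:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	vfs "github.com/twpayne/go-vfs"
)

func main() {
	err := vfs.Walk(vfs.OSFS, "/tmp/go-vfs-example", func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		// Skip .git directories entirely.
		if info.IsDir() && filepath.Base(path) == ".git" {
			return vfs.SkipDir
		}
		if info.Mode().IsRegular() {
			fmt.Println(path)
		}
		return nil
	})
	if err != nil {
		panic(err)
	}
}
```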
####
func [WalkSlash](https://github.com/twpayne/go-vfs/blob/v1.7.2/walk.go#L63) [¶](#WalkSlash)
added in v1.4.1
```
func WalkSlash(fs [LstatReadDirer](#LstatReadDirer), path [string](/builtin#string), walkFn [filepath](/path/filepath).[WalkFunc](/path/filepath#WalkFunc)) [error](/builtin#error)
```
WalkSlash is the equivalent of Walk but all paths are converted to use forward slashes with filepath.ToSlash.
### Types [¶](#pkg-types)
####
type [FS](https://github.com/twpayne/go-vfs/blob/v1.7.2/vfs.go#L12) [¶](#FS)
```
type FS interface {
Chmod(name [string](/builtin#string), mode [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
Chown(name [string](/builtin#string), uid, gid [int](/builtin#int)) [error](/builtin#error)
Chtimes(name [string](/builtin#string), atime, mtime [time](/time).[Time](/time#Time)) [error](/builtin#error)
Create(name [string](/builtin#string)) (*[os](/os).[File](/os#File), [error](/builtin#error))
Glob(pattern [string](/builtin#string)) ([][string](/builtin#string), [error](/builtin#error))
Lchown(name [string](/builtin#string), uid, gid [int](/builtin#int)) [error](/builtin#error)
Lstat(name [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
Mkdir(name [string](/builtin#string), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
Open(name [string](/builtin#string)) (*[os](/os).[File](/os#File), [error](/builtin#error))
OpenFile(name [string](/builtin#string), flag [int](/builtin#int), perm [os](/os).[FileMode](/os#FileMode)) (*[os](/os).[File](/os#File), [error](/builtin#error))
PathSeparator() [rune](/builtin#rune)
RawPath(name [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
ReadDir(dirname [string](/builtin#string)) ([][os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
ReadFile(filename [string](/builtin#string)) ([][byte](/builtin#byte), [error](/builtin#error))
Readlink(name [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
Remove(name [string](/builtin#string)) [error](/builtin#error)
RemoveAll(name [string](/builtin#string)) [error](/builtin#error)
Rename(oldpath, newpath [string](/builtin#string)) [error](/builtin#error)
Stat(name [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
Symlink(oldname, newname [string](/builtin#string)) [error](/builtin#error)
Truncate(name [string](/builtin#string), size [int64](/builtin#int64)) [error](/builtin#error)
WriteFile(filename [string](/builtin#string), data [][byte](/builtin#byte), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
}
```
An FS is an abstraction over commonly-used functions in the os and ioutil packages.
####
type [LstatReadDirer](https://github.com/twpayne/go-vfs/blob/v1.7.2/walk.go#L14) [¶](#LstatReadDirer)
added in v1.0.4
```
type LstatReadDirer interface {
Lstat(name [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
ReadDir(dirname [string](/builtin#string)) ([][os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
}
```
A LstatReadDirer implements all the functionality needed by Walk.
####
type [MkdirStater](https://github.com/twpayne/go-vfs/blob/v1.7.2/mkdirall.go#L9) [¶](#MkdirStater)
added in v1.0.4
```
type MkdirStater interface {
Mkdir(name [string](/builtin#string), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
Stat(name [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
}
```
A MkdirStater implements all the functionality needed by MkdirAll.
####
type [PathFS](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L14) [¶](#PathFS)
```
type PathFS struct {
// contains filtered or unexported fields
}
```
A PathFS operates on an existing FS, but prefixes all names with a path. All names must be absolute paths, with the exception of symlinks, which may be relative.
####
func [NewPathFS](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L21) [¶](#NewPathFS)
```
func NewPathFS(fs [FS](#FS), path [string](/builtin#string)) *[PathFS](#PathFS)
```
NewPathFS returns a new *PathFS operating on fs and prefixing all names with path.
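A minimal sketch (paths are illustrative). The prefix directory is created first on OSFS; the PathFS then resolves the absolute name "/config.txt" to a file underneath that prefix:
```go
package main

import (
	"fmt"

	vfs "github.com/twpayne/go-vfs"
)

func main() {
	// Ensure the prefix directory exists on the underlying FS.
	if err := vfs.MkdirAll(vfs.OSFS, "/tmp/go-vfs-example", 0o755); err != nil {
		panic(err)
	}
	prefixed := vfs.NewPathFS(vfs.OSFS, "/tmp/go-vfs-example")
	// Writes /tmp/go-vfs-example/config.txt on the underlying OSFS.
	if err := prefixed.WriteFile("/config.txt", []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}
	data, err := prefixed.ReadFile("/config.txt")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", data)
}
```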
####
func (*PathFS) [Chmod](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L29) [¶](#PathFS.Chmod)
```
func (p *[PathFS](#PathFS)) Chmod(name [string](/builtin#string), mode [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
Chmod implements os.Chmod.
####
func (*PathFS) [Chown](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L38) [¶](#PathFS.Chown)
added in v0.1.5
```
func (p *[PathFS](#PathFS)) Chown(name [string](/builtin#string), uid, gid [int](/builtin#int)) [error](/builtin#error)
```
Chown implements os.Chown.
####
func (*PathFS) [Chtimes](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L47) [¶](#PathFS.Chtimes)
added in v0.1.5
```
func (p *[PathFS](#PathFS)) Chtimes(name [string](/builtin#string), atime, mtime [time](/time).[Time](/time#Time)) [error](/builtin#error)
```
Chtimes implements os.Chtimes.
####
func (*PathFS) [Create](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L56) [¶](#PathFS.Create)
added in v0.1.5
```
func (p *[PathFS](#PathFS)) Create(name [string](/builtin#string)) (*[os](/os).[File](/os#File), [error](/builtin#error))
```
Create implements os.Create.
####
func (*PathFS) [Glob](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L65) [¶](#PathFS.Glob)
added in v1.3.0
```
func (p *[PathFS](#PathFS)) Glob(pattern [string](/builtin#string)) ([][string](/builtin#string), [error](/builtin#error))
```
Glob implements filepath.Glob.
####
func (*PathFS) [Join](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L84) [¶](#PathFS.Join)
```
func (p *[PathFS](#PathFS)) Join(op, name [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
Join returns p's path joined with name.
####
func (*PathFS) [Lchown](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L89) [¶](#PathFS.Lchown)
added in v0.1.5
```
func (p *[PathFS](#PathFS)) Lchown(name [string](/builtin#string), uid, gid [int](/builtin#int)) [error](/builtin#error)
```
Lchown implements os.Lchown.
####
func (*PathFS) [Lstat](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L98) [¶](#PathFS.Lstat)
```
func (p *[PathFS](#PathFS)) Lstat(name [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
Lstat implements os.Lstat.
####
func (*PathFS) [Mkdir](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L107) [¶](#PathFS.Mkdir)
```
func (p *[PathFS](#PathFS)) Mkdir(name [string](/builtin#string), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
Mkdir implements os.Mkdir.
####
func (*PathFS) [Open](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L116) [¶](#PathFS.Open)
added in v0.1.4
```
func (p *[PathFS](#PathFS)) Open(name [string](/builtin#string)) (*[os](/os).[File](/os#File), [error](/builtin#error))
```
Open implements os.Open.
####
func (*PathFS) [OpenFile](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L125) [¶](#PathFS.OpenFile)
added in v0.1.5
```
func (p *[PathFS](#PathFS)) OpenFile(name [string](/builtin#string), flag [int](/builtin#int), perm [os](/os).[FileMode](/os#FileMode)) (*[os](/os).[File](/os#File), [error](/builtin#error))
```
OpenFile implements os.OpenFile.
####
func (*PathFS) [PathSeparator](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L134) [¶](#PathFS.PathSeparator)
added in v1.4.0
```
func (p *[PathFS](#PathFS)) PathSeparator() [rune](/builtin#rune)
```
PathSeparator implements PathSeparator.
####
func (*PathFS) [RawPath](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L139) [¶](#PathFS.RawPath)
added in v1.3.3
```
func (p *[PathFS](#PathFS)) RawPath(path [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
RawPath implements RawPath.
####
func (*PathFS) [ReadDir](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L144) [¶](#PathFS.ReadDir)
```
func (p *[PathFS](#PathFS)) ReadDir(dirname [string](/builtin#string)) ([][os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
ReadDir implements ioutil.ReadDir.
####
func (*PathFS) [ReadFile](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L153) [¶](#PathFS.ReadFile)
```
func (p *[PathFS](#PathFS)) ReadFile(filename [string](/builtin#string)) ([][byte](/builtin#byte), [error](/builtin#error))
```
ReadFile implements ioutil.ReadFile.
####
func (*PathFS) [Readlink](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L162) [¶](#PathFS.Readlink)
```
func (p *[PathFS](#PathFS)) Readlink(name [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
Readlink implements os.Readlink.
####
func (*PathFS) [Remove](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L171) [¶](#PathFS.Remove)
```
func (p *[PathFS](#PathFS)) Remove(name [string](/builtin#string)) [error](/builtin#error)
```
Remove implements os.Remove.
####
func (*PathFS) [RemoveAll](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L180) [¶](#PathFS.RemoveAll)
```
func (p *[PathFS](#PathFS)) RemoveAll(name [string](/builtin#string)) [error](/builtin#error)
```
RemoveAll implements os.RemoveAll.
####
func (*PathFS) [Rename](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L189) [¶](#PathFS.Rename)
added in v0.1.3
```
func (p *[PathFS](#PathFS)) Rename(oldpath, newpath [string](/builtin#string)) [error](/builtin#error)
```
Rename implements os.Rename.
####
func (*PathFS) [Stat](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L202) [¶](#PathFS.Stat)
```
func (p *[PathFS](#PathFS)) Stat(name [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
Stat implements os.Stat.
####
func (*PathFS) [Symlink](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L211) [¶](#PathFS.Symlink)
```
func (p *[PathFS](#PathFS)) Symlink(oldname, newname [string](/builtin#string)) [error](/builtin#error)
```
Symlink implements os.Symlink.
####
func (*PathFS) [Truncate](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L230) [¶](#PathFS.Truncate)
added in v0.1.5
```
func (p *[PathFS](#PathFS)) Truncate(name [string](/builtin#string), size [int64](/builtin#int64)) [error](/builtin#error)
```
Truncate implements os.Truncate.
####
func (*PathFS) [WriteFile](https://github.com/twpayne/go-vfs/blob/v1.7.2/pathfs.go#L239) [¶](#PathFS.WriteFile)
```
func (p *[PathFS](#PathFS)) WriteFile(filename [string](/builtin#string), data [][byte](/builtin#byte), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
WriteFile implements ioutil.WriteFile.
####
type [ReadOnlyFS](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L11) [¶](#ReadOnlyFS)
```
type ReadOnlyFS struct {
// contains filtered or unexported fields
}
```
A ReadOnlyFS operates on an existing FS, but any methods that modify the FS return an error.
####
func [NewReadOnlyFS](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L16) [¶](#NewReadOnlyFS)
```
func NewReadOnlyFS(fs [FS](#FS)) *[ReadOnlyFS](#ReadOnlyFS)
```
NewReadOnlyFS returns a new *ReadOnlyFS operating on fs.
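A minimal sketch (paths are illustrative): reads pass through to the wrapped FS, while any mutating call returns an error instead of touching the filesystem:
```go
package main

import (
	"fmt"

	vfs "github.com/twpayne/go-vfs"
)

func main() {
	ro := vfs.NewReadOnlyFS(vfs.OSFS)
	// Read-only operations are forwarded to the underlying OSFS.
	if _, err := ro.Stat("/tmp"); err != nil {
		panic(err)
	}
	// Mutating operations are rejected with an error.
	if err := ro.Remove("/tmp"); err != nil {
		fmt.Println("write rejected:", err)
	}
}
```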
####
func (*ReadOnlyFS) [Chmod](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L23) [¶](#ReadOnlyFS.Chmod)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Chmod(name [string](/builtin#string), mode [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
Chmod implements os.Chmod.
####
func (*ReadOnlyFS) [Chown](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L28) [¶](#ReadOnlyFS.Chown)
added in v0.1.5
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Chown(name [string](/builtin#string), uid, gid [int](/builtin#int)) [error](/builtin#error)
```
Chown implements os.Chown.
####
func (*ReadOnlyFS) [Chtimes](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L33) [¶](#ReadOnlyFS.Chtimes)
added in v0.1.5
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Chtimes(name [string](/builtin#string), atime, mtime [time](/time).[Time](/time#Time)) [error](/builtin#error)
```
Chtimes implements os.Chtimes.
####
func (*ReadOnlyFS) [Create](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L38) [¶](#ReadOnlyFS.Create)
added in v0.1.5
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Create(name [string](/builtin#string)) (*[os](/os).[File](/os#File), [error](/builtin#error))
```
Create implements os.Create.
####
func (*ReadOnlyFS) [Glob](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L43) [¶](#ReadOnlyFS.Glob)
added in v1.3.0
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Glob(pattern [string](/builtin#string)) ([][string](/builtin#string), [error](/builtin#error))
```
Glob implements filepath.Glob.
####
func (*ReadOnlyFS) [Lchown](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L48) [¶](#ReadOnlyFS.Lchown)
added in v0.1.5
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Lchown(name [string](/builtin#string), uid, gid [int](/builtin#int)) [error](/builtin#error)
```
Lchown implements os.Lchown.
####
func (*ReadOnlyFS) [Lstat](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L53) [¶](#ReadOnlyFS.Lstat)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Lstat(name [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
Lstat implements os.Lstat.
####
func (*ReadOnlyFS) [Mkdir](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L58) [¶](#ReadOnlyFS.Mkdir)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Mkdir(name [string](/builtin#string), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
Mkdir implements os.Mkdir.
####
func (*ReadOnlyFS) [Open](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L63) [¶](#ReadOnlyFS.Open)
added in v0.1.4
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Open(name [string](/builtin#string)) (*[os](/os).[File](/os#File), [error](/builtin#error))
```
Open implements os.Open.
####
func (*ReadOnlyFS) [OpenFile](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L68) [¶](#ReadOnlyFS.OpenFile)
added in v0.1.5
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) OpenFile(name [string](/builtin#string), flag [int](/builtin#int), perm [os](/os).[FileMode](/os#FileMode)) (*[os](/os).[File](/os#File), [error](/builtin#error))
```
OpenFile implements os.OpenFile.
####
func (*ReadOnlyFS) [PathSeparator](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L76) [¶](#ReadOnlyFS.PathSeparator)
added in v1.4.0
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) PathSeparator() [rune](/builtin#rune)
```
PathSeparator implements PathSeparator.
####
func (*ReadOnlyFS) [RawPath](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L111) [¶](#ReadOnlyFS.RawPath)
added in v1.3.4
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) RawPath(path [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
RawPath implements RawPath.
####
func (*ReadOnlyFS) [ReadDir](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L81) [¶](#ReadOnlyFS.ReadDir)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) ReadDir(dirname [string](/builtin#string)) ([][os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
ReadDir implements ioutil.ReadDir.
####
func (*ReadOnlyFS) [ReadFile](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L86) [¶](#ReadOnlyFS.ReadFile)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) ReadFile(filename [string](/builtin#string)) ([][byte](/builtin#byte), [error](/builtin#error))
```
ReadFile implements ioutil.ReadFile.
####
func (*ReadOnlyFS) [Readlink](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L91) [¶](#ReadOnlyFS.Readlink)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Readlink(name [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
Readlink implements os.Readlink.
####
func (*ReadOnlyFS) [Remove](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L96) [¶](#ReadOnlyFS.Remove)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Remove(name [string](/builtin#string)) [error](/builtin#error)
```
Remove implements os.Remove.
####
func (*ReadOnlyFS) [RemoveAll](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L101) [¶](#ReadOnlyFS.RemoveAll)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) RemoveAll(name [string](/builtin#string)) [error](/builtin#error)
```
RemoveAll implements os.RemoveAll.
####
func (*ReadOnlyFS) [Rename](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L106) [¶](#ReadOnlyFS.Rename)
added in v0.1.3
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Rename(oldpath, newpath [string](/builtin#string)) [error](/builtin#error)
```
Rename implements os.Rename.
####
func (*ReadOnlyFS) [Stat](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L116) [¶](#ReadOnlyFS.Stat)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Stat(name [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
Stat implements os.Stat.
####
func (*ReadOnlyFS) [Symlink](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L121) [¶](#ReadOnlyFS.Symlink)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Symlink(oldname, newname [string](/builtin#string)) [error](/builtin#error)
```
Symlink implements os.Symlink.
####
func (*ReadOnlyFS) [Truncate](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L126) [¶](#ReadOnlyFS.Truncate)
added in v0.1.5
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) Truncate(name [string](/builtin#string), size [int64](/builtin#int64)) [error](/builtin#error)
```
Truncate implements os.Truncate.
####
func (*ReadOnlyFS) [WriteFile](https://github.com/twpayne/go-vfs/blob/v1.7.2/readonlyfs.go#L131) [¶](#ReadOnlyFS.WriteFile)
```
func (r *[ReadOnlyFS](#ReadOnlyFS)) WriteFile(filename [string](/builtin#string), data [][byte](/builtin#byte), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
WriteFile implements ioutil.WriteFile.
####
type [Stater](https://github.com/twpayne/go-vfs/blob/v1.7.2/contains.go#L11) [¶](#Stater)
added in v1.0.6
```
type Stater interface {
Stat([string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
}
```
A Stater implements Stat. It is assumed that the os.FileInfos returned by Stat are compatible with os.SameFile.
github.com/Pradumnasaraf/DevOps
README
[¶](#section-readme)
---
DevOps
===
Contains all my learning related to DevOps tools and tech.
### Kubernetes
![Kubernetes](https://user-images.githubusercontent.com/51878265/200594367-f416d081-af8f-4f48-8008-998d005b317f.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Kubernetes/README.md)
* [Commands](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Kubernetes/commands/README.md)
* [Manifest Files](https://github.com/Pradumnasaraf/DevOps/tree/main/Kubernetes/YAML)
### Docker
![docker](https://user-images.githubusercontent.com/51878265/200594916-47ba8a4c-fb94-4953-b179-dfb542df9499.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Docker/README.md)
* [Commands](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Docker/commands/README.md)
* [Compose/Stack Files](https://github.com/Pradumnasaraf/DevOps/tree/main/Docker/YAML)
* [Dockerfile](https://github.com/Pradumnasaraf/DevOps/tree/main/Docker/Dockerfile)
### GitHub Actions
![GitHub Action](https://user-images.githubusercontent.com/51878265/211621722-c2ddc389-6e4e-4769-9dac-f18f8e71fed3.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/GitHub-Actions/README.md)
* [Workflows](https://github.com/Pradumnasaraf/DevOps/tree/main/GitHub-Actions/Workflows)
### Linux
![linux](https://user-images.githubusercontent.com/51878265/209197882-51406a8f-04ff-4c53-a362-ac32ae8566ad.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Linux/README.md)
* [Commands](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Linux/commands/README.md)
### Git
![git](https://user-images.githubusercontent.com/51878265/202784470-2c813581-7160-4aaf-b96c-35187795d05b.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Git/README.md)
* [Commands](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Git/commands/README.md)
### Networking
![network](https://user-images.githubusercontent.com/51878265/204347251-efd0e271-5d3c-4008-bdab-6f6ce5b2195f.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Networking/README.md)
* [Commands](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Networking/commands/README.md)
### YAML
![YAML](https://user-images.githubusercontent.com/51878265/202765143-55758916-b631-4c18-aaad-718b42507d67.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/YAML/README.md)
* [Syntax](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/YAML/syntax/README.md)
### Go
![network](https://user-images.githubusercontent.com/51878265/213385507-52f03107-388c-4992-9b5e-c89de6906e37.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Go/README.md)
* [Concepts](https://github.com/Pradumnasaraf/DevOps/tree/main/Go/Concepts)
* [Practice App](https://github.com/Pradumnasaraf/DevOps/tree/main/Go/App)
### Helm
![Helm](https://user-images.githubusercontent.com/51878265/202859249-b90ac510-d8e8-408d-9c07-0d2bd8e1b092.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Helm/README.md)
### Prometheus
![Prometheus](https://user-images.githubusercontent.com/51878265/202859485-eba6809e-1cb8-4bbc-ab22-efa3c91d6463.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Prometheus/README.md)
### GitOps
![Gitops](https://user-images.githubusercontent.com/51878265/206730962-b20f94c1-17af-48b2-b62c-b6c02dbeeb77.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/GitOps/README.md)
### ArgoCD
![Argo](https://user-images.githubusercontent.com/51878265/205495495-b3f0b395-3ce3-42d8-9274-220ff10334f6.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/ArgoCD/README.md)
* [Manifest Files](https://github.com/Pradumnasaraf/DevOps/tree/main/ArgoCD/YAML)
### Portainer
![portainer](https://user-images.githubusercontent.com/51878265/204345912-dee5ddf4-4a91-4b4f-aeb3-5a429de5a7f7.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Portainer/README.md)
### Jenkins
![Jenkins](https://user-images.githubusercontent.com/51878265/209197795-570330e6-fbee-4bf3-a42e-b8609e3afc46.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Jenkins/README.md)
* [Jenkinsfile](https://github.com/Pradumnasaraf/DevOps/tree/main/Jenkins/Jenkinsfile)
### Bash Scripting
![Bash](https://user-images.githubusercontent.com/51878265/200594989-b1406680-ed41-478a-84d5-7c35b287e112.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Bash-Scripting/README.md)
* [Script](https://github.com/Pradumnasaraf/DevOps/tree/main/Bash-Scripting/Scripts)
### Lens IDE
![Lens](https://user-images.githubusercontent.com/51878265/208243882-9c4f03fe-7aa3-4f42-84c4-ab90047e056b.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Lens/README.md)
### Kubescape
![Kubescape](https://user-images.githubusercontent.com/51878265/208244012-919ce817-32c1-40fe-b31f-44dba72655da.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Kubescape/README.md)
### ValidKube
![ValidKube](https://user-images.githubusercontent.com/51878265/208244291-3e43c1aa-cee1-4943-8775-21189cab3dcd.png)
* [Notes + Learning Resources](https://github.com/Pradumnasaraf/DevOps/blob/v1.4.3/Validkube/README.md)
**If you find the repository helpful, please consider sponsoring my efforts via [GitHub Sponsors](https://github.com/sponsors/Pradumnasaraf)**
spck-code-editor
Spck (pronounced as "spec") Editor enables you to work on JavaScript projects whenever, wherever. If it's a new single-page app, a game, data visualizations, 3D simulations, or anything else that can be realized using the magic of JavaScript, HTML, and CSS, Spck Editor hopes to be the code editor of your choice.
### Android App
Spck Editor is currently available on Android. Chrome Store and iOS support are coming, but not yet ready. Meanwhile, if it's your first time here, feel free to check out the list of features below.
## Features Overview
### Hints
Code hints are an essential aid to mastering external libraries. They also offer a convenience in your own code, and provide incentives to better document code with JSDoc comment strings.
### Autocompletion
Make fewer typos with the awesome code completion provided by the editor. Code completions are handled by the awesome TypeScript libraries provided by VS Code.
### Themes
The editor supports a variety of themes including XCode, Chrome, Dracula, Monokai, Ayu Mirage, and Ayu Light.
### Markdown
If you ever need to preview a Markdown file, you can do so right in the editor. Built upon the awesome Markdown-it library.
### Github
Export your changes right to your Github repo. If you need to make live edits to Github, you can do so right in the editor. The editor provides basic Github integration including code diffs, highlighted files with changes, pulling and committing.
Be sure to give it a try. Also, please check out our Github repos for more info on embedding Spck Editor in your own site, as well as the libraries that Spck Editor is built upon.
### Diffs
File diffing is an awesome feature that any great code editor with git integration needs. More features will be added to the code differ in the near future, for example, char-by-char diffs as well as in-place diff editing.
### Import & Export Zip
An alternative to Github commits is exporting your project as a zip file. You can transfer your project to another device by the reverse operation of importing. Note that the exported file may have Unix-style newlines (meaning a single "\n" character instead of "\r\n"); however, any proper code editor on Windows should handle this automatically.
### Search
Project-wide code searching is a useful tool for any JavaScript development. Navigating between files by keywords and refactoring are both made easier with an awesome, fast search feature. There's still room to improve the editor's search, so we need your feedback on any ideas to improve the editor!
### Live Preview
Live previewing is a feature so essential, it's barely worth mentioning. However... Spck Editor supports two modes of previewing: in a separate browser tab, as well as an inline iframe for quicker edits-to-preview.
Additionally, previewing also works great in Spck Editor's mobile app. So if you're interested in giving JavaScript development a try on your mobile device, be sure to download our mobile app.
Yay! This is a page dedicated to describing the long, tedious, and complex settings to use the editor. Hence, the name of this section. Nah, I promise it won't be that bad. In fact, settings in Spck Editor are fairly straightforward (hopefully).
### Appearance
* Animation: Enables or disables most CSS animations. Disable editor animations if they are running slow on your device. Results in better performance and faster transitions.
* Theme: Changes the colors of the code editor. Dark and light editor themes are tied to this setting.
### Editor
* Check Syntax: Creates a web worker to check the syntax of JavaScript, CSS, and HTML files. Other languages may or may not be supported for syntax checking.
* Font Size: Changes the font size of the code editor.
* Indent Guides: Adds indent guidelines to the code editor for visual guidance.
* Line Numbers: Toggles the line numbers in the code editor.
* Tab Size: Changes the number of spaces a tab represents in the code editor.
* Word Wrap: Toggles soft line wraps in the code editor.
Note: These settings affect both the main editor as well as the diff editor.
### Touch
* Touch Features: Toggles on and off all touch features. This includes touch primary keyboard, files touch mode, and other touch features for the code editor.
* Extra Touch Keyboard: Toggles the extra touch keyboard which contains key options for common code keywords and symbols.
Note: These settings are only available on devices with a touch screen.
### Preview
* Live Preview: Toggles auto-refreshing in the preview iframe when file changes are made. This feature is not useful in the mobile version.
### About
A section of the settings containing miscellaneous information about the editor. Contains the version information, and a link to report issues with the editor.
There is also an option to revoke privacy permissions granted to the app.
Warning: Revoking privacy information means removing all projects, files, and other user data that the app stores.
Github currently is one of the main ways of exporting projects from the editor. (For other ways of exporting your work, see section on Importing & Exporting) Although other ways of exporting such as to shared drives is planned for the future.
Right now Github is the only git repository provider supported by this editor. This document tries to summarize the git integration that is already available in the editor.
### Table of Contents
* Getting Started
* Branching
* File Diffs
* Committing
## Getting Started
### Mounting a Github Repo
The process of mounting a Github repo has already been covered in the section Import a Github Repo. It will not be covered here; note that it is recommended to add a Github personal token to allow for mounting of bigger repositories as well as to increase your daily usage limits of Github's APIs.
### Pulling Remote Changes
Pulling will download any changes on the remote Github server that the editor is missing. Essentially, this is the process that lets the editor's repository catch up on changes made by others to the remote repository.
To pull changes from the remote repository, open the project's options menu and select the "Pull Changes" option.
If there is a discrepancy between a file's last known state in the remote repo and in the editor, then the file is marked as having a merge conflict. This is covered in more detail in the section Merge Conflicts.
### Merge Conflicts
Sometimes a file has a discrepancy with the remote repository, and the history cannot determine which version is newer. As a result, the file is marked as having a "Merge Conflict". These files will be marked in purple.
Projects with merge conflicts cannot push commits to the remote repository until all merge conflicts are resolved.
Currently the only way to resolve merge conflicts is to manually mark them as resolved by right-clicking on the file and selecting the "Mark Resolved" option.
### Adding a Personal Github Token
Adding a personal Github Token to the app will increase the rate limit usage of Github APIs to 5000 requests per hour. This is more than enough to download most repositories. It is recommended to use a Github personal token (which requires a Github account) if you are mounting repositories with more than ~20 or so files.
To add a Github token, choose the "Add Git Token" option from the menu in the "Projects" tab.
In the prompt that appears you need to enter the Github personal token that you generated, as well as author username and email for commits. If you leave this blank, then the editor defaults will be used instead for the email and username.
To generate the Github token on the website, click the "here" link provided in the "Add Github Token" prompt.
Next click "Generate new token" button on Github's page.
Enter the token name and select all "repo" permissions.
## Branching
### Creating Branches
Currently, creating branches is not possible in the editor. Branches have to be created on Github until this feature is implemented.
### Changing Git Branches
To change the current branch, go to the "Files" tab menu and select the "Switch Branch" option. Alternatively, switching branches can also be done from the "Projects" tab.
This will open up the "Switch Branch" prompt. The project's current branch will be the current selected option. Choose an available branch from the selection menu and click "OK" to proceed.
If there are uncommitted changes in the branch before switching, they will be stored locally. Next time you switch back to the original branch, the changes will be brought back.
## File Diffs
Sometimes you need to see changes between your local repository and the remote repository; this is when a diff viewer is needed. Spck Editor has a built-in diff viewer for files that have changed from the remote repo.
### Viewing Diffs
To view changes that have occurred, select a file marked in yellow or blue depending on if the editor is in "dark" or "light" mode.
This should open up a modal with a unified view of the diffs. Currently, the editor does not offer any editing features in the diff viewer.
## Committing
To make a Github commit, select the "Push Changes" option from the menu in the "Files" tab.
Pushing changes requires a Github token. To add a Github token, please see the section on Adding a Personal Github Token.
If you receive a 401 or 403 error, it means that the token you have provided does not have the permissions necessary to commit to the current repository.
Currently, the only way to import an existing project is through Github or a local zip file. However, integration with additional Git providers and third-party providers, such as Gitlab and Dropbox, is on the roadmap. For now, exporting and importing zip files offers a convenient means of transferring your projects around without having to use Github.
## Import Zip Archive
To import a zip file with your project contained within, select the option "Import Zip File" from the menu in the "Projects" tab.
In the prompt that appears, enter a new project name for the imported zip archive.
After clicking "OK" to continue, the imported files should appear in your new project.
## Export Zip Archive
To export your project select the "Export Zip" option from either the menu in the "Files" or "Projects" tab.
After doing so, enter the name of the zip file that will be saved to your default "Downloads" location.
After clicking "OK", the download for your zip file should begin.
open-cypher
Struct open_cypher::CypherParser
===
```
pub struct CypherParser;
```
Trait Implementations
---
source### impl Parser<Rule> for CypherParser
source#### fn parse<'i>(rule: Rule, input: &'i str) -> Result<Pairs<'i, Rule>, Error<Rule>>
Parses a `&str` starting from `rule`.
Auto Trait Implementations
---
### impl RefUnwindSafe for CypherParser
### impl Send for CypherParser
### impl Sync for CypherParser
### impl Unpin for CypherParser
### impl UnwindSafe for CypherParser
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum open_cypher::Rule
===
```
pub enum Rule {
Cypher,
}
```
Variants
---
### `Cypher`
Trait Implementations
---
source### impl Clone for Rule
source#### fn clone(&self) -> Rule
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Rule
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Hash for Rule
source#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more
1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher,
Feeds a slice of this type into the given `Hasher`. Read more
source### impl Ord for Rule
source#### fn cmp(&self, other: &Rule) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more
1.21.0 · source#### fn max(self, other: Self) -> Self
Compares and returns the maximum of two values. Read more
1.21.0 · source#### fn min(self, other: Self) -> Self
Compares and returns the minimum of two values. Read more
1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Self
Restrict a value to a certain interval. Read more
source### impl Parser<Rule> for CypherParser
source#### fn parse<'i>(rule: Rule, input: &'i str) -> Result<Pairs<'i, Rule>, Error<Rule>>
Parses a `&str` starting from `rule`.
source### impl PartialEq<Rule> for Rule
source#### fn eq(&self, other: &Rule) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
source### impl PartialOrd<Rule> for Rule
source#### fn partial_cmp(&self, other: &Rule) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists. Read more
1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more
1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more
1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more
1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator. Read more
source### impl Copy for Rule
source### impl Eq for Rule
source### impl StructuralEq for Rule
source### impl StructuralPartialEq for Rule
Auto Trait Implementations
---
### impl RefUnwindSafe for Rule
### impl Send for Rule
### impl Sync for Rule
### impl Unpin for Rule
### impl UnwindSafe for Rule
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> RuleType for T where T: Copy + Debug + Eq + Hash + Ord,
Function open_cypher::parse
===
```
pub fn parse(code: &str) -> Result<Pairs<'_, Rule>, Error<Rule>>
```
@d2-plus/cli-plugin-e2e-cypress
@vue/cli-plugin-e2e-cypress
===
> e2e-cypress plugin for vue-cli
This adds E2E testing support using [Cypress](https://www.cypress.io/).
Cypress offers a rich interactive interface for running E2E tests, but currently only supports running the tests in Chromium. If you have a hard requirement on E2E testing in multiple browsers, consider using the Selenium-based [@vue/cli-plugin-e2e-nightwatch](https://github.com/vuejs/vue-cli/tree/dev/packages/%40vue/cli-plugin-e2e-nightwatch).
Injected Commands
---
* **`vue-cli-service test:e2e`**
Run e2e tests with `cypress run`.
By default it launches Cypress in interactive mode with a GUI. If you want to run the tests in headless mode (e.g. for CI), you can do so with the `--headless` option.
The command automatically starts a server in production mode to run the e2e tests against. If you want to run the tests multiple times without having to restart the server every time, you can start the server with `vue-cli-service serve --mode production` in one terminal, and then run e2e tests against that server using the `--url` option.
Options:
```
--headless run in headless mode without GUI
--mode specify the mode the dev server should run in. (default: production)
--url run e2e tests against given url instead of auto-starting dev server
-s, --spec (headless only) runs a specific spec file. defaults to "all"
```
Additionally:
+ In GUI mode, [all Cypress CLI options for `cypress open` are also supported](https://docs.cypress.io/guides/guides/command-line.html#cypress-open);
+ In `--headless` mode, [all Cypress CLI options for `cypress run` are also supported](https://docs.cypress.io/guides/guides/command-line.html#cypress-run).
Examples:
+ Run Cypress in headless mode for a specific file: `vue-cli-service test:e2e --headless --spec tests/e2e/specs/actions.spec.js`
Configuration
---
We've pre-configured Cypress to place most of the e2e testing related files under `<projectRoot>/tests/e2e`. You can also check out [how to configure Cypress via `cypress.json`](https://docs.cypress.io/guides/references/configuration.html#Options).
Environment Variables
---
Cypress doesn't load .env files for your test files the same way as `vue-cli` does for your [application code](https://cli.vuejs.org/guide/mode-and-env.html#using-env-variables-in-client-side-code). Cypress supports a few ways to [define env variables](https://docs.cypress.io/guides/guides/environment-variables.html#) but the easiest one is to use .json files (either `cypress.json` or `cypress.env.json`) to define environment variables. Keep in mind those variables are accessible via `Cypress.env` function instead of regular `process.env` object.
Installing in an Already Created Project
---
```
vue add e2e-cypress
```
Readme
---
### Keywords
* vue
* cli
* e2e-cypress
@walletconnect/eth-provider
WalletConnect ETH Provider
===
ETH Provider for WalletConnect
For more details, read the [documentation](https://docs.walletconnect.org)
Example
---
```
import Web3 from "web3";
import WalletConnectProvider from "@walletconnect/eth-provider";
/**
* Create WalletConnect Provider (qrcode modal will be displayed automatically)
*/
const provider = new WalletConnectProvider({
infuraId: "<INSERT_INFURA_APP_ID>",
});
/**
* Create Web3
*/
const web3 = new Web3(provider);
/**
* Get Accounts
*/
const accounts = await web3.eth.getAccounts();
/**
* Send Transaction
*/
const txHash = await web3.eth.sendTransaction(tx);
/**
* Sign Transaction
*/
const signedTx = await web3.eth.signTransaction(tx);
/**
* Sign Message
*/
const signedMessage = await web3.eth.sign(msg);
/**
* Sign Typed Data
*/
const signedTypedData = await web3.eth.signTypedData(msg);
```
Readme
---
### Keywords
* wallet
* walletconnect
* ethereum
* jsonrpc
* mobile
* qrcode
* web3
* crypto
* cryptocurrency
* dapp
# The univie-ling-expose class
<NAME>
Please report issues via [https://github.com/jsplitz/univie-ling](https://github.com/jsplitz/univie-ling).
Version 2.4, 2023/03/31
###### Abstract
The univie-ling-expose class provides a LaTeX2e class suitable for those research proposals (_Exposés_) that are required in the context of the public presentation of a dissertation project (_FÖP_) at the University of Vienna.1 The class implements some standards for these proposals (such as a suitable title page). It is particularly suited for projects in the field of (Applied) Linguistics and pre-loads some packages that are considered useful in this disciplinary context. The class might also be used for General and Historical Linguistics as well as for other fields of study at Vienna University. In this case, however, some settings might have to be adjusted. This manual documents the class as well as the configuration possibilities.
Footnote 1: [http://doktorat.univie.ac.at/en/doctorate-university-of-vienna/research-proposal/](http://doktorat.univie.ac.at/en/doctorate-university-of-vienna/research-proposal/).
###### Contents
* Aims and scope
* Requirements of univie-ling-expose
* Fonts
* Class Options
* 4.1 Font selection
* 4.2 Polyglossia
* 4.3 Package loading
* 4.4 Draft mode
* 4.5 Further options
* General settings
* 5.1 Author-related data
* 5.2 Project-related data
* Semantic markup
* Linguistic examples and glosses
* 8.1 Default bibliography style (_Unified Style for Linguistics_)
* 8.2 Using APA/DGPs style
* 8.3 Using a different style
* Further instructions
* sourcecodepro: Default monospaced font (_Source Code Pro_).
* biblatex: Contemporary bibliography support.
* caption: Caption layout adjustments.
* covington: Support for linguistic examples/glosses.
* fontenc: Set the font encoding for PostScript fonts. Loaded with option **T1**.
* microtype: Micro-typographic adjustments.
* prettyref: Verbose cross-references.
* varioref: Context-sensitive cross references.
The following packages are required for optional features (not used by default):
* biblatex-apa: APA style for biblatex.
* draftwatermark: Create a draft mark.
* fontspec: Load OpenType fonts (with LuaTeX or XeTeX).
* polyglossia: Multi-language and script support.
## 3 Fonts
The class uses, by default, PostScript (a. k. a. Type 1) fonts and thus requires classic (PDF)LaTeX. Optionally, however, you can also use OpenType fonts via the fontspec package and the XeTeX or LuaTeX engine instead. In order to do this, use the class option **fonts=otf** (see sec. 4 for details).
In both cases, the class uses by default _Times New Roman_ as a serif font, _Arial_ (or, alternatively, _Helvetica_) as a sans serif font, and _Source Code Pro_ as a monospaced (typewriter) font. Note that _Arial_ (PostScript) font support is not included in most LaTeX distributions by default, due to license reasons. You can install it easily via the getnonfreefonts script.4 If _Arial_ is not installed, however, _Helvetica_ (which is part of the LaTeX core packages) is used as a fallback. This is usually a suitable alternative, since _Arial_ and _Helvetica_ only differ in subtle details. If you use **fonts=otf**, you just have to make sure that you have the fonts _Arial_, _Times New Roman_ and _Source Code Pro_ installed on your operating system (with exactly these names, i. e., fonts named _Arial Unicode MS_ or _Times_ will not be recognized!).
Footnote 4: [https://www.tug.org/fonts/getnonfreefonts](https://www.tug.org/fonts/getnonfreefonts) <25.01.2017>.
Note that by default, with PostScript fonts, univie-ling-expose also loads the fontenc package with T1 font encoding, but this can be customized (see sec. 4 for details).
If you want (or need) to load all fonts and font encodings manually, you can switch off all automatic loading of fonts and font encodings by the class option **fonts=none** (see sec. 4).
## 4 Class Options
The univie-ling-expose class provides a range of key=value type options to control the font handling, package loading and some specific behavior. These are documented in this section.
### Font selection
As elaborated above, the package supports PostScript fonts (via LaTeX and PDFLaTeX) as well as OpenType fonts (via XeTeX and LuaTeX). PostScript is the traditional LaTeX font format. Specific LaTeX packages and metrics files are needed to use the fonts (but all fonts needed to use this class should be included in your LaTeX distribution and thus ready to use). OpenType fonts, by contrast, are taken directly from the operating system. They usually provide a wider range of glyphs, which might be a crucial factor for a linguistic paper. However, they can only be used by newer, partly still experimental TeX engines such as XeTeX and LuaTeX.
The class provides the following option to set the font handling:
* **fonts=ps|otf|none**: If **ps** is selected, PostScript fonts are used (this is the default and the correct choice if you use LaTeX or PDFLaTeX); if **otf** is selected, OpenType fonts are used: the class loads the fontspec package, sets _Times New Roman_ as main font and _Arial_ as sans serif font (this is the correct choice if you use XeTeX or LuaTeX; make sure you have the respective fonts installed on your system); if **none** is selected, finally, the class will not load any font package at all, and neither fontenc (this choice is due if you want to control the font handling completely yourself).

With PostScript fonts, univie-ling-expose also loads the fontenc package with T1 font encoding, which is suitable for most Western European (and some Eastern European) writing systems. In order to load different, or more, encodings, the class option **fontenc=<encoding(s)>** can be used (e. g., **fontenc={T1,X2}**). With **fontenc=none**, the loading of the fontenc package can be prevented. The package is also not loaded with **fonts=none**.
### Polyglossia
If you need polyglossia rather than babel for language support, please do not use the package yourself, but rather use the package option **polyglossia=true**. This assures correct loading order. This also presets **fonts=otf**.
### Package loading
Most of the extra features provided by the class can be switched off. This might be useful if you do not need the respective feature anyway, and crucial if you need an alternative package that conflicts with one of the preloaded package.
All following options are true by default. They can be switched off one-by-one via the value false, or altogether, by means of the special option all=false. You can also switch selected packages on/off again after this option (e. g., all=false,microtype=true will switch off all packages except microtype).
* **apa=true|false**: If true, the biblatex-apa style is used when biblatex is loaded. By default, the included univie-ling style is loaded instead. See sec. 8 for details.
* **biblatex=true|false**: If false, biblatex is not loaded. This is useful if you prefer BibTeX over biblatex, but also if you neither want to use the preloaded univie-ling style nor the alternative biblatex-apa style (i. e., if you want to load biblatex manually with different options). See sec. 8 for details.
* **caption=true|false**: If false, the caption package is not loaded. This affects the caption layout.
* **covington=true|false**: If false, covington is not loaded. Covington is used for numbered examples.
* **microtype=true|false**: If false, microtype is not loaded.
* **ref=true|false**: If false, both prettyref and varioref are not loaded and the string (re)definitions of the class (concerning verbose cross references) are omitted.
### Draft mode
The option draftmark=true|false|firstpage allows you to mark your document as a draft, which is indicated by a watermark (including the current date). This might be useful when sharing preliminary versions with your supervisor. With draftmark=true, this mark is printed on top of each page. With draftmark=firstpage, the draft mark appears on the title page only.
### Further options
The class builds on scrartcl (KOMA article), which provides many more options to tweak the appearance of your document. You can use all these options via the \KOMAoptions macro. Please refer to the KOMA-Script manual [4] for details.
## 5 General settings
In this section, it is explained how you can enter some general settings, in particular the information that must be given on the title page. The title page, following the model given in the university guidelines for theses, is automatically set up by the \maketitle command, given that you have specified the following data in the preamble.
### Author-related data
* **author**{<name>}: Name of the proposal's author.
* **studienkennzahl**{<code>}: The degree programme code (_Studienkennzahl_) as it appears on the student record sheet, e. g. A 792 327.
* **studienrichtung**{<field of study>}: Your degree programme (_Studienrichtung_) or field of study (_Dissertationsgebiet_) as it appears on the student record sheet, e. g. _Sprachwissenschaft_.
### Project-related data
* **vtitle**{<title>}: Title of the dissertation project.
* **subtitle**{<subtitle>}: Subtitle of the dissertation project.
* **supervisor**{<name>}: Title and name of your (main) supervisor.
* **cosupervisor**{<name>}: Title and name of your co-supervisor.
* **advisor**{<name>}: Title(s) and name(s) of the advisory board members (_Doktoratsbeirat_).
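A minimal preamble sketch using these commands (macro names follow the two lists above; all values are placeholders, not real data):
```
\documentclass{univie-ling-expose}
\author{Maxi Musterfrau}
\studienkennzahl{A 792 327}
\studienrichtung{Sprachwissenschaft}
\vtitle{A Working Title of the Dissertation Project}
\supervisor{Univ.-Prof. Dr. N. N.}
\begin{document}
\maketitle
\end{document}
```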
## 6 Semantic markup
The class defines some basic semantic markup common in linguistics:
* **\Expression{<text>}**: To mark expressions (object language). Typeset in _italics_.
* **\Concept{<text>}**: To mark concepts. Typeset in small capitals.
* **\Meaning{<text>}**: To mark meaning. Typeset in 'single quotation marks'.
You can redefine each of these commands, if needed, like this:
* \renewcommand{\Expression}[1]{\textit{#1}}
* \renewcommand{\Concept}[1]{\textsc{#1}}
* \renewcommand{\Meaning}[1]{\enquote{#1}}
## 7 Linguistic examples and glosses
The class automatically loads the covington package which provides macros for examples and glosses. Please refer to the covington manual [1] for details.
## 8 Bibliography
### Default bibliography style (_Unified Style for Linguistics_)
By default, the univie-ling-expose class loads a bibliography style which matches the conventions that are recommended by the Applied Linguistics staff of the department.5 These conventions draw on the _Unified Style Sheet for Linguistics_ of the LSA (_Linguistic Society of America_), a style that is also quite common in General Linguistics nowadays. In order to conform to this style, the univie-ling-expose class uses the biblatex package with the univie-ling style that is included in the univie-ling-expose package.
If you are in Applied Linguistics, using the default style is highly recommended. The style recommended until 2017, namely APA/DGPs, is also still supported, but its use is no longer encouraged; see sec. 8.2 for details. If you want/need to use a different style, please refer to section 8.3 for instructions.
### Using APA/DGPs style
Until 2017, rather than the Unified Style, the Applied Linguistics staff recommended conventions that drew on the citation style guide of the APA (_American Psychological Association_) and its adaptation for German by the DGPs (_Deutsche Gesellschaft für Psychologie_).
For backwards compatibility reasons, this style is still supported (though not recommended). You can enable it with the package option **apa=true**.
If you want to use APA/DGPs style, consider the following caveats.
* For full conformance with the APA/DGPs conventions (particularly with regard to the rather tricky handling of "and" vs. "&" in- and outside of parentheses), it is mandatory that you adequately use the respective biblatex(-apa) citation commands: Use \textcite for all inline citations and \parencite for all parenthesized citations (instead of manually wrapping \cite in parentheses); see the short example after this list. If you cannot avoid manually set parentheses that contain citations, use \nptextcite (a biblatex-apa-specific command) inside them.6 For quotations, it is recommended to use the quotation macros/environments provided by the csquotes package (which is preloaded by univie-ling-expose anyway); the univie-ling-expose class assures that citations are correct if you use the optional arguments of those commands/macros in order to insert references. Footnote 6: Please refer to [5] and [2] for detailed instructions.
* The biblatex-apa style automatically lowercases English titles. This conforms to the APA (and DGPs) conventions, which favour "sentence casing" over "title casing". English titles, from biblatex's point of view, are titles of bibliographic entries that are either coded as **english** via the **LangID** entry field or that have no **LangID** coding but appear in an English document (i. e., a document with main language English). Consequently, if the document's main language is English, all non-English entries need to be linguistically coded (via **LangID**) in order to prevent erroneous lowercasing, since biblatex assumes that non-identified entries use the main language (hence, such a classification is also important for correct hyphenation of the entries). Note that up to biblatex 3.3, the document language was not taken into account by the lowercasing automatism and all non-classified entries were treated like English entries (and thus lowercased), notwithstanding the main language; therefore, any entry needed to be coded. Even if this misbehaviour is fixed as of biblatex 3.4, it is still advisable to systematically set the proper **LangID**, since this is a prerequisite for a correct multilingual bibliography.
* The lowercasing automatism described above cannot deal properly with manual punctuation inside titles. Hence, a title such as _Maintitle. A subtitle_ will come out as _Maintitle. a subtitle_. There are several ways to avoid that. The most proper one is to use the title and subtitle fields rather than adding everything to title. Alternatively, everything that is nested inside braces will not get lowercased, i. e. Maintitle. {A} subtitle will produce the correct result. This trick is also needed for names and other elements that should not get lowercased (Introduction to {Germanic} linguistics). However, please do not configure your BibTeX editor to generally embrace titles (this is a feature provided by many editors), since this will prevent biblatex-apa from lowercasing at places where it should be done.
* The biblatex-apa style requires that you use biber as a bibliography processor instead of bibtex (the program). See [3] for details.
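For illustration, the two central citation commands mentioned in the first item above are used like this (the citation key is a placeholder):

```
\textcite{smith2010} argues that ...            % inline: Smith (2010) argues that ...
This has been questioned \parencite{smith2010}. % parenthesized: (Smith, 2010)
```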
### Using a different style
If you do not want to, or are not supposed to, use either the default Unified or the APA/DGPs style, you can disable automatic biblatex loading via the class option **biblatex=false** (see sec. 4.3). In this case, you will need to load your own style manually, by entering the respective biblatex or BibTeX commands.
One case where you need to do that is if you prefer classic BibTeX over biblatex. If you want to follow the Applied Linguistics conventions, but prefer classic BibTeX over biblatex, a BibTeX style file unified.bst that implements the _Unified Style Sheet for Linguistics_ is available on the Internet.7 Note, though, that this package does not have specific support for German, so it is only really suitable if you write in English. Thus, if you want to follow the Applied Linguistics conventions, it is strongly recommended that you use biblatex with the preloaded univie-ling style.
Footnote 7: [http://cclxj.org/downloads/unified.bst](http://cclxj.org/downloads/unified.bst)
## 9 Further instructions
### Commands and environments
Since the class draws on scrartcl, you can use all commands and environments provided by KOMA article in order to structure and typeset your document. Please refer to the comprehensive KOMA-Script manual [4] for information.
Please also refer to the template files included in the package for some further usage instructions and hints.
### LyX layouts and templates
A layout for LyX8 can be retrieved from [https://github.com/jspitz/univie-ling/raw/master/lyx/layx/layouts/univie-ling-expose.layout](https://github.com/jspitz/univie-ling/raw/master/lyx/layx/layouts/univie-ling-expose.layout).
Footnote 8: See [https://www.lyx.org](https://www.lyx.org).
Templates are provided as well:
* English template: [https://github.com/jspitz/univie-ling/raw/master/lyx/templates/template-univie-ling-expose-english.lyx](https://github.com/jspitz/univie-ling/raw/master/lyx/templates/template-univie-ling-expose-english.lyx)* German template: [https://github.com/jsplitz/univie-ling/raw/master/lyx/templates/template-univie-ling-expose-deutsch.lyx](https://github.com/jsplitz/univie-ling/raw/master/lyx/templates/template-univie-ling-expose-deutsch.lyx)
## 10 Release History
2023/03/31 (v. 2.4)
* No change to this class.
2023/01/26 (v. 2.3)
* No change to this class.
2022/12/06 (v. 2.2)
* Fix boolean option parsing.
2022/10/21 (v. 2.1)
* Fix **polyglossia** option.
2022/10/02 (v. 2.0)
* Use l3keys rather than xkeyval for key-value option handling.
* Fix some varioref definitions.
* Use translator instead of translations for localization.
* Various small class cleanups.
2022/09/08 (v. 1.20)
* Load varioref AtBeginDocument.
2022/06/18 (v. 1.19)
* Add option **fontenc**.
* Fix translation of English example string.
* Add monospaced font.
2022/05/11 (v. 1.18)
* Fix error with subtitles.
2022/02/05 (v. 1.17)
* Fix option **apa**.
* Omit quotation marks when title is empty.
2021/11/03 (v. 1.16)
* Add option **draftmark**. See sec. 4.4.
* Fix grouping in **maketitle**.
2021/10/19 (v. 1.15) No change to this class.
2021/09/01 (v. 1.14) No change to this class.
2020/11/11 (v. 1.13) No change to this class.
2020/06/25 (v. 1.12) No change to this class.
2020/05/05 (v. 1.11) New option **polyglossia**.
2020/05/01 (v. 1.10) No change to this class.
2019/01/21 (v. 1.9) No change to this class.
2019/01/15 (v. 1.8) No change to this class.
2018/11/07 (v. 1.7) No change to this class.
2018/11/04 (v. 1.6) Remove **subexamples** environment as this is now provided by covington.
2018/09/03 (v. 1.5) Introduce **subexamples** environment.
2018/04/26 (v. 1.4) Fix full date issue in biblatex bibliography style.
2018/03/02 (v. 1.3) No change to this class.
2018/02/13 (v. 1.2) No change to this class.
2018/02/11 (v. 1.1) No change to this class.
2018/02/08 (v. 1.0)
* Switch default bibliography style (from APA to Unified).
* Initial release to CTAN.
2016/05/05 (v. 0.7)
* Fix comma after _et al._ with biblatex-apa.
2016/04/30 (v. 0.6)
* Set proper citation command for csquotes' integrated environments.
* Improve templates.
2016/03/23 (v. 0.5)
* Fix the output of German multi-name citations (DGPs guidelines).
* Extend documentation of bibliographic features.
2016/01/29 (v. 0.4) Initial release.
## References
* [1] Covington, <NAME>. and <NAME>: _The covington Package. Macros for Linguistics_. September 7, 2018. [http://www.ctan.org/pkg/covington](http://www.ctan.org/pkg/covington).
* [2] <NAME>: _APA BibTeX style. Citation and References macros for BibTeX_. March 3, 2016. [http://www.ctan.org/pkg/biblatex-apa](http://www.ctan.org/pkg/biblatex-apa).
* [3] <NAME> and <NAME>: _Biber. A backend bibliography processor for biblatex_. March 6, 2016. [http://www.ctan.org/pkg/biber](http://www.ctan.org/pkg/biber).
* [4] <NAME> (2015): KOMA-Script. The Guide. URL: [http://www.ctan.org/pkg/koma-script](http://www.ctan.org/pkg/koma-script).
* [5] <NAME> (with <NAME>, <NAME> and <NAME>): _The biblatex Package. Programmable Bibliographies and Citations_. March 3, 2016. [http://www.ctan.org/pkg/biblatex](http://www.ctan.org/pkg/biblatex). |
FreeRails 0.4.0 documentation
FreeRails’s documentation![¶](#freerails-s-documentation)
===
FreeRails is a real-time, multi player railway strategy/management game where players compete to build the most powerful railroad empire. It is based on the RailRoad Tycoon I and II games.
Introduction[¶](#introduction)
---
Player Manual[¶](#player-manual)
---
### How to play[¶](#how-to-play)
1. Look at what different tiles supply and demand by moving the cursor around and pressing ‘I’.
2. Pick two cities that each have at least 2 city or 4 village tiles. Ideally, the cities should be 10-20 tiles apart.
3. Build track between the cities (by using the number pad keys with num lock on or by dragging the mouse).
4. Build stations at each of the cities (press F8). Make sure the station radius surrounds most of the city tiles.
5. Build a train (press F7). Don’t add any wagons (they will be selected automatically).
### Game controls[¶](#game-controls)
#### On the map view[¶](#on-the-map-view)
* Build Track: Mouse or number pad with Num Lock down
* Move Cursor: Hold [control] and use Arrow Keys
* Call Broker: ‘M’
* Save Game: [control] + ‘S’
* Load Game: [control] + ‘L’
* Build Station: ‘F8’ (on existing track)
* Build Train: ‘F7’ (after building a station)
* Terrain Info: ‘I’
* Build industry: ‘B’
#### On the train orders screen[¶](#on-the-train-orders-screen)
* Goto Station: ‘G’
* Change station: ‘S’
* Automatic schedule: ‘A’
* Remove station: ‘Delete’
* Toggle wait until full: ‘W’
* Remove last wagon: ‘Back space’
* Add station: ‘N’
* Set priority orders: ‘O’
#### On the select station screen[¶](#on-the-select-station-screen)
* Change selection: Mouse over or number pad
* Accept selection: Click mouse or ‘Enter’
Specification[¶](#specification)
---
### Sid Meier’s Railroad Tycoon (1990)[¶](#sid-meier-s-railroad-tycoon-1990)
This is a summary of the game model of the original Railroad Tycoon (RRT).
#### Start condition[¶](#start-condition)
* $1 million (half in equity, half as loan)
* Lay track
* Build stations
* Buy trains
* Schedule trains
* Sell, buy bonds
* Build additional industries
* Buy or sell shares
#### Stations[¶](#stations)
Max. 32 stations per player. The first station gets an engine shop for free. Stations can be up- or downgraded.
Stations built or rebuilt during an accounting period pay double for freight.
* Signal Tower
* Depot (3x3 squares influence area)
* Station (4x4 squares influence area)
* Terminal (5x5 squares influence area)
#### Station improvements[¶](#station-improvements)
* Engine shop
* Store
* Hotel
* Switching yard
* maintenance shop
* cold storage
* livestock pens
* goods storage
* post office
* restaurant
#### Trains[¶](#trains)
Max. 32 trains per player.
* Change "consist" (number and type of cars/wagons)
* Types include: mail, passenger, freight
##### Engines[¶](#engines)
* 0-4-0 Grasshopper
* 4-2-0 Norris
* 4-4-0 American
* 2-6-0 Mogul
* 4-6-0 Ten-Wheeler
* 2-8-0 Consolidation
* 4-6-2 Pacific
* 2-8-2 Mikado
* 2-6-6-4 Mallet (Challenger class)
* F3-A Diesel-Electric
* EMD GP Diesel-Electric
* 2-2-0 Planet
* 2-2-2 Patentee
* 4-2-0 Iron Duke
* 0-6-0 DX Goods
* 4-2-2 Stirling
* 0-8-0 Webb Compound
* 4-2-2 <NAME>land Spinner
* 4-4-0 <NAME>
* 4-6-2 A1 Class
* 4-6-2 A4 Class
* 6/6 GE Class Crocodile
* 1-Do-1 Class E18
* 4-8-4 242 A1
* V200 B-B
* Bo-Bo-Bo RE Class 6/6
* TGV
##### Cars[¶](#cars)
* mail
* passenger
* beer
* livestock
* goods
* hops
* textiles
* steel
* chemicals
* cotton
* coal
#### Supply and demand[¶](#supply-and-demand)
#### Stock market[¶](#stock-market)
#### Time[¶](#time)
* 100 years running time (starting year depends on the scenario)
* Accounting period two years long
#### Difficulty levels[¶](#difficulty-levels)
The chosen difficulty level remains in effect for the whole duration of a game.
Levels are “Investor”, “Financier”, “Mogul”, “Tycoon”. The level of difficulty affects revenue earned by each delivery as well as the tycoon rating at the end.
#### Reality levels[¶](#reality-levels)
* “No Collision Operation/Dispatcher Operation” In the dispatcher operation, the movement of trains is controlled by block signals and collisions are possible.
* “Friendly competition/Cut-Throat competition” In friendly competition, they do not buy your stock, attempt to take you over or start rate wars.
* “Basic economy/Complex economy”
#### Economy[¶](#economy)
* Simple: Stations with two or more cities will buy everything
* Complex: Specialized rules
#### Other features[¶](#other-features)
* If the share price of a competing railroad falls below $5 and stays there for too long, it can be dissolved and be removed from the game
* For each bankruptcy that the player declares, the interest for selling new bonds is increased by 1%. After a certain number of bankruptcies, the player is unable to sell any bonds.
* Each car that is placed on a train costs $5,000. When the consist changes, one is only charged if the total number of cars increases.
* There is a Find City option
#### Supplied scenarios[¶](#supplied-scenarios)
* Western United States
* Northeast United States
* Great Britain
* Continental Europe
#### Links[¶](#links)
* <https://en.wikipedia.org/wiki/Railroad_Tycoon>

### Railroad Tycoon II (1998)[¶](#railroad-tycoon-ii-1998)
This is a summary of the game model of the Railroad Tycoon 2 (RRT2).
This specification describes the various elements of the game and their interactions.
It may not discuss in detail how features will be implemented.
### Introduction[¶](#introduction)
Let’s imagine a few real life stories of how actual (stereotypical) people would use it.
Scenario 1: Jeff.
Aged 38, splits his time between contract web development work and (real) surfing (i.e. He’s an aging hippy).
Played RT1, disliked RT2 and RT3. Sceptical about the quality of open source software.
Scenario 2: Andy.
Young programmer, 19, rides a moped. First language not English. Never played RT1 but has played RT2 and RT3.
When software doesn’t work or works in a way he doesn’t expect, he is quick to blame the software.
Wants to get involved in the development of jfreerails and to write his own AI.
Scenario 3: Claudia.
Recently left university and started working for the government (as an economist). Never played any of the RT games before.
Wants something to play on her computer to distract her from her research.
*Non Goals*
Realism - it’s a game not a simulation.
### User interface[¶](#user-interface)
All dialogs and screens should have a help button showing localized help. The help is just a locally stored html page.
#### Start screen[¶](#start-screen)
Must enable the following actions
* change preferences (modal dialog)
* start the scenario editor (full screen)
* exit the game
* show help information (modal dialog)
* start playing a game (modal dialog)
#### Start playing dialog[¶](#start-playing-dialog)
Must enable the following actions
* start a new local game
* continue a stored local game
* connect to an external server and participate in a network game hosted on that server
* host network game (and participate in it)
* close the dialog
### Game Model[¶](#game-model)
#### Time[¶](#time)
Each year is a representative day. So, for instance, a train travelling at an average speed of 10 mph covers roughly 240 miles per game year (24 hours x 10 mph) and will therefore make two (and a bit) 100-mile trips in a year.
#### Map[¶](#map)
Two maps will be available: a map of South America, used for proper games, and a small map, used for a tutorial.
The maps are divided into square tiles. Tiles are 10 miles across. Rather than let the scale vary between maps of different sized regions, different sized maps should be used.
Note
The exact shape of mountain ranges and the distribution of terrain types will vary between games.
#### Terrain[¶](#terrain)
##### Terrain Type[¶](#terrain-type)
Each tile on the map has a ‘terrain type’ e.g. Farm, desert etc.
* Category: All terrain types fall into one of the following categories: Urban, River, Ocean, Hill, Country, Special, Industry, or Resource.
* Cargo production: Some types of terrain produce cargo (of one or more types), e.g. a Cattle Ranch produces livestock.
* Cargo consumption: Some types of terrain consume cargo, e.g. 'City' tiles consume 'Food'.
* Right-of-way cost: Before you build track on a square, you need to purchase the right of way. Different terrain types have different ROW costs.
##### Terrain Heights[¶](#terrain-heights)
Different terrain types will have different heights, I’ll call this the type-height.
The height of a tile will be the weighted average of the type-heights of the surrounding tiles.
#### Cargo[¶](#cargo)
There are a number of cargo types, e.g. Mail, Passengers, Livestock etc. Cargo types fall into one of the following categories:
Mail, Passengers, Fast Freight, Slow Freight, Bulk Freight. Mail is the most sensitive to speed of delivery; Bulk Freight is the least.
#### Cities[¶](#cities)
Cities of random size are added at predefined locations on the map when the game starts.
As time passes, cities grow and shrink depending on the amount of cargo picked up and delivered.
#### Track[¶](#track)
There are a number of track types, e.g. standard track, double track, tunnel etc. A tile can only have track of one type.
You can only build track if you have sufficient cash. The cost of building track depends on the track type,
the track configuration, and the right of way cost of the terrain. Some track types can only be built on certain terrain,
e.g. an Iron Girder bridge can only be built on river. If some track has already been built, all new track must connect to the existing track.
Track can be upgraded. The cost of upgrading track from type X to type Y is less than the cost of building track of type Y but more than the cost of Y minus the cost of X. Track can be removed. When you remove track you get a small amount of money back.
Different track types have different maintenance costs. The maintenance cost must be paid once per year.
##### Bridges[¶](#bridges)
| Name | Price | Train Speed | Track Type |
| --- | --- | --- | --- |
| Wooden Trestle | $50,000 | slow | single |
| Iron Girder | $200,000 | fast | single |
| Stone Masonry | $400,000 | fast | double |
##### Track Contention[¶](#track-contention)
On single (double) track, only one (two) train(s) can move at a time. Gridlock shouldn’t occur because when trains stop moving they do not block other trains.
#### Trains[¶](#trains)
Once you have built some track and a station you can build a train. You get a choice of engines and the option to add up to 6 wagons. You can build trains even if you have cash < 0.
(This is to stop people getting stuck without any trains since you need trains to make money.)
Trains can be scheduled to stop at 2 or more stations.
The pseudo code below describes the behaviour of trains.:
```
if(train is moving){
if(train has reached a new tile){
if (there is a station here){
unload cargo demanded by station
if(this is a scheduled stop){
if(consist needs changing){
unload cargo that won't fit after changing consist
change consist
}
}
load cargo
setStatus(stoppedAtStation)
departTime = currentTime + stopTime
}else{
setStatus(readyToMove)
}
}else{
keep moving
}
} else if(train is at station){
if(waiting for full load){
load any cargo
if(full){
update next station on schedule
if(currentTime > departTime){
setStatus(readyToMove)
}
}
}else{
if(currentTime > departTime){
if(this was a scheduled stop){
update next station on schedule
}
setStatus(readyToMove)
}
}
}
if(train status is readyToMove){
find next track section
if(number of trains on next track section < number of tracks){
setStatus(moving)
move train onto next track section
}
}
```
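The following compact Java sketch restates the core transitions from the pseudocode above; the type and method names are illustrative only and are not part of the actual FreeRails code base.

```
// Illustrative only: a minimal rendering of the train status transitions above.
enum TrainStatus { MOVING, STOPPED_AT_STATION, READY_TO_MOVE }

final class TrainDispatchExample {

    /** Track contention rule: a train may only enter the next track section
     *  while fewer trains occupy it than it has tracks (1 single, 2 double). */
    static boolean mayEnterNextSection(int movingTrainsOnSection, int numberOfTracks) {
        return movingTrainsOnSection < numberOfTracks;
    }

    /** Departure rule for a train stopped at a station (not waiting for a full load). */
    static TrainStatus afterTick(TrainStatus current, double currentTime, double departTime) {
        if (current == TrainStatus.STOPPED_AT_STATION && currentTime > departTime) {
            return TrainStatus.READY_TO_MOVE;
        }
        return current;
    }
}
```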
##### Engines[¶](#engines)
Two types of engine are available when you start the game. Three other types become available later.
A train’s engine can be upgraded.
##### Wagons[¶](#wagons)
There is one wagon type for each cargo type. A wagon can carry 40 units of cargo. Wagons are free and can be added at any station (since moving empty wagons about would be boring).
##### Train Schedules[¶](#train-schedules)
The stations a train stops at and whether it changes its wagons when it stops are governed by a train’s schedule.
##### Train Movement[¶](#train-movement)
The more wagons a train is pulling, the slower it moves. The greater the amount of cargo, the slower the train.
The gradient of the track also affects speed. When a train arrives at a station, it stops for a few moments to load and unload cargo. Trains stop instantly (this is a simplification so we don't need to look ahead) but speed up slowly.
Path finding is automatic. That means that trains take the quickest perceived route, with priority trains having priority.
(This may change at some point with the player being able to choose between several possible routes between two stations.
It would be implemented using way points, maybe even changing routes depending on the traffic.)
#### Stations[¶](#stations)
Supply and demand at a station is determined by the tile types within the station’s sphere-of-influence. Different station types have different sized spheres-of-influence. The spheres-of-influence of two stations cannot overlap.
##### Station Improvements[¶](#station-improvements)
| Improvement Type | Effect | Price |
| --- | --- | --- |
| Engine Shop | Trains can be built | $100,000 |
| Switching Yard | Cuts time taken to change wagons by 75% | $50,000 |
| Maintenance Shop | Cuts yearly maintenance of trains that stop at the station by 75% | $25,000 |
| Cargo Storage | Prevents a certain cargo from wasting away | variable |
| Revenue Booster | Increases revenue from a cargo X by Y% | variable |
#### Economy[¶](#economy)
The economy cycles through 5 ordered states; the state can only change to an adjacent state in that order.

| Economic Climate | Base Interest Rate | Effect on track price |
| --- | --- | --- |
| Boom | 2% | +33% |
| Prosperity | 3% | +17% |
| Moderation | 4% | - |
| Recession | 5% | -17% |
| Panic | 6% | -33% |
##### Stocks and Bonds[¶](#stocks-and-bonds)
The value of a bond is $500,000.
The interest rate for new bonds = (base interest rate) + (number of outstanding bonds). Bonds can only be issued if this figure is <= 7.
New railroads issue 1,000,000 shares at $5 per share.
Shares are traded in 10,000 share bundles.
If the share price is >$100 at the end of the year, stocks are split 2 for 1.
Stock price = [Net worth at start of year + 5 * profit last year] / [ shares owned by public + 0.5 shares owned by other players]
Let profit last year = 100,000 in the first year.
When a player buys or sells shares, the price used is the price calculate after the shares have changed hands.
A transaction fee $25,000 applies each time a bond is issued or repaid and each time a bundle of shares is bought or sold.
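As a worked example of the stock price formula above, the following Java sketch (class name and figures are illustrative, not taken from the game code) computes the price of a newly founded railroad:

```
final class StockPriceExample {

    /**
     * price = (netWorthAtStartOfYear + 5 * profitLastYear)
     *         / (sharesOwnedByPublic + 0.5 * sharesOwnedByOtherPlayers)
     */
    static double stockPrice(double netWorthAtStartOfYear, double profitLastYear,
                             long sharesOwnedByPublic, long sharesOwnedByOtherPlayers) {
        return (netWorthAtStartOfYear + 5 * profitLastYear)
                / (sharesOwnedByPublic + 0.5 * sharesOwnedByOtherPlayers);
    }

    public static void main(String[] args) {
        // A newly founded railroad: 1,000,000 shares issued at $5 (assumed net
        // worth of $5,000,000) and the first-year default profit of $100,000.
        double price = stockPrice(5_000_000, 100_000, 1_000_000, 0);
        System.out.printf("Stock price: $%.2f%n", price); // prints $5.50
    }
}
```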
##### Competition between Railroads[¶](#competition-between-railroads)
You can take over a rival by buying over 50% of its stock. When you have done this, you have indirect control over the other railroad. You can transfer money between the two railroads, tell the other railroad where to build track to next,
and tell the other railroad to repay its bonds.
If you go on to buy 100% of the stock, you have the option to merge with the other railroad. If you do this,
you gain complete control over the other railroad. I.e. the other railroad’s track, trains, and stations are added to your railroad. Once a merger has taken place, there is no way to undo it.
#### Non player effects on the game model[¶](#non-player-effects-on-the-game-model)
##### City Growth and Decay[¶](#city-growth-and-decay)
As time passes, Urban (e.g. Village, City), Industry (e.g. Factory, Steel Mill), and Resource (e.g. Coal Mine, Sugar Plantation) tiles are added to and removed from cities. For example, a factory tile will relocate from New York to Boston if Boston's utility gain exceeds New York's utility loss. Utilities are calculated as follows. The routine that updates cities should run once per month.
| Category | Marginal utility | Motivation |
| --- | --- | --- |
| Industry | Number of Urban Tiles / (1 + Number of Industry Tiles)^2 | Urban tiles supply labour to industries. There are decreasing returns to scale. |
| Resource | (Units of Resource Picked Up + c) / (1 + Number of Resource Tiles) | Resources grow when they are exploited. There are decreasing returns to scale. |
| Urban | Units of demanded cargo delivered - k * Number of Urban Tiles / (1 + Number of Industry Tiles) | Urban tiles value employment and delivery of cargo but are averse to overcrowding. |
Industries owned by Railroads do not enter the utility calculations, so when you build an industry, it stays put!
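The marginal-utility formulas above translate directly into code. The following Java sketch is illustrative only; the constants c and k are not specified in this document, so the values here are pure placeholders.

```
final class CityUtilityExample {

    // Placeholder constants; the document does not specify values for c and k.
    private static final double C = 1.0;
    private static final double K = 1.0;

    static double industryUtility(int urbanTiles, int industryTiles) {
        return urbanTiles / Math.pow(1 + industryTiles, 2);
    }

    static double resourceUtility(double unitsOfResourcePickedUp, int resourceTiles) {
        return (unitsOfResourcePickedUp + C) / (1 + resourceTiles);
    }

    static double urbanUtility(double unitsOfDemandedCargoDelivered, int urbanTiles, int industryTiles) {
        return unitsOfDemandedCargoDelivered - K * urbanTiles / (1 + industryTiles);
    }
}
```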
##### Payments for delivering Cargo[¶](#payments-for-delivering-cargo)
##### Addition and removal of cargo at stations[¶](#addition-and-removal-of-cargo-at-stations)
#### User Interface[¶](#user-interface-1)
##### Main Window[¶](#main-window)
The main window has a menu bar, the world view, the mini map, and a tabpane.
Scheme of main window.
The GUI components should display properly when the main window is 640 * 480 pixels or bigger. The table below shows the dimensions of the components in terms of the width (W) and height (H) of the main window. The figures do not include space taken up by borders, scroll bars, tabs, menus etc.

| Component | Width | Height |
| --- | --- | --- |
| Minimap | 200 | 200 |
| Tab's Content | 195 | H - 300 |
| World View | W - 230 | H - 70 |
Pressing the tab key toggles keyboard focus between the world view window and the tabpane’s content.
##### Menu bar[¶](#menu-bar)
Game - New Game | Game Speed (Paused, Slow, Moderate, Fast) | Save Game | Load Game | Exit Game
Build - New train | Industry | Improve Station
Display - Regional Display | Area | Detailed Area | Options
Reports - Balance Sheet | Income Statement | Networth Graph | Stocks | Leaderboard | Accomplishments
Broker - Call Broker
Help - Controls | Quick Start | Manual | Report Bug | About
##### World View Window[¶](#world-view-window)
The world can be displayed at 4 zoom levels:
* Local detailed: 30x30 px
* Local: 15x15 px
* Network: Scaled so that all the player’s stations are visible, Shows trains, stations, and track but not geography
* Regional: Scaled so the whole map is visible, Minimap hidden
##### Cursor[¶](#cursor)
The cursor can be in one of the following modes. The cursor should only be visible if the world view has keyboard focus.
The cursor’s appearance should indicate which mode it is in.
The initial cursor position is 0,0. However, if a game is loaded or a new game is started and the map size is the same as the last map size, then the cursor should take the position it had on the last map.
##### Place station mode[¶](#place-station-mode)
The cursor gets put into place-station-mode when the player selects a station type from the build tab.
Shows the radius of the selected station type.
Red when station cannot be built on selected square, white otherwise. This should be determined by whether building the station is possible, not merely whether there is track on the selected tile.
Pressing the LMB attempts to place the station. If the station is built, the cursor is returned to its previous mode;
if the station is not built, the cursor remains in place-station-mode.
Pressing the RMB or pressing Esc cancels placing the station and returns the cursor to its previous mode.
##### Build track mode[¶](#build-track-mode)
Track can be built by dragging the mouse (moving the mouse with the LMB down). As the mouse is dragged, the proposed track is shown. Releasing the LMB builds the track. Pressing the RMB or Esc cancels any proposed track.
Track can be built by pressing the number pad keys.
##### Remove track mode[¶](#remove-track-mode)
Track can be removed by moving the cursor with the number pad keys.
##### Info mode[¶](#info-mode)
#### Components on right hand side[¶](#components-on-right-hand-side)
Minimap, Current Cash, Date (Shows the current year and month.)
##### Train Roster Tab[¶](#train-roster-tab)
Shows the wagons in each train and whether they are full or empty, the train's relative speed and destination.
Double clicking a train on the roster (or pressing enter when the train roster has focus) or on the map opens the train report for the train.
##### Build Tab[¶](#build-tab)
There are 5 build modes (see the table and screenshot below).
Build modes[¶](#table-1)
| Build mode | Options visible when mode is selected | Action |
| --- | --- | --- |
| build track | Track type, bridges, and tunnels. | When the cursor is moved, track is built. On clear terrain, the selected track type is built. On rivers, the selected bridge type is built (if a bridge type is selected.) On hills and mountains, a tunnel is built if build tunnels is selected. |
| upgrade track | Track type and bridges | Track and bridges are upgraded to the selected type when the cursor enters a tile. |
| build station | Stations | |
| bulldoze | None | When the cursor moves from a tile to a neighbouring tile, any track connecting the two tiles is removed. |
| off | None | Nothing is built or removed when the cursor moves. |
Scheme of build tab.
The Build tab should not accept keyboard focus when the mouse is clicked on it. This is because doing so would cause the world view window to lose focus, which is annoying when you are building track using the keyboard.
When a new game is started or a game is loaded, the build mode should default to ‘build track’ with single track,
wooden trestle bridges, and tunnels selected.
#### Reports and dialog boxes[¶](#reports-and-dialog-boxes)
##### Broker Dialog[¶](#broker-dialog)
Example of the broker dialog.
Another example of the broker dialog.
##### Station report[¶](#station-report)
Shows information on a station. There will be 3 tabs: ‘supply and demand’, ‘trains’, and ‘improvements’
Supply and Demand Tab
The trains tab will list all trains that are scheduled to stop at this station. Note, if a train is scheduled to stop at the station several times, there will be a row in the table for each scheduled stop.
Trains Tab
Improvements tab shows the station improvements that have been built at this station and lets you buy additional ones.
##### Station list[¶](#station-list)
Shows summary details for each of the stations: name, type, cargo waiting, revenue this year.
##### Train report[¶](#train-report)
Trains report
##### Train list[¶](#train-list)
Shows summary details for each of the trains
Trains list
##### Select station[¶](#select-station)
Select station
##### Newspaper[¶](#newspaper)
Newspaper
##### Leaderboard[¶](#leaderboard)
Leaderboard
##### Balance sheet[¶](#balance-sheet)
Balance sheet
##### Income statement[¶](#income-statement)
Income statement
##### Networth graph[¶](#networth-graph)
Networth graph
##### Report bug dialog[¶](#report-bug-dialog)
The report bug dialog box is accessible from the help menu. It is also shown when there is an unexpected exception.
It should list the following information and it should be possible to copy and paste the details to the clipboard.
Property | Value
* tracker.url <http://sourceforge.net/tracker/?group_id=9495&atid=109495>
* java.version Java System Property
* java.vm.name Java System Property
* os.name Java System Property
* os.version Java System Property
* jfreerails.build The timestamp generated by the ant script.
* jfreerails.compiled.by The username of the crazy person who ran the ant compile target
The how to report bug dialog should appear as follows…
```
How to report a bug
Use the sourceforge.net bug tracker at the following url:
{tracker.url}
Please include:
1. Steps to reproduce the bug (attach a save game if appropriate).
2. What you expected to see.
3. What you saw instead (attach a screenshot if appropriate).
4. The details below (copy and paste them into the bug report).
{os.name} {os.version}
{java.vm.name} {java.version}
Freerails build {jfreerails.build} compiled by {jfreerails.compiled.by}
```
And the “Unexpected Exception” version should read …
```
Unexpected Exception
Consider submitting a bug report using the sourceforge.net bug tracker at the following url:
{tracker.url}
Please:
1. Use the following as the title of the bug report:
{Exception.type} at {filename} line {line.number}
2. Include steps to reproduce the bug (attach a save game if appropriate).
3. Copy and paste the details below into the bug report:
{os.name} {os.version}
{java.vm.name} {java.version}
Freerails build {jfreerails.build} compiled by {jfreerails.compiled.by}
{stacktrace}
```
##### Cargo chart[¶](#cargo-chart)
The cargo chart will show the sources of supply and demand for each of the cargo types. The information will be presented in a table as below. There should be a 'print' button which should… well, it's pretty obvious what it should do.
Cargo chart
##### Load games[¶](#load-games)
Displays a list of saved games. The list comprises all files ending in ‘.sav’ in the directory from which the game was run. If the current game is a network game, the relevant directory is the directory from which the server was run. All players, not just the host, can access the dialogue.
Load game
The ‘OK’ button is only enabled when a game is selected.
Pressing the ‘OK’ button loads the selected game.
Pressing the ‘Cancel’ button closes the dialogue box.
Pressing the ‘Refresh’ button updates the list of saved games, taking into account any changes to the filesystem (e.g. any files that have been added, removed, or renamed.)
##### Launcher[¶](#launcher)
Panel 1: Select Game Type
Launcher1
| Selection | Next Screen |
| --- | --- |
| Single Player | Select Map (without server port input box) |
| Start a network game | Select Map (with server port input box) |
| Join a network game | Client details (with remote server details showing) |
| Server only | Select Map (with server port input box) |
Panel 2: Select Map (and server details)
Launcher2
The value of the field “Server port” should be the value entered last time the launcher was run. On the first run it defaults to 55000.
| Selection | Next Screen |
| --- | --- |
| Single Player | Client details (without remote server details showing) |
| Start a network game | Client details (without remote server details showing) |
| Server only | Connected players |
| Condition | Message or result | When checked |
| --- | --- | --- |
| No saved game available. | The item "Load a saved game" should be disabled. | When the panel is created. |
| Port field does not contain a number between 0 and 65535, inclusive. | "A valid port value is between 0 and 65535." and disable the "next" button. | As text is entered. |
| "Start a new map" is selected but no map is selected. | "Select a map". The "next" button should be disabled. | When the radio button selection changes and when the selected map in the map list changes. |
| Can't start server on specified port. | Use the message from the exception. The next button should still be enabled. | When the next button is pressed. |
Panel 3: Client details (and remote server details)
The following fields should be recalled from the last time the launcher was run.
| Field | Default | Notes |
| --- | --- | --- |
| Player name | The value of system property "user.name" | If a game is being loaded, the text box should not appear. Instead there should be a dropdown list with the names of the players from the saved game. |
| IP Address | 127.0.0.1 | |
| Port | 5500 | |
| Selection | Next Screen |
| --- | --- |
| Single Player | Progress bar |
| Start a network game | Connected players |
| Join a network game | Progress bar |
Launcher3
| Condition | Message or result | When checked |
| --- | --- | --- |
| The "Player name" field is empty. | "Enter a player name" and disable the "next" button. | As text is entered. |
| Port field does not contain a number between 0 and 65535, inclusive. | "A valid port value is between 0 and 65535." and disable the "next" button. | As text is entered. |
| "Full screen" is selected but no display mode is selected. | "Select a display mode". The next button should be disabled. | When the radio button selection changes and when the selected display-mode in the display-mode list changes. |
| The "IP address" field is empty. | "Enter the address of the server" and disable the "next" button. | As text is entered. |
| Can't resolve host. | "Can't resolve host." | When the next button is pressed. |
| Can't connect to server. | "Can't connect to server." | When the next button is pressed. |
| Load game was selected. | The player name textbox should be replaced with a dropdown list of players in the saved game. | When the form is displayed. |
| We are connecting to a remote server which has loaded, but not started, a game, and the player name we entered is not a player in the saved game. | "New players can't join a saved game." | When the next button is pressed. |
| We are connecting to a remote server but the game has already started. | "New players can't join a game in progress." | When the next button is pressed. |
Panel 4: Connected players
Launcher4
Panel 5: Progress bar
Launcher5
#### AI[¶](#ai)
Disclaimer - the notes below are very incomplete. It might make sense to do something simpler for the first version of the AI.
Deciding which cities should be connected to each other: Create a table of the distances between cities. E.g.
City distances example[¶](#table-2)
| | City A | City B | City C | City D |
| --- | --- | --- | --- | --- |
| City A | x | 20 km | 30 km | 25 km |
| City B | x | x | 35 km | 15 km |
| City C | x | x | x | 40 km |
| City D | x | x | x | x |
```
For every pair of cities i, j {
For every city k where i != k and j != k {
Let A = the distance between i and j.
Let B = the distance between i and k.
Let C = the distance between k and j.
If (A > B and A > C) then remove the value at i, j from the table.
}
}
```
We can now construct a graph from the values remaining in the table. It will have the following properties. First, every city is connected to its nearest neighbour. Second, we can get from any city to any other city. Third, not too much track will be wasted.
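A runnable Java sketch of this pruning step follows; the matrix layout and names are illustrative only.

```
final class CityGraphExample {

    /**
     * distances[i][j] = distance between city i and city j (symmetric, > 0).
     * Returns keep[i][j] = true if the direct connection i-j survives pruning.
     */
    static boolean[][] prune(double[][] distances) {
        int n = distances.length;
        boolean[][] keep = new boolean[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (i == j) continue;
                keep[i][j] = true;
                for (int k = 0; k < n; k++) {
                    if (k == i || k == j) continue;
                    // Drop i-j if some third city k is closer to both i and j.
                    if (distances[i][j] > distances[i][k] && distances[i][j] > distances[k][j]) {
                        keep[i][j] = false;
                        break;
                    }
                }
            }
        }
        return keep;
    }
}
```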
Deciding the order in which to connect cities: Let's assume the profitability of a line between two cities, A and B, is given by the following formula.
Profitability = (Cargo supplied by A and demanded by B + Cargo supplied by B and demanded by A) / Distance between A and B.
Implementation Note: a natural way to analyze supply and demand and cargo conversions would be using matrix algebra.
E.g. supply and demand at a station could be represented by n * 1 matrices and cargo converted by an n * n matrix
(where n is the number of cargo types). There is a public domain java matrix package available at: <http://math.nist.gov/javanumerics/jama/>.
The simplest strategy for building track would be starting with the most profitable connection. Note that on the first move, we can pick any connection, but on subsequent moves, we are restricted to connections involving at least one city we have already connected to. Call this restricted set of connections S. A reasonable strategy for subsequent moves would be repeatedly picking the most profitable connection from S.
A more sophisticated strategy would take into account the restriction that new track must connect to existing track when picking the first pair of cities to connect. We could approach the problem as follows. Assume we build one connection per year and the game continues until we have built all possible connections. Suppose our payoff for building a connection is the profitability of the connection times the number of years remaining. For simplicity,
assume that once we have built the first connection, we revert to just picking the most profitable connection from S as before. Now, we can solve the problem of which connection to start with by comparing the payoff over the complete game for each of the possible starts.
Obviously, to formally solve the problem above, we would need to consider strategies other than picking the most profitable connection from S for moves after the first one. However, unless the number of cities is relatively small this would likely take a long time to solve. What is more, we have not even considered what other players may be doing, so even if we could formally solve the problem above, we would still have a lot of work to do.
Development[¶](#development)
---
Client/Server design overview.
### Brief overview of the architecture[¶](#brief-overview-of-the-architecture)
The model and the view are separated. The classes that make up the model, referred to as the game world, are in the freerails.model.* packages. The view classes are in the freerails.view.* packages.
The state of game world is stored by an instance of a class implementing the interface World.
This class is mutable - one can think of it as a specialised hashmap. Changes to the game world involve adding or removing items, or changing their properties.
The client and server are separate. They communicate by sending objects to each other. This is done by sending serialized objects over a network connection.
When a new game starts or a game is loaded, the server sends the client a copy of the World object. All changes to the game world that occur after the game has started, referred to as moves, are done using the classes in the package freerails.move.*.
### Server - client communication[¶](#server-client-communication)
The server is the entirety of computational processes that are concerned with evaluating the game mechanics and distributing the game updates, which can be physically separated from the players.
The server knows logged in users and communicates with them via connections which can read and write serialized Java objects over the internet. However, connections can get lost and reestablished, so the server needs to have the ability to re-associate a connection to a player and the player needs the ability to re-identify itself within a new connection.
Solution: Identity is a UUID, given by the server to the client, together with a display name that the client chooses.
Identification is by uuid. If there is a player on the server side without connection and a new connection presents the UUID of this player, instead of creating a new player, the connection is re-associated.
Before that connections have to send their game version id and the version id has to be equal to the version id of the server or the connection will be closed immediately.
Furthermore: Connections not presenting a UUID within a certain timeout, or presenting something else, are automatically closed.
For this we need handlers that actually process messages and allow us to act on the various requests as well as to do something in the future (see also Java Executors <https://stackoverflow.com/a/2258082/1536976>).
Proposed structure of identity: Identity(UUID id, String name)
Identified players can send commands/message to the server and may receive responses.
Examples are: chat message (echoed to all), chat log request (returns the last X chat messages), display available inbuilt scenarios, display saved games ready for loading on the server, create random scenario with options,
start hosting a game (preparation phase), start game, stop game.
* ChatMessage: String content, String author, String date (certain format)
* SimpleMessage (Request): Enum type of request (chat log request, display available scenarios, )
* ChatLog: List<ChatMessage>
* Scenario/SaveGameInformation: String title, String description, Image preview
* RandomScenarioCreationOptions:
* …
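A hypothetical Java sketch of these proposed message objects follows (Java 16+ record syntax for brevity; none of these classes exist in the current code base):

```
// Illustrative sketches of the proposed message objects.
import java.util.List;
import java.util.UUID;

record Identity(UUID id, String name) {}

record ChatMessage(String content, String author, String date) {}

record ChatLog(List<ChatMessage> messages) {}

enum RequestType { CHAT_LOG, AVAILABLE_SCENARIOS, SAVED_GAMES }

record SimpleMessage(RequestType type) {}
```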
The server is represented with a list of <id, message> pairs and processes them. The first player on the server to decide to host a game becomes the game master. He decides when to start the game and when to stop it running on the server.
However, the players can save the game locally as well (see later).
Stopping the server is done externally by the local client, not through messages. The local server starts at start of the local client and stops when the local client stops.
When a server stops, the current running game is stopped, all connections are closed.
A game consists of a certain number of players and additional data, all contained in a model object. In a game the players are either humans or AI players. Access to a player may be restricted by password. Users can decide to use their UUID as password.
If no passwords are stored, the scenario is said to be “pure”, otherwise it is said to be “restricted”.
Pure scenarios can always become restricted. Restricted scenarios can only become pure if all participating human players agree. Passwords are part of the game model.
Client options:
* Internal (program version acting as option version)
* Server limited to local connections (default: on, changes are reflected on next start)
* UUID (chosen at first start, may be set)
* Default password (chosen randomly at first start, may be set)
* Use default password (otherwise the player is prompted before sending)
* Start in windowed/fullscreen mode (changes are reflected on next start)
* Music mute (immediate, default: off)
* Music volume (immediate)
Implementation (stored as JSON as key-value pairs, or as properties maybe?)
During the game, only chat messages (not part of the game model; chat messages are not persistent when the server stops) and stop-game messages (sent by the game master) are exchanged in addition to moves.
The game has a status (running, paused). It pauses if the majority of players hit the pause button (message) or if a player lost a connection. The game master determines the game speed (stored in the game model).
Everyone can send a Move; Moves are collected in a list in the order of arrival at the server. Every move is processed sequentially and first tested for applicability. If so, it gets a running number (applied moves since start of game)
and is applied and sent to the client. The client then checks if the current running number is the successor of the last received such move and applies it. If not, a Message (Invalid Move or so) is returned.
A Move should always encapsulate a single atomic game action for each player.
Move: void apply(World), Status applicable(Read-Only-World)
In case of inconsistencies, every player can always send a request to obtain the whole world.
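A sketch of the Move contract described above follows; ReadOnlyWorld, World and Status are minimal stand-ins here, and the exact signatures in the freerails.move package may differ.

```
// Illustrative sketch only; the supporting types are stand-ins.
interface ReadOnlyWorld {}

interface World extends ReadOnlyWorld {}

enum Status { OK, REJECTED }

interface Move {
    /** Tests whether this move could be applied to the given world. */
    Status applicable(ReadOnlyWorld world);

    /** Applies this single atomic game action, mutating the world. */
    void apply(World world);
}
```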
### Track implementation[¶](#track-implementation)
The map consists of a rectangular grid of square tiles. Each tile (except those at the border) has exactly eight neighbors, which can be uniquely identified by compass points (north, north-east, …) relative to the current center tile, or by grid positions (row, column).
The total track on the map consists of many track pieces, where each piece connects two neighboring tiles (diagonal connections have ~1.41 (square-root of 2) times the length of horizontal or vertical connections).
Path finding of the trains works on a graph where the tiles are nodes and the track pieces connecting neighboring tiles are the edges.
Building and removing track works by adding and removing track pieces. In particular, the planning of a longer piece of newly built track is done by path finding again.
The track itself is visualized by rendering each tile according to its track configuration. The track configuration is an 8 bit value indicating whether there is a connection to each of the 8 neighbors of the current tile. See the attached image for some examples.
Track configurations.
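A minimal Java sketch of such an 8-bit configuration, one bit per compass direction, follows (the bit ordering is an assumption; the real implementation may differ):

```
final class TrackConfigExample {

    // One bit per compass direction; this particular bit ordering is an assumption.
    enum Direction { N, NE, E, SE, S, SW, W, NW }

    static int addConnection(int config, Direction d) {
        return config | (1 << d.ordinal());
    }

    static boolean hasConnection(int config, Direction d) {
        return (config & (1 << d.ordinal())) != 0;
    }

    public static void main(String[] args) {
        int config = 0;
        config = addConnection(config, Direction.N);
        config = addConnection(config, Direction.S);             // straight north-south track
        System.out.println(Integer.toBinaryString(config));      // prints 10001
        System.out.println(hasConnection(config, Direction.E));  // prints false
    }
}
```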
The track configurations are not independent of each other. For every connection on a tile towards a neighboring tile, this neighboring tile must also have a connection back to this tile. This invariant is maintained by not allowing track configurations to be changed directly, but only by adding or removing one track piece at a time.
Track pieces have no direction; any train can go on them both ways. However, a train can change direction at every tile by at most 90 degrees, effectively inducing some kind of directionality.
Track pieces can be single tracked or double tracked. There can only be one running train on each track piece
(and trains have a certain extent, also measured in track pieces). However, stopped trains do not count as obstacles.
The track configuration of a tile is sufficient to draw it uniquely on the tile. Not all possible 8 bit values are valid.
Bridges and stations are a special case. Stations have an orientation and only allow track parallel to their orientation.
Bridges span a water tile (other track cannot be put on water) and consist of two track pieces resulting in a parallel configuration.
### Computer controlled (AI) players[¶](#computer-controlled-ai-players)
An AI player is built exactly like a client (it communicates by Moves) but has no UI or presenter. It lives locally on the server.
It is shut down automatically when the game ends.
### Coding Guidelines[¶](#coding-guidelines)
* Follow package dependency rules (utils does only depend on java libraries, model only on utils, move only on model and utils, ..).
* Avoid circular dependencies between packages. I.e. if any classes in package A import classes from package B, then classes in package B should not import classes from package A.
* Run all unit tests after making changes to see whether anything has broken. You can do this using the check gradle target.
* Javadoc comments. Add a couple of sentences saying what the class does and the reason for its addition.
* Use the Code Formatter with care. Avoid reformatting whole files with different formatting schemes when you have only changed a few lines.
* Consider writing junit tests.
* Consider using assertions.
* Add //TODO comments if you spot possible problems in existing code.
* Use logging instead of System.out or System.err for debug messages. Each class should have its own logger, e.g.
private static final Logger logger = Logger.getLogger(SomeClass.class.getName());
Reading
* Effective Java (<http://java.sun.com/docs/books/effective/>) (sample chapters online)
* User Interface Design for Programmers (<http://www.joelonsoftware.com/uibook/chapters/fog0000000057.html>) (available online)
History[¶](#history)
---
### FreeRails (2000-2005)[¶](#freerails-2000-2005)
The original [FreeRails](https://sourceforge.net/projects/freerails/)
project was registered on Sourceforge on 2000-08-09 with the aim to create a fun game based off the RailRoad Tycoon and RailRoad Tycoon II games. It was described as a real time multi player strategy game where players compete to build the most powerful railroad empire.
The introductory [news message](https://sourceforge.net/p/freerails/news/2000/08/freerails-sister-project-of-freeciv/)
by <NAME> reads:
> My friends and I have long loved the game Railroad Tycoon, and would
> love to see a open source version, very similar to FreeCiv. However,
> none of us have time to work on this due to dedication to other open
> source projects. We would love to see people run with the idea, and
> in a hopeful attempt at pushing the idea off to the open source
> community, I have created a SourceForge project. To find out more
> about the FreeRails project (sister project to FreeCiv), visit the
> web page: <http://freerails.sourceforge.net> and be sure to join our
> mailing list as we discuss the future and feasibility of this
> project.
It was programmed in Java and used a [CVS repository](http://freerails.cvs.sourceforge.net/) for source control. Regular
[releases](https://sourceforge.net/projects/freerails/files/jfreerails/)
from 2001-07-25 (version 0.0.2) to 2005-09-23 (version 0.2.7) brought it to a quite extensive, playable tech demo including play over network.
The source code is under the GPL-2.0 license. Main Java programmer seems to have been Luke Lindsay.
* Last binary release: [FreeRails 0.2.7](https://sourceforge.net/projects/freerails/files/jfreerails/0.2.7/)
* Authors: <NAME> (Original project proposal), <NAME>
((Administrator)), <NAME>, <NAME>, <NAME>,
<NAME>, <NAME>, <NAME>, <NAME>,
<NAME>, <NAME>, <NAME>, <NAME>, [adam@jgf](mailto:adam%40jgf),
<NAME>, <NAME>, <NAME>, <NAME> (Testing), <NAME>
#### C++/C# client for FreeRails[¶](#c-c-client-for-freerails)
A client for Freerails using C++ was
[announced](https://sourceforge.net/p/freerails/news/2001/08/freerails-status-update/)
in 2001-08, but no official releases remain. Code can be found in the CVS repository. They shared the same graphics resources.
### Railz (2004-2005)[¶](#railz-2004-2005)
On 2004-02-23, Railz was registered on now defunct
[BerliOS](https://en.wikipedia.org/wiki/BerliOS) as a railway strategy/management game based upon the freerails source code base. It was created by <NAME>. Until the beginning of 2005 it saw several releases and the code was considerably changed from the FreeRails project on Sourceforge. An archived version of the [project page](https://web.archive.org/web/20140328214257/http://developer.berlios.de/projects/railz/)
is available as well as an automated [exported project page](https://sourceforge.net/projects/railz.berlios/). The source code of Railz was also released under GPL-2.0.
* Last binary release: [Railz 0.3.3](https://sourceforge.net/projects/railz.berlios/files/)
* Authors: Those of FreeRails (2000-2005), especially <NAME>
### FreeRails2 (2007-2008)[¶](#freerails2-2007-2008)
On 2007-11-05,
[FreeRails2](https://sourceforge.net/projects/freerails2) was registered on Sourceforge with the aim to create an optimized version of the FreeRails project. A web start version was included on the [web site](http://freerails2.sourceforge.net/). It used a [SVN repository](https://sourceforge.net/p/freerails2/code/HEAD/tree/) to extend the code base of the FreeRails (2000-2005) project. The development mainly focused on improving the infrastructure. The main contributor was <NAME>.
* Last binary release: [FreeRails2 0.4.0](https://sourceforge.net/projects/freerails2/files/freerails2/v0.4.0/)
* Authors: Those of FreeRails (2000-2005) and <NAME>
(cymric_npg)
### Railz2 (2012-2014)[¶](#railz2-2012-2014)
On 2012-04-08, LukeYJ (Luke Jordan?) created the
[Railz2](https://sourceforge.net/projects/railz2/) project on Sourceforge as a follow-on to the freerails and Railz projects. The code was stored in a [SVN repository](https://sourceforge.net/p/railz2/code/HEAD/tree/) and the starting point was the Railz 0.3.3 release. Again the Java code base was extended.
* Last binary release: [Railz2 0.4.0](https://sourceforge.net/projects/railz2/files/)
* Authors: Those of Railz and LukeYJ
### FreeRails3 (2016)[¶](#freerails3-2016)
On 2016-12-24, LukeYJ created the
[FreeRails3](https://sourceforge.net/projects/freerails3/) project on Sourceforge as a fork from the inactive Freerails2 project. The code was stored in a [GIT repository](https://sourceforge.net/p/freerails3/code/ci/master/tree/)
and the history of FreeRails2 was imported. Within a few weeks some issues were fixed and a new release was made with the version number 0.0.4
(smaller than the version of FreeRails before). Finally the code was also placed on [Github](https://github.com/lukeyj13/freerails3).
* Last binary release: [FreeRails3 0.0.4](https://sourceforge.net/projects/freerails3/files/release-0.0.4/)
* Authors: Those of FreeRails2 and LukeYJ
### FreeRails continuation (2017-)[¶](#freerails-continuation-2017)
On 2017-12-12, Trilarion created the
[FreeRails](https://github.com/Trilarion/freerails) project on Github as a continuation of FreeRails 1, 2 and 3 in Java. The history of FreeRails 1, 2 and 3 as well as Railz 1 and 2 was investigated and stored in git branches ([FreeRails on Sourceforge](https://github.com/Trilarion/freerails/tree/freerails_sourceforge)
and [Railz on Sourceforge](https://github.com/Trilarion/freerails/tree/railz_sourceforge))
including the last releases of each of the previous projects.
No release has yet been made.
Changelog[¶](#changelog)
---
Note
The change logs for the years 2000 to 2006 are more like commit messages and are kept here for historical reasons.
From 2017 onwards, this change log records the publication date and changes of every released version.
### FreeRails continuation (2017-)[¶](#freerails-continuation-2017)
Jan 2, 2018, FreeRails 0.4.1
* No changes in functionality, artwork or user interface
* Code of version 0.4.0 made runnable on Java 8 and with a slight round of code cleanup
### FreeRails3 (2016)[¶](#freerails3-2016)
Dec 25, 2016, lukeyj
* Re-added station info panel
* Only show good supplied or demanded
* Modernised build through maven
* Pulled out file paths into constants
### FreeRails (2000-2005)[¶](#freerails-2000-2005)
Sep 3, 2006, Luke
* Give moving trains priority over stationary trains.
* Fixed bugs
* 1551106 Trains with ‘auto consist’ set don’t pickup cargo
Sep 1, 2006, Luke
* Features implemented:
* 987520 Track Contention
Aug 31, 2006 Luke
* Brought back ‘cheat’ menu.
* Fixed bugs
* 1537413 Exception when building station.
Jul 24, 2006 Luke
* Fixed bug where list of saved games didn’t get updated when files were added, removed, or renamed.
Jul 16, 2006 Luke
* Fixed bugs
* 1384250 Wait until full slows frame rate.
* 1384249 Unexpected Exception when removing last wagon.
Dec 5, 2005 Luke
* Fixed bugs
* 1313227 Remove last wagon doesn’t work with more than 2 wagons
* 1313225 Harbour: Conversion makes no sense
* 1341365 Exception when calculating stock price after buying shares
* 1303162 Unexpected Exception in SquareTileBackgroundRenderer
Sept 21, 2005 Luke
* Improve exception reporting
Sept 19, 2005 Luke
* Fixed bugs
* 1289010 Net worth on broker screen is wrong
* 1289014 Can buy 100% of treasury stock
* 1289012 No limits on issuing/repaying bonds
* 1295855 Can buy stock when can’t afford to
* 1289008 Stock price never changes
Sept 11, 2005 Luke
* Fixed bugs
* 1266577 Broker screen layout
Sept 10, 2005 Luke
* Fixed bugs
* 1269664 Can’t tell other player has my stock
* 1266575 Stock holder equity not shown properly on the balance sheet.
Sept 9, 2005 Luke
* Fixed bugs
* 1269676 Stack traces often lost
Sept 8, 2005 Luke
* Fixed bugs
* 1266581 Underscores in map names
* 1223231 “waiting” message on launcher unclear
* 1266582 Progress bar
* 1269679 Train Info doesn’t update
Aug 31, 2005 Luke
* Fixed bugs
* 1266584 Can’t select which saved game on launcher
* 1269683 Load game when no saved games.
* 1269688 Load game in full screen
* 1269689 Load game across network
Aug 24, 2005 Luke
* RFEs implemented
* 1223234- Add show FPS to ‘Display’ menu.
* 1223235- Save launcher input
* Fixed bugs
* 1266695 Unexpected exception during network game
* 1266637 OutOfMemoryError
* 1110270 Reproducible Crash Bug
* 1223230 Incorrect station price on popup
* 1223228 NotSerializableException when saving game with trains
Jul 03, 2005 Luke
* More code cleanup.
Jul 03, 2005 Luke
* Reorganisation of existing code.
May 22, 2005 Luke
* Code cleanup
* Improve pathfinder: finding paths for track is now up to 20 times faster.
* More of the same
Apr 10, 2005 Luke
* More work on new train movement classes
Apr 04, 2005 Luke
* More work on new train movement classes
Apr 01, 2005 Luke
* More work on new train movement classes
Feb 20, 2005 Luke
* More work on new train movement classes
Feb 18, 2005 Luke
* Refactoring existing train movement classes in preparation to use new classes.
Feb 05, 2005 Luke
* Update website to use SSI
* Work on new train movement classes
* Added AI page to functional spec.
Feb 04, 2005 Luke
* Add new train movement classes.
Jan 27, 2005 Luke
* Added toString() to KEY classes.
Jan 27, 2005 Luke
* Added serialVersionUID field to serializable classes.
Jan 26, 2005 Luke
* Bugs Fixed:
* 1105499- Word wrapping in Html components
* 1105494- Load game with wrong player
* 1105488- Attempting to join game in progress
Jan 25, 2005 Luke
* Work on bug 1105494- (Load game with wrong player).
Jan 24, 2005 Luke
* Second attempt at fixing bug 1103632 (Sound on Linux)
Jan 17, 2005 Luke
* Note, some of these changes occurred at earlier dates but were not entered into this change log.
* Bugs Fixed:
* 1103632- Sound on Linux
* 1103633- Build station mode
* 1103634- ‘P’ sets priority orders
* 1102801- keys on train orders
* 1102803- Blank schedule after adding stations
* 1102797- Pause 1st time track is built
* 1103154- Building track quickly with keyboard fails
* 1103150- Can build track in station placement mode
* 1102804- Cursor on map edges
* 1103155- Can’t upgrade station with F8
* 1102800- Turbo game speed does nothing
* 1102806- Newspaper does nothing
* 1102798- Building track out of station too expensive
* 1102799- “Can’t afford to remove station”
* 1087429- Same icon for info, no tunnels, no bridges
* 1096168- No tooltips on build tab
* 1087428- Wrong cursor message
* 1087431- Message “Illegal track config..-
* 1087373- Stations influence should not overlap
* 1087427- Terrain info dialogue close button
* 1087409- java.io.InvalidClassException
* 1087414- Upgrade track on Ocean -> ArrayIndexOutOfBoundsException
* 1087425- NullPointerException
* 1087426- Can see stations boxes for other players
* 1087433- Can’t tell that train roster has focus
* 1087422- Pressing ‘I’ on other’s station ->crash-
* 1005144- java.lang.IllegalArgumentException: Tried to add TrainPosition
* Features implemented:
* 927146- Display natural numbers for trains, stations, etc
* Other changes:
* New track graphics
Jan 14, 2005 Luke
* Updated build.xml
* Minor javadoc updates
Jan 13, 2005 Luke
* Bugs fixed:
* 1098769 Blinking cursor
* 1098767 Can’t remove bridges when ‘no bridges’ selected
* 1099095 Remove track not cancelled
* 1099093 Upgrade track starting at station fails
* 1099083 Remove train, then click train list-> Exception
* 1099091 Station placement cursor wrong colour-
* 1099092 Station influence remains after station removed
Jan 09, 2005 Luke
* Bugs fixed:
* 1087432- Can’t remove or upgrade track using mouse
Jan 04, 2005 Luke
* Bugs fixed:
* 1087437- java properties window should word wrap.
* 1087434- Building track out of station
* Other changes:
* Code cleanup
Dec 18, 2004 Luke
* RFEs Implemented:
* 1055501- Automatically build bridges & tunnels
* 931570- Improve Cursor
* 915941- Bridge types GUI
* 915940- Tunnels options GUI
Dec 15, 2004 Luke
* More on track build system. It's almost complete.
Dec 14, 2004 Luke
* Work on track build system. Appropriate track for the terrain is now automatically selected. Still some bugs.
Dec 12, 2004 Luke
* Updated functional specification.
Nov 16, 2004 Luke
* Work on GUI to select track type and build mode.
Nov 15, 2004 Luke
* Started using java 1.5 language features
* Updated build.xml to use 1.5 and removed ‘format’ and ‘ConstJava’ ant targets.
Oct 27, 2004 Luke
* Bugs Fixed:
* 1054729- Can’t build bridges using mouse
Oct 19, 2004 Luke
* Bugs Fixed:
* 1046399- No supply and demand at new stations
Oct 18, 2004 Luke
* RFEs Implemented:
* 1048913- Option to turn off sound
* Bugs:
* Work on 1046399- No supply and demand at new stations
Oct 17, 2004 Luke
* RFEs Implemented:
* 972863- Launcher: progress bar should be on new page
* Bugs Fixed:
* 1047435- Can’t rejoin game
* 1047445 Invalid port but next button enabled-
* 1047440 Progress bar not visible when starting network game
* 1047431- No server but no error message.
* 1047422- java.net.SocketException: Connection reset
* 1047412- 2 players, same name -> Exception
Oct 13, 2004 Luke
* Bugs Fixed:
* 1047428 “no players” message goes away
* 1047414 Connected players list should auto update
* 1047439 Shutting down remote client crashes server
* 1047425 2 servers, same port -> Exception
* 1046385 pressing Backspace causes IllegalStateException
Oct 12, 2004 Luke
* Made map scroll when mouse is dragged outside the view port when building track.
Sep 18, 2004 Luke
* RFEs Implemented:
* 931581 Build Industry.
* 931594 Show which player is winning.
* 915955 Automatic Schedules.
* 931597 Graph showing total profits over time.
* 915957 Build track by dragging mouse.-
* 932630 Change speed from network clients.
Aug 14, 2004 Luke
* Added ConstJava ant target
* Note, ConstJava adds the keyword ‘const’ to java. It can be typed /*=const */ so that the files remain valid java files.
* Fixed some mutability problems that it identified.
Aug 10, 2004 Luke
* Implemented City growth
* Work on deadlock and unexpected exception bugs.
Jul 26, 2004 Luke
* Apply <NAME>’s patch for bug 997088 (IllegalArgumentException in OneTileMoveVector.getInstance)
Jul 21, 2004 Luke
* Remove some circular dependencies.
Jul 07, 2004 Luke
* Fixed problem with unit tests in freerails.controller.net
Jul 07, 2004 Luke
* Bugs fixed:
* 972866 Build track by dragging - only when build track selected
Jul 06, 2004 Luke
* RFEs Implemented:
* 915943 Sounds!
* Bugs fixed:
* 984510 freerails.world.player.player; local class incompatible
Jun 25, 2004 Luke
* Bugs fixed:
* 979831 Stack traces printed out when running unit tests
Jun 17, 2004 Luke
* Apply <NAME> Massa’s station distance patch.
* Fixed DisplayModesComboBoxModels.removeDisplayModesBelow(.) so that it does not remove display modes when displayMode.getBitDepth() returns DisplayMode.BIT_DEPTH_MULTI
Jun 15, 2004 Luke
* Bugs fixed:
* 972869 Crash when track under train removed.
* 972867 Signal towers do nothing - I’ve removed them!
* 972864 Deselect place-station-mode when track selected
Jun 14, 2004 Luke
* Bugs fixed:
* 948668 Building Station on Curve - Cursor changes function -
* 948671 Map City Overlays incorrect
* 967675 No trains/stations but train & station menus selectable
* 972738 Crash when station removed
* 967662 Bottom of terrain info tab cut off in 640*480 res.
* 972869 Crash when track under train removed.
Jun 13, 2004 Luke
* Bugs fixed:
* 948651 IP Address input should be checked immediately.
* 948649 Dialogue Box Behavior
* 967668 No supply & demand at new station
* 948672 Large numbers of active trains slows performance -
Jun 12, 2004 Luke
* Bugs Fixed:
* 967667 Cannot close multiple dialogue boxes.
* 967664 Fullscreen res. below 640x480 16bit selectable.
* 967666 Selected fullscreen resolution ignored.
* 967713 FPS counter obscures build menu
* 967660 Debug text sent to console
* 948679 Delete/Rebuild single section of track doesn’t cost anything
Jun 9, 2004 Luke
* Bugs Fixed:
* 967673 Crash when building track close to edge of map
Jun 6, 2004 Luke
* Bugs Fixed:
* 967677 OutOfMemoryError after starting several new games
Jun 6, 2004 Luke
* RFE implemented:
* 915960 Logging
Jun 5, 2004 Luke
* Bugs Fixed:
* 967129 Main map white on 1.5.0 beta 2
* 941743 Build train dialog closes without building train.
* 967214 EchoGameServerTest hangs
May 31, 2004 Luke
* Bugs Fixed:
* 948653 Crash after loading a saved game when one is not available.-
* 948665 “Show Details” on Train List doesn’t work if no train is selected.
* 948659 Dialogue Box Behavior not deterministic
* 948663 Extra Close Button on Station List tab
* 948661 No Formal Specification (see /src/docs/freerails_1_0_functional_specification.html)
* 948656 Non Movable Dialogue Boxes
* made dialogue boxes movable
* added option to show/hide station names, spheres of influence, and cargo waiting.
May 30, 2004 Luke
* Bugs Fixed:
* 948666 Crash when Building Train with Money < 0 and only one station
May 28, 2004 Luke
* Bugs Fixed:
* 948655 Can’t see consist when there are more than 6 wagons
* 948675 Can’t upgrade station types
* 948680 No way to tell sphere of influence for a station type
May 27, 2004 Luke
* Bugs Fixed:
* 948676 Waiting list is cut off
* 948673 Cost of Building track/stations not shown
* 948670 Removing non-existent track
* 948654 Locomotive graphic backwards
May 24, 2004 Luke
* Bug fixes for freerails.world.top.WorldDifferences
May 24, 2004 Luke
* Added class freerails.world.top.WorldDifferences - may be useful for RFE 915957!
May 10, 2004 Luke
* Applied Jan Tozicka’s first patch for 915957 (Build track by dragging mouse)
May 5, 2004 Luke
* Fix bug in SimpleAStarPathFinder spotted by <NAME>.
Apr 30, 2004 Luke
* Applied Jan Tozicka’s patch
* Implements 927165 (Quick start option)
Apr 21, 2004 Luke
* Fix DialogueBoxTester
* Tweak build.xml
Apr 11, 2004 Luke
* Added some javadoc comments.
* Added hashcode methods to classes that override equals.
* Code cleanup
* Let track be built on terrain of category ‘Industry’ and ‘Resource’
Apr 9, 2004 Luke
* Fixed bug 891452 (2 servers same port, no error message)
* Fixed bug 868555 (Undo move by pressing backspace doesn’t work)
* Fix for bug 910132 (Too easy to make money!)
* More work on bug 910902 (Game speed not stored on world object)
Apr 8, 2004 Luke
* Added website to CVS
* Added website deployment targets to build.xml
Apr 7, 2004 Luke
* Implemented 930716 (Scale overview map) by
* incorporating code from Railz.
Apr 6, 2004 Luke
* Fix selection of track type and build mode that was broken by the game speed patch.
Apr 6, 2004 Luke
* Implemented 915945 (Stations should not overlap)
* Increased the quality of scaled images returned by ImageManagerImpl
Apr 5, 2004 Luke
* Implemented 915952 (Boxes showing cargo waiting at stations)
Apr 5, 2004 Luke
* Fixed 910134 Demand for mail and passengers
* Updated javadoc comments in freerails.server.parser.
Apr 4, 2004 Luke
* Implemented 927152 Show change station popup when add station is clicked
Apr 3, 2004 Luke
* Apply Jan Tozicka’s 2nd patch for 910902
Apr 2, 2004 Luke
* Fixed bug 910130 (Placement of harbours)
Apr 1, 2004 Luke
* Made trains stop for a couple of seconds at stations.
* 915947 Implement wait until full.
Apr 1, 2004 Luke
* 910138 After building a train display train orders
* 910143 After building station show supply and demand
* Started rewriting freerails in C#!
Mar 30, 2004 Luke
* Implemented 915949 (Balance sheet)
* Fixed bug where an exception was thrown if you moved the cursor when ‘View Mode’ was selected on the build menu.
Mar 29, 2004 Luke
* Implemented 915948 (Income statement)
Mar 27, 2004 Luke
* Updated coding guidelines.
Mar 15, 2004 Luke
* Added ‘Show java properties’ to about menu.
Mar 14, 2004 Luke
* Implemented 910123 (Add/remove cargo to cities more frequently).
Mar 13, 2004 Luke
* Fixed various bugs where exceptions were getting thrown.
* Stopped the client window getting displayed before the world is loaded from the server.
Mar 13, 2004 Luke
* Implemented 910126 (Train list on RHS panel)
* Started 915303 (Icons for buttons and tabs) - the tabs on the RHS now have icons instead of titles.
Mar 12, 2004 Luke
* Apply Jan Tozicka’s patch for 910902 (Game speed not stored on world object).
Mar 9, 2004 Luke
* Increase client performance. 93FPS to 111FPS on my machine.
* Note, I get much higher FPS when the client and server are in different JVMs.
Mar 8, 2004 Luke
* Readied 640x480 fixed size windows mode. It is useful for taking screen shots and making sure the dialogue boxes work in 640x480 fullscreen mode.
Mar 6, 2004 Luke
* Added Scott Bennett’s terrain randomisation patch.
Mar 6, 2004 Luke
* Remove ‘never read’ local variables.
* Fixed bug 910135 Trains jump when game un paused
* Fixed bug 891360 Trains don’t get built while game is paused
Mar 5, 2004 Luke
* Applied Jan Tozicka’s patch for bug 900039 (No clear indication game is paused).
Mar 4, 2004 Luke
* Minor changes to coding guidelines.
* Fixed stale serialVersionUID problem in freerails.world.player.Player
* Made ant script insert build id into README and about.htm
Mar 3, 2004 Luke
* Apply Scott Bennett’s removal_of_Loading_text patch.
Mar 3, 2004 Luke
* Implemented Request 905446 Track should be continuous
* Implemented Request 905444 Multi player support: different track
Mar 2, 2004 Luke
* Implemented Request 905443 Multi player support: different trains
Mar 1, 2004 Luke
* Implemented Request 905441 Multi player support: different bank accounts
* Note, presently some of the dialogue boxes are not working. This will be fixed as adding multi player support continues.
Feb 27, 2004 Luke
* Some fixes for DialogueBoxTester.
Feb 27, 2004 Luke
* Refactoring in preparation for multiplayer support.
Feb 26, 2004 Luke
* Applied Jan Tozicka’s ‘Shortcuts for game speed’ (patch 904903).
Feb 21, 2004 Luke
* Fix 891359 - Javadoc package dependencies out of date
* Tidy up javadoc
Feb 20, 2004 Luke
* Fix 839371 - Goods & livestock wagons appear the same on train orders
Feb 20, 2004 Luke
* Fix bugs 867473 and 880450 (Intermittent deadlocks).
Feb 18, 2004 Luke
* Fix bug 839331 - set initial game speed to ‘slow’ instead of paused
* Fix bug 874416 (station icon hides after track-upgrade)
* Fix bug 839361 (Several industries of the same type in same city)
* Fix bug 891362 (Cancel button on select engine dialogue doesn’t work )
* Fix bug 891431 No link between train list and train orders screens
Feb 18, 2004 Luke
* Removed unreachable code.
* Fix build.xml
Feb 17, 2004 Luke
* Apply move infrastructure patch.
* Apply OSX work around.
Feb 16, 2004 Luke
* Add new select station popup to train orders dialogue (fixes bug 891427).
* Add ‘About’ dialogue (fixes bug 891377)
* Add ‘How to play’ dialogue (fixes bug 891371)
Feb 6, 2004 Luke
* Apply Robert Tuck’s patch to fix bug 880496 (User stuck after connection refused)
Feb 5, 2004 Luke
* Apply Robert Tuck’s Mac OS X fixes.
* Uncomment out code in TrackMaintenanceMoveGenerator
Feb 4, 2004 Luke
* Add testDefensiveCopy() to WorldImplTest
Jan 19, 2004 Luke
* Applied Robert Tuck’s launcher patch.
Dec 31, 2003 Luke
* Remove some unused code.
* Fix some things jlint moaned about - perhaps slightly pointless!
Dec 30, 2003 Luke
* Refactoring to change the threads in which moves are executed.
* 1. Moves are pre-committed on the client’s copy of the world by the thread “AWT_EventQueue.”
* 2. All moves are now executed on the server’s copy of the world in freerails.server.ServerGameEngine.update() by the thread “freerails server”.
* 3. Moves received from the server are now executed on the clients copy of the world in freerails.client.top.run() by the client thread by the thread “freerails client: …”
* Moves are passed between threads using queues.
* Currently starting new games and loading games does not work.
* Removed most of the passing of mutexes between classes.
Dec 29, 2003 Luke
* Apply Robert Tuck’s patch to BufferedTiledBackgroundRenderer.
* Make the client keep its own copy of the world object even when it is in the same VM as the server.
Dec 24, 2003 Luke
* Prepare for release.
Dec 23, 2003 Luke
* Refactoring to remove some cyclic dependencies.
Dec 20, 2003 Luke
* Apply part of Robert Tuck’s performance patch.
* Update side on wagon graphics.
* Fix for bug 839355 (User not told why track cannot be built)
Dec 18, 2003 Luke
* Fix for bug 855729 (Game does not start on pre 1.4.2 VMs)
Dec 17, 2003 Luke
* Move UNITS_OF_CARGO_PER_WAGON constant to WagonType.
Dec 17, 2003 Luke
* Applied Robert Tuck’s patch to fix apparent network lag.
* Tweaked ‘format’ ant target so that it does not format files that are up to date.
Dec 13, 2003 Luke
* Fix bug: stations on the trains schedule can now be changed again.
Dec 13, 2003 Luke
* Fixed bug: passengers are now demanded by cities and villages.
* Fixed bug: track maintenance cost is no longer equal to the build cost.
* Fixed bug 839366 (No feedback when trains arrive)
Dec 12, 2003 Luke
* Add Robert Tuck’s new train graphics.
Dec 8, 2003 Luke
* Deprecate methods that take a mutex as a parameter.
Dec 6, 2003 Luke
* Apply source code formatting.
Dec 5, 2003 Luke
* Apply Robert Tucks move ahead patch.
Nov 30, 2003 Luke
* Fixed bug 839376 (Harbours are not painted properly)
Nov 30, 2003 Luke
* Fixed bug 839336 (Removing station train heading to causes Exception)
Nov 29, 2003 Luke
* Fixed bug 839392(After F8 to build station, position still follows mouse)
* Added jalopy ‘format’ target to build.xml
Nov 18, 2003 Luke
* Applied Robert Tuck’s patch to fix the bug that occurred with 1 local client and 1 networked client in a 2nd VM.
Nov 10, 2003 Luke
* Made MoveExecuter non-static.
* Fixed bug 835337.
* Remove debug console output.
Nov 9, 2003 Luke
* Applied Robert Tuck’s to fix bug 835241.
Nov 3, 2003 Luke
* Added <NAME>’s enhanced city tile positioner.
Nov 03, 2003 Luke
* Applied Robert Tuck’s patches to update the launcher gui.
* Added <NAME>’s extra Cities
Oct 18, 2003 Luke
* Applied Robert Tuck’s patch adding comments to ServerGameEngine.
* Other javadoc updates.
Oct 13, 2003 Luke
* Applied Robert Tuck’s network patch.
Oct 06, 2003 Luke
* Fixed, I think, bug where trains went off the track.
Oct 04, 2003 Luke
* Update CVS write permissions.
Sep 12, 2003 Luke
* Add Robert Tuck’s ‘build’ tab patch.
Sep 07, 2003 Luke
* Added progress bar to show what is happening while the game is loading.
Sep 03, 2003 Luke
* Added GUI to select display mode and number of clients.
Aug 28, 2003 Luke
* Made train speed decrease with no of wagons.
* Made fare increase with distance travelled.
* Made CalcSupplyAtStations implement WorldListListener so that when a new station is added, its supply and demand is calculated by the server.
Aug 25, 2003 Luke
* Added new Train orders dialogue.
* Made changes to train consist and schedule use Moves instead of changing the DB directly.
* Lots of other changes/fixes.
Aug 23, 2003 Luke
* Removed cruft from the experimental package.
* Added a simple train list dialogue, accessible via the display menu.
* Made the engine images have transparent backgrounds and flipped them horizontally.
Aug 19, 2003 Luke
* Applied Robert Tuck’s patches that separated the client and server and allow you to start up two clients in the same JVM.
* Fixed painting bug that occurred when you started two clients.
* Major refactor to get the checkdep ant target working again.
Aug 11, 2003 Luke
* You are now charged for track maintenance once per year.
* Cargo conversions occur when you deliver cargo to a station if an industry that converts the relevant cargo is within the station radius.
Aug 07, 2003 Luke
* Applied Robert Tuck’s patches to:
* 1. Stop the Terrain Info panel from setting its preferred size to a fixed value.
* 2. Fix the issue with starting a new map and being unable to lay track.
* 3. Update remaining classes to use MoveExecuter.
* 4. Add the station info panel to the tab plane.
* 5. Add the train info/orders panel to the tab plane.
Aug 06, 2003 Luke
* Applied Robert Tuck’s patch to stop the split pane divider getting focus when you press F8.
* Added the field ‘constrained’ to AddTransactionMove. When this is set to true, the move will fail if you don’t have enough cash.
* Made the building and upgrading track cash constrained.
Aug 04, 2003 Luke
* Added 5 patches contributed by <NAME>
* 1. Changes to build.xml
* 2. Added ‘View mode’ to build menu.
* 3. Update to train schedule so that stations can be added and removed.
* 4. Changes to MoveChain and Addition of MoveExecutor.
* 5. Adding TabbedPane to the RHS with a tab to show terrain info.
* Made build xml copy the game controls html file.
Aug 02, 2003 Luke
* Increased the number of resource tiles that are placed around cities.
* Fixed bug where cargo was added to trains before wagons were changed.
Aug 01, 2003 Luke
* Fixed failure in DropOffAndPickupCargoMoveGeneratorTest.
Jul 30, 2003 Luke
* The player gets paid for delivering cargo, simply $1,000 per unit of cargo for now. See freerails.server.ProcessCargoAtStationMoveGenerator
* Fixed bug where 40 times too much cargo was being produced by changing figures in cargo_and_terrain.xml
Jul 27, 2003 Luke
* Got DropOffAndPickupCargoMoveGeneratorTest running without failures.
Jul 21, 2003 Luke
* The player now gets charged for: building stations, building trains, upgrading track
* The text for the ‘Game controls’ dialogue box is now read in from a file rather than hard coded into the java.
Jul 08, 2003 Luke
* Added initial balance of 1,000,000.
* Added prices to the track types defined in track_tiles.xml
* Updated the track XML parser to read in the track prices.
* Updated the build track moves that you get charged when you build track and get a small credit when you remove track.
Jul 07, 2003 Luke
* Wrote ‘Move’ class to add financial transactions.
* Changed the class that adds cargo to stations so that it adds 40 units per year if the station supplies one carload per year.
Jun 30, 2003 Scott
* Cargo is now transferred correctly
Jun 28, 2003 Luke
* Moved ‘show game controls’ menu item to the Help menu.
* Removed ‘add cargo to stations’ menu item from the game menu. Now cargo is added to stations at the start of each year.
* Set the initial game speed to ‘moderate’.
* Added junit test for DropOffAndPickupCargoMoveGenerator
Jun 28, 2003 Luke
* Moved classes to remove circular dependencies between packages and updated the ‘checkdep’ ant target.
Jun 27, 2003 Luke
* Added ‘station of origin’ field to CargoBatch and updated the classes that use CargoBatch as appropriate. It lets us check whether a train has brought cargo back to the station that it came from.
Jun 27, 2003 Luke
* Added ‘no change’ option to train orders - it indicates that a train should keep whatever wagons it has when it stops at a station.
* Made ‘no change’ the default order for new trains.
Jun 15, 2003 Luke
* Improved the train orders dialogue to show the current train consist and what cargo the train is carrying.
Jun 15, 2003 Luke
* Fixed a load of problems with station building.
* stations can now only be built on the track
* building a station on a station now upgrades the station rather than adding a new one.
* building stations is now fully undoable in the same way as building track.
Jun 15, 2003 Luke
* The map gets centered on the cursor when you press ‘C’.
* Pressing ‘I’ over a station brings up the station info dialogue box.
* Station radii are defined in track xml.
* The radius of the station type selected is shown on the map when the station types popup is visible.
Jun 14, 2003 Luke
* Fixed bug where train went past station before turning around.
Jun 12, 2003 Luke
* Improved javadoc comments.
Jun 11, 2003 Luke
* Add change game speed submenu to game menu.
Jun 11, 2003 Scott
* Implemented the Train/Station cargo drop-off and pickup feature, trains currently only pickup cargo. Its playable!
Jun 05, 2003 Luke
* Added loadAndUnloadCargo(..) method to freerails.controller.pathfinder.TrainPathFinder
Jun 04, 2003 Luke
* Updated freerails.world package overview.
Jun 01, 2003 Luke
* The game times passes as real time passes.
Jun 01, 2003 Luke
* Rewrote ClientJFrame using Netbeans GUI editor.
* Added JLabels to show the date and available cash to ClientJFrame.
May 31, 2003 Luke
* Pressing backspace now undoes building/removing track.
May 31, 2003 Luke
* Make build track moves undoable.
May 31, 2003 Luke
* Cargo gets added to stations based on what they supply, currently this is triggered by the ‘Add cargo to stations’ item on the game menu.
May 19, 2003 Scott
* Fixed the problem and deviation from the design ;-) of the station cargo calculations, there’s now a temporary menu item on the display menu. Use this to manually update the cargo supply rates.
May 18, 2003 Luke
* Uses the new engine and wagon images on the select wagon, select engine, and train info dialogue boxes.
May 18, 2003 Scott
* The cargo supplied to a station can now be viewed from the menu, although some more work is needed.
May 16, 2003 Luke
* Now loads tile-sized track images instead of grabbing them from the big image.
May 12, 2003 Luke
* Now prints out the time it takes to startup.
May 11, 2003 Luke
* Track is shown on the overview map again.
* Rules about on what terrain track can be built have been added, this is driven by terrain category.
May 10, 2003 Luke
* Rejig track and terrain graphics file names following discussion on mailing list.
* Generated side-on and overhead train graphics.
May 05, 2003 Luke
* Added station info dialogue.
* Fixed some bugs related to loading games and starting new games.
May 05, 2003 Luke
* Changed map view classes to use a VolatileImage for a backbuffer.
May 05, 2003 Luke
* Added terrain info dialogue.
May 03, 2003 Luke
* Fixed river drawing bug.
May 02, 2003 Luke
* The terrain graphics now get loaded correctly although there is a bug in the code that picks the right image for rivers and other types that are drawn in the same way.
May 01, 2003 Luke
* Split up track and terrain images.
Apr 28, 2003 Luke
* Integrate new terrain and cargo xml into game. Temporarily lost terrain graphics.
Apr 19, 2003 Luke
* More work on schedule GUI, you can set change the station that a train is going to.
Apr 19, 2003 Luke
* Work on train schedule GUI.
Apr 16, 2003 Luke
* Added NonNullElements WorldIterator which iterates over non-null elements
* Stations now get removed when you remove the track beneath them
* Station name renderer and train building and pathfinding classes updated to handle null values for stations gracefully.
Apr 10, 2003 Scott
* Added City Names
* Added Random City Tile positioning.
* Cities are now no longer related to the image map. Positions are determined by the data in the south_america_cities.xml file.
Apr 04, 2003 Luke
* Simple train schedules: set the 4 points on the track that trains will travel between by pressing F1 - F4 over the track.
Apr 04, 2003 Luke
* Added package comments for javadoc.
Mar 22, 2003 Luke
* Got the game running again!
Mar 19, 2003 Luke
* Refactored to use the new world interface, does not run yet.
Mar 10, 2003 Luke
* Fixed bug [ 684596 ] ant build failed
Mar 10, 2003 Luke
* Added the MapViewJComponentMouseAdapter in MapViewJComponentConcrete.java contributed by <NAME> -
it scrolls the main map while pressing the second mouse button.
Mar 10, 2003 Luke
* Added mnemonics contributed by <NAME>
Jan 24, 2003 Luke
* Release refactorings.
Jan 12, 2003 Luke
* Fixed javadoc errors.
Jan 12, 2003 Luke
* Major refactoring
* added ant target, checkdep, to check that the dependencies between packages are in order. What it does is copy the java files from a package, together with the java files from all the packages that it is allowed to depend on, to a temporary directory. It then compiles the java files from the package in question in that temporary directory.
If the build succeeds, then the package dependencies are ok.
Jan 11, 2003 Luke
* Refactoring and removing dead code.
Jan 10, 2003 Luke
* Added package.html to freerails.moves
* refactoring to simplify the move classes.
Dec 22, 2002 <NAME>
* Added ‘Newspaper’ option to ‘game’ menu to test drawing on the glass panel. The same technique can be used for dialogue boxes.
Dec 04, 2002 <NAME>
* The classes from the fastUtils library that are needed by freerails have been added to the freerails source tree, so you no longer need fastUtils.jar on the classpath to compile and run freerails.
Dec 01, 2002 <NAME>
* Prepare for release.
Dec 01, 2002 <NAME>say
* The trains no longer all move at the same speed.
Nov 30, 2002 <NAME>
* Load, save, and new game now work again.
Nov 30, 2002 <NAME>
* The path finder now controls train movement. Press t with the cursor over the track and all the trains will head for that point on the track.
Nov 27, 2002 <NAME>
* Wrote SimpleAStarPathFinder and a unit test for it. It seems to work. The next step is use it together with NewFlatTrackExplorer to control train movement.
Nov 26, 2002 <NAME>
* More or less finished NewFlatTrackExplorer and incorporated it into the main game code.
Nov 26, 2002 <NAME>
* Wrote NewFlatTrackExplorer and NewFlatTrackExplorerTest, in preparation for writing a pathfinder.
Nov 24, 2002 <NAME>
* Rewrote PositionOnTrack and added PositionOnTrackTest. track positions can now be store as a single int.
Nov 24, 2002 <NAME>
* Organise imports.
Nov 09, 2002 <NAME>
* Changes to how the main map’s buffer gets refreshed. Instead of the refresh being driven by the cursor moving,
it is now driven by moves being received. This means that it will refresh even if the moves are generated by another player.
Nov 08, 2002 <NAME>
* Stations can be built by pressing F8.
* The station types no longer appear with the track types on the build menu.
Nov 06, 2002 <NAME>
* Fixed ‘jar_doc’ task in build.xml
Nov 05, 2002 <NAME>
* Moving trains: the class ServerGameEngine has a list of TrainMover objects, which control the movement of individual trains.
Movement is triggered by calls to ServerGameEngine.update() in the GameLoop’s run() method.
Nov 03, 2002 <NAME>
* Improvements to TrainPosition and ChangeTrainPositionMove classes
Oct 28, 2002 <NAME>
* Fix javadoc warnings
* Add ‘upload to sourceforge’ task to build.xml
* Add world_javadoc task to build xml.
Oct 27, 2002 <NAME>
* Wrote ChangeTrainPositionMove and ChangeTrainPositionTest
Oct 27, 2002 <NAME>
* Wrote TrainPosition and TrainPositionTest to replace Snake class.
Oct 16, 2002 <NAME>
* Removed cyclic dependencies from the rest of the project.
Oct 16, 2002 <NAME>
* Refactored the freerails.world.* packages so that (1) freerails.world.* do not depend on any other freerails packages.
(2) there are no cyclic dependencies between any of the freerails.world.* packages. This should make it easier to maintain.
Oct 13, 2002 <NAME>
* Added trains! They don’t move yet. Hit F7 when the cursor is over the track to build one.
Oct 13, 2002 <NAME>:
* Add a task to build.xml that runs all junit tests.
* Change build.xml to work under Eclipse.
Sep 29, 2002 <NAME>:
* Reorganised package structure.
* Changed files that were incorrectly added to the cvs as binaries to text
* Small changes to build.xml so that the ChangeLog, TODO, and build.xml files are included in distributions.
* Changed DOMLoader so that it works correctly when reading files from a jar archive.
Sep 24, 2002 <NAME>:
* Updated TrainDemo, it now draws wagons rather than lines.
Sep 23, 2002 <NAME>say:
* Wrote a simple demo, TrainDemo, to try out using FreerailsPathIterator and PathWalker to move trains along a track.
To see it in action, run: experimental.RunTrainDemo
Sep 22, 2002 Luke Lindsay:
* wrote PathWalkerImpl and PathWalkerImplTest
Sep 19, 2002 <NAME>say:
* wrote SimplePathIteratorImpl and SimplePathIteratorImplTest
* removed the method boolean canStepForward(int distance) from the interface PathWalker so that looking ahead is not required.
Sep 16, 2002 <NAME>say:
* Updated and commented FreerailsPathIterator and PathWalker interfaces.
* build.xml written by JonLS added. (Sorry, I - forgot to add it to the change log earlier.)
Sep 08, 2002 <NAME>say:
* Wrote ‘Snake’ class that represents a train position.
Aug 26, 2002 Luke Lindsay:
* Games can now be loaded and saved.
* New games can be started.
Aug 18, 2002 Luke Lindsay:
* More work on active rendering fixes for linux.
Jul 28, 2002 <NAME>say:
* Partially fixed active rendering under linux.
Jul 04, 2002 <NAME>say:
* Rotate method added to OneTileMoveVector
21 Jun, 2002 <NAME>:
* Fullscreen mode
* GameLoop, freerails now uses active, rather than passive, rendering.
* Work on separating the model and view.
* Tilesets can be validated against rulesets - ViewLists.validate(Type t)
* FPS counter added.
Mar 04, 2002 <NAME>say:
* Rearrange dependencies in freerails.world…
Mar 02, 2002 <NAME>say:
* Reorganisation of package structure.
Feb 16, 2002 Luke Lindsay:
* Unrecoverable FreerailsExceptions replaced with standard unchecked exceptions.
* Changed CVS directory structure.
* This ChangeLog started!
phidget | rust | Rust | Crate phidget
===
Safe Rust bindings to the phidget22 library.
Re-exports
---
* `pub use phidget_sys as ffi;`
* `pub use crate::phidget::AttachCallback;`
* `pub use crate::phidget::DetachCallback;`
* `pub use crate::phidget::GenericPhidget;`
* `pub use crate::phidget::Phidget;`
* `pub use crate::net::ServerType;`
* `pub use crate::hub::Hub;`
* `pub use crate::hub::HubPortMode;`
* `pub use crate::humidity_sensor::HumiditySensor;`
* `pub use crate::temperature_sensor::TemperatureSensor;`
* `pub use crate::digital_io::DigitalInput;`
* `pub use crate::digital_io::DigitalOutput;`
* `pub use crate::voltage_io::VoltageInput;`
* `pub use crate::voltage_io::VoltageOutput;`
* `pub use crate::errors::*;`
Modules
---
* digital_ioPhidget digital I/O
* errorsThe error types for the crate
* hubPhidget hub
* humidity_sensorPhidget humidity sensor
* netPhidget network API
* phidgetThe main Phidget trait
* temperature_sensorPhidget temperature sensor
* voltage_ioPhidget voltage I/O
Enums
---
* ChannelClassPhidget channel class
* DeviceClassPhidget device class
Constants
---
* PHIDGET_CHANNEL_ANY
* PHIDGET_HUBPORTSPEED_AUTO
* PHIDGET_HUBPORT_ANY
* PHIDGET_SERIALNUMBER_ANY
* PHIDGET_TIMEOUT_DEFAULT
* PHIDGET_TIMEOUT_INFINITE
* TIMEOUT_DEFAULTThe default timeout for the library
* TIMEOUT_INFINITEAn infinite timeout (wait forever)
Functions
---
* library_versionGets the full version of the phidget22 library as a string.
This is something like, “Phidget22 - Version 1.14 - Built Mar 31 2023 22:44:59”
* library_version_numberGets just the version number of the phidget22 library as a string.
This is something like, “1.14”
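A minimal usage sketch (not taken verbatim from the crate docs) tying these pieces together: it creates a `Hub`, opens it with a timeout via the `Phidget` trait, and queries a couple of the generic channel properties documented later on this page. Only `Hub::new`, `open_wait`, `device_class_name`, `serial_number`, and `close` are taken from this documentation; using `expect` for error handling is a simplification chosen for the illustration.
```
// Hedged sketch: open a VINT hub and query generic channel info.
use std::time::Duration;
use phidget::{Hub, Phidget};
fn main() {
// Create the hub channel and wait up to five seconds for it to attach.
let mut hub = Hub::new();
hub.open_wait(Duration::from_secs(5))
.expect("no Phidget hub attached within 5 s");
// Generic `Phidget` trait queries work on any channel type.
let class = hub.device_class_name().expect("device class name");
let serial = hub.serial_number().expect("serial number");
println!("Attached {class} (serial {serial})");
hub.close().expect("failed to close the hub channel");
}
```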
Type Definition phidget::phidget::AttachCallback
===
```
pub type AttachCallback = dyn Fn(&GenericPhidget) + Send + 'static;
```
The signature for device attach callbacks
Type Definition phidget::phidget::DetachCallback
===
```
pub type DetachCallback = dyn Fn(&GenericPhidget) + Send + 'static;
```
The signature for device detach callbacks
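A hedged sketch of how these callback signatures are used in practice: the `Hub::set_on_attach_handler` and `set_on_detach_handler` methods shown later in this document accept any closure matching `Fn(&GenericPhidget) + Send + 'static`, and the handler receives the non-owning `GenericPhidget` wrapper described below.
```
// Hedged sketch: registering attach/detach handlers on a hub channel.
use phidget::{Hub, Phidget};
fn main() {
let mut hub = Hub::new();
// The closures must satisfy the AttachCallback/DetachCallback signatures above.
hub.set_on_attach_handler(|_ph| println!("hub channel attached"))
.expect("failed to set attach handler");
hub.set_on_detach_handler(|_ph| println!("hub channel detached"))
.expect("failed to set detach handler");
// Opening with the default timeout fires the attach handler once the
// physical device connects.
hub.open_wait_default().expect("failed to open the hub");
}
```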
Struct phidget::phidget::GenericPhidget
===
```
pub struct GenericPhidget { /* private fields */ }
```
A wrapper for a generic phidget.
This contains a wrapper around a generic PhidgetHandle, which might be any type of device. It can be queried for additional information and potentially converted into a specific device object.
This is a non-owning object. It will not release the underlying Phidget when dropped. It is typically used to wrap a generic handle sent to a callback from the phidget22 library.
Implementations
---
### impl GenericPhidget
#### pub fn new(phid: PhidgetHandle) -> Self
Creates a new, generic phidget for the handle.
Trait Implementations
---
### impl From<*mut_Phidget> for GenericPhidget
#### fn from(phid: PhidgetHandle) -> Self
Converts to this type from the input type.### impl Phidget for GenericPhidget
#### fn as_handle(&mut self) -> PhidgetHandle
Get the phidget handle for the device
#### fn open(&mut self) -> Result<()Attempt to open the humidity sensor for input.#### fn open_wait(&mut self, to: Duration) -> Result<()Attempt to open the humidity sensor for input, waiting a limited time for it to connect.#### fn open_wait_default(&mut self) -> Result<()Attempt to open the humidity sensor for input, waiting the default time for it to connect.#### fn close(&mut self) -> Result<()Closes the channel#### fn is_open(&mut self) -> Result<boolDetermines if the channel is open#### fn is_attached(&mut self) -> Result<boolDetermines if the channel is open and attached to a device.#### fn is_local(&mut self) -> Result<boolDetermines if the channel is open locally (not over a network).#### fn set_local(&mut self, local: bool) -> Result<()Set true to open the channel locally (not over a network).#### fn is_remote(&mut self) -> Result<boolDetermines if the channel is open remotely (over a network).#### fn set_remote(&mut self, rem: bool) -> Result<()Set true to open the channel locally, (not over a network).#### fn data_interval(&mut self) -> Result<DurationGets the data interval for the device, if supported.#### fn set_data_interval(&mut self, interval: Duration) -> Result<()Sets the data interval for the device, if supported.#### fn min_data_interval(&mut self) -> Result<DurationGets the minimum data interval for the device, if supported.#### fn max_data_interval(&mut self) -> Result<DurationGets the maximum data interval for the device, if supported.#### fn data_rate(&mut self) -> Result<f64Gets the data update rate for the device, if supported.#### fn set_data_rate(&mut self, freq: f64) -> Result<()Sets the data update rate for the device, if supported.#### fn min_data_rate(&mut self) -> Result<f64Gets the minimum data interval for the device, if supported.#### fn max_data_rate(&mut self) -> Result<f64Gets the maximum data interval for the device, if supported.#### fn device_channel_count(&mut self, cls: ChannelClass) -> Result<u32Get the number of channels of the specified class on the device.#### fn channel_class(&mut self) -> Result<ChannelClassGets class of the channel#### fn channel_class_name(&mut self) -> Result<StringGet the name of the channel class#### fn channel_name(&mut self) -> Result<StringGet the channel’s name.#### fn device_class(&mut self) -> Result<DeviceClassGets class of the device#### fn device_class_name(&mut self) -> Result<StringGet the name of the device class#### fn is_hub_port_device(&mut self) -> Result<boolDetermines whether this channel is a VINT Hub port channel, or part of a VINT device attached to a hub port.#### fn set_is_hub_port_device(&mut self, on: bool) -> Result<()Specify whether this channel should be opened on a VINT Hub port directly, or on a VINT device attached to a hub port.
This must be set before the channel is opened.#### fn hub_port(&mut self) -> Result<i32Gets the index of the port on the VINT Hub to which the channel is attached.#### fn set_hub_port(&mut self, port: i32) -> Result<()Gets the index of the port on the VINT Hub to which the channel is attached.
Set to PHIDGET_HUBPORT_ANY to open the channel on any port of the hub.
This must be set before the channel is opened.#### fn channel(&mut self) -> Result<i32Gets the channel index of the device.#### fn set_channel(&mut self, chan: i32) -> Result<()Sets the channel index to be opened.
The default channel is 0. Set to PHIDGET_CHANNEL_ANY to open any channel on the specified device. This must be set before the channel is opened.#### fn serial_number(&mut self) -> Result<i32Gets the serial number of the device.
If the channel is part of a VINT device, this is the serial number of the VINT Hub to which the device is attached.#### fn set_serial_number(&mut self, sn: i32) -> Result<()Sets the device serial number to be opened.
Leave un-set, or set to PHIDGET_SERIALNUMBER_ANY to open any serial number. If the channel is part of a VINT device, this is the serial number of the VINT Hub to which the device is attached.
This must be set before the channel is opened.### impl Send for GenericPhidget
Auto Trait Implementations
---
### impl RefUnwindSafe for GenericPhidget
### impl !Sync for GenericPhidget
### impl Unpin for GenericPhidget
### impl UnwindSafe for GenericPhidget
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Trait phidget::phidget::Phidget
===
```
pub trait Phidget: Send {
// Required method
fn as_handle(&mut self) -> PhidgetHandle;
// Provided methods
fn open(&mut self) -> Result<()> { ... }
fn open_wait(&mut self, to: Duration) -> Result<()> { ... }
fn open_wait_default(&mut self) -> Result<()> { ... }
fn close(&mut self) -> Result<()> { ... }
fn is_open(&mut self) -> Result<bool> { ... }
fn is_attached(&mut self) -> Result<bool> { ... }
fn is_local(&mut self) -> Result<bool> { ... }
fn set_local(&mut self, local: bool) -> Result<()> { ... }
fn is_remote(&mut self) -> Result<bool> { ... }
fn set_remote(&mut self, rem: bool) -> Result<()> { ... }
fn data_interval(&mut self) -> Result<Duration> { ... }
fn set_data_interval(&mut self, interval: Duration) -> Result<()> { ... }
fn min_data_interval(&mut self) -> Result<Duration> { ... }
fn max_data_interval(&mut self) -> Result<Duration> { ... }
fn data_rate(&mut self) -> Result<f64> { ... }
fn set_data_rate(&mut self, freq: f64) -> Result<()> { ... }
fn min_data_rate(&mut self) -> Result<f64> { ... }
fn max_data_rate(&mut self) -> Result<f64> { ... }
fn device_channel_count(&mut self, cls: ChannelClass) -> Result<u32> { ... }
fn channel_class(&mut self) -> Result<ChannelClass> { ... }
fn channel_class_name(&mut self) -> Result<String> { ... }
fn channel_name(&mut self) -> Result<String> { ... }
fn device_class(&mut self) -> Result<DeviceClass> { ... }
fn device_class_name(&mut self) -> Result<String> { ... }
fn is_hub_port_device(&mut self) -> Result<bool> { ... }
fn set_is_hub_port_device(&mut self, on: bool) -> Result<()> { ... }
fn hub_port(&mut self) -> Result<i32> { ... }
fn set_hub_port(&mut self, port: i32) -> Result<()> { ... }
fn channel(&mut self) -> Result<i32> { ... }
fn set_channel(&mut self, chan: i32) -> Result<()> { ... }
fn serial_number(&mut self) -> Result<i32> { ... }
fn set_serial_number(&mut self, sn: i32) -> Result<()> { ... }
}
```
The base trait and implementation for Phidgets
Required Methods
---
#### fn as_handle(&mut self) -> PhidgetHandle
Get the phidget handle for the device
Provided Methods
---
#### fn open(&mut self) -> Result<()Attempt to open the channel.
#### fn open_wait(&mut self, to: Duration) -> Result<()Attempt to open the channel, waiting a limited time for it to connect.
#### fn open_wait_default(&mut self) -> Result<()Attempt to open the channel, waiting the default time for it to connect.
#### fn close(&mut self) -> Result<()Closes the channel
#### fn is_open(&mut self) -> Result<boolDetermines if the channel is open
#### fn is_attached(&mut self) -> Result<boolDetermines if the channel is open and attached to a device.
#### fn is_local(&mut self) -> Result<boolDetermines if the channel is open locally (not over a network).
#### fn set_local(&mut self, local: bool) -> Result<()Set true to open the channel locally (not over a network).
#### fn is_remote(&mut self) -> Result<boolDetermines if the channel is open remotely (over a network).
#### fn set_remote(&mut self, rem: bool) -> Result<()Set true to open the channel remotely (over a network).
#### fn data_interval(&mut self) -> Result<DurationGets the data interval for the device, if supported.
#### fn set_data_interval(&mut self, interval: Duration) -> Result<()Sets the data interval for the device, if supported.
#### fn min_data_interval(&mut self) -> Result<DurationGets the minimum data interval for the device, if supported.
#### fn max_data_interval(&mut self) -> Result<DurationGets the maximum data interval for the device, if supported.
#### fn data_rate(&mut self) -> Result<f64Gets the data update rate for the device, if supported.
#### fn set_data_rate(&mut self, freq: f64) -> Result<()Sets the data update rate for the device, if supported.
#### fn min_data_rate(&mut self) -> Result<f64Gets the minimum data rate for the device, if supported.
#### fn max_data_rate(&mut self) -> Result<f64Gets the maximum data rate for the device, if supported.
#### fn device_channel_count(&mut self, cls: ChannelClass) -> Result<u32Get the number of channels of the specified class on the device.
#### fn channel_class(&mut self) -> Result<ChannelClassGets class of the channel
#### fn channel_class_name(&mut self) -> Result<StringGet the name of the channel class
#### fn channel_name(&mut self) -> Result<StringGet the channel’s name.
#### fn device_class(&mut self) -> Result<DeviceClassGets class of the device
#### fn device_class_name(&mut self) -> Result<StringGet the name of the device class
#### fn is_hub_port_device(&mut self) -> Result<boolDetermines whether this channel is a VINT Hub port channel, or part of a VINT device attached to a hub port.
#### fn set_is_hub_port_device(&mut self, on: bool) -> Result<()Specify whether this channel should be opened on a VINT Hub port directly, or on a VINT device attached to a hub port.
This must be set before the channel is opened.
#### fn hub_port(&mut self) -> Result<i32Gets the index of the port on the VINT Hub to which the channel is attached.
#### fn set_hub_port(&mut self, port: i32) -> Result<()Sets the index of the port on the VINT Hub on which the channel should be opened.
Set to PHIDGET_HUBPORT_ANY to open the channel on any port of the hub.
This must be set before the channel is opened.
#### fn channel(&mut self) -> Result<i32Gets the channel index of the device.
#### fn set_channel(&mut self, chan: i32) -> Result<()Sets the channel index to be opened.
The default channel is 0. Set to PHIDGET_CHANNEL_ANY to open any channel on the specified device. This must be set before the channel is opened.
#### fn serial_number(&mut self) -> Result<i32Gets the serial number of the device.
If the channel is part of a VINT device, this is the serial number of the VINT Hub to which the device is attached.
#### fn set_serial_number(&mut self, sn: i32) -> Result<()Sets the device serial number to be opened.
Leave un-set, or set to PHIDGET_SERIALNUMBER_ANY to open any serial number. If the channel is part of a VINT device, this is the serial number of the VINT Hub to which the device is attached.
This must be set before the channel is opened.
Implementors
---
### impl Phidget for DigitalInput
### impl Phidget for DigitalOutput
### impl Phidget for Hub
### impl Phidget for HumiditySensor
### impl Phidget for TemperatureSensor
### impl Phidget for VoltageInput
### impl Phidget for VoltageOutput
### impl Phidget for GenericPhidget
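Because every device struct listed above implements `Phidget`, code can be written generically against the trait. Below is a small hedged sketch of such a helper; it uses only provided methods documented above, and it assumes the crate-level `Result` alias is re-exported from the `errors` module (per the `pub use crate::errors::*;` re-export).
```
// Hedged sketch: a helper that works for any Phidget channel type
// (Hub, HumiditySensor, TemperatureSensor, ...). Assumes `phidget::Result`
// is the crate's Result alias re-exported via `errors::*`.
use phidget::Phidget;
fn describe<P: Phidget>(ph: &mut P) -> phidget::Result<String> {
Ok(format!(
"{}/{} (serial {}, attached: {})",
ph.device_class_name()?,
ph.channel_class_name()?,
ph.serial_number()?,
ph.is_attached()?,
))
}
```
For example, after opening a `Hub` or `HumiditySensor`, calling `describe(&mut dev)` returns a one-line summary regardless of the concrete device type.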
Enum phidget::net::ServerType
===
```
#[repr(u32)]
pub enum ServerType {
None,
DeviceListener,
Device,
DeviceRemote,
WwwListener,
Www,
WwwRemote,
Sbc,
}
```
Phidget server types
Variants
---
### None
### DeviceListener
### Device
### DeviceRemote
### WwwListener
### Www
### WwwRemote
### Sbc
Trait Implementations
---
### impl Clone for ServerType
#### fn clone(&self) -> ServerType
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn cmp(&self, other: &ServerType) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &ServerType) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<ServerType> for ServerType
#### fn partial_cmp(&self, other: &ServerType) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### type Error = ReturnCode
The type returned in the event of a conversion error.#### fn try_from(val: u32) -> Result<SelfPerforms the conversion.### impl Copy for ServerType
### impl Eq for ServerType
### impl StructuralEq for ServerType
### impl StructuralPartialEq for ServerType
Auto Trait Implementations
---
### impl RefUnwindSafe for ServerType
### impl Send for ServerType
### impl Sync for ServerType
### impl Unpin for ServerType
### impl UnwindSafe for ServerType
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct phidget::hub::Hub
===
```
pub struct Hub { /* private fields */ }
```
Phidget Hub
Implementations
---
### impl Hub
#### pub fn new() -> Self
Create a new hub.
#### pub fn port_mode(&self, port: i32) -> Result<HubPortMode>
Get the mode of the specified hub port.
#### pub fn set_port_mode(&self, port: i32, mode: HubPortMode) -> Result<()>
Set the mode of the specified hub port.
#### pub fn set_on_attach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive attach callbacks
#### pub fn set_on_detach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive detach callbacks
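Example: a minimal, illustrative sketch of hub usage based on the signatures above (not part of the upstream crate docs). It assumes the `Phidget` trait, documented later in this reference, is in scope for `open_wait`, and that `HubPortMode` implements `Debug`:

```
use std::time::Duration;
use phidget::hub::{Hub, HubPortMode};
use phidget::phidget::Phidget; // brings open_wait() into scope

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut hub = Hub::new();
    // Wait up to 5 seconds for the hub to attach.
    hub.open_wait(Duration::from_secs(5))?;

    // Put port 0 into plain digital-output mode, then read the mode back.
    hub.set_port_mode(0, HubPortMode::DigitalOutput)?;
    println!("port 0 mode: {:?}", hub.port_mode(0)?);
    Ok(())
}
```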
Trait Implementations
---
### impl Default for Hub
#### fn default() -> Self
Returns the “default value” for a type.
### impl Drop for Hub
#### fn drop(&mut self)
Executes the destructor for this type.
### impl From<HubHandle> for Hub
#### fn from(chan: HubHandle) -> Self
Converts to this type from the input type.
### impl Phidget for Hub
#### fn as_handle(&mut self) -> PhidgetHandle
Get the phidget handle for the device.
#### fn open(&mut self) -> Result<()>
Attempt to open the channel.
#### fn open_wait(&mut self, to: Duration) -> Result<()>
Attempt to open the channel, waiting a limited time for it to connect.
#### fn open_wait_default(&mut self) -> Result<()>
Attempt to open the channel, waiting the default time for it to connect.
#### fn close(&mut self) -> Result<()>
Closes the channel.
#### fn is_open(&mut self) -> Result<bool>
Determines if the channel is open.
#### fn is_attached(&mut self) -> Result<bool>
Determines if the channel is open and attached to a device.
#### fn is_local(&mut self) -> Result<bool>
Determines if the channel is open locally (not over a network).
#### fn set_local(&mut self, local: bool) -> Result<()>
Set true to open the channel locally (not over a network).
#### fn is_remote(&mut self) -> Result<bool>
Determines if the channel is open remotely (over a network).
#### fn set_remote(&mut self, rem: bool) -> Result<()>
Set true to open the channel remotely (over a network).
#### fn data_interval(&mut self) -> Result<Duration>
Gets the data interval for the device, if supported.
#### fn set_data_interval(&mut self, interval: Duration) -> Result<()>
Sets the data interval for the device, if supported.
#### fn min_data_interval(&mut self) -> Result<Duration>
Gets the minimum data interval for the device, if supported.
#### fn max_data_interval(&mut self) -> Result<Duration>
Gets the maximum data interval for the device, if supported.
#### fn data_rate(&mut self) -> Result<f64>
Gets the data update rate for the device, if supported.
#### fn set_data_rate(&mut self, freq: f64) -> Result<()>
Sets the data update rate for the device, if supported.
#### fn min_data_rate(&mut self) -> Result<f64>
Gets the minimum data update rate for the device, if supported.
#### fn max_data_rate(&mut self) -> Result<f64>
Gets the maximum data update rate for the device, if supported.
#### fn device_channel_count(&mut self, cls: ChannelClass) -> Result<u32>
Get the number of channels of the specified class on the device.
#### fn channel_class(&mut self) -> Result<ChannelClass>
Gets the class of the channel.
#### fn channel_class_name(&mut self) -> Result<String>
Get the name of the channel class.
#### fn channel_name(&mut self) -> Result<String>
Get the channel’s name.
#### fn device_class(&mut self) -> Result<DeviceClass>
Gets the class of the device.
#### fn device_class_name(&mut self) -> Result<String>
Get the name of the device class.
#### fn is_hub_port_device(&mut self) -> Result<bool>
Determines whether this channel is a VINT Hub port channel, or part of a VINT device attached to a hub port.
#### fn set_is_hub_port_device(&mut self, on: bool) -> Result<()>
Specify whether this channel should be opened on a VINT Hub port directly, or on a VINT device attached to a hub port. This must be set before the channel is opened.
#### fn hub_port(&mut self) -> Result<i32>
Gets the index of the port on the VINT Hub to which the channel is attached.
#### fn set_hub_port(&mut self, port: i32) -> Result<()>
Sets the index of the port on the VINT Hub on which to open the channel. Set to PHIDGET_HUBPORT_ANY to open the channel on any port of the hub. This must be set before the channel is opened.
#### fn channel(&mut self) -> Result<i32>
Gets the channel index of the device.
#### fn set_channel(&mut self, chan: i32) -> Result<()>
Sets the channel index to be opened. The default channel is 0. Set to PHIDGET_CHANNEL_ANY to open any channel on the specified device. This must be set before the channel is opened.
#### fn serial_number(&mut self) -> Result<i32>
Gets the serial number of the device. If the channel is part of a VINT device, this is the serial number of the VINT Hub to which the device is attached.
#### fn set_serial_number(&mut self, sn: i32) -> Result<()>
Sets the device serial number to be opened. Leave un-set, or set to PHIDGET_SERIALNUMBER_ANY to open any serial number. If the channel is part of a VINT device, this is the serial number of the VINT Hub to which the device is attached. This must be set before the channel is opened.
### impl Send for Hub
Auto Trait Implementations
---
### impl RefUnwindSafe for Hub
### impl !Sync for Hub
### impl Unpin for Hub
### impl UnwindSafe for Hub
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum phidget::hub::HubPortMode
===
```
#[repr(u32)]
pub enum HubPortMode {
Vint,
DigitalInput,
DigitalOutput,
VoltageInput,
VoltageRatioInput,
}
```
Possible operational modes for a hub port
Variants
---
### Vint
Communicate with a smart VINT device
### DigitalInput
5V Logic-level digital input
### DigitalOutput
3.3V digital output
### VoltageInput
0-5V voltage input for non-ratiometric sensors
### VoltageRatioInput
0-5V voltage input for ratiometric sensors
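Because the enum is `#[repr(u32)]` and, as the trait implementations below show, converts from a raw `u32` via `TryFrom` with `ReturnCode` as the error type, raw values coming back from the underlying phidget22 C library can be mapped into it. A small, illustrative sketch (the value `0` is assumed here to correspond to VINT mode):

```
use phidget::hub::HubPortMode;

fn main() {
    // An unrecognised raw value yields an error ReturnCode instead of panicking.
    match HubPortMode::try_from(0u32) {
        Ok(mode) => println!("port mode: {mode:?}"),
        Err(_) => println!("unrecognised port-mode value"),
    }
}
```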
Trait Implementations
---
### impl Clone for HubPortMode
#### fn clone(&self) -> HubPortMode
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HubPortMode
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Ord for HubPortMode
#### fn cmp(&self, other: &HubPortMode) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self where Self: Sized
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self where Self: Sized
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>
Restrict a value to a certain interval.
### impl PartialEq<HubPortMode> for HubPortMode
#### fn eq(&self, other: &HubPortMode) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<HubPortMode> for HubPortMode
#### fn partial_cmp(&self, other: &HubPortMode) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl TryFrom<u32> for HubPortMode
#### type Error = ReturnCode
The type returned in the event of a conversion error.
#### fn try_from(val: u32) -> Result<Self>
Performs the conversion.
### impl Copy for HubPortMode
### impl Eq for HubPortMode
### impl StructuralEq for HubPortMode
### impl StructuralPartialEq for HubPortMode
Auto Trait Implementations
---
### impl RefUnwindSafe for HubPortMode
### impl Send for HubPortMode
### impl Sync for HubPortMode
### impl Unpin for HubPortMode
### impl UnwindSafe for HubPortMode
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `ToOwned`, `TryFrom`, `TryInto`) are identical to those listed for `ServerType` above.
Struct phidget::humidity_sensor::HumiditySensor
===
```
pub struct HumiditySensor { /* private fields */ }
```
Phidget humidity sensor
Implementations
---
### impl HumiditySensor
#### pub fn new() -> Self
Create a new humidity sensor.
#### pub fn as_channel(&self) -> &HumiditySensorHandle
Get a reference to the underlying sensor handle
#### pub fn humidity(&self) -> Result<f64>
Read the current humidity value.
#### pub fn set_on_humidity_change_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&HumiditySensor, f64) + Send + 'static,
Sets a handler to receive humidity change callbacks.
#### pub fn set_on_attach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive attach callbacks
#### pub fn set_on_detach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive detach callbacks
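Taken together with the `Phidget` trait methods listed below, these calls support both polling and callback-driven reads. A minimal, illustrative sketch (not part of the upstream crate docs):

```
use std::time::Duration;
use phidget::humidity_sensor::HumiditySensor;
use phidget::phidget::Phidget; // for open_wait()

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut sensor = HumiditySensor::new();
    sensor.open_wait(Duration::from_secs(5))?;

    // One-shot read, then subscribe to change events.
    println!("humidity: {:.1} %RH", sensor.humidity()?);
    sensor.set_on_humidity_change_handler(|_sensor, rh| {
        println!("humidity changed: {rh:.1} %RH");
    })?;

    std::thread::sleep(Duration::from_secs(30)); // let callbacks arrive
    Ok(())
}
```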
Trait Implementations
---
### impl Default for HumiditySensor
#### fn default() -> Self
Returns the “default value” for a type.
### impl Drop for HumiditySensor
#### fn drop(&mut self)
Executes the destructor for this type.
### impl From<HumiditySensorHandle> for HumiditySensor
#### fn from(chan: HumiditySensorHandle) -> Self
Converts to this type from the input type.
### impl Phidget for HumiditySensor
#### fn as_handle(&mut self) -> PhidgetHandle
Get the phidget handle for the device.
The remaining `Phidget` trait methods (open/close, attachment queries, data interval/rate accessors, and hub port, channel, and serial-number addressing) are the same as those listed under the `Phidget` implementation for `Hub` above.
### impl Send for HumiditySensor
Auto Trait Implementations
---
### impl RefUnwindSafe for HumiditySensor
### impl !Sync for HumiditySensor
### impl Unpin for HumiditySensor
### impl UnwindSafe for HumiditySensor
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`) are identical to those listed for `Hub` above.
Struct phidget::temperature_sensor::TemperatureSensor
===
```
pub struct TemperatureSensor { /* private fields */ }
```
Phidget temperature sensor
Implementations
---
### impl TemperatureSensor
#### pub fn new() -> Self
Create a new temperature sensor.
#### pub fn as_channel(&self) -> &TemperatureSensorHandle
Get a reference to the underlying sensor handle
#### pub fn temperature(&self) -> Result<f64>
Read the current temperature.
#### pub fn set_on_temperature_change_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&TemperatureSensor, f64) + Send + 'static,
Set a handler to receive temperature change callbacks.
#### pub fn set_on_attach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive attach callbacks
#### pub fn set_on_detach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive detach callbacks
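An illustrative sketch of a one-shot read on a specific VINT hub port (not part of the upstream crate docs; the hub port number is hypothetical and the returned value is assumed to be in degrees Celsius, following phidget22 conventions):

```
use std::time::Duration;
use phidget::temperature_sensor::TemperatureSensor;
use phidget::phidget::Phidget; // for set_hub_port() and open_wait()

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut sensor = TemperatureSensor::new();
    sensor.set_hub_port(1)?; // hypothetical wiring: VINT hub port 1
    sensor.open_wait(Duration::from_secs(5))?;

    // Value assumed to be in degrees Celsius (per phidget22 conventions).
    println!("temperature: {:.2} C", sensor.temperature()?);
    Ok(())
}
```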
Trait Implementations
---
### impl Default for TemperatureSensor
#### fn default() -> Self
Returns the “default value” for a type.
### impl Drop for TemperatureSensor
#### fn drop(&mut self)
Executes the destructor for this type.
### impl From<TemperatureSensorHandle> for TemperatureSensor
#### fn from(chan: TemperatureSensorHandle) -> Self
Converts to this type from the input type.
### impl Phidget for TemperatureSensor
#### fn as_handle(&mut self) -> PhidgetHandle
Get the phidget handle for the device.
The remaining `Phidget` trait methods (open/close, attachment queries, data interval/rate accessors, and hub port, channel, and serial-number addressing) are the same as those listed under the `Phidget` implementation for `Hub` above.
### impl Send for TemperatureSensor
Auto Trait Implementations
---
### impl RefUnwindSafe for TemperatureSensor
### impl !Sync for TemperatureSensor
### impl Unpin for TemperatureSensor
### impl UnwindSafe for TemperatureSensor
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`) are identical to those listed for `Hub` above.
Struct phidget::digital_io::DigitalInput
===
```
pub struct DigitalInput { /* private fields */ }
```
Phidget digital input
Implementations
---
### impl DigitalInput
#### pub fn new() -> Self
Create a new digital input.
#### pub fn as_channel(&self) -> &DigitalInputHandle
Get a reference to the underlying sensor handle
#### pub fn state(&self) -> Result<i32>
Get the state of the digital input channel.
#### pub fn set_on_state_change_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&DigitalInput, i32) + Send + 'static,
Sets a handler to receive digital input state change callbacks.
#### pub fn set_on_attach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive attach callbacks
#### pub fn set_on_detach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive detach callbacks
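A minimal, illustrative sketch combining a polled read with a state-change callback (not part of the upstream crate docs):

```
use std::time::Duration;
use phidget::digital_io::DigitalInput;
use phidget::phidget::Phidget; // for open_wait()

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut input = DigitalInput::new();
    input.open_wait(Duration::from_secs(5))?;

    println!("initial state: {}", input.state()?);
    input.set_on_state_change_handler(|_ch, state| {
        println!("state changed: {state}");
    })?;

    std::thread::sleep(Duration::from_secs(30)); // let callbacks arrive
    Ok(())
}
```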
Trait Implementations
---
### impl Default for DigitalInput
#### fn default() -> Self
Returns the “default value” for a type.
### impl Drop for DigitalInput
#### fn drop(&mut self)
Executes the destructor for this type.
### impl From<DigitalInputHandle> for DigitalInput
#### fn from(chan: DigitalInputHandle) -> Self
Converts to this type from the input type.
### impl Phidget for DigitalInput
#### fn as_handle(&mut self) -> PhidgetHandle
Get the phidget handle for the device.
The remaining `Phidget` trait methods (open/close, attachment queries, data interval/rate accessors, and hub port, channel, and serial-number addressing) are the same as those listed under the `Phidget` implementation for `Hub` above.
### impl Send for DigitalInput
Auto Trait Implementations
---
### impl RefUnwindSafe for DigitalInput
### impl !Sync for DigitalInput
### impl Unpin for DigitalInput
### impl UnwindSafe for DigitalInput
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`) are identical to those listed for `Hub` above.
Struct phidget::digital_io::DigitalOutput
===
```
pub struct DigitalOutput { /* private fields */ }
```
Phidget digital output
Implementations
---
### impl DigitalOutput
#### pub fn new() -> Self
Create a new digital output.
#### pub fn state(&self) -> Result<i32>
Get the state of the digital output channel.
#### pub fn set_state(&self, state: i32) -> Result<()>
Set the state of the digital output. This overrides any duty cycle that was previously set.
#### pub fn duty_cycle(&self) -> Result<f64>
Get the duty cycle of the digital output channel. This is the fraction of the time the output is high. A value of 1.0 means constantly high; 0.0 means constantly low.
#### pub fn set_duty_cycle(&self, dc: f64) -> Result<()>
Set the duty cycle of the digital output. This is the fraction of the time the output is high. A value of 1.0 means constantly high; 0.0 means constantly low.
#### pub fn set_on_attach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive attach callbacks
#### pub fn set_on_detach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive detach callbacks
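An illustrative sketch of driving the output, first with a fixed state and then with a duty cycle (not part of the upstream crate docs):

```
use std::time::Duration;
use phidget::digital_io::DigitalOutput;
use phidget::phidget::Phidget; // for open_wait()

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut out = DigitalOutput::new();
    out.open_wait(Duration::from_secs(5))?;

    out.set_state(1)?; // drive the output high
    std::thread::sleep(Duration::from_secs(1));

    out.set_duty_cycle(0.25)?; // output high ~25% of the time
    println!("duty cycle now {}", out.duty_cycle()?);
    Ok(())
}
```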
Trait Implementations
---
### impl Default for DigitalOutput
#### fn default() -> Self
Returns the “default value” for a type.
### impl Drop for DigitalOutput
#### fn drop(&mut self)
Executes the destructor for this type.
### impl From<DigitalOutputHandle> for DigitalOutput
#### fn from(chan: DigitalOutputHandle) -> Self
Converts to this type from the input type.
### impl Phidget for DigitalOutput
#### fn as_handle(&mut self) -> PhidgetHandle
Get the phidget handle for the device.
The remaining `Phidget` trait methods (open/close, attachment queries, data interval/rate accessors, and hub port, channel, and serial-number addressing) are the same as those listed under the `Phidget` implementation for `Hub` above.
### impl Send for DigitalOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for DigitalOutput
### impl !Sync for DigitalOutput
### impl Unpin for DigitalOutput
### impl UnwindSafe for DigitalOutput
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`) are identical to those listed for `Hub` above.
Struct phidget::voltage_io::VoltageInput
===
```
pub struct VoltageInput { /* private fields */ }
```
Phidget voltage input
Implementations
---
### impl VoltageInput
#### pub fn new() -> Self
Create a new voltage input.
#### pub fn as_channel(&self) -> &VoltageInputHandle
Get a reference to the underlying sensor handle
#### pub fn voltage(&self) -> Result<f64>
Get the voltage on the input channel.
#### pub fn set_on_voltage_change_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&VoltageInput, f64) + Send + 'static,
Sets a handler to receive voltage change callbacks.
#### pub fn set_on_attach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive attach callbacks
#### pub fn set_on_detach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive detach callbacks
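A minimal, illustrative sketch of polling the input and subscribing to voltage changes (not part of the upstream crate docs):

```
use std::time::Duration;
use phidget::voltage_io::VoltageInput;
use phidget::phidget::Phidget; // for open_wait()

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut vin = VoltageInput::new();
    vin.open_wait(Duration::from_secs(5))?;

    println!("voltage: {:.3} V", vin.voltage()?);
    vin.set_on_voltage_change_handler(|_ch, v| {
        println!("voltage changed: {v:.3} V");
    })?;

    std::thread::sleep(Duration::from_secs(30)); // let callbacks arrive
    Ok(())
}
```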
Trait Implementations
---
### impl Default for VoltageInput
#### fn default() -> Self
Returns the “default value” for a type.
### impl Drop for VoltageInput
#### fn drop(&mut self)
Executes the destructor for this type.
### impl From<VoltageInputHandle> for VoltageInput
#### fn from(chan: VoltageInputHandle) -> Self
Converts to this type from the input type.
### impl Phidget for VoltageInput
#### fn as_handle(&mut self) -> PhidgetHandle
Get the phidget handle for the device.
The remaining `Phidget` trait methods (open/close, attachment queries, data interval/rate accessors, and hub port, channel, and serial-number addressing) are the same as those listed under the `Phidget` implementation for `Hub` above.
### impl Send for VoltageInput
Auto Trait Implementations
---
### impl RefUnwindSafe for VoltageInput
### impl !Sync for VoltageInput
### impl Unpin for VoltageInput
### impl UnwindSafe for VoltageInput
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`) are identical to those listed for `Hub` above.
Struct phidget::voltage_io::VoltageOutput
===
```
pub struct VoltageOutput { /* private fields */ }
```
Phidget voltage output
Implementations
---
### impl VoltageOutput
#### pub fn new() -> Self
Create a new voltage output.
#### pub fn voltage(&self) -> Result<f64>
Get the voltage value that the channel will output.
#### pub fn set_voltage(&self, v: f64) -> Result<()>
Set the voltage value that the channel will output.
#### pub fn set_on_attach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive attach callbacks
#### pub fn set_on_detach_handler<F>(&mut self, cb: F) -> Result<()>where
F: Fn(&GenericPhidget) + Send + 'static,
Sets a handler to receive detach callbacks
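An illustrative sketch of setting and reading back the output voltage (not part of the upstream crate docs; the usable voltage range depends on the particular device):

```
use std::time::Duration;
use phidget::voltage_io::VoltageOutput;
use phidget::phidget::Phidget; // for open_wait()

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut vout = VoltageOutput::new();
    vout.open_wait(Duration::from_secs(5))?;

    vout.set_voltage(2.5)?; // request 2.5 V on the output
    println!("output set to {:.2} V", vout.voltage()?);
    Ok(())
}
```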
Trait Implementations
---
### impl Default for VoltageOutput
#### fn default() -> Self
Returns the “default value” for a type.
### impl Drop for VoltageOutput
#### fn drop(&mut self)
Executes the destructor for this type.
### impl From<VoltageOutputHandle> for VoltageOutput
#### fn from(chan: VoltageOutputHandle) -> Self
Converts to this type from the input type.
### impl Phidget for VoltageOutput
#### fn as_handle(&mut self) -> PhidgetHandle
Get the phidget handle for the device.
The remaining `Phidget` trait methods (open/close, attachment queries, data interval/rate accessors, and hub port, channel, and serial-number addressing) are the same as those listed under the `Phidget` implementation for `Hub` above.
### impl Send for VoltageOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for VoltageOutput
### impl !Sync for VoltageOutput
### impl Unpin for VoltageOutput
### impl UnwindSafe for VoltageOutput
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`) are identical to those listed for `Hub` above.
Module phidget::errors
===
The error types for the crate
Enums
---
* ReturnCode: Return codes from the phidget22 library.
These are all the integer success/failure codes returned by the calls to the phidget22 library. A zero indicates success, whereas any other value indicates failure. These are unsigned, so all errors are >0.
This type is a Rust std::error::Error.
Type Definitions
---
* Error: The error type for the crate is a phidget22 return code.
* Result: The default result type for the phidget-rs library.
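A small, illustrative sketch of error propagation (not part of the upstream crate docs; it assumes the `Result` alias takes the success type as its only parameter, with the crate's `Error`, i.e. `ReturnCode`, as the error type):

```
use phidget::errors::Result;
use phidget::hub::Hub;
use phidget::phidget::Phidget; // for open_wait_default()

// Any failing phidget22 call maps to a ReturnCode, which propagates with `?`.
fn open_hub() -> Result<Hub> {
    let mut hub = Hub::new();
    hub.open_wait_default()?;
    Ok(hub)
}

fn main() {
    match open_hub() {
        Ok(_hub) => println!("hub attached"),
        Err(code) => eprintln!("failed to open hub: {code}"),
    }
}
```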
Module phidget::digital_io
===
Phidget digital I/O
Structs
---
* DigitalInput: Phidget digital input
* DigitalOutput: Phidget digital output
Type Definitions
---
* DigitalInputCallback: The function signature for the safe Rust digital input state change callback.
Module phidget::hub
===
Phidget hub
Structs
---
* Hub: Phidget Hub
Enums
---
* HubPortMode: Possible operational modes for a hub port
Module phidget::humidity_sensor
===
Phidget humidity sensor
Structs
---
* HumiditySensor: Phidget humidity sensor
Type Definitions
---
* HumidityCallback: The function signature for the safe Rust humidity change callback.
Module phidget::net
===
Phidget network API
Enums
---
* ServerType: Phidget server types
Functions
---
* add_server: Register a server to which the client will try to connect.
* disable_server: Prevents attempts to automatically connect to a server.
* disable_server_discovery: Disables the dynamic discovery of servers that publish their identity. This does not disconnect already established connections.
* enable_server: Enables attempts to connect to a discovered server, if attempts were previously disabled by `disable_server()`.
* enable_server_discovery: Enables the dynamic discovery of servers that publish their identity to the network. Currently Multicast DNS is used to discover and publish Phidget servers.
* remove_all_servers: Removes all registered servers.
* remove_server: Removes the registration for a server.
* set_server_passward: Sets the password that will be used to attempt to connect to the server. If the server has not already been added or discovered, a placeholder server entry will be registered to use this password on the server once it is discovered.
Module phidget::phidget
===
The main Phidget trait
Structs
---
* GenericPhidget: A wrapper for a generic phidget.
Traits
---
* Phidget: The base trait and implementation for Phidgets
Functions
---
* set_on_attach_handler: Assigns a handler that will be called when the Attach event occurs for a matching phidget.
* set_on_detach_handler: Assigns a handler that will be called when the Detach event occurs for a matching Phidget.
Type Definitions
---
* AttachCallback: The signature for device attach callbacks
* DetachCallback: The signature for device detach callbacks
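Since `Phidget` is the common trait for all channel types, addressing helpers can be written generically. An illustrative sketch (not part of the upstream crate docs; the `Result` alias from `phidget::errors` is assumed to take only the success type):

```
use std::time::Duration;
use phidget::errors::Result;
use phidget::phidget::Phidget;

// Configure where a channel lives (hub port + channel index) before opening it.
// Per the trait docs, these setters must be called before the channel is opened.
fn open_on_hub_port<P: Phidget>(ch: &mut P, port: i32, channel: i32) -> Result<()> {
    ch.set_hub_port(port)?;
    ch.set_channel(channel)?;
    ch.open_wait(Duration::from_secs(5))
}
```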
Module phidget::temperature_sensor
===
Phidget temperature sensor
Structs
---
* TemperatureSensor: Phidget temperature sensor
Type Definitions
---
* TemperatureCallback: The function type for the safe Rust temperature change callback.
Module phidget::voltage_io
===
Phidget voltage I/O
Structs
---
* VoltageInput: Phidget voltage input
* VoltageOutput: Phidget voltage output
Type Definitions
---
* VoltageChangeCallback: The function signature for the safe Rust voltage change callback.
Enum phidget::ChannelClass
===
```
#[repr(u32)]
pub enum ChannelClass {
Nothing,
Accelerometer,
BldcMotor,
CaptiveTouch,
CurrentInput,
CurrentOutput,
DataAdapter,
DcMotor,
Dictionary,
DigitalInput,
DigitalOutput,
DistanceSensor,
Encoder,
FirmwareUpgrade,
FrequencyCounter,
Generic,
Gps,
Gyroscope,
Hub,
HumiditySensor,
Ir,
Lcd,
LightSensor,
Magnetometer,
MeshDongle,
MotorPositionController,
MotorVelocityController,
PhSensor,
PowerGuard,
PressureSensor,
RcServo,
ResistanceInput,
Rfid,
SoundSensor,
Spatial,
Stepper,
TemperatureSensor,
VoltageInput,
VoltageOutput,
VoltageRatioInput,
}
```
Phidget channel class
Variants
---
### Nothing
### Accelerometer
### BldcMotor
### CaptiveTouch
### CurrentInput
### CurrentOutput
### DataAdapter
### DcMotor
### Dictionary
### DigitalInput
### DigitalOutput
### DistanceSensor
### Encoder
### FirmwareUpgrade
### FrequencyCounter
### Generic
### Gps
### Gyroscope
### Hub
### HumiditySensor
### Ir
### Lcd
### LightSensor
### Magnetometer
### MeshDongle
### MotorPositionController
### MotorVelocityController
### PhSensor
### PowerGuard
### PressureSensor
### RcServo
### ResistanceInput
### Rfid
### SoundSensor
### Spatial
### Stepper
### TemperatureSensor
### VoltageInput
### VoltageOutput
### VoltageRatioInput
Trait Implementations
---
### impl Clone for ChannelClass
#### fn clone(&self) -> ChannelClass
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ChannelClass
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Ord for ChannelClass
#### fn cmp(&self, other: &ChannelClass) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self where Self: Sized
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self where Self: Sized
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>
Restrict a value to a certain interval.
### impl PartialEq<ChannelClass> for ChannelClass
#### fn eq(&self, other: &ChannelClass) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<ChannelClass> for ChannelClass
#### fn partial_cmp(&self, other: &ChannelClass) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl TryFrom<u32> for ChannelClass
#### type Error = ReturnCode
The type returned in the event of a conversion error.
#### fn try_from(val: u32) -> Result<Self>
Performs the conversion.
### impl Copy for ChannelClass
### impl Eq for ChannelClass
### impl StructuralEq for ChannelClass
### impl StructuralPartialEq for ChannelClass
Auto Trait Implementations
---
### impl RefUnwindSafe for ChannelClass
### impl Send for ChannelClass
### impl Sync for ChannelClass
### impl Unpin for ChannelClass
### impl UnwindSafe for ChannelClass
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `ToOwned`, `TryFrom`, `TryInto`) are identical to those listed for `ServerType` above.
Enum phidget::DeviceClass
===
```
#[repr(u32)]
pub enum DeviceClass {
Nothing,
Accelerometer,
AdvancedServo,
Analog,
Bridge,
DataAdapter,
Dictionary,
Encoder,
FirmwareUpgrade,
FrequencyCounter,
Generic,
Gps,
Hub,
InterfaceKit,
Ir,
Led,
MeshDongle,
MotorControl,
PhSensor,
Rfid,
Servo,
Spatial,
Steper,
TemperatreSensor,
TextLcd,
Vint,
}
```
Phidget device class
Variants
---
### Nothing
### Accelerometer
### AdvancedServo
### Analog
### Bridge
### DataAdapter
### Dictionary
### Encoder
### FirmwareUpgrade
### FrequencyCounter
### Generic
### Gps
### Hub
### InterfaceKit
### Ir
### Led
### MeshDongle
### MotorControl
### PhSensor
### Rfid
### Servo
### Spatial
### Steper
### TemperatreSensor
### TextLcd
### Vint
Trait Implementations
---
### impl Clone for DeviceClass
#### fn clone(&self) -> DeviceClass
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn cmp(&self, other: &DeviceClass) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &DeviceClass) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<DeviceClass> for DeviceClass
#### fn partial_cmp(&self, other: &DeviceClass) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### type Error = ReturnCode
The type returned in the event of a conversion error.#### fn try_from(val: u32) -> Result<SelfPerforms the conversion.### impl Copy for DeviceClass
### impl Eq for DeviceClass
### impl StructuralEq for DeviceClass
### impl StructuralPartialEq for DeviceClass
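A hedged sketch (the helper name is illustrative): since `DeviceClass` is `Copy`, `Eq` and `PartialEq` per the impls above, values can be compared directly:
```
use phidget::DeviceClass;
// Illustrative helper: check whether a device class denotes a hub or VINT device.
fn is_hub_or_vint(class: DeviceClass) -> bool {
class == DeviceClass::Hub || class == DeviceClass::Vint
}
```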
Auto Trait Implementations
---
### impl RefUnwindSafe for DeviceClass
### impl Send for DeviceClass
### impl Sync for DeviceClass
### impl Unpin for DeviceClass
### impl UnwindSafe for DeviceClass
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Constant phidget::TIMEOUT_DEFAULT
===
```
pub const TIMEOUT_DEFAULT: Duration;
```
The default timeout for the library
Constant phidget::TIMEOUT_INFINITE
===
```
pub const TIMEOUT_INFINITE: Duration;
```
An infinite timeout (wait forever)
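A hedged sketch: both constants are ordinary `std::time::Duration` values, so they can be passed anywhere a `Duration` is expected (the helper below is illustrative):
```
use std::time::Duration;
use phidget::{TIMEOUT_DEFAULT, TIMEOUT_INFINITE};
// Illustrative helper: choose between the default and the infinite timeout.
fn pick_timeout(wait_forever: bool) -> Duration {
if wait_forever { TIMEOUT_INFINITE } else { TIMEOUT_DEFAULT }
}
```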
Function phidget::library_version
===
```
pub fn library_version() -> Result<String>
```
Gets the full version of the phidget22 library as a string.
This is something like, “Phidget22 - Version 1.14 - Built Mar 31 2023 22:44:59”
Function phidget::library_version_number
===
```
pub fn library_version_number() -> Result<String>
```
Gets just the version number of the phidget22 library as a string.
This is something like, “1.14” |
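A minimal sketch printing both version strings (only the two functions documented above are assumed):
```
fn main() {
// Full human-readable version string, e.g. "Phidget22 - Version 1.14 - Built ..."
match phidget::library_version() {
Ok(full) => println!("{}", full),
Err(_) => eprintln!("failed to query the phidget22 library version"),
}
// Just the numeric version, e.g. "1.14"
if let Ok(number) = phidget::library_version_number() {
println!("version number: {}", number);
}
}
```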
google-cloudasset1 | rust | Rust | Crate google_cloudasset1
===
This documentation was generated from *Cloud Asset* crate version *5.0.3+20230121*, where *20230121* is the exact revision of the *cloudasset:v1* schema built by the mako code generator *v5.0.3*.
Everything else about the *Cloud Asset* *v1* API can be found at the official documentation site.
The original source code is on github.
Features
---
Handle the following *Resources* with ease from the central hub …
* assets
* *list*
* effective iam policies
* *batch get*
* feeds
* *create*, *delete*, *get*, *list* and *patch*
* operations
* *get*
* saved queries
* *create*, *delete*, *get*, *list* and *patch*
Other activities are …
* analyze iam policy
* analyze iam policy longrunning
* analyze move
* analyze org policies
* analyze org policy governed assets
* analyze org policy governed containers
* batch get assets history
* export assets
* query assets
* search all iam policies
* search all resources
Not what you are looking for? Find all other Google APIs in their Rust documentation index.
Structure of this Library
---
The API is structured into the following primary items:
* **Hub**
+ a central object to maintain state and allow accessing all *Activities*
+ creates *Method Builders* which in turn
allow access to individual *Call Builders*
* **Resources**
+ primary types that you can apply *Activities* to
+ a collection of properties and *Parts*
+ **Parts**
- a collection of properties
- never directly used in *Activities*
* **Activities**
+ operations to apply to *Resources*
All *structures* are marked with applicable traits to further categorize them and ease browsing.
Generally speaking, you can invoke *Activities* like this:
```
let r = hub.resource().activity(...).doit().await
```
Or specifically …
```
let r = hub.feeds().create(...).doit().await
let r = hub.feeds().delete(...).doit().await
let r = hub.feeds().get(...).doit().await
let r = hub.feeds().list(...).doit().await
let r = hub.feeds().patch(...).doit().await
```
The `resource()` and `activity(...)` calls create builders. The second one dealing with `Activities`
supports various methods to configure the impending operation (not shown here). It is made such that all required arguments have to be
specified right away (i.e. `(...)`), whereas all optional ones can be built up as desired.
The `doit()` method performs the actual communication with the server and returns the respective result.
Usage
---
### Setting up your Project
To use this library, you would put the following lines into your `Cargo.toml` file:
```
[dependencies]
google-cloudasset1 = "*"
serde = "^1.0"
serde_json = "^1.0"
```
### A complete example
```
extern crate hyper;
extern crate hyper_rustls;
extern crate google_cloudasset1 as cloudasset1;
use cloudasset1::api::CreateFeedRequest;
use cloudasset1::{Result, Error};
use std::default::Default;
use cloudasset1::{CloudAsset, oauth2, hyper, hyper_rustls, chrono, FieldMask};
// Get an ApplicationSecret instance by some means. It contains the `client_id` and
// `client_secret`, among other things.
let secret: oauth2::ApplicationSecret = Default::default();
// Instantiate the authenticator. It will choose a suitable authentication flow for you,
// unless you replace `None` with the desired Flow.
// Provide your own `AuthenticatorDelegate` to adjust the way it operates and get feedback about
// what's going on. You probably want to bring in your own `TokenStorage` to persist tokens and
// retrieve them from storage.
let auth = oauth2::InstalledFlowAuthenticator::builder(
secret,
oauth2::InstalledFlowReturnMethod::HTTPRedirect,
).build().await.unwrap();
let mut hub = CloudAsset::new(hyper::Client::builder().build(hyper_rustls::HttpsConnectorBuilder::new().with_native_roots().https_or_http().enable_http1().build()), auth);
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = CreateFeedRequest::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.feeds().create(req, "parent")
.doit().await;
match result {
Err(e) => match e {
// The Error enum provides details about what exactly happened.
// You can also just use its `Debug`, `Display` or `Error` traits
Error::HttpError(_)
|Error::Io(_)
|Error::MissingAPIKey
|Error::MissingToken(_)
|Error::Cancelled
|Error::UploadSizeLimitExceeded(_, _)
|Error::Failure(_)
|Error::BadRequest(_)
|Error::FieldClash(_)
|Error::JsonDecodeError(_, _) => println!("{}", e),
},
Ok(res) => println!("Success: {:?}", res),
}
```
### Handling Errors
All errors produced by the system are provided either as the Result enumeration returned by the doit() methods, or handed as possibly intermediate results to either the
Hub Delegate or the Authenticator Delegate.
When delegates handle errors or intermediate values, they may have a chance to instruct the system to retry. This
makes the system potentially resilient to all kinds of errors.
### Uploads and Downloads
If a method supports downloads, the response body, which is part of the Result, should be read by you to obtain the media.
If such a method also supports a Response Result, it will return that by default.
You can see it as meta-data for the actual media. To trigger a media download, you will have to set up the builder by making this call: `.param("alt", "media")`.
Methods supporting uploads can do so using up to 2 different protocols:
*simple* and *resumable*. The distinctiveness of each is represented by customized
`doit(...)` methods, which are then named `upload(...)` and `upload_resumable(...)` respectively.
### Customization and Callbacks
You may alter the way a `doit()` method is called by providing a delegate to the
Method Builder before making the final `doit()` call.
Respective methods will be called to provide progress information, as well as determine whether the system should
retry on failure.
The delegate trait is default-implemented, allowing you to customize it with minimal effort.
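As a hedged illustration of this mechanism (the struct and its name are hypothetical; only the `Delegate` trait and the `delegate()` setter come from these docs), a delegate relying entirely on the default implementations can be attached to a call builder, using the `cloudasset1` alias from the example above:
```
use cloudasset1::Delegate;
// Hypothetical no-op delegate: every method falls back to the trait's defaults.
struct NoopDelegate;
impl Delegate for NoopDelegate {}
let mut dlg = NoopDelegate;
let result = hub.feeds().get("projects/123/feeds/my-feed")
.delegate(&mut dlg)
.doit().await;
```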
### Optional Parts in Server-Requests
All structures provided by this library are made to be encodable and
decodable via *json*. Optionals are used to indicate that partial requests and responses
are valid.
Most optionals are considered Parts which are identifiable by name, which will be sent to
the server to indicate either the set parts of the request or the desired parts in the response.
### Builder Arguments
Using method builders, you are able to prepare an action call by repeatedly calling its methods.
These will always take a single argument, for which the following statements are true.
* PODs are handed by copy
* strings are passed as `&str`
* request values are moved
Arguments will always be copied or cloned into the builder, to make them independent of their original life times.
Re-exports
---
* `pub extern crate google_apis_common as client;`
* `pub use api::CloudAsset;`
* `pub use hyper;`
* `pub use hyper_rustls;`
* `pub use client::chrono;`
* `pub use client::oauth2;`
Modules
---
* api
Structs
---
* FieldMaskA `FieldMask` as defined in `https://github.com/protocolbuffers/protobuf/blob/ec1a70913e5793a7d0a7b5fbf7e0e4f75409dd41/src/google/protobuf/field_mask.proto#L180`
Enums
---
* Error
Traits
---
* DelegateA trait specifying functionality to help controlling any request performed by the API.
The trait has a conservative default implementation.
Type Aliases
---
* ResultA universal result type used as return for all calls.
Struct google_cloudasset1::api::EffectiveIamPolicyBatchGetCall
===
```
pub struct EffectiveIamPolicyBatchGetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Gets effective IAM policies for a batch of resources.
A builder for the *batchGet* method supported by a *effectiveIamPolicy* resource.
It is not used directly, but through a `EffectiveIamPolicyMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.effective_iam_policies().batch_get("scope")
.add_names("amet.")
.doit().await;
```
Implementations
---
### impl<'a, S> EffectiveIamPolicyBatchGetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(
self
) -> Result<(Response<Body>, BatchGetEffectiveIamPoliciesResponse)Perform the operation you have built so far.
#### pub fn scope(self, new_value: &str) -> EffectiveIamPolicyBatchGetCall<'a, SRequired. Only IAM policies on or below the scope will be returned. This can only be an organization number (such as “organizations/123”), a folder number (such as “folders/123”), a project ID (such as “projects/my-project-id”), or a project number (such as “projects/12345”). To know how to get organization id, visit here. To know how to get folder or project id, visit here.
Sets the *scope* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn add_names(self, new_value: &str) -> EffectiveIamPolicyBatchGetCall<'a, SRequired. The names refer to the [full_resource_names] (https://cloud.google.com/asset-inventory/docs/resource-name-format) of searchable asset types. A maximum of 20 resources’ effective policies can be retrieved in a batch.
Append the given value to the *names* query property.
Each appended value will retain its original ordering and be ‘/’-separated in the URL’s parameters.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> EffectiveIamPolicyBatchGetCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(
self,
name: T,
value: T
) -> EffectiveIamPolicyBatchGetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> EffectiveIamPolicyBatchGetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
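A hedged fragment in the style of the example above, explicitly pinning the scope instead of relying on the default `Scope::CloudPlatform` (the scope URL and resource names are illustrative):
```
// Values shown here are illustrative and not representative !
let result = hub.effective_iam_policies().batch_get("organizations/123")
.add_names("//cloudresourcemanager.googleapis.com/projects/my-project")
.add_scope("https://www.googleapis.com/auth/cloud-platform")
.doit().await;
```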
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> EffectiveIamPolicyBatchGetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> EffectiveIamPolicyBatchGetCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for EffectiveIamPolicyBatchGetCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for EffectiveIamPolicyBatchGetCall<'a, S### impl<'a, S> Send for EffectiveIamPolicyBatchGetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for EffectiveIamPolicyBatchGetCall<'a, S### impl<'a, S> Unpin for EffectiveIamPolicyBatchGetCall<'a, S### impl<'a, S> !UnwindSafe for EffectiveIamPolicyBatchGetCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_cloudasset1::api::Feed
===
```
pub struct Feed {
pub asset_names: Option<Vec<String>>,
pub asset_types: Option<Vec<String>>,
pub condition: Option<Expr>,
pub content_type: Option<String>,
pub feed_output_config: Option<FeedOutputConfig>,
pub name: Option<String>,
pub relationship_types: Option<Vec<String>>,
}
```
An asset feed used to export asset updates to a destination. An asset feed filter controls what updates are exported. The asset feed must be created within a project, organization, or folder. Supported destinations are: Pub/Sub topics.
Activities
---
This type is used in activities, which are methods you may call on this type or in which this type is involved.
The list links the activity name, along with information about where it is used (one of *request* and *response*).
* create feeds (response)
* delete feeds (none)
* get feeds (response)
* list feeds (none)
* patch feeds (response)
Fields
---
`asset_names: Option<Vec<String>>`A list of the full names of the assets to receive updates. You must specify either or both of asset_names and asset_types. Only asset updates matching specified asset_names or asset_types are exported to the feed. Example: `//compute.googleapis.com/projects/my_project_123/zones/zone1/instances/instance1`. For a list of the full names for supported asset types, see Resource name format.
`asset_types: Option<Vec<String>>`A list of types of the assets to receive updates. You must specify either or both of asset_names and asset_types. Only asset updates matching specified asset_names or asset_types are exported to the feed. Example: `"compute.googleapis.com/Disk"` For a list of all supported asset types, see Supported asset types.
`condition: Option<Expr>`A condition which determines whether an asset update should be published. If specified, an asset will be returned only when the expression evaluates to true. When set, `expression` field in the `Expr` must be a valid [CEL expression] (https://github.com/google/cel-spec) on a TemporalAsset with name `temporal_asset`. Example: a Feed with expression (“temporal_asset.deleted == true”) will only publish Asset deletions. Other fields of `Expr` are optional. See our user guide for detailed instructions.
`content_type: Option<String>`Asset content type. If not specified, no content but the asset name and type will be returned.
`feed_output_config: Option<FeedOutputConfig>`Required. Feed output configuration defining where the asset updates are published to.
`name: Option<String>`Required. The format will be projects/{project_number}/feeds/{client-assigned_feed_identifier} or folders/{folder_number}/feeds/{client-assigned_feed_identifier} or organizations/{organization_number}/feeds/{client-assigned_feed_identifier} The client-assigned feed identifier must be unique within the parent project/folder/organization.
`relationship_types: Option<Vec<String>>`A list of relationship types to output, for example: `INSTANCE_TO_INSTANCEGROUP`. This field should only be specified if content_type=RELATIONSHIP. * If specified: it outputs specified relationship updates on the [asset_names] or the [asset_types]. It returns an error if any of the [relationship_types] doesn’t belong to the supported relationship types of the [asset_names] or [asset_types], or any of the [asset_names] or the [asset_types] doesn’t belong to the source types of the [relationship_types]. * Otherwise: it outputs the supported relationships of the types of [asset_names] and [asset_types] or returns an error if any of the [asset_names] or the [asset_types] has no relationship support. See Introduction to Cloud Asset Inventory for all supported asset types and relationship types.
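As a hedged sketch (field values are illustrative, and the required `feed_output_config` is omitted here and would need a real `FeedOutputConfig` before use), a `Feed` can be assembled with struct-update syntax on its `Default` impl, again using the `cloudasset1` alias:
```
use cloudasset1::api::Feed;
// Illustrative values only !
let feed = Feed {
name: Some("projects/123/feeds/my-feed".to_string()),
asset_types: Some(vec!["compute.googleapis.com/Disk".to_string()]),
content_type: Some("RESOURCE".to_string()),
..Default::default()
};
```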
Trait Implementations
---
### impl Clone for Feed
#### fn clone(&self) -> Feed
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Feed
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl ResponseResult for Feed
Auto Trait Implementations
---
### impl RefUnwindSafe for Feed
### impl Send for Feed
### impl Sync for Feed
### impl Unpin for Feed
### impl UnwindSafe for Feed
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct google_cloudasset1::api::FeedCreateCall
===
```
pub struct FeedCreateCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Creates a feed in a parent project/folder/organization to listen to its asset updates.
A builder for the *create* method supported by a *feed* resource.
It is not used directly, but through a `FeedMethods` instance.
Example
---
Instantiate a resource method builder
```
use cloudasset1::api::CreateFeedRequest;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = CreateFeedRequest::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.feeds().create(req, "parent")
.doit().await;
```
Implementations
---
### impl<'a, S> FeedCreateCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Feed)Perform the operation you have built so far.
#### pub fn request(self, new_value: CreateFeedRequest) -> FeedCreateCall<'a, SSets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn parent(self, new_value: &str) -> FeedCreateCall<'a, SRequired. The name of the project/folder/organization where this feed should be created in. It can only be an organization number (such as “organizations/123”), a folder number (such as “folders/123”), a project ID (such as “projects/my-project-id”)“, or a project number (such as “projects/12345”).
Sets the *parent* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> FeedCreateCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> FeedCreateCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> FeedCreateCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> FeedCreateCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> FeedCreateCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for FeedCreateCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for FeedCreateCall<'a, S### impl<'a, S> Send for FeedCreateCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for FeedCreateCall<'a, S### impl<'a, S> Unpin for FeedCreateCall<'a, S### impl<'a, S> !UnwindSafe for FeedCreateCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_cloudasset1::api::FeedDeleteCall
===
```
pub struct FeedDeleteCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Deletes an asset feed.
A builder for the *delete* method supported by a *feed* resource.
It is not used directly, but through a `FeedMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.feeds().delete("name")
.doit().await;
```
Implementations
---
### impl<'a, S> FeedDeleteCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Empty)Perform the operation you have built so far.
#### pub fn name(self, new_value: &str) -> FeedDeleteCall<'a, SRequired. The name of the feed and it must be in the format of: projects/project_number/feeds/feed_id folders/folder_number/feeds/feed_id organizations/organization_number/feeds/feed_id
Sets the *name* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> FeedDeleteCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> FeedDeleteCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> FeedDeleteCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> FeedDeleteCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> FeedDeleteCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for FeedDeleteCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for FeedDeleteCall<'a, S### impl<'a, S> Send for FeedDeleteCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for FeedDeleteCall<'a, S### impl<'a, S> Unpin for FeedDeleteCall<'a, S### impl<'a, S> !UnwindSafe for FeedDeleteCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_cloudasset1::api::FeedGetCall
===
```
pub struct FeedGetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Gets details about an asset feed.
A builder for the *get* method supported by a *feed* resource.
It is not used directly, but through a `FeedMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.feeds().get("name")
.doit().await;
```
Implementations
---
### impl<'a, S> FeedGetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Feed)Perform the operation you have built so far.
#### pub fn name(self, new_value: &str) -> FeedGetCall<'a, SRequired. The name of the Feed and it must be in the format of: projects/project_number/feeds/feed_id folders/folder_number/feeds/feed_id organizations/organization_number/feeds/feed_id
Sets the *name* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> FeedGetCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> FeedGetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> FeedGetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> FeedGetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> FeedGetCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for FeedGetCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for FeedGetCall<'a, S### impl<'a, S> Send for FeedGetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for FeedGetCall<'a, S### impl<'a, S> Unpin for FeedGetCall<'a, S### impl<'a, S> !UnwindSafe for FeedGetCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_cloudasset1::api::FeedListCall
===
```
pub struct FeedListCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Lists all asset feeds in a parent project/folder/organization.
A builder for the *list* method supported by a *feed* resource.
It is not used directly, but through a `FeedMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.feeds().list("parent")
.doit().await;
```
Implementations
---
### impl<'a, S> FeedListCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ListFeedsResponse)Perform the operation you have built so far.
#### pub fn parent(self, new_value: &str) -> FeedListCall<'a, SRequired. The parent project/folder/organization whose feeds are to be listed. It can only be using project/folder/organization number (such as “folders/12345”)“, or a project ID (such as “projects/my-project-id”).
Sets the *parent* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> FeedListCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> FeedListCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> FeedListCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
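As a sketch, explicitly pinning the call to the standard Cloud Platform OAuth scope looks like this:

```
// Sketch: request the well-known cloud-platform scope instead of relying on
// the default Scope::CloudPlatform variant.
let result = hub.feeds().list("projects/my-project-id")
    .add_scope("https://www.googleapis.com/auth/cloud-platform")
    .doit().await;
```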
#### pub fn add_scopes<I, St>(self, scopes: I) -> FeedListCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> FeedListCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for FeedListCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for FeedListCall<'a, S>
### impl<'a, S> Send for FeedListCall<'a, S> where S: Sync,
### impl<'a, S> !Sync for FeedListCall<'a, S>
### impl<'a, S> Unpin for FeedListCall<'a, S>
### impl<'a, S> !UnwindSafe for FeedListCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_cloudasset1::api::FeedPatchCall
===
```
pub struct FeedPatchCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Updates an asset feed configuration.
A builder for the *patch* method supported by a *feed* resource.
It is not used directly, but through a `FeedMethods` instance.
Example
---
Instantiate a resource method builder
```
use cloudasset1::api::UpdateFeedRequest;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = UpdateFeedRequest::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.feeds().patch(req, "name")
.doit().await;
```
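A slightly fuller sketch of preparing the request is shown below. The `feed` and `update_mask` fields on `UpdateFeedRequest`, and `name` on `Feed`, are assumptions based on the generated model types and should be checked against your crate version:

```
// Sketch under the assumptions noted above: patch an existing feed.
use cloudasset1::api::{Feed, UpdateFeedRequest};
let mut feed = Feed::default();
feed.name = Some("projects/my-project-id/feeds/my-feed".to_string()); // assumed field
let mut req = UpdateFeedRequest::default();
req.feed = Some(feed); // assumed field
let result = hub.feeds()
    .patch(req, "projects/my-project-id/feeds/my-feed")
    .doit().await;
```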
Implementations
---
### impl<'a, S> FeedPatchCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Feed)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: UpdateFeedRequest) -> FeedPatchCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn name(self, new_value: &str) -> FeedPatchCall<'a, S>
Required. The format will be projects/{project_number}/feeds/{client-assigned_feed_identifier} or folders/{folder_number}/feeds/{client-assigned_feed_identifier} or organizations/{organization_number}/feeds/{client-assigned_feed_identifier} The client-assigned feed identifier must be unique within the parent project/folder/organization.
Sets the *name* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> FeedPatchCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> FeedPatchCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> FeedPatchCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> FeedPatchCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> FeedPatchCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for FeedPatchCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for FeedPatchCall<'a, S>
### impl<'a, S> Send for FeedPatchCall<'a, S> where S: Sync,
### impl<'a, S> !Sync for FeedPatchCall<'a, S>
### impl<'a, S> Unpin for FeedPatchCall<'a, S>
### impl<'a, S> !UnwindSafe for FeedPatchCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_cloudasset1::api::Operation
===
```
pub struct Operation {
pub done: Option<bool>,
pub error: Option<Status>,
pub metadata: Option<HashMap<String, Value>>,
pub name: Option<String>,
pub response: Option<HashMap<String, Value>>,
}
```
This resource represents a long-running operation that is the result of a network API call.
Activities
---
This type is used in activities, which are methods you may call on this type or in which this type is involved.
The list links the activity name with information about where it is used (one of *request* or *response*).
* get operations (response)
* analyze iam policy longrunning (response)
* export assets (response)
Fields
---
`done: Option<bool>`
If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
`error: Option<Status>`
The error result of the operation in case of failure or cancellation.
`metadata: Option<HashMap<String, Value>>`
Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
`name: Option<String>`
The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
`response: Option<HashMap<String, Value>>`
The normal response of the operation in case of success. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
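Because `done`, `error`, and `response` jointly encode the operation lifecycle, callers typically branch on them along these lines (a sketch; `Status::message` as an `Option<String>` is an assumption about the model types):

```
// Sketch: interpret an Operation returned by a long-running method.
fn describe(op: &Operation) {
    match op.done {
        Some(true) => match &op.error {
            Some(status) => eprintln!("operation failed: {:?}", status.message), // assumed field
            None => println!("operation succeeded: {:?}", op.response),
        },
        _ => println!("operation {:?} is still running", op.name),
    }
}
```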
Trait Implementations
---
### impl Clone for Operation
#### fn clone(&self) -> Operation
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Operation
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for Operation
#### fn default() -> Operation
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for Operation
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl Serialize for Operation
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl ResponseResult for Operation
Auto Trait Implementations
---
### impl RefUnwindSafe for Operation
### impl Send for Operation
### impl Sync for Operation
### impl Unpin for Operation
### impl UnwindSafe for Operation
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct google_cloudasset1::api::OperationGetCall
===
```
pub struct OperationGetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.
A builder for the *get* method supported by an *operation* resource.
It is not used directly, but through a `OperationMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.operations().get("name")
.doit().await;
```
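A common pattern is to poll this call until the returned `Operation` reports `done`. The sketch below assumes a tokio runtime and an already-configured `hub`; the operation name is purely illustrative:

```
// Sketch: poll a long-running operation until it completes.
let name = "projects/my-project-id/operations/ExportAssets/123"; // illustrative
loop {
    let (_resp, op) = hub.operations().get(name).doit().await?;
    if op.done.unwrap_or(false) {
        if let Some(err) = op.error {
            eprintln!("operation failed: {:?}", err);
        }
        break;
    }
    tokio::time::sleep(std::time::Duration::from_secs(5)).await;
}
```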
Implementations
---
### impl<'a, S> OperationGetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Operation)>
Perform the operation you have built so far.
#### pub fn name(self, new_value: &str) -> OperationGetCall<'a, S>
The name of the operation resource.
Sets the *name* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> OperationGetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> OperationGetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> OperationGetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> OperationGetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> OperationGetCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for OperationGetCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for OperationGetCall<'a, S>
### impl<'a, S> Send for OperationGetCall<'a, S> where S: Sync,
### impl<'a, S> !Sync for OperationGetCall<'a, S>
### impl<'a, S> Unpin for OperationGetCall<'a, S>
### impl<'a, S> !UnwindSafe for OperationGetCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_cloudasset1::api::SavedQuery
===
```
pub struct SavedQuery {
pub content: Option<QueryContent>,
pub create_time: Option<DateTime<Utc>>,
pub creator: Option<String>,
pub description: Option<String>,
pub labels: Option<HashMap<String, String>>,
pub last_update_time: Option<DateTime<Utc>>,
pub last_updater: Option<String>,
pub name: Option<String>,
}
```
A saved query which can be shared with others or used later.
Activities
---
This type is used in activities, which are methods you may call on this type or in which this type is involved.
The list links the activity name with information about where it is used (one of *request* or *response*).
* create saved queries (request|response)
* get saved queries (response)
* patch saved queries (request|response)
Fields
---
`content: Option<QueryContent>`
The query content.
`create_time: Option<DateTime<Utc>>`
Output only. The create time of this saved query.
`creator: Option<String>`
Output only. The email address of the account that created this saved query.
`description: Option<String>`
The description of this saved query. This value should be fewer than 255 characters.
`labels: Option<HashMap<String, String>>`
Labels applied on the resource. This value should not contain more than 10 entries. The key and value of each entry must be non-empty and fewer than 64 characters.
`last_update_time: Option<DateTime<Utc>>`
Output only. The last update time of this saved query.
`last_updater: Option<String>`
Output only. The email address of the account that most recently updated this saved query.
`name: Option<String>`
The resource name of the saved query. The format must be: * projects/project_number/savedQueries/saved_query_id * folders/folder_number/savedQueries/saved_query_id * organizations/organization_number/savedQueries/saved_query_id
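As a sketch, a query suitable for a create or patch request can be assembled field by field; all values below are illustrative:

```
// Sketch: assemble a SavedQuery for a create/patch request.
let mut query = SavedQuery::default();
query.name = Some("projects/12345/savedQueries/my-query".to_string());
query.description = Some("Instances governed by the prod folder".to_string());
query.labels = Some(
    [("team".to_string(), "infra".to_string())].into_iter().collect(),
);
```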
Trait Implementations
---
### impl Clone for SavedQuery
#### fn clone(&self) -> SavedQuery
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for SavedQuery
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for SavedQuery
#### fn default() -> SavedQuery
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for SavedQuery
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl Serialize for SavedQuery
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer,
Serialize this value into the given Serde serializer.
### impl ResponseResult for SavedQuery
Auto Trait Implementations
---
### impl RefUnwindSafe for SavedQuery
### impl Send for SavedQuery
### impl Sync for SavedQuery
### impl Unpin for SavedQuery
### impl UnwindSafe for SavedQuery
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>,
Struct google_cloudasset1::api::SavedQueryDeleteCall
===
```
pub struct SavedQueryDeleteCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Deletes a saved query.
A builder for the *delete* method supported by a *savedQuery* resource.
It is not used directly, but through a `SavedQueryMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.saved_queries().delete("name")
.doit().await;
```
Implementations
---
### impl<'a, S> SavedQueryDeleteCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Empty)>
Perform the operation you have built so far.
#### pub fn name(self, new_value: &str) -> SavedQueryDeleteCall<'a, S>
Required. The name of the saved query to delete. It must be in the format of: * projects/project_number/savedQueries/saved_query_id * folders/folder_number/savedQueries/saved_query_id * organizations/organization_number/savedQueries/saved_query_id
Sets the *name* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> SavedQueryDeleteCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> SavedQueryDeleteCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> SavedQueryDeleteCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> SavedQueryDeleteCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> SavedQueryDeleteCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for SavedQueryDeleteCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for SavedQueryDeleteCall<'a, S>
### impl<'a, S> Send for SavedQueryDeleteCall<'a, S> where S: Sync,
### impl<'a, S> !Sync for SavedQueryDeleteCall<'a, S>
### impl<'a, S> Unpin for SavedQueryDeleteCall<'a, S>
### impl<'a, S> !UnwindSafe for SavedQueryDeleteCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_cloudasset1::api::SavedQueryPatchCall
===
```
pub struct SavedQueryPatchCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Updates a saved query.
A builder for the *patch* method supported by a *savedQuery* resource.
It is not used directly, but through a `SavedQueryMethods` instance.
Example
---
Instantiate a resource method builder
```
use cloudasset1::api::SavedQuery;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = SavedQuery::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.saved_queries().patch(req, "name")
.update_mask(Default::default())
.doit().await;
```
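The sketch below patches only the description. How `FieldMask` is constructed depends on the `google-apis-common` version re-exported by this crate, so `FieldMask::new` over a list of field paths is an assumption to verify:

```
// Sketch under the assumption noted above: update a single field.
let mut req = SavedQuery::default();
req.description = Some("Updated description".to_string());
let result = hub.saved_queries()
    .patch(req, "projects/12345/savedQueries/my-query")
    .update_mask(FieldMask::new(&["description"])) // assumed constructor
    .doit().await;
```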
Implementations
---
### impl<'a, S> SavedQueryPatchCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, SavedQuery)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: SavedQuery) -> SavedQueryPatchCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn name(self, new_value: &str) -> SavedQueryPatchCall<'a, S>
The resource name of the saved query. The format must be: * projects/project_number/savedQueries/saved_query_id * folders/folder_number/savedQueries/saved_query_id * organizations/organization_number/savedQueries/saved_query_id
Sets the *name* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn update_mask(self, new_value: FieldMask) -> SavedQueryPatchCall<'a, S>
Required. The list of fields to update.
Sets the *update mask* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> SavedQueryPatchCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> SavedQueryPatchCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> SavedQueryPatchCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> SavedQueryPatchCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> SavedQueryPatchCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for SavedQueryPatchCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for SavedQueryPatchCall<'a, S>
### impl<'a, S> Send for SavedQueryPatchCall<'a, S> where S: Sync,
### impl<'a, S> !Sync for SavedQueryPatchCall<'a, S>
### impl<'a, S> Unpin for SavedQueryPatchCall<'a, S>
### impl<'a, S> !UnwindSafe for SavedQueryPatchCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_cloudasset1::api::MethodAnalyzeIamPolicyCall
===
```
pub struct MethodAnalyzeIamPolicyCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Analyzes IAM policies to answer which identities have what accesses on which resources.
A builder for the *analyzeIamPolicy* method.
It is not used directly, but through a `MethodMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.methods().analyze_iam_policy("scope")
.saved_analysis_query("rebum.")
.execution_timeout(chrono::Duration::seconds(5840181))
.analysis_query_resource_selector_full_resource_name("ipsum")
.analysis_query_options_output_resource_edges(true)
.analysis_query_options_output_group_edges(true)
.analysis_query_options_expand_roles(false)
.analysis_query_options_expand_resources(true)
.analysis_query_options_expand_groups(false)
.analysis_query_options_analyze_service_account_impersonation(true)
.analysis_query_identity_selector_identity("duo")
.analysis_query_condition_context_access_time(chrono::Utc::now())
.add_analysis_query_access_selector_roles("sed")
.add_analysis_query_access_selector_permissions("no")
.doit().await;
```
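A narrower, result-checking sketch is shown below; the `fully_explored` and `main_analysis` fields of `AnalyzeIamPolicyResponse` are assumptions based on the generated model types, and the identity is illustrative:

```
// Sketch: analyze what a single (illustrative) identity can access and warn
// if the analysis was truncated. Response field names are assumptions.
let (_resp, analysis) = hub.methods()
    .analyze_iam_policy("projects/my-project-id")
    .analysis_query_identity_selector_identity("user:[email protected]")
    .analysis_query_options_expand_groups(true)
    .doit().await?;
if analysis.fully_explored != Some(true) {
    eprintln!("analysis truncated; consider analyze_iam_policy_longrunning");
}
println!("{:?}", analysis.main_analysis);
```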
Implementations
---
### impl<'a, S> MethodAnalyzeIamPolicyCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, AnalyzeIamPolicyResponse)>
Perform the operation you have built so far.
#### pub fn scope(self, new_value: &str) -> MethodAnalyzeIamPolicyCall<'a, S>
Required. The relative name of the root asset. Only resources and IAM policies within the scope will be analyzed. This can only be an organization number (such as “organizations/123”), a folder number (such as “folders/123”), a project ID (such as “projects/my-project-id”), or a project number (such as “projects/12345”). To know how to get organization id, visit here. To know how to get folder or project id, visit here.
Sets the *scope* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn saved_analysis_query(
self,
new_value: &str
) -> MethodAnalyzeIamPolicyCall<'a, S>
Optional. The name of a saved query, which must be in the format of: * projects/project_number/savedQueries/saved_query_id * folders/folder_number/savedQueries/saved_query_id * organizations/organization_number/savedQueries/saved_query_id If both `analysis_query` and `saved_analysis_query` are provided, they will be merged together with the `saved_analysis_query` as base and the `analysis_query` as overrides. For more details of the merge behavior, please refer to the MergeFrom page. Note that you cannot override primitive fields with default value, such as 0 or empty string, etc., because we use proto3, which doesn’t support field presence yet.
Sets the *saved analysis query* query property to the given value.
#### pub fn execution_timeout(
self,
new_value: Duration
) -> MethodAnalyzeIamPolicyCall<'a, S>
Optional. Amount of time executable has to complete. See JSON representation of Duration. If this field is set with a value less than the RPC deadline, and the execution of your query hasn’t finished in the specified execution timeout, you will get a response with partial result. Otherwise, your query’s execution will continue until the RPC deadline. If it’s not finished until then, you will get a DEADLINE_EXCEEDED error. Default is empty.
Sets the *execution timeout* query property to the given value.
#### pub fn analysis_query_resource_selector_full_resource_name(
self,
new_value: &str
) -> MethodAnalyzeIamPolicyCall<'a, S>
Required. The [full resource name] (https://cloud.google.com/asset-inventory/docs/resource-name-format) of a resource of supported resource types.
Sets the *analysis query.resource selector.full resource name* query property to the given value.
#### pub fn analysis_query_options_output_resource_edges(
self,
new_value: bool
) -> MethodAnalyzeIamPolicyCall<'a, S>
Optional. If true, the result will output the relevant parent/child relationships between resources. Default is false.
Sets the *analysis query.options.output resource edges* query property to the given value.
#### pub fn analysis_query_options_output_group_edges(
self,
new_value: bool
) -> MethodAnalyzeIamPolicyCall<'a, S>
Optional. If true, the result will output the relevant membership relationships between groups and other groups, and between groups and principals. Default is false.
Sets the *analysis query.options.output group edges* query property to the given value.
#### pub fn analysis_query_options_expand_roles(
self,
new_value: bool
) -> MethodAnalyzeIamPolicyCall<'a, S>
Optional. If true, the access section of result will expand any roles appearing in IAM policy bindings to include their permissions. If IamPolicyAnalysisQuery.access_selector is specified, the access section of the result will be determined by the selector, and this flag is not allowed to set. Default is false.
Sets the *analysis query.options.expand roles* query property to the given value.
#### pub fn analysis_query_options_expand_resources(
self,
new_value: bool
) -> MethodAnalyzeIamPolicyCall<'a, S>
Optional. If true and IamPolicyAnalysisQuery.resource_selector is not specified, the resource section of the result will expand any resource attached to an IAM policy to include resources lower in the resource hierarchy. For example, if the request analyzes for which resources user A has permission P, and the results include an IAM policy with P on a Google Cloud folder, the results will also include resources in that folder with permission P. If true and IamPolicyAnalysisQuery.resource_selector is specified, the resource section of the result will expand the specified resource to include resources lower in the resource hierarchy. Only project or lower resources are supported. Folder and organization resources cannot be used together with this option. For example, if the request analyzes for which users have permission P on a Google Cloud project with this option enabled, the results will include all users who have permission P on that project or any lower resource. If true, the default max expansion per resource is 1000 for AssetService.AnalyzeIamPolicy and 100000 for AssetService.AnalyzeIamPolicyLongrunning. Default is false.
Sets the *analysis query.options.expand resources* query property to the given value.
#### pub fn analysis_query_options_expand_groups(
self,
new_value: bool
) -> MethodAnalyzeIamPolicyCall<'a, S>
Optional. If true, the identities section of the result will expand any Google groups appearing in an IAM policy binding. If IamPolicyAnalysisQuery.identity_selector is specified, the identity in the result will be determined by the selector, and this flag is not allowed to set. If true, the default max expansion per group is 1000 for AssetService.AnalyzeIamPolicy. Default is false.
Sets the *analysis query.options.expand groups* query property to the given value.
#### pub fn analysis_query_options_analyze_service_account_impersonation(
self,
new_value: bool
) -> MethodAnalyzeIamPolicyCall<'a, S>
Optional. If true, the response will include access analysis from identities to resources via service account impersonation. This is a very expensive operation, because many derived queries will be executed. We highly recommend you use AssetService.AnalyzeIamPolicyLongrunning RPC instead. For example, if the request analyzes for which resources user A has permission P, and there’s an IAM policy that states user A has iam.serviceAccounts.getAccessToken permission to a service account SA, and there’s another IAM policy that states service account SA has permission P to a Google Cloud folder F, then user A potentially has access to the Google Cloud folder F. And those advanced analysis results will be included in AnalyzeIamPolicyResponse.service_account_impersonation_analysis. As another example, if the request analyzes for who has permission P to a Google Cloud folder F, and there’s an IAM policy that states user A has iam.serviceAccounts.actAs permission to a service account SA, and there’s another IAM policy that states service account SA has permission P to the Google Cloud folder F, then user A potentially has access to the Google Cloud folder F. And those advanced analysis results will be included in AnalyzeIamPolicyResponse.service_account_impersonation_analysis. Only the following permissions are considered in this analysis: * `iam.serviceAccounts.actAs` * `iam.serviceAccounts.signBlob` * `iam.serviceAccounts.signJwt` * `iam.serviceAccounts.getAccessToken` * `iam.serviceAccounts.getOpenIdToken` * `iam.serviceAccounts.implicitDelegation` Default is false.
Sets the *analysis query.options.analyze service account impersonation* query property to the given value.
#### pub fn analysis_query_identity_selector_identity(
self,
new_value: &str
) -> MethodAnalyzeIamPolicyCall<'a, S>
Required. The identity appears in the form of principals in an IAM policy binding. The examples of supported forms are: “user:<EMAIL>”, “group:<EMAIL>”, “domain:google.com”, “serviceAccount:[email protected]”. Notice that wildcard characters (such as * and ?) are not supported. You must give a specific identity.
Sets the *analysis query.identity selector.identity* query property to the given value.
#### pub fn analysis_query_condition_context_access_time(
self,
new_value: DateTime<Utc>
) -> MethodAnalyzeIamPolicyCall<'a, S>
The hypothetical access timestamp to evaluate IAM conditions. Note that this value must not be earlier than the current time; otherwise, an INVALID_ARGUMENT error will be returned.
Sets the *analysis query.condition context.access time* query property to the given value.
#### pub fn add_analysis_query_access_selector_roles(
self,
new_value: &str
) -> MethodAnalyzeIamPolicyCall<'a, S>
Optional. The roles to appear in result.
Append the given value to the *analysis query.access selector.roles* query property.
Each appended value will retain its original ordering and be ‘/’-separated in the URL’s parameters.
#### pub fn add_analysis_query_access_selector_permissions(
self,
new_value: &str
) -> MethodAnalyzeIamPolicyCall<'a, S>
Optional. The permissions to appear in result.
Append the given value to the *analysis query.access selector.permissions* query property.
Each appended value will retain its original ordering and be ‘/’-separated in the URL’s parameters.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> MethodAnalyzeIamPolicyCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> MethodAnalyzeIamPolicyCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> MethodAnalyzeIamPolicyCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> MethodAnalyzeIamPolicyCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> MethodAnalyzeIamPolicyCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for MethodAnalyzeIamPolicyCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for MethodAnalyzeIamPolicyCall<'a, S>
### impl<'a, S> Send for MethodAnalyzeIamPolicyCall<'a, S> where S: Sync,
### impl<'a, S> !Sync for MethodAnalyzeIamPolicyCall<'a, S>
### impl<'a, S> Unpin for MethodAnalyzeIamPolicyCall<'a, S>
### impl<'a, S> !UnwindSafe for MethodAnalyzeIamPolicyCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_cloudasset1::api::MethodAnalyzeOrgPolicyGovernedAssetCall
===
```
pub struct MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Analyzes organization policies governed assets (Google Cloud resources or policies) under a scope. This RPC supports custom constraints and the following 10 canned constraints: * storage.uniformBucketLevelAccess * iam.disableServiceAccountKeyCreation * iam.allowedPolicyMemberDomains * compute.vmExternalIpAccess * appengine.enforceServiceAccountActAsCheck * gcp.resourceLocations * compute.trustedImageProjects * compute.skipDefaultNetworkCreation * compute.requireOsLogin * compute.disableNestedVirtualization This RPC only returns either resources of types supported by searchable asset types, or IAM policies.
A builder for the *analyzeOrgPolicyGovernedAssets* method.
It is not used directly, but through a `MethodMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.methods().analyze_org_policy_governed_assets("scope")
.page_token("dolore")
.page_size(-22)
.filter("voluptua.")
.constraint("amet.")
.doit().await;
```
Implementations
---
### impl<'a, S> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(
self
) -> Result<(Response<Body>, AnalyzeOrgPolicyGovernedAssetsResponse)>
Perform the operation you have built so far.
#### pub fn scope(
self,
new_value: &str
) -> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
Required. The organization to scope the request. Only organization policies within the scope will be analyzed. The output assets will also be limited to the ones governed by those in-scope organization policies. * organizations/{ORGANIZATION_NUMBER} (e.g., “organizations/123456”)
Sets the *scope* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn page_token(
self,
new_value: &str
) -> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
The pagination token to retrieve the next page.
Sets the *page token* query property to the given value.
#### pub fn page_size(
self,
new_value: i32
) -> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
The maximum number of items to return per page. If unspecified, AnalyzeOrgPolicyGovernedAssetsResponse.governed_assets will contain 100 items with a maximum of 200.
Sets the *page size* query property to the given value.
#### pub fn filter(
self,
new_value: &str
) -> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
The expression to filter the governed assets in result. The only supported fields for governed resources are `governed_resource.project` and `governed_resource.folders`. The only supported fields for governed iam policies are `governed_iam_policy.project` and `governed_iam_policy.folders`. The only supported operator is `=`. Example 1: governed_resource.project=“projects/12345678” filter will return all governed resources under projects/12345678 including the project itself, if applicable. Example 2: governed_iam_policy.folders=“folders/12345678” filter will return all governed iam policies under folders/12345678, if applicable.
Sets the *filter* query property to the given value.
#### pub fn constraint(
self,
new_value: &str
) -> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
Required. The name of the constraint to analyze governed assets for. The analysis only contains analyzed organization policies for the provided constraint.
Sets the *constraint* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(
self,
name: T,
value: T
) -> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
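As a rough sketch of how the `param` setter documented above can be combined with the builder, the snippet below assumes a `hub` configured as in the earlier example; the constraint name and the `fields` selector value are only illustrative placeholders, not values taken from the API reference.
```
// Sketch only: `hub` is assumed to be an already-configured CloudAsset hub,
// and the `fields` selector below is an illustrative placeholder.
let result = hub.methods().analyze_org_policy_governed_assets("organizations/123456")
    .constraint("constraints/compute.requireOsLogin")
    .param("fields", "governedAssets,nextPageToken")
    .param("prettyPrint", "false")
    .doit().await;
```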
#### pub fn add_scope<St>(
self,
scope: St
) -> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
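For illustration, a minimal sketch of pinning the scope explicitly (again assuming a configured `hub`); the URL string used here is the scope behind the default `Scope::CloudPlatform` variant.
```
// Sketch only: explicitly pin the OAuth scope instead of relying on the default variant.
let result = hub.methods().analyze_org_policy_governed_assets("organizations/123456")
    .add_scope("https://www.googleapis.com/auth/cloud-platform")
    .doit().await;
```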
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
### impl<'a, S> Send for MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
### impl<'a, S> Unpin for MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
### impl<'a, S> !UnwindSafe for MethodAnalyzeOrgPolicyGovernedAssetCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_cloudasset1::api::MethodAnalyzeOrgPolicyGovernedContainerCall
===
```
pub struct MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Analyzes organization policies governed containers (projects, folders or organization) under a scope.
A builder for the *analyzeOrgPolicyGovernedContainers* method.
It is not used directly, but through a `MethodMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.methods().analyze_org_policy_governed_containers("scope")
.page_token("diam")
.page_size(-49)
.filter("et")
.constraint("et")
.doit().await;
```
Implementations
---
### impl<'a, S> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(
self
) -> Result<(Response<Body>, AnalyzeOrgPolicyGovernedContainersResponse)>
Perform the operation you have built so far.
#### pub fn scope(
self,
new_value: &str
) -> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
Required. The organization to scope the request. Only organization policies within the scope will be analyzed. The output containers will also be limited to the ones governed by those in-scope organization policies. * organizations/{ORGANIZATION_NUMBER} (e.g., “organizations/123456”)
Sets the *scope* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn page_token(
self,
new_value: &str
) -> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
The pagination token to retrieve the next page.
Sets the *page token* query property to the given value.
#### pub fn page_size(
self,
new_value: i32
) -> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
The maximum number of items to return per page. If unspecified, AnalyzeOrgPolicyGovernedContainersResponse.governed_containers will contain 100 items with a maximum of 200.
Sets the *page size* query property to the given value.
#### pub fn filter(
self,
new_value: &str
) -> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
The expression to filter the governed containers in result. The only supported field is `parent`, and the only supported operator is `=`. Example: parent=“//cloudresourcemanager.googleapis.com/folders/001” will return all containers under “folders/001”.
Sets the *filter* query property to the given value.
#### pub fn constraint(
self,
new_value: &str
) -> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
Required. The name of the constraint to analyze governed containers for. The analysis only contains organization policies for the provided constraint.
Sets the *constraint* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(
self,
name: T,
value: T
) -> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(
self,
scope: St
) -> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
### impl<'a, S> Send for MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
### impl<'a, S> Unpin for MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
### impl<'a, S> !UnwindSafe for MethodAnalyzeOrgPolicyGovernedContainerCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_cloudasset1::api::MethodBatchGetAssetsHistoryCall
===
```
pub struct MethodBatchGetAssetsHistoryCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Batch gets the update history of assets that overlap a time window. For IAM_POLICY content, this API outputs history when the asset and its attached IAM POLICY both exist. This can create gaps in the output history. Otherwise, this API outputs the asset’s history in both non-deleted and deleted states. If a specified asset does not exist, this API returns an INVALID_ARGUMENT error.
A builder for the *batchGetAssetsHistory* method.
It is not used directly, but through a `MethodMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.methods().batch_get_assets_history("parent")
.add_relationship_types("Stet")
.read_time_window_start_time(chrono::Utc::now())
.read_time_window_end_time(chrono::Utc::now())
.content_type("dolor")
.add_asset_names("duo")
.doit().await;
```
Implementations
---
### impl<'a, S> MethodBatchGetAssetsHistoryCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(
self
) -> Result<(Response<Body>, BatchGetAssetsHistoryResponse)>
Perform the operation you have built so far.
#### pub fn parent(self, new_value: &str) -> MethodBatchGetAssetsHistoryCall<'a, S>
Required. The relative name of the root asset. It can only be an organization number (such as “organizations/123”), a project ID (such as “projects/my-project-id”), or a project number (such as “projects/12345”).
Sets the *parent* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn add_relationship_types(
self,
new_value: &str
) -> MethodBatchGetAssetsHistoryCall<'a, S>
Optional. A list of relationship types to output, for example: `INSTANCE_TO_INSTANCEGROUP`. This field should only be specified if content_type=RELATIONSHIP. * If specified: it outputs specified relationships’ history on the [asset_names]. It returns an error if any of the [relationship_types] doesn’t belong to the supported relationship types of the [asset_names] or if any of the [asset_names]’s types doesn’t belong to the source types of the [relationship_types]. * Otherwise: it outputs the supported relationships’ history on the [asset_names] or returns an error if any of the [asset_names]’s types has no relationship support. See Introduction to Cloud Asset Inventory for all supported asset types and relationship types.
Append the given value to the *relationship types* query property.
Each appended value will retain its original ordering and be ‘/’-separated in the URL’s parameters.
#### pub fn read_time_window_start_time(
self,
new_value: DateTime<Utc>
) -> MethodBatchGetAssetsHistoryCall<'a, S>
Start time of the time window (exclusive).
Sets the *read time window.start time* query property to the given value.
#### pub fn read_time_window_end_time(
self,
new_value: DateTime<Utc>
) -> MethodBatchGetAssetsHistoryCall<'a, S>
End time of the time window (inclusive). If not specified, the current timestamp is used instead.
Sets the *read time window.end time* query property to the given value.
#### pub fn content_type(
self,
new_value: &str
) -> MethodBatchGetAssetsHistoryCall<'a, S>
Optional. The content type.
Sets the *content type* query property to the given value.
#### pub fn add_asset_names(
self,
new_value: &str
) -> MethodBatchGetAssetsHistoryCall<'a, S>
A list of the full names of the assets. See: https://cloud.google.com/asset-inventory/docs/resource-name-format Example: `//compute.googleapis.com/projects/my_project_123/zones/zone1/instances/instance1`. The request becomes a no-op if the asset name list is empty, and the max size of the asset name list is 100 in one request.
Append the given value to the *asset names* query property.
Each appended value will retain its original ordering and be ‘/’-separated in the URL’s parameters.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> MethodBatchGetAssetsHistoryCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(
self,
name: T,
value: T
) -> MethodBatchGetAssetsHistoryCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> MethodBatchGetAssetsHistoryCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> MethodBatchGetAssetsHistoryCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> MethodBatchGetAssetsHistoryCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for MethodBatchGetAssetsHistoryCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for MethodBatchGetAssetsHistoryCall<'a, S>
### impl<'a, S> Send for MethodBatchGetAssetsHistoryCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for MethodBatchGetAssetsHistoryCall<'a, S>
### impl<'a, S> Unpin for MethodBatchGetAssetsHistoryCall<'a, S>
### impl<'a, S> !UnwindSafe for MethodBatchGetAssetsHistoryCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_cloudasset1::api::MethodQueryAssetCall
===
```
pub struct MethodQueryAssetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Issue a job that queries assets using a SQL statement compatible with BigQuery Standard SQL. If the query execution finishes within the timeout and there’s no pagination, the full query results will be returned in the `QueryAssetsResponse`. Otherwise, full query results can be obtained by issuing extra requests with the `job_reference` from a previous `QueryAssets` call. Note that the query result has an approximately 10 GB limit enforced by BigQuery (https://cloud.google.com/bigquery/docs/best-practices-performance-output); queries that return larger results will result in errors.
A builder for the *queryAssets* method.
It is not used directly, but through a `MethodMethods` instance.
Example
---
Instantiate a resource method builder
```
use cloudasset1::api::QueryAssetsRequest;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = QueryAssetsRequest::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.methods().query_assets(req, "parent")
.doit().await;
```
Implementations
---
### impl<'a, S> MethodQueryAssetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, QueryAssetsResponse)>
Perform the operation you have built so far.
#### pub fn request(
self,
new_value: QueryAssetsRequest
) -> MethodQueryAssetCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn parent(self, new_value: &str) -> MethodQueryAssetCall<'a, S>
Required. The relative name of the root asset. This can only be an organization number (such as “organizations/123”), a project ID (such as “projects/my-project-id”), a project number (such as “projects/12345”), or a folder number (such as “folders/123”). Only assets belonging to the `parent` will be returned.
Sets the *parent* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> MethodQueryAssetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> MethodQueryAssetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *$.xgafv* (query-string) - V1 error format.
* *access_token* (query-string) - OAuth access token.
* *alt* (query-string) - Data format for response.
* *callback* (query-string) - JSONP
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
* *uploadType* (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
* *upload_protocol* (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
#### pub fn add_scope<St>(self, scope: St) -> MethodQueryAssetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> MethodQueryAssetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> MethodQueryAssetCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for MethodQueryAssetCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for MethodQueryAssetCall<'a, S>
### impl<'a, S> Send for MethodQueryAssetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for MethodQueryAssetCall<'a, S>
### impl<'a, S> Unpin for MethodQueryAssetCall<'a, S>
### impl<'a, S> !UnwindSafe for MethodQueryAssetCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Type Alias google_cloudasset1::Result
===
```
pub type Result<T> = Result<T, Error>;
```
A universal result type used as return for all calls.
Trait google_cloudasset1::Delegate
===
```
pub trait Delegate: Send {
// Provided methods
fn begin(&mut self, _info: MethodInfo) { ... }
fn http_error(&mut self, _err: &Error) -> Retry { ... }
fn api_key(&mut self) -> Option<String> { ... }
fn token(
&mut self,
e: Box<dyn Error + Send + Sync, Global>
) -> Result<Option<String>, Box<dyn Error + Send + Sync, Global>> { ... }
fn upload_url(&mut self) -> Option<String> { ... }
fn store_upload_url(&mut self, url: Option<&str>) { ... }
fn response_json_decode_error(
&mut self,
json_encoded_value: &str,
json_decode_error: &Error
) { ... }
fn http_failure(&mut self, _: &Response<Body>, _err: Option<Value>) -> Retry { ... }
fn pre_request(&mut self) { ... }
fn chunk_size(&mut self) -> u64 { ... }
fn cancel_chunk_upload(&mut self, chunk: &ContentRange) -> bool { ... }
fn finished(&mut self, is_success: bool) { ... }
}
```
A trait specifying functionality to help controlling any request performed by the API.
The trait has a conservative default implementation.
It contains methods to deal with all common issues, as well as with the ones related to uploading media.
Provided Methods
---
#### fn begin(&mut self, _info: MethodInfo)
Called at the beginning of any API request. The delegate should store the method information if it is interested in knowing more context when further calls to it are made.
The matching `finished()` call will always be made, no matter whether or not the API request was successful. That way, the delegate may easily maintain a clean state between various API calls.
#### fn http_error(&mut self, _err: &Error) -> Retry
Called whenever there is an HttpError, usually if there are network problems.
If you choose to retry after a duration, the duration should be chosen using the exponential backoff algorithm.
Return retry information.
#### fn api_key(&mut self) -> Option<String>
Called whenever there is the need for your application’s API key after the official authenticator implementation didn’t provide one, for some reason.
If this method returns None as well, the underlying operation will fail
#### fn token(
&mut self,
e: Box<dyn Error + Send + Sync, Global>
) -> Result<Option<String>, Box<dyn Error + Send + Sync, Global>>
Called whenever the Authenticator didn’t yield a token. The delegate may attempt to provide one, or just take it as general information about the impending failure.
The given Error provides information about why the token couldn’t be acquired in the first place
#### fn upload_url(&mut self) -> Option<String>
Called during resumable uploads to provide a URL for the impending upload.
It was saved after a previous call to `store_upload_url(...)`, and if not None,
will be used instead of asking the server for a new upload URL.
This is useful in case a previous resumable upload was aborted/canceled, but should now be resumed.
The returned URL will be used exactly once - if it fails again and the delegate allows to retry, we will ask the server for a new upload URL.
#### fn store_upload_url(&mut self, url: Option<&str>)
Called after we have retrieved a new upload URL for a resumable upload to store it in case we fail or cancel. That way, we can attempt to resume the upload later,
see `upload_url()`.
It will also be called with None after a successful upload, which allows the delegate to forget the URL. That way, we will not attempt to resume an upload that has already finished.
#### fn response_json_decode_error(
&mut self,
json_encoded_value: &str,
json_decode_error: &Error
)
Called whenever a server response could not be decoded from json.
It’s for informational purposes only, the caller will return with an error accordingly.
##### Arguments
* `json_encoded_value` - The json-encoded value which failed to decode.
* `json_decode_error` - The decoder error
#### fn http_failure(&mut self, _: &Response<Body>, _err: Option<Value>) -> Retry
Called whenever the http request returns with a non-success status code.
This can involve authentication issues, or anything else that very much depends on the used API method.
The delegate should check the status, header and decoded json error to decide whether to retry or not. In the latter case, the underlying call will fail.
If you choose to retry after a duration, the duration should be chosen using the exponential backoff algorithm.
#### fn pre_request(&mut self)
Called prior to sending the main request of the given method. It can be used to time the call or to print progress information.
It’s also useful as you can be sure that a request will definitely be made.
#### fn chunk_size(&mut self) -> u64
Return the size of each chunk of a resumable upload.
Must be a power of two, with 1<<18 being the smallest allowed chunk size.
Will be called once before starting any resumable upload.
#### fn cancel_chunk_upload(&mut self, chunk: &ContentRange) -> bool
Called before the given chunk is uploaded to the server.
If true is returned, the upload will be interrupted.
However, it may be resumable if you stored the upload URL in a previous call to `store_upload_url()`
#### fn finished(&mut self, is_success: bool)
Called before the API request method returns, in every case. It can be used to clean up internal state between calls to the API.
This call always has a matching call to `begin(...)`.
##### Arguments
* `is_success` - a true value indicates the operation was successful. If false, you should discard all values stored during `store_upload_url`.
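As a rough sketch of how the provided methods above can be overridden, the following delegate only customizes `chunk_size` and `finished`; the struct name is illustrative, and the import path assumes `Delegate` is re-exported at the crate root as documented here.
```
use google_cloudasset1::Delegate;

// Illustrative delegate: larger upload chunks plus a note on failure.
#[derive(Default)]
struct ChunkyDelegate;

impl Delegate for ChunkyDelegate {
    fn chunk_size(&mut self) -> u64 {
        1 << 20 // must be a power of two; 1 << 18 is the smallest allowed
    }

    fn finished(&mut self, is_success: bool) {
        if !is_success {
            eprintln!("call failed; discard any URL stored via store_upload_url()");
        }
    }
}

// Usage (sketch): hub.methods().batch_get_assets_history("parent")
//     .delegate(&mut ChunkyDelegate::default())
//     .doit().await;
```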
Implementors
---
### impl Delegate for DefaultDelegate
Struct google_cloudasset1::FieldMask
===
```
pub struct FieldMask(_);
```
A `FieldMask` as defined in `https://github.com/protocolbuffers/protobuf/blob/ec1a70913e5793a7d0a7b5fbf7e0e4f75409dd41/src/google/protobuf/field_mask.proto#L180`
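Since `FieldMask` implements `FromStr` with an infallible error (see below), it can be built directly from a comma-separated path string. A small sketch; the concrete field paths are only illustrative.
```
use google_cloudasset1::FieldMask;

// Illustrative field paths; parsing a FieldMask cannot fail (Err = Infallible).
let mask: FieldMask = "assets.name,assets.asset_type".parse().unwrap();

// FieldMask implements PartialEq, so parsed masks can be compared directly.
assert_eq!(mask, "assets.name,assets.asset_type".parse().unwrap());
```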
Trait Implementations
---
### impl Clone for FieldMask
#### fn clone(&self) -> FieldMask
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
#### fn default() -> FieldMask
Returns the “default value” for a type.
#### fn deserialize<D>(
deserializer: D
) -> Result<FieldMask, <D as Deserializer<'de>>::Error>where
D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
#### type Err = Infallible
The associated error which can be returned from parsing.
#### fn from_str(s: &str) -> Result<FieldMask, <FieldMask as FromStr>::Err>
Parses a string `s` to return a value of this type.
#### fn eq(&self, other: &FieldMask) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for FieldMask
#### fn serialize<S>(
&self,
s: S
) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error>where
S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for FieldMask
### impl StructuralPartialEq for FieldMask
Auto Trait Implementations
---
### impl RefUnwindSafe for FieldMask
### impl Send for FieldMask
### impl Sync for FieldMask
### impl Unpin for FieldMask
### impl UnwindSafe for FieldMask
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Enum google_cloudasset1::Error
===
```
pub enum Error {
HttpError(Error),
UploadSizeLimitExceeded(u64, u64),
BadRequest(Value),
MissingAPIKey,
MissingToken(Box<dyn Error + Send + Sync, Global>),
Cancelled,
FieldClash(&'static str),
JsonDecodeError(String, Error),
Failure(Response<Body>),
Io(Error),
}
```
Variants
---
### HttpError(Error)
The http connection failed
### UploadSizeLimitExceeded(u64, u64)
An attempt was made to upload a resource with size stored in field `.0`
even though the maximum upload size is what is stored in field `.1`.
### BadRequest(Value)
Represents information about a request that was not understood by the server.
Details are included.
### MissingAPIKey
We needed an API key for authentication, but didn’t obtain one.
Neither through the authenticator, nor through the Delegate.
### MissingToken(Box<dyn Error + Send + Sync, Global>)
We required a Token, but didn’t get one from the Authenticator
### Cancelled
The delegate instructed to cancel the operation
### FieldClash(&'static str)
An additional, free form field clashed with one of the built-in optional ones
### JsonDecodeError(String, Error)
Shows that we failed to decode the server response.
This can happen if the protocol changes in conjunction with strict json decoding.
### Failure(Response<Body>)
Indicates an HTTP response with a non-success status code
### Io(Error)
An IO error occurred while reading a stream into memory
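A hedged sketch of how a caller might branch on these variants; the helper name and the generic response payload are illustrative, and the `hyper` types are assumed from the call signatures shown earlier.
```
use google_cloudasset1::Error;

// Illustrative helper: report the outcome of any `doit()` call.
fn report<T>(result: google_cloudasset1::Result<(hyper::Response<hyper::Body>, T)>) {
    match result {
        Ok((response, _parsed)) => println!("success: HTTP {}", response.status()),
        Err(Error::HttpError(e)) => eprintln!("transport problem: {e}"),
        Err(Error::MissingToken(e)) => eprintln!("could not obtain a token: {e}"),
        Err(Error::Failure(response)) => eprintln!("server error: HTTP {}", response.status()),
        Err(other) => eprintln!("call failed: {other}"),
    }
}
```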
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, request: &mut Request<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type-based access to context intended for error reports.
#### fn from(err: Error) -> Error
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToString for Twhere
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Package ‘viridisLite’
May 3, 2023
Type Package
Title Colorblind-Friendly Color Maps (Lite Version)
Version 0.4.2
Date 2023-05-02
Maintainer <NAME> <<EMAIL>>
Description Color maps designed to improve graph readability for readers with
common forms of color blindness and/or color vision deficiency. The color
maps are also perceptually-uniform, both in regular form and also when
converted to black-and-white for printing. This is the 'lite' version of the
'viridis' package that also contains 'ggplot2' bindings for discrete and
continuous color and fill scales and can be found at
<https://cran.r-project.org/package=viridis>.
License MIT + file LICENSE
Encoding UTF-8
Depends R (>= 2.10)
Suggests hexbin (>= 1.27.0), ggplot2 (>= 1.0.1), testthat, covr
URL https://sjmgarnier.github.io/viridisLite/,
https://github.com/sjmgarnier/viridisLite/
BugReports https://github.com/sjmgarnier/viridisLite/issues/
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [ctb, cph],
<NAME> [ctb, cph],
<NAME> [ctb, cph],
<NAME> [ctb, cph],
<NAME> [ctb, cph]
Repository CRAN
Date/Publication 2023-05-02 23:50:02 UTC
R topics documented:
viridis
viridis.map
viridis Viridis Color Palettes
Description
This function creates a vector of n equally spaced colors along the selected color map.
Usage
viridis(n, alpha = 1, begin = 0, end = 1, direction = 1, option = "D")
viridisMap(n = 256, alpha = 1, begin = 0, end = 1, direction = 1, option = "D")
magma(n, alpha = 1, begin = 0, end = 1, direction = 1)
inferno(n, alpha = 1, begin = 0, end = 1, direction = 1)
plasma(n, alpha = 1, begin = 0, end = 1, direction = 1)
cividis(n, alpha = 1, begin = 0, end = 1, direction = 1)
rocket(n, alpha = 1, begin = 0, end = 1, direction = 1)
mako(n, alpha = 1, begin = 0, end = 1, direction = 1)
turbo(n, alpha = 1, begin = 0, end = 1, direction = 1)
Arguments
n The number of colors (≥ 1) to be in the palette.
alpha The alpha transparency, a number in [0,1], see argument alpha in hsv.
begin The (corrected) hue in [0,1] at which the color map begins.
end The (corrected) hue in [0,1] at which the color map ends.
direction Sets the order of colors in the scale. If 1, the default, colors are ordered from
darkest to lightest. If -1, the order of colors is reversed.
option A character string indicating the color map option to use. Eight options are
available:
• "magma" (or "A")
• "inferno" (or "B")
• "plasma" (or "C")
• "viridis" (or "D")
• "cividis" (or "E")
• "rocket" (or "F")
• "mako" (or "G")
• "turbo" (or "H")
Details
Here are the color scales:
magma(), plasma(), inferno(), cividis(), rocket(), mako(), and turbo() are convenience
functions for the other color map options, which are useful when the scale must be passed as a
function name.
Semi-transparent colors (0 < alpha < 1) are supported only on some devices: see rgb.
Value
viridis returns a character vector, cv, of color hex codes. This can be used either to create a user-
defined color palette for subsequent graphics by palette(cv), a col = specification in graphics
functions or in par.
viridisMap returns an n-row data frame containing the red (R), green (G), blue (B) and alpha (alpha)
channels of n equally spaced colors along the selected color map. n = 256 by default.
Author(s)
<NAME>: <<EMAIL>> / @sjmgarnier
Examples
library(ggplot2)
library(hexbin)
dat <- data.frame(x = rnorm(10000), y = rnorm(10000))
ggplot(dat, aes(x = x, y = y)) +
geom_hex() + coord_fixed() +
scale_fill_gradientn(colours = viridis(256, option = "D"))
# using code from RColorBrewer to demo the palette
n = 200
image(
1:n, 1, as.matrix(1:n),
col = viridis(n, option = "D"),
xlab = "viridis n", ylab = "", xaxt = "n", yaxt = "n", bty = "n"
)
viridis.map Color Map Data
Description
A data set containing the RGB values of the color maps included in the package. These are:
• ’magma’, ’inferno’, ’plasma’, and ’viridis’ as defined in Matplotlib for Python. These color
maps are designed in such a way that they will analytically be perfectly perceptually-uniform,
both in regular form and also when converted to black-and-white. They are also designed to
be perceived by readers with the most common form of color blindness. They were created
by <NAME> and <NAME>;
• ’cividis’, a corrected version of ’viridis’ developed by <NAME>, <NAME>, and <NAME>, and originally ported to R by <NAME>. It is designed
to be perceived by readers with all forms of color blindness;
• ’rocket’ and ’mako’ as defined in Seaborn for Python;
• ’turbo’, an improved Jet rainbow color map for reducing false detail, banding and color blindness ambiguity.
Usage
viridis.map
Format
A data frame with 2048 rows and 4 variables:
• R: Red value;
• G: Green value;
• B: Blue value;
• opt: The colormap "option" (A: magma; B: inferno; C: plasma; D: viridis; E: cividis; F:
rocket; G: mako; H: turbo).
Author(s)
<NAME>: <<EMAIL>> / @sjmgarnier
References
• ’magma’, ’inferno’, ’plasma’, and ’viridis’: https://bids.github.io/colormap/
• ’cividis’: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0199239
• ’rocket’ and ’mako’: https://seaborn.pydata.org/index.html
• ’turbo’: https://ai.googleblog.com/2019/08/turbo-improved-rainbow-colormap-for.html
Crate trillium
===
Welcome to the `trillium` crate!
---
This crate is the primary dependency for building a trillium app or library. It contains a handful of core types and reexports a few others that you will necessarily need, but otherwise tries to stay small and focused. This crate will hopefully be the most stable within the trillium ecosystem. That said, trillium is still pre 1.0 and should be expected to evolve over time.
To get started with this crate, first take a look at the guide, then browse the docs for
`trillium::Conn`.
At a minimum to build a trillium app, you’ll also need a trillium runtime adapter.
Re-exports
---
* `pub use log;`
Macros
---
* `conn_try`: Unwraps a `Result::Ok` or returns the `Conn` with a 500 status.
* `conn_unwrap`: Unwraps an `Option::Some` or returns the `Conn`.
* `delegate_handler`: Macro for implementing Handler for simple newtypes that contain another handler.
* `log_error`: A convenience macro for logging the contents of error variants.
Structs
---
* `Body`: The trillium representation of a http body. This can contain either `&'static [u8]` content, `Vec<u8>` content, or a boxed `AsyncRead` type.
* `Conn`: A Trillium HTTP connection.
* `HeaderName`: The name of a http header. This can be either a `KnownHeaderName` or a string representation of an unknown header.
* `HeaderValue`: A `HeaderValue` represents the right hand side of a single `name: value` pair.
* `HeaderValues`: A header value is a collection of one or more `HeaderValue`. It has been optimized for the “one `HeaderValue`” case, but can accommodate more than one value.
* `Headers`: Trillium’s header map type
* `Info`: This struct represents information about the currently connected server.
* `Init`: Provides support for asynchronous initialization of a handler after the server is started.
* `State`: A handler for sharing state across an application.
* `StateSet`: Store and retrieve values by `TypeId`. This allows storing arbitrary data that implements `Sync + Send + 'static`.
Enums
---
* `Error`: Concrete errors that occur within trillium’s http implementation
* `KnownHeaderName`: A short non-exhaustive enum of headers that trillium can represent as a u8. Use a `KnownHeaderName` variant instead of a &’static str anywhere possible, as it allows trillium to skip parsing the header entirely.
* `Method`: HTTP request methods.
* `Status`: HTTP response status codes.
* `Version`: The version of the HTTP protocol in use.
Traits
---
* `Handler`: The building block for Trillium applications.
Functions
---
* `init`: alias for `Init::new`
* `state`: Constructs a new `State` handler from any Clone + Send + Sync + ’static. Alias for `State::new`
Type Definitions
---
* `Upgrade`: A HTTP protocol upgrade
Attribute Macros
---
* async_trait
Struct trillium::Conn
===
```
pub struct Conn { /* private fields */ }
```
A Trillium HTTP connection.
---
A Conn represents both the request and response of a http connection,
as well as any application state that is associated with that connection.
### `with_{attribute}` naming convention
A convention that is used throughout trillium is that any interface that is named `with_{attribute}` will take ownership of the conn, set the attribute and return the conn, enabling chained calls like:
```
struct MyState(&'static str);
async fn handler(mut conn: trillium::Conn) -> trillium::Conn {
conn.with_header("content-type", "text/plain")
.with_state(MyState("hello"))
.with_body("hey there")
.with_status(418)
}
use trillium_testing::prelude::*;
assert_response!(
get("/").on(&handler),
Status::ImATeapot,
"hey there",
"content-type" => "text/plain"
);
```
If you need to set a property on the conn without moving it,
`set_{attribute}` associated functions will be your huckleberry, as is conventional in other rust projects.
### State
Every trillium Conn contains a state type which is a set that contains at most one element for each type. State is the primary way that handlers attach data to a conn as it passes through a tuple handler. State access should generally be implemented by libraries using a private type and exposed with a `ConnExt` trait. See library patterns for more elaboration and examples.
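A minimal sketch of the `ConnExt` pattern mentioned above, assuming a hypothetical `RequestId` state type; the trait and struct names are illustrative and not part of trillium itself.
```
use trillium::Conn;

// Hypothetical state type, kept private to the library that sets it.
struct RequestId(String);

// The extension trait a library would expose instead of the raw state type.
pub trait RequestIdConnExt {
    fn request_id(&self) -> Option<&str>;
}

impl RequestIdConnExt for Conn {
    fn request_id(&self) -> Option<&str> {
        self.state::<RequestId>().map(|id| id.0.as_str())
    }
}
```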
### In relation to `trillium_http::Conn`
`trillium::Conn` is currently implemented as an abstraction on top of a
`trillium_http::Conn`. In particular, `trillium::Conn` boxes the transport using a `BoxedTransport`
so that application code can be written without transport generics. See `Transport` for further reading on this.
Implementations
---
### impl Conn
#### pub fn ok(self, body: impl Into<Body>) -> Self
`Conn::ok` is a convenience function for the common pattern of setting a body and a 200 status in one call. It is exactly identical to `conn.with_status(200).with_body(body).halt()`
```
use trillium::Conn;
use trillium_testing::prelude::*;
let mut conn = get("/").on(&|conn: Conn| async move { conn.ok("hello") });
assert_body!(&mut conn, "hello");
assert_status!(&conn, 200);
assert!(conn.is_halted());
```
#### pub fn status(&self) -> Option<Status>
Returns the response status for this `Conn`, if it has been set.
```
use trillium_testing::prelude::*;
let mut conn = get("/").on(&());
assert!(conn.status().is_none());
conn.set_status(200);
assert_eq!(conn.status().unwrap(), Status::Ok);
```
#### pub fn set_status(&mut self, status: impl TryInto<Status>)
assigns a status to this response. see `Conn::status` for example usage
#### pub fn with_status(self, status: impl TryInto<Status>) -> Self
sets the response status for this `Conn` and returns it. note that this does not set the halted status.
```
use trillium_testing::prelude::*;
let conn = get("/").on(&|conn: Conn| async move {
conn.with_status(418)
});
let status = conn.status().unwrap();
assert_eq!(status, Status::ImATeapot);
assert_eq!(status, 418);
assert!(!conn.is_halted());
```
#### pub fn with_body(self, body: impl Into<Body>) -> Self
Sets the response body from any `impl Into<Body>` and returns the
`Conn` for fluent chaining. Note that this does not set the response status or halted. See `Conn::ok` for a function that does both of those.
```
use trillium_testing::prelude::*;
let conn = get("/").on(&|conn: Conn| async move {
conn.with_body("hello")
});
assert_eq!(conn.response_len(), Some(5));
```
#### pub fn set_body(&mut self, body: impl Into<Body>)
Sets the response body from any `impl Into<Body>`. Note that this does not set the response status or halted.
```
use trillium_testing::prelude::*;
let mut conn = get("/").on(&());
conn.set_body("hello");
assert_eq!(conn.response_len(), Some(5));
```
#### pub fn take_response_body(&mut self) -> Option<Body>
Removes the response body from the `Conn`
```
use trillium_testing::prelude::*;
let mut conn = get("/").on(&());
conn.set_body("hello");
let mut body = conn.take_response_body().unwrap();
assert_eq!(body.len(), Some(5));
assert_eq!(conn.response_len(), None);
```
#### pub fn state<T: 'static>(&self) -> Option<&T>
Attempts to retrieve a &T from the state set
```
use trillium_testing::prelude::*;
struct Hello;
let mut conn = get("/").on(&());
assert!(conn.state::<Hello>().is_none());
conn.set_state(Hello);
assert!(conn.state::<Hello>().is_some());
```
#### pub fn state_mut<T: 'static>(&mut self) -> Option<&mut T>
Attempts to retrieve a &mut T from the state set
#### pub fn set_state<T: Send + Sync + 'static>(&mut self, val: T) -> Option<T>
Puts a new type into the state set. see `Conn::state`
for an example. returns the previous instance of this type, if any
#### pub fn with_state<T: Send + Sync + 'static>(self, val: T) -> Self
Puts a new type into the state set and returns the
`Conn`. this is useful for fluent chaining
#### pub fn take_state<T: Send + Sync + 'static>(&mut self) -> Option<T>
Removes a type from the state set and returns it, if present
#### pub fn mut_state_or_insert_with<T, F>(&mut self, default: F) -> &mut T where
T: Send + Sync + 'static,
F: FnOnce() -> T,
Either returns the current &mut T from the state set, or inserts a new one with the provided default function and returns a mutable reference to it
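The state helpers compose; a brief sketch combining `mut_state_or_insert_with`, `state`, and `take_state` (the `Counter` type is illustrative only):
```
use trillium_testing::prelude::*;
#[derive(Debug, PartialEq)]
struct Counter(usize);
let mut conn = get("/").on(&());
// insert a default if absent, then mutate through the returned reference
conn.mut_state_or_insert_with(|| Counter(0)).0 += 1;
assert_eq!(conn.state::<Counter>().unwrap().0, 1);
// take_state removes the value from the state set and returns it
assert_eq!(conn.take_state::<Counter>(), Some(Counter(1)));
assert!(conn.state::<Counter>().is_none());
```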
#### pub async fn request_body(&mut self) -> ReceivedBody<'_, BoxedTransport>
Returns a ReceivedBody that references this `Conn`. The `Conn`
retains all data and holds the singular transport, but the
`ReceivedBody` provides an interface to read body content.
See also: `Conn::request_body_string` for a convenience function when the content is expected to be utf8.
##### Examples
```
use trillium_testing::prelude::*;
let mut conn = get("/").with_request_body("request body").on(&());
let request_body = conn.request_body().await;
assert_eq!(request_body.content_length(), Some(12));
assert_eq!(request_body.read_string().await.unwrap(), "request body");
```
#### pub async fn request_body_string(&mut self) -> Result<String>
Convenience function to read the content of a request body as a `String`.
##### Errors
This will return an error variant if either there is an IO failure on the underlying transport or if the body content is not a utf8 string.
##### Examples
```
use trillium_testing::prelude::*;
let mut conn = get("/").with_request_body("request body").on(&());
assert_eq!(conn.request_body_string().await.unwrap(), "request body");
```
#### pub fn response_len(&self) -> Option<u64>
if there is a response body for this conn and it has a known fixed length, it is returned from this function
```
use trillium_testing::prelude::*;
let mut conn = get("/").on(&|conn: trillium::Conn| async move {
conn.with_body("hello")
});
assert_eq!(conn.response_len(), Some(5));
```
#### pub fn method(&self) -> Method
returns the request method for this conn.
```
use trillium_testing::prelude::*;
let mut conn = get("/").on(&());
assert_eq!(conn.method(), Method::Get);
```
#### pub fn headers(&self) -> &Headers
borrows the request headers
this is aliased as `Conn::request_headers`
#### pub fn headers_mut(&mut self) -> &mut Headers
mutably borrows response headers
this is aliased as `Conn::response_headers_mut`
#### pub fn response_headers(&self) -> &Headers
borrow the response headers
#### pub fn response_headers_mut(&mut self) -> &mut Headers
mutably borrow the response headers
this is aliased as `Conn::headers_mut`
#### pub fn request_headers(&self) -> &Headers
borrow the request headers
#### pub fn request_headers_mut(&mut self) -> &mut Headers
mutably borrow request headers
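A short sketch contrasting the request and response header accessors, assuming `Headers` exposes `get` and `insert` and that the test request carries no user-agent header:
```
use trillium_testing::prelude::*;
use trillium::{Conn, KnownHeaderName};
let handler = |mut conn: Conn| async move {
    // request headers are read through request_headers();
    // response headers are written through response_headers_mut()
    let has_user_agent = conn
        .request_headers()
        .get(KnownHeaderName::UserAgent)
        .is_some();
    conn.response_headers_mut()
        .insert(KnownHeaderName::Server, "trillium");
    conn.ok(format!("user-agent: {}", has_user_agent))
};
assert_ok!(get("/").on(&handler), "user-agent: false");
```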
#### pub fn with_header(
self,
header_name: impl Into<HeaderName<'static>>,
header_value: impl Into<HeaderValues>
) -> Self
insert a header name and value/values into the response headers and return the conn. for a slight performance improvement, use a
`KnownHeaderName` as the first argument instead of a str.
```
use trillium_testing::prelude::*;
let mut conn = get("/").on(&|conn: trillium::Conn| async move {
conn.with_header("content-type", "application/html")
});
```
#### pub fn path(&self) -> &str
returns the path for this request. note that this may not represent the entire http request path if running nested routers.
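A brief sketch of `path` next to `querystring`, assuming the query portion is excluded from `path`:
```
use trillium_testing::prelude::*;
let conn = get("/nested/route?page=2").on(&());
// the query string is reported separately from the path
assert_eq!(conn.path(), "/nested/route");
assert_eq!(conn.querystring(), "page=2");
```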
#### pub fn querystring(&self) -> &str
returns query part of the request path
```
use trillium_testing::prelude::*;
let conn = get("/a/b?c&d=e").on(&());
assert_eq!(conn.querystring(), "c&d=e");
let conn = get("/a/b").on(&());
assert_eq!(conn.querystring(), "");
```
#### pub fn halt(self) -> Self
sets the `halted` attribute of this conn, preventing later processing in a given tuple handler. returns the conn for fluent chaining
```
use trillium_testing::prelude::*;
let mut conn = get("/").on(&|conn: trillium::Conn| async move {
conn.halt()
});
assert!(conn.is_halted());
```
#### pub fn set_halted(&mut self, halted: bool)
sets the `halted` attribute of this conn. see `Conn::halt`.
```
use trillium_testing::prelude::*;
let mut conn = get("/").on(&());
assert!(!conn.is_halted());
conn.set_halted(true);
assert!(conn.is_halted());
```
#### pub const fn is_halted(&self) -> bool
retrieves the halted state of this conn. see `Conn::halt`.
#### pub fn is_secure(&self) -> bool
predicate function to indicate whether the connection is secure. note that this does not necessarily indicate that the transport itself is secure, as it may indicate that
`trillium_http` is behind a trusted reverse proxy that has terminated tls and provided appropriate headers to indicate this.
#### pub const fn inner(&self) -> &Conn<BoxedTransport>
returns an immutable reference to the inner
`trillium_http::Conn`. please open an issue if you need to do this in application code.
stability note: hopefully this can go away at some point, but for now is an escape hatch in case `trillium_http::Conn`
presents interfaces that cannot be reached otherwise.
#### pub fn inner_mut(&mut self) -> &mut Conn<BoxedTransport>
returns a mutable reference to the inner
`trillium_http::Conn`. please open an issue if you need to do this in application code.
stability note: hopefully this can go away at some point, but for now is an escape hatch in case `trillium_http::Conn`
presents interfaces that cannot be reached otherwise.
#### pub fn into_inner<T: Transport>(self) -> Conn<T>
transforms this `trillium::Conn` into a `trillium_http::Conn`
with the specified transport type. Please note that this will panic if you attempt to downcast from trillium’s boxed transport into the wrong transport type. Also note that this is a lossy conversion, dropping the halted state and any nested router path data.
#### pub fn peer_ip(&self) -> Option<IpAddr>
retrieves the remote ip address for this conn, if available.
#### pub fn set_peer_ip(&mut self, peer_ip: Option<IpAddr>)
sets the remote ip address for this conn.
#### pub fn push_path(&mut self, path: String)
for router implementations. pushes a route segment onto the path
#### pub fn pop_path(&mut self)
for router implementations. removes a route segment from the path
Trait Implementations
---
### impl AsMut<StateSet> for Conn
#### fn as_mut(&mut self) -> &mut StateSet
Converts this type into a mutable reference of the (usually inferred) input type.
### impl AsRef<StateSet> for Conn
#### fn as_ref(&self) -> &StateSet
Converts this type into a shared reference of the (usually inferred) input type.
### impl Debug for Conn
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn from(inner: Conn<T>) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Conn
### impl Send for Conn
### impl Sync for Conn
### impl Unpin for Conn
### impl !UnwindSafe for Conn
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Macro trillium::conn_try
===
```
macro_rules! conn_try {
($expr:expr, $conn:expr) => { ... };
}
```
Unwraps an `Result::Ok` or returns the `Conn` with a 500 status.
---
```
use trillium_testing::prelude::*;
use trillium::{Conn, conn_try};
let handler = |mut conn: Conn| async move {
let request_body_string = conn_try!(conn.request_body_string().await, conn);
let u8: u8 = conn_try!(request_body_string.parse(), conn);
conn.ok(format!("received u8 as body: {}", u8))
};
assert_status!(
post("/").with_request_body("not u8").on(&handler),
500
);
assert_body!(
post("/").with_request_body("10").on(&handler),
"received u8 as body: 10"
);
```
Macro trillium::conn_unwrap
===
```
macro_rules! conn_unwrap {
($option:expr, $conn:expr) => { ... };
}
```
Unwraps an `Option::Some` or returns the `Conn`.
---
This is useful for gracefully exiting a `Handler` without returning an error.
```
use trillium_testing::prelude::*;
use trillium::{Conn, conn_unwrap, State};
#[derive(Copy, Clone)]
struct MyState(&'static str);
let handler = |conn: trillium::Conn| async move {
let important_state: MyState = *conn_unwrap!(conn.state(), conn);
conn.ok(important_state.0)
};
assert_not_handled!(get("/").on(&handler)); // we never reached the conn.ok line.
assert_ok!(
get("/").on(&(State::new(MyState("hi")), handler)),
"hi"
);
```
Macro trillium::delegate_handler
===
```
macro_rules! delegate_handler {
($struct_name:ty) => { ... };
}
```
Macro for implementing Handler for simple newtypes that contain another handler.
---
```
use trillium::{delegate_handler, State, Conn, conn_unwrap};
#[derive(Clone, Copy)]
struct MyState(usize);
struct MyHandler(State<MyState>);
delegate_handler!(MyHandler);
impl MyHandler {
fn new(n: usize) -> Self {
MyHandler(State::new(MyState(n)))
}
}
let handler = (MyHandler::new(5), |conn: Conn| async move {
let MyState(n) = *conn_unwrap!(conn.state(), conn);
conn.ok(n.to_string())
});
assert_ok!(get("/").on(&handler), "5");
```
Macro trillium::log_error
===
```
macro_rules! log_error {
($expr:expr) => { ... };
($expr:expr, $message:expr) => { ... };
}
```
A convenience macro for logging the contents of error variants.
---
This is useful when there is no further action required to process the error path, but you still want to record that it transpired
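A minimal sketch; `cleanup` is a hypothetical stand-in for any fallible call whose failure only needs to be recorded, and the exact log output depends on the configured logger:
```
use trillium::log_error;
fn cleanup() -> std::io::Result<()> {
    // stand-in for any fallible operation whose failure is non-fatal
    std::fs::remove_file("/tmp/trillium-example-tempfile")
}
// logs the Err variant, if any, without interrupting the calling code
log_error!(cleanup());
// the optional second argument attaches a message to the log line
log_error!(cleanup(), "temp file cleanup failed");
```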
Struct trillium::Body
===
```
pub struct Body(_);
```
The trillium representation of a http body. This can contain either `&'static [u8]` content, `Vec<u8>` content, or a boxed
`AsyncRead` type.
Implementations
---
### impl Body
#### pub fn new_streaming(
async_read: impl AsyncRead + Send + Sync + 'static,
len: Option<u64>
) -> Body
Construct a new body from a streaming [`AsyncRead`] source. If you have the body content in memory already, prefer
`Body::new_static` or one of the From conversions.
#### pub fn new_static(content: impl Into<Cow<'static, [u8]>>) -> Body
Construct a fixed-length Body from a `Vec<u8>` or `&'static [u8]`.
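A short sketch of constructing fixed-length bodies; the assertions follow from the `len`, `is_static`, and `is_streaming` documentation below:
```
use trillium::Body;
// explicit construction from owned bytes
let body = Body::new_static(b"hello world".to_vec());
assert_eq!(body.len(), Some(11));
assert!(body.is_static());
// the From impls cover the common cases, e.g. &'static str
let body: Body = "hello".into();
assert_eq!(body.len(), Some(5));
assert!(!body.is_streaming());
```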
#### pub fn static_bytes(&self) -> Option<&[u8]>
Retrieve a borrow of the static content in this body. If this body is a streaming body or an empty body, this will return None.
#### pub fn into_reader(
self
) -> Pin<Box<dyn AsyncRead + Send + Sync + 'static, Global>>
Transform this Body into a dyn `AsyncRead`. This will wrap static content in a `Cursor`. Note that this is different from reading directly from the Body, which includes chunked encoding.
#### pub async fn into_bytes(
self
) -> impl Future<Output = Result<Cow<'static, [u8]>, Error>>
Consume this body and return the full content. If the body was constructed with `Body::new_streaming`, this will read the entire streaming body into memory, awaiting the streaming source's completion. This function will return an error if a streaming body has already been read to completion.
##### Errors
This returns an error variant if either of the following conditions are met:
* there is an io error when reading from the underlying transport such as a disconnect
* the body has already been read to completion
#### pub fn bytes_read(&self) -> u64
Retrieve the number of bytes that have been read from this body
#### pub fn len(&self) -> Option<u64>
returns the content length of this body, if known and available.
#### pub fn is_empty(&self) -> bool
determine if this body represents no data
#### pub fn is_static(&self) -> bool
determine if this body represents static content
#### pub fn is_streaming(&self) -> bool
determine if this body represents streaming content
Trait Implementations
---
### impl AsyncRead for Body
#### fn poll_read(
self: Pin<&mut Body>,
cx: &mut Context<'_>,
buf: &mut [u8]
) -> Poll<Result<usize, Error>>
Attempt to read from the `AsyncRead` into `buf`.
#### fn poll_read_vectored(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
bufs: &mut [IoSliceMut<'_>]
) -> Poll<Result<usize, Error>>
Attempt to read from the `AsyncRead` into `bufs` using vectored IO operations.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
#### fn default() -> Body
Returns the “default value” for a type.
#### fn from(content: &'static [u8]) -> Body
Converts to this type from the input type.
### impl From<&'static str> for Body
#### fn from(s: &'static str) -> Body
Converts to this type from the input type.
### impl<Transport> From<ReceivedBody<'static, Transport>> for Body where
Transport: AsyncRead + AsyncWrite + Send + Sync + Unpin + 'static,
#### fn from(rb: ReceivedBody<'static, Transport>) -> Body
Converts to this type from the input type.
### impl From<String> for Body
#### fn from(s: String) -> Body
Converts to this type from the input type.
### impl From<Vec<u8, Global>> for Body
#### fn from(content: Vec<u8, Global>) -> Body
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Body
### impl Send for Body
### impl Sync for Body
### impl Unpin for Body
### impl !UnwindSafe for Body
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
R: AsyncRead + ?Sized,
#### fn read<'a>(&'a mut self, buf: &'a mut [u8]) -> ReadFuture<'a, Self>where
Self: Unpin,
Reads some bytes from the byte stream.
&'a mut self,
bufs: &'a mut [IoSliceMut<'a>]
) -> ReadVectoredFuture<'a, Self>where
Self: Unpin,
Like `read()`, except it reads into a slice of buffers.
&'a mut self,
buf: &'a mut Vec<u8, Global>
) -> ReadToEndFuture<'a, Self>where
Self: Unpin,
Reads the entire contents and appends them to a `Vec`.
&'a mut self,
buf: &'a mut String
) -> ReadToStringFuture<'a, Self>where
Self: Unpin,
Reads the entire contents and appends them to a `String`.
Self: Unpin,
Reads the exact number of bytes required to fill `buf`.
Self: Sized,
Creates an adapter which will read at most `limit` bytes from it.
Self: Sized,
Converts this [`AsyncRead`] into a [`Stream`] of bytes.
R: AsyncRead,
Self: Sized,
Creates an adapter which will chain this stream with another.
Self: Sized + Send + 'a,
Boxes the reader and changes its type to `dyn AsyncRead + Send + 'a`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct trillium::HeaderName
===
```
pub struct HeaderName<'a>(_);
```
The name of a http header. This can be either a
`KnownHeaderName` or a string representation of an unknown header.
Implementations
---
### impl<'a> HeaderName<'a>
#### pub fn into_owned(self) -> HeaderName<'static>
Convert a potentially-borrowed headername to a static headername *by value*.
#### pub fn to_owned(&self) -> HeaderName<'static>
Convert a potentially-borrowed headername to a static headername *by cloning if needed from a borrow*. If you have ownership of a headername with a non-static lifetime, it is preferable to use `into_owned`. This is the equivalent of
`self.clone().into_owned()`.
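A small sketch using the `FromStr` and `AsRef<str>` impls listed below; an unrecognized name is kept as a string:
```
use std::str::FromStr;
use trillium::HeaderName;
// a name trillium does not know about is stored as a string
let name = HeaderName::from_str("x-custom-header").unwrap();
assert_eq!(name.as_ref(), "x-custom-header");
// from_str already yields a 'static HeaderName; into_owned/to_owned
// are for extending the lifetime of borrowed names
let owned: HeaderName<'static> = name.into_owned();
assert_eq!(owned.as_ref(), "x-custom-header");
```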
Trait Implementations
---
### impl AsRef<str> for HeaderName<'_>
#### fn as_ref(&self) -> &str
Converts this type into a shared reference of the (usually inferred) input type.
### impl<'a> Clone for HeaderName<'a>
#### fn clone(&self) -> HeaderName<'a>
Returns a copy of the value.
1.0.0 · source
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
The associated error which can be returned from parsing.
#### fn from_str(
s: &str
) -> Result<HeaderName<'static>, <HeaderName<'static> as FromStr>::Err>
Parses a string `s` to return a value of this type.
__H: Hasher,
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &HeaderName<'_>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl<'a> PartialEq<HeaderName<'a>> for HeaderName<'a#### fn eq(&self, other: &HeaderName<'a>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<KnownHeaderName> for &HeaderName<'_#### fn eq(&self, other: &KnownHeaderName) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<KnownHeaderName> for HeaderName<'_#### fn eq(&self, other: &KnownHeaderName) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<'a> Eq for HeaderName<'a>
### impl<'a> StructuralEq for HeaderName<'a>
### impl<'a> StructuralPartialEq for HeaderName<'a>
Auto Trait Implementations
---
### impl<'a> RefUnwindSafe for HeaderName<'a>
### impl<'a> Send for HeaderName<'a>
### impl<'a> Sync for HeaderName<'a>
### impl<'a> Unpin for HeaderName<'a>
### impl<'a> UnwindSafe for HeaderName<'a>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum trillium::KnownHeaderName
===
```
#[non_exhaustive]
pub enum KnownHeaderName {
Accept,
AcceptCh,
AcceptChLifetime,
AcceptCharset,
AcceptEncoding,
AcceptLanguage,
AcceptPushPolicy,
AcceptRanges,
AcceptSignature,
AccessControlAllowCredentials,
AccessControlAllowHeaders,
AccessControlAllowMethods,
AccessControlAllowOrigin,
AccessControlExposeHeaders,
AccessControlMaxAge,
AccessControlRequestHeaders,
AccessControlRequestMethod,
Age,
Allow,
AltSvc,
Authorization,
CacheControl,
ClearSiteData,
Connection,
ContentDpr,
ContentDisposition,
ContentEncoding,
ContentLanguage,
ContentLength,
ContentLocation,
ContentRange,
ContentSecurityPolicy,
ContentSecurityPolicyReportOnly,
ContentType,
Cookie,
Cookie2,
CrossOriginEmbedderPolicy,
CrossOriginOpenerPolicy,
CrossOriginResourcePolicy,
Dnt,
Dpr,
Date,
DeviceMemory,
Downlink,
Ect,
Etag,
EarlyData,
Expect,
ExpectCt,
Expires,
FeaturePolicy,
Forwarded,
From,
Host,
IfMatch,
IfModifiedSince,
IfNoneMatch,
IfRange,
IfUnmodifiedSince,
KeepAlive,
LargeAllocation,
LastEventId,
LastModified,
Link,
Location,
MaxForwards,
Nel,
Origin,
OriginIsolation,
PingFrom,
PingTo,
Pragma,
ProxyAuthenticate,
ProxyAuthorization,
ProxyConnection,
PublicKeyPins,
PublicKeyPinsReportOnly,
PushPolicy,
Rtt,
Range,
Referer,
ReferrerPolicy,
RefreshCache,
ReportTo,
RetryAfter,
SaveData,
SecChUa,
SecChUAMobile,
SecChUAPlatform,
SecFetchDest,
SecFetchMode,
SecFetchSite,
SecFetchUser,
SecGpc,
SecWebsocketAccept,
SecWebsocketExtensions,
SecWebsocketKey,
SecWebsocketProtocol,
SecWebsocketVersion,
Server,
ServerTiming,
ServiceWorkerAllowed,
SetCookie,
SetCookie2,
Signature,
SignedHeaders,
Sourcemap,
StrictTransportSecurity,
Te,
TimingAllowOrigin,
Trailer,
TransferEncoding,
Upgrade,
UpgradeInsecureRequests,
UserAgent,
Vary,
Via,
ViewportWidth,
WwwAuthenticate,
Warning,
Width,
XcontentTypeOptions,
XdnsPrefetchControl,
XdownloadOptions,
XfirefoxSpdy,
XforwardedBy,
XforwardedFor,
XforwardedHost,
XforwardedProto,
XforwardedSsl,
XframeOptions,
XpermittedCrossDomainPolicies,
Xpingback,
XpoweredBy,
XrequestId,
XrequestedWith,
XrobotsTag,
XuaCompatible,
XxssProtection,
}
```
A short non-exhaustive enum of headers that trillium can represent as a u8. Use a `KnownHeaderName` variant instead of a &'static str anywhere possible, as it allows trillium to skip parsing the header entirely.
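For example, a handler might set a response header with a variant rather than a string; a minimal sketch using the testing helpers shown earlier:
```
use trillium::KnownHeaderName;
use trillium_testing::prelude::*;
let handler = |conn: trillium::Conn| async move {
    // the enum variant skips header-name parsing entirely
    conn.with_header(KnownHeaderName::ContentType, "text/plain")
        .ok("hello")
};
assert_response!(
    get("/").on(&handler),
    Status::Ok,
    "hello",
    "content-type" => "text/plain"
);
```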
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.### Accept
The Accept header.
### AcceptCh
The Accept-CH header.
### AcceptChLifetime
The Accept-CH-Lifetime header.
### AcceptCharset
The Accept-Charset header.
### AcceptEncoding
The Accept-Encoding header.
### AcceptLanguage
The Accept-Language header.
### AcceptPushPolicy
The Accept-Push-Policy header.
### AcceptRanges
The Accept-Ranges header.
### AcceptSignature
The Accept-Signature header.
### AccessControlAllowCredentials
The Access-Control-Allow-Credentials header.
### AccessControlAllowHeaders
The Access-Control-Allow-Headers header.
### AccessControlAllowMethods
The Access-Control-Allow-Methods header.
### AccessControlAllowOrigin
The Access-Control-Allow-Origin header.
### AccessControlExposeHeaders
The Access-Control-Expose-Headers header.
### AccessControlMaxAge
The Access-Control-Max-Age header.
### AccessControlRequestHeaders
The Access-Control-Request-Headers header.
### AccessControlRequestMethod
The Access-Control-Request-Method header.
### Age
The Age header.
### Allow
The Allow header.
### AltSvc
The Alt-Svc header.
### Authorization
The Authorization header.
### CacheControl
The Cache-Control header.
### ClearSiteData
The Clear-Site-Data header.
### Connection
The Connection header.
### ContentDpr
The Content-DPR header.
### ContentDisposition
The Content-Disposition header.
### ContentEncoding
The Content-Encoding header.
### ContentLanguage
The Content-Language header.
### ContentLength
The Content-Length header.
### ContentLocation
The Content-Location header.
### ContentRange
The Content-Range header.
### ContentSecurityPolicy
The Content-Security-Policy header.
### ContentSecurityPolicyReportOnly
The Content-Security-Policy-Report-Only header.
### ContentType
The Content-Type header.
### Cookie
The Cookie header.
### Cookie2
The Cookie2 header.
### CrossOriginEmbedderPolicy
The Cross-Origin-Embedder-Policy header.
### CrossOriginOpenerPolicy
The Cross-Origin-Opener-Policy header.
### CrossOriginResourcePolicy
The Cross-Origin-Resource-Policy header.
### Dnt
The DNT header.
### Dpr
The DPR header.
### Date
The Date header.
### DeviceMemory
The Device-Memory header.
### Downlink
The Downlink header.
### Ect
The ECT header.
### Etag
The ETag header.
### EarlyData
The Early-Data header.
### Expect
The Expect header.
### ExpectCt
The Expect-CT header.
### Expires
The Expires header.
### FeaturePolicy
The Feature-Policy header.
### Forwarded
The Forwarded header.
### From
The From header.
### Host
The Host header.
### IfMatch
The If-Match header.
### IfModifiedSince
The If-Modified-Since header.
### IfNoneMatch
The If-None-Match header.
### IfRange
The If-Range header.
### IfUnmodifiedSince
The If-Unmodified-Since header.
### KeepAlive
The Keep-Alive header.
### LargeAllocation
The Large-Allocation header.
### LastEventId
The Last-Event-ID header.
### LastModified
The Last-Modified header.
### Link
The Link header.
### Location
The Location header.
### MaxForwards
The Max-Forwards header.
### Nel
The NEL header.
### Origin
The Origin header.
### OriginIsolation
The Origin-Isolation header.
### PingFrom
The Ping-From header.
### PingTo
The Ping-To header.
### Pragma
The Pragma header.
### ProxyAuthenticate
The Proxy-Authenticate header.
### ProxyAuthorization
The Proxy-Authorization header.
### ProxyConnection
The Proxy-Connection header.
### PublicKeyPins
The Public-Key-Pins header.
### PublicKeyPinsReportOnly
The Public-Key-Pins-Report-Only header.
### PushPolicy
The Push-Policy header.
### Rtt
The RTT header.
### Range
The Range header.
### Referer
The Referer header.
### ReferrerPolicy
The Referrer-Policy header.
### RefreshCache
The Refresh-Cache header.
### ReportTo
The Report-To header.
### RetryAfter
The Retry-After header.
### SaveData
The Save-Data header.
### SecChUa
The Sec-CH-UA header.
### SecChUAMobile
The Sec-CH-UA-Mobile header.
### SecChUAPlatform
The Sec-CH-UA-Platform header.
### SecFetchDest
The Sec-Fetch-Dest header.
### SecFetchMode
The Sec-Fetch-Mode header.
### SecFetchSite
The Sec-Fetch-Site header.
### SecFetchUser
The Sec-Fetch-User header.
### SecGpc
The Sec-GPC header.
### SecWebsocketAccept
The Sec-WebSocket-Accept header.
### SecWebsocketExtensions
The Sec-WebSocket-Extensions header.
### SecWebsocketKey
The Sec-WebSocket-Key header.
### SecWebsocketProtocol
The Sec-WebSocket-Protocol header.
### SecWebsocketVersion
The Sec-WebSocket-Version header.
### Server
The Server header.
### ServerTiming
The Server-Timing header.
### ServiceWorkerAllowed
The Service-Worker-Allowed header.
### SetCookie
The Set-Cookie header.
### SetCookie2
The Set-Cookie2 header.
### Signature
The Signature header.
### SignedHeaders
The Signed-Headers header.
### Sourcemap
The SourceMap header.
### StrictTransportSecurity
The Strict-Transport-Security header.
### Te
The TE header.
### TimingAllowOrigin
The Timing-Allow-Origin header.
### Trailer
The Trailer header.
### TransferEncoding
The Transfer-Encoding header.
### Upgrade
The Upgrade header.
### UpgradeInsecureRequests
The Upgrade-Insecure-Requests header.
### UserAgent
The User-Agent header.
### Vary
The Vary header.
### Via
The Via header.
### ViewportWidth
The Viewport-Width header.
### WwwAuthenticate
The WWW-Authenticate header.
### Warning
The Warning header.
### Width
The Width header.
### XcontentTypeOptions
The X-Content-Type-Options header.
### XdnsPrefetchControl
The X-DNS-Prefetch-Control header.
### XdownloadOptions
The X-Download-Options header.
### XfirefoxSpdy
The X-Firefox-Spdy header.
### XforwardedBy
The X-Forwarded-By header.
### XforwardedFor
The X-Forwarded-For header.
### XforwardedHost
The X-Forwarded-Host header.
### XforwardedProto
The X-Forwarded-Proto header.
### XforwardedSsl
The X-Forwarded-SSL header.
### XframeOptions
The X-Frame-Options header.
### XpermittedCrossDomainPolicies
The X-Permitted-Cross-Domain-Policies header.
### Xpingback
The X-Pingback header.
### XpoweredBy
The X-Powered-By header.
### XrequestId
The X-Request-Id header.
### XrequestedWith
The X-Requested-With header.
### XrobotsTag
The X-Robots-Tag header.
### XuaCompatible
The X-UA-Compatible header.
### XxssProtection
The X-XSS-Protection header.
Trait Implementations
---
### impl AsRef<str> for KnownHeaderName
#### fn as_ref(&self) -> &str
Converts this type into a shared reference of the (usually inferred) input type.
### impl Clone for KnownHeaderName
#### fn clone(&self) -> KnownHeaderName
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
#### type Err = ()
The associated error which can be returned from parsing.
#### fn from_str(
s: &str
) -> Result<KnownHeaderName, <KnownHeaderName as FromStr>::Err>
Parses a string `s` to return a value of this type.
#### fn hash<__H>(&self, state: &mut __H)where
__H: Hasher,
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &HeaderName<'_>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<KnownHeaderName> for &HeaderName<'_#### fn eq(&self, other: &KnownHeaderName) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<KnownHeaderName> for HeaderName<'_#### fn eq(&self, other: &KnownHeaderName) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<KnownHeaderName> for KnownHeaderName
#### fn eq(&self, other: &KnownHeaderName) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Copy for KnownHeaderName
### impl Eq for KnownHeaderName
### impl StructuralEq for KnownHeaderName
### impl StructuralPartialEq for KnownHeaderName
Auto Trait Implementations
---
### impl RefUnwindSafe for KnownHeaderName
### impl Send for KnownHeaderName
### impl Sync for KnownHeaderName
### impl Unpin for KnownHeaderName
### impl UnwindSafe for KnownHeaderName
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct trillium::HeaderValue
===
```
pub struct HeaderValue(_);
```
A `HeaderValue` represents the right hand side of a single `name: value` pair.
Implementations
---
### impl HeaderValue
#### pub fn as_str(&self) -> Option<&str>
Returns this header value as a &str if it is utf8, None otherwise. If you need to convert non-utf8 bytes to a string somehow, match directly on the `HeaderValue` as an enum and handle that case. If you need a byte slice regardless of whether it's utf8, use the `AsRef<[u8]>` impl
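A brief sketch using the `From<&'static str>` and `AsRef<[u8]>` impls listed below:
```
use trillium::HeaderValue;
let value = HeaderValue::from("text/plain");
assert_eq!(value.as_str(), Some("text/plain"));
// non-utf8 (or any) content is always reachable as bytes
let bytes: &[u8] = value.as_ref();
assert_eq!(bytes, b"text/plain");
```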
Trait Implementations
---
### impl AsRef<[u8]> for HeaderValue
#### fn as_ref(&self) -> &[u8]
Converts this type into a shared reference of the (usually inferred) input type.
### impl Clone for HeaderValue
#### fn clone(&self) -> HeaderValue
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
#### fn from(b: &'static [u8]) -> HeaderValue
Converts to this type from the input type.
### impl From<&'static str> for HeaderValue
#### fn from(s: &'static str) -> HeaderValue
Converts to this type from the input type.
### impl From<Cow<'static, str>> for HeaderValue
#### fn from(c: Cow<'static, str>) -> HeaderValue
Converts to this type from the input type.
### impl From<HeaderValue> for HeaderValues
#### fn from(v: HeaderValue) -> HeaderValues
Converts to this type from the input type.
### impl From<String> for HeaderValue
#### fn from(s: String) -> HeaderValue
Converts to this type from the input type.
### impl From<Vec<u8, Global>> for HeaderValue
#### fn from(v: Vec<u8, Global>) -> HeaderValue
Converts to this type from the input type.
### impl PartialEq<&[u8]> for HeaderValue
#### fn eq(&self, other: &&[u8]) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<&String> for HeaderValue
#### fn eq(&self, other: &&String) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<&str> for HeaderValue
#### fn eq(&self, other: &&str) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<[u8]> for &HeaderValue
#### fn eq(&self, other: &[u8]) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<[u8]> for HeaderValue
#### fn eq(&self, other: &[u8]) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<HeaderValue> for HeaderValue
#### fn eq(&self, other: &HeaderValue) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<String> for &HeaderValue
#### fn eq(&self, other: &String) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<String> for HeaderValue
#### fn eq(&self, other: &String) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<str> for &HeaderValue
#### fn eq(&self, other: &str) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<str> for HeaderValue
#### fn eq(&self, other: &str) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Eq for HeaderValue
### impl StructuralEq for HeaderValue
### impl StructuralPartialEq for HeaderValue
Auto Trait Implementations
---
### impl RefUnwindSafe for HeaderValue
### impl Send for HeaderValue
### impl Sync for HeaderValue
### impl Unpin for HeaderValue
### impl UnwindSafe for HeaderValue
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct trillium::HeaderValues
===
```
pub struct HeaderValues(_);
```
A header value is a collection of one or more `HeaderValue`. It has been optimized for the “one `HeaderValue`” case, but can accommodate more than one value.
Implementations
---
### impl HeaderValues
#### pub fn new() -> HeaderValues
Builds an empty `HeaderValues`. This is not generally necessary in application code. Using a `From` implementation is preferable.
#### pub fn as_str(&self) -> Option<&str>
If there is only a single value, returns that header as a borrowed string slice if it is utf8. If there is more than one header value, or if the singular header value is not utf8,
`as_str` returns None.
#### pub fn one(&self) -> Option<&HeaderValue>
If there is only a single `HeaderValue` inside this
`HeaderValues`, `one` returns a reference to that value. If there is more than one header value inside this
`HeaderValues`, `one` returns None.
#### pub fn append(&mut self, value: impl Into<HeaderValue>)
Add another header value to this `HeaderValues`.
#### pub fn extend(&mut self, values: impl Into<HeaderValues>)
Adds any number of other header values to this `HeaderValues`.
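A short sketch of the single-value optimization described above, assuming `as_str` and `one` behave as documented:
```
use trillium::HeaderValues;
let mut values = HeaderValues::new();
values.append("text/html"); // anything that is Into<HeaderValue>
assert_eq!(values.as_str(), Some("text/html"));
assert!(values.one().is_some());
values.append("text/plain");
// with more than one value, `one` and `as_str` return None
assert!(values.one().is_none());
assert!(values.as_str().is_none());
```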
Methods from Deref<Target = [HeaderValue]>
---
#### pub fn sort_floats(&mut self)
🔬This is a nightly-only experimental API. (`sort_floats`)
Sorts the slice of floats.
This sort is in-place (i.e. does not allocate), *O*(*n* * log(*n*)) worst-case, and uses the ordering defined by `f64::total_cmp`.
##### Current implementation
This uses the same sorting algorithm as `sort_unstable_by`.
##### Examples
```
#![feature(sort_floats)]
let mut v = [2.6, -5e-8, f64::NAN, 8.29, f64::INFINITY, -1.0, 0.0, -f64::INFINITY, -0.0];
v.sort_floats();
let sorted = [-f64::INFINITY, -1.0, -5e-8, -0.0, 0.0, 2.6, 8.29, f64::INFINITY, f64::NAN];
assert_eq!(&v[..8], &sorted[..8]);
assert!(v[8].is_nan());
```
1.0.0 · source#### pub fn len(&self) -> usize
Returns the number of elements in the slice.
##### Examples
```
let a = [1, 2, 3];
assert_eq!(a.len(), 3);
```
1.0.0 · source#### pub fn is_empty(&self) -> bool
Returns `true` if the slice has a length of 0.
##### Examples
```
let a = [1, 2, 3];
assert!(!a.is_empty());
```
1.0.0 · source
#### pub fn first(&self) -> Option<&T>
Returns the first element of the slice, or `None` if it is empty.
##### Examples
```
let v = [10, 40, 30];
assert_eq!(Some(&10), v.first());
let w: &[i32] = &[];
assert_eq!(None, w.first());
```
1.0.0 · source
#### pub fn first_mut(&mut self) -> Option<&mut T>
Returns a mutable pointer to the first element of the slice, or `None` if it is empty.
##### Examples
```
let x = &mut [0, 1, 2];
if let Some(first) = x.first_mut() {
*first = 5;
}
assert_eq!(x, &[5, 1, 2]);
```
1.5.0 · source
#### pub fn split_first(&self) -> Option<(&T, &[T])>
Returns the first and all the rest of the elements of the slice, or `None` if it is empty.
##### Examples
```
let x = &[0, 1, 2];
if let Some((first, elements)) = x.split_first() {
assert_eq!(first, &0);
assert_eq!(elements, &[1, 2]);
}
```
1.5.0 · source
#### pub fn split_first_mut(&mut self) -> Option<(&mut T, &mut [T])>
Returns the first and all the rest of the elements of the slice, or `None` if it is empty.
##### Examples
```
let x = &mut [0, 1, 2];
if let Some((first, elements)) = x.split_first_mut() {
*first = 3;
elements[0] = 4;
elements[1] = 5;
}
assert_eq!(x, &[3, 4, 5]);
```
1.5.0 · source
#### pub fn split_last(&self) -> Option<(&T, &[T])>
Returns the last and all the rest of the elements of the slice, or `None` if it is empty.
##### Examples
```
let x = &[0, 1, 2];
if let Some((last, elements)) = x.split_last() {
assert_eq!(last, &2);
assert_eq!(elements, &[0, 1]);
}
```
1.5.0 · source
#### pub fn split_last_mut(&mut self) -> Option<(&mut T, &mut [T])>
Returns the last and all the rest of the elements of the slice, or `None` if it is empty.
##### Examples
```
let x = &mut [0, 1, 2];
if let Some((last, elements)) = x.split_last_mut() {
*last = 3;
elements[0] = 4;
elements[1] = 5;
}
assert_eq!(x, &[4, 5, 3]);
```
1.0.0 · source
#### pub fn last(&self) -> Option<&T>
Returns the last element of the slice, or `None` if it is empty.
##### Examples
```
let v = [10, 40, 30];
assert_eq!(Some(&30), v.last());
let w: &[i32] = &[];
assert_eq!(None, w.last());
```
1.0.0 · source
#### pub fn last_mut(&mut self) -> Option<&mut T>
Returns a mutable pointer to the last item in the slice.
##### Examples
```
let x = &mut [0, 1, 2];
if let Some(last) = x.last_mut() {
*last = 10;
}
assert_eq!(x, &[0, 1, 10]);
```
1.0.0 · source#### pub fn get<I>(&self, index: I) -> Option<&<I as SliceIndex<[T]>>::Output>where
I: SliceIndex<[T]>,
Returns a reference to an element or subslice depending on the type of index.
* If given a position, returns a reference to the element at that position or `None` if out of bounds.
* If given a range, returns the subslice corresponding to that range,
or `None` if out of bounds.
##### Examples
```
let v = [10, 40, 30];
assert_eq!(Some(&40), v.get(1));
assert_eq!(Some(&[10, 40][..]), v.get(0..2));
assert_eq!(None, v.get(3));
assert_eq!(None, v.get(0..4));
```
1.0.0 · source#### pub fn get_mut<I>(
&mut self,
index: I
) -> Option<&mut <I as SliceIndex<[T]>>::Output>where
I: SliceIndex<[T]>,
Returns a mutable reference to an element or subslice depending on the type of index (see `get`) or `None` if the index is out of bounds.
##### Examples
```
let x = &mut [0, 1, 2];
if let Some(elem) = x.get_mut(1) {
*elem = 42;
}
assert_eq!(x, &[0, 42, 2]);
```
1.0.0 · source#### pub unsafe fn get_unchecked<I>(
&self,
index: I
) -> &<I as SliceIndex<[T]>>::Outputwhere
I: SliceIndex<[T]>,
Returns a reference to an element or subslice, without doing bounds checking.
For a safe alternative see `get`.
##### Safety
Calling this method with an out-of-bounds index is *undefined behavior*
even if the resulting reference is not used.
##### Examples
```
let x = &[1, 2, 4];
unsafe {
assert_eq!(x.get_unchecked(1), &2);
}
```
1.0.0 · source#### pub unsafe fn get_unchecked_mut<I>(
&mut self,
index: I
) -> &mut <I as SliceIndex<[T]>>::Outputwhere
I: SliceIndex<[T]>,
Returns a mutable reference to an element or subslice, without doing bounds checking.
For a safe alternative see `get_mut`.
##### Safety
Calling this method with an out-of-bounds index is *undefined behavior*
even if the resulting reference is not used.
##### Examples
```
let x = &mut [1, 2, 4];
unsafe {
let elem = x.get_unchecked_mut(1);
*elem = 13;
}
assert_eq!(x, &[1, 13, 4]);
```
1.0.0 · source#### pub fn as_ptr(&self) -> *const T
Returns a raw pointer to the slice’s buffer.
The caller must ensure that the slice outlives the pointer this function returns, or else it will end up pointing to garbage.
The caller must also ensure that the memory the pointer (non-transitively) points to is never written to (except inside an `UnsafeCell`) using this pointer or any pointer derived from it. If you need to mutate the contents of the slice, use `as_mut_ptr`.
Modifying the container referenced by this slice may cause its buffer to be reallocated, which would also make any pointers to it invalid.
##### Examples
```
let x = &[1, 2, 4];
let x_ptr = x.as_ptr();
unsafe {
for i in 0..x.len() {
assert_eq!(x.get_unchecked(i), &*x_ptr.add(i));
}
}
```
1.0.0 · source#### pub fn as_mut_ptr(&mut self) -> *mut T
Returns an unsafe mutable pointer to the slice’s buffer.
The caller must ensure that the slice outlives the pointer this function returns, or else it will end up pointing to garbage.
Modifying the container referenced by this slice may cause its buffer to be reallocated, which would also make any pointers to it invalid.
##### Examples
```
let x = &mut [1, 2, 4];
let x_ptr = x.as_mut_ptr();
unsafe {
for i in 0..x.len() {
*x_ptr.add(i) += 2;
}
}
assert_eq!(x, &[3, 4, 6]);
```
1.48.0 · source
#### pub fn as_ptr_range(&self) -> Range<*const T>
Returns the two raw pointers spanning the slice.
The returned range is half-open, which means that the end pointer points *one past* the last element of the slice. This way, an empty slice is represented by two equal pointers, and the difference between the two pointers represents the size of the slice.
See `as_ptr` for warnings on using these pointers. The end pointer requires extra caution, as it does not point to a valid element in the slice.
This function is useful for interacting with foreign interfaces which use two pointers to refer to a range of elements in memory, as is common in C++.
It can also be useful to check if a pointer to an element refers to an element of this slice:
```
let a = [1, 2, 3];
let x = &a[1] as *const _;
let y = &5 as *const _;
assert!(a.as_ptr_range().contains(&x));
assert!(!a.as_ptr_range().contains(&y));
```
1.48.0 · source
#### pub fn as_mut_ptr_range(&mut self) -> Range<*mut T>
Returns the two unsafe mutable pointers spanning the slice.
The returned range is half-open, which means that the end pointer points *one past* the last element of the slice. This way, an empty slice is represented by two equal pointers, and the difference between the two pointers represents the size of the slice.
See `as_mut_ptr` for warnings on using these pointers. The end pointer requires extra caution, as it does not point to a valid element in the slice.
This function is useful for interacting with foreign interfaces which use two pointers to refer to a range of elements in memory, as is common in C++.
1.0.0 · source#### pub fn swap(&mut self, a: usize, b: usize)
Swaps two elements in the slice.
##### Arguments
* a - The index of the first element
* b - The index of the second element
##### Panics
Panics if `a` or `b` are out of bounds.
##### Examples
```
let mut v = ["a", "b", "c", "d", "e"];
v.swap(2, 4);
assert!(v == ["a", "b", "e", "d", "c"]);
```
#### pub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)
🔬This is a nightly-only experimental API. (`slice_swap_unchecked`)
Swaps two elements in the slice, without doing bounds checking.
For a safe alternative see `swap`.
##### Arguments
* a - The index of the first element
* b - The index of the second element
##### Safety
Calling this method with an out-of-bounds index is *undefined behavior*.
The caller has to ensure that `a < self.len()` and `b < self.len()`.
##### Examples
```
#![feature(slice_swap_unchecked)]
let mut v = ["a", "b", "c", "d"];
// SAFETY: we know that 1 and 3 are both indices of the slice
unsafe { v.swap_unchecked(1, 3) };
assert!(v == ["a", "d", "c", "b"]);
```
1.0.0 · source#### pub fn reverse(&mut self)
Reverses the order of elements in the slice, in place.
##### Examples
```
let mut v = [1, 2, 3];
v.reverse();
assert!(v == [3, 2, 1]);
```
1.0.0 · source
#### pub fn iter(&self) -> Iter<'_, T>
Returns an iterator over the slice.
The iterator yields all items from start to end.
##### Examples
```
let x = &[1, 2, 4];
let mut iterator = x.iter();
assert_eq!(iterator.next(), Some(&1));
assert_eq!(iterator.next(), Some(&2));
assert_eq!(iterator.next(), Some(&4));
assert_eq!(iterator.next(), None);
```
1.0.0 · source
#### pub fn iter_mut(&mut self) -> IterMut<'_, T>
Returns an iterator that allows modifying each value.
The iterator yields all items from start to end.
##### Examples
```
let x = &mut [1, 2, 4];
for elem in x.iter_mut() {
*elem += 2;
}
assert_eq!(x, &[3, 4, 6]);
```
1.0.0 · source
#### pub fn windows(&self, size: usize) -> Windows<'_, T>
Returns an iterator over all contiguous windows of length
`size`. The windows overlap. If the slice is shorter than
`size`, the iterator returns no values.
##### Panics
Panics if `size` is 0.
##### Examples
```
let slice = ['r', 'u', 's', 't'];
let mut iter = slice.windows(2);
assert_eq!(iter.next().unwrap(), &['r', 'u']);
assert_eq!(iter.next().unwrap(), &['u', 's']);
assert_eq!(iter.next().unwrap(), &['s', 't']);
assert!(iter.next().is_none());
```
If the slice is shorter than `size`:
```
let slice = ['f', 'o', 'o'];
let mut iter = slice.windows(4);
assert!(iter.next().is_none());
```
There’s no `windows_mut`, as that existing would let safe code violate the
“only one `&mut` at a time to the same thing” rule. However, you can sometimes use `Cell::as_slice_of_cells` in conjunction with `windows` to accomplish something similar:
```
use std::cell::Cell;
let mut array = ['R', 'u', 's', 't', ' ', '2', '0', '1', '5'];
let slice = &mut array[..];
let slice_of_cells: &[Cell<char>] = Cell::from_mut(slice).as_slice_of_cells();
for w in slice_of_cells.windows(3) {
Cell::swap(&w[0], &w[2]);
}
assert_eq!(array, ['s', 't', ' ', '2', '0', '1', '5', 'u', 'R']);
```
1.0.0 · source
#### pub fn chunks(&self, chunk_size: usize) -> Chunks<'_, T>
Returns an iterator over `chunk_size` elements of the slice at a time, starting at the beginning of the slice.
The chunks are slices and do not overlap. If `chunk_size` does not divide the length of the slice, then the last chunk will not have length `chunk_size`.
See `chunks_exact` for a variant of this iterator that returns chunks of always exactly
`chunk_size` elements, and `rchunks` for the same iterator but starting at the end of the slice.
##### Panics
Panics if `chunk_size` is 0.
##### Examples
```
let slice = ['l', 'o', 'r', 'e', 'm'];
let mut iter = slice.chunks(2);
assert_eq!(iter.next().unwrap(), &['l', 'o']);
assert_eq!(iter.next().unwrap(), &['r', 'e']);
assert_eq!(iter.next().unwrap(), &['m']);
assert!(iter.next().is_none());
```
1.0.0 · source
#### pub fn chunks_mut(&mut self, chunk_size: usize) -> ChunksMut<'_, T>
Returns an iterator over `chunk_size` elements of the slice at a time, starting at the beginning of the slice.
The chunks are mutable slices, and do not overlap. If `chunk_size` does not divide the length of the slice, then the last chunk will not have length `chunk_size`.
See `chunks_exact_mut` for a variant of this iterator that returns chunks of always exactly `chunk_size` elements, and `rchunks_mut` for the same iterator but starting at the end of the slice.
##### Panics
Panics if `chunk_size` is 0.
##### Examples
```
let v = &mut [0, 0, 0, 0, 0];
let mut count = 1;
for chunk in v.chunks_mut(2) {
for elem in chunk.iter_mut() {
*elem += count;
}
count += 1;
}
assert_eq!(v, &[1, 1, 2, 2, 3]);
```
1.31.0 · source#### pub fn chunks_exact(&self, chunk_size: usize) -> ChunksExact<'_, T>
Returns an iterator over `chunk_size` elements of the slice at a time, starting at the beginning of the slice.
The chunks are slices and do not overlap. If `chunk_size` does not divide the length of the slice, then the last up to `chunk_size-1` elements will be omitted and can be retrieved from the `remainder` function of the iterator.
Due to each chunk having exactly `chunk_size` elements, the compiler can often optimize the resulting code better than in the case of `chunks`.
See `chunks` for a variant of this iterator that also returns the remainder as a smaller chunk, and `rchunks_exact` for the same iterator but starting at the end of the slice.
##### Panics
Panics if `chunk_size` is 0.
##### Examples
```
let slice = ['l', 'o', 'r', 'e', 'm'];
let mut iter = slice.chunks_exact(2);
assert_eq!(iter.next().unwrap(), &['l', 'o']);
assert_eq!(iter.next().unwrap(), &['r', 'e']);
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), &['m']);
```
1.31.0 · source#### pub fn chunks_exact_mut(&mut self, chunk_size: usize) -> ChunksExactMut<'_, T>
Returns an iterator over `chunk_size` elements of the slice at a time, starting at the beginning of the slice.
The chunks are mutable slices, and do not overlap. If `chunk_size` does not divide the length of the slice, then the last up to `chunk_size-1` elements will be omitted and can be retrieved from the `into_remainder` function of the iterator.
Due to each chunk having exactly `chunk_size` elements, the compiler can often optimize the resulting code better than in the case of `chunks_mut`.
See `chunks_mut` for a variant of this iterator that also returns the remainder as a smaller chunk, and `rchunks_exact_mut` for the same iterator but starting at the end of the slice.
##### Panics
Panics if `chunk_size` is 0.
##### Examples
```
let v = &mut [0, 0, 0, 0, 0];
let mut count = 1;
for chunk in v.chunks_exact_mut(2) {
for elem in chunk.iter_mut() {
*elem += count;
}
count += 1;
}
assert_eq!(v, &[1, 1, 2, 2, 0]);
```
#### pub unsafe fn as_chunks_unchecked<const N: usize>(&self) -> &[[T; N]]
🔬This is a nightly-only experimental API. (`slice_as_chunks`)
Splits the slice into a slice of `N`-element arrays,
assuming that there’s no remainder.
##### Safety
This may only be called when
* The slice splits exactly into `N`-element chunks (aka `self.len() % N == 0`).
* `N != 0`.
##### Examples
```
#![feature(slice_as_chunks)]
let slice: &[char] = &['l', 'o', 'r', 'e', 'm', '!'];
let chunks: &[[char; 1]] =
// SAFETY: 1-element chunks never have remainder
unsafe { slice.as_chunks_unchecked() };
assert_eq!(chunks, &[['l'], ['o'], ['r'], ['e'], ['m'], ['!']]);
let chunks: &[[char; 3]] =
// SAFETY: The slice length (6) is a multiple of 3
unsafe { slice.as_chunks_unchecked() };
assert_eq!(chunks, &[['l', 'o', 'r'], ['e', 'm', '!']]);
// These would be unsound:
// let chunks: &[[_; 5]] = slice.as_chunks_unchecked() // The slice length is not a multiple of 5
// let chunks: &[[_; 0]] = slice.as_chunks_unchecked() // Zero-length chunks are never allowed
```
#### pub fn as_chunks<const N: usize>(&self) -> (&[[T; N]], &[T])
🔬This is a nightly-only experimental API. (`slice_as_chunks`)
Splits the slice into a slice of `N`-element arrays,
starting at the beginning of the slice,
and a remainder slice with length strictly less than `N`.
##### Panics
Panics if `N` is 0. This check will most probably get changed to a compile time error before this method gets stabilized.
##### Examples
```
#![feature(slice_as_chunks)]
let slice = ['l', 'o', 'r', 'e', 'm'];
let (chunks, remainder) = slice.as_chunks();
assert_eq!(chunks, &[['l', 'o'], ['r', 'e']]);
assert_eq!(remainder, &['m']);
```
If you expect the slice to be an exact multiple, you can combine
`let`-`else` with an empty slice pattern:
```
#![feature(slice_as_chunks)]
let slice = ['R', 'u', 's', 't'];
let (chunks, []) = slice.as_chunks::<2>() else {
panic!("slice didn't have even length")
};
assert_eq!(chunks, &[['R', 'u'], ['s', 't']]);
```
#### pub fn as_rchunks<const N: usize>(&self) -> (&[T], &[[T; N]])
🔬This is a nightly-only experimental API. (`slice_as_chunks`)
Splits the slice into a slice of `N`-element arrays,
starting at the end of the slice,
and a remainder slice with length strictly less than `N`.
##### Panics
Panics if `N` is 0. This check will most probably get changed to a compile time error before this method gets stabilized.
##### Examples
```
#![feature(slice_as_chunks)]
let slice = ['l', 'o', 'r', 'e', 'm'];
let (remainder, chunks) = slice.as_rchunks();
assert_eq!(remainder, &['l']);
assert_eq!(chunks, &[['o', 'r'], ['e', 'm']]);
```
#### pub fn array_chunks<const N: usize>(&self) -> ArrayChunks<'_, T, N>
🔬This is a nightly-only experimental API. (`array_chunks`)
Returns an iterator over `N` elements of the slice at a time, starting at the beginning of the slice.
The chunks are array references and do not overlap. If `N` does not divide the length of the slice, then the last up to `N-1` elements will be omitted and can be retrieved from the `remainder` function of the iterator.
This method is the const generic equivalent of `chunks_exact`.
##### Panics
Panics if `N` is 0. This check will most probably get changed to a compile time error before this method gets stabilized.
##### Examples
```
#![feature(array_chunks)]
let slice = ['l', 'o', 'r', 'e', 'm'];
let mut iter = slice.array_chunks();
assert_eq!(iter.next().unwrap(), &['l', 'o']);
assert_eq!(iter.next().unwrap(), &['r', 'e']);
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), &['m']);
```
#### pub unsafe fn as_chunks_unchecked_mut<const N: usize>(
&mut self
) -> &mut [[T; N]]
🔬This is a nightly-only experimental API. (`slice_as_chunks`)
Splits the slice into a slice of `N`-element arrays,
assuming that there’s no remainder.
##### Safety
This may only be called when
* The slice splits exactly into `N`-element chunks (aka `self.len() % N == 0`).
* `N != 0`.
##### Examples
```
#![feature(slice_as_chunks)]
let slice: &mut [char] = &mut ['l', 'o', 'r', 'e', 'm', '!'];
let chunks: &mut [[char; 1]] =
// SAFETY: 1-element chunks never have remainder
unsafe { slice.as_chunks_unchecked_mut() };
chunks[0] = ['L'];
assert_eq!(chunks, &[['L'], ['o'], ['r'], ['e'], ['m'], ['!']]);
let chunks: &mut [[char; 3]] =
// SAFETY: The slice length (6) is a multiple of 3
unsafe { slice.as_chunks_unchecked_mut() };
chunks[1] = ['a', 'x', '?'];
assert_eq!(slice, &['L', 'o', 'r', 'a', 'x', '?']);
// These would be unsound:
// let chunks: &[[_; 5]] = slice.as_chunks_unchecked_mut() // The slice length is not a multiple of 5
// let chunks: &[[_; 0]] = slice.as_chunks_unchecked_mut() // Zero-length chunks are never allowed
```
#### pub fn as_chunks_mut<const N: usize>(&mut self) -> (&mut [[T; N]], &mut [T])
🔬This is a nightly-only experimental API. (`slice_as_chunks`)
Splits the slice into a slice of `N`-element arrays,
starting at the beginning of the slice,
and a remainder slice with length strictly less than `N`.
##### Panics
Panics if `N` is 0. This check will most probably get changed to a compile time error before this method gets stabilized.
##### Examples
```
#![feature(slice_as_chunks)]
let v = &mut [0, 0, 0, 0, 0];
let mut count = 1;
let (chunks, remainder) = v.as_chunks_mut();
remainder[0] = 9;
for chunk in chunks {
*chunk = [count; 2];
count += 1;
}
assert_eq!(v, &[1, 1, 2, 2, 9]);
```
#### pub fn as_rchunks_mut<const N: usize>(&mut self) -> (&mut [T], &mut [[T; N]])
🔬This is a nightly-only experimental API. (`slice_as_chunks`)
Splits the slice into a slice of `N`-element arrays,
starting at the end of the slice,
and a remainder slice with length strictly less than `N`.
##### Panics
Panics if `N` is 0. This check will most probably get changed to a compile time error before this method gets stabilized.
##### Examples
```
#![feature(slice_as_chunks)]
let v = &mut [0, 0, 0, 0, 0];
let mut count = 1;
let (remainder, chunks) = v.as_rchunks_mut();
remainder[0] = 9;
for chunk in chunks {
*chunk = [count; 2];
count += 1;
}
assert_eq!(v, &[9, 1, 1, 2, 2]);
```
#### pub fn array_chunks_mut<const N: usize>(&mut self) -> ArrayChunksMut<'_, T, N>
🔬This is a nightly-only experimental API. (`array_chunks`)
Returns an iterator over `N` elements of the slice at a time, starting at the beginning of the slice.
The chunks are mutable array references and do not overlap. If `N` does not divide the length of the slice, then the last up to `N-1` elements will be omitted and can be retrieved from the `into_remainder` function of the iterator.
This method is the const generic equivalent of `chunks_exact_mut`.
##### Panics
Panics if `N` is 0. This check will most probably get changed to a compile time error before this method gets stabilized.
##### Examples
```
#![feature(array_chunks)]
let v = &mut [0, 0, 0, 0, 0];
let mut count = 1;
for chunk in v.array_chunks_mut() {
*chunk = [count; 2];
count += 1;
}
assert_eq!(v, &[1, 1, 2, 2, 0]);
```
#### pub fn array_windows<const N: usize>(&self) -> ArrayWindows<'_, T, N>
🔬This is a nightly-only experimental API. (`array_windows`)
Returns an iterator over overlapping windows of `N` elements of a slice,
starting at the beginning of the slice.
This is the const generic equivalent of `windows`.
If `N` is greater than the size of the slice, it will return no windows.
##### Panics
Panics if `N` is 0. This check will most probably get changed to a compile time error before this method gets stabilized.
##### Examples
```
#![feature(array_windows)]
let slice = [0, 1, 2, 3];
let mut iter = slice.array_windows();
assert_eq!(iter.next().unwrap(), &[0, 1]);
assert_eq!(iter.next().unwrap(), &[1, 2]);
assert_eq!(iter.next().unwrap(), &[2, 3]);
assert!(iter.next().is_none());
```
1.31.0 · source#### pub fn rchunks(&self, chunk_size: usize) -> RChunks<'_, T>
Returns an iterator over `chunk_size` elements of the slice at a time, starting at the end of the slice.
The chunks are slices and do not overlap. If `chunk_size` does not divide the length of the slice, then the last chunk will not have length `chunk_size`.
See `rchunks_exact` for a variant of this iterator that returns chunks of always exactly
`chunk_size` elements, and `chunks` for the same iterator but starting at the beginning of the slice.
##### Panics
Panics if `chunk_size` is 0.
##### Examples
```
let slice = ['l', 'o', 'r', 'e', 'm'];
let mut iter = slice.rchunks(2);
assert_eq!(iter.next().unwrap(), &['e', 'm']);
assert_eq!(iter.next().unwrap(), &['o', 'r']);
assert_eq!(iter.next().unwrap(), &['l']);
assert!(iter.next().is_none());
```
1.31.0 · source#### pub fn rchunks_mut(&mut self, chunk_size: usize) -> RChunksMut<'_, T>
Returns an iterator over `chunk_size` elements of the slice at a time, starting at the end of the slice.
The chunks are mutable slices, and do not overlap. If `chunk_size` does not divide the length of the slice, then the last chunk will not have length `chunk_size`.
See `rchunks_exact_mut` for a variant of this iterator that returns chunks of always exactly `chunk_size` elements, and `chunks_mut` for the same iterator but starting at the beginning of the slice.
##### Panics
Panics if `chunk_size` is 0.
##### Examples
```
let v = &mut [0, 0, 0, 0, 0];
let mut count = 1;
for chunk in v.rchunks_mut(2) {
for elem in chunk.iter_mut() {
*elem += count;
}
count += 1;
}
assert_eq!(v, &[3, 2, 2, 1, 1]);
```
1.31.0 · source#### pub fn rchunks_exact(&self, chunk_size: usize) -> RChunksExact<'_, T>
Returns an iterator over `chunk_size` elements of the slice at a time, starting at the end of the slice.
The chunks are slices and do not overlap. If `chunk_size` does not divide the length of the slice, then the last up to `chunk_size-1` elements will be omitted and can be retrieved from the `remainder` function of the iterator.
Due to each chunk having exactly `chunk_size` elements, the compiler can often optimize the resulting code better than in the case of `rchunks`.
See `rchunks` for a variant of this iterator that also returns the remainder as a smaller chunk, and `chunks_exact` for the same iterator but starting at the beginning of the slice.
##### Panics
Panics if `chunk_size` is 0.
##### Examples
```
let slice = ['l', 'o', 'r', 'e', 'm'];
let mut iter = slice.rchunks_exact(2);
assert_eq!(iter.next().unwrap(), &['e', 'm']);
assert_eq!(iter.next().unwrap(), &['o', 'r']);
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), &['l']);
```
1.31.0 · source#### pub fn rchunks_exact_mut(&mut self, chunk_size: usize) -> RChunksExactMut<'_, T>
Returns an iterator over `chunk_size` elements of the slice at a time, starting at the end of the slice.
The chunks are mutable slices, and do not overlap. If `chunk_size` does not divide the length of the slice, then the last up to `chunk_size-1` elements will be omitted and can be retrieved from the `into_remainder` function of the iterator.
Due to each chunk having exactly `chunk_size` elements, the compiler can often optimize the resulting code better than in the case of `chunks_mut`.
See `rchunks_mut` for a variant of this iterator that also returns the remainder as a smaller chunk, and `chunks_exact_mut` for the same iterator but starting at the beginning of the slice.
##### Panics
Panics if `chunk_size` is 0.
##### Examples
```
let v = &mut [0, 0, 0, 0, 0];
let mut count = 1;
for chunk in v.rchunks_exact_mut(2) {
for elem in chunk.iter_mut() {
*elem += count;
}
count += 1;
}
assert_eq!(v, &[0, 2, 2, 1, 1]);
```
#### pub fn group_by<F>(&self, pred: F) -> GroupBy<'_, T, F>where
F: FnMut(&T, &T) -> bool,
🔬This is a nightly-only experimental API. (`slice_group_by`)
Returns an iterator over the slice producing non-overlapping runs of elements using the predicate to separate them.
The predicate is called on pairs of consecutive elements, i.e. on `slice[0]` and `slice[1]`, then on `slice[1]` and `slice[2]`, and so on.
##### Examples
```
#![feature(slice_group_by)]
let slice = &[1, 1, 1, 3, 3, 2, 2, 2];
let mut iter = slice.group_by(|a, b| a == b);
assert_eq!(iter.next(), Some(&[1, 1, 1][..]));
assert_eq!(iter.next(), Some(&[3, 3][..]));
assert_eq!(iter.next(), Some(&[2, 2, 2][..]));
assert_eq!(iter.next(), None);
```
This method can be used to extract the sorted subslices:
```
#![feature(slice_group_by)]
let slice = &[1, 1, 2, 3, 2, 3, 2, 3, 4];
let mut iter = slice.group_by(|a, b| a <= b);
assert_eq!(iter.next(), Some(&[1, 1, 2, 3][..]));
assert_eq!(iter.next(), Some(&[2, 3][..]));
assert_eq!(iter.next(), Some(&[2, 3, 4][..]));
assert_eq!(iter.next(), None);
```
#### pub fn group_by_mut<F>(&mut self, pred: F) -> GroupByMut<'_, T, F>where
F: FnMut(&T, &T) -> bool,
🔬This is a nightly-only experimental API. (`slice_group_by`)
Returns an iterator over the slice producing non-overlapping mutable runs of elements using the predicate to separate them.
The predicate is called on pairs of consecutive elements, i.e. on `slice[0]` and `slice[1]`, then on `slice[1]` and `slice[2]`, and so on.
##### Examples
```
#![feature(slice_group_by)]
let slice = &mut [1, 1, 1, 3, 3, 2, 2, 2];
let mut iter = slice.group_by_mut(|a, b| a == b);
assert_eq!(iter.next(), Some(&mut [1, 1, 1][..]));
assert_eq!(iter.next(), Some(&mut [3, 3][..]));
assert_eq!(iter.next(), Some(&mut [2, 2, 2][..]));
assert_eq!(iter.next(), None);
```
This method can be used to extract the sorted subslices:
```
#![feature(slice_group_by)]
let slice = &mut [1, 1, 2, 3, 2, 3, 2, 3, 4];
let mut iter = slice.group_by_mut(|a, b| a <= b);
assert_eq!(iter.next(), Some(&mut [1, 1, 2, 3][..]));
assert_eq!(iter.next(), Some(&mut [2, 3][..]));
assert_eq!(iter.next(), Some(&mut [2, 3, 4][..]));
assert_eq!(iter.next(), None);
```
1.0.0 · source#### pub fn split_at(&self, mid: usize) -> (&[T], &[T])
Divides one slice into two at an index.
The first will contain all indices from `[0, mid)` (excluding the index `mid` itself) and the second will contain all indices from `[mid, len)` (excluding the index `len` itself).
##### Panics
Panics if `mid > len`.
##### Examples
```
let v = [1, 2, 3, 4, 5, 6];
{
let (left, right) = v.split_at(0);
assert_eq!(left, []);
assert_eq!(right, [1, 2, 3, 4, 5, 6]);
}
{
let (left, right) = v.split_at(2);
assert_eq!(left, [1, 2]);
assert_eq!(right, [3, 4, 5, 6]);
}
{
let (left, right) = v.split_at(6);
assert_eq!(left, [1, 2, 3, 4, 5, 6]);
assert_eq!(right, []);
}
```
1.0.0 · source#### pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T])
Divides one mutable slice into two at an index.
The first will contain all indices from `[0, mid)` (excluding the index `mid` itself) and the second will contain all indices from `[mid, len)` (excluding the index `len` itself).
##### Panics
Panics if `mid > len`.
##### Examples
```
let mut v = [1, 0, 3, 0, 5, 6];
let (left, right) = v.split_at_mut(2);
assert_eq!(left, [1, 0]);
assert_eq!(right, [3, 0, 5, 6]);
left[1] = 2;
right[1] = 4;
assert_eq!(v, [1, 2, 3, 4, 5, 6]);
```
#### pub unsafe fn split_at_unchecked(&self, mid: usize) -> (&[T], &[T])
🔬This is a nightly-only experimental API. (`slice_split_at_unchecked`)
Divides one slice into two at an index, without doing bounds checking.
The first will contain all indices from `[0, mid)` (excluding the index `mid` itself) and the second will contain all indices from `[mid, len)` (excluding the index `len` itself).
For a safe alternative see `split_at`.
##### Safety
Calling this method with an out-of-bounds index is *undefined behavior*
even if the resulting reference is not used. The caller has to ensure that
`0 <= mid <= self.len()`.
##### Examples
```
#![feature(slice_split_at_unchecked)]
let v = [1, 2, 3, 4, 5, 6];
unsafe {
let (left, right) = v.split_at_unchecked(0);
assert_eq!(left, []);
assert_eq!(right, [1, 2, 3, 4, 5, 6]);
}
unsafe {
let (left, right) = v.split_at_unchecked(2);
assert_eq!(left, [1, 2]);
assert_eq!(right, [3, 4, 5, 6]);
}
unsafe {
let (left, right) = v.split_at_unchecked(6);
assert_eq!(left, [1, 2, 3, 4, 5, 6]);
assert_eq!(right, []);
}
```
#### pub unsafe fn split_at_mut_unchecked(
&mut self,
mid: usize
) -> (&mut [T], &mut [T])
🔬This is a nightly-only experimental API. (`slice_split_at_unchecked`)
Divides one mutable slice into two at an index, without doing bounds checking.
The first will contain all indices from `[0, mid)` (excluding the index `mid` itself) and the second will contain all indices from `[mid, len)` (excluding the index `len` itself).
For a safe alternative see `split_at_mut`.
##### Safety
Calling this method with an out-of-bounds index is *undefined behavior*
even if the resulting reference is not used. The caller has to ensure that
`0 <= mid <= self.len()`.
##### Examples
```
#![feature(slice_split_at_unchecked)]
let mut v = [1, 0, 3, 0, 5, 6];
// scoped to restrict the lifetime of the borrows
unsafe {
let (left, right) = v.split_at_mut_unchecked(2);
assert_eq!(left, [1, 0]);
assert_eq!(right, [3, 0, 5, 6]);
left[1] = 2;
right[1] = 4;
}
assert_eq!(v, [1, 2, 3, 4, 5, 6]);
```
#### pub fn split_array_ref<const N: usize>(&self) -> (&[T; N], &[T])
🔬This is a nightly-only experimental API. (`split_array`)
Divides one slice into an array and a remainder slice at an index.
The array will contain all indices from `[0, N)` (excluding the index `N` itself) and the slice will contain all indices from `[N, len)` (excluding the index `len` itself).
##### Panics
Panics if `N > len`.
##### Examples
```
#![feature(split_array)]
let v = &[1, 2, 3, 4, 5, 6][..];
{
let (left, right) = v.split_array_ref::<0>();
assert_eq!(left, &[]);
assert_eq!(right, [1, 2, 3, 4, 5, 6]);
}
{
let (left, right) = v.split_array_ref::<2>();
assert_eq!(left, &[1, 2]);
assert_eq!(right, [3, 4, 5, 6]);
}
{
let (left, right) = v.split_array_ref::<6>();
assert_eq!(left, &[1, 2, 3, 4, 5, 6]);
assert_eq!(right, []);
}
```
#### pub fn split_array_mut<const N: usize>(&mut self) -> (&mut [T; N], &mut [T])
🔬This is a nightly-only experimental API. (`split_array`)
Divides one mutable slice into an array and a remainder slice at an index.
The array will contain all indices from `[0, N)` (excluding the index `N` itself) and the slice will contain all indices from `[N, len)` (excluding the index `len` itself).
##### Panics
Panics if `N > len`.
##### Examples
```
#![feature(split_array)]
let mut v = &mut [1, 0, 3, 0, 5, 6][..];
let (left, right) = v.split_array_mut::<2>();
assert_eq!(left, &mut [1, 0]);
assert_eq!(right, [3, 0, 5, 6]);
left[1] = 2;
right[1] = 4;
assert_eq!(v, [1, 2, 3, 4, 5, 6]);
```
#### pub fn rsplit_array_ref<const N: usize>(&self) -> (&[T], &[T; N])
🔬This is a nightly-only experimental API. (`split_array`)
Divides one slice into an array and a remainder slice at an index from the end.
The slice will contain all indices from `[0, len - N)` (excluding the index `len - N` itself) and the array will contain all indices from `[len - N, len)` (excluding the index `len` itself).
##### Panics
Panics if `N > len`.
##### Examples
```
#![feature(split_array)]
let v = &[1, 2, 3, 4, 5, 6][..];
{
let (left, right) = v.rsplit_array_ref::<0>();
assert_eq!(left, [1, 2, 3, 4, 5, 6]);
assert_eq!(right, &[]);
}
{
let (left, right) = v.rsplit_array_ref::<2>();
assert_eq!(left, [1, 2, 3, 4]);
assert_eq!(right, &[5, 6]);
}
{
let (left, right) = v.rsplit_array_ref::<6>();
assert_eq!(left, []);
assert_eq!(right, &[1, 2, 3, 4, 5, 6]);
}
```
#### pub fn rsplit_array_mut<const N: usize>(&mut self) -> (&mut [T], &mut [T; N])
🔬This is a nightly-only experimental API. (`split_array`)
Divides one mutable slice into an array and a remainder slice at an index from the end.
The slice will contain all indices from `[0, len - N)` (excluding the index `len - N` itself) and the array will contain all indices from `[len - N, len)` (excluding the index `len` itself).
##### Panics
Panics if `N > len`.
##### Examples
```
#![feature(split_array)]
let mut v = &mut [1, 0, 3, 0, 5, 6][..];
let (left, right) = v.rsplit_array_mut::<4>();
assert_eq!(left, [1, 0]);
assert_eq!(right, &mut [3, 0, 5, 6]);
left[1] = 2;
right[1] = 4;
assert_eq!(v, [1, 2, 3, 4, 5, 6]);
```
1.0.0 · source#### pub fn split<F>(&self, pred: F) -> Split<'_, T, F>where
F: FnMut(&T) -> bool,
Returns an iterator over subslices separated by elements that match
`pred`. The matched element is not contained in the subslices.
##### Examples
```
let slice = [10, 40, 33, 20];
let mut iter = slice.split(|num| num % 3 == 0);
assert_eq!(iter.next().unwrap(), &[10, 40]);
assert_eq!(iter.next().unwrap(), &[20]);
assert!(iter.next().is_none());
```
If the first element is matched, an empty slice will be the first item returned by the iterator. Similarly, if the last element in the slice is matched, an empty slice will be the last item returned by the iterator:
```
let slice = [10, 40, 33];
let mut iter = slice.split(|num| num % 3 == 0);
assert_eq!(iter.next().unwrap(), &[10, 40]);
assert_eq!(iter.next().unwrap(), &[]);
assert!(iter.next().is_none());
```
If two matched elements are directly adjacent, an empty slice will be present between them:
```
let slice = [10, 6, 33, 20];
let mut iter = slice.split(|num| num % 3 == 0);
assert_eq!(iter.next().unwrap(), &[10]);
assert_eq!(iter.next().unwrap(), &[]);
assert_eq!(iter.next().unwrap(), &[20]);
assert!(iter.next().is_none());
```
1.0.0 · source#### pub fn split_mut<F>(&mut self, pred: F) -> SplitMut<'_, T, F>where
F: FnMut(&T) -> bool,
Returns an iterator over mutable subslices separated by elements that match `pred`. The matched element is not contained in the subslices.
##### Examples
```
let mut v = [10, 40, 30, 20, 60, 50];
for group in v.split_mut(|num| *num % 3 == 0) {
group[0] = 1;
}
assert_eq!(v, [1, 40, 30, 1, 60, 1]);
```
1.51.0 · source#### pub fn split_inclusive<F>(&self, pred: F) -> SplitInclusive<'_, T, F>where
F: FnMut(&T) -> bool,
Returns an iterator over subslices separated by elements that match
`pred`. The matched element is contained in the end of the previous subslice as a terminator.
##### Examples
```
let slice = [10, 40, 33, 20];
let mut iter = slice.split_inclusive(|num| num % 3 == 0);
assert_eq!(iter.next().unwrap(), &[10, 40, 33]);
assert_eq!(iter.next().unwrap(), &[20]);
assert!(iter.next().is_none());
```
If the last element of the slice is matched,
that element will be considered the terminator of the preceding slice.
That slice will be the last item returned by the iterator.
```
let slice = [3, 10, 40, 33];
let mut iter = slice.split_inclusive(|num| num % 3 == 0);
assert_eq!(iter.next().unwrap(), &[3]);
assert_eq!(iter.next().unwrap(), &[10, 40, 33]);
assert!(iter.next().is_none());
```
1.51.0 · source#### pub fn split_inclusive_mut<F>(&mut self, pred: F) -> SplitInclusiveMut<'_, T, F>where
F: FnMut(&T) -> bool,
Returns an iterator over mutable subslices separated by elements that match `pred`. The matched element is contained in the previous subslice as a terminator.
##### Examples
```
let mut v = [10, 40, 30, 20, 60, 50];
for group in v.split_inclusive_mut(|num| *num % 3 == 0) {
let terminator_idx = group.len()-1;
group[terminator_idx] = 1;
}
assert_eq!(v, [10, 40, 1, 20, 1, 1]);
```
1.27.0 · source#### pub fn rsplit<F>(&self, pred: F) -> RSplit<'_, T, F>where
F: FnMut(&T) -> bool,
Returns an iterator over subslices separated by elements that match
`pred`, starting at the end of the slice and working backwards.
The matched element is not contained in the subslices.
##### Examples
```
let slice = [11, 22, 33, 0, 44, 55];
let mut iter = slice.rsplit(|num| *num == 0);
assert_eq!(iter.next().unwrap(), &[44, 55]);
assert_eq!(iter.next().unwrap(), &[11, 22, 33]);
assert_eq!(iter.next(), None);
```
As with `split()`, if the first or last element is matched, an empty slice will be the first (or last) item returned by the iterator.
```
let v = &[0, 1, 1, 2, 3, 5, 8];
let mut it = v.rsplit(|n| *n % 2 == 0);
assert_eq!(it.next().unwrap(), &[]);
assert_eq!(it.next().unwrap(), &[3, 5]);
assert_eq!(it.next().unwrap(), &[1, 1]);
assert_eq!(it.next().unwrap(), &[]);
assert_eq!(it.next(), None);
```
1.27.0 · source#### pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, T, F>where
F: FnMut(&T) -> bool,
Returns an iterator over mutable subslices separated by elements that match `pred`, starting at the end of the slice and working backwards. The matched element is not contained in the subslices.
##### Examples
```
let mut v = [100, 400, 300, 200, 600, 500];
let mut count = 0;
for group in v.rsplit_mut(|num| *num % 3 == 0) {
count += 1;
group[0] = count;
}
assert_eq!(v, [3, 400, 300, 2, 600, 1]);
```
1.0.0 · source#### pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, T, F>where
F: FnMut(&T) -> bool,
Returns an iterator over subslices separated by elements that match
`pred`, limited to returning at most `n` items. The matched element is not contained in the subslices.
The last element returned, if any, will contain the remainder of the slice.
##### Examples
Print the slice split once by numbers divisible by 3 (i.e., `[10, 40]`,
`[20, 60, 50]`):
```
let v = [10, 40, 30, 20, 60, 50];
for group in v.splitn(2, |num| *num % 3 == 0) {
println!("{group:?}");
}
```
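As a small follow-up (not part of the original example), the remainder behaviour described above can also be asserted directly:
```
let v = [10, 40, 30, 20, 60, 50];
let mut iter = v.splitn(2, |num| *num % 3 == 0);
assert_eq!(iter.next().unwrap(), &[10, 40]);
// The last item keeps the rest of the slice, separators included.
assert_eq!(iter.next().unwrap(), &[20, 60, 50]);
assert!(iter.next().is_none());
```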
1.0.0 · source#### pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, T, F>where
F: FnMut(&T) -> bool,
Returns an iterator over mutable subslices separated by elements that match
`pred`, limited to returning at most `n` items. The matched element is not contained in the subslices.
The last element returned, if any, will contain the remainder of the slice.
##### Examples
```
let mut v = [10, 40, 30, 20, 60, 50];
for group in v.splitn_mut(2, |num| *num % 3 == 0) {
group[0] = 1;
}
assert_eq!(v, [1, 40, 30, 1, 60, 50]);
```
1.0.0 · source#### pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, T, F>where
F: FnMut(&T) -> bool,
Returns an iterator over subslices separated by elements that match
`pred` limited to returning at most `n` items. This starts at the end of the slice and works backwards. The matched element is not contained in the subslices.
The last element returned, if any, will contain the remainder of the slice.
##### Examples
Print the slice split once, starting from the end, by numbers divisible by 3 (i.e., `[50]`, `[10, 40, 30, 20]`):
```
let v = [10, 40, 30, 20, 60, 50];
for group in v.rsplitn(2, |num| *num % 3 == 0) {
println!("{group:?}");
}
```
1.0.0 · source#### pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, T, F>where
F: FnMut(&T) -> bool,
Returns an iterator over subslices separated by elements that match
`pred` limited to returning at most `n` items. This starts at the end of the slice and works backwards. The matched element is not contained in the subslices.
The last element returned, if any, will contain the remainder of the slice.
##### Examples
```
let mut s = [10, 40, 30, 20, 60, 50];
for group in s.rsplitn_mut(2, |num| *num % 3 == 0) {
group[0] = 1;
}
assert_eq!(s, [1, 40, 30, 20, 60, 1]);
```
1.0.0 · source#### pub fn contains(&self, x: &T) -> boolwhere
T: PartialEq<T>,
Returns `true` if the slice contains an element with the given value.
This operation is *O*(*n*).
Note that if you have a sorted slice, `binary_search` may be faster.
##### Examples
```
let v = [10, 40, 30];
assert!(v.contains(&30));
assert!(!v.contains(&50));
```
If you do not have a `&T`, but some other value that you can compare with one (for example, `String` implements `PartialEq<str>`), you can use `iter().any`:
```
let v = [String::from("hello"), String::from("world")]; // slice of `String`
assert!(v.iter().any(|e| e == "hello")); // search with `&str`
assert!(!v.iter().any(|e| e == "hi"));
```
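Regarding the earlier note about sorted slices, a minimal sketch (assuming the slice is already sorted) is to use `binary_search` as a containment check:
```
let sorted = [10, 30, 40];
// `Ok(_)` means the value is present, `Err(_)` means it is not.
assert!(sorted.binary_search(&30).is_ok());
assert!(sorted.binary_search(&50).is_err());
```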
1.0.0 · source#### pub fn starts_with(&self, needle: &[T]) -> boolwhere
T: PartialEq<T>,
Returns `true` if `needle` is a prefix of the slice.
##### Examples
```
let v = [10, 40, 30];
assert!(v.starts_with(&[10]));
assert!(v.starts_with(&[10, 40]));
assert!(!v.starts_with(&[50]));
assert!(!v.starts_with(&[10, 50]));
```
Always returns `true` if `needle` is an empty slice:
```
let v = &[10, 40, 30];
assert!(v.starts_with(&[]));
let v: &[u8] = &[];
assert!(v.starts_with(&[]));
```
1.0.0 · source#### pub fn ends_with(&self, needle: &[T]) -> boolwhere
T: PartialEq<T>,
Returns `true` if `needle` is a suffix of the slice.
##### Examples
```
let v = [10, 40, 30];
assert!(v.ends_with(&[30]));
assert!(v.ends_with(&[40, 30]));
assert!(!v.ends_with(&[50]));
assert!(!v.ends_with(&[50, 30]));
```
Always returns `true` if `needle` is an empty slice:
```
let v = &[10, 40, 30];
assert!(v.ends_with(&[]));
let v: &[u8] = &[];
assert!(v.ends_with(&[]));
```
1.51.0 · source#### pub fn strip_prefix<P>(&self, prefix: &P) -> Option<&[T]>where
P: SlicePattern<Item = T> + ?Sized,
T: PartialEq<T>,
Returns a subslice with the prefix removed.
If the slice starts with `prefix`, returns the subslice after the prefix, wrapped in `Some`.
If `prefix` is empty, simply returns the original slice.
If the slice does not start with `prefix`, returns `None`.
##### Examples
```
let v = &[10, 40, 30];
assert_eq!(v.strip_prefix(&[10]), Some(&[40, 30][..]));
assert_eq!(v.strip_prefix(&[10, 40]), Some(&[30][..]));
assert_eq!(v.strip_prefix(&[50]), None);
assert_eq!(v.strip_prefix(&[10, 50]), None);
let prefix : &str = "he";
assert_eq!(b"hello".strip_prefix(prefix.as_bytes()),
Some(b"llo".as_ref()));
```
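A small sketch of the empty-prefix case described above (not part of the original example):
```
let v = &[10, 40, 30];
let empty: &[i32] = &[];
// An empty prefix always matches, so the whole slice is returned.
assert_eq!(v.strip_prefix(empty), Some(&[10, 40, 30][..]));
```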
1.51.0 · source#### pub fn strip_suffix<P>(&self, suffix: &P) -> Option<&[T]>where
P: SlicePattern<Item = T> + ?Sized,
T: PartialEq<T>,
Returns a subslice with the suffix removed.
If the slice ends with `suffix`, returns the subslice before the suffix, wrapped in `Some`.
If `suffix` is empty, simply returns the original slice.
If the slice does not end with `suffix`, returns `None`.
##### Examples
```
let v = &[10, 40, 30];
assert_eq!(v.strip_suffix(&[30]), Some(&[10, 40][..]));
assert_eq!(v.strip_suffix(&[40, 30]), Some(&[10][..]));
assert_eq!(v.strip_suffix(&[50]), None);
assert_eq!(v.strip_suffix(&[50, 30]), None);
```
1.0.0 · source#### pub fn binary_search(&self, x: &T) -> Result<usize, usize>where
T: Ord,
Binary searches this slice for a given element.
If the slice is not sorted, the returned result is unspecified and meaningless.
If the value is found then `Result::Ok` is returned, containing the index of the matching element. If there are multiple matches, then any one of the matches could be returned. The index is chosen deterministically, but is subject to change in future versions of Rust.
If the value is not found then `Result::Err` is returned, containing the index where a matching element could be inserted while maintaining sorted order.
See also `binary_search_by`, `binary_search_by_key`, and `partition_point`.
##### Examples
Looks up a series of four elements. The first is found, with a uniquely determined position; the second and third are not found; the fourth could match any position in `[1, 4]`.
```
let s = [0, 1, 1, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55];
assert_eq!(s.binary_search(&13), Ok(9));
assert_eq!(s.binary_search(&4), Err(7));
assert_eq!(s.binary_search(&100), Err(13));
let r = s.binary_search(&1);
assert!(match r { Ok(1..=4) => true, _ => false, });
```
If you want to find that whole *range* of matching items, rather than an arbitrary matching one, that can be done using `partition_point`:
```
let s = [0, 1, 1, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55];
let low = s.partition_point(|x| x < &1);
assert_eq!(low, 1);
let high = s.partition_point(|x| x <= &1);
assert_eq!(high, 5);
let r = s.binary_search(&1);
assert!((low..high).contains(&r.unwrap()));
assert!(s[..low].iter().all(|&x| x < 1));
assert!(s[low..high].iter().all(|&x| x == 1));
assert!(s[high..].iter().all(|&x| x > 1));
// For something not found, the "range" of equal items is empty
assert_eq!(s.partition_point(|x| x < &11), 9);
assert_eq!(s.partition_point(|x| x <= &11), 9);
assert_eq!(s.binary_search(&11), Err(9));
```
If you want to insert an item to a sorted vector, while maintaining sort order, consider using `partition_point`:
```
let mut s = vec![0, 1, 1, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55];
let num = 42;
let idx = s.partition_point(|&x| x < num);
// The above is equivalent to `let idx = s.binary_search(&num).unwrap_or_else(|x| x);`
s.insert(idx, num);
assert_eq!(s, [0, 1, 1, 1, 1, 2, 3, 5, 8, 13, 21, 34, 42, 55]);
```
1.0.0 · source#### pub fn binary_search_by<'a, F>(&'a self, f: F) -> Result<usize, usize>where
F: FnMut(&'a T) -> Ordering,
Binary searches this slice with a comparator function.
The comparator function should return an order code that indicates whether its argument is `Less`, `Equal` or `Greater` than the desired target.
If the slice is not sorted or if the comparator function does not implement an order consistent with the sort order of the underlying slice, the returned result is unspecified and meaningless.
If the value is found then `Result::Ok` is returned, containing the index of the matching element. If there are multiple matches, then any one of the matches could be returned. The index is chosen deterministically, but is subject to change in future versions of Rust.
If the value is not found then `Result::Err` is returned, containing the index where a matching element could be inserted while maintaining sorted order.
See also `binary_search`, `binary_search_by_key`, and `partition_point`.
##### Examples
Looks up a series of four elements. The first is found, with a uniquely determined position; the second and third are not found; the fourth could match any position in `[1, 4]`.
```
let s = [0, 1, 1, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55];
let seek = 13;
assert_eq!(s.binary_search_by(|probe| probe.cmp(&seek)), Ok(9));
let seek = 4;
assert_eq!(s.binary_search_by(|probe| probe.cmp(&seek)), Err(7));
let seek = 100;
assert_eq!(s.binary_search_by(|probe| probe.cmp(&seek)), Err(13));
let seek = 1;
let r = s.binary_search_by(|probe| probe.cmp(&seek));
assert!(match r { Ok(1..=4) => true, _ => false, });
```
1.10.0 · source#### pub fn binary_search_by_key<'a, B, F>(
&'a self,
b: &B,
f: F
) -> Result<usize, usize>where
F: FnMut(&'a T) -> B,
B: Ord,
Binary searches this slice with a key extraction function.
Assumes that the slice is sorted by the key, for instance with
`sort_by_key` using the same key extraction function.
If the slice is not sorted by the key, the returned result is unspecified and meaningless.
If the value is found then `Result::Ok` is returned, containing the index of the matching element. If there are multiple matches, then any one of the matches could be returned. The index is chosen deterministically, but is subject to change in future versions of Rust.
If the value is not found then `Result::Err` is returned, containing the index where a matching element could be inserted while maintaining sorted order.
See also `binary_search`, `binary_search_by`, and `partition_point`.
##### Examples
Looks up a series of four elements in a slice of pairs sorted by their second elements. The first is found, with a uniquely determined position; the second and third are not found; the fourth could match any position in `[1, 4]`.
```
let s = [(0, 0), (2, 1), (4, 1), (5, 1), (3, 1),
(1, 2), (2, 3), (4, 5), (5, 8), (3, 13),
(1, 21), (2, 34), (4, 55)];
assert_eq!(s.binary_search_by_key(&13, |&(a, b)| b), Ok(9));
assert_eq!(s.binary_search_by_key(&4, |&(a, b)| b), Err(7));
assert_eq!(s.binary_search_by_key(&100, |&(a, b)| b), Err(13));
let r = s.binary_search_by_key(&1, |&(a, b)| b);
assert!(match r { Ok(1..=4) => true, _ => false, });
```
1.20.0 · source#### pub fn sort_unstable(&mut self)where
T: Ord,
Sorts the slice, but might not preserve the order of equal elements.
This sort is unstable (i.e., may reorder equal elements), in-place
(i.e., does not allocate), and *O*(*n* * log(*n*)) worst-case.
##### Current implementation
The current algorithm is based on pattern-defeating quicksort by <NAME>,
which combines the fast average case of randomized quicksort with the fast worst case of heapsort, while achieving linear time on slices with certain patterns. It uses some randomization to avoid degenerate cases, but with a fixed seed to always provide deterministic behavior.
It is typically faster than stable sorting, except in a few special cases, e.g., when the slice consists of several concatenated sorted sequences.
##### Examples
```
let mut v = [-5, 4, 1, -3, 2];
v.sort_unstable();
assert!(v == [-5, -3, 1, 2, 4]);
```
1.20.0 · source#### pub fn sort_unstable_by<F>(&mut self, compare: F)where
F: FnMut(&T, &T) -> Ordering,
Sorts the slice with a comparator function, but might not preserve the order of equal elements.
This sort is unstable (i.e., may reorder equal elements), in-place
(i.e., does not allocate), and *O*(*n* * log(*n*)) worst-case.
The comparator function must define a total ordering for the elements in the slice. If the ordering is not total, the order of the elements is unspecified. An order is a total order if it is (for all `a`, `b` and `c`):
* total and antisymmetric: exactly one of `a < b`, `a == b` or `a > b` is true, and
* transitive, `a < b` and `b < c` implies `a < c`. The same must hold for both `==` and `>`.
For example, while `f64` doesn’t implement `Ord` because `NaN != NaN`, we can use
`partial_cmp` as our sort function when we know the slice doesn’t contain a `NaN`.
```
let mut floats = [5f64, 4.0, 1.0, 3.0, 2.0];
floats.sort_unstable_by(|a, b| a.partial_cmp(b).unwrap());
assert_eq!(floats, [1.0, 2.0, 3.0, 4.0, 5.0]);
```
##### Current implementation
The current algorithm is based on pattern-defeating quicksort by <NAME>,
which combines the fast average case of randomized quicksort with the fast worst case of heapsort, while achieving linear time on slices with certain patterns. It uses some randomization to avoid degenerate cases, but with a fixed seed to always provide deterministic behavior.
It is typically faster than stable sorting, except in a few special cases, e.g., when the slice consists of several concatenated sorted sequences.
##### Examples
```
let mut v = [5, 4, 1, 3, 2];
v.sort_unstable_by(|a, b| a.cmp(b));
assert!(v == [1, 2, 3, 4, 5]);
// reverse sorting
v.sort_unstable_by(|a, b| b.cmp(a));
assert!(v == [5, 4, 3, 2, 1]);
```
1.20.0 · source#### pub fn sort_unstable_by_key<K, F>(&mut self, f: F)where
F: FnMut(&T) -> K,
K: Ord,
Sorts the slice with a key extraction function, but might not preserve the order of equal elements.
This sort is unstable (i.e., may reorder equal elements), in-place
(i.e., does not allocate), and *O*(*m* * *n* * log(*n*)) worst-case, where the key function is
*O*(*m*).
##### Current implementation
The current algorithm is based on pattern-defeating quicksort by <NAME>,
which combines the fast average case of randomized quicksort with the fast worst case of heapsort, while achieving linear time on slices with certain patterns. It uses some randomization to avoid degenerate cases, but with a fixed seed to always provide deterministic behavior.
Due to its key calling strategy, `sort_unstable_by_key`
is likely to be slower than `sort_by_cached_key` in cases where the key function is expensive.
##### Examples
```
let mut v = [-5i32, 4, 1, -3, 2];
v.sort_unstable_by_key(|k| k.abs());
assert!(v == [1, 2, -3, 4, -5]);
```
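As a hedged illustration of the note above, `sort_by_cached_key` computes each key only once, which can pay off when the key function is expensive (here an allocating `String` key stands in for an expensive computation):
```
let mut v = [-5i32, 4, 1, -3, 2];
// Each key is computed once and cached, rather than on every comparison.
v.sort_by_cached_key(|k| k.abs().to_string());
assert!(v == [1, 2, -3, 4, -5]);
```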
1.49.0 · source#### pub fn select_nth_unstable(
&mut self,
index: usize
) -> (&mut [T], &mut T, &mut [T])where
T: Ord,
Reorder the slice such that the element at `index` is at its final sorted position.
This reordering has the additional property that any value at position `i < index` will be less than or equal to any value at a position `j > index`. Additionally, this reordering is unstable (i.e. any number of equal elements may end up at position `index`), in-place
(i.e. does not allocate), and *O*(*n*) on average. The worst-case performance is *O*(*n* log *n*).
This function is also known as “kth element” in other libraries.
It returns a triplet of the following from the reordered slice:
the subslice prior to `index`, the element at `index`, and the subslice after `index`;
accordingly, the values in those two subslices will respectively all be less-than-or-equal-to and greater-than-or-equal-to the value of the element at `index`.
##### Current implementation
The current algorithm is based on the quickselect portion of the same quicksort algorithm used for `sort_unstable`.
##### Panics
Panics when `index >= len()`, meaning it always panics on empty slices.
##### Examples
```
let mut v = [-5i32, 4, 1, -3, 2];
// Find the median
v.select_nth_unstable(2);
// We are only guaranteed the slice will be one of the following, based on the way we sort
// about the specified index.
assert!(v == [-3, -5, 1, 2, 4] ||
v == [-5, -3, 1, 2, 4] ||
v == [-3, -5, 1, 4, 2] ||
v == [-5, -3, 1, 4, 2]);
```
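A minimal sketch (not in the original example) of using the returned triplet described above:
```
let mut v = [-5i32, 4, 1, -3, 2];
let (lesser, median, greater) = v.select_nth_unstable(2);
// The element at index 2 is now in its final sorted position.
assert_eq!(*median, 1);
// Everything before it is <= the median, everything after is >= it.
assert!(lesser.iter().all(|&x| x <= 1));
assert!(greater.iter().all(|&x| x >= 1));
```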
1.49.0 · source#### pub fn select_nth_unstable_by<F>(
&mut self,
index: usize,
compare: F
) -> (&mut [T], &mut T, &mut [T])where
F: FnMut(&T, &T) -> Ordering,
Reorder the slice with a comparator function such that the element at `index` is at its final sorted position.
This reordering has the additional property that any value at position `i < index` will be less than or equal to any value at a position `j > index` using the comparator function.
Additionally, this reordering is unstable (i.e. any number of equal elements may end up at position `index`), in-place (i.e. does not allocate), and *O*(*n*) on average.
The worst-case performance is *O*(*n* log *n*). This function is also known as
“kth element” in other libraries.
It returns a triplet of the following from the slice reordered according to the provided comparator function: the subslice prior to
`index`, the element at `index`, and the subslice after `index`; accordingly, the values in those two subslices will respectively all be less-than-or-equal-to and greater-than-or-equal-to the value of the element at `index`.
##### Current implementation
The current algorithm is based on the quickselect portion of the same quicksort algorithm used for `sort_unstable`.
##### Panics
Panics when `index >= len()`, meaning it always panics on empty slices.
##### Examples
```
let mut v = [-5i32, 4, 1, -3, 2];
// Find the median as if the slice were sorted in descending order.
v.select_nth_unstable_by(2, |a, b| b.cmp(a));
// We are only guaranteed the slice will be one of the following, based on the way we sort
// about the specified index.
assert!(v == [2, 4, 1, -5, -3] ||
v == [2, 4, 1, -3, -5] ||
v == [4, 2, 1, -5, -3] ||
v == [4, 2, 1, -3, -5]);
```
1.49.0 · source#### pub fn select_nth_unstable_by_key<K, F>(
&mut self,
index: usize,
f: F
) -> (&mut [T], &mut T, &mut [T])where
F: FnMut(&T) -> K,
K: Ord,
Reorder the slice with a key extraction function such that the element at `index` is at its final sorted position.
This reordering has the additional property that any value at position `i < index` will be less than or equal to any value at a position `j > index` using the key extraction function.
Additionally, this reordering is unstable (i.e. any number of equal elements may end up at position `index`), in-place (i.e. does not allocate), and *O*(*n*) on average.
The worst-case performance is *O*(*n* log *n*).
This function is also known as “kth element” in other libraries.
It returns a triplet of the following from the slice reordered according to the provided key extraction function: the subslice prior to
`index`, the element at `index`, and the subslice after `index`; accordingly, the values in those two subslices will respectively all be less-than-or-equal-to and greater-than-or-equal-to the value of the element at `index`.
##### Current implementation
The current algorithm is based on the quickselect portion of the same quicksort algorithm used for `sort_unstable`.
##### Panics
Panics when `index >= len()`, meaning it always panics on empty slices.
##### Examples
```
let mut v = [-5i32, 4, 1, -3, 2];
// Return the median as if the array were sorted according to absolute value.
v.select_nth_unstable_by_key(2, |a| a.abs());
// We are only guaranteed the slice will be one of the following, based on the way we sort
// about the specified index.
assert!(v == [1, 2, -3, 4, -5] ||
v == [1, 2, -3, -5, 4] ||
v == [2, 1, -3, 4, -5] ||
v == [2, 1, -3, -5, 4]);
```
#### pub fn partition_dedup(&mut self) -> (&mut [T], &mut [T])where
T: PartialEq<T>,
🔬This is a nightly-only experimental API. (`slice_partition_dedup`)
Moves all consecutive repeated elements to the end of the slice according to the
`PartialEq` trait implementation.
Returns two slices. The first contains no consecutive repeated elements.
The second contains all the duplicates in no specified order.
If the slice is sorted, the first returned slice contains no duplicates.
##### Examples
```
#![feature(slice_partition_dedup)]
let mut slice = [1, 2, 2, 3, 3, 2, 1, 1];
let (dedup, duplicates) = slice.partition_dedup();
assert_eq!(dedup, [1, 2, 3, 2, 1]);
assert_eq!(duplicates, [2, 3, 1]);
```
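As a small sketch of the sorted case noted above (illustrative only), a sorted input leaves no duplicates at all in the first part:
```
#![feature(slice_partition_dedup)]
let mut sorted = [1, 1, 2, 3, 3, 3, 4];
let (dedup, duplicates) = sorted.partition_dedup();
assert_eq!(dedup, [1, 2, 3, 4]);
// The order of the duplicates is unspecified, so only check how many there are.
assert_eq!(duplicates.len(), 3);
```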
#### pub fn partition_dedup_by<F>(&mut self, same_bucket: F) -> (&mut [T], &mut [T])where
F: FnMut(&mut T, &mut T) -> bool,
🔬This is a nightly-only experimental API. (`slice_partition_dedup`)
Moves all but the first of consecutive elements to the end of the slice satisfying a given equality relation.
Returns two slices. The first contains no consecutive repeated elements.
The second contains all the duplicates in no specified order.
The `same_bucket` function is passed references to two elements from the slice and must determine if the elements compare equal. The elements are passed in opposite order from their order in the slice, so if `same_bucket(a, b)` returns `true`, `a` is moved at the end of the slice.
If the slice is sorted, the first returned slice contains no duplicates.
##### Examples
```
#![feature(slice_partition_dedup)]
let mut slice = ["foo", "Foo", "BAZ", "Bar", "bar", "baz", "BAZ"];
let (dedup, duplicates) = slice.partition_dedup_by(|a, b| a.eq_ignore_ascii_case(b));
assert_eq!(dedup, ["foo", "BAZ", "Bar", "baz"]);
assert_eq!(duplicates, ["bar", "Foo", "BAZ"]);
```
#### pub fn partition_dedup_by_key<K, F>(&mut self, key: F) -> (&mut [T], &mut [T])where
F: FnMut(&mut T) -> K,
K: PartialEq<K>,
🔬This is a nightly-only experimental API. (`slice_partition_dedup`)
Moves all but the first of consecutive elements to the end of the slice that resolve to the same key.
Returns two slices. The first contains no consecutive repeated elements.
The second contains all the duplicates in no specified order.
If the slice is sorted, the first returned slice contains no duplicates.
##### Examples
```
#![feature(slice_partition_dedup)]
let mut slice = [10, 20, 21, 30, 30, 20, 11, 13];
let (dedup, duplicates) = slice.partition_dedup_by_key(|i| *i / 10);
assert_eq!(dedup, [10, 20, 30, 20, 11]);
assert_eq!(duplicates, [21, 30, 13]);
```
1.26.0 · source#### pub fn rotate_left(&mut self, mid: usize)
Rotates the slice in-place such that the first `mid` elements of the slice move to the end while the last `self.len() - mid` elements move to the front. After calling `rotate_left`, the element previously at index
`mid` will become the first element in the slice.
##### Panics
This function will panic if `mid` is greater than the length of the slice. Note that `mid == self.len()` does *not* panic and is a no-op rotation.
##### Complexity
Takes linear (in `self.len()`) time.
##### Examples
```
let mut a = ['a', 'b', 'c', 'd', 'e', 'f'];
a.rotate_left(2);
assert_eq!(a, ['c', 'd', 'e', 'f', 'a', 'b']);
```
Rotating a subslice:
```
let mut a = ['a', 'b', 'c', 'd', 'e', 'f'];
a[1..5].rotate_left(1);
assert_eq!(a, ['a', 'c', 'd', 'e', 'b', 'f']);
```
1.26.0 · source#### pub fn rotate_right(&mut self, k: usize)
Rotates the slice in-place such that the first `self.len() - k`
elements of the slice move to the end while the last `k` elements move to the front. After calling `rotate_right`, the element previously at index `self.len() - k` will become the first element in the slice.
##### Panics
This function will panic if `k` is greater than the length of the slice. Note that `k == self.len()` does *not* panic and is a no-op rotation.
##### Complexity
Takes linear (in `self.len()`) time.
##### Examples
```
let mut a = ['a', 'b', 'c', 'd', 'e', 'f'];
a.rotate_right(2);
assert_eq!(a, ['e', 'f', 'a', 'b', 'c', 'd']);
```
Rotate a subslice:
```
let mut a = ['a', 'b', 'c', 'd', 'e', 'f'];
a[1..5].rotate_right(1);
assert_eq!(a, ['a', 'e', 'b', 'c', 'd', 'f']);
```
1.50.0 · source#### pub fn fill(&mut self, value: T)where
T: Clone,
Fills `self` with elements by cloning `value`.
##### Examples
```
let mut buf = vec![0; 10];
buf.fill(1);
assert_eq!(buf, vec![1; 10]);
```
1.51.0 · source#### pub fn fill_with<F>(&mut self, f: F)where
F: FnMut() -> T,
Fills `self` with elements returned by calling a closure repeatedly.
This method uses a closure to create new values. If you’d rather
`Clone` a given value, use `fill`. If you want to use the `Default`
trait to generate values, you can pass `Default::default` as the argument.
##### Examples
```
let mut buf = vec![1; 10];
buf.fill_with(Default::default);
assert_eq!(buf, vec![0; 10]);
```
1.7.0 · source#### pub fn clone_from_slice(&mut self, src: &[T])where
T: Clone,
Copies the elements from `src` into `self`.
The length of `src` must be the same as `self`.
##### Panics
This function will panic if the two slices have different lengths.
##### Examples
Cloning two elements from a slice into another:
```
let src = [1, 2, 3, 4];
let mut dst = [0, 0];
// Because the slices have to be the same length,
// we slice the source slice from four elements
// to two. It will panic if we don't do this.
dst.clone_from_slice(&src[2..]);
assert_eq!(src, [1, 2, 3, 4]);
assert_eq!(dst, [3, 4]);
```
Rust enforces that there can only be one mutable reference with no immutable references to a particular piece of data in a particular scope. Because of this, attempting to use `clone_from_slice` on a single slice will result in a compile failure:
```
let mut slice = [1, 2, 3, 4, 5];
slice[..2].clone_from_slice(&slice[3..]); // compile fail!
```
To work around this, we can use `split_at_mut` to create two distinct sub-slices from a slice:
```
let mut slice = [1, 2, 3, 4, 5];
{
let (left, right) = slice.split_at_mut(2);
left.clone_from_slice(&right[1..]);
}
assert_eq!(slice, [4, 5, 3, 4, 5]);
```
1.9.0 · source#### pub fn copy_from_slice(&mut self, src: &[T])where
T: Copy,
Copies all elements from `src` into `self`, using a memcpy.
The length of `src` must be the same as `self`.
If `T` does not implement `Copy`, use `clone_from_slice`.
##### Panics
This function will panic if the two slices have different lengths.
##### Examples
Copying two elements from a slice into another:
```
let src = [1, 2, 3, 4];
let mut dst = [0, 0];
// Because the slices have to be the same length,
// we slice the source slice from four elements
// to two. It will panic if we don't do this.
dst.copy_from_slice(&src[2..]);
assert_eq!(src, [1, 2, 3, 4]);
assert_eq!(dst, [3, 4]);
```
Rust enforces that there can only be one mutable reference with no immutable references to a particular piece of data in a particular scope. Because of this, attempting to use `copy_from_slice` on a single slice will result in a compile failure:
```
let mut slice = [1, 2, 3, 4, 5];
slice[..2].copy_from_slice(&slice[3..]); // compile fail!
```
To work around this, we can use `split_at_mut` to create two distinct sub-slices from a slice:
```
let mut slice = [1, 2, 3, 4, 5];
{
let (left, right) = slice.split_at_mut(2);
left.copy_from_slice(&right[1..]);
}
assert_eq!(slice, [4, 5, 3, 4, 5]);
```
1.37.0 · source#### pub fn copy_within<R>(&mut self, src: R, dest: usize)where
R: RangeBounds<usize>,
T: Copy,
Copies elements from one part of the slice to another part of itself,
using a memmove.
`src` is the range within `self` to copy from. `dest` is the starting index of the range within `self` to copy to, which will have the same length as `src`. The two ranges may overlap. The ends of the two ranges must be less than or equal to `self.len()`.
##### Panics
This function will panic if either range exceeds the end of the slice,
or if the end of `src` is before the start.
##### Examples
Copying four bytes within a slice:
```
let mut bytes = *b"Hello, World!";
bytes.copy_within(1..5, 8);
assert_eq!(&bytes, b"Hello, Wello!");
```
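Since the two ranges may overlap, here is a hedged sketch of the overlapping case (not part of the original example):
```
let mut bytes = *b"abcdef";
// Source range 0..4 and the destination starting at index 2 overlap;
// the copy behaves like a memmove.
bytes.copy_within(0..4, 2);
assert_eq!(&bytes, b"ababcd");
```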
1.27.0 · source#### pub fn swap_with_slice(&mut self, other: &mut [T])
Swaps all elements in `self` with those in `other`.
The length of `other` must be the same as `self`.
##### Panics
This function will panic if the two slices have different lengths.
##### Example
Swapping two elements across slices:
```
let mut slice1 = [0, 0];
let mut slice2 = [1, 2, 3, 4];
slice1.swap_with_slice(&mut slice2[2..]);
assert_eq!(slice1, [3, 4]);
assert_eq!(slice2, [1, 2, 0, 0]);
```
Rust enforces that there can only be one mutable reference to a particular piece of data in a particular scope. Because of this,
attempting to use `swap_with_slice` on a single slice will result in a compile failure:
```
let mut slice = [1, 2, 3, 4, 5];
slice[..2].swap_with_slice(&mut slice[3..]); // compile fail!
```
To work around this, we can use `split_at_mut` to create two distinct mutable sub-slices from a slice:
```
let mut slice = [1, 2, 3, 4, 5];
{
let (left, right) = slice.split_at_mut(2);
left.swap_with_slice(&mut right[1..]);
}
assert_eq!(slice, [4, 5, 3, 1, 2]);
```
1.30.0 · source#### pub unsafe fn align_to<U>(&self) -> (&[T], &[U], &[T])
Transmute the slice to a slice of another type, ensuring alignment of the types is maintained.
This method splits the slice into three distinct slices: prefix, correctly aligned middle slice of a new type, and the suffix slice. How exactly the slice is split up is not specified; the middle part may be smaller than necessary. However, if this fails to return a maximal middle part, that is because code is running in a context where performance does not matter, such as a sanitizer attempting to find alignment bugs. Regular code running in a default (debug or release) execution *will* return a maximal middle part.
This method has no purpose when either input element `T` or output element `U` are zero-sized and will return the original slice without splitting anything.
##### Safety
This method is essentially a `transmute` with respect to the elements in the returned middle slice, so all the usual caveats pertaining to `transmute::<T, U>` also apply here.
##### Examples
Basic usage:
```
unsafe {
let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let (prefix, shorts, suffix) = bytes.align_to::<u16>();
// less_efficient_algorithm_for_bytes(prefix);
// more_efficient_algorithm_for_aligned_shorts(shorts);
// less_efficient_algorithm_for_bytes(suffix);
}
```
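Whatever split is chosen, the three returned parts always cover the original slice; a minimal sketch asserting that invariant (illustrative only):
```
unsafe {
    let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
    let (prefix, shorts, suffix) = bytes.align_to::<u16>();
    // Each u16 accounts for two bytes of the original slice.
    assert_eq!(prefix.len() + shorts.len() * 2 + suffix.len(), bytes.len());
}
```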
1.30.0 · source#### pub unsafe fn align_to_mut<U>(&mut self) -> (&mut [T], &mut [U], &mut [T])
Transmute the mutable slice to a mutable slice of another type, ensuring alignment of the types is maintained.
This method splits the slice into three distinct slices: prefix, correctly aligned middle slice of a new type, and the suffix slice. How exactly the slice is split up is not specified; the middle part may be smaller than necessary. However, if this fails to return a maximal middle part, that is because code is running in a context where performance does not matter, such as a sanitizer attempting to find alignment bugs. Regular code running in a default (debug or release) execution *will* return a maximal middle part.
This method has no purpose when either input element `T` or output element `U` are zero-sized and will return the original slice without splitting anything.
##### Safety
This method is essentially a `transmute` with respect to the elements in the returned middle slice, so all the usual caveats pertaining to `transmute::<T, U>` also apply here.
##### Examples
Basic usage:
```
unsafe {
let mut bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let (prefix, shorts, suffix) = bytes.align_to_mut::<u16>();
// less_efficient_algorithm_for_bytes(prefix);
// more_efficient_algorithm_for_aligned_shorts(shorts);
// less_efficient_algorithm_for_bytes(suffix);
}
```
#### pub fn as_simd<const LANES: usize>(&self) -> (&[T], &[Simd<T, LANES>], &[T])where
Simd<T, LANES>: AsRef<[T; LANES]>,
T: SimdElement,
LaneCount<LANES>: SupportedLaneCount,
🔬This is a nightly-only experimental API. (`portable_simd`)Split a slice into a prefix, a middle of aligned SIMD types, and a suffix.
This is a safe wrapper around `slice::align_to`, so has the same weak postconditions as that method. You’re only assured that
`self.len() == prefix.len() + middle.len() * LANES + suffix.len()`.
Notably, all of the following are possible:
* `prefix.len() >= LANES`.
* `middle.is_empty()` despite `self.len() >= 3 * LANES`.
* `suffix.len() >= LANES`.
That said, this is a safe method, so if you’re only writing safe code,
then this can at most cause incorrect logic, not unsoundness.
##### Panics
This will panic if the size of the SIMD type is different from
`LANES` times that of the scalar.
At the time of writing, the trait restrictions on `Simd<T, LANES>` keeps that from ever happening, as only power-of-two numbers of lanes are supported. It’s possible that, in the future, those restrictions might be lifted in a way that would make it possible to see panics from this method for something like `LANES == 3`.
##### Examples
```
#![feature(portable_simd)]
use core::simd::SimdFloat;
let short = &[1, 2, 3];
let (prefix, middle, suffix) = short.as_simd::<4>();
assert_eq!(middle, []); // Not enough elements for anything in the middle
// They might be split in any possible way between prefix and suffix
let it = prefix.iter().chain(suffix).copied();
assert_eq!(it.collect::<Vec<_>>(), vec![1, 2, 3]);
fn basic_simd_sum(x: &[f32]) -> f32 {
use std::ops::Add;
use std::simd::f32x4;
let (prefix, middle, suffix) = x.as_simd();
let sums = f32x4::from_array([
prefix.iter().copied().sum(),
0.0,
0.0,
suffix.iter().copied().sum(),
]);
let sums = middle.iter().copied().fold(sums, f32x4::add);
sums.reduce_sum()
}
let numbers: Vec<f32> = (1..101).map(|x| x as _).collect();
assert_eq!(basic_simd_sum(&numbers[1..99]), 4949.0);
```
#### pub fn as_simd_mut<const LANES: usize>(
&mut self
) -> (&mut [T], &mut [Simd<T, LANES>], &mut [T])where
Simd<T, LANES>: AsMut<[T; LANES]>,
T: SimdElement,
LaneCount<LANES>: SupportedLaneCount,
🔬This is a nightly-only experimental API. (`portable_simd`)Split a mutable slice into a mutable prefix, a middle of aligned SIMD types,
and a mutable suffix.
This is a safe wrapper around `slice::align_to_mut`, so has the same weak postconditions as that method. You’re only assured that
`self.len() == prefix.len() + middle.len() * LANES + suffix.len()`.
Notably, all of the following are possible:
* `prefix.len() >= LANES`.
* `middle.is_empty()` despite `self.len() >= 3 * LANES`.
* `suffix.len() >= LANES`.
That said, this is a safe method, so if you’re only writing safe code,
then this can at most cause incorrect logic, not unsoundness.
This is the mutable version of `slice::as_simd`; see that for examples.
##### Panics
This will panic if the size of the SIMD type is different from
`LANES` times that of the scalar.
At the time of writing, the trait restrictions on `Simd<T, LANES>` keeps that from ever happening, as only power-of-two numbers of lanes are supported. It’s possible that, in the future, those restrictions might be lifted in a way that would make it possible to see panics from this method for something like `LANES == 3`.
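Since the mutable variant defers to `slice::as_simd` for examples, here is a minimal sketch (assuming a nightly toolchain with `portable_simd`; the doubling loop is only illustrative):
```
#![feature(portable_simd)]
use std::simd::f32x4;

let mut v = [1.0_f32; 10];
let (prefix, middle, suffix) = v.as_simd_mut::<4>();
// Scale the aligned middle four lanes at a time; prefix and suffix stay scalar.
for chunk in middle.iter_mut() {
    *chunk *= f32x4::splat(2.0);
}
for x in prefix.iter_mut().chain(suffix) {
    *x *= 2.0;
}
assert_eq!(v, [2.0_f32; 10]);
```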
#### pub fn is_sorted(&self) -> boolwhere
T: PartialOrd<T>,
🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this slice are sorted.
That is, for each element `a` and its following element `b`, `a <= b` must hold. If the slice yields exactly zero or one element, `true` is returned.
Note that if `Self::Item` is only `PartialOrd`, but not `Ord`, the above definition implies that this function returns `false` if any two consecutive items are not comparable.
##### Examples
```
#![feature(is_sorted)]
let empty: [i32; 0] = [];
assert!([1, 2, 2, 9].is_sorted());
assert!(![1, 3, 2, 4].is_sorted());
assert!([0].is_sorted());
assert!(empty.is_sorted());
assert!(![0.0, 1.0, f32::NAN].is_sorted());
```
#### pub fn is_sorted_by<'a, F>(&'a self, compare: F) -> boolwhere
F: FnMut(&'a T, &'a T) -> Option<Ordering>,
🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this slice are sorted using the given comparator function.
Instead of using `PartialOrd::partial_cmp`, this function uses the given `compare`
function to determine the ordering of two elements. Apart from that, it’s equivalent to
`is_sorted`; see its documentation for more information.
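As a minimal sketch, assuming a nightly toolchain with the `is_sorted` feature and the `Option<Ordering>`-returning comparator shown above:
```
#![feature(is_sorted)]
use std::cmp::Ordering;

// Returning `Some(Less)`/`Some(Equal)` marks a pair as in order;
// returning `None` marks it as not comparable, so the slice is not sorted.
assert!([1, 2, 2, 9].is_sorted_by(|a, b| a.partial_cmp(b)));

// A strictly-increasing comparator rejects the equal pair (2, 2):
assert!(![1, 2, 2, 9].is_sorted_by(|a, b| match a.cmp(b) {
    Ordering::Less => Some(Ordering::Less),
    _ => None,
}));
```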
#### pub fn is_sorted_by_key<'a, F, K>(&'a self, f: F) -> boolwhere
F: FnMut(&'a T) -> K,
K: PartialOrd<K>,
🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this slice are sorted using the given key extraction function.
Instead of comparing the slice’s elements directly, this function compares the keys of the elements, as determined by `f`. Apart from that, it’s equivalent to `is_sorted`; see its documentation for more information.
##### Examples
```
#![feature(is_sorted)]
assert!(["c", "bb", "aaa"].is_sorted_by_key(|s| s.len()));
assert!(![-2i32, -1, 0, 3].is_sorted_by_key(|n| n.abs()));
```
1.52.0 · source#### pub fn partition_point<P>(&self, pred: P) -> usizewhere
P: FnMut(&T) -> bool,
Returns the index of the partition point according to the given predicate
(the index of the first element of the second partition).
The slice is assumed to be partitioned according to the given predicate.
This means that all elements for which the predicate returns true are at the start of the slice and all elements for which the predicate returns false are at the end.
For example, `[7, 15, 3, 5, 4, 12, 6]` is partitioned under the predicate `x % 2 != 0`
(all odd numbers are at the start, all even at the end).
If this slice is not partitioned, the returned result is unspecified and meaningless,
as this method performs a kind of binary search.
See also `binary_search`, `binary_search_by`, and `binary_search_by_key`.
##### Examples
```
let v = [1, 2, 3, 3, 5, 6, 7];
let i = v.partition_point(|&x| x < 5);
assert_eq!(i, 4);
assert!(v[..i].iter().all(|&x| x < 5));
assert!(v[i..].iter().all(|&x| !(x < 5)));
```
If all elements of the slice match the predicate, including if the slice is empty, then the length of the slice will be returned:
```
let a = [2, 4, 8];
assert_eq!(a.partition_point(|x| x < &100), a.len());
let a: [i32; 0] = [];
assert_eq!(a.partition_point(|x| x < &100), 0);
```
If you want to insert an item to a sorted vector, while maintaining sort order:
```
let mut s = vec![0, 1, 1, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55];
let num = 42;
let idx = s.partition_point(|&x| x < num);
s.insert(idx, num);
assert_eq!(s, [0, 1, 1, 1, 1, 2, 3, 5, 8, 13, 21, 34, 42, 55]);
```
#### pub fn take<R, 'a>(self: &mut &'a [T], range: R) -> Option<&'a [T]>where
R: OneSidedRange<usize>,
🔬This is a nightly-only experimental API. (`slice_take`)Removes the subslice corresponding to the given range and returns a reference to it.
Returns `None` and does not modify the slice if the given range is out of bounds.
Note that this method only accepts one-sided ranges such as
`2..` or `..6`, but not `2..6`.
##### Examples
Taking the first three elements of a slice:
```
#![feature(slice_take)]
let mut slice: &[_] = &['a', 'b', 'c', 'd'];
let mut first_three = slice.take(..3).unwrap();
assert_eq!(slice, &['d']);
assert_eq!(first_three, &['a', 'b', 'c']);
```
Taking the last two elements of a slice:
```
#![feature(slice_take)]
let mut slice: &[_] = &['a', 'b', 'c', 'd'];
let mut tail = slice.take(2..).unwrap();
assert_eq!(slice, &['a', 'b']);
assert_eq!(tail, &['c', 'd']);
```
Getting `None` when `range` is out of bounds:
```
#![feature(slice_take)]
let mut slice: &[_] = &['a', 'b', 'c', 'd'];
assert_eq!(None, slice.take(5..));
assert_eq!(None, slice.take(..5));
assert_eq!(None, slice.take(..=4));
let expected: &[char] = &['a', 'b', 'c', 'd'];
assert_eq!(Some(expected), slice.take(..4));
```
#### pub fn take_mut<R, 'a>(self: &mut &'a mut [T], range: R) -> Option<&'a mut [T]>where
R: OneSidedRange<usize>,
🔬This is a nightly-only experimental API. (`slice_take`)Removes the subslice corresponding to the given range and returns a mutable reference to it.
Returns `None` and does not modify the slice if the given range is out of bounds.
Note that this method only accepts one-sided ranges such as
`2..` or `..6`, but not `2..6`.
##### Examples
Taking the first three elements of a slice:
```
#![feature(slice_take)]
let mut slice: &mut [_] = &mut ['a', 'b', 'c', 'd'];
let mut first_three = slice.take_mut(..3).unwrap();
assert_eq!(slice, &mut ['d']);
assert_eq!(first_three, &mut ['a', 'b', 'c']);
```
Taking the last two elements of a slice:
```
#![feature(slice_take)]
let mut slice: &mut [_] = &mut ['a', 'b', 'c', 'd'];
let mut tail = slice.take_mut(2..).unwrap();
assert_eq!(slice, &mut ['a', 'b']);
assert_eq!(tail, &mut ['c', 'd']);
```
Getting `None` when `range` is out of bounds:
```
#![feature(slice_take)]
let mut slice: &mut [_] = &mut ['a', 'b', 'c', 'd'];
assert_eq!(None, slice.take_mut(5..));
assert_eq!(None, slice.take_mut(..5));
assert_eq!(None, slice.take_mut(..=4));
let expected: &mut [_] = &mut ['a', 'b', 'c', 'd'];
assert_eq!(Some(expected), slice.take_mut(..4));
```
#### pub fn take_first<'a>(self: &mut &'a [T]) -> Option<&'a T>
🔬This is a nightly-only experimental API. (`slice_take`)Removes the first element of the slice and returns a reference to it.
Returns `None` if the slice is empty.
##### Examples
```
#![feature(slice_take)]
let mut slice: &[_] = &['a', 'b', 'c'];
let first = slice.take_first().unwrap();
assert_eq!(slice, &['b', 'c']);
assert_eq!(first, &'a');
```
#### pub fn take_first_mut<'a>(self: &mut &'a mut [T]) -> Option<&'a mut T>
🔬This is a nightly-only experimental API. (`slice_take`)Removes the first element of the slice and returns a mutable reference to it.
Returns `None` if the slice is empty.
##### Examples
```
#![feature(slice_take)]
let mut slice: &mut [_] = &mut ['a', 'b', 'c'];
let first = slice.take_first_mut().unwrap();
*first = 'd';
assert_eq!(slice, &['b', 'c']);
assert_eq!(first, &'d');
```
#### pub fn take_last<'a>(self: &mut &'a [T]) -> Option<&'a T>
🔬This is a nightly-only experimental API. (`slice_take`)Removes the last element of the slice and returns a reference to it.
Returns `None` if the slice is empty.
##### Examples
```
#![feature(slice_take)]
let mut slice: &[_] = &['a', 'b', 'c'];
let last = slice.take_last().unwrap();
assert_eq!(slice, &['a', 'b']);
assert_eq!(last, &'c');
```
#### pub fn take_last_mut<'a>(self: &mut &'a mut [T]) -> Option<&'a mut T>
🔬This is a nightly-only experimental API. (`slice_take`)Removes the last element of the slice and returns a mutable reference to it.
Returns `None` if the slice is empty.
##### Examples
```
#![feature(slice_take)]
let mut slice: &mut [_] = &mut ['a', 'b', 'c'];
let last = slice.take_last_mut().unwrap();
*last = 'd';
assert_eq!(slice, &['a', 'b']);
assert_eq!(last, &'d');
```
#### pub unsafe fn get_many_unchecked_mut<const N: usize>(
&mut self,
indices: [usize; N]
) -> [&mut T; N]
🔬This is a nightly-only experimental API. (`get_many_mut`)Returns mutable references to many indices at once, without doing any checks.
For a safe alternative see `get_many_mut`.
##### Safety
Calling this method with overlapping or out-of-bounds indices is *undefined behavior*
even if the resulting references are not used.
##### Examples
```
#![feature(get_many_mut)]
let x = &mut [1, 2, 4];
unsafe {
let [a, b] = x.get_many_unchecked_mut([0, 2]);
*a *= 10;
*b *= 100;
}
assert_eq!(x, &[10, 2, 400]);
```
#### pub fn get_many_mut<const N: usize>(
&mut self,
indices: [usize; N]
) -> Result<[&mut T; N], GetManyMutError<N>>
🔬This is a nightly-only experimental API. (`get_many_mut`)Returns mutable references to many indices at once.
Returns an error if any index is out-of-bounds, or if the same index was passed more than once.
##### Examples
```
#![feature(get_many_mut)]
let v = &mut [1, 2, 3];
if let Ok([a, b]) = v.get_many_mut([0, 2]) {
*a = 413;
*b = 612;
}
assert_eq!(v, &[413, 2, 612]);
```
#### pub fn flatten(&self) -> &[T]
🔬This is a nightly-only experimental API. (`slice_flatten`)Takes a `&[[T; N]]`, and flattens it to a `&[T]`.
##### Panics
This panics if the length of the resulting slice would overflow a `usize`.
This is only possible when flattening a slice of arrays of zero-sized types, and thus tends to be irrelevant in practice. If
`size_of::<T>() > 0`, this will never panic.
##### Examples
```
#![feature(slice_flatten)]
assert_eq!([[1, 2, 3], [4, 5, 6]].flatten(), &[1, 2, 3, 4, 5, 6]);
assert_eq!(
[[1, 2, 3], [4, 5, 6]].flatten(),
[[1, 2], [3, 4], [5, 6]].flatten(),
);
let slice_of_empty_arrays: &[[i32; 0]] = &[[], [], [], [], []];
assert!(slice_of_empty_arrays.flatten().is_empty());
let empty_slice_of_arrays: &[[u32; 10]] = &[];
assert!(empty_slice_of_arrays.flatten().is_empty());
```
#### pub fn flatten_mut(&mut self) -> &mut [T]
🔬This is a nightly-only experimental API. (`slice_flatten`)Takes a `&mut [[T; N]]`, and flattens it to a `&mut [T]`.
##### Panics
This panics if the length of the resulting slice would overflow a `usize`.
This is only possible when flattening a slice of arrays of zero-sized types, and thus tends to be irrelevant in practice. If
`size_of::<T>() > 0`, this will never panic.
##### Examples
```
#![feature(slice_flatten)]
fn add_5_to_all(slice: &mut [i32]) {
for i in slice {
*i += 5;
}
}
let mut array = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
add_5_to_all(array.flatten_mut());
assert_eq!(array, [[6, 7, 8], [9, 10, 11], [12, 13, 14]]);
```
#### pub fn sort_floats(&mut self)
🔬This is a nightly-only experimental API. (`sort_floats`)Sorts the slice of floats.
This sort is in-place (i.e. does not allocate), *O*(*n* * log(*n*)) worst-case, and uses the ordering defined by `f32::total_cmp`.
##### Current implementation
This uses the same sorting algorithm as `sort_unstable_by`.
##### Examples
```
#![feature(sort_floats)]
let mut v = [2.6, -5e-8, f32::NAN, 8.29, f32::INFINITY, -1.0, 0.0, -f32::INFINITY, -0.0];
v.sort_floats();
let sorted = [-f32::INFINITY, -1.0, -5e-8, -0.0, 0.0, 2.6, 8.29, f32::INFINITY, f32::NAN];
assert_eq!(&v[..8], &sorted[..8]);
assert!(v[8].is_nan());
```
1.23.0 · source#### pub fn is_ascii(&self) -> bool
Checks if all bytes in this slice are within the ASCII range.
1.23.0 · source#### pub fn eq_ignore_ascii_case(&self, other: &[u8]) -> bool
Checks that two slices are an ASCII case-insensitive match.
Same as `to_ascii_lowercase(a) == to_ascii_lowercase(b)`,
but without allocating and copying temporaries.
1.23.0 · source#### pub fn make_ascii_uppercase(&mut self)
Converts this slice to its ASCII upper case equivalent in-place.
ASCII letters ‘a’ to ‘z’ are mapped to ‘A’ to ‘Z’,
but non-ASCII letters are unchanged.
To return a new uppercased value without modifying the existing one, use
`to_ascii_uppercase`.
1.23.0 · source#### pub fn make_ascii_lowercase(&mut self)
Converts this slice to its ASCII lower case equivalent in-place.
ASCII letters ‘A’ to ‘Z’ are mapped to ‘a’ to ‘z’,
but non-ASCII letters are unchanged.
To return a new lowercased value without modifying the existing one, use
`to_ascii_lowercase`.
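As a small sketch tying the ASCII helpers above together (stable methods only; the greeting bytes are just an example):
```
let mut greeting = *b"Hello, Trillium!";
assert!(greeting.is_ascii());

// Case-insensitive comparison without allocating temporaries.
assert!(greeting.eq_ignore_ascii_case(b"HELLO, TRILLIUM!"));

// In-place case conversion leaves non-ASCII bytes untouched.
greeting.make_ascii_uppercase();
assert_eq!(&greeting, b"HELLO, TRILLIUM!");
greeting.make_ascii_lowercase();
assert_eq!(&greeting, b"hello, trillium!");
```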
1.60.0 · source#### pub fn escape_ascii(&self) -> EscapeAscii<'_>
Returns an iterator that produces an escaped version of this slice,
treating it as an ASCII string.
##### Examples
```
let s = b"0\t\r\n'\"\\\x9d";
let escaped = s.escape_ascii().to_string();
assert_eq!(escaped, "0\\t\\r\\n\\'\\\"\\\\\\x9d");
```
#### pub fn trim_ascii_start(&self) -> &[u8]
🔬This is a nightly-only experimental API. (`byte_slice_trim_ascii`)Returns a byte slice with leading ASCII whitespace bytes removed.
‘Whitespace’ refers to the definition used by
`u8::is_ascii_whitespace`.
##### Examples
```
#![feature(byte_slice_trim_ascii)]
assert_eq!(b" \t hello world\n".trim_ascii_start(), b"hello world\n");
assert_eq!(b" ".trim_ascii_start(), b"");
assert_eq!(b"".trim_ascii_start(), b"");
```
#### pub fn trim_ascii_end(&self) -> &[u8]
🔬This is a nightly-only experimental API. (`byte_slice_trim_ascii`)Returns a byte slice with trailing ASCII whitespace bytes removed.
‘Whitespace’ refers to the definition used by
`u8::is_ascii_whitespace`.
##### Examples
```
#![feature(byte_slice_trim_ascii)]
assert_eq!(b"\r hello world\n ".trim_ascii_end(), b"\r hello world");
assert_eq!(b" ".trim_ascii_end(), b"");
assert_eq!(b"".trim_ascii_end(), b"");
```
#### pub fn trim_ascii(&self) -> &[u8]
🔬This is a nightly-only experimental API. (`byte_slice_trim_ascii`)Returns a byte slice with leading and trailing ASCII whitespace bytes removed.
‘Whitespace’ refers to the definition used by
`u8::is_ascii_whitespace`.
##### Examples
```
#![feature(byte_slice_trim_ascii)]
assert_eq!(b"\r hello world\n ".trim_ascii(), b"hello world");
assert_eq!(b" ".trim_ascii(), b"");
assert_eq!(b"".trim_ascii(), b"");
```
1.23.0 · source#### pub fn to_ascii_uppercase(&self) -> Vec<u8, Global>
Returns a vector containing a copy of this slice where each byte is mapped to its ASCII upper case equivalent.
ASCII letters ‘a’ to ‘z’ are mapped to ‘A’ to ‘Z’,
but non-ASCII letters are unchanged.
To uppercase the value in-place, use `make_ascii_uppercase`.
1.23.0 · source#### pub fn to_ascii_lowercase(&self) -> Vec<u8, Global>
Returns a vector containing a copy of this slice where each byte is mapped to its ASCII lower case equivalent.
ASCII letters ‘A’ to ‘Z’ are mapped to ‘a’ to ‘z’,
but non-ASCII letters are unchanged.
To lowercase the value in-place, use `make_ascii_lowercase`.
1.0.0 · source#### pub fn sort(&mut self)where
T: Ord,
Sorts the slice.
This sort is stable (i.e., does not reorder equal elements) and *O*(*n* * log(*n*)) worst-case.
When applicable, unstable sorting is preferred because it is generally faster than stable sorting and it doesn’t allocate auxiliary memory.
See `sort_unstable`.
##### Current implementation
The current algorithm is an adaptive, iterative merge sort inspired by timsort.
It is designed to be very fast in cases where the slice is nearly sorted, or consists of two or more sorted sequences concatenated one after another.
Also, it allocates temporary storage half the size of `self`, but for short slices a non-allocating insertion sort is used instead.
##### Examples
```
let mut v = [-5, 4, 1, -3, 2];
v.sort();
assert!(v == [-5, -3, 1, 2, 4]);
```
1.0.0 · source#### pub fn sort_by<F>(&mut self, compare: F)where
F: FnMut(&T, &T) -> Ordering,
Sorts the slice with a comparator function.
This sort is stable (i.e., does not reorder equal elements) and *O*(*n* * log(*n*)) worst-case.
The comparator function must define a total ordering for the elements in the slice. If the ordering is not total, the order of the elements is unspecified. An order is a total order if it is (for all `a`, `b` and `c`):
* total and antisymmetric: exactly one of `a < b`, `a == b` or `a > b` is true, and
* transitive, `a < b` and `b < c` implies `a < c`. The same must hold for both `==` and `>`.
For example, while `f64` doesn’t implement `Ord` because `NaN != NaN`, we can use
`partial_cmp` as our sort function when we know the slice doesn’t contain a `NaN`.
```
let mut floats = [5f64, 4.0, 1.0, 3.0, 2.0];
floats.sort_by(|a, b| a.partial_cmp(b).unwrap());
assert_eq!(floats, [1.0, 2.0, 3.0, 4.0, 5.0]);
```
When applicable, unstable sorting is preferred because it is generally faster than stable sorting and it doesn’t allocate auxiliary memory.
See `sort_unstable_by`.
##### Current implementation
The current algorithm is an adaptive, iterative merge sort inspired by timsort.
It is designed to be very fast in cases where the slice is nearly sorted, or consists of two or more sorted sequences concatenated one after another.
Also, it allocates temporary storage half the size of `self`, but for short slices a non-allocating insertion sort is used instead.
##### Examples
```
let mut v = [5, 4, 1, 3, 2];
v.sort_by(|a, b| a.cmp(b));
assert!(v == [1, 2, 3, 4, 5]);
// reverse sorting
v.sort_by(|a, b| b.cmp(a));
assert!(v == [5, 4, 3, 2, 1]);
```
1.7.0 · source#### pub fn sort_by_key<K, F>(&mut self, f: F)where
F: FnMut(&T) -> K,
K: Ord,
Sorts the slice with a key extraction function.
This sort is stable (i.e., does not reorder equal elements) and *O*(*m* * *n* * log(*n*))
worst-case, where the key function is *O*(*m*).
For expensive key functions (e.g. functions that are not simple property accesses or basic operations), `sort_by_cached_key` is likely to be significantly faster, as it does not recompute element keys.
When applicable, unstable sorting is preferred because it is generally faster than stable sorting and it doesn’t allocate auxiliary memory.
See `sort_unstable_by_key`.
##### Current implementation
The current algorithm is an adaptive, iterative merge sort inspired by timsort.
It is designed to be very fast in cases where the slice is nearly sorted, or consists of two or more sorted sequences concatenated one after another.
Also, it allocates temporary storage half the size of `self`, but for short slices a non-allocating insertion sort is used instead.
##### Examples
```
let mut v = [-5i32, 4, 1, -3, 2];
v.sort_by_key(|k| k.abs());
assert!(v == [1, 2, -3, 4, -5]);
```
1.34.0 · source#### pub fn sort_by_cached_key<K, F>(&mut self, f: F)where
F: FnMut(&T) -> K,
K: Ord,
Sorts the slice with a key extraction function.
During sorting, the key function is called at most once per element, by using temporary storage to remember the results of key evaluation.
The order of calls to the key function is unspecified and may change in future versions of the standard library.
This sort is stable (i.e., does not reorder equal elements) and *O*(*m* * *n* + *n* * log(*n*))
worst-case, where the key function is *O*(*m*).
For simple key functions (e.g., functions that are property accesses or basic operations), `sort_by_key` is likely to be faster.
##### Current implementation
The current algorithm is based on pattern-defeating quicksort by Orson Peters,
which combines the fast average case of randomized quicksort with the fast worst case of heapsort, while achieving linear time on slices with certain patterns. It uses some randomization to avoid degenerate cases, but with a fixed seed to always provide deterministic behavior.
In the worst case, the algorithm allocates temporary storage in a `Vec<(K, usize)>` the length of the slice.
##### Examples
```
let mut v = [-5i32, 4, 32, -3, 2];
v.sort_by_cached_key(|k| k.to_string());
assert!(v == [-3, -5, 2, 32, 4]);
```
1.0.0 · source#### pub fn to_vec(&self) -> Vec<T, Global>where
T: Clone,
Copies `self` into a new `Vec`.
##### Examples
```
let s = [10, 40, 30];
let x = s.to_vec();
// Here, `s` and `x` can be modified independently.
```
#### pub fn to_vec_in<A>(&self, alloc: A) -> Vec<T, A>where
A: Allocator,
T: Clone,
🔬This is a nightly-only experimental API. (`allocator_api`)Copies `self` into a new `Vec` with an allocator.
##### Examples
```
#![feature(allocator_api)]
use std::alloc::System;
let s = [10, 40, 30];
let x = s.to_vec_in(System);
// Here, `s` and `x` can be modified independently.
```
1.40.0 · source#### pub fn repeat(&self, n: usize) -> Vec<T, Global>where
T: Copy,
Creates a vector by copying a slice `n` times.
##### Panics
This function will panic if the capacity would overflow.
##### Examples
Basic usage:
```
assert_eq!([1, 2].repeat(3), vec![1, 2, 1, 2, 1, 2]);
```
A panic upon overflow:
```
// this will panic at runtime
b"0123456789abcdef".repeat(usize::MAX);
```
1.0.0 · source#### pub fn concat<Item>(&self) -> <[T] as Concat<Item>>::Output where
[T]: Concat<Item>,
Item: ?Sized,
Flattens a slice of `T` into a single value `Self::Output`.
##### Examples
```
assert_eq!(["hello", "world"].concat(), "helloworld");
assert_eq!([[1, 2], [3, 4]].concat(), [1, 2, 3, 4]);
```
1.3.0 · source#### pub fn join<Separator>(
&self,
sep: Separator
) -> <[T] as Join<Separator>>::Output where
[T]: Join<Separator>,
Flattens a slice of `T` into a single value `Self::Output`, placing a given separator between each.
##### Examples
```
assert_eq!(["hello", "world"].join(" "), "hello world");
assert_eq!([[1, 2], [3, 4]].join(&0), [1, 2, 0, 3, 4]);
assert_eq!([[1, 2], [3, 4]].join(&[0, 0][..]), [1, 2, 0, 0, 3, 4]);
```
1.0.0 · source#### pub fn connect<Separator>(
&self,
sep: Separator
) -> <[T] as Join<Separator>>::Output where
[T]: Join<Separator>,
👎Deprecated since 1.3.0: renamed to `join`
Flattens a slice of `T` into a single value `Self::Output`, placing a given separator between each.
##### Examples
```
assert_eq!(["hello", "world"].connect(" "), "hello world");
assert_eq!([[1, 2], [3, 4]].connect(&0), [1, 2, 0, 3, 4]);
```
Trait Implementations
---
### impl Clone for HeaderValues
#### fn clone(&self) -> HeaderValues
Returns a copy of the value.
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for HeaderValues
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Default for HeaderValues
#### fn default() -> HeaderValues
Returns the “default value” for a type.
### impl Deref for HeaderValues
#### type Target = [HeaderValue]
The resulting type after dereferencing.
#### fn deref(&self) -> &<HeaderValues as Deref>::Target
Dereferences the value.
### impl DerefMut for HeaderValues
#### fn deref_mut(&mut self) -> &mut <HeaderValues as Deref>::Target
Mutably dereferences the value.
### impl From<&'static [u8]> for HeaderValues
#### fn from(value: &'static [u8]) -> HeaderValues
Converts to this type from the input type.
### impl From<&'static str> for HeaderValues
#### fn from(value: &'static str) -> HeaderValues
Converts to this type from the input type.
### impl From<Cow<'static, str>> for HeaderValues
#### fn from(value: Cow<'static, str>) -> HeaderValues
Converts to this type from the input type.
### impl From<HeaderValue> for HeaderValues
#### fn from(v: HeaderValue) -> HeaderValues
Converts to this type from the input type.
### impl From<String> for HeaderValues
#### fn from(value: String) -> HeaderValues
Converts to this type from the input type.
### impl<HV> From<Vec<HV, Global>> for HeaderValues where
HV: Into<HeaderValue>,
#### fn from(v: Vec<HV, Global>) -> HeaderValues
Converts to this type from the input type.
### impl From<Vec<u8, Global>> for HeaderValues
#### fn from(value: Vec<u8, Global>) -> HeaderValues
Converts to this type from the input type.
### impl<I> FromIterator<I> for HeaderValues where
I: Into<HeaderValue>,
#### fn from_iter<T>(iter: T) -> HeaderValues where
T: IntoIterator<Item = I>,
Creates a value from an iterator.
### impl<'a> IntoIterator for &'a HeaderValues
#### type Item = &'a HeaderValue
The type of the elements being iterated over.
#### type IntoIter = Iter<'a, HeaderValue>
Which kind of iterator are we turning this into?
#### fn into_iter(self) -> <&'a HeaderValues as IntoIterator>::IntoIter
Creates an iterator from a value.
### impl IntoIterator for HeaderValues
#### type Item = HeaderValue
The type of the elements being iterated over.
#### type IntoIter = IntoIter<[HeaderValue; 1]>
Which kind of iterator are we turning this into?
#### fn into_iter(self) -> <HeaderValues as IntoIterator>::IntoIter
Creates an iterator from a value.
### impl PartialEq<[&str]> for HeaderValues
#### fn eq(&self, other: &[&str]) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialEq<HeaderValues> for HeaderValues
#### fn eq(&self, other: &HeaderValues) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Eq for HeaderValues
### impl StructuralEq for HeaderValues
### impl StructuralPartialEq for HeaderValues
Auto Trait Implementations
---
### impl RefUnwindSafe for HeaderValues
### impl Send for HeaderValues
### impl Sync for HeaderValues
### impl Unpin for HeaderValues
### impl UnwindSafe for HeaderValues
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct trillium::Headers
===
```
pub struct Headers { /* private fields */ }
```
Trillium’s header map type
Implementations
---
### impl Headers
#### pub fn with_capacity(capacity: usize) -> Headers
Construct a new Headers, expecting to see at least this many known headers.
#### pub fn new() -> Headers
Construct a new headers with a default capacity of 15 known headers
#### pub fn reserve(&mut self, additional: usize)
Extend the capacity of the known headers map by this many
#### pub fn iter(&self) -> Iter<'_>
Return an iterator over borrowed header names and header values. First yields the known headers and then the unknown headers, if any.
#### pub fn is_empty(&self) -> bool
Are there zero headers?
#### pub fn len(&self) -> usize
How many unique `HeaderName` have been added to these `Headers`?
Note that each header name may have more than one `HeaderValue`.
#### pub fn append(
&mut self,
name: impl Into<HeaderName<'static>>,
value: impl Into<HeaderValues>
)
add the header value or header values into this header map. If there is already a header with the same name, the new values will be added to the existing ones. To replace any existing values, use `Headers::insert`
#### pub fn append_all(&mut self, other: Headers)
A slightly more efficient way to combine two `Headers` than using `Extend`
#### pub fn insert(
&mut self,
name: impl Into<HeaderName<'static>>,
value: impl Into<HeaderValues>
)
Add a header value or header values into this header map. If a header already exists with the same name, it will be replaced. To combine, see `Headers::append`
#### pub fn try_insert(
&mut self,
name: impl Into<HeaderName<'static>>,
value: impl Into<HeaderValues>
)
Add a header value or header values into this header map if and only if there is not already a header with the same name.
#### pub fn get_str<'a>(&self, name: impl Into<HeaderName<'a>>) -> Option<&str>
Retrieves a &str header value if there is at least one header in the map with this name. If there are several headers with the same name, this follows the behavior defined at
`HeaderValues::one`. Returns None if there is no header with the provided header name.
#### pub fn get<'a>(&self, name: impl Into<HeaderName<'a>>) -> Option<&HeaderValue>
Retrieves a singular header value from this header map. If there are several headers with the same name, this follows the behavior defined at `HeaderValues::one`. Returns None if there is no header with the provided header name
#### pub fn remove<'a>(
&mut self,
name: impl Into<HeaderName<'a>>
) -> Option<HeaderValues>
Takes all headers with the provided header name out of this header map and returns them. Returns None if the header did not have an entry in this map.
#### pub fn get_values<'a>(
&self,
name: impl Into<HeaderName<'a>>
) -> Option<&HeaderValues>
Retrieves a reference to all header values with the provided header name. If you expect there to be only one value, use
`Headers::get`.
#### pub fn has_header<'a>(&self, name: impl Into<HeaderName<'a>>) -> bool
Predicate function to check whether this header map contains the provided header name. If you are using this to conditionally insert a value, consider using
`Headers::try_insert` instead.
#### pub fn eq_ignore_ascii_case<'a>(
&'a self,
name: impl Into<HeaderName<'a>>,
needle: &str
) -> bool
Convenience function to check whether the value contained in this header map for the provided name is ascii-case-insensitively equal to the provided comparison
&str. Returns false if there is no value for the name
#### pub fn contains_ignore_ascii_case<'a>(
&self,
name: impl Into<HeaderName<'a>>,
needle: &str
) -> bool
Convenience function to check whether the value contained in this header map for the provided name contains the provided needle, ascii-case-insensitively. Prefer testing against a lowercase needle, as the implementation currently has to allocate if the needle is not already lowercase.
#### pub fn with_inserted_header(
self,
name: impl Into<HeaderName<'static>>,
values: impl Into<HeaderValues>
) -> Headers
Chainable method to insert a header
#### pub fn with_appended_header(
self,
name: impl Into<HeaderName<'static>>,
values: impl Into<HeaderValues>
) -> Headers
Chainable method to append a header
#### pub fn without_header(self, name: impl Into<HeaderName<'static>>) -> Headers
Chainable method to remove a header
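As a minimal sketch of how these methods combine, relying on the `Into<HeaderName>` and `Into<HeaderValues>` conversions for `&'static str` (the header names and values here are only illustrative):
```
use trillium::Headers;

let mut headers = Headers::new();

// `append` accumulates values under one name ...
headers.append("server", "trillium");
headers.append("server", "example");

// ... while `insert` replaces any existing values,
headers.insert("content-type", "text/plain");
// and `try_insert` is a no-op because "content-type" is already present.
headers.try_insert("content-type", "application/json");

assert_eq!(headers.get_str("content-type"), Some("text/plain"));
assert!(headers.has_header("server"));
assert_eq!(headers.len(), 2); // two unique header names
```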
Trait Implementations
---
### impl Clone for Headers
#### fn clone(&self) -> Headers
Returns a copy of the value.
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Headers
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Default for Headers
#### fn default() -> Headers
Returns the “default value” for a type.
### impl Display for Headers
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
HN: Into<HeaderName<'static>>,
HV: Into<HeaderValues>,
#### fn extend<T>(&mut self, iter: T)where
T: IntoIterator<Item = (HN, HV)>,
Extends a collection with the contents of an iterator.
🔬This is a nightly-only experimental API. (`extend_one`)Extends a collection with exactly one element.
#### fn extend_reserve(&mut self, additional: usize)
🔬This is a nightly-only experimental API. (`extend_one`)Reserves capacity in a collection for the given number of additional elements.
HN: Into<HeaderName<'static>>,
HV: Into<HeaderValues>,
#### fn from_iter<T>(iter: T) -> Headerswhere
T: IntoIterator<Item = (HN, HV)>,
Creates a value from an iterator.
### impl Handler for Headers
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Executes this handler, performing any modifications to the Conn that are desired.
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
_info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
Performs one-time async set up on a mutable borrow of the Handler before the server starts accepting requests. This allows a Handler to be defined in synchronous code but perform async setup such as establishing a database connection or fetching some state from an external source. This is optional,
and chances are high that you do not need this.
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Performs any final modifications to this conn after all handlers have been run. Although this is a slight deviation from the simple conn->conn->conn chain represented by most Handlers, it provides an easy way for libraries to effectively inject a second handler into a response chain. This is useful for loggers that need to record information both before and after other handlers have run,
as well as database transaction handlers and similar library code.
predicate function answering the question of whether this Handler would like to take ownership of the negotiated Upgrade. If this returns true, you must implement `Handler::upgrade`. The first handler that responds true to this will receive ownership of the
`trillium::Upgrade` in a subsequent call to `Handler::upgrade`
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
_upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
This will only be called if the handler responds true to
`Handler::has_upgrade` and will only be called once for this upgrade. There is no return value, and this function takes exclusive ownership of the underlying transport once this is called. You can downcast the transport to whatever the source transport type is and perform any non-http protocol communication that has been negotiated. You probably don’t want this unless you’re implementing something like websockets. Please note that for many transports such as TcpStreams, dropping the transport
(and therefore the Upgrade) will hang up / disconnect.
#### fn name(&self) -> Cow<'static, str>
Customize the name of your handler. This is used in Debug implementations. The default is the type name of this handler.
### impl<'a> IntoIterator for &'a Headers
#### type Item = (HeaderName<'a>, &'a HeaderValues)
The type of the elements being iterated over.
#### type IntoIter = Iter<'a>
Which kind of iterator are we turning this into?
#### fn into_iter(self) -> <&'a Headers as IntoIterator>::IntoIter
Creates an iterator from a value.
### impl IntoIterator for Headers
#### type Item = (HeaderName<'static>, HeaderValues)
The type of the elements being iterated over.
#### type IntoIter = IntoIter
Which kind of iterator are we turning this into?
#### fn into_iter(self) -> <Headers as IntoIterator>::IntoIter
Creates an iterator from a value.
### impl PartialEq<Headers> for Headers
#### fn eq(&self, other: &Headers) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Eq for Headers
### impl StructuralEq for Headers
### impl StructuralPartialEq for Headers
Auto Trait Implementations
---
### impl RefUnwindSafe for Headers
### impl Send for Headers
### impl Sync for Headers
### impl Unpin for Headers
### impl UnwindSafe for Headers
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct trillium::Info
===
```
pub struct Info { /* private fields */ }
```
This struct represents information about the currently connected server.
It is passed to `Handler::init` and the `Init` handler.
Implementations
---
### impl Info
#### pub fn server_description(&self) -> &str
Returns a user-displayable description of the server. This might be a string like “trillium x.y.z (trillium-tokio x.y.z)” or “my special application”.
#### pub fn listener_description(&self) -> &str
Returns a user-displayable string description of the location or port the listener is bound to, potentially as a url. Do not rely on the format of this string, as it will vary between server implementations and is intended for user display. Instead, use `Info::tcp_socket_addr` for any processing.
#### pub const fn tcp_socket_addr(&self) -> Option<&SocketAddr>
Returns the `local_addr` of a bound tcp listener, if such a thing exists for this server
#### pub fn server_description_mut(&mut self) -> &mut String
obtain a mutable borrow of the server description, suitable for appending information or replacing it
#### pub fn listener_description_mut(&mut self) -> &mut String
obtain a mutable borrow of the listener description, suitable for appending information or replacing it
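A minimal sketch of the accessors above, constructing the `Info` from a plain description via the `From<&str>` impl listed under the trait implementations (outside of a real server, so no TCP listener is assumed to be attached):
```
use trillium::Info;

let mut info = Info::from("my special application");

// An Info built from a plain description has no bound TCP listener.
assert!(info.tcp_socket_addr().is_none());

// A handler's `init` can append details to the user-facing description.
info.server_description_mut().push_str(" (warmed up)");
println!("{}", info.server_description());
```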
Trait Implementations
---
### impl Clone for Info
#### fn clone(&self) -> Info
Returns a copy of the value.
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Self
Returns the “default value” for a type.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl From<&str> for Info
#### fn from(description: &str) -> Self
Converts to this type from the input type.
### impl From<SocketAddr> for Info
#### fn from(socket_addr: SocketAddr) -> Self
Converts to this type from the input type.
### impl From<SocketAddr> for Info
#### fn from(s: SocketAddr) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl RefUnwindSafe for Info
### impl Send for Info
### impl Sync for Info
### impl Unpin for Info
### impl UnwindSafe for Info
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct trillium::Init
===
```
pub struct Init<T>(_);
```
Provides support for asynchronous initialization of a handler after the server is started.
```
use trillium::{Conn, State, Init};
#[derive(Debug, Clone)]
struct MyDatabaseConnection(String);
impl MyDatabaseConnection {
async fn connect(uri: String) -> std::io::Result<Self> {
Ok(Self(uri))
}
async fn query(&mut self, query: &str) -> String {
format!("you queried `{}` against {}", query, &self.0)
}
}
let mut handler = (
Init::new(|_| async {
let url = std::env::var("DATABASE_URL").unwrap();
let db = MyDatabaseConnection::connect(url).await.unwrap();
State::new(db)
}),
|mut conn: Conn| async move {
let db = conn.state_mut::<MyDatabaseConnection>().unwrap();
let response = db.query("select * from users limit 1").await;
conn.ok(response)
}
);
std::env::set_var("DATABASE_URL", "db://db");
use trillium_testing::prelude::*;
init(&mut handler);
assert_ok!(
get("/").on(&handler),
"you queried `select * from users limit 1` against db://db"
);
```
Because () is the noop handler, this can also be used to perform one-time set up:
```
use trillium::{Init, Conn};
let mut handler = (
Init::new(|info| async move { log::info!("{}", info); }),
|conn: Conn| async move { conn.ok("ok!") }
);
use trillium_testing::prelude::*;
init(&mut handler);
assert_ok!(get("/").on(&handler), "ok!");
```
Implementations
---
### impl<T: Handler> Init<T>
#### pub fn new<F, Fut>(init: F) -> Self where
F: FnOnce(Info) -> Fut + Send + Sync + 'static,
Fut: Future<Output = T> + Send + 'static,
Constructs a new Init handler with an async function that returns the handler post-initialization. The async function receives `Info` for the current server.
Trait Implementations
---
### impl<T: Handler> Debug for Init<T>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<T: Handler> Handler for Init<T>
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Executes this handler, performing any modifications to the Conn that are desired.
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
Performs one-time async set up on a mutable borrow of the Handler before the server starts accepting requests. This allows a Handler to be defined in synchronous code but perform async setup such as establishing a database connection or fetching some state from an external source. This is optional,
and chances are high that you do not need this.
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Performs any final modifications to this conn after all handlers have been run. Although this is a slight deviation from the simple conn->conn->conn chain represented by most Handlers, it provides an easy way for libraries to effectively inject a second handler into a response chain. This is useful for loggers that need to record information both before and after other handlers have run,
as well as database transaction handlers and similar library code.
predicate function answering the question of whether this Handler would like to take ownership of the negotiated Upgrade. If this returns true, you must implement `Handler::upgrade`. The first handler that responds true to this will receive ownership of the
`trillium::Upgrade` in a subsequent call to `Handler::upgrade`
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
This will only be called if the handler responds true to
`Handler::has_upgrade` and will only be called once for this upgrade. There is no return value, and this function takes exclusive ownership of the underlying transport once this is called. You can downcast the transport to whatever the source transport type is and perform any non-http protocol communication that has been negotiated. You probably don’t want this unless you’re implementing something like websockets. Please note that for many transports such as TcpStreams, dropping the transport
(and therefore the Upgrade) will hang up / disconnect.
#### fn name(&self) -> Cow<'static, str>
Customize the name of your handler. This is used in Debug implementations. The default is the type name of this handler.
Auto Trait Implementations
---
### impl<T> !RefUnwindSafe for Init<T>
### impl<T> Send for Init<T>where
T: Send,
### impl<T> Sync for Init<T>where
T: Sync,
### impl<T> Unpin for Init<T>where
T: Unpin,
### impl<T> !UnwindSafe for Init<T>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct trillium::State
===
```
pub struct State<T>(_);
```
A handler for sharing state across an application.
---
State is a handler that puts a clone of any `Clone + Send + Sync + 'static` type into every conn’s state map.
```
use std::sync::{atomic::{AtomicBool, Ordering}, Arc};
use trillium::{Conn, State};
use trillium_testing::prelude::*;
#[derive(Clone, Default)] // Clone is mandatory struct MyFeatureFlag(Arc<AtomicBool>);
impl MyFeatureFlag {
pub fn is_enabled(&self) -> bool {
self.0.load(Ordering::Relaxed)
}
pub fn toggle(&self) {
self.0.fetch_xor(true, Ordering::Relaxed);
}
}
let feature_flag = MyFeatureFlag::default();
let handler = (
State::new(feature_flag.clone()),
|conn: Conn| async move {
if conn.state::<MyFeatureFlag>().unwrap().is_enabled() {
conn.ok("feature enabled")
} else {
conn.ok("not enabled")
}
}
);
assert!(!feature_flag.is_enabled());
assert_ok!(get("/").on(&handler), "not enabled");
assert_ok!(get("/").on(&handler), "not enabled");
feature_flag.toggle();
assert!(feature_flag.is_enabled());
assert_ok!(get("/").on(&handler), "feature enabled");
assert_ok!(get("/").on(&handler), "feature enabled");
```
Please note that as with the above contrived example, if your state needs to be mutable, you need to choose your own interior mutability with whatever cross thread synchronization mechanisms are appropriate for your application. There will be one clone of the contained T type in memory for each http connection, and any locks should be held as briefly as possible so as to minimize impact on other conns.
**Stability note:** This is a common enough pattern that it currently exists in the public api, but may be removed at some point for simplicity.
Implementations
---
### impl<T> State<T>where
T: Clone + Send + Sync + 'static,
#### pub fn new(t: T) -> Self
Constructs a new State handler from any `Clone` + `Send` + `Sync` +
`'static`
Trait Implementations
---
### impl<T: Debug> Debug for State<T>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
T: Default + Clone + Send + Sync + 'static,
#### fn default() -> Self
Returns the “default value” for a type.
### impl<T> Handler for State<T>where
T: Clone + Send + Sync + 'static,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Executes this handler, performing any modifications to the Conn that are desired.
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
_info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
Performs one-time async set up on a mutable borrow of the Handler before the server starts accepting requests. This allows a Handler to be defined in synchronous code but perform async setup such as establishing a database connection or fetching some state from an external source. This is optional,
and chances are high that you do not need this.
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Performs any final modifications to this conn after all handlers have been run. Although this is a slight deviation from the simple conn->conn->conn chain represented by most Handlers, it provides an easy way for libraries to effectively inject a second handler into a response chain. This is useful for loggers that need to record information both before and after other handlers have run,
as well as database transaction handlers and similar library code.
predicate function answering the question of whether this Handler would like to take ownership of the negotiated Upgrade. If this returns true, you must implement `Handler::upgrade`. The first handler that responds true to this will receive ownership of the
`trillium::Upgrade` in a subsequent call to `Handler::upgrade`
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
_upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
This will only be called if the handler responds true to
`Handler::has_upgrade` and will only be called once for this upgrade. There is no return value, and this function takes exclusive ownership of the underlying transport once this is called. You can downcast the transport to whatever the source transport type is and perform any non-http protocol communication that has been negotiated. You probably don’t want this unless you’re implementing something like websockets. Please note that for many transports such as TcpStreams, dropping the transport
(and therefore the Upgrade) will hang up / disconnect.#### fn name(&self) -> Cow<'static, strCustomize the name of your handler. This is used in Debug implementations. The default is the type name of this handler.Auto Trait Implementations
---
### impl<T> RefUnwindSafe for State<T>where
T: RefUnwindSafe,
### impl<T> Send for State<T>where
T: Send,
### impl<T> Sync for State<T>where
T: Sync,
### impl<T> Unpin for State<T>where
T: Unpin,
### impl<T> UnwindSafe for State<T>where
T: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct trillium::StateSet
===
```
pub struct StateSet(_);
```
Store and retrieve values by
`TypeId`. This allows storing arbitrary data that implements `Sync + Send + 'static`.
Implementations
---
### impl StateSet
#### pub fn new() -> StateSet
Create an empty `StateSet`.
#### pub fn insert<T>(&mut self, val: T) -> Option<T>where
T: Send + Sync + 'static,
Insert a value into this `StateSet`.
If a value of this type already exists, it will be returned.
#### pub fn contains<T>(&self) -> boolwhere
T: 'static,
Check if container contains value for type
#### pub fn get<T>(&self) -> Option<&T>where
T: 'static,
Get a reference to a value previously inserted on this `StateSet`.
#### pub fn get_mut<T>(&mut self) -> Option<&mut T>where
T: 'static,
Get a mutable reference to a value previously inserted on this `StateSet`.
#### pub fn take<T>(&mut self) -> Option<T>where
T: 'static,
Remove a value from this `StateSet`.
If a value of this type exists, it will be returned.
#### pub fn get_or_insert<T>(&mut self, default: T) -> &mut Twhere
T: Send + Sync + 'static,
Gets a value from this `StateSet` or populates it with the provided default.
#### pub fn get_or_insert_with<F, T>(&mut self, default: F) -> &mut Twhere
F: FnOnce() -> T,
T: Send + Sync + 'static,
Gets a value from this `StateSet` or populates it with the provided default function.
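A hedged sketch of these methods follows; the stored types and values are illustrative assumptions.
```
use trillium::StateSet;
// Any Send + Sync + 'static type can be stored; values are keyed by TypeId.
#[derive(Debug, PartialEq)]
struct RequestId(u64);
let mut set = StateSet::new();
assert!(set.insert(RequestId(1)).is_none());
assert!(set.contains::<RequestId>());
assert_eq!(set.get::<RequestId>(), Some(&RequestId(1)));
// get_or_insert_with only evaluates the closure when no value of that type is present.
let count: &mut u64 = set.get_or_insert_with(|| 0u64);
*count += 1;
assert_eq!(set.take::<u64>(), Some(1));
```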
Trait Implementations
---
### impl AsMut<StateSet> for Conn
#### fn as_mut(&mut self) -> &mut StateSet
Converts this type into a mutable reference of the (usually inferred) input type.### impl AsRef<StateSet> for Conn
#### fn as_ref(&self) -> &StateSet
Converts this type into a shared reference of the (usually inferred) input type.### impl Debug for StateSet
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter.
#### fn default() -> StateSet
Returns the “default value” for a type. Read moreAuto Trait Implementations
---
### impl !RefUnwindSafe for StateSet
### impl Send for StateSet
### impl Sync for StateSet
### impl Unpin for StateSet
### impl !UnwindSafe for StateSet
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Enum trillium::Error
===
```
#[non_exhaustive]
pub enum Error {
Io(Error),
UnexpectedUriFormat,
HeaderMissing(&'static str),
RequestPathMissing,
Closed,
Httparse(Error),
TryFromIntError(TryFromIntError),
PartialHead,
MalformedHeader(Cow<'static, str>),
UnsupportedVersion(u8),
UnrecognizedMethod(String),
MissingMethod,
MissingStatusCode,
UnrecognizedStatusCode(u16),
MissingVersion,
EncodingError(Utf8Error),
UnexpectedHeader(&'static str),
HeadersTooLong,
}
```
Concrete errors that occur within trillium’s http implementation
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants (see the sketch after the variant list below).
### Io(Error)
`std::io::Error`
### UnexpectedUriFormat
this error describes a malformed request with a path that does not start with / or http:// or https://
### HeaderMissing(&'static str)
the relevant http protocol expected this header, but it was not provided
### RequestPathMissing
this error describes a request that does not specify a path
### Closed
connection was closed
### Httparse(Error)
[`httparse::Error`]
### TryFromIntError(TryFromIntError)
`TryFromIntError`
### PartialHead
an incomplete http head
### MalformedHeader(Cow<'static, str>)
we were unable to parse a header
### UnsupportedVersion(u8)
async-h1 doesn’t speak this http version; this error is deprecated
### UnrecognizedMethod(String)
we were unable to parse this http method
### MissingMethod
this request did not have a method
### MissingStatusCode
this response did not have a status code
### UnrecognizedStatusCode(u16)
we were unable to recognize this status code
### MissingVersion
this request did not have a version, but we expected one; this error is deprecated
### EncodingError(Utf8Error)
we expected utf8, but there was an encoding error
### UnexpectedHeader(&'static str)
we received a header that does not make sense in context
### HeadersTooLong
for security reasons, we do not allow request headers beyond 8kb.
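As a minimal sketch of the wildcard-arm requirement noted above (the match arms and messages are illustrative; the wildcard arm is mandatory because the enum is `#[non_exhaustive]`):
```
use trillium::Error;
// Because Error is #[non_exhaustive], a wildcard arm is required even if
// every variant known today were listed explicitly.
fn describe(error: &Error) -> String {
    match error {
        Error::Io(io) => format!("i/o error: {io}"),
        Error::HeadersTooLong => "request headers exceeded the limit".into(),
        Error::MalformedHeader(name) => format!("could not parse header {name}"),
        _ => format!("other http error: {error}"),
    }
}
```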
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter.
### impl Display for Error
#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Error for Error
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type based access to context intended for error reports.
#### fn from(source: Error) -> Error
Converts to this type from the input type.### impl From<Error> for Error
#### fn from(source: Error) -> Error
Converts to this type from the input type.### impl From<TryFromIntError> for Error
#### fn from(source: TryFromIntError) -> Error
Converts to this type from the input type.### impl From<Utf8Error> for Error
#### fn from(source: Utf8Error) -> Error
Converts to this type from the input type.Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<E> Provider for Ewhere
E: Error + ?Sized,
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`)Data providers should implement this method to provide *all* values they are able to provide by using `demand`.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Enum trillium::Method
===
```
#[non_exhaustive]
pub enum Method {
Acl,
BaselineControl,
Bind,
Checkin,
Checkout,
Connect,
Copy,
Delete,
Get,
Head,
Label,
Link,
Lock,
Merge,
MkActivity,
MkCalendar,
MkCol,
MkRedirectRef,
MkWorkspace,
Move,
Options,
OrderPatch,
Patch,
Post,
Pri,
PropFind,
PropPatch,
Put,
Rebind,
Report,
Search,
Trace,
Unbind,
Uncheckout,
Unlink,
Unlock,
Update,
UpdateRedirectRef,
VersionControl,
}
```
HTTP request methods.
See also Mozilla’s documentation, the RFC7231, Section 4 and IANA’s Hypertext Transfer Protocol (HTTP) Method Registry.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### Acl
The ACL method modifies the access control list (which can be read via the DAV:acl property) of a resource.
See RFC3744, Section 8.1.
### BaselineControl
A collection can be placed under baseline control with a BASELINE-CONTROL request.
See RFC3253, Section 12.6.
### Bind
The BIND method modifies the collection identified by the Request- URI, by adding a new binding from the segment specified in the BIND body to the resource identified in the BIND body.
See RFC5842, Section 4.
### Checkin
A CHECKIN request can be applied to a checked-out version-controlled resource to produce a new version whose content and dead properties are copied from the checked-out resource.
See RFC3253, Section 4.4 and RFC3253, Section 9.4.
### Checkout
A CHECKOUT request can be applied to a checked-in version-controlled resource to allow modifications to the content and dead properties of that version-controlled resource.
See RFC3253, Section 4.3 and RFC3253, Section 8.8.
### Connect
The CONNECT method requests that the recipient establish a tunnel to the destination origin server identified by the request-target and, if successful, thereafter restrict its behavior to blind forwarding of packets, in both directions, until the tunnel is closed.
See RFC7231, Section 4.3.6.
### Copy
The COPY method creates a duplicate of the source resource identified by the Request-URI,
in the destination resource identified by the URI in the Destination header.
See RFC4918, Section 9.8.
### Delete
The DELETE method requests that the origin server remove the association between the target resource and its current functionality.
See RFC7231, Section 4.3.5.
### Get
The GET method requests transfer of a current selected representation for the target resource.
See RFC7231, Section 4.3.1.
### Head
The HEAD method is identical to GET except that the server MUST NOT send a message body in the response.
See RFC7231, Section 4.3.2.
### Label
A LABEL request can be applied to a version to modify the labels that select that version.
See RFC3253, Section 8.2.
### Link
The LINK method establishes one or more Link relationships between the existing resource identified by the Request-URI and other existing resources.
See RFC2068, Section 19.6.1.2.
### Lock
The LOCK method is used to take out a lock of any access type and to refresh an existing lock.
See RFC4918, Section 9.10.
### Merge
The MERGE method performs the logical merge of a specified version (the “merge source”)
into a specified version-controlled resource (the “merge target”).
See RFC3253, Section 11.2.
### MkActivity
A MKACTIVITY request creates a new activity resource.
See RFC3253, Section 13.5.
### MkCalendar
An HTTP request using the MKCALENDAR method creates a new calendar collection resource.
See RFC4791, Section 5.3.1 and RFC8144, Section 2.3.
### MkCol
MKCOL creates a new collection resource at the location specified by the Request-URI.
See RFC4918, Section 9.3, RFC5689, Section 3 and RFC8144, Section 2.3.
### MkRedirectRef
The MKREDIRECTREF method requests the creation of a redirect reference resource.
See RFC4437, Section 6.
### MkWorkspace
A MKWORKSPACE request creates a new workspace resource.
See RFC3253, Section 6.3.
### Move
The MOVE operation on a non-collection resource is the logical equivalent of a copy (COPY),
followed by consistency maintenance processing, followed by a delete of the source, where all three actions are performed in a single operation.
See RFC4918, Section 9.9.
### Options
The OPTIONS method requests information about the communication options available for the target resource, at either the origin server or an intervening intermediary.
See RFC7231, Section 4.3.7.
### OrderPatch
The ORDERPATCH method is used to change the ordering semantics of a collection, to change the order of the collection’s members in the ordering, or both.
See RFC3648, Section 7.
### Patch
The PATCH method requests that a set of changes described in the request entity be applied to the resource identified by the Request- URI.
See RFC5789, Section 2.
### Post
The POST method requests that the target resource process the representation enclosed in the request according to the resource’s own specific semantics.
For example, POST is used for the following functions (among others):
* Providing a block of data, such as the fields entered into an HTML form, to a data-handling process;
* Posting a message to a bulletin board, newsgroup, mailing list, blog, or similar group of articles;
* Creating a new resource that has yet to be identified by the origin server; and
* Appending data to a resource’s existing representation(s).
See RFC7231, Section 4.3.3.
### Pri
This method is never used by an actual client. This method will appear to be used when an HTTP/1.1 server or intermediary attempts to parse an HTTP/2 connection preface.
See RFC7540, Section 3.5 and RFC7540, Section 11.6
### PropFind
The PROPFIND method retrieves properties defined on the resource identified by the Request-URI.
See RFC4918, Section 9.1 and RFC8144, Section 2.1.
### PropPatch
The PROPPATCH method processes instructions specified in the request body to set and/or remove properties defined on the resource identified by the Request-URI.
See RFC4918, Section 9.2 and RFC8144, Section 2.2.
### Put
The PUT method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload.
See RFC7231, Section 4.3.4.
### Rebind
The REBIND method removes a binding to a resource from a collection, and adds a binding to that resource into the collection identified by the Request-URI.
See RFC5842, Section 6.
### Report
A REPORT request is an extensible mechanism for obtaining information about a resource.
See RFC3253, Section 3.6 and RFC8144, Section 2.1.
### Search
The client invokes the SEARCH method to initiate a server-side search. The body of the request defines the query.
See RFC5323, Section 2.
### Trace
The TRACE method requests a remote, application-level loop-back of the request message.
See RFC7231, Section 4.3.8.
### Unbind
The UNBIND method modifies the collection identified by the Request- URI by removing the binding identified by the segment specified in the UNBIND body.
See RFC5842, Section 5.
### Uncheckout
An UNCHECKOUT request can be applied to a checked-out version-controlled resource to cancel the CHECKOUT and restore the pre-CHECKOUT state of the version-controlled resource.
See RFC3253, Section 4.5.
### Unlink
The UNLINK method removes one or more Link relationships from the existing resource identified by the Request-URI.
See RFC2068, Section 19.6.1.3.
### Unlock
The UNLOCK method removes the lock identified by the lock token in the Lock-Token request header.
See RFC4918, Section 9.11.
### Update
The UPDATE method modifies the content and dead properties of a checked-in version-controlled resource (the “update target”) to be those of a specified version (the
“update source”) from the version history of that version-controlled resource.
See RFC3253, Section 7.1.
### UpdateRedirectRef
The UPDATEREDIRECTREF method requests the update of a redirect reference resource.
See RFC4437, Section 7.
### VersionControl
A VERSION-CONTROL request can be used to create a version-controlled resource at the request-URL.
See RFC3253, Section 3.5.
Implementations
---
### impl Method
#### pub const fn is_safe(&self) -> bool
Predicate that returns whether the method is “safe.”
> Request methods are considered “safe” if their defined semantics are
> essentially read-only; i.e., the client does not request, and does
> not expect, any state change on the origin server as a result of
> applying a safe method to a target resource.
– RFC7231, Section 4.2.1
#### pub const fn is_idempotent(&self) -> bool
predicate that returns whether this method is considered “idempotent”.
> A request method is considered “idempotent” if the intended effect on
> the server of multiple identical requests with that method is the
> same as the effect for a single such request.
– RFC7231, Section 4.2.2
#### pub const fn as_str(&self) -> &'static str
returns the static str representation of this method
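A short hedged sketch of these helpers (the parse goes through the `FromStr` implementation listed under Trait Implementations below; the chosen methods are illustrative):
```
use trillium::Method;
let get: Method = "GET".parse().expect("GET is a recognized method");
assert_eq!(get, Method::Get);
assert_eq!(get.as_str(), "GET");
// GET is both safe and idempotent; POST is neither.
assert!(get.is_safe());
assert!(get.is_idempotent());
assert!(!Method::Post.is_safe());
assert!(!Method::Post.is_idempotent());
```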
Trait Implementations
---
### impl AsRef<str> for Method
#### fn as_ref(&self) -> &str
Converts this type into a shared reference of the (usually inferred) input type.### impl Clone for Method
#### fn clone(&self) -> Method
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter.
#### type Err = Error
The associated error which can be returned from parsing.#### fn from_str(s: &str) -> Result<Method, <Method as FromStr>::ErrParses a string `s` to return a value of this type.
#### fn hash<__H>(&self, state: &mut __H)where
__H: Hasher,
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn cmp(&self, other: &Method) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &Method) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<Method> for Method
#### fn partial_cmp(&self, other: &Method) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### type Error = Error
The type returned in the event of a conversion error.#### fn try_from(
value: &'a str
) -> Result<Method, <Method as TryFrom<&'a str>>::ErrorPerforms the conversion.### impl Copy for Method
### impl Eq for Method
### impl StructuralEq for Method
### impl StructuralPartialEq for Method
Auto Trait Implementations
---
### impl RefUnwindSafe for Method
### impl Send for Method
### impl Sync for Method
### impl Unpin for Method
### impl UnwindSafe for Method
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Enum trillium::Status
===
```
pub enum Status {
Continue,
SwitchingProtocols,
EarlyHints,
Ok,
Created,
Accepted,
NonAuthoritativeInformation,
NoContent,
ResetContent,
PartialContent,
MultiStatus,
ImUsed,
MultipleChoice,
MovedPermanently,
Found,
SeeOther,
NotModified,
TemporaryRedirect,
PermanentRedirect,
BadRequest,
Unauthorized,
PaymentRequired,
Forbidden,
NotFound,
MethodNotAllowed,
NotAcceptable,
ProxyAuthenticationRequired,
RequestTimeout,
Conflict,
Gone,
LengthRequired,
PreconditionFailed,
PayloadTooLarge,
UriTooLong,
UnsupportedMediaType,
RequestedRangeNotSatisfiable,
ExpectationFailed,
ImATeapot,
MisdirectedRequest,
UnprocessableEntity,
Locked,
FailedDependency,
TooEarly,
UpgradeRequired,
PreconditionRequired,
TooManyRequests,
RequestHeaderFieldsTooLarge,
UnavailableForLegalReasons,
InternalServerError,
NotImplemented,
BadGateway,
ServiceUnavailable,
GatewayTimeout,
HttpVersionNotSupported,
VariantAlsoNegotiates,
InsufficientStorage,
LoopDetected,
NotExtended,
NetworkAuthenticationRequired,
}
```
HTTP response status codes.
As defined by rfc7231 section 6.
Read more
Variants
---
### Continue
100 Continue
This interim response indicates that everything so far is OK and that the client should continue the request, or ignore the response if the request is already finished.
### SwitchingProtocols
101 Switching Protocols
This code is sent in response to an Upgrade request header from the client, and indicates the protocol the server is switching to.
### EarlyHints
103 Early Hints
This status code is primarily intended to be used with the Link header,
letting the user agent start preloading resources while the server prepares a response.
### Ok
200 Ok
The request has succeeded
### Created
201 Created
The request has succeeded and a new resource has been created as a result. This is typically the response sent after POST requests, or some PUT requests.
### Accepted
202 Accepted
The request has been received but not yet acted upon. It is noncommittal, since there is no way in HTTP to later send an asynchronous response indicating the outcome of the request. It is intended for cases where another process or server handles the request,
or for batch processing.
### NonAuthoritativeInformation
203 Non Authoritative Information
This response code means the returned meta-information is not exactly the same as is available from the origin server, but is collected from a local or a third-party copy. This is mostly used for mirrors or backups of another resource. Except for that specific case, the
“200 OK” response is preferred to this status.
### NoContent
204 No Content
There is no content to send for this request, but the headers may be useful. The user-agent may update its cached headers for this resource with the new ones.
### ResetContent
205 Reset Content
Tells the user-agent to reset the document which sent this request.
### PartialContent
206 Partial Content
This response code is used when the Range header is sent from the client to request only part of a resource.
### MultiStatus
207 Multi-Status
A Multi-Status response conveys information about multiple resources in situations where multiple status codes might be appropriate.
### ImUsed
226 Im Used
The server has fulfilled a GET request for the resource, and the response is a representation of the result of one or more instance-manipulations applied to the current instance.
### MultipleChoice
300 Multiple Choice
The request has more than one possible response. The user-agent or user should choose one of them. (There is no standardized way of choosing one of the responses, but HTML links to the possibilities are recommended so the user can pick.)
### MovedPermanently
301 Moved Permanently
The URL of the requested resource has been changed permanently. The new URL is given in the response.
### Found
302 Found
This response code means that the URI of requested resource has been changed temporarily. Further changes in the URI might be made in the future. Therefore, this same URI should be used by the client in future requests.
### SeeOther
303 See Other
The server sent this response to direct the client to get the requested resource at another URI with a GET request.
### NotModified
304 Not Modified
This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response.
### TemporaryRedirect
307 Temporary Redirect
The server sends this response to direct the client to get the requested resource at another URI with same method that was used in the prior request. This has the same semantics as the 302 Found HTTP response code, with the exception that the user agent must not change the HTTP method used: If a POST was used in the first request, a POST must be used in the second request.
### PermanentRedirect
308 Permanent Redirect
This means that the resource is now permanently located at another URI,
specified by the Location: HTTP Response header. This has the same semantics as the 301 Moved Permanently HTTP response code, with the exception that the user agent must not change the HTTP method used: If a POST was used in the first request, a POST must be used in the second request.
### BadRequest
400 Bad Request
The server could not understand the request due to invalid syntax.
### Unauthorized
401 Unauthorized
Although the HTTP standard specifies “unauthorized”, semantically this response means “unauthenticated”. That is, the client must authenticate itself to get the requested response.
### PaymentRequired
402 Payment Required
This response code is reserved for future use. The initial aim for creating this code was using it for digital payment systems, however this status code is used very rarely and no standard convention exists.
### Forbidden
403 Forbidden
The client does not have access rights to the content; that is, it is unauthorized, so the server is refusing to give the requested resource. Unlike 401, the client’s identity is known to the server.
### NotFound
404 Not Found
The server can not find requested resource. In the browser, this means the URL is not recognized. In an API, this can also mean that the endpoint is valid but the resource itself does not exist. Servers may also send this response instead of 403 to hide the existence of a resource from an unauthorized client. This response code is probably the most famous one due to its frequent occurrence on the web.
### MethodNotAllowed
405 Method Not Allowed
The request method is known by the server but has been disabled and cannot be used. For example, an API may forbid DELETE-ing a resource. The two mandatory methods, GET and HEAD, must never be disabled and should not return this error code.
### NotAcceptable
406 Not Acceptable
This response is sent when the web server, after performing server-driven content negotiation, doesn’t find any content that conforms to the criteria given by the user agent.
### ProxyAuthenticationRequired
407 Proxy Authentication Required
This is similar to 401 but authentication is needed to be done by a proxy.
### RequestTimeout
408 Request Timeout
This response is sent on an idle connection by some servers, even without any previous request by the client. It means that the server would like to shut down this unused connection. This response is used much more since some browsers, like Chrome, Firefox 27+,
or IE9, use HTTP pre-connection mechanisms to speed up surfing. Also note that some servers merely shut down the connection without sending this message.
### Conflict
409 Conflict
This response is sent when a request conflicts with the current state of the server.
### Gone
410 Gone
This response is sent when the requested content has been permanently deleted from server, with no forwarding address. Clients are expected to remove their caches and links to the resource. The HTTP specification intends this status code to be used for “limited-time,
promotional services”. APIs should not feel compelled to indicate resources that have been deleted with this status code.
### LengthRequired
411 Length Required
Server rejected the request because the Content-Length header field is not defined and the server requires it.
### PreconditionFailed
412 Precondition Failed
The client has indicated preconditions in its headers which the server does not meet.
### PayloadTooLarge
413 Payload Too Large
Request entity is larger than limits defined by server; the server might close the connection or return an Retry-After header field.
### UriTooLong
414 URI Too Long
The URI requested by the client is longer than the server is willing to interpret.
### UnsupportedMediaType
415 Unsupported Media Type
The media format of the requested data is not supported by the server,
so the server is rejecting the request.
### RequestedRangeNotSatisfiable
416 Requested Range Not Satisfiable
The range specified by the Range header field in the request can’t be fulfilled; it’s possible that the range is outside the size of the target URI’s data.
### ExpectationFailed
417 Expectation Failed
This response code means the expectation indicated by the Expect request header field can’t be met by the server.
### ImATeapot
418 I’m a teapot
The server refuses the attempt to brew coffee with a teapot.
### MisdirectedRequest
421 Misdirected Request
The request was directed at a server that is not able to produce a response. This can be sent by a server that is not configured to produce responses for the combination of scheme and authority that are included in the request URI.
### UnprocessableEntity
422 Unprocessable Entity
The request was well-formed but was unable to be followed due to semantic errors.
### Locked
423 Locked
The resource that is being accessed is locked.
### FailedDependency
424 Failed Dependency
The request failed because it depended on another request and that request failed (e.g., a PROPPATCH).
### TooEarly
425 Too Early
Indicates that the server is unwilling to risk processing a request that might be replayed.
### UpgradeRequired
426 Upgrade Required
The server refuses to perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol. The server sends an Upgrade header in a 426 response to indicate the required protocol(s).
### PreconditionRequired
428 Precondition Required
The origin server requires the request to be conditional. This response is intended to prevent the ‘lost update’ problem, where a client GETs a resource’s state, modifies it, and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict.
### TooManyRequests
429 Too Many Requests
The user has sent too many requests in a given amount of time (“rate limiting”).
### RequestHeaderFieldsTooLarge
431 Request Header Fields Too Large
The server is unwilling to process the request because its header fields are too large. The request may be resubmitted after reducing the size of the request header fields.
### UnavailableForLegalReasons
451 Unavailable For Legal Reasons
The user-agent requested a resource that cannot legally be provided,
such as a web page censored by a government.
### InternalServerError
500 Internal Server Error
The server has encountered a situation it doesn’t know how to handle.
### NotImplemented
501 Not Implemented
The request method is not supported by the server and cannot be handled.
The only methods that servers are required to support (and therefore that must not return this code) are GET and HEAD.
### BadGateway
502 Bad Gateway
This error response means that the server, while working as a gateway to get a response needed to handle the request, got an invalid response.
### ServiceUnavailable
503 Service Unavailable
The server is not ready to handle the request. Common causes are a server that is down for maintenance or that is overloaded. Note that together with this response, a user-friendly page explaining the problem should be sent. This response should be used for temporary conditions and the Retry-After: HTTP header should, if possible, contain the estimated time before the recovery of the service. The webmaster must also take care about the caching-related headers that are sent along with this response, as these temporary condition responses should usually not be cached.
### GatewayTimeout
504 Gateway Timeout
This error response is given when the server is acting as a gateway and cannot get a response in time.
### HttpVersionNotSupported
505 HTTP Version Not Supported
The HTTP version used in the request is not supported by the server.
### VariantAlsoNegotiates
506 Variant Also Negotiates
The server has an internal configuration error: the chosen variant resource is configured to engage in transparent content negotiation itself, and is therefore not a proper end point in the negotiation process.
### InsufficientStorage
507 Insufficient Storage
The server is unable to store the representation needed to complete the request.
### LoopDetected
508 Loop Detected
The server detected an infinite loop while processing the request.
### NotExtended
510 Not Extended
Further extensions to the request are required for the server to fulfil it.
### NetworkAuthenticationRequired
511 Network Authentication Required
The 511 status code indicates that the client needs to authenticate to gain network access.
Implementations
---
### impl Status
#### pub fn is_informational(&self) -> bool
Returns `true` if the status code is `1xx` range.
If this returns `true` it indicates that the request was received,
continuing process.
#### pub fn is_success(&self) -> bool
Returns `true` if the status code is the `2xx` range.
If this returns `true` it indicates that the request was successfully received, understood, and accepted.
#### pub fn is_redirection(&self) -> bool
Returns `true` if the status code is the `3xx` range.
If this returns `true` it indicates that further action needs to be taken in order to complete the request.
#### pub fn is_client_error(&self) -> bool
Returns `true` if the status code is the `4xx` range.
If this returns `true` it indicates that the request contains bad syntax or cannot be fulfilled.
#### pub fn is_server_error(&self) -> bool
Returns `true` if the status code is the `5xx` range.
If this returns `true` it indicates that the server failed to fulfill an apparently valid request.
#### pub fn canonical_reason(&self) -> &'static str
The canonical reason for a given status code
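A hedged sketch of these predicates and `canonical_reason` (the `TryFrom<u16>` and `PartialEq<u16>` conversions used here are listed under Trait Implementations below; the specific codes are illustrative):
```
use trillium::Status;
let status = Status::try_from(404u16).expect("404 maps to Status::NotFound");
assert_eq!(status, Status::NotFound);
assert!(status.is_client_error());
assert!(!status.is_success());
assert_eq!(status.canonical_reason(), "Not Found");
// Status also compares directly against u16.
assert_eq!(Status::Ok, 200u16);
```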
Trait Implementations
---
### impl Clone for Status
#### fn clone(&self) -> Status
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter.
### impl Handler for Status
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Executes this handler, performing any modifications to the Conn that are desired.
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
_info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
Performs one-time async set up on a mutable borrow of the Handler before the server starts accepting requests. This allows a Handler to be defined in synchronous code but perform async setup such as establishing a database connection or fetching some state from an external source. This is optional,
and chances are high that you do not need this.
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Performs any final modifications to this conn after all handlers have been run. Although this is a slight deviation from the simple conn->conn->conn chain represented by most Handlers, it provides an easy way for libraries to effectively inject a second handler into a response chain. This is useful for loggers that need to record information both before and after other handlers have run,
as well as database transaction handlers and similar library code.
#### fn has_upgrade(&self, _upgrade: &Upgrade) -> bool
predicate function answering the question of whether this Handler would like to take ownership of the negotiated Upgrade. If this returns true, you must implement `Handler::upgrade`. The first handler that responds true to this will receive ownership of the
`trillium::Upgrade` in a subsequent call to `Handler::upgrade`
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
_upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
This will only be called if the handler responds true to
`Handler::has_upgrade` and will only be called once for this upgrade. There is no return value, and this function takes exclusive ownership of the underlying transport once this is called. You can downcast the transport to whatever the source transport type is and perform any non-http protocol communication that has been negotiated. You probably don’t want this unless you’re implementing something like websockets. Please note that for many transports such as TcpStreams, dropping the transport
(and therefore the Upgrade) will hang up / disconnect.
#### fn name(&self) -> Cow<'static, str>
Customize the name of your handler. This is used in Debug implementations. The default is the type name of this handler.
### impl Hash for Status
#### fn hash<__H>(&self, state: &mut __H)where
__H: Hasher,
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &Status) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u16> for Status
#### fn eq(&self, other: &u16) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl TryFrom<u16> for Status
#### type Error = Error
The type returned in the event of a conversion error.#### fn try_from(num: u16) -> Result<Status, <Status as TryFrom<u16>>::ErrorPerforms the conversion.### impl Copy for Status
### impl Eq for Status
### impl StructuralEq for Status
### impl StructuralPartialEq for Status
Auto Trait Implementations
---
### impl RefUnwindSafe for Status
### impl Send for Status
### impl Sync for Status
### impl Unpin for Status
### impl UnwindSafe for Status
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Enum trillium::Version
===
```
#[non_exhaustive]
pub enum Version {
Http0_9,
Http1_0,
Http1_1,
Http2_0,
Http3_0,
}
```
The version of the HTTP protocol in use.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### Http0_9
HTTP/0.9
### Http1_0
HTTP/1.0
### Http1_1
HTTP/1.1
### Http2_0
HTTP/2.0
### Http3_0
HTTP/3.0
Implementations
---
### impl Version
#### pub const fn as_str(&self) -> &'static str
returns the http version as a static str, such as “HTTP/1.1”
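A small hedged sketch of the round trip between `Version` and its string form, assuming the `FromStr` implementation listed under Trait Implementations below accepts the same `"HTTP/1.1"` form that `as_str` produces:
```
use trillium::Version;
assert_eq!(Version::Http1_1.as_str(), "HTTP/1.1");
let parsed: Version = "HTTP/1.1".parse().expect("recognized http version");
assert_eq!(parsed, Version::Http1_1);
```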
Trait Implementations
---
### impl AsRef<[u8]> for Version
#### fn as_ref(&self) -> &[u8]
Converts this type into a shared reference of the (usually inferred) input type.### impl AsRef<str> for Version
#### fn as_ref(&self) -> &str
Converts this type into a shared reference of the (usually inferred) input type.### impl Clone for Version
#### fn clone(&self) -> Version
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter.
#### type Err = UnrecognizedVersion
The associated error which can be returned from parsing.#### fn from_str(s: &str) -> Result<Version, <Version as FromStr>::ErrParses a string `s` to return a value of this type.
#### fn cmp(&self, other: &Version) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &&Version) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Version> for &Version
#### fn eq(&self, other: &Version) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Version> for Version
#### fn eq(&self, other: &Version) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<Version> for Version
#### fn partial_cmp(&self, other: &Version) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
### impl Eq for Version
### impl StructuralEq for Version
### impl StructuralPartialEq for Version
Auto Trait Implementations
---
### impl RefUnwindSafe for Version
### impl Send for Version
### impl Sync for Version
### impl Unpin for Version
### impl UnwindSafe for Version
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Checks if this value is equivalent to the given key.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Trait trillium::Handler
===
```
pub trait Handler: Send + Sync + 'static {
// Required method
fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>
where Self: 'async_trait,
'life0: 'async_trait;
// Provided methods
fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
_info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>
where Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait { ... }
fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>
where Self: 'async_trait,
'life0: 'async_trait { ... }
fn has_upgrade(&self, _upgrade: &Upgrade) -> bool { ... }
fn upgrade<'life0, 'async_trait>(
&'life0 self,
_upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>
where Self: 'async_trait,
'life0: 'async_trait { ... }
fn name(&self) -> Cow<'static, str> { ... }
}
```
The building block for Trillium applications.
---
### Concept
Many other frameworks have a notion of `middleware` and `endpoints`,
in which the model is that a request passes through a router and then any number of middlewares, then a single endpoint that returns a response, and then passes a response back through the middleware stack.
Because a Trillium Conn represents both a request and response, there is no distinction between middleware and endpoints, as all of these can be modeled as `Fn(Conn) -> Future<Output = Conn>`.
### Implementing Handler
The simplest handler is an async closure or async fn that receives a Conn and returns a Conn, and Handler has a blanket implementation for any such Fn.
```
// as a closure
let handler = |conn: trillium::Conn| async move { conn.ok("trillium!") };
use trillium_testing::prelude::*;
assert_ok!(get("/").on(&handler), "trillium!");
```
```
// as an async function
async fn handler(conn: trillium::Conn) -> trillium::Conn {
conn.ok("trillium!")
}
use trillium_testing::prelude::*;
assert_ok!(get("/").on(&handler), "trillium!");
```
The simplest implementation of Handler for a named type looks like this:
```
pub struct MyHandler;
#[trillium::async_trait]
impl trillium::Handler for MyHandler {
async fn run(&self, conn: trillium::Conn) -> trillium::Conn {
conn
}
}
use trillium_testing::prelude::*;
assert_not_handled!(get("/").on(&MyHandler)); // we did not halt or set a body status
```
**Temporary Note:** Until rust has true async traits, implementing handler requires the use of the async_trait macro, which is reexported as `trillium::async_trait`.
### Full trait specification
Unfortunately, the async_trait macro results in the difficult-to-read documentation at the top of the page, so here is how the trait is actually defined in trillium code:
```
#[trillium::async_trait]
pub trait Handler: Send + Sync + 'static {
async fn run(&self, conn: Conn) -> Conn;
async fn init(&mut self, info: &mut Info); // optional
    async fn before_send(&self, conn: Conn) -> Conn; // optional
fn has_upgrade(&self, _upgrade: &Upgrade) -> bool; // optional
async fn upgrade(&self, _upgrade: Upgrade); // mandatory only if has_upgrade returns true
fn name(&self) -> Cow<'static, str>; // optional
}
```
See each of the function definitions below for advanced implementation.
For most application code and even trillium-packaged framework code,
`run` is the only trait function that needs to be implemented.
Required Methods
---
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Executes this handler, performing any modifications to the Conn that are desired.
Provided Methods
---
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
_info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
Performs one-time async set up on a mutable borrow of the Handler before the server starts accepting requests. This allows a Handler to be defined in synchronous code but perform async setup such as establishing a database connection or fetching some state from an external source. This is optional,
and chances are high that you do not need this.
It also receives a mutable borrow of the `Info` that represents the current connection.
**Stability note:** This may go away at some point. Please open an issue if you have a use case which requires it.
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Performs any final modifications to this conn after all handlers have been run. Although this is a slight deviation from the simple conn->conn->conn chain represented by most Handlers, it provides an easy way for libraries to effectively inject a second handler into a response chain. This is useful for loggers that need to record information both before and after other handlers have run,
as well as database transaction handlers and similar library code.
**❗IMPORTANT NOTE FOR LIBRARY AUTHORS:** Please note that this will run **whether or not the conn was halted before
`Handler::run` was called on a given conn**. This means that if you want to make your `before_send` callback conditional on whether `run` was called, you need to put a unit type into the conn’s state and check for that (see the sketch below).
stability note: I don’t love this for the exact reason that it breaks the simplicity of the conn->conn->conn model, but it is currently the best compromise between that simplicity and convenience for the application author, who should not have to add two Handlers to achieve an “around” effect.
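A hedged sketch of the unit-type marker technique described above; the `MyWrapper` handler and `RanMarker` struct are illustrative assumptions, and the sketch uses `Conn::with_state` and `Conn::state` to store and read the marker.
```
use trillium::{async_trait, Conn, Handler};
// Marker inserted during run() so before_send() can tell whether run()
// actually executed for this conn.
struct RanMarker;
struct MyWrapper;
#[async_trait]
impl Handler for MyWrapper {
    async fn run(&self, conn: Conn) -> Conn {
        conn.with_state(RanMarker)
    }
    async fn before_send(&self, conn: Conn) -> Conn {
        if conn.state::<RanMarker>().is_some() {
            // run() was called on this conn; perform the "after" half here.
        }
        conn
    }
}
```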
#### fn has_upgrade(&self, _upgrade: &Upgrade) -> bool
predicate function answering the question of whether this Handler would like to take ownership of the negotiated Upgrade. If this returns true, you must implement `Handler::upgrade`. The first handler that responds true to this will receive ownership of the
`trillium::Upgrade` in a subsequent call to `Handler::upgrade`
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
_upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
This will only be called if the handler responds true to
`Handler::has_upgrade` and will only be called once for this upgrade. There is no return value, and this function takes exclusive ownership of the underlying transport once this is called. You can downcast the transport to whatever the source transport type is and perform any non-http protocol communication that has been negotiated. You probably don’t want this unless you’re implementing something like websockets. Please note that for many transports such as TcpStreams, dropping the transport
(and therefore the Upgrade) will hang up / disconnect.
#### fn name(&self) -> Cow<'static, str>
Customize the name of your handler. This is used in Debug implementations. The default is the type name of this handler.
Trait Implementations
---
### impl Debug for Box<dyn Handler>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Handler for Box<dyn Handler>
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Executes this handler, performing any modifications to the Conn that are desired.
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
Performs one-time async set up on a mutable borrow of the Handler before the server starts accepting requests. This allows a Handler to be defined in synchronous code but perform async setup such as establishing a database connection or fetching some state from an external source. This is optional,
and chances are high that you do not need this.
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Performs any final modifications to this conn after all handlers have been run. Although this is a slight deviation from the simple conn->conn->conn chain represented by most Handlers, it provides an easy way for libraries to effectively inject a second handler into a response chain. This is useful for loggers that need to record information both before and after other handlers have run,
as well as database transaction handlers and similar library code.
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
predicate function answering the question of whether this Handler would like to take ownership of the negotiated Upgrade. If this returns true, you must implement `Handler::upgrade`. The first handler that responds true to this will receive ownership of the
`trillium::Upgrade` in a subsequent call to `Handler::upgrade`
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
This will only be called if the handler responds true to
`Handler::has_upgrade` and will only be called once for this upgrade. There is no return value, and this function takes exclusive ownership of the underlying transport once this is called. You can downcast the transport to whatever the source transport type is and perform any non-http protocol communication that has been negotiated. You probably don’t want this unless you’re implementing something like websockets. Please note that for many transports such as TcpStreams, dropping the transport
(and therefore the Upgrade) will hang up / disconnect.
Implementations on Foreign Types
---
### impl<A, B, C, D, E, F, G, H, I, J, K, L, M> Handler for (A, B, C, D, E, F, G, H, I, J, K, L, M)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
F: Handler,
G: Handler,
H: Handler,
I: Handler,
J: Handler,
K: Handler,
L: Handler,
M: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<H: Handler> Handler for Option<H>
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
### impl Handler for ()
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
### impl Handler for &'static str
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<A, B, C, D, E, F, G, H, I, J, K, L> Handler for (A, B, C, D, E, F, G, H, I, J, K, L)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
F: Handler,
G: Handler,
H: Handler,
I: Handler,
J: Handler,
K: Handler,
L: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl Handler for Box<dyn Handler>
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
### impl<A, B, C, D, E, F> Handler for (A, B, C, D, E, F)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
F: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<A, B, C> Handler for (A, B, C)where
A: Handler,
B: Handler,
C: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<A, B, C, D, E, F, G, H, I, J, K, L, M, N> Handler for (A, B, C, D, E, F, G, H, I, J, K, L, M, N)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
F: Handler,
G: Handler,
H: Handler,
I: Handler,
J: Handler,
K: Handler,
L: Handler,
M: Handler,
N: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl Handler for String
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<A, B> Handler for (A, B)where
A: Handler,
B: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<A, B, C, D, E, F, G> Handler for (A, B, C, D, E, F, G)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
F: Handler,
G: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<A, B, C, D, E, F, G, H> Handler for (A, B, C, D, E, F, G, H)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
F: Handler,
G: Handler,
H: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<A, B, C, D> Handler for (A, B, C, D)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<T, E> Handler for Result<T, E>where
T: Handler,
E: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
### impl<A, B, C, D, E, F, G, H, I, J> Handler for (A, B, C, D, E, F, G, H, I, J)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
F: Handler,
G: Handler,
H: Handler,
I: Handler,
J: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<A, B, C, D, E, F, G, H, I> Handler for (A, B, C, D, E, F, G, H, I)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
F: Handler,
G: Handler,
H: Handler,
I: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<A, B, C, D, E, F, G, H, I, J, K, L, M, N, O> Handler for (A, B, C, D, E, F, G, H, I, J, K, L, M, N, O)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
F: Handler,
G: Handler,
H: Handler,
I: Handler,
J: Handler,
K: Handler,
L: Handler,
M: Handler,
N: Handler,
O: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<A, B, C, D, E> Handler for (A, B, C, D, E)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<H: Handler> Handler for Vec<H>
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
### impl<A, B, C, D, E, F, G, H, I, J, K> Handler for (A, B, C, D, E, F, G, H, I, J, K)where
A: Handler,
B: Handler,
C: Handler,
D: Handler,
E: Handler,
F: Handler,
G: Handler,
H: Handler,
I: Handler,
J: Handler,
K: Handler,
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
### impl<H: Handler> Handler for Arc<H>
#### fn run<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn init<'life0, 'life1, 'async_trait>(
&'life0 mut self,
info: &'life1 mut Info
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
'life1: 'async_trait,
#### fn before_send<'life0, 'async_trait>(
&'life0 self,
conn: Conn
) -> Pin<Box<dyn Future<Output = Conn> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
#### fn name(&self) -> Cow<'static, str>
#### fn has_upgrade(&self, upgrade: &Upgrade) -> bool
#### fn upgrade<'life0, 'async_trait>(
&'life0 self,
upgrade: Upgrade
) -> Pin<Box<dyn Future<Output = ()> + Send + 'async_trait>>where
Self: 'async_trait,
'life0: 'async_trait,
Implementors
---
### impl Handler for Status
### impl Handler for Headers
### impl<Fun, Fut> Handler for Fun where
Fun: Fn(Conn) -> Fut + Send + Sync + 'static,
Fut: Future<Output = Conn> + Send + 'static,
### impl<T: Clone + Send + Sync + 'static> Handler for State<T>
### impl<T: Handler> Handler for Init<T>
Function trillium::init
===
```
pub fn init<T, F, Fut>(init: F) -> Init<T>where
F: FnOnce(Info) -> Fut + Send + Sync + 'static,
Fut: Future<Output = T> + Send + 'static,
T: Handler,
```
alias for `Init::new`
Function trillium::state
===
```
pub fn state<T: Clone + Send + Sync + 'static>(t: T) -> State<T>
```
Constructs a new `State` handler from any Clone + Send + Sync +
’static. Alias for `State::new`
Type Definition trillium::Upgrade
===
```
pub type Upgrade = Upgrade<BoxedTransport>;
```
An HTTP protocol upgrade
---
This exists to erase the generic transport for convenience using a
`BoxedTransport`. See
`Upgrade` for additional documentation
Waffle
===
Waffle [Sponsored by Evrone](https://evrone.com?utm_source=waffle)
===
[![Hex.pm Version][hex-img]](https://hex.pm/packages/waffle)
[![waffle documentation](http://img.shields.io/badge/hexdocs-documentation-brightgreen.svg)](https://hexdocs.pm/waffle)
<img align="right" width="176" height="120"
  alt="Waffle is a flexible file upload library for Elixir"
  src="https://elixir-waffle.github.io/waffle/assets/logo.svg">
Waffle is a flexible file upload library for Elixir with straightforward integrations for Amazon S3 and ImageMagick.
[Documentation](https://hexdocs.pm/waffle)
[installation](#module-installation)
Installation
---
Add the latest stable release to your `mix.exs` file, along with the required dependencies for [`ExAws`](https://hexdocs.pm/ex_aws/2.4.1/ExAws.html) if appropriate:
```
defp deps do
[
{:waffle, "~> 1.1"},
# If using S3:
{:ex_aws, "~> 2.1.2"},
{:ex_aws_s3, "~> 2.0"},
{:hackney, "~> 1.9"},
{:sweet_xml, "~> 0.6"}
]
end
```
Then run [`mix deps.get`](https://hexdocs.pm/mix/Mix.Tasks.Deps.Get.html) in your shell to fetch the dependencies.
[usage](#module-usage)
Usage
---
After installing Waffle, two more things need to be done:
1. setup a storage provider
2. define a definition module
###
[setup-a-storage-provider](#module-setup-a-storage-provider)
Setup a storage provider
Waffle has two built-in storage providers:
* [`Waffle.Storage.Local`](Waffle.Storage.Local.html)
* [`Waffle.Storage.S3`](Waffle.Storage.S3.html)
[Other available storage providers](#other-storage-providers)
are supported by the community.
An example for setting up [`Waffle.Storage.Local`](Waffle.Storage.Local.html):
```
config :waffle,
storage: Waffle.Storage.Local,
asset_host: "http://static.example.com" # or {:system, "ASSET_HOST"}
```
An example for setting up [`Waffle.Storage.S3`](Waffle.Storage.S3.html):
```
config :waffle,
storage: Waffle.Storage.S3,
bucket: "custom_bucket", # or {:system, "AWS_S3_BUCKET"}
asset_host: "http://static.example.com" # or {:system, "ASSET_HOST"}
config :ex_aws,
json_codec: Jason
# any configurations provided by https://github.com/ex-aws/ex_aws
```
###
[define-a-definition-module](#module-define-a-definition-module)
Define a definition module
Waffle requires a **definition module** which contains the relevant functions to store and retrieve files:
* Optional transformations of the uploaded file
* Where to put your files (the storage directory)
* How to name your files
* How to secure your files (private? Or publicly accessible?)
* Default placeholders
This module can be created manually or generated by [`mix waffle.g`](Mix.Tasks.Waffle.G.html)
automatically.
As an example, we will generate a definition module for handling avatars:
```
mix waffle.g avatar
```
This should generate a file at `lib/[APP_NAME]_web/uploaders/avatar.ex`.
Check this file for descriptions of configurable options.
[examples](#module-examples)
Examples
---
* [An example for Local storage driver](local.html)
* [An example for S3 storage driver](s3.html)
[usage-with-ecto](#module-usage-with-ecto)
Usage with Ecto
---
Waffle comes with a companion package for use with Ecto. If you intend to use Waffle with Ecto, it is highly recommended you also add the
[`waffle_ecto`](https://github.com/elixir-waffle/waffle_ecto)
dependency. Benefits include:
* Changeset integration
* Versioned urls for cache busting (`.../thumb.png?v=63601457477`)
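For illustration, a minimal sketch of wiring `waffle_ecto` into a schema. The `Avatar` and `User` module names here are assumptions, and the uploader must also call `use Waffle.Ecto.Definition`:
```
defmodule Avatar do
  use Waffle.Definition
  # adds the Avatar.Type Ecto type used in the schema below
  use Waffle.Ecto.Definition
end
defmodule User do
  use Ecto.Schema
  # imports cast_attachments/3 for the changeset
  use Waffle.Ecto.Schema
  import Ecto.Changeset
  schema "users" do
    field :name, :string
    field :avatar, Avatar.Type
  end
  def changeset(user, params \\ %{}) do
    user
    |> cast(params, [:name])
    |> cast_attachments(params, [:avatar])
  end
end
```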
[other-storage-providers](#module-other-storage-providers)
Other Storage Providers
---
* **Rackspace** - [arc_rackspace](https://github.com/lokalebasen/arc_rackspace)
* **Manta** - [arc_manta](https://github.com/onyxrev/arc_manta)
* **OVH** - [arc_ovh](https://github.com/stephenmoloney/arc_ovh)
* **Google Cloud Storage** - [waffle_gcs](https://github.com/kolorahl/waffle_gcs)
* **Microsoft Azure Storage** - [arc_azure](https://github.com/phil-a/arc_azure)
* **Aliyun OSS Storage** - [waffle_aliyun_oss](https://github.com/ug0/waffle_aliyun_oss)
[testing](#module-testing)
Testing
---
The basic test suite can be run without supplying any S3 information:
```
mix test
```
In order to test S3 capability, you must have access to an S3/equivalent bucket. For S3 buckets, the bucket must be configured to allow ACLs and it must allow public access.
The following environment variables will be used by the test suite:
* WAFFLE_TEST_BUCKET
* WAFFLE_TEST_BUCKET2
* WAFFLE_TEST_S3_KEY
* WAFFLE_TEST_S3_SECRET
* WAFFLE_TEST_REGION
After setting these variables, you can run the full test suite with `mix test --include s3:true`.
[attribution](#module-attribution)
Attribution
---
Great thanks to <NAME> (@stavro) for the original awesome work on the library.
This project is forked from [Arc](https://github.com/stavro/arc) from the version `v0.11.0`.
[license](#module-license)
License
---
Copyright 2019 <NAME> [<EMAIL>](mailto:<EMAIL>)
Copyright 2015 <NAME>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
```
http://www.apache.org/licenses/LICENSE-2.0
```
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Waffle.Actions.Delete
===
Delete files from a defined adapter.
After an object is stored through Waffle, you may optionally remove it. To remove a stored object, pass the same path identifier and scope from which you stored the object.
```
# Without a scope:
{:ok, original_filename} = Avatar.store("/Images/me.png")
:ok = Avatar.delete(original_filename)
# With a scope:
user = Repo.get!(User, 1)
{:ok, original_filename} = Avatar.store({"/Images/me.png", user})
# example 1
:ok = Avatar.delete({original_filename, user})
# example 2
user = Repo.get!(User, 1)
:ok = Avatar.delete({user.avatar, user})
```
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[delete(definition, filepath)](#delete/2)
[Link to this section](#functions)
Functions
===
Waffle.Actions.Store
===
Store files to a defined adapter.
The definition module responds to `Avatar.store/1` which accepts either:
* A path to a local file
* A path to a remote `http` or `https` file
* A map with a filename and path keys (eg, a `%Plug.Upload{}`)
* A map with a filename and binary keys (eg, `%{filename: "image.png", binary: <<255,255,255,...>>}`)
* A two-element tuple consisting of one of the above file formats as well as a scope map
Example usage as general file store:
```
# Store any locally accessible file
Avatar.store("/path/to/my/file.png") #=> {:ok, "file.png"}
# Store any remotely accessible file
Avatar.store("http://example.com/file.png") #=> {:ok, "file.png"}
# Store a file directly from a `%Plug.Upload{}`
Avatar.store(%Plug.Upload{filename: "file.png", path: "/a/b/c"}) #=> {:ok, "file.png"}
# Store a file from a connection body
{:ok, data, _conn} = Plug.Conn.read_body(conn)
Avatar.store(%{filename: "file.png", binary: data})
```
Example usage as a file attached to a `scope`:
```
scope = Repo.get(User, 1)
Avatar.store({%Plug.Upload{}, scope}) #=> {:ok, "file.png"}
```
This scope will be available throughout the definition module to be used as an input to the storage parameters (eg, store files in
`/uploads/#{scope.id}`).
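For illustration, a minimal sketch of feeding that scope into a storage path (assuming the scope struct has an `:id` field):
```
def storage_dir(_version, {_file, scope}) do
  # the scope passed to store/1 is available here
  "uploads/#{scope.id}"
end
```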
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[store(definition, filepath)](#store/2)
[Link to this section](#functions)
Functions
===
Waffle.Actions.Url
===
Url generation.
Saving your files is only the first half of any decent storage solution. Straightforward access to your uploaded files is equally as important as storing them in the first place.
Often times you will want to regain access to the stored files. As such, [`Waffle`](Waffle.html) facilitates the generation of urls.
```
# Given some user record
user = %{id: 1}
Avatar.store({%Plug.Upload{}, user}) #=> {:ok, "selfie.png"}
# To generate a regular, unsigned url (defaults to the first version):
Avatar.url({"selfie.png", user})
#=> "https://bucket.s3.amazonaws.com/uploads/1/original.png"
# To specify the version of the upload:
Avatar.url({"selfie.png", user}, :thumb)
#=> "https://bucket.s3.amazonaws.com/uploads/1/thumb.png"
# To generate a signed url:
Avatar.url({"selfie.png", user}, :thumb, signed: true)
#=> "https://bucket.s3.amazonaws.com/uploads/1/thumb.png?AWSAccessKeyId=<KEY>&Signature=5PzIbSgD1V2vPLj%2B4WLRSFQ5M%3D&Expires=1434395458"
# To generate urls for all versions:
Avatar.urls({"selfie.png", user})
#=> %{original: "https://.../original.png", thumb: "https://.../thumb.png"}
```
**Default url**
In cases where a placeholder image is desired when an uploaded file is not present, Waffle allows the definition of a default image to be returned gracefully when requested with a `nil` file.
```
def default_url(version) do
MyApp.Endpoint.url <> "/images/placeholders/profile_image.png"
end
Avatar.url(nil) #=> "http://example.com/images/placeholders/profile_image.png"
Avatar.url({nil, scope}) #=> "http://example.com/images/placeholders/profile_image.png"
```
**Virtual Host**
To support AWS regions other than US Standard, it may be required to generate urls in the
[`virtual_host`](http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html)
style. This will generate urls in the style:
`https://#{bucket}.s3.amazonaws.com` instead of
`https://s3.amazonaws.com/#{bucket}`.
To use this style of url generation, your bucket name must be DNS compliant.
This can be enabled with:
```
config :waffle,
virtual_host: true
```
> When using virtual hosted–style buckets with SSL, the SSL wild card certificate only matches buckets that do not contain periods. To work around this, use HTTP or write your own certificate verification logic.
**Asset Host**
You may optionally specify an asset host rather than using the default `bucket.s3.amazonaws.com` format.
In your application configuration, you'll need to provide an `asset_host` value:
```
config :waffle,
asset_host: "https://d3gav2egqolk5.cloudfront.net", # For a value known during compilation
asset_host: {:system, "ASSET_HOST"} # For a value not known until runtime
```
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[url(definition, file, version, options)](#url/4)
[Link to this section](#functions)
Functions
===
Waffle.Definition
===
Defines uploader to manage files.
```
defmodule Avatar do
use Waffle.Definition
end
```
Consists of several components that manage different parts of the file management process.
* [`Waffle.Definition.Versioning`](Waffle.Definition.Versioning.html)
* [`Waffle.Definition.Storage`](Waffle.Definition.Storage.html)
* [`Waffle.Actions.Store`](Waffle.Actions.Store.html)
* [`Waffle.Actions.Delete`](Waffle.Actions.Delete.html)
* [`Waffle.Actions.Url`](Waffle.Actions.Url.html)
Waffle.Definition.Storage
===
Uploader configuration.
Add `use Waffle.Definition` inside your module to use it as uploader.
[storage-directory](#module-storage-directory)
Storage directory
---
By default, the storage directory is `uploads`. But, it can be customized in two ways.
###
[by-setting-up-configuration](#module-by-setting-up-configuration)
By setting up configuration
Customize storage directory via configuration option `:storage_dir`.
```
config :waffle,
storage_dir: "my/dir"
```
###
[by-overriding-the-relevant-functions-in-definition-modules](#module-by-overriding-the-relevant-functions-in-definition-modules)
By overriding the relevant functions in definition modules
Every definition module has a default `storage_dir/2` which is overridable.
For example, a common pattern for user avatars is to store each user's uploaded images in a separate subdirectory based on primary key:
```
def storage_dir(version, {file, scope}) do
"uploads/users/avatars/#{scope.id}"
end
```
> **Note**: If you are "attaching" a file to a record on creation (eg, while inserting the record at the same time), then you cannot use the model's `id` as a path component. You must either (1) use a different storage path format, such as UUIDs, or (2) attach and update the model after an id has been given. [Read more about how to integrate it with Ecto](https://hexdocs.pm/waffle_ecto/filepath-with-id.html#content)
> **Note**: The storage directory is used for both local filestorage (as the relative or absolute directory), and S3 storage, as the path name (not including the bucket).
[asynchronous-file-uploading](#module-asynchronous-file-uploading)
Asynchronous File Uploading
---
If you specify multiple versions in your definition module, each version is processed and stored concurrently as independent Tasks.
To prevent an overconsumption of system resources, each Task is given a specified timeout to wait, after which the process will fail. By default, the timeout is `15_000` milliseconds.
If you wish to change the time allocated to version transformation and storage, you can add a configuration option:
```
config :waffle,
version_timeout: 15_000 # milliseconds
```
To disable asynchronous processing, add `@async false` to your definition module.
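For example, a minimal sketch of a definition module that opts out of asynchronous processing (the module name is illustrative):
```
defmodule Document do
  use Waffle.Definition

  # process and store all versions sequentially instead of in concurrent Tasks
  @async false
  @versions [:original, :thumb]
end
```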
[storage-of-files](#module-storage-of-files)
Storage of files
---
Waffle currently supports:
* [`Waffle.Storage.Local`](Waffle.Storage.Local.html)
* [`Waffle.Storage.S3`](Waffle.Storage.S3.html)
Override the `__storage` function in your definition module if you want to use a different type of storage for a particular uploader.
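For example, a minimal sketch of overriding `__storage` so that one uploader stores locally even when the global config points at S3 (the module name is illustrative):
```
defmodule Backup do
  use Waffle.Definition

  # this uploader ignores the globally configured storage backend
  def __storage, do: Waffle.Storage.Local
end
```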
[file-validation](#module-file-validation)
File Validation
---
While storing files on S3 eliminates some malicious attack vectors,
it is strongly encouraged to validate the extensions of uploaded files as well.
Waffle delegates validation to a `validate/1` function with a tuple of the file and scope. As an example, in order to validate that an uploaded file conforms to popular image formats, you can use:
```
defmodule Avatar do
use Waffle.Definition
@extension_whitelist ~w(.jpg .jpeg .gif .png)
def validate({file, _}) do
file_extension = file.file_name |> Path.extname() |> String.downcase()
case Enum.member?(@extension_whitelist, file_extension) do
true -> :ok
false -> {:error, "invalid file type"}
end
end
end
```
Validation will be considered successful if the function returns `true` or `:ok`.
A customized error message can be returned in the form of `{:error, message}`.
Any other return value will return `{:error, :invalid_file}` when passed through to `Avatar.store`.
[passing-custom-headers-when-downloading-from-remote-path](#module-passing-custom-headers-when-downloading-from-remote-path)
Passing custom headers when downloading from remote path
---
By default, when downloading files from a remote path, request headers are empty,
but if you wish to provide your own, you can override the `remote_file_headers/1`
function in your definition module. For example:
```
defmodule Avatar do
use Waffle.Definition
def remote_file_headers(%URI{host: "elixir-lang.org"}) do
credentials = Application.get_env(:my_app, :avatar_credentials)
token = Base.encode64(credentials[:username] <> ":" <> credentials[:password])
[{"Authorization", "Basic #{token}")}]
end end
```
This code would authenticate the request only for that specific domain. Otherwise, it would send empty request headers.
Waffle.Definition.Versioning
===
Define proper name for a version.
It may be undesirable to retain original filenames (eg, they may contain personally identifiable information, vulgarity,
vulnerabilities with Unicode characters, etc).
You may specify the destination filename for uploaded versions through your definition module.
A common pattern is to combine directories scoped to a particular model's primary key, along with static filenames. (eg:
`user_avatars/1/thumb.png`).
```
# To retain the original filename, but prefix the version and user id:
def filename(version, {file, scope}) do
file_name = Path.basename(file.file_name, Path.extname(file.file_name))
"#{scope.id}_#{version}_#{file_name}"
end
# To make the destination file the same as the version:
def filename(version, _), do: version
```
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[resolve_file_name(definition, version, arg)](#resolve_file_name/3)
[Link to this section](#functions)
Functions
===
Waffle.MissingExecutableError exception
===
Waffle.Processor
===
Apply transformation to files.
Waffle can be used to facilitate transformations of uploaded files via any system executable. Some common operations you may want to take on uploaded files include resizing an uploaded avatar with ImageMagick or extracting a still image from a video with FFmpeg.
To transform an image, the definition module must define a
`transform/2` function which accepts a version atom and a tuple consisting of the uploaded file and corresponding scope.
This transform handler accepts the version atom, as well as the file/scope argument, and is responsible for returning one of the following:
* `:noaction` - The original file will be stored as-is.
* `:skip` - Nothing will be stored for the provided version.
* `{executable, args}` - The `executable` will be called with
`System.cmd` with the format
`#{original_file_path} #{args} #{transformed_file_path}`.
* `{executable, fn(input, output) -> args end}` If your executable expects arguments in a format other than the above, you may supply a function to the conversion tuple which will be invoked to generate the arguments. The arguments can be returned as a string (e.g. – `" #{input} -strip -thumbnail 10x10 #{output}"`)
or a list (e.g. – `[input, "-strip", "-thumbnail", "10x10", output]`) for even more control.
* `{executable, args, output_extension}` - If your transformation changes the file extension (eg, converting to `png`), then the new file extension must be explicit.
* `fn version, file -> {:ok, file} end` - Implement a custom transformation as an Elixir function (a minimal sketch follows this list),
[read more about custom transformations](custom_transformation.html)
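As a minimal sketch of that last form, a no-op custom transformation implemented as an Elixir function might look like this (it simply passes the file through unchanged):
```
def transform(:original, _) do
  fn _version, file ->
    # a custom transformation receives the version and the Waffle file struct
    # and must return {:ok, file}
    {:ok, file}
  end
end
```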
[imagemagick-transformations](#module-imagemagick-transformations)
ImageMagick transformations
---
As images are one of the most commonly uploaded filetypes, Waffle has a recommended integration with ImageMagick's `convert` tool for manipulation of images. Each definition module may specify as many versions as desired, along with the corresponding transformation for each version.
The expected return value of a `transform` function call must either be `:noaction`, in which case the original file will be stored as-is, `:skip`, in which case nothing will be stored, or `{:convert, transformation}` in which the original file will be processed via ImageMagick's `convert` tool with the corresponding transformation parameters.
The following example stores the original file, as well as a squared 100x100 thumbnail version which is stripped of comments (eg, GPS coordinates):
```
defmodule Avatar do
use Waffle.Definition
@versions [:original, :thumb]
def transform(:thumb, _) do
{:convert, "-strip -thumbnail 100x100^ -gravity center -extent 100x100"}
end
end
```
Other examples:
```
# Change the file extension through ImageMagick's `format` parameter:
{:convert, "-strip -thumbnail 100x100^ -gravity center -extent 100x100 -format png", :png}
# Take the first frame of a gif and process it into a square jpg:
{:convert, fn(input, output) -> "#{input}[0] -strip -thumbnail 100x100^ -gravity center -extent 100x100 -format jpg #{output}" end, :jpg}
```
For more information on defining your transformation, please consult
[ImageMagick's convert documentation](http://www.imagemagick.org/script/convert.php).
> **Note**: Keep this transformation function simple and deterministic based on the version, file name, and scope object. The `transform` function is subsequently called during URL generation, and the transformation is scanned for the output file format. As such, if you conditionally format the image as a `png` or `jpg` depending on the time of day, you will be displeased with the result of Waffle's URL generation.
> **System Resources**: If you are accepting arbitrary uploads on a public site, it may be prudent to add system resource limits to prevent overloading your system resources from malicious or nefarious files. Since all processing is done directly in ImageMagick, you may pass in system resource restrictions through the [-limit](http://www.imagemagick.org/script/command-line-options.php#limit) flag. One such example might be: `-limit area 10MB -limit disk 100MB`.
[ffmpeg-transformations](#module-ffmpeg-transformations)
FFmpeg transformations
---
Common transformations of uploaded videos can be also defined through your definition module:
```
# To take a thumbnail from a video:
{:ffmpeg, fn(input, output) -> "-i #{input} -f jpg #{output}" end, :jpg}
# To convert a video to an animated gif
{:ffmpeg, fn(input, output) -> "-i #{input} -f gif #{output}" end, :gif}
```
[complex-transformations](#module-complex-transformations)
Complex Transformations
---
[`Waffle`](Waffle.html) requires the output of your transformation to be located at a predetermined path. However, the transformation may be done completely outside of [`Waffle`](Waffle.html). For fine-grained transformations,
you should create an executable wrapper in your $PATH (eg. bash script) which takes these proper arguments, runs your transformation, and then moves the file into the correct location.
For example, to use `soffice` to convert a doc to an html file, you should place the following bash script in your $PATH:
```
#!/usr/bin/env sh
# `soffice` doesn't allow for output file path option, and waffle can't find the
# temporary file to process and copy. This script has a similar argument list as
# what waffle expects. See https://github.com/stavro/arc/issues/77.
set -e
set -o pipefail
function convert {
soffice \
--headless \
--convert-to html \
--outdir $TMPDIR \
"$1"
}
function filter_new_file_name {
awk -F$TMPDIR '{print $2}' \
| awk -F" " '{print $1}' \
| awk -F/ '{print $2}'
}
converted_file_name=$(convert "$1" | filter_new_file_name)
cp $TMPDIR/$converted_file_name "$2"
rm $TMPDIR/$converted_file_name
```
And perform the transformation as such:
```
def transform(:html, _) do
{:soffice_wrapper, fn(input, output) -> [input, output] end, :html}
end
```
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[process(definition, version, arg)](#process/3)
[Link to this section](#functions)
Functions
===
Waffle.Storage.Local
===
Local storage provides the facility to store files locally.
[local-configuration](#module-local-configuration)
Local configuration
---
```
config :waffle,
storage: Waffle.Storage.Local,
# in order to have a different storage directory from url
storage_dir_prefix: "priv/waffle/private",
# add custom host to url
asset_host: "https://example.com"
```
If you want your Phoenix application to serve the attachments,
configure the endpoint accordingly.
```
defmodule AppWeb.Endpoint do
plug Plug.Static,
at: "/uploads",
from: Path.expand("./priv/waffle/public/uploads"),
gzip: false
end
```
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[delete(definition, version, file_and_scope)](#delete/3)
[put(definition, version, arg)](#put/3)
[url(definition, version, file_and_scope, options \\ [])](#url/4)
[Link to this section](#functions)
Functions
===
Waffle.Storage.S3
===
The module to facilitate integration with S3 through ExAws.S3
```
config :waffle,
storage: Waffle.Storage.S3,
bucket: {:system, "AWS_S3_BUCKET"}
```
Along with any configuration necessary for ExAws.
[ExAws](https://github.com/CargoSense/ex_aws) is used to support Amazon S3.
To store your attachments in Amazon S3, you'll need to provide a bucket destination in your application config:
```
config :waffle,
bucket: "uploads"
```
You may also set the bucket from an environment variable:
```
config :waffle,
bucket: {:system, "S3_BUCKET"}
```
In addition, ExAws must be configured with the appropriate Amazon S3 credentials.
ExAws has by default the following configuration (which you may override if you wish):
```
config :ex_aws,
json_codec: Jason,
access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}, :instance_role],
secret_access_key: [{:system, "AWS_SECRET_ACCESS_KEY"}, :instance_role]
```
This means it will first look for the AWS standard
`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables, and fall back to using instance metadata if those don't exist. You should set those environment variables to your credentials, or configure the instance that this library runs on to have an IAM role.
[specify-multiple-buckets](#module-specify-multiple-buckets)
Specify multiple buckets
---
Waffle lets you specify a bucket on a per definition basis. In case you want to use multiple buckets, you can specify a bucket in the definition module like this:
```
def bucket, do: :some_custom_bucket_name
```
You can also use the current scope to define a target bucket
```
def bucket({_file, scope}), do: scope.bucket || bucket()
```
[access-control-permissions](#module-access-control-permissions)
Access Control Permissions
---
Waffle defaults all uploads to `private`. In cases where it is desired to have your uploads public, you may set the ACL at the module level (which applies to all versions):
```
@acl :public_read
```
Or you may have more granular control over each version. As an example, you may wish to explicitly only make public a thumbnail version of the file:
```
def acl(:thumb, _), do: :public_read
```
Supported access control lists for Amazon S3 are:
| ACL | Permissions Added to ACL |
| --- | --- |
| `:private` | Owner gets `FULL_CONTROL`. No one else has access rights (default). |
| `:public_read` | Owner gets `FULL_CONTROL`. The `AllUsers` group gets READ access. |
| `:public_read_write` | Owner gets `FULL_CONTROL`. The `AllUsers` group gets `READ` and `WRITE` access. Granting this on a bucket is generally not recommended. |
| `:authenticated_read` | Owner gets `FULL_CONTROL`. The `AuthenticatedUsers` group gets `READ` access. |
| `:bucket_owner_read` | Object owner gets `FULL_CONTROL`. Bucket owner gets `READ` access. |
| `:bucket_owner_full_control` | Both the object owner and the bucket owner get `FULL_CONTROL` over the object. |
For more information on the behavior of each of these, please consult Amazon's documentation for [Access Control List (ACL)
Overview](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html).
[s3-object-headers](#module-s3-object-headers)
S3 Object Headers
---
The definition module may specify custom headers to pass through to S3 during object creation. The available custom headers include:
* `:cache_control`
* `:content_disposition`
* `:content_encoding`
* `:content_length`
* `:content_type`
* `:expect`
* `:expires`
* `:storage_class`
* `:website_redirect_location`
* `:encryption` (set to "AES256" for encryption at rest)
As an example, to explicitly specify the content-type of an object,
you may define a `s3_object_headers/2` function in your definition,
which returns a Keyword list, or Map of desired headers.
```
def s3_object_headers(version, {file, scope}) do
[content_type: MIME.from_path(file.file_name)] # for "image.png", would produce: "image/png"
end
```
[alternate-s3-configuration-example](#module-alternate-s3-configuration-example)
Alternate S3 configuration example
---
If you are using a region other than US-Standard, it is necessary to specify the correct configuration for `ex_aws`. A full example configuration for both waffle and ex_aws is as follows:
```
config :waffle,
bucket: "my-frankfurt-bucket"
config :ex_aws,
json_codec: Jason,
access_key_id: "my_access_key_id",
secret_access_key: "my_secret_access_key",
region: "eu-central-1",
s3: [
scheme: "https://",
host: "s3.eu-central-1.amazonaws.com",
region: "eu-central-1"
]
```
> For your host configuration, please examine the approved [AWS Hostnames](http://docs.aws.amazon.com/general/latest/gr/rande.html). There are often multiple hostname formats for AWS regions, and it will not work unless you specify the correct one.
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[delete(definition, version, arg)](#delete/3)
[put(definition, version, arg)](#put/3)
[s3_key(definition, version, file_and_scope)](#s3_key/3)
[url(definition, version, file_and_scope, options \\ [])](#url/4)
[Link to this section](#functions)
Functions
===
mix waffle
===
mix waffle.g
===
A task for generating waffle uploader modules.
If generating an uploader in a Phoenix project, your uploader will be generated in lib/[APP_NAME]_web/uploaders/
[example](#module-example)
Example
---
```
mix waffle.g avatar # creates lib/[APP_NAME]_web/uploaders/avatar.ex
```
If not generating an uploader in a Phoenix project, then you must pass the path to where the uploader should be generated.
[example-1](#module-example-1)
Example
---
```
mix waffle.g avatar uploaders/avatar.ex # creates uploaders/avatar.ex
```
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[run(arg1)](#run/1)
Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1).
[Link to this section](#functions)
Functions
===
An Example for Local
===
Setup the storage provider:
```
config :waffle,
storage: Waffle.Storage.Local,
asset_host: "http://static.example.com" # or {:system, "ASSET_HOST"}
```
Define a definition module:
```
defmodule Avatar do
use Waffle.Definition
@versions [:original, :thumb]
@extensions ~w(.jpg .jpeg .gif .png)
def validate({file, _}) do
file_extension = file.file_name |> Path.extname |> String.downcase
case Enum.member?(@extensions, file_extension) do
true -> :ok
false -> {:error, "file type is invalid"}
end
end
def transform(:thumb, _) do
{:convert, "-thumbnail 100x100^ -gravity center -extent 100x100 -format png", :png}
end
def filename(version, _) do
version
end
def storage_dir(_, {file, user}) do
"uploads/avatars/#{user.id}"
end
end
```
Store or Get files:
```
# Given some current_user record
current_user = %{id: 1}
# Store any accessible file
Avatar.store({"/path/to/my/selfie.png", current_user})
#=> {:ok, "selfie.png"}
# ..or store directly from the `params` of a file upload within your controller
Avatar.store({%Plug.Upload{}, current_user})
#=> {:ok, "selfie.png"}
# and retrieve the url later
Avatar.url({"selfie.png", current_user}, :thumb)
#=> "uploads/avatars/1/thumb.png"
```
[AWS APIGatewayv2 Authorizers](#aws-apigatewayv2-authorizers)
===
---
> The APIs of higher level constructs in this module are experimental and under active development.
> They are subject to non-backward compatible changes or removal in any future version. These are
> not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be
> announced in the release notes. This means that while you may use them, you may need to update
> your source code when upgrading to a newer version of this package.
---
[Table of Contents](#table-of-contents)
---
* [Introduction](#introduction)
* [HTTP APIs](#http-apis)
+ [Default Authorization](#default-authorization)
+ [Route Authorization](#route-authorization)
+ [JWT Authorizers](#jwt-authorizers)
- [User Pool Authorizer](#user-pool-authorizer)
+ [Lambda Authorizers](#lambda-authorizers)
+ [IAM Authorizers](#iam-authorizers)
* [WebSocket APIs](#websocket-apis)
+ [Lambda Authorizer](#lambda-authorizer)
+ [IAM Authorizers](#iam-authorizer)
[Introduction](#introduction)
---
API Gateway supports multiple mechanisms for controlling and managing access to your HTTP API. They are mainly classified into Lambda Authorizers, JWT authorizers and standard AWS IAM roles and policies. More information is available at [Controlling and managing access to an HTTP API](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-access-control.html).
[HTTP APIs](#http-apis)
---
Access control for HTTP APIs is managed by restricting which routes can be invoked.
Authorizers and scopes can either be applied to the API as a whole, or to each route individually.
### [Default Authorization](#default-authorization)
When using default authorization, all routes of the api will inherit the configuration.
In the example below, all routes will require the `manage:books` scope present in order to invoke the integration.
```
import { HttpJwtAuthorizer } from '@aws-cdk/aws-apigatewayv2-authorizers-alpha';
const issuer = 'https://test.us.auth0.com';
const authorizer = new HttpJwtAuthorizer('DefaultAuthorizer', issuer, {
jwtAudience: ['3131231'],
});
const api = new apigwv2.HttpApi(this, 'HttpApi', {
defaultAuthorizer: authorizer,
defaultAuthorizationScopes: ['manage:books'],
});
```
### [Route Authorization](#route-authorization)
Authorization can also be configured for each route. When a route authorization is configured, it takes precedence over default authorization.
The example below showcases default authorization, along with route authorization. It also shows how to remove authorization entirely for a route.
* `GET /books` and `GET /books/{id}` use the default authorizer settings on the api
* `POST /books` will require the [write:books] scope
* `POST /login` removes the default authorizer (unauthenticated route)
```
import { HttpJwtAuthorizer } from '@aws-cdk/aws-apigatewayv2-authorizers-alpha';
import { HttpUrlIntegration } from '@aws-cdk/aws-apigatewayv2-integrations-alpha';
const issuer = 'https://test.us.auth0.com';
const authorizer = new HttpJwtAuthorizer('DefaultAuthorizer', issuer, {
jwtAudience: ['3131231'],
});
const api = new apigwv2.HttpApi(this, 'HttpApi', {
defaultAuthorizer: authorizer,
defaultAuthorizationScopes: ['read:books'],
});
api.addRoutes({
integration: new HttpUrlIntegration('BooksIntegration', 'https://get-books-proxy.example.com'),
path: '/books',
methods: [apigwv2.HttpMethod.GET],
});
api.addRoutes({
integration: new HttpUrlIntegration('BooksIdIntegration', 'https://get-books-proxy.example.com'),
path: '/books/{id}',
methods: [apigwv2.HttpMethod.GET],
});
api.addRoutes({
integration: new HttpUrlIntegration('BooksIntegration', 'https://get-books-proxy.example.com'),
path: '/books',
methods: [apigwv2.HttpMethod.POST],
authorizationScopes: ['write:books']
});
api.addRoutes({
integration: new HttpUrlIntegration('LoginIntegration', 'https://get-books-proxy.example.com'),
path: '/login',
methods: [apigwv2.HttpMethod.POST],
authorizer: new apigwv2.HttpNoneAuthorizer(),
});
```
### [JWT Authorizers](#jwt-authorizers)
JWT authorizers allow the use of JSON Web Tokens (JWTs) as part of [OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.html) and [OAuth 2.0](https://oauth.net/2/) frameworks to allow and restrict clients from accessing HTTP APIs.
When configured, API Gateway validates the JWT submitted by the client, and allows or denies access based on its content.
The location of the token is defined by the `identitySource`, which defaults to the HTTP `Authorization` header. However, it also
[supports a number of other options](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-lambda-authorizer.html#http-api-lambda-authorizer.identity-sources).
API Gateway then decodes the JWT and validates the signature and claims against the options defined in the authorizer and the route (scopes).
For more information check the [JWT Authorizer documentation](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-jwt-authorizer.html).
Clients that fail authorization receive one of two responses:
* `401 - Unauthorized` - When the JWT validation fails
* `403 - Forbidden` - When the JWT validation is successful but the required scopes are not met
```
import { HttpJwtAuthorizer } from '@aws-cdk/aws-apigatewayv2-authorizers-alpha';
import { HttpUrlIntegration } from '@aws-cdk/aws-apigatewayv2-integrations-alpha';
const issuer = 'https://test.us.auth0.com';
const authorizer = new HttpJwtAuthorizer('BooksAuthorizer', issuer, {
jwtAudience: ['3131231'],
});
const api = new apigwv2.HttpApi(this, 'HttpApi');
api.addRoutes({
integration: new HttpUrlIntegration('BooksIntegration', 'https://get-books-proxy.example.com'),
path: '/books',
authorizer,
});
```
#### [User Pool Authorizer](#user-pool-authorizer)
User Pool Authorizer is a type of JWT authorizer that uses a Cognito user pool and app client to control who can access your API. After a successful authorization from the app client, the generated access token is used as the JWT.
Clients accessing an API that uses a user pool authorizer must first sign in to a user pool and obtain an identity or access token.
They must then use this token in the specified `identitySource` for the API call. More information is available at [using Amazon Cognito user pools as authorizer](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html).
```
import * as cognito from 'aws-cdk-lib/aws-cognito';
import { HttpUserPoolAuthorizer } from '@aws-cdk/aws-apigatewayv2-authorizers-alpha';
import { HttpUrlIntegration } from '@aws-cdk/aws-apigatewayv2-integrations-alpha';
const userPool = new cognito.UserPool(this, 'UserPool');
const authorizer = new HttpUserPoolAuthorizer('BooksAuthorizer', userPool);
const api = new apigwv2.HttpApi(this, 'HttpApi');
api.addRoutes({
integration: new HttpUrlIntegration('BooksIntegration', 'https://get-books-proxy.example.com'),
path: '/books',
authorizer,
});
```
### [Lambda Authorizers](#lambda-authorizers)
Lambda authorizers use a Lambda function to control access to your HTTP API. When a client calls your API, API Gateway invokes your Lambda function and uses the response to determine whether the client can access your API.
Depending on their response, Lambda authorizers fall into one of two types: Simple or IAM. You can learn about the differences [here](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-lambda-authorizer.html#http-api-lambda-authorizer.payload-format-response).
```
import { HttpLambdaAuthorizer, HttpLambdaResponseType } from '@aws-cdk/aws-apigatewayv2-authorizers-alpha';
import { HttpUrlIntegration } from '@aws-cdk/aws-apigatewayv2-integrations-alpha';
// This function handles your auth logic
declare const authHandler: lambda.Function;
const authorizer = new HttpLambdaAuthorizer('BooksAuthorizer', authHandler, {
responseTypes: [HttpLambdaResponseType.SIMPLE], // Define if returns simple and/or iam response
});
const api = new apigwv2.HttpApi(this, 'HttpApi');
api.addRoutes({
integration: new HttpUrlIntegration('BooksIntegration', 'https://get-books-proxy.example.com'),
path: '/books',
authorizer,
});
```
### [IAM Authorizers](#iam-authorizers)
API Gateway supports IAM via the included `HttpIamAuthorizer` and grant syntax:
```
import { HttpIamAuthorizer } from '@aws-cdk/aws-apigatewayv2-authorizers-alpha';
import { HttpUrlIntegration } from '@aws-cdk/aws-apigatewayv2-integrations-alpha';
declare const principal: iam.AnyPrincipal;
const authorizer = new HttpIamAuthorizer();
const httpApi = new apigwv2.HttpApi(this, 'HttpApi', {
defaultAuthorizer: authorizer,
});
const routes = httpApi.addRoutes({
integration: new HttpUrlIntegration('BooksIntegration', 'https://get-books-proxy.example.com'),
path: '/books/{book}',
});
routes[0].grantInvoke(principal);
```
[WebSocket APIs](#websocket-apis)
---
You can set an authorizer to your WebSocket API's `$connect` route to control access to your API.
### [Lambda Authorizer](#lambda-authorizer)
Lambda authorizers use a Lambda function to control access to your WebSocket API. When a client connects to your API, API Gateway invokes your Lambda function and uses the response to determine whether the client can access your API.
```
import { WebSocketLambdaAuthorizer } from '@aws-cdk/aws-apigatewayv2-authorizers-alpha';
import { WebSocketLambdaIntegration } from '@aws-cdk/aws-apigatewayv2-integrations-alpha';
// This function handles your auth logic
declare const authHandler: lambda.Function;
// This function handles your WebSocket requests
declare const handler: lambda.Function;
const authorizer = new WebSocketLambdaAuthorizer('Authorizer', authHandler);
const integration = new WebSocketLambdaIntegration(
'Integration',
handler,
);
new apigwv2.WebSocketApi(this, 'WebSocketApi', {
connectRouteOptions: {
integration,
authorizer,
},
});
```
### [IAM Authorizer](#iam-authorizer)
IAM authorizers can be used to allow identity-based access to your WebSocket API.
```
import { WebSocketIamAuthorizer } from '@aws-cdk/aws-apigatewayv2-authorizers-alpha';
import { WebSocketLambdaIntegration } from '@aws-cdk/aws-apigatewayv2-integrations-alpha';
// This function handles your connect route
declare const connectHandler: lambda.Function;
const webSocketApi = new apigwv2.WebSocketApi(this, 'WebSocketApi');
webSocketApi.addRoute('$connect', {
integration: new WebSocketLambdaIntegration('Integration', connectHandler),
authorizer: new WebSocketIamAuthorizer()
});
// Create an IAM user (identity)
const user = new iam.User(this, 'User');
const webSocketArn = Stack.of(this).formatArn({
service: 'execute-api',
resource: webSocketApi.apiId,
});
// Grant access to the IAM user
user.attachInlinePolicy(new iam.Policy(this, 'AllowInvoke', {
statements: [
new iam.PolicyStatement({
actions: ['execute-api:Invoke'],
effect: iam.Effect.ALLOW,
resources: [webSocketArn],
}),
],
}));
```
### Keywords
* aws
* cdk
* constructs
* apigateway
Package ‘PCMBase’
November 18, 2022
Type Package
Title Simulation and Likelihood Calculation of Phylogenetic
Comparative Models
Version 1.2.13
Maintainer <NAME> <<EMAIL>>
Description Phylogenetic comparative methods represent models of continuous trait
data associated with the tips of a phylogenetic tree. Examples of such models
are Gaussian continuous time branching stochastic processes such as Brownian
motion (BM) and Ornstein-Uhlenbeck (OU) processes, which regard the data at the
tips of the tree as an observed (final) state of a Markov process starting from
an initial state at the root and evolving along the branches of the tree. The
PCMBase R package provides a general framework for manipulating such models.
This framework consists of an application programming interface for specifying
data and model parameters, and efficient algorithms for simulating trait evolution
under a model and calculating the likelihood of model parameters for an assumed
model and trait data. The package implements a growing collection of models,
which currently includes BM, OU, BM/OU with jumps, two-speed OU as well as mixed
Gaussian models, in which different types of the above models can be associated
with different branches of the tree. The PCMBase package is limited to
trait-simulation and likelihood calculation of (mixed) Gaussian phylogenetic
models. The PCMFit package provides functionality for inference of
these models to tree and trait data. The package web-site
<https://venelin.github.io/PCMBase/>
provides access to the documentation and other resources.
Encoding UTF-8
License GPL (>= 3.0)
LazyData true
Depends R (>= 3.1.0)
Imports ape, expm, mvtnorm, data.table, ggplot2, xtable
Suggests testthat, knitr, rmarkdown, abind, ggtree, cowplot, covr,
mvSLOUCH, BiocManager
RoxygenNote 7.1.0
ByteCompile yes
VignetteBuilder knitr, rmarkdown
URL https://venelin.github.io/PCMBase/, https://venelin.github.io
BugReports https://github.com/venelin/PCMBase/issues
NeedsCompilation no
Repository CRAN
Author <NAME> [aut, cre, cph] (https://venelin.github.io),
<NAME> [ctb],
<NAME> [ctb],
<NAME> [ths]
Date/Publication 2022-11-18 08:40:02 UTC
R topics documented:
Args_MixedGaussian_MGPMDefaultModelTypes, Args_MixedGaussian_MGPMScalarOUType,
Args_MixedGaussian_MGPMSurfaceOUType, as.MixedGaussian, dataFig3, FormatCellAsLatex,
FormatTableAsLatex, is.GaussianPCM, is.MixedGaussian, is.PCM, is.PCMTree, MatchListMembers,
MGPMScalarOUType, MGPMSurfaceOUType, MixedGaussian, MixedGaussianTemplate, PCM, PCMAbCdEf,
PCMAddToListAttribute, PCMApplyTransformation, PCMBaseIsADevRelease, PCMBaseTestObjects,
PCMColorPalette, PCMCombineListAttribute, PCMCond, PCMCond.GaussianPCM, PCMCondVOU,
PCMCreateLikelihood, PCMDefaultModelTypes, PCMDefaultObject, PCMDescribe, PCMDescribeParameters,
PCMExtractDimensions, PCMExtractRegimes, PCMFindMethod, PCMFixParameter, PCMGenerateModelTypes,
PCMGenerateParameterizations, PCMGetAttribute, PCMGetVecParamsRegimesAndModels, PCMInfo,
PCMLik, PCMLikDmvNorm, PCMLikTrace, PCMListMembers, PCMListParameterizations, PCMLmr,
PCMMapModelTypesToRegimes, PCMMean, PCMMeanAtTime, PCMModels, PCMModelTypes, PCMNumRegimes,
PCMNumTraits, PCMOptions, PCMPairSums, PCMParam, PCMParamCount, PCMParamGetShortVector,
PCMParamLoadOrStore, PCMParamLocateInShortVector, PCMParamLowerLimit, PCMParamRandomVecParams,
PCMParamSetByName, PCMParamType, PCMParamUpperLimit, PCMParentClasses, PCMParseErrorMessage,
PCMPExpxMeanEx..., PCMPLambdaP_1, PCMPlotGaussianDensityGrid2D, PCMPlotGaussianSample2D,
PCMPlotMath, PCMPlotTraitData2D, PCMPresentCoordinates, PCMRegimes, PCMSetAttribute, PCMSim,
PCMSpecify, PCMTable, PCMTableParameterizations, PCMTrajectory, PCMTree, PCMTreeBackbonePartition,
PCMTreeDropClade, PCMTreeDtNodes, PCMTreeEdgeTimes, PCMTreeEvalNestedEDxOnTree,
PCMTreeExtractClade, PCMTreeGetBranchLength, PCMTreeGetDaughters, PCMTreeGetLabels,
PCMTreeGetParent, PCMTreeGetPartition, PCMTreeGetPartNames, PCMTreeGetPartRegimes,
PCMTreeGetPartsForNodes, PCMTreeGetRegimesForEdges, PCMTreeGetRegimesForNodes,
PCMTreeGetTipsInPart, PCMTreeGetTipsInRegime, PCMTreeInsertSingletons, PCMTreeJumps,
PCMTreeListAllPartitions, PCMTreeListCladePartitions, PCMTreeListDescendants, PCMTreeListRootPaths,
PCMTreeLocateEpochOnBranches, PCMTreeLocateMidpointsOnBranches, PCMTreeMatchLabels,
PCMTreeMatrixNodesInSamePart, PCMTreeNearestNodesToEpoch, PCMTreeNodeTimes, PCMTreeNumNodes,
PCMTreeNumParts, PCMTreeNumTips, PCMTreePlot, PCMTreePostorder, PCMTreePreorder, PCMTreeSetLabels,
PCMTreeSetPartition, PCMTreeSetPartRegimes, PCMTreeSetRegimesForEdges, PCMTreeSplitAtNode,
PCMTreeTableAncestors, PCMTreeToString, PCMTreeVCV, PCMUnfixParameter, PCMVar, PCMVarAtTime,
TruePositiveRate, UpperTriFactor, White
Args_MixedGaussian_MGPMDefaultModelTypes
Arguments to be passed to the constructor MixedGaussian when con-
structing a MGPM model with some of the default MGPM model types.
Description
Arguments to be passed to the constructor MixedGaussian when constructing a MGPM model with
some of the default MGPM model types.
Usage
Args_MixedGaussian_MGPMDefaultModelTypes(omitGlobalSigmae_x = TRUE)
Arguments
omitGlobalSigmae_x
logical, indicating if the returned list should specify the global Sigmae_x param-
eter as ’_Omitted’. Default: TRUE.
Value
a list of named arguments. Currently only a named element Sigmae_x with specification depending
on omitGlobalSigmae_x.
See Also
MGPMDefaultModelTypes
Args_MixedGaussian_MGPMScalarOUType
Arguments for the MixedGaussian constructor for scalar OU MGPM
models.
Description
Arguments for the MixedGaussian constructor for scalar OU MGPM models.
Usage
Args_MixedGaussian_MGPMScalarOUType()
Value
a list.
Args_MixedGaussian_MGPMSurfaceOUType
Arguments for the MixedGaussian constructor for SURFACE OU
MGPM models.
Description
Arguments for the MixedGaussian constructor for SURFACE OU MGPM models.
Usage
Args_MixedGaussian_MGPMSurfaceOUType()
Value
a list.
as.MixedGaussian Convert a GaussianPCM model object to a MixedGaussian model ob-
ject
Description
Convert a GaussianPCM model object to a MixedGaussian model object
Usage
as.MixedGaussian(o, modelTypes = NULL)
Arguments
o an R object: either a GaussianPCM or a MixedGaussian.
modelTypes NULL (the default) or a (possibly named) character string vector. Each such
string denotes a mixed Gaussian regime model class, e.g. the result of calling
MGPMDefaultModelTypes(). If specified, an attempt is made to match the de-
duced Gaussian regime model type from o with the elements of modelTypes
and an error is raised if the match fails. If the match succeeds the converted
MixedGaussian object will have the specified modelTypes parameter as an at-
tribute "modelTypes".
Value
a MixedGaussian object.
Examples
mg <- as.MixedGaussian(PCMBaseTestObjects$model.ab.123.bSigmae_x)
stopifnot(
PCMLik(
X = PCMBaseTestObjects$traits.ab.123,
PCMBaseTestObjects$tree.ab,
PCMBaseTestObjects$model.ab.123.bSigmae_x) ==
PCMLik(
X = PCMBaseTestObjects$traits.ab.123,
PCMBaseTestObjects$tree.ab,
mg))
dataFig3 Data for Fig3 in the TPB manuscript
Description
A list containing simulated tree, models and data used in Fig. 3
Usage
dataFig3
Format
This is a list containing the following named elements representing simulation parameters, a simulated tree and PCM objects, used in Fig. 3. For details on all these objects, read the file data-raw/Fig3.Rmd.
FormatCellAsLatex Latex representation of a model parameter or other found in a
data.table object
Description
Latex representation of a model parameter or other found in a data.table object
Usage
FormatCellAsLatex(x)
Arguments
x an R object. Currently, character vectors of length 1, numeric vectors and ma-
trices are supported.
Value
a character string
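For instance (a minimal sketch, assuming PCMBase is attached), a numeric matrix or a single number can be converted to a Latex fragment:
FormatCellAsLatex(matrix(c(2, 0, 1.2, 3), 2, 2))
FormatCellAsLatex(3.14)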
FormatTableAsLatex Latex representation of a data.table with matrix and vectors in its cells
Description
Latex representation of a data.table with matrix and vectors in its cells
Usage
FormatTableAsLatex(x, argsXtable = list(), ...)
Arguments
x a data.table
argsXtable a list (empty list by default) passed to xtable...
... additional arguments passed to print.xtable.
Value
a character string representing a parseable latex text.
Examples
dt <- data.table::data.table(
A = list(
matrix(c(2, 0, 1.2, 3), 2, 2),
matrix(c(2.1, 0, 1.2, 3.2, 1.3, 3.4), 3, 2)),
b = c(2.2, 3.1))
print(FormatTableAsLatex(dt))
is.GaussianPCM Check if an object is a ‘GaussianPCM‘
Description
Check if an object is a ‘GaussianPCM‘
Usage
is.GaussianPCM(x)
Arguments
x any object
Value
TRUE if x inherits from the S3 class ‘GaussianPCM‘, FALSE otherwise.
is.MixedGaussian Check if an object is a ‘MixedGaussian‘ PCM
Description
Check if an object is a ‘MixedGaussian‘ PCM
Usage
is.MixedGaussian(x)
Arguments
x any object
Value
TRUE if x inherits from the S3 class ‘MixedGaussian‘, FALSE otherwise.
is.PCM Check if an object is a PCM.
Description
Check if an object is a PCM.
Usage
is.PCM(x)
Arguments
x an object.
Value
TRUE if ‘x‘ inherits from the S3 class "PCM".
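For example (a minimal sketch, assuming PCMBase is attached), the three predicates above can be checked on a default BM model created with PCM:
m <- PCM("BM", k = 2)
is.PCM(m)            # TRUE: every model object inherits from the S3 class "PCM"
is.GaussianPCM(m)    # TRUE: BM is a Gaussian model
is.MixedGaussian(m)  # FALSE: m is a single-type model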
is.PCMTree Check that a tree is a PCMTree
Description
Check that a tree is a PCMTree
Usage
is.PCMTree(tree)
Arguments
tree a tree object.
Value
a logical TRUE if ‘inherits(tree, "PCMTree")‘ is TRUE.
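A minimal sketch, assuming the ape package is available; a plain phylo object is not a PCMTree until it is converted with PCMTree:
phyloTree <- ape::rtree(5)
is.PCMTree(phyloTree)           # FALSE: a plain phylo object
is.PCMTree(PCMTree(phyloTree))  # TRUE after conversion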
MatchListMembers Find the members in a list matching a member expression
Description
Find the members in a list matching a member expression
Usage
MatchListMembers(object, member, enclos = "?", q = "'", ...)
Arguments
object a list containing named elements.
member a member expression. Member expressions are character strings denoting named
elements in a list object (see examples).
enclos a character string containing the special symbol ’?’. This symbol is to be replaced by matching expressions. The result of this substitution can be anything, but usually it would be a valid R expression. Default: "?".
q a quote symbol, Default: "'".
... additional arguments passed to grep. For example, these could be ignore.case=TRUE
or perl=TRUE.
Value
a named character vector, with names corresponding to the matched member quoted expressions
(using the argument q as a quote symbol), and values corresponding to the ’enclos-ed’ expressions
after substituting the ’?’.
See Also
PCMListMembers
Examples
model <- PCMBaseTestObjects$model_MixedGaussian_ab
MatchListMembers(model, "Sigma_x", "diag(model?[,,1L])")
MatchListMembers(model, "S.*_x", "diag(model?[,,1L])")
MatchListMembers(model, "Sigma_x", "model?[,,1L][upper.tri(model?[,,1L])]")
MatchListMembers(model, "a$Sigma_x", "model?[,,1L][upper.tri(model?[,,1L])]")
MGPMScalarOUType Class name for the scalar OU MGPM model type
Description
Class name for the scalar OU MGPM model type
Usage
MGPMScalarOUType()
Value
a character vector of one named element (ScalarOU)
MGPMSurfaceOUType Class name for the SURFACE OU MGPM model type
Description
Class name for the SURFACE OU MGPM model type
Usage
MGPMSurfaceOUType()
Value
a character vector of one named element (SURFACE)
MixedGaussian Create a multi-regime Gaussian model (MixedGaussian)
Description
Create a multi-regime Gaussian model (MixedGaussian)
Usage
MixedGaussian(
k,
modelTypes,
mapping,
className = paste0("MixedGaussian_", do.call(paste0, as.list(mapping))),
X0 = structure(0, class = c("VectorParameter", "_Global"), description =
"trait values at the root"),
...,
Sigmae_x = structure(0, class = c("MatrixParameter", "_UpperTriangularWithDiagonal",
"_WithNonNegativeDiagonal", "_Global"), description =
"Upper triangular factor of the non-phylogenetic variance-covariance")
)
Arguments
k integer specifying the number of traits.
modelTypes, mapping
These two arguments define the mapping between the regimes in the model and
actual types of models. For convenience, different combinations are possible as
explained below:
• modelTypes is a (possibly named) character string vector. Each such string
denotes a mixed Gaussian regime model class, e.g. the result of calling
MGPMDefaultModelTypes(). In that case mapping can be either an integer
vector with values corresponding to indices in modelTypes or a charac-
ter string vector. If mapping is a character string vector, first it is matched
against names(modelTypes) and if the match fails either because of names(modelTypes)
being NULL or because some of the entries in mapping are not present in
names(modelTypes), then an attempt is made to match mapping against
modelTypes, i.e. it is assumed that mapping contains actual class names.
• modelTypes is a (possibly named) list of PCM models of k traits. In this
case mapping can again be an integer vector denoting indices in modelTypes
or a character string vector denoting names in modelTypes.
As a final note, mapping can also be named. In this case the names are assumed
to be the names of the different regimes in the model. If mapping is not named,
the regimes are named automatically as as.character(seq_len(mapping)).
For example, modelTypes = c("BM", "OU") and mapping = c(a = 1, b = 1, c = 2, d = 1) define a MixedGaussian with four different regimes named ’a’, ’b’, ’c’, ’d’, and model types BM, BM, OU and BM corresponding to each regime.
className a character string defining a valid S3 class name for the resulting MixedGaus-
sian object. If not specified, a className is generated using the expression
paste0("MixedGaussian_", do.call(paste0, as.list(mapping))).
X0 specification for the global vector X0 to be used by all models in the MixedGaus-
sian.
... specifications for other _Global parameters coming after X0.
Sigmae_x specification of a _Global Sigmae_x parameter. This is used by sub-models only if they have Sigmae_x ’_Omitted’.
Details
If X0 is not NULL, it makes no sense to use model types that include X0 as a parameter (e.g. use BM1 or BM3 instead of BM or BM2). Similarly, if Sigmae_x is not NULL, there is no point in using model types that include Sigmae_x as a parameter (e.g. use OU2 or OU3 instead of OU or OU1).
Value
an object of S3 class className inheriting from MixedGaussian, GaussianPCM and PCM.
See Also
PCMTreeGetPartNames
PCMModels()
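As an illustrative sketch (the regime names ’a’ and ’b’ and the chosen mapping are arbitrary, not values prescribed by the package), a two-regime MGPM over three traits can be assembled from the default MGPM model types together with the argument helper Args_MixedGaussian_MGPMDefaultModelTypes:
# model type 1 (BM with uncorrelated traits) for regime 'a',
# model type 6 (OU with asymmetric H) for regime 'b'
mgModel <- do.call(MixedGaussian, c(
  list(k = 3,
       modelTypes = MGPMDefaultModelTypes(),
       mapping = c(a = 1, b = 6)),
  Args_MixedGaussian_MGPMDefaultModelTypes()))
PCMNumRegimes(mgModel)  # 2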
MixedGaussianTemplate Create a template MixedGaussian object containing a regime for each
model type
Description
Create a template MixedGaussian object containing a regime for each model type
Usage
MixedGaussianTemplate(mg, modelTypes = NULL)
Arguments
mg a MixedGaussian object or an object that can be converted to such via as.MixedGaussian.
modelTypes a (possibly named) character string vector. Each such string denotes a mixed
Gaussian regime model class, e.g. the result of calling MGPMDefaultModelTypes().
If specified, an attempt is made to match PCMModelTypes(as.MixedGaussian(mg))
with the elements of modelTypes and an error is raised if the match fails. If not
named, the model types and regimes in the resulting MixedGaussian object are
named by the capital latin letters A,B,C,.... Default: NULL, which is interpreted
as PCMModelTypes(as.MixedGaussian(mg, NULL)).
Value
a MixedGaussian with the same global parameter settings as for mg, the same modelTypes as
modelTypes, and with a regime for each model type. The function will stop with an error if mg
is not convertible to a MixedGaussian object or if there is a mismatch between the model types in
mg and modelTypes.
Examples
mg <- MixedGaussianTemplate(PCMBaseTestObjects$model.ab.123.bSigmae_x)
mgTemplBMOU <- MixedGaussianTemplate(PCMBaseTestObjects$model.OU.BM)
PCM Create a phylogenetic comparative model object
Description
This is the entry-point function for creating model objects within the PCMBase framework rep-
resenting a single model-type with one or several model-regimes of this type associated with the
branches of a tree. For mixed Gaussian phylogenetic models, which enable multiple model-types,
use the MixedGaussian function.
Usage
PCM(
model,
modelTypes = class(model)[1],
k = 1L,
regimes = 1L,
params = NULL,
vecParams = NULL,
offset = 0L,
spec = NULL,
...
)
Arguments
model This argument can take one of the following forms:
• a character vector of the S3-classes of the model object to be created (one
model object can have one or more S3-classes, with the class PCM at the
origin of the hierarchy);
• an S3 object which’s class inherits from the PCM S3 class.
The Details section explains how these two types of input are processed.
modelTypes a character string vector specifying a set (family) of model-classes, to which the
constructed model object belongs. These are used for model-selection.
k integer denoting the number of traits (defaults to 1).
regimes a character or integer vector denoting the regimes.
params NULL (default) or a list of parameter values (scalars, vectors, matrices, or ar-
rays) or sub-models (S3 objects inheriting from the PCM class). See details.
vecParams NULL (default) or a numeric vector the vector representation of the variable
parameters in the model. See details.
offset integer offset in vecParams; see Details.
spec NULL or a list specifying the model parameters (see PCMSpecify). If NULL
(default), the generic PCMSpecify is called on the created object of class model.
... additional parameters intended for use by sub-classes of the PCM class.
Details
This is an S3 generic. The PCMBase package defines three methods for it:
• PCM.PCM: A default constructor for any object with a class inheriting from "PCM".
• PCM.character: A default PCM constructor from a character string specifying the type of
model.
• PCM.default: A default constructor called when no other constructor is found. When called
this constructor raises an error message.
Value
an object of S3 class as defined by the argument model.
See Also
MixedGaussian
Examples
# a Brownian motion model with one regime
modelBM <- PCM(model = "BM", k = 2)
# print the model
modelBM
# a BM model with two regimes
modelBM.ab <- PCM("BM", k = 2, regimes = c("a", "b"))
modelBM.ab
# print a single parameter of the model (in this case, the root value)
modelBM.ab$X0
# assign a value to this parameter (note that the brackets [] are necessary
# to preserve the parameter attributes):
modelBM.ab$X0[] <- c(5, 2)
PCMNumTraits(modelBM)
PCMNumRegimes(modelBM)
PCMNumRegimes(modelBM.ab)
# number of numerical parameters in the model
PCMParamCount(modelBM)
# Get a vector representation of all parameters in the model
PCMParamGetShortVector(modelBM)
# Limits for the model parameters:
lowerLimit <- PCMParamLowerLimit(modelBM)
upperLimit <- PCMParamUpperLimit(modelBM)
# assign the model parameters at random: this will use uniform distribution
# with boundaries specified by PCMParamLowerLimit and PCMParamUpperLimit
# We do this in two steps:
# 1. First we generate a random vector. Note the length of the vector equals PCMParamCount(modelBM)
randomParams <- PCMParamRandomVecParams(modelBM, PCMNumTraits(modelBM), PCMNumRegimes(modelBM))
randomParams
# 2. Then we load this random vector into the model.
PCMParamLoadOrStore(modelBM, randomParams, 0, PCMNumTraits(modelBM), PCMNumRegimes(modelBM), TRUE)
print(modelBM)
PCMParamGetShortVector(modelBM)
# generate a random phylogenetic tree of 10 tips
tree <- ape::rtree(10)
#simulate the model on the tree
traitValues <- PCMSim(tree, modelBM, X0 = modelBM$X0)
# calculate the likelihood for the model parameters, given the tree and the trait values
PCMLik(traitValues, tree, modelBM)
# create a likelihood function for faster processing for this specific model.
# This function is convenient for calling in optim because it receives a parameter
# vector instead of a model object.
likFun <- PCMCreateLikelihood(traitValues, tree, modelBM)
likFun(randomParams)
PCMAbCdEf Quadratic polynomial parameters A, b, C, d, E, f for each node
Description
An S3 generic function that has to be implemented for every model class. This function is called by
PCMLik.
Usage
PCMAbCdEf(
tree,
model,
SE = matrix(0, PCMNumTraits(model), PCMTreeNumTips(tree)),
metaI = PCMInfo(NULL, tree, model, verbose = verbose),
verbose = FALSE
)
Arguments
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
SE a k x N matrix specifying the standard error for each measurement in X. Al-
ternatively, a k x k x N cube specifying an upper triangular k x k factor of the
variance covariance matrix for the measurement error for each tip i=1, ..., N.
When SE is a matrix, the k x k measurement error variance matrix for a tip i
is calculated as VE[, , i] <- diag(SE[, i] * SE[, i], nrow = k). When SE is
a cube, the way how the measurement variance matrix for a tip i is calculated
depends on the runtime option PCMBase.Transpose.Sigma_x as follows:
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == FALSE (default):
VE[, , i] <- SE[, , i] %*% t(SE[, , i])
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == TRUE: VE[, , i] <-
t(SE[, , i]) %*% SE[, , i]
Note that the above behavior is consistent with the treatment of the model pa-
rameters Sigma_x, Sigmae_x and Sigmaj_x, which are also specified as upper
triangular factors. Default: matrix(0.0, PCMNumTraits(model), PCMTreeNumTips(tree)).
metaI a list returned from a call to PCMInfo(X, tree, model, SE), containing meta-
data such as N, M and k. Alternatively, this can be a character string naming a
function or a function object that returns such a list, e.g. the function PCMInfo or
the function PCMInfoCpp from the PCMBaseCpp package.
verbose logical indicating if some debug-messages should printed.
PCMAddToListAttribute Add a value to a list-valued attribute of a member or members match-
ing a pattern
Description
Add a value to a list-valued attribute of a member or members matching a pattern
Usage
PCMAddToListAttribute(
name,
value,
object,
member = "",
enclos = "?",
spec = TRUE,
inplace = TRUE,
...
)
Arguments
name a character string denoting the attribute name.
value the value for the attribute.
object a PCM or a list object.
member a member expression. Member expressions are character strings denoting named
elements in a list object (see examples). Default: "".
enclos a character string containing the special symbol ’?’. This symbol is to be replaced by matching expressions. The result of this substitution can be anything, but usually it would be a valid R expression. Default: "?".
spec a logical (TRUE by default) indicating if the attribute should also be set in the
corresponding member of the spec attribute (this is for PCM objects only).
inplace logical (TRUE by default) indicating if the attribute should be set to the object
in the current environment, or a modified object should be returned.
... additional arguments passed to MatchListMembers.
Value
if inplace is TRUE no value is returned. Otherwise, a modified version of object is returned.
PCMApplyTransformation
Map a parametrization to its original form.
Description
This is an S3 generic that transforms the passed argument by applying the transformation rules for
its S3 class.
This is an S3 generic. See ‘PCMApplyTransformation._CholeskyFactor‘ for an example.
Usage
PCMApplyTransformation(o, ...)
Arguments
o a PCM object or a parameter
... additional arguments that can be used by implementing methods.
Details
This function returns the same object if it is not transformable.
Value
a transformed version of o.
See Also
is.Transformable
PCMBaseIsADevRelease Check if the PCMBase version corresponds to a dev release
Description
Check if the PCMBase version corresponds to a dev release
Usage
PCMBaseIsADevRelease()
Value
a logical
PCMBaseTestObjects Test objects for the PCMBase package
Description
A list containing simulated trees, trait-values and model objects for tests and examples of the PCM-
Base package
Usage
PCMBaseTestObjects
Format
This is a list containing the following named elements representing parameters of BM, OU and
MixedGaussian models with up to three traits and up to two regimes, model objects, simulated trees
with partition of their nodes in up to two parts (corresponding to the two regimes), and trait data
simulated on these trees.
a.H, b.H H matrices for OU models for regimes ’a’ and ’b’.
a.Theta, b.Theta Theta vectors for OU models for regimes ’a’ and ’b’.
a.Sigma_x, b.Sigma_x Sigma_x matrices for BM and OU models for regimes ’a’ and ’b’.
a.Sigmae_x, b.Sigmae_x Sigmae_x matrices regimes ’a’ and ’b’.
a.X0, b.X0 X0 vectors for regimes ’a’ and ’b’.
H an array resulting from abind(a.H, b.H).
Theta a matrix resulting from cbind(Theta.a, Theta.b).
Sigma_x an array resulting from abind(a.Sigma_x, b.Sigma_x).
Sigmae_x an array resulting from abind(a.Sigmae_x, b.Sigmae_x).
model.a.1, model.a.2, model.a.3 univariate models with a single regime for each of 3 traits.
model.a.1.Omitted_X0 same as model.a.1 but omitting X0; suitable for nesting in an MGPM
model.
model.a.123, model.b.123 single-regime 3-variate models.
model.a.123.Omitted_X0 single-regime 3-variate model with omitted X0 (suitable for nesting in an MGPM).
model.a.123.Omitted_X0__bSigmae_x same as model.a.123.Omitted_X0 but with the value of
Sigmae_x copied from model.b.123.
model.a.123.Omitted_X0__Omitted_Sigmae_x same as model.a.123 but omitting X0 and Sig-
mae_x.
model.b.123.Omitted_X0, model.b.123.Omitted_X0__Omitted_Sigmae_x analogous to the corresponding model.a.123... objects.
model.ab.123 a two-regime 3-variate model.
model.ab.123.bSigmae_x a two-regime 3-variate model having Sigmae_x from b.Sigmae_x.
model_MixedGaussian_ab a two-regime MGPM model with a local Sigmae_x for each regime.
model_MixedGaussian_ab_globalSigmae_x a two-regime MGPM model with a global Sigmae_x.
N number of tips in simulated trees
tree_15_tips a tree of 15 tips used for testing clade extraction.
tree.a a tree with one part only (one regime)
tree.ab a tree partitioned in two parts (two regimes)
traits.a.1 trait values simulated with model.a.1.
traits.a.123 trait values simulated with model.a.123.
traits.a.2 trait values simulated with model.a.2.
traits.a.3 trait values simulated with model.a.3.
traits.ab.123 trait values simulated with model.ab.123 on tree.ab.
tree a tree of 5 tips used for examples.
X 3-trait data for 5 tips used together with tree for examples.
model.OU.BM a mixed Gaussian phylogenetic model for 3 traits and an OU and BM regime used
in examples.
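For example (a small sketch, assuming PCMBase is attached), the shipped objects can be passed directly to other PCMBase functions:
names(PCMBaseTestObjects)                       # list all available test objects
PCMNumRegimes(PCMBaseTestObjects$model.ab.123)  # 2 regimes, 'a' and 'b'
PCMNumTraits(PCMBaseTestObjects$model.ab.123)   # 3 traits
PCMTreeNumTips(PCMBaseTestObjects$tree.ab)      # number of tips in the two-part tree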
PCMColorPalette A fixed palette of n colors
Description
A fixed palette of n colors
Usage
PCMColorPalette(
n,
names,
colors = structure(hcl(h = seq(15, 375, length = n + 1), l = 65, c =
100)[seq_len(n)], names = names)
)
Arguments
n an integer defining the number of colors in the resulting palette.
names a character vector of length ‘n‘.
colors a vector of n values convertible to colors. Default: structure(hcl( h = seq(15,
375, length = n + 1), l = 65, c = 100)[seq_len(n)],names = names)
Value
A vector of character strings which can be used as color specifications by R graphics functions.
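For instance, a minimal sketch generating a two-color palette named after two regimes:
pal <- PCMColorPalette(2, c("a", "b"))
pal  # a named vector of two color specifications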
PCMCombineListAttribute
Combine all member attributes of a given name into a list
Description
Combine all member attributes of a given name into a list
Usage
PCMCombineListAttribute(object, name)
Arguments
object a named list object.
name a character string denoting the name of the attribute.
Value
a list of attribute values
PCMCond Conditional distribution of a daughter node given its parent node
Description
An S3 generic function that has to be implemented for every model class.
Usage
PCMCond(
tree,
model,
r = 1,
metaI = PCMInfo(NULL, tree, model, verbose = verbose),
verbose = FALSE
)
Arguments
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
r an integer specifying a model regime
metaI a list returned from a call to PCMInfo(X, tree, model, SE), containing meta-
data such as N, M and k. Alternatively, this can be a character string naming a
function or a function object that returns such a list, e.g. the function PCMInfo or
the function PCMInfoCpp from the PCMBaseCpp package.
verbose logical indicating if some debug-messages should printed.
Value
an object of type specific to the type of model
PCMCond.GaussianPCM Conditional distribution of a daughter node given its parent node
Description
An S3 generic function that has to be implemented for every model class.
Usage
## S3 method for class 'GaussianPCM'
PCMCond(
tree,
model,
r = 1,
metaI = PCMInfo(NULL, tree, model, verbose = verbose),
verbose = FALSE
)
Arguments
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
r an integer specifying a model regime
metaI a list returned from a call to PCMInfo(X, tree, model, SE), containing meta-
data such as N, M and k. Alternatively, this can be a character string naming a
function or a function object that returns such a list, e.g. the function PCMInfo or
the function PCMInfoCpp from the PCMBaseCpp package.
verbose logical indicating if some debug-messages should printed.
Value
For GaussianPCM models, a named list with the following members:
omega, Phi, V: the parameters of the Gaussian conditional distribution of a daughter node given its parent node, that is, expectation omega + Phi %*% x_parent and variance-covariance matrix V.
PCMCondVOU Variance-covariance matrix of an OU process with optional measure-
ment error and jump at the start
Description
Variance-covariance matrix of an OU process with optional measurement error and jump at the start
Usage
PCMCondVOU(
H,
Sigma,
Sigmae = NULL,
Sigmaj = NULL,
xi = NULL,
e_Ht = NULL,
threshold.Lambda_ij = getOption("PCMBase.Threshold.Lambda_ij", 1e-08)
)
Arguments
H a numerical k x k matrix - selection strength parameter.
Sigma a numerical k x k matrix - neutral drift unit-time variance-covariance matrix.
Sigmae a numerical k x k matrix - environmental variance-covariance matrix.
Sigmaj is the variance matrix of the normal jump distribution (default is NULL).
xi a vector of 0’s and 1’s corresponding to each branch in the tree. A value of 1 indicates that a jump takes place at the beginning of the branch. This argument is only used if Sigmaj is not NULL. Default is NULL.
e_Ht a numerical k x k matrix - the result of the matrix exponential expm(-t*H).
threshold.Lambda_ij
a 0-threshold for abs(Lambda_i + Lambda_j), where Lambda_i and Lambda_j are eigenvalues of the parameter matrix H. This threshold value is used as a condition to take the limit of the expression ‘(1-exp(-Lambda_ij*time))/Lambda_ij‘ as ‘(Lambda_i+Lambda_j) -> 0‘. You can control this value by the global option "PCMBase.Threshold.Lambda_ij". The default value (1e-8) is suitable for branch lengths bigger than 1e-6. For smaller branch lengths, you may want to increase the threshold value using, e.g. ‘options(PCMBase.Threshold.Lambda_ij=1e-6)‘.
Value
a function of one numerical argument (time) and an integer indicating the branch-index that is used
to check the corresponding element in xi.
PCMCreateLikelihood Create a likelihood function of a numerical vector parameter
Description
Create a likelihood function of a numerical vector parameter
Usage
PCMCreateLikelihood(
X,
tree,
model,
SE = matrix(0, PCMNumTraits(model), PCMTreeNumTips(tree)),
metaI = PCMInfo(X, tree, model, SE),
positiveValueGuard = Inf
)
Arguments
X a k x N numerical matrix with possible NA and NaN entries. For i=1,..., N, the
column i of X contains the measured trait values for species i (the tip with inte-
ger identifier equal to i in tree). Missing values can be either not-available (NA)
or not existing (NaN). These two values are treated differently when calculating
likelihoods (see PCMPresentCoordinates).
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
SE a k x N matrix specifying the standard error for each measurement in X. Al-
ternatively, a k x k x N cube specifying an upper triangular k x k factor of the
variance covariance matrix for the measurement error for each tip i=1, ..., N.
When SE is a matrix, the k x k measurement error variance matrix for a tip i
is calculated as VE[, , i] <- diag(SE[, i] * SE[, i], nrow = k). When SE is
a cube, the way how the measurement variance matrix for a tip i is calculated
depends on the runtime option PCMBase.Transpose.Sigma_x as follows:
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == FALSE (default):
VE[, , i] <- SE[, , i] %*% t(SE[, , i])
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == TRUE: VE[, , i] <-
t(SE[, , i]) %*% SE[, , i]
Note that the above behavior is consistent with the treatment of the model pa-
rameters Sigma_x, Sigmae_x and Sigmaj_x, which are also specified as upper
triangular factors. Default: matrix(0.0, PCMNumTraits(model), PCMTreeNumTips(tree)).
metaI a list returned from a call to PCMInfo(X, tree, model, SE), containing meta-
data such as N, M and k. Alternatively, this can be a character string naming a
function or a function object that returns such a list, e.g. the function PCMInfo or
the function PCMInfoCpp from the PCMBaseCpp package.
positiveValueGuard
positive numerical value (default Inf), which serves as a guard for numerical
error. Values exceeding this positiveGuard are most likely due to numerical
error and PCMOptions()$PCMBase.Value.NA is returned instead.
Details
It is possible to specify a function for the argument metaI. This function should have three parameters (X, tree, model) and should return a metaInfo object (see PCMInfo).
Value
a function of a numerical vector parameter called p returning the likelihood of X given the tree and
the model with parameter values specified by p.
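A minimal sketch of the intended usage, assuming the ape package is available for simulating a random tree:
model <- PCM("BM", k = 2)
tree <- ape::rtree(10)
X <- PCMSim(tree, model, X0 = model$X0)        # simulate trait data under the model
likFun <- PCMCreateLikelihood(X, tree, model)  # likelihood as a function of a parameter vector
likFun(PCMParamGetShortVector(model))          # same value as PCMLik(X, tree, model)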
PCMDefaultModelTypes Class names for the the default PCM and MGPM model types
Description
Utility functions returning named character vector of the model class-names for the default model
types used for PCM and MixedGaussian model construction.
Usage
PCMDefaultModelTypes()
MGPMDefaultModelTypes()
Value
Both PCMDefaultModelTypes and MGPMDefaultModelTypes return a character string vector with named elements (A,B,C,D,E,F) defined as follows (Mitov et al. 2019a):
A. BM (H = 0, diagonal Σ): BM, uncorrelated traits.
B. BM (H = 0, symmetric Σ): BM, correlated traits.
C. OU (diagonal H, diagonal Σ): OU, uncorrelated traits.
D. OU (diagonal H, symmetric Σ): OU, correlated traits, but simple (diagonal) selection strength
matrix.
E. OU (symmetric H, symmetric Σ): An OU with nondiagonal symmetric H and nondiagonal
symmetric Σ.
F. OU (asymmetric H, symmetric Σ): An OU with nondiagonal asymmetric H and nondiagonal
symmetric Σ.
The only difference between the two functions is that the model types returned by PCMDefaultModelTypes have a global parameter X0, while the model types returned by MGPMDefaultModelTypes have an omitted parameter X0.
References
[Mitov et al. 2019a] <NAME>., <NAME>., & <NAME>. (2019). Automatic generation of
evolutionary hypotheses using mixed Gaussian phylogenetic models. Proceedings of the National
Academy of Sciences of the United States of America, 35, 201813823. http://doi.org/10.1073/pnas.1813823116
See Also
Args_MixedGaussian_MGPMDefaultModelTypes
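For example, the two sets of class names can be inspected directly:
PCMDefaultModelTypes()   # named vector with elements A to F (global X0)
MGPMDefaultModelTypes()  # the same model types with X0 omitted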
PCMDefaultObject Generate a default object of a given PCM model type or parameter
type
Description
This is an S3 generic. See, e.g. ‘PCMDefaultObject.MatrixParameter‘.
Usage
PCMDefaultObject(spec, model, ...)
Arguments
spec any object having a class attribute. The value of this object is not used, but its
class is used for method-dispatch.
model a PCM object used to extract attributes needed for creating a default object of
class specified in class(spec), such as the number of traits (k) or the regimes
and the number of regimes;
... additional arguments that can be used by methods.
Value
a parameter or a PCM object.
PCMDescribe Human friendly description of a PCM
Description
Human friendly description of a PCM
Usage
PCMDescribe(model, ...)
Arguments
model a PCM model object
... additional arguments used by implementing methods.
Details
This S3 generic function is intended to be specified for user models
Value
a character string
PCMDescribeParameters Describe the parameters of a PCM
Description
This is an S3 generic.
Usage
PCMDescribeParameters(model, ...)
Arguments
model a PCM object.
... additional arguments that can be used by implementing methods.
Value
a named list with character elements corresponding to each parameter.
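A small sketch applying both describe generics to a default BM model (this assumes, as is the case for the model classes shipped with the package, that methods are defined for the BM class):
m <- PCM("BM", k = 2)
PCMDescribe(m)            # human-readable description of the model
PCMDescribeParameters(m)  # named list describing each parameter, e.g. X0, Sigma_x, Sigmae_x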
PCMExtractDimensions Given a PCM or a parameter object, extract an analogical object for
a subset of the dimensions (traits) in the original object.
Description
Given a PCM or a parameter object, extract an analogical object for a subset of the dimensions
(traits) in the original object.
Usage
PCMExtractDimensions(obj, dims = seq_len(PCMNumTraits(obj)), nRepBlocks = 1L)
Arguments
obj a PCM or a parameter object.
dims an integer vector; should be a subset or equal to seq_len(PCMNumTraits(obj))
(the default).
nRepBlocks a positive integer specifying if the specified dimensions should be replicated to
obtain a higher dimensional model, where the parameter matrices are block-
diagonal with blocks corresponding to dims. Default: 1L.
Details
This is an S3 generic
Value
an object of the same class as obj with a subset of obj’s dimensions multiplied nRepBlocks times.
PCMExtractRegimes Given a PCM or a parameter object, extract an analogical object for
a subset of the regimes in the original object.
Description
Given a PCM or a parameter object, extract an analogical object for a subset of the regimes in the
original object.
Usage
PCMExtractRegimes(obj, regimes = seq_len(PCMNumRegimes(obj)))
Arguments
obj a PCM or a parameter object.
regimes an integer vector; should be a subset or equal to seq_len(PCMNumRegimes(obj))
(the default).
Details
This is an S3 generic
Value
an object of the same class as obj with a subset of obj’s regimes
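For illustration, a hypothetical sketch extracting sub-models from a two-regime, three-trait BM model:
m <- PCM("BM", k = 3, regimes = c("a", "b"))
m12 <- PCMExtractDimensions(m, dims = c(1, 2))  # keep traits 1 and 2 only
PCMNumTraits(m12)                               # 2
ma <- PCMExtractRegimes(m, regimes = 1)         # keep the first regime only
PCMNumRegimes(ma)                               # 1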
PCMFindMethod Find the S3 method for a given PCM object or class-name and an S3
generic
Description
Find the S3 method for a given PCM object or class-name and an S3 generic
Usage
PCMFindMethod(x, method = "PCMCond")
Arguments
x a character string denoting a PCM S3 class name (e.g. "OU"), or a PCM object.
method a character string denoting the name of an S3 generic function. Default: "PCM-
Cond".
Value
a function object corresponding to the S3 method found or an error is raised if no such function is
found for the specified object and method.
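For example, the method implementing PCMCond for the OU model class can be retrieved as a function object:
condOU <- PCMFindMethod("OU", method = "PCMCond")
is.function(condOU)  # TRUE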
PCMFixParameter Fix a parameter in a PCM model
Description
Fix a parameter in a PCM model
Usage
PCMFixParameter(model, name)
Arguments
model a PCM object
name a character string
Value
a copy of the model with added class ’_Fixed’ to the class of the parameter name
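A minimal sketch; the choice of Sigmae_x below is only an example of a parameter name present in the default BM parameterization. Fixing a parameter excludes it from the free parameters counted by PCMParamCount:
m <- PCM("BM", k = 2)
m2 <- PCMFixParameter(m, "Sigmae_x")
PCMParamCount(m2) < PCMParamCount(m)  # TRUE: the fixed parameter is no longer free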
PCMGenerateModelTypes Generate default model types for given PCM base-classes
Description
This function calls ‘PCMListParameterizations‘ or ‘PCMListDefaultParameterizations‘ and gener-
ates the corresponding ‘PCMParentClasses‘ and ‘PCMSpecify‘ methods in the global environment.
Usage
PCMGenerateModelTypes(
baseTypes = list(BM = "default", OU = "default", White = "all"),
sourceFile = NULL
)
Arguments
baseTypes a named list with character string elements among c("default", "all") and
names specifying base S3-class names for which the parametrizations (sub-
classes) will be generated. Defaults to list(BM="default", OU = "default",
White = "all"). The element value specifies which one of ‘PCMListParame-
terizations‘ or ‘PCMListDefaultParameterizations‘ should be used:
• "all"for calling ‘PCMListParameterizations‘
• "default"for calling ‘PCMListDefaultParameterizations‘
sourceFile NULL or a character string indicating a .R filename, to which the automatically
generated code will be saved. If NULL (the default), the generated source code
is evaluated and the S3 methods are defined in the global environment. Default:
NULL.
Value
This function has side effects only and does not return a value.
See Also
PCMListDefaultParameterizations
PCMGenerateParameterizations
Generate possible parameterizations for a given type of model
Description
A parameterization of a PCM of given type, e.g. OU, is a PCM-class inheriting from this type, which
imposes some restrictions or transformations of the parameters in the base-type. This function
generates the S3 methods responsible for creating such parameterizations, in particular it generates
the definition of the methods for the two S3 generics ‘PCMParentClasses‘ and ‘PCMSpecify‘ for al
parameterizations specified in the ‘tableParameterizations‘ argument.
Usage
PCMGenerateParameterizations(
model,
listParameterizations = PCMListParameterizations(model),
tableParameterizations = PCMTableParameterizations(model, listParameterizations),
env = .GlobalEnv,
useModelClassNameForFirstRow = FALSE,
sourceFile = NULL
)
Arguments
model a PCM object.
listParameterizations
a list or a sublist returned by ‘PCMListParameterizations‘. Default: ‘PCMList-
Parameterizations(model)‘.
tableParameterizations
a data.table containing the parameterizations to generate. By default this is gen-
erated from ‘listParameterizations‘ using a call ‘PCMTableParameterizations(model,
listParameterizations)‘. If specified by the user, this parameter takes precedence
over ‘listParameterizations‘ and ‘listParameterizations‘ is not used.
env an environment where the method definitions will be stored. Default: ‘env =
.GlobalEnv‘.
useModelClassNameForFirstRow
A logical specifying if the S3 class name of ‘model‘ should be used as a S3
class for the model defined in the first row of ‘tableParameterizations‘. Default:
FALSE.
sourceFile NULL or a character string indicating a .R filename, to which the automatically
generated code will be saved. If NULL (the default), the generated source code
is evaluated and the S3 methods are defined in the global environment. Default:
NULL.
Value
This function does not return a value. It only has a side effect by defining S3 methods in ‘env‘.
PCMGetAttribute Value of an attribute of an object or values for an attribute found in its
members
Description
Value of an attribute of an object or values for an attribute found in its members
Usage
PCMGetAttribute(name, object, member = "", ...)
Arguments
name attribute name.
object a PCM model object or a PCMTree object.
member a member expression. Member expressions are character strings denoting named
elements in a list object (see examples). Default: "".
... additional arguments passed to MatchListMembers.
Value
if member is an empty string, attr(object, name). Otherwise, a named list containing the value
for the attribute for each member in object matched by member.
Examples
PCMGetAttribute("class", PCMBaseTestObjects$model_MixedGaussian_ab)
PCMGetAttribute(
"dim", PCMBaseTestObjects$model_MixedGaussian_ab,
member = "$Sigmae_x")
PCMGetVecParamsRegimesAndModels
Get a vector of all parameters (real and discrete) describing a model
on a tree including the numerical parameters of each model regime,
the integer ids of the splitting nodes defining the regimes on the tree
and the integer ids of the model types associated with each regime.
Description
Get a vector of all parameters (real and discrete) describing a model on a tree including the numer-
ical parameters of each model regime, the integer ids of the splitting nodes defining the regimes on
the tree and the integer ids of the model types associated with each regime.
Usage
PCMGetVecParamsRegimesAndModels(model, tree, ...)
Arguments
model a PCM model
tree a phylo object with an edge.part member.
... additional parameters passed to methods.
Details
This is an S3 generic. In the default implementation, the last entry in the returned vector is the
number of numerical parameters. This is used to identify the starting positions in the vector of the
first splitting node.
Value
a numeric vector concatenating the result
PCMInfo Meta-information about a tree and trait data associated with a PCM
Description
This function pre-processes the given tree and data in order to create meta-information used during
likelihood calculation.
Usage
PCMInfo(
X,
tree,
model,
SE = matrix(0, PCMNumTraits(model), PCMTreeNumTips(tree)),
verbose = FALSE,
preorder = NULL,
...
)
Arguments
X a k x N numerical matrix with possible NA and NaN entries. For i=1,..., N, the
column i of X contains the measured trait values for species i (the tip with inte-
ger identifier equal to i in tree). Missing values can be either not-available (NA)
or not existing (NaN). These two values are treated differently when calculating
likelihoods (see PCMPresentCoordinates).
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
SE a k x N matrix specifying the standard error for each measurement in X. Al-
ternatively, a k x k x N cube specifying an upper triangular k x k factor of the
variance covariance matrix for the measurement error for each tip i=1, ..., N.
When SE is a matrix, the k x k measurement error variance matrix for a tip i
is calculated as VE[, , i] <- diag(SE[, i] * SE[, i], nrow = k). When SE is
a cube, the way how the measurement variance matrix for a tip i is calculated
depends on the runtime option PCMBase.Transpose.Sigma_x as follows:
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == FALSE (default):
VE[, , i] <- SE[, , i] %*% t(SE[, , i])
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == TRUE: VE[, , i] <-
t(SE[, , i]) %*% SE[, , i]
Note that the above behavior is consistent with the treatment of the model pa-
rameters Sigma_x, Sigmae_x and Sigmaj_x, which are also specified as upper
triangular factors. Default: matrix(0.0, PCMNumTraits(model), PCMTreeNumTips(tree)).
verbose logical indicating if some debug-messages should printed.
preorder an integer vector of row-indices in tree$edge matrix as returned by PCMTreeP-
reorder. This can be given for performance speed-up when several operations
needing preorder are executed on the tree. Default : NULL.
... additional arguments used by implementing methods.
Value
a named list with the following elements:
X k x N matrix denoting the trait data;
VE k x k x N array denoting the measurement error variance covariance matrix for
each for each tip i = 1,...,N. See the parameter SE in PCMLik.
M total number of nodes in the tree;
N number of tips;
k number of traits;
RTree number of parts on the tree (distinct elements of tree$edge.part);
RModel number of regimes in the model (elements of attr(model, regimes));
p number of free parameters describing the model;
r an integer vector corresponding to tree$edge with the regime for each branch in
tree;
xi an integer vector of 0’s and 1’s corresponding to the rows in tree$edge indicating
the presence of a jump at the corresponding branch;
pc a logical matrix of dimension k x M denoting the present coordinates for each
node; in special cases this matrix can be edited by hand after calling PCMInfo
and before passing the returned list to PCMLik. Otherwise, this matrix can be
calculated in a custom way by specifying the option PCMBase.PCMPresentCoordinatesFun.
See also PCMPresentCoordinates and PCMOptions.
This list is passed to PCMLik.
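A short sketch showing how the meta-information can be computed once and reused across repeated likelihood evaluations (assuming ape is available):
model <- PCM("BM", k = 2)
tree <- ape::rtree(10)
X <- PCMSim(tree, model, X0 = model$X0)
metaI <- PCMInfo(X, tree, model)
c(metaI$N, metaI$M, metaI$k)           # number of tips, number of nodes, number of traits
PCMLik(X, tree, model, metaI = metaI)  # reuses the precomputed meta-information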
PCMLik Likelihood of a multivariate Gaussian phylogenetic comparative
model with non-interacting lineages
Description
The likelihood of a PCM represents the probability density function of observed trait values (data)
at the tips of a tree given the tree and the model parameters. Seen as a function of the model
parameters, the likelihood is used to fit the model to the observed trait data and the phylogenetic tree
(which is typically inferred from another sort of data, such as an alignment of genetic sequences for
the species at the tips of the tree). The PCMLik function provides a common interface for calculating
the (log-)likelihood of different PCMs. Below we denote by N the number of tips, by M the total
number of nodes in the tree including tips, internal and root node, and by k - the number of traits.
Usage
PCMLik(
X,
tree,
model,
SE = matrix(0, PCMNumTraits(model), PCMTreeNumTips(tree)),
metaI = PCMInfo(X = X, tree = tree, model = model, SE = SE, verbose = verbose),
log = TRUE,
verbose = FALSE
)
Arguments
X a k x N numerical matrix with possible NA and NaN entries. For i=1,..., N, the
column i of X contains the measured trait values for species i (the tip with inte-
ger identifier equal to i in tree). Missing values can be either not-available (NA)
or not existing (NaN). These two values are treated differently when calculating
likelihoods (see PCMPresentCoordinates).
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
SE a k x N matrix specifying the standard error for each measurement in X. Al-
ternatively, a k x k x N cube specifying an upper triangular k x k factor of the
variance covariance matrix for the measurement error for each tip i=1, ..., N.
When SE is a matrix, the k x k measurement error variance matrix for a tip i
is calculated as VE[, , i] <- diag(SE[, i] * SE[, i], nrow = k). When SE is
a cube, the way how the measurement variance matrix for a tip i is calculated
depends on the runtime option PCMBase.Transpose.Sigma_x as follows:
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == FALSE (default):
VE[, , i] <- SE[, , i] %*% t(SE[, , i])
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == TRUE: VE[, , i] <-
t(SE[, , i]) %*% SE[, , i]
Note that the above behavior is consistent with the treatment of the model pa-
rameters Sigma_x, Sigmae_x and Sigmaj_x, which are also specified as upper
triangular factors. Default: matrix(0.0, PCMNumTraits(model), PCMTreeNumTips(tree)).
metaI a list returned from a call to PCMInfo(X, tree, model, SE), containing meta-
data such as N, M and k. Alternatively, this can be a character string naming a
function or a function object that returns such a list, e.g. the function PCMInfo or
the function PCMInfoCpp from the PCMBaseCpp package.
log logical indicating whether a log-likelihood should be calculated. Default is
TRUE.
verbose logical indicating if some debug-messages should be printed.
Details
For efficiency, the argument metaI can be provided explicitly, because this is not supposed to
change during a model inference procedure such as likelihood maximization.
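A sketch of this usage pattern, reusing the simulation from the Examples section below (assuming the PCMBase package is attached):
library(PCMBase)
N <- 10
tr <- PCMTree(ape::rtree(N))
model <- PCMBaseTestObjects$model_MixedGaussian_ab
PCMTreeSetPartRegimes(tr, c(`11` = 'a'), setPartition = TRUE)
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
X <- PCMSim(tr, model, X0 = rep(0, 3))
# compute the meta-information once ...
metaI <- PCMInfo(X = X, tree = tr, model = model)
# ... and reuse it across repeated likelihood evaluations:
PCMLik(X, tr, model, metaI = metaI) # log-likelihood
PCMLik(X, tr, model, metaI = metaI, log = FALSE) # likelihood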
Value
a numerical value with named attributes as follows:
X0 A numerical vector of length k specifying the value at the root for which the likelihood value
was calculated. If the model contains a member called X0, this vector is used; otherwise
the value of X0 maximizing the likelihood for the given model parameters is calculated by
maximizing the quadratic polynomial ’X0 * L_root * X0 + m_root * X0 + r_root’;
error A character string with information if a numerical or other logical error occurred during
likelihood calculation.
If an error occurred during likelihood calculation, the default behavior is to return NA with a non-
NULL error attribute. This behavior can be changed using the following global options:
"PCMBase.Value.NA" Allows specifying a different NA value such as -Inf or -1e20, which can
be used in combination with log = TRUE when using optim to maximize the log-likelihood;
"PCMBase.Errors.As.Warnings" Setting this option to FALSE will cause any error to result in
calling the stop R-base function. If not caught in a tryCatch, this will cause the inference
procedure to abort at the occurrence of a numerical error. By default, this option is set to
TRUE, which means that getOption("PCMBase.Value.NA", as.double(NA)) is returned
with an error attribute and a warning is issued.
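For instance, the following settings (shown as a sketch) make PCMLik return a large negative number instead of NA, or turn numerical problems into R errors, respectively:
# return -1e20 instead of NA, which can be convenient when maximizing the
# log-likelihood with optim:
options(PCMBase.Value.NA = -1e20)
# raise an error (instead of a warning) when a numerical problem occurs:
options(PCMBase.Errors.As.Warnings = FALSE)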
See Also
PCMInfo PCMAbCdEf PCMLmr PCMSim PCMCond
Examples
N <- 10
tr <- PCMTree(ape::rtree(N))
model <- PCMBaseTestObjects$model_MixedGaussian_ab
PCMTreeSetPartRegimes(tr, c(`11` = 'a'), setPartition = TRUE)
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
X <- PCMSim(tr, model, X0 = rep(0, 3))
PCMLik(X, tr, model)
PCMLikDmvNorm Calculate the likelihood of a model using the standard formula for
multivariate pdf
Description
Calculate the likelihood of a model using the standard formula for multivariate pdf
Usage
PCMLikDmvNorm(
X,
tree,
model,
SE = matrix(0, PCMNumTraits(model), PCMTreeNumTips(tree)),
metaI = PCMInfo(X, tree, model, SE, verbose = verbose),
log = TRUE,
verbose = FALSE
)
Arguments
X a k x N numerical matrix with possible NA and NaN entries. For i=1,..., N, the
column i of X contains the measured trait values for species i (the tip with inte-
ger identifier equal to i in tree). Missing values can be either not-available (NA)
or not existing (NaN). These two values are treated differently when calculating
likelihoods (see PCMPresentCoordinates).
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
SE a k x N matrix specifying the standard error for each measurement in X. Al-
ternatively, a k x k x N cube specifying an upper triangular k x k factor of the
variance covariance matrix for the measurement error for each tip i=1, ..., N.
When SE is a matrix, the k x k measurement error variance matrix for a tip i
is calculated as VE[, , i] <- diag(SE[, i] * SE[, i], nrow = k). When SE is
a cube, the way how the measurement variance matrix for a tip i is calculated
depends on the runtime option PCMBase.Transpose.Sigma_x as follows:
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == FALSE (default):
VE[, , i] <- SE[, , i] %*% t(SE[, , i])
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == TRUE: VE[, , i] <-
t(SE[, , i]) %*% SE[, , i]
Note that the above behavior is consistent with the treatment of the model pa-
rameters Sigma_x, Sigmae_x and Sigmaj_x, which are also specified as upper
triangular factors. Default: matrix(0.0, PCMNumTraits(model), PCMTreeNumTips(tree)).
metaI a list returned from a call to PCMInfo(X, tree, model, SE), containing meta-
data such as N, M and k. Alternatively, this can be a character string naming a
function or a function object that returns such a list, e.g. the function PCMInfo or
the function PCMInfoCpp from the PCMBaseCpp package.
log logical indicating whether a log-likelihood should be calculated. Default is
TRUE.
verbose logical indicating if some debug-messages should be printed.
Value
a numerical value with named attributes (see the Value section of PCMLik).
PCMLikTrace Tracing the log-likelihood calculation of a model over each node of
the tree
Description
This is an S3 generic function providing tracing information for the likelihood calculation for a
given tree, data and model parameters. Useful for illustration or for debugging purpose.
Usage
PCMLikTrace(
X,
tree,
model,
SE = matrix(0, PCMNumTraits(model), PCMTreeNumTips(tree)),
metaI = PCMInfo(X = X, tree = tree, model = model, SE = SE, verbose = verbose),
log = TRUE,
verbose = FALSE
)
Arguments
X a k x N numerical matrix with possible NA and NaN entries. For i=1,..., N, the
column i of X contains the measured trait values for species i (the tip with inte-
ger identifier equal to i in tree). Missing values can be either not-available (NA)
or not existing (NaN). These two values are treated differently when calculating
likelihoods (see PCMPresentCoordinates).
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
SE a k x N matrix specifying the standard error for each measurement in X. Al-
ternatively, a k x k x N cube specifying an upper triangular k x k factor of the
variance covariance matrix for the measurement error for each tip i=1, ..., N.
When SE is a matrix, the k x k measurement error variance matrix for a tip i
is calculated as VE[, , i] <- diag(SE[, i] * SE[, i], nrow = k). When SE is
a cube, the way how the measurement variance matrix for a tip i is calculated
depends on the runtime option PCMBase.Transpose.Sigma_x as follows:
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == FALSE (default):
VE[, , i] <- SE[, , i] %*% t(SE[, , i])
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == TRUE: VE[, , i] <-
t(SE[, , i]) %*% SE[, , i]
Note that the above behavior is consistent with the treatment of the model pa-
rameters Sigma_x, Sigmae_x and Sigmaj_x, which are also specified as upper
triangular factors. Default: matrix(0.0, PCMNumTraits(model), PCMTreeNumTips(tree)).
metaI a list returned from a call to PCMInfo(X, tree, model, SE), containing meta-
data such as N, M and k. Alternatively, this can be a character string naming a
function or a function object that returns such a list, e.g. the function PCMInfo or
the function PCMInfoCpp from the PCMBaseCpp package.
log logical indicating whether a log-likelihood should be calculated. Default is
TRUE.
verbose logical indicating if some debug-messages should be printed.
Value
The returned object will, in general, depend on the type of model and the algorithm used for likeli-
hood calculation. For a G_LInv model and pruning-wise likelihood calculation, the returned object
will be a data.table with columns corresponding to the node-state variables, e.g. the quadratic poly-
nomial coefficients associated with each node in the tree.
See Also
PCMInfo PCMAbCdEf PCMLmr PCMSim PCMCond PCMParseErrorMessage
PCMListMembers A vector of access-code strings to all members of a named list
Description
A vector of access-code strings to all members of a named list
Usage
PCMListMembers(
l,
recursive = TRUE,
format = c("$", "$'", "$\"", "$`", "[['", "[[\"", "[[`")
)
Arguments
l a named list object.
recursive logical indicating if list members should be gone through recursively. TRUE by
default.
format a character string indicating the format for accessing a member. Acceptable val-
ues are c("$", "$'", '$"', '$`', "[['", '[["', '[[`') of which the first
one is taken as default.
Value
a vector of character strings denoting each named member of the list.
Examples
PCMListMembers(PCMBaseTestObjects$model_MixedGaussian_ab)
PCMListMembers(PCMBaseTestObjects$model_MixedGaussian_ab, format = '$`')
PCMListMembers(PCMBaseTestObjects$tree.ab, format = '$`')
PCMListParameterizations
Specify the parameterizations for each parameter of a model
Description
These are S3 generics. ‘PCMListParameterizations‘ should return all possible parametrizations for
the class of ‘model‘. ‘PCMListDefaultParameterizations‘ is a handy way to specify a subset of all
parametrizations. ‘PCMListDefaultParameterizations‘ should be used to avoid generating too many
model parametrizations which occupy space in the R-global environment while they are not used
(see PCMGenerateParameterizations). It is mandatory to implement a specification for ‘PCMList-
Parameterizations‘ for each newly defined class of models. ‘PCMListDefaultParameterizations‘ has
a default implementation that calls ‘PCMListParameterizations‘ and returns the first parametriza-
tion for each parameter. Hence, implementing a method for ‘PCMListDefaultParameterizations‘
for a newly defined model type is optional.
Usage
PCMListParameterizations(model, ...)
PCMListDefaultParameterizations(model, ...)
Arguments
model a PCM.
... additional arguments used by implementing methods.
Value
a named list with list elements corresponding to each parameter in model. Each list element is a
list of character vectors, specifying the possible S3 class attributes for the parameter in question.
For an example, type ‘PCMListParameterizations.BM‘ to see the possible parameterizations for the
BM model.
See Also
PCMGenerateParameterizations
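As a quick illustration (a sketch, assuming the PCMBase package is attached), the list of parameterizations for a BM model can be inspected as follows:
library(PCMBase)
modelBM <- PCM(model = "BM", k = 2)
# one list element per parameter; each element lists possible S3 class vectors
str(PCMListParameterizations(modelBM), max.level = 1)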
PCMLmr Quadratic polynomial parameters L, m, r
Description
Quadratic polynomial parameters L, m, r
Usage
PCMLmr(
X,
tree,
model,
SE = matrix(0, PCMNumTraits(model), PCMTreeNumTips(tree)),
metaI = PCMInfo(X = X, tree = tree, model = model, SE = SE, verbose = verbose),
root.only = TRUE,
verbose = FALSE
)
Arguments
X a k x N numerical matrix with possible NA and NaN entries. For i=1,..., N, the
column i of X contains the measured trait values for species i (the tip with inte-
ger identifier equal to i in tree). Missing values can be either not-available (NA)
or not existing (NaN). These two values are treated differently when calculating
likelihoods (see PCMPresentCoordinates).
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
SE a k x N matrix specifying the standard error for each measurement in X. Al-
ternatively, a k x k x N cube specifying an upper triangular k x k factor of the
variance covariance matrix for the measurement error for each tip i=1, ..., N.
When SE is a matrix, the k x k measurement error variance matrix for a tip i
is calculated as VE[, , i] <- diag(SE[, i] * SE[, i], nrow = k). When SE is
a cube, the way how the measurement variance matrix for a tip i is calculated
depends on the runtime option PCMBase.Transpose.Sigma_x as follows:
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == FALSE (default):
VE[, , i] <- SE[, , i] %*% t(SE[, , i])
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == TRUE: VE[, , i] <-
t(SE[, , i]) %*% SE[, , i]
Note that the above behavior is consistent with the treatment of the model pa-
rameters Sigma_x, Sigmae_x and Sigmaj_x, which are also specified as upper
triangular factors. Default: matrix(0.0, PCMNumTraits(model), PCMTreeNumTips(tree)).
metaI a list returned from a call to PCMInfo(X, tree, model, SE), containing meta-
data such as N, M and k. Alternatively, this can be a character string naming a
function or a function object that returns such a list, e.g. the function PCMInfo or
the function PCMInfoCpp from the PCMBaseCpp package.
root.only logical indicating whether to return the calculated values of L,m,r only for the
root or for all nodes in the tree.
verbose logical indicating if some debug-messages should be printed.
Value
A list with the members A,b,C,d,E,f,L,m,r for all nodes in the tree or only for the root if root.only=TRUE.
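A minimal sketch of calling PCMLmr on simulated data, following the pattern of the PCMMean Examples section (assuming the PCMBase package is attached):
library(PCMBase)
modelBM <- PCM(model = "BM", k = 2)
randomParams <- PCMParamRandomVecParams(
  modelBM, PCMNumTraits(modelBM), PCMNumRegimes(modelBM))
PCMParamLoadOrStore(
  modelBM, randomParams, 0,
  PCMNumTraits(modelBM), PCMNumRegimes(modelBM), TRUE)
tree <- ape::rtree(10)
X <- PCMSim(tree, modelBM, X0 = c(0, 0))
# quadratic polynomial coefficients at the root (root.only = TRUE by default):
PCMLmr(X, tree, modelBM)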
PCMMapModelTypesToRegimes
Integer vector giving the model type index for each regime
Description
Integer vector giving the model type index for each regime
Usage
PCMMapModelTypesToRegimes(model, tree, ...)
Arguments
model a PCM model
tree a phylo object with an edge.part member
... additional parameters passed to methods
Details
This is a generic S3 method. The default implementation for the basic class PCM returns a vector of
1’s, because it assumes that a single model type is associated with each regime. The implementation
for mixed Gaussian models returns the mapping attribute of the MixedGaussian object reordered to
correspond to PCMTreeGetPartNames(tree).
Value
an integer vector with elements corresponding to the elements in PCMTreeGetPartNames(tree)
PCMMean Expected mean vector at each tip conditioned on a trait-value vector
at the root
Description
Expected mean vector at each tip conditioned on a trait-value vector at the root
Usage
PCMMean(
tree,
model,
X0 = model$X0,
metaI = PCMInfo(NULL, tree, model, verbose = verbose),
internal = FALSE,
verbose = FALSE
)
Arguments
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
X0 a k-vector denoting the root trait
metaI a list returned from a call to PCMInfo(X, tree, model, SE), containing meta-
data such as N, M and k. Alternatively, this can be a character string naming a
function or a function object that returns such a list, e.g. the function PCMInfo or
the function PCMInfoCpp from the PCMBaseCpp package.
internal a logical indicating if the per-node mean vectors should be returned (see Value).
Default FALSE.
verbose logical indicating if some debug-messages should be printed.
Value
If internal is FALSE (default), then a k x N matrix Mu, such that Mu[, i] equals the expected mean
k-vector at tip i, conditioned on X0 and the tree. Otherwise, a k x M matrix Mu containing the mean
vector for each node.
Examples
# a Brownian motion model with one regime
modelBM <- PCM(model = "BM", k = 2)
# print the model
modelBM
# assign the model parameters at random: this will use uniform distribution
# with boundaries specified by PCMParamLowerLimit and PCMParamUpperLimit
# We do this in two steps:
# 1. First we generate a random vector. Note the length of the vector equals PCMParamCount(modelBM)
randomParams <- PCMParamRandomVecParams(modelBM, PCMNumTraits(modelBM), PCMNumRegimes(modelBM))
randomParams
# 2. Then we load this random vector into the model.
PCMParamLoadOrStore(modelBM, randomParams, 0, PCMNumTraits(modelBM), PCMNumRegimes(modelBM), TRUE)
# create a random tree of 10 tips
tree <- ape::rtree(10)
PCMMean(tree, modelBM)
PCMMeanAtTime Calculate the mean at time t, given X0, under a PCM model
Description
Calculate the mean at time t, given X0, under a PCM model
Usage
PCMMeanAtTime(
t,
model,
X0 = model$X0,
regime = PCMRegimes(model)[1L],
verbose = FALSE
)
Arguments
t positive numeric denoting time
model a PCM model object
X0 a numeric vector of length k, where k is the number of traits in the model (De-
faults to model$X0).
regime an integer or a character denoting the regime in model for which to do the cal-
culation; Defaults to PCMRegimes(model)[1L], meaning the first regime in the
model.
verbose a logical indicating if (debug) messages should be written on the console (De-
faults to FALSE).
Value
A numeric vector of length k
Examples
# a Brownian motion model with one regime
modelBM <- PCM(model = "BM", k = 2)
# print the model
modelBM
# assign the model parameters at random: this will use uniform distribution
# with boundaries specified by PCMParamLowerLimit and PCMParamUpperLimit
# We do this in two steps:
# 1. First we generate a random vector. Note the length of the vector equals PCMParamCount(modelBM)
randomParams <- PCMParamRandomVecParams(modelBM, PCMNumTraits(modelBM), PCMNumRegimes(modelBM))
randomParams
# 2. Then we load this random vector into the model.
PCMParamLoadOrStore(modelBM, randomParams, 0, PCMNumTraits(modelBM), PCMNumRegimes(modelBM), TRUE)
# PCMMeanAtTime(1, modelBM)
# note that the variance at time 0 is not the 0 matrix because the model has a non-zero
# environmental deviation
PCMMeanAtTime(0, modelBM)
PCMModels Get a list of PCM models currently implemented
Description
Get a list of PCM models currently implemented
Usage
PCMModels(pattern = NULL, parentClass = NULL, ...)
Arguments
pattern a character string specifying an optional pattern for the model-names to search for.
parentClass a character string specifying an optional parent class of the models to look for.
... additional arguments used by implementing methods.
Details
The function is using the S3 api function methods looking for all registered implementations of the
function PCMSpecify.
Value
a character vector of the model classes found.
Examples
PCMModels()
PCMModels("^OU")
PCMModelTypes Get the model type(s) of a model
Description
For a regular PCM object, the model type is its S3 class. For a MixedGaussian each regime is
mapped to one of several possible model types.
Usage
PCMModelTypes(obj)
Arguments
obj a PCM object
Value
a character vector
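For example (a sketch, assuming the PCMBase package is attached):
library(PCMBase)
# for a regular PCM object, the model type is its S3 class:
PCMModelTypes(PCM(model = "BM", k = 2))
# for a MixedGaussian, the model types used by the model:
PCMModelTypes(PCMBaseTestObjects$model_MixedGaussian_ab)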
PCMNumRegimes Number of regimes in a PCM object
Description
Number of regimes in a PCM object
Usage
PCMNumRegimes(obj)
Arguments
obj a PCM object
Value
an integer
PCMNumTraits Number of traits modeled by a PCM
Description
Number of traits modeled by a PCM
Usage
PCMNumTraits(model)
Arguments
model a PCM object or a parameter object (the name of this argument could be
misleading, because both model and parameter objects are supported).
Value
an integer
PCMOptions Global options for the PCMBase package
Description
Global options for the PCMBase package
Usage
PCMOptions()
Value
a named list with the currently set values of the following global options:
• PCMBase.Value.NA NA value for the likelihood; used in GaussianPCM to return this value in
case of an error occurring during likelihood calculation. By default, this is set to as.double(NA).
• PCMBase.Errors.As.Warnings a logical flag indicating if errors (occurring, e.g. during like-
lihood calculation) should be treated as warnings and attached as an "error" attribute to
the likelihood values. Default TRUE.
• PCMBase.Raise.Lik.Errors Should numerical and other sorts of errors occurring during likeli-
hood calculation be raised as errors or as warnings, depending on the option PCMBase.Errors.As.Warnings.
Default TRUE. This option can be useful if too frequent warnings get raised during a model
fit procedure.
• PCMBase.Threshold.Lambda_ij a 0-threshold for abs(Lambda_i + Lambda_j), where Lambda_i
and Lambda_j are eigenvalues of the parameter matrix H of an OU or other model. Default
1e-8. See PCMPExpxMeanExp.
• PCMBase.Threshold.EV A 0-threshold for the eigenvalues of the matrix V for a given branch.
The V matrix is considered singular if it has eigenvalues smaller than PCMBase.Threshold.EV
or when the ratio min(svdV)/max(svdV) is below PCMBase.Threshold.SV . Default is 1e-5.
Treatment of branches with singular V matrix is defined by the option PCMBase.Skip.Singular.
• PCMBase.Threshold.SV A 0-threshold for min(svdV)/max(svdV), where svdV is the vector
of singular values of the matrix V for a given branch. The V matrix is considered singular if it
has eigenvalues smaller than PCMBase.Threshold.EV or when the ratio min(svdV)/max(svdV)
is below PCMBase.Threshold.SV. Default is 1e-6. Treatment of branches with singular V ma-
trix is defined by the option PCMBase.Skip.Singular.
• PCMBase.Threshold.Skip.Singular A double giving a branch-length threshold: internal branches
with singular matrix V that are shorter than this value may be skipped during likelihood calculation. Setting this
option to a higher value, together with a TRUE value for the option PCMBase.Skip.Singular
will result in tolerating some parameter values resulting in singular variance covariance matrix
of the transition distribution. Default 1e-4.
• PCMBase.Skip.Singular A logical value indicating whether internal branches with singu-
lar matrix V and shorter than getOption("PCMBase.Threshold.Skip.Singular") should
be skipped during likelihood calculation, adding their children L,m,r values to their parent
node. Default TRUE. Note, that setting this option to FALSE may cause some models to stop
working, e.g. the White model. Setting this option to FALSE will also cause errors or NA
likelihood values in the case of trees with very short or 0-length branches.
• PCMBase.Tolerance.Symmetric A double specifying the tolerance in tests for symmetric
matrices. Default 1e-8; see also isSymmetric.
• PCMBase.Lmr.mode An integer code specifying the parallel likelihood calculation mode.
• PCMBase.ParamValue.LowerLimit Default lower limit value for parameters, default setting
is -10.0.
• PCMBase.ParamValue.LowerLimit.NonNegative Numeric (default: 0.0) indicating the lower
limit for parameters inheriting from the class '_NonNegative'.
• PCMBase.ParamValue.LowerLimit.NonNegativeDiagonal Default lower limit value for pa-
rameters corresponding to non-negative diagonal elements of matrices, default setting is 0.0.
• PCMBase.ParamValue.UpperLimit Default upper limit value for parameters, default setting
is 10.0.
• PCMBase.Transpose.Sigma_x Should upper diagonal factors for variance-covariance rate
matrices be transposed, e.g. should Sigma = t(Sigma_x) Sigma_x or, rather Sigma = Sigma_x
t(Sigma_x)? Note that the two variants are not equal. The default is FALSE, meaning Sigma
= Sigma_x t(Sigma_x). In this case, Sigma_x is not the actual upper Cholesky factor of
Sigma, i.e. chol(Sigma) != Sigma_x. See also chol and UpperTriFactor. This option ap-
plies to parameters Sigma_x, Sigmae_x, Sigmaj_x and the measurement errors SE[,,i] for
each measurement i when the argument SE is specified as a cube.
• PCMBase.MaxLengthListCladePartitions Maximum number of tree partitions returned by
PCMTreeListCladePartitions. This option has the goal to interrupt the recursive search for
new partitions in the case of calling PCMTreeListCladePartitions on a big tree with a small
value of the maxCladeSize argument. By default this is set to Inf.
• PCMBase.PCMPresentCoordinatesFun A function with the same synopsis as PCMPresentCoordinates
that can be specified in case of custom setting for the present coordinates for specific nodes of
the tree. See PCMPresentCoordinates, and PCMInfo.
• PCMBase.Use1DClasses Logical indicating if 1D arithmetic operations should be used in-
stead of multi-dimensional ones. This can speed-up computations in the case of a single trait.
Currently, this feature is implemented only in the PCMBaseCpp R-package and only for some
model types, such as OU and BM. Default: FALSE
• PCMBase.PrintSubscript_u Logical indicating if a subscript ’u’ should be printed instead
of a subscript ’x’. Used in PCMTable. Default: FALSE.
• PCMBase.MaxNForGuessSigma_x A real fraction number in the interval (0, 1) or an integer
bigger than 1 controlling the number of tips to use for analytical calculation of the evolutionary
rate matrix under a BM assumption. This option is used in the suggested PCMFit R-package.
Default: 0.25.
• PCMBase.UsePCMVarForVCV Logical (default: FALSE) indicating if the function PCMTreeVCV
should use PCMVar instead of ape’s function vcv to calculate the phylogenetic variance covari-
ance matrix under BM assumption. Note that setting this option to TRUE would slow down
the function PCMTreeVCV considerably but may be more stable, particularly in the case of
very big and deep trees, where previous ape’s versions of the vcv function have thrown stack-
overflow errors.
Examples
PCMOptions()
PCMPairSums Sums of pairs of elements in a vector
Description
Sums of pairs of elements in a vector
Usage
PCMPairSums(lambda)
Arguments
lambda a numeric vector
Value
a square symmetric matrix with elements elem_ij = lambda_i + lambda_j.
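For example, for a vector of three eigenvalues (a sketch):
# a 3 x 3 matrix with entries lambda_i + lambda_j, i.e.
# 2 3 4 / 3 4 5 / 4 5 6
PCMPairSums(c(1, 2, 3))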
PCMParam Module PCMParam
Description
Global and S3 generic functions for manipulating model parameters. The parameters in a PCM are
named objects with a class attribute specifying the main type and optional properties (tags).
S3 generic functions:
PCMParamCount() Counting the number of actual numeric parameters (used, e.g. for calculating
information scores, e.g. AIC);
PCMParamLoadOrStore() Storing/loading a parameter to/from a numerical vector;
PCMParamLowerLimit(), PCMParamUpperLimit() Specifying parameter lower and upper lim-
its;
PCMParamRandomVecParams() Generating a random parameter vector;
For all the above properties, check-functions are defined, e.g. ‘is.Local(o)‘, ‘is.Global(o)‘, ‘is.ScalarParameter(o)‘,
‘is.VectorParameter‘, etc.
PCMParamCount Count the number of free parameters associated with a PCM or a
PCM-parameter
Description
Count the number of free parameters associated with a PCM or a PCM-parameter
Usage
PCMParamCount(
o,
countRegimeChanges = FALSE,
countModelTypes = FALSE,
offset = 0L,
k = 1L,
R = 1L,
parentModel = NULL
)
Arguments
o a PCM model object or a parameter of a PCM object
countRegimeChanges
logical indicating if regime changes should be counted. If TRUE, the default
implementation would add PCMNumRegimes(model) - 1. Default FALSE.
countModelTypes
logical indicating whether the model type should be counted. If TRUE the de-
fault implementation will add +1 only if there are more than one modelTypes
(length(attr(model, "modelTypes", exact = TRUE)) > 1), assuming that all
regimes are regimes of the same model type (e.g. OU). The implementation for
MRG models will add +1 for every regime if there are more than one model-
Types. Default FALSE.
offset an integer denoting an offset count from which to start counting (internally
used). Default: 0.
k an integer denoting the number of modeled traits. Default: 1.
R an integer denoting the number of regimes in the model. Default: 1.
parentModel NULL or a PCM object. Default: NULL.
Value
an integer
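For example (a sketch, assuming the PCMBase package is attached):
library(PCMBase)
modelBM <- PCM(model = "BM", k = 2)
# the number of free numeric parameters in a 2-trait, single-regime BM model
PCMParamCount(modelBM)
# regime changes and model types can optionally be counted as well:
PCMParamCount(modelBM, countRegimeChanges = TRUE, countModelTypes = TRUE)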
PCMParamGetShortVector
Get a vector of the variable numeric parameters in a model
Description
The short vector of the model parameters does not include the nodes in the tree where a regime
change occurs, nor the model types associated with each regime.
Usage
PCMParamGetShortVector(o, k = 1L, R = 1L, ...)
Arguments
o a PCM model object or a parameter of a PCM object
k an integer denoting the number of modeled traits. Default: 1.
R an integer denoting the number of regimes in the model. Default: 1.
... other arguments that could be used by implementing methods.
Value
a numeric vector of length equal to ‘PCMParamCount(o, FALSE, FALSE, 0L, k, R)‘.
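For example (a sketch, assuming the PCMBase package is attached):
library(PCMBase)
modelBM <- PCM(model = "BM", k = 2)
# the variable numeric parameters as a single vector; its length equals
# PCMParamCount(modelBM)
PCMParamGetShortVector(modelBM, k = 2, R = 1)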
PCMParamLoadOrStore Load (or store) a PCM parameter from (or to) a vector of the variable
parameters in a model.
Description
Load (or store) a PCM parameter from (or to) a vector of the variable parameters in a model.
Usage
PCMParamLoadOrStore(o, vecParams, offset, k, R, load, parentModel = NULL)
Arguments
o a PCM model object or a parameter of a PCM object
vecParams a numeric vector.
offset an integer denoting an offset count from which to start counting (internally
used). Default: 0.
k an integer denoting the number of modeled traits. Default: 1.
R an integer denoting the number of regimes in the model. Default: 1.
load logical indicating if parameters should be loaded from vecParams into o (TRUE)
or stored to vecParams from o (FALSE).
parentModel NULL or a PCM object. Default: NULL.
Details
This S3 generic function has both, a returned value and side effects.
Value
an integer equaling the number of elements read from vecParams. In the case of type=="custom",
the number of indices bigger than offset returned by the function indices(offset, k).
PCMParamLocateInShortVector
Locate a named parameter in the short vector representation of a
model
Description
Locate a named parameter in the short vector representation of a model
Usage
PCMParamLocateInShortVector(o, accessExpr, enclos = "?")
Arguments
o a PCM model object.
accessExpr a character string used to access the parameter, e.g. "$Theta[,,1]" or "[['Theta']][,,1]".
enclos a character string containing the symbol ’?’, e.g. 'diag(?)'. The meaning
of this symbol is to be replaced by the matching accessExpr (see examples).
Default value : '?'.
Value
an integer vector of length PCMParamCount(o) with NAs everywhere except at the coordinates
corresponding to the parameter in question.
Examples
model <- PCM(PCMDefaultModelTypes()["D"], k = 3, regimes = c("a", "b"))
# The parameter H is a diagonal 3x3 matrix. If this matrix is considered as
# a vector the indices of its diagonal elements are 1, 5 and 9. These indices
# are indicated as the non-NA entries in the returned vector.
PCMParamLocateInShortVector(model, "$H[,,1]")
PCMParamLocateInShortVector(model, "$H[,,'a']")
PCMParamLocateInShortVector(model, "$H[,,'b']")
PCMParamLocateInShortVector(model, "$Sigma_x[,,'b']", enclos = 'diag(?)')
PCMParamLocateInShortVector(model, "$Sigma_x[,,'b']", enclos = '?[upper.tri(?)]')
PCMParamLowerLimit The lower limit for a given model or parameter type
Description
This is an S3 generic function.
Usage
PCMParamLowerLimit(o, k, R, ...)
Arguments
o an object such as a VectorParameter a MatrixParameter or a PCM.
k integer denoting the number of traits
R integer denoting the number of regimes in the model in which o belongs to.
... additional arguments (optional or future use).
Value
an object of the same S3 class as o representing a lower limit for the class.
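For example (a sketch, assuming the PCMBase package is attached):
library(PCMBase)
modelBM <- PCM(model = "BM", k = 2)
# the lower and upper limit objects have the same structure as the model itself:
PCMParamLowerLimit(modelBM, k = 2, R = 1)
PCMParamUpperLimit(modelBM, k = 2, R = 1)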
PCMParamRandomVecParams
Generate a random parameter vector for a model using uniform dis-
tribution between its lower and upper bounds.
Description
Generate a random parameter vector for a model using uniform distribution between its lower and
upper bounds.
Usage
PCMParamRandomVecParams(
o,
k,
R,
n = 1L,
argsPCMParamLowerLimit = NULL,
argsPCMParamUpperLimit = NULL
)
Arguments
o a PCM model object or a parameter
k integer denoting the number of traits.
R integer denoting the number of regimes.
n an integer specifying the number of random vectors to generate
argsPCMParamLowerLimit, argsPCMParamUpperLimit
named lists of arguments passed to PCMParamLowerLimit and PCMParamUpperLimit.
Value
a numeric matrix of dimension n x PCMParamCount(o).
See Also
PCMParamLimits PCMParamGetShortVector
PCMParamSetByName Set model parameters from a named list
Description
Set model parameters from a named list
Usage
PCMParamSetByName(
model,
params,
inplace = TRUE,
replaceWholeParameters = FALSE,
deepCopySubPCMs = FALSE,
failIfNamesInParamsDontExist = TRUE,
...
)
Arguments
model a PCM model object
params a named list with elements among the names found in model
inplace logical indicating if the parameters should be set "inplace" for the model object
in the calling environment or a new model object with the parameters set as
specified should be returned. Defaults to TRUE.
replaceWholeParameters
logical, by default set to FALSE. If TRUE, the parameters will be completely
replaced, meaning that their attributes (e.g. S3 class) will be replaced as well
(dangerous).
deepCopySubPCMs
a logical indicating whether nested PCMs should be ’deep’-copied, meaning
element by element, eventually preserving the attributes as in model. By default
this is set to FALSE, meaning that sub-PCMs found in params will completely
overwrite the sub-PCMs with the same name in model.
failIfNamesInParamsDontExist
logical indicating if an error should be raised if params contains elements not
existing in model. Default: TRUE.
... other arguments that can be used by implementing methods.
Value
If inplace is TRUE, the function only has a side effect of setting the parameters of the model object
in the calling environment; otherwise the function returns a modified copy of the model object.
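A minimal sketch (assuming the PCMBase package is attached; the root value X0 is a k-vector member of the model, as described for PCMLik, and the values below are arbitrary):
library(PCMBase)
modelBM <- PCM(model = "BM", k = 2)
# set the root value X0 by name; with inplace = TRUE (the default), modelBM is
# modified in the calling environment
PCMParamSetByName(modelBM, list(X0 = c(1.0, 2.0)))
modelBM$X0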
PCMParamType Parameter types
Description
The parameter types are divided in the following categories:
Main type These are the "ScalarParameter", "VectorParameter" and "MatrixParameter" classes.
Each model parameter must have a main type.
Scope/Omission These are the "_Global" and "_Omitted" classes. Every parameter can be global
for all regimes or local for a single regime. If not specified, local scope is assumed. In some
special cases a parameter (e.g. Sigmae) can be omitted from a model. This is done by adding
"_Omitted" to its class attribute.
Constancy (optional) These are the "_Fixed", "_Ones", "_Identity" and "_Zeros" classes.
Transformation (optional) These are the "_Transformable", "_CholeskyFactor" and "_Schur" classes.
Other properties (optional) These are the "_NonNegative", "_WithNonNegativeDiagonal",
"_LowerTriangular", "_AllEqual", "_ScalarDiagonal", "_Symmetric", "_UpperTriangular",
"_LowerTriangularWithDiagonal" and "_UpperTriangularWithDiagonal" classes.
Usage
is.Local(o)
is.Global(o)
is.ScalarParameter(o)
is.VectorParameter(o)
is.MatrixParameter(o)
is.WithCustomVecParams(o)
is.Fixed(o)
is.Zeros(o)
is.Ones(o)
is.Identity(o)
is.AllEqual(o)
is.NonNegative(o)
is.Diagonal(o)
is.ScalarDiagonal(o)
is.Symmetric(o)
is.UpperTriangular(o)
is.UpperTriangularWithDiagonal(o)
is.WithNonNegativeDiagonal(o)
is.LowerTriangular(o)
is.LowerTriangularWithDiagonal(o)
is.Omitted(o)
is.CholeskyFactor(o)
is.Schur(o)
is.Transformable(o)
is.Transformed(o)
is.SemiPositiveDefinite(o)
Arguments
o an object, i.e. a PCM or a parameter object.
Value
logical indicating if the object passed is of the type appearing in the function name.
Functions
• is.Local:
• is.Global:
• is.ScalarParameter:
• is.VectorParameter:
• is.MatrixParameter:
• is.WithCustomVecParams:
• is.Fixed:
• is.Zeros:
• is.Ones:
• is.Identity:
• is.AllEqual:
• is.NonNegative:
• is.Diagonal:
• is.ScalarDiagonal:
• is.Symmetric:
• is.UpperTriangular:
• is.UpperTriangularWithDiagonal:
• is.WithNonNegativeDiagonal:
• is.LowerTriangular:
• is.LowerTriangularWithDiagonal:
• is.Omitted:
• is.CholeskyFactor:
• is.Schur:
• is.Transformable:
• is.Transformed:
• is.SemiPositiveDefinite:
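For example (a sketch, assuming the PCMBase package is attached; the exact class attributes of each parameter depend on the model specification, see PCMSpecify):
library(PCMBase)
modelBM <- PCM(model = "BM", k = 2)
is.VectorParameter(modelBM$X0)      # X0 is a vector parameter
is.Global(modelBM$X0)               # X0 is typically global for all regimes
is.MatrixParameter(modelBM$Sigma_x) # Sigma_x is a matrix parameter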
PCMParamUpperLimit The upper limit for a given model or parameter type
Description
This is an S3 generic function.
Usage
PCMParamUpperLimit(o, k, R, ...)
Arguments
o an object such as a VectorParameter a MatrixParameter or a PCM.
k integer denoting the number of traits
R integer denoting the number of regimes in the model in which o belongs to.
... additional arguments (optional or future use).
Value
an object of the same S3 class as o representing an upper limit for the class.
PCMParentClasses Parent S3 classes for a model class
Description
Parent S3 classes for a model class
Usage
PCMParentClasses(model)
Arguments
model an S3 object.
Details
This S3 generic function is intended to be specified for user models. This function is called by the
‘PCM.character‘ method to determine the parent classes for a given model class.
Value
a vector of character strings denoting the names of the parent classes.
PCMParseErrorMessage Extract error information from a formatted error message.
Description
Extract error information from a formatted error message.
Usage
PCMParseErrorMessage(x)
Arguments
x character string representing the error message.
Value
Currently the function returns x unchanged.
PCMPExpxMeanExp Create a function of time that calculates (1-exp(-
lambda_ij*time))/lambda_ij for every element lambda_ij of the
input matrix Lambda_ij.
Description
Create a function of time that calculates (1-exp(-lambda_ij*time))/lambda_ij for every element
lambda_ij of the input matrix Lambda_ij.
Usage
PCMPExpxMeanExp(
Lambda_ij,
threshold.Lambda_ij = getOption("PCMBase.Threshold.Lambda_ij", 1e-08)
)
Arguments
Lambda_ij a square numerical matrix of dimension k x k
threshold.Lambda_ij
a 0-threshold for abs(Lambda_i + Lambda_j), where Lambda_i and Lambda_j
are eigenvalues of the parameter matrix H. This threshold-value is used as a con-
dition to take the limit time of the expression ‘(1-exp(-Lambda_ij*time))/Lambda_ij‘
as ‘(Lambda_i+Lambda_j) –> 0‘. You can control this value by the global op-
tion "PCMBase.Threshold.Lambda_ij". The default value (1e-8) is suitable for
branch lengths bigger than 1e-6. For smaller branch lengths, you may want to in-
crease the threshold value using, e.g. ‘options(PCMBase.Threshold.Lambda_ij=1e-
6)‘.
Details
the function (1-exp(-lambda_ij*time))/lambda_ij corresponds to the product of the CDF of an ex-
ponential distribution with rate Lambda_ij multiplied by its mean value (mean waiting time).
Value
a function of time returning a matrix with entries computed from the above expression, or equal
to its limit value, time, for entries with |Lambda_ij| <= threshold.Lambda_ij.
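For example, combining with PCMPairSums to form the matrix of eigenvalue sums (a sketch):
Lambda_ij <- PCMPairSums(c(0.1, 0.2)) # 2 x 2 matrix of eigenvalue sums
fLambdaTime <- PCMPExpxMeanExp(Lambda_ij)
# a 2 x 2 matrix with entries (1 - exp(-Lambda_ij * 1.0)) / Lambda_ij:
fLambdaTime(1.0)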
PCMPLambdaP_1 Eigen-decomposition of a matrix H
Description
Eigen-decomposition of a matrix H
Usage
PCMPLambdaP_1(H)
Arguments
H a numeric matrix
Details
The function fails with an error message if H is defective, that is, if its matrix of eigenvectors is
computationally singular. The test for singularity is based on the rcond function.
Value
a list with elements as follows:
lambda a vector of the eigenvalues of H
P a square matrix whose columns are the eigenvectors of H corresponding to
the eigenvalues in lambda;
P_1 the inverse matrix of P.
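For example (a sketch with an arbitrary non-defective matrix):
H <- matrix(c(2, 0, 1, 3), nrow = 2) # upper triangular, eigenvalues 2 and 3
dec <- PCMPLambdaP_1(H)
dec$lambda
# P diag(lambda) P_1 reconstructs H (up to numerical rounding):
dec$P %*% diag(dec$lambda) %*% dec$P_1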
PCMPlotGaussianDensityGrid2D
A 2D Gaussian distribution density grid in the form of a ggplot object
Description
A 2D Gaussian distribution density grid in the form of a ggplot object
Usage
PCMPlotGaussianDensityGrid2D(
mu,
Sigma,
xlim,
ylim,
xNumPoints = 100,
yNumPoints = 100,
...
)
Arguments
mu numerical mean vector of length 2
Sigma numerical 2 x 2 covariance matrix
xlim, ylim numerical vectors of length 2
xNumPoints, yNumPoints
integers denoting how many points the grid should contain along each axis.
... additional arguments passed to ggplot
Value
a ggplot object
PCMPlotGaussianSample2D
A 2D sample from Gaussian distribution
Description
A 2D sample from Gaussian distribution
Usage
PCMPlotGaussianSample2D(mu, Sigma, numPoints = 1000, ...)
Arguments
mu numerical mean vector of length 2
Sigma numerical 2 x 2 covariance matrix
numPoints an integer denoting how many points should be randomly sampled (see details).
... additional arguments passed to ggplot.
Details
This function generates a random sample of numPoints 2d points using the function rmvnorm from
the mvtnorm R-package. Then it produces a ggplot on the generated points.
Value
a ggplot object
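A minimal sketch for both plotting helpers (assuming the PCMBase package is attached and the suggested ggplot2 and mvtnorm packages are installed):
library(PCMBase)
mu <- c(0, 0)
Sigma <- matrix(c(1, 0.5, 0.5, 1), nrow = 2)
# density grid over a rectangular region:
PCMPlotGaussianDensityGrid2D(mu, Sigma, xlim = c(-3, 3), ylim = c(-3, 3))
# scatter plot of a random sample from the same distribution:
PCMPlotGaussianSample2D(mu, Sigma, numPoints = 500)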
PCMPlotMath Beautiful model description based on plotmath
Description
This is an S3 generic that produces a plotmath expression for its argument.
Usage
PCMPlotMath(o, roundDigits = 2, transformSigma_x = FALSE)
Arguments
o a PCM or a parameter object.
roundDigits an integer, default: 2.
transformSigma_x
a logical indicating if Cholesky transformation should be applied to Cholesky-
factor parameters prior to generating the plotmath expression.
Value
a character string.
PCMPlotTraitData2D Scatter plot of 2-dimensional data
Description
Scatter plot of 2-dimensional data
Usage
PCMPlotTraitData2D(
X,
tree,
sizePoints = 2,
alphaPoints = 1,
labeledTips = NULL,
sizeLabels = 8,
nudgeLabels = c(0, 0),
palette = PCMColorPalette(PCMNumRegimes(tree), PCMRegimes(tree)),
scaleSizeWithTime = !is.ultrametric(tree),
numTimeFacets = if (is.ultrametric(tree) || scaleSizeWithTime) 1L else 3L,
nrowTimeFacets = 1L,
ncolTimeFacets = numTimeFacets
)
Arguments
X a k x N matrix
tree a phylo object
sizePoints, alphaPoints
numeric parameters passed as arguments size and alpha to geom_point. Default:
sizePoints = 2, alphaPoints = 1.
labeledTips a vector of tip-numbers to label (NULL by default)
sizeLabels passed to geom_text to specify the size of tip-labels for the trait-points.
nudgeLabels a numeric vector of two elements (default: c(0,0)), passed as arguments nudge_x
and nudge_y of geom_text.
palette a named vector of colors
scaleSizeWithTime
logical indicating if the size and the transparency of the points should reflect the
distance from the present (points that are farther away in time with respect to
the present moment, i.e. closer to the root of the tree, are displayed smaller and
more transparent). By default this is set to !is.ultrametric(tree).
numTimeFacets a number or a numeric vector controlling the creation of different facets corre-
sponding to different time intervals when the tree is non-ultrametric. If a single
number, it will be interpreted as an integer specifying the number of facets, each
facets corresponding to an equal interval of time. If a numeric vector, it will be
used to specify the cut-points for each interval. Default: if(is.ultrametric(tree)
|| scaleSizeWithTime) 1L else 3.
nrowTimeFacets, ncolTimeFacets
integers specifying how the time facets should be laid out. Default: nrowTimeFacets
= 1L, ncolTimeFacets = numTimeFacets.
Value
a ggplot object
PCMPresentCoordinates Determine which traits are present (active) on each node of the tree
Description
For every node (root, internal or tip) in tree, build a logical vector of length k with TRUE values
for every present coordinate. Non-present coordinates arise from NA-values in the trait data. These
can occur in two cases:
Missing measurements for some traits at some tips: the present coordinates are FALSE for the
corresponding tip and trait, but are full for all traits at all internal and root nodes.
non-existent traits for some species: the FALSE present coordinates propagate towards the parent
nodes - an internal or root node will have a present coordinate set to FALSE for a given trait,
if all of its descendants have this coordinate set to FALSE.
These two cases have different effect on the likelihood calculation: missing measurements (NA)
are integrated out at the parent nodes; while non-existent traits (NaN) are treated as reduced dimen-
sionality of the vector at the parent node.
Usage
PCMPresentCoordinates(X, tree, metaI)
Arguments
X numeric k x N matrix of observed values, with possible NA entries. The columns
in X are in the order of tree$tip.label
tree a phylo object
metaI The result of calling PCMInfo.
Value
a k x M logical matrix. The function fails in case when all traits are NAs for some of the tips. In that
case an error message is issued "PCMPresentCoordinates:: Some tips have 0 present coordinates.
Consider removing these tips.".
See Also
PCMLik
PCMRegimes Get the regimes (aka colors) of a PCM or of a PCMTree object
Description
Get the regimes (aka colors) of a PCM or of a PCMTree object
Usage
PCMRegimes(obj)
Arguments
obj a PCM or a PCMTree object
Value
a character or an integer vector giving the regime names in the obj
PCMSetAttribute Set an attribute of a named member in a PCM or other named list
object
Description
Set an attribute of a named member in a PCM or other named list object
Usage
PCMSetAttribute(
name,
value,
object,
member = "",
spec = TRUE,
inplace = TRUE,
...
)
Arguments
name a character string denoting the attribute name.
value the value for the attribute.
object a PCM or a list object.
member a member expression. Member expressions are character strings denoting named
elements in a list object (see examples). Default: "".
spec a logical (TRUE by default) indicating if the attribute should also be set in the
corresponding member of the spec attribute (this is for PCM objects only).
inplace logical (TRUE by default) indicating if the attribute should be set to the object
in the current environment, or a modified object should be returned.
... additional arguments passed to MatchListMembers.
Details
Calling this function can affect the attributes of multiple members matched by the member argument.
Value
if inplace is TRUE (default) nothing is returned. Otherwise, a modified version of object is returned.
Examples
model <- PCMBaseTestObjects$model_MixedGaussian_ab
PCMSetAttribute("class", c("MatrixParameter", "_Fixed"), model, "H")
PCMSim Simulation of a phylogenetic comparative model on a tree
Description
Generate trait data on a tree according to a multivariate stochastic model with one or several regimes
Usage
PCMSim(
tree,
model,
X0,
SE = matrix(0, PCMNumTraits(model), PCMTreeNumTips(tree)),
metaI = PCMInfo(X = NULL, tree = tree, model = model, SE = SE, verbose = verbose),
verbose = FALSE
)
Arguments
tree a phylo object specifying a rooted tree.
model an S3 object specifying the model (see Details).
X0 a numeric vector of length k (the number of traits) specifying the trait values at
the root of the tree.
SE a k x N matrix specifying the standard error for each measurement in X. Al-
ternatively, a k x k x N cube specifying an upper triangular k x k factor of the
variance covariance matrix for the measurement error for each node i=1, ..., N.
Default: matrix(0.0, PCMNumTraits(model), PCMTreeNumTips(tree)).
metaI a named list containing meta-information about the data and the model.
verbose a logical indicating if informative messages should be written during execution.
Details
Internally, this function uses the PCMCond implementation for the given model class.
Value
numeric M x k matrix of values at all nodes of the tree, i.e. root, internal and tip, where M is
the number of nodes: M=dim(tree$edge)[1]+1, with indices from 1 to N=length(tree$tip.label)
corresponding to tips, N+1 corresponding to the root and bigger than N+1 corresponding to internal
nodes. The function will fail in case that the length of the argument vector X0 differs from the
number of traits specified in metaI$k. Error message: "PCMSim:: X0 must be of length ...".
See Also
PCMLik PCMInfo PCMCond
Examples
N <- 10
L <- 100.0
tr <- ape::stree(N)
tr$edge.length <- rep(L, N)
for(epoch in seq(1, L, by = 1.0)) {
tr <- PCMTreeInsertSingletonsAtEpoch(tr, epoch)
}
model <- PCMBaseTestObjects$model_MixedGaussian_ab
PCMTreeSetPartRegimes(tr, c(`11` = 'a'), setPartition = TRUE)
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
X <- PCMSim(tr, model, X0 = rep(0, 3))
PCMSpecify Parameter specification of PCM model
Description
The parameter specification of a PCM model represents a named list with an entry for each param-
eter of the model. Each entry in the list is a structure defining the S3 class of the parameter and its
verbal description. This is an S3 generic. See ‘PCMSpecify.OU‘ for an example method.
Usage
PCMSpecify(model, ...)
Arguments
model a PCM model object.
... additional arguments used by implementing methods.
Value
a list specifying the parameters of a PCM.
PCMTable A data.table representation of a PCM object
Description
A data.table representation of a PCM object
Usage
PCMTable(
model,
skipGlobalRegime = FALSE,
addTransformed = TRUE,
removeUntransformed = TRUE
)
Arguments
model a PCM object.
skipGlobalRegime
logical indicating whether a row in the returned table for the global-scope pa-
rameters should be omitted (this is mostly for internal use). Default: FALSE.
addTransformed logical. If TRUE (the default), columns for the transformed version of the trans-
formable parameters will be added.
removeUntransformed
logical If TRUE (default), columns for the untransformed version of the trans-
formable parameters will be omitted.
Details
This is an S3 generic.
Value
an object of S3 class PCMTable
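For example (a sketch, assuming the PCMBase package is attached); the returned table has, roughly, one row per regime (plus a row for global-scope parameters) and one column per parameter:
library(PCMBase)
PCMTable(PCMBaseTestObjects$model_MixedGaussian_ab)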
PCMTableParameterizations
Cartesian product of possible parameterizations for the different pa-
rameters of a model
Description
This function generates a data.table in which each column corresponds to one parameter of model
and each row corresponds to one combination of parameterizations for the model parameters, such
that the whole table corresponds to the Cartesian product of the lists found in ‘listParameterizations‘.
Usually, subsets of this table should be passed to ‘PCMGenerateParameterizations‘.
Usage
PCMTableParameterizations(
model,
listParameterizations = PCMListParameterizations(model, ...),
...
)
Arguments
model a PCM object.
listParameterizations
a list returned by a method for ‘PCMListParameterizations‘. Default: ‘PCM-
ListParameterizations(model, ...)‘.
... additional arguments passed to ‘PCMListParameterizations(model, ...)‘.
Value
a data.table object.
PCMTrajectory Generate a trajectory for the mean in one regime of a PCM
Description
Generate a trajectory for the mean in one regime of a PCM
Usage
PCMTrajectory(
model,
regime = PCMRegimes(model)[1],
X0 = rep(0, PCMNumTraits(model)),
W0 = matrix(0, nrow = PCMNumTraits(model), ncol = PCMNumTraits(model)),
tX = seq(0, 100, by = 1),
tVar = tX[seq(0, length(tX), length.out = 4)],
dims = seq_len(PCMNumTraits(model)),
sizeSamp = 100,
doPlot2D = FALSE,
plot = NULL
)
Arguments
model a PCM object.
regime a regime in ‘model‘. Default is PCMRegimes(model)[1].
X0 a numeric vector specifying an initial point in the trait space. Default is rep(0,
PCMNumTraits(model))
W0 a numeric k x k symmetric positive definite matrix or 0 matrix, specifying the
initial variance covariance matrix at t0. By default, this is a k x k 0 matrix.
tX, tVar numeric vectors of positive points in time sorted in increasing order. tX specifies
the points in time at which to calculate the mean (conditional on X0). tVar
specifies a subset of the points in tX at which to generate random samples from
the k-variate Gaussian distribution with mean equal to the mean value at the
corresponding time conditional on X0 and variance equal to the variance at this
time, conditional on W0. Default settings are ‘tX = seq(0, 100, by = 1)‘ and
‘tVar = tX[seq(0, length(tX), length.out = 4)]‘.
dims an integer vector specifying the traits for which samples at tVar should be gen-
erated (see tX,tVar above). Default: seq_len(PCMNumTraits(model)).
sizeSamp an integer specifying the number points in the random samples (see tX and tVar
above). Default 100.
doPlot2D Should a ggplot object be produced and returned. This is possible only for two
of the traits specified in dims. Default: FALSE.
plot a ggplot object. This can be specified when doPlot2D is TRUE and allows adding
the plot of this trajectory as a layer to an existing ggplot. Default: NULL.
Value
if doPlot2D is TRUE, returns a ggplot. Otherwise a named list of two elements:
• dt a data.table with columns ’regime’, ’t’, ’X’, ’V’ and ’samp’. For each row corresponding to
time in tVar, the column samp represents a list of sizeSamp k-vectors.
• dtPlot a data.table with the same data as in dt, but with the columns X and samp converted into
2 x k columns denoted xi, i = 1,...,k, and xsi, i = 1,...,k. This is suitable for plotting with ggplot.
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
# an OU-type model (the default model type 'F') with one regime
modelOU <- PCM(model = PCMDefaultModelTypes()['F'], k = 2)
# assign the model parameters at random: this will use uniform distribution
# with boundaries specified by PCMParamLowerLimit and PCMParamUpperLimit
# We do this in two steps:
# 1. First we generate a random vector. Note the length of the vector equals
# PCMParamCount(modelOU).
randomParams <- PCMParamRandomVecParams(
modelOU, PCMNumTraits(modelOU), PCMNumRegimes(modelOU))
# 2. Then we load this random vector into the model.
PCMParamLoadOrStore(
modelOU,
randomParams,
0, PCMNumTraits(modelOU), PCMNumRegimes(modelOU), load = TRUE)
# let's plot the trajectory of the model starting from X0 = c(0,0)
PCMTrajectory(
model = modelOU,
X0 = c(0, 0),
doPlot2D = TRUE)
# A faceted grid of plots for the two regimes in a mixed model:
pla <- PCMTrajectory(
model = PCMBaseTestObjects$model_MixedGaussian_ab, regime = "a",
X0 = c(0, 0, 0),
doPlot2D = TRUE) +
ggplot2::scale_y_continuous(limits = c(0, 10)) +
ggplot2::facet_grid(.~regime)
plb <- PCMTrajectory(
model = PCMBaseTestObjects$model_MixedGaussian_ab, regime = "b",
X0 = c(0, 0, 0),
doPlot2D = TRUE) +
ggplot2::scale_y_continuous(limits = c(0, 10)) +
ggplot2::facet_grid(.~regime) +
ggplot2::theme(
axis.title.y = ggplot2::element_blank(),
axis.text.y = ggplot2::element_blank(),
axis.ticks.y = ggplot2::element_blank())
cowplot::plot_grid(pla, plb)
PCMTree Create a PCMTree object from a phylo object
Description
PCMTree is a class that inherits from the class ’phylo’ in the R-package ’ape’. Thus, all the functions
working on a phylo object would work in the same way if they receive as argument an object of
class ’PCMTree’. A PCMTree object has the following members in addition to the regular members
(’tip.label’, ’node.label’, ’edge’, ’edge.length’) found in a regular phylo object:
edge.part a character vector having as many elements as there are branches in the tree (corre-
sponding to the rows in ‘tree$edge‘). Each element denotes the name of the part to which the
corresponding branch belongs. A part in the tree represents a connected subset of its nodes
and the branches leading to these nodes. A partition of the tree represents the splitting of the
tree into a number of parts. Visually, a partition can be represented as a coloring of the tree, in
which no color is assigned to more than one part. In other words, if two branches in the tree
are connected by the same color, they either share a node, or all the branches on the path in
the tree connecting these two branches have the same color. Formally, we define a partition of
the tree as any set of nodes in the tree that includes the root. Each node in this set defines a
part as the set of its descendant nodes that can be reached without traversing another partition
node. We name each part by the label of its most ancestral node, that is, the node in it, which
is closest to the root of the tree. The value of edge.part for an edge in the tree is the name of
the part that contains the node to which the edge is pointing.
part.regime a named vector of size the number of parts in the tree. The names correspond to
part-names whereas the values denote the ids or character names of regimes in a PCM object.
The constructor PCMTree() returns an object of class PCMTree.
Usage
PCMTree(tree)
Arguments
tree a phylo object. If this is already a PCMTree object, a copy of this object will be
returned.
Value
an object of class PCMTree. This is a copy of the passed phylo object which is guaranteed to have
node.label, edge.part and a part.regime entries set.
Examples
tree <- ape::rtree(8)
# the following four are NULLs
tree$node.label
tree$edge.part
tree$part.regime
tree$edge.regime
# In previous version regimes were assigned directly to the edges via
# tree$edge.regime. This is supported but not recommended anymore:
tree$edge.regime <- sample(
letters[1:3], size = PCMTreeNumNodes(tree) - 1, replace = TRUE)
tree.a <- PCMTree(tree)
PCMTreeGetLabels(tree.a)
tree.a$node.label
tree.a$edge.part
tree.a$part.regime
# this is set to NULL - starting from PCMBase 1.2.9 all of the information
# for the regimes is stored in tree$edge.part and tree$part.regime.
tree.a$edge.regime
PCMTreeGetPartition(tree.a)
PCMTreeGetPartNames(tree.a)
PCMTreeGetPartRegimes(tree.a)
# let's see how the tree looks like
PCMTreePlot(tree.a) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
# This is the recommended way to set a partition on the tree
PCMTreeSetPartition(tree.a, c(10, 12))
PCMTreeGetPartition(tree.a)
PCMTreeGetPartNames(tree.a)
PCMTreeGetPartRegimes(tree.a)
PCMTreePlot(tree.a) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
PCMTreeGetPartsForNodes(tree.a, c(11, 15, 12))
PCMTreeGetPartsForNodes(tree.a, c("11", "15", "12"))
PCMTreeSetPartRegimes(tree.a, c(`9` = 'a', `10` = 'b', `12` = 'c'))
PCMTreeGetPartition(tree.a)
PCMTreeGetPartNames(tree.a)
PCMTreeGetPartRegimes(tree.a)
PCMTreePlot(tree.a) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
PCMTreeBackbonePartition
Prune the tree leaving one tip for each or some of its parts
Description
Prune the tree leaving one tip for each or some of its parts
Usage
PCMTreeBackbonePartition(tree, partsToKeep = PCMTreeGetPartNames(tree))
Arguments
tree a PCMTree or a phylo object.
partsToKeep a character vector denoting part names in the tree to be kept. Defaults to
‘PCMTreeGetPartNames(tree)‘.
Value
a PCMTree object representing a pruned version of tree.
See Also
PCMTreeSetPartition
PCMTree
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(25))
PCMTreePlot(tree) + ggtree::geom_nodelab(angle = 45) +
ggtree::geom_tiplab(angle = 45)
backb <- PCMTreeBackbonePartition(tree)
PCMTreePlot(backb) + ggtree::geom_nodelab(angle = 45) +
ggtree::geom_tiplab(angle = 45)
tree2 <- PCMTreeSetPartRegimes(
tree, c(`26` = "a", `28` = "b"), setPartition = TRUE,
inplace = FALSE)
PCMTreePlot(tree2) + ggtree::geom_nodelab(angle = 45) +
ggtree::geom_tiplab(angle = 45)
backb <- PCMTreeBackbonePartition(tree2)
PCMTreePlot(backb) + ggtree::geom_nodelab(angle = 45) +
ggtree::geom_tiplab(angle = 45)
tree3 <- PCMTreeSetPartRegimes(
tree, c(`26` = "a", `28` = "b", `41` = "c"), setPartition = TRUE,
inplace = FALSE)
PCMTreePlot(tree3) + ggtree::geom_nodelab(angle = 45) +
ggtree::geom_tiplab(angle = 45)
backb <- PCMTreeBackbonePartition(tree3)
PCMTreePlot(backb) + ggtree::geom_nodelab(angle = 45) +
ggtree::geom_tiplab(angle = 45)
backb41 <- PCMTreeBackbonePartition(tree3, partsToKeep = "41")
PCMTreePlot(backb41) + ggtree::geom_nodelab(angle = 45) +
ggtree::geom_tiplab(angle = 45)
backbMoreNodes <- PCMTreeInsertSingletonsAtEpoch(
backb, epoch = 3.7, minLength = 0.001)
PCMTreeGetPartRegimes(backbMoreNodes)
PCMTreePlot(backbMoreNodes) + ggtree::geom_nodelab(angle=45) +
ggtree::geom_tiplab(angle=45)
backbMoreNodes <- PCMTreeInsertSingletonsAtEpoch(
backbMoreNodes, epoch = 0.2, minLength = 0.001)
PCMTreeGetPartRegimes(backbMoreNodes)
PCMTreePlot(backbMoreNodes) + ggtree::geom_nodelab(angle=45) +
ggtree::geom_tiplab(angle=45)
backbMoreNodes <- PCMTreeInsertSingletonsAtEpoch(
backbMoreNodes, epoch = 1.2, minLength = 0.001)
PCMTreeGetPartRegimes(backbMoreNodes)
PCMTreePlot(backbMoreNodes) + ggtree::geom_nodelab(angle=45) +
ggtree::geom_tiplab(angle=45)
PCMTreeDropClade Drop a clade from a phylogenetic tree
Description
Drop a clade from a phylogenetic tree
Usage
PCMTreeDropClade(
tree,
cladeRootNode,
tableAncestors = NULL,
X = NULL,
returnList = !is.null(X),
errorOnMissing = FALSE
)
Arguments
tree a phylo object
cladeRootNode a character string denoting the label or an integer denoting a node in the tree
tableAncestors an integer matrix returned by a previous call to PCMTreeTableAncestors(tree)
or NULL.
X an optional k x N matrix with trait value vectors for each tip in tree.
returnList logical indicating if a list of the phylo object associated with the tree after dropping
the clade and the corresponding entries in X should be returned. Defaults
to !is.null(X)
errorOnMissing logical indicating if an error should be raised if cladeRootNode is not among the
nodes in tree. Default FALSE, meaning that if cladeRootNode is not a node in
the tree, the tree (and X, if returnList is TRUE) is returned unchanged.
Value
If returnList is FALSE, a phylo object associated with the remaining tree after dropping the clade,
otherwise, a list with two named members:
• tree: the phylo object associated with the remaining tree after dropping the clade
• X: the submatrix of X with columns corresponding to the tips in the remaining tree
See Also
PCMTreeSplitAtNode PCMTreeExtractClade
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(25))
PCMTreeSetPartRegimes(
tree, c(`26`="a", `28`="b", `45`="c"), setPartition = TRUE)
PCMTreePlot(tree, palette=c(a = "red", b = "green", c = "blue")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
redGreenTree <- PCMTreeDropClade(tree, 45)
PCMTreeGetPartRegimes(redGreenTree)
PCMTreePlot(redGreenTree, palette=c(a = "red", b = "green", c = "blue")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
# we need to use the label here, because the node 29 in tree is not the same
# id in redGreenTree:
redGreenTree2 <- PCMTreeDropClade(redGreenTree, "29")
PCMTreePlot(redGreenTree2, palette=c(a = "red", b = "green", c = "blue")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
PCMTreeDtNodes A data.table with time, part and regime information for the nodes in a
tree
Description
A data.table with time, part and regime information for the nodes in a tree
Usage
PCMTreeDtNodes(tree)
Arguments
tree a phylo object with node-labels and parts
Value
a data.table with a row for each node in tree and columns as follows:
• startNode the starting node of each edge or NA_integer_ for the root
• endNode the end node of each edge or the root id for the root
• startNodeLab the character label for the startNode
• endNodeLab the character label for endNode
• startTime the time (distance from the root node) for the startNode or NA_real_ for the root
node
• endTime the time (distance from the root node) for the endNode or NA_real_ for the root node
• part the part to which the edge belongs, i.e. the part of the endNode
• regime the regime to which the edge belongs, i.e. the regime of the part of the endNode
Examples
PCMTreeDtNodes(PCMBaseTestObjects$tree.ab)
PCMTreeEdgeTimes A matrix with the begin and end time from the root for each edge in
tree
Description
A matrix with the begin and end time from the root for each edge in tree
Usage
PCMTreeEdgeTimes(tree)
Arguments
tree a phylo
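For instance, assuming a small random tree generated with ape::rtree, the begin and end time of every branch can be listed next to the corresponding row of the edge matrix (a minimal illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(5))
# two columns per edge: begin and end time (distance from the root) of the branch
cbind(tree$edge, PCMTreeEdgeTimes(tree))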
PCMTreeEvalNestedEDxOnTree
Perform nested extractions or drops of clades from a tree
Description
Perform nested extractions or drops of clades from a tree
Usage
PCMTreeEvalNestedEDxOnTree(expr, tree)
Arguments
expr a character string representing an R expression of nested calls of functions
E(x,node) denoting extracting the clade rooted at node from the tree x, or
D(x,node), denoting dropping the clade rooted at node from the tree x. These
calls can be nested, i.e. x can be either the symbol x (corresponding to the
original tree passed as an argument) or a nested call to E or D.
tree a phylo object with named tips and internal nodes
Value
the resulting phylo object from evaluating expr on tree.
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(25))
PCMTreeSetPartRegimes(
tree, c(`26`="a", `28`="b", `45`="c", `47`="d"), setPartition = TRUE)
PCMTreePlot(
tree, palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
bluePart <- PCMTreeEvalNestedEDxOnTree("D(E(tree,45),47)", tree)
PCMTreeGetPartRegimes(bluePart)
PCMTreePlot(
bluePart, palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
# Swapping the D and E calls has the same result:
bluePart2 <- PCMTreeEvalNestedEDxOnTree("E(D(tree,47),45)", tree)
PCMTreeGetPartRegimes(bluePart2)
PCMTreePlot(
bluePart2, palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
greenPart <- PCMTreeEvalNestedEDxOnTree("E(tree,28)", tree)
bgParts <- bluePart+greenPart
PCMTreePlot(
greenPart, palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
PCMTreePlot(
bluePart + greenPart, palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
PCMTreePlot(
greenPart + bluePart, palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
PCMTreePlot(
bgParts, palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
PCMTreeExtractClade Extract a clade from phylogenetic tree
Description
Extract a clade from phylogenetic tree
Usage
PCMTreeExtractClade(
tree,
cladeRootNode,
tableAncestors = NULL,
X = NULL,
returnList = !is.null(X)
)
Arguments
tree a PCMTree object.
cladeRootNode a character string denoting the label or an integer denoting a node in the tree.
tableAncestors an integer matrix returned by a previous call to PCMTreeTableAncestors(tree)
or NULL.
X an optional k x N matrix with trait value vectors for each tip in tree.
returnList logical indicating if only the phylo object associated with the clade should be
returned. Defaults to !is.null(X)
Value
If returnList is FALSE, a phylo object associated with the clade, otherwise, a list with two named
members:
• tree: the phylo object associated with the clade
• X: the submatrix of X with columns corresponding to the tips in the clade
See Also
PCMTreeSplitAtNode PCMTreeDropClade
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(25))
PCMTreeSetPartRegimes(
tree, c(`26`="a", `28`="b", `45`="c"), setPartition = TRUE)
PCMTreePlot(tree, palette=c(a = "red", b = "green", c = "blue")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
blueTree <- PCMTreeExtractClade(tree, 45)
PCMTreeGetPartRegimes(blueTree)
PCMTreePlot(blueTree, palette=c(a = "red", b = "green", c = "blue")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
# we need to use the label here, because the node 48 in tree does not have the
# same id in blueTree:
blueTree2 <- PCMTreeDropClade(blueTree, "48")
PCMTreePlot(blueTree2, palette=c(a = "red", b = "green", c = "blue")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
PCMTreeGetBranchLength
The length of the branch leading to a node
Description
The length of the branch leading to a node
Usage
PCMTreeGetBranchLength(tree, daughterId)
Arguments
tree a phylo object.
daughterId an integer denoting the id of a daughter node
Value
a double denoting the length of the branch leading to daughterId
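For example, assuming a small random tree, the length of the branch ending at tip 1 can be obtained as follows (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(5))
# length of the branch leading to the node with id 1 (the first tip)
PCMTreeGetBranchLength(tree, 1)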
PCMTreeGetDaughters A vector of the daughter nodes for a given parent node id in a tree
Description
A vector of the daughter nodes for a given parent node id in a tree
Usage
PCMTreeGetDaughters(tree, parentId)
Arguments
tree a phylo object.
parentId an integer denoting the id of the parent node
Value
an integer vector of the direct descendants of parentId
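For instance, assuming a small random tree, the direct descendants of the root node (which has id PCMTreeNumTips(tree) + 1) can be listed as follows (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(5))
# ids of the daughter nodes of the root
PCMTreeGetDaughters(tree, PCMTreeNumTips(tree) + 1L)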
PCMTreeGetLabels Node labels of a tree
Description
Get the character labels of the tips, root and internal nodes in the tree (see Functions below).
Usage
PCMTreeGetLabels(tree)
PCMTreeGetRootLabel(tree)
PCMTreeGetNodeLabels(tree)
PCMTreeGetTipLabels(tree)
Arguments
tree a phylo object
Value
a character vector
Functions
• PCMTreeGetLabels: Get all labels in the order (tips,root,internal).
• PCMTreeGetRootLabel: Get the root label
• PCMTreeGetNodeLabels: Get the internal node labels
• PCMTreeGetTipLabels: Get the tip labels
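A minimal sketch showing the four accessors on a small random tree with default labels:
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(5))
PCMTreeGetLabels(tree) # all labels in the order (tips, root, internal)
PCMTreeGetRootLabel(tree) # label of the root node
PCMTreeGetNodeLabels(tree) # labels of the internal nodes
PCMTreeGetTipLabels(tree) # labels of the tips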
PCMTreeGetParent The parent node id of a daughter node in a tree
Description
The parent node id of a daughter node in a tree
Usage
PCMTreeGetParent(tree, daughterId)
Arguments
tree a phylo object.
daughterId an integer denoting the id of the daughter node
Value
an integer denoting the parent of daughterId
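For example, assuming a small random tree, the parent of tip 1 is found as follows (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(5))
# id of the parent node of the node with id 1 (the first tip)
PCMTreeGetParent(tree, 1)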
PCMTreeGetPartition Get the starting branch nodes for each part on a tree
Description
Get the starting branch nodes for each part on a tree
Usage
PCMTreeGetPartition(tree)
Arguments
tree a phylo object with an edge.part member denoting parts. The function assumes
that each part covers a linked set of branches on the tree.
Details
We call a starting branch the first branch from the root to the tips of a given part. A starting node is
the node at which a starting branch ends.
Value
a named integer vector with elements equal to the starting nodes for each part in tree and names
equal to the labels of these nodes.
See Also
PCMTreeSetPartition
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
PCMTreeGetPartition(PCMTree(ape::rtree(20)))
PCMTreeGetPartNames Unique parts on a tree in the order of occurrence from the root to the
tips (preorder)
Description
Unique parts on a tree in the order of occurrence from the root to the tips (preorder)
Usage
PCMTreeGetPartNames(tree)
Arguments
tree a phylo object with an additional member edge.part which should be a character
or an integer vector of length equal to the number of branches.
Value
a character vector.
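For instance, a freshly constructed PCMTree has a single part named after its root node; setting a partition adds further part names (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(8))
PCMTreeGetPartNames(tree) # a single part named after the root node "9"
PCMTreeSetPartition(tree, c(10, 12))
PCMTreeGetPartNames(tree) # parts "9", "10" and "12", in preorder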
PCMTreeGetPartRegimes Regime mapping for the parts in a tree
Description
Regime mapping for the parts in a tree
Usage
PCMTreeGetPartRegimes(tree)
Arguments
tree a PCMTree or a phylo object.
Value
a named vector with names corresponding to the part names in tree and values corresponding to
regime names or ids.
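A short sketch: inspect the regime mapping of a freshly constructed tree, then explicitly assign a regime to its single root part (here the root node of the 8-tip tree has id 9):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(8))
PCMTreeGetPartRegimes(tree)
PCMTreeSetPartRegimes(tree, c(`9` = "a"))
PCMTreeGetPartRegimes(tree) # the part starting at the root is now mapped to regime "a"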
PCMTreeGetPartsForNodes
Get the parts of the branches leading to a set of nodes or tips
Description
Get the parts of the branches leading to a set of nodes or tips
Usage
PCMTreeGetPartsForNodes(tree, nodes = seq_len(PCMTreeNumNodes(tree)))
Arguments
tree a phylo object with an edge.part member denoting parts.
nodes an integer vector denoting the nodes. Default is seq_len(PCMTreeNumNodes(tree)).
Value
a character vector denoting the parts of the branches leading to the nodes, according to tree$edge.part.
PCMTreeGetRegimesForEdges
Model regimes (i.e. colors) associated with the branches in a tree
Description
Model regimes (i.e. colors) associated with the branches in a tree
Usage
PCMTreeGetRegimesForEdges(tree)
Arguments
tree a PCMTree or a phylo object.
Value
a vector with entries corresponding to the rows in tree$edge denoting the regime associated with
each branch in the tree. The type of the vector element is defined by the type of tree$part.regime.
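For instance, after assigning regimes to two parts, the per-edge regime vector aligned with the rows of tree$edge can be inspected as follows (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(8))
PCMTreeSetPartRegimes(tree, c(`9` = "a", `10` = "b"), setPartition = TRUE)
# one regime entry per row of tree$edge
cbind(tree$edge, PCMTreeGetRegimesForEdges(tree))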
PCMTreeGetRegimesForNodes
Get the regimes of the branches leading to a set of nodes or tips
Description
Get the regimes of the branches leading to a set of nodes or tips
Usage
PCMTreeGetRegimesForNodes(tree, nodes = seq_len(PCMTreeNumNodes(tree)))
Arguments
tree a phylo object with an edge.part member denoting parts.
nodes an integer vector denoting the nodes. Default is seq_len(PCMTreeNumNodes(tree)).
Value
a vector denoting the regimes of the branches leading to the nodes, according to tree$edge.part and tree$part.regime.
PCMTreeGetTipsInPart Get the tips belonging to a part in a tree
Description
Get the tips belonging to a part in a tree
Usage
PCMTreeGetTipsInPart(tree, part)
Arguments
tree a phylo object with an edge.regime member or a PCMTree object
part a character or integer denoting a part name in the tree.
Value
an integer vector with the ids of the tips belonging to part
See Also
PCMTreeGetTipsInRegime, PCMTreeGetPartNames, PCMRegimes, PCMTreeGetPartRegimes, PCMTreeSetPartRegimes
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- ape::rtree(10)
regimes <- sample(letters[1:3], nrow(tree$edge), replace = TRUE)
PCMTreeSetRegimesForEdges(tree, regimes)
PCMTreePlot(tree) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
part <- PCMTreeGetPartNames(tree)[1]
PCMTreeGetTipsInPart(tree, part)
print(part)
PCMTreeGetTipsInRegime
Get the tips belonging to a regime in a tree
Description
Get the tips belonging to a regime in a tree
Usage
PCMTreeGetTipsInRegime(tree, regime)
Arguments
tree a phylo object with an edge.regime member or a PCMTree object
regime a character or integer denoting a regime in the tree.
Value
an integer vector with the ids of the tips belonging to regime.
See Also
PCMTreeGetTipsInPart, PCMTreeGetPartNames, PCMRegimes, PCMTreeGetPartRegimes, PCMTreeSetPartRegimes, PCMTreeGetPartition
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- ape::rtree(10)
regimes <- sample(letters[1:3], nrow(tree$edge), replace = TRUE)
PCMTreeSetRegimesForEdges(tree, regimes)
PCMTreePlot(tree) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
regime <- PCMRegimes(tree)[1]
PCMTreeGetTipsInRegime(tree, regime)
print(regime)
PCMTreeInsertSingletons
Insert tips or singleton nodes on chosen edges
Description
Insert tips or singleton nodes on chosen edges
Usage
PCMTreeInsertSingletons(tree, nodes, positions)
PCMTreeInsertSingletonsAtEpoch(tree, epoch, minLength = 0.1)
PCMTreeInsertTipsOrSingletons(
tree,
nodes,
positions = rep(0, length(nodes)),
singleton = FALSE,
tipBranchLengths = 0.01,
nodeLabels = NULL,
tipLabels = NULL
)
Arguments
tree a phylo object
nodes an integer vector denoting the terminating nodes of the edges on which a singleton
node is to be inserted. This vector should not have duplicated nodes - if
there is a need to insert two or more singleton nodes at distinct positions of the
same edge, this should be done by calling the function several times with the
longest position first and so on.
positions a positive numeric vector of the same length as nodes denoting the root-ward
distances from nodes at which the singleton nodes should be inserted. For
PCMTreeInsertTipsOrSingletons this can contain 0’s and is set by default to
rep(0, length(nodes)).
epoch a numeric indicating a distance from the root at which a singleton node should
be inserted in all lineages that are alive at that time.
minLength a numeric indicating the minimum allowed branch-length after dividing a branch
by insertion of a singleton node. No singleton node is inserted if this would
result in a branch shorter than minLength. Note that this condition is checked
only in PCMTreeInsertSingletonsAtEpoch.
singleton (PCMTreeInsertTipsOrSingletons only) a logical indicating if a singleton node
should be inserted and no tip node should be inserted.
tipBranchLengths
(PCMTreeInsertTipsOrSingletons only) positive numeric vector of the same length as
nodes, denoting the lengths of the new edges leading to tips.
nodeLabels (PCMTreeInsertSingletons and PCMTreeInsertTipsOrSingletons) a character vector
of the same length as nodes, indicating the names of the newly inserted
nodes. These names are ignored where positions is 0. This argument is
optional and default node labels will be assigned if this is not specified or set to
NULL. If specified, then it should not contain node-labels already present in the
tree.
tipLabels (PCMTreeInsertTipsOrSingletons only) a character vector of the same length as
nodes of the new tip-labels. This argument is optional and default tip labels will
be assigned if this is not specified or set to NULL. If specified, then it should
not contain tip-labels already present in the tree.
Value
a modified copy of tree.
Functions
• PCMTreeInsertSingletonsAtEpoch:
• PCMTreeInsertTipsOrSingletons:
See Also
PCMTreeEdgeTimes PCMTreeLocateEpochOnBranches PCMTreeLocateMidpointsOnBranches
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(25))
PCMTreeSetPartRegimes(
tree, c(`26`="a", `28`="b", `45`="c", `47`="d"), setPartition = TRUE)
PCMTreePlot(
tree,
palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
cbind(tree$edge, PCMTreeEdgeTimes(tree))
id47 <- PCMTreeMatchLabels(tree, "47")
length47 <- PCMTreeGetBranchLength(tree, id47)
# insert a singleton at 0.55 (root-ward) from node 47
tree <- PCMTreeInsertSingletons(tree, nodes = "47", positions = (length47/2))
PCMTreePlot(
tree,
palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
# this fails, because the branch leading to node "47" is shorter now (0.55).
ggplot2::should_stop(
tree <- PCMTreeInsertSingletons(
tree, nodes = "47", positions= 2* length47 / 3))
# the tree is the same
PCMTreePlot(
tree, palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
# we can insert at a position within the edge:
tree <- PCMTreeInsertSingletons(tree, nodes = "47", positions = length47/3)
PCMTreePlot(
tree, palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
# Insert singletons at all branches crossing a given epoch. This will skip
# inserting singleton nodes where the resulting branches would be shorter
# than 0.1.
tree <- PCMTreeInsertSingletonsAtEpoch(tree, 2.3)
PCMTreePlot(
tree, palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
# Insert singletons at all branches crossing a given epoch
tree <- PCMTreeInsertSingletonsAtEpoch(tree, 2.3, minLength = 0.001)
PCMTreePlot(
tree,
palette=c(a = "red", b = "green", c = "blue", d = "magenta")) +
ggtree::geom_nodelab(angle = 45) + ggtree::geom_tiplab(angle = 45)
PCMTreeJumps Jumps in modeled traits associated with branches in a tree
Description
Jumps in modeled traits associated with branches in a tree
Usage
PCMTreeJumps(tree)
Arguments
tree a phylo object
Value
an integer vector of 0’s and 1’s with entries corresponding to the rows in tree$edge, denoting if a
jump took place at the beginning of the corresponding branch.
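For example, assuming a tree on which no jumps have been assigned, the returned vector is expected to contain only 0's (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(5))
PCMTreeJumps(tree) # one entry per branch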
PCMTreeListAllPartitions
A list of all possible (including recursive) partitions of a tree
Description
A list of all possible (including recursive) partitions of a tree
Usage
PCMTreeListAllPartitions(
tree,
minCladeSize,
skipNodes = character(),
tableAncestors = NULL,
verbose = FALSE
)
Arguments
tree a phylo object with set labels for the internal nodes
minCladeSize integer indicating the minimum number of tips allowed in one part.
skipNodes an integer or character vector indicating the ids or labels of nodes that should
not be used as partition nodes. By default, this is an empty character vector.
tableAncestors NULL (default) or an integer matrix returned by a previous call to PCMTreeTableAncestors(tree).
verbose a logical indicating if informative messages should be printed to the console.
Value
a list of integer vectors.
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(10))
PCMTreePlot(tree) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
# list of all partitions into parts of at least 4 tips
PCMTreeListAllPartitions(tree, 4)
# list of all partitions into parts of at least 3 tips
PCMTreeListAllPartitions(tree, 3)
# list all partitions into parts of at least 3 tips, excluding the partitions
# where node 16 is one of the partition nodes:
PCMTreeListAllPartitions(tree, minCladeSize = 3, skipNodes = "16")
PCMTreeListCladePartitions
A list of all possible clade partitions of a tree with a number of splitting
nodes
Description
Each subset of nNodes distinct internal or tip nodes defines a partition of the branches of the tree
into nNodes+1 blocks called parts. This function generates the partitions defined by nNodes
splitting nodes in which each part contains at least minCladeSize tips.
Usage
PCMTreeListCladePartitions(
tree,
nNodes,
minCladeSize = 0,
skipNodes = character(0),
tableAncestors = NULL,
countOnly = FALSE,
verbose = FALSE
)
Arguments
tree a phylo object
nNodes an integer giving the number of partitioning nodes. There would be nNodes+1
blocks in each partition (see details).
minCladeSize integer indicating the minimum number of tips allowed in a clade.
skipNodes an integer or character vector indicating the ids or labels of nodes that should
not be used as partition nodes. By default, this is an empty character vector.
tableAncestors NULL (default) or an integer matrix returned by a previous call to PCMTreeTableAncestors(tree).
countOnly logical indicating if only the number of partitions should be returned.
verbose a logical indicating if informative messages should be printed to the console.
Value
a list of integer nNodes-vectors. By default a full traversal of all partitions is done. It is possible to
truncate the search to a limited number of partitions by setting the option PCMBase.MaxLengthListCladePartitions
to a finite positive integer.
See Also
PCMOptions
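For instance, assuming a random 10-tip tree, the clade partitions obtained with a single splitting node such that each part contains at least 3 tips can be listed or merely counted (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(10))
# each list element is an integer vector with nNodes = 1 splitting node
PCMTreeListCladePartitions(tree, nNodes = 1, minCladeSize = 3)
# only count the partitions instead of listing them
PCMTreeListCladePartitions(tree, nNodes = 1, minCladeSize = 3, countOnly = TRUE)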
PCMTreeListDescendants
A list of the descendants for each node in a tree
Description
A list of the descendants for each node in a tree
Usage
PCMTreeListDescendants(tree, tableAncestors = PCMTreeTableAncestors(tree))
Arguments
tree a phylo object
tableAncestors an integer matrix resulting from a call to PCMTreeTableAncestors(tree).
Details
This function has time and memory complexity O(M^2), where M is the number of nodes in the
tree. It can take several minutes and gigabytes of memory on trees of more than 10000 tips.
Value
a list with unnamed elements in the order of nodes in the tree. Each element is an integer vector
containing the descendant nodes (in increasing order) of the node identified by its index-number in
the list.
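For instance, on a small tree the descendants of the root node can be inspected as follows (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(5))
listDesc <- PCMTreeListDescendants(tree)
# descendants of the root node (id = number of tips + 1)
listDesc[[PCMTreeNumTips(tree) + 1L]]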
PCMTreeListRootPaths A list of the path to the root from each node in a tree
Description
A list of the path to the root from each node in a tree
Usage
PCMTreeListRootPaths(tree, tableAncestors = PCMTreeTableAncestors(tree))
Arguments
tree a phylo object
tableAncestors an integer matrix resulting from a call to PCMTreeTableAncestors(tree).
Details
This function has time and memory complexity O(M^2), where M is the number of nodes in the
tree. It can take several minutes and gigabytes of memory on trees of more than 10000 tips.
Value
a list with unnamed elements in the order of nodes in the tree. Each element is an integer vector
containing the ancestor nodes on the path from the node (i) to the root of the tree, in that order (the
first element in the vector is the parent node of i and so on).
PCMTreeLocateEpochOnBranches
Find the crossing points of an epoch-time with each lineage of a tree
Description
Find the crossing points of an epoch-time with each lineage of a tree
Usage
PCMTreeLocateEpochOnBranches(tree, epoch)
Arguments
tree a phylo
epoch a positive numeric indicating tip-ward distance from the root
Value
a named list with an integer vector element "nodes" denoting the ending nodes for each branch
crossing epoch and numeric vector element "positions" denoting the root-ward offset from each
node in nodes.
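For instance, to find where the epoch at half the tree height crosses the branches of a random tree (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(10))
epoch <- max(PCMTreeNodeTimes(tree)) / 2
# $nodes: ending nodes of the crossed branches; $positions: root-ward offsets
PCMTreeLocateEpochOnBranches(tree, epoch)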
PCMTreeLocateMidpointsOnBranches
Find the middle point of each branch longer than a threshold
Description
Find the middle point of each branch longer than a threshold
Usage
PCMTreeLocateMidpointsOnBranches(tree, threshold = 0)
Arguments
tree a phylo
threshold a positive numeric; only branches longer than threshold will be returned; Default
0.
Value
a named list with an integer vector element "nodes" denoting the ending nodes of the branches
longer than threshold and a numeric vector element "positions" denoting the root-ward offset of the
midpoint from each node in nodes.
PCMTreeMatchLabels Get the node numbers associated with tip- or node-labels in a tree
Description
Get the node numbers associated with tip- or node-labels in a tree
Usage
PCMTreeMatchLabels(tree, labels, stopIfNotFound = TRUE)
Arguments
tree a phylo object
labels a character vector with valid tip or node labels from tree
stopIfNotFound logical indicating if an error should be raised in case a label has not been found
in the tree labels. Default: TRUE
Value
an integer vector giving the tip- or node- integer indices corresponding to labels. If stopIfNotFound
is set to FALSE, this vector may contain NAs for the labels that were not found.
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
PCMTreeMatchLabels(PCMTree(ape::rtree(20)), c("t1", "t15", "21", "39"))
PCMTreeMatchLabels(PCMTree(ape::rtree(20)), c("t1", "45"), stopIfNotFound = FALSE)
PCMTreeMatrixNodesInSamePart
Which couples from a given set of nodes in a tree belong to the same
part?
Description
Which couples from a given set of nodes in a tree belong to the same part?
Which couples from a given set of nodes in a tree belong to the same regime?
Usage
PCMTreeMatrixNodesInSamePart(
tree,
nodes = seq_len(PCMTreeNumNodes(tree)),
upperTriangle = TRUE,
returnVector = TRUE
)
PCMTreeMatrixNodesInSameRegime(
tree,
nodes = seq_len(PCMTreeNumNodes(tree)),
upperTriangle = TRUE,
returnVector = TRUE
)
Arguments
tree a PCMTree object or a phylo object.
nodes an integer vector of length L >= 2 denoting a set of nodes in the tree.
upperTriangle logical indicating if all duplicated entries and diagonal entries should be set to
NA (by default TRUE).
returnVector logical indicating if a vector instead of a matrix should be returned (corresponding
to calling as.vector on the resulting matrix and removing NAs). Default:
TRUE
Value
a L x L logical matrix with TRUE on the diagonal and for each couple of tips that belong to the
same part or regime. If returnVector is TRUE (default) only a vector of the non-NA entries will be
returned.
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(8))
PCMTreeMatrixNodesInSamePart(tree, returnVector = FALSE)
PCMTreeSetPartition(tree, c(10, 12))
PCMTreeMatrixNodesInSamePart(tree, returnVector = FALSE)
PCMTreeMatrixNodesInSamePart(tree)
PCMTreeMatrixNodesInSamePart(tree, seq_len(PCMTreeNumTips(tree)))
PCMTreeMatrixNodesInSamePart(
tree, seq_len(PCMTreeNumTips(tree)), returnVector = FALSE)
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(8))
PCMTreeMatrixNodesInSamePart(tree, returnVector = FALSE)
PCMTreeSetPartition(tree, c(10, 12))
PCMTreeMatrixNodesInSamePart(tree, returnVector = FALSE)
PCMTreeMatrixNodesInSamePart(tree)
PCMTreeMatrixNodesInSamePart(tree, seq_len(PCMTreeNumTips(tree)))
PCMTreeMatrixNodesInSamePart(
tree, seq_len(PCMTreeNumTips(tree)), returnVector = FALSE)
PCMTreeNearestNodesToEpoch
Find the nearest node to a given time from the root (epoch) on each
lineage crossing this epoch
Description
Find the nearest node to a given time from the root (epoch) on each lineage crossing this epoch
Usage
PCMTreeNearestNodesToEpoch(tree, epoch)
Arguments
tree a phylo
epoch a non-negative numeric
Value
an integer vector
PCMTreeNodeTimes Calculate the time from the root to each node of the tree
Description
Calculate the time from the root to each node of the tree
Usage
PCMTreeNodeTimes(tree, tipsOnly = FALSE)
Arguments
tree an object of class phylo
tipsOnly Logical indicating whether the returned results should be truncated only to the
tips of the tree.
Value
A vector of size the number of nodes in the tree (tips, root, internal) containing the time from the
root to the corresponding node in the tree.
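For example, the root-to-node distances of a small random tree, first for all nodes and then for the tips only (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- ape::rtree(5)
PCMTreeNodeTimes(tree) # times for tips, root and internal nodes
PCMTreeNodeTimes(tree, tipsOnly = TRUE) # times for the tips only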
PCMTreeNumNodes Number of all nodes in a tree
Description
Number of all nodes in a tree
Usage
PCMTreeNumNodes(tree)
Arguments
tree a phylo object
Details
Wrapper for nrow(tree$edge) + 1
Value
the number of nodes in tree including root, internal and tips.
PCMTreeNumParts Number of unique parts on a tree
Description
Number of unique parts on a tree
Usage
PCMTreeNumParts(tree)
Arguments
tree a phylo object
Value
the number of different parts encountered on the tree branches
PCMTreeNumTips Wrapper for length(tree$tip.label)
Description
Wrapper for length(tree$tip.label)
Usage
PCMTreeNumTips(tree)
Arguments
tree a phylo object
Value
the number of tips in tree
PCMTreePlot Plot a tree with parts and regimes assigned to these parts
Description
Plot a tree with parts and regimes assigned to these parts
Usage
PCMTreePlot(
tree,
palette = PCMColorPalette(PCMNumRegimes(tree), PCMRegimes(tree)),
...
)
Arguments
tree a PCMTree or a phylo object.
palette a named vector of colors corresponding to the regimes in tree
... Arguments passed to ggtree, e.g. layout = ’fan’, open.angle = 8, size=.25.
Note
This function requires that the ggtree package is installed. At the time of releasing this version the
ggtree package is not available on CRAN. Check the ggtree homepage for instructions on how to
install this package.
PCMTreePostorder Post-order tree traversal
Description
Post-order tree traversal
Usage
PCMTreePostorder(tree)
Arguments
tree a phylo object with possible singleton nodes (i.e. internal nodes with one daughter
node)
Value
a vector of indices of edges in tree$edge in post-order.
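For instance, the edges of a small tree can be visited children-before-parents by indexing tree$edge with the returned order (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- ape::rtree(5)
po <- PCMTreePostorder(tree)
tree$edge[po, ] # edges listed in post-order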
PCMTreePreorder Pre-order tree traversal
Description
Pre-order tree traversal
Usage
PCMTreePreorder(tree)
Arguments
tree a phylo object with possible singleton nodes (i.e. internal nodes with one daughter
node)
Value
a vector of indices of edges in tree$edge in pre-order.
PCMTreeSetLabels Set tip and internal node labels in a tree
Description
Set tip and internal node labels in a tree
Usage
PCMTreeSetLabels(
tree,
labels = as.character(1:PCMTreeNumNodes(tree)),
inplace = TRUE
)
Arguments
tree a phylo object or a PCMTree object. If this is a PCMTree object, the internal
edge.part and part.regime members will be updated accordingly.
labels a character vector in the order 1:PCMTreeNumNodes(tree) as denoted in the
tree$edge matrix.
inplace a logical indicating if the change should be done in place on the object in the
calling environment (in this case tree must not be a temporary object, e.g. returned
by another function call). Default is TRUE.
Value
if inplace is FALSE, a copy of tree with set or modified tree$tip.label and tree$node.label. If the
original tree has a member edge.part, the returned tree has tree$edge.part and tree$part.regime
updated. If inplace is TRUE (the default), nothing is returned and the above changes are made
directly on the input tree.
See Also
PCMTree
Examples
tree <- ape::rtree(5)
tree$tip.label
# the following three are NULLs
tree$node.label
tree$edge.part
tree$part.regime
tree <- PCMTree(tree)
PCMTreeSetPartition(tree, c(6, 8))
tree$tip.label
tree$node.label
tree$edge.part
tree$part.regime
PCMTreeSetLabels(
tree, labels = paste0(c(rep("t", 5), rep("n", 4)), PCMTreeGetLabels(tree)))
PCMTreeGetLabels(tree)
tree$tip.label
tree$node.label
tree$edge.part
tree$part.regime
PCMTreeSetPartition Set a partition of a tree by specifying the partition nodes
Description
Set a partition of a tree by specifying the partition nodes
Usage
PCMTreeSetPartition(tree, nodes = c(PCMTreeNumTips(tree) + 1L), inplace = TRUE)
Arguments
tree a PCMTree object.
nodes a character vector containing tip or node labels or an integer vector denoting tip
or internal nodes in tree - the parts change at the start of the branches leading to
these nodes. Default: c(PCMTreeNumTips(tree) + 1L).
inplace a logical indicating if the change should be done to the tree in the calling environment
(TRUE) or a copy of the tree with set edge.part member should be
returned (FALSE). Default is TRUE.
Value
If inplace is TRUE nothing, otherwise a copy of the tree with set edge.part member.
See Also
PCMTreeGetPartition
PCMTree
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(8))
PCMTreeSetLabels(tree, paste0("x", PCMTreeGetLabels(tree)))
PCMTreeGetPartition(tree)
PCMTreeGetPartNames(tree)
PCMTreeGetPartRegimes(tree)
PCMTreePlot(tree) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
tree <- PCMTreeSetPartition(tree, c(12, 14), inplace = FALSE)
PCMTreeGetPartition(tree)
PCMTreeGetPartNames(tree)
PCMTreeGetPartRegimes(tree)
PCMTreePlot(tree) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
# reset the partition to a default one, where there is only one part:
PCMTreeSetPartition(tree)
PCMTreeGetPartition(tree)
PCMTreeGetPartNames(tree)
PCMTreeGetPartRegimes(tree)
PCMTreePlot(tree) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
# reset the labels to the default labels which are character representations
# of the node ids
PCMTreeSetLabels(tree)
PCMTreeGetPartition(tree)
PCMTreeGetPartNames(tree)
PCMTreeGetPartRegimes(tree)
PCMTreeSetPartRegimes Set regimes for the parts in a tree
Description
Set regimes for the parts in a tree
Usage
PCMTreeSetPartRegimes(tree, part.regime, setPartition = FALSE, inplace = TRUE)
Arguments
tree a PCMTree object.
part.regime a named vector containing regimes to be assigned to some of or to each of the
parts in the tree.
setPartition a logical indicating if the partition of the tree should be set as well. If this argument
is set to TRUE, the names of part.regime are passed as the nodes argument
in a call to PCMTreeSetPartition. Default: FALSE.
inplace a logical indicating if the change should be done to the tree in the calling environment
(TRUE) or a copy of the tree with set edge.part member should be
returned (FALSE). Default is TRUE.
Value
If inplace is TRUE nothing, otherwise a copy of the tree with set edge.part and part.regime members.
See Also
PCMTree
Examples
tree <- PCMTree(ape::rtree(25))
PCMTreeGetPartition(tree)
PCMTreeGetPartRegimes(tree)
PCMTreeGetPartNames(tree)
PCMTreeSetPartRegimes(tree, c(`26` = 2))
PCMTreeGetPartition(tree)
PCMTreeGetPartRegimes(tree)
PCMTreeGetPartNames(tree)
PCMTreeSetPartRegimes(tree, c(`26` = "global-regime"))
PCMTreeGetPartition(tree)
PCMTreeGetPartRegimes(tree)
PCMTreeGetPartNames(tree)
# This should fail because no partition with nodes 26, 28 and 41 has been
# done.
ggplot2::should_stop(
PCMTreeSetPartRegimes(tree, c(`26` = "a", `28` = "b", `41` = "c")))
# This should succeed and change the partition as well as regime assignment
PCMTreeSetPartRegimes(
tree, c(`26` = "a", `28` = "b", `41` = "c"), setPartition = TRUE)
PCMTreeGetPartition(tree)
PCMTreeGetPartRegimes(tree)
PCMTreeGetPartNames(tree)
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
# number of tips
N <- 40
# tree with one regime
tree.a <- ape::rtree(N)
tree.a <- PCMTree(tree.a)
PCMTreeSetPartRegimes(
tree.a,
part.regime = structure("a", names = as.character(N+1L)),
setPartition = TRUE,
inplace = TRUE)
PCMTreePlot(tree.a) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
tree.ab <- tree.a
PCMTreeSetPartRegimes(
tree.ab,
part.regime = structure(c("a", "b"), names = as.character(c(N+1L, N+31L))),
setPartition = TRUE,
inplace = TRUE)
PCMTreePlot(tree.ab) + ggtree::geom_nodelab() + ggtree::geom_tiplab()
PCMTreeSetRegimesForEdges
Set the regime for each individual edge in a tree explicitly
Description
Set the regime for each individual edge in a tree explicitly
Usage
PCMTreeSetRegimesForEdges(tree, regimes, inplace = TRUE)
Arguments
tree a PCMTree or a phylo object.
regimes a vector of length equal to nrow(tree$edge).
inplace a logical indicating if the change should be done within the tree in the calling
environment or a copy of the tree with modified regime assignment should be
returned.
Value
if inplace is TRUE, nothing, otherwise a modified copy of tree.
Note
Calling this function overwrites the current partitioning of the tree.
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- ape::rtree(10)
regimes <- sample(letters[1:3], nrow(tree$edge), replace = TRUE)
PCMTreeSetRegimesForEdges(tree, regimes)
PCMTreePlot(tree)
PCMTreeSplitAtNode Split a tree at a given internal node into a clade rooted at this node and
the remaining tree after dropping this clade
Description
Split a tree at a given internal node into a clade rooted at this node and the remaining tree after
dropping this clade
Usage
PCMTreeSplitAtNode(
tree,
node,
tableAncestors = PCMTreeTableAncestors(tree),
X = NULL
)
Arguments
tree a PCMTree object.
node an integer or character indicating a root, internal or tip node
tableAncestors an integer matrix returned by a previous call to PCMTreeTableAncestors(tree)
or NULL.
X an optional k x N matrix with trait value vectors for each tip in tree.
Details
In the current implementation, the edge.jump and edge.part members of the tree will be discarded
and not present in the clade.
Value
A list containing the following named elements:
• clade The subtree (clade) starting at node.
• Xclade The portion of X attributable to the tips in clade; NULL if X is NULL.
• rest The tree resulting after dropping all tips in the clade.
• Xrest The portion of X attributable to the tips in rest; NULL if X is NULL.
Examples
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(25))
PCMTreePlot(tree) + ggtree::geom_nodelab(angle = 45) +
ggtree::geom_tiplab(angle = 45)
spl <- PCMTreeSplitAtNode(tree, 28)
PCMTreePlot(PCMTree(spl$clade)) + ggtree::geom_nodelab(angle = 45) +
ggtree::geom_tiplab(angle = 45)
PCMTreePlot(PCMTree(spl$rest)) + ggtree::geom_nodelab(angle = 45) +
ggtree::geom_tiplab(angle = 45)
PCMTreeTableAncestors A matrix (table) of ancestors/descendants for each node in a tree
Description
A matrix (table) of ancestors/descendants for each node in a tree
Usage
PCMTreeTableAncestors(tree, preorder = PCMTreePreorder(tree))
Arguments
tree a phylo object
preorder an integer vector returned by a previous call to PCMTreePreorder(tree). Default
PCMTreePreorder(tree).
Details
This function has time and memory complexity O(M^2), where M is the number of nodes in the
tree. It can take several minutes and gigabytes of memory on trees of more than 10000 tips.
Value
an integer square matrix of size M x M where M is the number of nodes in the tree. Element j on
row i is 0 if j is not an ancestor of i or a positive integer equal to the position of j on the path from
the root to i if j is an ancestor of i.
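For instance, for a small tree the ancestor table can be computed and the row for tip 1 inspected as follows (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- ape::rtree(5)
tabAnc <- PCMTreeTableAncestors(tree)
# 0 for non-ancestors of tip 1; otherwise the position of the ancestor on the
# path from the root to tip 1
tabAnc[1, ]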
PCMTreeToString A character representation of a phylo object.
Description
A character representation of a phylo object.
Usage
PCMTreeToString(tree, includeLengths = FALSE, includePartition = FALSE)
Arguments
tree a phylo object.
includeLengths logical. Default: FALSE.
includePartition
logical. Default: FALSE.
Value
a character string.
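A quick sketch of the string representation of a small random tree, with and without branch lengths:
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- PCMTree(ape::rtree(5))
PCMTreeToString(tree)
PCMTreeToString(tree, includeLengths = TRUE)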
PCMTreeVCV Phylogenetic Variance-covariance matrix
Description
This is a simplified wrapper for ape’s vcv function. Setting the runtime option PCMBase.UsePCMVarForVCV
to TRUE will switch to a computation of the matrix using the function PCMVar.
Usage
PCMTreeVCV(tree)
Arguments
tree a phylo object
Value
an N x N matrix. Assuming a BM model of evolution, this is a matrix in which element (i,j) is equal
to the shared root-distance of the nodes i and j.
See Also
vcv PCMVar PCMOptions
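For instance, under the default ape-based computation, the phylogenetic variance-covariance matrix of a small random tree is N x N (an illustrative sketch):
set.seed(1, kind = "Mersenne-Twister", normal.kind = "Inversion")
tree <- ape::rtree(5)
V <- PCMTreeVCV(tree)
dim(V) # 5 x 5, one row and column per tip
V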
PCMUnfixParameter Unfix a parameter in a PCM model
Description
Unfix a parameter in a PCM model
Usage
PCMUnfixParameter(model, name)
Arguments
model a PCM object
name a character string
Value
a copy of the model with the class '_Fixed' removed from the class of the parameter name
PCMVar Expected variance-covariance matrix for each couple of tips (i,j)
Description
Expected variance-covariance matrix for each couple of tips (i,j)
Usage
PCMVar(
tree,
model,
W0 = matrix(0, PCMNumTraits(model), PCMNumTraits(model)),
SE = matrix(0, PCMNumTraits(model), PCMTreeNumTips(tree)),
metaI = PCMInfo(NULL, tree, model, verbose = verbose),
internal = FALSE,
diagOnly = FALSE,
verbose = FALSE
)
Arguments
tree a phylo object with N tips.
model an S3 object specifying both, the model type (class, e.g. "OU") as well as the
concrete model parameter values at which the likelihood is to be calculated (see
also Details).
W0 a numeric matrix denoting the initial k x k variance covariance matrix at the root
(default is the k x k zero matrix).
SE a k x N matrix specifying the standard error for each measurement in X. Alternatively,
a k x k x N cube specifying an upper triangular k x k factor of the
variance covariance matrix for the measurement error for each tip i=1, ..., N.
When SE is a matrix, the k x k measurement error variance matrix for a tip i
is calculated as VE[, , i] <- diag(SE[, i] * SE[, i], nrow = k). When SE is
a cube, the way how the measurement variance matrix for a tip i is calculated
depends on the runtime option PCMBase.Transpose.Sigma_x as follows:
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == FALSE (default):
VE[, , i] <- SE[, , i] %*% t(SE[, , i])
if getOption("PCMBase.Transpose.Sigma_x", FALSE) == TRUE: VE[, , i] <-
t(SE[, , i]) %*% SE[, , i]
Note that the above behavior is consistent with the treatment of the model pa-
rameters Sigma_x, Sigmae_x and Sigmaj_x, which are also specified as upper
triangular factors. Default: matrix(0.0, PCMNumTraits(model), PCMTreeNumTips(tree)).
metaI a list returned from a call to PCMInfo(X, tree, model, SE), containing metadata
such as N, M and k. Alternatively, this can be a character string naming a
function or a function object that returns such a list, e.g. the function PCMInfo or
the function PCMInfoCpp from the PCMBaseCpp package.
internal a logical indicating if the per-node variance-covariance matrices for the internal
nodes should be returned (see Value). Default FALSE.
diagOnly a logical indicating if only the variance blocks for the nodes should be calculated.
By default this is set to FALSE, meaning that the co-variances are calculated
for all couples of nodes.
verbose logical indicating if some debug-messages should be printed.
Value
If internal is FALSE, a (k x N) x (k x N) matrix W, such that k x k block W[((i-1)*k)+(1:k),
((j-1)*k)+(1:k)] equals the expected covariance matrix between tips i and j. Otherwise, a list
with an element ’W’ as described above and a k x M matrix element ’Wii’ containing the per-node
variance covariance matrix for each node: The k x k block Wii[, (i-1)*k + (1:k)] represents the
variance covariance matrix for node i.
Examples
# a Brownian motion model with one regime
modelBM <- PCM(model = "BM", k = 2)
# print the model
modelBM
# assign the model parameters at random: this will use uniform distribution
# with boundaries specified by PCMParamLowerLimit and PCMParamUpperLimit
# We do this in two steps:
# 1. First we generate a random vector. Note the length of the vector equals PCMParamCount(modelBM)
randomParams <- PCMParamRandomVecParams(modelBM, PCMNumTraits(modelBM), PCMNumRegimes(modelBM))
randomParams
# 2. Then we load this random vector into the model.
PCMParamLoadOrStore(modelBM, randomParams, 0, PCMNumTraits(modelBM), PCMNumRegimes(modelBM), TRUE)
# create a random tree of 10 tips
tree <- ape::rtree(10)
covMat <- PCMVar(tree, modelBM)
PCMVarAtTime Calculate the variance covariance k x k matrix at time t, under a PCM
model
Description
Calculate the variance covariance k x k matrix at time t, under a PCM model
Usage
PCMVarAtTime(
t,
model,
W0 = matrix(0, PCMNumTraits(model), PCMNumTraits(model)),
SE = matrix(0, PCMNumTraits(model), PCMNumTraits(model)),
regime = PCMRegimes(model)[1L],
verbose = FALSE
)
Arguments
t positive numeric denoting time
model a PCM model object
W0 a numeric matrix denoting the initial k x k variance covariance matrix at the root
(default is the k x k zero matrix).
SE a k x k matrix specifying the upper triangular factor of the measurement error
variance-covariance matrix, i.e. the measurement error variance matrix equals the product
SE %*% t(SE). Default: SE = matrix(0.0, PCMNumTraits(model), PCMNumTraits(model)).
regime an integer or a character denoting the regime in model for which to do the calculation;
Defaults to PCMRegimes(model)[1L], meaning the first regime in the
model.
verbose a logical indicating if (debug) messages should be written on the console (Defaults
to FALSE).
Value
A numeric k x k matrix
Examples
# a Brownian motion model with one regime
modelBM <- PCM(model = "BM", k = 2)
# print the model
modelBM
# assign the model parameters at random: this will use uniform distribution
# with boundaries specified by PCMParamLowerLimit and PCMParamUpperLimit
# We do this in two steps:
# 1. First we generate a random vector. Note the length of the vector equals PCMParamCount(modelBM)
randomParams <- PCMParamRandomVecParams(modelBM, PCMNumTraits(modelBM), PCMNumRegimes(modelBM))
randomParams
# 2. Then we load this random vector into the model.
PCMParamLoadOrStore(modelBM, randomParams, 0, PCMNumTraits(modelBM), PCMNumRegimes(modelBM), TRUE)
# PCMVarAtTime(1, modelBM)
# note that the variance at time 0 is not the 0 matrix because the model has a non-zero
# environmental deviation
PCMVarAtTime(0, modelBM)
TruePositiveRate True positive rate of a set of binary predictions against their trues
Description
Let the set of predictions be described by a logical vector ‘pred‘, and let the corresponding trues
be described by a logical vector ‘true‘ of the same length. Then, the true positive rate is given by
the expression: sum(pred & true)/sum(true). The false positive rate is given by the expression:
sum(pred & !true)/sum(!true). If these expressions do not give a finite number, NA_real_ is
returned.
Usage
TruePositiveRate(pred, true)
FalsePositiveRate(pred, true)
Arguments
pred, true vectors of the same positive length that can be converted to logical.
Value
a double between 0 and 1 or NA_real_ if the result is not a finite number.
Examples
TruePositiveRate(c(1,0,1,1), c(1,1,0,1))
TruePositiveRate(c(0,0,0,0), c(1,1,0,1))
TruePositiveRate(c(1,1,1,1), c(1,1,0,1))
FalsePositiveRate(c(1,0,1,1), c(1,1,0,1))
FalsePositiveRate(c(0,0,0,0), c(1,1,0,1))
FalsePositiveRate(c(1,1,1,1), c(1,1,0,1))
TruePositiveRate(c(1,0,1,1), c(0,0,0,0))
FalsePositiveRate(c(1,0,1,1), c(1,1,1,1))
UpperTriFactor Upper triangular factor of a symmetric positive definite matrix
Description
This function is an analog to the Cholesky decomposition.
Usage
UpperTriFactor(Sigma)
Arguments
Sigma A symmetric positive definite k x k matrix that can be passed as argument to
chol.
Value
an upper triangular matrix Sigma_x, such that Sigma = Sigma_x %*% t(Sigma_x)
See Also
chol;
the option PCMBase.Transpose.Sigma_x in PCMOptions.
Examples
# S is a symmetric positive definite matrix
M<-matrix(rexp(9),3,3); S <- M %*% t(M)
# This should return a zero matrix:
UpperTriFactor(S) %*% t(UpperTriFactor(S)) - S
# This should return a zero matrix too:
t(chol(S)) %*% chol(S) - S
# Unless S is diagonal, in the general case, this will return a
# non-zero matrix:
chol(S) %*% t(chol(S)) - S
White White Gaussian PCM ignoring phylogenetic history
Description
White model ignoring phylogenetic history, treating trait values as independent samples from a
k-variate Gaussian.
Details
Calculating likelihoods for this model does not work if the global option PCMBase.Singular.Skip
is set to FALSE.
# Notifications
In addition to support for sending email, Laravel supports sending notifications across a variety of delivery channels, including mail, SMS (via Nexmo), and Slack. Notifications may also be stored in a database so that they can be displayed in your web interface.
Typically, notifications are short, informational messages for users about something that happened in your application. For example, if you are writing a billing application, you might send "Invoice Paid" notifications to your users via the email and SMS channels.
## Creating Notifications
In Laravel, each notification is represented by a single class (typically stored in the app/Notifications directory). Don't worry if your application does not have this directory; it will be created for you when you run the `make:notification` Artisan command:
> php artisan make:notification InvoicePaid
This command will place a fresh notification class in your app/Notifications directory. Each notification class contains a `via()` method and a variable number of message-building methods (such as `toMail()` or `toDatabase()`) that convert the notification to a message optimized for a particular channel.
## Sending Notifications
### Using The Notifiable Trait
Notifications may be sent in two ways: using the `notify()` method of the Notifiable trait, or using the Notification facade. First, let's explore the Notifiable trait. This trait is used by the default App\User model and provides one method that may be used to send notifications: `notify()`. This method expects to receive a notification instance:
```
use App\Notifications\InvoicePaid;

$user->notify(new InvoicePaid($invoice));
```
Remember, you may use the Illuminate\Notifications\Notifiable trait on any of your models. You are not limited to only including it on your User model.
### Using The Notification Facade
Alternatively, you may send notifications via the Notification facade. This is especially useful when you need to send a notification to multiple notifiable entities, such as a collection of users. To send notifications using the facade, pass all of the notifiable entities and the notification instance to the `send()` method:
```
Notification::send($users, new InvoicePaid($invoice));
```
### Specifying Delivery Channels
Every notification class has a `via()` method that determines on which channels the notification will be delivered. Out of the box, notifications may be sent on the mail, database, broadcast, nexmo, and slack channels.
If you would like to use other delivery channels, such as Telegram or Pusher, check out the community driven Laravel Notification Channels website.
The `via()` method receives a `$notifiable` instance, which will be an instance of the class to which the notification is being sent. You may use `$notifiable` to determine which channels the notification should be delivered on:
```
/**
 * Get the notification's delivery channels.
 *
 * @param  mixed  $notifiable
 * @return array
 */
public function via($notifiable)
{
    return $notifiable->prefers_sms ? ['nexmo'] : ['mail', 'database'];
}
```
### Queueing Notifications
Before queueing notifications you should configure your queue and start a queue worker.
Sending notifications can take time, especially if the channel needs to make an external API call to deliver the notification. To speed up your application's response time, let your notifications be queued by adding the ShouldQueue interface and the Queueable trait to their classes. The interface and trait are already imported for all notifications generated using `make:notification`, so you may immediately add them to your notification class:
```
<?php

namespace App\Notifications;

use Illuminate\Bus\Queueable;
use Illuminate\Notifications\Notification;
use Illuminate\Contracts\Queue\ShouldQueue;

class InvoicePaid extends Notification implements ShouldQueue
{
    use Queueable;

    // ...
}
```
Once the ShouldQueue interface has been added to your notification, you may send the notification as usual. Laravel will detect the ShouldQueue interface on the class and automatically queue delivery of the notification.
If you would like to delay the delivery of the notification, you may chain the `delay()` method onto your notification instantiation:
```
$when = Carbon::now()->addMinutes(10);

$user->notify((new InvoicePaid($invoice))->delay($when));
```
## Mail Notifications
### Formatting Mail Messages
If a notification supports being sent as an email, you should define a `toMail()` method on the notification class. This method will receive a `$notifiable` entity and should return an Illuminate\Notifications\Messages\MailMessage instance. Mail messages may contain lines of text as well as a "call to action". Let's take a look at an example `toMail()` method:
```
/**
 * Get the mail representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return \Illuminate\Notifications\Messages\MailMessage
 */
public function toMail($notifiable)
{
    $url = url('/invoice/'.$this->invoice->id);

    return (new MailMessage)
                ->greeting('Hello!')
                ->line('One of your invoices has been paid!')
                ->action('View Invoice', $url)
                ->line('Thank you for using our application!');
}
```
Note that we are using `$this->invoice->id` in our `toMail()` method. You may pass any data your notification needs to generate its message into the notification's constructor.
In this example, we register a greeting, a line of text, a call to action, and then another line of text. These methods provided by the MailMessage object make it simple and fast to format small transactional emails. The mail channel will then translate the message components into a nice, responsive HTML email template with a plain-text counterpart. Here is an example of an email generated by the mail channel.
When sending mail notifications, be sure to set the name value in your config/app.php configuration file. This value will be used in the header and footer of your mail notification messages.
### Customizing The Recipient
When sending notifications via the mail channel, the notification system will automatically look for an `email` property on the notifiable entity. You may customize which email address is used to deliver the notification by defining a `routeNotificationForMail()` method on the entity:
```
<?php

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    use Notifiable;

    /**
     * Route notifications for the mail channel.
     *
     * @return string
     */
    public function routeNotificationForMail()
    {
        return $this->email_address;
    }
}
```
### Customizing The Subject
By default, the email's subject is the class name of the notification formatted to "Title Case". So, if your notification class is named InvoicePaid, the email's subject will be Invoice Paid. If you would like to specify an explicit subject for the message, you may call the `subject()` method when building your message:
```
/**
 * Get the mail representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return \Illuminate\Notifications\Messages\MailMessage
 */
public function toMail($notifiable)
{
    return (new MailMessage)
                ->subject('Notification Subject')
                ->line('...');
}
```
### Customizing The Templates
You can modify the HTML and plain-text templates used by mail notifications by publishing the notification package's resources. After running this command, the mail notification templates will be located in the resources/views/vendor/notifications directory:
> php artisan vendor:publish --tag=laravel-notifications
### Error Messages
Some notifications inform users of errors, such as a failed invoice payment. You may indicate that a mail message is regarding an error by calling the `error()` method when building your message. When using the `error()` method on a mail message, the call-to-action button will be red instead of blue:
```
/**
 * Get the mail representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return \Illuminate\Notifications\Message
 */
public function toMail($notifiable)
{
    return (new MailMessage)
                ->error()
                ->subject('Notification Subject')
                ->line('...');
}
```
## Database Notifications

The `database` notification channel stores the notification information in a database table. This table will contain information such as the notification type as well as custom JSON data that describes the notification.

You can query the table to display the notifications in your application's user interface. But, before you can do that, you will need to create a database table to hold your notifications. You may use the `notifications:table` command to generate a migration with the proper table schema:

```
php artisan notifications:table

php artisan migrate
```
### Formatting Database Notifications

For a notification to support being stored in a database table, define a `toDatabase()` or `toArray()` method on the notification class. This method receives the `$notifiable` entity and should return a plain PHP array. The returned array will be encoded as JSON and stored in the `data` column of your `notifications` table. Let's look at an example `toArray()` method:

```
/**
 * Get the array representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return array
 */
public function toArray($notifiable)
{
    return [
        'invoice_id' => $this->invoice->id,
        'amount' => $this->invoice->amount,
    ];
}
```

toDatabase() vs. toArray()

The `toArray()` method is also used by the `broadcast` channel to determine which data to broadcast to your JavaScript client. If you would like to have two different array representations for the `database` and `broadcast` channels, define a `toDatabase()` method instead of a `toArray()` method.
### Accessing The Notifications

Once notifications are stored in the database, you need a convenient way to access them from your notifiable entities. The `Illuminate\Notifications\Notifiable` trait, which is included on Laravel's default `App\User` model, provides a `notifications` Eloquent relationship that returns the notifications for the entity. To fetch notifications, you may access this method like any other Eloquent relationship. By default, notifications will be sorted by the `created_at` timestamp:

```
foreach ($user->notifications as $notification) {
    echo $notification->type;
}
```

If you want to retrieve only the "unread" notifications, use the `unreadNotifications` relationship. Again, these notifications will be sorted by the `created_at` timestamp:

```
foreach ($user->unreadNotifications as $notification) {
    echo $notification->type;
}
```

To access your notifications from your JavaScript client, you should define a notification controller for your application which returns the notifications for a notifiable entity, such as the current user. You may then make an HTTP request to that controller's URI from your JavaScript client.
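A minimal sketch of such an endpoint; the route URI and the decision to use a closure route instead of a dedicated controller are only assumptions for illustration:

```
// routes/web.php (hypothetical route for the example above)
Route::get('/notifications', function () {
    // Return the current user's notifications as JSON
    // so a JavaScript client can render them.
    return request()->user()->notifications;
})->middleware('auth');
```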
### Marking Notifications As Read

Typically, you will want to mark a notification as "read" when a user views it. The `Illuminate\Notifications\Notifiable` trait provides a `markAsRead()` method, which updates the `read_at` column of the notification's database record:

```
foreach ($user->unreadNotifications as $notification) {
    $notification->markAsRead();
}
```

However, instead of looping through each notification, you may use the `markAsRead()` method directly on the collection of notifications:

```
$user->unreadNotifications->markAsRead();
```

You may also use a mass-update query to mark all of the notifications as read without retrieving them from the database:

```
$user->unreadNotifications()->update(['read_at' => Carbon::now()]);
```
Of course, you may also delete the notifications from the table entirely:
```
$user->notifications()->delete();
```
## Broadcast Notifications

Before broadcasting notifications, you should configure and be familiar with Laravel's event broadcasting services. Event broadcasting provides a way to react to server-side Laravel events from your JavaScript client.
### Formatting Broadcast Notifications

The `broadcast` channel broadcasts notifications using Laravel's event broadcasting services, allowing your JavaScript client to catch notifications in real time. For a notification to support broadcasting, define a `toBroadcast()` or `toArray()` method on the notification class. This method receives the `$notifiable` entity and should return a plain PHP array. The returned array will be encoded as JSON and broadcast to your JavaScript client. Let's look at an example `toArray()` method:

```
/**
 * Get the array representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return array
 */
public function toArray($notifiable)
{
    return [
        'invoice_id' => $this->invoice->id,
        'amount' => $this->invoice->amount,
    ];
}
```

In addition to the data you specify, broadcast notifications will also contain a `type` field containing the class name of the notification.

toBroadcast() vs. toArray()

The `toArray()` method is also used by the `database` channel to determine which data to store in your database table. If you would like to have two different array representations for the `database` and `broadcast` channels, define a `toBroadcast()` method instead of a `toArray()` method.
### Listening For Notifications

Notifications will be broadcast on a private channel formatted using the `{notifiable}.{id}` convention. So, if you are sending a notification to an `App\User` instance with an ID of 1, the notification will be broadcast on the `App.User.1` private channel. When using Laravel Echo, you may easily listen for notifications on a channel using the `notification()` helper method:

```
Echo.private('App.User.' + userId)
    .notification((notification) => {
        console.log(notification.type);
    });
```
## SMS Notifications

Sending SMS notifications in Laravel is powered by Nexmo. Before you can send notifications via Nexmo, you need to install the `nexmo/client` Composer package and add a few configuration options to your `config/services.php` configuration file. You may copy this example configuration to get started:

```
'nexmo' => [
    'key' => env('NEXMO_KEY'),
    'secret' => env('NEXMO_SECRET'),
    'sms_from' => '15556666666',
],
```

The `sms_from` option is the phone number that your SMS messages will be sent from. You should generate a phone number for your application in the Nexmo control panel.

For a notification to support being sent as an SMS, define a `toNexmo()` method on the notification class. This method receives the `$notifiable` entity and should return an `Illuminate\Notifications\Messages\NexmoMessage` instance:

```
/**
 * Get the Nexmo / SMS representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return NexmoMessage
 */
public function toNexmo($notifiable)
{
    return (new NexmoMessage)
                ->content('Your SMS message content');
}
```
### Customizing The "From" Number

If you would like to send some notifications from a phone number that is different from the one specified in your `config/services.php` file, use the `from()` method on the `NexmoMessage` instance:

```
/**
 * Get the Nexmo / SMS representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return NexmoMessage
 */
public function toNexmo($notifiable)
{
    return (new NexmoMessage)
                ->content('Your SMS message content')
                ->from('15554443333');
}
```
### Routing SMS Notifications

When sending notifications via the `nexmo` channel, the notification system will automatically look for a `phone_number` attribute on the notifiable entity. If you would like to customize the phone number the notification is delivered to, define a `routeNotificationForNexmo()` method on the entity:

```
<?php

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    use Notifiable;

    /**
     * Route notifications for the Nexmo channel.
     *
     * @return string
     */
    public function routeNotificationForNexmo()
    {
        return $this->phone;
    }
}
```
## Slack Notifications

Before you can send notifications via Slack, you must install the Guzzle HTTP library via Composer:

```
composer require guzzlehttp/guzzle
```

You will also need to configure an "Incoming Webhook" integration for your Slack team. This integration will provide you with a URL you may use when routing Slack notifications.

For a notification to support being sent as a Slack message, define a `toSlack()` method on the notification class. This method receives the `$notifiable` entity and should return an `Illuminate\Notifications\Messages\SlackMessage` instance. Slack messages may contain text content as well as an "attachment" that formats additional text or an array of fields. Let's look at a basic `toSlack()` example:

```
/**
 * Get the Slack representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return SlackMessage
 */
public function toSlack($notifiable)
{
    return (new SlackMessage)
                ->content('One of your invoices has been paid!');
}
```
In this example we are just sending a single line of text to Slack, which creates a simple, unformatted message.

You may also add attachments to Slack messages. Attachments provide richer formatting options than simple text messages. In this example, we will send an error notification about an application exception, including a link to view more details about the exception:

```
/**
 * Get the Slack representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return SlackMessage
 */
public function toSlack($notifiable)
{
    $url = url('/exceptions/'.$this->exception->id);

    return (new SlackMessage)
                ->error()
                ->content('Whoops! Something went wrong.')
                ->attachment(function ($attachment) use ($url) {
                    $attachment->title('Exception: File Not Found', $url)
                               ->content('File [background.jpg] was not found.');
                });
}
```

This example produces a Slack message with a titled attachment linking to the exception details.

Attachments also allow you to specify an array of data that should be presented to the user. The given data will be displayed in a table-style format for easy reading:

```
/**
 * Get the Slack representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return SlackMessage
 */
public function toSlack($notifiable)
{
    $url = url('/invoices/'.$this->invoice->id);

    return (new SlackMessage)
                ->success()
                ->content('One of your invoices has been paid!')
                ->attachment(function ($attachment) use ($url) {
                    $attachment->title('Invoice 1322', $url)
                               ->fields([
                                    'Title' => 'Server Expenses',
                                    'Amount' => '$1,234',
                                    'Via' => 'American Express',
                                    'Was Overdue' => ':-1:',
                               ]);
                });
}
```

This example produces a Slack message with the fields rendered as a small table.
### Customizing The Sender & Recipient

You may use the `from()` and `to()` methods to customize the sender and recipient. The `from()` method accepts a username and emoji identifier, while the `to()` method accepts a channel or username:

```
/**
 * Get the Slack representation of the notification.
 *
 * @param  mixed  $notifiable
 * @return SlackMessage
 */
public function toSlack($notifiable)
{
    return (new SlackMessage)
                ->from('Ghost', ':ghost:')
                ->to('#other')
                ->content('This will be sent to #other');
}
```
### Routing Slack Notifications

To route Slack notifications to the proper location, define a `routeNotificationForSlack()` method on your notifiable entity. It should return the webhook URL to which the notification should be delivered. Webhook URLs may be generated by adding an "Incoming Webhook" service to your Slack team:

```
<?php

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    use Notifiable;

    /**
     * Route notifications for the Slack channel.
     *
     * @return string
     */
    public function routeNotificationForSlack()
    {
        return $this->slack_webhook_url;
    }
}
```
## Notification Events

When a notification is sent, the notification system fires the `Illuminate\Notifications\Events\NotificationSent` event. It contains the "notifiable" entity and the notification instance itself. You may register listeners for this event in your `EventServiceProvider`:

```
/**
 * The event listener mappings for the application.
 *
 * @var array
 */
protected $listen = [
    'Illuminate\Notifications\Events\NotificationSent' => [
        'App\Listeners\LogNotification',
    ],
];
```

After registering listeners in your `EventServiceProvider`, use the `event:generate` Artisan command to quickly generate the listener classes.

Within an event listener, you may access the `notifiable`, `notification`, and `channel` properties of the event to learn more about the notification recipient or the notification itself:

```
/**
 * Handle the event.
 *
 * @param  NotificationSent  $event
 * @return void
 */
public function handle(NotificationSent $event)
{
    // $event->channel
    // $event->notifiable
    // $event->notification
}
```
## Custom Channels

Laravel ships with a handful of notification channels, but you may write your own drivers to deliver notifications via other channels. Laravel makes this simple. To get started, define a class that contains a `send()` method. The method should receive two arguments: a `$notifiable` and a `$notification`:

```
<?php

namespace App\Channels;

use Illuminate\Notifications\Notification;

class VoiceChannel
{
    /**
     * Send the given notification.
     *
     * @param  mixed  $notifiable
     * @param  \Illuminate\Notifications\Notification  $notification
     * @return void
     */
    public function send($notifiable, Notification $notification)
    {
        $message = $notification->toVoice($notifiable);

        // Send notification to the $notifiable instance...
    }
}
```

Once your notification channel class has been defined, you may simply return the class name from the `via()` method of any of your notifications:

```
<?php

namespace App\Notifications;

use Illuminate\Bus\Queueable;
use App\Channels\VoiceChannel;
use App\Channels\Messages\VoiceMessage;
use Illuminate\Notifications\Notification;
use Illuminate\Contracts\Queue\ShouldQueue;

class InvoicePaid extends Notification
{
    use Queueable;

    /**
     * Get the notification channels.
     *
     * @param  mixed  $notifiable
     * @return array|string
     */
    public function via($notifiable)
    {
        return [VoiceChannel::class];
    }

    /**
     * Get the voice representation of the notification.
     *
     * @param  mixed  $notifiable
     * @return VoiceMessage
     */
    public function toVoice($notifiable)
    {
        // ...
    }
}
```
# Mail

Laravel provides a clean, simple API over the popular SwiftMailer library, with drivers for SMTP, Mailgun, Mandrill (Laravel 5.2 and earlier), SparkPost, Amazon SES, PHP's `mail` function, and `sendmail`, allowing you to quickly get started sending mail through a local or cloud-based service of your choice.

API-based drivers such as Mailgun and SparkPost are often simpler and faster than SMTP servers. If possible, you should use one of these drivers. All of the API drivers require the Guzzle HTTP library, which may be installed via the Composer package manager:

```
composer require guzzlehttp/guzzle
```

To use the Mailgun driver, install Guzzle and set the `driver` option in your `config/mail.php` configuration file to `mailgun`. Next, verify that your `config/services.php` configuration file contains the following options:

```
'mailgun' => [
    'domain' => 'your-mailgun-domain',
    'secret' => 'your-mailgun-key',
],
```
For Laravel 5.2 / 5.1 / 5.0:

To use the Mandrill driver, install Guzzle and set the `driver` option in your `config/mail.php` configuration file to `mandrill`. Next, verify that your `config/services.php` configuration file contains the following options:

```
'mandrill' => [
    'secret' => 'your-mandrill-key',
],
```

To use the SparkPost driver, install Guzzle and set the `driver` option in your `config/mail.php` configuration file to `sparkpost`. Next, verify that your `config/services.php` configuration file contains the following options:

```
'sparkpost' => [
    'secret' => 'your-sparkpost-key',
],
```

To use the Amazon SES driver, first install the Amazon AWS SDK for PHP. You may install this library by adding the following line to your `composer.json` file's `require` section and running `composer update`:
```
"aws/aws-sdk-php": "~3.0"
```
Next, set the `driver` option in your `config/mail.php` configuration file to `ses` and verify that your `config/services.php` configuration file contains the following options:

```
'ses' => [
    'key' => 'your-ses-key',
    'secret' => 'your-ses-secret',
    'region' => 'ses-region',  // e.g. us-east-1
],
```
For Laravel 5.3:

## Generating Mailables

In Laravel, each type of email sent by your application is represented as a "mailable" class. These classes are stored in the `app/Mail` directory. If you don't have this directory, it will be generated for you when you create your first mailable class using the `make:mail` command:

```
php artisan make:mail OrderShipped
```
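For orientation, the generated class looks roughly like the following skeleton; this is a sketch, not the exact generator output:

```
<?php

namespace App\Mail;

use Illuminate\Bus\Queueable;
use Illuminate\Mail\Mailable;
use Illuminate\Queue\SerializesModels;

class OrderShipped extends Mailable
{
    use Queueable, SerializesModels;

    /**
     * Build the message.
     *
     * @return $this
     */
    public function build()
    {
        return $this->view('view.name');
    }
}
```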
## Writing Mailables

All of a mailable class's configuration is done in the `build()` method. Within this method, you may call various other methods, such as `from()`, `subject()`, `view()`, and `attach()`, to configure the email's content and delivery.

### Configuring The Sender

First, let's explore configuring the sender of the email; in other words, who the email is going to be "from". There are two ways to configure the sender. The first is to use the `from()` method within your mailable class's `build()` method:

```
/**
 * Build the message.
 *
 * @return $this
 */
public function build()
{
    return $this->from('<EMAIL>')
                ->view('emails.orders.shipped');
}
```
Using A Global From Address

If your application uses the same "from" address for all of its emails, it can become cumbersome to call the `from()` method in each mailable class you generate. Instead, you may specify a global "from" address in your `config/mail.php` configuration file. This address will be used if no other "from" address is specified within the mailable class:

```
'from' => ['address' => '<EMAIL>', 'name' => 'App Name'],
```
### Configuring The View

Within a mailable class's `build()` method, you may use the `view()` method to specify which template should be used when rendering the email's contents. Since each email typically uses a Blade template to render its contents, you have the full power and convenience of the Blade templating engine when building your email's HTML:

```
/**
 * Build the message.
 *
 * @return $this
 */
public function build()
{
    return $this->view('emails.orders.shipped');
}
```

You may wish to create a `resources/views/emails` directory to house all of your email templates; however, you are free to place them anywhere within your `resources/views` directory.

If you would like to define a plain-text version of your email, use the `text()` method. Like the `view()` method, the `text()` method accepts a template name that will be used to render the contents of the email. You are free to define both an HTML and a plain-text version of your message:

```
/**
 * Build the message.
 *
 * @return $this
 */
public function build()
{
    return $this->view('emails.orders.shipped')
                ->text('emails.orders.shipped_plain');
}
```
### View Data

Typically, you will want to pass some data to your view that you can use when rendering the email's HTML. There are two ways to make data available to your view. First, any public property defined on your mailable class will automatically be made available to the view. So, for example, you may pass data into your mailable class's constructor and set that data to public properties defined on the class:

```
<?php

namespace App\Mail;

use App\Order;
use Illuminate\Bus\Queueable;
use Illuminate\Mail\Mailable;
use Illuminate\Queue\SerializesModels;

class OrderShipped extends Mailable
{
    use Queueable, SerializesModels;

    /**
     * The order instance.
     *
     * @var Order
     */
    public $order;

    /**
     * Create a new message instance.
     *
     * @return void
     */
    public function __construct(Order $order)
    {
        $this->order = $order;
    }

    /**
     * Build the message.
     *
     * @return $this
     */
    public function build()
    {
        return $this->view('emails.orders.shipped');
    }
}
```

Once the data has been set to a public property, it will automatically be available in your view, so you may access it like you would access any other data in your Blade templates:

```
<div>
    Price: {{ $order->price }}
</div>
```

If you would like to customize the format of your email's data before it is sent to the template, you may manually pass your data to the view via the `with()` method. You will still pass data via the mailable class's constructor; however, you should set this data to `protected` or `private` properties so the data is not automatically made available to the template. Then, when calling the `with()` method, pass an array of data that you wish to make available to the template:

```
<?php

namespace App\Mail;

use App\Order;
use Illuminate\Bus\Queueable;
use Illuminate\Mail\Mailable;
use Illuminate\Queue\SerializesModels;

class OrderShipped extends Mailable
{
    use Queueable, SerializesModels;

    /**
     * The order instance.
     *
     * @var Order
     */
    protected $order;

    /**
     * Create a new message instance.
     *
     * @return void
     */
    public function __construct(Order $order)
    {
        $this->order = $order;
    }

    /**
     * Build the message.
     *
     * @return $this
     */
    public function build()
    {
        return $this->view('emails.orders.shipped')
                    ->with([
                        'orderName' => $this->order->name,
                        'orderPrice' => $this->order->price,
                    ]);
    }
}
```

Once the data has been passed to the `with()` method, it will automatically be available in your view, so you may access it like you would access any other data in your Blade templates:

```
<div>
    Price: {{ $orderPrice }}
</div>
```

### Attachments
For Laravel 5.3:

To add attachments to an email, use the `attach()` method within the mailable class's `build()` method. The `attach()` method accepts the full path to the file as its first argument:

```
/**
 * Build the message.
 *
 * @return $this
 */
public function build()
{
    return $this->view('emails.orders.shipped')
                ->attach('/path/to/file');
}
```

When attaching files to a message, you may also specify the display name and/or MIME type by passing an array as the second argument to the `attach()` method:

```
/**
 * Build the message.
 *
 * @return $this
 */
public function build()
{
    return $this->view('emails.orders.shipped')
                ->attach('/path/to/file', [
                    'as' => 'name.pdf',
                    'mime' => 'application/pdf',
                ]);
}
```
For Laravel 5.2 / 5.1 / 5.0:
```
$message->attach($pathToFile, ['as' => $display, 'mime' => $mime]);
```
For Laravel 5.3:

The `attachData()` method may be used to attach a raw string of bytes as an attachment. For example, you might use this method if you have generated a PDF in memory and want to attach it to the email without writing it to disk. The `attachData()` method accepts the raw data bytes as its first argument, the name of the file as its second argument, and an array of options as its third argument:

```
/**
 * Build the message.
 *
 * @return $this
 */
public function build()
{
    return $this->view('emails.orders.shipped')
                ->attachData($this->pdf, 'name.pdf', [
                    'mime' => 'application/pdf',
                ]);
}
```

For Laravel 5.2:

The `attachData()` method may be used to attach a raw string of bytes, such as a PDF generated in memory that you do not want to write to disk:

```
$message->attachData($pdf, 'invoice.pdf');

$message->attachData($pdf, 'invoice.pdf', ['mime' => $mime]);
```
### Inline Attachments

Embedding inline images into your emails is typically cumbersome; however, Laravel provides a convenient way to attach images and retrieve the appropriate CID.

Inline attachments are files that are not visible to the recipient in the attachment list but are used inside the HTML body of the message; a CID is an identifier unique within the message, used in place of a URL in attributes such as `src`. (translator's note)

To embed an inline image, use the `embed()` method on the `$message` variable within your email template. Laravel automatically makes the `$message` variable available to all of your email templates, so you don't need to pass it in manually:
For Laravel 5.3:

```
<body>
    Here is an image:

    <img src="{{ $message->embed($pathToFile) }}">
</body>
```

For Laravel 5.2 / 5.1 / 5.0:

```
<body>
    Here is an image:

    <img src="<?php echo $message->embed($pathToFile); ?>">
</body>
```

Embedding Raw Data Attachments

If you already have a raw data string you wish to embed into an email template, you may use the `embedData()` method on the `$message` variable:

For Laravel 5.3:

```
<body>
    Here is an image from raw data:

    <img src="{{ $message->embedData($data, $name) }}">
</body>
```

For Laravel 5.2 / 5.1 / 5.0:

```
<body>
    Here is an image from raw data:

    <img src="<?php echo $message->embedData($data, $name); ?>">
</body>
```

## Sending Mail
For Laravel 5.3:

To send a message, use the `to()` method on the `Mail` facade. The `to()` method accepts an email address, a user instance, or a collection of users. If you pass an object or collection of objects, the mailer will automatically use their `email` and `name` properties when setting the email recipients, so make sure these attributes are available on your objects. Once you have specified your recipients, you may pass an instance of your mailable class to the `send()` method:

```
<?php

namespace App\Http\Controllers;

use App\Order;
use App\Mail\OrderShipped;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Mail;
use App\Http\Controllers\Controller;

class OrderController extends Controller
{
    /**
     * Ship the given order.
     *
     * @param  Request  $request
     * @param  int  $orderId
     * @return Response
     */
    public function ship(Request $request, $orderId)
    {
        $order = Order::findOrFail($orderId);

        // Ship order...

        Mail::to($request->user())->send(new OrderShipped($order));
    }
}
```

Of course, you are not limited to specifying only the "to" recipients when sending a message. You are free to set "to", "cc", and "bcc" recipients within a single, chained method call:

```
Mail::to($request->user())
    ->cc($moreUsers)
    ->bcc($evenMoreUsers)
    ->send(new OrderShipped($order));
```
For Laravel 5.2 / 5.1 / 5.0:

Laravel allows you to store your email messages in views. For example, to organize your emails, you could create an `emails` directory within your `resources/views` directory.

To send a message, use the `send()` method on the `Mail` facade. This method accepts three arguments. First, the name of a view that contains the email's message text. Second, an array of data you wish to pass to the view. Third, a closure which receives a message instance, allowing you to customize the recipients, subject, and other aspects of the mail message:

```
<?php

namespace App\Http\Controllers;

use Mail;
use App\User;
use Illuminate\Http\Request;
use App\Http\Controllers\Controller;

class UserController extends Controller
{
    /**
     * Send an e-mail reminder to the user.
     *
     * @param  Request  $request
     * @param  int  $id
     * @return Response
     */
    public function sendEmailReminder(Request $request, $id)
    {
        $user = User::findOrFail($id);

        Mail::send('emails.reminder', ['user' => $user], function ($m) use ($user) {
            $m->from('<EMAIL>', 'Your Application');

            $m->to($user->email, $user->name)->subject('Your Reminder!');
        });
    }
}
```

Since we are passing an array containing the `user` key in the example above, we can display the user's name within our email view using the following PHP code:

```
<?php echo $user->name; ?>
```
The `$message` variable is always passed to email views and allows you to embed attachments inline, so you should avoid passing a view variable with the same name. As previously discussed, the third argument given to the `send()` method is a closure allowing you to specify various options on the email message itself. Using this closure you may set other attributes of the message, such as carbon copies, blind carbon copies, etc.:

```
Mail::send('emails.welcome', $data, function ($message) {
    $message->from('<EMAIL>', 'Laravel');

    $message->to('<EMAIL>')->cc('<EMAIL>');
});
```

Here is a list of the available methods on the `$message` message builder instance:

```
$message->from($address, $name = null);
$message->sender($address, $name = null);
$message->to($address, $name = null);
$message->cc($address, $name = null);
$message->bcc($address, $name = null);
$message->replyTo($address, $name = null);
$message->subject($subject);
$message->priority($level);
$message->attach($pathToFile, array $options = []);

// Attach a file from a raw $data string...
$message->attachData($data, $name, array $options = []);

// Get the underlying SwiftMailer message instance...
$message->getSwiftMessage();
```

The message instance passed to the `Mail::send()` closure extends the SwiftMailer message class, allowing you to call any method on that class to build your email messages.

By default, the view given to the `send()` method is assumed to contain HTML. However, by passing an array as the first argument to the `send()` method, you may specify a plain-text view in addition to the HTML view:
```
Mail::send(['html.view', 'text.view'], $data, $callback);
```
Or, if you only need to send a plain-text email, you may specify this using the `text` key in the array:
```
Mail::send(['text' => 'view'], $data, $callback);
```
If you need to e-mail a raw string directly, use the `raw()` method:

```
Mail::raw('Text to e-mail', function ($message) {
    //
});
```
## Queueing Mail

Queueing A Mail Message

Since sending email messages can drastically lengthen the response time of your application, many developers choose to queue email messages for background sending. Laravel makes this easy using its unified queue API. To queue a mail message, use the `queue()` method on the `Mail` facade after specifying the message's recipients:
For Laravel 5.3:

```
Mail::to($request->user())
    ->cc($moreUsers)
    ->bcc($evenMoreUsers)
    ->queue(new OrderShipped($order));
```
For Laravel 5.2 / 5.1 / 5.0:

```
Mail::queue('emails.welcome', $data, function ($message) {
    //
});
```
This method will automatically take care of pushing a job onto the queue so the message is sent in the background. Be sure to configure your queues before using this feature.

You may delay the delivery of a queued email message using the `later()` method. On Laravel 5.3 its first argument is a `DateTime` instance indicating when the message should be sent; on 5.2 and earlier it accepts a number of seconds to delay:
For Laravel 5.3:

```
$when = Carbon\Carbon::now()->addMinutes(10);

Mail::to($request->user())
    ->cc($moreUsers)
    ->bcc($evenMoreUsers)
    ->later($when, new OrderShipped($order));
```
For Laravel 5.2 / 5.1 / 5.0:

```
Mail::later(5, 'emails.welcome', $data, function ($message) {
    //
});
```
Pushing To A Specific Queue

For Laravel 5.3:

Since all mailable classes generated by the `make:mail` command make use of the `Illuminate\Bus\Queueable` trait, you may call the `onQueue()` and `onConnection()` methods on any mailable class instance, allowing you to specify the connection and queue name for the message:

```
$message = (new OrderShipped($order))
                ->onConnection('sqs')
                ->onQueue('emails');

Mail::to($request->user())
    ->cc($moreUsers)
    ->bcc($evenMoreUsers)
    ->queue($message);
```
Queueing By Default

If you have mailable classes that you always want to be queued, implement the `ShouldQueue` contract on the class. Then, even if you call the `send()` method, the mailable will still be queued since it implements the contract:

```
use Illuminate\Contracts\Queue\ShouldQueue;

class OrderShipped extends Mailable implements ShouldQueue
{
    //
}
```
## Mail & Local Development

When developing an application that sends email, you probably don't want to actually deliver the messages. Laravel provides several ways to "disable" the actual sending of emails during local development.

Instead of sending your emails, the `log` mail driver will write all email messages to your log files for inspection. For more information on configuring your application per environment, check out the configuration documentation.
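A minimal sketch of switching to this driver in `config/mail.php`; making `log` the fallback default is an assumption for illustration, not the framework's shipped default:

```
// config/mail.php (excerpt) — during local development you could
// point the driver at "log" so messages go to the log instead of being sent.
'driver' => env('MAIL_DRIVER', 'log'),
```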
Another option is to set a universal recipient for all messages sent by the framework. This way, all the emails generated by your application will be sent to a single address instead of the address actually specified when sending the message. This can be done via the `to` option in your `config/mail.php` configuration file:

```
'to' => [
    'address' => '<EMAIL>',
    'name' => 'Example'
],
```

Finally, you may use a service like Mailtrap and the `smtp` driver to send your email messages to a "dummy" mailbox where you may view them in a true email client. This approach has the benefit of allowing you to inspect the final emails in Mailtrap's message viewer.
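A rough sketch of what the corresponding `config/mail.php` SMTP settings might look like when pointed at Mailtrap; the host, port, and environment variable names are assumptions, and the credentials come from your Mailtrap inbox:

```
// config/mail.php (excerpt, hypothetical Mailtrap setup)
'driver'     => 'smtp',
'host'       => 'smtp.mailtrap.io',   // Mailtrap's SMTP host (assumed)
'port'       => 2525,                 // one of the ports Mailtrap accepts (assumed)
'username'   => env('MAIL_USERNAME'), // inbox credentials from Mailtrap
'password'   => env('MAIL_PASSWORD'),
'encryption' => null,
```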
Laravel fires an event just before sending mail messages. Remember, this event is fired when the mail is actually sent, not when it is queued. You may register an event listener for it in your `EventServiceProvider`:

```
/**
 * The event listener mappings for the application.
 *
 * @var array
 */
protected $listen = [
    'Illuminate\Mail\Events\MessageSending' => [
        'App\Listeners\LogSentMessage',
    ],
];
```

In Laravel 5.2 and earlier, the framework fires the `mailer.sending` event just before sending mail messages. Again, this event is fired when the mail is sent, not when it is queued. You may register an event listener in your `EventServiceProvider`:

```
/**
 * Register any other events for your application.
 *
 * @param  \Illuminate\Contracts\Events\Dispatcher  $events
 * @return void
 */
public function boot(DispatcherContract $events)
{
    parent::boot($events);

    $events->listen('mailer.sending', function ($message) {
        //
    });
}
```
# Event Broadcasting

In many modern web applications, WebSockets are used to implement real-time, live-updating user interfaces. When some data is updated on the server, a message is typically sent over a WebSocket connection to be handled by the client. This provides a more robust, efficient alternative to continually polling your application for changes.

To assist you in building these types of applications, Laravel makes it easy to "broadcast" your events over a WebSocket connection. Broadcasting your Laravel events allows you to share the same event names between your server-side code and your client-side JavaScript application.

Before diving into event broadcasting, make sure you have read the documentation on events and listeners.

All of your application's event broadcasting configuration is stored in the `config/broadcasting.php` configuration file. Laravel supports several broadcast drivers out of the box: Pusher, Redis, and a `log` driver for local development and debugging. Additionally, a `null` driver is included, which allows you to disable broadcasting entirely. A configuration example is included for each of these drivers in `config/broadcasting.php`.

Before broadcasting any events, you will first need to register the `App\Providers\BroadcastServiceProvider`. In a fresh Laravel application, you only need to uncomment this provider in the `providers` array of your `config/app.php` configuration file. This provider will allow you to register the broadcast authorization routes and callbacks.
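Concretely, that means the following entry in `config/app.php` should be uncommented (a sketch of the relevant fragment):

```
// config/app.php (excerpt)
'providers' => [
    // ...
    App\Providers\BroadcastServiceProvider::class,
],
```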
Laravel Echo will need access to the current session's CSRF token. Echo will pull the token from the `Laravel.csrfToken` JavaScript object if it is available. This object is defined in the `resources/views/layouts/app.blade.php` layout that is created when you run the `make:auth` Artisan command. If you are not using this layout, you may define a `meta` tag in your application's `head` HTML element:

```
<meta name="csrf-token" content="{{ csrf_token() }}">
```

### Driver Prerequisites
If you are broadcasting your events over Pusher, you should install the Pusher PHP SDK using the Composer package manager:

```
composer require pusher/pusher-php-server
```

Next, configure your Pusher credentials in the `config/broadcasting.php` configuration file. An example Pusher configuration is already included in this file, allowing you to quickly specify your Pusher key, secret, and application ID. The `pusher` configuration in `config/broadcasting.php` also allows you to specify additional `options` that are supported by Pusher, such as the cluster:

```
'options' => [
    'cluster' => 'eu',
    'encrypted' => true
],
```

When using Pusher and Laravel Echo, you should specify `pusher` as your desired broadcaster when instantiating the Echo instance in your `resources/assets/js/bootstrap.js` file:

```
import Echo from "laravel-echo"

window.Echo = new Echo({
    broadcaster: 'pusher',
    key: 'your-pusher-key'
});
```
If you are using the Redis broadcaster, you should install the Predis library:

```
composer require predis/predis
```

The Redis broadcaster will broadcast messages using Redis' pub / sub feature; however, you will need to pair it with a WebSocket server that can receive the messages from Redis and broadcast them to your WebSocket channels.

When the Redis broadcaster publishes an event, it will be published on the event's specified channel names, and the payload will be a JSON-encoded string containing the event name, the `data` payload, and the socket ID of the user that generated the event (if applicable).

If you plan to pair the Redis broadcaster with a Socket.IO server, you will need to include the Socket.IO JavaScript client library in your application's `head` HTML element. When the Socket.IO server is started, it automatically exposes the client JavaScript library at a standard URL. For example, if you are running the Socket.IO server on the same domain as your web application, you may access the client library like so:

```
<script src="//{{ Request::getHost() }}:6001/socket.io/socket.io.js"></script>
```

Next, you will need to instantiate Echo with the `socket.io` connector and a `host`:

```
import Echo from "laravel-echo"

window.Echo = new Echo({
    broadcaster: 'socket.io',
    host: window.location.hostname + ':6001'
});
```

Finally, you will need to run a compatible Socket.IO server. Laravel does not include a Socket.IO server implementation; however, a community-driven Socket.IO server is maintained in the `tlaverdure/laravel-echo-server` GitHub repository.

Before broadcasting events, you will also need to configure and run a queue listener. All event broadcasting is done via queued jobs so that the response time of your application is not seriously affected.
## Concept Overview

Laravel's event broadcasting allows you to broadcast your server-side Laravel events to your client-side JavaScript application using a driver-based approach to WebSockets. Currently, Laravel ships with Pusher and Redis drivers. The events may be easily consumed on the client side using the Laravel Echo JavaScript package.

Events are broadcast over "channels", which may be public or private. Any visitor to your application may subscribe to a public channel without any authentication or authorization; however, in order to subscribe to a private channel, a user must be authenticated and authorized to listen on that channel.

### Using An Example Application

Before diving into each component of event broadcasting, let's take a high-level overview using an e-commerce store as an example. We won't discuss the details of configuring Pusher or Laravel Echo here since they will be covered in other sections of this documentation.
In our application, let's assume we have a page that allows users to view the shipping status of their orders. Let's also assume that a `ShippingStatusUpdated` event is fired when a shipping status update is processed by the application:
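The code snippet that belongs here is missing from the source; firing such an event would presumably look like the following (`$update` is a hypothetical variable holding the status update):

```
event(new ShippingStatusUpdated($update));
```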
When a user is viewing one of their orders, we don't want them to have to refresh the page to see status updates. Instead, we want to broadcast the updates to the application as they are created. So, we need to mark the `ShippingStatusUpdated` event with the `ShouldBroadcast` interface. This will instruct Laravel to broadcast the event when it is fired:

```
<?php

namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Queue\SerializesModels;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Broadcasting\PresenceChannel;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

class ShippingStatusUpdated implements ShouldBroadcast
{
    //
}
```

The `ShouldBroadcast` interface requires the event to define a `broadcastOn()` method. This method is responsible for returning the channels that the event should broadcast on. An empty stub of this method is already defined on generated event classes, so we only need to fill in its details. We only want the creator of the order to be able to view status updates, so we will broadcast the event on a private channel that is tied to the order:

```
/**
 * Get the channels the event should broadcast on.
 *
 * @return array
 */
public function broadcastOn()
{
    return new PrivateChannel('order.'.$this->update->order_id);
}
```

Remember, users must be authorized to listen on private channels. We may define our channel authorization rules in the `boot()` method of the `BroadcastServiceProvider`. In this example, we need to verify that any user attempting to listen on the private `order.1` channel is actually the creator of the order:

```
Broadcast::channel('order.*', function ($user, $orderId) {
    return $user->id === Order::findOrNew($orderId)->user_id;
});
```

The `channel()` method accepts two arguments: the name of the channel and a callback which returns `true` or `false` indicating whether the user is authorized to listen on the channel. All authorization callbacks receive the currently authenticated user as their first argument and any additional wildcard parameters as their subsequent arguments. In this example, we are using the `*` character to indicate that the "ID" portion of the channel name is a wildcard.

Next, all that remains is to listen for the event in our JavaScript application. We can do this using Laravel Echo. First, we'll use the `private()` method to subscribe to the private channel. Then, we may use the `listen()` method to listen for the `ShippingStatusUpdated` event. By default, all of the event's public properties will be included on the broadcast event:
```
Echo.private(`order.${orderId}`)
    .listen('ShippingStatusUpdated', (e) => {
        console.log(e.update);
    });
```
## Defining Broadcast Events

To inform Laravel that a given event should be broadcast, implement the `Illuminate\Contracts\Broadcasting\ShouldBroadcast` interface on the event class. This interface is already imported into all event classes generated by the framework, so you may easily add it to any of your events.

The `ShouldBroadcast` interface requires you to implement a single method: `broadcastOn()`. This method should return the channel or array of channels that the event should broadcast on. The channels should be instances of `Channel`, `PrivateChannel`, or `PresenceChannel`. Instances of `Channel` represent public channels that any user may subscribe to, while `PrivateChannel` and `PresenceChannel` represent private channels that require channel authorization:

```
<?php

namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Queue\SerializesModels;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Broadcasting\PresenceChannel;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

class ServerCreated implements ShouldBroadcast
{
    use SerializesModels;

    public $user;

    /**
     * Create a new event instance.
     *
     * @return void
     */
    public function __construct(User $user)
    {
        $this->user = $user;
    }

    /**
     * Get the channels the event should broadcast on.
     *
     * @return Channel|array
     */
    public function broadcastOn()
    {
        return new PrivateChannel('user.'.$this->user->id);
    }
}
```

Then, you only need to fire the event as you normally would. Once the event has been fired, a queued job will automatically broadcast the event over your specified broadcast drivers.
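For example, firing the event defined above could look like this (a sketch; `$user` is assumed to be an existing `App\User` instance):

```
event(new ServerCreated($user));
```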
### Broadcast Name

By default, Laravel will broadcast the event using the event's class name. However, you may customize the broadcast name by defining a `broadcastAs()` method on the event:

```
/**
 * The event's broadcast name.
 *
 * @return string
 */
public function broadcastAs()
{
    return 'server.created';
}
```
### Broadcast Data

When an event is broadcast, all of its `public` properties are automatically serialized and broadcast as the event's payload, allowing you to access any of its public data from your JavaScript application. So, for example, if your event has a single public `$user` property that contains an Eloquent model, the event's broadcast payload would be:

```
{
    "user": {
        "id": 1,
        "name": "<NAME>"
        ...
    }
}
```

However, if you wish to have more fine-grained control over your broadcast payload, you may add a `broadcastWith()` method to your event. This method should return the array of data that you wish to broadcast as the event payload:

```
/**
 * Get the data to broadcast.
 *
 * @return array
 */
public function broadcastWith()
{
    return ['id' => $this->user->id];
}
```
### Broadcast Queue

By default, each broadcast event is placed on the default queue for the default queue connection specified in your `queue.php` configuration file. You may customize the queue used by the broadcaster by defining a `broadcastQueue` property on your event class. This property should specify the name of the queue you wish to use when broadcasting:

```
/**
 * The name of the queue on which to place the event.
 *
 * @var string
 */
public $broadcastQueue = 'your-queue-name';
```
## Authorizing Channels

Private channels require you to authorize that the currently authenticated user can actually listen on the channel. This is accomplished by making an HTTP request to your Laravel application with the channel name, so that your application can determine whether the user can listen on that channel. When using Laravel Echo, the HTTP request to authorize subscriptions to private channels is made automatically; however, you do need to define the proper routes to respond to these requests.

### Defining Authorization Routes

Thankfully, Laravel makes it easy to define the routes to respond to channel authorization requests. In the `BroadcastServiceProvider` included with your Laravel application, you will see a call to the `Broadcast::routes()` method. This method will register the `/broadcasting/auth` route to handle authorization requests:

```
Broadcast::routes();
```

The `Broadcast::routes()` method will automatically place its routes within the `web` middleware group; however, you may pass an array of route attributes to the method if you would like to customize the assigned attributes:
```
Broadcast::routes($attributes);
```
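For instance, a plausible way to pass such attributes; the middleware names here are only an assumption for illustration:

```
Broadcast::routes(['middleware' => ['web', 'auth']]);
```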
### Defining Authorization Callbacks

Next, we need to define the logic that will actually perform the channel authorization. Like defining the authorization routes, this is done in the `boot()` method of the `BroadcastServiceProvider`. In this method, you may use the `Broadcast::channel()` method to register channel authorization callbacks:

```
Broadcast::channel('order.*', function ($user, $orderId) {
    return $user->id === Order::findOrNew($orderId)->user_id;
});
```

The `channel()` method accepts two arguments: the name of the channel and a callback which returns `true` or `false` indicating whether the user is authorized to listen on the channel. All authorization callbacks receive the currently authenticated user as their first argument and any additional wildcard parameters as their subsequent arguments. In this example, we are using the `*` character to indicate that the "ID" portion of the channel name is a wildcard.
## Broadcasting Events

Once you have defined an event and marked it with the `ShouldBroadcast` interface, you only need to fire the event using the `event()` function. The event dispatcher will notice that the event is marked with the `ShouldBroadcast` interface and will queue the event for broadcasting:
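The snippet that presumably accompanied this sentence is missing from the source; it would look roughly like this (`$update` is a hypothetical variable):

```
event(new ShippingStatusUpdated($update));
```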
### Only To Others

When building an application that utilizes event broadcasting, you may substitute the `event()` function with the `broadcast()` function. Like the `event()` function, the `broadcast()` function dispatches the event to your server-side listeners:
```
broadcast(new ShippingStatusUpdated($update));
```
However, the `broadcast()` function also exposes the `toOthers()` method, which allows you to exclude the current user from the broadcast's recipients:
```
broadcast(new ShippingStatusUpdated($update))->toOthers();
```
To better understand when you may want to use the `toOthers()` method, let's imagine a task list application where a user may create a new task by typing in a task name. To create a task, your application makes a request to a `/task` endpoint which broadcasts the task's creation and returns a JSON representation of the new task. When your JavaScript application receives the response from the endpoint, it might directly insert the new task into its task list like so:
```
this.$http.post('/task', task)
    .then((response) => {
        this.tasks.push(response.data);
    });
```
However, remember that we also broadcast the task's creation. If your JavaScript application is listening for this event in order to add tasks to the task list, you will end up with duplicate tasks in your list: one from the endpoint and one from the broadcast.

You may solve this by using the `toOthers()` method to instruct the broadcaster not to broadcast the event to the current user. When you initialize a Laravel Echo instance, a socket ID is assigned to the connection. If you are using Vue and Vue Resource, the socket ID is automatically attached to every outgoing request as an `X-Socket-ID` header. Then, when you call the `toOthers()` method, Laravel will extract the socket ID from the header and instruct the broadcaster not to broadcast to any connections with that socket ID. If you are not using Vue and Vue Resource, you will need to manually configure your JavaScript application to send the `X-Socket-ID` header. You may retrieve the socket ID using the `Echo.socketId()` method:
```
var socketId = Echo.socketId();
```
## Receiving Broadcasts

### Installing Laravel Echo

Laravel Echo is a JavaScript library that makes it painless to subscribe to channels and listen for events broadcast by Laravel. You may install Echo via the NPM package manager. In this example, we will also install the `pusher-js` package since we will be using the Pusher broadcaster:

```
npm install --save laravel-echo pusher-js
```

Once Echo is installed, you are ready to create a fresh Echo instance in your application's JavaScript. A great place to do this is at the bottom of the `resources/assets/js/bootstrap.js` file that is included with the Laravel framework:

```
import Echo from "laravel-echo"

window.Echo = new Echo({
    broadcaster: 'pusher',
    key: 'your-pusher-key'
});
```

When creating an Echo instance that uses the `pusher` connector, you may also specify a `cluster` as well as whether the connection should be encrypted:

```
window.Echo = new Echo({
    broadcaster: 'pusher',
    key: 'your-pusher-key',
    cluster: 'eu',
    encrypted: true
});
```
### Listening For Events

Once you have installed and instantiated Echo, you are ready to start listening for event broadcasts. First, use the `channel()` method to retrieve an instance of a channel, then call the `listen()` method to listen for a specified event:

```
Echo.channel('orders')
    .listen('OrderShipped', (e) => {
        console.log(e.order.name);
    });
```

If you would like to listen for events on a private channel, use the `private()` method instead. You may continue to chain calls to the `listen()` method to listen for multiple events on a single channel:

```
Echo.private('orders')
    .listen(...)
    .listen(...)
    .listen(...);
```

### Leaving A Channel

To leave a channel, call the `leave()` method on your Echo instance:

```
Echo.leave('orders');
```

You may have noticed in the examples above that we did not specify the full namespace for the event classes. This is because Echo automatically assumes the events are located in the `App\Events` namespace. However, you may configure the root namespace when you instantiate Echo by passing a `namespace` configuration option:

```
window.Echo = new Echo({
    broadcaster: 'pusher',
    key: 'your-pusher-key',
    namespace: 'App.Other.Namespace'
});
```

Alternatively, you may prefix event classes with a `.` when subscribing to them using Echo. This will allow you to always specify the fully-qualified class name:

```
Echo.channel('orders')
    .listen('.Namespace.Event.Class', (e) => {
        //
    });
```
## Presence Channels

Presence channels build on the security of private channels while exposing the additional feature of awareness of who is subscribed to the channel. This makes it easy to build powerful, collaborative application features such as notifying users when another user is viewing the same page.

### Authorizing Presence Channels

All presence channels are also private channels; therefore, users must be authorized to access them. However, when defining authorization callbacks for presence channels, you will not return `true` if the user is authorized to join the channel. Instead, you should return an array of data about the user. The data returned by the authorization callback will be made available to the presence channel event listeners in your JavaScript application. If the user is not authorized to join the presence channel, you should return `false` or `null`:
```
Broadcast::channel('chat.*', function ($user, $roomId) {
    if ($user->canJoinRoom($roomId)) {
        return ['id' => $user->id, 'name' => $user->name];
    }
});
```
### Joining Presence Channels

To join a presence channel, you may use Echo's `join()` method. The `join()` method will return a `PresenceChannel` implementation which, along with exposing the `listen()` method, allows you to subscribe to the `here`, `joining`, and `leaving` events:

```
Echo.join(`chat.${roomId}`)
    .here((users) => {
        //
    })
    .joining((user) => {
        console.log(user.name);
    })
    .leaving((user) => {
        console.log(user.name);
    });
```

The `here()` callback will be executed immediately once the channel is joined successfully, and will receive an array containing the user information for all of the other users currently subscribed to the channel. The `joining()` method will be executed when a new user joins the channel, while the `leaving()` method will be executed when a user leaves the channel.
### Broadcasting To Presence Channels

Presence channels may receive events just like public or private channels. Using the example of a chatroom, we may want to broadcast `NewMessage` events to the room's presence channel. To do so, we'll return an instance of `PresenceChannel` from the event's `broadcastOn()` method:

```
/**
 * Get the channels the event should broadcast on.
 *
 * @return Channel|array
 */
public function broadcastOn()
{
    return new PresenceChannel('room.'.$this->message->room_id);
}
```

Like public or private events, presence channel events may be broadcast using the `broadcast()` function. As with other events, you may use the `toOthers()` method to exclude the current user from receiving the broadcast:

```
broadcast(new NewMessage($message));

broadcast(new NewMessage($message))->toOthers();
```

You may listen for the join event via Echo's `listen()` method:

```
Echo.join(`chat.${roomId}`)
    .here(...)
    .joining(...)
    .leaving(...)
    .listen('NewMessage', (e) => {
        //
    });
```
## Notifications

By pairing event broadcasting with notifications, your JavaScript application may receive new notifications as they occur without needing to refresh the page. First, be sure to read the documentation on using the broadcast notification channel.

Once you have configured a notification to use the broadcast channel, you may listen for the broadcast events using Echo's `notification()` method. Remember, the channel name should match the class name of the entity receiving the notifications:
```
Echo.private(`App.User.${userId}`)
    .notification((notification) => {
        console.log(notification.type);
    });
```
In this example, all notifications sent to `App\User` instances via the `broadcast` channel would be received by the callback. A channel authorization callback for the `App.User.*` channel is included in the default `BroadcastServiceProvider` that ships with the Laravel framework.
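That default callback is roughly equivalent to the following sketch; the exact wording in a given Laravel release may differ:

```
Broadcast::channel('App.User.*', function ($user, $userId) {
    // Only allow the authenticated user to listen on their own channel.
    return (int) $user->id === (int) $userId;
});
```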
# Application Testing

The `visitRoute()` method may be used to make a GET request via a named route:

```
$this->visitRoute('profile');

$this->visitRoute('profile', ['user' => 1]);
```

### Interacting With Links

```
$this->visit('/')
     ->click('About Us')
     ->seePageIs('/about-us');
```

You may assert that the user has arrived at the correct named route using the `seeRouteIs()` method:
```
$this->visit('/profile')
     ->seeRouteIs('profile', ['user' => 1]);
```
### Interacting With Forms

Laravel also provides several methods for testing forms. The `type()`, `select()`, `check()`, `attach()`, and `press()` methods allow you to interact with all of your form's inputs. For example, let's imagine this form exists on the application's registration page:

```
<form action="/register" method="POST">
    {{ csrf_field() }}

    <div>
        Name: <input type="text" name="name">
    </div>

    <div>
        <input type="checkbox" value="yes" name="terms"> Accept Terms
    </div>

    <div>
        <input type="submit" value="Register">
    </div>
</form>
```

If your form contains file inputs, you may attach files to the form using the `attach()` method:

```
$this->visit('/upload')
     ->attach($pathToFile, 'photo')
     ->press('Upload')
     ->see('Upload Successful!');
```

Laravel also provides several helpers for testing JSON APIs and their responses. For example, the `json()`, `get()`, `post()`, `put()`, `patch()`, and `delete()` methods may be used to issue requests with various HTTP verbs. You may also easily pass data and headers to these methods. To get started, let's write a test to make a POST request to `/user` and assert that the expected data was returned:

```
<?php

class ExampleTest extends TestCase
{
    /**
     * A basic functional test example.
     *
     * @return void
     */
    public function testBasicExample()
    {
        $this->json('POST', '/user', ['name' => 'Sally'])
             ->seeJson([
                 'created' => true,
             ]);
    }
}
```

The `seeJson()` method converts the given array into JSON and then verifies that the JSON fragment occurs anywhere within the entire JSON response returned by the application. So, if there are other properties in the JSON response, this test will still pass as long as the given fragment is present.
### Проверка на точное совпадение
Если вы хотите проверить точное совпадение данного массива с возвращённым из приложения JSON, вам надо использовать метод `PHPseeJsonEquals()` : `<?php` class ExampleTest extends TestCase { /** * Пример базового теста функции. * * @return void */ public function testBasicExample() { $this->json('POST', '/user', ['name' => 'Sally']) ->seeJsonEquals([ 'created' => true, ]); } }
### Проверка на совпадение структуры
Также можно проверить соответствие структуры JSON определённым требованиям. В этом случае используйте метод
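A short sketch of what such a structural assertion might look like; the `/user/1` route and the field names are illustrative assumptions:

```
// Assert the response contains a "name" key and a nested "pet"
// object with "name" and "age" keys, regardless of their values.
$this->get('/user/1')
     ->seeJsonStructure([
         'name',
         'pet' => [
             'name', 'age',
         ],
     ]);
```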
### Sessions / Authentication

Laravel provides several helpers for working with the session during testing. First, you may set the session data to a given array using the `withSession()` method. This is useful for loading the session with data before issuing a test request to your application:

```
<?php

class ExampleTest extends TestCase
{
    public function testApplication()
    {
        $this->withSession(['foo' => 'bar'])
             ->visit('/');
    }
}
```

Of course, one common use of the session is for maintaining state for the authenticated user. The `actingAs()` helper method provides a simple way to authenticate a given user as the current user. For example, we may use a model factory to generate and authenticate a user:

```
<?php

class ExampleTest extends TestCase
{
    public function testApplication()
    {
        $user = factory(App\User::class)->create();

        $this->actingAs($user)
             ->withSession(['foo' => 'bar'])
             ->visit('/')
             ->see('Hello, '.$user->name);
    }
}
```

You may also specify which guard should be used to authenticate the given user by passing the guard name as the second argument to the `actingAs()` method:
```
$this->actingAs($user, 'api')
```
# Database Testing

Laravel provides a variety of helpful tools to make it easier to test your database driven applications. First, you may use the `seeInDatabase()` helper to assert that data exists in the database matching a given set of criteria. For example, if you would like to verify that there is a record in the users table with an email value of <EMAIL>, you can do the following:

```
// Make call to application...

$this->seeInDatabase('users', [
    'email' => '<EMAIL>'
]);
```

Of course, methods like `seeInDatabase()` are provided for convenience. You are free to use any of PHPUnit's built-in assertion methods to supplement your tests.

## Resetting The Database After Each Test

It is often useful to reset your database after each test so that data from a previous test does not interfere with subsequent tests.
### Using Migrations

One approach to resetting the database state is to roll back the database after each test and migrate it before the next test. Laravel provides a simple DatabaseMigrations trait that will automatically handle this for you. Simply use the trait on your test class and everything will be handled for you:

```
<?php

use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class ExampleTest extends TestCase
{
    use DatabaseMigrations;

    /**
     * A basic functional test example.
     *
     * @return void
     */
    public function testBasicExample()
    {
        $this->visit('/')
             ->see('Laravel 5');
    }
}
```
### Using Transactions

Another approach to resetting the database state is to wrap each test case in a database transaction. Again, Laravel provides a convenient DatabaseTransactions trait that will automatically handle this for you:

```
<?php

use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class ExampleTest extends TestCase
{
    use DatabaseTransactions;

    /**
     * A basic functional test example.
     *
     * @return void
     */
    public function testBasicExample()
    {
        $this->visit('/')
             ->see('Laravel 5');
    }
}
```

By default, this trait will only wrap the default database connection in a transaction. If your application uses multiple database connections, you should define a `$connectionsToTransact` property on your test class. This property should be an array of connection names to execute the transactions on.
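For illustration, such a property might be declared as in the sketch below; the `mysql` and `reporting` connection names are assumptions, not connections defined by this guide:

```
class ExampleTest extends TestCase
{
    use DatabaseTransactions;

    // Wrap transactions around both the default "mysql" connection
    // and a hypothetical "reporting" connection.
    protected $connectionsToTransact = ['mysql', 'reporting'];
}
```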
## Writing Factories

When testing, you may need to insert a few records into your database before executing your test. Instead of manually specifying the value of each column when you create this test data, Laravel allows you to define a default set of attributes for each of your Eloquent models using factories. To get started, take a look at the database/factories/ModelFactory.php file in your application. Out of the box, this file contains one factory definition:
```
$factory->define(App\User::class, function (Faker\Generator $faker) {
    static $password;

    return [
        'name' => $faker->name,
        'email' => $faker->unique()->safeEmail,
        'password' => $password ?: $password = bcrypt('secret'),
        'remember_token' => str_random(10),
    ];
});
```
Within the closure, which serves as the factory definition, you may return the default test values of all attributes on the model. The closure will receive an instance of the Faker PHP library, which allows you to conveniently generate various kinds of random data for testing.

Of course, you are free to add your own additional factories to the ModelFactory.php file. You may also create additional factory files for each model for better organization. For example, you could create UserFactory.php and CommentFactory.php files within your database/factories directory. All of the files within the factories directory will automatically be loaded by Laravel.

### Factory States

States allow you to define discrete modifications that can be applied to your model factories in any combination. For example, your User model might have a delinquent state that modifies one of its default attribute values. You may define your state transformations using the `state()` method:
```
$factory->state(App\User::class, 'delinquent', function ($faker) {
    return [
        'account_status' => 'delinquent',
    ];
});
```
## Using Factories

### Creating Models

Once you have defined your factories, you may use the global `factory()` function in your tests or seed files to generate model instances. So, let's take a look at a few examples of creating models. First, we'll use the `make()` method to create models but not save them to the database:

```
$user = factory(App\User::class)->make();

// Use model in tests...
```

You may also create a collection of models or create models of a given type:
```
// Create three App\User instances...
$users = factory(App\User::class, 3)->make();
```
You may also apply any of your states to the models. If you would like to apply multiple state transformations to the models, specify the name of each state you would like to apply:

```
$users = factory(App\User::class, 5)->states('delinquent')->make();

$users = factory(App\User::class, 5)->states('premium', 'delinquent')->make();
```

If you would like to override some of the default values of your models, you may pass an array of values to the `make()` method. Only the specified values will be replaced, while the rest of the values remain set to their default values as specified by the factory:
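A minimal sketch of such an override; the `name` value here is purely illustrative:

```
// Only "name" is overridden; all other attributes keep their factory defaults.
$user = factory(App\User::class)->make([
    'name' => 'Abigail',
]);
```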
### Persisting Models

The `create()` method not only creates the model instances but also saves them to the database using Eloquent's `save()` method:

```
// Create a single App\User instance...
$user = factory(App\User::class)->create();

// Create three App\User instances...
$users = factory(App\User::class, 3)->create();

// Use models in tests...
```

You may override attributes on the model by passing an array to the `create()` method:
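For illustration, an override on `create()` looks just like the one on `make()`; the `name` value is again an assumption:

```
// The record is persisted with the overridden "name" attribute.
$user = factory(App\User::class)->create([
    'name' => 'Abigail',
]);
```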
### Relationships

In this example, we'll attach a relation to some created models. When using the `create()` method to create multiple models, an Eloquent collection instance is returned, allowing you to use any of the convenient functions provided by the collection, such as `each()`:

```
$users = factory(App\User::class, 3)
           ->create()
           ->each(function ($u) {
                $u->posts()->save(factory(App\Post::class)->make());
            });
```
Relations & Attribute Closures

You may also attach relationships to models using closure attributes in your factory definitions. For example, if you would like to create a new User instance when creating a Post, you may do the following:

```
$factory->define(App\Post::class, function ($faker) {
    return [
        'title' => $faker->title,
        'content' => $faker->paragraph,
        'user_id' => function () {
            return factory(App\User::class)->create()->id;
        }
    ];
});
```

These closures also receive the evaluated attribute array of the factory that defines them:
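As a sketch of what that looks like, the closure below reads the already evaluated `user_id` attribute; the `user_type` attribute and the assumption that User has a `type` column are illustrative:

```
$factory->define(App\Post::class, function ($faker) {
    return [
        'title' => $faker->title,
        'content' => $faker->paragraph,
        'user_id' => function () {
            return factory(App\User::class)->create()->id;
        },
        // The closure receives the evaluated attributes, so it can
        // look up the user that was just created for "user_id".
        'user_type' => function (array $post) {
            return App\User::find($post['user_id'])->type;
        },
    ];
});
```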
# Mocking

When testing Laravel applications, you may wish to "mock" certain aspects of your application so they are not actually executed during a given test. For example, when testing a controller that fires events, you may wish to mock the event listeners so they are not actually executed during the test. This allows you to only test the controller's HTTP response without worrying about the execution of the event listeners, since the listeners can be tested in their own test case.

Laravel provides helpers for mocking events, jobs, and facades out of the box. These helpers primarily provide a convenience layer over Mockery so you do not have to manually make complicated Mockery method calls. Of course, you are also free to create your own mocks or spies using Mockery or PHPUnit.
## Events

If you are making heavy use of Laravel's event system, you may wish to silence or mock certain events while testing. For example, if you are testing user registration, you probably do not want all of the UserRegistered event's handlers firing, since the listeners may send "welcome" e-mails, and so on.
Laravel provides a convenient `expectsEvents()` method that verifies the expected events are fired, but prevents any listeners for those events from executing:

```
<?php

use App\Events\UserRegistered;

class ExampleTest extends TestCase
{
    /**
     * Test new user registration.
     */
    public function testUserRegistration()
    {
        $this->expectsEvents(UserRegistered::class);

        // Test user registration...
    }
}
```

You may use the `doesntExpectEvents()` method to verify that the given events are not fired:

```
<?php

use App\Events\OrderShipped;
use App\Events\OrderFailedToShip;

class ExampleTest extends TestCase
{
    /**
     * Test order shipping.
     */
    public function testOrderShipping()
    {
        $this->expectsEvents(OrderShipped::class);

        $this->doesntExpectEvents(OrderFailedToShip::class);

        // Test order shipping...
    }
}
```

If you would like to prevent all event listeners from running, you may use the `withoutEvents()` method. When this method is called, all listeners for all events will be mocked:

```
<?php

class ExampleTest extends TestCase
{
    public function testUserRegistration()
    {
        $this->withoutEvents();

        // Test user registration code...
    }
}
```

As an alternative to mocking, you may use the Event facade's `fake()` method to prevent all event listeners from executing. You may then assert that events were fired and even inspect the data they received. When using fakes, assertions are made after the code under test is executed:

```
<?php

use App\Events\OrderShipped;
use App\Events\OrderFailedToShip;
use Illuminate\Support\Facades\Event;

class ExampleTest extends TestCase
{
    /**
     * Test order shipping.
     */
    public function testOrderShipping()
    {
        Event::fake();

        // Perform order shipping...

        Event::assertFired(OrderShipped::class, function ($e) use ($order) {
            return $e->order->id === $order->id;
        });

        Event::assertNotFired(OrderFailedToShip::class);
    }
}
```
## Jobs

Sometimes, you may wish to test that specific jobs are dispatched when making requests to your application. This will allow you to test your routes and controllers in isolation, without worrying about your job's logic. Of course, you should then test the job in a separate test case.

Laravel provides a convenient `expectsJobs()` method that verifies the expected jobs are dispatched. However, the job itself will not be executed:

```
<?php

use App\Jobs\ShipOrder;

class ExampleTest extends TestCase
{
    public function testOrderShipping()
    {
        $this->expectsJobs(ShipOrder::class);

        // Test order shipping...
    }
}
```

This method only detects jobs that are dispatched via the DispatchesJobs trait's dispatch methods or the `dispatch()` helper function. It does not detect queued jobs that are sent directly to `Queue::push()`. As with the event mocks, you may also verify that a job was not dispatched using the `doesntExpectJobs()` method:

```
<?php

use App\Jobs\ShipOrder;

class ExampleTest extends TestCase
{
    /**
     * Test order cancellation.
     */
    public function testOrderCancellation()
    {
        $this->doesntExpectJobs(ShipOrder::class);

        // Test order cancellation...
    }
}
```

Alternatively, you may ignore all dispatched jobs using the `withoutJobs()` method. When this method is called within a test method, all jobs dispatched during that test will be discarded:

```
<?php

use App\Jobs\ShipOrder;

class ExampleTest extends TestCase
{
    /**
     * Test order cancellation.
     */
    public function testOrderCancellation()
    {
        $this->withoutJobs();

        // Test order cancellation...
    }
}
```

As an alternative to mocking, you may use the Queue facade's `fake()` method to prevent jobs from being queued. You may then assert that jobs were pushed to the queue and even inspect the data they received. When using fakes, assertions are made after the code under test is executed:

```
<?php

use App\Jobs\ShipOrder;
use Illuminate\Support\Facades\Queue;

class ExampleTest extends TestCase
{
    public function testOrderShipping()
    {
        Queue::fake();

        // Perform order shipping...

        Queue::assertPushed(ShipOrder::class, function ($job) use ($order) {
            return $job->order->id === $order->id;
        });

        // Assert a job was pushed to a given queue...
        Queue::assertPushedOn('queue-name', ShipOrder::class);

        // Assert a job was not pushed...
        Queue::assertNotPushed(AnotherJob::class);
    }
}
```
## Mail Fakes

You may use the Mail facade's `fake()` method to prevent mail from being sent. You may then assert that mailables were sent to users and even inspect the data they received. When using fakes, assertions are made after the code under test is executed:

```
<?php

use App\Mail\OrderShipped;
use Illuminate\Support\Facades\Mail;

class ExampleTest extends TestCase
{
    public function testOrderShipping()
    {
        Mail::fake();

        // Perform order shipping...

        Mail::assertSent(OrderShipped::class, function ($mail) use ($order) {
            return $mail->order->id === $order->id;
        });

        // Assert a message was sent to the given user...
        Mail::assertSentTo([$user], OrderShipped::class);

        // Assert a mailable was not sent...
        Mail::assertNotSent(AnotherMailable::class);
    }
}
```

## Notification Fakes

You may use the Notification facade's `fake()` method to prevent notifications from being sent. You may then assert that notifications were sent and even inspect the data they received. When using fakes, assertions are made after the code under test is executed:

```
<?php

use App\Notifications\OrderShipped;
use Illuminate\Support\Facades\Notification;

class ExampleTest extends TestCase
{
    public function testOrderShipping()
    {
        Notification::fake();

        // Perform order shipping...

        Notification::assertSentTo(
            $user,
            OrderShipped::class,
            function ($notification, $channels) use ($order) {
                return $notification->order->id === $order->id;
            }
        );

        // Assert a notification was sent to the given users...
        Notification::assertSentTo(
            [$user], OrderShipped::class
        );

        // Assert a notification was not sent...
        Notification::assertNotSentTo(
            [$user], AnotherNotification::class
        );
    }
}
```

## Facades

Unlike traditional static method calls, facades may be mocked. This provides a great advantage over traditional static methods and grants you the same testability you would have if you were using dependency injection. When testing, you often need to mock a call to a Laravel facade in one of your controllers. For example, consider the following controller action:

```
<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\Cache;

class UserController extends Controller
{
    /**
     * Show a list of all users of the application.
     *
     * @return Response
     */
    public function index()
    {
        $value = Cache::get('key');

        //
    }
}
```

We can mock the call to the Cache facade by using the `shouldReceive()` method, which will return an instance of a Mockery mock. Since facades are actually resolved and managed by the Laravel service container, they have much more testability than a typical static class. For example, let's mock our call to the Cache facade's `get()` method:

```
<?php

class FooTest extends TestCase
{
    public function testGetIndex()
    {
        Cache::shouldReceive('get')
                    ->once()
                    ->with('key')
                    ->andReturn('value');

        $this->visit('/users')->see('value');
    }
}
```

You should not mock the Request facade. Instead, pass the input you desire into the HTTP helper methods such as `call()` and `post()` when running your tests. Likewise, instead of mocking the Config facade, simply call the `Config::set()` method in your tests.
# API Authentication (Passport)

Laravel already makes it easy to perform authentication via traditional login forms, but what about APIs? APIs typically use tokens to authenticate users and do not maintain session state between requests. Laravel makes API authentication a breeze using Laravel Passport, which provides a full OAuth2 server implementation for your application in a matter of minutes. Passport is built on top of the League OAuth2 server created by Alex Bilbie.

This documentation assumes you are already familiar with OAuth2. If you do not know anything about OAuth2, consider familiarizing yourself with the general terminology and features of OAuth2 before continuing.

To get started, install Passport via the Composer package manager:
```
composer require laravel/passport
```
Next, register the Passport service provider in the `providers` array of your config/app.php configuration file:
```
Laravel\Passport\PassportServiceProvider::class,
```
The Passport service provider registers its own database migration directory with the framework, so you should migrate your database after registering the provider. The Passport migrations will create the tables your application needs to store clients and access tokens:

> php artisan migrate

Next, run the `passport:install` command. This command will create the encryption keys needed to generate secure access tokens. In addition, the command will create "personal access" and "password grant" clients which will be used to generate access tokens:

> php artisan passport:install

After running this command, add the Laravel\Passport\HasApiTokens trait to your App\User model. This trait will provide a few helper methods to your model which allow you to inspect the authenticated user's token and scopes:

```
<?php

namespace App;

use Laravel\Passport\HasApiTokens;
use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    use HasApiTokens, Notifiable;
}
```

Next, call the `Passport::routes()` method within the `boot()` method of your AuthServiceProvider. This method will register the routes necessary to issue access tokens and revoke access tokens, clients, and personal access tokens:

```
<?php

namespace App\Providers;

use Laravel\Passport\Passport;
use Illuminate\Support\Facades\Gate;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    /**
     * The policy mappings for the application.
     *
     * @var array
     */
    protected $policies = [
        'App\Model' => 'App\Policies\ModelPolicy',
    ];

    /**
     * Register any authentication / authorization services.
     *
     * @return void
     */
    public function boot()
    {
        $this->registerPolicies();

        Passport::routes();
    }
}
```

Finally, in your config/auth.php configuration file, set the `driver` option of the `api` authentication guard to `passport`. This will instruct your application to use Passport's `TokenGuard` when authenticating incoming API requests:

```
'guards' => [
    'web' => [
        'driver' => 'session',
        'provider' => 'users',
    ],

    'api' => [
        'driver' => 'passport',
        'provider' => 'users',
    ],
],
```
### Frontend Quickstart

In order to use the Passport Vue components, you must be using the Vue JavaScript framework. These components also use the Bootstrap CSS framework. However, even if you are not using these frameworks, the components serve as a valuable reference for your own frontend implementation.

Passport ships with a JSON API that you may use to allow your users to create clients and personal access tokens. However, it can be time consuming to write frontend code to interact with these APIs. So, Passport also includes pre-built Vue components you may use as an example implementation or starting point for your own implementation.

To publish the Passport Vue components, use the `vendor:publish` Artisan command:

> php artisan vendor:publish --tag=passport-components

The published components will be placed in your resources/assets/js/components directory. Once the components have been published, they should be registered in your resources/assets/js/app.js file:
```
Vue.component(
    'passport-clients',
    require('./components/passport/Clients.vue')
);

Vue.component(
    'passport-authorized-clients',
    require('./components/passport/AuthorizedClients.vue')
);

Vue.component(
    'passport-personal-access-tokens',
    require('./components/passport/PersonalAccessTokens.vue')
);
```
After registering the components, make sure to run `gulp` to recompile your assets. Once you have recompiled your assets, you may drop the components into one of your application's templates to get started creating clients and personal access tokens:

```
<passport-clients></passport-clients>
<passport-authorized-clients></passport-authorized-clients>
<passport-personal-access-tokens></passport-personal-access-tokens>
```

### Token Lifetimes
By default, Passport issues long-lived access tokens that never need to be refreshed. If you would like to configure a shorter token lifetime, use the `tokensExpireIn()` and `refreshTokensExpireIn()` methods. These methods should be called from the `boot()` method of your AuthServiceProvider:

```
use Carbon\Carbon;

/**
 * Register any authentication / authorization services.
 *
 * @return void
 */
public function boot()
{
    $this->registerPolicies();

    Passport::routes();

    Passport::tokensExpireIn(Carbon::now()->addDays(15));

    Passport::refreshTokensExpireIn(Carbon::now()->addDays(30));
}
```
## Issuing Access Tokens

Most developers are familiar with OAuth2 through its use of authorization codes. When using authorization codes, a client application redirects the user to your server, where they will either approve or deny the request to issue an access token to the client.

### Managing Clients

First, developers building applications that need to interact with your application's API will need to register their application with yours by creating a "client". Typically, this consists of providing the name of their application and a URL that your application can redirect users to after they approve the request for authorization.

The simplest way to create a client is using the `passport:client` Artisan command. This command may be used to create your own clients for testing your OAuth2 functionality. When you run the `passport:client` command, Passport will prompt you for more information about your client and will provide you with a client ID and secret:

> php artisan passport:client

Since your users will not be able to use the `passport:client` command, Passport provides a JSON API that you may use to create clients. This saves you the trouble of having to manually code controllers for creating, updating, and deleting clients.

However, you will need to pair Passport's JSON API with your own frontend to provide a dashboard for your users to manage their clients. Below, we'll review all of the API endpoints for managing clients. For convenience, we'll use Vue to demonstrate making HTTP requests to the endpoints.
GET /oauth/clients

This route returns all of the clients for the authenticated user. This is primarily useful for listing all of the user's clients so that they may edit or delete them:

```
this.$http.get('/oauth/clients')
    .then(response => {
        console.log(response.data);
    });
```

POST /oauth/clients

This route is used to create new clients. It requires two pieces of data: the client's `name` and a `redirect` URL. The `redirect` URL is where the user will be redirected after approving or denying a request for authorization.

When a client is created, it will be issued a client ID and a client secret. These values will be used when requesting access tokens from your application. The client creation route will return the new client instance:

```
const data = {
    name: '<NAME>',
    redirect: 'http://example.com/callback'
};

this.$http.post('/oauth/clients', data)
    .then(response => {
        console.log(response.data);
    })
    .catch (response => {
        // List errors on response...
    });
```
PUT /oauth/clients/{client-id}
This route is used to update clients. It requires two pieces of data: the client's `name` and a `redirect` URL. The `redirect` URL is where the user will be redirected after approving or denying a request for authorization. The route will return the updated client instance:

```
const data = {
    name: 'New Client Name',
    redirect: 'http://example.com/callback'
};

this.$http.put('/oauth/clients/' + clientId, data)
    .then(response => {
        console.log(response.data);
    })
    .catch (response => {
        // List errors on response...
    });
```
DELETE /oauth/clients/{client-id}
This route is used to delete clients:
```
this.$http.delete('/oauth/clients/' + clientId)
    .then(response => {
        //
    });
```
Once a client has been created, developers may use their client ID and secret to request an authorization code and access token from your application. First, the consuming application should make a redirect request to your application's /oauth/authorize route like so:
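The redirect request itself is not reproduced at this point in the guide; a minimal sketch, mirroring the scope-request example further below, might look like this:

```
Route::get('/redirect', function () {
    // Build the query string expected by Passport's /oauth/authorize route.
    $query = http_build_query([
        'client_id' => 'client-id',
        'redirect_uri' => 'http://example.com/callback',
        'response_type' => 'code',
        'scope' => '',
    ]);

    return redirect('http://your-app.com/oauth/authorize?'.$query);
});
```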
Remember, the /oauth/authorize route is already defined by the `Passport::routes()` method. You do not need to manually define this route.

When receiving authorization requests, Passport will automatically display a template to the user allowing them to approve or deny the authorization request. If they approve the request, they will be redirected back to the redirect_uri that was specified by the consuming application. The redirect_uri must match the redirect URL that was specified when the client was created.

If you would like to customize the authorization approval screen, you may publish Passport's views using the `vendor:publish` Artisan command. The published views will be placed in the resources/views/vendor/passport directory:

> php artisan vendor:publish --tag=passport-views
Converting Authorization Codes To Access Tokens

If the user approves the authorization request, they will be redirected back to the consuming application. The consumer should then issue a POST request to your application to request an access token. The request should include the authorization code that was issued by your application when the user approved the authorization request. In this example, we'll use the Guzzle HTTP library to make the POST request:
```
Route::get('/callback', function (Request $request) {
    $http = new GuzzleHttp\Client;

    $response = $http->post('http://your-app.com/oauth/token', [
        'form_params' => [
            'grant_type' => 'authorization_code',
            'client_id' => 'client-id',
            'client_secret' => 'client-secret',
            'redirect_uri' => 'http://example.com/callback',
            'code' => $request->code,
        ],
    ]);

    return json_decode((string) $response->getBody(), true);
});
```
The /oauth/token route will return a JSON response containing access_token, refresh_token, and expires_in attributes. The expires_in attribute contains the number of seconds until the access token expires.

Like the /oauth/authorize route, the /oauth/token route is also defined for you by the `Passport::routes()` method. There is no need to manually define this route.
### Refreshing Tokens

If your application issues short-lived access tokens, users will need to refresh their access tokens via the refresh token that was provided to them when the access token was issued. In this example, we'll use the Guzzle HTTP library to refresh the token:

```
$http = new GuzzleHttp\Client;

$response = $http->post('http://your-app.com/oauth/token', [
    'form_params' => [
        'grant_type' => 'refresh_token',
        'refresh_token' => 'the-refresh-token',
        'client_id' => 'client-id',
        'client_secret' => 'client-secret',
        'scope' => '',
    ],
]);

return json_decode((string) $response->getBody(), true);
```

This /oauth/token route will return a JSON response containing access_token, refresh_token, and expires_in attributes. The expires_in attribute contains the number of seconds until the access token expires.
## Password Grant Tokens

The OAuth2 password grant allows your own first-party clients, such as a mobile application, to obtain an access token using an e-mail address / username and password. This allows you to issue access tokens securely to your own clients without requiring your users to go through the entire OAuth2 authorization code redirect flow.

Creating A Password Grant Client

Before your application can issue tokens via the password grant, you will need to create a password grant client. You may do this using the `passport:client` command with the `--password` option. If you have already run the `passport:install` command, you do not need to run this command:

> php artisan passport:client --password

Once you have created a password grant client, you may request an access token by issuing a POST request to the /oauth/token route with the user's email address and password. Remember, this route is already registered by the `Passport::routes()` method, so there is no need to define it manually. If the request is successful, you will receive an access_token and refresh_token in the JSON response from the server:
```
$http = new GuzzleHttp\Client;

$response = $http->post('http://your-app.com/oauth/token', [
    'form_params' => [
        'grant_type' => 'password',
        'client_id' => 'client-id',
        'client_secret' => 'client-secret',
        'username' => '<EMAIL>',
        'password' => 'my-password',
        'scope' => '',
    ],
]);

return json_decode((string) $response->getBody(), true);
```

Remember, access tokens are long-lived by default. However, you are free to configure your maximum access token lifetime if needed.
### Requesting All Scopes

When using the password grant, you may wish to authorize the token for all of the scopes supported by your application. You can do this by requesting the `*` scope. If you request the `*` scope, the `can()` method on the token instance will always return `true`. This scope may only be assigned to a token that is issued using the password grant:
```
$response = $http->post('http://your-app.com/oauth/token', [
    'form_params' => [
        'grant_type' => 'password',
        'client_id' => 'client-id',
        'client_secret' => 'client-secret',
        'username' => '<EMAIL>',
        'password' => 'my-password',
        'scope' => '*',
    ],
]);
```
## Implicit Grant Tokens

The implicit grant is similar to the authorization code grant; however, the token is returned to the client without exchanging an authorization code. This grant is most commonly used for JavaScript or mobile applications where the client credentials can't be securely stored. To enable the grant, call the `enableImplicitGrant()` method in your AuthServiceProvider:

```
/**
 * Register any authentication / authorization services.
 *
 * @return void
 */
public function boot()
{
    $this->registerPolicies();

    Passport::routes();

    Passport::enableImplicitGrant();
}
```

Once the grant has been enabled, developers may use their client ID to request an access token from your application. The consuming application should make a redirect request to your application's /oauth/authorize route like so:
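The redirect request is not reproduced at this point in the guide; a minimal sketch, assuming the same query-string style as the authorization code flow but with a `token` response type, might be:

```
Route::get('/redirect', function () {
    // For the implicit grant the response_type is "token" and no
    // client secret or authorization code exchange is involved.
    $query = http_build_query([
        'client_id' => 'client-id',
        'redirect_uri' => 'http://example.com/callback',
        'response_type' => 'token',
        'scope' => '',
    ]);

    return redirect('http://your-app.com/oauth/authorize?'.$query);
});
```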
Remember, the /oauth/authorize route is already defined by the `Passport::routes()` method. You do not need to manually define this route.

## Client Credentials Grant Tokens

The client credentials grant is suitable for machine-to-machine authentication. For example, you might use this grant in a scheduled job which is performing maintenance tasks over an API. To retrieve a token, make a request to the oauth/token endpoint:
```
$guzzle = new GuzzleHttp\Client;

$response = $guzzle->post('http://your-app.com/oauth/token', [
    'form_params' => [
        'grant_type' => 'client_credentials',
        'client_id' => 'client-id',
        'client_secret' => 'client-secret',
        'scope' => 'your-scope',
    ],
]);

echo json_decode((string) $response->getBody(), true);
```
## Personal Access Tokens

Sometimes, your users may want to issue access tokens to themselves without going through the typical authorization code redirect flow. Allowing users to issue tokens to themselves via your application's UI can be useful for letting users experiment with your API, or may simply serve as an easier approach to issuing access tokens in general.

Personal access tokens are always long-lived. Their lifetime is not modified by the `tokensExpireIn()` and `refreshTokensExpireIn()` methods.

### Creating A Personal Access Client

Before your application can issue personal access tokens, you will need to create a personal access client. You may do this using the `passport:client` command with the `--personal` option. If you have already run the `passport:install` command, you do not need to run this command:

> php artisan passport:client --personal

### Managing Personal Access Tokens

Once you have created a personal access client, you may issue tokens for a given user using the `createToken()` method on the User model instance. The `createToken()` method accepts the name of the token as its first argument and an optional array of scopes as its second argument:

```
// Creating a token without scopes...
$token = $user->createToken('Token Name')->accessToken;

// Creating a token with scopes...
$token = $user->createToken('My Token', ['place-orders'])->accessToken;
```

Passport also includes a JSON API for managing personal access tokens. You may pair this API with your own frontend to offer your users a dashboard for managing personal access tokens. Below, we'll review all of the API endpoints for managing personal access tokens. For convenience, we'll use Vue to demonstrate making HTTP requests to the endpoints.
GET /oauth/scopes

This route returns all of the scopes defined for your application. You may use this route to list the scopes a user may assign to a personal access token:
```
this.$http.get('/oauth/scopes')
    .then(response => {
        console.log(response.data);
    });
```
GET /oauth/personal-access-tokens
This route returns all of the personal access tokens that the authenticated user has created. This is primarily useful for listing all of the user's tokens so that they may edit or delete them:
```
this.$http.get('/oauth/personal-access-tokens')
    .then(response => {
        console.log(response.data);
    });
```
POST /oauth/personal-access-tokens
This route creates new personal access tokens. It requires two pieces of data: the token's `name` and the `scopes` that should be assigned to the token:

```
const data = {
    name: 'Token Name',
    scopes: []
};

this.$http.post('/oauth/personal-access-tokens', data)
    .then(response => {
        console.log(response.data.accessToken);
    })
    .catch (response => {
        // List errors on response...
    });
```
DELETE /oauth/personal-access-tokens/{token-id}
This route deletes personal access tokens:
```
this.$http.delete('/oauth/personal-access-tokens/' + tokenId);
```
## Protecting Routes

### Via Middleware

Passport includes an authentication guard that will validate access tokens on incoming requests. Once you have configured the api guard to use the passport driver, you only need to specify the auth:api middleware on any routes that require a valid access token:
```
Route::get('/user', function () {
    //
})->middleware('auth:api');
```
### Passing The Access Token

When calling routes that are protected by Passport, your application's API consumers should specify their access token as a Bearer token in the Authorization header of their request. For example, when using the Guzzle HTTP library:
```
$response = $client->request('GET', '/api/user', [
    'headers' => [
        'Accept' => 'application/json',
        'Authorization' => 'Bearer '.$accessToken,
    ],
]);
```
## Token Scopes

### Defining Scopes

Scopes allow your API clients to request a specific set of permissions when requesting authorization to access an account. For example, if you are building an e-commerce application, not all API consumers will need the ability to place orders. Instead, you may allow the consumers to only request authorization to access order shipment statuses. In other words, scopes allow your application's users to limit the actions a third-party application can perform on their behalf.

You may define your API's scopes using the `Passport::tokensCan()` method in the `boot()` method of your AuthServiceProvider. The `tokensCan()` method accepts an array of scope names and scope descriptions. The scope description may be anything you wish and will be displayed to users on the authorization approval screen:
```
use Laravel\Passport\Passport;

Passport::tokensCan([
    'place-orders' => 'Place orders',
    'check-status' => 'Check order status',
]);
```
### Assigning Scopes To Tokens

When requesting an access token using the authorization code grant, consumers should specify their desired scopes as the `scope` query string parameter. The `scope` parameter should be a space-delimited list of scopes:

```
Route::get('/redirect', function () {
    $query = http_build_query([
        'client_id' => 'client-id',
        'redirect_uri' => 'http://example.com/callback',
        'response_type' => 'code',
        'scope' => 'place-orders check-status',
    ]);

    return redirect('http://your-app.com/oauth/authorize?'.$query);
});
```
When Issuing Personal Access Tokens

If you are issuing personal access tokens using the User model's `createToken()` method, you may pass the array of desired scopes as the second argument to the method:
```
$token = $user->createToken('My Token', ['place-orders'])->accessToken;
```
### Checking Scopes

Passport includes two middleware that may be used to verify that an incoming request is authenticated with a token that has been granted a given scope. To get started, add the following middleware to the `$routeMiddleware` property of your app/Http/Kernel.php file:
```
'scopes' => \Laravel\Passport\Http\Middleware\CheckScopes::class,
'scope' => \Laravel\Passport\Http\Middleware\CheckForAnyScope::class,
```
The scopes middleware may be assigned to a route to verify that the incoming request's access token has all of the listed scopes:

```
Route::get('/orders', function () {
    // Access token has both the "check-status" and "place-orders" scopes...
})->middleware('scopes:check-status,place-orders');
```

The scope middleware may be assigned to a route to verify that the incoming request's access token has at least one of the listed scopes:

```
Route::get('/orders', function () {
    // Access token has either the "check-status" or "place-orders" scope...
})->middleware('scope:check-status,place-orders');
```
Checking Scopes On A Token Instance

Once an access token authenticated request has entered your application, you may still check if the token has a given scope using the `tokenCan()` method on the authenticated User instance:

```
Route::get('/orders', function (Request $request) {
    if ($request->user()->tokenCan('place-orders')) {
        //
    }
});
```
## Consuming Your API With JavaScript

When building an API, it can be extremely useful to be able to consume your own API from your JavaScript application. This approach to API development allows your own application to consume the same API that you are sharing with the world. The same API may be consumed by your web application, mobile applications, third-party applications, and any SDKs that you may publish on various package managers.

Typically, if you want to consume your API from your JavaScript application, you would need to manually send an access token to the application and pass it with each request to your application. However, Passport includes a middleware that can handle this for you. All you need to do is add the CreateFreshApiToken middleware to your web middleware group:

```
'web' => [
    // Other middleware...
    \Laravel\Passport\Http\Middleware\CreateFreshApiToken::class,
],
```

This Passport middleware will attach a laravel_token cookie to your outgoing responses. This cookie contains an encrypted JWT that Passport will use to authenticate API requests from your JavaScript application. Now, you may make requests to your application's API without explicitly passing an access token:
```
this.$http.get('/user')
    .then(response => {
        console.log(response.data);
    });
```
When using this method of authentication, you will need to send the CSRF token with every request via the X-CSRF-TOKEN header. Laravel will automatically send this header if you are using the default Vue configuration that is included with the framework:
```
Vue.http.interceptors.push((request, next) => {
    request.headers.set('X-CSRF-TOKEN', Laravel.csrfToken);

    next();
});
```
If you are using another JavaScript framework, you should make sure it is configured to send this header with every outgoing request.

## Events

Passport raises events when issuing access tokens and refresh tokens. You may use these events to prune or revoke other access tokens in your database. You may attach listeners to these events in your application's EventServiceProvider:

```
/**
 * The event listener mappings for the application.
 *
 * @var array
 */
protected $listen = [
    'Laravel\Passport\Events\AccessTokenCreated' => [
        'App\Listeners\RevokeOldTokens',
    ],

    'Laravel\Passport\Events\RefreshTokenCreated' => [
        'App\Listeners\PruneOldTokens',
    ],
];
```
# Laravel Scout
Laravel Scout provides a simple, driver-based solution for adding full-text search to your Eloquent models. Using model observers, Scout will automatically keep your search indexes in sync with your Eloquent records.

Currently, Scout ships with an Algolia driver; however, writing custom drivers is simple and you are free to extend Scout with your own search implementation.

First, install Scout via the Composer package manager:

> composer require laravel/scout

Next, add the `Laravel\Scout\ScoutServiceProvider` to the `providers` array of your config/app.php configuration file:
```
Laravel\Scout\ScoutServiceProvider::class,
```
After registering the Scout service provider, publish the Scout configuration using the `vendor:publish` Artisan command. This command will publish the scout.php configuration file to your config directory:

> php artisan vendor:publish --provider="Laravel\Scout\ScoutServiceProvider"

Finally, add the `Laravel\Scout\Searchable` trait to the model you would like to make searchable. This trait will register a model observer to keep the model in sync with your search driver:

```
<?php

namespace App;

use Laravel\Scout\Searchable;
use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    use Searchable;
}
```

### Queueing

While not strictly required to use Scout, you should strongly consider configuring a queue driver before using the library. Running a queue worker will allow Scout to queue all operations that sync your model information to your search indexes, providing much better response times for your application's web interface.

Once you have configured a queue driver, set the value of the `queue` option in your config/scout.php configuration file to `true`: `'queue' => true,`
### Driver Prerequisites
# Algolia
When using the Algolia driver, you should configure your Algolia `id` and `secret` credentials in your config/scout.php configuration file. Once your credentials have been configured, you will also need to install the Algolia PHP SDK via the Composer package manager:

> composer require algolia/algoliasearch-client-php
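For illustration, the relevant block of config/scout.php might look roughly like the following sketch; the exact environment variable names are an assumption rather than something defined in this guide:

```
// config/scout.php (excerpt)
'algolia' => [
    'id' => env('ALGOLIA_APP_ID', ''),
    'secret' => env('ALGOLIA_SECRET', ''),
],
```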
### Configuring Model Indexes

Each Eloquent model is synced with a given search "index", which contains all of the searchable records for that model. In other words, you can think of each index like a MySQL table. By default, each model will be persisted to an index matching the model's typical "table" name. Typically, this is the plural form of the model name; however, you are free to customize the model's index by overriding the `searchableAs()` method on the model:

```
<?php

namespace App;

use Laravel\Scout\Searchable;
use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    use Searchable;

    /**
     * Get the index name for the model.
     *
     * @return string
     */
    public function searchableAs()
    {
        return 'posts_index';
    }
}
```

### Configuring Searchable Data

By default, the entire `toArray()` form of a given model will be persisted to its search index. If you would like to customize the data that is synchronized to the search index, you may override the `toSearchableArray()` method on the model:

```
<?php

namespace App;

use Laravel\Scout\Searchable;
use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    use Searchable;

    /**
     * Get the indexable data array for the model.
     *
     * @return array
     */
    public function toSearchableArray()
    {
        $array = $this->toArray();

        // Customize the array...

        return $array;
    }
}
```
## Indexing

### Batch Import

If you are installing Scout into an existing project, you may already have database records you need to import into your search driver. Scout provides an import Artisan command that you may use to import all of your existing records into your search indexes:

> php artisan scout:import "App\Post"

### Adding Records

Once you have added the `Laravel\Scout\Searchable` trait to a model, all you need to do is `save` a model instance and it will automatically be added to your search index. If you have configured Scout to use queues, this operation will be performed in the background by your queue worker:
```
$order = new App\Order;

// ...

$order->save();
```
### Adding Via Query

If you would like to add a collection of models to your search index via an Eloquent query, you may chain the `searchable()` method onto the Eloquent query. The `searchable()` method will chunk the results of the query and add the records to your search index. Again, if you have configured Scout to use queues, all of the chunks will be added in the background by your queue workers:

```
// Adding via Eloquent query...
App\Order::where('price', '>', 100)->searchable();

// You may also add records via relationships...
$user->orders()->searchable();

// You may also add records via collections...
$orders->searchable();
```

The `searchable()` method can be considered an "upsert" operation. In other words, if the model record is already in your index, it will be updated. If it does not exist in the search index, it will be added to the index.
### Updating Records

To update a searchable model, you only need to update the model instance's properties and `save` the model to your database. Scout will automatically persist the changes to your search index:

```
$order = App\Order::find(1);

// Update the order...

$order->save();
```

You may also use the `searchable()` method on an Eloquent query to update a collection of models. If the models do not exist in your search index, they will be created:

```
App\Order::where('price', '>', 100)->searchable();

// You may also update via relationships...
$user->orders()->searchable();

// You may also update via collections...
$orders->searchable();
```
### Removing Records

To remove a record from your index, simply `delete` the model from the database. This form of removal is even compatible with soft deleted models:

```
$order = App\Order::find(1);

$order->delete();
```

If you do not want to retrieve the model before deleting the record, you may use the `unsearchable()` method on an Eloquent query instance or collection:

```
App\Order::where('price', '>', 100)->unsearchable();

// You may also remove via relationships...
$user->orders()->unsearchable();

// You may also remove via collections...
$orders->unsearchable();
```
### Pausing Indexing

Sometimes you may need to perform a batch of Eloquent operations on a model without syncing the model data to your search index. You may do this using the `withoutSyncingToSearch()` method. This method accepts a single callback which will be executed immediately. Any model operations that occur within the callback will not be synced to the model's index:
```
App\Order::withoutSyncingToSearch(function () {
    // Perform model actions...
});
```
## Searching

You may begin searching a model using the `search()` method. The search method accepts a single string that will be used to search your models. You should then chain the `get()` method onto the search query to retrieve the Eloquent models that match the given search query:
```
$orders = App\Order::search('Star Trek')->get();
```
Since Scout searches return a collection of Eloquent models, you may even return the results directly from a route or controller and they will automatically be converted to JSON:

```
Route::get('/search', function (Request $request) {
    return App\Order::search($request->search)->get();
});
```
### Where Clauses

Scout allows you to add simple "where" clauses to your search queries. Currently, these clauses only support basic numeric equality checks and are primarily useful for scoping search queries by an owner ID. Since a search index is not a relational database, more advanced "where" clauses are not currently supported:
```
$orders = App\Order::search('Star Trek')->where('user_id', 1)->get();
```
In addition to retrieving a collection of models, you may paginate your search results using the `paginate()` method. This method will return a `Paginator` instance just as if you had paginated a traditional Eloquent query:
```
$orders = App\Order::search('Star Trek')->paginate();
```
You may specify how many models to retrieve per page by passing the amount as the first argument to the `paginate()` method:
```
$orders = App\Order::search('Star Trek')->paginate(15);
```
Once you have retrieved the results, you may display the results and render the page links using Blade just as if you had paginated a traditional Eloquent query:

```
@foreach ($orders as $order)
    {{ $order->price }}
@endforeach

{{ $orders->links() }}
```
## Custom Engines

### Writing The Engine

If one of the built-in Scout search engines doesn't fit your needs, you may write your own custom engine and register it with Scout. Your engine should extend the `Laravel\Scout\Engines\Engine` abstract class. This abstract class contains five methods your custom engine must implement:
```
use Laravel\Scout\Builder;

abstract public function update($models);
abstract public function delete($models);
abstract public function search(Builder $builder);
abstract public function paginate(Builder $builder, $perPage, $page);
abstract public function map($results, $model);
```

You may find it helpful to review the implementations of these methods on the `Laravel\Scout\Engines\AlgoliaEngine` class. This class will provide you with a good starting point for learning how to implement each of these methods in your own engine.
### Registering The Engine

Once you have written your custom engine, you may register it with Scout using the `extend()` method of the Scout engine manager. You should call the `extend()` method from the `boot()` method of your `AppServiceProvider` or any other service provider used by your application. For example, if you have written a `MySqlSearchEngine`, you may register it like so:
```
use Laravel\Scout\EngineManager;

/**
 * Bootstrap any application services.
 *
 * @return void
 */
public function boot()
{
    resolve(EngineManager::class)->extend('mysql', function () {
        return new MySqlSearchEngine;
    });
}
```
Once your engine has been registered, you may specify it as your default Scout driver in your config/scout.php configuration file:
`'driver' => 'mysql',`
# Eloquent ORM
The Eloquent ORM included with Laravel provides a beautiful, simple ActiveRecord implementation for working with your database. Each database table has a corresponding model class which is used to interact with that table. Models allow you to query for data in your tables, as well as insert new records into the table.

Before getting started, be sure to configure a database connection in config/database.php. For more information on configuring your database, check out the database documentation.

## Defining Models

To get started, let's create an Eloquent model. Models typically live in the app directory, but you are free to place them anywhere that can be auto-loaded according to your composer.json file. All Eloquent models extend the Illuminate\Database\Eloquent\Model class.

The easiest way to create a model instance is using the `make:model` Artisan command:

> php artisan make:model User

If you would like to generate a database migration when you generate the model, you may use the `--migration` or `-m` option:

> php artisan make:model User --migration

> php artisan make:model User -m

### Eloquent Model Conventions

Now, let's look at an example Flight model, which we will use to retrieve and store information from our flights database table:

```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Flight extends Model
{
    //
}
```

Note that we did not tell Eloquent which table to use for our Flight model. Unless another name is explicitly specified, by convention the lowercase, plural form of the class name will be used as the table name. So, in this case, Eloquent will assume the `Flight` model stores its data in the flights table. You may specify a custom table by defining a `table` property on your model:

```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Flight extends Model
{
    /**
     * The table associated with the model.
     *
     * @var string
     */
    protected $table = 'my_flights';
}
```

Eloquent will also assume that each table has a primary key column named `id`. You may define a `$primaryKey` property to specify a different name.
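For illustration, overriding the primary key might look like the following sketch; the `flight_id` column name is an assumption, not something defined elsewhere in this guide:

```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Flight extends Model
{
    // Use "flight_id" instead of the default "id" primary key column.
    protected $primaryKey = 'flight_id';
}
```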
By default, Eloquent expects created_at and updated_at columns to exist on your tables. If you do not wish to have these columns automatically managed by Eloquent, set the `$timestamps` property on your model to `false`:

```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Flight extends Model
{
    /**
     * Indicates if the model should be timestamped.
     *
     * @var bool
     */
    public $timestamps = false;
}
```

If you need to customize the format of your timestamps, set the `$dateFormat` property on your model. This property determines how date attributes are stored in the database, as well as their format when the model is serialized to an array or JSON:

```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Flight extends Model
{
    /**
     * The storage format of the model's date columns.
     *
     * @var string
     */
    protected $dateFormat = 'U';
}
```
By default, all Eloquent models will use the default database connection configured for your application. If you would like to specify a different connection for the model, use the `$connection` property:

```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Flight extends Model
{
    /**
     * The connection name for the model.
     *
     * @var string
     */
    protected $connection = 'connection-name';
}
```
Specifying The Connection Name

Sometimes you may need to specify which connection should be used when running an Eloquent query. Simply use the `on()` method:
```
$user = User::on('connection-name')->find(1);
```
If you are using read/write connections, you may force the query to use the "write" connection with the following method:
```
$user = User::onWriteConnection()->find(1);
```
## Retrieving Models

Once you have created a model and its associated database table, you are ready to start retrieving data from your database. Think of each Eloquent model as a powerful query builder allowing you to fluently query the database table associated with the model. For example:
```
<?php

use App\Flight;

$flights = App\Flight::all();

foreach ($flights as $flight) {
    echo $flight->name;
}
```
```
<?php

namespace App\Http\Controllers;

use App\Flight;
use App\Http\Controllers\Controller;

class FlightController extends Controller
{
    /**
     * Show a list of all available flights.
     *
     * @return Response
     */
    public function index()
    {
        $flights = Flight::all();

        return view('flight.index', ['flights' => $flights]);
    }
}
```
If you have an Eloquent model instance, you may access the column values of the model by accessing the corresponding property. For example, let's loop through each Flight instance returned by our query and echo the value of the name column:

```
foreach ($flights as $flight) {
    echo $flight->name;
}
```

Adding Additional Constraints

The Eloquent `all()` method will return all of the results in the model's table. Since each Eloquent model serves as a query builder, you may also add constraints to queries and then use the `get()` method to retrieve the results:
```
$flights = App\Flight::where('active', 1)
               ->orderBy('name', 'desc')
               ->take(10)
               ->get();
```
Все методы, доступные в конструкторе запросов, также доступны при работе с моделями Eloquent. Вы можете использовать любой из них в запросах Eloquent.
Такие методы Eloquent, как `PHPall()` и `PHPget()` , которые получают несколько результатов, возвращают экземпляр Illuminate\Database\Eloquent\Collection. Класс Collection предоставляет множество полезных методов для работы с результатами Eloquent.
```
$flights = $flights->reject(function ($flight) {
    return $flight->cancelled;
});
```
Само собой, вы также можете просто перебирать такую коллекцию в цикле как массив:
foreach ($flights as $flight) { echo $flight->name; }
### Разделение результата на блоки
Если вам нужно обработать тысячи записей Eloquent, используйте команду `PHPchunk()` (блок — прим. пер.). Метод `PHPchunk()` получает модель Eloquent частями, передавая их в замыкание для обработки. Использование этого метода уменьшает используемый объём оперативной памяти:
```
Flight::chunk(200, function ($flights) {
    foreach ($flights as $flight) {
        //
    }
});
```
Первый передаваемый в метод аргумент — число записей, получаемых в одном блоке. Передаваемая в качестве второго аргумента функция-замыкание будет вызываться для каждого блока, получаемого из БД.
Для получения каждого блока записей, передаваемого в замыкание, будет выполнен запрос к базе данных.
Метод `PHPcursor()` позволяет проходить по записям базы данных, используя курсор, который выполняет только один запрос. При обработке больших объёмов данных метод `PHPcursor()` может значительно уменьшить расходование памяти:
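Примерный набросок использования `cursor()` (условие в `where()` здесь условное):

```
foreach (Flight::where('destination', 'Zurich')->cursor() as $flight) {
    // Обрабатываем по одному рейсу за раз, не загружая всю выборку в память...
    echo $flight->name;
}
```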
## Получение одиночных моделей / агрегатные функции
Разумеется, кроме получения всех записей указанной таблицы вы можете также получить конкретные записи с помощью `PHPfind()` и `PHPfirst()` . Вместо коллекции моделей эти методы возвращают один экземпляр модели:
```
// Получение модели по её первичному ключу...
$flight = App\Flight::find(1);

// Получение первой модели, удовлетворяющей условиям...
$flight = App\Flight::where('active', 1)->first();
```
Иногда вам нужно возбудить исключение, если определённая модель не была найдена. Это удобно в маршрутах и контроллерах. Методы `PHPfindOrFail()` и `PHPfirstOrFail()` получают первый результат запроса. А если результатов не найдено, происходит исключение Illuminate\Database\Eloquent\ModelNotFoundException:
```
$model = App\Flight::findOrFail(1);

$model = App\Flight::where('legs', '>', 100)->firstOrFail();
```
Если исключение не поймано, пользователю автоматически посылается HTTP-отклик 404. Нет необходимости писать явные проверки для возврата откликов 404 при использовании этих методов:
```
Route::get('/api/flights/{id}', function ($id) {
    return App\Flight::findOrFail($id);
});
```
Это позволит вам отловить исключение, занести его в журнал и вывести необходимую страницу ошибки. Чтобы поймать исключение `ModelNotFoundException`, добавьте нужную логику в ваш файл app/Exceptions/Handler.php:

```
use Illuminate\Database\Eloquent\ModelNotFoundException;

class Handler extends ExceptionHandler
{
    public function render($request, Exception $e)
    {
        if ($e instanceof ModelNotFoundException) {
            // Ваша логика для ненайденной модели...
        }

        return parent::render($request, $e);
    }
}
```
Построение запросов в моделях Eloquent
```
$users = User::where('votes', '>', 100)->take(10)->get();

foreach ($users as $user) {
    var_dump($user->name);
}
```
Вам также доступны агрегатные функции конструктора запросов, такие как `PHPcount()` , `PHPmax()` , `PHPsum()` и др. Эти методы возвращают соответствующее скалярное значение вместо полного экземпляра модели:
```
$count = App\Flight::where('active', 1)->count();

$max = App\Flight::where('active', 1)->max('price');
```
### Вставки
Для создания новой записи в БД просто создайте экземпляр модели, задайте атрибуты модели и вызовите метод `PHPsave()` : `<?php` namespace App\Http\Controllers; use App\Flight; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class FlightController extends Controller { /** * Создание нового экземпляра рейса. * * @param Request $request * @return Response */ public function store(Request $request) { // Проверка запроса... $flight = new Flight; $flight->name = $request->name; $flight->save(); } } В этом примере мы просто присвоили значение параметра name из входящего HTTP-запроса атрибуту name экземпляра модели App\Flight. При вызове метода `PHPsave()` запись будет вставлена в таблицу. Отметки времени created_at и `PHPupdated_at(t)` будут автоматически установлены при вызове `PHPsave()` , поэтому не надо задавать их вручную.
### Изменения
Метод `PHPsave()` можно использовать и для изменения существующей модели в БД. Для изменения модели вам нужно получить её, изменить необходимые атрибуты и вызвать метод `PHPsave()` . Отметка времени `PHPupdated_at(t)` будет установлена автоматически, поэтому не надо задавать её вручную:
$flight = App\Flight::find(1); $flight->name = 'New Flight Name'; $flight->save();
Изменения можно выполнить для нескольких моделей, которые соответствуют указанному запросу. В этом примере все рейсы, которые отмечены как active и имеют destination равное San Diego, будут отмечены как delayed:
```
App\Flight::where('active', 1)
          ->where('destination', 'San Diego')
          ->update(['delayed' => 1]);
```

Метод `update()` ожидает массив пар столбец/значение, обозначающий, какие столбцы необходимо изменить.
При использовании массовых изменений Eloquent для изменяемых моделей не будут возникать события saved и updated. Это происходит потому, что на самом деле модели вообще не извлекаются при массовом изменении.
### Массовое заполнение
Вы также можете использовать метод `PHPcreate()` для создания и сохранения модели одной строкой. Метод вернёт добавленную модель. Однако перед этим вам нужно определить либо свойство `PHP$fillable` , либо `PHP$guarded` в классе модели, так как все модели Eloquent изначально защищены от массового заполнения. Уязвимость массового заполнения проявляется, когда пользователь передаёт с помощью запроса неподходящий HTTP-параметр, и вы не ожидаете, что этот параметр изменит столбец в вашей БД. Например, злоумышленник может послать в HTTP-запросе параметр is_admin, который затем передаётся в метод `PHPcreate()` вашей модели, позволяя пользователю повысить свои привилегии до администратора. Поэтому, для начала надо определить, для каких атрибутов разрешить массовое назначение. Это делается с помощью свойства модели `PHP$fillable` . Например, давайте разрешим массовое назначение атрибута name нашей модели Flight: `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Flight extends Model { /** * Атрибуты, для которых разрешено массовое назначение. * * @var array */ protected $fillable = ['name']; } Теперь мы можем использовать метод `PHPcreate()` для вставки новой записи в БД. Метод `PHPcreate()` возвращает сохранённый экземпляр модели:
```
$flight = App\Flight::create(['name' => 'Flight 10']);
```
Параметр `PHP$fillable` служит «белым списком» атрибутов, для которых разрешено массовое назначение. А параметр `PHP$guarded` служит «чёрным списком». Параметр `PHP$guarded` должен содержать массив атрибутов, для которых будет запрещено массовое назначение. Атрибутам, не вошедшим в этот массив, будет разрешено массовое назначение. Само собой, вы должны использовать только один из этих параметров. В данном примере всем атрибутам кроме price разрешено массовое заполнение: `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Flight extends Model { /** * Атрибуты, для которых запрещено массовое назначение. * * @var array */ protected $guarded = ['price']; }
Чтобы разрешить массовое назначение для всех атрибутов, вы можете определить свойство `$guarded` как пустой массив: `/**` * Атрибуты, для которых запрещено массовое назначение. * * @var array */ protected $guarded = [];
При использовании `$guarded` вы по-прежнему не должны передавать `Input::get()` или любые сырые массивы пользовательского ввода в методы `save()` и `update()`, потому что может быть обновлён любой незащищённый столбец.

Защита всех атрибутов от массового заполнения

Вы также можете запретить массовое заполнение для всех атрибутов, используя символ `*`:

```
protected $guarded = ['*'];
```
### Другие методы создания
Есть ещё два метода, используемые для создания моделей с помощью массового заполнения: `PHPfirstOrCreate()` и `PHPfirstOrNew()` . Метод `PHPfirstOrCreate()` пытается найти запись БД, используя указанные пары столбец/значение. Если модель не найдена в БД, запись будет вставлена в БД с указанными атрибутами. Метод `PHPfirstOrNew()` как и `PHPfirstOrCreate()` пытается найти в БД запись, соответствующую указанным атрибутам. Однако если модель не найдена, будет возвращён новый экземпляр модели. Учтите, что эта модель ещё не помещена в БД. Вам надо вызвать метод `PHPsave()` вручную, чтобы сохранить её:
```
// Получить рейс по атрибутам или создать, если он не существует...
$flight = App\Flight::firstOrCreate(['name' => 'Flight 10']);

// Получить рейс по атрибутам или создать новый экземпляр...
$flight = App\Flight::firstOrNew(['name' => 'Flight 10']);
```
Ещё вы можете столкнуться с ситуациями, когда надо обновить существующую модель или создать новую, если её пока нет. Laravel предоставляет метод `updateOrCreate()` для выполнения этой задачи за один шаг. Подобно методу `firstOrCreate()`, метод `updateOrCreate()` сохраняет модель, поэтому не надо вызывать метод `save()`:
```
// Если есть рейс из Oakland в San Diego, установить стоимость = $99.
// Если подходящей модели нет, создать новую.
$flight = App\Flight::updateOrCreate(
    ['departure' => 'Oakland', 'destination' => 'San Diego'],
    ['price' => 99]
);
```
После сохранения или создания новой модели, использующей автоматические (autoincrementing) ID, вы можете получать ID объектов, обращаясь к их атрибуту `PHPid` :
```
$insertedId = $user->id;
```
Сохранение модели и её отношений
Иногда вам может быть нужно сохранить не только модель, но и все её отношения. Для этого просто используйте метод `PHPpush()` : `$user->push();`
Вы также можете выполнять обновления в виде запросов к набору моделей:
```
$affectedRows = User::where('votes', '>', 100)->update(['status' => 2]);
```
При обновлении набора моделей с помощью конструктора запросов Eloquent никакие события моделей не срабатывают.
## Удаление моделей
Для удаления модели вызовите метод `PHPdelete()` на её экземпляре:
$flight = App\Flight::find(1); $flight->delete(); В предыдущем примере мы получили модель из БД перед вызовом метода `delete()`. Но если вы знаете первичный ключ модели, вы можете удалить модель, не получая её. Для этого вызовите метод `destroy()`:
```
App\Flight::destroy(1);

App\Flight::destroy([1, 2, 3]);

App\Flight::destroy(1, 2, 3);
```
Конечно, вы также можете выполнить оператор удаления на наборе моделей. В этом примере мы удалим все рейсы, отмеченные неактивными. Подобно массовому обновлению, массовое удаление не вызовет никаких событий для удаляемых моделей:
```
$deletedRows = App\Flight::where('active', 0)->delete();
```
При использовании массового удаления Eloquent для удаляемых моделей не будут возникать события deleting и deleted. Это происходит потому, что на самом деле модели вообще не извлекаются при выполнении оператора удаления.
### Мягкое удаление
Кроме обычного удаления записей из БД Eloquent также может «мягко удалять» модели. Когда вы «мягко» удаляете модель, она на самом деле остаётся в базе данных, но в БД устанавливается её атрибут deleted_at. Если у модели ненулевое значение deleted_at, значит модель мягко удалена. Для включения мягкого удаления для модели используйте для неё типаж Illuminate\Database\Eloquent\SoftDeletes
и добавьте столбец deleted_at в свойство `PHP$dates` : `<?php` namespace App; use Illuminate\Database\Eloquent\Model; use Illuminate\Database\Eloquent\SoftDeletes; class Flight extends Model { use SoftDeletes; /** * Атрибуты, которые должны быть преобразованы в даты. * * @var array */ protected $dates = ['deleted_at']; } Разумеется, вам необходимо добавить столбец deleted_at в вашу таблицу. Для этого используется метод `PHPsoftDeletes()` конструктора таблиц:
```
Schema::table('flights', function ($table) {
    $table->softDeletes();
});
```

Теперь, когда вы вызовете метод `delete()`, поле deleted_at будет установлено в значение текущей даты и времени. При запросе моделей, использующих мягкое удаление, «удалённые» модели не будут включены в результаты запроса. Для определения того, удалён ли экземпляр модели, используйте метод `trashed()`:

```
if ($flight->trashed()) {
    //
}
```
### Запрос мягко удалённых моделей
Включение удалённых моделей в результат выборки
Как было сказано, мягко удалённые модели автоматически исключаются из результатов запроса. Для отображения всех моделей, в том числе удалённых, используйте метод `PHPwithTrashed()` :
```
$flights = App\Flight::withTrashed()
                ->where('account_id', 1)
                ->get();
```

Метод `withTrashed()` может быть использован и в отношениях:
```
$flight->history()->withTrashed()->get();
```
Рекомендации для условия WHERE
При добавлении в запрос мягко удалённых моделей условий orWhere всегда используйте сложные условия WHERE для их логической группировки. Например:
```
User::where(function ($query) {
        $query->where('name', '=', 'John')
              ->orWhere('votes', '>', 100);
    })
    ->get();
```
Таким образом получится следующий SQL-запрос:
> select * from `users` where `users`.`deleted_at` is null and (`name` = 'John' or `votes` > 100)
Если условия orWhere не сгруппированы, то получится следующий SQL-запрос, который будет содержать мягко удалённые записи:
> select * from `users` where `users`.`deleted_at` is null and `name` = 'John' or `votes` > 100
Получение только мягко удалённых моделей
Если вы хотите получить только мягко удалённые модели, вызовите метод `PHPonlyTrashed()` :
```
$flights = App\Flight::onlyTrashed()
                ->where('airline_id', 1)
                ->get();
```
Восстановление мягко удалённых моделей
Иногда необходимо восстановить мягко удалённую модель. Для восстановления мягко удалённой модели в активное состояние используется метод `PHPrestore()` : `$flight->restore();`
Вы также можете использовать его в запросе для быстрого восстановления нескольких моделей. Подобно другим массовым операциям, это не вызовет никаких событий для восстанавливаемых моделей::
```
App\Flight::withTrashed()
        ->where('airline_id', 1)
        ->restore();
```

Как и метод `withTrashed()`, метод `restore()` можно использовать и в отношениях:
```
$flight->history()->restore();
```
Если вы хотите полностью удалить модель из БД, используйте метод `PHPforceDelete()` :
```
// Принудительное удаление одного экземпляра модели...
$flight->forceDelete();

// Принудительное удаление всех связанных моделей...
$flight->history()->forceDelete();
```
## Заготовки запросов
### Глобальные заготовки
Глобальные заготовки позволяют добавить ограничения во все запросы для данной модели. Собственная функция Laravel мягкое удаление использует глобальные заготовки, чтобы получать из базы данных только «неудалённые» модели. Написание собственных глобальных заготовок обеспечивает удобный и простой способ наложить определённые ограничения на каждый запрос для конкретной модели.
Написание глобальных заготовок
Писать глобальные заготовки просто. Определите класс, реализующий интерфейс Illuminate\Database\Eloquent\Scope. Этот интерфейс требует реализации одного метода: `PHPapply()` . Метод `PHPapply()` может добавить к запросу ограничение `PHPwhere` при необходимости: `<?php` namespace App\Scopes; use Illuminate\Database\Eloquent\Scope; use Illuminate\Database\Eloquent\Model; use Illuminate\Database\Eloquent\Builder; class AgeScope implements Scope { /** * Применение заготовки к данному построителю запросов Eloquent. * * @param \Illuminate\Database\Eloquent\Builder $builder * @param \Illuminate\Database\Eloquent\Model $model * @return void */ public function apply(Builder $builder, Model $model) { $builder->where('age', '>', 200); } }
В Laravel-приложении по умолчанию нет определённой папки для хранения заготовок, поэтому вы можете создать свою папку Scopes в папке app вашего приложения.
Применение глобальных заготовок
Для назначения глобальной заготовки на модель вам надо переопределить метод `PHPboot()` данной модели и использовать метод `PHPaddGlobalScope()` : `<?php` namespace App; use App\Scopes\AgeScope; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * "Загружающий" метод модели. * * @return void */ protected static function boot() { parent::boot(); static::addGlobalScope(new AgeScope); } } После добавления заготовки запрос к `PHPUser::all()` будет создавать следующий SQL: > sqlselect * from `users` where `age` > 200
Анонимные глобальные заготовки
Также Eloquent позволяет определять глобальные заготовки с помощью замыканий, что особенно удобно для простых заготовок, которым не нужен отдельный класс:
`<?php` namespace App; use Illuminate\Database\Eloquent\Model; use Illuminate\Database\Eloquent\Builder; class User extends Model { /** * "Загружающий" метод модели. * * @return void */ protected static function boot() { parent::boot(); static::addGlobalScope('age', function (Builder $builder) { $builder->where('age', '>', 200); }); } } Первый аргумент `PHPaddGlobalScope()` служит идентификатором для удаления заготовки:
```
User::withoutGlobalScope('age')->get();
```
Если вы хотите удалить глобальную заготовку для данного запроса, то можете использовать метод `withoutGlobalScope()`. Этот метод принимает единственный аргумент — имя класса глобальной заготовки:
```
User::withoutGlobalScope(AgeScope::class)->get();
```
Если вы хотите удалить несколько или все глобальные заготовки, то можете использовать метод `withoutGlobalScopes()`:
```
// Удалить все глобальные заготовки...
User::withoutGlobalScopes()->get();

// Удалить некоторые глобальные заготовки...
User::withoutGlobalScopes([
    FirstScope::class, SecondScope::class
])->get();
```
### Локальные заготовки
Заготовки позволяют вам повторно использовать логику запросов в моделях. Например, если вам часто требуется получать пользователей, которые сейчас «популярны». Для создания заготовки просто начните имя метода с префикса scope:
`<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Заготовка запроса популярных пользователей. * * @param \Illuminate\Database\Eloquent\Builder $query * @return \Illuminate\Database\Eloquent\Builder */ public function scopePopular($query) { return $query->where('votes', '>', 100); } /** * Заготовка запроса активных пользователей. * * @param \Illuminate\Database\Eloquent\Builder $query * @return \Illuminate\Database\Eloquent\Builder */ public function scopeActive($query) { return $query->where('active', 1); } }
Использование локальной заготовки
Когда заготовка определена, вы можете вызывать методы заготовки при запросах к модели. Но теперь вам не нужно использовать префикс scope. Вы можете даже сцеплять вызовы разных заготовок, например:
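Например, набросок вызова определённых выше заготовок `popular()` и `active()` в одной цепочке:

```
$users = App\User::popular()->active()->orderBy('created_at')->get();
```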
Иногда вам может потребоваться определить заготовку, которая принимает параметры. Для этого просто добавьте эти параметры в заготовку. Они должны быть определены после параметра `PHP$query` : `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Заготовка запроса пользователей определённого типа. * * @param \Illuminate\Database\Eloquent\Builder $query * @param mixed $type * @return \Illuminate\Database\Eloquent\Builder */ public function scopeOfType($query, $type) { return $query->where('type', $type); } }
А затем передайте их при вызове метода заготовки:
```
$users = App\User::ofType('admin')->get();
```
Иногда вам требуется определить заготовку, которая будет применяться для всех выполняемых в модели запросов. По сути так и работает «мягкое удаление» в Eloquent. Глобальные заготовки определяются с помощью комбинации типажей PHP и реализации Illuminate\Database\Eloquent\ScopeInterface.
Сначала определим типаж. В этом примере мы будем использовать встроенный в Laravel `PHPSoftDeletes` : `trait SoftDeletes {` /** * Загрузка типажа мягкого удаления для модели. * * @return void */ public static function bootSoftDeletes() { static::addGlobalScope(new SoftDeletingScope); } } Если в модели Eloquent используется типаж, содержащий соответствующий соглашению по названиям bootNameOfTrait метод, тогда этот метод типажа будет вызываться при загрузке модели Eloquent. Это даёт вам возможность зарегистрировать глобальную заготовку, или сделать ещё что-либо необходимое. Заготовка должна реализовывать `PHPScopeInterface` , который содержит два метода: `PHPapply()` и `PHPremove()` . Метод `PHPapply()` принимает объект конструктора запросов Illuminate\Database\Eloquent\Builder и модель, к которой он применяется, и отвечает за добавление любых дополнительных операторов where, которые необходимы заготовке. Метод `PHPremove()` также принимает объект Builder и модель, и отвечает за отмену действий, произведённых методом `PHPapply()` . Другими словами, `PHPremove()` должен удалить добавленные операторы where (или любые другие). Поэтому для нашей `PHPSoftDeletingScope` методы будут такими: `/**` * Применение заготовки к указанному конструктору запросов Eloquent. * * @param \Illuminate\Database\Eloquent\Builder $builder * @param \Illuminate\Database\Eloquent\Model $model * @return void */ public function apply(Builder $builder, Model $model) { $builder->whereNull($model->getQualifiedDeletedAtColumn()); $this->extend($builder); } /** * Удаление заготовки из указанного конструктора запросов Eloquent. * * @param \Illuminate\Database\Eloquent\Builder $builder * @param \Illuminate\Database\Eloquent\Model $model * @return void */ public function remove(Builder $builder, Model $model) { $column = $model->getQualifiedDeletedAtColumn(); $query = $builder->getQuery(); foreach ((array) $query->wheres as $key => $where) { // Если оператор where ограничивает мягкое удаление данных, мы удалим его из // запроса и сбросим ключи в операторах where. Это позволит разработчику // включить удалённую модель в отношения результирующего набора, который загружается "лениво". if ($this->isSoftDeleteConstraint($where, $column)) { unset($query->wheres[$key]); $query->wheres = array_values($query->wheres); } } }
## Отношения
Создание связи «один к одному»
Связь вида «один к одному» является очень простой. К примеру, модель `PHPUser` может иметь один `PHPPhone` . Мы можем определить такое отношение в Eloquent:
class User extends Model { public function phone() { return $this->hasOne('App\Phone'); } } Первый параметр, передаваемый `hasOne()`, — имя связанной модели. Как только отношение установлено, вы можете получить к нему доступ через динамические свойства Eloquent:
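Например, набросок чтения связанной модели через динамическое свойство:

```
$phone = User::find(1)->phone;
```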
Сгенерированный SQL имеет такой вид:
> select * from users where id = 1
> select * from phones where user_id = 1
Заметьте, что Eloquent считает, что поле в таблице называется по имени модели плюс _id. В данном случае предполагается, что это user_id. Если вы хотите перекрыть стандартное имя, передайте второй параметр методу `PHPhasOne()` . Кроме того вы можете передать в метод третий аргумент, чтобы указать, какие локальные столбцы следует использовать для объединения:
return $this->hasOne('App\Phone', 'foreign_key', 'local_key'); Для создания обратного отношения в модели `PHPPhone` используйте метод `PHPbelongsTo()` («принадлежит к»):
public function user() { return $this->belongsTo('App\User'); } } В примере выше Eloquent будет искать поле user_id в таблице phones. Если вы хотите назвать внешний ключ по другому, передайте это имя вторым параметром в метод `PHPbelongsTo()` :
public function user() { return $this->belongsTo('App\User', 'local_key'); } }
Кроме того, вы передаёте третий параметр, который определяет имя связанного столбца в родительской таблице:
public function user() { return $this->belongsTo('App\User', 'local_key', 'parent_key'); } }
Примером отношения «один ко многим» является статья в блоге, которая имеет «много» комментариев. Вы можете смоделировать это отношение таким образом:
public function comments() { return $this->hasMany('App\Comment'); } }
Теперь мы можем получить все комментарии с помощью динамического свойства:
```
$comments = Post::find(1)->comments;
```
Если вам нужно добавить ограничения на получаемые комментарии, можно вызвать метод `PHPcomments()` и продолжить добавлять условия:
```
$comments = Post::find(1)->comments()->where('title', '=', 'foo')->first();
```
И опять вы можете передать второй параметр в метод `PHPhasMany()` для перекрытия стандартного имени ключа. И как и для отношения «hasOne» также может быть указан локальный столбец:
return $this->hasMany('App\Comment', 'foreign_key', 'local_key');
Определение обратного отношения
Для определения обратного отношения используйте метод `PHPbelongsTo()` :
public function post() { return $this->belongsTo('App\Post'); } }
Отношения типа «многие ко многим» — более сложные, чем остальные виды отношений. Примером может служить пользователь, имеющий много ролей, где роли также относятся ко многим пользователям. Например, один пользователь может иметь роль admin. Нужны три таблицы для этой связи: users, roles и role_user. Название таблицы role_user происходит от упорядоченных по алфавиту имён связанных моделей, она должна иметь поля user_id и role_id.
Вы можете определить отношение «многие ко многим» через метод `PHPbelongsToMany()` :
public function roles() { return $this->belongsToMany('App\Role'); } } Теперь мы можем получить роли через модель `PHPUser` :
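Например, набросок чтения ролей через динамическое свойство:

```
$user = App\User::find(1);

foreach ($user->roles as $role) {
    //
}
```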
Вы можете передать второй параметр к методу `PHPbelongsToMany()` с указанием имени связующей (pivot) таблицы вместо стандартной:
```
return $this->belongsToMany('App\Role', 'user_roles');
```
Вы также можете перекрыть имена ключей по умолчанию:
```
return $this->belongsToMany('App\Role', 'user_roles', 'user_id', 'foo_id');
```
Конечно, вы можете определить и обратное отношение на модели `PHPRole` :
```
class Role extends Model {
```
public function users() { return $this->belongsToMany('App\User'); } }
Связь «ко многим через» обеспечивает удобный короткий путь для доступа к удалённым отношениям через промежуточные. Например, модель Country может иметь много Post через модель User. Таблицы для этих отношений будут выглядеть так:
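Примерная структура таблиц для такого отношения (набросок, имена столбцов соответствуют соглашениям Eloquent):

```
countries
    id - integer
    name - string

users
    id - integer
    country_id - integer
    name - string

posts
    id - integer
    user_id - integer
    title - string
```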
Несмотря на то, что таблица posts не содержит столбца country_id, отношение «hasManyThrough» позволит нам получить доступ к posts через country с помощью `PHP$country->posts` . Давайте определим отношения:
public function posts() { return $this->hasManyThrough('App\Post', 'App\User'); } }
Если вы хотите указать ключи отношений вручную, вы можете передать их в качестве третьего и четвертого аргументов метода:
public function posts() { return $this->hasManyThrough('App\Post', 'App\User', 'country_id', 'user_id'); } }
### Полиморфические отношения
Полиморфические отношения позволяют модели быть связанной с более, чем одной моделью. Например, может быть модель `PHPPhoto` , содержащая записи, принадлежащие к моделям `PHPStaff` (сотрудники) и `PHPOrder` . Мы можем создать такое отношение таким образом:
```
class Photo extends Model {
```
public function imageable() { return $this->morphTo(); } } class Staff extends Model { public function photos() { return $this->morphMany('App\Photo', 'imageable'); } } class Order extends Model { public function photos() { return $this->morphMany('App\Photo', 'imageable'); } }
Теперь мы можем получить фотографии и для сотрудника, и для заказа:
```
$staff = Staff::find(1);
```
foreach ($staff->photos as $photo) { // } Однако настоящая «магия» полиморфизма происходит при чтении связи на модели `PHPPhoto` :
```
$photo = Photo::find(1);
```
$imageable = $photo->imageable; Отношение `PHPimageable` модели `PHPPhoto` вернёт либо объект `PHPStaff` , либо объект `PHPOrder` в зависимости от типа модели, которой принадлежит фотография.
Структура таблиц полиморфической связи
Чтобы понять, как это работает, давайте изучим структуру БД для полиморфического отношения:
```
staff
    id - integer
    name - string

orders
    id - integer
    price - integer

photos
    id - integer
    path - string
    imageable_id - integer
    imageable_type - string
```
Главные поля, на которые нужно обратить внимание: imageable_id и imageable_type в таблице photos. Первое содержит ID владельца, в нашем случае — заказа или персонала, а второе — имя класса-модели владельца. Это позволяет ORM определить, какой класс модели должен быть возвращён при использовании отношения `PHPimageable` .
### Полиморфические связи многие ко многим
Структура таблиц полиморфической связи многие ко многим
В дополнение к традиционным полиморфическим связям вы можете также задать полиморфические связи многие ко многим. Например, модели блогов Post и Video могут разделять полиморфическую связь с моделью Tag. Во-первых, давайте рассмотрим структуру таблиц:
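Примерная структура таблиц (набросок):

```
posts
    id - integer
    name - string

videos
    id - integer
    name - string

tags
    id - integer
    name - string

taggables
    tag_id - integer
    taggable_id - integer
    taggable_type - string
```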
Далее, мы готовы к установке связи с моделью. Обе модели Post и Video будут иметь связь «morphToMany» через метод `PHPtags` :
public function tags() { return $this->morphToMany('App\Tag', 'taggable'); } }
Модель Tag может определить метод для каждого из своих отношений:
```
class Tag extends Model {
```
public function posts() { return $this->morphedByMany('App\Post', 'taggable'); } public function videos() { return $this->morphedByMany('App\Video', 'taggable'); } }
Проверка связей при выборке
При чтении отношений модели вам может быть нужно ограничить результаты в зависимости от существования связи. Например, вы хотите получить все статьи в блоге, имеющие хотя бы один комментарий. Для этого можно использовать метод `PHPhas()` :
```
$posts = Post::has('comments')->get();
```
Вы также можете указать оператор и число:
```
$posts = Post::has('comments', '>=', 3)->get();
```
Можно конструировать вложенные операторы `PHPhas` с помощью точечной нотации:
```
$posts = Post::has('comments.votes')->get();
```
Если вам нужно ещё больше возможностей, вы можете использовать методы `PHPwhereHas` и `PHPorWhereHas` , чтобы поместить условия "where" в ваши запросы has:
```
$posts = Post::whereHas('comments', function($q)
```
{ $q->where('content', 'like', 'foo%'); })->get();
### Динамические свойства
Eloquent позволяет вам читать отношения через динамические свойства. Eloquent автоматически определит используемую связь и даже вызовет `PHPget()` для связей «один ко многим» и `PHPfirst()` — для связей «один к одному». Эта связь будет доступна через динамическое свойство с тем же именем. К примеру, для следующей модели `PHP$phone` :
public function user() { return $this->belongsTo('App\User'); } } $phone = Phone::find(1);
Вместо того, чтобы получить e-mail пользователя так:
```
echo $phone->user()->first()->email;
```
...вызов может быть сокращён до такого:
```
echo $phone->user->email;
```
Внимание: Отношения, которые возвращают много результатов, вернут экземпляр класса
```
Illuminate\Database\Eloquent\Collection
```
### Активная загрузка
Активная загрузка (eager loading) призвана устранить проблему запросов N + 1. Например, представьте, что у нас есть модель `PHPBook` со связью к модели `PHPAuthor` . Отношение определено как:
```
class Book extends Model {
```
public function author() { return $this->belongsTo('App\Author'); } }
Теперь предположим, у нас есть такой код:
```
foreach (Book::all() as $book)
```
{ echo $book->author->name; }
Цикл выполнит один запрос для получения всех книг в таблице, а затем будет выполнять по одному запросу на каждую книгу для получения автора. Таким образом, если у нас 25 книг, то потребуется 26 запросов.
К счастью, мы можем использовать активную загрузку для кардинального уменьшения числа запросов. Отношение будет активно загружено, если оно было указано при вызове метода `PHPwith()` :
```
foreach (Book::with('author')->get() as $book)
```
{ echo $book->author->name; }
В цикле выше будут выполнены всего два запроса:
> select * from books
> select * from authors where id in (1, 2, 3, 4, 5, ...)
Разумное использование активной загрузки поможет сильно повысить производительность вашего приложения.
Конечно, вы можете загрузить несколько отношений одновременно:
```
$books = Book::with('author', 'publisher')->get();
```
Вы даже можете загрузить вложенные отношения:
```
$books = Book::with('author.contacts')->get();
```
В примере выше связь `PHPauthor` будет активно загружена вместе со связью `PHPcontacts` модели автора.
### Ограничения активной загрузки
Иногда вам может быть нужно не только активно загрузить отношение, но также указать условие для его загрузки:
$users = App\User::with(['posts' => function ($query) { $query->where('title', 'like', '%первое%'); }])->get();
В этом примере мы загружаем сообщения пользователя, но только те, заголовок которых содержит подстроку «первое».
Конечно, функции-замыкания активной загрузки не ограничиваются только условиями. Вы также можете применить упорядочивание:
$users = App\User::with(['posts' => function ($query) { $query->orderBy('created_at', 'desc'); }])->get();
### Ленивая активная загрузка
Можно активно загрузить связанные модели напрямую из уже созданного набора объектов моделей. Это может быть полезно при определении во время выполнения, требуется ли такая загрузка или нет, или в комбинации с кэшированием.
```
$books = Book::all();
```
$books->load('author', 'publisher');
Вы можете передать замыкание, чтобы задать ограничения для запроса:
```
$books->load(['author' => function($query)
```
{ $query->orderBy('published_date', 'asc'); }]);
### Вставка связанных моделей
Часто вам нужно будет добавить связанную модель. Например, вы можете создать новый комментарий к сообщению. Вместо явного указания значения для поля post_id вы можете вставить модель напрямую через её родителя — модели `PHPPost` :
$post = Post::find(1); $comment = $post->comments()->save($comment);
В этом примере поле post_id вставленного комментария автоматически получит значение ID своей статьи.
Сохранить несколько связанных моделей можно так:
`$comments = [` new Comment(['message' => 'A new comment.']), new Comment(['message' => 'Another comment.']), new Comment(['message' => 'The latest comment.']) ]; $post = Post::find(1); $post->comments()->saveMany($comments);
### Связывание моделей (belongs to)
При обновлении связей `PHPbelongsTo` («принадлежит к») вы можете использовать метод `PHPassociate()` . Он установит внешний ключ на дочерней модели:
```
$account = Account::find(10);
```
$user->account()->associate($account); $user->save();
### Вставка связанных моделей (многие ко многим)
Вы также можете вставлять связанные модели при работе с отношениями многие ко многим. Продолжим использовать наши модели `PHPUser` и `PHPRole` в качестве примеров. Вы можете легко привязать новые роли к пользователю методом `PHPattach()` .
Связывание моделей «многие ко многим»
$user = User::find(1); $user->roles()->attach(1);
Вы также можете передать массив атрибутов, которые должны быть сохранены в связующей (pivot) таблице для этого отношения:
```
$user->roles()->attach(1, ['expires' => $expires]);
```
Конечно, существует противоположность `PHPattach()` — `PHPdetach()` :
```
$user->roles()->detach(1);
```
Оба метода `PHPattach()` и `PHPdetach()` также принимают в качестве параметров массивы ID:
$user->roles()->detach([1, 2, 3]); $user->roles()->attach([1 => ['attribute1' => 'value1'], 2, 3]); Использование `PHPsync()` для привязки моделей «многие ко многим» Вы также можете использовать метод `PHPsync()` для привязки связанных моделей. Этот метод принимает массив ID, которые должны быть сохранены в связующей таблице. Когда операция завершится, только переданные ID будут существовать в промежуточной таблице для данной модели:
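Например, набросок синхронизации списка ролей по их ID:

```
$user->roles()->sync([1, 2, 3]);
```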
Добавление данных для связующей таблицы при синхронизации
Вы также можете связать другие связующие таблицы с нужными ID:
```
$user->roles()->sync([1 => ['expires' => true]]);
```
Иногда вам может быть нужно создать новую связанную модель и добавить её одной командой. Для этого вы можете использовать метод `PHPsave()` :
```
$role = new Role(['name' => 'Editor']);
```
User::find(1)->roles()->save($role); В этом примере новая модель `PHPRole` будет сохранена и привязана к модели `PHPUser` . Вы можете также передать массив атрибутов для помещения в связующую таблицу:
```
User::find(1)->roles()->save($role, ['expires' => $expires]);
```
### Обновление времени владельца
Когда модель принадлежит другой посредством `PHPbelongsTo()` — например, `PHPComment` , принадлежащий `PHPPost` — иногда нужно обновить время изменения владельца при обновлении связанной модели. Например, при изменении модели `PHPComment` вы можете обновлять поле updated_at её модели `PHPPost` . Eloquent делает этот процесс простым — просто добавьте свойство `PHP$touches` , содержащее имена всех отношений с моделями-потомками:
protected $touches = ['post']; public function post() { return $this->belongsTo('App\Post'); } } Теперь при обновлении `PHPComment` владелец `PHPPost` также обновит своё поле updated_at:
```
$comment = Comment::find(1);
```
$comment->text = 'Изменение этого комментария!'; $comment->save();
### Работа со связующими таблицами
Как вы уже узнали, работа отношения многие ко многим требует наличия промежуточной таблицы. Например, предположим, что наш объект `PHPUser` имеет множество связанных объектов `PHPRole` . После чтения отношения мы можем прочитать таблицу `PHPpivot` на обеих моделях:
foreach ($user->roles as $role) { echo $role->pivot->created_at; } Заметьте, что каждая модель `PHPRole` автоматически получила атрибут `PHPpivot` . Этот атрибут содержит модель, представляющую промежуточную таблицу, и она может быть использована как любая другая модель Eloquent. По умолчанию, только ключи будут представлены в объекте `PHPpivot` . Если ваша связующая таблица содержит другие поля, вы можете указать их при создании отношения:
```
return $this->belongsToMany('App\Role')->withPivot('foo', 'bar');
```
Теперь атрибуты foo и bar будут также доступны на объекте `PHPpivot` модели `PHPRole` . Если вы хотите автоматически поддерживать поля created_at и updated_at актуальными, используйте метод `PHPwithTimestamps()` при создании отношения:
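Например, набросок определения отношения с автоматическими отметками времени в связующей таблице:

```
return $this->belongsToMany('App\Role')->withTimestamps();
```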
Удаление всех связующих записей
Для удаления всех записей в связующей таблице можно использовать метод `PHPdetach()` :
```
User::find(1)->roles()->detach();
```
Заметьте, что эта операция не удаляет записи из таблицы roles, а только из связующей таблицы.
Обновление записи в связующей таблице
Иногда необходимо обновить связующую таблицу, не отвязывая её. Для обновления записи в связующей таблице на месте используйте метод `updateExistingPivot()`:
```
User::find(1)->roles()->updateExistingPivot($roleId, $attributes);
```
Определение собственной связующей модели
Laravel также позволяет определять собственную связующую модель. Для этого сначала создайте свой класс «основной» модели, который наследует `PHPEloquent` . В остальных ваших моделях Eloquent наследуйте эту базовую модель вместо базового `PHPEloquent` по умолчанию. В вашу базовую модель добавьте следующую функцию, которая возвращает экземпляр вашей собственной связующей модели:
```
public function newPivot(Model $parent, array $attributes, $table, $exists)
```
{ return new YourCustomPivot($parent, $attributes, $table, $exists); }
## Коллекции
В документации Laravel 5.1+ данный раздел вынесен в отдельную статью — Коллекции. — прим. пер.
Все методы Eloquent, возвращающие набор моделей — либо через `PHPget()` , либо через отношения — возвращают объект-коллекцию. Этот объект реализует стандартный интерфейс PHP `PHPIteratorAggregate` , что позволяет ему быть использованным в циклах как массив. Однако этот объект также имеет набор других полезных методов для работы с результатом запроса.
Проверка на существование ключа в коллекции
Например, мы можем выяснить, содержит ли результат запись с определённым первичным ключом, методом `PHPcontains()` :
$roles = User::find(1)->roles; if ($roles->contains(2)) { // }
Коллекции также могут быть преобразованы в массив или строку JSON:
```
$roles = User::find(1)->roles->toArray();
```
$roles = User::find(1)->roles->toJson();
Если коллекция преобразуется в строку, результатом будет JSON-выражение:
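Например, набросок приведения коллекции к строке:

```
$roles = (string) User::find(1)->roles;
```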
Коллекции Eloquent имеют несколько полезных методов для прохода и фильтрации содержащихся в них элементов:
```
$roles = $user->roles->each(function($role)
```
{ // });
Фильтрация элементов коллекции
При фильтрации коллекций передаваемая функция будет использована как функция обратного вызова для array_filter.
```
$users = $users->filter(function($user)
```
{ return $user->isAdmin(); }); Внимание: При фильтрации коллекций и конвертации их в JSON попробуйте сначала вызвать функцию `PHPvalues` для сброса ключей массива.
Применение функции обратного вызова к каждому объекту коллекции
$roles->each(function($role) { // });
Сортировка коллекции по значению
```
$roles = $roles->sortBy(function($role)
```
{ return $role->created_at; }); $roles = $roles->sortByDesc(function($role) { return $role->created_at; });
Сортировка коллекции по значению
```
$roles = $roles->sortBy('created_at');
```
$roles = $roles->sortByDesc('created_at');
Использование произвольного класса коллекции
Иногда вам может быть нужно получить собственный объект `PHPCollection` со своими методами. Вы можете указать его при определении модели Eloquent, перекрыв метод `PHPnewCollection()` :
public function newCollection(array $models = []) { return new CustomCollection($models); } }
В документации Laravel 5.1+ данный раздел вынесен в отдельную статью — Преобразователи. — прим. пер.
Eloquent содержит мощный механизм для преобразования атрибутов модели при их чтении и записи. Просто объявите в её классе метод `PHPgetFooAttribute()` . Помните, что имя метода должно следовать соглашению camelCase, даже если поля таблицы используют соглашение snake-case (он же — «<NAME>», с подчёркиваниями — прим. пер.):
public function getFirstNameAttribute($value) { return ucfirst($value); } }
В примере выше поле first_name теперь имеет читателя (accessor). Заметьте, что оригинальное значение атрибута передаётся методу в виде параметра.
Преобразователи (mutators) объявляются подобным образом:
public function setFirstNameAttribute($value) { $this->attributes['first_name'] = strtolower($value); } }
### Преобразователи дат
По умолчанию Eloquent преобразует поля created_at и updated_at в объекты Carbon, которые предоставляют множество полезных методов, расширяя стандартный класс PHP DateTime.
Вы можете указать, какие поля будут автоматически преобразованы, и даже полностью отключить преобразование, перекрыв метод `PHPgetDates()` класса модели:
{ return ['created_at']; } Когда поле является датой, вы можете установить его в число-оттиск времени формата Unix (timestamp), строку даты в формате Y-m-d, строку даты-времени и, конечно, экземпляр объекта `PHPDateTime` или `PHPCarbon` . Чтобы полностью отключить преобразование дат, просто верните пустой массив из метода `PHPgetDates()` :
{ return []; }
### Изменение атрибутов
Если у вас есть несколько атрибутов, которые вы хотите всегда конвертировать в другой формат данных, вы можете добавить атрибут в свойство `PHPcasts` вашей модели. Иначе вам нужно будет определять преобразователь для каждого из атрибутов, а это может отнять много времени. `/**` * Атрибуты, которые нужно преобразовать в нативный тип. * * @var array */ protected $casts = [ 'is_admin' => 'boolean', ]; Теперь при каждом обращении атрибут `PHPis_admin` будет преобразовываться в `PHPboolean` , даже если базовое значение сохранено в базе данных как `PHPinteger` . Другие поддерживаемые типы для преобразования: `PHPinteger` , `PHPreal` , `PHPfloat` , `PHPdouble` , `PHPstring` , `PHPboolean` , `PHPobject` и `PHParray` . Преобразование типа `PHParray` особенно полезно для работы с полями, которые сохранены как сериализованный JSON. Например, если у вашей базы данных есть поле типа TEXT, которое содержит сериализованный JSON, добавление преобразования в тип `PHParray` к атрибуту автоматически десериализует атрибут в массив PHP при обращении к нему через модель Eloquent: `/**` * Атрибуты, которые нужно преобразовать в нативный тип. * * @var array */ protected $casts = [ 'options' => 'array', ];
Теперь, когда вы используете модель Eloquent:
// $options - массив... $options = $user->options; // options автоматически сериализуются обратно в JSON... $user->options = ['foo' => 'bar'];
## События моделей
Модели Eloquent инициируют несколько событий, что позволяет вам добавить к ним свои обработчики с помощью следующих методов: `PHPcreating()` , `PHPcreated()` , `PHPupdating()` , `PHPupdated()` , `PHPsaving()` , `PHPsaved()` , `PHPdeleting()` , `PHPdeleted()` , `PHPrestoring()` , `PHPrestored()` . События позволяют вам легко выполнять код при каждом сохранении или изменении класса конкретной модели в БД. Когда новая модель сохраняется первый раз, возникают события creating и created. Если модель уже существовала на момент вызова метода `PHPsave()` , вызываются события updating и updated. В обоих случаях также возникнут события saving и saved. Например, давайте определим слушателя событий Eloquent в сервис-провайдере. В нашем слушателе событий мы будем вызывать метод `PHPisValid()` для данной модели, и возвращать `PHPfalse` , если она не прошла проверку. Возврат `PHPfalse` из слушателя событий Eloquent отменит операции save/update: `<?php` namespace App\Providers; use App\User; use Illuminate\Support\ServiceProvider; class AppServiceProvider extends ServiceProvider { /** * Загрузка любых сервисов приложения. * * @return void */ public function boot() { User::creating(function ($user) { return $user->isValid(); }); } /** * Регистрация сервис-провайдера. * * @return void */ public function register() { // } }
## Наблюдатели моделей
Если вы прослушиваете много событий для определённой модели, вы можете использовать наблюдателей (observer) для объединения всех слушателей в единый класс. В классах наблюдателей названия методов отражают те события Eloquent, которые вы хотите прослушивать. Каждый такой метод получает модель в качестве единственного аргумента. В Laravel нет стандартного каталога для наблюдателей, поэтому вы можете создать любой каталог для хранения классов ваших наблюдателей:
`<?php` namespace App\Observers; use App\User; class UserObserver { /** * Прослушивание события создания пользователя. * * @param User $user * @return void */ public function created(User $user) { // } /** * Прослушивание события удаления пользователя. * * @param User $user * @return void */ public function deleting(User $user) { // } } Для регистрации наблюдателя используйте метод `PHPobserve()` на наблюдаемой модели. Вы можете зарегистрировать наблюдателей в методе `PHPboot()` одного из ваших сервис-провайдеров. В этом примере мы зарегистрируем наблюдателя в AppServiceProvider: `<?php` namespace App\Providers; use App\User; use App\Observers\UserObserver; use Illuminate\Support\ServiceProvider; class AppServiceProvider extends ServiceProvider { /** * Загрузка любых сервисов приложения * * @return void */ public function boot() { User::observe(UserObserver::class); } /** * Регистрация сервис-провайдера. * * @return void */ public function register() { // } }
Для того, чтобы держать все обработчики событий моделей вместе, вы можете зарегистрировать наблюдателя (observer). Объект-наблюдатель может содержать методы, соответствующие различным событиям моделей. Например, методы `PHPcreating()` , `PHPupdating()` и `PHPsaving()` , а также любые другие методы, соответствующие именам событий.
К примеру, класс наблюдателя может выглядеть так:
`class UserObserver {` public function saving($model) { // } public function saved($model) { // } } Вы можете зарегистрировать его используя метод `PHPobserve()` :
```
User::observe(new UserObserver);
```
## Генерация URL модели
Когда вы передаёте модель в методы `PHProute()` и `PHPaction()` , её первичный ключ вставляется в сгенерированный URI. Например:
```
Route::get('user/{user}', 'UserController@show');
```
action('UserController@show', [$user]); В этом примере свойство `PHP$user->id` будет подставлено вместо строки-переменной `PHP{user}` в сгенерированный URL. Но если вы хотите использовать другое свойств вместо ID, переопределите метод `PHPgetRouteKey()` в своей модели:
```
public function getRouteKey()
```
{ return $this->slug; }
## Преобразование в массивы и JSON
Преобразование модели в массив
При создании JSON API вам часто потребуется преобразовывать модели и отношения к массивам или выражениям JSON. Eloquent содержит методы для выполнения этих задач. Для преобразования модели или загруженного отношения в массив можно использовать метод `PHPtoArray()` :
```
$user = User::with('roles')->first();
```
return $user->toArray();
Заметьте, что целая коллекция моделей также может быть преобразована в массив:
```
return User::all()->toArray();
```
Для преобразования модели к JSON вы можете использовать метод `PHPtoJson()` :
```
return User::find(1)->toJson();
```
Обратите внимание, что если модель преобразуется к строке, результатом также будет JSON, — это значит, что вы можете возвращать объекты Eloquent напрямую из ваших маршрутов!
```
Route::get('users', function()
```
{ return User::all(); });
Скрытие атрибутов при преобразовании в массив или JSON
Иногда вам может быть нужно ограничить список атрибутов, включённых в преобразованный массив или JSON-строку — например, скрыть пароли. Для этого определите в классе модели свойство `PHPhidden` :
protected $hidden = ['password']; } Внимание: При скрытии отношений используйте имя `PHPmethod` отношения, а не имя для динамического доступа. Вы также можете использовать атрибут `PHP$visible` для указания разрешённых полей:
```
protected $visible = ['first_name', 'last_name'];
```
Иногда вам может быть нужно добавить поле, которое не существует в таблице. Для этого просто определите для него читателя:
```
public function getIsAdminAttribute()
```
{ return $this->attributes['admin'] == 'yes'; } Когда вы создали читателя, просто добавьте значение к свойству-массиву `PHPappends` класса модели:
```
protected $appends = ['is_admin'];
```
Как только атрибут был добавлен к списку `PHPappends` , он будет включён в массивы и выражения JSON, образованные от этой модели. Атрибуты в массиве `PHPappends` соответствуют настройкам модели `PHPvisible` и `PHPhidden` .
# CSRF-защита
Laravel позволяет легко защитить ваше приложение от атак с подделкой межсайтовых запросов (CSRF). Подделка межсайтовых запросов — тип атаки на сайты, при котором несанкционированные команды выполняются от имени аутентифицированного пользователя.
Laravel автоматически генерирует CSRF-"токен" для каждой активной пользовательской сессии в приложении. Этот токен используется для проверки того, что именно авторизованный пользователь делает запрос в приложение.
При определении каждой HTML-формы вы должны включать в неё скрытое поле CSRF-токена, чтобы посредник CSRF-защиты мог проверить запрос. Вы можете использовать вспомогательную функцию `PHPcsrf_field()` для генерирования поля токена:
{{ csrf_field() }} ... </form> Посредник `PHPVerifyCsrfToken` , входящий в группу посредников web, автоматически проверяет совпадение токена в данных запроса с токеном, хранящимся в сессии.
## Исключение URI из CSRF-защиты
Иногда бывает необходимо исключить набор URI из-под CSRF-защиты. Например, если вы используете Stripe для обработки платежей и применяете их систему веб-хуков (hook), то вам надо исключить маршрут вашего обработчика веб-хуков Stripe из-под CSRF-защиты, так как Stripe не будет знать, какой CSRF-токен надо послать в ваш маршрут.
Обычно такие маршруты помещаются вне группы посредников web, которую `RouteServiceProvider`
применяет ко всем маршрутам в файле routes/web.php. Но вы также можете исключить маршруты, добавив их URI в свойство `PHP$except` посредника `PHPVerifyCsrfToken` : `<?php` namespace App\Http\Middleware; use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as BaseVerifier; class VerifyCsrfToken extends BaseVerifier { /** * URI, которые надо исключить из CSRF-проверки. * * @var array */ protected $except = [ 'stripe/*', ]; }
## X-CSRF-TOKEN
Помимо проверки CSRF-токена как POST-параметра, посредник `PHPVerifyCsrfToken` будет также проверять заголовок запроса X-CSRF-TOKEN. Например, вы можете хранить токен в HTML-теге meta:
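Например, набросок такого тега в шаблоне Blade:

```
<meta name="csrf-token" content="{{ csrf_token() }}">
```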
После создания тега meta вы можете указать библиотеке, такой как jQuery, автоматически добавлять токен в заголовки всех запросов. Это обеспечивает простую, удобную CSRF-защиту для ваших приложений на базе AJAX:
`$.ajaxSetup({` headers: { 'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content') } });
## X-XSRF-TOKEN
Laravel хранит текущий CSRF-токен в cookie XSRF-TOKEN, которую включается в каждый отклик, генерируемый фреймворком. Вы можете использовать значение cookie, чтобы задать заголовок запроса X-XSRF-TOKEN.
Этот cookie в основном посылается для удобства, потому что некоторые JavaScript-фреймворки, такие как Angular, автоматически помещают его значение в заголовок X-XSRF-TOKEN.
# Переадресации
## Создание переадресаций
Отклики для переадресации — это экземпляры класса
```
Illuminate\Http\RedirectResponse
```
, они содержат соответствующие заголовки, необходимые для переадресации пользователя на другой URL. Есть несколько способов создания экземпляров `PHPRedirectResponse` . Простейший способ — использовать глобальную вспомогательную функцию `PHPredirect()` :
return redirect('home/dashboard'); }); Иногда необходимо перенаправить пользователя в предыдущее место, например, когда в отправленной форме обнаружены ошибки. Вы можете сделать это с помощью глобальной вспомогательной функции `PHPback()` . Поскольку для этого используются сессии, убедитесь в том, что вызывающий функцию `PHPback()` маршрут использует группу посредников web или использует всех посредников сессий:
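Например, примерный набросок (маршрут и логика условные):

```
Route::post('user/profile', function () {
    // Проверка запроса...

    return back()->withInput();
});
```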
## Переадресация на именованные маршруты
При вызове вспомогательной функции `PHPredirect()` без параметров возвращается экземпляр
```
Illuminate\Routing\Redirector
```
, позволяя вам вызывать любой метод на экземпляре `PHPRedirector` . Например, чтобы создать `PHPRedirectResponse` на именованный маршрут, вы можете использовать метод `PHProute()` :
return redirect()->route('profile', ['id' => 1]);
Получение параметров из моделей Eloquent
Если вы делаете переадресацию на маршрут с параметром ID, который был получен из модели Eloquent, то вы можете просто передать саму модель. ID будет извлечён автоматически:
return redirect()->route('profile', [$user]); Если вы хотите изменить значение, которое помещается в параметр маршрута, вам надо переопределить метод `PHPgetRouteKey()` в вашей модели Eloquent: `/**` * Получить значение ключа маршрута модели. * * @return mixed */ public function getRouteKey() { return $this->slug; }
## Переадресация на действия контроллера
Также вы можете создать переадресацию на действия контроллера. Для этого передайте контроллер и имя действия в метод `PHPaction()` . Обратите внимание: вам не надо указывать полное пространство имён контроллера, потому что
`RouteServiceProvider` задаст его сам:
```
return redirect()->action('HomeController@index');
```
Если маршруту вашего контроллера нужны параметры, то вы можете передать их вторым аргументом метода `PHPaction()` :
'UserController@profile', ['id' => 1] );
## Переадресация с данными сессии
Обычно переадресация на новый URL происходит одновременно с передачей данных в сессию. Обычно это делается после успешного выполнения действия, когда вы передаёте в сессию сообщение об успехе. Для удобства вы можете создать экземпляр `PHPRedirectResponse` и передать данные в сессию в одной цепочке вызовов:
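Например, набросок (маршрут и текст сообщения условные):

```
Route::post('user/profile', function () {
    // Обновление профиля пользователя...

    return redirect('dashboard')->with('status', 'Profile updated!');
});
```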
# Сброс пароля
Хотите быстро приступить к работе? Просто запустите `php artisan make:auth`
в новом приложении Laravel и перейдите в свой браузер по адресу http://your-app.dev/register или по любому другому URL, который назначен вашему приложению. Эта единственная команда позаботится о строительстве всей вашей системы аутентификации, включая сброс паролей!
Большинство веб-приложений предоставляют пользователям возможность сбросить забытые пароли. Вместо того, чтобы заставлять вас повторять это в каждом приложении, Laravel предлагает удобные методы для отправки напоминаний о пароле и сброса пароля.
Прежде чем использовать функции сброса пароля Laravel, ваш пользователь должен использовать типаж
```
Illuminate\Notifications\Notifiable
```
## О базе данных
Для начала убедитесь, что ваша модель `PHPApp\User` реализует контракт
```
Illuminate\Contracts\Auth\CanResetPassword
```
. Конечно, модель `PHPApp\User` , включенная в инфраструктуру, уже реализует этот интерфейс и использует типаж
```
Illuminate\Auth\Passwords\CanResetPassword
```
, чтобы включить методы, необходимые для реализации интерфейса.
### Создание таблицы переназначения маркеров
Затем необходимо создать таблицу для хранения токенов сброса пароля. Перенос для этой таблицы входит в комплект поставки Laravel и находится в каталоге database/migrations. Итак, все, что вам нужно сделать, это запустить миграцию базы данных:
> php artisan migrate
Laravel включает классы
```
Auth\ForgotPasswordController
```
```
Auth\ResetPasswordController
```
, которые содержат логику, необходимую для отправки по электронной почте паролей сброса пароля и сброса пользовательских паролей. Все маршруты, необходимые для выполнения сброса пароля, могут быть сгенерированы командой Artisan make:auth: > shphp artisan make:auth
## Views

Again, Laravel will generate all of the necessary views for password reset when the `make:auth` command is executed. These views are placed in resources/views/auth/passwords. You are free to customize them as needed for your application.
## After Resetting Passwords

Once you have defined the routes and views to reset your users' passwords, you may simply access the route in your browser at /password/reset. The `ForgotPasswordController` included with the framework already contains the logic to send the password reset link e-mails, while the `ResetPasswordController` contains the logic to reset user passwords. After a password is reset, the user will automatically be logged into the application and redirected to /home. You can customize the post-reset redirect location by defining a `redirectTo` property on the `ResetPasswordController`:
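For example (the destination path is whatever your application needs):

```
protected $redirectTo = '/dashboard';
```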
By default, password reset tokens expire after one hour. You may change this via the password reset `expire` option in your config/auth.php file.
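A rough sketch of the relevant section of config/auth.php (the provider and table names depend on your application):

```
'passwords' => [
    'users' => [
        'provider' => 'users',
        'table' => 'password_resets',
        'expire' => 60,
    ],
],
```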
### Customizing The Authentication Guard

In your `auth.php` configuration file, you may configure multiple "guards", which may be used to define authentication behavior for multiple user tables. You can customize the included `ResetPasswordController` to use the guard of your choice by overriding the `guard()` method on the controller. This method should return a guard instance:

```
protected function guard()
{
    return Auth::guard('guard-name');
}
```
### Customizing The Password Broker

In your `auth.php` configuration file, you may configure multiple password "brokers", which may be used to reset passwords on multiple user tables. You can customize the included controllers to use the broker of your choice by overriding the `broker()` method:

```
use Illuminate\Support\Facades\Password;

/**
 * Get the broker to be used during password reset.
 *
 * @return PasswordBroker
 */
protected function broker()
{
    return Password::broker('name');
}
```
### Reset Email Customization

You may easily modify the notification class used to send the password reset link to the user. To get started, override the `sendPasswordResetNotification()` method on your `User` model. Within this method, you may send the notification using any notification class you choose. The password reset `$token` is the first argument received by the method:

```
/**
 * Send the password reset notification.
 *
 * @param  string  $token
 * @return void
 */
public function sendPasswordResetNotification($token)
{
    $this->notify(new ResetPasswordNotification($token));
}
```
# Laravel 4.x / 5.x Release Notes

## Support Policy

For LTS releases, such as Laravel 5.1, bug fixes are provided for 2 years and security fixes for 3 years. These releases provide the longest window of support. For general releases, bug fixes are provided for 6 months and security fixes for 1 year.

## Laravel 5.3

Laravel 5.3 continues the improvements made in Laravel 5.2 by adding a driver-based notification system, robust realtime support via Laravel Echo, painless OAuth2 server setup via Laravel Passport, full-text model searching via Laravel Scout, Webpack support in Laravel Elixir, "mailable" objects, explicit separation of web and api routes, closure-based console commands, convenient helpers for storing uploaded files, support for POPOs and single-action controllers, improved default frontend scaffolding, and much more.
Laravel notifications provide a simple, expressive API for sending notifications across a variety of delivery channels such as email, Slack, SMS, and more. For example, you may define a notification that an invoice has been paid and deliver it via email and SMS. Then, you may send the notification using a single, simple method:
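A sketch of what that call looks like (the `InvoicePaid` notification class is an illustrative assumption matching the invoice example above):

```
$user->notify(new InvoicePaid($invoice));
```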
A huge variety of community-written drivers have already been created for notifications, including support for iOS and Android notifications. To learn more about notifications, check out the full documentation.

### WebSockets / Event Broadcasting

Event broadcasting existed in previous versions of Laravel, but Laravel 5.3 improves it significantly by adding channel-level authentication for private and presence WebSocket channels:

```
/*
 * Authenticate the channel subscription...
 */
Broadcast::channel('orders.*', function ($user, $orderId) {
    return $user->placedOrder($orderId);
});
```
Laravel Echo, a JavaScript package installable via NPM, has also been released. It provides a simple, beautiful API for subscribing to channels and listening for your server-side events in your client-side JavaScript application. Echo includes support for Pusher and Socket.io:

```
Echo.channel('orders.' + orderId)
    .listen('ShippingStatusUpdated', (e) => {
        console.log(e.description);
    });
```
In addition to subscribing to traditional channels, Laravel Echo also makes it a breeze to subscribe to presence channels, which provide information about who is listening on a given channel:

```
Echo.join('chat.' + roomId)
    .here((users) => {
        //
    })
    .joining((user) => {
        console.log(user.name);
    })
    .leaving((user) => {
        console.log(user.name);
    });
```
To learn more about Echo and event broadcasting, check out the full documentation.

### Laravel Passport (OAuth2 Server)

Laravel 5.3 makes API authentication a breeze using Laravel Passport, which provides a full OAuth2 server implementation for your application in a matter of minutes. Passport is built on top of the League OAuth2 server created by Alex Bilbie.

Passport makes it painless to issue access tokens via OAuth2 authorization codes. You may also allow your users to create "personal access tokens" via your web UI. To get you started quickly, Passport includes Vue components that may serve as a starting point for your OAuth2 dashboard, allowing users to create clients, revoke access tokens, and more:

```
<passport-authorized-clients></passport-authorized-clients>
<passport-personal-access-tokens></passport-personal-access-tokens>
```

If you do not want to use the Vue components, you are free to provide your own frontend dashboard for managing clients and access tokens. Passport exposes a simple JSON API that you may use with any JavaScript framework of your choice.

Of course, Passport also makes it easy to define access token scopes that may be requested by applications consuming your API:
```
Passport::tokensCan([
    'place-orders' => 'Place new orders',
    'check-status' => 'Check order status',
]);
```

In addition, Passport includes a helpful middleware for verifying that a request authenticated with an access token contains the required token scopes:

```
Route::get('/orders/{order}/status', function (Order $order) {
    // Access token has the "check-status" scope...
})->middleware('scope:check-status');
```

Finally, Passport supports consuming your own API from your JavaScript application without worrying about passing access tokens around. Passport achieves this via encrypted JWT cookies and synchronized CSRF tokens, allowing you to focus on what matters: your application. To learn more about Passport, check out the full documentation.
### Search (Laravel Scout)

Laravel Scout provides a simple, driver-based solution for adding full-text search to your Eloquent models. Using model observers, Scout automatically keeps your search indexes in sync with your Eloquent records. Currently, Scout ships with an Algolia driver; however, writing custom drivers is simple and you are free to extend Scout with your own search implementation.

Making models searchable is as simple as adding the Searchable trait to the model:

```
<?php

namespace App;

use Laravel\Scout\Searchable;
use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    use Searchable;
}
```

Once the trait has been added to your model, its information will be kept in sync with your search indexes whenever the model is saved:

```
$order = new Order;

// ...

$order->save();
```

Once your models have been indexed, it's a breeze to perform full-text searches across all of your models. You may even paginate your search results:

```
return Order::search('Star Trek')->get();

return Order::search('Star Trek')->where('user_id', 1)->paginate();
```

Of course, Scout has many more features which are covered in the full documentation.
### Mailable Objects

Laravel 5.3 supports mailable objects. These objects allow you to represent your email messages as simple objects instead of customizing mail messages within closures. For example, you may define a simple mailable object for a "welcome" email:

```
class WelcomeMessage extends Mailable
{
    use Queueable, SerializesModels;

    /**
     * Build the message.
     *
     * @return $this
     */
    public function build()
    {
        return $this->view('emails.welcome');
    }
}
```

Once the mailable object has been defined, you can send it to a user using a simple, expressive API. Mailable objects make it easy to discover the intent of a message while scanning your code:
```
Mail::to($user)->send(new WelcomeMessage);
```
Of course, you may also mark a mailable object as "queueable" so that it will be sent in the background by your queue workers:

```
class WelcomeMessage extends Mailable implements ShouldQueue
{
    //
}
```

To learn more about mailable objects, check out the email documentation.

There is a free video tutorial on this feature available on Laracasts.

One of the most common file storage scenarios in web applications is storing user uploaded files such as profile pictures, photos, and documents. Laravel 5.3 makes it very easy to store uploaded files using the new `store()` method on an uploaded file instance. Simply call the `store()` method with the path at which you wish to store the uploaded file:

```
/**
 * Update the avatar for the user.
 *
 * @param  Request  $request
 * @return Response
 */
public function update(Request $request)
{
    $path = $request->file('avatar')->store('avatars', 's3');

    return $path;
}
```

To learn more about storing uploaded files, check out the full documentation.
### Webpack & Laravel Elixir

Along with Laravel 5.3, Laravel Elixir 6.0 has been released with baked-in support for the Webpack and Rollup JavaScript module bundlers. By default, the Laravel 5.3 gulpfile.js now uses Webpack to compile your JavaScript. The full Elixir documentation contains more information on both of these bundlers:

```
elixir(mix => {
    mix.sass('app.scss')
       .webpack('app.js');
});
```

### Frontend Structure

There is a free video tutorial on this feature available on Laracasts.

Laravel 5.3 ships with a more modern frontend structure. This mainly affects the `make:auth` authentication scaffolding. Instead of loading frontend assets from a CDN, dependencies are specified in the default package.json file.

In addition, support for single-file Vue components is now included out of the box. A sample Example.vue component is included in the resources/assets/js/components directory. Furthermore, the new resources/assets/js/app.js file bootstraps and configures your JavaScript libraries and, if applicable, Vue components.

This structure provides a better starting point for developing modern, robust JavaScript applications, without requiring your application to use any particular JavaScript or CSS framework. To learn more about getting started with modern Laravel frontend development, check out the new introductory frontend documentation.
### Routes Files

By default, a fresh Laravel 5.3 application contains two HTTP route files in the new top-level routes directory. The web and api route files provide more explicit guidance on how to split the routes for your web interface and your API. The RouteServiceProvider automatically assigns the api prefix to the routes in the api file.

### Closure Console Commands

In addition to being defined as command classes, Artisan commands may now be defined as simple closures in the `commands()` method of your app/Console/Kernel.php file. In fresh Laravel 5.3 applications, the `commands()` method loads the routes/console.php file, which allows you to define your console commands as closure-based, "route-like" entry points into your application:
```
Artisan::command('build', function () {
    $this->info('Building project...');
});
```
To learn more about closure commands, check out the full Artisan documentation.

### The `$loop` Variable

A `$loop` variable is available inside loops in your Blade templates. This variable provides access to some useful bits of information, such as the current loop index and whether this is the first or last iteration of the loop:
```
@foreach ($users as $user)
    @if ($loop->first)
        This is the first iteration.
    @endif

    @if ($loop->last)
        This is the last iteration.
    @endif

    <p>This is user {{ $user->id }}</p>
@endforeach
```
For more information, consult the full Blade documentation.

## Laravel 5.2

Laravel 5.2 includes improvements such as multiple authentication driver support, implicit model binding, simplified Eloquent global scopes, streamlined authentication scaffolding, middleware groups, rate limiting middleware, array validation improvements, and more.

### Authentication Drivers

In previous versions of Laravel, only the default session-based authentication driver was supported out of the box, and you could not have more than one authenticatable model per application.

In Laravel 5.2, you may define additional authentication drivers, define multiple authenticatable models or user tables, and control their authentication process separately from each other. For example, if your application has one table of admin users and another table of student users, you may use the Auth methods to authenticate against each of these tables separately.

### Authentication Scaffolding

Authentication is already fairly easy to configure in Laravel, but Laravel 5.2 provides a convenient, quick way to scaffold the authentication views for your backend. Simply execute the `make:auth` command in your terminal:

```
php artisan make:auth
```

This command will generate plain, Bootstrap-compatible views for user login, registration, and password reset. The command will also append the appropriate routes to your routes file.

This feature is only meant to be used on new applications, not during application upgrades.

### Implicit Model Binding

Implicit model binding makes it painless to inject relevant models directly into your routes and controllers. For example, assume you have a route defined like the following:
```
use App\User;

Route::get('/user/{user}', function (User $user) {
    return $user;
});
```

In Laravel 5.1, you would need to use the `Route::model()` method to instruct Laravel to inject the `App\User` instance that matches the `{user}` parameter in your route definition. However, in Laravel 5.2, the framework will automatically inject this model based on the URI segment, allowing you to quickly gain access to the model instances you need. Laravel will automatically inject the model when the route parameter segment (`{user}`) matches the corresponding variable name of the route closure or controller method (`$user`) and the variable is type-hinting an Eloquent model class.

Middleware groups allow you to group several route middleware under a single, convenient key, letting you assign several middleware to a route at once. For example, this can be useful when building a web UI and an API within the same application: you may group the session and CSRF middleware into a `web` group, and the rate limiter into an `api` group.

In fact, the default Laravel 5.2 application structure takes exactly this approach. For example, in the default App\Http\Kernel.php file you will find the following:

```
/**
 * The application's route middleware groups.
 *
 * @var array
 */
protected $middlewareGroups = [
    'web' => [
        \App\Http\Middleware\EncryptCookies::class,
        \Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
        \Illuminate\Session\Middleware\StartSession::class,
        \Illuminate\View\Middleware\ShareErrorsFromSession::class,
        \App\Http\Middleware\VerifyCsrfToken::class,
    ],

    'api' => [
        'throttle:60,1',
    ],
];
```

Then, the `web` group may be assigned to routes like so:
```
Route::group(['middleware' => ['web']], function () {
    //
});
```

However, keep in mind that the `web` middleware group is already applied to your routes by default, since the RouteServiceProvider includes it in the default middleware group.
### Rate Limiting

A new rate limiter middleware is now included with the framework, allowing you to easily limit the number of requests that a given IP address can make to a route over a given number of minutes. For example, to limit a single IP address to 60 requests per minute, you may do the following:
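A sketch of such a route definition using the `throttle` middleware (the route itself is an illustrative assumption):

```
Route::group(['middleware' => 'throttle:60,1'], function () {
    Route::get('/api/users', function () {
        //
    });
});
```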
### Array Validation

Validating array form input fields is much easier in Laravel 5.2. For example, to validate that each e-mail in a given array input field is unique, you may do the following:

```
$validator = Validator::make($request->all(), [
    'person.*.email' => 'email|unique:users'
]);
```

Likewise, you may use the * character when specifying your validation messages in your language files, making it a breeze to use a single validation message for array-based fields:

```
'custom' => [
    'person.*.email' => [
        'unique' => 'Each person must have a unique e-mail address',
    ]
],
```

### Bail Validation Rule

A new `bail` validation rule has been added, which stops the validator after the first validation failure for a given rule. For example, you may prevent the `unique` check from running on a value that fails the `integer` check:

```
$this->validate($request, [
    'user_id' => 'bail|integer|unique:users'
]);
```
### Eloquent Global Scope Improvements

In previous versions of Laravel, global Eloquent scopes were complicated and error-prone; however, in Laravel 5.2, global query scopes only require you to implement a single, simple `apply()` method.

For more information on writing global scopes, check out the full Eloquent documentation.

## Laravel 5.1.11

Laravel 5.1.11 introduces authorization support out of the box! Conveniently organize your application's authorization logic using simple callbacks or policy classes, and authorize actions using simple, expressive methods.

For more information, please refer to the authorization documentation.

## Laravel 5.1.4

Laravel 5.1.4 introduces simple login throttling to the framework. Consult the authentication documentation for more information.

## Laravel 5.1

Laravel 5.1 continues the improvements made in Laravel 5.0 by adopting the PSR-2 standard and adding event broadcasting, middleware parameters, Artisan improvements, and more.
### PHP 5.5.9+

Since PHP 5.4 reaches "end of life" in September and will no longer receive security updates from the PHP development team, Laravel 5.1 requires PHP 5.5.9 or greater. PHP 5.5.9 allows compatibility with the latest versions of popular PHP libraries such as Guzzle and the AWS SDK.

### LTS

Laravel 5.1 is the first Laravel release to receive long term support. Laravel 5.1 will receive bug fixes for 2 years and security fixes for 3 years. This is the largest support window ever provided for a Laravel release and provides stability and peace of mind for larger, enterprise clients and customers.

### PSR-2

The PSR-2 coding style guide has been adopted as the default style guide for the Laravel framework. Additionally, all generators have been updated to generate PSR-2 compatible syntax.

### Documentation

Every page of the Laravel documentation has been meticulously reviewed and dramatically improved. All code examples have also been reviewed and expanded to provide more relevance and context.

### Event Broadcasting

Many modern web applications use web sockets to implement realtime, live-updating user interfaces. When some data is updated on the server, a message is sent over a websocket connection to be handled by the client.

To assist you in building these types of applications, Laravel makes it easy to "broadcast" your events over a websocket connection. Broadcasting your Laravel events allows you to share the same event names between your server-side code and your client-side JavaScript framework.

To learn more about event broadcasting, check out the event documentation.

### Middleware Parameters

Middleware can now receive additional custom parameters. For example, if your application needs to verify that the authenticated user has a given "role" before performing a given action, you could create a RoleMiddleware that receives the role name as an additional argument:
```
<?php

namespace App\Http\Middleware;

use Closure;

class RoleMiddleware
{
    /**
     * Run the request filter.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  \Closure  $next
     * @param  string  $role
     * @return mixed
     */
    public function handle($request, Closure $next, $role)
    {
        if (! $request->user()->hasRole($role)) {
            // Redirect...
        }

        return $next($request);
    }
}
```

Middleware parameters may be specified when defining the route by separating the middleware name and parameters with a colon. Multiple parameters should be delimited by commas:
```
Route::put('post/{id}', ['middleware' => 'role:editor', function ($id) {
    //
}]);
```
For more information on middleware, check out the middleware documentation.
### Testing Overhaul

The built-in testing capabilities of Laravel have been dramatically improved. A variety of new methods provide a fluent, expressive interface for interacting with your application and examining its responses. For example, check out the following test:
```
public function testNewUserRegistration()
{
    $this->visit('/register')
         ->type('Taylor', 'name')
         ->check('terms')
         ->press('Register')
         ->seePageIs('/dashboard');
}
```
For more information on testing, check out the testing documentation.

Laravel now ships with an easy way to create stub Eloquent models using model factories. Model factories allow you to easily define a set of "default" attributes for your Eloquent model and then generate test model instances for your tests or database seeds. Model factories also take advantage of the powerful Faker PHP library for generating random attribute data:
```
$factory->define(App\User::class, function ($faker) {
    return [
        'name' => $faker->name,
        'email' => $faker->email,
        'password' => str_random(10),
        'remember_token' => str_random(10),
    ];
});
```
For more information on model factories, check out the documentation.

### Artisan Improvements

Artisan commands may now be defined using a simple, route-like "signature", which provides an extremely simple interface for defining command line arguments and options. For example, you may define a simple command and its options like so:

```
/**
 * The name and signature of the console command.
 *
 * @var string
 */
protected $signature = 'email:send {user} {--force}';
```
For more information on defining Artisan commands, consult the Artisan documentation.

### Folder Structure

To better express intent, the app/Commands directory has been renamed to app/Jobs. Additionally, the app/Handlers directory has been consolidated into a single app/Listeners directory, which contains event listeners. However, these changes are not mandatory, and you are not required to update to the new folder structure to use Laravel 5.1.

In previous versions of Laravel, encryption was handled by the mcrypt PHP extension. Beginning with Laravel 5.1, encryption is handled by the openssl extension, which is more actively maintained.

## Laravel 5.0

Laravel 5.0 introduces a fresh application structure for the default Laravel project. This new structure serves as a better foundation for building robust applications in Laravel, and embraces autoloading standards (PSR-4) throughout the application. First, let's examine some of the major changes.

### New Folder Structure

The old app/models directory has been entirely removed. Instead, all of your code now lives directly within the app folder and, by default, is organized under the App namespace. This default namespace can be quickly changed using the new `app:name` Artisan command.

Controllers, middleware, and requests (a new type of class in Laravel 5.0) are now grouped under the app/Http directory, as they are all classes related to the HTTP transport layer of your application. Instead of a single, monolithic route filters file, all middleware are now broken into their own individual class files.

A new app/Providers directory replaces the app/start files from previous versions of Laravel 4.x. These service providers provide various bootstrapping functions for your application, such as error handling, logging, route loading, and more. Of course, you are free to create additional service providers for your application.

Application language files and views have been moved into the resources directory.

### Contracts

All major Laravel components implement interfaces located in the illuminate/contracts repository. This repository has no external dependencies. Having a convenient, centrally located set of interfaces you may use for decoupling and dependency injection serves as an easy alternative to Laravel facades.

For more information on contracts, consult the full documentation.

### Route Cache

If your application consists entirely of controller routes, you may use the new `route:cache` Artisan command to drastically speed up the registration of your routes. This is especially useful for applications with 100+ routes and will drastically speed up this portion of your application.
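For reference, caching the routes is a single Artisan call:

```
php artisan route:cache
```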
### Route Middleware

In addition to Laravel 4 style route "filters", Laravel 5 now supports HTTP middleware, and the included authentication and CSRF "filters" have been converted to middleware. Middleware provides a single, consistent interface to replace all types of filters, allowing you to easily inspect, and even reject, requests before they enter your application.

### Controller Method Injection

In addition to the existing constructor injection, you may now type-hint dependencies on controller methods. The service container will automatically inject the dependencies, even if the route contains other parameters:
```
public function createPost(Request $request, PostRepository $posts)
```
### Authentication Scaffolding

User registration, authentication, and password reset controllers are now included out of the box, along with simple corresponding views located at resources/views/auth. In addition, a "users" table migration has been included with the framework. Including these simple resources allows rapid development of application ideas without getting bogged down in authentication boilerplate. The authentication views may be accessed on the auth/login and auth/register routes. The App\Services\Auth\Registrar service is responsible for user validation and creation.

### Event Objects

You may now define events as objects instead of simply using strings. For example, check out the following event:
```
class PodcastWasPurchased
{
    public $podcast;

    public function __construct(Podcast $podcast)
    {
        $this->podcast = $podcast;
    }
}
```
The event may be dispatched like normal:
```
Event::fire(new PodcastWasPurchased($podcast));
```
Of course, your event handler will receive the event object instead of a list of data:
```
class ReportPodcastPurchase
{
    public function handle(PodcastWasPurchased $event)
    {
        //
    }
}
```
For more information on working with events, check out the full documentation.

### Commands / Queueing

In addition to the queued job format supported in Laravel 4, Laravel 5 allows you to represent your queued jobs as simple command objects. These commands live in the app/Commands directory. Here's a sample command:
```
class PurchasePodcast extends Command implements SelfHandling, ShouldBeQueued
{
    use SerializesModels;

    protected $user, $podcast;

    /**
     * Create a new command instance.
     *
     * @return void
     */
    public function __construct(User $user, Podcast $podcast)
    {
        $this->user = $user;
        $this->podcast = $podcast;
    }

    /**
     * Execute the command.
     *
     * @return void
     */
    public function handle()
    {
        // Handle the logic to purchase the podcast...

        event(new PodcastWasPurchased($this->user, $this->podcast));
    }
}
```
The base Laravel controller uses the new DispatchesCommands trait, allowing you to easily dispatch your commands for execution:
```
$this->dispatch(new PurchasePodcastCommand($user, $podcast));
```
Of course, you may also use commands for tasks that are executed synchronously (not queued). In fact, using commands is a great way to encapsulate complex tasks your application needs to perform. For more information, check out the command bus documentation.

### Database Queue

A database queue driver is now included in Laravel, providing a simple, local queue driver that requires no extra package installation beyond your database software.

### Laravel Scheduler

In the past, developers generated a Cron entry for each console command they wished to schedule. This is a headache: your console schedule is not kept in source control, and you must SSH into your server to add Cron entries. Let's make life easier. The Laravel command scheduler allows you to fluently and expressively define your command schedule within Laravel itself, and only a single Cron entry is needed on your server.
```
$schedule->command('artisan:command')->dailyAt('15:00');
```
To learn all about the scheduler, consult the full documentation!

### Tinker / Psysh

The `php artisan tinker` command now uses Psysh by Justin Hileman, a more robust REPL for PHP. If you liked Boris in Laravel 4, you're going to love Psysh. Even better, it works on Windows! To get started, just try: `php artisan tinker`

### DotEnv

Instead of a variety of confusing, nested environment configuration directories, Laravel 5 now uses DotEnv by Vance Lucas. This library provides a super simple way to manage your environment configuration, and makes environment detection a breeze in Laravel 5. For more details, check out the full configuration documentation.

### Laravel Elixir

Laravel Elixir provides a fluent, expressive interface for compiling and concatenating your assets. If you've ever been intimidated by learning Grunt or Gulp, fear no more. Elixir makes it a cinch to get started using Gulp to compile your Less, Sass, and CoffeeScript. It can even run your tests for you!

For more information on Elixir, check out the official documentation.

### Laravel Socialite

Laravel Socialite is an optional, Laravel 5.0+ compatible package that provides totally painless authentication with OAuth providers. Currently, Socialite supports Facebook, Twitter, Google, and GitHub. Here's what it looks like:
```
public function redirectForAuth()
{
    return Socialize::with('twitter')->redirect();
}

public function getUserFromProvider()
{
    $user = Socialize::with('twitter')->user();
}
```
No more spending hours writing OAuth authentication flows. Get started in minutes! The full documentation has all the details.

### Flysystem Integration

Laravel now includes the powerful Flysystem filesystem abstraction library, providing pain-free integration with local storage as well as Amazon S3 and Rackspace cloud storage, all with one unified and elegant API! Storing a file in Amazon S3 is now as simple as:
```
Storage::put('file.txt', 'contents');
```
For more information on the Laravel Flysystem integration, consult the full documentation.

### Form Requests

Laravel 5.0 introduces form requests, which extend the Illuminate\Foundation\Http\FormRequest class. These request objects can be combined with controller method injection to provide a boilerplate-free way of validating user input. Let's dig in and look at a sample `FormRequest`:
```
<?php

namespace App\Http\Requests;

class RegisterRequest extends FormRequest
{
    public function rules()
    {
        return [
            'email' => 'required|email|unique:users',
            'password' => 'required|confirmed|min:8',
        ];
    }

    public function authorize()
    {
        return true;
    }
}
```
Once the class has been defined, we can type-hint it on our controller action:
```
public function register(RegisterRequest $request)
{
    var_dump($request->input());
}
```
When the Laravel service container identifies that the class it is injecting is a FormRequest instance, the request will automatically be validated. This means that if your controller action is called, you can safely assume the HTTP request input has been validated according to the rules you specified in your form request class. Even more, if the request is invalid, an HTTP redirect, which you may customize, will automatically be issued, and the error messages will either be flashed to the session or converted to JSON. Form validation has never been simpler. For more information on FormRequest validation, check out the documentation.

### Simple Controller Request Validation

The Laravel 5 base controller now includes a ValidatesRequests trait. This trait provides a simple `validate` method to validate incoming requests. If FormRequests are a little too much for your application, check this out:
```
public function createPost(Request $request)
{
    $this->validate($request, [
        'title' => 'required|max:255',
        'body' => 'required',
    ]);
}
```
If the validation fails, an exception will be thrown and the proper HTTP response will automatically be sent back to the browser. The validation errors will even be flashed to the session! If the request was an AJAX request, Laravel even takes care of sending a JSON representation of the validation errors back to you.

For more information on this new method, check out the documentation.

### New Generators

To complement the new default application structure, new Artisan generator commands have been added to the framework. See `php artisan list` for more details.

### Configuration Cache

You may now cache all of your configuration in a single file using the `config:cache` command.
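For reference, this is a single Artisan call:

```
php artisan config:cache
```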
### Symfony VarDumper

The popular `dd` helper function, which dumps variable debug information, has been upgraded to use the excellent Symfony VarDumper. This provides color-coded output and even collapsing of arrays. Just try the following in your project: `dd([1, 2, 3]);`

## Laravel 4.2

The full change list for this release can be seen by running the php artisan changes command from a 4.2 installation, or by viewing the change file on GitHub. These notes only cover the major enhancements and changes for the release.

Note: During the 4.2 release cycle, many small bug fixes and enhancements were incorporated into the various Laravel 4.1 point releases. So, be sure to check the change list for Laravel 4.1 as well!

### PHP 5.4 Requirement

Laravel 4.2 requires PHP 5.4 or greater. This upgraded PHP requirement allows us to use new PHP features such as traits to provide more expressive interfaces for tools like Laravel Cashier. PHP 5.4 also brings significant speed and performance improvements over PHP 5.3.

### Laravel Forge

Laravel Forge is a new web-based application providing a simple way to create and manage PHP servers on the cloud of your choice, including Linode, DigitalOcean, Rackspace, and Amazon EC2. Supporting automated Nginx configuration, SSH key access, Cron job automation, server monitoring via NewRelic & Papertrail, "Push To Deploy", Laravel queue worker configuration, and more, Forge provides the simplest and most affordable way to launch all of your Laravel applications.

The default Laravel 4.2 installation's app/config/database.php configuration file is now configured for Forge usage by default, allowing for more convenient deployment of fresh applications onto the platform.

More information about Laravel Forge can be found on the official Forge website.

### Laravel Homestead

Laravel Homestead is an official Vagrant environment for developing robust Laravel and PHP applications. The vast majority of the box's provisioning needs are handled before the box is packaged for distribution, allowing the box to boot extremely quickly. Homestead includes Nginx 1.6, PHP 5.5.12, MySQL, Postgres, Redis, Memcached, Beanstalk, Node, Gulp, Grunt, and Bower. Homestead includes a simple Homestead.yaml configuration file for managing multiple Laravel applications on a single box.

The default Laravel 4.2 installation now includes an app/config/local/database.php configuration file that is configured to use the Homestead database out of the box, making Laravel's initial installation and configuration more convenient.

Homestead documentation has also been added to the official documentation.
### Laravel Cashier

Laravel Cashier is a simple, expressive library for managing subscription billing with Stripe. With the introduction of Laravel 4.2, we are including Cashier documentation along with the main Laravel documentation, though installation of the component itself is still optional. This release of Cashier brings numerous bug fixes, multi-currency support, and compatibility with the latest Stripe API.

### Daemon Queue Workers

The Artisan queue:work command now supports a --daemon option to start a worker in "daemon mode", meaning the worker will continue to process jobs without ever re-booting the framework. This results in a significant reduction in CPU usage at the cost of a slightly more complex application deployment process.

More information about daemon queue workers can be found in the queue documentation.

### Mail API Drivers

Laravel 4.2 introduces new Mailgun and Mandrill API drivers for the `Mail` functions. For many applications, this provides a faster and more reliable way of sending e-mails than the SMTP options. The new drivers utilize the Guzzle 4 HTTP library.

### Soft Deleting Traits

A much cleaner architecture for "soft deletes" and other "global scopes" has been introduced via PHP 5.4 traits. This new architecture allows for easier construction of similar global traits and a cleaner separation of concerns within the framework itself.

More information on the new `SoftDeletingTrait` may be found in the Eloquent documentation.

### Convenient Auth & Remindable Traits

The default Laravel 4.2 installation now uses simple traits to include the properties needed for the authentication and password reminder user interfaces. This provides a much cleaner default `User` model file out of the box.

### "Simple Paginate"

A `simplePaginate` method was added to the query builder and Eloquent, allowing for more efficient queries when using simple "Next" and "Previous" links in your pagination view.

### Migration Confirmation
In production, destructive migration operations will now ask for confirmation. Commands may be forced to run without any prompts using the --force option.
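For example:

```
php artisan migrate --force
```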
## Laravel 4.1

### Full Change List

The full change list for this release can be seen by running the `php artisan changes` command from a 4.1 installation, or by viewing the change file on GitHub. These notes only cover the major enhancements and changes for the release.
### New SSH Component

An entirely new `SSH` component has been introduced with this release. This feature allows you to easily SSH into remote servers and run commands. To learn more, consult the SSH component documentation.

The new php artisan tail command utilizes the new SSH component. For more information, consult the tail command documentation.

### Boris In Tinker

The php artisan tinker command now utilizes the Boris REPL if your system supports it. The readline and pcntl PHP extensions must be installed to use this feature. If you do not have these extensions, the shell from 4.0 will be used.

### Eloquent Improvements

A new `hasManyThrough` relationship has been added to Eloquent. To learn how to use it, consult the Eloquent documentation. A new `whereHas` method has also been introduced to allow retrieving models based on relationship constraints.

### Database Read / Write Connections

Automatic handling of separate read / write connections is now available throughout the database layer, including the query builder and Eloquent. For more information, consult the documentation.

### Queue Priority

Queue priorities are now supported by passing a comma-delimited list to the queue:listen command.

### Failed Queue Job Handling

The queue facilities now include automatic handling of failed jobs when using the new --tries switch on queue:listen. More information on handling failed jobs can be found in the queue documentation.

### Cache Tags

Cache "sections" have been superseded by "tags". Cache tags allow you to assign multiple "tags" to a cache item and retrieve items by tag. More information on using cache tags may be found in the cache documentation.

### Flexible Password Reminders

The password reminder engine has been changed to provide greater developer flexibility when validating passwords, flashing status messages to the session, and so forth. For more information on using the enhanced password reminder engine, consult the documentation.

### Improved Routing Engine

Laravel 4.1 features a totally rewritten routing layer. The API is the same; however, registering routes is a full 100% faster compared to 4.0. The entire engine has been greatly simplified, and the dependency on Symfony Routing has been minimized to the compiling of route expressions.

### Improved Session Engine

This release also introduces an entirely new session engine. Similar to the routing improvements, the new session layer is leaner and faster. We no longer use Symfony's (and therefore PHP's) session handling facilities, and instead use a custom solution that is simpler and easier to maintain.

### Doctrine DBAL

If you are using the `renameColumn` function in your migrations, you will need to add the doctrine/dbal dependency to your composer.json file. This package is no longer included with Laravel by default.
# Upgrade Guide

## Upgrading To 5.3.0 From 5.2

Estimated upgrade time: 2-3 hours

Update your laravel/framework dependency to 5.3.* in your composer.json file.

In the same file, also update your symfony/css-selector and symfony/dom-crawler dependencies in the require-dev section to 3.1.*.
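A rough sketch of the relevant composer.json excerpt (your other dependencies stay as they are):

```
"require": {
    "laravel/framework": "5.3.*"
},
"require-dev": {
    "symfony/css-selector": "3.1.*",
    "symfony/dom-crawler": "3.1.*"
}
```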
### PHP & HHVM

Laravel 5.3 requires PHP 5.6.4 or greater. HHVM is no longer officially supported as it does not contain the same language features as PHP 5.6+.

All of the deprecations listed in the Laravel 5.2 upgrade notes have been removed from the framework. You should review this list and verify you are no longer using these deprecated features.

### Application Service Providers

You may remove the arguments from the `boot()` method of the EventServiceProvider, RouteServiceProvider, and AuthServiceProvider classes. Any calls to the given arguments may be converted to use the equivalent facade instead. For example, instead of calling methods on the `$dispatcher` argument, you may simply call the Event facade. Likewise, instead of making method calls to the `$router` argument, you may call the Route facade, and instead of making method calls to the `$gate` argument, you may call the Gate facade.

When converting method calls to facades, be sure to import the facade class into your service provider.

### Arrays

Key / Value Order Change

The `first()`, `last()`, and `where()` methods on the Arr class now pass the "value" as the first parameter to the given callback closure. For example:
```
Arr::first($array, function ($value, $key) {
    return ! is_null($value);
});
```

In previous versions of Laravel, the `$key` was passed first. Since most use cases are only interested in the `$value`, it is now passed first. You should do a "global find" for usages of these methods in your application to verify that you are expecting the `$value` to be passed as the first argument to your closure.
### Artisan

The `make:console` command has been renamed to `make:command`.
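For example, generating a command class now looks like this (the `SendEmails` class name is an illustrative assumption):

```
php artisan make:command SendEmails
```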
Authentication Scaffolding

The two default authentication controllers provided with the framework have been split into four smaller controllers. This change provides cleaner, more focused default authentication controllers. The easiest way to upgrade your application's authentication controllers is to grab fresh copies of each controller from GitHub and place them into your application.

You should also make sure that you are calling the `Auth::routes()` method in your routes/web.php file. This method registers the proper routes for the new authentication controllers. Once the controllers have been placed into your application, you may need to re-implement any customizations you made to them. For example, if you are customizing the authentication guard that is used, you may need to override the controller's `guard()` method. You can examine each authentication controller's trait to determine which methods should be overridden. If you were not customizing the authentication controllers, you should just be able to drop in fresh copies of the controllers from GitHub and verify that you are calling the `Auth::routes()` method in your routes/web.php file.

Password reset e-mails now use the new Laravel notifications feature. If you would like to customize the notification sent when a password reset link is requested, you should override the
`sendPasswordResetNotification()` method of the Illuminate\Auth\Passwords\CanResetPassword trait.
Your User model must use the new Illuminate\Notifications\Notifiable trait in order for password reset link e-mails to be delivered:
```
<?php

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    use Notifiable;
}
```

Don't forget to register the Illuminate\Notifications\NotificationServiceProvider in the `providers` array of your config/app.php configuration file.

The `Auth::routes()` method now registers a POST route for /logout instead of a GET route. This prevents other web applications from logging your users out of your application. To upgrade, you should either convert your logout requests to use the POST HTTP verb or register your own GET route for the /logout URI:
```
Route::get('/logout', 'Auth\LoginController@logout');
```
Authorizing Policy Methods With Class Names

Some policy methods only receive the currently authenticated user and not an instance of the model they authorize. This situation is most common when authorizing create actions. For example, when creating a blog, you may wish to check whether a user is authorized to create any posts at all.

When defining policy methods that will not receive a model instance, such as a `create()` method, the class name will no longer be passed as the second argument to the method. Your method should only expect the authenticated user instance:

```
/**
 * Determine if the given user can create posts.
 *
 * @param  \App\User  $user
 * @return bool
 */
public function create(User $user)
{
    //
}
```

The AuthorizesResources trait has been merged with the AuthorizesRequests trait. You should remove the AuthorizesResources trait from your app/Http/Controllers/Controller.php file.

In previous versions of Laravel, when registering custom Blade directives using the `directive()` method, the $expression passed to your directive callback contained the outermost parentheses. In Laravel 5.3, these outermost parentheses are not included in the expression passed to your directive callback. Be sure to review the Blade extension documentation and verify your custom Blade directives are still working properly.
### Broadcasting

Laravel 5.3 includes significant improvements to event broadcasting. You should add the new BroadcastServiceProvider to your app/Providers directory by grabbing a fresh copy of the source from GitHub. Once the new service provider has been defined, you should add it to the providers array of your config/app.php configuration file.

### Cache

Extension Closure Binding & `$this`

When calling the `Cache::extend()` method with a closure, `$this` will be bound to the CacheManager instance, allowing you to call its methods from within your extension closure:
```
Cache::extend('memcached', function ($app, $config) {
    try {
        return $this->createMemcachedDriver($config);
    } catch (Exception $e) {
        return $this->createNullDriver($config);
    }
});
```
### Cashier

If you are using Cashier, you should upgrade your laravel/cashier package to the ~7.0 release. This release of Cashier only upgrades a few internal methods for compatibility with Laravel 5.3 and is not a breaking change.
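A rough sketch of the relevant composer.json constraint:

```
"laravel/cashier": "~7.0"
```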
Key / Value Order Change

The `first()`, `last()`, and `contains()` collection methods now pass the "value" as the first parameter to the given callback closure. For example:
```
$collection->first(function ($value, $key) {
    return ! is_null($value);
});
```

In previous versions of Laravel, the `$key` was passed first. Since most use cases are only interested in the `$value`, it is now passed first. You should do a "global find" for usages of these methods in your application to verify that you are expecting the `$value` to be passed as the first argument to your closure.

Collection `where()` comparisons are "loose" by default. The `where()` method now performs a "loose" comparison by default instead of a strict comparison. If you would like to perform a strict comparison, use the `whereStrict()` method. The `where()` method also no longer accepts a third parameter to indicate "strictness". You should explicitly call either `where()` or `whereStrict()`, depending on the comparison you need.

Add the following option to your config/app.php configuration file:

```
'name' => 'Your Application Name',
```
In previous versions of Laravel, you could access session variables or the authenticated user in your controller's constructor. This was never intended to be an explicit feature of the framework. In Laravel 5.3, you can't access the session or the authenticated user in your controller's constructor because the middleware has not run yet.

As an alternative, you may define a closure-based middleware directly in your controller's constructor. Before using this feature, make sure your application is running Laravel 5.3.4 or above:

```
<?php

namespace App\Http\Controllers;

use App\User;
use Illuminate\Support\Facades\Auth;
use App\Http\Controllers\Controller;

class ProjectController extends Controller
{
    /**
     * All of the current user's projects.
     */
    protected $projects;

    /**
     * Create a new controller instance.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware(function ($request, $next) {
            $this->projects = Auth::user()->projects;

            return $next($request);
        });
    }
}
```
Of course, you may also access the request session data or the authenticated user by type-hinting the Illuminate\Http\Request class on your controller action:

```
/**
 * Show all of the projects for the current user.
 *
 * @param  \Illuminate\Http\Request  $request
 * @return Response
 */
public function index(Request $request)
{
    $projects = $request->user()->projects;

    $value = $request->session()->get('key');

    //
}
```
The query builder now returns Illuminate\Support\Collection instances instead of plain arrays, bringing consistency to the result types returned by the query builder and Eloquent.

If you do not want to migrate your query builder results to Collection instances, you may chain the `all()` method onto your calls to the query builder's `get()` and `pluck()` methods. This will return a plain PHP array, allowing you to maintain backwards compatibility:
```
$users = DB::table('users')->get()->all();

$usersIds = DB::table('users')->pluck('id')->all();
```

The Eloquent `getRelation()` method no longer throws a BadMethodCallException when a relation cannot be loaded. Instead, it throws an Illuminate\Database\Eloquent\RelationNotFoundException. This change only affects your application if you were manually catching the BadMethodCallException.

The Eloquent `$morphClass` property that could be defined on models has been removed in favor of defining a "morph map". Defining a morph map provides support for eager loading and resolves additional issues with polymorphic relations. If you were relying on the `$morphClass` property, you should migrate to `morphMap` using the following syntax:

```
Relation::morphMap([
    'YourCustomMorphName' => YourModel::class,
]);
```

For example, if you previously defined a `$morphClass` like this:

```
class User extends Model
{
    protected $morphClass = 'user';
}
```

You should define the following `morphMap` in the `boot()` method of your AppServiceProvider:

```
Relation::morphMap([
    'user' => User::class,
]);
```

Eloquent scopes now respect the leading boolean of scope constraints. For example, if you begin your scope with an `orWhere` constraint, it will no longer be converted to a normal `where`. If you were relying on this behavior (for example, adding multiple `orWhere` constraints within a loop), you should verify that the first condition is a normal `where` to avoid any boolean logic issues. If your scopes begin with `where` constraints, no action is required. Remember, you can verify the SQL of a query using the `toSql()` method:
```
User::where('foo', 'bar')->toSql();
```
The JoinClause class has been rewritten to unify its syntax with the query builder. The optional `$where` parameter of the `on` clause has been removed. To add "where" conditions, you should explicitly use one of the `where` methods offered by the query builder:
```
$query->join('table', function ($join) {
    $join->on('foo', 'bar')->where('bar', 'baz');
});
```

The operator of the `on` clause is now validated and can no longer contain invalid values. If you were relying on this behavior (for example, `$join->on('foo', 'in', DB::raw('("bar")'))`), you should rewrite the condition using the appropriate `where` method:
```
$join->whereIn('foo', ['bar']);
```
The `$bindings` property has also been removed. To manipulate join bindings directly, you may use the `addBinding()` method:
```
$query->join(DB::raw('('.$subquery->toSql().') table'), function ($join) use ($subquery) {
    $join->addBinding($subquery->getBindings(), 'join');
});
```
The Mcrypt encrypter was deprecated in Laravel 5.1.0, released in June 2015. The encrypter has been completely removed in the 5.3.0 release in favor of the new OpenSSL based encryption implementation, which has been the default encryption scheme for all releases since Laravel 5.1.0.

If you are still using an Mcrypt based cipher in your config/app.php configuration file, you should update the cipher to AES-256-CBC and set your key to a random 32 byte string, which may be securely generated with Laravel's key generation Artisan command.

If you are storing encrypted data in your database using the Mcrypt encrypter, you may install the laravel/legacy-encrypter package, which includes the legacy Mcrypt encrypter implementation. You should use this package to decrypt your encrypted data and re-encrypt it using the new OpenSSL encrypter. For example, you may do something like the following in a custom Artisan command:
```
$legacy = new McryptEncrypter($encryptionKey);

foreach ($records as $record) {
    $record->encrypted = encrypt(
        $legacy->decrypt($record->encrypted)
    );

    $record->save();
}
```
### Exception Handler

The base exception handler class now requires an Illuminate\Container\Container instance to be passed to its constructor. This change only affects your application if you have defined a custom `__construct()` method in your app/Exceptions/Handler.php file. If you have done this, you should pass a container instance into the `parent::__construct()` call:

```
parent::__construct(app());
```

You should add an `unauthenticated()` method to your App\Exceptions\Handler class. This method will convert authentication exceptions into HTTP responses:

```
/**
 * Convert an authentication exception into an unauthenticated response.
 *
 * @param  \Illuminate\Http\Request  $request
 * @param  \Illuminate\Auth\AuthenticationException  $exception
 * @return \Illuminate\Http\Response
 */
protected function unauthenticated($request, AuthenticationException $exception)
{
    if ($request->expectsJson()) {
        return response()->json(['error' => 'Unauthenticated.'], 401);
    }

    return redirect()->guest('login');
}
```
The `can` Middleware Namespace Change

The `can` middleware listed in the `$routeMiddleware` property of your HTTP kernel should be updated to the following class:
```
'can' => \Illuminate\Auth\Middleware\Authorize::class,
```
The `can` Middleware Authentication Exception

The `can` middleware will now throw an instance of Illuminate\Auth\AuthenticationException if the user is not authenticated. If you were manually catching a different exception type, you should update your application to catch this exception. In most cases, this change will not affect your application.

Route model binding is now performed using middleware. All applications should add the Illuminate\Routing\Middleware\SubstituteBindings middleware to the web middleware group in your app/Http/Kernel.php file:
```
\Illuminate\Routing\Middleware\SubstituteBindings::class,
```
Также вам надо зарегистрировать посредника маршрутов для подстановки привязок в свойстве `$routeMiddleware` вашего HTTP-ядра:
```
'bindings' => \Illuminate\Routing\Middleware\SubstituteBindings::class,
```
После регистрации этого посредника маршрута вам надо добавить его в группу посредников api:
```
'api' => [
    'throttle:60,1',
    'bindings',
],
```
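Для наглядности — схематичный набросок свойства `$middlewareGroups` в app/Http/Kernel.php после этих изменений; остальные посредники в группах здесь опущены и приведены условно, в вашем приложении они могут отличаться:

```
protected $middlewareGroups = [
    'web' => [
        // ... остальные посредники группы web ...
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],

    'api' => [
        'throttle:60,1',
        'bindings',
    ],
];
```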
Laravel 5.3 содержит новую систему уведомлений на основе драйверов. Вам надо зарегистрировать Illuminate\Notifications\NotificationServiceProvider в массиве providers в файле настроек config/app.php.
Также вам надо добавить фасад Illuminate\Support\Facades\Notification в массив aliases в файле настроек config/app.php.
И наконец, вы можете использовать типаж Illuminate\Notifications\Notifiable на модели User или на любой другой модели, где вы хотите получать уведомления.
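Минимальный набросок того, как это может выглядеть в модели App\User (предполагается стандартная модель пользователя из заготовки Laravel 5.3):

```
<?php

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    // Типаж добавляет метод notify() и связь с таблицей уведомлений
    use Notifiable;
}
```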
В Laravel 5.3 намного проще по сравнению с предыдущими версиями Laravel 5.x изменять генерируемый для страничного вывода HTML. Вместо определения класса «Presenter» вам надо только определить простой шаблон Blade. Самый простой способ изменить представления страничного вывода — экспортировать их в вашу папку resources/views/vendor командой `vendor:publish`:
```
php artisan vendor:publish --tag=laravel-pagination
```
Эта команда поместит представления в папку resources/views/vendor/pagination. В этой папке находится файл default.blade.php, который соответствует стандартному представлению страничного вывода. Просто отредактируйте этот файл, чтобы изменить HTML страничного вывода.
Не забудьте ознакомиться с полной документацией по страничному выводу.
### Очереди
В настройке ваших очередей все элементы expire должны быть переименованы в retry_after. А также в настройке Beanstalk элемент ttr должен быть переименован в retry_after. Это изменение имён даёт более ясное представление о назначении этих опций.
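Примерный фрагмент config/queue.php после переименования (значения здесь приведены для иллюстрации):

```
'connections' => [
    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 90, // ранее этот параметр назывался expire
    ],

    'beanstalkd' => [
        'driver' => 'beanstalkd',
        'host' => 'localhost',
        'queue' => 'default',
        'retry_after' => 90, // ранее — ttr
    ],
],
```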
Больше не поддерживается очередь замыканий. Если вы в своём приложении ставите в очередь замыкания, вам надо конвертировать замыкание в класс и ставить в очередь экземпляр класса:
```
dispatch(new ProcessPodcast($podcast));
```
Типаж Illuminate\Queue\SerializesModels теперь правильно сериализует экземпляры Illuminate\Database\Eloquent\Collection. Это, скорее всего, не станет серьёзным изменением для подавляющего большинства приложений, но если ваше приложение полностью зависит от того, что коллекции не будут повторно извлекаться из базы данных задачами из очереди, вам надо убедиться, что это изменение не навредит вашему приложению.
Больше нет необходимости указывать опцию `--daemon` при вызове Artisan-команды `queue:work`. Запуск команды `php artisan queue:work` автоматически приведёт к запуску обработчика в режиме демона. Если вы хотите обработать одну задачу, вы можете использовать опцию `--once` для этой команды:

```
// Запуск обработчика очереди в режиме демона...
php artisan queue:work

// Обработка одной задачи...
php artisan queue:work --once
```
Если вы используете драйвер database для хранения ваших задач в очереди, вам надо удалить из таблицы jobs индекс jobs_queue_reserved_reserved_at_index, а затем удалить столбец reserved. Этот столбец больше не требуется при использовании драйвера database. Когда вы завершите эти изменения, вам надо добавить новый составной индекс для столбцов queue и reserved_at.
Ниже приведён пример миграции, которую вы можете использовать для выполнения необходимых изменений:
```
public function up()
{
    Schema::table('jobs', function (Blueprint $table) {
        $table->dropIndex('jobs_queue_reserved_reserved_at_index');
        $table->dropColumn('reserved');
        $table->index(['queue', 'reserved_at']);
    });

    Schema::table('failed_jobs', function (Blueprint $table) {
        $table->longText('exception')->after('payload');
    });
}

public function down()
{
    Schema::table('jobs', function (Blueprint $table) {
        $table->tinyInteger('reserved')->unsigned();
        $table->index(['queue', 'reserved', 'reserved_at']);
        $table->dropIndex('jobs_queue_reserved_at_index');
    });

    Schema::table('failed_jobs', function (Blueprint $table) {
        $table->dropColumn('exception');
    });
}
```

Различные события задач в очереди, такие как JobProcessing и JobProcessed, больше не содержат свойства `$data`. Вам надо изменить своё приложение, чтобы вызывать `$event->job->payload()` для получения эквивалентных данных.

Если вы вызываете метод `Queue::failing()` в вашем AppServiceProvider, вам надо изменить описание метода на следующее:

```
use Illuminate\Queue\Events\JobFailed;

Queue::failing(function (JobFailed $event) {
    // $event->connectionName
    // $event->job
    // $event->exception
});
```
Расширение для контроля процессов
Если ваше приложение использует параметр `--timeout` для обработчиков очереди, вам надо убедиться, что установлено расширение pcntl.
Сериализация моделей для задач в очереди, использующих устаревший стиль
Обычно задачи в Laravel ставятся в очередь с помощью передачи нового экземпляра задачи в метод `PHPQueue::push()` . Но некоторые приложения могут ставить задачи в очередь с помощью такого устаревшего синтаксиса:
```
Queue::push('ClassName@method');
```
Если вы ставите задачи в очередь с помощью этого синтаксиса, то модели Eloquent теперь не будут автоматически сериализованы и заново получены очередью. Если вы хотите, чтобы очередь могла автоматически сериализовать ваши Eloquent модели, вам надо использовать типаж Illuminate\Queue\SerializesModels на классе вашей задачи и ставить задачи в очередь с помощью нового синтаксиса `PHPpush` :
```
Queue::push(new ClassName);
```
Параметры-ресурсы в единственном числе по умолчанию
В предыдущих версиях Laravel параметры маршрута, регистрируемые с помощью `Route::resource`, не переводились в единственное число. Это могло привести к неожиданному поведению при регистрации привязок модели маршрута. Например, возьмём такой вызов `Route::resource`:
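Такой вызов мог выглядеть, например, так (имя контроллера здесь условное):

```
Route::resource('photos', 'PhotoController');
```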
Для маршрута show будет определён следующий URI:
`/photos/{photos}`

В Laravel 5.3 все параметры-ресурсы маршрутов по умолчанию приводятся к единственному числу. Поэтому такой же вызов `Route::resource` зарегистрирует следующий URI:

`/photos/{photo}`

Если вам нужна старая логика вместо автоматического приведения к единственному числу, вы можете вызвать метод `singularResourceParameters()` в вашем AppServiceProvider:

```
use Illuminate\Support\Facades\Route;

Route::singularResourceParameters(false);
```
Имена ресурсов маршрута больше не дополняются префиксами
Префиксы URL больше не влияют на имена маршрутов, назначаемые с помощью `Route::resource`, поскольку такое поведение противоречит самому смыслу использования имён маршрутов. Если в вашем приложении `Route::resource` используется внутри вызова `Route::group` с опцией prefix, проверьте все ваши вызовы вспомогательной функции `route()` и метода `UrlGenerator::route`, чтобы убедиться, что вы не добавляете префикс URI в имя маршрута.

Если это изменение привело к тому, что у вас теперь есть два маршрута с одним именем, то у вас есть два варианта. Первый — вы можете использовать опцию names при вызове `Route::resource`, чтобы указать нужное вам имя для данного маршрута. Подробнее читайте в документации по маршрутизации ресурсов. Второй — вы можете добавить опцию as в вашу группу маршрутов:

```
Route::group(['as' => 'admin.', 'prefix' => 'admin'], function () {
    // ...
});
```
Если проверка ввода запроса формы закончилась ошибкой, то теперь Laravel выбросит экземпляр Illuminate\Validation\ValidationException вместо экземпляра HttpException. Если вы вручную отлавливали выброшенный запросом формы HttpException, вам надо изменить свои блоки `catch`, чтобы ловить ValidationException.

Если раньше вы использовали метод `has()` для определения наличия сообщений в экземпляре Illuminate\Support\MessageBag, то теперь вам надо использовать метод `count()`. Метод `has()` теперь требует указания параметра и определяет только то, есть ли конкретный ключ в корзине сообщений.
Примитивы с разрешённым значением Null
Теперь при проверке ввода массивов, логических типов, целых и вещественных чисел и строк значение `null` не считается корректным, если в наборе правил нет правила nullable:

```
Validator::make($request->all(), [
    'field' => 'nullable|max:5',
]);
```
## Обновление до 5.2.0 с 5.1
Обновите свой файл composer.json до laravel/framework 5.2.*.
Добавьте в раздел require-dev этого файла "symfony/dom-crawler": "~3.0" и "symfony/css-selector": "~3.0".
Вам надо заменить свой файл настроек config/auth.php на этот: [https://github.com].
После замены файла на новый задайте необходимые настройки аутентификации на основе их значений из старого файла. Если вы будете использовать обычные сервисы аутентификации на основе Eloquent, которые были доступны в Laravel 5.1, то большинство значений должны остаться без изменений.
Обратите особое внимание на параметр passwords.users.email в новом файле настроек auth.php и проверьте, что путь к представлению совпадает с реальным расположением представления в вашем приложении, так как в Laravel 5.2 путь по умолчанию к этому представлению изменился. И если они не совпадают, то измените параметр конфигурации.
Если вы реализуете контракт Illuminate\Contracts\Auth\Authenticatable, но не используете типаж `Authenticatable`, то вам надо добавить новый метод `getAuthIdentifierName()` в вашу реализацию контракта. Обычно этот метод возвращает имя столбца «первичного ключа» вашей аутентифицируемой сущности. Например, id.
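Набросок возможной реализации (имя столбца id здесь взято для примера):

```
/**
 * Получить имя столбца первичного ключа аутентифицируемой сущности.
 *
 * @return string
 */
public function getAuthIdentifierName()
{
    return 'id';
}
```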
Это вряд ли повлияет на ваше приложение, если вы не реализовали этот интерфейс вручную.
Если вы используете метод `Auth::extend()` для определения своего собственного способа получения пользователей, то теперь вам надо использовать `Auth::provider()` для определения своего провайдера пользователей. После определения своего провайдера вы можете настроить его в массиве providers в своём новом файле auth.php.
Подробнее о пользовательских провайдерах аутентификации читайте в полной документации.
Метод `loginPath()` удалён из Illuminate\Foundation\Auth\AuthenticatesUsers, поэтому больше не надо добавлять переменную `$loginPath` в `AuthController`. По умолчанию при ошибках аутентификации типаж всегда будет переадресовывать пользователя обратно в его предыдущее расположение.
Illuminate\Auth\Access\UnauthorizedException переименован в Illuminate\Auth\Access\AuthorizationException. Это не повлияет на ваше приложение, если вы не отлавливаете это исключение вручную.
Экземпляр коллекции Eloquent теперь возвращает базовую коллекцию (Illuminate\Support\Collection) для следующих методов: `pluck()`, `keys()`, `zip()`, `collapse()`, `flatten()`, `flip()`. Методы `slice()`, `chunk()` и `reverse()` теперь сохраняют ключи коллекции. Если вы не хотите, чтобы эти методы сохраняли ключи, используйте метод `values()` на экземпляре `Collection`.
### Класс Composer
Класс Illuminate\Foundation\Composer перемещён в Illuminate\Support\Composer. Это не повлияет на ваше приложение, если вы не использовали этот класс вручную.
Вам больше не надо реализовывать контракт `PHPSelfHandling` в ваших задачах / командах. Теперь все задачи по умолчанию самообрабатываемые, поэтому вы можете удалить этот интерфейс из своих классов.
Отдельные команды и обработчики
Командная шина Laravel 5.2 теперь поддерживает только самообрабатываемые команды, и больше не поддерживает отдельные команды и обработчики.
Если вы хотите продолжать использовать отдельные команды и обработчики, вы можете установить пакет Laravel Collective, который обеспечивает обратную совместимость: [https://github.com].
Добавьте параметр env в файл настроек app.php, он выглядит вот так:
```
'env' => env('APP_ENV', 'production'),
```
Если вы используете команду `config:cache` во время развёртывания, то вы должны убедиться, что вызываете функцию `env` только из файлов настроек, а не где-либо ещё в вашем приложении. Если вы вызываете `env` из приложения, то крайне рекомендуется добавить нужные значения в файлы настроек и вызывать `env` именно оттуда, что позволит вам конвертировать вызовы `env` в вызовы `config`.

Если в файле config/compile.php в массиве files есть эти строки, то удалите их:

```
realpath(__DIR__.'/../app/Providers/BusServiceProvider.php'),
realpath(__DIR__.'/../app/Providers/ConfigServiceProvider.php'),
```

Если этого не сделать, то при запуске `php artisan optimize` может возникнуть ошибка, если указанные сервис-провайдеры не существуют.
### CSRF проверка
Теперь CSRF проверка не выполняется автоматически при запуске юнит-тестов. Это не повлияет на ваше приложение.
Начиная с MySQL 5.7, `0000-00-00 00:00:00` больше не считается корректной датой, поскольку «строгий» режим включён по умолчанию. Все столбцы, хранящие отметки времени, должны получать корректное значение по умолчанию, когда вы вставляете записи в базу данных. Вы можете использовать метод `useCurrent()` в своих миграциях, чтобы задать текущее время в качестве значения по умолчанию для этих столбцов, или сделать их `nullable`, чтобы разрешить значение `null`:

```
$table->timestamp('foo')->nullable();

$table->timestamp('foo')->useCurrent();

$table->nullableTimestamps();
```
Теперь тип столбца json создаёт настоящие JSON-столбцы при использовании драйвера MySQL. Если вы используете MySQL ниже версии 5.7, то этот тип будет недоступен для вас. Вместо этого используйте в своих миграциях тип text.
Теперь при выполнении загрузки начальных данных в БД все модели Eloquent по умолчанию незащищённые. Раньше был необходим вызов `Model::unguard()`. Вы можете вызвать `Model::reguard()` в начале своего класса `DatabaseSeeder`, если хотите, чтобы модели были защищены во время загрузки начальных данных.
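Набросок того, как может выглядеть класс сидера с возвратом прежнего поведения (вызовы конкретных сидеров здесь условные):

```
use Illuminate\Database\Seeder;
use Illuminate\Database\Eloquent\Model;

class DatabaseSeeder extends Seeder
{
    /**
     * Запуск загрузки начальных данных.
     *
     * @return void
     */
    public function run()
    {
        // Снова включаем защиту от массового назначения на время загрузки данных
        Model::reguard();

        // $this->call(UsersTableSeeder::class);
    }
}
```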
Теперь любые атрибуты, добавленные в свойство `$casts`, такие как `date` или `datetime`, будут конвертироваться в строку при вызове `toArray()` на модели или коллекции моделей. Это делает преобразование дат при приведении типов согласованным с датами, указанными в вашем массиве `$dates`.

Реализация глобальных заготовок (global scopes) была переписана с целью упрощения их использования. Теперь вашим глобальным заготовкам не нужен метод `remove()`, его можно удалить из всех написанных вами глобальных заготовок. Если вы вызывали `getQuery()` на конструкторе запросов Eloquent для обращения к нижележащему экземпляру конструктора запросов, теперь вам надо вызывать `toBase()`. Если вы по какой-то причине вызывали метод `remove()`, вам надо заменить этот вызов на `$eloquentBuilder->withoutGlobalScope($scope)`. В конструктор запросов Eloquent также добавлены новые методы `withoutGlobalScope()` и `withoutGlobalScopes()`. Все вызовы `$model->removeGlobalScopes($builder)` можно заменить на простое `$builder->withoutGlobalScopes()`.

По умолчанию Eloquent считает ваши первичные ключи числовыми (integer) и автоматически приводит их к числовому типу. Для всех нечисловых первичных ключей вы должны изменить значение свойства `$incrementing` модели Eloquent на `false`:

```
/**
 * Указывает, являются ли ID автоинкрементными.
 *
 * @var bool
 */
public $incrementing = false;
```
### События
Теперь некоторые события ядра, вызываемые Laravel, используют объекты событий вместо строковых имён событий и динамических параметров. Ниже приведён список старых имён событий и соответствующих им новых событий на основе объектов:
Старое | Новое |
| --- | --- |
artisan.start | Illuminate\Console\Events\ArtisanStarting |
auth.attempting | Illuminate\Auth\Events\Attempting |
auth.login | Illuminate\Auth\Events\Login |
auth.logout | Illuminate\Auth\Events\Logout |
cache.missed | Illuminate\Cache\Events\CacheMissed |
cache.hit | Illuminate\Cache\Events\CacheHit |
cache.write | Illuminate\Cache\Events\KeyWritten |
cache.delete | Illuminate\Cache\Events\KeyForgotten |
connection.{name}.beginTransaction | Illuminate\Database\Events\TransactionBeginning |
connection.{name}.committed | Illuminate\Database\Events\TransactionCommitted |
connection.{name}.rollingBack | Illuminate\Database\Events\TransactionRolledBack |
illuminate.query | Illuminate\Database\Events\QueryExecuted |
illuminate.queue.before | Illuminate\Queue\Events\JobProcessing |
illuminate.queue.after | Illuminate\Queue\Events\JobProcessed |
illuminate.queue.failed | Illuminate\Queue\Events\JobFailed |
illuminate.queue.stopping | Illuminate\Queue\Events\WorkerStopping |
mailer.sending | Illuminate\Mail\Events\MessageSending |
router.matched | Illuminate\Routing\Events\RouteMatched |
Каждый из этих объектов событий содержит в точности те же параметры, которые передавались в обработчик событий в Laravel 5.1. Например, если в 5.1.* вы использовали `DB::listen()`, вы можете обновить свой код для 5.2.* вот так:

```
DB::listen(function ($event) {
    dump($event->sql);
    dump($event->bindings);
});
```
Вы можете заглянуть в каждый класс новых объектов событий, чтобы увидеть их общедоступные свойства.
### Обработка исключений
Свойство `$dontReport` вашего класса App\Exceptions\Handler надо изменить — включить по крайней мере следующие типы исключений:

```
use Illuminate\Validation\ValidationException;
use Illuminate\Auth\Access\AuthorizationException;
use Illuminate\Database\Eloquent\ModelNotFoundException;
use Symfony\Component\HttpKernel\Exception\HttpException;

/**
 * Список типов исключений, о которых не надо сообщать.
 *
 * @var array
 */
protected $dontReport = [
    AuthorizationException::class,
    HttpException::class,
    ModelNotFoundException::class,
    ValidationException::class,
];
```
### Вспомогательные функции
Вспомогательная функция `url()` теперь возвращает экземпляр Illuminate\Routing\UrlGenerator, когда ей не передан никакой путь.
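Набросок для иллюстрации (вызываемые методы UrlGenerator приведены как пример):

```
// Со строковым аргументом, как и раньше, возвращается URL указанного пути:
echo url('user/profile');

// Без аргументов возвращается экземпляр UrlGenerator:
echo url()->current();
echo url()->previous();
```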
### Неявная привязка моделей
В Laravel 5.2 появилась «неявная привязка моделей» — новая удобная фича для автоматического внедрения экземпляров моделей в маршруты и контроллеры на базе их идентификаторов, указанных в URI. Но при этом изменилось поведение маршрутов и контроллеров, которые используют указание типов экземпляров моделей.
Если в вашем маршруте или контроллере вы использовали указание типа экземпляра модели и ожидали, что будет внедрён пустой экземпляр модели, то вам надо удалить это указание типа и создать пустой экземпляр модели непосредственно в маршруте или контроллере. Иначе Laravel попытается получить из БД существующий экземпляр модели по идентификатору, указанному в URI маршрута.
### IronMQ
Драйвер очереди IronMQ переехал в свой собственный пакет и больше не поставляется с ядром фреймворка.
### Задачи / Очередь
Теперь команда `php artisan make:job` по умолчанию создаёт определение класса задачи «для очереди». Если вы хотите создать «синхронную» задачу, используйте опцию `--sync` при запуске команды.
### Почта
Удалён параметр настройки почты `pretend`. Вместо этого используйте драйвер почты log, который выполняет ту же функцию, что и `pretend`, и записывает даже больше информации о почтовом сообщении.
Для согласования с остальными URL, генерируемыми фреймворком, URL страничного вывода больше не содержат завершающий слэш. Это не повлияет на ваше приложение.
### Сессии
Из-за изменений в системе аутентификации все существующие сессии будут отключены при обновлении до Laravel 5.2.
Для фреймворка написан новый драйвер сессий database, он содержит больше информации о пользователе, такой как ID пользователя, IP-адрес и user-agent. Если вы хотите продолжить использовать старый драйвер, вы можете указать драйвер legacy-database в файле настроек session.php.
Если вы хотите использовать новый драйвер, вам надо добавить в таблицу сессий столбцы user_id (nullable integer), ip_address (nullable string) и user_agent (text).
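Набросок миграции, добавляющей эти столбцы (имена столбцов взяты из описания выше, длины и порядок — предположение):

```
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;

Schema::table('sessions', function (Blueprint $table) {
    $table->integer('user_id')->nullable();
    $table->string('ip_address', 45)->nullable();
    $table->text('user_agent')->nullable();
});
```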
### Stringy
Библиотека «Stringy» больше не входит в состав фреймворка. Если вы хотите использовать её в своём приложении, то можете установить её вручную через Composer.
Теперь типаж `PHPValidatesRequests` выбрасывает экземпляр Illuminate\Foundation\Validation\ValidationException вместо экземпляра Illuminate\Http\Exception\HttpResponseException. Это не повлияет на ваше приложение, если вы не ловили это исключение вручную.
Следующие функции Laravel устарели и будут полностью удалены в релизе Laravel 5.3 в июне 2016:

* Контракт Illuminate\Contracts\Bus\SelfHandling. Его можно удалить из задач.
* Метод `lists()` на коллекции, конструкторе запросов и объектах конструктора запросов Eloquent переименован в `pluck()`. Сигнатура метода осталась прежней.
* Устарели неявные маршруты контроллеров, регистрируемые с помощью `Route::controller`. Пожалуйста, используйте явную регистрацию маршрутов в файле маршрутов. Вероятно, это будет вынесено в отдельный пакет.
* Удалены вспомогательные функции маршрутов `get()`, `post()` и другие. Вместо этого вы можете использовать фасад `Route`.
* Драйвер сессии database из 5.1 переименован в legacy-database и будет удалён. Подробнее описано выше в описании «драйвера сессии database».
* Отказались от функции `Str::randomBytes()` в пользу нативной PHP-функции `random_bytes()`.
* Отказались от функции `Str::equals()` в пользу нативной PHP-функции `hash_equals()`.
* Отказались от Illuminate\View\Expression в пользу Illuminate\Support\HtmlString.
* Удалён драйвер кэша WincacheStore.
## Обновление до 5.1.11
Laravel 5.1.11 поддерживает авторизацию и политики. Добавить эти новые фичи в ваше существующее приложение очень просто.
Это обновление не обязательное, и если вы его пропустите, это не скажется на вашем приложении.
Сначала создайте пустую папку app/Policies в своём приложении.
Создание/регистрация AuthServiceProvider и фасада Gate
Создайте файл AuthServiceProvider в своей папке app/Providers. Вы можете скопировать в него содержимое провайдера по умолчанию с GitHub. Не забудьте изменить пространство имён провайдера, если в вашем приложении используется своё пространство имён. После создания провайдера не забудьте зарегистрировать его в своём файле настроек app.php в массиве providers.
Также вам надо зарегистрировать фасад Gate в вашем файле app.php:
```
'Gate' => Illuminate\Support\Facades\Gate::class,
```
Теперь используйте типаж Illuminate\Foundation\Auth\Access\Authorizable и контракт Illuminate\Contracts\Auth\Access\Authorizable в своей модели App\User:
```
<?php

namespace App;

use Illuminate\Auth\Authenticatable;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Auth\Passwords\CanResetPassword;
use Illuminate\Foundation\Auth\Access\Authorizable;
use Illuminate\Contracts\Auth\Authenticatable as AuthenticatableContract;
use Illuminate\Contracts\Auth\Access\Authorizable as AuthorizableContract;
use Illuminate\Contracts\Auth\CanResetPassword as CanResetPasswordContract;

class User extends Model implements AuthenticatableContract,
                                    AuthorizableContract,
                                    CanResetPasswordContract
{
    use Authenticatable, Authorizable, CanResetPassword;
}
```
Обновление базового контроллера
Затем добавьте в свой базовый контроллер App\Http\Controllers\Controller использование типажа Illuminate\Foundation\Auth\Access\AuthorizesRequests:
```
<?php

namespace App\Http\Controllers;

use Illuminate\Foundation\Bus\DispatchesJobs;
use Illuminate\Routing\Controller as BaseController;
use Illuminate\Foundation\Validation\ValidatesRequests;
use Illuminate\Foundation\Auth\Access\AuthorizesRequests;

abstract class Controller extends BaseController
{
    use AuthorizesRequests, DispatchesJobs, ValidatesRequests;
}
```
## Обновление до 5.1.0
### Обновление bootstrap/autoload.php
Обновите переменную `$compiledPath` в файле bootstrap/autoload.php на следующую:
```
$compiledPath = __DIR__.'/cache/compiled.php';
```
### Создание папки bootstrap/cache
Создайте в папке bootstrap подпапку cache (bootstrap/cache). И поместите в неё файл .gitignore с таким содержимым:
> * !.gitignore
В эту папку должна быть разрешена запись, она будет использоваться фреймворком для хранения временных файлов оптимизации, таких как: compiled.php, routes.php, config.php и services.json.
### Добавление провайдера BroadcastServiceProvider
Добавьте Illuminate\Broadcasting\BroadcastServiceProvider в массив providers файла config/app.php.
Если вы используете провайдер AuthController, в котором используется типаж AuthenticatesAndRegistersUsers, то вам надо внести несколько изменений в процесс проверки и создания новых пользователей.
Во-первых, вам больше не надо передавать экземпляры Guard и Registrar в базовый конструктор. Вы можете полностью удалить эти зависимости из конструктора своего контроллера.
Во-вторых, больше не нужен класс App\Services\Registrar, который использовался в Laravel 5.0. Вы можете просто скопировать методы `PHPvalidator()` и `PHPcreate()` из этого класса прямо в свой AuthController. Больше в этих методах ничего менять не надо, но вам надо не забыть импортировать фасад Validator и модель User в начале вашего AuthController.
Встроенный PasswordController больше не требует никаких зависимостей в своем конструкторе. Вы можете удалить обе зависимости, необходимые в версии 5.0.
Если вы переопределяете метод `formatValidationErrors()` в своём базовом классе контроллера, то теперь вам надо указать тип контракта Illuminate\Contracts\Validation\Validator вместо конкретного экземпляра Illuminate\Validation\Validator. И также если вы переопределяете метод `formatErrors()` в базовом классе запроса формы, то вам надо указать тип контракта Illuminate\Contracts\Validation\Validator вместо конкретного экземпляра Illuminate\Validation\Validator.
Если у вас есть миграции, изменяющие название столбца или удаляющие столбцы из базы данных SQLite, то вам надо добавить зависимость doctrine/dbal в файл composer.json и выполнить в терминале команду `shcomposer update` для установки библиотеки.
Метод Eloquent `create()` теперь может быть вызван без параметров. Если вы переопределяете метод `create()` в своих моделях, задайте в качестве значения по умолчанию для параметра `$attributes` пустой массив:

```
public static function create(array $attributes = [])
{
    // Ваша собственная реализация
}
```

Если вы переопределяете метод `find()` в своих моделях и вызываете в нём `parent::find()`, то вам надо изменить этот вызов на вызов метода `find()` построителя запросов Eloquent:

```
public static function find($id, $columns = ['*'])
{
    $model = static::query()->find($id, $columns);

    // ...

    return $model;
}
```

Теперь метод `lists()` возвращает для запросов Eloquent экземпляр Collection вместо простого массива. Если вы хотите конвертировать Collection в простой массив, используйте метод `all()`:
```
User::lists('id')->all();
```
Но помните, что метод `lists()` построителя запросов по-прежнему возвращает массив.

Раньше формат хранения полей с датами в Eloquent можно было изменить, переопределив метод `getDateFormat()` в своей модели. Это по-прежнему возможно, но для удобства вы можете просто указать свойство `$dateFormat` в модели вместо переопределения метода.

Формат даты теперь также применяется при сериализации модели в массив или JSON. Это может изменить формат ваших JSON-сериализованных полей с датами при миграции с Laravel 5.0 на 5.1. Чтобы указать формат дат для сериализованных моделей, вы можете переопределить метод `serializeDate(DateTime $date)` в своей модели. Этот метод предоставляет вам точный контроль над форматированием сериализованных полей с датами в Eloquent, не изменяя формат их хранения.
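Набросок такого переопределения внутри класса модели (формат даты здесь выбран для примера):

```
use DateTime;

/**
 * Подготовка даты к сериализации в массив / JSON.
 *
 * @param  \DateTime  $date
 * @return string
 */
protected function serializeDate(DateTime $date)
{
    return $date->format('Y-m-d H:i:s');
}
```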
### Класс Collection
Метод `sort()` теперь возвращает новый экземпляр коллекции вместо изменения исходной коллекции:

```
$collection = $collection->sort($callback);
```

Метод `sortBy()` также теперь возвращает новый экземпляр коллекции вместо изменения исходной:

```
$collection = $collection->sortBy('name');
```
Метод `groupBy()` теперь возвращает экземпляры Collection для каждого элемента родительской Collection. Если вы хотите конвертировать все элементы обратно в простые массивы, вы можете обработать их через `map()`:

```
$collection->groupBy('type')->map(function ($item)
{
    return $item->all();
});
```

Метод `lists()` теперь возвращает экземпляр Collection вместо простого массива. Если вы хотите конвертировать Collection в простой массив, используйте метод `all()`:
```
$collection->lists('id')->all();
```
Папка app/Commands переименована в app/Jobs. Но вам не надо перемещать все ваши команды в новое место, и вы по-прежнему можете использовать Artisan-команды `make:command` и `handler:command` для генерирования своих классов. Также папка app/Handlers переименована в app/Listeners и теперь содержит только слушатели событий. Но вам не надо перемещать или переименовывать ваши существующие команды и обработчики событий, вы по-прежнему можете использовать команду `handler:event` для генерирования обработчиков событий.
Благодаря обратной совместимости со структурой папок Laravel 5.0, вы можете обновить своё приложение до Laravel 5.1 и постепенно обновлять свои события и команды в их новом месте расположения, когда это удобно для вас или вашей команды.
Методы `createMatcher()`, `createOpenMatcher()` и `createPlainMatcher()` удалены из компилятора Blade. Используйте новый метод `directive()` для создания своих директив Blade в Laravel 5.1. За подробностями загляните в документацию по расширению Blade.
Добавьте защищённое свойство `$baseUrl` в файл tests/TestCase.php:
```
protected $baseUrl = 'http://localhost';
```
### Файлы перевода
Папка по умолчанию для размещения языковых файлов для сторонних пакетов переехала. Переместите все языковые файлы сторонних пакетов из resources/lang/packages/{locale}/{namespace} в папку resources/lang/vendor/{namespace}/{locale}. Например, файл английского языка пакета Acme/Anvil пространства имён acme/anvil::foo надо переместить из resources/lang/packages/en/acme/anvil/foo.php в папку resources/lang/vendor/acme/anvil/en/foo.php.
### SDK веб-сервисов Amazon
Если вы используете драйвер очереди AWS SQS или драйвер почты AWS SES, вам надо обновить версию вашего AWS PHP SDK до 3.0.
Если вы используете драйвер файловой системы Amazon S3, вам надо обновить соответствующий пакет Flysystem с помощью Composer:
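Скорее всего, речь идёт об адаптере S3 для Flysystem; команда может выглядеть примерно так (точное имя и версию пакета уточните в документации по файловым системам Laravel 5.1):

> composer require league/flysystem-aws-s3-v3 ~1.0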
Следующие функции Laravel устарели и будут полностью удалены в релизе Laravel 5.2 в декабре 2015:
* Отказались от фильтров маршрутов в пользу посредников.
* Устарел контракт Illuminate\Contracts\Routing\Middleware. Для ваших посредников контракт не нужен. Вдобавок, устарел и контракт TerminableMiddleware. Вместо реализации интерфейса просто определите метод `terminate()` в своём посреднике.
* Отказались от контракта Illuminate\Contracts\Queue\ShouldBeQueued в пользу Illuminate\Contracts\Queue\ShouldQueue.
* Отказались от «push-очередей» Iron.io в пользу обычных очередей и слушателей очереди.
* Устарел типаж Illuminate\Foundation\Bus\DispatchesCommands, он переименован в Illuminate\Foundation\Bus\DispatchesJobs.
* Illuminate\Container\BindingResolutionException перемещён в Illuminate\Contracts\Container\BindingResolutionException.
* Отказались от метода сервис-контейнера `bindShared()` в пользу метода `singleton()`.
* Устарел метод Eloquent и построителя запросов `pluck()`, он переименован в `value()`.
* Отказались от метода коллекции `fetch()` в пользу метода `pluck()`.
* Отказались от вспомогательной функции `array_fetch()` в пользу `array_pluck()`.
## Обновление до 5.0.16
В файле bootstrap/autoload.php обновите значение переменной $compiledPath на:
```
$compiledPath = __DIR__.'/../vendor/compiled.php';
```
Из списка сервис-провайдеров в файле настроек app.php можно удалить App\Providers\BusServiceProvider.
Из списка сервис-провайдеров в файле настроек app.php можно удалить App\Providers\ConfigServiceProvider.
## Обновление до 5.0 с 4.2
### Новая установка, затем миграция
Рекомендуемый способ обновления — создать новую установку Laravel 5.0, а затем скопировать уникальные файлы вашего приложения 4.2 в новое приложение. Это касается контроллеров, маршрутов, моделей Eloquent, Artisan-команд, контента и другого кода, специфичного для вашего приложения.
Для начала установите новое приложение Laravel 5 в чистую папку в вашу локальную среду. А каждую часть процесса миграции мы обсудим более подробно далее.
### Зависимости и пакеты Composer
Не забудьте скопировать все дополнительные зависимости Composer в ваше 5.0-приложение. В том числе сторонний код, такой как SDK.
Некоторые пакеты, предназначенные для Laravel, могут быть несовместимы со свежей версией. Проверьте на сайте разработчика пакета, какую версию необходимо использовать с Laravel 5. После добавления всех дополнительных зависимостей Composer необходимо выполнить команду `shcomposer update` .
По умолчанию приложения Laravel 4 не используют пространства имён в коде вашего приложения. Например, все модели Eloquent и контроллеры просто живут в пространстве имён «global». Для ускорения миграции вы можете просто оставить эти классы в глобальном пространстве имён и в Laravel 5.
Миграция переменных среды

Скопируйте новый файл .env.example в .env, который в 5.0 соответствует старому файлу .env.php. Задайте в нём все соответствующие значения, такие как APP_ENV и APP_KEY (ваш ключ шифрования), данные для подключения к вашей БД, и ваши драйверы кэша и сессий.
Вдобавок, скопируйте и задайте значения из вашего старого файла .env.php и вставьте их и в .env (действующие значения для вашей локальной среды), и в .env.example (примеры значений для других участников команды).
Более подробно о настройке среды читайте в полной документации.
Внимание: Вам необходимо поместить соответствующий файл .env с необходимыми значениями на ваш продакшн-сервер перед развёртыванием вашего приложения на Laravel 5.
Конфигурационные файлы

В Laravel 5.0 больше не используются каталоги app/config/{environmentName}/ для расположения отдельных файлов настроек для каждой среды. Вместо этого поместите все значения настроек, которые зависят от среды, в .env, а затем обращайтесь к ним из ваших конфигурационных файлов с помощью `env('key', 'default value')`. Примеры можно увидеть в файле настроек config/database.php.

Настройте конфигурационные файлы в папке config/ в соответствии с теми значениями, которые необходимы для ваших сред, или настройте их на использование `env()` для загрузки значений, зависящих от среды.
Не забывайте при добавлении новых параметров в файл .env добавлять значения для примера и в файл .env.example. Это поможет другим участникам вашей команды создавать собственные файлы .env.
### Маршруты
Скопируйте и вставьте ваш старый файл routes.php в ваш новый app/Http/routes.php.
Затем поместите все ваши контроллеры в каталог app/Http/Controllers. Поскольку в этой статье мы не собираемся мигрировать в полное пространство имён, добавьте каталог app/Http/Controllers в директиву classmap вашего файла composer.json. Теперь вы можете удалить пространство имён из абстрактного базового класса app/Http/Controllers/Controller.php. Проверьте, что ваши мигрированные контроллеры наследуют этот базовый класс.
В вашем файле app/Providers/RouteServiceProvider.php задайте значение свойства `namespace` равным `null`.
### Фильтры маршрутов
Скопируйте ваши привязки маршрутов из app/filters.php и вставьте их в метод `boot()` в app/Providers/RouteServiceProvider.php. Добавьте строку `use Illuminate\Support\Facades\Route;` в файл app/Providers/RouteServiceProvider.php, чтобы продолжить использование фасада Route.

Нет необходимости копировать какие-либо фильтры Laravel 4.0 по умолчанию, такие как auth и csrf, — они у вас уже есть, но в виде посредников. Отредактируйте все маршруты и контроллеры, которые используют старые фильтры по умолчанию (например, `['before' => 'auth']`), и измените их в соответствии с новыми посредниками (например, `['middleware' => 'auth']`).

Фильтры не удалены из Laravel 5. Вы по-прежнему можете привязывать и использовать собственные фильтры с помощью `before` и `after`.
### Глобальная CSRF-защита
По умолчанию CSRF-защита включена для всех маршрутов. Если захотите отключить её или включить только для нескольких маршрутов вручную, удалите эту строчку из массива middleware в App\Http\Kernel:

```
'App\Http\Middleware\VerifyCsrfToken',
```

Если хотите использовать её где-либо ещё, добавьте эту строчку в `$routeMiddleware`:

```
'csrf' => 'App\Http\Middleware\VerifyCsrfToken',
```

Теперь вы можете добавить посредника к отдельным маршрутам / контроллерам с помощью `['middleware' => 'csrf']`.
Для хранения своих моделей Eloquent вы свободно можете создать новую папку app/Models. И опять, добавьте эту папку в директиву classmap в файле composer.json.
Обновите любые модели, использующие SoftDeletingTrait, на использование типажа Illuminate\Database\Eloquent\SoftDeletes.

Кэширование Eloquent

В Eloquent теперь нет метода `remember` для кэширования запросов. Теперь вы сами отвечаете за ручное кэширование ваших запросов с помощью функции `Cache::remember`. Подробнее об этом читайте в полной документации.
### Модель авторизации пользователя
Для обновления вашей модели User для системы авторизации Laravel 5 следуйте этим инструкциям:
Удалите эти строки из блока `use`:
```
use Illuminate\Auth\UserInterface;
use Illuminate\Auth\Reminders\RemindableInterface;
```

Добавьте эти строки в блок `use`:
```
use Illuminate\Auth\Authenticatable;
use Illuminate\Auth\Passwords\CanResetPassword;
use Illuminate\Contracts\Auth\Authenticatable as AuthenticatableContract;
use Illuminate\Contracts\Auth\CanResetPassword as CanResetPasswordContract;
```
Удалите интерфейсы UserInterface и RemindableInterface.
Отметьте класс, как реализующий следующие интерфейсы:
```
implements AuthenticatableContract, CanResetPasswordContract
```
Включите следующие типажи в объявление класса:
```
use Authenticatable, CanResetPassword;
```
Если они использовались, то удалите Illuminate\Auth\Reminders\RemindableTrait и Illuminate\Auth\UserTrait из вашего блока `use` и объявления класса.
### Изменения пользователя Cashier
Изменились названия типажа и интерфейса, используемые для Laravel Cashier. Вместо BillableTrait используйте типаж Laravel\Cashier\Billable. А вместо Laravel\Cashier\BillableInterface реализуйте интерфейс Laravel\Cashier\Contracts\Billable. Больше никаких изменений методов не требуется.
### Artisan-команды
Переместите все ваши классы команд из старой папки app/commands в новую app/Console/Commands. Затем добавьте эту папку app/Console/Commands в директиву classmap в файле composer.json.
Затем скопируйте ваш список Artisan-команд из start/artisan.php в массив commands в файле app/Console/Kernel.php.
### Миграции и наполнение начальными данными БД
Удалите две миграции, включённые в Laravel 5.0, поскольку у вас уже есть таблица пользователей в БД.
Переместите все ваши классы миграций из старой папки app/database/migrations в новую database/migrations. Все ваши начальные данные должны быть перемещены из app/database/seeds в database/seeds.
### Глобальные привязки IoC
Если у вас есть привязки IoC в start/global.php, переместите их в метод `PHPregister` файла app/Providers/AppServiceProvider.php. Вам может понадобиться импортировать фасад App.
Как вариант, вы можете разбить эти привязки на отдельные сервис-провайдеры по категориям.
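Набросок того, как такая привязка может выглядеть в методе `register()` (интерфейс и класс здесь условные, приведены для иллюстрации):

```
public function register()
{
    $this->app->bind(
        'App\Contracts\FooRepository',      // условный интерфейс
        'App\Repositories\EloquentFooRepository' // условная реализация
    );
}
```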
### Шаблоны
Переместите ваши шаблоны из app/views в новую папку resources/views.
### Изменения тегов Blade
Для лучшей защищённости по умолчанию Laravel 5.0 экранирует весь вывод от обеих директив Blade — {{ }} и {{{ }}}. Была введена новая директива {!! !!} для отображения сырого, неэкранированного вывода. Самый безопасный вариант при обновлении вашего приложения — использовать новую директиву {!! !!}, только когда вы уверены, что выводить сырые данные безопасно.
Но если вам необходимо использовать старый синтаксис Blade, добавьте следующие строки в конец AppServiceProvider@register:
```
\Blade::setRawTags('{{', '}}');

\Blade::setContentTags('{{{', '}}}');

\Blade::setEscapedContentTags('{{{', '}}}');
```
К этому следует отнестись серьёзно, так как ваше приложение может стать более уязвимым к XSS-эксплоитам. Также перестанет работать комментирование кода с помощью {{--.
### Языковые файлы
Переместите ваши языковые файлы из app/lang в новую папку resources/lang.
### Общая папка
Скопируйте общий контент вашего приложения из каталога public вашего приложения 4.2 в каталог public вашего нового приложения. Не забудьте сохранить файл index.php версии 5.0.
Переместите свои тесты из app/tests в новую папку tests.
### Другие файлы
Скопируйте остальные файлы в свой проект. Например, .scrutinizer.yml, bower.json и другие похожие файлы настроек инструментария.
Вы можете расположить ваши Sass, Less и CoffeeScript где пожелаете. Хорошим местом по умолчанию может послужить папка resources/assets.
### Вспомогательные функции форм и HTML
Если вы используете вспомогательные функции форм или HTML, то столкнётесь с ошибкой «класс Form не найден» или «класс Html не найден». Вспомогательные функции форм и HTML упразднены в Laravel 5.0, но есть разработанные сообществом замены для них, такие как разработанная командой Laravel Collective.
Например, вы можете добавить `"laravelcollective/html": "~5.0"` в раздел `require` своего файла composer.json. Вам также надо будет добавить фасады Form и HTML и сервис-провайдер. Отредактируйте config/app.php — вставьте эту строку в массив `providers`:
```
'Collective\Html\HtmlServiceProvider',
```
Затем добавьте эти строки в массив `aliases`:

```
'Form' => 'Collective\Html\FormFacade',
'Html' => 'Collective\Html\HtmlFacade',
```
### CacheManager
Если в ваш код был внедрён Illuminate\Cache\CacheManager для получения бесфасадной версии кэша Laravel, то внедрите вместо него Illuminate\Contracts\Cache\Repository.
### Страничный вывод

Замените все вызовы `$paginator->links()` на `$paginator->render()`.

Замените все вызовы `$paginator->getFrom()` и `$paginator->getTo()` на `$paginator->firstItem()` и `$paginator->lastItem()` соответственно.

Удалите префикс «get» из вызовов `$paginator->getPerPage()`, `$paginator->getCurrentPage()`, `$paginator->getLastPage()` и `$paginator->getTotal()` (например, `$paginator->perPage()`).
### Очереди Beanstalk
Laravel 5.0 требует `"pda/pheanstalk": "~3.0"` вместо `"pda/pheanstalk": "~2.1"`.
### Компонент Remote
Компонент Remote был упразднён.
### Компонент Workbench
Компонент Workbench был упразднён.
## Обновление на 4.2 с 4.1
### PHP 5.4+
Laravel 4.2 требует PHP 5.4.0 или выше.
### Настройки шифрования
Добавьте новый параметр cipher в свой файл конфигурации app/config/app.php. Значение этого параметра должно быть MCRYPT_RIJNDAEL_256.
> 'cipher' => MCRYPT_RIJNDAEL_256
Этот параметр используется для управления шифром по умолчанию для средств шифрования Laravel.
В Laravel 4.2 шифр по умолчанию — это MCRYPT_RIJNDAEL_128 (AES). Он считается самым безопасным шифром. Необходимо изменение значения шифра назад на MCRYPT_RIJNDAEL_256, чтобы расшифровывать cookies/values, которые были зашифрованы в Laravel 4.1.
### Модели безопасного удаления теперь используют типажи
Если вы используете модели безопасного удаления, знайте, что теперь параметр softDeletes удалён. Теперь надо использовать SoftDeletingTrait:
```
use Illuminate\Database\Eloquent\SoftDeletingTrait;

class User extends Eloquent {

    use SoftDeletingTrait;

}
```
Также надо вручную добавить поле deleted_at в параметр dates:
```
class User extends Eloquent {

    use SoftDeletingTrait;

    protected $dates = ['deleted_at'];

}
```
API для операций безопасного удаления остался прежним.
SoftDeletingTrait не может быть применён на базовую модель. Он должен быть в классе реальной модели.
### Переименованы классы View и Pagination
Если вы непосредственно ссылаетесь на класс Illuminate\View\Environment или класс Illuminate\Pagination\Environment, обновите свой код на Illuminate\View\Factory и Illuminate\Pagination\Factory вместо них. Новое название лучше отражает их функции.
### Дополнительный параметр в Pagination Presenter
Если вы наследуете класс Illuminate\Pagination\Presenter, то в абстрактный метод `getPageLinkWrapper()` добавился параметр $rel:
```
abstract public function getPageLinkWrapper($url, $page, $rel = null);
```
### Шифрование очереди Iron.Io
Если вы используете драйвер очереди Iron.io, вам нужно будет добавить новый параметр encrypt в конфигурационный файл очереди:
> 'encrypt' => true
## Обновление до 4.1.29 с 4.1.x
В Laravel 4.1.29 улучшено квотирование столбцов для всех драйверов баз данных. Это защищает ваше приложение от некоторых уязвимостей массового назначения, когда в модели не используется параметр fillable. Если вы используете параметр fillable для защиты от массового назначения, ваше приложение не является уязвимым. Однако, если вы используете guarded и передаёте пользовательские массивы в функции типа «update» или «save», вы должны скорей обновиться до 4.1.29, так как ваше приложение находится под угрозой массового назначения.
Чтобы обновить Laravel до 4.1.29, просто запустите `shcomposer update` . В этом релизе нет критических изменений.
## Обновление до 4.1.26 с 4.1.25
В Laravel 4.1.26 входят улучшения в области безопасности для cookies «запомнить меня». До этого обновления, если злоумышленник перехватывал cookie «запомнить меня», то этот cookie оставался действующим в течение длительного периода времени, даже после того как настоящий владелец аккаунта изменил свой пароль, вышел из системы и т.д.
Это изменение требует добавления нового столбца remember_token в таблицу базы данных users (или её аналог). После этого изменения новый токен будет присваиваться пользователю каждый раз, когда он подключается к вашему приложению. Токен также будет обновлён, когда пользователь выйдет из приложения. Последствия этого изменения: если cookie «запомнить меня» перехвачен, простой выход из приложения обновит этот cookie.
### Обновление Path
Во-первых, добавьте в таблицу users новый, занулённый столбец remember_token типа VARCHAR(100), TEXT, или эквивалентного типа.
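Набросок миграции для добавления этого столбца через конструктор таблиц Laravel (используется вариант VARCHAR(100) из описания выше):

```
Schema::table('users', function ($table) {
    $table->string('remember_token', 100)->nullable();
});
```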
Во-вторых, если вы используете драйвер аутентификации Eloquent, добавьте в класс User следующие три метода:
```
public function getRememberToken()
{
    return $this->remember_token;
}

public function setRememberToken($value)
{
    $this->remember_token = $value;
}

public function getRememberTokenName()
{
    return 'remember_token';
}
```
Все существующие сессии «запомнить меня» станут недействительны после этого изменения, так что все пользователи будут вынуждены повторно зайти в ваше приложение.
### Создатели пакетов
Два новых метода были добавлены в интерфейс Illuminate\Auth\UserProviderInterface. Примеры реализации можно найти в драйверах по умолчанию:
```
public function retrieveByToken($identifier, $token);

public function updateRememberToken(UserInterface $user, $token);
```
Illuminate\Auth\UserInterface также получил три новых метода, описанные выше в разделе «Обновление Path».
## Обновление до 4.1 с 4.0
Чтобы обновить своё приложение Laravel до 4.1, измените версию вашего laravel/framework на 4.1.* в файле composer.json.
### Замена файлов
### Добавление файлов конфигурации и параметров
Обновите свои массивы aliases и providers в своём конфигурационном файле app/config/app.php. Обновлённые значения для этих массивов можно найти в этом файле. Не забудьте заново добавить в массивы свои пользовательские и пакетные поставщики услуг и алиасы.
Добавьте новый файл app/config/remote.php из репозитория.
Добавьте новый параметр конфигурации expire_on_close в свой файл app/config/session.php. По умолчанию значение должно быть установлено в false.
Добавьте новую секцию failed в свой файл app/config/queue.php. По умолчанию значения секции должны быть следующими:
> 'failed' => [ 'database' => 'mysql', 'table' => 'failed_jobs', ],
(Необязательно) Обновите параметр pagination в файле app/config/view.php на pagination::slider-3.
### Обновление контроллера
Если app/controllers/BaseController.php содержит use вверху, измените use Illuminate\Routing\Controllers\Controller; на use Illuminate\Routing\Controller;
### Обновление сброса паролей
Сброс паролей был перестроен для большей гибкости. Вы можете изучить новый контроллер, выполнив соответствующую Artisan-команду (запускайте её только после выполнения остальных изменений, описанных ниже). Вы можете также просмотреть обновлённую документацию и обновить своё приложение в соответствии с ней.
Обновите свой языковой файл app/lang/en/reminders.php на этот
### Обновление обнаружения среды
Из соображений безопасности URL-домены больше не используются для обнаружения среды приложения. Эти значения легко подменить, а это позволит злоумышленникам изменить среду для запроса. Используйте для обнаружения среды имена хостов (команда `shhostname` на Mac, Linux и Windows).
### Более простые лог-файлы
Laravel теперь генерирует единственный файл журнала: app/storage/logs/laravel.log. Однако вы по-прежнему можете настроить ведение журналов с помощью файла app/start/global.php.
### Удаление завершающего слеша для переадресации
В вашем файле bootstrap/start.php удалите вызов `$app->redirectIfTrailingSlash()`. Этот метод больше не нужен, так как эта функция теперь реализована в файле .htaccess, включённом в фреймворк.
Затем замените файл .htaccess вашего Apache на этот новый файл, который обрабатывает завершающие слешы.
### Доступ к текущему маршруту
Текущий маршрут теперь доступен с помощью `Route::current()` вместо `Route::getCurrentRoute()`.
Как только вы завершили все перечисленные выше обновления, вы можете выполнить команду `composer update`, чтобы обновить файлы ядра приложения! Если у вас появляются ошибки загрузки классов, попытайтесь выполнить команду `update` с параметром `--no-scripts`:

> composer update --no-scripts

На Linux вам может потребоваться выполнить `sudo composer update`, если вы получаете ошибку доступа.
### Слушатели событий по шаблону
Слушатели событий по шаблону теперь не передают событие в параметры ваших функций-обработчиков. Если вам надо найти событие, которое было запущено, используйте `Event::firing()`.
# Помощь проекту
## Сообщения об ошибках
Чтобы стимулировать активное сотрудничество, Laravel настоятельно рекомендует отправлять запросы на исправления и улучшения, а не только отчёты об ошибках. «Отчёт об ошибке» также может быть отправлен в виде запроса на улучшение, содержащего проваленный тест.
А если вы отправляете отчёт об ошибке, то ваша заявка должна содержать заголовок и понятное описание проблемы. Вам следует прикрепить как можно больше сопутствующей информации и пример кода, демонстрирующий проблему. Цель отчёта — упростить вам (и остальным) возможность воспроизвести ошибку и разработать исправление.
Помните, отчёт об ошибке создаётся с целью объединения людей, столкнувшихся с той же проблемой, для её решения. Не ждите, что отчёт автоматически приведёт к скорейшему решению, и остальные разработчики кинуться решать проблему. Создание отчёта служит началом для вас и остальных в решении проблемы.
Исходный код Laravel расположен на Github. Вот репозитории каждого из проектов Laravel:
* Laravel Framework
* Laravel Application
* Laravel Documentation
* Laravel Cashier
* Laravel Cashier для Braintree
* Laravel Envoy
* Laravel Homestead
* Laravel Homestead Build Scripts
* Laravel Passport
* Laravel Scout
* Laravel Socialite
* Laravel Website
* Laravel Art
## Обсуждение разработки ядра
Вы можете предложить новую функцию или исправление существующего поведения Laravel на внутренней доске задач. Пожалуйста, когда предлагаете новую функцию, напишите хотя бы часть необходимого для её реализации кода.
Обсуждение ошибок, новых функций и реализации существующих функций ведётся на канале #internals в LaraChat команды Slack. Тэйлор Отвелл, создатель Laravel, обычно присутствует на канале по будням с 8 утра до 5 вечера (по чикагскому времени UTC-06:00), а иногда и в другое время.
## Какая ветка?
Все исправления ошибок должны отправляться в последнюю стабильную ветку. Исправления ошибок никогда не должны отправляться в ветку master, только если они относятся к функциям, которые есть только в следующем релизе.
Минорные функции, которые полностью обратно совместимы с текущим релизом Laravel, могут быть отправлены в последнюю стабильную ветку.
Мажорные новые функции должны всегда отправляться в ветку master, которая содержит следующий релиз Laravel.
Если вы не уверены к каким функциям относится ваша, мажорным или минорным, спросите об этом Тэйлора Отвелла на канале #internals в LaraChat команды Slack.
## Уязвимости безопасности
Если вы обнаружите уязвимость в безопасности Laravel, пожалуйста, напишите об этом Тэйлору Отвеллу на почту <EMAIL>. Все уязвимости будут оперативно устранены.
## Стандарты кода
Laravel следует стандарту автозагрузки PSR-4 и стандарту написания кода PSR-2.
### PHPDoc
Ниже приведён пример правильного блока документации Laravel. Обратите внимание, что после атрибута `@param` стоят два пробела, тип аргумента, ещё два пробела и в конце имя переменной:

```
/**
 * Register a binding with the container.
 *
 * @param  string|array  $abstract
 * @param  \Closure|string|null  $concrete
 * @param  bool  $shared
 * @return void
 */
public function bind($abstract, $concrete = null, $shared = false)
{
    //
}
```
### StyleCI
Если ваш стиль написания кода не идеален, не волнуйтесь! StyleCI автоматически поместит в репозиторий Laravel все исправления стиля после размещения запроса на включение. Это позволяет нам сконцентрироваться на содержании, а не форме.
# Установка
## Требования к серверу
У Laravel есть несколько системных требований. Само собой, все они учтены в виртуальной машине Laravel Homestead, поэтому рекомендуется использовать для локальной разработки именно её.
Но если вы не используете Homestead, то вам необходимо выполнить следующие требования:
* PHP >= 5.6.4
* PDO расширение для PHP (для версии 5.1+)
* MCrypt расширение для PHP (для версии 5.0)
* OpenSSL (расширение для PHP)
* Mbstring (расширение для PHP)
* Tokenizer (расширение для PHP)
* XML (расширение для PHP) (для версии 5.3+)
Для PHP 5.5 в некоторых дистрибутивах ОС может потребоваться вручную установить расширение PHP JSON. В Ubuntu это можно сделать командой `apt-get install php5-json`.
## Установка Laravel
Laravel использует Composer для управления зависимостями. Поэтому сначала установите Composer, а затем Laravel.
Сначала загрузите установщик Laravel с помощью Composer.
> composer global require "laravel/installer"
Не забудьте поместить каталог $HOME/.composer/vendor/bin (или его эквивалент в вашей ОС) в вашу переменную PATH, чтобы исполняемый файл laravel мог быть найден вашей системой.
После установки команда `laravel new` произведёт установку свежего Laravel в указанный каталог. Например, `laravel new blog` создаст каталог с именем blog, содержащий свежий Laravel со всеми установленными зависимостями:

> laravel new blog
С помощью создания проекта Composer
Вы также можете установить Laravel с помощью Composer-команды `create-project`:

> composer create-project --prefer-dist laravel/laravel blog "5.3.*"
добавлено в 5.1:

> composer create-project laravel/laravel blog "5.1.*"

добавлено в 5.0:

> composer create-project laravel/laravel {directory} "~5.0.0" --prefer-dist
После установки необходимо обновить пакеты до последних версий. Сначала удалите файл {directory}/vendor/compiled.php, затем смените текущий каталог на {directory} и выполните команду `composer update`.

Laravel устанавливается с готовыми преднастройками для регистрации и авторизации пользователей. Если хотите удалить их, используйте Artisan-команду `fresh`:

> php artisan fresh
Локальный сервер для разработки
Если на вашей локальной машине установлен PHP и вы хотите использовать встроенный в него сервер для разработки вашего приложения, вы можете использовать Artisan-команду `serve`. Эта команда запустит сервер на http://localhost:8000:

> php artisan serve
Конечно, больше возможностей для надёжной локальной разработки доступно в Homestead и Valet.
Все файлы настроек для фреймворка Laravel хранятся в папке config. Каждый параметр хорошо описан, поэтому вы можете просмотреть их, чтобы ознакомиться с доступными возможностями.
После установки Laravel вам может понадобиться настроить некоторые права. У вашего веб-сервера должны быть права на запись в папки внутри storage и bootstrap/cache (vendor для версии 5.0), иначе Laravel не запустится. Если вы используете виртуальную машину Homestead, то там эти права уже настроены.
Далее вам необходимо задать случайную строку в качестве ключа приложения. Если вы установили Laravel с помощью Composer, то этот ключ уже был задан для вас автоматически.
Обычно эта строка должна быть длиной 32 символа. Ключ может быть задан в файле среды .env. Если вы ещё не переименовали файл .env.example в .env, то вам надо сделать это сейчас. Если ключ приложения не задан, данные пользовательских сессий и другие шифрованные данные не будут защищены!
Laravel практически не требует других начальных настроек — вы можете сразу начинать разработку! Однако вам может пригодиться файл config/app.php и его документация. Он содержит несколько настроек вроде timezone и locale, которые вы можете изменить для вашего приложения.
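Для наглядности приведём условный фрагмент config/app.php. Конкретные значения здесь взяты для примера и не являются обязательными:

```
<?php

// Условный фрагмент config/app.php: параметры, которые
// чаще всего меняют под своё приложение.
return [
    'timezone' => 'UTC',
    'locale' => 'ru',
];
```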
Также вы можете настроить некоторые дополнительные компоненты Laravel, такие как:
Никогда не оставляйте параметр app.debug со значением true в продакшне.
добавлено в 5.3 () 5.1 () 5.0 ()
## Настройка веб-сервера
### Красивые URL
Laravel поставляется вместе с файлом public/.htaccess, который настроен для обработки URL без указания index.php. Если вы используете Apache в качестве веб-сервера, обязательно включите модуль mod_rewrite.
Если стандартный .htaccess не работает для вашего Apache, попробуйте следующий:
При использовании Nginx следующая директива в настройках вашего сайта позволит перенаправлять все запросы во фронт-контроллер index.php:
```
location / {
try_files $uri $uri/ /index.php?$query_string;
}
```
Само собой, при использовании Homestead или Valet красивые URL будут настроены автоматически.
### Настройки окружения
Часто бывает полезно иметь разные значения настроек в зависимости от окружения, в котором работает приложение. Например, если вы используете разные драйверы кэша при локальной разработке и на сервере. Для этого можно просто использовать настройки на основе окружения.
Для удобства в Laravel используется PHP-библиотека DotEnv от Ванса Лукаса. В свежеустановленном Laravel в корневом каталоге вашего приложения находится файл .env.example. Если вы установили Laravel при помощи Composer, этот файл автоматически переименован в .env. В другом случае вам придётся переименовать его вручную.
Все находящиеся в этом файле переменные будут загружены в супер-глобальную переменную PHP $_ENV, когда ваше приложение получит запрос. Вы можете использовать функцию `env()` для получения значений из этой переменной. На самом деле, если вы посмотрите в файлы настроек Laravel, то обнаружите, что некоторые параметры уже используют эту функцию!
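Например, условный набросок одного параметра из файла настроек (имя переменной среды CACHE_DRIVER приведено здесь для иллюстрации):

```
<?php

// Условный фрагмент конфигурационного файла (например, config/cache.php).
return [
    // env() вернёт значение переменной среды CACHE_DRIVER,
    // а если она не задана, то значение по умолчанию 'file'.
    'driver' => env('CACHE_DRIVER', 'file'),
];
```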
Вы можете изменять переменные среды по своему желанию для своего локального сервера, и для «продакшн»-сервера. Но вам не надо помещать файл .env в вашу систему контроля версий, так как каждому разработчику / серверу могут быть необходимы собственные настройки окружения для использования вашего приложения.
Если вы работаете в команде, вы можете продолжать включать файл .env.example в ваше приложение. Поместив примеры значений в пример файла настроек, вы поможете другим разработчикам легко разобраться, какие переменные среды необходимы для запуска вашего приложения.
Получение текущего окружения приложения

Текущее окружение приложения определяется с помощью переменной APP_ENV в файле .env. Вы можете получить это значение методом `environment()` фасада App. Также вы можете передать аргумент в метод `environment()`, чтобы проверить совпадение с указанным значением. При необходимости вы можете передать даже несколько значений:

```
if (App::environment('local')) {
    // Локальное окружение
}

if (App::environment('local', 'staging')) {
    // Окружение либо локальное, либо тестовое...
}
```

Экземпляр приложения также можно получить при помощи вспомогательного метода `app()`.
### Настройки кэширования
Для ускорения вашего приложения вы можете кэшировать все файлы настроек в единый файл при помощи Artisan-команды `config:cache`. Эта команда соберёт все параметры вашего приложения в единый файл, который может быть быстро загружен фреймворком. Вам стоит всегда выполнять команду `config:cache` как часть процедуры развёртывания в «продакшн». При локальной разработке не стоит выполнять эту команду, так как параметры необходимо часто изменять при разработке приложения.
### Доступ к значениям настроек
Вы легко можете обратиться к значениям настроек при помощи глобальной вспомогательной функции `config()`. К значениям настроек можно обращаться с помощью «точечного» синтаксиса, который включает в себя имя файла и необходимый параметр. Также можно указать значение по умолчанию, которое будет возвращено, если запрашиваемый параметр не существует. Чтобы задать значения настроек во время выполнения, передайте массив в функцию `config()`.
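Ниже небольшой набросок обоих вариантов использования; параметр app.timezone и значения взяты для примера:

```
<?php

// Чтение параметра: «точечный» синтаксис «файл.параметр»,
// вторым аргументом передаётся значение по умолчанию.
$timezone = config('app.timezone', 'UTC');

// Задание параметра во время выполнения: передаём массив.
config(['app.timezone' => 'Europe/Moscow']);
```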
### Название для приложения
После установки Laravel вы можете дать «имя» вашему приложению. По умолчанию папка app входит в пространство имён App и загружается с помощью Composer по стандарту автозагрузки PSR-4. Но вы можете изменить пространство имён в соответствии с названием вашего приложения, это делается простой Artisan-командой `app:name`.
Например, если ваше приложение называется «Horsefly», вы можете выполнить такую команду в корневом каталоге приложения:
> php artisan app:name Horsefly
Переименование приложения вовсе не обязательно, при желании вы можете оставить пространство имён App.
Когда ваше приложение находится в режиме обслуживания, для всех запросов в ваше приложение будет отображаться специальное представление. Это позволяет легко «отключить» приложение при его обновлении или выполнении обслуживания. Проверка режима обслуживания включена в стандартный набор посредников для вашего приложения. Если приложение находится в режиме обслуживания, будет выброшено исключение HttpException с кодом состояния 503.
Для включения режима обслуживания просто выполните Artisan-команду `down`:

> php artisan down

Для отключения режима обслуживания используйте команду `up`:

> php artisan up
### Шаблон отклика режима обслуживания
Стандартный шаблон отклика режима обслуживания расположен в resources/views/errors/503.blade.php.
### Режим обслуживания и очереди
Когда ваше приложение находится в режиме обслуживания, не будут обрабатываться задачи в очереди. После выключения режима обслуживания задачи продолжат обрабатываться в обычном режиме.
# Настройка
Данная статья документации актуальна для версий 5.3, 5.2 и 5.0, для версии 5.1 не актуальна.
Все файлы настроек Laravel хранятся в папке config. Каждая настройка задокументирована, поэтому не стесняйтесь изучить эти файлы и познакомиться с доступными возможностями конфигурирования.
## После установки
### Название вашего приложения
После установки Laravel вам может потребоваться «назвать» ваше приложение. По умолчанию каталог app находится в пространстве имён `App` и автоматически загружается с помощью Composer по стандарту автозагрузки PSR-4. Но вы можете изменить пространство имён в соответствии с названием вашего приложения с помощью Artisan-команды `app:name`.
Например, если ваше приложение называется «Horsefly», вы можете выполнить следующую команду в корне вашей инсталляции:
> php artisan app:name Horsefly
Задание имени для вашего приложения вовсе не обязательно, и вы можете оставить пространство имён `App`, если захотите.
### Другие настройки
После установки Laravel нужно настроить самую малость. Вы можете сразу начинать разработку! Но если захотите, просмотрите файл config/app.php и его документацию. Он содержит несколько параметров, таких как timezone и locale, которые могут пригодиться для настройки вашего приложения в зависимости от вашего местоположения.
Внимание: никогда не устанавливайте параметр app.debug в значение true для продакшна.
### Права доступа
Для Laravel может потребоваться настроить один набор прав доступа: веб-серверу требуются права записи в папки, находящиеся в storage и vendor.
## Настройки среды
Часто необходимо иметь разные значения для различных настроек в зависимости от среды, в которой выполняется приложение. Например, вы можете захотеть использовать разные драйвера кэша на локальном и продакшн-сервере.
Для этого в Laravel используется PHP-библиотека DotEnv от Ванса Лукаса. В свежей инсталляции Laravel в корне вашего приложения будет файл .env.example. Если вы установили Laravel с помощью Composer, этот файл будет автоматически переименован в .env, иначе вам следует сделать это вручную.
добавлено в 5.3 ()
Все перечисленные в этом файле переменные будут загружены в супер-глобальную переменную PHP `$_ENV`, когда ваше приложение получит запрос. Но вы можете использовать вспомогательную функцию `env()` для получения значений этих переменных в ваших конфигурационных файлах. На самом деле, если вы заглянете в файлы настроек Laravel, то заметите несколько опций, уже использующих эту функцию:
```
'debug' => env('APP_DEBUG', false),
```
Второй аргумент этой функции — значение по умолчанию. Оно будет использовано, если такой переменной среды нет.
Не бойтесь изменять ваши переменные среды так, как необходимо для вашего локального сервера, а также для среды продакшна. Но ваш файл .env не должен быть включён в систему контроля версий вашего приложения, так как каждому использующему ваше приложение разработчику/серверу может потребоваться разная настройка среды.
Если вы работаете в команде, то, возможно, вам захочется и дальше поставлять файл .env.example с вашим приложением. Поместив примеры значений в этот файл, вы дадите понять другим разработчикам, какие переменные среды необходимы для запуска вашего приложения.
### Определение текущей среды
добавлено в 5.2 ()
Текущая среда приложения определяется по переменной `APP_ENV` в файле .env. Вы можете получить это значение методом `environment()` фасада App:

```
if (App::environment('local')) {
    // Среда локальная (local)
}

if (App::environment('local', 'staging')) {
    // Среда локальная (local) ИЛИ отладочная (staging)...
}
```

Экземпляр приложения также доступен через вспомогательный метод `app()`.
добавлено в 5.0 ()
Вы можете получить имя текущей среды приложения с помощью метода `environment()` экземпляра `Application`:
```
$environment = $app->environment();
```
```
if ($app->environment('local'))
{
    // Среда локальная (local)
}

if ($app->environment('local', 'staging'))
{
    // Среда локальная (local) ИЛИ отладочная (staging)...
}
```

Чтобы получить экземпляр приложения, получите реализацию контракта Illuminate\Contracts\Foundation\Application через сервис-контейнер. Конечно, когда вы находитесь внутри сервис-провайдера, экземпляр приложения доступен через переменную `$this->app`. Экземпляр приложения также доступен через функцию `app()` или фасад `App`:

```
$environment = App::environment();
```
## Чтение значений настроек
Вы можете легко обращаться к значениям настроек с помощью глобальной вспомогательной функции `config()` из любого места в вашем приложении. К значениям настроек можно обращаться, используя «точечный» синтаксис, в который входит имя файла и название опции. При этом можно указать значение по умолчанию, которое будет возвращено при отсутствии запрашиваемой опции. Для задания значений настроек во время выполнения передайте массив в функцию `config()`.
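Краткий условный набросок; имя параметра database.default взято для примера:

```
<?php

// «Точечный» синтаксис: файл config/database.php, опция default.
// Если опция отсутствует, вернётся 'mysql'.
$connection = config('database.default', 'mysql');

// Задание значения во время выполнения.
config(['database.default' => 'pgsql']);
```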
## Кэширование настроек
Для повышения скорости работы вашего приложения вы можете кэшировать все файлы настроек в единый файл с помощью Artisan-команды `config:cache`. Она объединит все параметры настроек вашего приложения в единый файл, который будет быстро загружен фреймворком. У вас должно войти в привычку запускать команду `config:cache` как часть процедуры развёртывания в продакшн. Не стоит запускать эту команду в процессе локальной разработки, так как при этом часто требуется изменять параметры настроек.
добавлено в 5.3 ()
Когда ваше приложение находится в режиме обслуживания (maintenance mode), специальный шаблон будет отображаться для всех запросов в вашем приложении. Это позволяет легко «отключать» приложение во время его обновления или при обслуживании. Проверка режима обслуживания включена в стандартный набор посредников для вашего приложения. Когда приложение находится в режиме обслуживания, будет вызвано исключение MaintenanceModeException с кодом состояния 503 (для версии 5.2 и ранее — HttpException).
Для включения этого режима просто выполните Artisan-команду down:
> php artisan down
Чтобы выйти из режима обслуживания выполните команду up:
> php artisan up
### Шаблон ответа для режима обслуживания

Шаблон для ответов по умолчанию для режима обслуживания находится в resources/views/errors/503.blade.php. Вы можете изменить его для своего приложения.

### Режим обслуживания и очереди

Пока ваше приложение находится в режиме обслуживания, не будут обрабатываться никакие задачи очередей. Задачи продолжат обрабатываться как обычно, как только приложение выйдет из режима обслуживания.
добавлено в 5.2 ()
Альтернативы режиму обслуживания
Поскольку для режима обслуживания необходима остановка вашего приложения на несколько секунд, рассмотрите такие альтернативы, как Envoyer, чтобы обеспечить обновление на лету.
## Красивые URL
### Apache
Фреймворк поставляется с файлом public/.htaccess, который используется для реализации URL-адресов без index.php. Если вы используете Apache для работы вашего Laravel-приложения, убедитесь, что включён модуль mod_rewrite.
Если поставляемый с Laravel файл .htaccess не работает с вашей инсталляцией Apache, попробуйте этот:
Если ваш хостинг не разрешает опцию FollowSymlinks, попробуйте заменить её на Options +SymLinksIfOwnerMatch.
### Nginx
В Nginx следующая директива в настройках вашего сайта позволит использовать «красивые» URL-адреса:
```
location / {
    try_files $uri $uri/ /index.php?$query_string;
}
```
Само собой, при использовании Homestead красивые URL будут настроены автоматически.
# Homestead
Laravel стремится преобразить процесс разработки на PHP, это относится и к локальной среде разработки. Vagrant обеспечивает простой, элегантный способ настройки и управления виртуальными машинами.
Laravel Homestead — официальная предустановленная Vagrant-"коробка", которая предоставляет вам замечательную среду разработки без необходимости установки PHP, веб-сервера и любого другого серверного программного обеспечения на ваш компьютер. Можно больше не беспокоиться о том, что ваша операционная система засоряется! Vagrant-коробки очень удобны. Если что-то пошло не так, вы можете уничтожить и пересоздать коробку в считанные минуты!
Homestead запускается на ОС Windows, Mac и Linux, и включает в себя веб-сервер Nginx, PHP 7.1, MySQL, Postgres, Redis, Memcached, Node и все другие полезные штуки, которые вам понадобятся для разработки удивительных Laravel-приложений.
Если вы используете Windows, возможно, вам необходимо включить аппаратную виртуализацию (VT-x). Она обычно включается через BIOS. Если вы используете Hyper-V на UEFI-системе, вам может понадобиться отключить Hyper-V, для доступа к VT-x.
### Включённое ПО
* Ubuntu 16.04
* Git (в 5.1+)
* PHP 7.1
* HHVM (только в 5.2 и ранее)
* Nginx
* MySQL
* MariaDB (в 5.1+)
* Sqlite3 (в 5.1+)
* Postgres
* Composer (в 5.1+)
* Node (с Yarn (в 5.3+), PM2, Bower, Grunt и Gulp)
* Redis
* Memcached
* Beanstalkd
* Laravel Envoy (в 5.0)
* Профайлер Blackfire (в 5.0)
### Первые шаги
Прежде чем запустить Homestead-среду, вы должны установить VirtualBox 5.1, VMWare или Parallels (с версии Laravel 5.3), а также Vagrant. Эти программные пакеты предоставляют простые в использовании визуальные инсталляторы для всех популярных операционных систем.
Для использования VMWare вам необходимо приобрести и VMware Fusion/Workstation, и плагин VMware Vagrant. Хотя он и платный, зато VMware изначально обеспечивает большую скорость работы общих папок.
Для использования провайдера Parallels вам необходимо установить плагин Parallels Vagrant. Он бесплатный.
Установка Vagrant-коробки Homestead

После установки VirtualBox/VMware и Vagrant вы должны добавить коробку laravel/homestead в установленный пакет Vagrant, используя следующую команду в вашем терминале. Скачивание коробки может занять несколько минут:

> vagrant box add laravel/homestead
Если выполнение команды завершится неудачно, проверьте, что у вас установлена свежая версия Vagrant.
Вы можете установить Homestead, просто клонировав репозиторий. Клонируйте репозиторий в папку Homestead в тот каталог, где вы храните все свои проекты Laravel, потому что коробка Homestead станет хостом всех ваших Laravel-проектов:
> cd ~
> git clone https://github.com/laravel/homestead.git Homestead
После клонирования репозитория Homestead выполните в этой папке команду `bash init.sh`, чтобы создать конфигурационный файл Homestead.yaml. Этот файл будет помещён в скрытый каталог ~/.homestead:

```
// Mac / Linux...
bash init.sh

// Windows...
init.bat
```
### Настройка Homestead
Параметр provider в файле ~/.homestead/Homestead.yaml указывает на то, какой провайдер должен использоваться: virtualbox, vmware_fusion (Mac OS X) или vmware_workstation (Windows) или parallels. Вы можете задать тот, который предпочитаете:
```
provider: virtualbox
```
В параметре folders в файле Homestead.yaml перечислены все папки, которые вы хотите расшарить для вашей среды Homestead. Поскольку файлы в этих папках будут меняться, они будут синхронизироваться с вашей локальной машиной и средой Homestead. Вы можете настроить столько папок, сколько вам необходимо:
```
folders:
- map: ~/Code
to: /home/vagrant/Code
```
Для включения NFS просто добавьте простой ключ к вашей синхронизируемой папке:
```
folders:
- map: ~/Code
to: /home/vagrant/Code
type: "nfs"
```
Также вы можете передавать параметры, поддерживаемые синхронизируемыми папками Vagrant, указывая их под ключом options:
```
folders:
- map: ~/Code
to: /home/vagrant/Code
type: "rsync"
options:
rsync__args: ["--verbose", "--archive", "--delete", "-zz"]
rsync__exclude: ["node_modules"]
```
Не знакомы с Nginx? Не проблема. Параметр sites позволяет легко связать «домен» с папкой в среде Homestead. Типовая конфигурация сайта включена в файл Homestead.yaml. И снова вы можете добавить столько сайтов к своей среде Homestead, сколько необходимо. Homestead может служить удобной виртуальной средой для каждого проекта Laravel, над которым вы работаете:
```
sites:
- map: homestead.app
to: /home/vagrant/Code/Laravel/public
```
Вы можете настроить использование HHVM для любого сайта Homestead, установив параметр hhvm в значение true:
Если вы измените параметр sites после подключения Homestead-коробки, вам будет необходимо повторно выполнить подготовку (provision) виртуальной машины, чтобы обновить конфигурацию Nginx на ней.
Вы должны добавить «домены» для своих Nginx-сайтов в файл hosts на вашей машине. Файл hosts перенаправит запросы к вашим сайтам в вашу машину Homestead. На Mac и Linux этот файл расположен в /etc/hosts. На Windows он расположен в C:\Windows\System32\drivers\etc\hosts. Строки, которые вы добавляете в этот файл, будут выглядеть примерно так:
```
192.168.10.10 homestead.app
```
Удостоверьтесь, что IP-адрес тот же, что вы установили в своём файле ~/.homestead/Homestead.yaml. Когда вы добавите домен в свой файл hosts и запустите Vagrant-коробку, вы можете получить доступ к сайту через свой веб-браузер:
> http://homestead.app
### Запуск Vagrant Box
Когда вы отредактировали Homestead.yaml, выполните команду `vagrant up` в папке Homestead. Vagrant загрузит виртуальную машину и настроит ваши общие папки и сайты Nginx автоматически. Чтобы уничтожить машину, вы можете использовать команду `vagrant destroy --force`.
Чтобы узнать, как подключиться к своей базе данных, читайте дальше!
### Установка для проекта
Вместо глобальной установки Homestead и использования одной Homestead-коробки для всех ваших проектов, вы можете настроить отдельный экземпляр Homestead для каждого проекта. Установка Homestead для проекта может быть выгоднее, когда вы хотите поставлять файл Vagrantfile вместе с вашим проектом, позволяя тем, кто работает над проектом, просто выполнять `vagrant up`.
Чтобы установить Homestead непосредственно в ваш проект, затребуйте его с помощью Composer:
> composer require laravel/homestead --dev
Когда Homestead установлен, используйте команду `make` для создания Vagrantfile и файла Homestead.yaml в корне вашего проекта. Команда `make` автоматически настроит директивы sites и folders в файле Homestead.yaml.

Mac / Linux:

> php vendor/bin/homestead make

Windows:

```
vendor\bin\homestead make
```
Затем выполните команду `vagrant up` в терминале и зайдите в свой проект по адресу http://homestead.app через браузер. Не забывайте, что вам по-прежнему необходимо добавить строку для homestead.app (или выбранного вами домена) в файл /etc/hosts.
### Установка MariaDB
Если вы решили использовать MariaDB вместо MySQL, вы можете добавить параметр mariadb в ваш файл Homestead.yaml. Этот параметр удалит MySQL и установит MariaDB. MariaDB является полноценной заменой для MySQL, поэтому вы должны использовать тот же драйвер БД mysql в настройках базы данных приложения:
```
box: laravel/homestead
ip: "192.168.20.20"
memory: 2048
cpus: 4
provider: virtualbox
mariadb: true
```
## Повседневное использование
### Глобальный доступ к Homestead
Иногда вам может понадобиться выполнить `vagrant up` для вашей Homestead-машины из любого места вашей файловой системы.
добавлено в 5.2 ()
Это можно сделать, добавив простую Bash-функцию в профиль вашего Bash. Эта функция позволит вам выполнять любые команды Vagrant из любого места вашей системы, и автоматически укажет команде на ваш установленный Homestead:
```
function homestead() {
( cd ~/Homestead && vagrant $* )
}
```
Не забудьте исправить путь ~/Homestead в функции на реальное расположение вашего установленного Homestead. Когда функция установлена, вы можете выполнять такие команды, как `homestead up` и `homestead ssh`, из любого места вашей системы.
Это можно сделать, добавив простой Bash-псевдоним в профиль вашего Bash. Этот псевдоним позволит вам выполнять любые команды Vagrant из любого места вашей системы, и автоматически укажет команде на ваш установленный Homestead:
> alias homestead='function __homestead() { (cd ~/Homestead && vagrant $*); unset -f __homestead; }; __homestead'
Не забудьте исправить путь ~/Homestead в псевдониме на реальное расположение вашего установленного Homestead. Когда псевдоним установлен, вы можете выполнять такие команды, как `homestead up` и `homestead ssh`, из любого места вашей системы.
добавлено в 5.0 ()
Чтобы добавить псевдонимы для Bash в вашу коробку Homestead, просто добавьте их в файл aliases в корень каталога ~/.homestead.
> alias vm="ssh [email protected] -p 2222"
После создания псевдонима вы можете просто использовать команду `vm` для подключения к вашей Homestead-машине по SSH из любого места вашей системы.
Теперь отредактируйте файл Homestead.yaml. В этом файле вы можете настроить путь к своему паблик-ключу SSH, а также настроить папки, которые вы хотите расшарить между вашей основной машиной и виртуальной машиной Homestead.
У вас нет SSH-ключа? На Mac и Linux вы можете создать пару SSH-ключей, используя следующую команду:
> ssh-keygen -t rsa -C "you@homestead"
На Windows вы можете установить Git и использовать Git Bash, встроенную в оболочку Git, чтобы выполнить указанную выше команду. Также вы можете использовать PuTTY и PuTTYgen.
Как только вы создали SSH-ключ, задайте путь к ключу в параметре authorize в вашем файле Homestead.yaml.
### Подключение через SSH
Для подключения к своей виртуальной машине по SSH вы можете использовать команду `vagrant ssh` из вашего каталога Homestead.
Но поскольку вам, скорее всего, потребуется часто подключаться к вашей Homestead-машине по SSH, будет удобно создать «функцию» (для версии 5.1 и ранее «псевдоним») на вашей хост-машине для быстрого подключения, как описано выше.
### Подключение к базам данных
База homestead изначально настроена на использование и MySQL, и Postgres. Для ещё большего удобства файл Laravel .env настроен на использование этой БД по умолчанию.
Чтобы подключиться к вашей базе данных MySQL или Postgres через клиент БД с вашей хост-машины, вы должны подключиться к 127.0.0.1 через порт 33060 (MySQL) или 54320 (Postgres). Имя пользователя и пароль для обеих баз данных — homestead / secret.
Вы должны использовать эти нестандартные порты, только подключаясь к базам данных с вашей главной машины. Вы будете использовать порты 3306 и 5432 в вашем конфигурационном файле базы данных Laravel, так как Laravel запущен на виртуальной машине.
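Для наглядности приведём предположительный фрагмент config/database.php, где порт берётся из переменной среды; значения по умолчанию ниже соответствуют работе внутри виртуальной машины, а реальное содержимое файла в вашем проекте может отличаться:

```
<?php

// Условный фрагмент config/database.php для соединения MySQL.
// Внутри Homestead используется стандартный порт 3306;
// с хост-машины подключайтесь к 127.0.0.1:33060.
return [
    'connections' => [
        'mysql' => [
            'host' => env('DB_HOST', '127.0.0.1'),
            'port' => env('DB_PORT', '3306'),
            'database' => env('DB_DATABASE', 'homestead'),
            'username' => env('DB_USERNAME', 'homestead'),
            'password' => env('DB_PASSWORD', 'secret'),
        ],
    ],
];
```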
### Добавление дополнительных сайтов
После настройки и запуска вашей среды Homestead вы можете захотеть добавить дополнительные Nginx-сайты для своих Laravel-приложений. Вы можете запустить в одной среде Homestead столько установок Laravel, сколько захотите. Для этого просто добавьте сайты в свой файл ~/.homestead/Homestead.yaml и затем повторно выполните подготовку (provision) виртуальной машины (для версии 5.2 и ранее — команду `vagrant provision`) из папки Homestead.
добавлено в 5.0 ()
Этот процесс деструктивен. При запуске команды `provision` ваша существующая БД будет уничтожена и создана заново. А ещё вы можете использовать скрипт `serve`, который доступен в среде Homestead. Чтобы использовать скрипт `serve`, подключитесь по SSH к вашей среде Homestead и запустите следующую команду:

> serve domain.app /home/vagrant/Code/path/to/public/directory 80
После выполнения команды `serve` не забудьте добавить новый сайт в файл hosts на вашей главной машине!
### Настройка расписания Cron
Laravel предоставляет удобный способ планирования Cron-задач: достаточно запланировать ежеминутное выполнение единственной Artisan-команды `schedule:run`. Команда `schedule:run` проверит запланированные задачи, определённые в классе App\Console\Kernel, и определит, какие задачи необходимо выполнить. Если вы хотите выполнять команду `schedule:run` для сайта Homestead, задайте значение true для параметра schedule при определении сайта.
Cron-задача для сайта будет определена в папке /etc/cron.d на виртуальной машине.
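Для иллюстрации ниже минимальный набросок того, как задачи определяются в классе App\Console\Kernel; Artisan-команда inspire приведена здесь лишь для примера:

```
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    /**
     * Определение расписания команд приложения.
     * Именно эти определения проверяет команда schedule:run.
     */
    protected function schedule(Schedule $schedule)
    {
        // Условный пример: ежедневный запуск Artisan-команды inspire.
        $schedule->command('inspire')->daily();
    }
}
```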
### Порты
По умолчанию следующие порты переадресованы в вашу среду Homestead:
* SSH: 2222 → переадресован в 22
* HTTP: 8000 → переадресован в 80
* HTTPS: 44300 → переадресован в 443
* MySQL: 33060 → переадресован в 3306
* Postgres: 54320 → переадресован в 5432
Добавление дополнительных портов
Если хотите, вы можете переадресовать дополнительные порты в Vagrant-коробку, а также указать их протокол:
```
ports:
- send: 93000
to: 9300
- send: 7777
to: 777
protocol: udp
```
## Сетевые интерфейсы
Свойство networks в файле Homestead.yaml настраивает сетевые интерфейсы для вашей среды Homestead. Вы можете настроить сколько угодно интерфейсов:
```
networks:
- type: "private_network"
ip: "192.168.10.20"
```
Для включения сетевого моста задайте параметр bridge и измените тип сети на public_network:
Для включения DHCP просто удалите параметр ip из конфигурации:
## Обновление Homestead
Для обновления Homestead надо выполнить два простых шага. Во-первых, вам надо обновить Vagrant-коробку с помощью команды `vagrant box update`:

> vagrant box update
Затем вам надо обновить исходный код Homestead. Если вы клонировали репозиторий, то можете просто выполнить `git pull origin master` в том каталоге, куда вы изначально клонировали репозиторий.
Если вы установили Homestead через файл composer.json вашего проекта, вам надо убедиться, что этот файл содержит строку "laravel/homestead": "^4" и обновить ваши зависимости:
> composer update
## Старые версии
Вы можете легко изменить используемую в Homestead версию коробки, добавив следующую строку в ваш файл Homestead.yaml:
`version: 0.6.0`
```
box: laravel/homestead
version: 0.6.0
ip: "192.168.20.20"
memory: 2048
cpus: 4
provider: virtualbox
```
При использовании старых версий коробки Homestead вам надо проверить версию на совместимость с исходным кодом Homestead. Ниже приведена таблица поддерживаемых версий коробки с указанием того, какую версию исходного кода необходимо использовать, и какая в коробке версия PHP:

| Версия PHP | Версия исходного кода Homestead | Версия коробки |
| --- | --- | --- |
| PHP 7.0 | 3.1.0 | 0.6.0 |
| PHP 7.1 | 4.0.0 | 1.0.0 |
## Профайлер Blackfire
Профайлер Blackfire от SensioLabs автоматически собирает данные о выполнении вашего кода, такие как использование ОЗУ, время ЦПУ и ввод/вывод диска. В Homestead легко использовать этот профайлер для ваших приложений.
Все необходимые пакеты уже установлены в коробку Homestead, вам надо просто задать ID сервера и его ключ в файле Homestead.yaml:
```
blackfire:
- id: your-server-id
token: your-server-token
client-id: your-client-id
client-token: your-client-token
```
После настройки прав доступа для Blackfire переподключите коробку с помощью команды `vagrant provision` из вашей папки Homestead. Обязательно загляните в документацию Blackfire, чтобы узнать, как установить расширение Blackfire для вашего браузера.
# Valet
Valet — среда для разработки в Laravel для минималистов, работающих на Mac. Без Vagrant, без файла /etc/hosts. Можно даже расшаривать сайты в общий доступ через локальные туннели. Да, нам и самим это нравится.
Laravel Valet включает на вашем Mac фоновую автозагрузку Nginx (до версии Valet 2.0 — Caddy ). Затем с помощью DnsMasq Valet настраивает прокси для всех запросов к домену *.dev, для переадресации их на сайты на вашей локальной машине.
Другими словами, это молниеносное окружение для разработки в Laravel, которое использует около 7 Мб RAM. Valet не является полной заменой для Vagrant или Homestead, но предоставляет отличную альтернативу, когда вам нужна база для гибкой настройки, максимальная скорость, или вы работаете на машине с ограниченным объёмом RAM.
Из коробки Valet поддерживает (но не ограничивается только ими):
Также вы можете дополнить Valet своим собственным драйвером.
### Valet или Homestead
Как вы знаете, Laravel предлагает Homestead — другую локальную среду разработки. Valet и Homestead отличаются целевой аудиторией и подходом к локальной разработке. Homestead включает в себя целую виртуальную машину с Ubuntu и автоматической настройкой Nginx. Homestead — отличный выбор, если вам нужна полностью виртуальная среда разработки на Linux или на Windows / Linux.
Valet поддерживает только Mac и требует установки PHP и сервера базы данных непосредственно на вашу локальную машину. Это легко делается при помощи Homebrew, например командой `brew install php71`. Valet обеспечивает молниеносную среду разработки с минимальным потреблением ресурсов, поэтому идеально подходит для разработчиков, использующих только PHP / MySQL и не нуждающихся в полностью виртуальной среде разработки.
И Valet, и Homestead являются отличным выбором для настройки среды разработки в Laravel. Выбор зависит от ваших личных предпочтений и потребностей вашей команды.
Valet требует MacOS и Homebrew. Перед установкой необходимо убедиться, что другие программы, такие как Apache и Nginx, не используют 80 порт вашей локальной машины.
* Установите или обновите Homebrew до последней версии с помощью `brew update`.
* Установите PHP 7.1 с помощью Homebrew командой `brew install php71`.
* Установите Valet через Composer командой `composer global require laravel/valet`. Убедитесь, что в вашей системной переменной PATH есть каталог ~/.composer/vendor/bin.
* Выполните команду `valet install`. Она установит и настроит Valet и DnsMasq, и пропишет демон Valet в автозагрузку.

После установки Valet попробуйте выполнить в терминале `ping` к любому домену *.dev, например командой `ping foobar.dev`. Если Valet установлен корректно, то вы увидите, что этот домен соответствует 127.0.0.1. Valet автоматически запускает своего демона при запуске ОС. После начальной установки Valet больше не потребуется выполнять `valet start` или `valet install`.

По умолчанию Valet использует для ваших проектов домен верхнего уровня .dev. Если вы хотите использовать другой домен, используйте команду `valet domain tld-name`. Например, чтобы использовать .app вместо .dev, выполните `valet domain app`, и Valet автоматически начнёт использовать новый домен.

Если вам нужна база данных, попробуйте MariaDB, установив её с помощью Homebrew. После установки MariaDB вы можете запустить её командой `brew services start mariadb`. Затем вы можете подключиться к БД на 127.0.0.1, имя пользователя — root, пароль — пустая строка.
добавлено в 5.3 ()
Вы можете обновить установленный Valet через Composer. После обновления рекомендуется выполнить команду `valet install`, чтобы Valet сделал дополнительные обновления ваших конфигурационных файлов при необходимости.
В Valet 2.0 изменён базовый веб-сервер с Caddy на Nginx. Перед обновлением на эту версию вы должны выполнить следующие команды, чтобы остановить и удалить существующий демон Caddy:
> valet stop
> valet uninstall
Далее вам надо обновить Valet до последней версии. Это делается с помощью Git или Composer, в зависимости от того, как вы устанавливали Valet изначально. Если вы устанавливали его через Composer, то вам надо использовать следующие команды для обновления до последней мажорной версии:
> composer global require laravel/valet
Когда будет скачан свежий исходный код Valet, вы должны выполнить команды `valet install` и `valet restart`:

> valet install
> valet restart
После обновления может понадобиться пере-разместить или пере-привязать ваши сайты.
## Описание версии
### Версия 1.1.5
В версию 1.1.5 вошло множество внутренних улучшений.
После обновления Valet с помощью Composer вам надо выполнить команду `valet install`.
### Версия 1.1.0
В версию 1.1.0 вошло множество значительных улучшений. Встроенный PHP-сервер был заменён на Caddy для обработки входящих HTTP-запросов. Выбор Caddy позволит сделать множество улучшений в будущем, а также позволяет сайтам Valet выполнять HTTP-запросы к другим сайтам Valet, не блокируя встроенный PHP-сервер.
После обновления Valet с помощью Composer вам надо выполнить команду `valet install` для создания нового файла демона Caddy в вашей системе.
## Обслуживание сайтов
После установки Valet можно начать обслуживать сайты. Valet предоставляет две команды для помощи в обслуживании Laravel-сайтов: `park` и `link`.
* Создайте новый каталог на своём Mac с помощью команды `mkdir ~/Sites`. Затем перейдите в него (`cd ~/Sites`) и выполните `valet park`. Эта команда зарегистрирует текущий каталог как путь, по которому Valet будет искать сайты.
* Теперь создайте новый Laravel-сайт в этом каталоге: `laravel new blog`.
* Откройте http://blog.dev в своём браузере.
Вот и всё. Теперь все проекты, которые вы разместите в каталоге «парковки», будут обслуживаться автоматически с адресами в соответствии с названиями их папок http://folder-name.dev.
Для обслуживания Laravel-сайтов также можно использовать команду `link`. Эта команда полезна, когда вы хотите обслуживать один сайт в каталоге, а не весь каталог.
* Для использования команды перейдите к одному из своих проектов и выполните в терминале `valet link app-name`. Valet создаст в ~/.valet/Sites символьную ссылку на ваш текущий каталог.
* После запуска команды `link` вы можете перейти на сайт http://app-name.dev в своём браузере.

Для просмотра списка всех привязанных каталогов выполните команду `valet links`. Для удаления символьной ссылки используйте `valet unlink app-name`.
добавлено в 5.3 ()
По умолчанию Valet обслуживает сайты через чистый HTTP. Но при желании вы можете включить шифрование TLS с использованием HTTP/2 с помощью команды `secure`. Например, если Valet обслуживает ваш сайт на домене laravel.dev, то для его защиты вам надо выполнить команду:

> valet secure laravel

Чтобы отключить защиту и перевести трафик обратно на чистый HTTP, используйте команду `unsecure`. Как и команда `secure`, эта команда принимает имя хоста, для которого необходимо отключить защиту:

> valet unsecure laravel
## Открытие общего доступа к сайтам
Valet имеет команду для открытия доступа к вашим локальным сайтам для всех. Кроме Valet не нужно никакое ПО.
Для открытия доступа к сайту перейдите в терминале в его каталог и выполните команду `valet share`. URL для общего доступа будет помещён в буфер обмена, и его можно вставить в браузер. Это всё.
Для закрытия общего доступа нажмите Control + C для отмены процесса.
## Пользовательские драйверы Valet
Вы можете написать свой собственный «драйвер» Valet для обслуживания PHP-приложений, работающих на другом фреймворке или CMS, изначально не поддерживаемых Valet. При установке Valet создаётся папка ~/.valet/Drivers, содержащая файл SampleValetDriver.php. В этом файле находится пример реализации драйвера для демонстрации. Для написания драйвера необходимо реализовать всего три метода: `serves()`, `isStaticFile()` и `frontControllerPath()`. Все три метода принимают в качестве аргументов `$sitePath`, `$siteName` и `$uri`. Параметр `$sitePath` — полный путь к сайту на вашей машине, например, /Users/Lisa/Sites/my-project. Параметр `$siteName` — часть домена «хост» / «имя сайта» (my-project). Параметр `$uri` — URI входящих запросов (/foo/bar).
Когда вы завершили написание своего драйвера Valet, поместите его в каталог ~/.valet/Drivers используя принцип именования FrameworkValetDriver.php. Например, если вы написали драйвер для WordPress, то имя файла должно быть WordPressValetDriver.php.
Давайте рассмотрим примеры реализации каждого из этих методов.
Метод `serves()` должен возвращать `true`, если ваш драйвер должен обрабатывать входящие запросы. Иначе метод должен возвращать `false`. В этом методе вы должны попытаться определить, содержит ли данный `$sitePath` проект того типа, который вы хотите обслуживать. Например, давайте предположим, что мы пишем WordPressValetDriver. Наш метод `serves()` должен выглядеть примерно так:

```
/**
 * Определение, обслуживает ли драйвер запрос.
 *
 * @param  string  $sitePath
 * @param  string  $siteName
 * @param  string  $uri
 * @return bool
 */
public function serves($sitePath, $siteName, $uri)
{
    return is_dir($sitePath.'/wp-admin');
}
```

Метод `isStaticFile()` должен определять, является ли входящий запрос запросом к «статическому» файлу, такому как изображение или таблица стилей. Если файл статический, метод должен вернуть полный путь к этому файлу на диске. Если входящий запрос не к статическому файлу, метод должен вернуть `false`:

```
/**
 * Определение, является ли входящий запрос запросом к статическому файлу.
 *
 * @param  string  $sitePath
 * @param  string  $siteName
 * @param  string  $uri
 * @return string|false
 */
public function isStaticFile($sitePath, $siteName, $uri)
{
    if (file_exists($staticFilePath = $sitePath.'/public/'.$uri)) {
        return $staticFilePath;
    }

    return false;
}
```

Метод `isStaticFile()` будет вызван, только если метод `serves()` возвращает `true` для входящего запроса, и значение URI запроса не равно /.

Метод `frontControllerPath()` должен вернуть полный путь к «первичному контроллеру» вашего приложения, которым обычно является ваш файл index.php или его эквивалент:

```
/**
 * Получение полного пути к первичному контроллеру приложения.
 *
 * @param  string  $sitePath
 * @param  string  $siteName
 * @param  string  $uri
 * @return string
 */
public function frontControllerPath($sitePath, $siteName, $uri)
{
    return $sitePath.'/public/index.php';
}
```
## Другие команды Valet
Команда | Описание |
| --- | --- |
valet forget | Выполните эту команду в каталоге "парковки" для удаления его из списка каталогов парковки. |
valet paths | Просмотр всех путей "парковки". |
valet restart | Перезапуск демона Valet. |
valet start | Запуск демона Valet. |
valet stop | Остановка демона Valet. |
valet uninstall | Полное удаление демона Valet. |
# Быстрый старт
Чтобы рассмотреть основной набор функций Laravel, мы создадим простой список задач и будем придерживаться его (типичный пример списка «to-do»). Полный финальный вариант исходного кода для этого проекта доступен на GitHub.
Больше информации относительно создания локальной среды разработки Laravel вы сможете найти в документации по Homestead и по установке.
Во-первых, давайте использовать миграцию для определения таблицы базы данных для хранения всех наших задач. Миграции БД в Laravel позволяют простым способом определить структуру таблицы базы данных и выполнять модификации с использованием простого и выразительного PHP кода. Вместо того чтобы вручную добавлять столбцы в свои локальные копии БД, ваши товарищи по команде могут просто запустить миграции, которые вы поместили в систему управления версиями.
Итак, давайте создадим таблицу БД, которая будет содержать все наши задачи. Для создания различных классов может быть использован интерфейс Artisan. Он избавит вас от ручной генерации кода при создании проектов Laravel. Поэтому давайте используем команду `make:migration` для создания миграции новой базы данных для нашей таблицы tasks:

> php artisan make:migration create_tasks_table --create=tasks
Миграция будет помещена в каталог database/migrations вашего проекта. Как вы могли заметить, команда `make:migration` уже добавила автоинкрементный ID и метки времени к файлу миграции. Давайте отредактируем этот файл и добавим дополнительный строковый столбец name для имён наших задач:

```
<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateTasksTable extends Migration
{
    /**
     * Запуск миграций
     *
     * @return void
     */
    public function up()
    {
        Schema::create('tasks', function (Blueprint $table) {
            $table->increments('id');
            $table->string('name');
            $table->timestamps();
        });
    }

    /**
     * Откатить миграции
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('tasks');
    }
}
```

Чтобы запустить нашу миграцию, мы будем использовать Artisan-команду `migrate`. Если вы используете Homestead, вы должны выполнить эту команду в своей виртуальной машине, так как у вашей host-машины не будет прямого доступа к базе данных:

> php artisan migrate
## Модели Eloquent
Eloquent — это стандартное ORM для Laravel (объектно-реляционное отображение). Eloquent делает безболезненным получение и хранение данных в вашей базе данных, используя чётко определённые «модели». Обычно, каждая Eloquent модель однозначно соответствует одной таблице базы данных.
Давайте определим модель Task, которая будет соответствовать только что созданной нами таблице tasks. Мы снова можем использовать команду Artisan, чтобы сгенерировать эту модель. В этом случае мы будем использовать команду `make:model`:

> php artisan make:model Task
Модель будет помещена в каталог app вашего приложения. По умолчанию класс модели пуст. Нам не надо явно указывать, какой таблице соответствует Eloquent модель, потому что подразумевается, что имя таблицы – это имя модели во множественном числе (s на конце). В этом случае модель Task, как предполагается, соответствует таблице базы данных tasks. Вот на что должна быть похожа наша пустая модель:
```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Task extends Model
{
    //
}
```
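Если соглашение об именовании вам не подходит, имя таблицы можно задать явно. Ниже условный набросок со стандартным для Eloquent свойством `$table`; имя my_tasks приведено только для иллюстрации:

```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Task extends Model
{
    /**
     * Явно заданное имя таблицы (пример; по умолчанию
     * Eloquent использовал бы таблицу tasks).
     *
     * @var string
     */
    protected $table = 'my_tasks';
}
```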
Мы ещё познакомимся чуть ближе с моделями Eloquent, когда добавим маршруты к нашему приложению. Разумеется, вы можете заглянуть и в полную документацию по Eloquent для получения дополнительной информации.
### Заглушки маршрутов
Теперь можно добавить несколько маршрутов в наше приложение. Маршруты используются для связи URL с контроллерами или анонимными функциями, которые должны быть выполнены, когда пользователь переходит на данную страницу. По умолчанию все маршруты Laravel определены в файле app/Http/routes.php, который автоматически добавляется в каждый новый проект.
Для нашего приложения нам будут нужны по крайней мере три маршрута: маршрут для вывода на экран списка всех наших задач, маршрут для добавления новых задач и маршрут для удаления существующих задач. Давайте напишем заглушки для всех этих маршрутов в файле app/Http/routes.php:
```
<?php

use App\Task;
use Illuminate\Http\Request;

/**
 * Вывести панель с задачами
 */
Route::get('/', function () {
    //
});

/**
 * Добавить новую задачу
 */
Route::post('/task', function (Request $request) {
    //
});

/**
 * Удалить задачу
 */
Route::delete('/task/{task}', function (Task $task) {
    //
});
```
Если в вашей копии Laravel есть RouteServiceProvider, который уже содержит файл маршрутов по умолчанию в группе посредников web, то вам не надо вручную добавлять группу в ваш файл routes.php.
Давайте заполним наш маршрут /. По этому маршруту мы хотим отрисовывать HTML-шаблон, который содержит форму добавления новой задачи, а также список всех текущих задач.
В Laravel все HTML-шаблоны хранятся в каталоге resources/views, и мы можем использовать вспомогательную функцию `view()`, чтобы возвратить один из этих шаблонов по нашему маршруту:

```
Route::get('/', function () {
    return view('tasks');
});
```

Передача tasks в функцию `view()` создаст экземпляр объекта View, который соответствует шаблону resources/views/tasks.blade.php.
Конечно, нам необходимо создать это представление, поэтому давайте сделаем это!
Наше приложение содержит одно представление с формой добавления новых задач, а также список всех текущих задач. Чтобы помочь вам визуализировать представление, мы сделали скриншот законченного приложения со стандартными стилями Bootstrap CSS:
Теперь мы должны определить представление, которое содержит форму создания новой задачи, а также таблицу со списком всех существующих задач. Давайте определим это представление в resources/views/tasks.blade.php.
```
<!-- resources/views/tasks.blade.php -->
```
# Несколько разъясняющих замечаний
Используемая в нашем шаблоне директива подключения ошибок загрузит шаблон resources/views/common/errors.blade.php. Мы его ещё не определили, но скоро сделаем это!
Итак, мы определили основной макет и представление для нашего приложения. Помните, мы возвращаем это представление по маршруту /:
```
Route::get('/', function () {
    return view('tasks');
});
```
Теперь мы готовы добавить код в наш маршрут POST /task, чтобы обработать входящие данные из формы и добавить новую задачу в БД.
Теперь, когда у нас есть форма на нашем представлении, мы должны добавить код к нашему маршруту POST /task в app/Http/routes.php, чтобы проверить входящие данные из формы и создать новую задачу. Во-первых, давайте проверим ввод.
Для этой формы мы создадим обязательное поле name и зададим, что оно должно содержать не более 255 символов. Если проверка не пройдёт, то мы перенаправим пользователя назад к URL /, а также возвратим ему в сессию его введённые данные с указанием на ошибки. Возврат введённых данных в сессию позволит нам сохранить их, даже если в них будут ошибки:
```
Route::post('/task', function (Request $request) {
    $validator = Validator::make($request->all(), [
        'name' => 'required|max:255',
    ]);

    if ($validator->fails()) {
        return redirect('/')
            ->withInput()
            ->withErrors($validator);
    }

    // Создание задачи...
});
```
Переменная `$errors`

Давайте сделаем перерыв на минутку, чтобы поговорить о строчке `->withErrors($validator)` в нашем примере. Этот вызов поместит в сессию ошибки данного экземпляра проверки ввода, и к ним можно будет обратиться через переменную `$errors` в нашем представлении. Помните, что для вывода этих ошибок мы использовали соответствующую директиву в нашем представлении.
Теперь, когда проверка ввода обрабатывается, давайте создадим новую задачу, продолжая заполнять наш маршрут. Как только новая задача будет создана, мы перенаправим пользователя назад к URL /. Чтобы создать задачу, мы можем использовать метод `save()` после создания и установки свойств новой модели Eloquent:

```
Route::post('/task', function (Request $request) {
    $validator = Validator::make($request->all(), [
        'name' => 'required|max:255',
    ]);

    if ($validator->fails()) {
        return redirect('/')
            ->withInput()
            ->withErrors($validator);
    }

    $task = new Task;
    $task->name = $request->name;
    $task->save();

    return redirect('/');
});
```
Отлично! Теперь мы можем создавать задачи. Давайте продолжим создание нашего представления, добавив список всех существующих задач.
Во-первых, мы должны отредактировать наш маршрут /, чтобы передать все существующие задачи в представление. Функция `view()` принимает массив данных вторым параметром, который будет доступен представлению. Каждый ключ массива станет переменной в представлении:

```
Route::get('/', function () {
    $tasks = Task::orderBy('created_at', 'asc')->get();

    return view('tasks', [
        'tasks' => $tasks
    ]);
});
```

Когда данные переданы, мы можем обращаться к задачам в нашем представлении tasks.blade.php и выводить их на экран таблицей. Blade-конструкция `@foreach` позволяет нам кратко писать циклы, которые компилируются в молниеносный простой PHP-код:
Мы оставили отметку «TODO» в коде, где предположительно будет находиться наша кнопка. Давайте добавим кнопку удаления к каждой строке нашего списка задач в представлении tasks.blade.php. Мы создадим маленькую однокнопочную форму для каждой задачи в списке. После нажатия кнопки приложению будет отправляться запрос DELETE /task:
```
<tr>
    <!-- Имя задачи -->
    <td class="table-text">
        <div>{{ $task->name }}</div>
    </td>

    <!-- Кнопка Удалить -->
    <td>
        <form action="{{ url('task/'.$task->id) }}" method="POST">
            {{ csrf_field() }}
            {{ method_field('DELETE') }}

            <button type="submit" class="btn btn-danger">
                <i class="fa fa-trash"></i> Удалить
            </button>
        </form>
    </td>
</tr>
```

Наконец, давайте добавим к нашему маршруту логику удаления текущей задачи. Мы можем использовать неявную привязку модели, чтобы автоматически получить модель Task, которая соответствует параметру маршрута {task}.
В обратном вызове нашего маршрута мы используем метод `delete()` для удаления записи. Как только запись удалена, мы перенаправим пользователя назад к URL /:
```
Route::delete('/task/{task}', function (Task $task) {
    $task->delete();

    return redirect('/');
});
```
# Углублённый быстрый старт
Чтобы рассмотреть основной набор функций Laravel, мы создадим простой список задач и будем придерживаться его (типичный пример списка «to-do»). В отличие от базового данное руководство позволит пользователям создавать аккаунты и аутентифицироваться в приложении. Полный финальный вариант исходного кода для этого проекта доступен на GitHub.
Более полную информацию о создании локальной среды разработки Laravel вы сможете найти в документации по Homestead и по установке.
Во-первых, давайте используем миграцию для определения таблицы базы данных для хранения всех наших задач. Миграции БД в Laravel позволяют простым способом определить структуру таблицы базы данных и выполнять модификации с использованием простого и выразительного PHP кода. Вместо того чтобы вручную добавлять столбцы в свои локальные копии БД, ваши товарищи по команде могут просто запустить миграции, которые вы поместили в систему управления версиями.
# Таблица users
Поскольку мы решили, что пользователь может создать свой аккаунт в приложении, нам нужна таблица для хранениях пользователей. К счастью, Laravel уже поставляется с миграцией, включающей в себя базовую таблицу users. Поэтому нам не нужно вручную её создавать. По умолчанию миграция для таблицы users находится в каталоге database/migrations.
# Таблица tasks
Теперь давайте создадим таблицу, которая будет содержать все наши задачи. Для создания различных классов может быть использован интерфейс Artisan. Он избавит вас от ручной генерации кода при создании проектов Laravel. Поэтому давайте используем команду `make:migration` для создания миграции новой базы данных для нашей таблицы tasks:

> php artisan make:migration create_tasks_table --create=tasks
Миграция будет помещена в каталог database/migrations вашего проекта. Как вы могли заметить, команда `make:migration` уже добавила автоинкрементный ID и метки времени к файлу миграции. Давайте отредактируем этот файл и добавим дополнительный столбец name для имён наших задач, а также столбец user_id, который свяжет таблицу tasks с таблицей users:

```
<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateTasksTable extends Migration
{
    /**
     * Запуск миграций
     *
     * @return void
     */
    public function up()
    {
        Schema::create('tasks', function (Blueprint $table) {
            $table->increments('id');
            $table->integer('user_id')->unsigned()->index();
            $table->string('name');
            $table->timestamps();
        });
    }

    /**
     * Откатить миграции
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('tasks');
    }
}
```

Чтобы запустить нашу миграцию, мы будем использовать Artisan-команду `migrate`. Если вы используете Homestead, вы должны выполнить эту команду на своей виртуальной машине, так как у вашей host-машины не будет прямого доступа к базе данных:

> php artisan migrate
Eloquent — это стандартное ORM для Laravel (объектно-реляционное отображение). Eloquent делает безболезненным получение и хранение данных в вашей базе данных, используя чётко определённые «модели». Обычно, каждая Eloquent модель однозначно соответствует одной таблице базы данных.
# Модель User
В первую очередь нам нужна модель, соответствующая нашей таблице users. Однако, если вы зайдете в папку app вашего проекта, вы увидите, что Laravel уже поставляется в комплекте с моделью User, поэтому нам не нужно создавать её вручную.
# Модель Task
Давайте определим модель Task, которая будет соответствовать только что созданной нами таблице tasks. Мы снова можем использовать команду Artisan, чтобы сгенерировать эту модель. В этом случае мы будем использовать команду `make:model`:

> php artisan make:model Task
Модель будет помещена в каталог app вашего приложения. По умолчанию класс модели пуст. Нам не надо явно указывать, какой таблице соответствует Eloquent модель, потому что подразумевается, что имя таблицы – это имя модели во множественном числе (s на конце). В этом случае предполагается, что модель Task соответствует таблице базы данных tasks.
Давайте добавим несколько вещей в эту модель. Для начала определим, что атрибут name этой модели должен быть массово присваиваемым. Это позволит нам заполнять атрибут name при использовании Eloquent-метода `create()`:

```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Task extends Model
{
    /**
     * Массово присваиваемые атрибуты.
     *
     * @var array
     */
    protected $fillable = ['name'];
}
```
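Для иллюстрации небольшой условный пример массового присваивания через `create()`; текст задачи здесь выдуман:

```
<?php

// Благодаря $fillable атрибут name можно передать массивом
// прямо при создании записи.
$task = App\Task::create(['name' => 'Купить молоко']);
```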
Мы познакомимся с моделями Eloquent ближе, когда добавим маршруты к нашему приложению. Разумеется, вы можете заглянуть и в полную документацию по Eloquent для получения дополнительной информации.
### Отношения Eloquent
Теперь, когда наши модели определены, нам нужно связать их. Например, наш User может иметь несколько Task, в то время как Task привязан к единственному User. Определение взаимосвязи позволит нам удобно проходить по нашим отношениям:

```
foreach ($user->tasks as $task) {
    echo $task->name;
}
```
# Отношение tasks
Во-первых, давайте определим отношение для нашей модели User. Отношения Eloquent определяются как методы моделей. Eloquent поддерживает несколько различных типов отношений, с которыми можно ознакомиться в полной документации по Eloquent. Мы определим метод `tasks()` в модели User, который вызывает Eloquent-метод `hasMany()`:
добавлено в 5.2 () 5.1 ()

```
<?php

namespace App;

use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    // Другие Eloquent свойства...

    /**
     * Получить все задачи пользователя.
     */
    public function tasks()
    {
        return $this->hasMany(Task::class);
    }
}
```
# Отношение user
Теперь давайте определим отношение user для модели Task. И снова мы определим отношение как метод модели. В этом случае мы будем использовать Eloquent-метод `belongsTo()`, определяющий отношение:

```
<?php

namespace App;

use App\User;
use Illuminate\Database\Eloquent\Model;

class Task extends Model
{
    /**
     * Массово присваиваемые атрибуты.
     *
     * @var array
     */
    protected $fillable = ['name'];

    /**
     * Получить пользователя - владельца данной задачи
     */
    public function user()
    {
        return $this->belongsTo(User::class);
    }
}
```
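Небольшой набросок того, как можно использовать это отношение; переменная $task и идентификатор 1 здесь условные:

```
<?php

// Предполагаем, что задача уже существует и получена по идентификатору.
$task = App\Task::find(1);

// Благодаря отношению belongsTo можно обратиться к владельцу задачи.
echo $task->user->name;
```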
Прекрасно! Теперь наши отношения определены, и мы можем начать создавать наши контроллеры!
В базовой версии нашего приложения мы определили всю нашу логику в файле routes.php, используя замыкания. В данном приложении мы будем использовать контроллеры для организации наших маршрутов. Контроллеры позволят нам лучше организовать логику HTTP-запросов в нескольких файлах.
У нас будет один маршрут, использующий замыкание: наш маршрут /, представляющий из себя лендинг для гостей приложения. Давайте заполним наш маршрут /. По этому маршруту мы хотим отрисовывать HTML-шаблон, который содержит страницу «welcome».
В Laravel все HTML-шаблоны хранятся в каталоге resources/views, и мы можем использовать вспомогательную функцию `PHPview()` , чтобы возвратить один из этих шаблонов по нашему маршруту:
return view('welcome'); });
Конечно, нам необходимо создать это представление, давайте сделаем это!
Помните, что мы также должны позволить пользователям создавать учётные записи и входить в наше приложение. Как правило, построение всего слоя аутентификации в веб-приложении является трудоёмкой задачей . Однако, так как это распространённая задача, Laravel попытался сделать эту процедуру абсолютно безболезненной.
Во-первых, обратите внимание, что app/Http/Controllers/Auth/AuthController уже включён в приложение Laravel. Этот контроллер использует специальный типаж (trait) AuthenticatesAndRegistersUsers со всей необходимой логикой для создания и аутентификации пользователей.
# Маршруты и представления аутентификации
Итак, что нам осталось сделать? Нам всё ещё нужно создать шаблоны регистрации и входа в систему, а также определить маршруты, указывающие на контроллер аутентификации.
добавлено в 5.2 ()
Мы можем сделать это с помощью Artisan-команды `make:auth`:

> php artisan make:auth
Теперь нам осталось только добавить маршруты аутентификации в наш файл маршрутов. Это можно сделать методом `PHPauth()` фасада Route, который зарегистрирует все необходимые нам маршруты для регистрации, входа и сброса пароля:
```
Route::auth();
```

Когда маршруты auth зарегистрированы, проверьте, что свойство `$redirectTo` контроллера app/Http/Controllers/Auth/AuthController имеет значение `/tasks`:
```
protected $redirectTo = '/tasks';
```
А также необходимо изменить в файле app/Http/Middleware/RedirectIfAuthenticated.php путь переадресации:
```
return redirect('/tasks');
```
Для начала давайте добавим нужные нам маршруты в файл app/Http/routes.php:
# Представления аутентификации
Для аутентификации необходимо создать login.blade.php и register.blade.php в папке resources/views/auth. Конечно, дизайн и стиль этих представлений не имеет значения. Тем не менее, они должны содержать по крайней мере несколько основных полей.
Файл register.blade.php должен содержать форму, включающую в себя поля name, email, password и password_confirmation, эта форма должна создавать POST запрос к маршруту /auth/register.
Файл login.blade.php должен содержать форму, включающую в себя поля email и password, эта форма должна создавать POST запрос к маршруту /auth/login.
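Для наглядности ниже приведён примерный минимальный набросок представления входа (разметка условна; в версиях 5.2 и выше маршруты аутентификации обычно называются /login и /register):

```
<!-- resources/views/auth/login.blade.php -->
<form method="POST" action="/auth/login">
    {{ csrf_field() }}

    <label for="email">E-Mail</label>
    <input type="email" name="email" id="email" value="{{ old('email') }}">

    <label for="password">Пароль</label>
    <input type="password" name="password" id="password">

    <button type="submit">Войти</button>
</form>
```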
Если вы хотите просмотреть полные примеры для этих представлений, помните, что весь исходный код приложения доступен на GitHub.
### Контроллер задач
Поскольку мы знаем, что нам нужно получать и сохранять задачи, давайте создадим TaskController с помощью командной строки Artisan, при этом новый контроллер будет помещён в папку app/Http/Controllers:
добавлено в 5.2 ()
```
php artisan make:controller TaskController
```
добавлено в 5.1 ()
```
php artisan make:controller TaskController --plain
```
Теперь, когда контроллер создан, давайте создадим стабы для некоторых маршрутов в нашем файле app/Http/routes.php, указывающих на контроллер:
```
Route::get('/tasks', 'TaskController@index');

Route::post('/task', 'TaskController@store');

Route::delete('/task/{task}', 'TaskController@destroy');
```
# Аутентификация всех маршрутов задач
В данном приложении мы хотим, чтобы все наши маршруты задач требовали аутентификации пользователя. Другими словами, пользователь должен зайти в систему, чтобы создать задачу. Поэтому мы должны ограничить доступ к нашим маршрутам задач и открывать доступ только аутентифицированным пользователям. Laravel контролирует это, используя посредника.
Чтобы проверять аутентификацию пользователя в каждом действии, мы можем добавить вызов метода middleware в конструктор контроллера. Все доступные посредники маршрута определены в файле app/Http/Kernel.php. В данном случае мы хотим добавить посредник auth ко всем методам контроллера:
```
<?php

namespace App\Http\Controllers;

use App\Http\Requests;
use Illuminate\Http\Request;
use App\Http\Controllers\Controller;

class TaskController extends Controller
{
    /**
     * Создание нового экземпляра контроллера.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware('auth');
    }
}
```
Основная часть этого приложения содержит одно представление с формой добавления новых задач, а также список всех текущих задач. Чтобы помочь вам визуализировать представление, мы сделали скриншот законченного приложения со стандартными стилями Bootstrap CSS:
Отлично. Макет нашего сайта завершён. Теперь мы должны определить представление, которое содержит форму создания новой задачи, а также таблицу со списком всех существующих задач. Давайте определим это представление в файле resources/views/tasks/index.blade.php, оно будет соответствовать методу index в нашем TaskController.
```
<!-- resources/views/tasks/index.blade.php -->
```
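Для наглядности ниже приведён примерный набросок этого представления с формой создания задачи (предполагается, что основной макет приложения называется layouts.app; разметка и классы условны):

```
<!-- resources/views/tasks/index.blade.php -->
@extends('layouts.app')

@section('content')
    <!-- Вывод ошибок валидации -->
    @include('common.errors')

    <!-- Форма создания новой задачи -->
    <form action="/task" method="POST">
        {{ csrf_field() }}

        <label for="task-name">Задача</label>
        <input type="text" name="name" id="task-name">

        <button type="submit">Добавить задачу</button>
    </form>

    <!-- TODO: таблица текущих задач -->
@endsection
```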
# Несколько поясняющих замечаний
Директива `@include('common.errors')` загрузит шаблон, расположенный в resources/views/common/errors.blade.php. Он пока не определён, но скоро мы это сделаем!
Итак, мы определили основной макет и представление для нашего приложения. Давайте вернём это представление из метода index контроллера TaskController:
```
/**
 * Отображение списка всех задач пользователя.
 *
 * @param  Request  $request
 * @return Response
 */
public function index(Request $request)
{
    return view('tasks.index');
}
```
Теперь мы готовы добавить код в наш метод контроллера маршрута POST /task, чтобы обработать входящие данные из формы и добавить новую задачу в БД.
Теперь, когда у нас есть форма на нашем представлении, мы должны добавить код к нашему методу TaskController@store, чтобы проверить входящие данные из формы и создать новую задачу. Во-первых, давайте проверим ввод.
Для этой формы мы создадим обязательное поле name и зададим, что оно должно содержать не более 255 символов. Если проверка не пройдёт, то мы перенаправим пользователя назад к URL /tasks, а также возвратим ему в сессию его введённые данные с указанием на ошибки:
```
/**
 * Создание новой задачи.
 *
 * @param  Request  $request
 * @return Response
 */
public function store(Request $request)
{
    $this->validate($request, [
        'name' => 'required|max:255',
    ]);

    // Создание задачи...
}
```
Если вы создавали приложение по краткому руководству, то могли заметить, что код валидации там другой. Так как мы находимся в контроллере, мы можем использовать удобный типаж ValidatesRequests, который включён в базовый контроллер Laravel. Этот типаж представляет простой метод validate, который принимает запрос и массив правил валидации.
Нам даже не нужно самим определять результат валидации и даже не нужно вручную делать перенаправление. Если валидация не пройдена для заданных правил, пользователь будет автоматически перенаправлен туда, откуда он пришёл, и ошибки будут автоматически высвечены в сессии. Отлично!
Помните, что мы использовали директиву `@include('common.errors')` в нашем представлении, чтобы вывести ошибки валидации. Внутри этого шаблона доступна переменная `$errors`, которая содержит все высвеченные в сессию ошибки валидации.
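Примерный набросок такого шаблона может выглядеть так (разметка условна):

```
<!-- resources/views/common/errors.blade.php -->
@if (count($errors) > 0)
    <div class="alert alert-danger">
        <strong>Что-то пошло не так!</strong>

        <ul>
            @foreach ($errors->all() as $error)
                <li>{{ $error }}</li>
            @endforeach
        </ul>
    </div>
@endif
```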
Теперь, когда обрабатывается ввод данных, давайте создадим новую задачу, продолжая заполнять наш маршрут. Как только новая задача будет создана, мы перенаправим пользователя назад к URL /tasks. Чтобы создать задачу, мы будем использовать мощность Eloquent отношений.
Большинство отношений Laravel предоставляют метод `create()`, который принимает массив атрибутов и автоматически устанавливает значение внешнего ключа на соответствующей модели перед сохранением в базе данных. В этом случае метод `create()` автоматически установит свойство user_id данной задачи равным ID текущего аутентифицированного пользователя, к которому мы обращаемся с помощью `$request->user()`:

```
/**
 * Создание новой задачи.
 *
 * @param  Request  $request
 * @return Response
 */
public function store(Request $request)
{
    $this->validate($request, [
        'name' => 'required|max:255',
    ]);

    $request->user()->tasks()->create([
        'name' => $request->name,
    ]);

    return redirect('/tasks');
}
```
Отлично! Теперь мы можем создавать задачи. Давайте продолжим создание нашего представления, добавив список всех существующих задач.
## Отображение существующих задач
Во-первых, мы должны отредактировать наш метод TaskController@index, чтобы передать все существующие задачи в представление. Функция `view()` принимает вторым аргументом массив данных, которые будут доступны представлению. Каждый ключ массива станет переменной в представлении. Например, мы можем сделать так:

```
/**
 * Показать список всех задач пользователя.
 *
 * @param  Request  $request
 * @return Response
 */
public function index(Request $request)
{
    $tasks = $request->user()->tasks()->get();

    // для версии 5.1:
    // $tasks = Task::where('user_id', $request->user()->id)->get();

    return view('tasks.index', [
        'tasks' => $tasks,
    ]);
}
```
Тем не менее, давайте рассмотрим некоторые возможности внедрения зависимостей от Laravel, чтобы внедрить TaskRepository в наш TaskController, который мы будем использовать для доступа ко всем нашим данным.
### Внедрение зависимостей
Сервис-контейнер Laravel является одной из самых мощных возможностей всего фреймворка. После прочтения базового руководства, не забудьте прочитать всю документацию по контейнеру.
# Создание репозитория
Как мы уже упоминали ранее, мы хотим определить TaskRepository, который содержит логику доступа ко всем данным для модели Task. Это будет особенно полезно, если приложение будет расти, и вам понадобится повсеместно использовать Eloquent запросы в приложении.
Итак, давайте создадим папку app/Repositories и добавим класс TaskRepository. Помните, что все app папки Laravel автоматически загружаются с помощью стандарта автоматической загрузки PSR-4, так что вы можете создать сколько угодно дополнительных каталогов:
добавлено в 5.2 ()
```
<?php

namespace App\Repositories;

use App\User;

class TaskRepository
{
    /**
     * Получить все задачи заданного пользователя.
     *
     * @param  User  $user
     * @return Collection
     */
    public function forUser(User $user)
    {
        return $user->tasks()
                    ->orderBy('created_at', 'asc')
                    ->get();
    }
}
```
добавлено в 5.1 ()
```
<?php

namespace App\Repositories;

use App\User;
use App\Task;

class TaskRepository
{
    /**
     * Получить все задачи заданного пользователя.
     *
     * @param  User  $user
     * @return Collection
     */
    public function forUser(User $user)
    {
        return Task::where('user_id', $user->id)
                    ->orderBy('created_at', 'asc')
                    ->get();
    }
}
```
# Внедрение репозитория
Как только наш репозиторий определён, мы можем просто передать в конструктор наш TaskController и использовать его в нашем маршруте index. Так как Laravel использует контейнер, чтобы разрешить все контроллеры, наши зависимости будут автоматически внедрены в экземпляр контроллера:
```
<?php

namespace App\Http\Controllers;

use App\Task;
use App\Http\Requests;
use Illuminate\Http\Request;
use App\Http\Controllers\Controller;
use App\Repositories\TaskRepository;

class TaskController extends Controller
{
    /**
     * Экземпляр TaskRepository.
     *
     * @var TaskRepository
     */
    protected $tasks;

    /**
     * Создание нового экземпляра контроллера.
     *
     * @param  TaskRepository  $tasks
     * @return void
     */
    public function __construct(TaskRepository $tasks)
    {
        $this->middleware('auth');

        $this->tasks = $tasks;
    }

    /**
     * Показать список всех задач пользователя.
     *
     * @param  Request  $request
     * @return Response
     */
    public function index(Request $request)
    {
        return view('tasks.index', [
            'tasks' => $this->tasks->forUser($request->user()),
        ]);
    }
}
```
Когда данные переданы, мы можем обращаться к задачам в нашем представлении tasks/index.blade.php и выводить их на экран таблицей. Blade-конструкция `@foreach` позволяет писать краткие циклы, которые компилируются в очень быстрый простой PHP-код:
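Например, примерно так (разметка и классы условны):

```
<!-- resources/views/tasks/index.blade.php -->
@if (count($tasks) > 0)
    <table class="table table-striped task-table">
        <thead>
            <th>Задача</th>
            <th> </th>
        </thead>
        <tbody>
            @foreach ($tasks as $task)
                <tr>
                    <td class="table-text"><div>{{ $task->name }}</div></td>

                    <td><!-- TODO: кнопка удаления --></td>
                </tr>
            @endforeach
        </tbody>
    </table>
@endif
```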
Мы оставили отметку «TODO» в коде, где предположительно будет находиться наша кнопка. Давайте добавим кнопку удаления к каждой строке нашего списка задач в представлении tasks/index.blade.php. Мы создадим маленькую однокнопочную форму для каждой задачи в списке. После нажатия кнопки приложению будет отправляться запрос DELETE /task, который будет обращаться к методу TaskController@destroy:
добавлено в 5.2 ()
```
<tr>
    <!-- Имя задачи -->
    <td class="table-text">
        <div>{{ $task->name }}</div>
    </td>

    <!-- Кнопка Удалить -->
    <td>
        <form action="{{ url('task/'.$task->id) }}" method="POST">
            {{ csrf_field() }}
            {{ method_field('DELETE') }}

            <button type="submit" id="delete-task-{{ $task->id }}" class="btn btn-danger">
                <i class="fa fa-btn fa-trash"></i>Удалить
            </button>
        </form>
    </td>
</tr>
```

добавлено в 5.1 ()

```
<tr>
    <!-- Имя задачи -->
    <td class="table-text">
        <div>{{ $task->name }}</div>
    </td>

    <!-- Кнопка Удалить -->
    <td>
        <form action="/task/{{ $task->id }}" method="POST">
            {{ csrf_field() }}
            {{ method_field('DELETE') }}

            <button>Удалить задачу</button>
        </form>
    </td>
</tr>
```

### Привязка модели маршрута
Теперь мы почти готовы определить метод `PHPdestroy()` в нашем TaskController. Но для начала давайте пересмотрим наше объявление маршрута и метод контроллера для этого маршрута:
```
Route::delete('/task/{task}', 'TaskController@destroy');
```
Если не добавлять никакого дополнительного кода, то Laravel внедрит ID заданной задачи в метод TaskController@destroy:
```
/**
 * Уничтожить заданную задачу.
 *
 * @param  Request  $request
 * @param  string  $taskId
 * @return Response
 */
public function destroy(Request $request, $taskId)
{
    //
}
```

Однако в первую очередь в этом методе мы должны будем получить экземпляр Task из базы данных, используя пришедший ID. Было бы неплохо, если бы Laravel мог просто внедрить экземпляр Task, соответствующий этому ID? Давайте сделаем это возможным! В нашем файле app/Providers/RouteServiceProvider.php в методе `boot()` добавим следующую строку:
```
$router->model('task', 'App\Task');
```
Эта небольшая строчка заставит Laravel извлекать модель Task, соответствующую заданному ID, каждый раз, когда он видит `{task}` в объявлении маршрута. Теперь мы можем определить наш метод `destroy()`:

```
/**
 * Уничтожить заданную задачу.
 *
 * @param  Request  $request
 * @param  Task  $task
 * @return Response
 */
public function destroy(Request $request, Task $task)
{
    //
}
```
добавлено в 5.2 ()
Поскольку переменная `PHP{task}` в нашем маршруте совпадает с переменной `PHP$task` , определённой в методе нашего контроллера, неявная привязка модели Laravel автоматически внедрит экземпляр соответствующей модели Task.
Теперь у нас есть экземпляр Task, внедрённый в метод `PHPdestroy()` . Тем не менее, нет никакой гарантии того, что аутентифицированный пользователь на самом деле «владеет» данной задачей. Например, злоумышленник может сделать запрос на удаление задачи другого пользователя, передавая случайный ID задачи по URL /tasks/{task}. Поэтому мы должны использовать возможности авторизации Laravel, чтобы быть уверенным, что аутентифицированный пользователь на самом деле является владельцем экземпляра Task, который был внедрён в маршрут.
# Создание политики
Laravel использует «политики» для организации логики авторизации в простых небольших классах. Как правило, каждая политика соответствует модели. Давайте создадим TaskPolicy, используя команду Artisan, которая поместит сгенерированный файл в app/Policies/TaskPolicy.php:
```
php artisan make:policy TaskPolicy
```
Следующим шагом будет добавление метода `destroy()` к политике. Этот метод получает экземпляр User и экземпляр Task. Метод должен просто проверить, соответствует ли ID пользователя полю user_id задачи. Фактически, все методы политики должны возвращать true или false:

```
<?php

namespace App\Policies;

use App\User;
use App\Task;
use Illuminate\Auth\Access\HandlesAuthorization;

class TaskPolicy
{
    use HandlesAuthorization;

    /**
     * Определяем, может ли данный пользователь удалить данную задачу.
     *
     * @param  User  $user
     * @param  Task  $task
     * @return bool
     */
    public function destroy(User $user, Task $task)
    {
        return $user->id === $task->user_id;
    }
}
```

В конце нам надо связать нашу модель Task с TaskPolicy. Мы можем сделать это, добавив одну строчку к свойству `$policies` в файле app/Providers/AuthServiceProvider.php. Она проинформирует Laravel о том, какая политика должна быть использована каждый раз, когда мы пытаемся авторизовать действие над экземпляром Task:

```
/**
 * Маппинг политик приложения.
 *
 * @var array
 */
protected $policies = [
    'App\Task' => 'App\Policies\TaskPolicy',

    // для версии 5.1:
    // Task::class => TaskPolicy::class,
];
```
# Авторизация действия
Теперь, когда наша политика написана, давайте используем её в нашем методе `destroy()`. Все контроллеры Laravel могут вызвать метод `authorize()`, который предоставлен типажом AuthorizesRequests:

```
/**
 * Уничтожение заданной задачи.
 *
 * @param  Request  $request
 * @param  Task  $task
 * @return Response
 */
public function destroy(Request $request, Task $task)
{
    $this->authorize('destroy', $task);

    // Удаление задачи...
}
```

Давайте немного разберём этот вызов. Первым параметром, переданным методу `authorize()`, является имя метода политики, который мы хотим вызвать. Второй параметр - экземпляр модели, который нас сейчас интересует. Помните, мы недавно сообщили Laravel, что наша модель Task соответствует нашему TaskPolicy, поэтому фреймворк знает, у какой политики вызвать метод `destroy()`. Текущий пользователь будет автоматически передан в метод политики, так что нам не нужно передавать его вручную. Если действие авторизовано, наш код продолжит выполняться как обычно. Однако, если действие не авторизовано (то есть метод политики `destroy()` вернул false), будет выброшено исключение 403, и пользователь увидит страницу ошибки.
Существует несколько других способов взаимодействия с сервисами авторизации от Laravel. Обязательно ознакомьтесь с ними в полной документации по авторизации.
Наконец, давайте добавим к нашему методу `destroy()` логику удаления заданной задачи. Мы можем использовать Eloquent-метод `delete()`, чтобы удалить заданный экземпляр модели из базы данных. Когда запись будет удалена, мы перенаправим пользователя назад к URL /tasks:

```
/**
 * Уничтожение заданной задачи.
 *
 * @param  Request  $request
 * @param  Task  $task
 * @return Response
 */
public function destroy(Request $request, Task $task)
{
    $this->authorize('destroy', $task);

    $task->delete();

    return redirect('/tasks');
}
```
# Маршрутизация
## Простейшая маршрутизация
В Laravel простейшие маршруты принимают URI (путь) и функцию-замыкание, предоставляя очень простой и выразительный метод определения маршрутов:

```
Route::get('foo', function () {
    return 'Hello World';
});

Route::post('foo/bar', function () {
    return 'Hello World';
});

Route::put('foo/bar', function () {
    //
});

Route::delete('foo/bar', function () {
    //
});
```

Все маршруты (routes) Laravel определены в файлах маршрутов, которые расположены в каталоге routes. Эти файлы автоматически загружаются фреймворком. В файле routes/web.php определены маршруты для вашего web-интерфейса. Эти маршруты входят в группу посредников web, которые обеспечивают такие возможности, как состояние сессии и CSRF-защита. Маршруты из файла routes/api.php не поддерживают состояния и входят в группу посредников api.

Для большинства приложений сначала определяются маршруты в файле routes/web.php.

Доступные методы маршрутизатора

Маршрутизатор позволяет регистрировать маршруты для любого HTTP-запроса:

```
Route::get($uri, $callback);
Route::post($uri, $callback);
Route::put($uri, $callback);
Route::patch($uri, $callback);
Route::delete($uri, $callback);
Route::options($uri, $callback);
```

Регистрация маршрута для нескольких типов запросов

Иногда необходимо зарегистрировать маршрут, который отвечает на HTTP-запросы нескольких типов. Это можно сделать методом `match()` фасада Route. Или вы можете зарегистрировать маршрут, отвечающий на HTTP-запросы всех типов, с помощью метода `any()`:

```
Route::match(['get', 'post'], '/', function () {
    return 'Hello World';
});

Route::any('foo', function () {
    return 'Hello World';
});
```
Генерирование адресов URL для маршрутов
Вы можете генерировать URL для маршрутов вашего приложения методом `PHPurl()` : `$url = url('foo');`
добавлено в 5.3 ()
Все HTML-формы, ведущие к маршрутам POST, PUT или DELETE, которые определены в файле маршрутов web, должны иметь поле CSRF-токена. Иначе запрос будет отклонён. Подробнее о CSRF-защите читайте в разделе о CSRF:
```
<form method="POST" action="/profile">
    {{ csrf_field() }}
    ...
</form>
```

## Параметры маршрутов
### Обязательные параметры
Разумеется, иногда вам может понадобиться захватить сегменты URI в вашем маршруте. Например, если вам необходимо захватить ID пользователя из URL. Это можно сделать, определив параметры маршрута:
```
Route::get('user/{id}', function ($id) {
    return 'User '.$id;
});
```
Вы можете определить сколько угодно параметров:
```
Route::get('posts/{post}/comments/{comment}', function ($postId, $commentId) {
```
// });
Параметры маршрута всегда заключаются в фигурные скобки и должны состоять из буквенных символов. Параметры маршрута не могут содержать символ -. Используйте вместо него подчёркивание _.
### Необязательные параметры маршрута
Иногда необходимо указать параметр маршрута, но при этом сделать его наличие необязательным. Это можно сделать, поместив знак вопроса ? после названия параметра. Не забудьте задать значение по умолчанию для соответствующей переменной маршрута:
```
Route::get('user/{name?}', function ($name = null) {
```
return $name; }); Route::get('user/{name?}', function ($name = 'John') { return $name; });
### Ограничения регулярными выражениями
Вы можете ограничить формат параметров вашего маршрута с помощью метода `where()` на экземпляре маршрута. Метод `where()` принимает название параметра и регулярное выражение, определяющее ограничения для параметра:

```
Route::get('user/{name}', function ($name) {
    //
})->where('name', '[A-Za-z]+');

Route::get('user/{id}', function ($id) {
    //
})->where('id', '[0-9]+');

Route::get('user/{id}/{name}', function ($id, $name) {
    //
})->where(['id' => '[0-9]+', 'name' => '[a-z]+']);
```

Если вы хотите, чтобы параметр был всегда ограничен заданным регулярным выражением, то можете использовать метод `pattern()`. Вам следует определить эти шаблоны в методе `boot()` вашего RouteServiceProvider:
добавлено в 5.3 ()
`/**` * Определение привязок вашей модели, шаблонов фильтрации и т.д. * * @return void */ public function boot() { Route::pattern('id', '[0-9]+'); parent::boot(); }
добавлено в 5.2 () 5.1 () 5.0 ()
`/**` * Определение привязок вашей модели, шаблонов фильтрации и т.д. * * @param \Illuminate\Routing\Router $router * @return void */ public function boot(Router $router) { $router->pattern('id', '[0-9]+'); parent::boot($router); }
Когда шаблон определён, он автоматически применяется ко всем маршрутам, использующим этот параметр:
```
Route::get('user/{id}', function ($id) {
    // Выполняется только если {id} числовой.
});
```
добавлено в 5.0 ()
Доступ к значению параметра маршрута
Если вам нужен доступ к значению параметра маршрута извне, то вы можете использовать метод `input()`:

```
if ($route->input('id') == 1) {
    //
}
```

Также вы можете получить параметры текущего маршрута через экземпляр Illuminate\Http\Request. Получить объект текущего запроса можно через фасад Request или указав тип Illuminate\Http\Request для внедрения зависимости:

```
Route::get('user/{id}', function (Request $request, $id) {
    if ($request->route('id')) {
        //
    }
});
```
## Именованные маршруты
Именованные маршруты позволяют удобно генерировать URL-адреса и делать переадресацию на конкретный маршрут.
```
Route::get('user/profile', [
    'as' => 'profile', 'uses' => 'UserController@showProfile'
]);
```

Или, вместо указания имени маршрута в массиве параметров маршрута, вы можете «прицепить» метод `name()` к определению маршрута:
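Например:

```
Route::get('user/profile', 'UserController@showProfile')->name('profile');
```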
Группы маршрутов и именованные маршруты
Если вы используете группы маршрутов, то можете использовать ключ `PHPas` в массиве атрибутов группы маршрутов, так вы можете задать общий префикс для имён маршрутов в группе:
```
Route::group(['as' => 'admin::'], function () {
```
Route::get('dashboard', ['as' => 'dashboard', function () { // Маршрут назван "admin::dashboard" }]); });
добавлено в 5.0 ()
Генерирование URL адресов для именованных маршрутов
Когда вы назначили имя маршруту, вы можете использовать это имя для генерирования URL адресов и переадресаций глобальным методом `PHProute()` :
```
// Генерирование URL...
$url = route('profile');

// Генерирование переадресаций...
return redirect()->route('profile');
```

Если у именованного маршрута есть параметры, вы можете передать их вторым аргументом метода `route()`. Эти параметры будут автоматически вставлены в соответствующие места URL:

```
Route::get('user/{id}/profile', function ($id) {
    //
})->name('profile');

$url = route('profile', ['id' => 1]);
```
## Группы маршрутов
Группы маршрутов позволяют использовать общие атрибуты маршрутов, такие как посредники и пространства имён, для большого числа маршрутов без необходимости определять эти атрибуты для каждого отдельного маршрута. Общие атрибуты указываются в виде массива первым аргументом метода `PHPRoute::group()` .
Посредники применяются ко всем маршрутам в группе путём указания списка этих посредников с параметром middleware в массиве групповых атрибутов. Посредники выполняются в порядке перечисления в этом массиве:
```
Route::group(['middleware' => 'auth'], function () {
```
Route::get('/', function () { // Использует посредника Auth }); Route::get('user/profile', function () { // Использует посредника Auth }); });
Другой типичный пример использования групп маршрутов — назначение одного пространства имён PHP для группы контроллеров, используя параметр namespace в массиве группы:
```
Route::group(['namespace' => 'Admin'], function() {
```
// Контроллеры в пространстве имён "App\Http\Controllers\Admin" //для версии 5.2 и ранее: Route::group(['namespace' => 'User'], function() { // Контроллеры в пространстве имён "App\Http\Controllers\Admin\User" }); });
Помните, по умолчанию RouteServiceProvider включает ваши файлы маршрутов в группу пространства имён, позволяя вам регистрировать маршруты контроллера без указания полного префикса пространства имён App\Http\Controllers. Поэтому нам надо указать лишь ту часть пространства имён, которая следует за базовым пространством имён App\Http\Controllers.
### Доменная маршрутизация
Группы маршрутов можно использовать для обработки маршрутизации поддоменов. Поддоменам можно назначать параметры маршрутов также как URI маршрутов, поэтому вы можете захватить часть поддомена и использовать в своём маршруте или контроллере. Поддомен можно указать с помощью ключа domain в массиве атрибутов группы:
```
Route::group(['domain' => '{account}.myapp.com'], function () {
```
Route::get('user/{id}', function ($account, $id) { // }); });
### Префиксы маршрута
Атрибут группы prefix можно использовать для указания URI-префикса каждого маршрута в группе. Например, если вы хотите добавить admin ко всем URI маршрутов в группе:
```
Route::group(['prefix' => 'admin'], function () {
```
Route::get('users', function () { // Соответствует URL "/admin/users" }); });
добавлено в 5.0 ()
## CSRF-защита
Данный раздел актуален для версии 5.2 и ранее. В версии 5.3 он был перенесён в отдельную статью о CSRF.
### Введение
В Laravel легко защитить ваше приложение от CSRF-атаки межсайтовой подделки запросов. Межсайтовая подделка запроса — тип вредоносных атак, при котором неавторизованные команды выполняются от имени авторизованного пользователя.
Laravel автоматически генерирует CSRF-"токен" для каждой активной сессии пользователя в приложении. Этот токен используется для проверки того, что запрос в приложение отправляет именно авторизованный пользователь.
Каждый раз определяя HTML-форму в своём приложении, вы должны включать в неё скрытое поле CSRF-токена, чтобы посредник CSRF-защиты смог проверить запрос. Чтобы сгенерировать скрытое поле ввода _token, содержащее CSRF-токен, используйте вспомогательную функцию `csrf_field()`:

```
// Vanilla PHP
<?php echo csrf_field(); ?>

// Blade Template Syntax
{{ csrf_field() }}
```

Функция `csrf_field()` генерирует такой HTML:

```
<input type="hidden" name="_token" value="<?php echo csrf_token(); ?>">
```

Вам не нужно вручную проверять CSRF-токен в запросах типа POST, PUT или DELETE. Посредник VerifyCsrfToken, который входит в группу посредников web, будет автоматически проверять совпадение токена входящего запроса с токеном, хранящимся в сессии.
### Исключение URI из CSRF-защиты
Иногда бывает необходимо исключить набор URI из CSRF-защиты. Например, при использовании Stripe для обработки платежей и их системы веб-хуков вам необходимо исключить ваш маршрут обработчика веб-хуков из CSRF-защиты Laravel.
Вы можете исключить URI, определив их маршруты вне группы посредников web, которая включена в файл по умолчанию routes.php, или добавив эти URI в свойство `$except` посредника VerifyCsrfToken:

```
<?php

namespace App\Http\Middleware;

use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as BaseVerifier;

class VerifyCsrfToken extends BaseVerifier
{
    /**
     * URI, которые надо исключить из проверки CSRF.
     *
     * @var array
     */
    protected $except = [
        'stripe/*',
    ];
}
```
### X-CSRF-TOKEN
Кроме поиска CSRF-ключа в параметрах «POST», посредник VerifyCsrfToken также проверит наличие заголовка запроса X-CSRF-TOKEN. Например, вы можете хранить ключ в теге «meta»:
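Такой тег может выглядеть, например, так:

```
<meta name="csrf-token" content="{{ csrf_token() }}">
```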
Когда вы создали тег meta, вы можете добавлять токен в заголовки всех запросов с помощью библиотеки, такой как jQuery. Так обеспечивается простая и удобная CSRF-защита для ваших приложений на основе AJAX:
`$.ajaxSetup({` headers: { 'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content') } });
### X-XSRF-TOKEN
Laravel также хранит CSRF-ключ в cookie XSRF-TOKEN. Вы можете использовать значение из cookie, чтобы задать заголовок запроса X-XSRF-TOKEN. Некоторые фреймворки JavaScript, такие как Angular, делают это автоматически. Вряд ли вам придётся использовать это значение вручную.
Разница между X-CSRF-TOKEN и X-XSRF-TOKEN в том, что первый использует простое текстовое значение, а второй — шифрованное значение, т.к. cookie в Laravel всегда шифруются. Если для передачи значения ключа вы используете функцию `PHPcsrf_token()` , то вам вероятно нужно использовать заголовок X-CSRF-TOKEN.
## Привязка модели
При внедрении ID модели в действие маршрута или контроллера бывает часто необходимо получить модель, соответствующую этому ID. Привязка моделей — удобный способ автоматического внедрения экземпляров модели напрямую в ваши маршруты. Например, вместо внедрения ID пользователя вы можете внедрить весь экземпляр модели User, который соответствует данному ID.
### Неявная привязка
Laravel автоматически включает модели Eloquent, определённые в действиях маршрута или контроллера, чьи переменные имеют имена, совпадающие с сегментом маршрута. Например:
```
Route::get('api/users/{user}', function (App\User $user) {
    return $user->email;
});
```

В этом примере Laravel автоматически внедрит экземпляр модели, который имеет ID, совпадающий с соответствующим значением из URI запроса, потому что переменная `$user`, определённая в маршруте, совпадает с сегментом {user} в URI маршрута. Если совпадающий экземпляр модели не найден в базе данных, будет автоматически сгенерирован HTTP-отклик 404.

Если вы хотите, чтобы при получении данной модели вместо столбца id использовался другой столбец базы данных, вы можете переопределить метод `getRouteKeyName()` в своей модели Eloquent:

```
/**
 * Получить ключ маршрута для модели.
 *
 * @return string
 */
public function getRouteKeyName()
{
    return 'slug';
}
```
### Явная привязка
Для регистрации явной привязки используйте метод маршрута `PHPmodel()` для указания класса для данного параметра. Вам надо определить явные привязки вашей модели в методе `PHPboot()` класса RouteServiceProvider:
добавлено в 5.3 ()
{ parent::boot(); Route::model('user', App\User::class); }
добавлено в 5.2 () 5.1 () 5.0 ()
```
public function boot(Router $router)
{
    parent::boot($router);

    $router->model('user', 'App\User');
}
```

Затем определите маршрут, содержащий параметр `{user}`:

```
Route::get('profile/{user}', function (App\User $user) {
    //
});
```

Из-за того, что мы ранее привязали все параметры `{user}` к модели App\User, её экземпляр будет внедрён в маршрут. Таким образом, к примеру, запрос profile/1 внедрит объект User, полученный из БД, который соответствует ID 1.
Если совпадающий экземпляр модели не найден в базе данных, будет автоматически сгенерирован HTTP-отклик 404.
Изменение логики принятия решения
добавлено в 5.3 ()
Если захотите использовать собственную логику принятия решения, используйте метод `PHPRoute::bind()` . Переданное в метод `PHPbind()` замыкание получит значение сегмента URI, и должно вернуть экземпляр класса, который вы хотите внедрить в маршрут:
{ parent::boot(); Route::bind('user', function ($value) { return App\User::where('name', $value)->first(); }); }
добавлено в 5.2 ()
```
$router->bind('user', function ($value) {
```
return App\User::where('name', $value)->first(); });
добавлено в 5.0 ()
```
Route::model('user', 'User', function()
```
{ throw new NotFoundHttpException; });
```
Route::bind('user', function($value)
```
{ return User::where('name', $value)->first(); });
## Подмена методов
HTML-формы не поддерживают действия PUT, PATCH и DELETE. Поэтому при определении маршрутов PUT, PATCH и DELETE, вызываемых из HTML-формы, вам надо добавить в неё скрытое поле _method. Переданное в этом поле значение будет использовано как метод HTTP-запроса:
> xml<form action="/foo/bar" method="POST"> <input type="hidden" name="_method" value="PUT"> <input type="hidden" name="_token" value="{{ csrf_token() }}"> </form>
Используйте вспомогательный метод `PHPmethod_field()` , чтобы сгенерировать скрытое поле для _method:
```
{{ method_field('PUT') }}
```
## Получение текущего маршрута
добавлено в 5.2 ()
Метод `PHPRoute::current()` вернёт маршрут, обрабатывающий текущий HTTP-запрос, позволяя вам проверить весь экземпляр Illuminate\Routing\Route:
```
$route = Route::current();

$name = $route->getName();
$actionName = $route->getActionName();
```

Также вы можете использовать вспомогательные методы `currentRouteName()` и `currentRouteAction()` фасада Route, чтобы получить имя или действие текущего маршрута:

```
$name = Route::currentRouteName();

$action = Route::currentRouteAction();
```
Чтобы изучить все доступные методы, обратитесь к документации по API класса, лежащего в основе фасада Route, и экземпляра Route.
## Ошибки 404
Есть два способа вручную вызвать исключение 404 (Not Found) из маршрута. Первый - методом `abort()`, который просто выбрасывает Symfony\Component\HttpKernel\Exception\HttpException с указанным кодом состояния:

```
abort(404);
```
Второй — вручную вызвав экземпляр Symfony\Component\HttpKernel\Exception\NotFoundHttpException.
Больше информации о том, как обрабатывать исключения 404 и отправлять собственный ответ для таких ошибок, вы найдёте в разделе об ошибках.
# Middleware
Посредники (англ. middleware) предоставляют удобный механизм для фильтрации HTTP-запросов вашего приложения. Например, в Laravel есть посредник для проверки аутентификации пользователя. Если пользователь не аутентифицирован, посредник перенаправит его на экран входа в систему. Если же пользователь аутентифицирован, посредник позволит запросу пройти далее в приложение.
Конечно, посредники нужны не только для авторизации. CORS-посредник может пригодиться для добавления особых заголовков ко всем ответам в вашем приложении. А посредник логов может зарегистрировать все входящие запросы.
В Laravel есть несколько стандартных посредников, включая посредники для аутентификации и CSRF-защиты. Все они расположены в каталоге app/Http/Middleware.
## Создание посредника
Чтобы создать посредника, используйте Artisan-команду `make:middleware`:

```
php artisan make:middleware CheckAge
```
Эта команда поместит новый класс CheckAge в ваш каталог app/Http/Middleware. В этом посреднике мы будем пропускать только те запросы, в которых age будет больше 200, а во всех остальных случаях будем перенаправлять пользователей на «home» URI.
```
<?php

namespace App\Http\Middleware;

use Closure;

class CheckAge
{
    /**
     * Обработка входящего запроса.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  \Closure  $next
     * @return mixed
     */
    public function handle($request, Closure $next)
    {
        if ($request->age <= 200) {
            return redirect('home');
        }

        return $next($request);
    }
}
```

Как видите, если переданный age меньше или равен 200, то посредник вернёт клиенту переадресацию; иначе запрос будет передан далее в приложение. Чтобы передать запрос дальше в приложение (позволяя посреднику «пропустить» его), просто вызовите замыкание `$next` с параметром `$request`.
Проще всего представить посредника как набор «уровней», которые должен пройти HTTP-запрос, прежде чем он дойдёт до вашего приложения. Каждый уровень может проверить запрос и даже вовсе отклонить его.
### Выполнение посредника «до» или «после» запроса
Момент, в который сработает посредник — до или после запроса, зависит от него самого. Например, этот посредник выполнит некоторую задачу прежде, чем запрос будет обработан приложением:
`<?php` namespace App\Http\Middleware; use Closure; class BeforeMiddleware { public function handle($request, Closure $next) { // Выполнение действия return $next($request); } }
А этот посредник выполнит задачу после того, как запрос будет обработан приложением:
`<?php` namespace App\Http\Middleware; use Closure; class AfterMiddleware { public function handle($request, Closure $next) { $response = $next($request); // Выполнение действия return $response; } }
## Регистрация посредника
### Глобальный посредник
Если вы хотите, чтобы посредник запускался для каждого HTTP-запроса в вашем приложении, добавьте этот посредник в свойство `PHP$middleware` класса app/Http/Kernel.php.
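Примерно так (список приведён условно, в вашем приложении он будет отличаться):

```
// app/Http/Kernel.php

protected $middleware = [
    \Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode::class,
    // ...
    \App\Http\Middleware\CheckAge::class,
];
```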
### Назначение посредника для маршрутов
Если вы хотите назначить посредника для конкретных маршрутов, то сначала вам надо добавить ключ посредника в класс app/Http/Kernel.php. По умолчанию свойство `PHP$routeMiddleware` этого класса содержит записи посредников Laravel. Чтобы добавить ваш собственный посредник, просто добавьте его к этому списку и присвойте ему ключ на свой выбор. Например:
```
// в классе App\Http\Kernel...

protected $routeMiddleware = [
    'auth.basic' => \Illuminate\Auth\Middleware\AuthenticateWithBasicAuth::class,
    'guest' => \App\Http\Middleware\RedirectIfAuthenticated::class,

    // для версии 5.2 и выше:
    'throttle' => \Illuminate\Routing\Middleware\ThrottleRequests::class,

    // для версии 5.3 и выше:
    'auth' => \Illuminate\Auth\Middleware\Authenticate::class,
    'bindings' => \Illuminate\Routing\Middleware\SubstituteBindings::class,
    'can' => \Illuminate\Auth\Middleware\Authorize::class,

    // для версии 5.2 и ранее:
    // 'auth' => \App\Http\Middleware\Authenticate::class,
];
```
добавлено в 5.3 ()
Когда посредник определён в HTTP-ядре, вы можете использовать метод `middleware()` для назначения посредника на маршрут:

```
Route::get('admin/profile', function () {
    //
})->middleware('auth');
```

В версиях 5.2 и ранее вместо этого используется ключ middleware в массиве параметров маршрута:

```
Route::get('admin/profile', ['middleware' => 'auth', function () {
    //
}]);
```

Используйте массив для назначения нескольких посредников для маршрута:

```
Route::get('/', ['middleware' => ['first', 'second'], function () {
    //
}]);
```

Вместо использования массива вы можете использовать сцепку метода `middleware()` с определением маршрута:

```
Route::get('/', function () {
    //
})->middleware(['first', 'second']);
```
При назначении посредника вы можете указать полное имя класса:
```
use App\Http\Middleware\CheckAge;
```
Route::get('admin/profile', function () { // })->middleware(CheckAge::class);
Иногда бывает полезно объединить несколько посредников под одним ключом, чтобы проще назначать их на маршруты. Это можно сделать при помощи свойства `PHP$middlewareGroups` вашего HTTP-ядра.
Изначально в Laravel есть группы посредников web и api, которые содержат те посредники, которые часто применяются к вашим маршрутам веб-UI и API:
```
/**
 * Группы посредников маршрутов приложения.
 *
 * @var array
 */
protected $middlewareGroups = [
    'web' => [
        \App\Http\Middleware\EncryptCookies::class,
        \Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
        \Illuminate\Session\Middleware\StartSession::class,
        \Illuminate\View\Middleware\ShareErrorsFromSession::class,
        \App\Http\Middleware\VerifyCsrfToken::class,
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],

    'api' => [
        'throttle:60,1',
        'auth:api',
    ],
];
```
Группы посредников могут быть назначены на маршруты и действия контроллера с помощью того же синтаксиса, что и для одного посредника. Группы посредников просто делают проще единое назначение нескольких посредников на маршрут:
```
Route::get('/', function () {
    //
})->middleware('web');

Route::group(['middleware' => ['web']], function () {
    //
});
```
Группа посредников web автоматически применяется к вашему файлу routes/web.php сервис-провайдером RouteServiceProvider.
## Параметры посредника
В посредник можно передавать дополнительные параметры. Например, если в вашем приложении необходима проверка того, есть ли у аутентифицированного пользователя определённая «роль» для выполнения данного действия, вы можете создать посредника CheckRole, который принимает название роли в качестве дополнительного аргумента.
Дополнительные параметры посредника будут передаваться в посредник после аргумента `$next`:

```
<?php

namespace App\Http\Middleware;

use Closure;

class CheckRole
{
    /**
     * Обработка входящего запроса.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  \Closure  $next
     * @param  string  $role
     * @return mixed
     */
    public function handle($request, Closure $next, $role)
    {
        if (! $request->user()->hasRole($role)) {
            // Redirect...
        }

        return $next($request);
    }
}
```
Параметры посредника можно указать при определении маршрута, отделив название посредника от параметров двоеточием :. Сами параметры разделяются запятыми:
```
Route::put('post/{id}', function ($id) {
```
// })->middleware('role:editor');
добавлено в 5.2 () 5.1 () 5.0 ()
## Посредник terminable
Иногда посредник должен выполнить некоторые действия уже после отправки HTTP-отклика браузеру. Например, посредник «session», поставляемый с Laravel, записывает данные сессии в хранилище после отправки ответа в браузер. Если вы определите метод `terminate()` в посреднике, то он будет автоматически вызываться после отправки отклика в браузер:

```
<?php

namespace Illuminate\Session\Middleware;

use Closure;

class StartSession
{
    public function handle($request, Closure $next)
    {
        return $next($request);
    }

    public function terminate($request, $response)
    {
        // Сохранение данных сессии...
    }
}
```

Метод `terminate()` получает и запрос, и ответ. Определив посредника как «terminable», вы должны добавить его в список глобальных посредников в вашем HTTP-ядре. При вызове метода `terminate()` в посреднике Laravel получит свежий экземпляр посредника из сервис-контейнера. Если вы хотите использовать тот же самый экземпляр посредника при вызовах методов `handle()` и `terminate()`, зарегистрируйте посредника в контейнере при помощи метода `singleton()`.
# Контроллеры
Вместо того, чтобы определять всю логику обработки запросов в виде замыканий в файлах маршуртов, вы можете организовать её с помощью классов контроллеров. Контроллеры могут группировать связанную с обработкой HTTP-запросов логику в отдельный класс. Контроллеры хранятся в папке app/Http/Controllers.
## Простейшие контроллеры
### Определение контроллеров
Ниже приведён пример простейшего класса контроллера. Обратите внимание, контроллер наследует базовый класс контроллера, встроенный в Laravel. Базовый класс предоставляет несколько удобных методов, таких как метод `middleware()`, используемый для назначения посредников на действия контроллера:

```
<?php

namespace App\Http\Controllers;

use App\User;
use App\Http\Controllers\Controller;

class UserController extends Controller
{
    /**
     * Показать профиль данного пользователя.
     *
     * @param  int  $id
     * @return Response
     */
    public function show($id)
    {
        return view('user.profile', ['user' => User::findOrFail($id)]);
    }
}
```
Мы можем определить маршрут для действия (action) этого контроллера вот так:
```
Route::get('user/{id}', 'UserController@show');
```
Теперь при соответствии запроса указанному URI маршрута будет выполняться метод `PHPshow()` класса UserController. Само собой параметры маршрута также будут переданы в метод. Контроллерам не обязательно наследовать базовый класс. Но тогда у вас не будет таких удобных возможностей, как методы `PHPmiddleware()` , `PHPvalidate()` и `PHPdispatch()` .
### Контроллеры и пространства имён
Важно помнить, что при определении маршрута контроллера нам не надо указывать полное пространство имён контроллера, а только ту часть имени класса, которая следует за «корнем» пространства имён — App\Http\Controllers. Потому что RouteServiceProvider загружает ваши файлы маршрутов вместе с группой маршрутизации, содержащей корневое пространство имён контроллера.
Если вы решите разместить свои контроллеры в поддиректориях App\Http\Controllers, то просто используйте конкретное имя класса относительно корня пространства имён App\Http\Controllers. Тогда, если полный путь к вашему классу будет App\Http\Controllers\Photos\AdminController, то вам надо зарегистрировать маршруты к контроллеру вот так:
```
Route::get('foo', 'Photos\AdminController@method');
```
### Контроллеры одного действия
Для определения контроллера, обрабатывающего всего одно действие, поместите в контроллер единственный метод `PHP__invoke()` : `<?php` namespace App\Http\Controllers; use App\User; use App\Http\Controllers\Controller; class ShowProfile extends Controller { /** * Показать профиль данного пользователя. * * @param int $id * @return Response */ public function __invoke($id) { return view('user.profile', ['user' => User::findOrFail($id)]); } }
При регистрации маршрутов для контроллеров одного действия вам не надо указывать метод:
```
Route::get('user/{id}', 'ShowProfile');
```
Подобно маршрутам замыканий, вы можете указать имена для маршрутов контроллеров:
```
Route::get('foo', ['uses' => 'FooController@method', 'as' => 'name']);
```
Также вы можете использовать вспомогательную функцию `PHProute()` , чтобы сгенерировать URL маршрута названного контроллера:
```
$url = route('name');
```
URL-адреса для действий контроллера
Также вы можете использовать вспомогательный метод `PHPaction()` , чтобы сгенерировать URL, используя имена классов и методов контроллера. И снова, нам надо указать только ту часть имени контроллера, которая идёт после базового пространства имён App\Http\Controllers:
```
$url = action('FooController@method');
```
Получить имя действия, которое выполняется в данном запросе, можно методом `currentRouteAction()` фасада Route:
```
$action = Route::currentRouteAction();
```
добавлено в 5.0 ()
Для получения URL для действия контроллера используйте метод `PHPaction()` :
```
$url = action('App\Http\Controllers\FooController@method');
```
Если вы хотите получить URL для действия контроллера, используя только часть имени класса относительно пространства имён контроллера, зарегистрируйте корневое пространство имён контроллера с помощью генератора URL:
```
URL::setRootControllerNamespace('App\Http\Controllers');
```
$url = action('FooController@method');
## Посредник контроллера
Посредник можно указать для маршрутов контроллера в файлах маршрутов:
```
Route::get('profile', 'UserController@show')->middleware('auth');
```
```
Route::get('profile', [
    'middleware' => 'auth',
    'uses' => 'UserController@showProfile'
]);
```

Но удобнее указать посредника в конструкторе вашего контроллера. Используя метод `middleware()` в конструкторе контроллера, вы легко можете назначить посредника для действий контроллера. Вы можете даже ограничить использование посредника, назначив его только для определённых методов класса контроллера:
добавлено в 5.3 ()
```
class UserController extends Controller
{
    /**
     * Создание нового экземпляра контроллера.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware('auth');

        $this->middleware('log')->only('index');

        $this->middleware('subscribed')->except('store');
    }
}
```
добавлено в 5.2 () 5.1 () 5.0 ()
{ /** * Создание нового экземпляра UserController. */ public function __construct() { $this->middleware('auth'); $this->middleware('log', ['only' => [ 'fooAction', 'barAction', ]]); $this->middleware('subscribed', ['except' => [ 'fooAction', 'barAction', ]]); } }
добавлено в 5.3 ()
Контроллеры также позволяют регистрировать посредников с помощью замыканий. Это удобный способ определения посредника для одного контроллера, не требующий определения целого класса посредника:
```
$this->middleware(function ($request, $next) {
```
// ... return $next($request); });
Вы можете назначить посредника на определённый набор действий контроллера, но если возникает такая необходимость, возможно ваш контроллер стал слишком велик. Вместо этого вы можете разбить контроллер на несколько маленьких контроллеров.
## Контроллеры ресурсов
Маршрутизация ресурсов Laravel назначает обычные CRUD-маршруты на контроллеры одной строчкой кода. Например, вы можете создать контроллер, обрабатывающий все HTTP-запросы к фотографиям, хранимым вашим приложением. Вы можете быстро создать такой контроллер с помощью Artisan-команды `shmake:controller` :
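Например (флаг --resource доступен в новых версиях фреймворка; в более старых версиях методы ресурса генерируются по умолчанию):

```
php artisan make:controller PhotoController --resource
```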
Эта команда сгенерирует контроллер app/Http/Controllers/PhotoController.php. Контроллер будет содержать метод для каждой доступной операции с ресурсами.
Теперь мы можем зарегистрировать маршрут контроллера ресурса:
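Например, регистрация контроллера ресурса могла бы выглядеть так:

```
Route::resource('photos', 'PhotoController');
```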
Один этот вызов создаёт множество маршрутов для обработки различных действий для ресурса. Сгенерированный контроллер уже имеет методы-заглушки для каждого из этих действий с комментариями о том, какие URI и типы запросов они обрабатывают.
Действия, обрабатываемые контроллером ресурсов
| Тип | URI | Действие | Имя маршрута |
| --- | --- | --- | --- |
| GET | /photos | index | photo.index |
| GET | /photos/create | create | photo.create |
| POST | /photos | store | photo.store |
| GET | /photos/{photo} | show | photo.show |
| GET | /photos/{photo}/edit | edit | photo.edit |
| PUT/PATCH | /photos/{photo} | update | photo.update |
| DELETE | /photos/{photo} | destroy | photo.destroy |
### Частичные маршруты ресурсов
При объявлении маршрута вы можете указать подмножество всех возможных действий, которые должен обрабатывать контроллер вместо полного набора стандартных действий:
```
Route::resource('photo', 'PhotoController', ['only' => [
```
'index', 'show' ]]); Route::resource('photo', 'PhotoController', ['except' => [ 'create', 'store', 'update', 'destroy' ]]);
### Именование маршрутов ресурсов
По умолчанию все действия контроллера ресурсов имеют имена маршрутов, но вы можете переопределить эти имена, передав массив names вместе с остальными параметрами:
```
Route::resource('photo', 'PhotoController', ['names' => [
```
'create' => 'photo.build' ]]);
Иногда необходимо определить маршруты для «вложенных» ресурсов. Например, фото-ресурс может иметь множество «комментариев», которые могут быть прикреплены к фотографии. Используйте «точечную» нотацию в объявлении маршрута для контроллеров «вложенных» ресурсов:
```
Route::resource('photos.comments', 'PhotoCommentController');
```
Этот маршрут зарегистрирует «вложенный» ресурс, к которому можно обратиться по такому URL: photos/{photos}/comments/{comments}.
`<?php` namespace App\Http\Controllers; use App\Http\Controllers\Controller; class PhotoCommentController extends Controller { /** * Показать определённый комментарий к фото. * * @param int $photoId * @param int $commentId * @return Response */ public function show($photoId, $commentId) { // } }
### Именование параметров маршрута ресурса
По умолчанию `PHPRoute::resource` создаст параметры для ваших маршрутов ресурсов на основе имени ресурса в единственном числе. Это легко можно изменить для каждого ресурса, передав parameters в массив опций. Массив parameters должен быть ассоциативным массивом имён ресурсов и имён параметров:
```
Route::resource('user', 'AdminUserController', ['parameters' => [
```
'user' => 'admin_user' ]]);
Этот пример генерирует следующие URI для маршрута ресурса show:
`/user/{admin_user}`
добавлено в 5.2 ()
Вместо передачи массива имён параметров вы можете просто передать слово singular, чтобы Laravel использовал имена параметров по умолчанию, но при этом «выделил» их (сингуляризовал):
```
Route::resource('users.photos', 'PhotoController', [
```
'parameters' => 'singular' ]); // /users/{user}/photos/{photo}
Или, вы можете глобально задать, чтобы параметры маршрута вашего ресурса были сингулярными, или задать глобальное сопоставление имён параметров ресурса:
```
Route::singularResourceParameters();
```
Route::resourceParameters([ 'user' => 'person', 'photo' => 'image' ]);
При изменении параметров ресурса важно помнить о приоритете именования:

1. Параметры, явно переданные в `Route::resource`.
2. Глобальное сопоставление параметров, заданное в `Route::resourceParameters`.
3. Настройка singular, переданная через массив parameters в `Route::resource` или заданная в `Route::singularResourceParameters`.
4. Поведение по умолчанию.
### Добавление дополнительных маршрутов в контроллеры ресурсов
Если вам надо добавить дополнительные маршруты в контроллер ресурсов, не входящие в набор маршрутов ресурсов по умолчанию, их надо определить до вызова `Route::resource()`, иначе маршруты, определённые методом `resource()`, могут нечаянно «победить» ваши дополнительные маршруты:

```
Route::get('photos/popular', 'PhotoController@method');

Route::resource('photos', 'PhotoController');
```
Старайтесь, чтобы контроллеры были узкоспециализированными. Если вам постоянно требуются методы вне стандартного набора действий с ресурсами, попробуйте разделить контроллер на два небольших контроллера.
## Неявные контроллеры
Laravel позволяет вам легко создавать единый маршрут для обработки всех действий в классе контроллера. Для начала зарегистрируйте маршрут методом `Route::controller()`. Этот метод принимает два аргумента. Первый - обрабатываемый контроллером базовый URI, а второй - имя класса контроллера:
```
Route::controller('users', 'UserController');
```
После регистрации просто добавьте методы в контроллер, назвав их по имени URI с большой буквы и с префиксом в виде типа HTTP-запроса (HTTP verb), который они обрабатывают:
`<?php` namespace App\Http\Controllers; class UserController extends Controller { /** * Отклик на запрос GET /users */ public function getIndex() { // } /** * Отклик на запрос GET /users/show/1 */ public function getShow($id) { // } /** * Отклик на запрос GET /users/admin-profile */ public function getAdminProfile() { // } /** * Отклик на запрос POST /users/profile */ public function postProfile() { // } }
Как видите, в этом примере методы index обрабатывают корневой URI контроллера — в нашем случае это users.
добавлено в 5.0 ()
Если вы хотите назвать некоторые маршруты в контроллере, вы можете передать массив имён в качестве третьего аргумента в метод `PHPcontroller()` :
```
Route::controller('users', 'UserController', [
```
'getShow' => 'user.show', ]);
## Внедрение зависимостей и контроллеры
Сервис-контейнер в Laravel используется для работы всех контроллеров Laravel. В результате вы можете указывать типы любых зависимостей, которые могут потребоваться вашему контроллеру в его конструкторе. Заявленные зависимости будут автоматически получены и внедрены в экземпляр контроллера:
`<?php` namespace App\Http\Controllers; //для версии 5.1 и ранее: //use Illuminate\Routing\Controller; use App\Repositories\UserRepository; class UserController extends Controller { /** * Экземпляр репозитория пользователя. */ protected $users; /** * Создание нового экземпляра контроллера. * * @param UserRepository $users * @return void */ public function __construct(UserRepository $users) { $this->users = $users; } }
Разумеется, вы можете также указать тип любого контракта Laravel. Если контейнер может с ним работать, значит вы можете указывать его тип. В некоторых случаях внедрение зависимостей в контроллер обеспечивает лучшую тестируемость приложения.
Кроме внедрения в конструктор, вы также можете указывать типы зависимостей в методах вашего контроллера. Распространённый пример внедрения в метод — внедрение экземпляра Illuminate\Http\Request в один из методов контроллера:
`<?php` namespace App\Http\Controllers; use Illuminate\Http\Request; //для версии 5.1 и ранее: //use Illuminate\Routing\Controller; class UserController extends Controller { /** * Сохранить нового пользователя. * * @param Request $request * @return Response */ public function store(Request $request) { $name = $request->name; // } }
Если метод вашего контроллера также ожидает данных из параметра маршрута, просто перечислите аргументы маршрута после остальных зависимостей. Например, если ваш маршрут определён так:
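Например, такой маршрут мог бы быть определён так:

```
Route::put('user/{id}', 'UserController@update');
```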
Вы по-прежнему можете указать тип Illuminate\Http\Request и обращаться к параметру id, определив метод контроллера вот так:
`<?php` namespace App\Http\Controllers; use Illuminate\Http\Request; //для версии 5.1 и ранее: //use Illuminate\Routing\Controller; class UserController extends Controller { /** * Обновить данного пользователя. * * @param Request $request * @param string $id * * //для версии 5.1 и ранее * @param int $id * * @return Response */ public function update(Request $request, $id) { // } }
добавлено в 5.0 ()
Внедрение в метод полностью совместимо с привязкой к модели. Контейнер автоматически определит, какие из аргументов привязаны к модели, а какие аргументы должны быть внедрены.
## Кэширование маршрутов
Маршруты на основе замыканий нельзя кэшировать. Чтобы использовать кэширование маршрутов, необходимо перевести все маршруты замыканий на классы контроллера.
Если ваше приложение использует исключительно маршруты на основе контроллеров, то вы можете воспользоваться преимуществом кэширования маршрутов в Laravel. Использование кэша маршрутов радикально уменьшает время, требуемое для регистрации всех маршрутов вашего приложения. В некоторых случаях регистрация маршрутов может ускориться до 100 раз. Для создания кэша маршрутов просто выполните Artisan-команду `route:cache`:

```
php artisan route:cache
```

После выполнения этой команды при каждом запросе будет загружаться кэшированный файл маршрутов. Помните, что после добавления новых маршрутов необходимо заново сгенерировать кэш, поэтому команду `route:cache` стоит выполнять при развёртывании проекта. Для очистки кэша маршрутов используйте команду `route:clear`:

```
php artisan route:clear
```
# Запросы и ввод
## Получение экземпляра запроса
Для получения экземпляра текущего HTTP-запроса через внедрение зависимости вам надо указать тип класса в методе вашего контроллера. Экземпляр входящего запроса будет автоматически внедрён сервис-контейнером:
`<?php` namespace App\Http\Controllers; use Illuminate\Http\Request; //для версии 5.1 и ранее: //use Illuminate\Routing\Controller; class UserController extends Controller { /** * Сохранить нового пользователя. * * @param Request $request * @return Response */ public function store(Request $request) { $name = $request->input('name'); // } }
Внедрение зависимости и параметры маршрута
Если метод вашего контроллера также ожидает ввода из параметров маршрута, вам надо перечислить параметры вашего маршрута после других зависимостей. Например, если ваш маршрут определён вот так:
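Такой маршрут может выглядеть, например, так (набросок с условными именами):

```
Route::put('user/{id}', 'UserController@update');
```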
То вы так же можете указать тип Illuminate\Http\Request и получить параметр id вашего маршрута, определив метод вашего контроллера таким образом:
`<?php` namespace App\Http\Controllers; use Illuminate\Http\Request; //для версии 5.1 и ранее: //use Illuminate\Routing\Controller; class UserController extends Controller { /** * Обновить указанного пользователя. * * @param Request $request * * для версии 5.2 и выше: * @param string $id * * для версии 5.1 и ранее: * @param int $id * * @return Response */ public function update(Request $request, $id) { // } }
добавлено в 5.3 ()
добавлено в 5.0 ()
Через фасад

Фасад Request предоставит вам доступ к текущему запросу, который привязан в контейнере. Например:
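Примерно так может выглядеть обращение через фасад (минимальный набросок, предполагающий поле name в запросе):

```
$name = Request::input('name');
```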
Помните, если ваш код находится в собственном пространстве имён, то вам надо будет импортировать фасад Request с помощью оператора `PHPuse Request;` в начале вашего файла.
### Методы и путь запроса
Экземпляр класса Illuminate\Http\Request содержит множество методов для изучения входящего в ваше приложение запроса. Он наследует класс `Symfony\Component\HttpFoundation\Request`. Ниже мы обсудим несколько наиболее полезных методов этого класса.

Метод `path()` возвращает информацию о пути запроса. Например, если входящий запрос обращён к http://domain.com/foo/bar, то метод `path()` вернёт foo/bar:
```
$uri = $request->path();
```
Метод `PHPis()` позволяет проверить соответствие пути запроса заданной маске. При использовании этого метода можно использовать символ * в качестве маски:
```
if ($request->is('admin/*')) {
    //
}
```

Для получения полного URL без строки запроса используйте метод `url()`:

```
// без строки запроса...
$url = $request->url();
```
добавлено в 5.2 ()
Метод `PHPmethod()` вернёт HTTP-действие запроса. Вы можете использовать метод `PHPisMethod()` для проверки соответствия HTTP-действия заданной строке:
```
$method = $request->method();
```
if ($request->isMethod('post')) { // }
добавлено в 5.0 ()
```
$uri = Request::path();
```
Соответствует ли запрос маске пути?
```
if (Request::is('admin/*'))
```
```
$url = Request::url();
```
`if (Request::ajax())` { // }
```
$method = Request::method();
```
if (Request::isMethod('post')) { // }
### Запросы PSR-7
Стандарт PSR-7 описывает интерфейсы для HTTP-сообщений, включая запросы и отклики. Если вы хотите получить экземпляр запроса PSR-7 вместо Laravel-запроса, сначала вам надо установить несколько библиотек. Laravel использует компонент Symfony HTTP Message Bridge для конвертации обычных запросов и откликов Laravel в совместимые с PSR-7:
> composer require symfony/psr-http-message-bridge
> composer require zendframework/zend-diactoros
Когда вы установите эти библиотеки, вы можете получить запрос PSR-7, указав интерфейс запроса в замыкании вашего маршрута или методе контроллера:
```
use Psr\Http\Message\ServerRequestInterface;
```
Route::get('/', function (ServerRequestInterface $request) { // });
Если вы возвращаете экземпляр отклика PSR-7 из маршрута или контроллера, он будет автоматически конвертирован обратно в экземпляр отклика Laravel и будет отображён фреймворком.
## Получение ввода
Вы можете получить все данные ввода в виде массива с помощью метода `PHPall()` :
```
$input = $request->all();
```
Вы можете получить доступ ко всем введённым данным из экземпляра Illuminate\Http\Request, используя всего несколько простых методов. Вам не нужно думать о том, какой тип HTTP-запроса был использован (GET, POST и т.д.) — метод `PHPinput()` работает одинаково для любого из них:
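Например, получить значение поля name можно так (минимальный набросок):

```
$name = $request->input('name');
```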
Вы можете передать значение по умолчанию вторым аргументом метода `PHPinput()` . Это значение будет возвращено, когда запрашиваемый ввод отсутствует в запросе:
```
$name = $request->input('name', 'Sally');
```
При работе с формами, имеющими переменные-массивы, вы можете использовать точечную запись для обращения к массивам:
```
$name = $request->input('products.0.name');
```
$names = $request->input('products.*.name');
Получение ввода через динамические свойства
Также вы можете получать пользовательский ввод, используя динамические свойства экземпляра Illuminate\Http\Request. Например, если одна из форм приложения содержит поле name, вы можете получить значение отправленного поля вот так:
```
$name = $request->name;
```
При использовании динамических свойств Laravel сначала ищет значение параметра в данных запроса. Если его там нет, Laravel будет искать поле в параметрах маршрута.
Получение значения из ввода JSON
При отправке JSON-запросов в приложение вы можете получить доступ к JSON-данным методом `PHPinput()` , поскольку заголовок Content-Type запроса установлен в application/json. Вы даже можете использовать «точечный» синтаксис, чтобы погружаться в массивы JSON:
```
$name = $request->input('user.name');
```
Определение наличия значения переменной
Для определения наличия значения в запросе используйте метод `PHPhas()` . Метод `PHPhas()` возвращает true, если значение существует и является непустой строкой:
```
if ($request->has('name')) {
```
// }
Получение части переменных запроса
Если вам необходимо получить только часть данных ввода, используйте методы `PHPonly()` и `PHPexcept()` . Оба этих метода принимают один массив или динамический список аргументов:
```
$input = $request->only(['username', 'password']);
```
$input = $request->only('username', 'password'); $input = $request->except(['credit_card']); $input = $request->except('credit_card');
добавлено в 5.0 ()
Получение переменной или значения по умолчанию, если переменная не была передана
```
$name = Request::input('name', 'Sally');
```
```
if (Request::has('name'))
```
{ // }
Получение всех переменных запроса
```
$input = Request::all();
```
Получение некоторых переменных
```
$input = Request::only('username', 'password');
```
$input = Request::except('credit_card');
При работе с переменными-массивами вы можете использовать точечную запись для доступа к их элементам:
```
$input = Request::input('products.0.name');
```
### Старый ввод
Вам может пригодиться сохранение пользовательского ввода между двумя запросами. Например, после проверки формы на корректность вы можете заполнить её старыми значениями в случае ошибки. Однако, если вы используете встроенные в Laravel возможности проверки ввода, то вряд ли вам понадобится использовать эти методы вручную, так как встроенные механизмы Laravel вызовут их автоматически.
Метод `PHPflash()` класса Illuminate\Http\Request передаст текущий ввод в сессию, и он будет доступен во время следующего пользовательского запроса к приложению: `$request->flash();`
добавлено в 5.0 ()
`Request::flash();` Для передачи некоторых переменных в сессию используйте методы `PHPflashOnly()` и `PHPflashExcept()` . Эти методы полезны для хранения важной информации (например, паролей) вне сессии:
$request->flashOnly(['username', 'email']); //для версии 5.1 и ранее: //$request->flashOnly('username', 'email'); $request->flashExcept('password');
добавлено в 5.0 ()
```
Request::flashOnly('username', 'email');
```
Request::flashExcept('password'); Поскольку часто требуется передать ввод в сессии и затем перенаправить на предыдущую страницу, вы можете легко прицепить передачу ввода к перенаправлению с помощью метода `PHPwithInput()` :
return redirect('form')->withInput( $request->except('password') );
добавлено в 5.0 ()
return redirect('form')->withInput(Request::except('password')); Для получения переданного ввода из предыдущего запроса используйте метод `PHPold()` на экземпляре Request. Метод `PHPold()` получит переданные ранее данные ввода из сессии:
```
$username = $request->old('username');
```
```
$username = Request::old('username');
```
В Laravel есть глобальная вспомогательная функция `PHPold()` . Когда вы выводите старый ввод в шаблоне Blade, удобнее использовать эту функцию `PHPold` . Если для данного поля нет старого ввода, вернётся `PHPnull` :
<input type="text" name="username" value="{{ old('username') }}"> //для версии 5.1 и ранее: //{{ old('username') }}
### Cookies
Все cookie, создаваемые Laravel, шифруются и подписываются специальным кодом — таким образом, если клиент изменит их значение, то они станут недействительными. Для получения значения cookie из запроса используйте метод `PHPcookie()` на экземпляре Illuminate\Http\Request:
```
$value = $request->cookie('name');
```
```
$value = Request::cookie('name');
```
добавлено в 5.3 ()
Вы можете прикрепить cookie к исходящему экземпляру Illuminate\Http\Response с помощью метода `PHPcookie()` . Вы должны передать в этот метод имя, значение и количество минут, в течение которого cookie должен считаться действующим:
```
return response('Hello World')->cookie(
    'name', 'value', $minutes
);
```

Метод `cookie()` также принимает ещё несколько аргументов, используемых менее часто. В целом эти аргументы имеют то же назначение и значение, что и аргументы, передаваемые в PHP-функцию setcookie:

```
return response('Hello World')->cookie(
    'name', 'value', $minutes, $path, $domain, $secure, $httpOnly
);
```

Если вы хотите сгенерировать экземпляр Symfony\Component\HttpFoundation\Cookie, который позднее можно будет передать экземпляру отклика, используйте глобальную вспомогательную функцию `cookie()`. Этот cookie не будет отправлен обратно клиенту до тех пор, пока не будет прикреплён к экземпляру отклика:
```
$cookie = cookie('name', 'value', $minutes);
```
return response('Hello World')->cookie($cookie);
добавлено в 5.2 () 5.1 () 5.0 ()
Добавление cookie к ответу

Глобальная вспомогательная функция `cookie()` служит простой фабрикой для генерирования экземпляров Symfony\Component\HttpFoundation\Cookie. Cookie можно прикреплять к экземпляру Illuminate\Http\Response с помощью метода `withCookie()`:
```
$response = new Illuminate\Http\Response('Hello World');
```
//для версии 5.2 и выше: $response->withCookie('name', 'value', $minutes); //для версии 5.1 и ранее: //$response->withCookie(cookie('name', 'value', $minutes)); return $response; Для создания долгоживущих cookie, которые будут существовать в течение 5 лет, используйте метод `PHPforever()` на фабрике cookie, сначала вызвав метод `PHPcookie()` без аргументов, а затем «прицепить» метод `PHPforever()` к возвращённой фабрике cookie:
```
$response->withCookie(cookie()->forever('name', 'value'));
```
Вы также можете поставить cookie в очередь для добавления к исходящему ответу, даже до того как этот ответ был создан:
use Cookie; use Illuminate\Routing\Controller; class UserController extends Controller { /** * Обновить ресурс * * @return Response */ public function update() { Cookie::queue('name', 'value'); return response('Hello World'); } }
### Файлы
### Получение загруженных файлов
Получить доступ к загруженным файлам из экземпляра Illuminate\Http\Request можно с помощью метода `PHPfile()` или динамических свойств. Метод `PHPfile()` возвращает экземпляр класса Symfony\Component\HttpFoundation\File\UploadedFile, который наследует PHP-класс SplFileInfo и предоставляет различные методы для взаимодействия с файлами:
```
$file = $request->file('photo');
```
$file = $request->photo;
добавлено в 5.0 ()
```
$file = Request::file('photo');
```
Также вы можете определить, есть ли в запросе файл, с помощью метода `PHPhasFile()` :
```
if ($request->hasFile('photo')) {
```
```
if (Request::hasFile('photo'))
```
{ // }
Прошёл ли загруженный файл проверку?
Вдобавок к проверке наличия файла вы можете проверить, что при загрузке файла не возникло никаких проблем, с помощью метода `PHPisValid()` :
```
if ($request->file('photo')->isValid()) {
```
```
if (Request::file('photo')->isValid())
```
{ // }
добавлено в 5.3 ()
Путь и расширение файла

В классе UploadedFile также есть методы для получения полного пути файла и его расширения. Метод `extension()` пытается определить расширение файла на основе его содержимого. Это расширение может отличаться от расширения, указанного клиентом:
```
$path = $request->photo->path();
```
$extension = $request->photo->extension();
добавлено в 5.2 () 5.1 () 5.0 ()
Перемещение загруженного файла
Для перемещения загруженного файла в новое место используйте метод `PHPmove()` . Этот метод переместит файл из его временного места хранения (определённого в ваших настройках PHP) на место постоянного хранения по вашему выбору:
```
$request->file('photo')->move($destinationPath);
```
$request->file('photo')->move($destinationPath, $fileName);
добавлено в 5.0 ()
```
Request::file('photo')->move($destinationPath);
```
Request::file('photo')->move($destinationPath, $fileName);
Другие методы для работы с файлами
Есть множество других методов для экземпляров UploadedFile. Загляните в документацию по API класса за более подробной информацией об этих методах.
Для хранения загруженного файла обычно используется одна из настроенных файловых систем. В классе UploadedFile есть метод `PHPstore()` , который переместит загруженный файл на один из ваших дисков, который может располагаться в вашей локальной файловой системе или даже в облачном хранилище, таком как Amazon S3. Метод `PHPstore()` принимает путь, куда необходимо сохранить файл относительно настроенного корневого каталога файловой системы. Этот путь не должен включать имя файла, поскольку будет автоматически сгенерирован UUID в качестве имени файла. Также метод `PHPstore()` принимает второй необязательный аргумент — имя диска для сохранения файла. Этот метод вернёт путь файла относительно корня диска:
```
$path = $request->photo->store('images');
```
$path = $request->photo->store('images', 's3'); Если вы не хотите автоматически генерировать имя файла, используйте метод `PHPstoreAs()` , который принимает аргументы: путь, имя файла и имя диска:
```
$path = $request->photo->storeAs('images', 'filename.jpg');
```
$path = $request->photo->storeAs('images', 'filename.jpg', 's3');
# HTTP-отклики
## Создание откликов
Все маршруты и контроллеры должны возвращать отклик для отправки обратно в браузер. Laravel предоставляет несколько разных способов для возврата откликов. Самый основной отклик — простой возврат строки из маршрута или контроллера. Фреймворк автоматически конвертирует строку в полный HTTP-отклик:
```
Route::get('/', function () {
    return 'Hello World';
});
```
добавлено в 5.3 ()
Помимо строк из маршрутов и контроллеров можно также возвращать массивы. Фреймворк автоматически конвертирует массив в JSON-отклик:
```
Route::get('/', function () {
    return [1, 2, 3];
});
```
Вы знали, что вы можете также возвращать коллекции Eloquent из маршрутов и контроллеров? Они будут автоматически сконвертированы в JSON. Попробуйте!
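Примерный набросок (модель App\User здесь условная):

```
Route::get('users', function () {
    return App\User::all();
});
```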
Чаще всего вы будете возвращать из действий маршрутов не просто строки или массивы. Вместо этого вы будете возвращать полные экземпляры Illuminate\Http\Response или представления.
Возврат полного экземпляра Response позволяет вам изменять HTTP-код состояния и заголовки откликов. Экземпляр Response наследует класс Symfony\Component\HttpFoundation\Response, который предоставляет множество методов для создания HTTP-откликов:
```
Route::get('home', function () {
    return response('Hello World', 200)
                  ->header('Content-Type', 'text/plain');
});
```
добавлено в 5.2 () 5.1 () 5.0 ()
Но для большинства маршрутов и действий контроллера вы будете возвращать полный объект Illuminate\Http\Response или представление. Возвращая полный экземпляр Response, вы можете изменить код HTTP-статуса и заголовок отклика. Объект Response наследует от класса Symfony\Component\HttpFoundation\Response, предоставляя множество методов для построения HTTP-откликов:
```
use Illuminate\Http\Response;
```
Route::get('home', function () { return (new Response($content, $status)) ->header('Content-Type', $value); }); Для удобства вы можете использовать вспомогательную функцию `PHPresponse()` :
return response($content, $status) ->header('Content-Type', $value); });
Полный список доступных методов класса Response можно найти в документации по его API и в документации по API Symfony.
Добавление заголовков в отклики
Имейте в виду, что большинство методов Response можно объединять в цепочки, что делает создание откликов более гибким. Например, вы можете использовать метод `header()` для добавления нескольких заголовков в отклик перед его отправкой пользователю:
```
return response($content)
            ->header('Content-Type', $type)
            ->header('X-Header-One', 'Header Value')
            ->header('X-Header-Two', 'Header Value');
```
добавлено в 5.0 ()
Использование метода `PHPcookie()` на экземплярах отклика позволяет вам легко добавить cookie в отклик. Например, вы можете использовать метод `PHPcookie()` для создания cookie и гибкого добавления его в экземпляр отклика:
```
return response($content)
            ->header('Content-Type', $type)
            ->cookie('name', 'value', $minutes);
```

Метод `cookie()` также принимает ещё несколько аргументов, используемых реже. В целом эти аргументы имеют такое же назначение и значение, как аргументы, передаваемые в PHP-функцию setcookie:
добавлено в 5.2 ()
Или, вы можете использовать метод `PHPqueue()` фасада Cookie для создания cookie, который будет автоматически добавлен в исходящий отклик: `<?php` namespace App\Http\Controllers; use Cookie; use App\Http\Controllers\Controller; class DashboardController extends Controller { /** * Показать панель управления приложения. * * @return Response */ public function index() { Cookie::queue('saw_dashboard', true, 15); return view('dashboard'); } }
В этом примере cookie saw_dashboard будет автоматически добавлен в исходящий отклик, без необходимости вручную прикреплять cookie к конкретному экземпляру отклика.
Использование вспомогательного метода `PHPwithCookie()` на экземпляре отклика позволяет вам легко добавить cookie в отклик. Например, вы можете использовать метод `PHPwithCookie()` для создания cookie и добавления его в экземпляр отклика:
```
return response($content)->header('Content-Type', $type)
                         ->withCookie('name', 'value');
```

Метод `withCookie()` принимает дополнительные необязательные аргументы для настройки свойств cookie:
По умолчанию все генерируемые cookie в Laravel шифруются и подписываются, поэтому они не могут быть прочитаны или изменены клиентом. Если вы хотите отключить шифрование для определённого набора cookie в вашем приложении, вы можете использовать свойство `PHP$except` посредника App\Http\Middleware\EncryptCookies, который находится в каталоге app/Http/Middleware: `/**` * Названия тех cookie, которые не надо шифровать. * * @var array */ protected $except = [ 'cookie_name', ];
## Переадресация
Отклики для переадресации — это объекты класса Illuminate\Http\RedirectResponse, и они содержат надлежащие заголовки, необходимые для переадресации пользователя на другой URL. Есть несколько способов создания объекта RedirectResponse. Простейший из них — использовать глобальный вспомогательный метод `PHPredirect()` :
```
Route::get('dashboard', function () {
    return redirect('home/dashboard');
});
```

Если вы захотите переадресовать пользователя на предыдущую страницу (например, после отправки формы с некорректными данными), используйте глобальную функцию `back()`. Поскольку для этого используются сессии, маршрут, вызывающий метод `back()`, должен использовать группу посредников web или применять всех посредников сессий:
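Например, такой маршрут (примерный набросок) вернёт пользователя назад вместе с введёнными данными:

```
Route::post('user/profile', function () {
    // Проверка данных запроса...

    return back()->withInput();
});
```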
### Переадресация на именованный маршрут
Когда вы вызываете функцию `PHPredirect()` без параметров, возвращается экземпляр Illuminate\Routing\Redirector, позволяя вам вызывать любой метод объекта Redirector. Например, для создания RedirectResponse на именованный маршрут вы можете использовать метод `PHProute()` :
return redirect()->route('profile', ['id' => 1]);
Заполнение параметров через модели Eloquent
Если вы переадресовываете на маршрут с параметром «ID», который был взят из модели Eloquent, то вы можете просто передать саму модель. ID будет извлечён автоматически:
return redirect()->route('profile', [$user]);
добавлено в 5.3 ()
### Переадресация на действие контроллера
Также вы можете создать отклик-переадресацию на действие контроллера. Для этого передайте контроллер и название действия в метод `PHPaction()` .
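Например (имя контроллера здесь условное):

```
return redirect()->action('HomeController@index');
```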
добавлено в 5.0 ()
Если маршрут вашего контроллера требует параметров, вы можете передать их вторым аргументом метода `PHPaction()` :
```
return redirect()->action(
    'UserController@profile', ['id' => 1]
);
```
добавлено в 5.0 ()
Переадресация на действие контроллера с именованными параметрами
```
return redirect()->action('App\Http\Controllers\UserController@profile', ['user' => 1]);
```
### Переадресация с одноразовыми переменными сессии
Переадресация на новый URL и передача данных в сессию обычно происходят в одно время. Обычно это делается после успешного выполнения действия, когда вы передаёте сообщение об этом в сессию. Для удобства вы можете создать экземпляр RedirectResponse и передать данные в сессию в одной гибкой связке методов:
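Примерный набросок такой связки (маршрут и текст сообщения здесь условные):

```
Route::post('user/profile', function () {
    // Обновление профиля пользователя...

    return redirect('dashboard')->with('status', 'Профиль обновлён!');
});
```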
## Другие типы откликов
Метод `PHPresponse()` можно использовать для создания экземпляров откликов других типов. Когда `PHPresponse()` вызывается без аргументов, возвращается реализация контракта Illuminate\Contracts\Routing\ResponseFactory. Этот контракт предоставляет несколько полезных методов для создания откликов.
### Отклики представления
Если вам необходим контроль над статусом и заголовком отклика, но при этом необходимо возвращать представление в качестве содержимого отклика, используйте метод `PHPview()` : `return response()` ->view('hello', $data, 200) ->header('Content-Type', $type); Конечно, если вам не надо передавать свой код HTTP-статуса и свои заголовки, вы можете просто использовать глобальную функцию `PHPview()` .
### JSON-отклики
Метод `PHPjson()` автоматически задаст заголовок Content-Type для application/json, а также конвертирует данный массив в JSON при помощи PHP-функции `PHPjson_encode()` :
```
return response()->json([
```
'name' => 'Abigail', 'state' => 'CA' ]); Если вы хотите создать JSONP-отклик, используйте метод `PHPjson()` совместно с методом `PHPsetCallback()` : `return response()` ->json(['name' => 'Abigail', 'state' => 'CA']) ->withCallback($request->input('callback'));
### Отклик загрузки файла
Метод `PHPdownload()` используется для создания отклика, получив который, браузер пользователя скачивает файл по указанному пути. Метод `PHPdownload()` принимает вторым аргументом имя файла, именно это имя увидит пользователь при скачивании файла. И наконец, вы можете передать массив HTTP-заголовков третьим аргументом метода:
```
return response()->download($pathToFile);
```
return response()->download($pathToFile, $name, $headers);
Класс Symfony HttpFoundation, который управляет загрузкой файлов, требует, чтобы загружаемый файл имел ASCII-имя.
### Файловые отклики
Метод `PHPfile()` служит для вывода файла (такого как изображение или PDF) прямо в браузере пользователя, вместо запуска его скачивания. Этот метод принимает первым аргументом путь к файлу, а вторым аргументом — массив заголовков:
```
return response()->file($pathToFile);
```
return response()->file($pathToFile, $headers);
## Макрос отклика
Если вы хотите определить собственный отклик, который вы смогли бы использовать повторно в различных маршрутах и контроллерах, то можете использовать метод `PHPmacro()` на фасаде Response. Например, из метода `PHPboot()` сервис-провайдера: `<?php` namespace App\Providers; use Illuminate\Support\Facades\Response; //для версии 5.2 и ранее: //use Response; use Illuminate\Support\ServiceProvider; class ResponseMacroServiceProvider extends ServiceProvider { /** * Регистрация макроса отклика приложения. * * @return void */ public function boot() { Response::macro('caps', function ($value) { return Response::make(strtoupper($value)); }); } } `<?php` namespace App\Providers; use Illuminate\Support\ServiceProvider; use Illuminate\Contracts\Routing\ResponseFactory; class ResponseMacroServiceProvider extends ServiceProvider { /** * Выполнение загрузки сервисов после регистрации. * * @param ResponseFactory $factory * @return void */ public function boot(ResponseFactory $factory) { $factory->macro('caps', function ($value) use ($factory) { return $factory->make(strtoupper($value)); }); } } Функция `PHPmacro()` принимает имя в качестве первого аргумента и замыкание в качестве второго. Замыкание макроса будет выполнено при вызове имени макроса из реализации ResponseFactory: или из метода `PHPresponse()` :
```
return response()->caps('foo');
```
# Представления
## Создание представлений
Представления (views), они же макеты, содержат HTML-код, передаваемый вашим приложением. Это удобный способ разделения бизнес-логики и логики отображения информации. Представления находятся в каталоге resources/views. Простое представление выглядит примерно так:
```
<!-- Представление resources/views/greeting.blade.php -->

<html>
    <body>
        <h1>Hello, {{ $name }}</h1>
    </body>
</html>
```
Поскольку это представление хранится в resources/views/greeting.blade.php, мы можем вернуть его в браузер при помощи глобальной вспомогательной функции `PHPview()` примерно так:
return view('greeting', ['name' => 'James']); }); Как видите, первый параметр, переданный вспомогательной функции `PHPview()` , соответствует имени файла представления в каталоге resources/views. Вторым параметром является массив данных, которые будут доступны для представления. В данном случае мы передаём переменную name, которая отображается в представлении с использованием синтаксиса Blade.
Конечно, представления могут быть и в поддиректориях resources/views. Для доступа к ним можно использовать «точечную» запись. Например, если ваше представление хранится в resources/views/admin/profile.blade.php, можно ссылаться на него вот так:
```
return view('admin.profile', $data);
```
Определение существования представления
Если вам нужно определить, существует ли представление, вы можете использовать фасад View. Метод `PHPexists()` вернёт значение true, если представление существует:
```
use Illuminate\Support\Facades\View;
```
if (View::exists('emails.customer')) { // }
## Передача данных в представление
В предыдущих примерах вы увидели, что можете передать массив данных в представление:
```
return view('greetings', ['name' => 'Victoria']);
```
Вы также можете передать массив данных в качестве второго параметра в функцию `PHPview()` :
```
$view = view('greetings', $data);
```
При передаче данных таким способом `$data` должен быть массивом с парами ключ/значение. Теперь эти данные можно получить в представлении, используя соответствующий ключ, например `<?php echo $key; ?>`. Альтернативой передаче всего массива данных в функцию `view()` является использование метода `with()` для добавления отдельных частей данных в представление:
```
return view('greeting')->with('name', 'Victoria');
```
Передача данных во все представления
Иногда вам нужно передать данные во все представления вашего приложения.
добавлено в 5.3 ()
Это можно сделать с помощью метода `PHPshare()` фасада представлений. Обычно вызов `PHPshare()` располагается в методе `PHPboot()` сервис-провайдера. Вы можете вставить его в AppServiceProvider или создать отдельный сервис-провайдер для него: `<?php` namespace App\Providers; use Illuminate\Support\Facades\View; class AppServiceProvider extends ServiceProvider { /** * Загрузка всех сервисов приложения. * * @return void */ public function boot() { View::share('key', 'value'); } /** * Регистрация сервис-провайдера. * * @return void */ public function register() { // } } Это можно сделать с помощью метода `PHPshare()` фабрики представлений. Обычно вызов `PHPshare()` располагается в методе `PHPboot()` сервис-провайдера. Вы можете вставить его в AppServiceProvider или создать отдельный сервис-провайдер для него: `<?php` namespace App\Providers; class AppServiceProvider extends ServiceProvider { /** * Загрузка всех сервисов приложения. * * @return void */ public function boot() { view()->share('key', 'value'); } /** * Регистрация сервис-провайдера. * * @return void */ public function register() { // } }
добавлено в 5.0 ()
У вас есть несколько способов: функция `PHPview()` , контракт Illuminate\Contracts\View\Factory или шаблон построителя представлений. Например, используя функцию `PHPview()` :
```
view()->share('data', [1, 2, 3]);
```
Вы также можете использовать фасад View:
```
View::share('data', [1, 2, 3]);
```
Этот код вы можете поместить в метод `PHPboot()` сервис-провайдера — либо общего сервис-провайдера приложения AppServiceProvider, либо своего собственного.
Получение представления по указанному пути файла
Вы можете взять файл представления по его полному пути в файловой системе:
```
return view()->file($pathToFile, $data);
```
## Построители представлений
Построители (view composers) — функции обратного вызова или методы класса, которые вызываются, когда представление отрисовано. Если у вас есть данные, которые вы хотите привязать к представлению при каждой его отрисовке, то построители помогут вам выделить такую логику в отдельном месте.
Давайте для этого примера зарегистрируем свои построители в сервис-провайдере. В Laravel нет папки, в которой должны находиться классы построителей. Вы можете создать её сами там, где вам будет удобно. Например, это может быть App\Http\ViewComposers.
добавлено в 5.3 ()
Мы воспользуемся фасадом `PHPview()` для доступа к лежащей в основе реализации контракта Illuminate\Contracts\View\Factory: `<?php` namespace App\Providers; use Illuminate\Support\Facades\View; use Illuminate\Support\ServiceProvider; class ComposerServiceProvider extends ServiceProvider { /** * Регистрация привязок в контейнере. * * @return void */ public function boot() { // Использование построителей на основе класса... View::composer( 'profile', 'App\Http\ViewComposers\ProfileComposer' ); // Использование построителей на основе замыканий... View::composer('dashboard', function ($view) { // }); } /** * Регистрация сервис-провайдера. * * @return void */ public function register() { // } } Мы воспользуемся функцией `PHPview()` для доступа к лежащей в основе реализации контракта Illuminate\Contracts\View\Factory: `<?php` namespace App\Providers; use Illuminate\Support\ServiceProvider; class ComposerServiceProvider extends ServiceProvider { /** * Регистрация привязок в контейнере. * * @return void */ public function boot() { // Использование построителей на основе класса... view()->composer( 'profile', 'App\Http\ViewComposers\ProfileComposer' ); // Использование построителей на основе замыканий... view()->composer('dashboard', function ($view) { // }); } /** * Регистрация сервис-провайдера. * * @return void */ public function register() { // } }
добавлено в 5.0 ()
Мы будем использовать фасад View для того, чтобы получить доступ к реализации контракта Illuminate\Contracts\View\Factory:
use View; use Illuminate\Support\ServiceProvider; class ComposerServiceProvider extends ServiceProvider { /** * Регистрация привязок в контейнере. * * @return void */ public function boot() { // Если построитель реализуется при помощи класса... View::composer('profile', 'App\Http\ViewComposers\ProfileComposer'); // Если построитель реализуется в функции-замыкании... View::composer('dashboard', function($view) { }); } /** * Регистрация сервис-провайдера * * @return void */ public function register() { // } }
Не забывайте, при создании нового сервис-провайдера для регистрации ваших построителей представлений, вам нужно будет добавить его в массив providers в конфигурационном файле config/app.php.
Теперь, когда построитель зарегистрирован, при каждой отрисовке представления profile будет вызываться метод
```
PHPProfileComposer@compose
```
. Давайте определим класс построителя: `<?php` namespace App\Http\ViewComposers; //для версии 5.2 и выше: use Illuminate\View\View; use App\Repositories\UserRepository; //для версии 5.1 и ранее: //use Illuminate\Contracts\View\View; //use Illuminate\Users\Repository as UserRepository; class ProfileComposer { /** * Реализация пользовательского репозитория. * * @var UserRepository */ protected $users; /** * Создание построителя нового профиля. * * @param UserRepository $users * @return void */ public function __construct(UserRepository $users) { // Зависимости автоматически извлекаются сервис-контейнером... $this->users = $users; } /** * Привязка данных к представлению. * * @param View $view * @return void */ public function compose(View $view) { $view->with('count', $this->users->count()); } } Непосредственно перед отрисовкой представления, метод построителя `PHPcompose()` вызывается с экземпляром Illuminate\View\View (для версии 5.1 и ранее Illuminate\Contracts\View\View). Вы можете использовать метод `PHPwith()` , чтобы привязать данные к представлению.
Все построители извлекаются из сервис-контейнера, поэтому вы можете указать необходимые зависимости в конструкторе построителя — они будут автоматически поданы ему.
Построители представлений по маске

Вы можете присоединить построитель к нескольким представлениям сразу, передав массив в качестве первого аргумента метода `composer()`:
добавлено в 5.3 ()
`View::composer(` ['profile', 'dashboard'], 'App\Http\ViewComposers\MyViewComposer' ); `view()->composer(` ['profile', 'dashboard'], 'App\Http\ViewComposers\MyViewComposer' ); Метод `PHPcomposer()` принимает символ * как маску, позволяя присоединить построитель ко всем представлениям:
добавлено в 5.3 ()
```
View::composer('*', function ($view) {
```
Назначение построителя для всех представлений

Метод `composer()` принимает символ * как маску. Например, вот так можно назначить построитель для всех представлений:

```
View::composer('*', function ($view) {
    //
});
```
Назначение построителя для нескольких представлений
Вы можете также присоединить построитель к нескольким представлениям сразу:
```
View::composer(['profile', 'dashboard'], 'App\Http\ViewComposers\MyViewComposer');
```
Регистрация нескольких построителей
Вы можете использовать метод `PHPcomposers()` , чтобы зарегистрировать несколько построителей одновременно: `View::composers([` 'App\Http\ViewComposers\AdminComposer' => ['admin.index', 'admin.profile'], 'App\Http\ViewComposers\UserComposer' => 'user', 'App\Http\ViewComposers\ProductComposer' => 'product' ]); Создатели представлений работают точно так же как построители, но выполняются сразу после создания объекта представления, не дожидаясь его отрисовки. Для регистрации создателя используйте метод `PHPcreator()` :
```
View::creator('profile', 'App\Http\ViewCreators\ProfileCreator');
```
# Поставщики услуг
Поставщики услуг, или сервис-провайдеры (service providers), лежат в основе «первоначальной загрузки» всего Laravel. И ваше приложение, и все базовые сервисы Laravel загружаются через сервис-провайдеры.
Но что мы понимаем под «первоначальной загрузкой»? В основном это регистрация таких вещей как привязки сервис-контейнера, слушатели событий, посредники и даже маршруты. Сервис-провайдеры — центральное место для настройки вашего приложения.
Если вы откроете файл config/app.php, поставляемый с Laravel, то вы увидите массив providers. В нём перечислены все классы сервис-провайдеров, которые загружаются для вашего приложения. Конечно, многие из них являются «отложенными» (deferred), они не загружаются при каждом запросе, а только при необходимости.
В этом обзоре вы узнаете, как создавать свои собственные сервис-провайдеры и регистрировать их в своём приложении.
## Создание сервис-провайдеров
Все сервис-провайдеры наследуют класс Illuminate\Support\ServiceProvider. В большинстве сервис-провайдеров есть методы `register()` и `boot()`. В методе `register()` вы должны только привязывать свои классы в сервис-контейнер. Никогда не пытайтесь зарегистрировать в этом методе слушателей событий, маршруты и какие-либо другие возможности. С помощью Artisan CLI можно создать новый провайдер командой `make:provider`:

> php artisan make:provider RiakServiceProvider
### Метод register()
Как уже было сказано, внутри метода `PHPregister()` вы должны только привязывать свои классы в сервис-контейнер. Никогда не пытайтесь зарегистрировать в этом методе слушателей событий, маршруты и какие-либо другие возможности. Иначе может получиться, что вы обратитесь к сервису, предоставляемому сервис-провайдером, который ещё не был загружен.
Давайте взглянем на простой сервис-провайдер. Из любого метода вашего сервис-провайдера у вас всегда есть доступ к свойству $app, которое предоставляет доступ к сервис-контейнеру:
`<?php` namespace App\Providers; use Riak\Connection; use Illuminate\Support\ServiceProvider; class RiakServiceProvider extends ServiceProvider { /** * Регистрация привязки в контейнере. * * @return void */ public function register() { $this->app->singleton(Connection::class, function ($app) { //для версии 5.1 и ранее: //$this->app->singleton('Riak\Contracts\Connection', function ($app) { return new Connection(config('riak')); }); } } Этот сервис-провайдер только определяет метод `PHPregister()` и использует его, чтобы определить реализацию Riak\Connection (для версии 5.1 и ранее — Riak\Contracts\Connection) в сервис-контейнере. Если вы не поняли, как работает сервис-контейнер, прочитайте его документацию.
добавлено в 5.0 ()
### Метод boot()
А что, если нам нужно зарегистрировать построитель представления в нашем сервис-провайдере? Это нужно делать в методе `PHPboot` . Этот метод вызывают после того, как все другие сервис-провайдеры были зарегистрированы. Это значит, что у вас есть доступ ко всем другим сервисам, которые были зарегистрированы фреймворком.
добавлено в 5.3 ()
```
namespace App\Providers;
```
use Illuminate\Support\ServiceProvider; class ComposerServiceProvider extends ServiceProvider { /** * Загрузка любых сервисов приложения. * * @return void */ public function boot() { view()->composer('view', function () { // }); } }
добавлено в 5.2 ()
`<?php` namespace App\Providers; use Illuminate\Contracts\Events\Dispatcher as DispatcherContract; use Illuminate\Foundation\Support\Providers\EventServiceProvider as ServiceProvider; class EventServiceProvider extends ServiceProvider { // Свойства других сервис-провайдеров... /** * Регистрация всех остальных событий для вашего приложения. * * @param \Illuminate\Contracts\Events\Dispatcher $events * @return void */ public function boot(DispatcherContract $events) { parent::boot($events); view()->composer('view', function () { // }); } } `<?php` namespace App\Providers; use Illuminate\Support\ServiceProvider; class EventServiceProvider extends ServiceProvider { /** * Загрузка сервисов после регистрации. * * @return void */ public function boot() { view()->composer('view', function () { // }); } /** * Привязка к контейнеру. * * @return void */ public function register() { // } } Внедрение зависимостей метода `PHPboot()` Вы можете указать зависимости для метода `PHPboot()` вашего сервис-провайдера. Сервис-контейнер автоматически внедрит те зависимости, которые вы зададите:
```
use Illuminate\Contracts\Routing\ResponseFactory;
```
public function boot(ResponseFactory $response) { $response->macro('caps', function ($value) { // }); }
## Регистрация провайдеров
Все сервис-провайдеры регистрируются в файле `PHPconfig/app.php` путем добавления имён их классов в массив providers. Изначально в нём указан набор базовых сервис-провайдеров Laravel. Эти провайдеры загружают базовые компоненты Laravel, такие как обработчик почты, очередь, кэш и другие.
Чтобы зарегистрировать свой сервис-провайдер, просто добавьте его в этот массив:
`'providers' => [` // Другие сервис-провайдеры App\Providers\ComposerServiceProvider::class, ],
## Отложенные провайдеры
Если ваш провайдер только регистрирует привязки в сервис-контейнере, то вы можете отложить регистрацию до момента, когда одна из этих привязок будет запрошена из сервис-контейнера. Это позволит не дёргать файловую систему при каждом запросе, что увеличит производительность вашего приложения.
Laravel собирает и хранит список всех сервисов, предоставляемых отложенными сервис-провайдерами, и их классов. Когда в процессе работы приложению понадобится один из этих сервисов, Laravel загрузит нужный сервис-провайдер.
Для того, чтобы сделать сервис-провайдер отложенным, установите свойство defer в true и определите метод `PHPprovides()` . Метод `PHPprovides()` должен вернуть привязки сервис-контейнера, зарегистрированные в вашем провайдере: `<?php` namespace App\Providers; use Riak\Connection; use Illuminate\Support\ServiceProvider; class RiakServiceProvider extends ServiceProvider { /** * Задаём, отложена ли загрузка провайдера. * * @var bool */ protected $defer = true; /** * Регистрация сервис-провайдера. * * @return void */ public function register() { $this->app->singleton(Connection::class, function ($app) { //Для версии 5.1 и ранее: //$this->app->singleton('Riak\Contracts\Connection', function ($app) { return new Connection($app['config']['riak']); }); } /** * Получить сервисы от провайдера. * * @return array */ public function provides() { return [Connection::class]; //Для версии 5.1 и ранее: //return ['Riak\Contracts\Connection']; } }
# Сервис-контейнер
Сервис-контейнер в Laravel — это мощное средство для управления зависимостями классов и внедрения зависимостей. Внедрение зависимостей — это модный термин, который означает «внедрение» зависимостей класса в этот класс через конструктор или метод-сеттер.
Давайте посмотрим на простой пример:
добавлено в 5.3 ()
`<?php` namespace App\Http\Controllers; use App\User; use App\Repositories\UserRepository; use App\Http\Controllers\Controller; class UserController extends Controller { /** * Реализация репозитория пользователей. * * @var UserRepository */ protected $users; /** * Создать новый экземпляр контроллера. * * @param UserRepository $users * @return void */ public function __construct(UserRepository $users) { $this->users = $users; } /** * Показать профиль данного пользователя. * * @param int $id * @return Response */ public function show($id) { $user = $this->users->find($id); return view('user.profile', ['user' => $user]); } }
В этом примере UserController должен получить пользователей из источника данных. Поэтому мы внедрим сервис, умеющий получать пользователей. В данном случае наш UserRepository скорее всего использует Eloquent для получения информации о пользователе из БД. Но поскольку репозиторий внедрён, мы можем легко подменить его другой реализацией. Мы также можем легко создать «mock» или фиктивную реализацию UserRepository при тестировании нашего приложения.
добавлено в 5.2 () 5.1 () 5.0 ()
`<?php` namespace App\Jobs; use App\User; use Illuminate\Contracts\Mail\Mailer; use Illuminate\Contracts\Bus\SelfHandling; class PurchasePodcast implements SelfHandling { /** * Реализация почтового сервиса. */ protected $mailer; /** * Создание нового экземпляра. * * @param Mailer $mailer * @return void */ public function __construct(Mailer $mailer) { $this->mailer = $mailer; } /** * Покупка подкаста. * * @return void */ public function handle() { // } }
В этом примере командный обработчик PurchasePodcast должен отправить письмо пользователю для подтверждения покупки. Поэтому мы внедрим сервис, отправляющий электронные письма. Когда сервис внедрён, мы можем легко подменить его с другой реализацией. Мы также можем легко создать «mock» или фиктивную реализацию отправителя почтовых сообщений при тестировании нашего приложения.
Глубокое понимание сервис-контейнера Laravel важно для создания мощного, высокопроизводительного приложения, а также для работы с ядром Laravel.
## Связывание
### Основы связывания
Поскольку почти все ваши привязки сервис-контейнеров будут зарегистрированы в сервис-провайдерах, то все следующие примеры демонстрируют использование контейнеров в данном контексте.
Если классы не зависят от каких-либо интерфейсов, то нет необходимости связывать их в контейнере. Не нужно объяснять контейнеру, как создавать эти объекты, поскольку он автоматически извлекает такие объекты при помощи отражения (reflection).
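Например, конкретный класс без всякой привязки можно получить напрямую (класс ReportGenerator здесь гипотетический, для иллюстрации):

```
$generator = $this->app->make('App\Services\ReportGenerator');
```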
В сервис-провайдере всегда есть доступ к контейнеру через свойство `PHP$this->app` . Зарегистрировать привязку можно методом `PHPbind()` , передав имя того класса или интерфейса, который мы хотим зарегистрировать, вместе с замыканием, которое возвращает экземпляр класса:
```
$this->app->bind('HelpSpot\API', function ($app) {
```
return new HelpSpot\API($app->make('HttpClient')); });
Обратите внимание, что мы получаем сам контейнер в виде аргумента «резолвера». Затем мы можем использовать контейнер, чтобы получать под-зависимости создаваемого объекта.
Метод `PHPsingleton()` привязывает класс или интерфейс к контейнеру, который должен быть создан только один раз. После создания связанного синглтона все последующие обращения к нему будут возвращать этот созданный экземпляр:
```
$this->app->singleton('HelpSpot\API', function ($app) {
```
return new HelpSpot\API($app->make('HttpClient')); });
Связывание существующего экземпляра класса с контейнером
Вы можете также привязать существующий экземпляр объекта к контейнеру, используя метод `PHPinstance()` . Данный экземпляр будет всегда возвращаться при последующих обращениях к контейнеру:
```
$api = new HelpSpot\API(new HttpClient);
```
$this->app->instance('HelpSpot\Api', $api);
Иногда у вас есть такой класс, который получает другие внедрённые классы, но при этом ему нужны ещё и внедрённые примитивные значения, например, числа. Вы можете использовать контекстное связывание для внедрения любых необходимых вашему классу значений:
```
$this->app->when('App\Http\Controllers\UserController')
```
->needs('$variableName') ->give($value);
### Связывание интерфейса с реализацией
Довольно мощная функция сервис-контейнера — возможность связать интерфейс с реализацией.
Например, если наше приложение интегрировано с веб-сервисом для отправки и получения событий в реальном времени Pusher. И если мы используем Pusher PHP SDK, то можем внедрить экземпляр клиента Pusher в класс:
```
<?php namespace App\Handlers\Commands;
```
use App\Commands\CreateOrder; use Pusher\Client as PusherClient; class CreateOrderHandler { /** * Экземпляр клиента Pusher SDK. */ protected $pusher; /** * Создание нового экземпляра обработчика заказов. * * @param PusherClient $pusher * @return void */ public function __construct(PusherClient $pusher) { $this->pusher = $pusher; } /** * Выполнение данной команды. * * @param CreateOrder $command * @return void */ public function execute(CreateOrder $command) { // } }
В этом примере хорошо то, что мы внедрили зависимости класса, но теперь мы жёстко привязаны к Pusher SDK. Если методы Pusher SDK изменятся, или мы решим полностью перейти на новый сервис событий, то нам надо будет переписывать CreateOrderHandler.
### Программируйте на уровне интерфейса
Чтобы «изолировать» CreateOrderHandler от изменений в сервисе событий, мы можем определить интерфейс EventPusher и реализацию PusherEventPusher:
```
<?php namespace App\Contracts;
```
interface EventPusher { /** * Push a new event to all clients. * * @param string $event * @param array $data * @return void */ public function push($event, array $data); }
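Реализация этого интерфейса на основе Pusher, упомянутая выше, может выглядеть примерно так (набросок, детали работы с SDK опущены):

```
<?php namespace App\Services;

use App\Contracts\EventPusher;
use Pusher\Client as PusherClient;

class PusherEventPusher implements EventPusher
{
    /**
     * Экземпляр клиента Pusher SDK.
     */
    protected $pusher;

    /**
     * Создание нового экземпляра PusherEventPusher.
     *
     * @param  PusherClient  $pusher
     * @return void
     */
    public function __construct(PusherClient $pusher)
    {
        $this->pusher = $pusher;
    }

    /**
     * Отправить событие всем клиентам.
     *
     * @param  string  $event
     * @param  array   $data
     * @return void
     */
    public function push($event, array $data)
    {
        // Отправка события через Pusher SDK...
    }
}
```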
Например, допустим, у нас есть интерфейс EventPusher и реализация RedisEventPusher. После того как мы написали реализацию RedisEventPusher этого интерфейса, мы можем зарегистрировать её в сервис-контейнере так:
`$this->app->bind(` 'App\Contracts\EventPusher', 'App\Services\RedisEventPusher' );
Так контейнер понимает, что должен внедрить RedisEventPusher, когда классу нужна реализация EventPusher. Теперь мы можем использовать указание типа интерфейса EventPusher в конструкторе, или любом другом месте, где сервис-контейнер внедряет зависимости:
```
use App\Contracts\EventPusher;
```
/** * Создание нового экземпляра класса. * * @param EventPusher $pusher * @return void */ public function __construct(EventPusher $pusher) { $this->pusher = $pusher; }
### Контекстное связывание
Иногда у вас может быть два класса, которые используют один интерфейс. Но вы хотите внедрить различные реализации в каждый класс.
Например, два контроллера могут работать на основе разных реализаций контракта Illuminate\Contracts\Filesystem\Filesystem. Laravel предоставляет простой и гибкий интерфейс для описания такого поведения:
use App\Http\Controllers\PhotoController; use App\Http\Controllers\VideoController; use Illuminate\Contracts\Filesystem\Filesystem; $this->app->when(PhotoController::class) ->needs(Filesystem::class) ->give(function () { return Storage::disk('local'); }); $this->app->when(VideoController::class) ->needs(Filesystem::class) ->give(function () { return Storage::disk('s3'); });
добавлено в 5.2 () 5.1 () 5.0 ()
Например, когда наша система получает новый заказ, нам может понадобиться отправка сообщения через PubNub, а не через Pusher. Laravel предоставляет простой и гибкий интерфейс для описания такого поведения:
```
$this->app->when('App\Handlers\Commands\CreateOrderHandler')
          ->needs('App\Contracts\EventPusher')
          ->give('App\Services\PubNubEventPusher');
```

Вы даже можете передать замыкание в метод `give()`:

```
$this->app->when('App\Handlers\Commands\CreateOrderHandler')
          ->needs('App\Contracts\EventPusher')
          ->give(function () {
              // Извлечение зависимости...
          });
```
### Тегирование
Иногда вам может потребоваться получить все реализации в определенной категории. Например, вы пишете сборщик отчётов, который принимает массив различных реализаций интерфейса Report. После регистрации реализаций Report вы можете присвоить им тег, используя метод `PHPtag()` :
```
$this->app->bind('SpeedReport', function () {
    //
});

$this->app->bind('MemoryReport', function () {
    //
});

$this->app->tag(['SpeedReport', 'MemoryReport'], 'reports');
```

Теперь вы можете получить их по тегу методом `tagged()`:
```
$this->app->bind('ReportAggregator', function ($app) {
```
return new ReportAggregator($app->tagged('reports')); });
добавлено в 5.0 ()
## Использование на практике
Laravel предоставляет несколько способов использования сервис-контейнера для повышения гибкости и тестируемости вашего приложения. Одним из характерных примеров является получение контроллеров. Все контроллеры регистрируются через сервис-контейнер, и поэтому при получении класса контроллера из контейнера автоматически получаются все зависимости, указанные в аргументах конструктора и других методах контроллера.
use Illuminate\Routing\Controller; use App\Repositories\OrderRepository; class OrdersController extends Controller { /** * Экземляр репозитория заказа. */ protected $orders; /** * Создание экземпляра контроллера. * * @param OrderRepository $orders * @return void */ public function __construct(OrderRepository $orders) { $this->orders = $orders; } /** * Показать все заказы. * * @return Response */ public function index() { $orders = $this->orders->all(); return view('orders', ['orders' => $orders]); } }
В этом примере класс OrderRepository будет автоматически внедрён в контроллер. Это означает, что «mock» OrderRepository может быть привязан к контейнеру во время unit-тестирования, позволяя сделать безболезненную заглушку для взаимодействия на уровне базы данных.
Другие примеры использования контейнера
Естественно, как уже упоминалось, контроллеры — это не единственные классы, которые Laravel получает из сервис-контейнера. Вы также можете использовать указание типов в функциях-замыканиях маршрутов, фильтрах, очередях, слушателях событий и т.д. Примеры использования сервис-контейнера приведены в соответствующих разделах документации.
## Получение из контейнера
Вы можете использовать метод `PHPmake()` для получения экземпляра класса из контейнера. Метод принимает имя класса или интерфейса, который вы хотите получить:
```
$api = $this->app->make('HelpSpot\API');
```
Если вы находитесь в таком месте вашего кода, где нет доступа к переменной `PHP$app` , вы можете использовать глобальный вспомогательный метод `PHPresolve()` :
```
$api = resolve('HelpSpot\API');
```
И самое важное — вы можете просто указать тип зависимости в конструкторе класса, который извлекается контейнером, включая контроллеры, слушателей событий, очереди задач, посредников и т.д. Именно так на практике должно получаться большинство объектов из контейнера.
Например, вы можете указать тип репозитория, определённого вашим приложением в конструкторе контроллера. Репозиторий будет автоматически получен и внедрён в класс:
`<?php` namespace App\Http\Controllers; use App\Users\Repository as UserRepository; class UserController extends Controller { /** * Экземпляр репозитория пользователя. */ protected $users; /** * Создание нового экземпляра контроллера. * * @param UserRepository $users * @return void */ public function __construct(UserRepository $users) { $this->users = $users; } /** * Показать пользователя с данным ID. * * @param int $id * @return Response */ public function show($id) { // } } В этом примере для версии 5.1 и ранее использовался ещё один контроллер
— прим. пер.
## События контейнера
Контейнер создаёт событие каждый раз, когда из него извлекается объект. Вы можете слушать эти события, используя метод `PHPresolving()` :
```
$this->app->resolving(function ($object, $app) {
```
// Вызывается при извлечении объекта любого типа... }); $this->app->resolving(HelpSpot\API::class, function ($api, $app) { // Вызывается при извлечении объекта типа "HelpSpot\API"... });
Как видите, объект, получаемый из контейнера, передаётся в функцию обратного вызова, что позволяет вам задать любые дополнительные свойства для объекта перед тем, как отдать его тому, кто его запросил.
# Контракты
Контракты в Laravel — это набор интерфейсов, которые описывают основной функционал, предоставляемый фреймворком. Например, контракт Illuminate\Contracts\Queue\Queue определяет методы, необходимые для организации очередей, в то время как контракт Illuminate\Contracts\Mail\Mailer определяет методы, необходимые для отправки электронной почты.
Каждый контракт имеет свою реализацию во фреймворке. Например, Laravel предоставляет реализацию `PHPQueue` с различными драйверами и реализацию `PHPMailer` , использующую SwiftMailer.
Все контракты Laravel живут в своих собственных репозиториях GitHub. Эта ссылка ведёт на все доступные контракты, а также на один отдельный пакет, который может быть использован разработчиками пакетов.
### Контракты или фасады?
Фасады Laravel и вспомогательные функции дают простой способ использования сервисов Laravel без необходимости типизирования и извлечения контрактов из сервис-контейнера. В большинстве случаев у каждого фасада есть эквивалентный контракт.
В отличие от фасадов, которые не требуют того, чтобы вы запрашивали их в конструкторе вашего класса, контракты позволяют вам определить конкретные зависимости для ваших классов. Некоторые разработчики предпочитают именно так явно определять свои зависимости, поэтому предпочитают использовать контракты, а другие разработчики наслаждаются удобством фасадов.
Для большинства приложений неважно, что вы выберете — фасады или контракты. Но если вы создаёте пакет, то вам надо использовать контракты, так как в этом случае их проще тестировать.
## Когда использовать контракты
Это обсуждается повсюду, и большинство дискуссий сводятся к тому, что использование контрактов или фасадов — это дело вкуса или предпочтений вашей команды разработчиков. И те, и другие можно использовать для создания надёжных, проверенных Laravel-приложений. Пока вы сохраняете границы ответственности вашего класса узкими, вы сможете заметить всего несколько практических различий между использованием контрактов и фасадов.
Однако, у вас по-прежнему могут остаться некоторые вопросы о контрактах. Например, зачем вообще нужны интерфейсы? Разве их использование делает жизнь проще? Определим причины использования интерфейсов как следующие: это слабая связанность (loose coupling) и упрощение кода.
### Слабая связанность
Но для начала рассмотрим код с сильной связанностью с реализацией кэша. Рассмотрите следующее:
`<?php` namespace App\Orders; class Repository { /** * Экземпляр кэша. */ protected $cache; /** * Создание нового экземпляра репозитория. * * @param \SomePackage\Cache\Memcached $cache * @return void */ public function __construct(\SomePackage\Cache\Memcached $cache) { $this->cache = $cache; } /** * Получение заказа по ID. * * @param int $id * @return Order */ public function find($id) { if ($this->cache->has($id)) { // } } }
В этом классе код сильно связан с реализацией кэша, потому что мы зависим от конкретного класса Cache данного пакета. Если API этого пакета изменится, наш код должен также измениться.
Аналогично, если мы хотим заменить нашу базовую технологию кэша (Memcached) другой технологией (Redis), нам придётся вносить изменения в наш репозиторий. А наш репозиторий не должен задумываться о том, кто именно предоставляет данные или как он это делает.
Вместо такого подхода мы можем улучшить наш код, добавив зависимость от простого интерфейса, который не зависит от конкретного поставщика:
`<?php` namespace App\Orders; use Illuminate\Contracts\Cache\Repository as Cache; class Repository { /** * Экземпляр кэша. */ protected $cache; /** * Создание нового экземпляра репозитория. * * @param Cache $cache * @return void */ public function __construct(Cache $cache) { $this->cache = $cache; } }
Теперь код не привязан к какому-либо определённому поставщику, и даже не привязан к Laravel. Контракт не содержит никакой конкретной реализации и никаких зависимостей. Вы можете легко написать свою реализацию любого контракта, что позволяет вам заменить реализацию работы с кэшем, не изменяя ни одной строчки вашего кода, работающего с кэшем.
### Упрощение кода
Когда все сервисы Laravel аккуратно определены в простых интерфейсах, очень легко определить функциональность, предлагаемую данными сервисами. Фактически, контракты являются краткой документацией для функций Laravel.
Кроме того, когда в своём приложении вы внедряете в классы зависимости от простых интерфейсов, в вашем коде легче разобраться и его проще поддерживать. Вместо того, чтобы искать методы в большом и сложном классе, вы можете обратиться к простому и понятному интерфейсу.
## Как использовать контракты
Как получить реализацию контракта? Это довольно просто. Множество типов классов в Laravel регистрируются в сервис-контейнере, включая контроллеры, слушатели событий, посредники, очереди и даже замыкания. Поэтому, чтобы получить реализацию контракта, вам достаточно указать тип интерфейса в конструкторе необходимого класса. Например, посмотрите на этот обработчик событий:
```
<?php

namespace App\Listeners;

use App\User;
use App\Events\OrderWasPlaced;
use Illuminate\Contracts\Redis\Database;

class CacheOrderInformation
{
    /**
     * Реализация базы данных Redis.
     */
    protected $redis;

    /**
     * Создание нового экземпляра обработчика событий.
     *
     * @param  Database  $redis
     * @return void
     */
    public function __construct(Database $redis)
    {
        $this->redis = $redis;
    }

    /**
     * Обработка события.
     *
     * @param  OrderWasPlaced  $event
     * @return void
     */
    public function handle(OrderWasPlaced $event)
    {
        //
    }
}
```
Когда будет получен слушатель события, сервис-контейнер прочитает указание типа в конструкторе класса и внедрит нужное значение. Узнать больше о регистрации в сервис-контейнере можно в документации.
## Список контрактов
В этой таблице приведены ссылки на все контракты Laravel, а также эквивалентные им фасады:
# Фасады
Фасады предоставляют «статический» интерфейс к классам, доступным в сервис-контейнере. Laravel поставляется со множеством фасадов, предоставляющих доступ почти ко всем возможностям Laravel. Фасады Laravel служат «статическими прокси» для классов в сервис-контейнере, обеспечивая преимущества краткого выразительного синтаксиса, и поддерживая большую тестируемость и гибкость, чем традиционные статические методы.
Все фасады Laravel определены в пространстве имён Illuminate\Support\Facades. Поэтому мы можем легко обратиться к фасаду таким образом:
```
Route::get('/cache', function () {
    return Cache::get('key');
});
```
Во многих примерах в документации по Laravel используются фасады для демонстрации различных возможностей фреймворка.
## Когда стоит использовать фасады
Фасады имеют много преимуществ. Они обеспечивают лаконичный, запоминающийся синтаксис, который позволяет вам использовать возможности Laravel, не запоминая длинные имена классов, которые должны внедряться или настраиваться вручную. Более того, благодаря уникальному использованию динамических методов PHP они легко тестируются.
Тем не менее, при использовании фасадов необходимо соблюдать некоторые предосторожности. Основная опасность фасадов — разрастание границ класса. Поскольку фасады так просты в использовании и не требуют внедрения, становится очень вероятна ситуация, когда ваши классы постоянно продолжают расти, и когда в одном классе используется множество фасадов. При использовании внедрения зависимостей риск возникновения такой ситуации снижается за счёт того, что при виде большого конструктора вы можете визуально оценить увеличение размера вашего класса. Поэтому при использовании фасадов обратите особое внимание на размер вашего класса, чтобы границы его ответственности оставались узкими.
При создании стороннего пакета, взаимодействующего с Laravel, лучше внедрять контракты Laravel вместо использования фасадов. Поскольку пакеты создаются вне самого Laravel, у вас не будет доступа к вспомогательным функциям тестирования фасадов.
### Фасады или внедрение зависимостей
Одно из основных преимуществ внедрения зависимостей — возможность подменить реализацию внедрённого класса. Это полезно при тестировании, потому что вы можете внедрить макет или заглушку, в которой якобы будут вызываться различные методы.
Обычно невозможно сделать макет или заглушку метода настоящего статического класса. Но поскольку фасады используют динамические методы для передачи вызовов к объектам, извлекаемым из сервис-контейнера, мы можем тестировать фасад точно так же, как тестировали бы внедрённый экземпляр класса. Например, возьмём следующий маршрут:
```
Route::get('/cache', function () {
    return Cache::get('key');
});
```

Мы можем написать следующий тест, чтобы проверить, что метод `Cache::get()` был вызван с ожидаемым нами аргументом:
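Примерный вид такого теста (набросок метода тестового класса в стиле Laravel 5.3; предполагается, что маршрут /cache определён, как показано выше):

```
use Illuminate\Support\Facades\Cache;

public function testBasicExample()
{
    // Подменяем фасад: метод get('key') должен вернуть 'value'
    Cache::shouldReceive('get')
         ->with('key')
         ->andReturn('value');

    // Обращаемся к маршруту и проверяем, что отдано подменённое значение
    $this->visit('/cache')
         ->see('value');
}
```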
### Фасады или вспомогательные функции
Кроме фасадов в Laravel есть множество «вспомогательных» функций, которые могут выполнять общие задачи, такие как генерация представлений, запуск событий, постановка задач и отправка HTTP-откликов. Многие из этих вспомогательных функций выполняют те же задачи, что и соответствующий фасад. Например, эти вызовы фасада и вспомогательной функции эквивалентны:
```
return View::make('profile');

return view('profile');
```
Нет абсолютно никакой фактической разницы между фасадами и вспомогательными функциями. При использовании вспомогательных функций вы можете тестировать их точно так же, как тестировали бы соответствующий фасад. Например, возьмём следующий маршрут:
```
Route::get('/cache', function () {
    return cache('key');
});
```

При этом вспомогательная функция `cache()` вызовет метод `get()` на классе, лежащем в основе фасада `Cache`. Поэтому, несмотря на то, что мы используем вспомогательную функцию, мы можем написать следующий тест, чтобы проверить, что метод был вызван с ожидаемым нами аргументом:
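Тест при этом выглядит практически так же, как и в случае с фасадом (набросок):

```
use Illuminate\Support\Facades\Cache;

// Подмена выполняется через фасад, хотя маршрут использует хелпер cache()
Cache::shouldReceive('get')
     ->with('key')
     ->andReturn('value');

$this->visit('/cache')
     ->see('value');
```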
## Как работают фасады
В Laravel-приложении фасад — это класс, который предоставляет доступ к объекту в контейнере. Весь этот механизм реализован в классе `Facade`. И фасады Laravel, и ваши собственные наследуют базовый класс Illuminate\Support\Facades\Facade. Базовый класс `Facade` использует магический метод PHP `__callStatic()` для перенаправления вызовов методов с вашего фасада на полученный из контейнера объект. В примере ниже делается обращение к механизму кэширования Laravel. На первый взгляд может показаться, что метод `get()` принадлежит классу `Cache`:

```
<?php

namespace App\Http\Controllers;

use Cache;
use App\Http\Controllers\Controller;

class UserController extends Controller
{
    /**
     * Показать профиль данного пользователя.
     *
     * @param  int  $id
     * @return Response
     */
    public function showProfile($id)
    {
        $user = Cache::get('user:'.$id);

        return view('profile', ['user' => $user]);
    }
}
```
Обратите внимание, в начале мы «импортируем» фасад Cache. Этот фасад служит переходником для доступа к лежащей в основе реализации интерфейса Illuminate\Contracts\Cache\Factory. Все выполняемые через этот фасад вызовы передаются ниже — в экземпляр кэш-сервиса Laravel.
Если вы посмотрите в исходный код класса Illuminate\Support\Facades\Cache, то увидите, что он не содержит метода `get()`:

```
class Cache extends Facade
{
    /**
     * Получить зарегистрированное имя компонента.
     *
     * @return string
     */
    protected static function getFacadeAccessor()
    {
        return 'cache';
    }
}
```

Вместо этого фасад Cache наследует базовый класс `Facade` и определяет метод `getFacadeAccessor()`. Его задача — вернуть имя привязки в сервис-контейнере. Когда вы обращаетесь к любому статическому методу фасада Cache, Laravel получает по привязке объект cache из сервис-контейнера и вызывает на нём требуемый метод (в этом случае — `get()`).
добавлено в 5.0 ()
Таким образом, вызов `Cache::get()` может быть переписан так:
```
$value = $app->make('cache')->get('key');
```
Помните, если вы используете фасад в контроллере, который находится в пространстве имён, то вам надо импортировать класс фасада в пространство имён. Все фасады живут в глобальном пространстве имён:
```
use Cache;

class PhotosController extends Controller
{
    /**
     * Получить все фотографии приложения.
     *
     * @return Response
     */
    public function index()
    {
        $photos = Cache::get('photos');

        //
    }
}
```
## Создание фасадов
Создать фасад в вашем приложении или пакете довольно просто. Вам нужны только три вещи:
* Привязка в сервис-контейнере.
* Класс-фасад.
* Настройка для псевдонима фасада.
Посмотрим на следующий пример. Здесь определён класс `PaymentGateway\Payment`:

```
namespace PaymentGateway;

class Payment
{
    public function process()
    {
        //
    }
}
```
Нам нужно, чтобы этот класс извлекался из сервис-контейнера, поэтому давайте добавим для него привязку (binding):
```
App::bind('payment', function () {
    return new \PaymentGateway\Payment;
});
```

Самое лучшее место для регистрации этой привязки — новый поставщик услуг, который мы назовём `PaymentServiceProvider` и в котором мы создадим метод `register()`, содержащий представленный выше код. После этого вы можете настроить Laravel для загрузки этого поставщика в файле config/app.php.
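Примерно так может выглядеть такой поставщик услуг (набросок; имя класса и пространство имён здесь условные):

```
<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class PaymentServiceProvider extends ServiceProvider
{
    /**
     * Регистрация привязки 'payment' в сервис-контейнере.
     *
     * @return void
     */
    public function register()
    {
        $this->app->bind('payment', function () {
            return new \PaymentGateway\Payment;
        });
    }
}
```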
Дальше мы можем создать класс нашего фасада:
```
use Illuminate\Support\Facades\Facade;

class Payment extends Facade
{
    protected static function getFacadeAccessor()
    {
        return 'payment';
    }
}
```

Наконец, по желанию можно добавить псевдоним (alias) для этого фасада в массив aliases файла настроек config/app.php. Тогда мы сможем вызывать метод `process()` на экземпляре класса `Payment`:

```
Payment::process();
```
### Об автозагрузке псевдонимов
В некоторых случаях классы в массиве aliases недоступны из-за того, что PHP не загружает неопределённые классы при указании их типов. Если `\ServiceWrapper\ApiTimeoutException` имеет псевдоним `ApiTimeoutException`, то `catch (ApiTimeoutException $e)`, помещённый в любое пространство имён, кроме `ServiceWrapper`, никогда не «поймает» исключение, даже когда оно возникнет. Аналогичная проблема возникает в классах, которые содержат указания типов на классы с псевдонимами. Единственное решение — не использовать псевдонимы и вместо них в начале каждого файла, где это необходимо, писать `use` для тех классов, типы которых вы хотите указать.
## Фасады-заглушки
Юнит-тесты играют важную роль в том, как именно работают фасады. На самом деле возможность тестирования — основная причина, по которой фасады вообще существуют. Эта тема подробнее раскрыта в разделе документации о фасадах-заглушках.
## Соответствие фасадов и классов
В таблице ниже перечислены все фасады и соответствующие им классы. Это полезный инструмент для быстрого начала работы с документацией по API данного корня фасадов. Также указаны ключи привязок в сервис-контейнере, где это нужно.
# Прохождение запроса
При использовании любого инструмента в «реальном мире» вы чувствуете больше уверенности, когда понимаете, как он устроен. Разработка приложений — не исключение. Когда вы понимаете, как функционируют ваши средства разработки, вы чувствуете себя более комфортно и уверенно.
Задача этого документа состоит в том, чтобы дать вам хороший поверхностный обзор того, как работает фреймворк Laravel. Чем лучше вы знаете фреймворк, тем меньше в нём остаётся «волшебства», и вы более уверенно создаёте приложения. Не отчаивайтесь, если не всё сразу поймёте! Постарайтесь просто получить базовое понимание того, что происходит, и ваши знания будут расти по мере изучения других разделов документации.
## Прохождение запроса
### Начало
Входная точка для всех запросов к вашему приложению — файл public/index.php. Все запросы направляются в этот файл настройками вашего веб-сервера (Apache / Nginx). Файл index.php содержит довольно мало кода. Скорее, он просто отправная точка для загрузки всего остального фреймворка.
Файл index.php загружает сгенерированное с помощью Composer определение автозагрузчика, а затем извлекает экземпляр Laravel-приложения из скрипта bootstrap/app.php. Первое действие самого Laravel — создание экземпляра приложения / сервис-контейнера.
### Ядра HTTP / Console
Далее входящий запрос посылается либо в HTTP-ядро, либо в ядро консоли, в зависимости от типа этого запроса. Эти ядра служат центральным местом, через которое протекают все запросы. Пока давайте рассмотрим HTTP-ядро, которое расположено в app/Http/Kernel.php.
HTTP-ядро наследует класс Illuminate\Foundation\Http\Kernel, который определяет массив загрузчиков (`bootstrappers`), запускаемых перед выполнением запроса. Эти загрузчики настраивают обработку ошибок и ведение журналов, определяют среду приложения и выполняют другие задачи, которые надо выполнить перед самой обработкой запроса.
HTTP-ядро также определяет список посредников HTTP, через которые должны пройти все запросы, прежде чем будут обработаны приложением. Эти посредники обрабатывают чтение и запись HTTP-сессии, определяя, находится ли приложение в режиме обслуживания, проверяют CSRF-последовательность, и т.п.
Принцип действия метода `handle()` HTTP-ядра очень прост: получить Request и вернуть Response. Представьте ядро как большую чёрную коробку, которая представляет собой всё ваше приложение. Подавайте ему HTTP-запросы, и оно будет возвращать HTTP-ответы.
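В упрощённом виде этот поток виден в public/index.php (сокращённый фрагмент стандартного файла):

```
$app = require_once __DIR__.'/../bootstrap/app.php';

$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);

// Получить Request, передать его ядру и отправить полученный Response
$response = $kernel->handle(
    $request = Illuminate\Http\Request::capture()
);

$response->send();

$kernel->terminate($request, $response);
```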
### Поставщики услуг
Одно из важнейших действий ядра при загрузке — загрузка поставщиков услуг для вашего приложения. Все поставщики услуг настраиваются в конфигурационном файле config/app.php в массиве providers. Сначала для всех поставщиков будет вызван метод `register()`, а когда все они будут зарегистрированы, будет вызван метод `boot()`.
Поставщики услуг отвечают за начальную загрузку всевозможных компонентов фреймворка: БД, очередь, проверка ввода и маршрутизация. Поставщики услуг — важнейший элемент всего процесса начальной загрузки Laravel, так как они отвечают за загрузку и настройку всех возможностей, необходимых фреймворку.
### Отправка запроса
Когда приложение загружено, и все поставщики услуг зарегистрированы, запрос Request будет передан роутеру для отправки. Роутер отправит запрос по маршруту или контроллеру, а также запустит посредника, соответствующего маршруту.
## Сконцентрируйтесь на поставщиках услуг
Поставщики услуг — поистине ключ к загрузке Laravel-приложения. Экземпляр приложения создан, поставщики услуг зарегистрированы, и запрос передан в загруженное приложение. Вот так просто!
Очень важно иметь хорошее понимание того, как строится и загружается Laravel-приложение с помощью поставщиков услуг. Само собой поставщики услуг вашего приложения по умолчанию хранятся в app/Providers.
По умолчанию AppServiceProvider довольно пуст. Этот поставщик является отличным местом для добавления в ваше приложение собственной автозагрузки и привязок сервис-контейнера. Для больших приложений вы можете создать несколько поставщиков услуг, которые будут содержать определённые части автозагрузки.
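Например, собственную привязку можно зарегистрировать в методе register() этого провайдера (набросок; класс App\Services\ReportGenerator здесь вымышленный):

```
<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Регистрация привязок приложения в сервис-контейнере.
     *
     * @return void
     */
    public function register()
    {
        // Одиночный экземпляр вымышленного сервиса на всё приложение
        $this->app->singleton(\App\Services\ReportGenerator::class, function ($app) {
            return new \App\Services\ReportGenerator();
        });
    }
}
```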
# Структура приложения
Структура Laravel-приложения по умолчанию — отличная отправная точка как для больших, так и для маленьких приложений. Но, конечно, вы можете свободно организовать ваше приложение как пожелаете. Laravel не накладывает практически никаких ограничений на то, где будет размещён какой-либо класс, пока Composer будет в состоянии автоматически загружать этот класс.
В начале изучения Laravel многие разработчики удивляются отсутствию каталога models. Однако, это сделано специально. Мы считаем, что слово «модели» очень неопределённое, потому что оно означает совершенно разные вещи для разных людей. Для некоторых разработчиков «модель» приложения — это вообще вся бизнес-логика приложения, а для других «модели» — это классы, взаимодействующие с реляционной базой данных.
Поэтому мы решили поместить модели Eloquent в каталог app и позволить разработчикам разместить их где-нибудь в другом месте, если они захотят.
## Корневой каталог
### app
Папка app, как вы можете догадаться, содержит код ядра вашего приложения. Ниже мы рассмотрим эту папку подробнее; почти все классы вашего приложения будут находиться именно в ней.
### bootstrap
Папка bootstrap содержит файлы, которые загружают фреймворк и настраивают автозагрузку. Также в папке bootstrap находится папка cache, которая содержит сгенерированные фреймворком файлы для оптимизации производительности — например, кэш-файлы маршрутов и сервисов.
### config

Папка config, как гласит её название, содержит все конфигурационные файлы вашего приложения. Будет нелишним прочитать эти файлы и ознакомиться со всеми доступными параметрами.
### database
Папка database содержит миграции и классы для наполнения начальными данными вашей БД. При необходимости эту папку можно использовать для хранения базы данных SQLite.
### public
Папка public содержит файл index.php, который является входной точкой для всех запросов, поступающих в ваше приложение. Также эта папка содержит ваши ресурсы, такие как изображения, JavaScript, CSS.
### resources
Папка resources содержит ваши представления, а также сырые, некомпилированные ресурсы, такие как LESS, SASS, JavaScript. А также здесь находятся все «языковые» файлы.
### routes
Папка routes содержит все определения маршрутов вашего приложения. По умолчанию в Laravel встроено три файла маршрутов web.php, api.php и console.php.
Файл web.php содержит маршруты, которые RouteServiceProvider помещает в группу посредников web, обеспечивающую состояние сессии, CSRF-защиту и шифрование cookie. Если ваше приложение не предоставляет «stateless» RESTful API, то скорее всего все ваши маршруты можно определить в файле web.php.

Файл api.php содержит маршруты, которые RouteServiceProvider помещает в группу посредников api, обеспечивающую ограничение частоты запросов. Эти маршруты должны быть «stateless», то есть входящие через них запросы должны аутентифицироваться с помощью токенов и не имеют доступа к состоянию сессии.

Файл console.php — то место, где вы можете определить все свои консольные команды на основе замыканий. Каждое замыкание привязывается к экземпляру команды, обеспечивая простое взаимодействие с методами ввода/вывода каждой команды. Несмотря на то, что в этом файле не определяются HTTP-маршруты, в нём определяются консольные входные точки (пути) в ваше приложение.
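Для наглядности приведём примерные записи в этих файлах (набросок; сами маршруты условные):

```
// routes/web.php: маршруты с сессией и CSRF-защитой
Route::get('/', function () {
    return view('welcome');
});

// routes/api.php: «stateless» маршруты с ограничением частоты запросов
Route::get('/status', function () {
    return response()->json(['ok' => true]);
});
```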
### storage
Папка storage содержит скомпилированные Blade-шаблоны, файл-сессии, кэши файлов и другие файлы, создаваемые фреймворком. Эта папка делится на подпапки app, framework и logs. В папке app можно хранить любые файлы, генерируемые вашим приложением. В папке framework хранятся создаваемые фреймворком файлы и кэш. А в папке logs находятся файлы журналов приложения.
Папку storage/app/public можно использовать для хранения пользовательских файлов, таких как аватарки, которые должны быть доступны всем. Вам надо создать символьную ссылку public/storage, которая ведёт к этой папке. Создать ссылку можно командой:

```
php artisan storage:link
```
### tests
Папка tests содержит ваши автотесты. Изначально там уже есть пример PHPUnit. Класс каждого теста должен иметь в имени суффикс Test. Вы можете запускать свои тесты командами `phpunit` или `php vendor/bin/phpunit`.
### vendor
Папка vendor содержит ваши Composer-зависимости.
## Каталог app
Основная часть вашего приложения находится в каталоге app. По умолчанию этот каталог зарегистрирован под пространством имён `App` и автоматически загружается с помощью Composer по стандарту автозагрузки PSR-4.
В каталоге app находится ряд дополнительных каталогов, таких как Console, Http и Providers. Можно сказать, что каталоги Console и Http предоставляют API ядра вашего приложения. Протокол HTTP и командная строка — это механизмы взаимодействия с вашим приложением, но они не содержат логики приложения. Другими словами, это просто два способа передачи команд вашему приложению. Каталог Console содержит все ваши Artisan-команды, а каталог Http содержит ваши контроллеры, посредники и запросы.
Многие другие каталоги будут созданы в каталоге app, когда вы выполните Artisan-команды `make` для генерирования классов. Например, каталог app/Jobs не будет создан, пока вы не выполните Artisan-команду `make:job`, чтобы сгенерировать класс задачи. Многие классы в каталоге app можно сгенерировать Artisan-командами. Для просмотра доступных команд выполните в терминале команду:

```
php artisan list make
```
### Console
Папка Console содержит все дополнительные Artisan-команды для вашего приложения. Эти команды можно сгенерировать командой `make:command`. Также этот каталог содержит ядро вашей консоли, где регистрируются ваши дополнительные Artisan-команды и определяются ваши запланированные задачи.

### Events

Изначально этого каталога нет, он создаётся Artisan-командами `event:generate` и `make:event`. В папке Events, как можно догадаться, хранятся классы событий. События можно использовать для оповещения других частей приложения о каком-либо событии, что обеспечивает большую гибкость и модульность.
### Exceptions
Папка Exceptions содержит обработчик исключений вашего приложения. Эта папка также является хорошим местом для размещения всех исключений, возникающих в вашем приложении. Если вы хотите изменить то, как журналируются и отображаются ваши исключения, вам надо изменить класс Handler в этом каталоге.
### Http
Папка Http содержит ваши контроллеры, посредники и запросы форм. Здесь будет размещена почти вся логика обработки запросов, входящих в приложение.
### Jobs
Изначально этого каталога нет, он создаётся Artisan-командой `make:job`. В папке Jobs хранятся задачи для вашего приложения. Задачи могут быть обработаны вашим приложением в порядке очереди, а также их можно запустить синхронно в рамках прохождения текущего запроса. Иногда задачи, которые запускаются синхронно во время текущего запроса, называют «командами», потому что они реализуют шаблон Команда.

### Listeners

Изначально этого каталога нет, он создаётся Artisan-командами `event:generate` и `make:listener`. Папка Listeners содержит классы обработчиков для ваших событий. Слушатели событий получают экземпляр события и выполняют логику в ответ на это событие. Например, событие UserRegistered может быть обработано слушателем SendWelcomeEmail.

добавлено в 5.3 ()

### Mail

Изначально этого каталога нет, он создаётся Artisan-командой `make:mail`. Каталог Mail содержит все ваши классы, отвечающие за отправляемые вашим приложением email-сообщения. Почтовые объекты позволяют вам инкапсулировать всю логику создания email-сообщений в единый простой класс, который можно отправить методом `Mail::send()`.

### Notifications

Изначально этого каталога нет, он создаётся Artisan-командой `make:notification`. Каталог Notifications содержит все «транзакционные» уведомления, которые отправляются вашим приложением, например, простое уведомление о событии, произошедшем в вашем приложении. Возможность уведомлений в Laravel абстрагирует отправку уведомлений через разные драйверы, такие как email, Slack, SMS или сохранение в БД.
### Providers
Папка Providers содержит все сервис-провайдеры для вашего приложения. Сервис-провайдеры загружают ваше приложение, привязывая сервисы в сервис-контейнер, регистрируя события, и выполняя любые другие задачи для подготовки вашего приложения к входящим запросам.
В свежеустановленном приложении Laravel эта папка уже содержит несколько провайдеров. При необходимости вы можете добавлять свои провайдеры в эту папку.
### Policies
Изначально этого каталога нет, он создаётся Artisan-командой `make:policy`. Папка Policies содержит классы политик авторизации. Политики служат для определения, разрешено ли пользователю данное действие над ресурсом. Подробнее читайте в документации по авторизации.
добавлено в 5.0 ()
### Commands
В папке Commands, разумеется, хранятся команды для вашего приложения. Команды представляют собой задания, которые могут быть обработаны вашим приложением в порядке очереди, а также задачи, которые вы можете запустить синхронно в рамках прохождения текущего запроса.
### Handlers
Папка Handlers содержит классы обработчиков команд и событий. Обработчики получают команду или событие и выполняют логику в ответ на эту команду или возникновение события.
### Services
Папка Services содержит ряд «вспомогательных» служб, необходимых для работы вашего приложения. Например, включённая в Laravel служба Registrar отвечает за проверку и создание новых пользователей вашего приложения. Другой пример — службы для взаимодействия с внешними API, с системами метрик, или даже со службами, которые собирают данные от вашего приложения.
## Задание пространства имён для вашего приложения
Как уже было сказано, по умолчанию название пространства имён приложения — `App`. Но вы можете изменить его, чтобы оно совпадало с названием вашего приложения. Это можно сделать с помощью Artisan-команды `app:name`. Например, если ваше приложение называется SocialNet, вам надо выполнить следующую команду:

> php artisan app:name SocialNet
# Аутентификация
Хотите сразу попробовать? Просто выполните команды `php artisan make:auth` и `php artisan migrate` в свежем приложении Laravel. Затем откройте в браузере http://your-app.dev/register или любой другой URL, назначенный вашему приложению. Эти команды создадут заготовку для всей вашей системы аутентификации!
В Laravel сделать аутентификацию очень просто. Фактически, почти всё сконфигурировано для вас уже из коробки. Конфигурационный файл аутентификации расположен в config/auth.php. Он содержит несколько хорошо описанных опций для тонкой настройки поведения служб аутентификации.
Средства Laravel для аутентификации состоят из «защитников» и «провайдеров». Защитники определяют то, как аутентифицируются пользователи для каждого запроса. Например, Laravel поставляется с защитником «session», который поддерживает состояние с помощью хранилища сессий и cookies.
Провайдеры определяют то, как пользователи извлекаются из вашего постоянного хранилища. Laravel поставляется с поддержкой извлечения пользователей с помощью Eloquent и конструктора запросов БД. Но при необходимости вы можете определить для своего приложения дополнительные провайдеры.
Не переживайте, если сейчас это звучит запутанно! Для многих приложений никогда не потребуется изменять стандартные настройки аутентификации.
По умолчанию в Laravel есть модель Eloquent App\User в вашем каталоге app. Эта модель может использоваться с базовым драйвером аутентификации Eloquent. Если ваше приложение не использует Eloquent, вы можете использовать драйвер аутентификации database, который использует конструктор запросов Laravel.
При создании схемы базы данных для модели App\User создайте столбец для паролей с длиной не менее 60 символов. Хорошим выбором для строкового столбца будет длина 255 символов.
Кроме того, перед началом работы удостоверьтесь, что ваша таблица users (или эквивалентная) содержит строковый столбец remember_token на 100 символов с допустимым значением NULL. Этот столбец будет использоваться, чтобы хранить ключи для пользователей, выбравших параметр «запомнить меня» при входе в приложение.
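В стандартной миграции таблицы users эти требования уже учтены. Примерный фрагмент метода up() такой миграции (набросок):

```
Schema::create('users', function (Blueprint $table) {
    $table->increments('id');
    $table->string('name');
    $table->string('email')->unique();
    $table->string('password');   // строковый столбец, по умолчанию 255 символов
    $table->rememberToken();      // создаёт nullable-столбец remember_token на 100 символов
    $table->timestamps();
});
```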
## Краткое руководство по аутентификации
Laravel поставляется с несколькими контроллерами аутентификации, расположенными в пространстве имён App\Http\Controllers\Auth.
Каждый из этих контроллеров использует типажи для подключения необходимых методов. Для многих приложений вам вообще не придётся изменять эти контроллеры.
Laravel обеспечивает быстрый способ создания заготовок всех необходимых для аутентификации маршрутов и представлений с помощью одной команды:
> php artisan make:auth
Эту команду надо использовать на свежем приложении, она установит представление макета, представления для регистрации и входа, а также маршруты для всех конечных точек аутентификации. Также будет сгенерирован HomeController для обслуживания запросов к панели настроек вашего приложения после входа.
По умолчанию в Laravel нет маршрутов для запросов к контроллерам аутентификации. Вы можете добавить их вручную в файле app/Http/routes.php:
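Например, в Laravel 5.2 это можно сделать с помощью Route::auth() (набросок; маршрут /home приведён для полноты):

```
// app/Http/routes.php
Route::group(['middleware' => 'web'], function () {
    // Регистрирует маршруты входа, выхода, регистрации и сброса пароля
    Route::auth();

    Route::get('/home', 'HomeController@index');
});
```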
Как было сказано в предыдущем разделе, команда `make:auth` создаст все необходимые вам представления для аутентификации и поместит их в папку resources/views/auth. Также команда `make:auth` создаст папку resources/views/layouts, содержащую основной макет для вашего приложения. Все эти представления используют CSS-фреймворк Bootstrap, но вы можете изменять их как угодно.
Несмотря на то, что контроллеры аутентификации включены в фреймворк, вам необходимо предоставить представления, которые эти контроллеры смогут отрисовать. Представления необходимо расположить в каталоге resources/views/auth. Вы вольны настроить эти представления так, как сами желаете. Представление для входа в систему должно быть в resources/views/auth/login.blade.php, а представление для регистрации — в resources/views/auth/register.blade.php.
```
<form method="POST" action="/auth/login">
    {!! csrf_field() !!}

    <div>
        Email
        <input type="email" name="email" value="{{ old('email') }}">
    </div>

    <div>
        Password
        <input type="password" name="password" id="password">
    </div>

    <div>
        <input type="checkbox" name="remember"> Remember Me
    </div>

    <div>
        <button type="submit">Login</button>
    </div>
</form>
```

```
<!-- resources/views/auth/register.blade.php -->

<form method="POST" action="/auth/register">
    {!! csrf_field() !!}

    <div>
        Name
        <input type="text" name="name" value="{{ old('name') }}">
    </div>

    <div>
        Email
        <input type="email" name="email" value="{{ old('email') }}">
    </div>

    <div>
        Password
        <input type="password" name="password">
    </div>

    <div>
        Confirm Password
        <input type="password" name="password_confirmation">
    </div>

    <div>
        <button type="submit">Register</button>
    </div>
</form>
```

Теперь, когда у вас есть маршруты и представления для имеющихся контроллеров аутентификации, вы готовы регистрировать и аутентифицировать новых пользователей своего приложения. Вы можете просто перейти по этим маршрутам в браузере, поскольку контроллеры аутентификации уже содержат логику (благодаря их типажам) для аутентификации существующих пользователей и сохранения новых пользователей в базе данных.
Когда пользователь успешно аутентифицируется, он будет перенаправлен на URI /home (для версии 5.2 — URI /). Вы можете изменить место для перенаправления после входа, задав свойство redirectTo контроллеров LoginController, RegisterController и ResetPasswordController (для версии 5.2 — контроллера AuthController):
```
protected $redirectTo = '/';
```
добавлено в 5.3 ()
Если для генерирования пути для переадресации необходима дополнительная логика, вы можете определить метод `redirectTo()` вместо свойства redirectTo:

```
protected function redirectTo()
{
    //
}
```

У метода `redirectTo()` приоритет выше, чем у свойства redirectTo.

По умолчанию Laravel использует для аутентификации поле email. Чтобы изменить это, определите метод `username()` в своём LoginController:
```
public function username()
{
    return 'username';
}
```
добавлено в 5.2 ()
Когда аутентификация пользователя не успешна, он автоматически будет перенаправлен обратно на форму входа.
Чтобы изменить место для перенаправления после выхода из приложения, вы можете задать свойство redirectAfterLogout контроллера AuthController:
```
protected $redirectAfterLogout = '/login';
```
Если это свойство не задано, пользователь будет перенаправлен на URI /.
добавлено в 5.3 ()
Вы также можете изменить «защитника», используемого для аутентификации и регистрации пользователей. Для начала определите метод `PHPguard()` в контроллерах LoginController, RegisterController и ResetPasswordController. Метод должен возвращать экземпляр защитника:
```
protected function guard()
{
    return Auth::guard('guard-name');
}
```
добавлено в 5.2 ()
Когда пользователь успешно аутентифицирован, он будет переадресован к URI /home, для обработки которого вам необходимо будет зарегистрировать маршрут. Вы можете настроить место для переадресации после аутентификации, задав свойство redirectPath в AuthController:
```
protected $redirectPath = '/dashboard';
```
Когда аутентификация пользователя не успешна, он будет переадресован на URI /auth/login. Вы можете настроить место для переадресации после неудачной аутентификации, задав свойство loginPath в AuthController:
```
protected $loginPath = '/login';
```
Свойство loginPath не влияет на то, куда будут переходить пользователи при попытке доступа к защищённому маршруту. Это контролируется методом `handle()` посредника App\Http\Middleware\Authenticate.
Настройка хранилища/проверки ввода
Чтобы изменить требуемые поля для формы регистрации нового пользователя, или для изменения способа добавления новых пользователей в вашу базу данных, вы можете изменить класс RegisterController (для версии 5.2 — AuthController). Этот класс отвечает за проверку ввода и создание новых пользователей в вашем приложении.
Метод `validator()` класса RegisterController (для версии 5.2 — AuthController) содержит правила проверки ввода данных для новых пользователей приложения, а метод `create()` этого класса отвечает за создание новых записей App\User в вашей базе данных с помощью Eloquent ORM. Вы можете изменить каждый из этих методов, как пожелаете.
### Получение аутентифицированного пользователя
Вы можете обращаться к аутентифицированному пользователю через фасад Auth:
```
// Получить текущего аутентифицированного пользователя...
$user = Auth::user();

// Получить ID текущего аутентифицированного пользователя...
$id = Auth::id();
```
Или, когда пользователь аутентифицирован, вы можете обращаться к нему через экземпляр Illuminate\Http\Request. Не забывайте, указание типов классов приводит к их автоматическому внедрению в методы вашего контроллера:
```
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
//для версии 5.1 и ранее:
//use Illuminate\Routing\Controller;

class ProfileController extends Controller
{
    /**
     * Обновление профиля пользователя.
     *
     * @param  Request  $request
     * @return Response
     */
    public function update(Request $request)
    {
        // $request->user() возвращает экземпляр аутентифицированного пользователя...
    }
}
```
добавлено в 5.0 ()
Также вы можете указать тип контракта Illuminate\Contracts\Auth\Authenticatable. Это указание типа может быть добавлено к конструктору контроллера, методу контроллера или любому другому конструктору класса, реализуемому в сервис-контейнере:
```
use Illuminate\Routing\Controller;
use Illuminate\Contracts\Auth\Authenticatable;

class ProfileController extends Controller
{
    /**
     * Обновление профиля пользователя.
     *
     * @return Response
     */
    public function updateProfile(Authenticatable $user)
    {
        // $user - это экземпляр аутентифицированного пользователя...
    }
}
```
Определение, аутентифицирован ли пользователь
Чтобы определить, что пользователь уже вошёл в ваше приложение, вы можете использовать метод `check()` фасада Auth, который вернёт true, если пользователь аутентифицирован:

```
if (Auth::check()) {
    // Пользователь вошёл в систему...
}
```

Хотя определить, аутентифицирован ли пользователь, можно методом `check()`, обычно вы будете использовать посредника, чтобы проверять аутентификацию до предоставления пользователю доступа к определённым маршрутам/контроллерам. Подробнее об этом читайте в разделе Защита маршрутов.
### Защита маршрутов
Посредников маршрутов можно использовать, чтобы давать доступ к определённому маршруту только аутентифицированным пользователям.
Laravel поставляется с посредником auth, который определён в Illuminate\Auth\Middleware\Authenticate. Поскольку посредник уже зарегистрирован в вашем HTTP-ядре, вам остаётся только присоединить его к определению маршрута:
Laravel поставляется с посредником auth, который определён в app\Http\Middleware\Authenticate.php. Всё, что вам надо сделать — это присоединить его к определению маршрута:
```
// С помощью замыкания маршрута...
Route::get('profile', ['middleware' => 'auth', function() {
    // Только аутентифицированные пользователи могут зайти...
}]);

// С помощью контроллера...
Route::get('profile', [
    'middleware' => 'auth',
    'uses' => 'ProfileController@show'
]);
```

Конечно, если вы используете контроллеры, вы можете вызвать метод `middleware()` из конструктора контроллера, вместо присоединения его напрямую к определению маршрута:

```
public function __construct()
{
    $this->middleware('auth');
}
```
добавлено в 5.3 ()
Во время прикрепления посредника auth к маршруту, вы можете также указать, какой защитник должен быть использован для аутентификации пользователя. Указанный защитник должен соответствовать одному из ключей в массиве guards вашего файла auth.php:
```
public function __construct()
{
    $this->middleware('auth:api');
}
```
добавлено в 5.2 ()
Во время прикрепления посредника auth к маршруту, вы можете также указать, какой защитник должен быть использован для выполнения аутентификации:
```
Route::get('profile', [
    'middleware' => 'auth:api',
    'uses' => 'ProfileController@show'
]);
```

Указанный защитник должен соответствовать одному из ключей в массиве guards конфигурационного файла auth.php.
добавлено в 5.3 ()
### Блокировка входа
Если вы используете встроенный в Laravel класс LoginController, то типаж Illuminate\Foundation\Auth\ThrottlesLogins уже включён в ваш контроллер. По умолчанию пользователь не сможет войти в приложение в течение одной минуты, если он несколько раз указал неправильные данные для входа. Блокировка происходит отдельно для имени пользователя/адреса e-mail и его IP-адреса.
### Блокировка аутентификации
Если вы используете встроенный в Laravel класс AuthController, то вы можете использовать типаж Illuminate\Foundation\Auth\ThrottlesLogins для блокировки попыток входа в ваше приложение. По умолчанию пользователь не сможет войти в приложение в течение одной минуты, если он несколько раз указал неправильные данные для входа. Блокировка происходит отдельно для имени пользователя/адреса e-mail и его IP-адреса:
```
<?php

namespace App\Http\Controllers\Auth;

use App\User;
use Validator;
use App\Http\Controllers\Controller;
use Illuminate\Foundation\Auth\ThrottlesLogins;
use Illuminate\Foundation\Auth\AuthenticatesAndRegistersUsers;

class AuthController extends Controller
{
    use AuthenticatesAndRegistersUsers, ThrottlesLogins;

    // Остальное содержимое класса...
}
```
## Ручная аутентификация
Если вы не хотите использовать встроенные контроллеры аутентификации, вам нужно будет напрямую управлять аутентификацией пользователей, используя классы аутентификации Laravel. Не волнуйтесь, они не кусаются!
Мы будем работать со службами аутентификации Laravel через фасад Auth, поэтому нам надо не забыть импортировать фасад Auth в начале класса. Далее давайте посмотрим на метод `attempt()`:

```
use Illuminate\Support\Facades\Auth;
//для версии 5.2:
//use Auth;
//для версии 5.1 и ранее:
//use Illuminate\Routing\Controller;

class LoginController extends Controller
//для версии 5.2 и ранее:
//class AuthController extends Controller
{
    /**
     * Обработка попытки аутентификации
     *
     * @return Response
     */
    public function authenticate()
    {
        if (Auth::attempt(['email' => $email, 'password' => $password])) {
            // Аутентификация успешна
            return redirect()->intended('dashboard');
        }
    }
}
```

Метод `attempt()` принимает массив пар ключ/значение в качестве первого аргумента. Значения массива будут использованы для поиска пользователя в таблице базы данных. Так, в приведённом выше примере пользователь будет получен по значению столбца email. Если пользователь будет найден, хешированный пароль, сохранённый в базе данных, будет сравниваться с хешированным значением password, переданным в метод через массив. Если два хешированных пароля совпадут, то для пользователя будет запущена новая аутентифицированная сессия.

Метод `attempt()` вернёт true, если аутентификация прошла успешно. В противном случае будет возвращён false.

Метод `intended()` «переадресатора» перенаправит пользователя к тому URL, к которому он обращался до того, как был перехвачен посредником аутентификации. В этот метод можно передать запасной URI на случай недоступности требуемого пути.
Указание дополнительных условий
При необходимости вы можете добавить дополнительные условия к запросу аутентификации, помимо адреса e-mail и пароля. Например, можно проверить отметку «активности» пользователя:
```
if (Auth::attempt(['email' => $email, 'password' => $password, 'active' => 1])) {
    // Пользователь активен, не приостановлен, и существует.
}
```
В этих примерах email не является обязательным полем, он используется просто для примера. Вы можете использовать любой столбец, соответствующий «имени пользователя» в вашей базе данных.
Обращение к конкретным экземплярам защитника
С помощью метода `guard()` фасада Auth вы можете указать, какой экземпляр защитника необходимо использовать. Это позволяет управлять аутентификацией для отдельных частей вашего приложения, используя полностью отдельные модели для аутентификации или таблицы пользователей. Передаваемое в метод `guard()` имя защитника должно соответствовать одному из защитников, настроенных в файле auth.php:

```
if (Auth::guard('admin')->attempt($credentials)) {
    //
}
```

Для завершения сессии пользователя можно использовать метод `logout()` фасада Auth. Он очистит информацию об аутентификации в сессии пользователя:

```
Auth::logout();
```
### Запоминание пользователей
Если вы хотите обеспечить функциональность «запомнить меня» в вашем приложении, вы можете передать логическое значение вторым параметром в метод `attempt()`, тогда пользователь останется аутентифицированным на неопределённое время или пока он вручную не выйдет из системы. Конечно, ваша таблица users должна содержать строковый столбец remember_token, который будет использоваться для хранения ключей «запомнить меня».

```
if (Auth::attempt(['email' => $email, 'password' => $password], $remember)) {
    // Пользователь запомнен...
}
```
добавлено в 5.3 ()
Если вы «запоминаете» пользователей, вы можете использовать метод `viaRemember()`, чтобы определить, аутентифицировался ли пользователь с помощью cookie «запомнить меня»:

```
if (Auth::viaRemember()) {
    //
}
```
### Другие методы аутентификации
Аутентификация экземпляра пользователя
Если вам необходимо «залогинить» в приложение существующий экземпляр пользователя, вызовите метод `login()` с этим экземпляром. Передаваемый объект должен быть реализацией контракта Illuminate\Contracts\Auth\Authenticatable. Само собой, встроенная в Laravel модель App\User реализует этот интерфейс:

```
Auth::login($user);
```
добавлено в 5.2 ()
Аутентификация пользователя по ID
Для входа пользователя в приложение по его ID используйте метод `loginUsingId()`. Этот метод просто принимает первичный ключ пользователя, которого необходимо аутентифицировать:

```
Auth::loginUsingId(1);

// Войти и "запомнить" данного пользователя...
Auth::loginUsingId(1, true);
```
добавлено в 5.0 ()
Вход пользователя для одного запроса
Вы также можете использовать метод `once()` для входа пользователя в систему в рамках одного запроса. Сеансы и cookies не будут использоваться, что может быть полезно при создании API без состояния (stateless API):

```
if (Auth::once($credentials)) {
    //
}
```
## Простая HTTP-аутентификация
HTTP Basic Authentication — простой и быстрый способ аутентификации пользователей вашего приложения без создания дополнительной страницы входа. Для начала прикрепите посредника auth.basic к своему маршруту. Этот посредник встроен в Laravel, поэтому вам не надо определять его:
```
Route::get('profile', function () {
    // Только аутентифицированные пользователи могут зайти...
})->middleware('auth.basic');
```
добавлено в 5.2 () 5.1 () 5.0 ()
```
Route::get('profile', ['middleware' => 'auth.basic', function() {
    // Только аутентифицированные пользователи могут зайти...
}]);
```
Когда посредник прикреплён к маршруту, вы автоматически получите запрос данных для входа при обращении к маршруту через браузер. По умолчанию посредник auth.basic будет использовать столбец email из записи пользователя в качестве «username».
Если вы используете PHP FastCGI, то простая HTTP-аутентификация изначально может работать неправильно. Надо добавить следующие строки к вашему файлу .htaccess:
```
RewriteCond %{HTTP:Authorization} ^(.+)$
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
```
### Простая Stateless HTTP-аутентификация
Вы также можете использовать простую HTTP-аутентификацию, не задавая пользовательскую cookie для сессии, что особенно полезно для API-аутентификации. Чтобы это сделать, определите посредника, который вызывает метод `onceBasic()`. Если этот метод ничего не возвращает, запрос может быть передан дальше в приложение:

```
<?php

namespace Illuminate\Auth\Middleware;

use Illuminate\Support\Facades\Auth;
//для версии 5.2 и ранее:
//use Auth;
//use Closure;

class AuthenticateOnceWithBasicAuth
{
    /**
     * Обработка входящего запроса.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  \Closure  $next
     * @return mixed
     */
    public function handle($request, $next)
    //для версии 5.2 и ранее:
    //public function handle($request, Closure $next)
    {
        return Auth::onceBasic() ?: $next($request);
    }
}
```
Затем зарегистрируйте посредника маршрута и прикрепите его к маршруту:
```
Route::get('api/user', function () {
    // Только аутентифицированные пользователи могут зайти...
})->middleware('auth.basic.once');
```

Для версии 5.2 и ранее:

```
Route::get('api/user', ['middleware' => 'auth.basic.once', function() {
    // Только аутентифицированные пользователи могут зайти...
}]);
```
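Сама регистрация посредника маршрута обычно выполняется в массиве $routeMiddleware HTTP-ядра (набросок; предполагается, что используется класс посредника из примера выше):

```
// app/Http/Kernel.php

protected $routeMiddleware = [
    // ...
    'auth.basic.once' => \Illuminate\Auth\Middleware\AuthenticateOnceWithBasicAuth::class,
];
```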
## Сброс и изменение паролей
В документации по ветке 5.3 и далее данный раздел статьи был вынесен в отдельную статью — Сброс пароля — прим. пер.
Большинство веб-приложений предоставляет пользователям возможность сбросить их забытые пароли. Вместо того, чтобы постоянно реализовывать это в каждом новом приложении, Laravel предлагает удобные методы для отправки писем о сбросе пароля и выполнении самого сброса.
Для начала проверьте, что ваша модель App\User реализует контракт Illuminate\Contracts\Auth\CanResetPassword. Конечно, модель App\User, встроенная во фреймворк, уже реализует этот интерфейс и использует типаж Illuminate\Auth\Passwords\CanResetPassword для подключения методов, необходимых для реализации интерфейса.
Создание миграции для таблицы сброса паролей
Затем должна быть создана таблица для хранения ключей сброса пароля. Миграция для этой таблицы включена в Laravel, и находится в каталоге database/migrations. Вам остаётся только выполнить миграцию:
> php artisan migrate
В Laravel есть контроллер Auth\PasswordController, содержащий логику, необходимую для сброса пользовательских паролей.
Но вам надо определить маршруты для запросов к этому контроллеру:
```
// Маршруты запроса ссылки для сброса пароля...
Route::get('password/email', 'Auth\PasswordController@getEmail');
Route::post('password/email', 'Auth\PasswordController@postEmail');

// Маршруты сброса пароля...
Route::get('password/reset/{token}', 'Auth\PasswordController@getReset');
Route::post('password/reset', 'Auth\PasswordController@postReset');
```
Кроме определения маршрутов для PasswordController, вам надо предоставить представления, которые могут быть возвращены этим контроллером. Не волнуйтесь, мы предоставили примеры представлений, чтобы вам было легче начать. Вы можете изменить эти шаблоны под дизайн своего приложения.
Пример формы запроса ссылки для сброса пароля
Вам надо сделать HTML-представление для формы запроса сброса пароля. Это представление должно быть помещено в resources/views/auth/password.blade.php. Эта форма предоставляет единственное поле для ввода адреса e-mail, позволяя запросить ссылку для сброса пароля:
```
<!-- resources/views/auth/password.blade.php -->

<form method="POST" action="/password/email">
    {!! csrf_field() !!}

    @if (count($errors) > 0)
        <ul>
            @foreach ($errors->all() as $error)
                <li>{{ $error }}</li>
            @endforeach
        </ul>
    @endif

    <div>
        Email
        <input type="email" name="email" value="{{ old('email') }}">
    </div>

    <div>
        <button type="submit">
            Send Password Reset Link
        </button>
    </div>
</form>
```

Когда пользователь подтвердит запрос на сброс пароля, он получит электронное письмо со ссылкой, которая указывает на метод `getReset()` (обычно расположенный по маршруту /password/reset) контроллера PasswordController. Вам надо создать представление для этого письма в resources/views/emails/password.blade.php. Представление получит переменную `$token`, содержащую ключ сброса пароля, по которому происходит сопоставление пользователя с запросом сброса пароля. Вот пример представления для e-mail:
```
<!-- resources/views/emails/password.blade.php -->

Нажмите здесь для сброса пароля: {{ url('password/reset/'.$token) }}
```
Когда пользователь переходит по ссылке из письма для сброса пароля, ему автоматически будет выведена форма сброса пароля. Это представление необходимо поместить в resources/views/auth/reset.blade.php.
Вот пример формы сброса пароля:
```
<form method="POST" action="/password/reset">
    {!! csrf_field() !!}
    <input type="hidden" name="token" value="{{ $token }}">

    @if (count($errors) > 0)
        <ul>
            @foreach ($errors->all() as $error)
                <li>{{ $error }}</li>
            @endforeach
        </ul>
    @endif

    <div>
        Email
        <input type="email" name="email" value="{{ old('email') }}">
    </div>

    <div>
        Password
        <input type="password" name="password">
    </div>

    <div>
        Confirm Password
        <input type="password" name="password_confirmation">
    </div>

    <div>
        <button type="submit">
            Reset Password
        </button>
    </div>
</form>
```

### После сброса пароля
Когда вы определили маршруты и представления для сброса паролей пользователей, вы можете просто обратиться к данному маршруту через браузер (/password/reset). Встроенный в фреймворк PasswordController содержит логику отправки сообщений со ссылкой для сброса пароля, а также логику обновления паролей в базе данных.
После сброса пароля пользователь автоматически войдёт в приложение и будет перенаправлен к /home. Вы можете изменить этот путь, задав свойство redirectTo в PasswordController:
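Например (значение пути здесь условное):

```
protected $redirectTo = '/dashboard';
```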
По умолчанию ключи сброса пароля истекают через один час. Вы можете изменить это с помощью параметра expire в вашем файле config/auth.php.
Настройка защитника аутентификации
В файле настроек auth.php вы можете настроить несколько «защитников», которых можно использовать для задания логики аутентификации для нескольких таблиц пользователей. Вы можете изменить встроенный PasswordController, чтобы он использовал необходимого вам защитника, добавив в контроллер свойство $guard:
```
/**
 * Будет использоваться указанный защитник аутентификации.
 *
 * @var string
 */
protected $guard = 'admins';
```
Вы можете настроить несколько «брокеров» паролей в файле настроек auth.php, их можно использовать для сброса паролей в нескольких таблицах пользователей. Вы можете изменить встроенный PasswordController, чтобы он использовал необходимого вам брокера, добавив в контроллер свойство $broker:
```
/**
 * Будет использоваться указанный брокер паролей.
 *
 * @var string
 */
protected $broker = 'admins';
```
## Добавление собственных защитников
Вы можете определить своих собственных защитников аутентификации с помощью метода `extend()` фасада Auth. Этот вызов `extend()` нужно разместить в сервис-провайдере. Поскольку в Laravel уже есть AuthServiceProvider, мы можем поместить код в него:
добавлено в 5.3 ()
```
<?php

namespace App\Providers;

use App\Services\Auth\JwtGuard;
use Illuminate\Support\Facades\Auth;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    /**
     * Регистрация всех сервисов аутентификации/авторизации приложения.
     *
     * @return void
     */
    public function boot()
    {
        $this->registerPolicies();

        Auth::extend('jwt', function ($app, $name, array $config) {
            // Вернуть экземпляр Illuminate\Contracts\Auth\Guard...
            return new JwtGuard(Auth::createUserProvider($config['provider']));
        });
    }
}
```
добавлено в 5.2 ()
```
<?php

namespace App\Providers;

use Auth;
use App\Services\Auth\JwtGuard;
use Illuminate\Support\ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    /**
     * Выполнение после-регистрационной загрузки сервисов.
     *
     * @return void
     */
    public function boot()
    {
        Auth::extend('jwt', function($app, $name, array $config) {
            // Вернуть экземпляр Illuminate\Contracts\Auth\Guard...
            return new JwtGuard(Auth::createUserProvider($config['provider']));
        });
    }

    /**
     * Регистрация привязок в контейнере.
     *
     * @return void
     */
    public function register()
    {
        //
    }
}
```

Как видите, в этом примере переданная в метод `extend()` функция обратного вызова должна вернуть реализацию Illuminate\Contracts\Auth\Guard. Этот интерфейс содержит несколько методов, которые вам надо реализовать для определения собственного защитника. Когда вы определили своего защитника, вы можете использовать его в настройке guards вашего файла auth.php:

```
'guards' => [
    'api' => [
        'driver' => 'jwt',
        'provider' => 'users',
    ],
],
```
## Социальная аутентификация
Для версии 5.2 и выше данный раздел перенесён на GitHub — прим. пер.
В дополнение к обычной аутентификации на основе формы, Laravel также предоставляет простой и удобный способ аутентификации с помощью провайдеров OAuth, используя Laravel Socialite. Socialite в настоящее время поддерживает аутентификацию через Facebook, Twitter, LinkedIn, Google, GitHub и Bitbucket.
Чтобы начать работать с Socialite, добавьте зависимость в свой файл composer.json:
```
composer require laravel/socialite
```
После установки библиотеки Socialite зарегистрируйте Laravel\Socialite\SocialiteServiceProvider в своем конфигурационном файле config/app.php.
```
'providers' => [
    // Другие сервис-провайдеры...

    Laravel\Socialite\SocialiteServiceProvider::class,
],
```
Также добавьте фасад Socialite в массив aliases в файле config/app.php:

```
'Socialite' => Laravel\Socialite\Facades\Socialite::class,
```
Вам будет необходимо добавить учётные данные для сервисов OAuth, которые использует ваше приложение. Эти учётные данные должны быть помещены в ваш конфигурационный файл config/services.php и должны использовать ключ facebook, twitter, linkedin, google, github или bitbucket, в зависимости от провайдеров, которые необходимы вашему приложению. Например:
```
'github' => [
    'client_id' => 'your-github-app-id',
    'client_secret' => 'your-github-app-secret',
    'redirect' => 'http://your-callback-url',
],
```
### Основы использования
Теперь можно аутентифицировать пользователей! Вам будут нужны два маршрута: один для перенаправления пользователя на провайдер OAuth, и второй для получения обратного вызова от провайдера после аутентификации. Мы обратимся к Socialite через фасад Socialite:
```
<?php

namespace App\Http\Controllers;

use Socialite;
use Illuminate\Routing\Controller;

class AuthController extends Controller
{
    /**
     * Переадресация пользователя на страницу аутентификации GitHub.
     *
     * @return Response
     */
    public function redirectToProvider()
    {
        return Socialite::driver('github')->redirect();
    }

    /**
     * Получение информации о пользователе от GitHub.
     *
     * @return Response
     */
    public function handleProviderCallback()
    {
        $user = Socialite::driver('github')->user();

        // $user->token;
    }
}
```

Метод `redirect()` отвечает за отправку пользователя провайдеру OAuth, а метод `user()` читает входящий запрос и получает информацию о пользователе от провайдера. Прежде чем перенаправить пользователя, вы можете также установить «области видимости» для запроса с помощью метода `scopes()`. Этот метод переопределит все существующие области видимости:

```
return Socialite::driver('github')
            ->scopes(['scope1', 'scope2'])->redirect();
```
Само собой, вам необходимо определить маршруты для ваших методов контроллера:
```
Route::get('auth/github', 'Auth\AuthController@redirectToProvider');
Route::get('auth/github/callback', 'Auth\AuthController@handleProviderCallback');
```

Некоторые провайдеры OAuth поддерживают необязательные параметры в запросе переадресации. Чтобы включить какие-либо необязательные параметры в запрос, вызовите метод `with()` с ассоциативным массивом:
```
return Socialite::driver('google')
            ->with(['hd' => 'example.com'])->redirect();
```
Получение пользовательских данных
Когда у вас есть экземпляр пользователя, вы можете получить более подробную информацию о пользователе:
```
$user = Socialite::driver('github')->user();

// Провайдеры OAuth 2
$token = $user->token;

// Провайдеры OAuth 1
$token = $user->token;
$tokenSecret = $user->tokenSecret;

// Все провайдеры
$user->getId();
$user->getNickname();
$user->getName();
$user->getEmail();
$user->getAvatar();
```
## Добавление собственных провайдеров пользователей
Если вы не используете традиционную реляционную базу данных для хранения ваших пользователей, вам необходимо добавить в Laravel свой собственный провайдер аутентификации пользователей. Мы используем метод `provider()` фасада Auth для определения своего драйвера:
добавлено в 5.3 ()
```
<?php

namespace App\Providers;

use Illuminate\Support\Facades\Auth;
use App\Extensions\RiakUserProvider;
use Illuminate\Support\ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    /**
     * Регистрация всех сервисов аутентификации/авторизации приложения.
     *
     * @return void
     */
    public function boot()
    {
        $this->registerPolicies();

        Auth::provider('riak', function ($app, array $config) {
            // Возврат экземпляра Illuminate\Contracts\Auth\UserProvider...
            return new RiakUserProvider($app->make('riak.connection'));
        });
    }
}
```
добавлено в 5.2 ()
```
<?php

namespace App\Providers;

use Auth;
use App\Extensions\RiakUserProvider;
use Illuminate\Support\ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    /**
     * Выполнение пост-регистрационной загрузки служб.
     *
     * @return void
     */
    public function boot()
    {
        Auth::provider('riak', function($app, array $config) {
            // Возврат экземпляра Illuminate\Contracts\Auth\UserProvider...
            return new RiakUserProvider($app['riak.connection']);
        });
    }

    /**
     * Регистрация привязок в контейнере.
     *
     * @return void
     */
    public function register()
    {
        //
    }
}
```

После регистрации провайдера методом `provider()` вы можете переключиться на новый провайдер в файле настроек auth.php. Сначала определите «провайдера», который использует ваш новый драйвер:

```
'providers' => [
    'users' => [
        'driver' => 'riak',
    ],
],
```
И наконец, вы можете использовать этот провайдер в вашей настройке 'guards':
> conf'guards' => [ 'web' => [ 'driver' => 'session', 'provider' => 'users', ], ],
## Добавление драйверов аутентификации
Данный раздел статьи добавлен в документацию для версии 5.1.Для версий 5.2 и выше и 5.0 и ниже он неактуален. Если вы не используете традиционную реляционную базу данных для хранения ваших пользователей, вам необходимо добавить в Laravel свой собственный драйвер аутентификации. Мы используем метод `PHPextend()` фасада Auth для определения своего драйвера. Вам надо поместить этот вызов метода `PHPextend()` в сервис-провайдер: `<?php` namespace App\Providers; use Auth; use App\Extensions\RiakUserProvider; use Illuminate\Support\ServiceProvider; class AuthServiceProvider extends ServiceProvider { /** * Выполнение пост-регистрационной загрузки служб. * * @return void */ public function boot() { Auth::extend('riak', function($app) { // Возврат экземпляра Illuminate\Contracts\Auth\UserProvider... return new RiakUserProvider($app['riak.connection']); }); } /** * Регистрация привязок в контейнере. * * @return void */ public function register() { // } } После регистрации драйвера методом `PHPextend()` , вы можете переключиться на новый драйвер в файле настроек config/auth.php.
### Контракт User Provider
Реализации Illuminate\Contracts\Auth\UserProvider отвечают только за извлечение реализаций Illuminate\Contracts\Auth\Authenticatable из постоянных систем хранения, таких как MySQL, Riak, и т.п. Эти два интерфейса позволяют механизмам аутентификации Laravel продолжать функционировать независимо от того, как хранятся данные пользователей и какой тип класса использован для их представления.
Давайте посмотрим на контракт Illuminate\Contracts\Auth\UserProvider:
`<?php` namespace Illuminate\Contracts\Auth; interface UserProvider { public function retrieveById($identifier); public function retrieveByToken($identifier, $token); public function updateRememberToken(Authenticatable $user, $token); public function retrieveByCredentials(array $credentials); public function validateCredentials(Authenticatable $user, array $credentials); } Функция `PHPretrieveById()` обычно принимает ключ, отображающий пользователя, такой как автоинкрементный ID из базы данных MySQL. Реализация Authenticatable, соответствующая этому ID, должна быть получена и возвращена этим методом. Функция `PHPretrieveByToken()` принимает пользователя по его уникальному `PHP$identifier` и ключу `PHP$token` «запомнить меня», хранящемуся в поле remember_token. Как и предыдущий метод, он должен возвращать реализацию Authenticatable. Метод
`updateRememberToken()` обновляет поле remember_token пользователя `$user` значением нового `$token`. Новый ключ может быть как свежим ключом, назначенным при успешной попытке входа «запомнить меня», так и `null` при выходе пользователя. Метод `retrieveByCredentials()` принимает массив авторизационных данных, переданных в метод `Auth::attempt()` при попытке входа в приложение. Затем метод должен «запросить» у основного постоянного хранилища пользователя, соответствующего этим авторизационным данным. Обычно этот метод выполняет запрос с условием «where» для `$credentials['username']`. Затем метод должен вернуть реализацию Authenticatable (для версии 5.2 и ранее — UserInterface). Этот метод не должен пытаться проверить пароль или аутентифицировать пользователя. Метод `validateCredentials()` должен сравнить данного `$user` с `$credentials` для аутентификации пользователя. Например, этот метод может использовать `Hash::check()` для сравнения значения `$user->getAuthPassword()` со значением `$credentials['password']`. Этот метод должен возвращать `true` или `false`, сообщая о правильности пароля.
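Для наглядности ниже — минимальный набросок реализации этого контракта поверх гипотетического хранилища (класс-хранилище и его методы здесь условные, они не являются частью Laravel); он лишь показывает, как методы контракта могут быть связаны с хранилищем и проверкой пароля через `Hash::check()`:

```
<?php

namespace App\Extensions;

use Illuminate\Support\Facades\Hash;
use Illuminate\Contracts\Auth\UserProvider;
use Illuminate\Contracts\Auth\Authenticatable;

class ExampleUserProvider implements UserProvider
{
    // Гипотетическое хранилище пользователей (не входит в Laravel).
    protected $users;

    public function __construct($users)
    {
        $this->users = $users;
    }

    public function retrieveById($identifier)
    {
        // Вернуть реализацию Authenticatable по первичному ключу или null.
        return $this->users->find($identifier);
    }

    public function retrieveByToken($identifier, $token)
    {
        $user = $this->users->find($identifier);

        return ($user && $user->getRememberToken() === $token) ? $user : null;
    }

    public function updateRememberToken(Authenticatable $user, $token)
    {
        $user->setRememberToken($token);

        $this->users->save($user);
    }

    public function retrieveByCredentials(array $credentials)
    {
        // Ищем пользователя по имени, не проверяя пароль.
        return $this->users->findByUsername($credentials['username']);
    }

    public function validateCredentials(Authenticatable $user, array $credentials)
    {
        // Сравниваем переданный пароль с хешем пользователя.
        return Hash::check($credentials['password'], $user->getAuthPassword());
    }
}
```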
### Контракт Authenticatable
Теперь, когда мы изучили каждый метод в UserProvider, давайте посмотрим на контракт Authenticatable. Помните, провайдер должен вернуть реализацию этого интерфейса из методов `retrieveById()` и `retrieveByCredentials()`:

```
<?php

namespace Illuminate\Contracts\Auth;

interface Authenticatable
{
    public function getAuthIdentifierName(); // для версии 5.2 и выше
    public function getAuthIdentifier();
    public function getAuthPassword();
    public function getRememberToken();
    public function setRememberToken($value);
    public function getRememberTokenName();
}
```

Этот интерфейс прост. Метод `getAuthIdentifierName()` должен возвращать имя поля «первичного ключа» пользователя (для версии 5.2 и выше), а метод `getAuthIdentifier()` должен возвращать «первичный ключ» пользователя. При использовании MySQL это будет автоинкрементный первичный ключ. Метод `getAuthPassword()` должен возвращать хешированный пароль пользователя. Этот интерфейс позволяет системе аутентификации работать с классом User независимо от используемой ORM и уровня абстракции хранилища. По умолчанию Laravel содержит в папке app класс User, который реализует этот интерфейс, — вы можете посмотреть в нём пример реализации.
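Для примера ниже — минимальный набросок не-Eloquent класса пользователя, реализующего этот контракт (имена полей условны); методы просто возвращают соответствующие свойства объекта:

```
<?php

namespace App\Extensions;

use Illuminate\Contracts\Auth\Authenticatable;

class ExampleUser implements Authenticatable
{
    public $id;
    public $password;       // хешированный пароль
    public $remember_token;

    public function getAuthIdentifierName()
    {
        return 'id';
    }

    public function getAuthIdentifier()
    {
        return $this->id;
    }

    public function getAuthPassword()
    {
        return $this->password;
    }

    public function getRememberToken()
    {
        return $this->remember_token;
    }

    public function setRememberToken($value)
    {
        $this->remember_token = $value;
    }

    public function getRememberTokenName()
    {
        return 'remember_token';
    }
}
```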
Laravel генерирует различные события в процессе аутентификации. Вы можете прикрепить слушателей к этим событиям в вашем EventServiceProvider:
добавлено в 5.3 ()
`/**` * Сопоставления слушателя событий для вашего приложения. * * @var array */ protected $listen = [ 'Illuminate\Auth\Events\Registered' => [ 'App\Listeners\LogRegisteredUser', ], 'Illuminate\Auth\Events\Attempting' => [ 'App\Listeners\LogAuthenticationAttempt', ], 'Illuminate\Auth\Events\Authenticated' => [ 'App\Listeners\LogAuthenticated', ], 'Illuminate\Auth\Events\Login' => [ 'App\Listeners\LogSuccessfulLogin', ], 'Illuminate\Auth\Events\Failed' => [ 'App\Listeners\LogFailedLogin', ], 'Illuminate\Auth\Events\Logout' => [ 'App\Listeners\LogSuccessfulLogout', ], 'Illuminate\Auth\Events\Lockout' => [ 'App\Listeners\LogLockout', ], ];
добавлено в 5.2 ()
`/**` * Сопоставления слушателя событий для вашего приложения. * * @var array */ protected $listen = [ 'Illuminate\Auth\Events\Attempting' => [ 'App\Listeners\LogAuthenticationAttempt', ], 'Illuminate\Auth\Events\Login' => [ 'App\Listeners\LogSuccessfulLogin', ], 'Illuminate\Auth\Events\Logout' => [ 'App\Listeners\LogSuccessfulLogout', ], 'Illuminate\Auth\Events\Lockout' => [ 'App\Listeners\LogLockout', ], ]; `/**` * Регистрация любых других событий для вашего приложения. * * @param \Illuminate\Contracts\Events\Dispatcher $events * @return void */ public function boot(DispatcherContract $events) { parent::boot($events); // Возникает при каждой попытке аутентификации... $events->listen('auth.attempt', function ($credentials, $remember, $login) { // }); // Возникает при успешных входах... $events->listen('auth.login', function ($user, $remember) { // }); // Возникает при выходах... $events->listen('auth.logout', function ($user) { // }); }
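Например, слушатель успешного входа (для версии 5.2 и выше) может выглядеть примерно так — это лишь набросок, способ логирования здесь условный:

```
<?php

namespace App\Listeners;

use Illuminate\Auth\Events\Login;
use Illuminate\Support\Facades\Log;

class LogSuccessfulLogin
{
    /**
     * Обработка события входа.
     *
     * @param  Login  $event
     * @return void
     */
    public function handle(Login $event)
    {
        // В событии доступен вошедший пользователь.
        Log::info('Пользователь вошёл в систему', ['id' => $event->user->getAuthIdentifier()]);
    }
}
```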
# Авторизация
В Laravel сразу после установки есть сервисы аутентификации, а также он обеспечивает простой способ авторизации действий пользователя с определённым ресурсом. Подход Laravel к авторизации такой же простой, как и к аутентификации.
Авторизация была добавлена в Laravel 5.1.11, поэтому обратитесь к руководству по обновлению перед добавлением этих возможностей в своё приложение.
Есть два основных способа авторизации действий: шлюзы и политики.
Шлюзы и политики похожи на маршруты и контроллеры. Шлюзы обеспечивают простой подход к авторизации на основе замыканий, а политики, подобно контроллерам, группируют свою логику вокруг конкретной модели или ресурса. Сначала мы рассмотрим шлюзы, а затем политики.
При создании приложения вам не надо выбирать только что-то одно: либо шлюзы, либо политики. В большинстве приложений будет использоваться смесь из шлюзов и политик, и это очень здорово! Шлюзы больше подходят для действий, не связанных с моделями и ресурсами, такими как просмотр панели управления администратора. А политики надо использовать для авторизации действий для конкретных моделей или ресурсов.
## Шлюзы
### Написание шлюзов
Шлюзы — это замыкания, определяющие, авторизован ли пользователь на выполнение данного действия. Обычно они определяются в классе App\Providers\AuthServiceProvider с помощью фасада Gate. Шлюзы всегда получают объект пользователя первым аргументом и могут получать дополнительные аргументы, такие как соответствующая Eloquent-модель:
`/**` * Регистрация всех сервисов аутентификации / авторизации. * * @return void */ public function boot() { $this->registerPolicies(); Gate::define('update-post', function ($user, $post) { return $user->id == $post->user_id; }); }
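Шлюз для действия, не связанного с моделью (например, просмотр панели управления администратора), может выглядеть примерно так — набросок, метод `isAdmin()` здесь условный и не является частью Laravel:

```
Gate::define('view-admin-dashboard', function ($user) {
    // Условная проверка для примера: право есть только у администраторов.
    return $user->isAdmin();
});
```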
### Авторизация действий
Для авторизации действий с помощью шлюзов используйте методы `PHPallows()` и `PHPdenies()` . Помните, вам не надо передавать текущего аутентифицированного пользователя в эти методы. Laravel автоматически позаботится о передаче пользователя в замыкание шлюза:
```
if (Gate::allows('update-post', $post)) {
    // Текущий пользователь может редактировать статью...
}

if (Gate::denies('update-post', $post)) {
    // Текущий пользователь не может редактировать статью...
}
```

Для определения, авторизован ли конкретный пользователь на выполнение действия, используйте метод `forUser()` фасада Gate:
```
if (Gate::forUser($user)->allows('update-post', $post)) {
    // Пользователь может редактировать статью...
}

if (Gate::forUser($user)->denies('update-post', $post)) {
    // Пользователь не может редактировать статью...
}
```
## Создание политик
### Генерирование политик
Политики — это классы, организующие логику авторизации вокруг конкретной модели или ресурса. Например, если ваше приложение — блог, у вас может быть модель Post и соответствующая PostPolicy для авторизации действий пользователя, таких как создание или редактирование статьи.
Вы можете сгенерировать политику с помощью artisan-команды `shmake:policy` . Сгенерированная политика будет помещена в каталог app/Policies. Если такого каталога нет в вашем приложении, Laravel создаст его: > shphp artisan make:policy PostPolicy
Команда `shmake:policy` сгенерирует пустой класс политики. Если вы хотите создать класс уже с базовыми CRUD методами политики, укажите `sh--model` при выполнении команды: > shphp artisan make:policy PostPolicy --model=Post
Все политики извлекаются через сервис-контейнер Laravel, позволяя вам указывать типы любых необходимых зависимостей в конструкторе политики, и они будут внедрены автоматически.
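Например (набросок; внедряемый класс здесь условный), конструктор политики может принимать зависимость, и контейнер внедрит её автоматически:

```
<?php

namespace App\Policies;

use App\Repositories\PostRepository; // условная зависимость для примера

class PostPolicy
{
    protected $posts;

    /**
     * Зависимость будет внедрена сервис-контейнером автоматически.
     */
    public function __construct(PostRepository $posts)
    {
        $this->posts = $posts;
    }
}
```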
### Регистрация политик
Когда политика создана, её надо зарегистрировать. В Laravel-приложении по умолчанию есть AuthServiceProvider, имеющий свойство policies, которое связывает ваши Eloquent-модели с соответствующими им политиками. Регистрация политики позволит Laravel использовать необходимую политику для авторизации действия над данной моделью:
`<?php` namespace App\Providers; use App\Post; use App\Policies\PostPolicy; use Illuminate\Support\Facades\Gate; use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider; class AuthServiceProvider extends ServiceProvider { /** * Привязка политик для приложения. * * @var array */ protected $policies = [ Post::class => PostPolicy::class, ]; /** * Регистрация всех сервисов аутентификации / авторизации приложения. * * @return void */ public function boot() { $this->registerPolicies(); // } }
## Написание политик
### Методы политик
После регистрации политики вы можете добавить методы для каждого действия, которое она авторизует. Например, давайте определим метод `update()` в нашей PostPolicy, который будет определять, может ли данный User редактировать данный объект Post. Метод `update()` получит в качестве аргументов экземпляры User и Post и должен вернуть `true` или `false`, сообщая, авторизован ли пользователь для редактирования данной статьи. В этом примере давайте проверим, совпадает ли id пользователя с user_id статьи:

```
<?php

namespace App\Policies;

use App\User;
use App\Post;

class PostPolicy
{
    /**
     * Определение, может ли данная статья редактироваться пользователем.
     *
     * @param  \App\User  $user
     * @param  \App\Post  $post
     * @return bool
     */
    public function update(User $user, Post $post)
    {
        return $user->id === $post->user_id;
    }
}
```

При необходимости вы можете определить дополнительные методы политики для различных действий, которые она авторизует. Например, вы можете определить методы `view()` или `delete()` для авторизации различных действий со статьёй. Помните, что вы можете называть методы политики как угодно. Если вы использовали параметр `--model` при генерировании политики через консоль Artisan, она уже будет содержать методы для действий view, create, update и delete.
### Методы без моделей
Некоторые методы политик принимают только текущего аутентифицированного пользователя без экземпляра модели, для которой они авторизуют. Такие ситуации наиболее распространены при авторизации действий create. Например, если вы создаёте блог, вы можете проверять, авторизован ли пользователь создавать статьи в принципе.
Методы политик, которые не получают экземпляр модели, например метод `create()`, следует определять так, чтобы они ожидали только аутентифицированного пользователя:

```
/**
 * Определение, может ли данный пользователь создавать статьи.
 *
 * @param  \App\User  $user
 * @return bool
 */
public function create(User $user)
{
    //
}
```
### Фильтры политик
Для некоторых пользователей вы можете авторизовывать все действия определённой политики. Для этого определите в политике метод `PHPbefore()` . Этот метод будет выполнятся перед всеми остальными методами политики, давая вам возможность авторизовать действие до того, как будет вызван соответствующий метод. Эта возможность наиболее часто используется для авторизации администраторов приложения на выполнение каких-либо действий:
```
public function before($user, $ability)
{
    if ($user->isSuperAdmin()) {
        return true;
    }
}
```

Если вы хотите запретить всю авторизацию для пользователя, вам надо вернуть `false` из метода `before()`. Если вернётся `null`, авторизация перейдёт к методу политики.
## Авторизация действий с помощью политик
### Через модель User
Модель User, встроенная в Laravel, содержит два полезных метода для авторизации действий: `PHPcan()` и `PHPcant()` . Метод `PHPcan()` получает действие для авторизации и соответствующую модель. Например, давайте определим, авторизован ли пользователь на редактирование данной модели Post:
```
if ($user->can('update', $post)) {
    //
}
```

Если политика зарегистрирована для данной модели, метод `can()` автоматически вызовет соответствующую политику и вернёт логический результат. Если для модели не зарегистрировано ни одной политики, метод `can()` попытается вызвать шлюз на основе замыкания, совпадающий с именем данного действия.
Действия, не требующие моделей
Помните, что некоторые действия, такие как create, не требуют экземпляр модели. В таких ситуациях вы можете передать имя класса в метод `PHPcan()` . Имя класса будет использовано для определения того, какая политика авторизует действие: `use App\Post;` if ($user->can('create', Post::class)) { // Выполняется метод "create" соответствующей политики... }
### Через посредника
Laravel содержит посредника, который может авторизовать действия даже до того, как входящий запрос достигнет ваших маршрутов или контроллеров. По умолчанию посреднику Illuminate\Auth\Middleware\Authorize назначен ключ can в вашем классе App\Http\Kernel. Давайте рассмотрим пример использования посредника can для авторизации пользователя на редактирование статьи блога:
`use App\Post;` Route::put('/post/{post}', function (Post $post) { // Текущий пользователь может редактировать статью... })->middleware('can:update,post');
В этом примере мы передаём посреднику can два аргумента. Первый — имя действия для авторизации, а второе — параметр маршрута для передачи в метод политики. В данном случае, поскольку мы используем неявную привязку модели, модель Post будет передана в метод политики. Если пользователь не авторизован на выполнение данного действия, посредник сгенерирует HTTP-отклик с кодом состояния 403.
Действия, не требующие моделей
И снова, некоторые действия, такие как create, не требуют экземпляр модели. В таких ситуациях вы можете передать имя класса в посредник. Имя класса будет использовано для определения того, какая политика авторизует действие:
// Текущий пользователь может создавать статьи... })->middleware('can:create,App\Post');
### Через вспомогательные методы контроллера
В дополнение к полезным методам модели User Laravel предоставляет полезный метод `PHPauthorize()` для всех ваших контроллеров, наследующих базовый класс App\Http\Controllers\Controller. Подобно методу `PHPcan()` , этот метод принимает имя действия для авторизации и соответствующую модель. Если действие не авторизовано, метод `PHPauthorize()` выбросит исключение Illuminate\Auth\Access\AuthorizationException, которое будет конвертировано стандартным обработчиком исключений Laravel в HTTP-отклик с кодом состояния 403: `<?php` namespace App\Http\Controllers; use App\Post; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class PostController extends Controller { /** * Редактировать данную статью. * * @param Request $request * @param Post $post * @return Response */ public function update(Request $request, Post $post) { $this->authorize('update', $post); // Текущий пользователь может редактировать статью... } }
Действия, не требующие моделей
Как было сказано ранее, некоторые действия, такие как create, не требуют экземпляр модели. В таких ситуациях вы можете передать имя класса в метод `PHPauthorize()` . Имя класса будет использовано для определения того, какая политика авторизует действие: `/**` * Create a new blog post. * * @param Request $request * @return Response */ public function create(Request $request) { $this->authorize('create', Post::class); // The current user can create blog posts... }
### Через шаблоны Blade
При написании шаблонов Blade вы можете выводить часть страницы, только если пользователь авторизован на выполнение данного действия. Например, можно показывать форму редактирования статьи, только если пользователь может редактировать статью. В данной ситуации вы можете использовать семейство директив `PHP@can` и `PHP@cannot` :
```
@can('update', $post)
    <!-- Текущий Пользователь Может Редактировать Статью -->
@elsecan('create', $post)
    <!-- Текущий Пользователь Может Создать Новую Статью -->
@endcan

@cannot('update', $post)
    <!-- Текущий Пользователь Не Может Редактировать Статью -->
@elsecannot('create', $post)
    <!-- Текущий Пользователь Не Может Создать Новую Статью -->
@endcannot
```

Эти директивы — удобный короткий вариант для написания операторов `@if` и `@unless`. Операторы `@can` и `@cannot` из приведённого примера можно перевести соответственно в следующие операторы:
```
@if (Auth::user()->can('update', $post))
    <!-- Текущий Пользователь Может Редактировать Статью -->
@endif

@unless (Auth::user()->can('update', $post))
    <!-- Текущий Пользователь Не Может Редактировать Статью -->
@endunless
```
Действия, не требующие моделей
Как и большинство других методов авторизации вы можете передать в директивы `PHP@can` и `PHP@cannot` имя класса, если действие не требует экземпляра модели:
```
@can('create', Post::class)
    <!-- Текущий Пользователь Может Создавать Статьи -->
@endcan

@cannot('create', Post::class)
    <!-- Текущий Пользователь Не Может Создавать Статьи -->
@endcannot
```
## Определение прав
Простейший способ определить наличие у пользователя прав на выполнение конкретного действия — задать «право» при помощи класса Illuminate\Auth\Access\Gate. Поставляемый с Laravel AuthServiceProvider служит удобным местом для определения всех прав для вашего приложения. Например, давайте определим право update-post, которое получает текущего User и модель Post. Внутри нашего права мы будем проверять совпадает ли id пользователя с user_id статьи:
`<?php` namespace App\Providers; use Illuminate\Contracts\Auth\Access\Gate as GateContract; use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider; class AuthServiceProvider extends ServiceProvider { /** * Регистрация любых сервисов аутентификации/авторизации для приложения. * * @param \Illuminate\Contracts\Auth\Access\Gate $gate * @return void */ public function boot(GateContract $gate) { $this->registerPolicies($gate); $gate->define('update-post', function ($user, $post) { return $user->id == $post->user_id; }); } } До версии 5.2 в этом примере для сравнения использовался оператор тождественного равенства
```
PHPreturn $user->id === $post->user_id;
```
— прим. пер. Заметьте, мы не проверили данного $user на NULL. Gate автоматически вернёт значение false для всех прав, когда нет аутентифицированного пользователя, или конкретный пользователь не был указан с помощью метода `PHPforUser()` .
### Права на основе класса
В добавление к регистрации замыканий Closures в качестве обратных вызовов авторизации, вы можете регистрировать методы класса, передавая строку с именем класса и метода. Когда понадобится, класс будет извлечён при помощи сервис-контейнера:
```
$gate->define('update-post', 'Class@method');
```
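Например (набросок; имя класса и метода условны), такой класс может выглядеть так, а его метод получает те же аргументы, что и замыкание:

```
<?php

namespace App\Policies;

class PostGate
{
    // Тот же обратный вызов авторизации, но в виде метода класса.
    public function update($user, $post)
    {
        return $user->id == $post->user_id;
    }
}
```

Регистрация в этом случае может выглядеть так: `$gate->define('update-post', 'App\Policies\PostGate@update');`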
### Перехват проверок авторизации
Иногда необходимо предоставить полные права конкретному пользователю. Для таких случаев используйте метод `PHPbefore()` , чтобы задать обратный вызов, который будет выполняться до всех остальных проверок авторизации:
```
$gate->before(function ($user, $ability) {
    if ($user->isSuperAdmin()) {
        return true;
    }
});
```

Если обратный вызов `before()` возвращает не пустой результат, то этот результат будет считаться результатом проверки. Вы можете использовать метод `after()` для задания обратного вызова, который будет выполняться после каждой проверки авторизации. Но из этого метода нельзя изменить результат проверки авторизации:
```
$gate->after(function ($user, $ability, $result, $arguments) {
    //
});
```
## Проверка прав
### С помощью фасада Gate
Когда право задано, мы можем «проверить» его разными способами. Во-первых, мы можем использовать методы фасада Gate — `PHPcheck()` , `PHPallows()` и `PHPdenies()` . Все они получают имя права и аргументы, которые необходимо передать в обратный вызов права. Вам не надо передавать текущего пользователя в эти методы, поскольку Gate автоматически подставит его перед аргументами, передаваемыми в обратный вызов. Поэтому при проверке права update-post, которое мы определили ранее, нам надо передать только экземпляр Post в метод `PHPdenies()` : `<?php` namespace App\Http\Controllers; use Gate; use App\User; use App\Post; use App\Http\Controllers\Controller; class PostController extends Controller { /** * Обновление данной статьи. * * @param int $id * @return Response */ public function update($id) { $post = Post::findOrFail($id); if (Gate::denies('update-post', $post)) { abort(403); } // Обновление статьи... } } Метод `PHPallows()` обратен методу `PHPdenies()` и возвращает true, если действие авторизовано. Метод `PHPcheck()` — псевдоним метода `PHPallows()` .
Проверка прав конкретного пользователя
Если вы хотите использовать фасад Gate для проверки наличия определённого права у пользователя, отличного от текущего аутентифицированного пользователя, то можете использовать метод `PHPforUser()` :
// }
Передача нескольких аргументов
Конечно, обратные вызовы прав могут принимать несколько аргументов:
```
Gate::define('delete-comment', function ($user, $post, $comment) {
    //
});
```
Если вашему праву необходимо несколько аргументов, просто передайте массив аргументов в методы Gate:
```
if (Gate::allows('delete-comment', [$post, $comment])) {
```
### С помощью модели User
Альтернативный способ проверки прав — с помощью экземпляра модели User. По умолчанию в Laravel модель App\User использует типаж Authorizable, который предоставляет два метода: `PHPcan()` и `PHPcannot()` . Эти методы могут быть использованы так же, как методы `PHPallows()` и `PHPdenies()` фасада Gate. Тогда, используя наш предыдущий пример, мы можем изменить код вот так: `<?php` namespace App\Http\Controllers; use App\Post; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class PostController extends Controller { /** * Обновление данной статьи. * * @param \Illuminate\Http\Request $request * @param int $id * @return Response */ public function update(Request $request, $id) { $post = Post::findOrFail($id); if ($request->user()->cannot('update-post', $post)) { abort(403); } // Обновление статьи... } } Метод `PHPcan()` обратен методу `PHPcannot()` :
```
if ($request->user()->can('update-post', $post)) {
    // Обновление статьи...
}
```
### В шаблонах Blade
Для удобства Laravel предоставляет Blade-директиву @can для быстрой проверки наличия данного права у текущего аутентифицированного пользователя. Например:
```
<a href="/post/{{ $post->id }}">View Post</a>

@can('update-post', $post)
    <a href="/post/{{ $post->id }}/edit">Edit Post</a>
@endcan
```
Также вы можете комбинировать директиву @can с директивой @else:
```
@can('update-post', $post)
    <!-- The Current User Can Update The Post -->
@else
    <!-- The Current User Can't Update The Post -->
@endcan
```
### В запросах форм
Также вы можете использовать определёные в Gate права в методе `PHPauthorize()` запроса формы. Например: `/**` * Определение авторизации пользователя для выполнения этого запроса. * * @return bool */ public function authorize() { $postId = $this->route('post'); return Gate::allows('update', Post::findOrFail($postId)); }
## Политики
### Создание политик
В больших приложениях определение всей логики авторизации в AuthServiceProvider может стать громоздким, поэтому Laravel позволяет вам разделять вашу логику авторизации на классы «Политики». Политики — простые PHP-классы, которые группируют логику авторизации на основе авторизуемых ресурсов.
Сначала давайте сгенерируем политику для управления авторизацией для нашей модели Post. Вы можете сгенерировать политику используя artisan-команду `shmake:policy` . Сгенерированная политика будет помещена в папку app/Policies: > shphp artisan make:policy PostPolicy
Когда мы создали политику, нам надо зарегистрировать её классом Gate. AuthServiceProvider содержит свойство policies, которое сопоставляет различные сущности с управляющими ими политиками. Поэтому мы укажем, что политика модели Post — это класс PostPolicy:
`<?php` namespace App\Providers; use App\Post; use App\Policies\PostPolicy; use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider; class AuthServiceProvider extends ServiceProvider { /** * Сопоставление политик для приложения. * * @var array */ protected $policies = [ Post::class => PostPolicy::class, ]; /** * Регистрация любых сервисов аутентификации/авторизации для приложения. * * @param \Illuminate\Contracts\Auth\Access\Gate $gate * @return void */ public function boot(GateContract $gate) { $this->registerPolicies($gate); } }
### Написание политик
Когда политика сгенерирована и зарегистрирована, мы можем добавлять методы для каждого права, за которое она отвечает. Например, определим метод `PHPupdate()` в нашем PostPolicy, который будет проверять может ли данный User «обновлять» Post: `<?php` namespace App\Policies; use App\User; use App\Post; class PostPolicy { /** * Проверка может ли данный пост быть обновлён пользователем. * * @param \App\User $user * @param \App\Post $post * @return bool */ public function update(User $user, Post $post) { return $user->id === $post->user_id; } } При необходимости вы можете продолжить определять дополнительные методы для политики для различных прав, за которые она отвечает. Например, вы можете определить методы `PHPshow()` , `PHPdestroy()` и `PHPaddComment()` для авторизации различных действий с Post.
Все политики подключаются через сервис-контейнер Laravel, а значит, вы можете указать типы любых необходимых зависимостей в конструкторе политики, и они будут внедрены автоматически.
Если вам необходимо предоставить конкретному пользователю все права политики, определите в этой политике метод `PHPbefore()` . Этот метод будет выполняться до всех остальных проверок авторизации этой политики:
```
public function before($user, $ability)
{
    if ($user->isSuperAdmin()) {
        return true;
    }
}
```

Если метод `before()` возвращает не пустой результат, то этот результат будет считаться результатом проверки.
### Проверка политик
Методы политик вызываются точно так же, как обратные вызовы авторизации на основе замыканий Closure. Вы можете использовать фасад Gate, модель User, Blade-директиву @can, или вспомогательную функцию `PHPpolicy()` .
Gate автоматически определяет какую политику использовать, исходя из классов аргументов, передаваемых в его методы. Если мы передаём экземпляр Post в метод denies, то Gate будет использовать соответствующий PostPolicy для авторизации действий:
`<?php` namespace App\Http\Controllers; use Gate; use App\User; use App\Post; use App\Http\Controllers\Controller; class PostController extends Controller { /** * Обновление данной статьи. * * @param int $id * @return Response */ public function update($id) { $post = Post::findOrFail($id); if (Gate::denies('update', $post)) { abort(403); } // Обновление статьи... } } Методы `PHPcan()` и `PHPcannot()` модели User будут так же автоматически использовать политики, когда они доступны для данных аргументов. Эти методы предоставляют удобный способ для авторизации действий для любого экземпляра User, получаемого вашим приложением:
```
if ($user->can('update', $post)) {
    //
}

if ($user->cannot('update', $post)) {
    //
}
```
Точно так же Blade-директива @can будет использовать политики, когда они доступны для данных аргументов:
```
@can('update', $post)
    <!-- Текущий Пользователь Может Обновлять Статью -->
@endcan
```
С помощью вспомогательной функции Policy
Глобальная вспомогательная функция `PHPpolicy()` может использоваться для получения класса Policy для данного экземпляра класса. Например, мы можем передать экземпляр Post в функцию `PHPpolicy()` для получения экземпляра соответствующего класса PostPolicy:
```
if (policy($post)->update($user, $post)) {
```
## Авторизация контроллера
Базовый класс Laravel App\Http\Controllers\Controller по умолчанию использует типаж AuthorizesRequests. Этот типаж предоставляет метод `PHPauthorize()` , который может быть использован для быстрой авторизации данного действия или выброса AuthorizationException (HttpException для версии 5.1 и ранее — прим. пер.), если действие не авторизовано. Метод `PHPauthorize()` разделяет ту же подпись, что и различные другие методы авторизации, такие как `PHPGate::allows` и `PHP$user->can()` . Давайте используем метод `PHPauthorize()` для быстрой авторизации запроса на обновление Post: `<?php` namespace App\Http\Controllers; use App\Post; use App\Http\Controllers\Controller; class PostController extends Controller { /** * Обновление данной статьи. * * @param int $id * @return Response */ public function update($id) { $post = Post::findOrFail($id); $this->authorize('update', $post); // Обновление статьи... } } Если действие авторизовано, контроллер продолжит нормально выполняться; но если метод `PHPauthorize()` определит, что действие не авторизовано, будет автоматически выброшено AuthorizationException (HttpException для версии 5.1 и ранее — прим. пер.), которое сгенерирует HTTP-ответ с кодом состояния 403 Not Authorized. Как видите, метод `PHPauthorize()` — удобный и быстрый способ авторизации действия или выброса исключения одной строчкой кода. Типаж AuthorizesRequests также предоставляет метод
`authorizeForUser()` для авторизации действия для пользователя, который не является текущим аутентифицированным пользователем:
```
$this->authorizeForUser($user, 'update', $post);
```
Автоматическое определение методов политики
Часто методы политики будут соответствовать методам контроллера. Например, в приведённом выше методе `PHPupdate()` у метода контроллера и метода политики одинаковое название `PHPupdate()` . Поэтому Laravel позволяет просто передать аргументы экземпляра в метод `PHPauthorize()` , и авторизуемое действие будет автоматически определено на основе имени вызываемой функции. В этом примере, поскольку `PHPauthorize()` вызывается из метода контроллера `PHPupdate()` , то и в политике будет вызван метод `PHPupdate()` : `/**` * Обновление данной статьи. * * @param int $id * @return Response */ public function update($id) { $post = Post::findOrFail($id); $this->authorize($post); // Обновление статьи... }
# Laravel Cashier
Laravel Cashier (кассир — прим. пер.) обеспечивает выразительный и гибкий интерфейс для сервисов биллинговых подписок Stripe и Braintree. Он сам создаст практически весь шаблонный код биллинговых подписок, который вы боитесь писать. В дополнение к основному управлению подписками Cashier может работать с купонами, заменой подписок, «величинами» подписок, отменой льготного периода, и даже генерировать PDF-файлы счетов.
Если вы обрабатываете только разовые платежи и не предлагаете людям подписки, вам не стоит использовать Cashier. Используйте SDK Stripe и Braintree напрямую.
### Stripe
Сначала добавьте пакет Cashier для Stripe в свой файл composer.json и выполните команду `shcomposer update` :
* "laravel/cashier": "~7.0"
* "laravel/cashier": "~6.0" — для Laravel 5.2
* "laravel/cashier": "~5.0" — для Stripe SDK ≈2.0 и версии Stripe APIs от 2015-02-18 и выше
* "laravel/cashier": "~4.0" — для версии Stripe APIs от 2015-02-18 и выше
* "laravel/cashier": "~3.0" — для версии Stripe APIs от 2015-02-16 включительно и выше
Затем зарегистрируйте сервис-провайдер Laravel\Cashier\CashierServiceProvider в вашем файле настроек config/app.
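Например, запись в массиве providers файла config/app.php может выглядеть так:

```
'providers' => [
    // Другие сервис-провайдеры...

    Laravel\Cashier\CashierServiceProvider::class,
],
```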
Перед тем, как начать использовать Cashier, надо подготовить нашу БД.
Надо добавить несколько столбцов в таблицу users и создать новую таблицу subscriptions для хранения всех подписок пользователей:
```
Schema::table('users', function ($table) {
    $table->string('stripe_id')->nullable();
    $table->string('card_brand')->nullable();
    $table->string('card_last_four')->nullable();
    $table->timestamp('trial_ends_at')->nullable();
});

Schema::create('subscriptions', function ($table) {
    $table->increments('id');
    $table->integer('user_id');
    $table->string('name');
    $table->string('stripe_id');
    $table->string('stripe_plan');
    $table->integer('quantity');
    $table->timestamp('trial_ends_at')->nullable();
    $table->timestamp('ends_at')->nullable();
    $table->timestamps();
});
```

После создания миграции выполните Artisan-команду `migrate`.
Далее добавьте типаж Billable в определение вашей модели. Этот типаж предоставляет различные методы для выполнения основных задач по оплате, таких как создание подписок, применение купонов, обновление информации о банковской карте:
```
use Laravel\Cashier\Billable;

class User extends Authenticatable
{
    use Billable;
}
```
Наконец, надо настроить ключ Stripe в файле настроек services.php. Получить свои ключи Stripe API можно в панели управления Stripe:
`'stripe' => [` 'model' => App\User::class, 'secret' => env('STRIPE_SECRET'), ],
Далее добавьте типаж Billable и соответствующие преобразователи даты в определение вашей модели:
use Laravel\Cashier\Contracts\Billable as BillableContract; class User extends Model implements BillableContract { use Billable; protected $dates = ['trial_ends_at', 'subscription_ends_at']; } Добавление столбцов в свойство модели `PHP$dates` даёт Eloquent команду вернуть столбцы в виде экземпляров Carbon/DateTime вместо «сырых» строк.
Наконец, внесите ваш Stripe-ключ в конфигурационный файл services.php:
`'stripe' => [` 'model' => 'User', 'secret' => env('STRIPE_API_SECRET'), ],
### Braintree
Преимущества и недостатки Braintree
Для многих операций реализация функций Cashier в Stripe и Braintree одинакова. Оба сервиса предоставляют возможность оплаты подписок банковскими картами, но Braintree также поддерживает оплату через PayPal. Однако, в Braintree нет некоторых возможностей, имеющихся в Stripe. При выборе между ними учитывайте следующее:
* Braintree поддерживает PayPal, а Stripe нет.
* Braintree не поддерживает методы
`PHPincrement` и `PHPdecrement` для подписок. Это ограничение Braintree, а не Cashier. * Braintree не поддерживает скидки в процентах. Это ограничение Braintree, а не Cashier.
Сначала добавьте пакет Cashier для Braintree в ваш файл composer.json и выполните команду `shcomposer update` : > sh"laravel/cashier-braintree": "~2.0"
Далее зарегистрируйте сервис-провайдер Laravel\Cashier\CashierServiceProvider в файле настроек config/app.
Перед использованием Cashier с Braintree вам надо определить скидку для плана оплаты в панели настроек Braintree. Эта скидка будет использоваться для пропорционального пересчёта подписок, которые переходят с годовой на ежемесячную оплату, или наоборот с ежемесячной на годовую. Настраиваемый в панели настроек Braintree размер скидки может быть любым, на ваше усмотрение, а Cashier будет просто изменять размер по умолчанию на заданный при каждом применении купона. Этот купон необходим из-за того, что Braintree изначально не поддерживает пропорциональный пересчёт подписок для каждого выставления счёта.
Перед использованием Cashier нам надо подготовить базу данных. Надо добавить несколько столбцов в таблицу users и создать новую таблицу subscriptions для хранения всех подписок пользователей:
```
Schema::table('users', function ($table) {
    $table->string('braintree_id')->nullable();
    $table->string('paypal_email')->nullable();
    $table->string('card_brand')->nullable();
    $table->string('card_last_four')->nullable();
    $table->timestamp('trial_ends_at')->nullable();
});

Schema::create('subscriptions', function ($table) {
    $table->increments('id');
    $table->integer('user_id');
    $table->string('name');
    $table->string('braintree_id');
    $table->string('braintree_plan');
    $table->integer('quantity');
    $table->timestamp('trial_ends_at')->nullable();
    $table->timestamp('ends_at')->nullable();
    $table->timestamps();
});
```

После создания миграций просто выполните Artisan-команду `migrate`.
Далее добавьте типаж Billable в определение вашей модели:
```
use Laravel\Cashier\Billable;

class User extends Authenticatable
{
    use Billable;
}
```
Далее надо настроить следующие параметры в файле настроек services.php:
`'braintree' => [` 'model' => App\User::class, 'environment' => env('BRAINTREE_ENV'), 'merchant_id' => env('BRAINTREE_MERCHANT_ID'), 'public_key' => env('BRAINTREE_PUBLIC_KEY'), 'private_key' => env('BRAINTREE_PRIVATE_KEY'), ], Затем надо добавить следующие вызовы Braintree SDK в метод `PHPboot()` вашего сервис провайдера AppServiceProvider:
```
\Braintree_Configuration::environment(config('services.braintree.environment'));
\Braintree_Configuration::merchantId(config('services.braintree.merchant_id'));
\Braintree_Configuration::publicKey(config('services.braintree.public_key'));
\Braintree_Configuration::privateKey(config('services.braintree.private_key'));
```
```
\Braintree_Configuration::environment(env('BRAINTREE_ENV'));
```
\Braintree_Configuration::merchantId(env('BRAINTREE_MERCHANT_ID')); \Braintree_Configuration::publicKey(env('BRAINTREE_PUBLIC_KEY')); \Braintree_Configuration::privateKey(env('BRAINTREE_PRIVATE_KEY'));
добавлено в 5.3 ()
### Настройка валюты
Стандартная валюта Cashier — доллар США (USD). Вы можете изменить валюту, вызвав метод `Cashier::useCurrency()` из метода `boot()` одного из ваших сервис-провайдеров. Метод `useCurrency()` принимает два строковых параметра: валюту и символ валюты:
```
use Laravel\Cashier\Cashier;

Cashier::useCurrency('eur', '€');
```
## Подписки
### Создание подписок
добавлено в 5.2 ()
Для создания подписки сначала получите экземпляр оплачиваемой модели, который обычно является экземпляром App\User. Когда вы получили модель, вы можете использовать метод `PHPnewSubscription()` для управления подписками модели:
$user->newSubscription('main', 'monthly')->create($creditCardToken); Первый аргумент метода `PHPnewSubscription()` — название подписки. Если в вашем приложении используется только одна подписка, то вы можете назвать её main или primary. Второй аргумент — конкретный план Stripe/Braintree, на который подписывается пользователь. Это значение должно соответствовать идентификатору плана в Stripe или Braintree. Метод `PHPcreate()` создаст подписку, а также внесёт в вашу базу данных ID заказчика и другую связанную информацию по оплате.
Дополнительная информация о пользователе
Если вы хотите указать дополнительную информацию о пользователе, передайте её вторым аргументом методу `PHPcreate()` :
```
$user->newSubscription('main', 'monthly')->create($creditCardToken, [
    'email' => $email,
]);
```
Подробнее о дополнительных полях, поддерживаемых Stripe и Braintree, читайте в документации по созданию заказчика Stripe и в документации Braintree.
Если надо применить купон при создании подписки, используйте метод `PHPwithCoupon()` :
```
$user->newSubscription('main', 'monthly')
    ->withCoupon('code')
    ->create($creditCardToken);
```

Для создания подписки сначала получите экземпляр оплачиваемой модели, который обычно является экземпляром App\User. Когда вы получили модель, вы можете использовать метод `subscription()` для управления подписками модели:
$user->subscription('monthly')->create($creditCardToken); Метод `PHPcreate()` автоматически создаст подписку Stripe, а также внесёт в вашу базу данных ID заказчика Stripe и другую связанную информацию по оплате. Если для вашего тарифа настроен пробный период в Stripe, то для записи пользователя автоматически будет задана дата окончания периода.
Если вы хотите использовать пробные периоды, но при этом управлять ими полностью из своего приложения, а не определять их в Stripe, то вам надо задать дату окончания периода вручную:
$user->save();
Дополнительная информация о пользователе
Если вы хотите указать дополнительную информацию о пользователе, передайте её вторым аргументом методу `PHPcreate()` :
```
$user->subscription('monthly')->create($creditCardToken, [
```
'email' => $email, 'description' => 'Our First Customer' ]);
Подробнее о дополнительных полях, поддерживаемых Stripe, читайте в документации по созданию заказчика Stripe.
Если надо применить купон при создании подписки, используйте метод `PHPwithCoupon()` :
```
$user->subscription('monthly')
```
->withCoupon('code') ->create($creditCardToken);
### Проверка статуса подписки
```
if ($user->subscribed('main')) {
```
```
if ($user->subscription('main')->onTrial()) {
```
// } Метод
```
PHPsubscribedToPlan()
```
используется для определения, подписан ли пользователь на данный тариф, на основе его Stripe/Braintree ID. В этом примере мы определим, подписана ли подписка main пользователя на план monthly:
```
if ($user->subscribedToPlan('monthly', 'main')) {
```
```
if ($user->subscribed()) {
```
// } Метод `PHPonPlan()` используется для определения, подписан ли пользователь на данный тариф, на основе его Stripe ID:
```
if ($user->cancelled()) {
```
// } Метод `PHPeverSubscribed()` используется для определения, подписывался ли пользователь когда-либо на ваше приложение:
```
if ($user->everSubscribed()) {
```
### Смена тарифа
Когда пользователь подписан на ваше приложение, он может захотеть сменить свой тарифный план. Чтобы переключить пользователя на новую подписку, передайте идентификатор тарифа в метод `PHPswap()` :
$user->subscription('main')->swap('provider-plan-id');
Если пользователь был на пробном периоде, то пробный период продолжится. Кроме того, если у подписки есть «количество», то оно тоже применится.
Если вы хотите сменить план и отменить все текущие пробные периоды пользователя, используйте метод `PHPskipTrial()` :
```
$user->subscription('main')
    ->skipTrial()
    ->swap('provider-plan-id');
```

Когда пользователь подписан на ваше приложение, он может захотеть сменить свой тарифный план. Чтобы переключить пользователя на новую подписку, используйте метод `swap()`. Например, мы легко можем переключить пользователя на подписку premium:
$user->subscription('premium')->swap(); Если пользователь был на пробном периоде, то пробный период продолжится. Кроме того, если у подписки есть «количество», то оно тоже применится. При смене тарифа вы можете использовать метод `PHPprorate()` , чтобы указать, что стоимость должна быть пересчитана пропорционально. Вдобавок, вы можете использовать метод `PHPswapAndInvoice()` для того, чтобы выставить счёт за смену тарифа немедленно:
```
$user->subscription('premium')
```
->prorate() ->swapAndInvoice();
### Количество подписки
Количество подписки поддерживается только версией Cashier для Stripe. В Braintree нет возможности, подобной «количеству» в Stripe.
Иногда подписки зависят от «количества». Например, ваше приложение стоит $10 в месяц с одного пользователя учётной записи. Чтобы легко увеличить или уменьшить количество вашей подписки, используйте методы `incrementQuantity()` и `decrementQuantity()`, а чтобы задать конкретное количество — метод `updateQuantity()`:

```
$user->subscription('main')->updateQuantity(10);
```
Иногда подписки зависят от «количества». Например, ваше приложение стоит $10 в месяц с одного пользователя учётной записи. Чтобы легко увеличить или уменьшить количество вашей подписки, используйте методы `PHPincrement()` и `PHPdecrement()` :
```
$user->subscription()->updateQuantity(10);
```
Более подробная информация о количествах подписок в документации Stripe.
### Налог на подписку
С помощью Cashier можно легко изменить значение tax_percent, посылаемое в Stripe. Чтобы указать процент налога, который пользователь платит за подписку, реализуйте метод `PHPgetTaxPercent()` в своей модели, и верните числовое значение от 0 до 100 с не более, чем двумя знаками после запятой.
```
public function getTaxPercent()
{
    return 20;
}
```

Метод `taxPercentage()` позволяет задавать налоговую ставку отдельно для каждой модели, что будет полезно при наличии пользователей из разных стран. Метод `taxPercentage()` применяется только к оплате подписок. Если вы используете Cashier для разовых платежей, то в таких случаях вам надо указывать размер налога вручную.
### Отмена подписки
Отменить подписку можно методом `PHPcancel()` :
```
$user->subscription('main')->cancel();
```
// }
добавлено в 5.3 ()
Отменить подписку можно методом `PHPcancel()` :
```
$user->subscription()->cancel();
```
### Возобновление подписки
Если пользователь отменит подписку и затем возобновит её до того, как она полностью истекла, тогда не произойдет моментального расчёта оплаты. Его подписка будет просто повторно активирована, и расчёт оплаты будет происходить по изначальному циклу.
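Например, в Cashier для этого обычно используется метод `resume()` подписки (набросок):

```
// Возобновить подписку, отменённую в течение льготного периода...
$user->subscription('main')->resume();
```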
## Пробные подписки
### С запросом банковской карты
Если вы хотите предлагать клиентам пробные периоды, но при этом сразу собирать данные о способе оплаты, используйте метод `PHPtrialDays()` при создании подписок:
$user->newSubscription('main', 'monthly') ->trialDays(10) ->create($creditCardToken);
Этот метод задаст дату окончания пробного периода в записи БД о подписке, а также проинструктирует Stripe/Braintree о том, что не нужно начинать считать оплату для клиента до окончания этого периода.
Если подписка клиента не будет отменена до окончания пробного периода, ему будет выставлен счёт, как только истечёт этот период, поэтому не забывайте уведомлять своих пользователей о дате окончания их пробного периода.
Вы можете определить, идёт ли до сих пор пробный период у пользователя, с помощью метода `PHPonTrial()` экземпляра пользователя или с помощью метода `PHPonTrial()` экземпляра подписки. Эти два примера выполняют одинаковую задачу:
```
if ($user->onTrial('main')) {
    //
}

if ($user->subscription('main')->onTrial()) {
    //
}
```
### Без запроса банковской карты
Если вы хотите предлагать клиентам пробные периоды, не собирая данные о способе оплаты, вы можете просто задать значение столбца trial_ends_at в записи пользователя равное дате окончания пробного периода. Это обычно делается во время регистрации пользователя:
```
$user = User::create([
    // Заполнение других свойств пользователя...
    'trial_ends_at' => Carbon::now()->addDays(10),
]);
```
Не забудьте добавить преобразователь дат для trial_ends_at в определении вашей модели.
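Например, преобразователь можно задать через свойство `$dates` модели (набросок):

```
class User extends Authenticatable
{
    // Возвращать trial_ends_at как экземпляр Carbon, а не строку.
    protected $dates = ['trial_ends_at'];
}
```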
Cashier относится к пробным периодам такого типа, как к «общим пробным периодам», поскольку они не прикреплены к какой-либо из существующих подписок. Метод `PHPonTrial()` на экземпляре User вернёт `PHPtrue` , если текущая дата не превышает значение trial_ends_at:
```
if ($user->onTrial()) {
    // Пользователь на пробном периоде...
}
```

Если вы хотите узнать, что пользователь именно на «общем» пробном периоде и ещё не создал настоящую подписку, используйте метод `onGenericTrial()`:
```
if ($user->onGenericTrial()) {
    // Пользователь на «общем» пробном периоде...
}
```

Когда вы готовы создать для пользователя настоящую подписку, используйте метод `newSubscription()` как обычно:
$user->newSubscription('main', 'monthly')->create($creditCardToken);
## Обработка веб-хуков Stripe
Stripe и Braintree могут уведомлять ваше приложение о различных событиях через веб-хуки. Для обработки веб-хуков Stripe определите маршрут для контроллера веб-хуков Cashier. Этот контроллер будет обрабатывать все входящие веб-хук запросы и отправлять их в нужный метод контроллера:
`Route::post(` 'stripe/webhook', '\Laravel\Cashier\Http\Controllers\WebhookController@handleWebhook' );
```
Route::post('stripe/webhook', '\Laravel\Cashier\WebhookController@handleWebhook');
```
После регистрации маршрута не забудьте настроить URL веб-хука в панели настроек Stripe.
По умолчанию этот контроллер будет автоматически обрабатывать отмену подписок, для которых произошло слишком много неудачных попыток оплаты (это число задаётся в настройках Stripe); но вскоре мы рассмотрим, что можно наследовать этот контроллер для обработки любых необходимых веб-хук событий.
Поскольку веб-хуки Stripe должны обходить CSRF-защиту Laravel, не забудьте добавить их URI в массив `$except` посредника VerifyCsrfToken:

```
protected $except = [
    'stripe/*',
];
```
Cashier автоматически обрабатывает отмену подписки при неудачной оплате, но если у вас есть дополнительные webhook-события Stripe, которые вы хотели бы обработать, просто унаследуйте контроллер WebhookController. Имена ваших методов должны соответствовать принятому в Cashier соглашению: методы должны начинаться с префикса handle, за которым следует имя обрабатываемого webhook-события Stripe в стиле «camelCase». Например, если вы хотите обработать веб-хук `invoice.payment_succeeded`, вы должны добавить метод `handleInvoicePaymentSucceeded()`
в контроллер: `<?php` namespace App\Http\Controllers; use Laravel\Cashier\Http\Controllers\WebhookController as CashierController; //для версии 5.1 и ранее: //use Laravel\Cashier\WebhookController as CashierController; class WebhookController extends CashierController { /** * Обработка веб-хука Stripe. * * @param array $payload * @return Response */ public function handleInvoicePaymentSucceeded($payload) { // Обработка события } }
Что если срок действия банковской карты клиента истёк? Никаких проблем — Cashier включает в себя контроллер `PHPWebhook` , который может легко отменить подписку клиента. Как уже было сказано, просто укажите путь к контроллеру: `Route::post(` 'stripe/webhook', '\Laravel\Cashier\Http\Controllers\WebhookController@handleWebhook' );
Вот и всё! Неудавшиеся платежи будут перехвачены и обработаны контроллером. Контроллер отменит подписку клиента, если Stripe определит, что подписка не удалась (обычно после трёх неудавшихся платежей).
## Обработка веб-хуков Braintree
Stripe и Braintree могут уведомлять ваше приложение о различных событиях через веб-хуки. Для обработки веб-хуков Braintree определите маршрут для контроллера веб-хуков Cashier. Этот контроллер будет обрабатывать все входящие веб-хук запросы и отправлять их в нужный метод контроллера:
`Route::post(` 'braintree/webhook', '\Laravel\Cashier\Http\Controllers\WebhookController@handleWebhook' );
По умолчанию этот контроллер будет автоматически обрабатывать отмену подписок, для которых произошло слишком много неудачных попыток оплаты (это число задаётся в настройках Braintree); но вскоре мы рассмотрим, что можно наследовать этот контроллер для обработки любых необходимых веб-хук событий.
Поскольку веб-хуки Braintree должны обходить CSRF-защиту Laravel, не забудьте добавить их URI в массив `$except` посредника VerifyCsrfToken:

```
protected $except = [
    'braintree/*',
];
```
После регистрации маршрута не забудьте настроить URI веб-хука в панели настроек Braintree.
Cashier автоматически обрабатывает отмену подписки при неудачной оплате, но если у вас есть дополнительные webhook-события Braintree, которые вы хотели бы обработать, просто унаследуйте контроллер WebhookController. Имена ваших методов должны соответствовать принятому в Cashier соглашению: методы должны начинаться с префикса handle, за которым следует имя обрабатываемого webhook-события Braintree в стиле «camelCase». Например, если вы хотите обработать веб-хук `dispute_opened`, вы должны добавить метод `handleDisputeOpened()`
в контроллер. `<?php` namespace App\Http\Controllers; use Braintree\WebhookNotification; use Laravel\Cashier\Http\Controllers\WebhookController as CashierController; class WebhookController extends CashierController { /** * Обработка веб-хука Braintree. * * @param WebhookNotification $webhook * @return Response */ public function handleDisputeOpened(WebhookNotification $notification) { // Обработка события } }
Что если срок действия банковской карты клиента истёк? Никаких проблем — Cashier включает в себя контроллер `PHPWebhook` , который может легко отменить подписку клиента. Как уже было сказано, просто укажите путь к контроллеру: `Route::post(` 'braintree/webhook', '\Laravel\Cashier\Http\Controllers\WebhookController@handleWebhook' );
Вот и всё! Неудавшиеся платежи будут перехвачены и обработаны контроллером. Контроллер отменит подписку клиента, если Braintree определит, что подписка не удалась (обычно после трёх неудавшихся платежей). Не забудьте настроить URL веб-хука в панели настроек Braintree.
## Одиночные платежи
### Простой платёж
Если вы хотите выполнить «разовое» списание с банковской карты подписанного пользователя, используйте метод `charge()` на экземпляре оплачиваемой модели. При использовании Stripe метод `charge()` принимает сумму в наименьших единицах используемой в вашем приложении валюты (например, в центах), а при использовании Braintree в метод `charge()` передаётся полная сумма в долларах.
$user->charge(100); // Braintree принимает сумму в долларах... $user->charge(1); Метод `PHPcharge()` принимает вторым аргументом массив, позволяя вам передавать любые необходимые параметры для создания основного Stripe/Braintree-платежа. О доступных при создании платежей опциях читайте в документации Stripe и Braintree: `$user->charge(100, [` 'custom_option' => $value, ]); Метод `PHPcharge()` выбросит исключение при ошибке платёжа. Если платёж пройдёт успешно, метод вернёт полный ответ Stripe/Braintree: `try {` $response = $user->charge(100); } catch (Exception $e) { // }
### Платёж со счётом
Иногда бывает необходимо сделать одноразовый платёж и сгенерировать счёт-фактуру для него, чтобы вы могли предоставить клиенту PDF-квитанцию. Именно для этого служит метод `PHPinvoiceFor()` . Например, давайте выставим клиенту счёт $5.00 «разовой оплаты» («One Time Fee»):
$user->invoiceFor('One Time Fee', 500); // Braintree принимает сумму в долларах... $user->invoiceFor('One Time Fee', 5); Счёт будет немедленно оплачен банковской картой пользователя. Метод `PHPinvoiceFor()` также принимает третьим аргументом массив, позволяя вам передавать любые параметры для создания платежа Stripe/Braintree:
```
$user->invoiceFor('One Time Fee', 500, [
```
'custom-option' => $value, ]); Метод `PHPinvoiceFor()` создаст счёт Stripe, который будет повторять проваленные попытки оплаты. Если вы не хотите повторять проваленные платежи, вам необходимо закрывать их с помощью Stripe API после первого неудачного платежа. Если вы хотите сделать «одноразовый» платёж вместо использования банковской карты подписанного пользователя, используйте метод `PHPcharge()` для экземпляра модели. Метод `PHPcharge()` принимает сумму, которую необходимо оплатить, с наименьшим знаменателем используемой в вашем приложении валюты. Например, в этом примере будет списано 100 центов, или $1, с банковской карты пользователя: `$user->charge(100);` Метод `PHPcharge()` принимает в качестве второго аргумента массив, позволяя вам передавать любые необходимые параметры для создания основного Stripe-платежа: `$user->charge(100, [` 'source' => $token, 'receipt_email' => $user->email, ]); Метод `PHPcharge()` вернёт false, если платёж не пройдёт. Обычно это значит, что платёж был отклонён:
```
if ( ! $user->charge(100)) {
```
// Платёж был отклонён... }
Если платёж прошёл успешно, метод возвратит полный Stripe-ответ.
### Без предоплаты
Если в вашем приложении будет бесплатный пробный период, не требующий предварительного предъявления банковской карты, установите свойство cardUpFront вашей модели в false:
```
protected $cardUpFront = false;
```
При создании аккаунта не забудьте установить дату окончания пробного периода в модели:
## Счета
Вы можете легко получить массив счетов пользователя, используя метод `PHPinvoices()` :
```
$invoices = $user->invoices();

// Включить отложенные счета в результаты...
$invoices = $user->invoicesIncludingPending();
```
При просмотре счетов клиента вы можете использовать эти вспомогательные методы, чтобы вывести на экран соответствующую информацию о счёте. Например, вам может понадобиться просмотреть каждый счёт в таблице, позволив пользователю легко скачать любой из них:
```
<table>
    @foreach ($invoices as $invoice)
        <tr>
            <td>{{ $invoice->date()->toFormattedDateString() }}</td>
            <td>{{ $invoice->total() }}</td>
            <td><a href="/user/invoice/{{ $invoice->id }}">Download</a></td>
        </tr>
    @endforeach
</table>
```

```
<table>
    @foreach ($invoices as $invoice)
        <tr>
            <td>{{ $invoice->dateString() }}</td>
            <td>{{ $invoice->dollars() }}</td>
            <td><a href="/user/invoice/{{ $invoice->id }}">Download</a></td>
        </tr>
    @endforeach
</table>
```

### Создание PDF-файлов счетов
Перед созданием PDF-файлов счетов вам необходимо установить PHP-библиотеку dompdf:
> composer require dompdf/dompdf
Затем используйте метод `downloadInvoice()` в маршруте или контроллере, чтобы сгенерировать PDF-файл счёта. Этот метод автоматически сгенерирует нужный HTTP-отклик, чтобы отправить файл на скачивание в браузер:
```
Route::get('user/invoice/{invoice}', function (Request $request, $invoiceId) {
    return $request->user()->downloadInvoice($invoiceId, [
        'vendor'  => 'Your Company',
        'product' => 'Your Product',
    ]);
});
```
# Кэш
Laravel предоставляет выразительный, универсальный API для различных систем кэширования. Настройки кэша находятся в файле config/cache.php. Здесь вы можете указать драйвер, используемый по умолчанию в вашем приложении. Laravel изначально поддерживает многие популярные системы, такие как Memcached и Redis.
Этот файл также содержит множество других настроек, которые в нём же документированы, поэтому обязательно ознакомьтесь с ними. По умолчанию, Laravel настроен для использования драйвера file, который хранит сериализованные объекты кэша в файловой системе. Для больших приложений рекомендуется использование более надёжных драйверов, таких как Memcached или Redis. Вы можете настроить даже несколько конфигураций кэширования для одного драйвера.
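Например, в массиве stores файла config/cache.php можно описать несколько хранилищ, использующих один и тот же драйвер (имена хранилищ и пути здесь условные):

```
'stores' => [

    'file' => [
        'driver' => 'file',
        'path' => storage_path('framework/cache'),
    ],

    'views' => [
        'driver' => 'file',
        'path' => storage_path('framework/views-cache'),
    ],

],
```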
### Необходимые условия для драйверов
Перед использованием драйвера database вам нужно создать таблицу для хранения элементов кэша. Ниже приведён пример объявления структуры Schema:
```
Schema::create('cache', function ($table) {
    $table->string('key')->unique();
    $table->text('value');
    $table->integer('expiration');
});
```
Для использования системы кэширования Memcached необходим установленный пакет Memcached PECL. Вы можете перечислить все свои сервера Memcached в файле настроек config/cache.php:
```
'memcached' => [
    [
        'host' => '127.0.0.1',
        'port' => 11211,
        'weight' => 100
    ],
],
```
Вы также можете задать параметр host для пути UNIX-сокета. В этом случае в параметр port следует записать значение 0:
```
'memcached' => [
    [
        'host' => '/var/run/memcached/memcached.sock',
        'port' => 0,
        'weight' => 100
    ],
],
```
Перед тем, как использовать систему Redis необходимо установить пакет predis/predis (~1.0) с помощью Composer.
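Например, установить пакет можно такой командой:

> composer require predis/predis "~1.0"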
Загляните в раздел по настройке Redis
## Использование кэша
### Получение экземпляра кэша
Контракты Illuminate\Contracts\Cache\Factory и Illuminate\Contracts\Cache\Repository предоставляют доступ к службам кэша Laravel. Контракт Factory предоставляет доступ ко всем драйверам кэша, определённым для вашего приложения. А контракт Repository обычно является реализацией драйвера кэша по умолчанию для вашего приложения, который задан в файле настроек cache.
А также вы можете использовать фасад Cache, который мы будем использовать в данной статье. Фасад Cache обеспечивает удобный и лаконичный способ доступа к лежащим в его основе реализациям контрактов кэша Laravel:
```
<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\Cache;
// для версии 5.2 и ранее:
// use Cache;

class UserController extends Controller
{
    /**
     * Вывод списка всех пользователей приложения.
     *
     * @return Response
     */
    public function index()
    {
        $value = Cache::get('key');

        //
    }
}
```

В этом примере для версии 5.1 и ранее использовался ещё один контроллер
— прим. пер.
Доступ к нескольким хранилищам кэша
Используя фасад Cache можно обращаться к разным хранилищам кэша с помощью метода `PHPstore()` . Передаваемый в метод ключ должен соответствовать одному из хранилищ, перечисленных в массиве stores в вашем файле настроек cache:
```
$value = Cache::store('file')->get('foo');

Cache::store('redis')->put('bar', 'baz', 10);
```
Для получения элементов из кэша используется метод `PHPget()` фасада Cache. Если элемента в кэше не существует, будет возвращён `PHPnull` . При желании вы можете указать другое возвращаемое значение, передав его вторым аргументом метода `PHPget()` :
```
$value = Cache::get('key');

$value = Cache::get('key', 'default');
```
А также вы можете передать замыкание в качестве значения по умолчанию. Тогда, если элемента не существует, будет возвращён результат замыкания. С помощью замыкания вы можете настроить получение значений по умолчанию из базы данных или внешнего сервиса:
```
$value = Cache::get('key', function () {
    return DB::table(...)->get();
});
```
Проверка существования элемента
Для определения существования элемента в кэше используется метод `PHPhas()` . Он вернёт `PHPfalse` , если значение равно `PHPnull` :
```
if (Cache::has('key')) {
    //
}
```
Увеличение/уменьшение значений
Для изменения числовых элементов кэша используются методы `PHPincrement()` и `PHPdecrement()` . Оба они могут принимать второй необязательный аргумент, определяющий значение, на которое нужно изменить значение элемента:
```
Cache::increment('key');

Cache::increment('key', $amount);

Cache::decrement('key');

Cache::decrement('key', $amount);
```

Иногда необходимо получить элемент из кэша и при этом сохранить значение по умолчанию, если запрашиваемый элемент не существует. Например, когда необходимо получить всех пользователей из кэша, а если они не существуют, получить их из базы данных и добавить в кэш. Это можно сделать с помощью метода `Cache::remember()`:
```
$value = Cache::remember('users', $minutes, function () {
    return DB::table('users')->get();
});
```

Если элемента нет в кэше, будет выполнено переданное в метод `remember()` замыкание, а его результат будет помещён в кэш.

При необходимости получить элемент и затем удалить его из кэша используется метод `pull()`. Как и метод `get()`, данный метод вернёт `null`, если элемента не существует:
```
$value = Cache::pull('key');
```
### Сохранение элементов в кэш
Для помещения элементов в кэш используется метод `PHPput()` фасада Cache. При помещении элемента в кэш необходимо указать, сколько минут его необходимо хранить:
```
Cache::put('key', 'value', $minutes);
```
Вместо указания количества минут, можно передать экземпляр PHP-типа `PHPDateTime` , для указания времени истечения срока хранения:
```
$expiresAt = Carbon::now()->addMinutes(10);

Cache::put('key', 'value', $expiresAt);
```

Метод `add()` просто добавит элемент в кэш, если его там ещё нет. Метод вернёт true, если элемент действительно будет добавлен в кэш. Иначе — false:
```
Cache::add('key', 'value', $minutes);
```
Для бесконечного хранения элемента кэша используется метод `PHPforever()` . Поскольку срок хранения таких элементов не истечёт никогда, они должны удаляться из кэша вручную с помощью метода `PHPforget()` :
```
Cache::forever('key', 'value');
```
При использовании драйвера Memcached элементы, сохранённые «навсегда», могут быть удалены, когда размер кэша достигнет своего лимита.
Можно удалить элементы из кэша с помощью метода `PHPforget()` :
```
Cache::forget('key');
```
Вы можете очистить весь кэш методом `flush()`:

```
Cache::flush();
```
Очистка не поддерживает префикс кэша и удаляет из него все элементы. Учитывайте это при очистке кэша, которым пользуются другие приложения.
### Вспомогательная функция cache()

Помимо использования фасада Cache или контракта кэша, вы также можете использовать глобальную функцию `cache()` для получения данных из кэша и помещения данных в него. При вызове функции `cache()` с одним строковым аргументом она вернёт значение данного ключа:
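Минимальный набросок (предполагается, что значение ранее было помещено в кэш):

```
$value = cache('key');
```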
Если вы передадите в функцию массив пар ключ/значение и время хранения, она сохранит значения в кэше на указанное время:
```
cache(['key' => 'value'], $minutes);

cache(['key' => 'value'], Carbon::now()->addSeconds(10));
```

При тестировании вызова глобальной функции `cache()` вы можете использовать метод `Cache::shouldReceive()`, как при тестировании фасада.
## Теги кэша
Теги кэша не поддерживаются драйверами file и database. Кроме того, при использовании нескольких тегов для кэшей, хранящихся «вечно», лучшая производительность будет достигнута при использовании такого драйвера как memcached, который автоматически зачищает устаревшие записи.
### Сохранение элементов с тегами
Теги кэша позволяют отмечать связанные элементы в кэше и затем сбрасывать все элементы, которые были отмечены одним тегом. Вы можете обращаться к кэшу с тегами, передавая упорядоченный массив имён тегов. Например, давайте обратимся к кэшу с тегами и поместим в него значение методом `put()`:
```
Cache::tags(['people', 'artists'])->put('John', $john, $minutes);

Cache::tags(['people', 'authors'])->put('Anne', $anne, $minutes);
```
### Обращение к элементам кэша с тегами
Для получения элемента с тегом передайте тот же упорядоченный список тегов в метод `PHPtags()` , а затем вызовите метод `PHPget()` с тем ключом, который необходимо получить:
```
$john = Cache::tags(['people', 'artists'])->get('John');

$anne = Cache::tags(['people', 'authors'])->get('Anne');
```
### Удаление элементов кэша с тегами
Вы можете очистить все элементы с заданным тегом или списком тегов. Например, следующий код удалит все элементы, отмеченные либо тегом people, либо authors, либо и тем и другим. Поэтому и Anne и John будут удалены из кэша:
```
Cache::tags(['people', 'authors'])->flush();
```
В отличие от предыдущего, следующий код удалит только те элементы, которые отмечены тегом authors, поэтому Anne будет удалён, а John — нет:
```
Cache::tags('authors')->flush();
```
## Добавление своих драйверов кэша
### Написание драйвера
Чтобы создать свой драйвер кэша, нам надо сначала реализовать контракт Illuminate\Contracts\Cache\Store. Итак, наша реализация кэша MongoDB будет выглядеть примерно так:
добавлено в 5.3 ()
```
<?php

namespace App\Extensions;

use Illuminate\Contracts\Cache\Store;

class MongoStore implements Store
{
    public function get($key) {}
    public function many(array $keys) {}
    public function put($key, $value, $minutes) {}
    public function putMany(array $values, $minutes) {}
    public function increment($key, $value = 1) {}
    public function decrement($key, $value = 1) {}
    public function forever($key, $value) {}
    public function forget($key) {}
    public function flush() {}
    public function getPrefix() {}
}
```

Для более ранних версий Laravel реализация выглядит так:

```
<?php

namespace App\Extensions;

class MongoStore implements \Illuminate\Contracts\Cache\Store
{
    public function get($key) {}
    public function put($key, $value, $minutes) {}
    public function increment($key, $value = 1) {}
    public function decrement($key, $value = 1) {}
    public function forever($key, $value) {}
    public function forget($key) {}
    public function flush() {}
    public function getPrefix() {}
}
```
Нам просто надо реализовать каждый из этих методов, используя соединение MongoDB. Примеры реализации каждого из этих методов можно найти в Illuminate\Cache\MemcachedStore в исходном коде фреймворка. Когда наша реализация готова, мы можем завершить регистрацию нашего драйвера:
```
Cache::extend('mongo', function ($app) {
    return Cache::repository(new MongoStore);
});
```
Если вы задумались о том, где разместить код вашего драйвера, то можете создать пространство имён Extensions в папке app. Но не забывайте, в Laravel нет жёсткой структуры приложения, и вы можете организовать его как пожелаете.
### Регистрация драйвера
Чтобы зарегистрировать свой драйвер кэша в Laravel, мы будем использовать метод `extend()` фасада Cache. Вызов `Cache::extend()` можно делать из метода `boot()` сервис-провайдера по умолчанию App\Providers\AppServiceProvider, который есть в каждом Laravel-приложении. Или вы можете создать свой сервис-провайдер для размещения расширения — только не забудьте зарегистрировать его в массиве провайдеров config/app.php:

```
<?php

namespace App\Providers;

use App\Extensions\MongoStore;
use Illuminate\Support\Facades\Cache;
// для версии 5.2 и ранее:
// use Cache;
use Illuminate\Support\ServiceProvider;

class CacheServiceProvider extends ServiceProvider
{
    /**
     * Выполнение после-регистрационной загрузки сервисов.
     *
     * @return void
     */
    public function boot()
    {
        Cache::extend('mongo', function ($app) {
            return Cache::repository(new MongoStore);
        });
    }

    /**
     * Регистрация привязок в контейнере.
     *
     * @return void
     */
    public function register()
    {
        //
    }
}
```

Первый аргумент метода `extend()` — имя драйвера. Оно будет соответствовать параметру driver в вашем файле настроек config/cache.php. Второй аргумент — замыкание, которое должно возвращать экземпляр Illuminate\Cache\Repository. В замыкание будет передан экземпляр `$app`, который является экземпляром сервис-контейнера.
Когда ваше расширение зарегистрировано, просто укажите его имя в качестве значения параметра driver в файле настроек config/cache.php.
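Примерный набросок такой настройки (имя хранилища условное):

```
'stores' => [

    'mongo' => [
        'driver' => 'mongo',
    ],

],
```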
## События кэша
Для выполнения какого-либо кода при каждой операции с кэшем вы можете прослушивать события, инициируемые кэшем. Обычно вам необходимо поместить эти слушатели событий в ваш EventServiceProvider:
```
/**
 * Отображения слушателей событий для приложения.
 *
 * @var array
 */
protected $listen = [
    'Illuminate\Cache\Events\CacheHit' => [
        'App\Listeners\LogCacheHit',
    ],

    'Illuminate\Cache\Events\CacheMissed' => [
        'App\Listeners\LogCacheMissed',
    ],

    'Illuminate\Cache\Events\KeyForgotten' => [
        'App\Listeners\LogKeyForgotten',
    ],

    'Illuminate\Cache\Events\KeyWritten' => [
        'App\Listeners\LogKeyWritten',
    ],
];
```

В более ранних версиях слушатели событий кэша помещаются в метод `boot()` вашего EventServiceProvider:

```
/**
 * Регистрация любых других событий для вашего приложения.
 *
 * @param  \Illuminate\Contracts\Events\Dispatcher  $events
 * @return void
 */
public function boot(DispatcherContract $events)
{
    parent::boot($events);

    $events->listen('cache.hit', function ($key, $value) {
        //
    });

    $events->listen('cache.missed', function ($key) {
        //
    });

    $events->listen('cache.write', function ($key, $value, $minutes) {
        //
    });

    $events->listen('cache.delete', function ($key) {
        //
    });
}
```
# Коллекции

Класс Illuminate\Support\Collection предоставляет гибкую и удобную обёртку для работы с массивами данных. Например, посмотрите на следующий код. Мы используем вспомогательную функцию `collect()`, чтобы создать новый экземпляр коллекции из массива, выполним функцию `strtoupper()` для каждого элемента, а затем удалим все пустые элементы:
```
$collection = collect(['taylor', 'abigail', null])->map(function ($name) {
    return strtoupper($name);
})
->reject(function ($name) {
    return empty($name);
});
```
Как видите, класс Collection позволяет использовать свои методы в связке для гибкого отображения и уменьшения исходного массива. В общем, коллекции «неизменны», то есть каждый метод класса Collection возвращает совершенно новый экземпляр Collection.
### Создание коллекций
Как упоминалось выше, вспомогательная функция `PHPcollect()` возвращает новый экземпляр класса Illuminate\Support\Collection для заданного массива. Поэтому создать коллекцию очень просто:
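Минимальный пример:

```
$collection = collect([1, 2, 3]);
```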
Результаты запросов Eloquent всегда возвращаются в виде экземпляров класса Collection.
В остальной части данной документации, мы будем обсуждать каждый метод, доступный в классе Collection. Помните, все эти методы могут использоваться в связке для гибкого управления заданным массивом. Кроме того, почти каждый метод возвращает новый экземпляр класса Collection, позволяя вам при необходимости сохранить оригинал коллекции:
all, avg, chunk, collapse, combine, contains, count, diff, diffKeys, each, every, except, filter, first, flatMap, flatten, flip, forget, forPage, get, groupBy, has, implode, intersect, isEmpty, keyBy, keys, last, map, mapWithKeys, max, merge, min, only, pipe, pluck, pop, prepend, pull, push, put, random, reduce, reject, reverse, search, shift, shuffle, slice, sort, sortBy, sortByDesc, splice, split, sum, take, toArray, toJson, transform, union, unique, values, where, whereStrict, whereLoose, whereIn, whereInStrict, whereInLoose, zip
## Список методов
Метод `PHPall()` возвращает заданный массив, представленный коллекцией:
```
collect([1, 2, 3])->all();

// [1, 2, 3]
```

Метод `avg()` возвращает среднее значение всех элементов в коллекции:
```
collect([1, 2, 3, 4, 5])->avg();
```
// 3
Если коллекция содержит вложенные массивы или объекты, то вы должны передать ключ, чтобы определить, среднее значение каких значений необходимо вычислить:
```
$collection = collect([
    ['name' => 'JavaScript: The Good Parts', 'pages' => 176],
    ['name' => 'JavaScript: The Definitive Guide', 'pages' => 1096],
]);

$collection->avg('pages');

// 636
```

Метод `chunk()` разбивает коллекцию на множество мелких коллекций заданного размера:
```
$collection = collect([1, 2, 3, 4, 5, 6, 7]);

$chunks = $collection->chunk(4);

$chunks->toArray();

// [[1, 2, 3, 4], [5, 6, 7]]
```
Этот метод особенно полезен в представлениях при работе с системой сеток, такой как Bootstrap. Представьте, что у вас есть коллекция моделей Eloquent, которую вы хотите отобразить в сетке:
```
@foreach ($products->chunk(3) as $chunk)
    <div class="row">
        @foreach ($chunk as $product)
            <div class="col-xs-4">{{ $product->name }}</div>
        @endforeach
    </div>
@endforeach
```

Метод `collapse()` сворачивает коллекцию массивов в одну одномерную коллекцию:
```
$collection = collect([[1, 2, 3], [4, 5, 6], [7, 8, 9]]);

$collapsed = $collection->collapse();

$collapsed->all();

// [1, 2, 3, 4, 5, 6, 7, 8, 9]
```
добавлено в 5.2 ()
Метод `PHPcontains()` определяет, содержит ли коллекция заданное значение:
```
$collection = collect(['name' => 'Desk', 'price' => 100]);

$collection->contains('Desk');

// true

$collection->contains('New York');

// false
```

Также вы можете передать пару ключ/значение в метод `contains()`, чтобы определить, существует ли заданная пара в коллекции:
```
$collection = collect([
    ['product' => 'Desk', 'price' => 200],
    ['product' => 'Chair', 'price' => 100],
]);

$collection->contains('product', 'Bookcase');

// false
```

Напоследок, вы можете передать функцию обратного вызова в метод `contains()` для выполнения своих собственных условий:
```
$collection->contains(function ($value, $key) {
    return $value > 5;
});

// false
```

Метод `count()` возвращает общее количество элементов в коллекции:
```
$collection->count();

// 4
```

Метод `diff()` сравнивает одну коллекцию с другой коллекцией или с простым PHP-массивом `array` на основе их значений. Этот метод вернёт те значения исходной коллекции, которых нет в переданной для сравнения коллекции:
```
$diff = $collection->diff([2, 4, 6, 8]);

$diff->all();

// [1, 3, 5]
```

Метод `diffKeys()` сравнивает одну коллекцию с другой коллекцией или с простым PHP-массивом `array` на основе их ключей. Этот метод вернёт те пары ключ/значение из исходной коллекции, которых нет в переданной для сравнения коллекции:
```
$collection = collect([
    'one' => 10,
    'two' => 20,
    'three' => 30,
    'four' => 40,
    'five' => 50,
]);

$diff = $collection->diffKeys([
    'two' => 2,
    'four' => 4,
    'six' => 6,
    'eight' => 8,
]);

$diff->all();

// ['one' => 10, 'three' => 30, 'five' => 50]
```

Метод `each()` перебирает элементы в коллекции и передаёт каждый элемент в функцию обратного вызова:

```
$collection->each(function ($item, $key) {
    //
});
```
Верните false из функции обратного вызова, чтобы выйти из цикла:
```
$collection->each(function ($item, $key) {
    if (/* ваше условие */) {
        return false;
    }
});
```

Метод `every()` создаёт новую коллекцию, состоящую из каждого n-го элемента:
```
$collection = collect(['a', 'b', 'c', 'd', 'e', 'f']);

$collection->every(4);

// ['a', 'e']
```
Вы можете дополнительно передать смещение элементов вторым параметром:
```
$collection->every(4, 1);

// ['b', 'f']
```

Метод `except()` возвращает все элементы коллекции, кроме тех, чьи ключи указаны в передаваемом массиве:
```
$filtered = $collection->except(['price', 'discount']);

$filtered->all();

// ['product_id' => 1]
```

Метод only() — инверсный методу `except()`.

Метод `filter()` фильтрует коллекцию с помощью переданной функции обратного вызова, оставляя только те элементы, которые соответствуют заданному условию:
добавлено в 5.2 ()
$filtered = $collection->filter(function ($value, $key) { return $value > 2; }); $filtered->all(); // [3, 4]
$filtered = $collection->filter(function ($item) { return $item > 2; }); $filtered->all(); // [3, 4]
добавлено в 5.3 ()
Метод reject() — инверсный методу `PHPfilter()` . Метод `PHPfirst()` возвращает первый элемент в коллекции, который подходит под заданное условие:
```
collect([1, 2, 3, 4])->first(function ($value, $key) {
    return $value > 2;
});

// 3
```

Также вы можете вызвать метод `first()` без параметров, чтобы получить первый элемент в коллекции. Если коллекция пуста, то вернётся `null`:
```
collect([1, 2, 3, 4])->first();

// 1
```

Метод `flatMap()` проходит по коллекции и передаёт каждое значение в заданную функцию обратного вызова. Эта функция может изменить элемент и вернуть его, формируя таким образом новую коллекцию модифицированных элементов. Затем массив «сплющивается» в одномерный:
```
$collection = collect([
    ['name' => 'Sally'],
    ['school' => 'Arkansas'],
    ['age' => 28]
]);

$flattened = $collection->flatMap(function ($values) {
    return array_map('strtoupper', $values);
});

$flattened->all();

// ['name' => 'SALLY', 'school' => 'ARKANSAS', 'age' => '28'];
```

Метод `flatten()` преобразует многомерную коллекцию в одномерную:
```
$collection = collect(['name' => 'taylor', 'languages' => ['php', 'javascript']]);
```
```
$flattened = $collection->flatten();

$flattened->all();

// ['taylor', 'php', 'javascript'];
```
добавлено в 5.2 ()
При необходимости вы можете передать в метод аргумент «глубины»:
'Apple' => [ ['name' => 'iPhone 6S', 'brand' => 'Apple'], ], 'Samsung' => [ ['name' => 'Galaxy S7', 'brand' => 'Samsung'] ], ]); $products = $collection->flatten(1); $products->values()->all(); /* [ ['name' => 'iPhone 6S', 'brand' => 'Apple'], ['name' => 'Galaxy S7', 'brand' => 'Samsung'], ] */ Если вызвать `PHPflatten()` без указания глубины, то вложенные массивы тоже «расплющатся», и получим ['iPhone 6S', 'Apple', 'Galaxy S7', 'Samsung']. Глубина задаёт уровень вложенности массивов, ниже которого «расплющивать» не нужно. Метод `PHPflip()` меняет местами ключи и значения в коллекции:
$flipped = $collection->flip(); $flipped->all(); // ['taylor' => 'name', 'laravel' => 'framework'] Метод `PHPforget()` удаляет элемент из коллекции по его ключу:
$collection->forget('name'); $collection->all(); // ['framework' => 'laravel'] В отличие от большинства других методов коллекции, `PHPforget()` не возвращает новую модифицированную коллекцию. Он изменяет коллекцию при вызове. Метод `PHPforPage()` возвращает новую коллекцию, содержащую элементы, которые будут присутствовать на странице с заданным номером. Первый аргумент метода — номер страницы, второй аргумент — число элементов для вывода на странице:
```
$collection = collect([1, 2, 3, 4, 5, 6, 7, 8, 9]);
```
$chunk = $collection->forPage(2, 3); $chunk->all(); // [4, 5, 6] Метод `PHPget()` возвращает нужный элемент по заданному ключу. Если ключ не существует, то возвращается `PHPnull` :
$value = $collection->get('name'); // taylor
Вторым параметром вы можете передать значение по умолчанию:
$value = $collection->get('foo', 'default-value'); // default-value
Вы даже можете передать функцию обратного вызова в качестве значения по умолчанию. Результат функции обратного вызова будет возвращён, если указанный ключ не существует:
```
$collection->get('email', function () {
```
return 'default-value'; }); // default-value Метод `PHPgroupBy()` группирует элементы коллекции по заданному ключу:
['account_id' => 'account-x10', 'product' => 'Chair'], ['account_id' => 'account-x10', 'product' => 'Bookcase'], ['account_id' => 'account-x11', 'product' => 'Desk'], ]); $grouped = $collection->groupBy('account_id'); $grouped->toArray(); /* [ 'account-x10' => [ ['account_id' => 'account-x10', 'product' => 'Chair'], ['account_id' => 'account-x10', 'product' => 'Bookcase'], ], 'account-x11' => [ ['account_id' => 'account-x11', 'product' => 'Desk'], ], ] */
В дополнение к передаваемой строке key, вы можете также передать функцию обратного вызова. Она должна возвращать значение, по которому вы хотите группировать:
```
$grouped = $collection->groupBy(function ($item, $key) {
```
return substr($item['account_id'], -3); }); $grouped->toArray(); /* [ 'x10' => [ ['account_id' => 'account-x10', 'product' => 'Chair'], ['account_id' => 'account-x10', 'product' => 'Bookcase'], ], 'x11' => [ ['account_id' => 'account-x11', 'product' => 'Desk'], ], ] */ Метод `PHPhas()` определяет, существует ли заданный ключ в коллекции:
```
$collection = collect(['account_id' => 1, 'product' => 'Desk']);
```
$collection->has('product'); // true Метод `PHPimplode()` соединяет элементы в коллекции. Его параметры зависят от типа элементов в коллекции. Если коллекция содержит массивы или объекты, вы должны передать ключ атрибутов, значения которых вы хотите соединить, и «промежуточную» строку, которую вы хотите поместить между значениями:
['account_id' => 1, 'product' => 'Desk'], ['account_id' => 2, 'product' => 'Chair'], ]); $collection->implode('product', ', '); // Desk, Chair
Если коллекция содержит простые строки или числовые значения, просто передайте только «промежуточный» параметр в метод:
```
collect([1, 2, 3, 4, 5])->implode('-');
```
// '1-2-3-4-5 Метод `PHPIntersect()` удаляет любые значения из исходной коллекции, которых нет в переданном массиве или коллекции. Результирующая коллекция сохранит ключи оригинальной коллекции:
```
$collection = collect(['Desk', 'Sofa', 'Chair']);
```
$intersect = $collection->intersect(['Desk', 'Chair', 'Bookcase']); $intersect->all(); // [0 => 'Desk', 2 => 'Chair'] Метод `PHPisEmpty()` возвращает true, если коллекция пуста. В противном случае вернётся false:
```
collect([])->isEmpty();
```
// true Метод `PHPkeyBy()` возвращает коллекцию по указанному ключу. Если несколько элементов имеют одинаковый ключ, в результирующей коллекции появится только последний их них:
['product_id' => 'prod-100', 'name' => 'desk'], ['product_id' => 'prod-200', 'name' => 'chair'], ]); $keyed = $collection->keyBy('product_id'); $keyed->all(); /* [ 'prod-100' => ['product_id' => 'prod-100', 'name' => 'Desk'], 'prod-200' => ['product_id' => 'prod-200', 'name' => 'Chair'], ] */
Также вы можете передать в метод функцию обратного вызова, которая должна возвращать значение ключа коллекции для этого метода:
```
$keyed = $collection->keyBy(function ($item) {
```
return strtoupper($item['product_id']); }); $keyed->all(); /* [ 'PROD-100' => ['product_id' => 'prod-100', 'name' => 'Desk'], 'PROD-200' => ['product_id' => 'prod-200', 'name' => 'Chair'], ] */ keys()Метод `PHPkeys()` возвращает все ключи коллекции:
'prod-100' => ['product_id' => 'prod-100', 'name' => 'Desk'], 'prod-200' => ['product_id' => 'prod-200', 'name' => 'Chair'], ]); $keys = $collection->keys(); $keys->all(); // ['prod-100', 'prod-200'] Метод `PHPlast()` возвращает последний элемент в коллекции, для которого выполняется заданное условие:
```
collect([1, 2, 3, 4])->last(function ($value, $key) {
```
return $value < 3; }); // 2 Также вы можете вызвать метод `PHPlast()` без параметров, чтобы получить последний элемент в коллекции. Если коллекция пуста, то вернётся `PHPnull` :
```
collect([1, 2, 3, 4])->last();
```
// 4 Метод `PHPmap()` перебирает коллекцию и передаёт каждое значению в функцию обратного вызова. Функция обратного вызова может свободно изменять элемент и возвращать его, формируя тем самым новую коллекцию измененных элементов:
$multiplied = $collection->map(function ($item, $key) { return $item * 2; }); $multiplied->all(); // [2, 4, 6, 8, 10] Как и большинство других методов коллекции, метод `PHPmap()` возвращает новый экземпляр коллекции. Он не изменяет коллекцию при вызове. Если вы хотите преобразовать оригинальную коллекцию, используйте метод transform().
добавлено в 5.3 ()
Метод `PHPmapWithKeys()` проходит по элементам коллекции и передаёт каждое значение в функцию обратного вызова, которая должна вернуть ассоциативный массив, содержащий одну пару ключ/значение:
[ 'name' => 'John', 'department' => 'Sales', 'email' => '<EMAIL>' ], [ 'name' => 'Jane', 'department' => 'Marketing', 'email' => '<EMAIL>' ] ]); $keyed = $collection->mapWithKeys(function ($item) { return [$item['email'] => $item['name']]; }); $keyed->all(); /* [ '<EMAIL>' => 'John', '<EMAIL>' => 'Jane', ] */ Метод `PHPmax()` возвращает максимальное значение по заданному ключу:
```
$max = collect([['foo' => 10], ['foo' => 20]])->max('foo');
```
// 20 $max = collect([1, 2, 3, 4, 5])->max(); // 5 Метод `PHPmerge()` добавляет указанный массив в исходную коллекцию. Значения исходной коллекции, имеющие тот же строковый ключ, что и значение в массиве, будут перезаписаны:
```
$collection = collect(['product_id' => 1, 'price' => 100]);
```
$merged = $collection->merge(['price' => 200, 'discount' => false]); $merged->all(); // ['product_id' => 1, 'price' => 200, 'discount' => false]
Если заданные ключи в массиве числовые, то значения будут добавляться в конец коллекции:
```
$collection = collect(['Desk', 'Chair']);
```
$merged = $collection->merge(['Bookcase', 'Door']); $merged->all(); // ['Desk', 'Chair', 'Bookcase', 'Door'] Метод `PHPmin()` возвращает минимальное значение по заданному ключу:
```
$min = collect([['foo' => 10], ['foo' => 20]])->min('foo');
```
// 10 $min = collect([1, 2, 3, 4, 5])->min(); // 1 Метод `PHPonly()` возвращает элементы коллекции с заданными ключами:
$filtered = $collection->only(['product_id', 'name']); $filtered->all(); // ['product_id' => 1, 'name' => 'Desk'] Метод except() — инверсный для метода `PHPonly()` .
добавлено в 5.3 ()
Метод `PHPpluck()` извлекает все значения по заданному ключу:
['product_id' => 'prod-100', 'name' => 'Desk'], ['product_id' => 'prod-200', 'name' => 'Chair'], ]); $plucked = $collection->pluck('name'); $plucked->all(); // ['Desk', 'Chair']
Также вы можете указать, с каким ключом вы хотите получить коллекцию:
```
$plucked = $collection->pluck('name', 'product_id');
```
$plucked->all(); // ['prod-100' => 'Desk', 'prod-200' => 'Chair'] Метод `PHPpop()` удаляет и возвращает последний элемент из коллекции:
$collection->pop(); // 5 $collection->all(); // [1, 2, 3, 4] Метод `PHPprepend()` добавляет элемент в начало коллекции:
$collection->prepend(0); $collection->all(); // [0, 1, 2, 3, 4, 5]
Вторым аргументом вы можете передать ключ добавляемого элемента:

```
$collection = collect(['one' => 1, 'two' => 2]);

$collection->prepend(0, 'zero');

$collection->all();

// ['zero' => 0, 'one' => 1, 'two' => 2]
```

Метод `pull()` удаляет и возвращает элемент из коллекции по его ключу:
$collection->pull('name'); // 'Desk' $collection->all(); // ['product_id' => 'prod-100'] Метод `PHPpush()` добавляет элемент в конец коллекции:
$collection->push(5); $collection->all(); // [1, 2, 3, 4, 5] Метод `PHPput()` устанавливает заданный ключ и значение в коллекцию:
$collection->put('price', 100); $collection->all(); // ['product_id' => 1, 'name' => 'Desk', 'price' => 100] Метод `PHPrandom()` возвращает случайный элемент из коллекции:
$collection->random(); // 4 - (получен в случайном порядке) Также вы можете передать целое число в `PHPrandom()` , чтобы указать, сколько случайных элементов необходимо получить. Если это число больше, чем 1, то возвращается коллекция элементов:
```
$random = $collection->random(3);
```
$random->all(); // [2, 4, 5] - (получены в случайном порядке) Метод `PHPreduce()` уменьшает коллекцию до одного значения, передавая результат каждой итерации в последующую итерацию:
$total = $collection->reduce(function ($carry, $item) { return $carry + $item; }); // 6 Значение для `PHP$carry` в первой итерации — null. Тем не менее, вы можете указать его начальное значение во втором параметре метода `PHPreduce()` :
```
$collection->reduce(function ($carry, $item) {
```
return $carry + $item; }, 4); // 10 Метод `PHPreject()` фильтрует коллекцию, используя заданную функцию обратного вызова. Функция обратного вызова должна возвращать true для элементов, которые необходимо удалить из результирующей коллекции:
добавлено в 5.2 ()
$filtered = $collection->reject(function ($value, $key) { return $value > 2; }); $filtered->all(); // [1, 2]
$filtered = $collection->reject(function ($item) { return $item > 2; }); $filtered->all(); // [1, 2] Метод filter() — инверсный для метода `PHPreject()` . Метод `PHPreverse()` меняет порядок элементов коллекции:
$reversed = $collection->reverse(); $reversed->all(); // [5, 4, 3, 2, 1] Метод `PHPsearch()` ищет в коллекции заданное значение и возвращает его ключ при успешном поиске. Если элемент не найден, то возвращается false.
```
$collection = collect([2, 4, 6, 8]);
```
$collection->search(4); // 1
Поиск проводится с помощью «неточного» сравнения, то есть строка с числовым значением будет считаться равной числу с таким же значением. Чтобы использовать строгое сравнение, передайте true вторым параметром метода:
```
$collection->search('4', true);
```
// false
В качестве альтернативы, вы можете передать свою собственную функцию обратного вызова для поиска первого элемента, для которого выполняется ваше условие:
```
$collection->search(function ($item, $key) {
```
return $item > 5; }); // 2 Метод `PHPshift()` удаляет и возвращает первый элемент из коллекции:
$collection->shift(); // 1 $collection->all(); // [2, 3, 4, 5] Метод `PHPshuffle()` перемешивает элементы в коллекции случайным образом:
$shuffled = $collection->shuffle(); $shuffled->all(); // [3, 2, 5, 1, 4] // (generated randomly) Метод `PHPslice()` возвращает часть коллекции, начиная с заданного индекса:
```
$collection = collect([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
```
$slice = $collection->slice(4); $slice->all(); // [5, 6, 7, 8, 9, 10]
Если вы хотите ограничить размер получаемой части коллекции, передайте желаемый размер вторым параметром в метод:
```
$slice = $collection->slice(4, 2);
```
$slice->all(); // [5, 6]
добавлено в 5.2 ()
Метод `PHPsort()` сортирует коллекцию. Отсортированная коллекция сохраняет оригинальные ключи массива, поэтому в этом примере мы используем метод values() для сброса ключей и последовательной нумерации индексов:
$sorted = $collection->sort(); $sorted->values()->all(); // [1, 2, 3, 4, 5] Если вам необходимо отсортировать коллекцию с дополнительными условиями, вы можете передать функцию обратного вызова в метод `PHPsort()` с вашим собственным алгоритмом. Найдите в документации по PHP метод usort, который вызывается внутри метода `PHPsort()` вашей коллекции.
Для сортировки коллекции вложенных массивов или объектов, смотрите методы sortBy() и sortByDesc().
Метод `PHPsortBy()` сортирует коллекцию по заданному ключу. Отсортированная коллекция сохраняет оригинальные ключи массива, поэтому в этом примере мы используем метод values() для сброса ключей и последовательной нумерации индексов:
['name' => 'Desk', 'price' => 200], ['name' => 'Chair', 'price' => 100], ['name' => 'Bookcase', 'price' => 150], ]); $sorted = $collection->sortBy('price'); $sorted->values()->all(); /* [ ['name' => 'Chair', 'price' => 100], ['name' => 'Bookcase', 'price' => 150], ['name' => 'Desk', 'price' => 200], ] */
Также вы можете передать свою собственную функцию обратного вызова, чтобы определить, как сортировать значения коллекции:
['name' => 'Desk', 'colors' => ['Black', 'Mahogany']], ['name' => 'Chair', 'colors' => ['Black']], ['name' => 'Bookcase', 'colors' => ['Red', 'Beige', 'Brown']], ]); $sorted = $collection->sortBy(function ($product, $key) { return count($product['colors']); }); $sorted->values()->all(); /* [ ['name' => 'Chair', 'colors' => ['Black']], ['name' => 'Desk', 'colors' => ['Black', 'Mahogany']], ['name' => 'Bookcase', 'colors' => ['Red', 'Beige', 'Brown']], ] */
Этот метод использует такую же сигнатуру, что и sortBy(), но будет сортировать коллекцию в обратном порядке.
Метод `PHPsplice()` удаляет и возвращает часть элементов, начиная с заданного индекса:
$chunk = $collection->splice(2); $chunk->all(); // [3, 4, 5] $collection->all(); // [1, 2]
Вы можете передать второй параметр в метод для ограничения размера возвращаемой части коллекции:
$chunk = $collection->splice(2, 1); $chunk->all(); // [3] $collection->all(); // [1, 2, 4, 5]
Также вы можете передать в метод третий параметр, содержащий новые элементы, чтобы заменить элементы, которые будут удалены из коллекции:
$chunk = $collection->splice(2, 1, [10, 11]); $chunk->all(); // [3] $collection->all(); // [1, 2, 10, 11, 4, 5]
добавлено в 5.3 ()
Метод `PHPsum()` возвращает сумму всех элементов в коллекции:
```
collect([1, 2, 3, 4, 5])->sum();
```
// 15
Если коллекция содержит вложенные массивы или объекты, вам нужно передать ключ для определения значений, которые нужно суммировать:
['name' => 'JavaScript: The Good Parts', 'pages' => 176], ['name' => 'JavaScript: The Definitive Guide', 'pages' => 1096], ]); $collection->sum('pages'); // 1272
Также вы можете передать свою собственную функцию обратного вызова, чтобы определить, какие значения коллекции суммировать:
['name' => 'Chair', 'colors' => ['Black']], ['name' => 'Desk', 'colors' => ['Black', 'Mahogany']], ['name' => 'Bookcase', 'colors' => ['Red', 'Beige', 'Brown']], ]); $collection->sum(function ($product) { return count($product['colors']); }); // 6 Метод `PHPtake()` возвращает новую коллекцию с заданным числом элементов:
```
$collection = collect([0, 1, 2, 3, 4, 5]);

$chunk = $collection->take(3);

$chunk->all();

// [0, 1, 2]
```
Также вы можете передать отрицательное целое число, чтобы получить определенное количество элементов с конца коллекции:
$chunk = $collection->take(-2); $chunk->all(); // [4, 5] Метод `PHPtoArray()` преобразует коллекцию в простой PHP `PHParray` . Если значения коллекции являются моделями Eloquent, то модели также будут преобразованы в массивы:
$collection->toArray(); /* [ ['name' => 'Desk', 'price' => 200], ] */ Метод `PHPtoArray()` также преобразует все вложенные объекты коллекции в массив. Если вы хотите получить базовый массив, используйте вместо этого метод all(). Метод `PHPtoJson()` преобразует коллекцию в JSON:
$collection->toJson(); // '{"name":"Desk", "price":200}' Метод `PHPtransform()` перебирает коллекцию и вызывает заданную функцию обратного вызова для каждого элемента коллекции. Элементы коллекции будут заменены на значения, полученный из функции обратного вызова:
$collection->transform(function ($item, $key) { return $item * 2; }); $collection->all(); // [2, 4, 6, 8, 10] В отличие от большинства других методов коллекции, `PHPtransform()` изменяет саму коллекцию. Если вместо этого вы хотите создать новую коллекцию, используйте метод map(). Метод `PHPunion()` добавляет данный массив в коллекцию. Если массив содержит ключи, которые уже есть в исходной коллекции, то будут оставлены значения исходной коллекции:
```
$collection = collect([1 => ['a'], 2 => ['b']]);
```
```
$union = $collection->union([3 => ['c'], 1 => ['b']]);

$union->all();

// [1 => ['a'], 2 => ['b'], 3 => ['c']]
```

Метод `unique()` возвращает все уникальные элементы в коллекции. Полученная коллекция сохраняет оригинальные ключи массива, поэтому в этом примере мы используем метод values() для сброса ключей и последовательной нумерации индексов:
```
$collection = collect([1, 1, 2, 2, 3, 4, 2]);
```
$unique = $collection->unique(); $unique->values()->all(); // [1, 2, 3, 4]
Имея дело со вложенными массивами или объектами, вы можете задать ключ, используемый для определения уникальности:
['name' => 'iPhone 6', 'brand' => 'Apple', 'type' => 'phone'], ['name' => 'iPhone 5', 'brand' => 'Apple', 'type' => 'phone'], ['name' => 'Apple Watch', 'brand' => 'Apple', 'type' => 'watch'], ['name' => 'Galaxy S6', 'brand' => 'Samsung', 'type' => 'phone'], ['name' => 'Galaxy Gear', 'brand' => 'Samsung', 'type' => 'watch'], ]); $unique = $collection->unique('brand'); $unique->values()->all(); /* [ ['name' => 'iPhone 6', 'brand' => 'Apple', 'type' => 'phone'], ['name' => 'Galaxy S6', 'brand' => 'Samsung', 'type' => 'phone'], ] */
Также вы можете передать свою собственную функцию обратного вызова, чтобы определять уникальность элементов:
```
$unique = $collection->unique(function ($item) {
```
return $item['brand'].$item['type']; }); $unique->values()->all(); /* [ ['name' => 'iPhone 6', 'brand' => 'Apple', 'type' => 'phone'], ['name' => 'Apple Watch', 'brand' => 'Apple', 'type' => 'watch'], ['name' => 'Galaxy S6', 'brand' => 'Samsung', 'type' => 'phone'], ['name' => 'Galaxy Gear', 'brand' => 'Samsung', 'type' => 'watch'], ] */ Метод `PHPvalues()` возвращает новую коллекцию со сброшенными ключами и последовательно пронумерованными индексами:
10 => ['product' => 'Desk', 'price' => 200], 11 => ['product' => 'Desk', 'price' => 200] ]); $values = $collection->values(); $values->all(); /* [ 0 => ['product' => 'Desk', 'price' => 200], 1 => ['product' => 'Desk', 'price' => 200], ] */ Метод `PHPwhere()` фильтрует коллекцию по заданной паре ключ/значение:
['product' => 'Desk', 'price' => 200], ['product' => 'Chair', 'price' => 100], ['product' => 'Bookcase', 'price' => 150], ['product' => 'Door', 'price' => 100], ]); $filtered = $collection->where('price', 100); $filtered->all(); /* [ ['product' => 'Chair', 'price' => 100], ['product' => 'Door', 'price' => 100], ] */
добавлено в 5.3 ()
Метод `PHPwhere()` использует «неточное» сравнение при проверке значений элементов. Используйте метод whereStrict() для фильтрации с использованием строгого сравнения. Этот метод имеет такую же сигнатуру, что и метод `PHPwhere()` . Однако, все значения сравниваются с использованием строгого сравнения.
добавлено в 5.2 () 5.1 () 5.0 ()
Метод `PHPwhere()` использует строгое сравнение при проверке значений элементов. Используйте метод whereLoose() для фильтрации с использованием «неточного» сравнения. Этот метод имеет такую же сигнатуру, что и метод `PHPwhere()` . Однако, все значения сравниваются с использованием «неточного» сравнения. Метод `PHPwhereIn()` фильтрует коллекцию по заданным ключу/значению, содержащимся в данном массиве.
['product' => 'Desk', 'price' => 200], ['product' => 'Chair', 'price' => 100], ['product' => 'Bookcase', 'price' => 150], ['product' => 'Door', 'price' => 100], ]); $filtered = $collection->whereIn('price', [150, 200]); $filtered->all(); /* [ ['product' => 'Bookcase', 'price' => 150], ['product' => 'Desk', 'price' => 200], ] */
добавлено в 5.3 ()
Метод `PHPwhereIn()` использует «неточное» сравнение при проверке значений элементов. Используйте метод whereInStrict() для фильтрации с использованием строгого сравнения. Этот метод имеет такую же сигнатуру, что и метод `PHPwhereIn()` . Однако, все значения сравниваются с использованием строгого сравнения.
добавлено в 5.2 ()
Метод `PHPwhereIn()` использует строгое сравнение при проверке значений элементов. Используйте метод whereInLoose() для фильтрации с использованием «неточного» сравнения. Этот метод имеет такую же сигнатуру, что и метод `PHPwhereIn()` . Однако, все значения сравниваются с использованием «неточного» сравнения. Метод `PHPzip()` объединяет все значения заданного массива со значениями исходной коллекции на соответствующем индексе:
```
$collection = collect(['Chair', 'Desk']);
```
$zipped = $collection->zip([100, 200]); $zipped->all(); // [['Chair', 100], ['Desk', 200]]
# Командная шина
Командная шина Laravel предоставляет удобный способ инкапсуляции задач вашего приложения в простые и понятные «команды». Чтобы понять назначение команд, давайте представим, что мы создаём приложение, которое позволяет пользователям покупать подкасты.
Когда пользователь покупает подкаст, должно произойти несколько событий. Например, мы должны снять деньги с карты пользователя, добавить запись о покупке в нашу базу данных и послать электронное письмо с подтверждением покупки. Возможно, мы также должны выполнить некоторые проверки, например, разрешено ли пользователю вообще покупать подкасты.
Мы могли бы поместить всю эту логику в метод контроллера. Однако у этого подхода есть несколько недостатков. Во-первых, наш контроллер, вероятно, обрабатывает и несколько других входящих HTTP-действий, поэтому включение сложной логики в каждый метод контроллера приведёт к его быстрому разрастанию, и его будет сложно читать. Во-вторых, будет трудно снова использовать логику покупки подкаста за пределами контекста контроллера. В-третьих, будет сложнее проводить юнит-тесты команд, поскольку нам придётся также генерировать заглушки для HTTP-запросов и выполнять полноценный запрос к приложению, чтобы протестировать логику покупки подкаста.
Вместо размещения этой логики в контроллере, мы можем принять решение инкапсулировать её в объект «команды», такой как команда `PHPPurchasePodcast` .
Командная строка Artisan может генерировать новые классы команд, используя команду `make:command`:

> php artisan make:command PurchasePodcast
Только что сгенерированный класс будет помещён в каталог app/Commands. По умолчанию команда содержит два метода: конструктор и метод `PHPhandle` . Конструктор позволяет вам передавать любые соответствующие объекты команде, в то время как метод `PHPhandle` выполняет команду. Например:
```
class PurchasePodcast extends Command implements SelfHandling {

    protected $user, $podcast;

    /**
     * Создание нового экземпляра команды.
     *
     * @return void
     */
    public function __construct(User $user, Podcast $podcast)
    {
        $this->user = $user;
        $this->podcast = $podcast;
    }

    /**
     * Выполнение команды.
     *
     * @return void
     */
    public function handle()
    {
        // Логика покупки подкаста...

        event(new PodcastWasPurchased($this->user, $this->podcast));
    }

}
```

Метод `handle` может использовать указание типов зависимостей, и они будут автоматически внедрены сервис-контейнером:

```
/**
 * Выполнение команды.
 *
 * @return void
 */
public function handle(BillingGateway $billing)
{
    // Логика покупки подкаста...
}
```
## Выполнение команд
Как выполнить только что созданную команду? Можно непосредственно вызвать метод `handle()`. Однако выполнение команд через «командную шину» Laravel имеет ряд преимуществ, которые мы обсудим далее. Если вы посмотрите на основной контроллер своего приложения, то заметите типаж DispatchesCommands. Он позволяет вызывать метод `dispatch` из любого контроллера.
```
public function purchasePodcast($podcastId)
{
    $this->dispatch(
        new PurchasePodcast(Auth::user(), Podcast::findOrFail($podcastId))
    );
}
```

Командная шина займётся выполнением команды и вызовом IoC-контейнера для внедрения любых необходимых зависимостей в метод `handle`.

Вы можете добавить типаж Illuminate\Foundation\Bus\DispatchesCommands в любой класс. Если вы хотите получить экземпляр командной шины через конструктор какого-либо из ваших классов, вы можете указать тип интерфейса Illuminate\Contracts\Bus\Dispatcher. Наконец, вы можете просто использовать фасад `Bus`, чтобы быстро выполнить команду:

```
Bus::dispatch(
    new PurchasePodcast(Auth::user(), Podcast::findOrFail($podcastId))
);
```
### Передача параметров в запросах
Передача переменных HTTP-запроса в команды очень распространена. Поэтому, вместо того чтобы делать это вручную для каждого запроса, Laravel предоставляет некоторые вспомогательные методы. Давайте взглянем на метод `dispatchFrom`, доступный в типаже DispatchesCommands:
```
$this->dispatchFrom('Command\Class\Name', $request);
```
Этот метод проверяет конструктор класса команды, который он получил первым аргументом, и затем извлекает переменные из HTTP-запроса (или любого другого объекта типа `PHPArrayAccess` ), чтобы заполнить необходимые параметры конструктора команды. Так, если наш класс команды примет аргумент `PHPfirstName` в своём конструкторе, то командная шина попытается получить параметр `PHPfirstName` из HTTP-запроса. Вы можете также передать массив как третий аргумент метода `PHPdispatchFrom` . Этот массив будет использоваться, чтобы заполнить любые параметры конструктора, которые не доступны из запроса:
```
$this->dispatchFrom('Command\Class\Name', $request, [
    'firstName' => 'Taylor',
]);
```
## Очередь команд
Командная шина предназначена не только для синхронных задач, которые работают во время прохождения текущего запроса. Она также служит основным методом создания очередей задач в Laravel. Так как же заставить командную шину поставить нашу задачу в очередь для фоновой обработки вместо того, чтобы выполнить её синхронно? Очень просто. Во-первых, при генерации новой команды просто добавьте к ней флаг `--queued`:

> php artisan make:command PurchasePodcast --queued
И вы увидите, это добавляет ещё несколько функций к команде, а именно, интерфейс Illuminate\Contracts\Queue\ShouldBeQueued и типаж `PHPSerializesModels` . С помощью них командная шина ставит команды в очередь, а также корректно сериализует и десериализует любые Eloquent-модели, которые содержатся в вашей команде как свойства.
Если вы хотите преобразовать существующую команду в команду для очереди, просто реализуйте интерфейс Illuminate\Contracts\Queue\ShouldBeQueued в классе вручную. Он не содержит методов и просто служит индикатором для командной шины.
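Примерный набросок такой команды (имя класса условное):

```
use Illuminate\Contracts\Queue\ShouldBeQueued;

class PurchasePodcast extends Command implements SelfHandling, ShouldBeQueued
{
    // ...
}
```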
Теперь просто пишите свою команду как обычно. Шина автоматически поставит команду в очередь для фоновой обработки.
Для получения дополнительной информации о взаимодействии с очередью команд просмотрите полную документацию по очередям.
Прежде чем команда передастся в обработчик, вы можете передать её через другие классы на «конвейер». Каналы команд работают как HTTP-middleware, за исключением ваших команд! Например, канал команды может обернуть всю работу команды в транзакцию базы данных, или просто зарегистрировать в логе её выполнение.
Чтобы добавить канал в вашу шину, вызовите метод диспетчера `pipeThrough` из метода `boot` вашего App\Providers\BusServiceProvider:

```
$dispatcher->pipeThrough(['UseDatabaseTransactions', 'LogCommand']);
```
Канал команды определяется в методе `handle`, подобно middleware:

```
class UseDatabaseTransactions {

    public function handle($command, $next)
    {
        return DB::transaction(function() use ($command, $next)
        {
            return $next($command);
        });
    }

}
```
Классы каналов команд выполняются через IoC-контейнер, поэтому свободно указывайте типы любых необходимых зависимостей в их конструкторах.
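Например, примерный набросок канала, получающего зависимость через конструктор (класс Logger и текст сообщения здесь условные):

```
class LogCommand {

    protected $logger;

    public function __construct(Logger $logger)
    {
        $this->logger = $logger;
    }

    public function handle($command, $next)
    {
        // Записываем в лог имя класса выполняемой команды...
        $this->logger->info('Выполняется команда: '.get_class($command));

        return $next($command);
    }

}
```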
Вы можете даже определить `Closure` в качестве канала команды:

```
$dispatcher->pipeThrough([function($command, $next)
{
    return DB::transaction(function() use ($command, $next)
    {
        return $next($command);
    });
}]);
```
# Расширение фреймворка
## Управляющие и фабрики
Laravel содержит несколько классов `PHPManager` , которые управляют созданием компонентов, основанных на драйверах. Эти компоненты включают в себя кэш, сессии, авторизацию и очереди. Класс-управляющий ответственен за создание конкретной реализации драйвера в зависимости от настроек приложения. Например, класс `PHPCacheManager` может создавать объекты-реализации APC, Memcached, File и различных других драйверов кэша. Каждый из этих управляющих классов имеет метод `PHPextend()` , который может использоваться для простого добавления новой реализации драйвера. Мы поговорим об этих управляющих классах ниже, с примерами того, как добавить собственный драйвер в каждый из них. Внимание: Уделите несколько минут изучению различных классов `PHPManager` , которые поставляются с Laravel, таких как `PHPCacheManager` и `PHPSessionManager` . Знакомство с их кодом поможет вам лучше понять внутреннюю работу Laravel. Все классы-управляющие наследуют базовый класс
Illuminate\Support\Manager, который реализует общую полезную функциональность для каждого из них.
## Кэш
Для расширения подсистемы кэширования мы используем метод `PHPextend()` класса `PHPCacheManager` , который используется для привязки стороннего драйвера к управляющему классу и является общим для всех таких классов. Например, для регистрации нового драйвера кэша с именем mongo мы бы сделали следующее:
```
Cache::extend('mongo', function($app)
{
    return Cache::repository(new MongoStore);
});
```

Первый параметр, передаваемый методу `extend()` — имя драйвера. Это имя соответствует значению параметра driver файла настроек config/cache.php. Второй параметр — функция-замыкание, которая должна вернуть объект типа Illuminate\Cache\Repository. Замыкание получит параметр `$app` — объект Illuminate\Foundation\Application и сервис-контейнер.

Вызов `Cache::extend` может быть сделан в методе `boot()` в провайдере по умолчанию App\Providers\AppServiceProvider, который входит в новые приложения Laravel. Или вы можете создать свой сервис-провайдер для размещения расширения — только не забудьте зарегистрировать провайдер в массиве провайдеров в config/app.php.

Для создания стороннего драйвера для кэша мы начнём с реализации контракта Illuminate\Contracts\Cache\Store. Итак, наша реализация MongoDB будет выглядеть примерно так:
```
class MongoStore implements Illuminate\Contracts\Cache\Store {

    public function get($key) {}
    public function put($key, $value, $minutes) {}
    public function increment($key, $value = 1) {}
    public function decrement($key, $value = 1) {}
    public function forever($key, $value) {}
    public function forget($key) {}
    public function flush() {}

}
```

Нам только нужно реализовать каждый из этих методов с использованием подключения к MongoDB. Как только мы это сделали, можно закончить регистрацию нового драйвера:

```
Cache::extend('mongo', function($app)
{
    return Cache::repository(new MongoStore);
});
```

Если вы задумались о том, куда поместить код вашего нового драйвера кэша — подумайте о публикации его через Packagist! Либо вы можете создать пространство имён `Extensions` в вашей папке app. Однако держите в уме то, что Laravel не имеет жёсткой структуры папок и вы можете организовать свои файлы, как вам удобно.
## Сессии
Расширение системы сессий Laravel собственным драйвером так же просто, как и расширение драйвером кэша. Мы вновь используем метод `extend()` для регистрации собственного кода:

```
Session::extend('mongo', function($app)
{
    // Вернуть объект, реализующий SessionHandlerInterface
});
```
### Где расширять сессии
Код расширения системы сессий следует поместить в метод `PHPboot()` вашего AppServiceProvider.
### Написание расширения сессий
Заметьте, что наш драйвер сессии должен реализовывать интерфейс SessionHandlerInterface. Этот интерфейс содержит несколько простых методов, которые нам нужно написать. Заглушка драйвера MongoDB выглядит так:
```
class MongoHandler implements SessionHandlerInterface {

    public function open($savePath, $sessionName) {}
    public function close() {}
    public function read($sessionId) {}
    public function write($sessionId, $data) {}
    public function destroy($sessionId) {}
    public function gc($lifetime) {}

}
```

Эти методы не так легки в понимании, как методы драйвера кэша (`StoreInterface`), поэтому давайте пробежимся по каждому из них подробнее:
* Метод open() обычно используется при открытии системы сессий, основанной на файлах. Так как Laravel поставляется с драйвером сессий file, вам почти никогда не понадобится добавлять что-либо в этот метод. Вы можете оставить его пустым. Фактически, это просто плохое решение PHP, из-за которого мы должны написать этот метод.
* Метод close(), аналогично методу open(), обычно также игнорируется. Для большей части драйверов он не требуется.
* Метод read() должен вернуть строку — данные сессии, связанные с переданным $sessionId. Нет необходимости сериализовать объекты или делать какие-то другие преобразования при чтении или записи данных сессии в вашем драйвере — Laravel делает это автоматически.
* Метод write() должен связать строку $data с данными сессии с переданным идентификатором $sessionId, сохранив её в каком-либо постоянном хранилище, таком как MongoDB, Dynamo и др.
* Метод destroy() должен удалить все данные, связанные с переданным $sessionId, из постоянного хранилища.
* Метод gc() должен удалить все данные, которые старее переданного $lifetime (отпечатка времени Unix). Для самоочищающихся систем вроде Memcached и Redis этот метод может быть пустым.
Когда SessionHandlerInterface реализован, мы можем зарегистрировать драйвер в управляющем классе сессий:
```
Session::extend('mongo', function($app)
{
    return new MongoHandler;
});
```
Когда драйвер сессий зарегистрирован, мы можем использовать драйвер mongo в нашем файле настроек config/session.php.
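Набросок соответствующей настройки в config/session.php:

```
'driver' => 'mongo',
```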
Внимание: Помните, если вы написали новый драйвер сессии, поделитесь им на Packagist!
## Авторизация
Механизм авторизации может быть расширен тем же способом, что и кэш и сессии. Мы используем метод `PHPextend()` , с которым вы уже знакомы:
```
Auth::extend('riak', function($app)
{
    // Вернуть объект, реализующий Illuminate\Contracts\Auth\UserProvider
});
```
Реализации UserProvider ответственны только за то, чтобы получать реализацию Illuminate\Contracts\Auth\Authenticatable из постоянного хранилища, такого как MySQL, Riak и др. Эти два интерфейса позволяют работать механизму авторизации Laravel вне зависимости от того, как хранятся пользовательские данные и какой класс используется для их представления.
Давайте посмотрим на контракт UserProvider:
```
interface UserProvider {

    public function retrieveById($identifier);
    public function retrieveByToken($identifier, $token);
    public function updateRememberToken(Authenticatable $user, $token);
    public function retrieveByCredentials(array $credentials);
    public function validateCredentials(Authenticatable $user, array $credentials);

}
```

Метод `retrieveById()` обычно получает числовой ключ, идентифицирующий пользователя — такой, как автоинкрементное числовое поле ID в СУБД MySQL. Метод должен возвращать объект-реализацию `Authenticatable`, соответствующий переданному ID.

Метод `retrieveByToken()` получает пользователя по его уникальному `$identifier` и токену «запомнить меня» — `$token`, который хранится в поле `remember_token`. Как и предыдущий метод, он должен возвращать объект-реализацию `Authenticatable`.

Метод `updateRememberToken()` записывает новое значение `$token` в поле `remember_token` для указанного `$user`. Новый токен может быть как свежим, назначенным при удачном входе с опцией «запомнить меня», так и нулевым, когда пользователь выходит из приложения.

Метод `retrieveByCredentials()` получает массив данных, которые были переданы методу `Auth::attempt()` при попытке входа в систему. Этот метод должен запросить своё постоянное хранилище на наличие пользователя с совпадающими данными. Обычно этот метод выполнит SQL-запрос с проверкой на совпадение переданных данных (например, имени пользователя). А затем он должен вернуть объект-реализацию UserInterface. Этот метод не должен производить сравнение паролей или выполнять вход.

Метод `validateCredentials()` должен сравнить переданный объект пользователя `$user` с данными для входа `$credentials` для того, чтобы его авторизовать. К примеру, этот метод может сравнивать строку `$user->getAuthPassword()` с результатом вызова `Hash::make()` на строке `$credentials['password']`. Этот метод должен только проверять данные пользователя и возвращать значение типа boolean.
```
PHPUserProviderInterface
```
давайте посмотрим на интерфейс `PHPAuthenticatable` . Как вы помните, провайдер должен вернуть реализацию этого интерфейса из своих методов `PHPretrieveById()` и
```
interface Authenticatable {
```
    public function getAuthIdentifier();
    public function getAuthPassword();
    public function getRememberToken();
    public function setRememberToken($value);
    public function getRememberTokenName();
}
```

Это простой интерфейс. Метод `getAuthIdentifier()` должен просто вернуть «первичный ключ» пользователя. Если используется хранилище MySQL, то это будет автоинкрементное числовое поле-первичный ключ. Метод `getAuthPassword()` должен вернуть хешированный пароль пользователя. Этот интерфейс позволяет системе авторизации работать с любым классом пользователя, вне зависимости от используемой ORM или хранилища данных. Изначально Laravel содержит класс User в папке app, который реализует этот интерфейс, поэтому вы можете обратиться к этому классу, чтобы увидеть пример реализации. Наконец, когда мы написали класс-реализацию `UserProvider`, мы готовы зарегистрировать наше расширение в фасаде `Auth`:
```
Auth::extend('riak', function ($app) {
    return new RiakUserProvider($app['riak.connection']);
});
```

Когда вы зарегистрировали драйвер методом `extend()`, вы можете активировать его в вашем файле настроек config/auth.php.
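Для ориентира — набросок возможного каркаса такого провайдера. Класс RiakUserProvider и детали работы с Riak здесь условные, показана лишь структура, которую должна иметь любая реализация UserProvider:

```
use Illuminate\Contracts\Auth\Authenticatable;
use Illuminate\Contracts\Auth\UserProvider;
use Illuminate\Support\Facades\Hash;

class RiakUserProvider implements UserProvider
{
    protected $connection;

    public function __construct($connection)
    {
        $this->connection = $connection;
    }

    public function retrieveById($identifier)
    {
        // Найти пользователя по первичному ключу и вернуть Authenticatable или null
    }

    public function retrieveByToken($identifier, $token)
    {
        // Найти пользователя по ID и токену «запомнить меня»
    }

    public function updateRememberToken(Authenticatable $user, $token)
    {
        // Сохранить новый токен «запомнить меня» для пользователя
    }

    public function retrieveByCredentials(array $credentials)
    {
        // Найти пользователя по данным из Auth::attempt(), не проверяя пароль
    }

    public function validateCredentials(Authenticatable $user, array $credentials)
    {
        // Один из возможных вариантов проверки пароля
        return Hash::check($credentials['password'], $user->getAuthPassword());
    }
}
```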
## Расширения на основе сервис-контейнера
Почти все сервис-провайдеры Laravel получают свои объекты из сервис-контейнера. Вы можете увидеть список провайдеров вашего приложения в файле config/app.php. Вам стоит пробежаться по коду каждого из поставщиков в свободное время — сделав это вы получите намного более чёткое представление о том, какую именно функциональность каждый из них добавляет к фреймворку, а также какие ключи используются для привязки различных услуг в сервис-контейнере.
Например, HashServiceProvider использует привязку ключа hash для получения экземпляра `Illuminate\Hashing\BcryptHasher` из сервис-контейнера. Вы можете легко расширить и перекрыть этот класс в вашем приложении, перекрыв эту привязку. Например:
```
class SnappyHashProvider extends \Illuminate\Hashing\HashServiceProvider
{
    public function boot()
    {
        parent::boot();

        $this->app->bindShared('hash', function () {
            return new \Snappy\Hashing\ScryptHasher;
        });
    }
}
```

Заметьте, что этот класс наследует `HashServiceProvider`, а не базовый класс по умолчанию `ServiceProvider`. Когда вы расширили провайдер, измените `Illuminate\Hashing\HashServiceProvider` в файле настроек config/app.php на имя вашего нового поставщика услуг.
Это общий подход к расширению любого класса ядра, который привязан к контейнеру. Фактически каждый класс так или иначе привязан к нему, и с его помощью может быть перекрыт. Опять же, прочитав код включённых в фреймворк сервис-провайдеров, вы познакомитесь с тем, где различные классы привязываются к контейнеру и какие ключи для этого используются. Это отличный способ узнать, как работает Laravel.
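Небольшой пример для иллюстрации: после такой перепривязки любой код, запрашивающий сервис по его ключу из контейнера, получит уже ваш класс (пример опирается на класс ScryptHasher из примера выше, имена условные):

```
// Где угодно в приложении:
$hasher = app('hash');            // вернётся экземпляр \Snappy\Hashing\ScryptHasher

$hash = $hasher->make('секрет');  // хеширование выполнит ваш класс
```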
# Elixir
Laravel Elixir предоставляет чистый и гибкий API для определения основных Gulp-задач вашего Laravel-приложения. Elixir поддерживает основные препроцессоры CSS и JavaScript, такие как Sass и Webpack. С помощью сцепки методов Elixir позволяет вам гибко определять свой файлопровод (asset pipeline). Например:
> jselixir(function(mix) { mix.sass('app.scss') .webpack('app.js'); });
Если вы сомневались и путались, с чего начать работу с Gulp и компиляцией ресурсов, то вам точно понравится Laravel Elixir. Но вам необязательно использовать именно его при разработке своего приложения. Вы можете использовать любой другой инструмент для файлопровода, или вообще не использовать его.
Перед запуском Elixir необходимо убедиться, что на вашем компьютере установлены Node.js и NPM (для версии 5.3 и выше).
> shnode -v npm -v
По умолчанию Laravel Homestead включает в себя всё необходимое. Однако, если вы не используете Vagrant, то вы можете легко установить свежие версии Node и NPM с помощью простых графических установщиков с их страницы загрузки.
После вам надо получить Gulp как глобальный NPM-пакет:
> shnpm install --global gulp
добавлено в 5.2 ()
> shnpm install --global gulp-cli
Если вы используете систему контроля версий, то можете выполнить команду `shnpm shrinkwrap` , чтобы зафиксировать свои NPM-требования: > shnpm shrinkwrap
После выполнения этой команды, вы можете «закоммитить» npm-shrinkwrap.json в контроль версий.
Осталось только установить Laravel Elixir. В корневом каталоге установленного Laravel вы найдете файл package.json. Стандартный файл package.json содержит JavaScript-модули Elixir и Webpack. Это то же самое, что и файл composer.json, только он определяет зависимости Node, а не PHP. Вы можете установить перечисленные в нём зависимости выполнив:
> shnpm install
Если вы разрабатываете в системе Windows или запускаете виртуальную машину с Windows, вам может потребоваться команда `shnpm install` с ключом `sh--no-bin-links` : > shnpm install --no-bin-links
## Запуск Elixir
Elixir построен на основе Gulp, поэтому для запуска задач Elixir вам надо просто выполнить консольную команду `shgulp` . А ключ `sh--production` минимизирует ваши файлы CSS и JavaScript: > sh// Запустить все задачи... gulp // Запустить все задачи и минимизировать все файлы CSS и JavaScript... gulp --production
При выполнении этой команды вы увидите красиво оформленную таблицу с информацией о всех выполненных при этом операциях.
Отслеживание изменений ресурсов
Команда `shgulp watch` после запуска продолжает выполняться в терминале и следит за всеми изменениями ваших ресурсов. Gulp автоматически перекомпилирует ваши ресурсы, если вы измените их, пока запущена команда `shwatch` : > shgulp watch
## Работа с таблицами стилей
Файл gulpfile.js в корневом каталоге вашего проекта содержит все ваши Elixir-задачи. Elixir-задачи могут быть сцеплены вместе для точного определения того, как должны быть скомпилированы ваши ресурсы.
### Компилирование Less
Чтобы скомпилировать Less в CSS используется метод `PHPless()` . Этот метод предполагает, что ваши файлы находятся в resources/assets/less. В этом примере задача по умолчанию поместит скомпилированную CSS в public/css/app.css: > jselixir(function(mix) { mix.less('app.less'); });
Также вы можете комбинировать несколько Less-файлов в один CSS-файл. Итоговая CSS будет помещена в public/css/app.css:
> jselixir(function(mix) { mix.less([ 'app.less', 'controllers.less' ]); });
Указать другой путь для сохранения итоговых файлов можно вторым аргументом метода `PHPless()` : > jselixir(function(mix) { mix.less('app.less', 'public/stylesheets'); }); // Указание конкретного имени выходного файла... elixir(function(mix) { mix.less('app.less', 'public/stylesheets/style.css'); });
### Компилирование Sass
Метод `PHPsass()` позволяет компилировать Sass в CSS. Предполагается, что ваши Sass-файлы хранятся в resources/assets/sass. > jselixir(function(mix) { mix.sass('app.scss'); });
добавлено в 5.0 ()
По умолчанию Elixir под капотом использует библиотеку LibSass для компиляции. В некоторых случаях это может оказаться выгодней, чем использование версии Ruby, которая работает медленнее, но зато более функциональна. Если у вас установлены и Ruby и Sass ( `shgem install sass` ), вы можете включить Ruby-режим: > jselixir(function(mix) { mix.rubySass("app.sass"); });
Рекомендуется использовать стандартные каталоги Laravel для ресурсов, но если вам необходим другой базовый каталог, вы можете указать путь к любому файлу, начинающийся с ./. Тогда Elixir поймёт, что надо начать с корня проекта, а не использовать стандартный базовый каталог.
Например, чтобы скомпилировать файл из app/assets/sass/app.scss и разместить результат в public/css/app.css, вам надо сделать такой вызов метода `PHPsass()` : > jselixir(function(mix) { mix.sass('./app/assets/sass/app.scss'); });
### Компилирование Stylus
Для компилирования Stylus в CSS используется метод `PHPstylus()` . Если ваши Stylus-файлы хранятся в resources/assets/stylus, вызовите метод вот так: > jselixir(function(mix) { mix.stylus('app.styl'); });
Эта сигнатура метода идентична и для `PHPmix.less()` и для `PHPmix.sass()` .
добавлено в 5.3 () 5.2 () 5.1 ()
### Компилирование простых CSS
Если вы хотите просто скомбинировать некоторые простые таблицы стилей в один файл, используйте метод `PHPstyles()` . Передаваемые в этот метод пути относительны к resources/assets/css, а итоговая CSS будет помещена в public/css/all.css: > jselixir(function(mix) { mix.styles([ 'normalize.css', 'main.css' ]); });
Вы также можете указать Elixir сохранить итоговый файл в другой каталог или файл, передав второй аргумент в метод `PHPstyles()` : > jselixir(function(mix) { mix.styles([ 'normalize.css', 'main.css' ], 'public/assets/css/site.css'); });
Комбинирование стилей и сохранение по базовому пути
> jselixir(function(mix) { mix.styles([ "normalize.css", "main.css" ], 'public/build/css/everything.css', 'public/css'); });
Третий аргумент методов `PHPstyles()` и `PHPscripts()` определяет относительный каталог для всех путей, передаваемых в методы.
Комбинирование всех стилей в каталоге
> jselixir(function(mix) { mix.stylesIn("public/css"); });
### Компилирование без карт кода
В Elixir карты кода (source maps) включены по умолчанию и обеспечивают более полную отладочную информацию для инструментов разработчика в вашем браузере при использовании скомпилированных ресурсов. Для каждого скомпилированного файла вы найдете соответствующий ему файл *.css.map или *.js.map в том же каталоге.
Чтобы отключить эту функцию, просто отключите настройку sourcemaps:
> jselixir.config.sourcemaps = false; elixir(function(mix) { mix.sass('app.scss'); });
## Работа со скриптами
Elixir предоставляет несколько функций для работы с JavaScript-файлами, например: компилирование ECMAScript 2015, сбор модулей, минификация и простая конкатенация простых JavaScript-файлов.
При написании ES2015 с модулями у вас есть выбор между Webpack (https://webpack.github.io) и Rollup. Если эти инструменты вам незнакомы, не беспокойтесь, Elixir сам выполнит всю сложную работу. По умолчанию gulpfile Laravel использует Webpack для компилирования JavaScript, но вы можете использовать любой другой сборщик модулей.
### Компилирование Webpack
Для компилирования и сборки ECMAScript 2015 в простой JavaScript служит метод `PHPwebpack()` . Эта функция принимает путь к файлу относительно каталога resources/assets/js и генерирует один собранный файл в каталоге public/js: > jselixir(function(mix) { mix.webpack('app.js'); });
Для выбора другого каталога для вывода или базового каталога просто укажите необходимые пути с точкой в начале. Затем вы можете указать пути относительно корня вашего приложения. Например, чтобы скомпилировать app/assets/js/app.js в public/dist/app.js:
> jselixir(function(mix) { mix.webpack( './app/assets/js/app.js', './public/dist' ); });
Если вы хотите использовать функциональность Webpack по полной, Elixir прочитает любой файл webpack.config.js, находящийся в корне вашего проекта, и включит его настройки в процесс сборки.
### Компилирование Rollup
Подобно Webpack, Rollup — это сборщик для ES2015 следующего поколения. Эта функция принимает массив файлов относительно каталога resources/assets/js и генерирует один файл в каталоге public/js:
> jselixir(function(mix) { mix.rollup('app.js'); });
Как и с методом `PHPwebpack()` вы можете настроить место для ввода и вывода файлов, передаваемых в метод `PHProllup()` :
```
elixir(function(mix) {
```
mix.rollup( './resources/assets/js/app.js', './public/dist' ); });
добавлено в 5.2 () 5.1 () 5.0 ()
### Компилирование CoffeeScript
Для компилирования CoffeeScript в простой JavaScript служит метод `PHPcoffee()` . Он принимает строку или массив CoffeeScript-файлов относительно пути resources/assets/coffee и генерирует единый файл public/js/app.js: > jselixir(function(mix) { mix.coffee(['app.coffee', 'controllers.coffee']); });
### Browserify
В Elixir есть метод `browserify()`, который даёт вам все преимущества подключения модулей в браузере, а также ECMAScript 6 и JSX (для версии 5.2 и выше).
Эта задача предполагает, что ваши скрипты хранятся в resources/assets/js, и сохранит итоговый файл в public/js/main.js:
> jselixir(function(mix) { mix.browserify('main.js'); });
В Browserify уже включены трансформеры Partialify и Babelify, при необходимости вы можете установить и добавить другие:
> shnpm install aliasify --save-dev
> jselixir.config.js.browserify.transformers.push({ name: 'aliasify', options: {} }); elixir(function(mix) { mix.browserify('main.js'); });
### Babel
Для компилирования ECMAScript 6 и 7 и JSX (для версии 5.2 и выше) в простой JavaScript служит метод `PHPbabel()` . Он принимает массив или файлы относительно пути resources/assets/js, и генерирует единый файл public/js/all.js: > jselixir(function(mix) { mix.babel([ 'order.js', 'product.js', 'react-component.jsx' //для версии 5.2 и выше ]); });
Другое место для сохранения итогового файла можно указать вторым аргументом. Сигнатура и функциональность этого метода идентичны `PHPmix.scripts()` , за исключением компилирования Babel.
### Скрипты
Чтобы скомбинировать несколько JavaScript-файлов в один, используйте метод `PHPscripts()` , который обеспечивает автоматические карты кода, конкатенацию и минификацию. Метод `PHPscripts()` работает с файлами по относительному пути resources/assets/js и сохраняет итоговый JavaScript в public/js/all.js: > jselixir(function(mix) { mix.scripts([ 'order.js', 'forum.js' ]); });
Если вам необходимо конкатенировать несколько наборов скриптов в разные файлы, вы можете сделать несколько вызовов метода `PHPscripts()` . Имя итогового файла для каждой конкатенации определяется вторым аргументом метода: > jselixir(function(mix) { mix.scripts(['app.js', 'controllers.js'], 'public/js/app.js') .scripts(['forum.js', 'threads.js'], 'public/js/forum.js'); });
Если вам необходимо скомбинировать все скрипты в данной папке, используйте метод `PHPscriptsIn()` . Итоговый JavaScript будет помещён в public/js/all.js: > jselixir(function(mix) { mix.scriptsIn('public/js/some/directory'); });
Если вы собираетесь конкатенировать несколько предварительно минифицированных библиотек от производителей, таких как jQuery, попробуйте вместо этого использовать `PHPmix.combine()` . При этом файлы будут скомбинированы без выполнения шагов по созданию карт кода и минификации. В результате значительно уменьшится время компилирования.
## Копирование файлов и папок
Метод `copy()` используется для копирования файлов и папок в новое место. Все операции выполняются относительно корневой папки проекта:

```
elixir(function(mix) {
    mix.copy('vendor/foo/bar.css', 'public/css/bar.css');
});
```
## Версии файлов / очистка кэша
Многие разработчики добавляют в имена ресурсов время создания или уникальный токен, чтобы браузер загружал свежие ресурсы вместо обработки устаревшего кода. В Elixir для этого служит метод `PHPversion()` . Метод `PHPversion()` принимает имя файла относительно папки public и добавляет к нему уникальный хеш для возможности очистки кэша. Например, сгенерированное имя файла будет выглядеть приблизительно так — all-16d570a7.css: > jselixir(function(mix) { mix.version('css/all.css'); });
Сгенерировав версию файла, вы можете использовать глобальную функцию Laravel `PHPelixir()` в ваших представлениях для загрузки соответствующих хешированных ресурсов. Функция `PHPelixir()` автоматически определит текущее имя хешированного файла: > xml<link rel="stylesheet" href="{{ elixir('css/all.css') }}">
Вы также можете передать массив в метод `PHPversion()` , чтобы присвоить версию нескольким файлам: > jselixir(function(mix) { mix.version(['css/all.css', 'js/app.js']); });
Когда файлам присвоена версия, вы можете использовать функцию `elixir()` для генерации ссылок к нужным хешированным файлам. Помните, вам надо просто передать в эту функцию имя нехешированного файла. Она использует это имя для определения текущей хешированной версии файла:

```
<link rel="stylesheet" href="{{ elixir('css/all.css') }}">

<script src="{{ elixir('js/app.js') }}"></script>
```

## BrowserSync
BrowserSync автоматически производит обновление в браузере при изменениях в ваших ресурсах. Метод `PHPbrowserSync()` принимает JavaScript-объект с атрибутом proxy, содержащим локальный URL вашего приложения. Теперь, когда вы запустите `shgulp watch` , вы сможете обращаться к своему приложению через 3000 порт (http://project.dev:3000) и наслаждаться браузерной синхронизацией: > jselixir(function(mix) { mix.browserSync({ proxy: 'project.dev' }); });
## Gulp
Теперь, когда вы указали, какие задачи должен выполнять Elixir, вы должны вызвать Gulp из командной строки.
Выполнение всех зарегистрированных задач разом
> shgulp
Отслеживание изменений ресурсов
> shgulp watch
Только скомпилированные скрипты
> shgulp scripts
> shgulp styles
Отслеживание изменений тестов и PHP-классов
> shgulp tdd
Запуск всех задач будет происходить для среды разработки, и минимизация выполняться не будет. Для продакшна используйте `shgulp --production` .
добавлено в 5.2 () 5.1 () 5.0 ()
## Вызов существующих Gulp-задач
Для вызова существующей Gulp-задачи используйте метод `PHPtask()` . Как пример, представьте, что у вас есть Gulp-задача, которая просто выводит небольшой текст при вызове. > jsgulp.task('speak', function() { var message = 'Tea...Earl Grey...Hot'; gulp.src('').pipe(shell('say ' + message)); });
Если вы хотите вызвать эту задачу из Elixir, используйте метод `PHPmix.task()` и передайте имя задачи единственным аргументом метода: > jselixir(function(mix) { mix.task('speak'); });
Если вам необходимо зарегистрировать наблюдателя, чтобы запускать ваши задачи каждый раз, когда изменяется один или несколько файлов, вы можете передать регулярное выражение в качестве второго аргумента метода `PHPtask()` : > jselixir(function(mix) { mix.task('speak', 'app/**/*.php'); });
## Создание расширений Elixir
Для ещё большей гибкости вы можете создать полноценные расширения Elixir. Расширения Elixir позволяют передавать аргументы в ваши задачи. Например, можно написать такое расширение:
> js// File: elixir-extensions.js var gulp = require('gulp'); var shell = require('gulp-shell'); var Elixir = require('laravel-elixir'); var Task = Elixir.Task; Elixir.extend('speak', function(message) { new Task('speak', function() { return gulp.src('').pipe(shell('say ' + message)); }); }); // mix.speak('Hello World');
Вот и всё! Обратите внимание на то, что ваша Gulp-логика должна быть в функции, передаваемой вторым аргументом конструктору `Task()`. Вы можете либо расположить это в начале вашего Gulpfile, либо вынести в отдельный файл задач. Например, если вы разместите расширения в elixir-extensions.js, то можете затребовать этот файл из основного Gulpfile вот так: > js// File: Gulpfile.js var elixir = require('laravel-elixir'); require('./elixir-extensions') elixir(function(mix) { mix.speak('Tea, Earl Grey, Hot'); });
Если вы хотите перезапускать свою задачу во время выполнения `shgulp watch` , можете зарегистрировать наблюдателя: > jsnew Task('speak', function() { return gulp.src('').pipe(shell('say ' + message)); }) .watch('./app/**');
# Шифрование
Шифратор Laravel использует OpenSSL для шифрования по алгоритмам AES-256 и AES-128. Настоятельно призываем вас использовать встроенные в Laravel возможности шифрования и не пытаться применять свои «самодельные» алгоритмы шифрования. Все шифрованные значения подписаны кодом аутентификации сообщения (MAC — message authentication code) для предотвращения любых изменений в зашифрованной строке.
Перед использованием шифрования Laravel обязательно задайте ключ key в файле config/app.php. Для этого вам надо использовать команду `php artisan key:generate`, которая использует надёжный генератор случайных байтов PHP для создания вашего ключа. Иначе все зашифрованные значения не будут безопасными.
## Использование шифратора
Вы можете зашифровать значение с помощью вспомогательной функции `PHPencrypt()` (в версии 5.2 и ранее используется метод `PHPencrypt()` фасада Crypt). Все значения шифруются с помощью OpenSSL и шифра AES-256-CBC. Более того, все шифрованные значения подписаны кодом аутентификации сообщения (MAC — message authentication code) для обнаружения любых изменений в зашифрованной строке: `<?php` namespace App\Http\Controllers; //для версии 5.2 и ранее: //use Crypt; use App\User; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class UserController extends Controller { /** * Сохранение секретного сообщения для пользователя. * * @param Request $request * @param int $id * @return Response */ public function storeSecret(Request $request, $id) { $user = User::findOrFail($id); $user->fill([ 'secret' => encrypt($request->secret) //для версии 5.2 и ранее: //'secret' => Crypt::encrypt($request->secret) ])->save(); } }
При шифровании значения подвергаются «сериализации», что позволяет шифровать объекты и массивы. Поэтому при получении шифрованных значений клиентам «без PHP» необходимо будет «десериализовать» данные.
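Небольшой пример для иллюстрации (предполагается, что доступны хелперы `encrypt()`/`decrypt()`, описанные выше; данные условные):

```
// Laravel сам сериализует массив перед шифрованием...
$payload = encrypt(['user_id' => 1, 'plan' => 'pro']);

// ...поэтому после расшифровки мы снова получим массив:
$data = decrypt($payload); // ['user_id' => 1, 'plan' => 'pro']
```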
Вы можете расшифровать значение при помощи вспомогательной функции `PHPdecrypt()` (в версии 5.2 и ранее — метод `PHPdecrypt()` фасада Crypt). Если значение не может быть корректно расшифровано, например, при неверном MAC, будет выброшено исключение Illuminate\Contracts\Encryption\DecryptException:
```
use Illuminate\Contracts\Encryption\DecryptException;
```
try { $decrypted = decrypt($encryptedValue); //для версии 5.2 и ранее: //$decrypted = Crypt::decrypt($encryptedValue); } catch (DecryptException $e) { // }
добавлено в 5.0 ()
Laravel предоставляет средства для надёжного шифрования по алгоритму AES с помощью расширения mcrypt для PHP.
```
$encrypted = Crypt::encrypt('секрет');
```
```
$decrypted = Crypt::decrypt($encryptedValue);
```
Изменение алгоритма и режима шифрования
```
Crypt::setMode('ctr');

Crypt::setCipher($cipher);
```
# Инструмент запуска задач Envoy
Laravel Envoy обеспечивает чистый и минималистичный синтаксис для регистрации общих задач, запускаемых на удалённых серверах. Используя синтаксис в стиле Blade, вы легко можете настроить задачи для развёртывания, Artisan-команды и прочее. На данный момент Envoy работает только на ОС Mac/Linux.
### Установка
Сначала установите Envoy с помощью команды Composer `shglobal require` : > shcomposer global require "laravel/envoy=~1.0"
Поскольку из-за глобальных библиотек Composer иногда могут возникать конфликты версий пакетов, вы можете использовать cgr, являющийся полноценной заменой для команды `composer global require`. Инструкции по установке библиотеки cgr можно найти на GitHub.

Не забудьте поместить каталог ~/.composer/vendor/bin в вашу переменную PATH, чтобы исполняемый файл envoy мог быть найден при запуске команды `envoy` в терминале.

Также вы можете использовать Composer для обновления вашего Envoy. Команда `composer global update` обновит все ваши установленные глобально пакеты Composer: > shcomposer global update
## Написание задач
Все ваши Envoy-задачи должны быть определены в файле Envoy.blade.php в корне вашего проекта. Вот пример для начала:
```
@servers(['web' => ['[email protected]']])
```
@task('foo', ['on' => 'web']) ls -la @endtask
Как видите, массив @servers определён в начале файла. Вы можете ссылаться на эти сервера в параметре on при объявлении задач. Поместите в ваши объявления @task тот Bash-код, который должен запускаться на сервере при исполнении задачи.
### Начальная настройка
Иногда необходимо выполнить некий PHP-код перед выполнением ваших Envoy-задач. Для объявления переменных и выполнения других общих PHP-задач перед выполнением всех остальных ваших задач вы можете использовать директиву @setup:
`@setup` $now = new DateTime(); $environment = isset($env) ? $env : "testing"; @endsetup
Если вам необходимо подключить другие PHP-файлы перед выполнением задачи, используйте директиву @include в начале файла Envoy.blade.php:
```
@include('vendor/autoload.php');
```
@task('foo') # ... @endtask
### Переменные
При необходимости вы можете передать значения параметров в Envoy-задачи с помощью командной строки:
> shenvoy run deploy --branch=master
Вы можете обращаться к параметрам в своих задачах с помощью Blade-синтаксиса «echo»:
@task('deploy', ['on' => 'web']) cd site git pull origin {{ $branch }} php artisan migrate @endtask
добавлено в 5.3 ()
Само собой, вы также можете использовать операторы `PHPif` и циклы в своих задачах. Например, давайте проверим наличие переменной `PHP$branch` перед выполнением команды `shgit pull` :
@task('deploy', ['on' => 'web']) cd site @if ($branch) git pull origin {{ $branch }} @endif php artisan migrate @endtask
### Группа Story
Группа Story — набор задач под единым удобным именем, который позволяет вам сгруппировать маленькие узконаправленные задачи в одну большую задачу. Например, группа deploy может запустить задачи git и composer, если указать имена этих задач в её описании:
@story('deploy') git composer @endstory @task('git') git pull origin master @endtask @task('composer') composer install @endtask
Теперь группа deploy может быть запущена как обычная задача:
> shenvoy run deploy
### Макрос задач
Макрос позволяет вам определить набор задач, которые будут запускаться последовательно с помощью одной команды. Например, макрос deploy может запустить задачи git и composer:
@macro('deploy') git composer @endmacro @task('git') git pull origin master @endtask @task('composer') composer install @endtask
Теперь макрос deploy может быть запущен одной простой командой:
> shenvoy run deploy
### Несколько серверов
Благодаря Envoy вы легко можете запускать задачи на нескольких серверах. Сначала добавьте дополнительные сервера в объявление @servers. Каждому серверу должно быть присвоено уникальное имя. Когда вы определили дополнительные сервера, перечислите каждый из них в массиве задачи on:
@task('deploy', ['on' => ['web-1', 'web-2']]) cd site git pull origin {{ $branch }} php artisan migrate @endtask
По умолчанию задачи будут выполняться на каждом сервере поочерёдно. То есть задача будет завершаться на первом сервере перед переходом к выполнению на следующем.
Если вы хотите запускать задачи на нескольких серверах параллельно, просто добавьте параметр parallel в объявление задач:
@task('deploy', ['on' => ['web-1', 'web-2'], 'parallel' => true]) cd site git pull origin {{ $branch }} php artisan migrate @endtask
## Запуск задач
Для запуска задачи или группы задач из файла Envoy.blade.php используйте Envoy-команду `shrun` вашего Envoy, передавая ей название задачи или группы задач для запуска. Envoy запустит задачу и выведет ответ от серверов, когда задача будет запущена: > shenvoy run task
### Запрос подтверждения перед запуском задач
Если хотите получать запрос на подтверждение перед запуском определённой задачи на ваших серверах, добавьте директиву confirm в определение вашей задачи. Этот параметр полезен в основном для необратимых операций:
```
@task('deploy', ['on' => 'web', 'confirm' => true])
```
cd site git pull origin {{ $branch }} php artisan migrate @endtask
## Оповещения
### Slack
Envoy также поддерживает отправку оповещений в Slack после выполнения каждой команды.
Вы можете получить URL вашего webhook, создав на сайте Slack интеграцию «Incoming WebHooks». Аргумент 'hook' должен содержать полный полученный URL вашего webhook (пример использования директивы приведён ниже, после списка). Например:
> https://<KEY>
А в качестве аргумента 'channel' может быть:
* #channel — для отправки уведомлений на канале
* @user — для отправки уведомлений пользователю
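Набросок использования: судя по описанию выше, директива @slack принимает URL вебхука и канал и размещается в блоке @after, по аналогии с примером для HipChat ниже (значения аргументов условные):

```
@after
    @slack('hook-url', '#bots')
@endafter
```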
### HipChat
Вы можете посылать оповещение в чат HipChat своей команды после запуска задачи с помощью простой директивы @hipchat. Директива принимает токен API, название чата, и имя отправителя сообщения:
@task('foo', ['on' => 'web']) ls -la @endtask @after @hipchat('token', 'room', 'Envoy') @endafter
Вы также можете задать своё собственное сообщение в чат HipChat, в котором можете использовать любые переменные, доступные вашим Envoy-задачам:
добавлено в 5.2 ()
`@after` @hipchat('token', 'room', 'Envoy', "$task ran in the $env environment.") @endafter
# Ошибки и журнал
Когда вы начинаете новый проект Laravel, обработка ошибок и исключений уже настроена для вас. Все происходящие в вашем приложении исключения обрабатываются классом App\Exceptions\Handler: он записывает их в журнал и формирует отклик, отображаемый пользователю. В этой статье мы подробно рассмотрим этот класс.
Для журналирования Laravel использует библиотеку Monolog, которая обеспечивает поддержку различных мощных обработчиков журналов. В Laravel настроены несколько из них, благодаря чему вы можете выбрать между единым файлом журнала, ротируемыми файлами журналов и записью информации в системный журнал.
### Детализация ошибок
Параметр `confdebug` в файле настроек config/app.php определяет, сколько информации об ошибке показывать пользователю. По умолчанию этот параметр установлен в соответствии со значением переменной среды APP_DEBUG, которая хранится в файле .env.
Для локальной разработки вам следует установить переменную среды APP_DEBUG в значение true. В продакшн-среде эта переменная всегда должна иметь значение false. Если значение равно true на продакшн-сервере, вы рискуете раскрыть важные значения настроек вашим конечным пользователям.
### Хранилище журналов
Изначально Laravel поддерживает запись журналов в единый файл (single), в отдельные файлы за каждый день (daily), в syslog и errorlog. Для использования определённого механизма хранения вам надо изменить параметр `conflog` в файле config/app.php. Например, если вы хотите использовать ежедневные файлы журнала вместо единого файла, вам надо установить значение log равное daily в файле настроек app: `'log' => 'daily'`
### Уровни важности событий
При использовании Monolog сообщения в журнале могут иметь разные уровни важности. По умолчанию Laravel сохраняет в журнал события всех уровней. Но на продакшн-сервере вы можете задать минимальный уровень важности, который необходимо заносить в журнал, добавив параметр `conflog_level` в файл app.php. После задания этого параметра Laravel будет записывать события всех уровней начиная с указанного и выше. Например, при `conflog_level` равном error будут записываться события error, critical, alert и emergency: > conf'log_level' => env('APP_LOG_LEVEL', 'error'),
В Monolog используются следующие уровни важности — от меньшего к большему: debug, info, notice, warning, error, critical, alert, emergency.
Если вы хотите иметь полный контроль над конфигурацией Monolog для вашего приложения, вы можете использовать метод приложения
```
PHPconfigureMonologUsing()
```
. Вызов этого метода необходимо поместить в файл bootstrap/app.php прямо перед тем, как в нём возвращается переменная `PHP$app` :
```
$app->configureMonologUsing(function ($monolog) {
```
$monolog->pushHandler(...); }); return $app;
## Обработчик исключений
Все исключения обрабатываются классом App\Exceptions\Handler. Этот класс содержит два метода: `report()` и `render()`. Рассмотрим каждый из них подробнее.

Метод `report()` используется для занесения исключений в журнал или для отправки их во внешний сервис, такой как BugSnag или Sentry. По умолчанию метод `report()` просто передаёт исключение в базовую реализацию родительского класса, где оно и регистрируется. Но вы можете регистрировать исключения как пожелаете. Например, если вам необходимо сообщать о различных типах исключений разными способами, вы можете использовать оператор PHP `instanceof`:

```
/**
 * Сообщить или зарегистрировать исключение.
 *
 * Это отличное место для отправки исключений в Sentry, Bugsnag и т.д.
 *
 * @param  \Exception  $exception
 * @return void
 */
public function report(Exception $exception)
{
    if ($exception instanceof CustomException) {
        //
    }

    return parent::report($exception);
}
```
Игнорирование исключений заданного типа
Свойство обработчика исключений `PHP$dontReport` содержит массив с типами исключений, которые не будут заноситься в журнал. Например, исключения, возникающие при ошибке 404, а также при некоторых других типах ошибок, не записываются в журналы. При необходимости вы можете включить другие типы исключений в этот массив: `/**` * Список типов исключений, о которых не надо сообщать. * * @var array */ protected $dontReport = [ \Illuminate\Auth\AuthenticationException::class, \Illuminate\Auth\Access\AuthorizationException::class, \Symfony\Component\HttpKernel\Exception\HttpException::class, \Illuminate\Database\Eloquent\ModelNotFoundException::class, \Illuminate\Validation\ValidationException::class, ];
Метод `render()` отвечает за конвертацию исключения в HTTP-отклик, который должен быть возвращён браузеру. По умолчанию исключение передаётся в базовый класс, который генерирует отклик за вас. Но вы можете проверить тип исключения или вернуть ваш собственный отклик:

```
/**
 * Отрисовка HTTP-отклика для исключения.
 *
 * @param  \Illuminate\Http\Request  $request
 * @param  \Exception  $exception
 * @return \Illuminate\Http\Response
 */
public function render($request, Exception $exception)
{
    if ($exception instanceof CustomException) {
        return response()->view('errors.custom', [], 500);
    }

    return parent::render($request, $exception);
}
```
## HTTP-исключения
Некоторые исключения описывают коды HTTP-ошибок от сервера. Например, это может быть ошибка «страница не найдена» (404), «ошибка авторизации» (401) или даже сгенерированная разработчиком ошибка 500. Для возврата такого отклика из любого места в приложении можете использовать вспомогательный метод `PHPabort()` : `abort(404);` Вспомогательный метод `PHPabort()` немедленно создаёт исключение, которое будет отрисовано обработчиком исключений. Или вы можете предоставить такой отклик:
```
abort(403, 'Unauthorized action.');
```
### Свои страницы HTTP-ошибок
В Laravel можно легко возвращать свои собственные страницы для различных кодов HTTP-ошибок. Например, для выдачи собственной страницы для ошибки 404 создайте файл resources/views/errors/404.blade.php. Этот файл будет использован для всех ошибок 404, генерируемых вашим приложением. Представления в этой папке должны иметь имена, соответствующие кодам ошибок. Экземпляр HttpException, созданный функцией `PHPabort()` , будет передан в представление как переменная `PHP$exception` .
## Журналы
Laravel обеспечивает простой уровень абстракции над мощной библиотекой Monolog. По умолчанию Laravel настроен на создание файла журнала в storage/logs. Вы можете записывать информацию в журнал при помощи фасада Log:
`<?php` namespace App\Http\Controllers; use Illuminate\Support\Facades\Log; //для версии 5.2 и ранее: //use Log; use App\User; use App\Http\Controllers\Controller; class UserController extends Controller { /** * Показать профиль данного пользователя. * * @param int $id * @return Response */ public function showProfile($id) { Log::info('Showing user profile for user: '.$id); return view('user.profile', ['user' => User::findOrFail($id)]); } }
Регистратор событий предоставляет восемь уровней журналирования, описанных в RFC 5424: debug, info, notice, warning, error, critical, alert и emergency.
```
Log::emergency($message);
```
Log::alert($message); Log::critical($message); Log::error($message); Log::warning($message); Log::notice($message); Log::info($message); Log::debug($message);
Также в методы журналирования может быть передан массив контекстных данных:
```
Log::info('User failed to login.', ['id' => $user->id]);
```
Обращение к низкоуровневому экземпляру Monolog
В Monolog доступно множество дополнительных обработчиков для журналов. При необходимости вы можете получить доступ к низкоуровневому экземпляру Monolog, используемому в Laravel:
```
$monolog = Log::getMonolog();
```
# События
События в Laravel представлены реализацией паттерна Observer, что позволяет вам подписываться на различные события, возникающие в вашем приложении, и прослушивать их. Как правило, классы событий находятся в папке app/Events, а классы обработчиков событий — в app/Listeners. Если у вас нет этих папок, не переживайте: они будут созданы при создании событий и слушателей с помощью Artisan-команд.
События — отличный способ для разделения различных аспектов вашего приложения, поскольку одно событие может иметь несколько слушателей, независящих друг от друга. Например, вы можете отправлять пользователю Slack-уведомление каждый раз, когда заказ доставлен. Вместо привязки кода обработки заказа к коду Slack-уведомления вы можете просто создать событие OrderShipped, которое сможет получить слушатель и преобразовать в Slack-уведомление.
## Регистрация событий и слушателей
Сервис-провайдер EventServiceProvider, включённый в ваше Laravel приложение, предоставляет удобное место для регистрации всех слушателей событий. Свойство listen содержит массив всех событий (ключей) и их слушателей (значения). Конечно, вы можете добавить столько событий в этот массив, сколько требуется вашему приложению. Например, давайте добавим событие OrderShipped:
`/**` * Слушатель события в вашем приложении. * * @var array */ protected $listen = [ 'App\Events\OrderShipped' => [ 'App\Listeners\SendShipmentNotification', ], ];
### Генерация классов событий и слушателей
Конечно, вручную создавать файлы для каждого события и слушателя затруднительно. Вместо этого добавьте слушателей и события в ваш EventServiceProvider и используйте команду `shevent:generate` . Эта команда сгенерирует все события и слушателей, которые перечислены в вашем EventServiceProvider. Конечно, уже существующие события и слушатели останутся нетронутыми: > shphp artisan event:generate
### Регистрация событий вручную
Как правило, события должны регистрироваться через массив $listen в EventServiceProvider.
Однако вы также можете регистрировать события вручную в диспетчере событий, используя либо фасад Event, либо реализацию контракта Illuminate\Contracts\Events\Dispatcher:
`/**` * Регистрация своих событий в приложении. * * @param \Illuminate\Contracts\Events\Dispatcher $events * @return void */ public function boot(DispatcherContract $events) { parent::boot($events); $events->listen('event.name', function ($foo, $bar) { // }); }
Вы даже можете регистрировать слушателей, используя символ * как маску, что позволит вам поймать несколько событий для одного слушателя. Такой метод вернёт весь массив данных событий одним параметром:
```
Event::listen('event.*', function (array $data) {
    //
});
```
## Определение событий
Класс события — это просто контейнер данных, содержащий информацию, которая относится к событию.
Например, предположим, что наше сгенерированное событие OrderShipped принимает объект Eloquent ORM:
`<?php` namespace App\Events; use App\Order; use Illuminate\Queue\SerializesModels; class OrderShipped { use SerializesModels; public $order; /** * Создание нового экземпляра события. * * @param Order $order * @return void */ public function __construct(Order $order) { $this->order = $order; } } Как видите, этот класс события не содержит никакой логики. Это просто контейнер для объекта Order. Типаж SerializesModels, используемый событием, корректно сериализирует любые Eloquent модели, если объект события будет сериализирован php-функцией `PHPserialize()` .
добавлено в 5.2 () 5.1 () 5.0 ()
Например, предположим, что наше сгенерированное событие PodcastWasPurchased принимает объект Eloquent ORM:
`<?php` namespace App\Events; use App\Podcast; use App\Events\Event; use Illuminate\Queue\SerializesModels; class PodcastWasPurchased extends Event { use SerializesModels; public $podcast; /** * Создание нового экземпляра события. * * @param Podcast $podcast * @return void */ public function __construct(Podcast $podcast) { $this->podcast = $podcast; } } Как видите, этот класс события не содержит никакой логики. Это просто контейнер для объекта Podcast. Типаж SerializesModels, используемый событием, корректно сериализирует любые Eloquent модели, если объект события будет сериализирован php-функцией `PHPserialize()` .
## Определение слушателей
Теперь давайте взглянем на слушателя для нашего примера события. Слушатели событий принимают экземпляр события в свой метод `PHPhandle()` . Команда `shevent:generate` автоматически импортирует класс события и указывает тип события в метод `PHPhandle()` . В методе `PHPhandle()` вы можете выполнять любые действия, необходимые для ответа на событие.
добавлено в 5.3 ()
`<?php` namespace App\Listeners; use App\Events\OrderShipped; class SendShipmentNotification { /** * Создание слушателя события. * * @return void */ public function __construct() { // } /** * Обработка события. * * @param OrderShipped $event * @return void */ public function handle(OrderShipped $event) { // Доступ к order, используя $event->order... } }
Ваши слушатели события могут также указывать тип любых зависимостей, которые необходимы для их конструкторов. Все слушатели события доступны через сервис-контейнер Laravel, поэтому зависимости будут инъецированы автоматически.
добавлено в 5.2 () 5.1 () 5.0 ()
`<?php` namespace App\Listeners; use App\Events\PodcastWasPurchased; // для версии 5.1 и ранее: // use Illuminate\Queue\InteractsWithQueue; // use Illuminate\Contracts\Queue\ShouldQueue; class EmailPurchaseConfirmation { /** * Создание слушателя события. * * @return void */ public function __construct() { // } /** * Обработка события. * * @param PodcastWasPurchased $event * @return void */ public function handle(PodcastWasPurchased $event) { // Доступ к podcast, используя $event->podcast... } }
Ваши слушатели события могут также указывать тип любых зависимостей, которые необходимы для их конструкторов. Все слушатели события доступны через сервис-контейнер Laravel, поэтому зависимости будут инъецированы автоматически:
```
use Illuminate\Contracts\Mail\Mailer;
```
public function __construct(Mailer $mailer) { $this->mailer = $mailer; }
Остановка распространения события
Иногда вам необходимо остановить распространение события для других слушателей. Вы можете сделать это, вернув false из метода `handle()` вашего слушателя.
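Небольшой набросок для иллюстрации (событие OrderShipped взято из примеров выше, метод isCancelled() — условный):

```
public function handle(OrderShipped $event)
{
    // Вернув false, мы прекращаем распространение события —
    // остальные слушатели вызваны не будут.
    if ($event->order->isCancelled()) {
        return false;
    }
}
```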
## Слушатели события в очереди
Поместить слушателя в очередь может быть полезно, если ваш слушатель будет выполнять медленную задачу, например, отправку e-mail или выполнение HTTP-запроса. Прежде чем помещать слушателей в очередь, не забудьте настроить вашу очередь и запустить слушателя очереди на вашем сервере или в локальной среде разработки.
Чтобы указать, что слушатель должен быть поставлен в очередь, добавьте интерфейс ShouldQueue в класс слушателя. В слушателях, сгенерированных Artisan-командой `shevent:generate` , уже импортирован этот интерфейс в текущее пространство имен. Так что вы можете сразу использовать его: `<?php` namespace App\Listeners; use App\Events\OrderShipped; // для версии 5.1 и ранее: // use Illuminate\Queue\InteractsWithQueue; use Illuminate\Contracts\Queue\ShouldQueue; class SendShipmentNotification implements ShouldQueue { // }
Вот и всё! Теперь, когда этого слушателя вызывают для события, он будет автоматически поставлен в очередь диспетчером события, использующим систему очереди Laravel. Если никакие исключения не будут выброшены, когда слушатель выполняется из очереди, то задача в очереди будет автоматически удалена после завершения её выполнения.
### Ручной доступ к очереди
Если вам необходимо вручную получить доступ к базовым методам очереди слушателя `PHPdelete()` и `PHPrelease()` , вы можете сделать это с помощью типажа Illuminate\Queue\InteractsWithQueue. Этот типаж по умолчанию импортирован в сгенерированные слушатели и предоставляет доступ к этим методам: `<?php` namespace App\Listeners; use App\Events\OrderShipped; use Illuminate\Queue\InteractsWithQueue; use Illuminate\Contracts\Queue\ShouldQueue; class SendShipmentNotification implements ShouldQueue { use InteractsWithQueue; public function handle(OrderShipped $event) { if (true) { $this->release(30); } } }
## Запуск событий
добавлено в 5.3 ()
Чтобы запустить событие, вы можете передать экземпляр события во вспомогательный метод `PHPevent()` . Этот метод распространит событие для всех его зарегистрированных слушателей. Поскольку метод `PHPevent()` доступен глобально, вы можете вызвать его из любого места вашего приложения: `<?php` namespace App\Http\Controllers; use App\Order; use App\Events\OrderShipped; use App\Http\Controllers\Controller; class OrderController extends Controller { /** * Доставка данного заказа. * * @param int $orderId * @return Response */ public function ship($orderId) { $order = Order::findOrFail($orderId); // Логика доставки заказа... event(new OrderShipped($order)); } }
При тестировании может быть полезно проверить запуск некоторых событий без реального вызова их слушателей. В этом вам помогут встроенные вспомогательные функции для тестирования Laravel.
добавлено в 5.2 () 5.1 () 5.0 ()
Чтобы запустить событие, вы можете использовать фасад Event, передав экземпляр события методу `PHPfire()` . Метод `PHPfire()` распространит событие для всех его зарегистрированных слушателей: `<?php` namespace App\Http\Controllers; use Event; use App\Podcast; use App\Events\PodcastWasPurchased; use App\Http\Controllers\Controller; class UserController extends Controller { /** * Показать профиль заданного пользователя. * * @param int $userId * @param int $podcastId * @return Response */ public function purchasePodcast($userId, $podcastId) { $podcast = Podcast::findOrFail($podcastId); // Логика покупки podcast... Event::fire(new PodcastWasPurchased($podcast)); } } Также вы можете использовать глобальную вспомогательную функцию `PHPevent()` для запуска события:
```
event(new PodcastWasPurchased($podcast));
```
## Широковещательные события
Во многих современных веб-приложениях используются веб-сокеты, чтобы реализовать быстро обновляющиеся пользовательские интерфейсы реального времени. Когда некоторые данные обновлены на сервере, сообщение обычно отправляется по websocket соединению, которое будет обработано клиентом.
Чтобы помочь вам в создании этих типов приложений, Laravel упрощает «передачу» ваших событий по websocket соединению. Широковещательные события Laravel позволяют вам совместно использовать те же имена событий между серверным кодом и клиентской платформой JavaScript.
Все параметры широковещательных событий находятся в конфигурационном файле config/broadcasting.php. Laravel поддерживает несколько широковещательных драйверов из коробки: Pusher, Redis и драйвер log для локальной разработки и отладки. Пример конфигурации включен для каждого из этих драйверов.
Требования к широковещательным событиям
Следующие зависимости будут необходимы:
* Pusher: pusher/pusher-php-server ~2.0
* Redis: predis/predis ~1.0
Перед использованием широковещательных событий также вам будет необходимо сконфигурировать и запустить слушателя очереди. Все широковещательные события используют очереди, чтобы не уменьшать время отклика вашего приложения.
### Помечаем широковещательные события
Чтобы проинформировать Laravel о том, что заданное событие должно быть широковещательным, реализуйте интерфейс Illuminate\Contracts\Broadcasting\ShouldBroadcast в классе события. Интерфейс ShouldBroadcast требует реализации одного метода: `PHPbroadcastOn()` . Метод `PHPbroadcastOn()` должен возвращать массив «channel» имён, для которого событие должно быть широковещательным: `<?php` namespace App\Events; use App\User; use App\Events\Event; use Illuminate\Queue\SerializesModels; use Illuminate\Contracts\Broadcasting\ShouldBroadcast; class ServerCreated extends Event implements ShouldBroadcast { use SerializesModels; public $user; /** * Создание нового экземпляра события. * * @return void */ public function __construct(User $user) { $this->user = $user; } /** * Получение каналов, для которых событие должно быть широковещательным. * * @return array */ public function broadcastOn() { return ['user.'.$this->user->id]; } }
Теперь вам нужно только запустить событие, как вы это обычно делали. Как только событие было запущено, обработчик очереди автоматически передаст широковещательное событие по вашему указанному широковещательному драйверу.
Переписываем имена широковещательных событий
По умолчанию широковещательное имя события будет полностью определенным именем класса события. Используя класс в качестве примера выше, широковещательное событие было бы App\Events\ServerCreated. Вы можете определить имя широковещательного события, как вам будет удобнее, используя метод `PHPbroadcastAs()` : `/**` * Получить имя широковещательного события. * * @return string */ public function broadcastAs() { return 'app.server-created'; }
добавлено в 5.2 () 5.1 () 5.0 ()
### Широковещательные данные
Когда событие широковещательное, все его public свойства автоматически сериализированы и передаются вместе с событием, что позволяет вам получить доступ к любым из его общедоступных данных из JavaScript приложения. Например, если у вашего события есть единственное общедоступное свойство $user, которое содержит Eloquent модель, широковещательные данные будут выглядеть так:
> { "user": { "id": 1, "name": "<NAME>" ... } }
Однако, если вы хотите иметь еще более тщательный контроль над своими широковещательными данными, вы можете добавить к своему событию метод `PHPbroadcastWith()` . Этот метод должен возвращать массив данных, которые вы хотите передать с событием: `/**` * Получить данные для передачи. * * @return array */ public function broadcastWith() { return ['user' => $this->user->id]; }
добавлено в 5.2 ()
### Настройка широковещательных событий
По умолчанию имя широковещательного события — это полное имя класса этого события. Например, если имя класса App\Events\ServerCreated, то событие будет App\Events\ServerCreated. Имя можно изменить, определив метод `PHPbroadcastAs()` в классе события: `/**` * Получение имени широковещательного события. * * @return string */ public function broadcastAs() { return 'app.server-created'; } По умолчанию каждое широковещательное событие помещается в очередь по умолчанию для подключения по умолчанию в вашем файле настроек queue.php. Можно изменить очередь для вещания событий, добавив метод `PHPonQueue()` в класс события. Этот метод должен возвращать имя нужной очереди: `/**` * Задание имени очереди для размещения событий. * * @return string */ public function onQueue() { return 'your-queue-name'; }
добавлено в 5.2 () 5.1 () 5.0 ()
### Использование широковещательных событий
Удобнее всего использовать широковещательную передачу событий с помощью драйвера Pusher и JavaScript SDK от Pusher. Например, давайте используем событие App\Events\ServerCreated из наших предыдущих примеров:
```
this.pusher = new Pusher('pusher-key');
```
this.pusherChannel = this.pusher.subscribe('user.' + USER_ID); this.pusherChannel.bind('App\\Events\\ServerCreated', function(message) { console.log(message.user); });
Если вы используете Redis, вам нужно будет написать собственного потребителя Redis (pub/sub), чтобы получать сообщения и передавать их по WebSocket с помощью выбранной вами технологии. Например, вы можете воспользоваться популярной библиотекой Socket.io, написанной на Node.

Используя библиотеки Node socket.io и ioredis, вы можете быстро написать broadcaster событий, чтобы публиковать все события, которые широковещательно передаются вашим приложением Laravel:
```
var app = require('http').createServer(handler);
```
var io = require('socket.io')(app); var Redis = require('ioredis'); var redis = new Redis(); app.listen(6001, function() { console.log('Server is running!'); }); function handler(req, res) { res.writeHead(200); res.end(''); } io.on('connection', function(socket) { // }); redis.psubscribe('*', function(err, count) { // }); redis.on('pmessage', function(subscribed, channel, message) { message = JSON.parse(message); io.emit(channel + ':' + message.event, message.data); });
## Подписчики событий
### Написание подписчиков событий
Подписчики событий — это классы, которые могут подписаться на множество событий из самого класса, что позволяет вам определить несколько обработчиков событий в одном классе. Подписчики должны определить метод `PHPsubscribe()` , в который будет передан экземпляр диспетчера события. Вы можете вызвать метод `PHPlisten()` на данном диспетчере для регистрации слушателей события: `<?php` namespace App\Listeners; class UserEventSubscriber { /** * Обработка события входа пользователя в систему. */ public function onUserLogin($event) {} /** * Обработка события выхода пользователя из системы. */ public function onUserLogout($event) {} /** * Регистрация слушателей для подписки. * * @param Illuminate\Events\Dispatcher $events */ public function subscribe($events) { $events->listen( 'Illuminate\Auth\Events\Login', 'App\Listeners\UserEventSubscriber@onUserLogin' ); $events->listen( 'Illuminate\Auth\Events\Logout', 'App\Listeners\UserEventSubscriber@onUserLogout' ); } }
### Регистрация подписчика события
После написания подписчика, вы можете зарегистрировать его в диспетчере события. Вы можете зарегистрировать подписчиков, используя свойство $subscribe в EventServiceProvider. Например, давайте добавим UserEventSubscriber.
`<?php` namespace App\Providers; //для версии 5.2 и ранее: //use Illuminate\Contracts\Events\Dispatcher as DispatcherContract; use Illuminate\Foundation\Support\Providers\EventServiceProvider as ServiceProvider; class EventServiceProvider extends ServiceProvider { /** * Привязки слушателя события для приложения. * * @var array */ protected $listen = [ // ]; /** * Классы подписчиков для регистрации. * * @var array */ protected $subscribe = [ 'App\Listeners\UserEventSubscriber', ]; }
## События фреймворка
Laravel предоставляет множество «базовых» событий для действий, выполняемых фреймворком. Вы можете подписаться на них таким же образом, как вы подписываетесь на свои собственные события:
| Событие | Параметр(ы) |
| --- | --- |
| artisan.start | $application |
| auth.attempt | $credentials, $remember, $login |
| auth.login | $user, $remember |
| auth.logout | $user |
| cache.missed | $key |
| cache.hit | $key, $value |
| cache.write | $key, $value, $minutes |
| cache.delete | $key |
| connection.{name}.beganTransaction | $connection |
| connection.{name}.committed | $connection |
| connection.{name}.rollingBack | $connection |
| illuminate.query | $query, $bindings, $time, $connectionName |
| illuminate.queue.after | $connection, $job, $data |
| illuminate.queue.failed | $connection, $job, $data |
| illuminate.queue.stopping | null |
| mailer.sending | $message |
| router.matched | $route, $request |
| {view name} | $view |
# Файловая система
Laravel предоставляет мощную абстракцию для работы с файловой системой благодаря восхитительному PHP-пакету Flysystem от Франка де Жонге. Laravel Flysystem содержит простые в использовании драйвера для работы с локальными файловыми системами, Amazon S3 и Rackspace Cloud Storage. Более того, можно очень просто переключаться между этими вариантами хранения файлов, поскольку у всех одинаковый API.
Настройки файловой системы находятся в файле config/filesystems.php. В нём вы можете настроить все свои «disks». Каждый диск представляет определенный драйвер и место хранения. В конфигурационном файле имеются примеры для каждого поддерживаемого драйвера. Поэтому вы можете просто немного изменить конфигурацию под ваши нужды!
Конечно, вы можете сконфигурировать столько дисков, сколько вам будет угодно, и даже можете иметь несколько дисков, которые используют один драйвер.
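Для иллюстрации — набросок фрагмента config/filesystems.php с двумя дисками на одном драйвере local (имя второго диска и пути условные):

```
'disks' => [

    'local' => [
        'driver' => 'local',
        'root' => storage_path('app'),
    ],

    // второй диск на том же драйвере local
    'backups' => [
        'driver' => 'local',
        'root' => storage_path('backups'),
    ],

],
```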
### Общий диск
Диск public предназначен для общего доступа к файлам. По умолчанию диск public использует драйвер local и хранит файлы в storage/app/public. Чтобы сделать их доступными через веб, вам надо создать символьную ссылку из public/storage на storage/app/public. При этом ваши общедоступные файлы будут храниться в одной папке, которую легко можно использовать в разных развёртываниях при использовании систем обновления на лету, таких как Envoyer.
Для создания символьной ссылки используйте Artisan-команду `shstorage:link` : > shphp artisan storage:link
Само собой, когда файл сохранён и создана символьная ссылка, вы можете создать URL к файлу с помощью вспомогательной функции `PHPasset()` :
```
echo asset('storage/file.txt');
```
### Драйвер local
При использовании драйвера local все файловые операции выполняются относительно каталога root, определенного в вашем конфигурационном файле. По умолчанию это каталог storage/app. Поэтому следующий метод сохранит файл в storage/app/file.txt:
```
Storage::disk('local')->put('file.txt', 'Contents');
```
### Требования к драйверам
Перед использованием S3 или Rackspace вы должны установить соответствующие пакеты при помощи Composer:

* Amazon S3: league/flysystem-aws-s3-v3 ~1.0
* Rackspace: league/flysystem-rackspace ~1.0
Интеграция Flysystem отлично работает с FTP, но в стандартном файле настроек filesystems.php нет примера настройки FTP. Если вам надо настроить файловую систему FTP, вы можете использовать в качестве примера приведенные ниже настройки:
`'ftp' => [` 'driver' => 'ftp', 'host' => 'ftp.example.com', 'username' => 'ваш-логин', 'password' => 'ваш-пароль', // Необязательные настройки FTP... // 'port' => 21, // 'root' => '', // 'passive' => true, // 'ssl' => true, // 'timeout' => 30, ],
Интеграция Flysystem отлично работает с Rackspace, но в стандартном файле настроек filesystems.php нет примера настройки Rackspace. Если вам надо настроить файловую систему Rackspace, вы можете использовать в качестве примера приведенные ниже настройки:
`'rackspace' => [` 'driver' => 'rackspace', 'username' => 'ваш-логин', 'key' => 'ваш-ключ', 'container' => 'ваш-контейнер', 'endpoint' => 'https://identity.api.rackspacecloud.com/v2.0/', 'region' => 'IAD', 'url_type' => 'publicURL', ],
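Once a disk like this has been declared in config/filesystems.php, it can be used through the `Storage` facade just like the built-in examples. A small sketch (the file name and contents are arbitrary):

```
Storage::disk('ftp')->put('reports/daily.txt', 'Contents');
```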
## Obtaining disk instances

The `Storage` facade may be used to interact with any of your configured disks. For example, you may use the `put()` method on the facade to store an avatar on the default disk. If you call methods on the `Storage` facade without first calling the `disk()` method, the call will automatically be passed to the default disk:

Added in 5.3:

```
Storage::put('avatars/1', $fileContents);
```

For 5.2 / 5.1 / 5.0:

```
<?php

namespace App\Http\Controllers;

use Storage;
use Illuminate\Http\Request;
use App\Http\Controllers\Controller;

class UserController extends Controller
{
    /**
     * Update the avatar for the given user.
     *
     * @param  Request  $request
     * @param  int  $id
     * @return Response
     */
    public function updateAvatar(Request $request, $id)
    {
        $user = User::findOrFail($id);

        Storage::put(
            'avatars/'.$user->id,
            file_get_contents($request->file('avatar')->getRealPath())
        );
    }
}
```

When using multiple disks, you may access a particular disk using the `disk()` method on the `Storage` facade:
```
Storage::disk('s3')->put('avatars/1', $fileContents);
```
## Retrieving files

The `get()` method may be used to retrieve the contents of a file. The raw string contents of the file will be returned. Remember, all file paths should be specified relative to the "root" configured for the disk:
```
$contents = Storage::get('file.jpg');
```
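If you are not sure the file is actually there, the `exists()` method can be used to check for it first; a small sketch:

```
if (Storage::exists('file.jpg')) {
    $contents = Storage::get('file.jpg');
}
```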
### File URLs

When using the local or s3 drivers, you may use the `url()` method to get the URL for a given file. When using the local driver, this will typically just prepend /storage to the given path and return a relative URL. When using the s3 driver, the fully qualified remote URL will be returned:

```
$url = Storage::url('file1.jpg');
```

When using the local driver, all files that should be publicly accessible must be placed in the storage/app/public directory. Furthermore, you should create a symbolic link at public/storage which points to the storage/app/public directory.
### File metadata

In addition to reading and writing files, Laravel can also provide information about the files themselves. For example, the `size()` method may be used to get the size of a file in bytes:

```
$size = Storage::size('file1.jpg');
```

The `lastModified()` method returns the UNIX timestamp of the last time the file was modified:
```
$time = Storage::lastModified('file1.jpg');
```
### Storing files

The `put()` method may be used to store a file on a disk. You may also pass a PHP resource to the `put()` method, which will use Flysystem's underlying stream support. Using streams is strongly recommended when working with large files:

```
Storage::put('file.jpg', $contents);

Storage::put('file.jpg', $resource);
```
Added in 5.3:

Automatic streaming

If you would like Laravel to automatically manage streaming a given file to your storage location, you may use the `putFile()` or `putFileAs()` methods. These methods accept either an Illuminate\Http\File or Illuminate\Http\UploadedFile instance and automatically stream the file to your desired location:
```
use Illuminate\Http\File;

// Automatically generate a unique ID for the file name...
Storage::putFile('photos', new File('/path/to/photo'));

// Manually specify a file name...
Storage::putFileAs('photos', new File('/path/to/photo'), 'photo.jpg');
```

There are a few important things to note about the `putFile()` method. Note that we only specified a directory name, not a file name. By default, the `putFile()` method will generate a UUID to serve as the file name. The method returns the path to the file, so you can store the path, including the generated file name, in your database.

The `putFile()` and `putFileAs()` methods also accept an argument specifying the "visibility" of the stored file. This is particularly useful when storing the file on a cloud disk such as S3 and you need the file to be publicly accessible:
```
Storage::putFile('photos', new File('/path/to/photo'), 'public');
```
Prepending / appending to files

The `prepend()` and `append()` methods allow you to write content to the beginning or end of a file:
```
Storage::prepend('file.log', 'Prepended Text');
```
Storage::append('file.log', 'Appended Text');
Copying & moving files

The `copy()` method may be used to copy an existing file to a new location on the disk, while the `move()` method may be used to rename or move an existing file to a new location:
```
Storage::copy('old/file1.jpg', 'new/file1.jpg');
```
Storage::move('old/file1.jpg', 'new/file1.jpg');
Added in 5.3:

### File uploads

In web applications, one of the most common use-cases for storing files is storing user uploaded files such as avatars, photos, and documents. Laravel makes it very easy to store uploaded files using the `store()` method on an uploaded file instance. Simply call the `store()` method with the path at which you wish to store the uploaded file:

```
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Http\Controllers\Controller;

class UserAvatarController extends Controller
{
    /**
     * Update the avatar for the user.
     *
     * @param  Request  $request
     * @return Response
     */
    public function update(Request $request)
    {
        $path = $request->file('avatar')->store('avatars');

        return $path;
    }
}
```

There are a few important things to note about this example. Note that we only specified a directory name, not a file name. By default, the `store()` method will generate a UUID to serve as the file name. The method returns the path to the file, so you can store the full path, including the generated file name, in your database.

You may also call the `putFile()` method on the `Storage` facade to perform the same file manipulation as the example above:
```
$path = Storage::putFile('avatars', $request->file('avatar'));
```
If you do not want a file name to be assigned automatically, you may use the `storeAs()` method, which receives the path, the file name, and the (optional) disk as its arguments:
```
$path = $request->file('avatar')->storeAs(
    'avatars', $request->user()->id
);
```

Of course, you may also use the `putFileAs()` method on the `Storage` facade, which will perform the same file manipulation as the example above:
```
$path = Storage::putFileAs(
    'avatars', $request->file('avatar'), $request->user()->id
);
```

By default, this method uses your default disk. If you would like to specify another disk, pass the disk name as the second argument to the `store()` method:
```
$path = $request->file('avatar')->store(
    'avatars/'.$request->user()->id, 's3'
);
```
### File visibility

In Laravel's Flysystem integration, "visibility" is an abstraction of file permissions across multiple platforms. Files may either be declared public or private. When a file is declared public, you are indicating that the file should generally be accessible to others. For example, when using the S3 driver, you may retrieve URLs for public files.

You can set the visibility when writing the file via the `put()` method:

```
Storage::put('file.jpg', $contents, 'public');
```

If the file has already been stored, its visibility can be retrieved and set via the `getVisibility()` and `setVisibility()` methods:
```
$visibility = Storage::getVisibility('file.jpg');
```
```
Storage::setVisibility('file.jpg', 'public');
```
## Deleting files

The `delete()` method accepts a single file name or an array of files to remove from the disk:

```
Storage::delete('file.jpg');

Storage::delete(['file1.jpg', 'file2.jpg']);
```
## Directories

Get all files within a directory

The `files()` method returns an array of all of the files in a given directory. If you would like to retrieve a list of all files within a directory including all sub-directories, use the `allFiles()` method:

```
$files = Storage::files($directory);

$files = Storage::allFiles($directory);
```

The `directories()` method returns an array of all the directories within a given directory. Additionally, you may use the `allDirectories()` method to get a list of all directories within a given directory and all of its sub-directories:
```
$directories = Storage::directories($directory);
```
```
// Recursive...
$directories = Storage::allDirectories($directory);
```

The `makeDirectory()` method will create the given directory, including any needed sub-directories:
```
Storage::makeDirectory($directory);
```
Finally, the `deleteDirectory()` method may be used to remove a directory and all of its files from the disk:
```
Storage::deleteDirectory($directory);
```
## Custom filesystems

Laravel's Flysystem integration provides several "drivers" out of the box; however, Flysystem is not limited to these and has adapters for many other storage systems. You can create a custom driver if you want to use one of these additional adapters in your Laravel application.

In order to set up the custom filesystem you will need to create a service provider such as DropboxServiceProvider. In the provider's `boot()` method, you may use the `Storage` facade's `extend()` method to define the custom driver:

```
<?php

namespace App\Providers;

use Storage;
use League\Flysystem\Filesystem;
use Dropbox\Client as DropboxClient;
use Illuminate\Support\ServiceProvider;
use League\Flysystem\Dropbox\DropboxAdapter;

class DropboxServiceProvider extends ServiceProvider
{
    /**
     * Perform post-registration booting of services.
     *
     * @return void
     */
    public function boot()
    {
        Storage::extend('dropbox', function ($app, $config) {
            $client = new DropboxClient(
                $config['accessToken'], $config['clientIdentifier']
            );

            return new Filesystem(new DropboxAdapter($client));
        });
    }

    /**
     * Register bindings in the container.
     *
     * @return void
     */
    public function register()
    {
        //
    }
}
```

The first argument of the `extend()` method is the name of the driver and the second is a closure that receives the `$app` and `$config` variables. The closure must return an instance of League\Flysystem\Filesystem. The `$config` variable contains the values defined in config/filesystems.php for the specified disk.

Once you have created the service provider to register the extension, you may use the dropbox driver in your config/filesystems.php configuration file.
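For illustration only, the corresponding disk entry in config/filesystems.php might look like the sketch below. The option names simply need to match whatever keys your closure reads from `$config`, and the environment variable names are placeholders:

```
// config/filesystems.php (sketch)
'disks' => [

    // ...

    'dropbox' => [
        'driver'           => 'dropbox',
        'accessToken'      => env('DROPBOX_ACCESS_TOKEN'),
        'clientIdentifier' => env('DROPBOX_CLIENT_ID'),
    ],

],
```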
# Hashing

The Laravel `Hash` facade provides secure Bcrypt hashing for storing user passwords.

Bcrypt is a great choice for hashing passwords because its "work factor" is adjustable, which means that the time it takes to generate a hash can be increased as hardware power increases.
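To make that adjustable cost concrete, the work factor can be passed as the `rounds` option when hashing. A minimal sketch (12 is just an example value):

```
use Illuminate\Support\Facades\Hash;

// Raise the Bcrypt work factor above the default of 10.
$hashed = Hash::make('secret', ['rounds' => 12]);
```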
You may hash a password by calling the `make()` method on the `Hash` facade:
Added in 5.3:

```
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Hash;
use App\Http\Controllers\Controller;

class UpdatePasswordController extends Controller
{
    /**
     * Update the password for the user.
     *
     * @param  Request  $request
     * @return Response
     */
    public function update(Request $request)
    {
        // Validate the new password length...

        $request->user()->fill([
            'password' => Hash::make($request->newPassword)
        ])->save();
    }
}
```

An equivalent controller example:

```
<?php

namespace App\Http\Controllers;

use Hash;
use App\User;
use Illuminate\Http\Request;
use App\Http\Controllers\Controller;

class UserController extends Controller
{
    /**
     * Update the password for the given user.
     *
     * @param  Request  $request
     * @param  int  $id
     * @return Response
     */
    public function updatePassword(Request $request, $id)
    {
        $user = User::findOrFail($id);

        // Validate the new password length...

        $user->fill([
            'password' => Hash::make($request->newPassword)
        ])->save();
    }
}
```

You may also use the global `bcrypt()` helper function:
```
$password = bcrypt('секрет');
```
```
$password = Hash::make('секрет');
```
Added in 5.3 / 5.2 / 5.1:

The `check()` method allows you to verify that a given plain-text string corresponds to a given hash. However, if you are using the `LoginController` included with Laravel (the `AuthController` in 5.2 and earlier), you will probably not need to use this method directly, as that controller automatically calls it:
```
if (Hash::check('секрет', $hashedPassword)) {
    // The passwords match...
}
```
Checking if a password needs to be rehashed

The `needsRehash()` function allows you to determine whether the work factor used by the hasher has changed since the password was hashed:
```
if (Hash::needsRehash($hashed)) {
    $hashed = Hash::make('секрет');
}
```
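Putting `check()` and `needsRehash()` together, a typical login-time flow might verify the password and transparently upgrade the stored hash when the work factor has changed. This is only a sketch; the `$user` object and its `password` attribute are assumptions for the example:

```
use Illuminate\Support\Facades\Hash;

function verifyAndRehash($user, $plainPassword)
{
    if (! Hash::check($plainPassword, $user->password)) {
        return false; // Wrong password.
    }

    // Re-hash if the configured work factor changed since the hash was made.
    if (Hash::needsRehash($user->password)) {
        $user->password = Hash::make($plainPassword);
        $user->save();
    }

    return true;
}
```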
# Helper functions

Laravel includes a variety of global "helper" PHP functions. Many of these functions are used by the framework itself; however, you are free to use them in your own applications if you find them convenient.

## Arrays
### array_add
Добавить указанную пару ключ/значение в массив, если она там ещё не существует.
```
$array = array_add(['name' => 'Desk'], 'price', 100);
```
// ['name' => 'Desk', 'price' => 100]
### array_collapse
Функция `PHParray_collapse()` (Laravel 5.1+) собирает массив массивов в единый массив:
```
$array = array_collapse([[1, 2, 3], [4, 5, 6], [7, 8, 9]]);
```
// [1, 2, 3, 4, 5, 6, 7, 8, 9]
### array_divide
Вернуть два массива — один с ключами, другой со значениями оригинального массива.
```
list($keys, $values) = array_divide(['name' => 'Desk']);
```
// $keys: ['name'] // $values: ['Desk']
### array_dot
Сделать многоуровневый массив плоским, объединяя вложенные массивы с помощью точки в именах.
```
$array = array_dot(['foo' => ['bar' => 'baz']]);
```
// ['foo.bar' => 'baz'];
### array_except
Удалить указанную пару ключ/значение из массива.
$array = array_except($array, ['price']); // ['name' => 'Desk']
### array_first
Вернуть первый элемент массива, удовлетворяющий требуемому условию.
$value = array_first($array, function ($value, $key) { return $value >= 150; }); // 200
Третьим параметром можно передать значение по умолчанию на случай, если ни одно значение не пройдёт условие:
```
$value = array_first($array, $callback, $default);
```
### array_flatten
Сделать многоуровневый массив плоским.
```
$array = ['name' => 'Joe', 'languages' => ['PHP', 'Ruby']];
```
$array = array_flatten($array); // ['Joe', 'PHP', 'Ruby'];
### array_forget
Удалить указанную пару ключ/значение из многоуровневого массива, используя синтаксис имени с точкой.
array_forget($array, 'products.desk'); // ['products' => []]
добавлено в 5.0 ()
### array_get
Вернуть значение из многоуровневого массива, используя синтаксис имени с точкой.
$value = array_get($array, 'products.desk'); // ['price' => 100]
Также третьим аргументом можно передать значение по умолчанию на случай, если указанный ключ не будет найден:
```
$value = array_get($array, 'names.john', 'default');
```
Если вам нужно что-то похожее на `PHParray_get()` , но только для объектов, используйте `PHPobject_get()` .
добавлено в 5.3 () 5.2 () 5.1 ()
### array_has
Функция `PHParray_has()` проверяет существование данного элемента или элементов в массиве с помощью «точечной» записи:
```
$array = ['product' => ['name' => 'desk', 'price' => 100]];
```
$hasItem = array_has($array, 'product.name'); // true $hasItems = array_has($array, ['product.price', 'product.discount']); // false
добавлено в 5.0 ()
### array_only
Вернуть из массива только указанные пары ключ/значения.
```
$array = ['name' => 'Desk', 'price' => 100, 'orders' => 10];
```
$array = array_only($array, ['name', 'price']); // ['name' => 'Desk', 'price' => 100]
### array_pluck
Pluck the values for a given key from a multi-dimensional array.

```
$array = [
    ['developer' => ['id' => 1, 'name' => 'Taylor']],
    ['developer' => ['id' => 2, 'name' => 'Abigail']],
];

$array = array_pluck($array, 'developer.name');

// ['Taylor', 'Abigail']
```

You may also specify how you wish the resulting list to be keyed:
```
$array = array_pluck($array, 'developer.name', 'developer.id');
```
// [1 => 'Taylor', 2 => 'Abigail'];
добавлено в 5.2 ()
### array_pull
Извлечь значения из многоуровневого массива, соответствующие переданному ключу, и удалить их.
$name = array_pull($array, 'name'); // $name: Desk // $array: ['price' => 100]
### array_set
Установить значение в многоуровневом массиве, используя синтаксис имени с точкой.
array_set($array, 'products.desk.price', 200); // ['products' => ['desk' => ['price' => 200]]]
### array_sort
Sort the array by the results of the given closure.

```
$array = [
    ['name' => 'Desk'],
    ['name' => 'Chair'],
];

$array = array_values(array_sort($array, function ($value) {
    return $value['name'];
}));

/*
    [
        ['name' => 'Chair'],
        ['name' => 'Desk'],
    ]
*/
```
Added in 5.1:

### array_sort_recursive

The `array_sort_recursive()` function recursively sorts the array using the `sort()` function:

```
$array = [
    [
        'Roman',
        'Taylor',
        'Li',
    ],
    [
        'PHP',
        'Ruby',
        'JavaScript',
    ],
];

$array = array_sort_recursive($array);

/*
    [
        [
            'Li',
            'Roman',
            'Taylor',
        ],
        [
            'JavaScript',
            'PHP',
            'Ruby',
        ],
    ]
*/
```
### array_where
Фильтровать массив с помощью переданной функции-замыкания.
```
$array = [100, '200', 300, '400', 500];
```
$array = array_where($array, function ($value, $key) { return is_string($value); }); // [1 => 200, 3 => 400]
### head
Вернуть первый элемент массива.
$first = head($array); // 100
### last
Вернуть последний элемент массива.
$last = last($array); // 300
## Paths

### app_path

Get the fully qualified path to the app directory. You may also use the `app_path()` function to generate a fully qualified path to a file relative to the application directory:

```
$path = app_path();

$path = app_path('Http/Controllers/Controller.php');
```
### base_path
Получить полный путь к корневой папке приложения. Также вы можете использовать функцию `PHPbase_path()` для получения полного пути к указанному файлу относительно корня проекта: `$path = base_path();` $path = base_path('vendor/bin');
### config_path
Получить полный путь к папке config:
```
$path = config_path();
```
### elixir

The `elixir()` function gets the path to a versioned Elixir file:

```
elixir($file);
```
### public_path
Получить полный путь к папке public:
```
$path = public_path();
```
### storage_path
Получить полный путь к папке storage. Также вы можете использовать функцию `PHPstorage_path()` для получения полного пути к указанному файлу относительно каталога storage:
```
$path = storage_path();
```
$path = storage_path('app/file.txt');
добавлено в 5.0 ()
### get
Зарегистрировать новый маршрут GET.
```
get('/', function() { return 'Hello World'; });
```
### post
```
post('foo/bar', 'FooController@action');
```
### put
Зарегистрировать новый маршрут PUT.
```
put('foo/bar', 'FooController@action');
```
### patch
Зарегистрировать новый маршрут PATCH.
```
patch('foo/bar', 'FooController@action');
```
### delete
```
delete('foo/bar', 'FooController@action');
```
### resource
Зарегистрировать новый маршрут ресурса RESTful.
```
resource('foo', 'FooController');
```
## Strings

### camel_case

Convert the given string to camelCase.
```
$camel = camel_case('foo_bar');
```
// fooBar
### class_basename
Получить имя переданного класса без пространства имён.
```
$class = class_basename('Foo\Bar\Baz');
```
// Baz
### e
Выполнить над строкой htmlspecialchars (для 5.2 и ранее: htmlentities ).
```
echo e('<html>foo</html>');
```
// &lt;html&gt;foo&lt;/html&gt;

### ends_with

Determine if the given string ends with the given value.
```
$value = ends_with('This is my name', 'name');
```
### snake_case
Преобразовать строку в snake_case (стиль именования Си с подчёркиваниями вместо пробелов — прим. пер.).
```
$snake = snake_case('fooBar');
```
// foo_bar
### str_limit
Ограничить число символов в строке. Функция принимает строку первым аргументом, а вторым — максимальное число символов:
```
$value = str_limit('The PHP framework for web artisans.', 7);
```
// The PHP...
### starts_with
Определить, начинается ли строка с переданной подстроки.
```
$value = starts_with('This is my name', 'This');
```
### str_contains
Определить, содержит ли строка переданную подстроку.
```
$value = str_contains('This is my name', 'my');
```
### str_finish
Добавить одно вхождение подстроки в конец переданной строки.
```
$string = str_finish('this/string', '/');
```
// this/string/
### str_is
Определить, соответствует ли строка маске. Можно использовать звёздочки (*) как символы подстановки.
```
$value = str_is('foo*', 'foobar');
```
// true $value = str_is('baz*', 'foobar'); // false
### str_plural
Преобразовать слово-строку во множественное число (только для английского).
```
$plural = str_plural('car');
```
// cars $plural = str_plural('child'); // children
Вы можете указать число вторым аргументом функции для получения единственного или множественного числа строки:
```
$plural = str_plural('child', 2);
```
// children $plural = str_plural('child', 1); // child
### str_random
Создать последовательность случайных символов заданной длины. Эта функция использует PHP-функцию `PHPrandom_bytes()` :
```
$string = str_random(40);
```
### str_singular
Преобразовать слово-строку в единственное число (только для английского).
```
$singular = str_singular('cars');
```
// car
### str_slug
Сгенерировать подходящую для URL «заготовку» из переданной строки.
```
$title = str_slug('Laravel 5 Framework', '-');
```
// laravel-5-framework
### studly_case
Преобразовать строку в StudlyCase.
```
$value = studly_case('foo_bar');
```
// FooBar
добавлено в 5.3 ()
### trans
Перевести переданную языковую строку с помощью ваших языковых файлов:
```
echo trans('validation.required');
```
### trans_choice
Перевести переданную языковую строку с изменениями:
```
$value = trans_choice('foo.bar', $count);
```
## URLs

### action

Generate a URL for the given controller action. You do not need to pass the full namespace of the controller. Instead, pass the controller class name relative to the App\Http\Controllers namespace:
```
$url = action('HomeController@getIndex');
```
Если метод принимает параметры маршрута, вы можете передать их вторым аргументом:
```
$url = action('UserController@profile', ['id' => 1]);
```
### asset
Сгенерировать URL к ресурсу (изображению и пр.) на основе текущей схемы запроса (HTTP или HTTPS):
```
$url = asset('img/photo.jpg');
```
### secure_asset

Generate a URL for an asset (image, etc.) using HTTPS:

```
echo secure_asset('foo/bar.zip');
```
### route
Сгенерировать URL для заданного именованного маршрута.
```
$url = route('routeName');
```
Если маршрут принимает параметры, вы можете передать их вторым аргументом:
```
$url = route('routeName', ['id' => 1]);
```
### url

Generate a fully qualified URL to the given path.
```
echo url('user/profile');
```
echo url('user/profile', [1]);
Added in 5.2:

## Miscellaneous

### abort

Throw an HTTP exception which will be rendered by the exception handler:
`abort(401);`
You may also provide the exception's response text:
```
abort(401, 'Unauthorized.');
```
### abort_if
```
abort_if(! Auth::user()->isAdmin(), 403);
```
### abort_unless
```
abort_unless(Auth::user()->isAdmin(), 403);
```
### auth
Функция `PHPauth()` возвращает экземпляр аутентификатора. Вы можете использовать её вместо фасада Auth для удобства:
```
$user = auth()->user();
```
### back
Функция `PHPback()` создаёт отклик-переадресацию на предыдущую страницу: `return back();`
### bcrypt
Функция `PHPbcrypt()` хеширует переданное значение с помощью Bcrypt. Вы можете использовать её вместо фасада Hash:
```
$password = bcrypt('my-secret-password');
```
### collect
Функция `PHPcollect()` создаёт экземпляр коллекции из переданного массива:
```
$collection = collect(['taylor', 'abigail']);
```
### config

The `config()` function gets the value of a configuration variable. The configuration values may be accessed using "dot" syntax, which includes the name of the file and the option you wish to access. A default value may be specified and is returned if the configuration option does not exist:

```
$value = config('app.timezone', $default);
```

The `config()` helper may also be used to set configuration variables at runtime by passing an array of key/value pairs:
```
config(['app.debug' => true]);
```
### csrf_field
The `csrf_field()` function generates an HTML hidden input field containing the value of the CSRF token. For example, using Blade syntax:

```
{{ csrf_field() }}

// For 5.1 and earlier:
// {!! csrf_field() !!}
```
Added in 5.3:

### cache

Get a value from the cache. If the given key does not exist in the cache, an optional default value will be returned:

```
$value = cache('key', 'default');
```

You may add items to the cache by passing an array of key/value pairs to the function. You should also pass the number of minutes or the duration the cached values should be considered valid:
```
cache(['key' => 'value'], 5);
```
cache(['key' => 'value'], Carbon::now()->addSeconds(10));
### csrf_token
Получить текущее значение CSRF-последовательности.
```
$token = csrf_token();
```
### dd

Dump the given variables and end execution of the script.

```
dd($value);

dd($value1, $value2, $value3, ...);
```
добавлено в 5.2 ()
Если вы не хотите останавливать выполнение скрипта, используйте функцию `PHPdump()` : `dump($value);`
### dispatch
Поместить новую задачу в очередь задач Laravel:
```
dispatch(new App\Jobs\SendEmails);
```
### env
Получить значение переменной среды или вернуть значение по умолчанию.
```
$env = env('APP_ENV');
```
// Возврат значения по умолчанию, если переменная не существует... $env = env('APP_ENV', 'production');
### event
Отправить указанное событие его слушателям:
```
event(new UserRegistered($user));
```
### info
```
info('Некая полезная информация!');
```
```
info('Неудачная попытка входа пользователя.', ['id' => $user->id]);
```
### logger
Записать в журнал сообщение уровня «debug»:
```
logger('Отладочное сообщение');
```
```
logger('Вход пользователя.', ['id' => $user->id]);
```
Если в функцию не переданы значения, будет возвращён экземпляр логгера:
```
logger()->error('Вам сюда нельзя.');
```
### factory
Функция `PHPfactory()` создаёт построитель фабрики моделей для данного класса, имени и количества. Его можно использовать при тестировании или заполнении БД:
```
$user = factory(App\User::class)->make();
```
### method_field

The `method_field()` function generates an HTML hidden input field containing the spoofed value of the form's HTTP verb. For example, using Blade syntax:

```
<form method="POST">
    {{ method_field('DELETE') }}

    // For 5.1 and earlier:
    // {!! method_field('delete') !!}
</form>
```

### old

The `old()` function retrieves an "old" input value flashed into the session:
```
$value = old('value');
```
$value = old('value', 'default');
### redirect
Функция `PHPredirect()` возвращает HTTP-отклик переадресации, или экземпляр переадресатора, если вызывается без аргументов:
```
return redirect('/home');
```
return redirect()->route('route.name');
### request
Функция `PHPrequest()` возвращает экземпляр текущего запроса или получает элемент ввода:
```
$request = request();
```
$value = request('key', $default = null)
### response
Функция `PHPresponse()` создаёт экземпляр отклика или получает экземпляр фабрики откликов:
```
return response('Hello World', 200, $headers);
```
return response()->json(['foo' => 'bar'], 200, $headers);
### session
Функция `PHPsession()` используется для получения или задания значений сессии:
```
$value = session('key');
```
Вы можете задать значения, передав массив пар ключ/значение в функцию:
```
session(['chairs' => 7, 'instruments' => 3]);
```
Если в функцию не было передано значение, то она вернёт значения сессии:
```
$value = session()->get('key');
```
session()->put('key', $value);
### value
If the given value is a closure, execute it and return its result. Otherwise, return the value itself.

```
$value = value(function () {
    return 'bar';
});
```
### view
Получить экземпляр представления:
```
return view('auth.login');
```
# Localization

Laravel's localization features provide a convenient way to retrieve language strings, allowing your application to support multiple languages. Language strings are stored in files within the resources/lang directory. Within this directory there should be a subdirectory for each language supported by the application:

```
/resources
    /lang
        /en
            messages.php
        /es
            messages.php
```

All language files simply return an array of keyed strings. For example:

```
<?php

return [
    'welcome' => 'Welcome to my site!'
];
```
The default language for your application is specified in the config/app.php configuration file. Of course, you may modify this value to suit the needs of your application. You may also change the active language at runtime using the `setLocale()` method on the `App` facade:
```
Route::get('welcome/{locale}', function ($locale) {
```
App::setLocale($locale); // });
Вы можете настроить «запасной язык», который будет использоваться, когда в файле текущего языка нет соответствующей строки. Как и язык по умолчанию, запасной язык также настраивается в файле config/app.php:
```
'fallback_locale' => 'en',
```
Вы можете использовать методы `PHPgetLocale()` и `PHPisLocale()` фасада App для определения текущего языка и для проверки на совпадение текущего языка с переданным значением:
```
$locale = App::getLocale();
```
if (App::isLocale('en')) { // }
## Retrieving language lines

You may retrieve lines from language files using the `trans()` helper function. The `trans()` method accepts the file and key of the language line as its first argument. For example, let's retrieve the welcome language line from the resources/lang/messages.php language file:
```
echo trans('messages.welcome');
```
Конечно, если вы используете шаблонизатор Blade, то для получения языковой строки можете использовать синтаксис {{ }} или директиву `PHP@lang` (для версии 5.2 и выше):
```
{{ trans('messages.welcome') }}
```
```
@lang('messages.welcome')
```

If the specified language line does not exist, the `trans()` function will simply return the language line key. So, in our example, the `trans()` function would return messages.welcome if the line does not exist.
Added in 5.0:

### Replacing parameters in language lines

If you wish, you may define place-holders in your language lines. All place-holders are prefixed with a colon (:). For example, you may define a welcome message with a place-holder for the name:
```
'welcome' => 'Welcome, :name',
```
Для подстановки значения при получении языковой строки передайте массив замен вторым аргументом метода `PHPtrans()` :
```
echo trans('messages.welcome', ['name' => 'dayle']);
```
## Pluralization

Pluralization is a complex problem, as different languages have a variety of complex rules for pluralization. However, you may easily handle it in your language files by using the | character to separate the singular and plural forms of a string:
```
'apples' => 'Это одно яблоко|Это много яблок',
```
После определения языковой строки с вариантами для разных чисел, вы можете использовать функцию `PHPtrans_choice()` для получения строки в нужном числе. В данном примере возвратится вариант во множественном числе, так как указано число большее 1:
```
echo trans_choice('messages.apples', 10);
```
Since the Laravel translator is powered by the Symfony Translation component, you may create even more complex pluralization rules which specify language lines for multiple number ranges:
```
'apples' => '{0} Это нисколько|[1,19] Это несколько|[20,Inf] Это много',
```
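As a sketch of how those ranges behave (reusing the exact line defined above together with the `trans_choice()` helper described earlier; the returned strings are shown as comments):

```
// resources/lang/ru/messages.php (sketch)
return [
    'apples' => '{0} Это нисколько|[1,19] Это несколько|[20,Inf] Это много',
];

// Elsewhere in the application:
echo trans_choice('messages.apples', 0);   // Это нисколько
echo trans_choice('messages.apples', 5);   // Это несколько
echo trans_choice('messages.apples', 100); // Это много
```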
## Overriding package language files

Many packages ship with their own language files. Instead of editing the package's core files to tweak these lines, you may override them by placing files in the resources/lang/vendor/{package}/{locale} directory (for Laravel 5.0: resources/lang/packages/{locale}/{package}).

So, for example, if you need to override the English language lines in messages.php for a package named skyrim/hearthfire, you should place your language file at resources/lang/vendor/hearthfire/en/messages.php. Within this file, you should only define the language lines you wish to override. Any language lines you don't override will still be loaded from the package's original language files.
# Package development

Packages are the primary way of adding functionality to Laravel. Packages might be anything from a great way to work with dates like Carbon, to an entire BDD testing framework like Behat.
Конечно, есть разные типы пакетов. Некоторые пакеты автономны, что позволяет им работать в составе любого PHP-фреймворка, не только Laravel. Примерами таких отдельных пакетов являются Carbon и Behat. Любой из них может быть использован в Laravel с помощью простого указания их в файле composer.json.
С другой стороны, некоторые пакеты разработаны специально для использования в Laravel. Они могут содержать маршруты, контроллеры, представления и настройки, специально рассчитанные для улучшения приложения на Laravel. Этот раздел документации в основном посвящён разработке именно пакетов для Laravel.
Все пакеты Laravel распространяются через Packagist и Composer, поэтому нужно изучить эти прекрасные средства распространения пакетов для PHP.
### Замечание о фасадах
При создании Laravel-приложения в целом неважно, что использовать — контракты или фасады, поскольку и те и другие обеспечивают практически одинаковый уровень тестируемости. Но при создании пакетов лучше использовать контракты, а не фасады. Поскольку у вашего пакета не будет доступа ко всем вспомогательным методам Laravel для тестирования, будет проще подменить или заглушить контракт, а не фасад.
## Сервис-провайдеры
Сервис-провайдеры — связующие элементы между вашим пакетом и Laravel. Они содержит привязки сервис-контейнера, а также инструкции о том, где хранятся настройки пакета, его представления и языковые файлы.
Сервис-провайдер наследует класс Illuminate\Support\ServiceProvider и содержит два метода: `PHPregister()` и `PHPboot()` . Базовый класс ServiceProvider находится в пакете Composer illuminate/support, который вы должны добавить в зависимости своего пакета. Подробнее о структуре и задачах сервис-провайдеров читайте в документации.
Added in 5.3:

To define routes for your package, pass the routes file path to the `loadRoutesFrom()` method from within your package service provider's `boot()` method. Within your routes file, you may use the Illuminate\Support\Facades\Route facade to register routes just as you would within a typical Laravel application:

```
/**
 * Perform post-registration booting of services.
 *
 * @return void
 */
public function boot()
{
    $this->loadRoutesFrom(__DIR__.'/path/to/routes.php');
}
```

In earlier releases, to define routes for your package, simply `require` the routes file from within your service provider's `boot()` method. Within your routes file, you may use the Route facade to register routes just as you would within a typical Laravel application:

```
/**
 * Perform post-registration booting of services.
 *
 * @return void
 */
public function boot()
{
    if (! $this->app->routesAreCached()) {
        require __DIR__.'/../../routes.php';
    }
}
```
Added in 5.0:

To load your package's routes file, simply include it from within your service provider's `boot()` method:

```
public function boot()
{
    include __DIR__.'/../../routes.php';
}
```

If your package contains controllers, you will need to make sure they are properly configured in your composer.json file's autoload section.
## Resources

### Configuration

Typically, you will need to publish your package's configuration file to the application's own config directory. This will allow users of your package to easily override your default configuration options. To allow your configuration files to be published, call the `publishes()` method from the `boot()` method of your service provider:

```
/**
 * Perform post-registration booting of services.
 *
 * @return void
 */
public function boot()
{
    $this->publishes([
        __DIR__.'/path/to/config/courier.php' => config_path('courier.php'),
    ]);
}
```

Now, when users of your package execute Laravel's `vendor:publish` command, your file will be copied to the specified publish location. Of course, once your configuration has been published, its values may be accessed like any other configuration file:
```
$value = config('courier.option');
```
You may also choose to merge your own package configuration file with the application's published copy. This will allow your users to define only the options they actually want to override in the published copy of the configuration. To merge the configurations, use the `mergeConfigFrom()` method within your service provider's `register()` method:

```
/**
 * Register bindings in the container.
 *
 * @return void
 */
public function register()
{
    $this->mergeConfigFrom(
        __DIR__.'/path/to/config/courier.php', 'courier'
    );
}
```

This method only merges the first level of the configuration array. If your users partially define a multi-dimensional configuration array, the missing options will not be merged.
### Migrations

If your package contains database migrations, you may use the `loadMigrationsFrom()` method to inform Laravel how to load them. The `loadMigrationsFrom()` method accepts the path to your package's migrations as its only argument:

```
/**
 * Perform post-registration booting of services.
 *
 * @return void
 */
public function boot()
{
    $this->loadMigrationsFrom(__DIR__.'/path/to/migrations');
}
```

Once your package's migrations have been registered, they will automatically be run when the `php artisan migrate` command is executed. You do not need to export them to the application's main database/migrations directory.
### Translations

If your package contains language files, you may use the `loadTranslationsFrom()` method to inform Laravel how to load them. For example, if your package is named courier, you should add the following to your service provider's `boot()` method:

```
/**
 * Perform post-registration booting of services.
 *
 * @return void
 */
public function boot()
{
    $this->loadTranslationsFrom(__DIR__.'/path/to/translations', 'courier');
}
```

Package translations are referenced using the `package::file.line` syntax convention. So, you may load the courier package's welcome line from the messages file like so:
```
echo trans('courier::messages.welcome');
```
Remember, your translations directory must contain a subdirectory for each language, such as en, es, ru, and so on.

To publish your package's translations to the application's resources/lang/vendor directory, you may use the service provider's `publishes()` method. The method accepts an array of package translation paths and their desired publish locations. For example, to publish the language files for the courier package:

```
/**
 * Perform post-registration booting of services.
 *
 * @return void
 */
public function boot()
{
    $this->loadTranslationsFrom(__DIR__.'/path/to/translations', 'courier');

    $this->publishes([
        __DIR__.'/path/to/translations' => resource_path('lang/vendor/courier'),
        // For 5.1 and earlier:
        // __DIR__.'/path/to/translations' => base_path('resources/lang/vendor/courier'),
    ]);
}
```

Now, when users of your package execute Laravel's `vendor:publish` Artisan command, your package's translations will be published to the specified publish location.
### Views

To register your package's views with Laravel, you need to tell Laravel where the views are located. You may do this using the `loadViewsFrom()` method. The method accepts two arguments: the path to your view templates and your package's name. For example, if your package's name is courier, you would add the following to your service provider's `boot()` method:

```
/**
 * Perform post-registration booting of services.
 *
 * @return void
 */
public function boot()
{
    $this->loadViewsFrom(__DIR__.'/path/to/views', 'courier');
}
```

Package views are referenced using the `package::view` syntax convention. So, once your view path is registered in a service provider, you may load the admin view from the courier package like so:
```
Route::get('admin', function () {
```
return view('courier::admin'); });
Overriding package views

When you use the `loadViewsFrom()` method, Laravel actually registers two locations for your views: the application's resources/views/vendor directory and the directory you specify. So, using the courier example, Laravel will first check if a custom version of the view has been provided by the developer in resources/views/vendor/courier. Then, if the view has not been customized, Laravel will search the package view directory you specified in your call to `loadViewsFrom()`. This makes it easy for package users to customize or override your package's views.

To publish your package's views to the resources/views/vendor directory, use the `publishes()` method from the `boot()` method of your service provider. The `publishes()` method accepts an array of package view paths and their desired publish locations:

```
/**
 * Perform post-registration booting of services.
 *
 * @return void
 */
public function boot()
{
    $this->loadViewsFrom(__DIR__.'/path/to/views', 'courier');

    $this->publishes([
        __DIR__.'/path/to/views' => resource_path('views/vendor/courier'),
        // For 5.1 and earlier:
        // __DIR__.'/path/to/views' => base_path('resources/views/vendor/courier'),
    ]);
}
```

Now, when users of your package execute Laravel's `vendor:publish` Artisan command, your package's views will be copied to the specified publish location.
Added in 5.3:

## Commands

To register your package's Artisan commands with Laravel, use the `commands()` method. This method expects an array of command class names. Once the commands have been registered, you may execute them using the Artisan console:

```
/**
 * Bootstrap the application services.
 *
 * @return void
 */
public function boot()
{
    if ($this->app->runningInConsole()) {
        $this->commands([
            FooCommand::class,
            BarCommand::class,
        ]);
    }
}
```
## Public assets

Your package may have assets such as JavaScript, CSS, and images. To publish these assets to the application's public directory, use the service provider's `publishes()` method from its `boot()` method. In this example, we will also add a "public" asset group tag, which may be used to publish groups of related assets:

```
/**
 * Perform post-registration booting of services.
 *
 * @return void
 */
public function boot()
{
    $this->publishes([
        __DIR__.'/path/to/assets' => public_path('vendor/courier'),
    ], 'public');
}
```

Now, when your package's users execute the `vendor:publish` command, your assets will be copied to the specified publish location. Since you will typically need to overwrite the assets every time the package is updated, you may use the `--force` flag:

```
php artisan vendor:publish --tag=public --force
```
## Publishing file groups

You may want to publish groups of package assets and resources separately. For instance, you might want to allow your users to publish your package's configuration files and asset files independently. You may do this by "tagging" them when calling the `publishes()` method from a package's service provider. For example, let's use tags to define two publish groups in the `boot()` method of a package service provider:

```
/**
 * Perform post-registration booting of services.
 *
 * @return void
 */
public function boot()
{
    // Publish a config file
    $this->publishes([
        __DIR__.'/../config/package.php' => config_path('package.php')
    ], 'config');

    // Publish your migrations
    $this->publishes([
        __DIR__.'/../database/migrations/' => database_path('migrations')
    ], 'migrations');
}
```

Now your users may publish these groups separately by referencing their tag when executing the `vendor:publish` command:

```
php artisan vendor:publish --tag=config
```
# Pagination

In other frameworks, pagination can be very painful. Laravel's paginator is integrated with the query builder and Eloquent ORM and provides convenient, easy-to-use pagination of database results out of the box. The HTML generated by the paginator is compatible with the Bootstrap CSS framework.

### Paginating query builder results

There are several ways to paginate items. The simplest is by using the `paginate()` method on the query builder or an Eloquent query. The `paginate()` method automatically takes care of setting the proper limit and offset based on the current page being viewed by the user. By default, the current page is detected by the value of the ?page query string argument on the HTTP request. Of course, this value is automatically detected by Laravel, and it is also automatically inserted into links generated by the paginator.

In this example, the only argument passed to the `paginate()` method is the number of items you would like displayed "per page". Let's specify that we would like to display 15 items per page:

```
<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\DB;
// For 5.2 and earlier:
// use DB;
use App\Http\Controllers\Controller;

class UserController extends Controller
{
    /**
     * Show all of the users for the application.
     *
     * @return Response
     */
    public function index()
    {
        $users = DB::table('users')->paginate(15);

        return view('user.index', ['users' => $users]);
    }
}
```

Currently, pagination operations that use a `groupBy` statement cannot be executed efficiently by Laravel. If you need to use a `groupBy` with a paginated result set, it is recommended that you query the database and create a paginator manually.

If you only need to display simple "Next" and "Previous" links in your pagination view, you may use the `simplePaginate()` method to perform a more efficient query. This is very useful for large datasets when you do not need to display a link for each page number when rendering your view:
### Страничный вывод запросов Eloquent
Можно также делать постраничный вывод запросов Eloquent. В этом примере мы разобьём на страницы модель User по 15 элементов на странице. Как видите, синтаксис практически совпадает со страничным выводом выборки из БД:
```
$users = App\User::paginate(15);
```
Разумеется, вы можете вызвать `PHPpaginate()` после задания других условий запроса, таких как where:
```
$users = User::where('votes', '>', 100)->paginate(15);
```
Также вы можете использовать метод `PHPsimplePaginate()` моделей Eloquent:
```
$users = User::where('votes', '>', 100)->simplePaginate(15);
```
### Manually creating a paginator

Sometimes you may wish to create a pagination instance manually, passing it an array of items. You may do so by creating either an Illuminate\Pagination\Paginator or an Illuminate\Pagination\LengthAwarePaginator instance, depending on your needs.

The Paginator class does not need to know the total number of items in the result set; however, because of this, the class does not have methods for retrieving the index of the last page. The LengthAwarePaginator accepts almost the same arguments as the Paginator, but it does require a count of the total number of items in the result set.

In other words, the Paginator corresponds to the `simplePaginate()` method on the query builder and Eloquent, while the LengthAwarePaginator corresponds to the `paginate()` method.

When manually creating a paginator instance, you should manually "slice" the array of results you pass to the paginator. If you're unsure how to do this, check out the array_slice PHP function.
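As an illustration (a sketch only, not taken from the text above): slicing a plain array with array_slice and handing the slice to a LengthAwarePaginator. The `$items` array and the use of `request()->url()` as the link path are assumptions made for this example:

```
use Illuminate\Pagination\LengthAwarePaginator;

$items   = range(1, 100);                                   // the full result set
$perPage = 15;
$page    = LengthAwarePaginator::resolveCurrentPage();      // reads ?page= from the request

$paginated = new LengthAwarePaginator(
    array_slice($items, ($page - 1) * $perPage, $perPage),  // items for the current page only
    count($items),                                          // total number of items
    $perPage,
    $page,
    ['path' => request()->url()]                            // base URL for the generated links
);
```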
## Displaying pagination results

When calling the `paginate()` or `simplePaginate()` methods on the query builder or an Eloquent query, you will receive a paginator instance. For the `paginate()` method this is an instance of Illuminate\Pagination\LengthAwarePaginator, while for the `simplePaginate()` method it is an instance of Illuminate\Pagination\Paginator. These objects provide several methods that describe the result set. In addition to these helper methods, the paginator instances are iterators and may be looped over like an array. So, once you have retrieved the results, you may display them and render the page links using Blade:

```
<div class="container">
    @foreach ($users as $user)
        {{ $user->name }}
    @endforeach
</div>

{{ $users->links() }}

// For 5.1 and earlier:
// {!! $users->render() !!}
```
добавлено в 5.0 ()
Метод `PHPlinks()` (для версии 5.1 и ранее `PHPrender()` ) выведет ссылки на остальные страницы в конечном наборе. Каждая из этих ссылок уже будет содержать правильную переменную строки запроса page. Помните, сгенерированный методом `PHPlinks()` HTML-код совместим с CSS-фреймворком Bootstrap.
Customizing the paginator URI

The `setPath()` method allows you to customize the URI used by the paginator when generating links. For example, if you want the paginator to generate links like http://example.com/custom/url?page=N, pass custom/url to the `setPath()` method:

```
$users = App\User::paginate(15);

$users->setPath('custom/url');
```

You may append to the query string of pagination links using the `appends()` method. For example, to append &sort=votes to each pagination link, make the following call to `appends()`:
```
{{ $users->appends(['sort' => 'votes'])->links() }}
```
//для версии 5.1 и ранее: //{!! $users->appends(['sort' => 'votes'])->render() !!}
Код выше создаст ссылки наподобие:
> http://example.com/something?page=2&sort=votes
Если вы хотите добавить «хэш-фрагмент» в URL-адреса страничного вывода, вы можете использовать метод `PHPfragment()` . Например, чтобы добавить #foo к каждой страничной ссылке, вам надо вызвать `PHPfragment()` вот так:
```
{{ $users->fragment('foo')->links() }}
```
//для версии 5.1 и ранее: //{!! $users->fragment('foo')->render() !!}
Вызов этого метода сгенерирует URL-адреса наподобие:
> http://example.com/something?page=2#foo
### Преобразование в JSON
Классы страничного вывода Laravel реализуют контракт интерфейса Illuminate\Contracts\Support\Jsonable и предоставляют метод `PHPtoJson()` , поэтому можно очень легко конвертировать ваш страничный вывод в JSON. Вы также можете преобразовать экземпляр страничного вывода в JSON, просто вернув его из маршрута или действия контроллера.
return App\User::paginate(); });
JSON-форма экземпляра будет включать некоторые «мета-данные», такие как total, current_page, last_page и другие. Данные экземпляра будут доступны через ключ data в массиве JSON. Вот пример JSON, созданного при помощи возврата экземпляра страничного вывода из маршрута:
```
{
    "total": 50,
    "per_page": 15,
    "current_page": 1,
    "last_page": 4,
    "next_page_url": "http://laravel.app?page=2",
    "prev_page_url": null,
    "from": 1,
    "to": 15,
    "data": [
        {
            // Result Object
        },
        {
            // Result Object
        }
    ]
}
```
добавлено в 5.3 ()
## Настройка представления страничного вывода
По умолчанию отрисованные представления для отображения ссылок страничного вывода совместимы с CSS-фреймворком Bootstrap. Но если вы не используете Bootstrap, вы можете определить свои собственные представления для отрисовки этих ссылок. При вызове метода `PHPlinks()` на экземпляре страничного вывода передайте первым аргументом имя представления:
```
{{ $paginator->links('view.name') }}
```
Но самый простой способ изменить представления страничного вывода — экспортировать их в ваш каталог resources/views/vendor с помощью команды `shvendor:publish` : > shphp artisan vendor:publish --tag=laravel-pagination
Эта команда поместит представления в папку resources/views/vendor/pagination. Файл default.blade.php в этой папке является стандартным представлением страничного вывода. Просто отредактируйте этот файл, чтобы изменить HTML страничного вывода.
## Paginator instance methods

Each paginator instance provides additional pagination information via the following methods:

* `$results->count()`
* `$results->currentPage()`
* `$results->firstItem()`
* `$results->hasMorePages()`
* `$results->lastItem()`
* `$results->lastPage()` (not available when using simplePaginate)
* `$results->nextPageUrl()`
* `$results->perPage()`
* `$results->previousPageUrl()`
* `$results->total()` (not available when using simplePaginate)
* `$results->url($page)`
# Queues

Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database. Queues allow you to defer the processing of time consuming tasks, such as sending an e-mail, until a later time, which drastically speeds up web requests to your application.

The queue configuration file is stored in config/queue.php. In this file you will find connection configurations for each of the queue drivers that are included with the framework: database, Beanstalkd, IronMQ (5.1 and earlier only), Amazon SQS, Redis, null, and a synchronous driver (for local use). The null queue driver simply discards queued jobs, so they are never run.
### Connections vs. queues

Before getting started with Laravel queues, it is important to understand the distinction between "connections" and "queues". In your config/queue.php configuration file there is a connections option. This option defines a particular connection to a backend service such as Amazon SQS, Beanstalk, or Redis. However, any given queue connection may have multiple "queues", which may be thought of as different stacks or piles of queued jobs.

Note that each connection configuration example in the queue configuration file contains a queue attribute. This is the default queue that jobs will be dispatched to when they are sent to a given connection. In other words, if you dispatch a job without explicitly defining which queue it should go to, the job will be placed on the queue defined in the queue attribute of the connection configuration:
```
// This job is sent to the default queue...
dispatch(new Job);

// This job is sent to the "emails" queue...
dispatch((new Job)->onQueue('emails'));
```
Some applications may never need to push jobs onto multiple queues, preferring instead to have one simple queue. However, pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed, since the Laravel queue worker allows you to specify which queues it should process by priority. For example, if you push jobs to a high queue, you may run a worker that gives them higher processing priority:

```
php artisan queue:work --queue=high,default
```
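For example (a sketch reusing the `Job` class from the snippet above), a job can be pushed onto that high-priority queue with `onQueue()`:

```
// This job will be processed before jobs on the "default" queue
// by a worker started with --queue=high,default.
dispatch((new Job)->onQueue('high'));
```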
In order to use the database queue driver, you will need a database table to hold the jobs. To generate a migration that creates this table, run the `queue:table` Artisan command. Once the migration has been created, you may migrate your database using the `migrate` command:

```
php artisan queue:table

php artisan migrate
```
In order to use the redis queue driver, you should configure a Redis database connection in your config/database.php configuration file.

If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag. This is required in order to ensure that all of the Redis keys for a given queue are placed into the same hash slot:

```
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => '{default}',
    'retry_after' => 90,
],
```
The drivers mentioned above have the following dependencies:
* Amazon SQS: aws/aws-sdk-php ~3.0
* Beanstalkd: pda/pheanstalk ~3.0
* IronMQ: iron-io/iron_mq ~2.0|~4.0 (5.1 and earlier only)
* Redis: predis/predis ~1.0
## Creating jobs

### Generating job classes

By default, all of the queueable jobs for your application are stored in the app/Jobs directory (App\Commands in 5.0). If the app/Jobs directory doesn't exist, it will be created when you run the `make:job` Artisan command. You may generate a new queued job using the Artisan CLI:
Added in 5.1:

```
php artisan make:job SendReminderEmail --queued
```

Added in 5.0:

```
php artisan make:command SendEmail --queued
```
The generated class will implement the Illuminate\Contracts\Queue\ShouldQueue interface, indicating to Laravel that the job should be pushed onto the queue instead of being run immediately.
### Class structure

Job classes are very simple, normally containing only a `handle()` method which is called when the job is processed by the queue. To get started, let's take a look at an example job class.
Added in 5.3:

In this example, we'll pretend we manage a podcast publishing service and need to process the uploaded podcast files before they are published:

```
<?php

namespace App\Jobs;

use App\Podcast;
use App\AudioProcessor;
use Illuminate\Bus\Queueable;
use Illuminate\Queue\SerializesModels;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;

class ProcessPodcast implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    protected $podcast;

    /**
     * Create a new job instance.
     *
     * @param  Podcast  $podcast
     * @return void
     */
    public function __construct(Podcast $podcast)
    {
        $this->podcast = $podcast;
    }

    /**
     * Execute the job.
     *
     * @param  AudioProcessor  $processor
     * @return void
     */
    public function handle(AudioProcessor $processor)
    {
        // Process uploaded podcast...
    }
}
```
Added in 5.2 / 5.1 / 5.0:

```
<?php

namespace App\Jobs;

use App\User;
use App\Jobs\Job;
use Illuminate\Contracts\Mail\Mailer;
use Illuminate\Queue\SerializesModels;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;
// For 5.1 and earlier:
// use Illuminate\Contracts\Bus\SelfHandling;
//
// class SendReminderEmail extends Job implements SelfHandling, ShouldQueue

class SendReminderEmail extends Job implements ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    protected $user;

    /**
     * Create a new job instance.
     *
     * @param  User  $user
     * @return void
     */
    public function __construct(User $user)
    {
        $this->user = $user;
    }

    /**
     * Execute the job.
     *
     * @param  Mailer  $mailer
     * @return void
     */
    public function handle(Mailer $mailer)
    {
        $mailer->send('emails.reminder', ['user' => $this->user], function ($m) {
            //
        });

        $this->user->reminders()->create(...);
    }
}
```
Обратите внимание, в этом примере мы можем передать модель Eloquent напрямую в конструктор задачи. Благодаря используемому в задаче типажу SerializesModels, модели Eloquent будут изящно сериализованы и десериализованы при выполнении задачи. Если ваша задача принимает модель Eloquent в своём конструкторе, в очередь будет сериализован только идентификатор модели. А когда очередь начнёт обработку задачи, система очередей автоматически запросит полный экземпляр модели из БД. Это всё полностью прозрачно для вашего приложения и помогает избежать проблем, связанных с сериализацией полных экземпляров моделей Eloquent.
Метод `PHPhandle()` вызывается при обработке задачи очередью. Обратите внимание, что мы можем указывать зависимости в методе `PHPhandle()` задачи: сервис-контейнер Laravel автоматически внедрит эти зависимости.
добавлено в 5.0 ()
добавлено в 5.2 () 5.1 () 5.0 ()
Если вы хотите «отпустить» задачу вручную, то типаж InteractsWithQueue, который уже включён в ваш сгенерированный класс задачи, предоставляет доступ к методу задачи `PHPrelease()` . Этот метод принимает один аргумент — число секунд, через которое задача станет снова доступной:
public function handle() { if (condition) { $this->release(10); } }
Проверка числа попыток запуска
Как уже было сказано, при возникновении исключения при выполнении задачи она будет автоматически возвращена в очередь. Вы можете проверить число сделанных попыток выполнения задачи при помощи метода `PHPattempts()` :
public function handle() { if ($this->attempts() > 3) { /* ... */ } }
## Отправка задач в очередь
добавлено в 5.3 ()
Когда вы напишете класс вашей задачи, вы можете отправить её в очередь с помощью вспомогательного метода `PHPdispatch()` . Необходимо передать единственный аргумент — экземпляр задачи: `<?php` namespace App\Http\Controllers; use App\Jobs\ProcessPodcast; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class PodcastController extends Controller { /** * Сохранить новый подкаст. * * @param Request $request * @return Response */ public function store(Request $request) { // Создать подкаст... dispatch(new ProcessPodcast($podcast)); } } Вспомогательный метод `PHPdispatch()` — удобная, лаконичная, доступная глобально функция, а также её чрезвычайно просто тестировать. Узнайте больше из документации по тестированию.
добавлено в 5.2 () 5.1 () 5.0 ()
Стандартный контроллер Laravel расположен в app/Http/Controllers/Controller.php и использует типаж DispatchesJobs. Этот типаж предоставляет несколько методов, позволяющих вам удобно помещать задачи в очередь. Например, метод `PHPdispatch()` : `<?php` namespace App\Http\Controllers; use App\User; use Illuminate\Http\Request; use App\Jobs\SendReminderEmail; use App\Http\Controllers\Controller; class UserController extends Controller { /** * Послать уведомление по e-mail данному пользователю. * * @param Request $request * @param int $id * @return Response */ public function sendReminderEmail(Request $request, $id) { $user = User::findOrFail($id); $this->dispatch(new SendReminderEmail($user)); } }
добавлено в 5.2 ()
Разумеется, иногда нужно отправить задачу из другого места вашего приложения — не из маршрута или контроллера. Для этого вы можете включить типаж DispatchesJobs в любой класс своего приложения, чтобы получить доступ к его различным методам отправки. Например, вот пример класса, использующего этот типаж:
`<?php` namespace App; use Illuminate\Foundation\Bus\DispatchesJobs; class ExampleClass { use DispatchesJobs; } Или, вы можете использовать глобальную функцию `PHPdispatch()` :
```
Route::get('/job', function () {
    dispatch(new App\Jobs\PerformTask);

    return 'Готово!';
});
```
добавлено в 5.0 ()
Для добавления новой задачи в очередь используйте метод `PHPQueue::push()` :
```
Queue::push(new SendEmail($message));
```
В этом примере мы используем фасад Queue напрямую. Но обычно вы будете отправлять команды в очередь через командную шину. В этой статье мы продолжим использовать фасад Queue, но также коснёмся и командной шины, поскольку она используется и для отправки синхронных задач, и для отправки задач в очередь.
Сгенерированный обработчик будет помещён в App\Handlers\Commands и будет извлечён из IoC-контейнера.
### Отложенные задачи
Если вам надо отложить выполнение задачи в очереди, используйте метод `PHPdelay()` на экземпляре задачи. Этот метод обеспечивается типажом Illuminate\Bus\Queueable, который включён во все генерируемые классы задач по умолчанию.
добавлено в 5.3 ()
Например, давайте укажем, что задача станет доступной для обработки через 10 минут после отправки в очередь:
`<?php` namespace App\Http\Controllers; use Carbon\Carbon; use App\Jobs\ProcessPodcast; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class PodcastController extends Controller { /** * Сохранить новый подкаст. * * @param Request $request * @return Response */ public function store(Request $request) { // Создать подкаст... $job = (new ProcessPodcast($podcast)) ->delay(Carbon::now()->addMinutes(10)); dispatch($job); } }
добавлено в 5.2 () 5.1 () 5.0 ()
Например, вы захотите поместить в очередь задачу, которая отправляет пользователю e-mail через 5 минут после регистрации:
`<?php` namespace App\Http\Controllers; use App\User; use Illuminate\Http\Request; use App\Jobs\SendReminderEmail; use App\Http\Controllers\Controller; class UserController extends Controller { /** * Отправить уведомление на e-mail данному пользователю. * * @param Request $request * @param int $id * @return Response */ public function sendReminderEmail(Request $request, $id) { $user = User::findOrFail($id); $job = (new SendReminderEmail($user))->delay(60 * 5); $this->dispatch($job); } }
В этом примере мы указали, что задача в очереди должна быть отложена на 5 минут до того, как станет доступной обработчикам.
Сервис Amazon SQS имеет ограничение на задержку не более 15 минут.
добавлено в 5.0 ()
Вы можете добиться этого, используя метод `PHPQueue::later()` :
```
$date = Carbon::now()->addMinutes(15);

Queue::later($date, new SendEmail($message));
```
В этом примере мы используем библиотеку для работы со временем Carbon, чтобы указать необходимое время задержки для задачи. Другой вариант — передать необходимое число секунд для задержки.
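Например, задержку в 15 минут можно задать и просто числом секунд:

```
Queue::later(900, new SendEmail($message));
```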
### Настройка очереди и подключения
добавлено в 5.0 ()
Помещая задачи в разные очереди, вы можете разделять их по категориям, а также задавать приоритеты по количеству обработчиков разных очередей. Это не касается различных «подключений» очередей, определённых в файле настроек очереди, а только конкретных очередей в рамках одного подключения. Чтобы указать очередь используйте метод `PHPonQueue()` на экземпляре задачи:
добавлено в 5.3 ()
`<?php` namespace App\Http\Controllers; use App\Jobs\ProcessPodcast; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class PodcastController extends Controller { /** * Сохранить новый подкаст. * * @param Request $request * @return Response */ public function store(Request $request) { // Создать подкаст... $job = (new ProcessPodcast($podcast))->onQueue('processing'); dispatch($job); } }
добавлено в 5.2 () 5.1 () 5.0 ()
`<?php` namespace App\Http\Controllers; use App\User; use Illuminate\Http\Request; use App\Jobs\SendReminderEmail; use App\Http\Controllers\Controller; class UserController extends Controller { /** * Отправить уведомление на e-mail данному пользователю. * * @param Request $request * @param int $id * @return Response */ public function sendReminderEmail(Request $request, $id) { $user = User::findOrFail($id); $job = (new SendReminderEmail($user))->onQueue('emails'); $this->dispatch($job); } }
добавлено в 5.0 ()
добавлено в 5.3 ()
`<?php` namespace App\Http\Controllers; use App\Jobs\ProcessPodcast; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class PodcastController extends Controller { /** * Сохранить новый подкаст. * * @param Request $request * @return Response */ public function store(Request $request) { // Создать подкаст... $job = (new ProcessPodcast($podcast))->onConnection('sqs'); dispatch($job); } }
добавлено в 5.2 ()
`<?php` namespace App\Http\Controllers; use App\User; use Illuminate\Http\Request; use App\Jobs\SendReminderEmail; use App\Http\Controllers\Controller; class UserController extends Controller { /** * Послать данному пользователю напоминание на e-mail. * * @param Request $request * @param int $id * @return Response */ public function sendReminderEmail(Request $request, $id) { $user = User::findOrFail($id); $job = (new SendReminderEmail($user))->onConnection('alternate'); $this->dispatch($job); } }
### Обработка ошибок
Если во время выполнения задачи возникло исключение, она будет автоматически возвращена в очередь, и можно будет снова попробовать выполнить её. Попытки будут повторяться до тех пор, пока не будет достигнуто заданное максимальное число для вашего приложения. Это число определяется параметром `sh--tries` , используемом в Artisan-команде `shqueue:work` . Подробнее о запуске обработчика очереди читайте ниже.
добавлено в 5.3 ()
## Запуск обработчика очереди
Laravel содержит обработчик очереди, который обрабатывает новые задачи при поступлении в очередь. Вы можете запустить его Artisan-командой `shqueue:work` . Запомните, команда `shqueue:work` будет выполняться, пока вы не остановите её вручную или закроете свой терминал: > shphp artisan queue:work
Чтобы процесс `shqueue:work` выполнялся в фоне постоянно, используйте монитор процессов, такой как Supervisor, для проверки, что обработчик очереди не остановился.
Запомните, обработчики очереди — долгоживущие процессы, и они хранят в памяти состояние загруженного приложения. Поэтому они не заметят изменения в вашем коде, сделанные после их запуска. Поэтому в процессе развёртывания не забудьте перезапустить обработчиков очереди.
Указание подключения и очереди
Также вы можете указать, какое подключение должен использовать обработчик очереди. Имя подключения передаётся команде `shwork` и должно соответствовать одному из подключений, определённых в файле config/queue.php: > shphp artisan queue:work redis
Можно ещё точнее настроить обработчика очереди, указав для него конкретные очереди в данном подключении. Например, если все ваши email-сообщения обрабатываются в очереди emails в подключении redis, вы можете выполнить такую команду для запуска обработчика только для этой очереди:
> shphp artisan queue:work redis --queue=emails
Демоны-обработчики очереди не «перезагружают» фреймворк перед обработкой каждой задачи. Поэтому вам надо освобождать все тяжёлые ресурсы после завершения каждой задачи. Например, если вы выполняете обработку изображения с помощью библиотеки GD, то после завершения обработки вам надо освободить память с помощью `PHPimagedestroy` .
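Схематичный пример освобождения ресурсов GD в конце метода `PHPhandle()` (путь к файлу здесь условный):

```
public function handle()
{
    // Загружаем изображение (путь условный)
    $image = imagecreatefromjpeg('/path/to/uploaded.jpg');

    // ...обработка изображения...

    // Освобождаем память до завершения задачи
    imagedestroy($image);
}
```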
### Приоритеты очередей
Иногда бывает необходимо организовать обработку очередей по их приоритетам. Например, в файле config/queue.php вы можете задать очередь low как очередь по умолчанию для подключения redis. При этом при необходимости вы можете помещать задачу в очередь high вот так:
```
dispatch((new Job)->onQueue('high'));
```
Чтобы запустить обработчика, который сначала проверит, что обработаны все задачи из очереди high, и только потом продолжит обрабатывать задачи из очереди low, передайте в команду `sh` список очередей через запятую: > shphp artisan queue:work --queue=high,low
### Обработчики очереди и развёртывание
Поскольку обработчики очереди — долгоживущие процессы, они не подхватят изменения в вашем коде, пока не будут перезапущены. Поэтому простейший способ развернуть приложение с обработчиками очереди — перезапустить обработчиков во время процесса развёртывания. Вы можете мягко перезапустить всех своих обработчиков командой `shqueue:restart` : > shphp artisan queue:restart
Эта команда заставит обработчиков очереди мягко «умереть» после завершения ими текущей задачи, поэтому ни одна существующая задача не будет потеряна. Поскольку обработчики очереди умрут при выполнении команды `shqueue:restart` , вам надо запустить менеджер процессов, такой как Supervisor, для автоматического перезапуска обработчиков очереди.
### Окончание срока задачи и таймауты
В файле config/queue.php для каждого подключения определён параметр retry_after. Этот параметр указывает, сколько секунд подключение должно ждать, прежде чем заново приступить к обрабатываемой задаче. Например, если значение retry_after равно 90, то задача будет возвращена назад в очередь, если пробудет в обработке 90 секунд и не будет удалена. Обычно для retry_after лучше задать значение равное максимальному разумному количеству секунд, которое потребуется задачам для выполнения.
Единственное подключение, у которого нет значения retry_after, — Amazon SQS. SQS пробует выполнить задачу заново исходя из стандартного таймаута видимости, который настраивается в консоли AWS.
Artisan-команда `shqueue:work` имеет параметр `sh--timeout` . Этот параметр указывает, сколько должен ждать главный процесс обработки очередей Laravel, прежде чем уничтожить дочернего обработчика очереди, выполняющего задачу. Дочерний процесс обработки очереди может «зависнуть» по разным причинам, таким как внешний HTTP-вызов, который не отвечает. Параметр `sh--timeout` удаляет зависнувшие процессы, превысившие указанный лимит времени: > shphp artisan queue:work --timeout=60
Параметр retry_after в настройках и параметр командной строки `sh--timeout` разные, но работают совместно для проверки, что задачи не потерялись, и что задачи выполняются успешно только один раз. Значение `sh--timeout` должно быть всегда по крайней мере на несколько секунд меньше значения retry_after в настройках. Таким образом обработчик, выполняющий данную задачу, будет всегда убиваться до повторной попытки выполнения задачи. Если ваш параметр `sh--timeout` имеет значение больше значения retry_after, ваши задачи могут выполниться дважды.
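Для иллюстрации этого соотношения (значения условные): фрагмент настроек подключения в config/queue.php и соответствующая команда запуска обработчика могут выглядеть примерно так:

```
// config/queue.php, фрагмент настроек подключения
'retry_after' => 90,
```

> shphp artisan queue:work --timeout=60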
Длительность паузы обработчика
Пока в очереди доступны задачи, обработчик будет выполнять их без задержки между ними. С помощью параметра `shsleep` можно задать время, на которое будет «засыпать» обработчик, если нет новых задач: > shphp artisan queue:work --sleep=3
### Получение задач из запросов
Очень часто переменные HTTP-запросов переносятся в задачи. Поэтому вместо того, чтобы заставлять вас делать это вручную для каждого запроса, Laravel предоставляет несколько вспомогательных методов. Давайте посмотрим на метод `PHPdispatchFrom()` типажа DispatchesJobs. По умолчанию этот типаж включён в базовый класс контроллера Laravel: `<?php` namespace App\Http\Controllers; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class CommerceController extends Controller { /** * Обработка данного заказа. * * @param Request $request * @param int $id * @return Response */ public function processOrder(Request $request, $id) { // Обработка запроса... $this->dispatchFrom('App\Jobs\ProcessOrder', $request); } }
Этот метод проверит конструктор класса данной задачи и извлечёт переменные из HTTP-запроса (или любого другого объекта ArrayAccess) для заполнения необходимых параметров конструктора задачи. Поэтому, если класс нашей задачи принимает в конструкторе переменную productId, то шина задач попытается получить параметр productId из HTTP-запроса.
Вы также можете передать массив третьим аргументом метода `PHPdispatchFrom()` . Этот массив будет использован для заполнения тех параметров, которых нет в запросе:
```
$this->dispatchFrom('App\Jobs\ProcessOrder', $request, [
    'taxPercentage' => 20,
]);
```
добавлено в 5.0 ()
### Добавление замыканий в очередь
Вы можете помещать в очередь и функции-замыкания. Это очень удобно для простых, быстрых задач, выполняющихся в очереди.
Добавление замыкания в очередь
```
Queue::push(function($job) use ($id)
{
    Account::delete($id);

    $job->delete();
});
```
Вместо того, чтобы делать объекты доступными для замыканий в очереди через директиву `PHPuse` , передавайте первичные ключи и повторно извлекайте связанные модели из вашей задачи в очереди. Это часто позволяет избежать неожиданного поведения при сериализации.
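Схематичный пример такого подхода (переменные `PHP$account` и `PHP$accountId` здесь условные):

```
// Нежелательно: в очередь сериализуется весь объект модели
Queue::push(function ($job) use ($account) {
    $account->delete();

    $job->delete();
});

// Лучше: передаём только первичный ключ и заново получаем модель при выполнении
Queue::push(function ($job) use ($accountId) {
    Account::findOrFail($accountId)->delete();

    $job->delete();
});
```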
При использовании push-очередей Iron.io будьте особенно внимательны при добавлении замыканий. Конечная точка выполнения, получающая ваше сообщение, должна проверить входящую последовательность-ключ, чтобы удостовериться, что запрос действительно исходит от Iron.io. Например, ваша конечная push-точка может иметь адрес вида https://yourapp.com/queue/receive?token=SecretToken — где значение token можно проверять перед собственно обработкой задачи.
## Настройка Supervisor
Supervisor — монитор процессов для ОС Linux, он автоматически перезапустит ваш процесс `shqueue:work` , если он остановится. Для установки Supervisor в Ubuntu используйте такую команду: > shsudo apt-get install supervisor
Если самостоятельная настройка Supervisor кажется вам слишком сложной, вы можете использовать Laravel Forge, который автоматически установит и настроит Supervisor для ваших проектов Laravel.
Файлы настроек Supervisor обычно находятся в папке /etc/supervisor/conf.d. Там вы можете создать любое количество файлов с настройками, по которым Supervisor поймёт, как отслеживать ваши процессы. Например, давайте создадим файл laravel-worker.conf, который запускает и наблюдает за процессом queue:work:
> conf[program:laravel-worker] process_name=%(program_name)s_%(process_num)02d command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3 autostart=true autorestart=true user=forge numprocs=8 redirect_stderr=true stdout_logfile=/home/forge/app.com/worker.log
В этом примере numprocs указывает, что Supervisor должен запустить 8 процессов queue:work и наблюдать за ними, автоматически перезапуская их при их остановках. Само собой, вам надо изменить часть queue:work sqs директивы command в соответствии с вашим подключением очереди.
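Когда файл настроек создан, обычно нужно попросить Supervisor перечитать конфигурацию и запустить процессы; имя laravel-worker здесь соответствует примеру выше:

> shsudo supervisorctl reread sudo supervisorctl update sudo supervisorctl start laravel-worker:*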
Подробнее о Supervisor читайте в его документации.
## Проваленные задачи
Не всегда всё идёт по плану, иногда ваши задачи в очереди будут заканчиваться ошибкой. Не волнуйтесь, такое с каждым случается! В Laravel есть удобный способ указать максимальное количество попыток выполнения задачи. После превышения этого количества попыток задача будет добавлена в таблицу failed_jobs. Для создания миграции для таблицы failed_jobs можно использовать команду `shqueue:failed-table` : > shphp artisan queue:failed-table php artisan migrate
При запуске вашего обработчика очереди нужно указать максимальное количество попыток выполнения задачи с помощью параметра `sh--tries` команды `shqueue:work` . Если не задать этот параметр, то попытки выполнения задачи будут бесконечны: > shphp artisan queue:work redis --tries=3
### Уборка после проваленных задач
Вы можете определить метод `PHPfailed()` прямо в классе задачи, это позволит вам выполнять уборку после данной конкретной задачи при возникновении ошибки. Это идеальное место для отправки предупреждения вашим пользователям или для отмены всех действий, выполненных задачей. Исключение `PHPException` , приведшее к провалу задачи, будет передано в метод `PHPfailed()` : `<?php` namespace App\Jobs; use Exception; use App\Podcast; use App\AudioProcessor; use Illuminate\Bus\Queueable; use Illuminate\Queue\SerializesModels; use Illuminate\Queue\InteractsWithQueue; use Illuminate\Contracts\Queue\ShouldQueue; class ProcessPodcast implements ShouldQueue { use InteractsWithQueue, Queueable, SerializesModels; protected $podcast; /** * Создать новый экземпляр задачи. * * @param Podcast $podcast * @return void */ public function __construct(Podcast $podcast) { $this->podcast = $podcast; } /** * Выполнить задачу. * * @param AudioProcessor $processor * @return void */ public function handle(AudioProcessor $processor) { // Обработка загруженного подкаста... } /** * Ошибка выполнения задачи. * * @param Exception $exception * @return void */ public function failed(Exception $exception) { // Отправить пользователю уведомление об ошибке, и т.п. ... } }
### События проваленных задач
Если вы хотите зарегистрировать событие, которое будет вызываться при ошибке выполнения задачи, используйте метод `PHPQueue::failing()` . Это событие — отличная возможность оповестить вашу команду через email или HipChat. Например, мы можем прикрепить обратный вызов к данному событию из AppServiceProvider, который включён в Laravel: `<?php` namespace App\Providers; use Illuminate\Support\Facades\Queue; use Illuminate\Support\ServiceProvider; //для версии 5.2 и выше: use Illuminate\Queue\Events\JobFailed; class AppServiceProvider extends ServiceProvider { /** * Начальная загрузка всех сервисов приложения. * * @return void */ public function boot() { //для версии 5.2 и выше: Queue::failing(function (JobFailed $event) { // $event->connectionName // $event->job // $event->exception //для версии 5.1 и ранее: //Queue::failing(function ($connection, $job, $data) { // // Оповещение команды о проваленной задаче... }); } /** * Регистрация сервис-провайдера. * * @return void */ public function register() { // } }
добавлено в 5.2 () 5.1 () 5.0 ()
Метод `PHPfailed()` класса задачи Для более точного контроля, вы можете определить метод `PHPfailed()` непосредственно в классе задачи, это позволит выполнять специфичное для задачи действие при возникновении ошибки: `<?php` namespace App\Jobs; use App\Jobs\Job; use Illuminate\Contracts\Mail\Mailer; use Illuminate\Queue\SerializesModels; use Illuminate\Queue\InteractsWithQueue; use Illuminate\Contracts\Queue\ShouldQueue; //для версии 5.1 и ранее: //use Illuminate\Contracts\Bus\SelfHandling; // //class SendReminderEmail extends Job implements SelfHandling, ShouldQueue class SendReminderEmail extends Job implements ShouldQueue { use InteractsWithQueue, SerializesModels; /** * Выполнение задачи. * * @param Mailer $mailer * @return void */ public function handle(Mailer $mailer) { // } /** * Обработка ошибки задачи. * * @return void */ public function failed() { // Вызывается при ошибке в задаче... } }
добавлено в 5.0 ()
### Повторный запуск проваленных задач
Чтобы просмотреть все проваленные задачи, которые были помещены в вашу таблицу failed_jobs, можно использовать Artisan-команду `shqueue:failed` : > shphp artisan queue:failed
Эта команда выведет список задач с их ID, подключением, очередью и временем ошибки. ID задачи можно использовать для повторной попытки её выполнения. Например, для повторной попытки выполнения задачи с ID = 5 выполните такую команду:
> shphp artisan queue:retry 5
Чтобы повторить все проваленные задачи, выполните команду `shqueue:retry` указав `shall` в качестве ID: > shphp artisan queue:retry all
Если вы хотите удалить проваленную задачу, используйте команду `shqueue:forget` : > shphp artisan queue:forget 5
Для удаления всех проваленных задач используйте команду `shqueue:flush` : > shphp artisan queue:flush
### События задач
добавлено в 5.3 ()
Методы `PHPbefore()` и `PHPafter()` фасада Queue позволяют указать обратные вызовы, которые будут выполнены до выполнения задачи или после успешного выполнения задачи. Эти обратные вызовы — отличная возможность для выполнения дополнительного журналирования или увеличения статистики для панели управления. Обычно эти методы вызываются из сервис-провайдера. Например, мы можем использовать AppServiceProvider, который включён в Laravel: `<?php` namespace App\Providers; use Illuminate\Support\Facades\Queue; use Illuminate\Support\ServiceProvider; use Illuminate\Queue\Events\JobProcessed; use Illuminate\Queue\Events\JobProcessing; class AppServiceProvider extends ServiceProvider { /** * Начальная загрузка всех сервисов приложения. * * @return void */ public function boot() { Queue::before(function (JobProcessing $event) { // $event->connectionName // $event->job // $event->job->payload() }); Queue::after(function (JobProcessed $event) { // $event->connectionName // $event->job // $event->job->payload() }); } /** * Регистрация сервис-провайдера. * * @return void */ public function register() { // } } С помощью метода `PHPlooping()` фасада Queue вы можете указать обратные вызовы, которые будут выполнены перед тем, как обработчик попытается получить задачу из очереди. Например, вы можете зарегистрировать замыкание для отката всех транзакций, которые были оставлены открытыми предыдущей проваленной задачей:
Queue::looping(function () { while (DB::transactionLevel() > 0) { DB::rollBack(); } });
добавлено в 5.2 () 5.1 () 5.0 ()
Методы `PHPQueue::before()` и `PHPQueue::after()` позволяют зарегистрировать обратный вызов, который будет выполнен до выполнения задачи или после успешного выполнения задачи. Обратные вызовы — отличная возможность для выполнения дополнительного журналирования, помещения в очередь последующей задачи, или увеличения статистики для панели управления. Например, мы можем прикрепить обратный вызов к этому событию из AppServiceProvider, который включён в Laravel: `<?php` namespace App\Providers; use Queue; use Illuminate\Support\ServiceProvider; //для версии 5.2 и выше: use Illuminate\Queue\Events\JobProcessed; class AppServiceProvider extends ServiceProvider { /** * Начальная загрузка всех сервисов приложения. * * @return void */ public function boot() { //для версии 5.2 и выше: Queue::after(function (JobProcessed $event) { // $event->connectionName // $event->job // $event->data //для версии 5.1 и ранее: //Queue::after(function ($connection, $job, $data) { // // }); } /** * Регистрация сервис-провайдера. * * @return void */ public function register() { // } }
## Запуск слушателя очереди
Laravel включает в себя Artisan-команду, которая будет выполнять новые задачи по мере их поступления. Вы можете запустить слушателя командой `shqueue:listen` : > shphp artisan queue:listen
Заметьте, что когда это задание запущено, оно будет продолжать работать, пока вы не остановите его вручную. Вы можете использовать монитор процессов, такой как Supervisor, чтобы удостовериться, что задание продолжает работать.
Вы можете передать команде `shlisten` список подключений к очереди через запятую, чтобы задать приоритеты для очереди: > shphp artisan queue:listen --queue=high,low
В этом примере задачи из подключения high-connection всегда будут обрабатываться перед задачами из low-connection.
Указание времени на выполнение задачи
Кроме этого вы можете указать число секунд, в течение которых будет выполняться каждая задача:
> shphp artisan queue:listen --timeout=60
Указание перерыва между задачами
Также вы можете задать число секунд ожидания перед проверкой наличия новых задач в очереди:
> shphp artisan queue:listen --sleep=5
Обратите внимание, что очередь «засыпает» только тогда, когда в ней нет задач. Если задачи есть, очередь будет продолжать обрабатывать их без перерыва.
### Демон-обработчик очереди
Команда `shqueue:work` также имеет опцию `sh--daemon` для того, чтобы обработчик принудительно продолжал обрабатывать задачи без необходимости повторной загрузки фреймворка. Это приводит к значительному снижению использования CPU по сравнению с командой `shqueue:listen` . Для запуска обработчика очереди в режиме демона используйте флаг `sh--daemon` :
добавлено в 5.2 ()
> shphp artisan queue:work connection-name --daemon php artisan queue:work connection-name --daemon --sleep=3 php artisan queue:work connection-name --daemon --sleep=3 --tries=3
> shphp artisan queue:work connection --daemon php artisan queue:work connection --daemon --sleep=3 php artisan queue:work connection --daemon --sleep=3 --tries=3
добавлено в 5.2 () 5.1 () 5.0 ()
Как видите, команда `shqueue:work` поддерживает те же самые опции, что и команда `shqueue:listen` . Для просмотра всех доступных опций используйте команду
```
shphp artisan help queue:work
```
.
Написание кода для демонов-обработчиков очереди
Демоны-обработчики очереди не перезапускают фреймворк перед обработкой каждой задачи. Поэтому нужно быть аккуратным при освобождении любых «тяжёлых» ресурсов до того, как закончится выполнение вашей задачи. Например, если вы обрабатываете изображение с помощью библиотеки GD, то после завершения обработки вам необходимо освобождать память с помощью `PHPimagedestroy()` .
добавлено в 5.2 () 5.1 () 5.0 ()
### Развёртывание с помощью демонов-обработчиков очереди
Поскольку демоны-обработчики очереди — длительные процессы, они не подхватывают изменения в коде без перезапуска. Поэтому простейший способ развернуть приложение, используя обработчики очереди, — перезапустить их во время выполнения скрипта развёртывания. Вы можете изящно перезапустить все обработчики, включив следующую команду в ваш скрипт для развёртывания:
> shphp artisan queue:restart
добавлено в 5.2 ()
Эта команда сообщит всем обработчикам очереди о необходимости «умереть» после завершения обработки их текущих задач, поэтому текущие задачи не пропадут. Не забывайте, что обработчики очереди умрут при выполнении команды `shqueue:restart` , поэтому необходимо запустить менеджер процессов, такой как Supervisor, который автоматически перезапустит обработчики задач.
добавлено в 5.0 ()
## Push-очереди
Push-очереди дают вам доступ ко всем мощным возможностям, предоставляемым подсистемой очередей Laravel 5, без запуска демонов и фоновых слушателей. На текущий момент push-очереди поддерживает только драйвер Iron.io. Перед тем, как начать, создайте аккаунт и впишите его данные в config/queue.php.
Регистрация подписчика push-очереди
После этого вы можете использовать Artisan-команду `shqueue:subscribe` для регистрации URL конечной точки, которая будет получать добавляемые в очередь задачи: > shphp artisan queue:subscribe queue_name queue/receive php artisan queue:subscribe queue_name http://foo.com/queue/receive
Теперь, когда вы войдёте в ваш профиль Iron.io, то увидите новую push-очередь и URL её подписки. Вы можете подписать любое число URL на одну очередь. Дальше создайте маршрут для вашей конечной точки queue/receive, и пусть он возвращает результат вызова метода `PHPQueue::marshal()` :
```
Route::post('queue/receive', function()
{
    return Queue::marshal();
});
```
Этот метод позаботится о вызове нужного класса-обработчика задачи. Для помещения задач в push-очередь просто используйте всё тот же метод `PHPQueue::push()` , который работает и для обычных очередей.
# Сессии
HTTP-приложения не имеют состояний. Сессии — способ сохранения информации о пользователе между отдельными запросами. Laravel поставляется со множеством различных механизмов сессий, доступных через единый выразительный API. Из коробки поддерживаются такие популярные системы, как Memcached, Redis и СУБД.
Настройки сессии содержатся в файле config/session.php. Обязательно просмотрите параметры, доступные вам в этом файле. По умолчанию Laravel использует драйвер сессий file, который подходит для большинства приложений. Для увеличения производительности сессий в продакшне вы можете использовать драйверы memcached или redis.
Настройки драйвера определяют, где будут храниться данные сессии для каждого запроса. Laravel поставляется с целым набором замечательных драйверов:
* file — данные хранятся в storage/framework/sessions.
* cookie — данные хранятся в виде зашифрованных cookie.
* database — хранение данных в реляционной БД.
* memcached / redis — для хранения используются эти быстрые кэширующие хранилища.
* array — сессии хранятся в виде PHP-массивов и не будут сохраняться между запросами.
Внимание: драйвер array обычно используется во время тестирования, так как он на самом деле не сохраняет данные для последующих запросов.
При использовании драйвера сессий database вам необходимо создать таблицу для хранения данных сессии. Ниже — пример такого объявления с помощью конструктора таблиц:
```
Schema::create('sessions', function ($table) {
    $table->string('id')->unique();
    $table->integer('user_id')->nullable();
    $table->string('ip_address', 45)->nullable();
    $table->text('user_agent')->nullable();
    $table->text('payload');
    $table->integer('last_activity');
});
```
Для создания этой миграции вы можете использовать Artisan-команду `shsession:table` : > shphp artisan session:table php artisan migrate
Чтобы использовать сессии Redis в Laravel, необходимо установить пакет predis/predis (~1.0) с помощью Composer. Вы можете настроить подключения Redis в файле настроек database. А в файле session в параметре connection можно указать конкретное подключение Redis для сессии.
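Фрагмент файла config/session.php для такого случая может выглядеть примерно так (имя подключения условное):

```
// config/session.php, фрагмент
'driver' => 'redis',
'connection' => 'default',
```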
## Использование сессий
### Получение данных
В Laravel есть два основных способа работы с данными сессии: с помощью глобального вспомогательного метода `PHPsession()` и через экземпляр Request. Сначала давайте обратимся к сессии через экземпляр Request, который может быть указан в качестве зависимости в методе контроллера. Учтите, зависимости метода контроллера автоматически внедряются при помощи сервис-контейнера Laravel: `<?php` namespace App\Http\Controllers; use Illuminate\Http\Request; use App\Http\Controllers\Controller; class UserController extends Controller { /** * Показать профиль данного пользователя. * * @param Request $request * @param int $id * @return Response */ public function show(Request $request, $id) { $value = $request->session()->get('key'); // } } При получении значения из сессии, вы можете передать значение по умолчанию вторым аргументом метода `PHPget()` . Это значение будет возвращено, если указанного ключа нет в сессии. Если вы передадите в метод замыкание в качестве значения по умолчанию, и запрашиваемого ключа не существует, то будет выполняться это замыкание и возвращаться его результат:
```
$value = $request->session()->get('key', function () {
    return 'default';
});
```

Глобальный вспомогательный метод `PHPsession()`

Также вы можете использовать глобальную PHP-функцию `PHPsession()` для извлечения и помещения данных в сессию. При вызове `PHPsession()` с одним строковым аргументом метод вернёт значение этого ключа сессии. При вызове `PHPsession()` с массивом пар ключ/значение эти значения будут сохранены в сессии:

```
// Получить кусок данных из сессии...
$value = session('key');

// Указать значение по умолчанию...
$value = session('key', 'default');

// Сохранить кусок данных в сессию...
session(['key' => 'value']);
```

Есть небольшое практическое отличие между использованием сессий через экземпляр HTTP-запроса и использованием глобального вспомогательного метода `PHPsession()` : оба способа тестируются методом `PHPassertSessionHas()` , доступным во всех ваших тест-кейсах.

Если вы хотите получить все данные из сессии, используйте метод `PHPall()` :
```
$data = $request->session()->all();
```
Определение наличия элемента в сессии
Для проверки существования значения в сессии можно использовать метод `PHPhas()` . Этот метод вернёт true, если значение существует и не равно `PHPnull` :
```
if ($request->session()->has('users')) {
    //
}
```
Для проверки существования значения в сессии, даже если оно равно `PHPnull` , можно использовать метод `PHPexists()` . Этот метод вернёт true, если значение существует:
```
if ($request->session()->exists('users')) {
    //
}
```
### Сохранение данных
Для сохранения данных в сессии обычно используются метод `PHPput()` или вспомогательный метод `PHPsession()` :
```
// Через экземпляр запроса...
$request->session()->put('key', 'value');

// Через глобальный вспомогательный метод...
session(['key' => 'value']);
```
Запись данных в массивы сессии
Метод `PHPpush()` служит для записи нового значения в элемент сессии, который является массивом. Например, если ключ user.teams содержит массив с именами команд, вы можете записать новое значение в массив вот так:
```
$request->session()->push('user.teams', 'developers');
```
Метод `PHPpull()` прочитает и удалит элемент из сессии за одно действие:
```
$value = $request->session()->pull('key', 'default');
```
## Одноразовые данные
Иногда вам нужно сохранить переменную в сессии только для следующего запроса. Вы можете сделать это методом `PHPflash()` (flash англ. — вспышка — прим. пер.). Сохранённые этим методом данные будут доступны только во время следующего HTTP-запроса, а затем будут удалены. В основном такие данные полезны для кратковременных сообщений о состоянии:
```
$request->session()->flash('status', 'Задание выполнено успешно!');
```
```
Session::flash('key', 'value');
```
Для сохранения одноразовых данных в течение большего числа запросов используйте метод `PHPreflash()` , который оставит все эти данные для следующего запроса. А если вам надо хранить только определённые данные, то используйте метод `PHPkeep()` :
```
$request->session()->reflash();

$request->session()->keep(['username', 'email']);
```
добавлено в 5.0 ()
`Session::reflash();` Session::keep(['username', 'email']);
### Удаление данных
Метод `PHPforget()` удалит куски данных из сессии. Для удаления из сессии всех данных используйте метод `PHPflush()` :
```
$request->session()->forget('key');

$request->session()->flush();
```
### Обновление ID сессии
Обновление ID сессии часто используется для защиты приложения от злоумышленников, применяющих атаку фиксации сессии.
Laravel автоматически обновляет ID сессии во время аутентификации, если вы используете встроенный LoginController; но если вы хотите обновлять ID сессии вручную, используйте метод `PHPregenerate()` .
```
$request->session()->regenerate();
```
добавлено в 5.0 ()
С сессиями можно работать несколькими способами: с помощью метода `PHPsession()` HTTP-запросов, с помощью фасада Session, или с помощью функции `PHPsession()` . При вызове функции `PHPsession()` без аргументов она возвратит весь объект сессии. Например:
```
session()->regenerate();
```
Сохранение переменной в сессии
```
Session::put('key', 'value');

session(['key' => 'value']);
```
Добавление элемента к переменной-массиву
```
Session::push('user.teams', 'developers');
```
```
$value = Session::get('key');

$value = session('key');
```
Чтение переменной или возврат значения по умолчанию
$value = Session::get('key', function() { return 'default'; });
Прочитать переменную и забыть её
```
$value = Session::pull('key', 'default');
```
Получение всех переменных сессии
```
$data = Session::all();
```
Проверка существования переменной
```
if (Session::has('users'))
```
```
Session::forget('key');
```
`Session::flush();`
Присвоение сессии нового идентификатора
```
Session::regenerate();
```
## Добавление своих драйверов сессий
Ваш драйвер сессий должен реализовывать SessionHandlerInterface. Этот интерфейс содержит всего несколько простых методов, которые надо реализовать. Заглушка реализации MongoDB выглядит приблизительно так:
`<?php` namespace App\Extensions; class MongoHandler implements SessionHandlerInterface { public function open($savePath, $sessionName) {} public function close() {} public function read($sessionId) {} public function write($sessionId, $data) {} public function destroy($sessionId) {} public function gc($lifetime) {} }
В Laravel нет стандартного каталога для ваших расширений. Вы можете разместить их где угодно. В этом примере мы создали каталог Extensions для хранения в нём MongoHandler.
Поскольку задачи этих методов не так очевидны, давайте коротко рассмотрим каждый из них:
* Метод `PHPopen()` обычно используется в системе хранения файл-сессий. Поскольку Laravel поставляется с драйвером сессий file, вам почти никогда не потребуется делать что-либо в этом методе. Вы можете оставить его пустым как заглушку. То, что PHP требует реализовать данный метод, — это пример плохого проектирования интерфейса (обсудим это позже).
* Методом `PHPclose()` зачастую можно пренебречь, как и методом `PHPopen()` . Для большинства драйверов он не нужен.
* Метод `PHPread()` должен вернуть данные сессии по `PHP$sessionId` в виде строки. Не нужно выполнять сериализацию или другое преобразование при получении или сохранении данных сессии в ваш драйвер, поскольку Laravel выполнит сериализацию за вас.
* Метод `PHPwrite()` должен записать указанную строку `PHP$data` в соответствии с `PHP$sessionId` в какое-либо постоянное хранилище, такое как MongoDB, Dynamo и т.п. И снова, не нужно выполнять сериализацию — Laravel выполнит её за вас.
* Метод `PHPdestroy()` должен удалить из постоянного хранилища данные, соответствующие `PHP$sessionId` .
* Метод
`PHPgc()` должен удалить все данные сессий, которые старше заданного `PHP$lifetime` (который является отметкой времени UNIX). Для самоочищающихся систем, таких как Memcached и Redis, этот метод можно оставить пустым. После реализации драйвера его можно зарегистрировать в фреймворке. Для добавления дополнительных драйверов для работы с сессиями в Laravel используйте метод `PHPextend()` фасада Session. Вам надо вызвать метод `PHPextend()` из метода `PHPboot()` сервис-провайдера. Это можно сделать в имеющемся AppServiceProvider или создать абсолютно новый провайдер: `<?php` namespace App\Providers; use App\Extensions\MongoSessionStore; use Illuminate\Support\Facades\Session; //для версии 5.2 и ранее: //use Session; use Illuminate\Support\ServiceProvider; class SessionServiceProvider extends ServiceProvider { /** * Выполнение после-регистрационной загрузки сервисов. * * @return void */ public function boot() { Session::extend('mongo', function ($app) { // Возврат реализации SessionHandlerInterface... return new MongoSessionStore; }); } /** * Регистрация привязок в контейнере. * * @return void */ public function register() { // } }
Когда драйвер сессий зарегистрирован, вы можете использовать драйвер mongo в своём файле настроек config/session.php.
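Например, примерно так (остальные параметры файла опущены):

```
// config/session.php, фрагмент
'driver' => 'mongo',
```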
# Шаблоны Blade
Blade — простой, но мощный шаблонизатор, поставляемый с Laravel. В отличие от других популярных шаблонизаторов для PHP Blade не ограничивает вас в использовании чистого PHP-кода в ваших представлениях. На самом деле все представления Blade скомпилированы в чистый PHP-код и кешированы, пока в них нет изменений, а значит, Blade практически не нагружает ваше приложение. Файлы представлений Blade используют расширение .blade.php и обычно хранятся в папке resources/views.
## Наследование шаблонов
### Определение макета
Два основных преимущества использования Blade — наследование шаблонов и секции. Для начала давайте рассмотрим простой пример. Во-первых, изучим макет «главной» страницы. Поскольку многие веб-приложения используют один общий макет для разных страниц, удобно определить этот макет как одно представление Blade:
```
<!-- Хранится в resources/views/layouts/app.blade.php -->

<html>
    <head>
        <title>App Name - @yield('title')</title>
    </head>
    <body>
        @section('sidebar')
            Это главная боковая панель.
        @show

        <div class="container">
            @yield('content')
        </div>
    </body>
</html>
```
Как видите, этот файл имеет типичную HTML-разметку. Но обратите внимание на директивы @section и @yield. Директива @section, как следует из её названия, определяет секцию содержимого, а директива @yield используется для отображения содержимого данной секции.
Мы определили макет для нашего приложения, давайте определим дочернюю страницу, которая унаследует макет.
### Наследование макета
При определении дочернего представления используйте Blade-директиву @extends для указания макета, который должен быть «унаследован» дочерним представлением. Представления, которые наследуют макет Blade, могут внедрять содержимое в секции макета с помощью директив @section. Запомните, как видно из приведённого выше примера, содержимое этих секций будет отображено в макете при помощи @yield:
```
<!-- Хранится в resources/views/child.blade.php -->

@extends('layouts.app')

@section('title', 'Page Title')

@section('sidebar')
    @parent

    <p>Это дополнение к основной боковой панели.</p>
@endsection

@section('content')
    <p>Это содержимое тела страницы.</p>
@endsection
```
В этом примере секция sidebar использует директиву @parent для дополнения (а не перезаписи) содержимого к боковой панели макета. Директива @parent будет заменена содержимым макета при отрисовке представления.
Blade-представления могут быть возвращены из маршрутов при помощи глобальной вспомогательной функции `PHPview()` :
```
Route::get('blade', function () {
    return view('child');
});
```
## Отображение данных
Вы можете отобразить данные, переданные в ваши Blade-представления, обернув переменную в фигурные скобки. Например, для такого маршрута:
```
Route::get('greeting', function () {
    return view('welcome', ['name' => 'Samantha']);
});
```
Вы можете отобразить содержимое переменной name вот так:
`Hello, {{ $name }}.`
Вы не ограничены отображением только содержимого переменных, передаваемых в представление. Вы также можете выводить результаты любых PHP-функций. На самом деле, вы можете поместить любой необходимый PHP-код в оператор вывода Blade:
```
The current UNIX timestamp is {{ time() }}.
```
Blade-оператор `PHP{{ }}` автоматически отправляется через PHP-функцию `PHPhtmlentities()` для предотвращения XSS-атак.
Вывод переменных после проверки на существование
Вместо написания тернарного оператора вида `{{ isset($name) ? $name : 'Default' }}` Blade позволяет вам использовать удобное сокращение, которое будет скомпилировано в такой тернарный оператор:
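В Blade для Laravel 5.x такое сокращение записывается через `or` :

```
{{ $name or 'Default' }}
```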
Если переменная $name имеет значение, то оно будет отображено, иначе будет выведено слово Default.
По умолчанию Blade-оператор `PHP{{ }}` автоматически отправляется через PHP-функцию `PHPhtmlentities()` для предотвращения XSS-атак. Если вы не хотите экранировать данные, используйте такой синтаксис:
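Вывод без экранирования записывается с восклицательными знаками внутри фигурных скобок:

```
Hello, {!! $name !!}.
```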
Будьте очень осторожны и экранируйте переменные, которые содержат ввод от пользователя. Всегда используйте экранирование синтаксисом с двойными скобками, чтобы предотвратить XSS-атаки при отображении предоставленных пользователем данных.
### Blade и JavaScript-фреймворки
Поскольку многие JavaScript-фреймворки тоже используют фигурные скобки для обозначения того, что данное выражение должно быть отображено в браузере, то вы можете использовать символ @, чтобы указать механизму отрисовки Blade, что выражение должно остаться нетронутым. Например:
`<h1>Laravel</h1>` Hello, @{{ name }}. В этом примере Blade удалит символ @, но выражение `PHP{{ name }}` останется нетронутым, что позволит вашему JavaScript-фреймворку отрисовать его вместо Blade.
добавлено в 5.3 ()
Если вы выводите JavaScript-переменные в большой части вашего шаблона, вы можете обернуть HTML директивой `PHP@verbatim` , тогда вам не нужно будет ставить символ `PHP@` перед каждым оператором вывода Blade: `@verbatim` <div class="container"> Hello, {{ name }}. </div> @endverbatim
## Управляющие конструкции
В дополнение к наследованию шаблонов и отображению данных Blade предоставляет удобные сокращения для распространенных управляющих конструкций PHP, таких как условные операторы и циклы. Эти сокращения обеспечивают очень чистый и краткий способ работы с управляющими конструкциями PHP и при этом остаются очень похожими на свои PHP-прообразы.
### Оператор If
Вы можете конструировать оператор `PHPif` при помощи директив `PHP@if` , `PHP@elseif` , `PHP@else` и `PHP@endif` . Эти директивы работают идентично своим PHP-прообразам:
@if (count($records) === 1) Здесь есть одна запись! @elseif (count($records) > 1) Здесь есть много записей! @else Здесь нет записей! @endif Для удобства Blade предоставляет и директиву `PHP@unless` :
```
@unless (Auth::check())
    Вы не вошли в систему.
@endunless
```
добавлено в 5.2 ()
### Циклы
В дополнение к условным операторам Blade предоставляет простые директивы для работы с конструкциями циклов PHP. Данные директивы тоже идентичны их PHP-прообразам:
@for ($i = 0; $i < 10; $i++) Текущее значение: {{ $i }} @endfor @foreach ($users as $user) <p>Это пользователь {{ $user->id }}</p> @endforeach @forelse($users as $user) <li>{{ $user->name }}</li> @empty <p>Нет пользователей</p> @endforelse @while (true) <p>Это будет длиться вечно.</p> @endwhile
При работе с циклами вы можете использовать переменную loop для получения полезной информации о цикле, например, находитесь ли вы на первой или последней итерации цикла.
При работе с циклами вы также можете закончить цикл или пропустить текущую итерацию:
@foreach ($users as $user) @if ($user->type == 1) @continue @endif <li>{{ $user->name }}</li> @if ($user->number == 5) @break @endif @endforeach
Также можно включить условие в строку объявления директивы:
@foreach ($users as $user) @continue($user->type == 1) <li>{{ $user->name }}</li> @break($user->number == 5) @endforeach
добавлено в 5.3 ()
### Переменная Loop
При работе с циклами внутри цикла будет доступна переменная `PHP$loop` . Эта переменная предоставляет доступ к некоторым полезным данным, например, текущий индекс цикла, или находитесь ли вы на первой или последней итерации цикла:
@foreach ($users as $user) @if ($loop->first) Это первая итерация. @endif @if ($loop->last) Это последняя итерация. @endif <p>Это пользователь {{ $user->id }}</p> @endforeach Если вы во вложенном цикле, вы можете обратиться к переменной `PHP$loop` родительского цикла через свойство `PHPparent` :
@foreach ($users as $user) @foreach ($user->posts as $post) @if ($loop->parent->first) Это первая итерация родительского цикла. @endif @endforeach @endforeach Переменная `PHP$loop` содержит также множество других полезных свойств:
Свойство | Описание |
| --- | --- |
$loop->index | Индекс текущей итерации цикла (начинается с 0). |
$loop->iteration | Текущая итерация цикла (начинается с 1). |
$loop->remaining | Число оставшихся итераций цикла. |
$loop->count | Общее число элементов итерируемого массива. |
$loop->first | Первая ли это итерация цикла. |
$loop->last | Последняя ли это итерация цикла. |
$loop->depth | Уровень вложенности текущего цикла. |
$loop->parent | Переменная loop родительского цикла, для вложенного цикла. |
### Комментарии
Blade также позволяет вам определить комментарии в ваших представлениях. Но в отличие от HTML-комментариев, Blade-комментарии не включаются в HTML-код, возвращаемый вашим приложением:
```
{{-- Этого комментария не будет в итоговом HTML --}}
```
### PHP
В некоторых случаях бывает полезно встроить PHP-код в ваши представления. Вы можете использовать Blade-директиву `PHP@php` для выполнения блока чистого PHP в вашем шаблоне: `@php` // @endphp
Несмотря на то, что в Blade есть эта возможность, её частое использование может быть сигналом того, что у вас слишком много логики, встроенной в шаблон.
## Включение подшаблонов
Blade-директива @include позволяет вам включать Blade-представление в другое представление. Все переменные, доступные родительскому представлению, будут доступны и включаемому представлению:
`<div>` @include('shared.errors') <form> <!-- Содержимое формы --> </form> </div>

Хотя включаемое представление унаследует все данные, доступные родительскому представлению, вы также можете передать в него массив дополнительных данных:
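Например, так (имя представления здесь условное):

```
@include('view.name', ['some' => 'data'])
```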
Само собой, если вы попробуете сделать `PHP@include` представления, которого не существует, то Laravel выдаст ошибку. Если вы хотите включить представление, которого может не существовать, вам надо использовать директиву `PHP@includeIf` :
```
@includeIf('view.name', ['some' => 'data'])
```
Вам следует избегать использования констант __DIR__ и __FILE__ в ваших Blade-представлениях, поскольку они будут ссылаться на расположение кешированных, скомпилированных представлений.
### Отрисовка представлений для коллекций
Вы можете комбинировать циклы и включения в одной строке при помощи Blade-директивы @each:
```
@each('view.name', $jobs, 'job')
```
Первый аргумент — часть представления, которую надо отрисовать для каждого элемента массива или коллекции. Второй аргумент — массив или коллекция для перебора, а третий — имя переменной, которое будет назначено для текущей итерации в представлении. Например, если вы перебираете массив jobs, то скорее всего захотите обращаться к каждому элементу как к переменной job внутри вашей части представления. Ключ для текущей итерации будет доступен в виде переменной key в вашей части представления.
Вы также можете передать четвёртый аргумент в директиву @each. Этот аргумент определяет представление, которое будет отрисовано, если данный массив пуст.
```
@each('view.name', $jobs, 'job', 'view.empty')
```
## Стеки
Blade позволяет использовать именованные стеки, которые могут быть отрисованы где-нибудь ещё в другом представлении или макете. Это удобно в основном для указания любых JavaScript-библиотек, требуемых для ваших дочерних представлений:
`@push('scripts')` <script src="/example.js"></script> @endpush

«Пушить» в стек можно сколько угодно раз. Для отрисовки всего содержимого стека передайте имя стека в директиву `PHP@stack` :

`<head>` <!-- Содержимое заголовка --> @stack('scripts') </head>

## Внедрение сервисов
Директива @inject служит для извлечения сервиса из сервис-контейнера Laravel. Первый аргумент, передаваемый в @inject, это имя переменной, в которую будет помещён сервис. А второй аргумент — имя класса или интерфейса сервиса, который вы хотите извлечь:
```
@inject('metrics', 'App\Services\MetricsService')
```
<div> Месячный доход: {{ $metrics->monthlyRevenue() }}. </div>

## Расширение Blade
Blade позволяет вам определять даже свои собственные директивы с помощью метода `PHPdirective()` . Когда компилятор Blade встречает пользовательскую директиву, он вызывает предоставленный обратный вызов с содержащимся в директиве выражением. Следующий пример создаёт директиву `PHP@datetime($var)` , которая форматирует данный `PHP$var` , который должен быть экземпляром DateTime: `<?php` namespace App\Providers; use Illuminate\Support\Facades\Blade; //для версии 5.2 и ранее: //use Blade; use Illuminate\Support\ServiceProvider; class AppServiceProvider extends ServiceProvider { /** * Выполнение после-регистрационной загрузки сервисов. * * @return void */ public function boot() { Blade::directive('datetime', function ($expression) { return "<?php echo ($expression)->format('m/d/Y H:i'); ?>"; }); } /** * Регистрация привязок в контейнере. * * @return void */ public function register() { // } } Как видите, мы прицепили метод `PHPformat()` к тому выражению, которое передаётся в директиву. Поэтому финальный PHP-код, сгенерированный этой директивой, будет таким:
```
<?php echo ($var)->format('m/d/Y H:i'); ?>
```
# Шаблоны
## Шаблоны Blade
Blade — простой, но мощный шаблонизатор, входящий в состав Laravel. В отличие от шаблонов контроллеров, Blade основан на концепции наследования шаблонов и секций. Все шаблоны Blade должны иметь расширение .blade.php.
> xml<!-- Расположен в resources/views/layouts/master.blade.php --> <html> <head> <title>App Name - @yield('title')</title> </head> <body> @section('sidebar') Это - главная боковая панель. @show <div class="container"> @yield('content') </div> </body> </html>
> xml@extends('layouts.master') @section('title', 'Page Title') @section('sidebar') @parent <p>Этот элемент будет добавлен к главной боковой панели.</p> @stop @section('content') <p>Это - содержимое страницы.</p> @stop
Заметьте, что шаблоны, которые расширяют другой Blade-шаблон с помощью @extends, просто перекрывают его секции. Старое (перекрытое) содержимое может быть выведено директивой @parent, что позволяет добавлять содержимое в такие секции, как боковая панель или «подвал».
Иногда — например, когда вы не уверены, что секция была определена — вам может понадобиться указать значение по умолчанию для директивы @yield. Вы можете передать его вторым аргументом:
```
@yield('section', 'Default Content')
```
## Другие директивы Blade
`Hello, {{ $name }}.` The current UNIX timestamp is {{ time() }}.
Вывод переменных после проверки на существование
Вместо написания тернарного оператора, Blade позволяет вам использовать такое удобное сокращение:
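Например, через `or` :

```
{{ $name or 'Default' }}
```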
Вывод сырого текста в фигурных скобках
Если вам нужно вывести строку в фигурных скобках, вы можете отменить её обработку с помощью Blade, поставив перед текстом символ @:
```
@{{ Это не будет обработано с помощью Blade }}
```
Если вы не хотите экранировать данные, используйте такой синтаксис:
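Например:

```
Hello, {!! $name !!}.
```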
Внимание: будьте очень осторожны и экранируйте переменные, которые содержат ввод от пользователя. Всегда используйте двойные скобки, чтобы экранировать все HTML-сущности.
@if (count($records) === 1) Здесь есть одна запись! @elseif (count($records) > 1) Здесь есть много записей! @else Здесь нет записей! @endif @unless (Auth::check()) Вы не вошли в систему. @endunless
@for ($i = 0; $i < 10; $i++) Текущее значение: {{ $i }} @endfor @foreach ($users as $user) <p>Это пользователь {{ $user->id }}</p> @endforeach @forelse($users as $user) <li>{{ $user->name }}</li> @empty <p>Нет пользователей</p> @endforelse @while (true) <p>Это будет длиться вечно.</p> @endwhile
```
@include('view.name')
```
Вы также можете передать массив данных во включаемый шаблон:
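Например, так (имя представления условное):

```
@include('view.name', ['some' => 'data'])
```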
Для полной перезаписи можно использовать директиву @overwrite:
```
@extends('list.item.container')

@section('list.item.content')
    <p>Это - элемент типа {{ $item->type }}</p>
@overwrite
```
```
@lang('language.line')
```
```
@choice('language.line', 1)
```
```
{{-- This comment will not be present in the rendered HTML --}}
```
# Task Scheduling

In the past, you may have created a Cron entry for each scheduled task on your server. This quickly becomes a pain: your task schedule is no longer in source control, and you must SSH into your server to add the Cron entries.

Laravel's command scheduler allows you to fluently and expressively define your command schedule within Laravel itself, and only a single Cron entry is needed on your server. Your task schedule is defined in the `schedule()` method of the app/Console/Kernel.php file. To help you get started, a simple example is already defined in that method.

### Starting The Scheduler

This is the only Cron entry you need to add to your server when using the scheduler. If you do not know how to add Cron entries to your server, consider using a service such as Laravel Forge, which can manage the Cron entries for you:

```
* * * * * php /path/to/artisan schedule:run >> /dev/null 2>&1
```

This Cron will call the Laravel command scheduler every minute. When the `schedule:run` command is executed, Laravel will evaluate your scheduled tasks and run only those that are due.

## Defining Schedules

You may define all of your scheduled tasks in the `schedule()` method of the App\Console\Kernel class. To get started, let's look at an example of scheduling a task. In this example, we will schedule a `Closure` to be called every day at midnight. Within the `Closure` we will execute a database query to clear a table:

```
<?php

namespace App\Console;

use DB;
use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    /**
     * The Artisan commands provided by your application.
     *
     * @var array
     */
    protected $commands = [
        \App\Console\Commands\Inspire::class,
    ];

    /**
     * Define the application's command schedule.
     *
     * @param  \Illuminate\Console\Scheduling\Schedule  $schedule
     * @return void
     */
    protected function schedule(Schedule $schedule)
    {
        $schedule->call(function () {
            DB::table('recent_users')->delete();
        })->daily();
    }
}
```

In addition to scheduling `Closure` calls, you may also schedule Artisan commands and operating system commands. For example, you may use the `command()` method to schedule an Artisan command using either the command's name or its class:
```
$schedule->command('emails:send --force')->daily();
```
```
$schedule->command(EmailsCommand::class, ['--force'])->daily();
```

The `exec()` method may be used to issue a command to the operating system:
```
$schedule->exec('node /home/forge/script.js')->daily();
```
### Schedule Frequency Options

Of course, there is a variety of schedule frequencies that you may assign to your task:

| Method | Description |
| --- | --- |
| ->cron('* * * * *'); | Run the task on a custom Cron schedule |
| ->everyMinute(); | Run the task every minute |
| ->everyFiveMinutes(); | Run the task every five minutes |
| ->everyTenMinutes(); | Run the task every ten minutes |
| ->everyThirtyMinutes(); | Run the task every thirty minutes |
| ->hourly(); | Run the task every hour |
| ->hourlyAt(17); | Run the task every hour at 17 minutes past the hour (5.3+) |
| ->daily(); | Run the task every day at midnight |
| ->dailyAt('13:00'); | Run the task every day at 13:00 |
| ->twiceDaily(1, 13); | Run the task every day at 1:00 and 13:00 |
| ->weekly(); | Run the task every week |
| ->monthly(); | Run the task every month |
| ->monthlyOn(4, '15:00'); | Run the task on the 4th of every month at 15:00 (5.2+) |
| ->quarterly(); | Run the task every quarter (5.2+) |
| ->yearly(); | Run the task every year |
| ->timezone('America/New_York'); | Set the timezone (5.2+) |
These methods may be combined with additional constraints to create even more finely tuned schedules that only run on certain days of the week. For example, to schedule a command to run weekly on Monday:

```
// Run once per week on Monday at 13:00...
$schedule->call(function () {
    //
})->weekly()->mondays()->at('13:00');
```

```
// Run hourly from 8 AM to 5 PM on weekdays...
$schedule->command('foo')
          ->weekdays()
          ->hourly()
          ->timezone('America/Chicago')
          ->between('8:00', '17:00');
```
Below is a list of the additional schedule constraints:

| Method | Description |
| --- | --- |
| ->weekdays(); | Limit the task to weekdays |
| ->sundays(); | Limit the task to Sunday |
| ->mondays(); | Limit the task to Monday |
| ->tuesdays(); | Limit the task to Tuesday |
| ->wednesdays(); | Limit the task to Wednesday |
| ->thursdays(); | Limit the task to Thursday |
| ->fridays(); | Limit the task to Friday |
| ->saturdays(); | Limit the task to Saturday |
| ->between($start, $end); | Limit the task to run between the given start and end times (5.3+) |
| ->when(Closure); | Limit the task based on a truth test |
Between time constraints

The `between()` method may be used to limit the execution of a task based on the time of day:

```
$schedule->command('emails:send')
         ->hourly()
         ->between('7:00', '22:00');
```

Similarly, the `unlessBetween()` method can be used to exclude the execution of a task for a period of time:

```
$schedule->command('emails:send')
         ->hourly()
         ->unlessBetween('23:00', '4:00');
```

The `when()` method may be used to limit the execution of a task based on the result of a given truth test. In other words, if the given `Closure` returns true, the task will execute as long as no other constraining conditions prevent it from running:

```
$schedule->command('emails:send')->daily()->when(function () {
    return true;
});
```

(added in 5.2)

When chaining `when()` methods, the scheduled command will only execute if all of the `when()` conditions return true.
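For example, a sketch of two chained `when()` constraints (the command name and both conditions here are illustrative, not taken from the original text):

```
$schedule->command('emails:send')
         ->daily()
         ->when(function () {
             // Hypothetical configuration flag controlling the mailing
             return config('mail.scheduled_sending_enabled');
         })
         ->when(function () {
             // Skip the first day of the month
             return date('j') != 1;
         });
```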
### Preventing Task Overlaps

By default, scheduled tasks will be run even if the previous instance of the task is still running. To prevent this, you may use the `withoutOverlapping()` method:

```
$schedule->command('emails:send')->withoutOverlapping();
```

In this example, the `emails:send` Artisan command will be run every minute if it is not already running. The `withoutOverlapping()` method is especially useful if you have tasks that vary drastically in their execution time, preventing you from predicting exactly how long a given task will take.
(added in 5.3)
### Maintenance Mode

Laravel's scheduled tasks will not run when Laravel is in maintenance mode, since we don't want your tasks to interfere with any unfinished maintenance on the server. However, if you would like to force a task to run even in maintenance mode, you may use the `evenInMaintenanceMode()` method:
```
$schedule->command('emails:send')->evenInMaintenanceMode();
```
## Task Output

The Laravel scheduler provides several convenient methods for working with the output generated by scheduled tasks. First, using the `sendOutputTo()` method, you may send the output to a file for later inspection:

```
$schedule->command('emails:send')
         ->daily()
         ->sendOutputTo($filePath);
```

If you would like to append the output to the given file, you may use the `appendOutputTo()` method:

```
$schedule->command('emails:send')
         ->daily()
         ->appendOutputTo($filePath);
```

Using the `emailOutputTo()` method, you may e-mail the output to an address of your choosing. Note that the output must first be sent to a file using the `sendOutputTo()` method. Before e-mailing the output of a task, you should configure Laravel's e-mail services:

```
$schedule->command('foo')
         ->daily()
         ->sendOutputTo($filePath)
         ->emailOutputTo('<EMAIL>');
```

The `emailOutputTo()`, `sendOutputTo()` and `appendOutputTo()` methods are exclusive to the `command()` method and are not supported for `call()`.

## Task Hooks

Using the `before()` and `after()` methods, you may specify code to be executed before the scheduled task starts and after it completes:

```
$schedule->command('emails:send')
         ->daily()
         ->before(function () {
             // Task is about to start...
         })
         ->after(function () {
             // Task is complete...
         });
```

Using the `pingBefore()` and `thenPing()` methods, the scheduler can automatically ping a given URL before the task starts and after it completes. This is useful for notifying an external service, such as Laravel Envoyer, that your scheduled task has started or finished:

```
$schedule->command('emails:send')
         ->daily()
         ->pingBefore($url)
         ->thenPing($url);
```

Using either the `pingBefore($url)` or `thenPing($url)` feature requires the Guzzle HTTP library.
# Testing

Laravel is built with testing in mind. In fact, support for PHPUnit is included out of the box, and a phpunit.xml file is already set up for your application. The framework also ships with convenient helper methods that allow you to expressively test your applications.

The tests directory contains an example test file, ExampleTest.php. After installing a new Laravel application, simply run `phpunit` to run your tests.

### Environment

When running tests, Laravel will automatically set the configuration environment to testing. Laravel also automatically configures the session and cache to the array driver while testing, meaning no session or cache data will be persisted while testing.

You are free to define other testing environment configuration values as necessary. The testing environment variables may be configured in the phpunit.xml file, but make sure to clear your configuration cache using the `config:clear` Artisan command before running your tests!

### Creating & Running Tests

To create a new test case, use the `make:test` Artisan command:

```
php artisan make:test UserTest
```

This command will place a new UserTest class within your tests directory. You may then define test methods as you normally would when using PHPUnit. To run your tests, simply execute the `phpunit` command from your terminal:

```
<?php

use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class UserTest extends TestCase
{
    /**
     * A basic test example.
     *
     * @return void
     */
    public function testExample()
    {
        $this->assertTrue(true);
    }
}
```

If you define your own `setUp()` method within a test class, be sure to call `parent::setUp()`.
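A minimal sketch of such an override (the body is purely illustrative):

```
protected function setUp()
{
    parent::setUp();

    // Per-test preparation could go here, for example seeding the database:
    // $this->seed();
}
```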
Starting with Laravel 5.3 the remaining sections of this article have been split out into separate articles: Application Testing, Database Testing and Mocking. — translator's note.
## Application Testing

### Interacting With Your Application

Of course, you can do much more than simply assert that text appears in a given response. Let's take a look at some examples of clicking links and filling out forms.

```
public function testBasicExample()
{
    $this->visit('/')
         ->click('About Us')
         ->seePageIs('/about-us');
}
```

Laravel also provides several methods for testing forms. The `type()`, `select()`, `check()`, `attach()` and `press()` methods allow you to interact with all of your form's inputs. For example, let's imagine this form exists on the application's registration page:

```
<form action="/register" method="POST">
    {{ csrf_field() }}
    <!-- for 5.1 and earlier: {!! csrf_field() !!} -->

    <div>
        Name: <input type="text" name="name">
    </div>

    <div>
        <input type="checkbox" value="yes" name="terms"> Accept Terms
    </div>

    <div>
        <input type="submit" value="Register">
    </div>
</form>
```

A test that fills a form and attaches a file might look like this:

```
public function testUploadFile()
{
    $this->visit('/upload')
         ->type('File Name', 'name')
         ->attach($absolutePathToFile, 'photo')
         ->press('Upload')
         ->see('Upload Successful!');
}
```

Laravel also provides several helpers for testing JSON APIs and their responses. For example, the `get()`, `post()`, `put()`, `patch()` and `delete()` methods may be used to issue requests with various HTTP verbs. You may also easily pass data and headers to these methods. To get started, let's write a test that makes a POST request to /user and asserts that a given array is returned in JSON format:

```
<?php

class ExampleTest extends TestCase
{
    /**
     * A basic functional test example.
     *
     * @return void
     */
    public function testBasicExample()
    {
        $this->json('POST', '/user', ['name' => 'Sally'])
             ->seeJson([
                 'created' => true,
             ]);
    }
}
```

The `seeJson()` method converts the given array into JSON and then verifies that the JSON fragment occurs anywhere within the entire JSON response returned by the application. So, if there are other properties in the JSON response, this test will still pass as long as the given fragment is present.

Verifying an exact JSON match

If you would like to verify that the given array is an exact match for the JSON returned by the application, you should use the `seeJsonEquals()` method:

```
<?php

class ExampleTest extends TestCase
{
    /**
     * A basic functional test example.
     *
     * @return void
     */
    public function testBasicExample()
    {
        $this->json('POST', '/user', ['name' => 'Sally'])
             ->seeJsonEquals([
                 'created' => true,
             ]);
    }
}
```

(added in 5.2)

You may also verify that the JSON response adheres to a specific structure; the `seeJsonStructure()` method is used for this.

Laravel provides several helpers for working with the session during testing. First, you may set the session data to a given array using the `withSession()` method. This is useful for loading the session with data before issuing a test request to your application:

```
<?php

class ExampleTest extends TestCase
{
    public function testApplication()
    {
        $this->withSession(['foo' => 'bar'])
             ->visit('/');
    }
}
```

Of course, one common use of the session is for maintaining state for the authenticated user. The `actingAs()` helper provides a simple way to authenticate a given user as the current user. For example, we may use a model factory to generate and authenticate a user:

```
<?php

class ExampleTest extends TestCase
{
    public function testApplication()
    {
        $user = factory(App\User::class)->create();

        $this->actingAs($user)
             ->withSession(['foo' => 'bar'])
             ->visit('/')
             ->see('Hello, '.$user->name);
    }
}
```
(added in 5.2)

Calling a controller from a test

You may also call any controller from a test:
```
$response = $this->action('GET', 'HomeController@index');
```
```
$response = $this->action('GET', 'UserController@profile', ['user' => 1]);
```

You do not need to specify the full controller namespace when using the `action()` method. Only specify the portion of the class name that follows App\Http\Controllers. The `getContent()` method will return the string contents of the response. If your route returns a `View`, you may access it via the `$original` property:
```
$view = $response->original;
```
```
$this->assertEquals('John', $view['name']);
```

To call an HTTPS route, you may use the `callSecure()` method:
```
$response = $this->callSecure('GET', 'foo/bar');
```
## Working With Databases

Laravel also provides a variety of helpful tools to make it easier to test your database-driven applications. First, you may use the `seeInDatabase()` helper to assert that data exists in the database matching a given set of criteria. For example, if you want to verify that there is a record in the users table with an email value of <EMAIL>, you can do the following:

```
public function testDatabase()
{
    // Make call to application...

    $this->seeInDatabase('users', ['email' => '<EMAIL>']);
}
```

Of course, the `seeInDatabase()` method and other helpers like it are for convenience. You are free to use any of PHPUnit's built-in assertion methods to supplement your tests.

### Resetting The Database After Each Test

It is often useful to reset your database after each test so that data from a previous test does not interfere with subsequent tests.

One option is to roll back the database after each test and migrate it before the next test. Laravel provides a simple DatabaseMigrations trait that will automatically handle this for you. Simply use the trait on your test class:

```
<?php

use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class ExampleTest extends TestCase
{
    use DatabaseMigrations;

    /**
     * A basic functional test example.
     *
     * @return void
     */
    public function testBasicExample()
    {
        $this->visit('/')
             ->see('Laravel 5');
    }
}
```

Another option is to wrap every test case in a database transaction. Again, Laravel provides a convenient DatabaseTransactions trait that will automatically handle this for you:

```
<?php

use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class ExampleTest extends TestCase
{
    use DatabaseTransactions;

    /**
     * A basic functional test example.
     *
     * @return void
     */
    public function testBasicExample()
    {
        $this->visit('/')
             ->see('Laravel 5');
    }
}
```

This trait will only wrap the default database connection in a transaction.

When testing, it is common to need to insert a few records into your database before executing your test. Instead of manually specifying the value of each column of this test data, Laravel allows you to define a default set of attributes for each of your Eloquent models using "factories". To get started, take a look at the database/factories/ModelFactory.php file in your application. Out of the box, this file contains one factory definition:

```
$factory->define(App\User::class, function (Faker\Generator $faker) {
    return [
        'name' => $faker->name,
        'email' => $faker->email,
        'password' => bcrypt(str_random(10)),
        'remember_token' => str_random(10),
    ];
});
```

Within the Closure which serves as the factory definition, you may return the default test values of all attributes on the model. The Closure receives an instance of the Faker PHP library, which allows you to conveniently generate various kinds of random data for testing.

Of course, you are free to add your own additional factories to the ModelFactory.php file. You may also create additional factory files for better organization. For example, you could create UserFactory.php and CommentFactory.php files within your database/factories directory.

Sometimes you may need multiple factories for the same Eloquent model class. For example, you might want a factory for "Administrator" users in addition to normal users. You may define these factories using the `defineAs()` method:
```
$factory->defineAs(App\User::class, 'admin', function ($faker) {
    return [
        'name' => $faker->name,
        'email' => $faker->email,
        'password' => str_random(10),
        'remember_token' => str_random(10),
        'admin' => true,
    ];
});
```

Instead of duplicating all of the attributes from your base user factory, you may use the `raw()` method to retrieve the base attributes. Once you have the attributes, simply supplement them with any additional values you require:
```
$factory->defineAs(App\User::class, 'admin', function ($faker) use ($factory) {
    $user = $factory->raw(App\User::class);

    return array_merge($user, ['admin' => true]);
});
```

Once you have defined your factories, you may use them in your tests or database seed files to generate model instances using the global `factory()` function. Let's take a look at a few examples of creating models. First, we'll use the `make()` method, which creates models but does not save them to the database:
```
public function testDatabase()
{
    $user = factory(App\User::class)->make();

    // Use model in tests...
}
```

If you would like to override some of the default values of your models, you may pass an array of values to the `make()` method. Only the specified values will be replaced, while the rest keep their factory-defined defaults:

```
$user = factory(App\User::class)->make([
    'name' => 'Abigail',
]);
```

You may also create a collection of models or create models of a given type:
```
// Create three App\User instances...
$users = factory(App\User::class, 3)->make();

// Create an "admin" App\User instance...
$user = factory(App\User::class, 'admin')->make();

// Create three "admin" App\User instances...
$users = factory(App\User::class, 'admin', 3)->make();
```

The `create()` method not only creates the model instances, but also saves them to the database using Eloquent's `save()` method:
```
public function testDatabase()
{
    $user = factory(App\User::class)->create();

    // Use model in tests...
}
```

You may override attributes on the model by passing an array to the `create()` method:

```
$user = factory(App\User::class)->create([
    'name' => 'Abigail',
]);
```

You may even persist multiple models to the database. In this example, we'll also attach a relation to the created models. When using the `create()` method to create multiple models, a collection instance is returned, allowing you to use any of the convenient collection functions, such as `each()`:

```
$users = factory(App\User::class, 3)
           ->create()
           ->each(function ($u) {
               $u->posts()->save(factory(App\Post::class)->make());
           });
```
(added in 5.2)

Relations & attribute closures

You may also attach relationships to models using Closure attributes in your factory definitions. For example, if you would like to create a User instance when a Post is created, you may do the following:

```
$factory->define(App\Post::class, function ($faker) {
    return [
        'title' => $faker->title,
        'content' => $faker->paragraph,
        'user_id' => function () {
            return factory(App\User::class)->create()->id;
        },
    ];
});
```

These Closures also receive the evaluated attribute array of the factory that contains them:
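A sketch of an attribute closure that reads another attribute from that evaluated array (the `user_type` column is hypothetical and only for illustration):

```
$factory->define(App\Post::class, function ($faker) {
    return [
        'title' => $faker->title,
        'content' => $faker->paragraph,
        'user_id' => function () {
            return factory(App\User::class)->create()->id;
        },
        // The closure receives the attributes evaluated so far:
        'user_type' => function (array $post) {
            return App\User::find($post['user_id'])->type;
        },
    ];
});
```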
## Mocking

### Mocking Events

If you are making heavy use of Laravel's event system, you may wish to silence or mock certain events while testing. For example, if you are testing user registration, you probably do not want all of the UserRegistered event handlers firing, since these may send "welcome" e-mails and so on.

Laravel provides a convenient `expectsEvents()` method that verifies the expected events are fired, but prevents any handlers for those events from running:

```
<?php

class ExampleTest extends TestCase
{
    public function testUserRegistration()
    {
        $this->expectsEvents(App\Events\UserRegistered::class);

        // Test user registration code...
    }
}
```

(added in 5.2)

The `doesntExpectEvents()` method may be used to verify that the given events are not fired:

```
<?php

class ExampleTest extends TestCase
{
    public function testPodcastPurchase()
    {
        $this->expectsEvents(App\Events\PodcastWasPurchased::class);

        $this->doesntExpectEvents(App\Events\PaymentWasDeclined::class);

        // Test purchasing podcast...
    }
}
```

If you would like to prevent all event handlers from running, you may use the `withoutEvents()` method:

```
<?php

class ExampleTest extends TestCase
{
    public function testUserRegistration()
    {
        $this->withoutEvents();

        // Test user registration code...
    }
}
```
### Mocking Jobs

Sometimes you may wish to simply test that specific jobs are dispatched by your controllers when making requests to your application. This allows you to test your routes / controllers in isolation, set apart from your job's logic, and you can then test the job itself in a separate test class.

Laravel provides a convenient `expectsJobs()` method that verifies the expected jobs are dispatched, but the jobs themselves will not be executed:

```
<?php

class ExampleTest extends TestCase
{
    public function testPurchasePodcast()
    {
        $this->expectsJobs(App\Jobs\PurchasePodcast::class);

        // Test purchase podcast code...
    }
}
```

This method only detects jobs that are dispatched via the DispatchesJobs trait's dispatch methods or the `dispatch()` helper function. It does not detect a job that is sent directly to `Queue::push`.

### Mocking Facades

When testing, you may often want to mock a call to one of Laravel's facades. For example, consider the following controller action:

```
<?php

namespace App\Http\Controllers;

use Cache;
//for 5.1 and earlier:
//use Illuminate\Routing\Controller;

class UserController extends Controller
{
    /**
     * Show a list of all users of the application.
     *
     * @return Response
     */
    public function index()
    {
        $value = Cache::get('key');

        //
    }
}
```

We can mock the call to the Cache facade by using the `shouldReceive()` method, which will return an instance of a Mockery mock. Since facades are actually resolved and managed by the Laravel service container, they are much more testable than a typical static class. For example, let's mock our call to the Cache facade:

```
<?php

class FooTest extends TestCase
{
    public function testGetIndex()
    {
        Cache::shouldReceive('get')
                    ->once()
                    ->with('key')
                    ->andReturn('value');

        $this->visit('/users')->see('value');
    }
}
```

You should not mock the Request facade. Instead, pass the input you desire into an HTTP helper method such as `call()` or `post()` when running your test.
(added in 5.0)

Consider the following controller action:
```
public function getIndex()
{
    Event::fire('foo', ['name' => 'Dayle']);

    return 'All done!';
}
```

We can mock the call to the Event class using the facade's `shouldReceive()` method, which will return an instance of a Mockery mock.
```
public function testGetIndex()
{
    Event::shouldReceive('fire')->once()->with('foo', ['name' => 'Dayle']);

    $this->call('GET', '/');
}
```

You should not mock the Request object. Instead, pass the input you desire into the `call()` method when running your test.
## Helper Methods

The TestCase class contains several helper methods to make testing your application easier.

Setting and flushing the session from a test
```
$this->session(['foo' => 'bar']);
```
```
$this->flushSession();
```

Setting the currently authenticated user

You may set the currently authenticated user using the `be()` method:
```
$user = new User(['name' => 'John']);
```
```
$this->be($user);
```

Seeding the database from a test

You may seed your database with initial data from a test using the `seed()` method:

```
$this->seed();

$this->seed('DatabaseSeeder');
```

More information on seeding is available in the migrations documentation.

## Refreshing The Application

As you may already know, you can access your application's service container via `$this->app` from any test method. This service container instance is refreshed for each test class. If you wish to manually force the application to be refreshed for a given method, you may use the `refreshApplication()` method from within that test method. This will reset any extra bindings, such as mocks, that have been placed in the service container since the test started running.
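A minimal sketch of using it inside a test method (the test name and comments are illustrative):

```
public function testSomethingWithAFreshContainer()
{
    // ... assertions that relied on bindings mocked into the container ...

    // Discard those extra bindings and rebuild the application instance:
    $this->refreshApplication();
}
```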
# Validation

Laravel provides several different approaches to validating your application's incoming data. By default, Laravel's base controller class uses the ValidatesRequests trait, which provides a convenient method to validate incoming HTTP requests with a variety of powerful validation rules.

## Validation Quickstart

To learn about Laravel's powerful validation features, let's look at a complete example of validating a form and displaying the error messages back to the user.

### Defining The Routes

First, let's assume we have the following routes defined in our routes/web.php file:
```
Route::get('post/create', 'PostController@create');
```
```
Route::post('post', 'PostController@store');
```

The GET route will display a form for the user to create a new blog post, while the POST route will store the new post in the database.

### Creating The Controller

Next, let's take a look at a simple controller that handles these routes. We'll leave the `store()` method empty for now:

```
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Http\Controllers\Controller;

class PostController extends Controller
{
    /**
     * Show the form to create a new blog post.
     *
     * @return Response
     */
    public function create()
    {
        return view('post.create');
    }

    /**
     * Store a new blog post.
     *
     * @param  Request  $request
     * @return Response
     */
    public function store(Request $request)
    {
        // Validate and store the blog post...
    }
}
```

### Writing The Validation Logic

Now we are ready to fill in our `store()` method with the logic to validate the new blog post. If you examine your application's base controller class (App\Http\Controllers\Controller), you will see that it uses the ValidatesRequests trait, which provides a convenient `validate()` method to all of your controllers.

The `validate()` method accepts an incoming HTTP request and a set of validation rules. If the validation rules pass, your code will keep executing normally. However, if validation fails, an exception will be thrown and the proper error response will automatically be sent back to the user. In the case of a traditional HTTP request, a redirect response will be generated, while a JSON response will be sent for AJAX requests.

To get a better understanding of the `validate()` method, let's jump back to the `store()` method:

```
/**
 * Store a new blog post.
 *
 * @param  Request  $request
 * @return Response
 */
public function store(Request $request)
{
    $this->validate($request, [
        'title' => 'required|unique:posts|max:255',
        'body' => 'required',
    ]);

    // The blog post is valid, store it in the database...
}
```

As you can see, we simply passed the incoming HTTP request and the desired validation rules into the `validate()` method. If the validation fails, the proper response is generated. If the validation passes, our controller will continue executing normally.
(added in 5.2)

Stopping on the first validation failure

Sometimes you may wish to stop running validation rules on an attribute after the first validation failure. To do so, assign the bail rule to the attribute:

```
$this->validate($request, [
    'title' => 'bail|required|unique:posts|max:255',
    'body' => 'required',
]);
```

In this example, if the required rule on the title attribute fails, the unique rule will not be checked. Rules will be validated in the order they are assigned.

A note on nested attributes

If your HTTP request contains "nested" parameters, you may specify them in your validation rules using "dot" syntax:

```
$this->validate($request, [
    'title' => 'required|unique:posts|max:255',
    'author.name' => 'required',
    'author.description' => 'required',
]);
```
### Displaying The Validation Errors

So, what if the incoming request parameters do not pass the given validation rules? As mentioned previously, Laravel will automatically redirect the user back to their previous location. In addition, all of the validation errors will automatically be flashed to the session.

Note that we did not have to explicitly bind the error messages to the view in our GET route. This is because Laravel checks for errors in the session data and automatically binds them to the view if they are available. So it is important to remember that an `$errors` variable will always be available in all of your views on every request, allowing you to conveniently assume it is defined and can be safely used. The `$errors` variable will be an instance of Illuminate\Support\MessageBag; for more information on working with this object, check out its documentation. The `$errors` variable is bound to the view by the Illuminate\View\Middleware\ShareErrorsFromSession middleware, which is part of the web middleware group. So, in our example, the user will be redirected to our controller's `create()` method when validation fails, allowing us to display the error messages in the view:
```
<!-- /resources/views/post/create.blade.php -->

<h1>Create Post</h1>

@if (count($errors) > 0)
    <div class="alert alert-danger">
        <ul>
            @foreach ($errors->all() as $error)
                <li>{{ $error }}</li>
            @endforeach
        </ul>
    </div>
@endif

<!-- Create Post Form -->
```

Customizing the flashed error format

If you wish to customize the format of the validation errors that are flashed to the session when validation fails, override formatValidationErrors on your base controller. Don't forget to import the Illuminate\Contracts\Validation\Validator class (for Laravel 5.0, Illuminate\Validation\Validator) at the top of the file:

```
<?php

namespace App\Http\Controllers;

use Illuminate\Foundation\Bus\DispatchesJobs;
use Illuminate\Contracts\Validation\Validator;
use Illuminate\Routing\Controller as BaseController;
use Illuminate\Foundation\Validation\ValidatesRequests;

abstract class Controller extends BaseController
{
    use DispatchesJobs, ValidatesRequests;

    /**
     * {@inheritdoc}
     */
    protected function formatValidationErrors(Validator $validator)
    {
        return $validator->errors()->all();
    }
}
```

In this example we used a traditional form to send data to the application. However, many applications use AJAX requests. When using the `validate()` method during an AJAX request, Laravel will not generate a redirect response. Instead, Laravel generates a JSON response containing all of the validation errors. This JSON response will be sent with a 422 HTTP status code.
## Form Request Validation

### Creating Form Requests

For more complex validation scenarios, you may wish to create a "form request". Form requests are custom request classes that contain validation logic. To create a form request class, use the `make:request` Artisan command:

```
php artisan make:request StoreBlogPost
```

The generated class will be placed in the app/Http/Requests directory. If this directory does not exist, it will be created when you run the `make:request` command. Let's add a few validation rules to the `rules()` method:

```
/**
 * Get the validation rules that apply to the request.
 *
 * @return array
 */
public function rules()
{
    return [
        'title' => 'required|unique:posts|max:255',
        'body' => 'required',
    ];
}
```

So, how are the validation rules evaluated? All you need to do is type-hint the request on your controller method. The incoming form request is validated before the controller method is called, meaning you do not need to clutter your controller with any validation logic:

```
/**
 * Store the incoming blog post.
 *
 * @param  StoreBlogPost  $request
 * @return Response
 */
public function store(StoreBlogPost $request)
{
    // The incoming request is valid...
}
```

If validation fails, a redirect response will be generated to send the user back to their previous location. The errors will also be flashed to the session so they are available for display. If the request was an AJAX request, an HTTP response with a 422 status code will be returned to the user, including a JSON representation of the validation errors.

### Authorizing Form Requests

The form request class also contains an `authorize()` method. Within this method, you may check if the authenticated user actually has the authority to update a given resource. For example, you may determine if a user actually owns a comment they are attempting to update:

```
/**
 * Determine if the user is authorized to make this request.
 *
 * @return bool
 */
public function authorize()
{
    $comment = Comment::find($this->route('comment'));

    return $comment && $this->user()->can('update', $comment);
}
```

Since all form requests extend the base Laravel request class, we may use the `user()` method to access the currently authenticated user. Also note the call to the `route()` method in this example. This method grants you access to the URI parameters defined on the route being called, such as the `{comment}` parameter in the example below:
```
Route::post('comment/{comment}');
```
If the `authorize()` method returns false, an HTTP response with a 403 status code will automatically be returned and your controller method will not execute. If you plan to have authorization logic in another part of your application, simply return true from the `authorize()` method:

```
/**
 * Determine if the user is authorized to make this request.
 *
 * @return bool
 */
public function authorize()
{
    return true;
}
```

### Customizing The Flashed Error Format

If you wish to customize the format of the validation errors that are flashed to the session when validation fails, override `formatErrors()` on your base request (App\Http\Requests\Request). Don't forget to import the Illuminate\Contracts\Validation\Validator class at the top of the file:

```
/**
 * {@inheritdoc}
 */
protected function formatErrors(Validator $validator)
{
    return $validator->errors()->all();
}
```

### Customizing The Error Messages

You may customize the error messages used by the form request by overriding the `messages()` method. This method should return an array of attribute / rule pairs and their corresponding error messages:

```
/**
 * Get the error messages for the defined validation rules.
 *
 * @return array
 */
public function messages()
{
    return [
        'title.required' => 'A title is required',
        'body.required' => 'A message is required',
    ];
}
```

## Manually Creating Validators

If you do not want to use the `validate()` method of the ValidatesRequests trait, you may create a validator instance manually using the Validator facade. The `make()` method on the facade generates a new validator instance:

```
<?php

namespace App\Http\Controllers;

use Validator;
use Illuminate\Http\Request;
use App\Http\Controllers\Controller;

class PostController extends Controller
{
    /**
     * Store a new blog post.
     *
     * @param  Request  $request
     * @return Response
     */
    public function store(Request $request)
    {
        $validator = Validator::make($request->all(), [
            'title' => 'required|unique:posts|max:255',
            'body' => 'required',
        ]);

        if ($validator->fails()) {
            return redirect('post/create')
                        ->withErrors($validator)
                        ->withInput();
        }

        // Store the blog post...
    }
}
```

The first argument passed to the `make()` method is the data under validation. The second argument is the validation rules that should be applied to the data. If the request fails validation, you may use the `withErrors()` method to flash the error messages to the session. When using this method, the `$errors` variable will automatically be shared with your views after redirection, allowing you to easily display them back to the user. The `withErrors()` method accepts a validator, a MessageBag, or a PHP array.
(added in 5.3)

### Automatic Redirection

If you would like to create a validator instance manually but still take advantage of the automatic redirection offered by the ValidatesRequests trait, you may call the `validate()` method on an existing validator instance. If validation fails, the user will automatically be redirected or, in the case of an AJAX request, a JSON response will be returned:
```
Validator::make($request->all(), [
    'title' => 'required|unique:posts|max:255',
    'body' => 'required',
])->validate();
```
(added in 5.0)

Using arrays to specify rules

Multiple rules may be delimited using either a "pipe" character (|) or as separate elements of an array.

```
$validator = Validator::make(
    ['name' => 'Vanya'],
    ['name' => ['required', 'min:5']]
);
```

```
$validator = Validator::make(
    [
        'name' => 'Vanya',
        'password' => 'badpassword',
        'email' => '<EMAIL>'
    ],
    [
        'name' => 'required',
        'password' => 'required|min:8',
        'email' => 'required|email|unique:users'
    ]
);
```

Once a `Validator` instance has been created, the `fails()` (or `passes()`) method may be used to perform the validation.
```
if ($validator->fails()) {
    // The given data did not pass validation
}
```

If the validator found errors, you may retrieve its messages like this:
```
$messages = $validator->messages();
```
You may also retrieve an array of the rules that failed validation, without the messages themselves, using the `failed()` method:
```
$failed = $validator->failed();
```
### Named Error Bags

If you have multiple forms on a single page, you may wish to name the MessageBag of errors so you can retrieve the error messages for a specific form. Simply pass a name as the second argument to `withErrors()`:
```
return redirect('register')
            ->withErrors($validator, 'login');
```

You may then access the named MessageBag instance from the `$errors` variable:
```
{{ $errors->login->first('email') }}
```
### After Validation Hook

The validator also allows you to attach callbacks to be run after validation is completed. This allows you to easily perform further validation and even add more error messages to the message collection. To get started, use the `after()` method on a `Validator` instance:
```
$validator = Validator::make(...);

$validator->after(function ($validator) {
    if ($this->somethingElseIsInvalid()) {
        $validator->errors()->add('field', 'Something is wrong with this field!');
    }
});

if ($validator->fails()) {
    //
}
```
## Working With Error Messages

After calling the `errors()` method (`messages()` in Laravel 5.0) on a Validator instance, you will receive an Illuminate\Support\MessageBag instance, which has a variety of convenient methods for working with error messages. The `$errors` variable that is automatically made available to all views is also an instance of the MessageBag class.
Retrieving the first error message for a field

To retrieve the first error message for a given field, use the `first()` method:

```
$errors = $validator->errors();

echo $errors->first('email');
```

Retrieving all error messages for a field

To retrieve an array of all the messages for a given field, use the `get()` method:
```
foreach ($errors->get('email') as $message) {
    //
}
```
Retrieving all error messages for all fields

To retrieve an array of all messages for all fields, use the `all()` method:
```
foreach ($errors->all() as $message) {
    //
}
```
Determining if messages exist for a field

To determine if any error messages exist for a given field, use the `has()` method:
```
if ($messages->has('email')) {
    //
}
```
Retrieving an error message with a given format
```
echo $messages->first('email', '<p>:message</p>');
```
By default, messages are formatted using syntax compatible with Bootstrap.

Retrieving all error messages with a given format
```
foreach ($messages->all('<li>:message</li>') as $message) {
    //
}
```
### Custom Error Messages

If needed, you may use custom error messages for validation instead of the defaults. There are several ways to specify custom messages. First, you may pass the custom messages as the third argument to the `Validator::make()` method:

```
$messages = [
    'required' => 'The :attribute field is required.',
];

$validator = Validator::make($input, $rules, $messages);
```

In this example, the :attribute placeholder will be replaced by the actual name of the field under validation. You may also utilize other placeholders in validation messages. For example:

```
$messages = [
    'same' => 'The :attribute and :other must match.',
    'size' => 'The :attribute must be exactly :size.',
    'between' => 'The :attribute must be between :min and :max.',
    'in' => 'The :attribute must be one of the following types: :values',
];
```

Specifying a custom message for a given attribute

Sometimes you may wish to specify a custom error message only for a specific field. You may do so using "dot" notation. Specify the attribute's name first, followed by the rule:

```
$messages = [
    'email.required' => 'We need to know your e-mail address!',
];
```

Specifying custom messages in language files

In most cases, you will specify your custom messages in a language file instead of passing them directly to the Validator. To do so, add your messages to the custom array in the resources/lang/xx/validation.php language file:

```
'custom' => [
    'email' => [
        'required' => 'We need to know your e-mail address!',
    ],
],
```
(added in 5.3)
## Available Validation Rules

### accepted

The field under validation must be yes, on, 1, or true. This is useful for validating "Terms of Service" acceptance.

### active_url

The field under validation must have a valid A or AAAA record according to the dns_get_record PHP function.

In earlier versions of Laravel, the field had to be a valid URL according to the checkdnsrr PHP function.

### after:date

The field under validation must be a date after the given date. The strings are converted to dates by the strtotime PHP function:
```
'start_date' => 'required|date|after:tomorrow'
```
Instead of passing a date string to be evaluated by strtotime, you may specify another field to compare against the date:
```
'finish_date' => 'required|date|after:start_date'
```
### alpha

The field under validation must contain only alphabetic characters.

### alpha_dash

The field under validation may contain only alphabetic characters, numbers, underscores (_) and dashes (-).

### alpha_num

The field under validation must contain only alphabetic characters and numbers.

### array

The field under validation must be a PHP array.

### before:date

The field under validation must be a date preceding the given date. The strings are converted to dates by the strtotime PHP function.

### between:min,max

The field under validation must have a size between the given min and max. Strings, numerics, and files are evaluated in the same fashion as the `size` rule.

### boolean

The field under validation must be able to be cast as a boolean. Accepted input are true, false, 1, 0, "1", and "0".

### confirmed

The field under validation must have a matching field with the same name plus _confirmation. For example, if the field under validation is password, a matching password_confirmation field must be present in the input.
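For example, a typical rule set for a password field using confirmed (the field name is illustrative):

```
'password' => 'required|confirmed|min:8',
```

This passes only when the request also contains a matching password_confirmation value.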
### date

The field under validation must be a valid date according to the strtotime PHP function.

### date_format:format

The field under validation must match the given format. You should use either date or date_format when validating a field, not both.

### different:field

The field under validation must have a different value than field.

### digits:value

The field under validation must be numeric and must have an exact length of value.

### digits_between:min,max

The field under validation must have a length between the given min and max.

### dimensions

The file under validation must be an image meeting the dimension constraints specified by the rule's parameters:
```
'avatar' => 'dimensions:min_width=100,min_height=200'
```
Available constraints are: min_width, max_width, min_height, max_height, width, height, ratio.

The ratio constraint should be represented as width divided by height. This can be specified either by a statement like `3/2` or a float like `1.5`:
```
'avatar' => 'dimensions:ratio=3/2'
```
### distinct

When working with arrays, the field under validation must not have any duplicate values.
```
'foo.*.id' => 'distinct'
```
### email

The field under validation must be formatted as an e-mail address.

### exists:table,column

The field under validation must exist in the given database table.
```
'state' => 'exists:states'
```
```
'state' => 'exists:states,abbreviation'
```
You may also specify more conditions that will be added as "where" clauses to the query:
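A sketch of the string syntax with one extra column/value pair appended (the column name and value here are illustrative):

```
'email' => 'exists:staff,email,account_id,1'
```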
This condition may be negated using the ! sign:
```
'email' => 'exists:staff,email,role,!admin'
```
If you pass NULL or NOT_NULL as the "where" value, the query will check that the database value is NULL or NOT NULL respectively:
```
'email' => 'exists:staff,email,deleted_at,NULL'
```
```
'email' => 'exists:staff,email,deleted_at,NOT_NULL'
```
(added in 5.3)

If you would like to customize the query executed by the validation rule, you may use the Rule class to fluently define the rule. In this example, we'll also specify the validation rules as an array instead of using the `|` character to delimit them:

```
use Illuminate\Validation\Rule;

Validator::make($data, [
    'email' => [
        'required',
        Rule::exists('staff')->where(function ($query) {
            $query->where('account_id', 1);
        }),
    ],
]);
```
### file

The field under validation must be a successfully uploaded file.

### filled

The field under validation must not be empty when it is present.

### image

The file under validation must be an image (JPEG, PNG, BMP, GIF, or SVG).
### in:foo,bar,...
### integer

The field under validation must be an integer.

### ip

### json

The field under validation must be a valid JSON string.

### max:value

The field under validation must be less than or equal to a maximum value. Strings, numerics, and files are evaluated in the same fashion as the `size` rule.
(added in 5.2)

### mimetypes:text/plain,…

The file under validation must match one of the given MIME types:
```
'video' => 'mimetypes:video/avi,video/mpeg,video/quicktime'
```
To determine the MIME type of the uploaded file, the framework will read the file's contents and attempt to guess the MIME type, which may differ from the MIME type supplied by the client.

### mimes:foo,bar,...

The file under validation must have a MIME type corresponding to one of the listed extensions.
```
'photo' => 'mimes:jpeg,bmp,png'
```
Even though you only need to specify the extensions, this rule actually validates the MIME type of the file by reading the file's contents and guessing its MIME type.

A full listing of MIME types and their corresponding extensions may be found at [https://svn.apache.org].

### min:value

The field under validation must be greater than or equal to a minimum value. Strings, numerics, and files are evaluated in the same fashion as the `size` rule.
(added in 5.3)
### not_in:foo,bar,...
### numeric

The field under validation must be numeric (an integer or a float).

### regex:pattern

The field under validation must match the given regular expression.

Note: when using this rule, it may be necessary to specify the other rules as elements of an array instead of using pipe delimiters, especially if the regular expression contains a pipe character (|).
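For example, a rule set specified as an array so the pipe inside the pattern is not mis-parsed (the pattern itself is only illustrative):

```
'username' => ['required', 'regex:/^[a-z0-9_-]+$/'],
```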
### required

The field under validation must be present in the input data and not empty. A field is considered "empty" if one of the following conditions is true:

* The value is NULL
* The value is an empty string
* The value is an empty array or an empty Countable object
* The value is an uploaded file with no path
### required_if:anotherfield,value,...
### required_unless:anotherfield,value,...
### required_with:foo,bar,...
### required_with_all:foo,bar,...
### required_without:foo,bar,...
### required_without_all:foo,bar,...
### same:field

The given field must match the field under validation.

### size:value

The field under validation must have a size matching the given value. For string data, value corresponds to the number of characters. For numeric data, value corresponds to a given integer value. For an array, size corresponds to the number of array elements. For files, size corresponds to the file size in kilobytes.
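For example (field names and values are illustrative):

```
// The string must be exactly 12 characters long:
'title' => 'size:12',

// The integer must equal 10:
'seats' => 'integer|size:10',
```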
### string

The field under validation must be a string. If you would like to allow the field to also be `null`, assign the `nullable` rule to the field.

### timezone

The field under validation must be a valid timezone identifier according to the `timezone_identifiers_list` PHP function.
### unique:table,column,except,idColumn

The field under validation must be unique in the given database table. If the column option is not specified, the field name will be used.

Specifying a custom column name
```
'email' => 'unique:users,email_address'
```
Occasionally, you may need to set a custom connection for database queries made by the `Validator`. As seen above, setting `unique:users` as a validation rule will use the default database connection to query the database. To override this, specify the connection and the table name using "dot" syntax:
```
'email' => 'unique:connection.users,email_address'
```
```
$verifier = App::make('validation.presence');
$verifier->setConnection('connectionName');

$validator = Validator::make($input, [
    'name' => 'required',
    'password' => 'required|min:8',
    'email' => 'required|email|unique:users',
]);

$validator->setPresenceVerifier($verifier);
```

Forcing a unique rule to ignore a given ID

Sometimes you may wish to ignore a given ID during the unique check. For example, consider an "update profile" screen that includes the user's name, e-mail address, and location. Of course, you will want to verify that the e-mail address is unique. However, if the user only changes the name field and not the e-mail field, you do not want a validation error to be thrown because the user is already the owner of that e-mail address.
(added in 5.3)

To instruct the validator to ignore the user's ID, we'll use the Rule class to fluently define the rule. In this example, we'll also specify the validation rules as an array instead of using the `|` character to delimit them:

```
Validator::make($data, [
    'email' => [
        'required',
        Rule::unique('users')->ignore($user->id),
    ],
]);
```

If your table uses a primary key column name other than id, specify it when calling the `ignore()` method:
```
'email' => Rule::unique('users')->ignore($user->id, 'user_id')
```
You only want the error to be thrown if the user provides an e-mail address that is already used by a different user. To instruct the unique rule to ignore the user's ID, pass the ID as the third parameter:

If your table uses a primary key column name other than id, specify it as the fourth parameter:
```
'email' => 'unique:users,email_address,'.$user->id.',user_id'
```
Adding additional where clauses

## Conditional Rules

In some situations, you may wish to run validation checks against a field only if that field is present in the input array. To quickly accomplish this, add the sometimes rule to your rule list:

```
$v = Validator::make($data, [
    'email' => 'sometimes|required|email',
]);
```

In this example, the email field will only be validated if it is present in the $data array.

Sometimes you may wish to require a given field only if another field has a value greater than, say, 100. Or you may need two fields to have a given value only when another field is present. Adding these validation rules doesn't have to be a pain. First, create a `Validator` instance with your static rules that never change:

```
$v = Validator::make($data, [
    'email' => 'required|email',
    'games' => 'required|numeric',
]);
```

Now let's assume our web application is for game collectors. If a game collector registers with more than 100 games, we want them to explain why they own so many games. For example, perhaps they run a game resale shop, or maybe they just enjoy collecting. To conditionally add this requirement, we can use the `sometimes()` method on the `Validator` instance:
```
$v->sometimes('reason', 'required|max:500', function ($input) {
    return $input->games >= 100;
});
```
The first argument passed to this method is the name of the field we are conditionally validating. The second argument is the rule we want to add if the given Closure (the third argument) returns true. This method makes it a breeze to build complex conditional validations. You may even add conditional validations for several fields at once:
```
$v->sometimes(['reason', 'cost'], 'required', function ($input) {
    return $input->games >= 100;
});
```

The `$input` parameter passed to your Closure is an instance of Illuminate\Support\Fluent and may be used to access the input and files under validation.
## Validating Array Input

Validating array-based form input fields doesn't have to be a pain. For example, to validate that each e-mail in a given array input field is unique, you may do the following:

```
$validator = Validator::make($request->all(), [
    'person.*.email' => 'email|unique:users',
    'person.*.first_name' => 'required_with:person.*.last_name',
]);
```

Likewise, you may use the * character when specifying your validation messages in your language files, making it a breeze to use a single validation message for array-based fields:

```
'custom' => [
    'person.*.email' => [
        'unique' => 'Each person must have a unique e-mail address',
    ]
],
```
## Custom Validation Rules

Laravel provides a variety of helpful validation rules; however, you may wish to specify some of your own. One way to register a custom validation rule is the `Validator::extend()` method. Let's use this method within a service provider to register a custom validation rule:

```
<?php

namespace App\Providers;

use Illuminate\Support\Facades\Validator;
//for 5.2 and earlier:
//use Validator;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot()
    {
        Validator::extend('foo', function ($attribute, $value, $parameters, $validator) {
            return $value == 'foo';
        });
    }

    /**
     * Register the service provider.
     *
     * @return void
     */
    public function register()
    {
        //
    }
}
```

The custom validator Closure receives four arguments: the name of the attribute being validated (`$attribute`), the value of the attribute (`$value`), an array of parameters passed to the rule (`$parameters`), and the Validator instance. You may also pass a class method to the `extend()` method instead of a Closure:
```
Validator::extend('foo', 'FooValidator@validate');
```
Defining the error message

You will also need to define an error message for your custom rule. You can do so either by passing it in an inline custom message array to the `Validator`, or by adding an entry in the validation language file. This message should be placed in the first level of the array, not within the custom array, which is only for attribute-specific error messages:
```
"foo" => "Your input was invalid!",

"accepted" => "The :attribute must be accepted.",

// The rest of the validation error messages...
```
(added in 5.0)

Extending the `Validator` class

Instead of using Closure callbacks to extend the set of available rules, you may also extend the `Validator` class itself. To do so, write a class that extends Illuminate\Validation\Validator. You may add new validation methods to the class by prefixing them with validate:

```
<?php

class CustomValidator extends \Illuminate\Validation\Validator
{
    public function validateFoo($attribute, $value, $parameters)
    {
        return $value == 'foo';
    }
}
```

Registering the new `Validator` class

Next, you need to register your custom Validator extension:
```
Validator::resolver(function ($translator, $data, $rules, $messages) {
    return new CustomValidator($translator, $data, $rules, $messages);
});
```

Sometimes when creating a custom validation rule, you may need to define custom placeholder replacements for error messages. You may do so by creating a custom Validator as described above, and then calling the `Validator::replacer()` method. This may be done within the `boot()` method of a service provider:

```
/**
 * Bootstrap any application services.
 *
 * @return void
 */
public function boot()
{
    Validator::extend(...);

    Validator::replacer('foo', function ($message, $attribute, $rule, $parameters) {
        return str_replace(...);
    });
}
```
(added in 5.0)

By default, when an attribute being validated is not present or has an empty value as defined by the required rule, normal validation rules, including custom rules, are not run. For example, the unique rule will not be run against a null value:

```
$rules = ['name' => 'unique'];

$input = ['name' => null];

Validator::make($input, $rules)->passes(); // true
```

For a rule to run even when an attribute is empty, the rule must imply that the attribute is required. To create such an "implicit" extension, use the `Validator::extendImplicit()` method:
```
Validator::extendImplicit('foo', function ($attribute, $value, $parameters, $validator) {
```
return $value == 'foo'; });
«Неявное» наследование только указывает на обязательность поля. А будет ли правило пропускать пустое или отсутствующее поле или нет, зависит от вас.
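Небольшой набросок, иллюстрирующий разницу в поведении (данные условные):

```
$rules = ['name' => 'foo'];
$input = [];

// Если правило 'foo' зарегистрировано через Validator::extend(),
// для отсутствующего поля оно не запустится, и проверка пройдёт:
Validator::make($input, $rules)->passes(); // true

// Если то же правило зарегистрировано через Validator::extendImplicit(),
// оно выполнится даже для отсутствующего поля и вернёт false:
Validator::make($input, $rules)->passes(); // false
```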
# JavaScript и CSS
Laravel не навязывает использование определённых препроцессоров JavaScript и CSS, но предоставляет основу, с которой можно начать, используя Bootstrap и Vue, которые будут полезны во многих приложениях. По умолчанию Laravel использует NPM для установки этих фронтенд-пакетов.
Laravel Elixir предоставляет чистый и выразительный API для компилирования SASS и Less — это расширения чистого CSS, в которых есть переменные, примеси и другие мощные возможности, с которыми намного приятнее работать с CSS.
В данном документе мы коротко обсудим компилирование CSS в целом, а подробнее о компилировании SASS и Less вы можете прочесть в полной документации по Laravel Elixir.
Laravel не требует использования конкретного JavaScript-фреймворка или библиотеки для создания ваших приложений. На самом деле, вам вообще не обязательно использовать JavaScript. Но в Laravel есть некоторые базовые заготовки, упрощающие написание современного JavaScript с помощью библиотеки Vue. Vue предоставляет выразительный API для создания надёжных JavaScript-приложений с помощью компонентов.
## Написание CSS
Файл Laravel package.json содержит пакет bootstrap-sass, чтобы помочь вам начать прототипировать фронтенд своего приложения с помощью Bootstrap. Но вы можете свободно добавлять или удалять пакеты из файла package.json в зависимости от необходимости для вашего приложения. Вам не обязательно использовать фреймворк Bootstrap для создания Laravel-приложения — он предоставлен просто в качестве хорошей отправной точки для тех, кто решит его использовать.
Перед компилированием своих CSS установите зависимости фронтенда вашего проекта с помощью NPM:
```
npm install
```

После установки зависимостей с помощью `npm install` вы можете скомпилировать ваши SASS-файлы в чистый CSS с помощью Gulp. Команда `gulp` обработает инструкции из вашего файла gulpfile.js. В обычном случае ваши скомпилированные CSS будут помещены в каталог public/css:

```
gulp
```
Стандартный gulpfile.js, поставляемый с Laravel, компилирует SASS-файл
```
PHPresources/assets/sass/app.scss
```
. Этот файл app.scss импортирует файл переменных SASS и загружает Bootstrap, который обеспечивает хорошую отправную точку для большинства приложений. Вы можете свободно изменять файл app.scss по необходимости, или даже использовать совершенно другой препроцессор, настроив Laravel Elixir.
## Написание JavaScript
Все требуемые для вашего приложения JavaScript-зависимости можно найти в файле package.json в корневой папке проекта. Этот файл похож на composer.json, отличие в том, что в нём указаны JavaScript-зависимости вместо PHP-зависимостей. Вы можете установить эти зависимости с помощью менеджера пакетов Node (NPM):
> shnpm install
По умолчанию файл package.json в Laravel содержит несколько таких пакетов, как vue и vue-resource, чтобы помочь вам начать создание JavaScript-приложения. Вы можете свободно добавлять или удалять пакеты из файла package.json в зависимости от необходимости для вашего приложения.
После установки пакетов вы можете использовать команду `shgulp` для компилирования ваших ресурсов. Gulp — система сборки для JavaScript, работающая из командной строки. Когда вы запустите команду `shgulp` , Gulp выполнит инструкции из файла gulpfile.js: > shgulp
Стандартный gulpfile.js, поставляемый с Laravel, компилирует ваш SASS и файл resources/assets/js/app.js. В файле app.js вы можете зарегистрировать свои компоненты Vue, а если вы предпочитаете другой фреймворк, то настроить своё собственное JavaScript-приложение. В обычном случае ваш скомпилированный JavaScript будет помещён в папку public/js.
Файл app.js будет загружать файл resources/assets/js/bootstrap.js, который загружает и настраивает Vue, Vue Resource, jQuery и все остальные JavaScript-зависимости. Если вам надо настроить дополнительные JavaScript-зависимости, вы можете сделать это в данном файле.
### Написание компонентов Vue
По умолчанию в свежем приложении Laravel есть Vue-компонент Example.vue, расположенный в папке resources/assets/js/components. Файл Example.vue — пример однофайлового Vue-компонента, который определяет свои JavaScript и HTML-шаблоны в этом же файле. Однофайловые компоненты обеспечивают очень удобный подход к созданию приложений на основе JavaScript. Этот образец компонента зарегистрирован в вашем файле app.js:
> jsVue.component('example', require('./components/Example.vue'));
Для использования компонента в вашем приложении вы можете просто разместить его в одном из своих HTML-шаблонов. Например, после запуска Artisan-команды make:auth для создания заготовок экранов аутентификации и регистрации в вашем приложении, вы можете разместить компонент в Blade-шаблоне home.blade.php:
```
@section('content')
    <example></example>
@endsection
```

Не забывайте, вам надо выполнять команду `gulp` после каждого изменения во Vue-компоненте. Или вы можете выполнить команду `gulp watch`, чтобы отслеживать и автоматически перекомпилировать ваши компоненты после каждого изменения в них.
Разумеется, если вам интересно узнать больше о написании Vue-компонентов, вы можете прочитать документацию по Vue, в которой простым языком подробно описан весь фреймворк Vue.
# Основы работы с базами данных
В Laravel можно чрезвычайно просто взаимодействовать с БД на различных «движках», будь то сырой SQL, гибкий построитель запросов или Eloquent ORM. На данный момент Laravel поддерживает четыре системы баз данных:
* MySQL
* Postgres
* SQLite
* SQL Server
Настройки работы с БД хранятся в файле config/database.php. Здесь вы можете указать все используемые вами соединения к БД, а также задать соединение по умолчанию. Примеры настройки большинства поддерживаемых видов подключений находятся в этом же файле.
По умолчанию образец настройки окружения Laravel подготовлен для использования с Laravel Homestead — удобной виртуальной машиной для Laravel-разработки на вашей локальной машине. Разумеется, вы можете изменить эти настройки для работы с вашей локальной БД.
После создания новой базы данных SQLite при помощи команды `touch database/database.sqlite` вы можете легко настроить переменные вашей среды для этой новой базы данных, используя её абсолютный путь:

```
DB_CONNECTION=sqlite
DB_DATABASE=/absolute/path/to/database.sqlite
```
Laravel поддерживает работу с SQL Server из коробки, надо лишь добавить настройку подключения к БД в ваш файл настроек config/database.php:
```
'sqlsrv' => [
    'driver' => 'sqlsrv',
    'host' => env('DB_HOST', 'localhost'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => 'utf8',
    'prefix' => '',
],
```
### Соединения для чтения и записи
Иногда вам может понадобиться использовать разные подключения к базе данных: одно для запросов SELECT, а другое для запросов INSERT, UPDATE и DELETE. В Laravel это делается очень просто, и всегда будет использоваться соответствующее соединение, используете ли вы сырые запросы, построитель запросов или Eloquent ORM.
Чтобы увидеть, как должны быть настроены соединения чтения/записи, давайте посмотрим на этот пример:
```
'mysql' => [
    'read' => [
        'host' => '192.168.1.1',
    ],
    'write' => [
        'host' => '192.168.1.2'
    ],
    'driver'    => 'mysql',
    'database'  => 'database',
    'username'  => 'root',
    'password'  => '',
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => '',
],
```

Обратите внимание, что в массив настроек были добавлены два элемента: `'read'` и `'write'`. Оба элемента представляют собой массив с одним элементом `'host'`. Остальные параметры БД для подключений чтения/записи будут заимствованы из основного массива `'mysql'`. Вам стоит размещать элементы в массивах `read` и `write`, только если вы хотите переопределить их значения из основного массива. Таким образом, в этом случае 192.168.1.1 будет использоваться как хост для подключения «чтения», а 192.168.1.2 — для подключения «записи». Учётные данные для БД, префикс, набор символов и все другие параметры основного массива `'mysql'` будут использованы для обоих подключений.
## Использование нескольких соединений с БД
При использовании нескольких соединений с БД вы можете получить доступ к каждому из них через метод `PHPconnection()` фасада DB. Передаваемое в этот метод имя name должно соответствовать одному из перечисленных в файле config/database.php соединений:
```
$users = DB::connection('foo')->select(...);
```
Вы также можете получить низкоуровневый объект PDO для этого подключения методом `PHPgetPdo()` :
```
$pdo = DB::connection()->getPdo();
```
## Выполнение сырых SQL-запросов
Когда вы настроили соединение с базой данных, вы можете выполнять запросы, используя фасад DB. Этот фасад имеет методы для каждого типа запроса: select, update, insert, delete и statement.
Чтобы выполнить базовый запрос, можно использовать метод `select()` фасада DB:

```
<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\DB;
// для версии 5.2 и ранее:
// use DB;
use App\Http\Controllers\Controller;

class UserController extends Controller
{
    /**
     * Показать список всех пользователей приложения.
     *
     * @return Response
     */
    public function index()
    {
        $users = DB::select('select * from users where active = ?', [1]);

        return view('user.index', ['users' => $users]);
    }
}
```

Первый аргумент метода `select()` — сырой SQL-запрос, второй — любые привязки параметров для прикрепления к запросу. Обычно это значения ограничений условия where. Привязка параметров обеспечивает защиту от SQL-инъекций.

Метод `select()` всегда возвращает массив результатов. Каждый результат в массиве — объект PHP StdClass, что позволяет вам обращаться к значениям результатов:

```
foreach ($users as $user) {
    echo $user->name;
}
```
Вместо использования знака вопроса ? для обозначения привязки параметров, вы можете выполнить запрос, используя привязку по имени:
```
$results = DB::select('select * from users where id = :id', ['id' => 1]);
```
Чтобы выполнить запрос insert, можно использовать метод `PHPinsert()` фасада DB. Как и `PHPselect()` , данный метод принимает сырой SQL-запрос первым аргументом, а вторым — привязки:
```
DB::insert('insert into users (id, name) values (?, ?)', [1, 'Dayle']);
```
Для обновления существующих записей в БД используется метод `PHPupdate()` , который возвращает количество изменённых записей:
```
$affected = DB::update('update users set votes = 100 where name = ?', ['John']);
```
Для удаления записей из БД используется метод `PHPdelete()` , который возвращает количество изменённых записей:
```
$deleted = DB::delete('delete from users');
```
Выполнение запроса общего типа
Некоторые запросы к БД не возвращают никаких значений. Для операций такого типа можно использовать метод `PHPstatement()` фасада DB:
```
DB::statement('drop table users');
```
### Прослушивание событий запросов
Если вы хотите получать каждый выполненный вашим приложением SQL-запрос, используйте метод `listen()`. Этот метод полезен для журналирования запросов и отладки. Вы можете зарегистрировать свой слушатель запросов в сервис-провайдере:

```
<?php

namespace App\Providers;

use Illuminate\Support\Facades\DB;
// для версии 5.2 и ранее:
// use DB;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Загрузка всех сервисов приложения.
     *
     * @return void
     */
    public function boot()
    {
        DB::listen(function ($query) {
            // $query->sql
            // $query->bindings
            // $query->time

            // В этом примере из документации по версии 5.1 и ранее было:
            // DB::listen(function ($sql, $bindings, $time) {
            //     //
            // });
        });
    }

    /**
     * Регистрация сервис-провайдера.
     *
     * @return void
     */
    public function register()
    {
        //
    }
}
```
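Например, внутри такого слушателя можно записывать запросы в лог — небольшой набросок (фасад Log и формат записи выбраны для примера):

```
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

DB::listen(function ($query) {
    Log::info('SQL-запрос выполнен', [
        'sql'      => $query->sql,
        'bindings' => $query->bindings,
        'time'     => $query->time, // время выполнения в миллисекундах
    ]);
});
```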
## Транзакции
Для выполнения набора запросов внутри одной транзакции вы можете использовать метод `PHPtransaction()` фасада DB. Если в замыкании транзакции произойдёт исключение, она автоматически откатится. А если замыкание выполнится успешно, транзакция автоматически применится к БД. Вам не стоит переживать об этом при использовании метода `PHPtransaction()` :
```
DB::transaction(function () {
    DB::table('users')->update(['votes' => 1]);

    DB::table('posts')->delete();
});
```
добавлено в 5.3 ()
Метод `PHPtransaction()` принимает второй необязательный аргумент, с помощью которого задаётся число повторных попыток транзакции при возникновении взаимной блокировки (англ. deadlock). После истечения этих попыток будет выброшено исключение:
```
DB::transaction(function () {
    DB::table('users')->update(['votes' => 1]);

    DB::table('posts')->delete();
}, 5);
```
Ручное использование транзакций
Если вы хотите запустить транзакцию вручную и иметь полный контроль над её откатом и применением, используйте метод
```
PHPbeginTransaction()
```
фасада DB:
```
DB::beginTransaction();
```
Вы можете откатить транзакцию методом `rollback()`:

```
DB::rollback();
```

Наконец, вы можете применить транзакцию методом `commit()`:

```
DB::commit();
```
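Примерный набросок ручного управления транзакцией целиком, с обработкой исключения (операции внутри — условные):

```
DB::beginTransaction();

try {
    DB::table('users')->update(['votes' => 1]);
    DB::table('posts')->delete();

    DB::commit();
} catch (\Exception $e) {
    DB::rollback();

    throw $e;
}
```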
Методы фасада DB для транзакций также контролируют транзакции построителя запросов и Eloquent ORM.
Транзакция — особое состояние БД, в котором выполняемые запросы либо все вместе успешно завершаются, либо (в случае ошибки) все их изменения откатываются. Это позволяет поддерживать целостность внутренней структуры данных. К примеру, если вы вставляете запись о заказе, а затем в отдельную таблицу добавляете товары, то при неуспешном выполнении скрипта (в том числе падения веб-сервера, ошибки в запросе и пр.) СУБД автоматически удалит запись о заказе и все товары, которые вы успели добавить — прим. пер.
Иногда вам может понадобиться переподключиться и вы можете сделать это так:
```
DB::reconnect('foo');
```
Если вам нужно отключиться от данной БД из-за превышения основного предела экземпляров PDO max_connections, используйте метод `PHPdisconnect()` :
```
DB::disconnect('foo');
```
## Журнал запросов
По умолчанию Laravel записывает все SQL-запросы, выполненные в рамках текущего запроса страницы. Однако в некоторых случаях — например, при вставке большого набора записей — это может быть слишком ресурсозатратно. Для подключения журнала вы можете использовать метод `PHPenableQueryLog()` :
```
DB::connection()->enableQueryLog();
```
Чтобы получить массив выполненных запросов, вы можете использовать метод `PHPgetQueryLog()` :
```
$queries = DB::getQueryLog();
```
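Примерный сценарий использования журнала при отладке (сам запрос и вывод через `dd()` — условные):

```
DB::connection()->enableQueryLog();

DB::table('users')->where('votes', '>', 100)->get();

$queries = DB::getQueryLog();

// Каждый элемент массива содержит ключи 'query', 'bindings' и 'time'
dd($queries);
```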
# Конструктор запросов
Конструктор запросов Laravel предоставляет удобный, выразительный интерфейс для создания и выполнения запросов к базе данных. Он может использоваться для выполнения большинства типов операций и работает со всеми поддерживаемыми СУБД.
Конструктор запросов Laravel использует привязку параметров к запросам средствами PDO для защиты вашего приложения от SQL-инъекций. Нет необходимости экранировать строки перед их передачей в запрос.
## Получение результатов
Получение всех записей таблицы
Используйте метод `table()` фасада DB для создания запроса. Метод `table()` возвращает экземпляр конструктора запросов для данной таблицы, позволяя вам «прицепить» к запросу дополнительные условия и в итоге получить результат методом `get()`:

```
<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\DB;
// для версии 5.2 и ранее:
// use DB;
use App\Http\Controllers\Controller;

class UserController extends Controller
{
    /**
     * Показать список всех пользователей приложения.
     *
     * @return Response
     */
    public function index()
    {
        $users = DB::table('users')->get();

        return view('user.index', ['users' => $users]);
    }
}
```

Метод `get()` возвращает объект Illuminate\Support\Collection (для версии 5.2 и ранее — массив) с результатами, в котором каждый результат — это экземпляр PHP-объекта StdClass. Вы можете получить значение каждого столбца, обращаясь к столбцу как к свойству объекта:

```
foreach ($users as $user) {
    echo $user->name;
}
```
Получение одной строки/столбца из таблицы
Если вам необходимо получить только одну строку из таблицы БД, используйте метод `PHPfirst()` . Этот метод вернёт один объект StdClass:
```
$user = DB::table('users')->where('name', 'John')->first();
```
```
echo $user->name;
```

Если вам не нужна вся строка, вы можете извлечь одно значение из записи методом `value()`. Этот метод вернёт значение конкретного столбца:
```
$email = DB::table('users')->where('name', 'John')->value('email');
```
Получение списка всех значений одного столбца
Если вы хотите получить массив значений одного столбца, используйте метод `PHPpluck()` . В этом примере мы получим коллекцию (для версии 5.2 и ранее — массив) названий ролей:
```
$titles = DB::table('roles')->pluck('title');
```
foreach ($titles as $title) { echo $title; }
Вы можете указать произвольный ключ для возвращаемой коллекции (для версии 5.2 и ранее — массива):
```
$roles = DB::table('roles')->pluck('title', 'name');
```
foreach ($roles as $name => $title) { echo $title; } Если вы хотите получить массив значений одного столбца, используйте метод `PHPlists()` . В этом примере мы получим массив названий ролей:
```
$titles = DB::table('roles')->lists('title');
```
foreach ($titles as $title) { echo $title; }
Вы можете указать произвольный ключ для возвращаемого массива:
```
$roles = DB::table('roles')->lists('title', 'name');
```
foreach ($roles as $name => $title) { echo $title; }
### Получение результатов из таблицы «по кускам»
Если вам необходимо обработать тысячи записей БД, попробуйте использовать метод `PHPchunk()` . Этот метод получает небольшой «кусок» результатов за раз и отправляет его в замыкание для обработки. Этот метод очень полезен для написания Artisan-команд, которые обрабатывают тысячи записей. Например, давайте обработаем всю таблицу users «кусками» по 100 записей:
```
DB::table('users')->chunk(100, function ($users) {
    foreach ($users as $user) {
        //
    }
});
```
Вы можете остановить обработку последующих «кусков» вернув false из замыкания:
```
DB::table('users')->chunk(100, function ($users) {
    // Обработка записей...

    return false;
});
```
Конструктор запросов содержит множество агрегатных методов, таких как count, max, min, avg и sum. Вы можете вызывать их после создания своего запроса:
```
$users = DB::table('users')->count();
```
$price = DB::table('orders')->max('price');
Разумеется, вы можете комбинировать эти методы с другими условиями:
```
$price = DB::table('orders')
```
->where('finalized', 1) ->avg('price');
## Выборка (SELECT)
Само собой, не всегда вам необходимо выбрать все столбцы из таблицы БД. Используя метод `PHPselect()` вы можете указать необходимые столбцы для запроса:
```
$users = DB::table('users')->select('name', 'email as user_email')->get();
```
Метод `PHPdistinct()` позволяет вернуть только отличающиеся результаты:
```
$users = DB::table('users')->distinct()->get();
```
Если у вас уже есть экземпляр конструктора запросов и вы хотите добавить столбец к существующему набору для выборки, используйте метод `PHPaddSelect()` :
```
$query = DB::table('users')->select('name');
```
$users = $query->addSelect('age')->get();
## Сырые выражения
Иногда вам может понадобиться использовать уже готовое SQL-выражение в вашем запросе. Такие выражения вставляются в запрос напрямую в виде строк, поэтому будьте внимательны и не допускайте возможностей для SQL-инъекций! Для создания сырого выражения используйте метод `PHPDB::raw()` :
```
$users = DB::table('users')
             ->select(DB::raw('count(*) as user_count, status'))
             ->where('status', '<>', 1)
             ->groupBy('status')
             ->get();
```
## Объединения (JOIN)
Конструктор запросов может быть использован для объединения данных из нескольких таблиц через `PHPJOIN` . Для выполнения обычного объединения «inner join», используйте метод `PHPjoin()` на экземпляре конструктора запросов. Первый аргумент метода `PHPjoin()` — имя таблицы, к которой необходимо присоединить другие, а остальные аргументы указывают условия для присоединения столбцов. Как видите, вы можете объединять несколько таблиц одним запросом:
```
$users = DB::table('users')
            ->join('contacts', 'users.id', '=', 'contacts.user_id')
            ->join('orders', 'users.id', '=', 'orders.user_id')
            ->select('users.*', 'contacts.phone', 'orders.price')
            ->get();
```

Для выполнения объединения «left join» вместо «inner join» используйте метод `leftJoin()`. Этот метод имеет ту же сигнатуру, что и метод `join()`:

```
$users = DB::table('users')
            ->leftJoin('posts', 'users.id', '=', 'posts.user_id')
            ->get();
```

Для выполнения объединения CROSS JOIN используйте метод `crossJoin()` с именем таблицы, с которой нужно произвести объединение. CROSS JOIN формирует таблицу перекрёстным соединением (декартовым произведением) двух таблиц:
```
$users = DB::table('sizes')
```
->crossJoin('colours') ->get(); Вы можете указать более сложные условия для объединения. Для начала передайте замыкание вторым аргументом метода `PHPjoin()` . Замыкание будет получать объект JoinClause, позволяя вам указать условия для объединения: `DB::table('users')` ->join('contacts', function ($join) { $join->on('users.id', '=', 'contacts.user_id')->orOn(...); }) ->get(); Если вы хотите использовать стиль «where» для ваших объединений, то можете использовать для этого методы `PHPwhere()` и `PHPorWhere()` . Вместо сравнения двух столбцов эти методы будут сравнивать столбец и значение: `DB::table('users')` ->join('contacts', function ($join) { $join->on('users.id', '=', 'contacts.user_id') ->where('contacts.user_id', '>', 5); }) ->get();
## Слияние (UNION)
Конструктор запросов позволяет создавать слияния двух запросов вместе. Например, вы можете создать начальный запрос и с помощью метода `PHPunion()` слить его со вторым запросом:
```
$first = DB::table('users')
```
->whereNull('first_name'); $users = DB::table('users') ->whereNull('last_name') ->union($first) ->get(); Также существует метод `PHPunionAll()` с аналогичными параметрами.
## Условия WHERE
Для добавления в запрос условий where используйте метод `PHPwhere()` на экземпляре конструктора запросов. Самый простой вызов `PHPwhere()` требует три аргумента. Первый — имя столбца. Второй — оператор (любой из поддерживаемых базой данных). Третий — значение для сравнения со столбцом.
Например, вот запрос, проверяющий равенство значения столбца «votes» и 100:
```
$users = DB::table('users')->where('votes', '=', 100)->get();
```
Для удобства, если вам необходимо просто проверить равенство значения столбца и данного значения, вы можете передать значение сразу вторым аргументом метода `PHPwhere()` :
```
$users = DB::table('users')->where('votes', 100)->get();
```
Разумеется, вы можете использовать различные другие операторы при написании условия where:
```
$users = DB::table('users')
                ->where('votes', '>=', 100)
                ->get();

$users = DB::table('users')
                ->where('votes', '<>', 100)
                ->get();

$users = DB::table('users')
                ->where('name', 'like', 'T%')
                ->get();
```

Вы можете сцепить вместе условия where, а также условия or в запросе. Метод `orWhere()` принимает те же аргументы, что и метод `where()`:

```
$users = DB::table('users')
                ->where('votes', '>', 100)
                ->orWhere('name', 'John')
                ->get();
```

Метод `whereBetween()` проверяет, что значение столбца находится в указанном интервале:

```
$users = DB::table('users')
                ->whereBetween('votes', [1, 100])->get();
```

Метод `whereNotBetween()` проверяет, что значение столбца находится вне указанного интервала:

```
$users = DB::table('users')
                ->whereNotBetween('votes', [1, 100])
                ->get();
```
Фильтрация по совпадению с массивом значений
Метод `PHPwhereIn()` проверяет, что значения столбца содержатся в данном массиве:
```
$users = DB::table('users')
                    ->whereIn('id', [1, 2, 3])
                    ->get();
```

Метод `whereNotIn()` проверяет, что значение столбца не содержится в данном массиве:

```
$users = DB::table('users')
                    ->whereNotIn('id', [1, 2, 3])
                    ->get();
```
Поиск неустановленных значений (NULL)
Метод `PHPwhereNull()` проверяет, что значения столбца равны `PHPNULL` :
```
$users = DB::table('users')
                    ->whereNull('updated_at')
                    ->get();
```

Метод `whereNotNull()` проверяет, что значение столбца не равно `NULL`:

```
$users = DB::table('users')
                    ->whereNotNull('updated_at')
                    ->get();
```
добавлено в 5.3 ()
whereDate / whereMonth / whereDay / whereYear
Метод `PHPwhereDate()` служит для сравнения значения столбца с датой:
->whereDate('created_at', '2016-12-31') ->get(); Метод `PHPwhereMonth()` служит для сравнения значения столбца с месяцем в году:
->whereMonth('created_at', '12') ->get(); Метод `PHPwhereDay()` служит для сравнения значения столбца с днём месяца:
->whereDay('created_at', '31') ->get(); Метод `PHPwhereYear()` служит для сравнения значения столбца с указанным годом:
->whereYear('created_at', '2016') ->get();
добавлено в 5.0 ()
Вы можете использовать даже «динамические» условия where для гибкого построения операторов, используя магические методы:
```
$admin = DB::table('users')->whereId(1)->first();
```
$john = DB::table('users') ->whereIdAndEmail(2, '<EMAIL>') ->first(); $jane = DB::table('users') ->whereNameOrAge('Jane', 22) ->first(); Для проверки на совпадение двух столбцов можно использовать метод `PHPwhereColumn()` :
->whereColumn('first_name', 'last_name') ->get();
В метод также можно передать оператор сравнения:
->whereColumn('updated_at', '>', 'created_at') ->get(); В метод `PHPwhereColumn()` также можно передать массив с несколькими условиями. Эти условия будут объединены оператором AND:
->whereColumn([ ['first_name', '=', 'last_name'], ['updated_at', '>', 'created_at'] ])->get();
### Группировка условий
Иногда вам нужно сделать выборку по более сложным параметрам, таким как «существует ли» или вложенная группировка условий. Конструктор запросов Laravel справится и с такими запросами. Для начала посмотрим на пример группировки условий в скобках:
`DB::table('users')` ->where('name', '=', 'John') ->orWhere(function ($query) { $query->where('votes', '>', 100) ->where('title', '<>', 'Admin'); }) ->get(); Как видите, передав замыкание в метод `PHPorWhere()` , мы дали конструктору запросов команду, начать группировку условий. Замыкание получит экземпляр конструктора запросов, который вы можете использовать для задания условий, поместив их в скобки. Приведённый пример выполнит такой SQL-запрос: > sqlselect * from users where name = 'John' or (votes > 100 and title <> 'Admin')
### Проверка на существование
Метод `PHPwhereExists()` позволяет написать SQL-условие where exists. Метод `PHPwhereExists()` принимает в качестве аргумента замыкание, которое получит экземпляр конструктора запросов, позволяя вам определить запрос для помещения в условие «exists»: `DB::table('users')` ->whereExists(function ($query) { $query->select(DB::raw(1)) ->from('orders') ->whereRaw('orders.user_id = users.id'); }) ->get();
Этот пример выполнит такой SQL-запрос:
> sqlselect * from users where exists ( select 1 from orders where orders.user_id = users.id )
### JSON фильтрация (WHERE)
Laravel также поддерживает запросы для столбцов типа JSON в тех БД, которые поддерживают тип столбцов JSON. На данный момент это MySQL 5.7 и Postgres. Для запроса JSON столбца используйте оператор -> :
->where('options->language', 'en') ->get(); $users = DB::table('users') ->where('preferences->dining->meal', 'salad') ->get();
## Упорядочивание, группировка, предел и смещение
Метод `PHPorderBy()` позволяет вам отсортировать результат запроса по заданному столбцу. Первый аргумент метода `PHPorderBy()` — столбец для сортировки по нему, а второй — задаёт направление сортировки и может быть либо asc, либо desc:
->orderBy('name', 'desc') ->get();
добавлено в 5.3 ()
Методы `PHPgroupBy()` и `PHPhaving()` используются для группировки результатов запроса. Сигнатура метода `PHPhaving()` аналогична методу `PHPwhere()` :
->groupBy('account_id') ->having('account_id', '>', 100) ->get(); Метод `PHPhavingRaw()` используется для передачи сырой строки в условие having. Например, мы можем найти все филиалы с объёмом продаж выше $2,500:
```
$users = DB::table('orders')
```
->select('department', DB::raw('SUM(price) as total_sales')) ->groupBy('department') ->havingRaw('SUM(price) > 2500') ->get(); Для ограничения числа возвращаемых результатов из запроса или для пропуска заданного числа результатов в запросе используются методы `PHPskip()` и `PHPtake()` :
```
$users = DB::table('users')->skip(10)->take(5)->get();
```
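Например, с помощью `skip()` и `take()` можно вручную вычислить смещение для «страницы» результатов — небольшой набросок (числа и сортировка здесь условные):

```
$page = 3;      // номер страницы
$perPage = 15;  // записей на страницу

$users = DB::table('users')
            ->orderBy('id')
            ->skip(($page - 1) * $perPage)
            ->take($perPage)
            ->get();
```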
## Условное применение условий
Иногда необходимо применять условие к запросу, только если выполняется какое-то другое условие. Например, выполнять оператор `PHPwhere` , только если нужное значение есть во входящем запросе. Это можно сделать с помощью метода `PHPwhen()` :
```
$role = $request->input('role');
```
$users = DB::table('users') ->when($role, function ($query) use ($role) { return $query->where('role_id', $role); }) ->get(); Метод `PHPwhen()` выполняет данное замыкание, только когда первый параметр равен `PHPtrue` . Если первый параметр равен `PHPfalse` , то замыкание не будет выполнено.
добавлено в 5.3 ()
Вы можете передать ещё одно замыкание третьим параметром метода `PHPwhen()` . Это замыкание будет выполнено, если первый параметр будет иметь значение `PHPfalse` . Для демонстрации работы этой функции мы используем её для настройки сортировки по умолчанию для запроса: `$sortBy = null;` $users = DB::table('users') ->when($sortBy, function ($query) use ($sortBy) { return $query->orderBy($sortBy); }, function ($query) { return $query->orderBy('name'); }) ->get();
## Вставка (INSERT)
Конструктор запросов предоставляет метод `PHPinsert()` для вставки записей в таблицу БД. Метод `PHPinsert()` принимает массив имён столбцов и значений:
```
DB::table('users')->insert(
```
['email' => '<EMAIL>', 'votes' => 0] ); Вы можете вставить в таблицу сразу несколько записей одним вызовом `PHPinsert()` , передав ему массив массивов, каждый из которых — строка для вставки в таблицу:
```
DB::table('users')->insert([
```
['email' => '<EMAIL>', 'votes' => 0], ['email' => '<EMAIL>', 'votes' => 0] ]); Если в таблице есть автоинкрементный ID, используйте метод `PHPinsertGetId()` для вставки записи и получения её ID:
```
$id = DB::table('users')->insertGetId(
```
['email' => '<EMAIL>', 'votes' => 0] ); При использовании метода `PHPinsertGetId()` для PostgreSQL автоинкрементное поле должно иметь имя `PHPid` . Если вы хотите получить ID из другого поля таблицы, вы можете передать его имя вторым аргументом.
## Обновление (UPDATE)
Разумеется, кроме вставки записей в БД конструктор запросов может и изменять существующие строки с помощью метода `PHPupdate()` . Метод `PHPupdate()` , как и метод `PHPinsert()` , принимает массив столбцов и пар значений, содержащих столбцы для обновления. Вы можете ограничить запрос `PHPupdate()` условием `PHPwhere()` : `DB::table('users')` ->where('id', 1) ->update(['votes' => 1]);
добавлено в 5.3 ()
### Increment и Decrement
Конструктор запросов предоставляет удобные методы для увеличения и уменьшения значений заданных столбцов. Это просто более выразительный и краткий способ по сравнению с написанием оператора update вручную.
Оба метода принимают один обязательный аргумент — столбец для изменения. Второй аргумент может быть передан для указания, на какую величину необходимо изменить значение столбца:
```
DB::table('users')->increment('votes');
```
DB::table('users')->increment('votes', 5); DB::table('users')->decrement('votes'); DB::table('users')->decrement('votes', 5);
Вы также можете указать дополнительные поля для изменения:
```
DB::table('users')->increment('votes', 1, ['name' => 'John']);
```
## Удаление (DELETE)
Конструктор запросов предоставляет метод `PHPdelete()` для удаления записей из таблиц. Вы можете ограничить оператор `PHPdelete()` , добавив условие `PHPwhere()` перед его вызовом:
```
DB::table('users')->delete();
```
DB::table('users')->where('votes', '>', 100)->delete(); Если вы хотите очистить таблицу (усечение), удалив все строки и обнулив счётчик ID, используйте метод `PHPtruncate()` :
```
DB::table('users')->truncate();
```
Усечение таблицы аналогично удалению всех её записей, а также сбросом счётчика autoincrement-полей. — прим. пер.
## Пессимистическая блокировка
В конструкторе запросов есть несколько функций, которые помогают делать «пессимистическую блокировку» (pessimistic locking) для ваших операторов SELECT. Для запуска оператора SELECT с «разделяемой блокировкой» вы можете использовать в запросе метод `PHPsharedLock()` . Разделяемая блокировка предотвращает изменение выбранных строк до конца транзакции:
```
DB::table('users')->where('votes', '>', 100)->sharedLock()->get();
```
Или вы можете использовать метод `PHPlockForUpdate()` . Блокировка «для изменения» предотвращает изменение строк и их выбор другими разделяемыми блокировками:
```
DB::table('users')->where('votes', '>', 100)->lockForUpdate()->get();
```
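Обычно такие блокировки применяют внутри транзакции — ниже набросок типичного сценария (таблица и логика обновления условные):

```
DB::transaction(function () {
    // Блокируем строку до конца транзакции, чтобы параллельные запросы её не изменили
    $user = DB::table('users')->where('id', 1)->lockForUpdate()->first();

    DB::table('users')
        ->where('id', 1)
        ->update(['votes' => $user->votes + 1]);
});
```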
# Конструктор таблиц
В Laravel, класс `PHPSchema` представляет собой независимый от БД интерфейс манипулирования таблицами. Он хорошо работает со всеми СУБД, поддерживаемыми Laravel, и предоставляет унифицированный API для любой из этих систем.
## Создание и удаление таблиц
Для создания новой таблицы используется метод `PHPSchema::create()` :
```
Schema::create('users', function ($table) {
    $table->increments('id');
});
```

Первый аргумент метода `create()` — имя таблицы, а второй — замыкание, которое получает объект Blueprint, использующийся для заполнения новой таблицы. Чтобы переименовать существующую таблицу, используется метод `rename()`.
Для указания того, какое подключение к БД должно использоваться для выполнения операции, используется метод
```
PHPSchema::connection()
```
```
Schema::connection('foo')->create('users', function($table)
```
{ $table->increments('id'); }); Для удаления таблицы вы можете использовать метод `PHPSchema::drop()` :
Schema::dropIfExists('users');
## Добавление полей
Для обновления существующей таблицы мы будем использовать метод `PHPSchema::table()` :
{ $table->string('email'); });
Конструктор таблиц поддерживает различные типы полей, которые вы можете использовать при создании таблиц:
Команда | Описание |
| --- | --- |
$table->bigIncrements('id'); | Первичный последовательный ключ типа BIGINT |
$table->bigInteger('votes'); | Поле BIGINT |
$table->binary('data'); | Поле BLOB |
$table->boolean('confirmed'); | Поле BOOLEAN |
$table->char('name', 4); | Поле CHAR с указанной длиной |
$table->date('created_at'); | Поле DATE |
$table->dateTime('created_at'); | Поле DATETIME |
$table->decimal('amount', 5, 2); | Поле DECIMAL с указанной размерностью и точностью |
$table->double('column', 15, 8); | Поле DOUBLE с указанной точностью |
$table->enum('choices', ['foo', 'bar']); | Поле ENUM |
$table->float('amount'); | Поле FLOAT |
$table->increments('id'); | Первичный последовательный ключ (autoincrement) |
$table->integer('votes'); | Поле INTEGER |
$table->json('options'); | Поле JSON |
$table->jsonb('options'); | Поле JSONB |
$table->longText('description'); | Поле LONGTEXT |
$table->mediumInteger('numbers'); | Поле MEDIUMINT |
$table->mediumText('description'); | Поле MEDIUMTEXT |
$table->morphs('taggable'); | Добавляет INTEGER поле taggable_id и STRING поле taggable_type |
$table->nullableTimestamps(); | То же что и timestamps(), но разрешены значения NULL |
$table->smallInteger('votes'); | Поле SMALLINT |
$table->tinyInteger('numbers'); | Поле TINYINT |
$table->softDeletes(); | Добавляет поле deleted_at для мягкого удаления |
$table->string('email'); | Поле VARCHAR |
$table->string('name', 100); | Поле VARCHAR с указанной длиной |
$table->text('description'); | Поле TEXT |
$table->time('sunrise'); | Поле TIME |
$table->timestamp('added_on'); | Поле TIMESTAMP |
$table->timestamps(); | Добавляет поля created_at и updated_at |
$table->rememberToken(); | Добавляет поле remember_token с типом VARCHAR(100) NULL |
->nullable() | Указывает, что поле может быть NULL |
->default($value) | Указывает значение по умолчанию для поля |
->unsigned() | Переводит INTEGER в беззнаковое число UNSIGNED |
Вставка поля после существующего в MySQL
Если вы используете MySQL, то метод `PHPafter()` позволит вам вставить поле после определённого существующего поля:
```
$table->string('name')->after('email');
```
## Изменение полей
Внимание: Перед изменением полей не забудьте добавить зависимость doctrine/dbal в свой файл composer.json.
Иногда необходимо изменить существующее поле. Например, если надо увеличить размер строкового поля. С методом `PHPchange()` это легко! Давайте увеличим размер поля name с 25 до 50:
{ $table->string('name', 50)->change(); });
Также мы можем сделать поле обнуляемым (nullable):
{ $table->string('name', 50)->nullable()->change(); });
## Переименование полей
Для переименования поля можно использовать метод `PHPrenameColumn()` объекта конструктора. Перед переименованием полей не забудьте добавить зависимость doctrine/dbal в ваш файл composer.json.
{ $table->renameColumn('from', 'to'); });
Внимание: Переименование полей типа ENUM не поддерживается.
## Удаление полей
Для удаления поля можно использовать метод конструктора таблиц `PHPdropColumn()` . Перед удалением поля убедитесь, что в файл composer.json добавлена зависимость `confdoctrine/dbal` .
Удаление одного поля из таблицы
{ $table->dropColumn('votes'); });
Удаление нескольких полей таблицы
{ $table->dropColumn(['votes', 'avatar', 'location']); });
## Проверка на существование
Вы можете легко проверить существование таблицы или поля с помощью методов `PHPhasTable()` и `PHPhasColumn()` .
Проверка существования таблицы
```
if (Schema::hasTable('users'))
```
```
if (Schema::hasColumn('users', 'email'))
```
## Добавление индексов
Конструктор таблиц поддерживает несколько типов индексов. Есть два способа добавлять индексы: определять их на самих полях или добавлять отдельными строками. Первый способ показан в наброске ниже.
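Например, индекс можно прицепить прямо к определению поля (имя поля здесь условное):

```
$table->string('email')->unique();
```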
Или вы можете добавлять их отдельными строками. Ниже список всех доступных типов индексов:
Команда | Описание |
| --- | --- |
$table->primary('id'); | Добавляет первичный ключ |
$table->primary(['first', 'last']); | Добавляет составной первичный ключ |
$table->unique('email'); | Добавляет уникальный индекс |
$table->index('state'); | Добавляет простой индекс |
## Внешние ключи
Laravel поддерживает добавление внешних ключей (foreign key constraints) для ваших таблиц:
```
$table->integer('user_id')->unsigned();
```
$table->foreign('user_id')->references('id')->on('users');
В этом примере мы указываем, что поле user_id связано с полем id таблицы users. Не забудьте сначала создать поле для внешних ключей!
Вы также можете задать действия, происходящие при обновлении (on update) и удалении (on delete) записей:
```
$table->foreign('user_id')
      ->references('id')->on('users')
      ->onDelete('cascade');
```

Для удаления внешнего ключа используется метод `dropForeign()`. Схема именования ключей — та же, что и для индексов:
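Например, так (имя ограничения — условное, образовано по описанной схеме из имени таблицы posts, поля user_id и суффикса foreign):

```
$table->dropForeign('posts_user_id_foreign');
```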
Внимание: при создании внешнего ключа, указывающего на инкрементное числовое поле, не забудьте сделать поле внешнего ключа типа UNSIGNED.
## Удаление индексов
Для удаления индекса вы должны указать его имя. По умолчанию Laravel присваивает каждому индексу осознанное имя. Просто объедините имя таблицы, имена всех его полей и добавьте тип индекса. Вот несколько примеров:
Команда | Описание |
| --- | --- |
$table->dropPrimary('users_id_primary'); | Удаление первичного ключа из таблицы users |
$table->dropUnique('users_email_unique'); | Удаление уникального индекса из таблицы users |
$table->dropIndex('geo_state_index'); | Удаление простого индекса из таблицы geo |
## Удаление полей Timestamps и SoftDeletes
Для удаления полей с типами `PHPtimestamps` ,
```
PHPnullableTimestamps
```
и `PHPsoftDeletes` используйте следующие методы:
Команда | Описание |
| --- | --- |
$table->dropTimestamps(); | Удаляет из таблицы поля created_at и updated_at |
$table->dropSoftDeletes(); | Удаляет из таблицы поле deleted_at |
## Системы хранения
Для задания конкретной системы хранения таблицы задайте свойство `PHPengine` конструктора таблиц:
{ $table->engine = 'InnoDB'; $table->string('email'); });
Система хранения — тип архитектуры таблицы. Некоторые СУБД поддерживают только свой встроенный тип (такие, как SQLite), в то время другие — например, MySQL — позволяют использовать различные системы даже внутри одной БД (наиболее используемыми являются MyISAM, InnoDB и MEMORY). Правда, использование таблиц различных архитектур в одном запросе заметно снижает его производительность — прим. пер.
# Миграции
Миграции — что-то вроде системы контроля версий для вашей базы данных. Они позволяют вашей команде изменять структуру БД, в то же время оставаясь в курсе изменений других участников. Миграции обычно идут рука об руку с построителем структур для более простого обращения с архитектурой вашей базы данных. Если вы когда-нибудь просили коллегу вручную добавить столбец в его локальную БД, значит вы сталкивались с проблемой, которую решают миграции БД.
Фасад Laravel Schema обеспечивает поддержку создания и изменения таблиц в независимости от используемой СУБД из числа тех, что поддерживаются в Laravel.
## Создание миграций
Для создания новой миграции используйте Artisan-команду `shmake:migration` : > shphp artisan make:migration create_users_table
Миграция будет помещена в папку database/migrations и будет содержать метку времени, которая позволяет фреймворку определять порядок применения миграций.
Можно также использовать параметры --table и --create для указания имени таблицы и того факта, что миграция будет создавать новую таблицу (а не изменять существующую — прим. пер.). Эти параметры просто заранее создают указанную таблицу в создаваемом файле-заглушке миграции:
> shphp artisan make:migration create_users_table --create=users php artisan make:migration add_votes_to_users_table --table=users
Если вы хотите указать свой путь для сохранения создаваемых миграций, используйте параметр --path при запуске команды `shmake:migration` . Этот путь должен быть указан относительно базового пути вашего приложения.
## Структура миграций
Класс миграций содержит два метода: `PHPup()` и `PHPdown()` . Метод `PHPup()` используется для добавления новых таблиц, столбцов или индексов в вашу БД, а метод `PHPdown()` просто отменяет операции, выполненные методом `PHPup()` .
В обоих методах вы можете использовать построитель структур Laravel для удобного создания и изменения таблиц. О всех доступных методах построителя структур читайте в его документации. Например, эта миграция создаёт таблицу flights:
```
<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateFlightsTable extends Migration
{
    /**
     * Выполнение миграций.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('flights', function (Blueprint $table) {
            $table->increments('id');
            $table->string('name');
            $table->string('airline');
            $table->timestamps();
        });
    }

    /**
     * Отмена миграций.
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('flights');
    }
}
```
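Для сравнения — примерный набросок миграции, изменяющей уже существующую таблицу (наподобие упомянутой выше add_votes_to_users_table); имя и тип добавляемого столбца здесь условные:

```
<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class AddVotesToUsersTable extends Migration
{
    /**
     * Выполнение миграций: добавляем столбец votes.
     *
     * @return void
     */
    public function up()
    {
        Schema::table('users', function (Blueprint $table) {
            $table->integer('votes')->default(0);
        });
    }

    /**
     * Отмена миграций: удаляем добавленный столбец.
     *
     * @return void
     */
    public function down()
    {
        Schema::table('users', function (Blueprint $table) {
            $table->dropColumn('votes');
        });
    }
}
```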
## Выполнение миграций
Для запуска всех необходимых вам миграций используйте Artisan-команду `shmigrate` . > shphp artisan migrate
Если вы используете виртуальную машину Homestead, вам надо выполнить эту команду на ВМ.
Принудительные миграции в продакшне
Некоторые операции миграций разрушительны, значит они могут привести к потере ваших данных. Для предотвращения случайного запуска этих команд на вашей боевой БД перед их выполнением запрашивается подтверждение. Для принудительного запуска команд без подтверждения используйте ключ `sh--force` : > shphp artisan migrate --force
### Откат миграций
Для отмены изменений, сделанных последней миграцией, используйте команду `shrollback` . Эта команда отменит результат последней «партии» миграций, которая может включать несколько файлов миграций: > shphp artisan migrate:rollback
добавлено в 5.3 ()
Команда `shmigrate:reset` отменит изменения всех миграций вашего приложения: > shphp artisan migrate:reset
Откат всех миграций и их повторное применение одной командой
Команда `shmigrate:refresh` отменит изменения всех ваших миграций, а затем выполнит команду `shmigrate` . Эта команда эффективно создаёт заново всю вашу БД: > shphp artisan migrate:refresh // Обновить БД и запустить заполнение БД начальными данными... php artisan migrate:refresh --seed
## Таблицы
### Создание таблиц
Для создания новой таблицы БД используйте метод `PHPcreate()` фасада Schema. Метод `PHPcreate()` принимает два аргумента. Первый — имя таблицы, второй — замыкание, которое получает объект Blueprint, который можно использовать для определения новой таблицы:
```
Schema::create('users', function (Blueprint $table) {
    $table->increments('id');
});
```
Само собой, при создании таблицы вы можете использовать любые методы для работы со столбцами построителя структур.
Проверка существования таблицы/столбца
Вы можете легко проверить существование таблицы или столбца при помощи методов `PHPhasTable()` и `PHPhasColumn()` :
```
if (Schema::hasTable('users')) {
    //
}

if (Schema::hasColumn('users', 'email')) {
    //
}
```
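Такие проверки удобно использовать как «предохранитель» — например, чтобы не пытаться создать уже существующую таблицу (набросок, набор полей условный):

```
if (! Schema::hasTable('users')) {
    Schema::create('users', function (Blueprint $table) {
        $table->increments('id');
        $table->string('email');
        $table->timestamps();
    });
}
```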
Подключение и подсистема хранения данных
Если вы хотите выполнить операции над структурой через подключение к БД, которое не является вашим основным подключением, используйте метод `PHPconnection()` :
```
Schema::connection('foo')->create('users', function (Blueprint $table) {
```
$table->increments('id'); });
Используйте свойство engine построителя структур, чтобы задать подсистему хранения данных для таблицы:
$table->engine = 'InnoDB'; $table->increments('id'); });
### Переименование/удаление таблиц
Для переименования существующей таблицы используйте метод `PHPrename()` :
Для удаления существующей таблицы используйте методы `PHPdrop()` и `PHPdropIfExists()` :
Schema::dropIfExists('users');
добавлено в 5.2 ()
Переименование таблиц с внешними ключами
Перед переименованием таблицы вы должны проверить, что для всех ограничений внешних ключей таблицы есть явные имена в файлах вашей миграции, чтобы избежать автоматического назначения имён на основе принятого соглашения. Иначе имя ограничения внешнего ключа будет ссылаться на имя старой таблицы.
## Столбцы
### Создание столбцов
Для изменения существующей таблицы мы будем использовать метод `PHPtable()` фасада Schema. Как и метод `PHPcreate()` , метод `PHPtable()` принимает два аргумента: имя таблицы и замыкание, которое получает экземпляр Blueprint, который можно использовать для добавления столбцов в таблицу:
$table->string('email'); });
Разумеется, построитель структур содержит различные типы столбцов, которые вы можете указывать при построении ваших таблиц:
Команда | Описание |
| --- | --- |
$table->bigIncrements('id'); | Инкрементный ID (первичный ключ), использующий эквивалент "UNSIGNED BIG INTEGER" |
$table->bigInteger('votes'); | Эквивалент BIGINT для базы данных |
$table->binary('data'); | Эквивалент BLOB для базы данных |
$table->boolean('confirmed'); | Эквивалент BOOLEAN для базы данных |
$table->char('name', 4); | Эквивалент CHAR для базы данных |
$table->date('created_at'); | Эквивалент DATE для базы данных |
$table->dateTime('created_at'); | Эквивалент DATETIME для базы данных |
$table->dateTimeTz('created_at'); | Эквивалент DATETIME (с часовым поясом) для базы данных (для версии 5.2 и выше) |
$table->decimal('amount', 5, 2); | Эквивалент DECIMAL с точностью и масштабом |
$table->double('column', 15, 8); | Эквивалент DOUBLE с точностью, всего 15 цифр, после запятой 8 цифр |
$table->enum('choices', ['foo', 'bar']); | Эквивалент ENUM для базы данных |
$table->float('amount', 8, 2); | Эквивалент FLOAT для базы данных, всего 8 знаков, из них 2 после запятой (для версии 5.3 и выше) |
$table->float('amount'); | Эквивалент FLOAT для базы данных (для версии 5.2 и ранее) |
$table->increments('id'); | Инкрементный ID (первичный ключ), использующий эквивалент "UNSIGNED INTEGER" |
$table->integer('votes'); | Эквивалент INTEGER для базы данных |
$table->ipAddress('visitor'); | Эквивалент IP-адреса для базы данных (для версии 5.2 и выше) |
$table->json('options'); | Эквивалент JSON для базы данных |
$table->jsonb('options'); | Эквивалент JSONB для базы данных |
$table->longText('description'); | Эквивалент LONGTEXT для базы данных |
$table->macAddress('device'); | Эквивалент MAC-адреса для базы данных (для версии 5.2 и выше) |
$table->mediumIncrements('id'); | Инкрементный ID (первичный ключ), использующий эквивалент "UNSIGNED MEDIUM INTEGER" (для версии 5.3 и выше) |
$table->mediumInteger('numbers'); | Эквивалент MEDIUMINT для базы данных |
$table->mediumText('description'); | Эквивалент MEDIUMTEXT для базы данных |
$table->morphs('taggable'); | Добавление столбца taggable_id INTEGER (для версии 5.3 и выше Unsigned INTEGER) и taggable_type STRING |
$table->nullableMorphs('taggable'); | Аналогично morphs(), но разрешено значение NULL (для версии 5.3 и выше) |
$table->nullableTimestamps(); | Аналогично timestamps(), но разрешено значение NULL |
$table->rememberToken(); | Добавление столбца remember_token VARCHAR(100) NULL |
$table->smallIncrements('id'); | Инкрементный ID (первичный ключ), использующий эквивалент "UNSIGNED SMALL INTEGER" (для версии 5.3 и выше) |
$table->smallInteger('votes'); | Эквивалент SMALLINT для базы данных |
$table->softDeletes(); | Добавление столбца deleted_at для мягкого удаления (для версии 5.3 и выше разрешено значение NULL) |
$table->string('email'); | Эквивалент VARCHAR |
$table->string('name', 100); | Эквивалент VARCHAR с длинной |
$table->text('description'); | Эквивалент TEXT для базы данных |
$table->time('sunrise'); | Эквивалент TIME для базы данных |
$table->timeTz('sunrise'); | Эквивалент TIME (с часовым поясом) для базы данных (для версии 5.2 и выше) |
$table->tinyInteger('numbers'); | Эквивалент TINYINT для базы данных |
$table->timestamp('added_on'); | Эквивалент TIMESTAMP для базы данных |
$table->timestampTz('added_on'); | Эквивалент TIMESTAMP (с часовым поясом) для базы данных (для версии 5.2 и выше) |
$table->timestamps(); | Добавление столбцов created_at и updated_at (для версии 5.3 и выше разрешено значение NULL) |
$table->timestampsTz(); | Добавление столбцов created_at и updated_at (с часовым поясом), для которых разрешено значение NULL (для версии 5.3 и выше) |
$table->unsignedBigInteger('votes'); | Эквивалент Unsigned BIGINT для базы данных (для версии 5.3 и выше) |
$table->unsignedInteger('votes'); | Эквивалент Unsigned INT для базы данных (для версии 5.3 и выше) |
$table->unsignedMediumInteger('votes'); | Эквивалент Unsigned MEDIUMINT для базы данных (для версии 5.3 и выше) |
$table->unsignedSmallInteger('votes'); | Эквивалент Unsigned SMALLINT для базы данных (для версии 5.3 и выше) |
$table->unsignedTinyInteger('votes'); | Эквивалент Unsigned TINYINT для базы данных (для версии 5.3 и выше) |
$table->uuid('id'); | Эквивалент UUID для базы данных |
### Модификаторы столбцов
Вдобавок к перечисленным типам столбцов существует несколько «модификаторов» столбцов, которые вы можете использовать при добавлении столбцов в таблицу. Например, чтобы сделать столбец «обнуляемым», используйте метод `PHPnullable()` :
$table->string('email')->nullable(); });
Ниже перечислены все доступные модификаторы столбцов. В этом списке отсутствуют модификаторы индексов:
Модификатор | Описание |
| --- | --- |
->after('column') | Помещает столбец "после" указанного столбца (только MySQL) |
->comment('my comment') | Добавляет комментарий в столбец (для версии 5.2 и выше) |
->default($value) | Указывает значение "по умолчанию" для столбца |
->first() | Помещает столбец "первым" в таблице (только MySQL) |
->nullable() | Разрешает вставлять значения NULL в столбец |
->storedAs($expression) | Создать генерируемый столбец типа stored (только MySQL) (для версии 5.3 и выше) |
->unsigned() | Делает столбцы integer беззнаковыми UNSIGNED |
->virtualAs($expression) | Создать генерируемый столбец типа virtual (только MySQL) (для версии 5.3 и выше) |
### Изменение столбцов
Перед изменением столбцов добавьте зависимость doctrine/dbal в свой файл composer.json. Библиотека Doctrine DBAL используется для определения текущего состояния столбца и создания SQL-запросов, необходимых для выполнения указанных преобразований столбца:
> shcomposer require doctrine/dbal
Метод `PHPchange()` позволяет изменить тип существующего столбца или изменить его атрибуты. Например, если вы захотите увеличить размер строкового столбца name с 25 до 50:
$table->string('name', 50)->change(); });
Также мы можем изменить столбец, чтобы он стал «обнуляемым»:
$table->string('name', 50)->nullable()->change(); });
добавлено в 5.3 ()
Столбы следующих типов нельзя «изменить»: char, double, enum, mediumInteger, timestamp, tinyInteger, ipAddress, json, jsonb, macAddress, mediumIncrements, morphs, nullableMorphs, nullableTimestamps, softDeletes, timeTz, timestampTz, timestamps, timestampsTz, unsignedMediumInteger, unsignedTinyInteger, uuid.
добавлено в 5.2 ()
Для переименования столбца используйте метод `PHPrenameColumn()` на построителе структур. Перед переименованием столбца добавьте зависимость doctrine/dbal в свой файл composer.json:
$table->renameColumn('from', 'to'); });
Пока не поддерживается переименование любых столбцов в таблице, содержащей столбцы типов enum, а для версий 5.2 и ранее ещё и json или jsonb.
### Удаление столбцов
Для удаления столбца используйте метод `PHPdropColumn()` на построителе структур. Перед удалением столбцов из базы данных SQLite вам необходимо добавить зависимость doctrine/dbal в ваш файл composer.json и выполнить команду `shcomposer update` для установки библиотеки:
$table->dropColumn('votes'); }); Вы можете удалить несколько столбцов таблицы, передав массив их имён в метод `PHPdropColumn()` :
$table->dropColumn(['votes', 'avatar', 'location']); });
Удаление и изменение нескольких столбцов одной миграцией не поддерживается для базы данных SQLite.
## Индексы
### Создание индексов
Построитель структур поддерживает несколько типов индексов. Сначала давайте посмотрим на пример, в котором задаётся, что значения столбца должны быть уникальными. Для создания индекса мы можем просто сцепить метод `PHPunique()` с определением столбца:
Другой вариант — создать индекс после определения столбца. Например:
```
$table->unique('email');
```
Вы можете даже передать массив столбцов в метод `PHPindex()` для создания сложного индекса:
```
$table->index(['account_id', 'created_at']);
```
Команда | Описание |
| --- | --- |
$table->primary('id'); | Добавление первичного ключа |
$table->primary(['first', 'last']); | Добавление составных ключей |
$table->unique('email'); | Добавление уникального индекса |
$table->unique('state', 'my_index_name'); | Добавление своего имени индекса (для версии 5.2 и выше) |
$table->unique(['first', 'last']); | Добавление составного уникального индекса (для версии 5.3 и выше) |
$table->index('state'); | Добавление базового индекса |
### Удаление индексов
Для удаления индекса необходимо указать его имя. По умолчанию Laravel автоматически назначает имена индексам. Просто соедините имя таблицы, имя столбца-индекса и тип индекса. Вот несколько примеров:
Команда | Описание |
| --- | --- |
$table->dropPrimary('users_id_primary'); | Удаление первичного ключа из таблицы "users" |
$table->dropUnique('users_email_unique'); | Удаление уникального индекса из таблицы "users" |
$table->dropIndex('geo_state_index'); | Удаление базового индекса из таблицы "geo" |
Если вы передадите массив столбцов в метод для удаления индексов, будет сгенерировано стандартное имя индекса на основе имени таблицы, столбца и типа ключа:
```
Schema::table('geo', function (Blueprint $table) {
```
$table->dropIndex(['state']); // Удаление индекса 'geo_state_index' });
### Ограничения внешнего ключа
Laravel также поддерживает создание ограничений для внешнего ключа, которые используются для обеспечения ссылочной целостности на уровне базы данных. Например, давайте определим столбец user_id в таблице posts, который ссылается на столбец id в таблице users:
```
Schema::table('posts', function (Blueprint $table) {
```
$table->integer('user_id')->unsigned(); $table->foreign('user_id')->references('id')->on('users'); });
Вы также можете указать требуемое действие для свойств "on delete" и "on update" ограничений:
->references('id')->on('users') ->onDelete('cascade'); Для удаления внешнего ключа используйте метод `PHPdropForeign()` . Ограничения внешнего ключа используют те же принципы именования, что и индексы. Итак, мы соединим имя таблицы и столбцов из ограничения, а затем добавим суффикс "_foreign":
Либо вы можете передать значение массива, при этом для удаления будет автоматически использовано стандартное имя ограничения:
```
$table->dropForeign(['user_id']);
```
## Загрузка начальных данных в БД
Кроме миграций, описанных выше, Laravel также включает в себя механизм наполнения вашей БД начальными данными (seeding) с помощью специальных классов. Все такие классы хранятся в database/seeds. Они могут иметь любое имя, но вам, вероятно, следует придерживаться какой-то логики в их именовании — например, `PHPUserTableSeeder` и т.д. По умолчанию для вас уже определён класс `PHPDatabaseSeeder` . Из этого класса вы можете вызывать метод `PHPcall()` для подключения других классов с данными, что позволит вам контролировать порядок их выполнения.
Пример класса для загрузки начальных данных
```
class DatabaseSeeder extends Seeder {

    public function run()
    {
        $this->call('UserTableSeeder');

        $this->command->info('Таблица пользователей загружена данными!');
    }

}

class UserTableSeeder extends Seeder {

    public function run()
    {
        DB::table('users')->delete();

        User::create(['email' => '<EMAIL>']);
    }

}
```

Для добавления данных в БД используйте Artisan-команду `db:seed`:

```
php artisan db:seed
```

По умолчанию команда db:seed вызывает класс `DatabaseSeeder`, который может быть использован для вызова других классов, заполняющих БД данными. Однако вы можете использовать параметр `--class` для указания конкретного класса для вызова:

```
php artisan db:seed --class=UserTableSeeder
```

Вы также можете использовать для заполнения БД данными команду `migrate:refresh`, которая также откатит и заново применит все ваши миграции:

```
php artisan migrate:refresh --seed
```
# Загрузка начальных данных в БД
У Laravel есть простой механизм наполнения вашей БД начальными данными (seeding) с помощью специальных классов. Все такие классы хранятся в каталоге database/seeds. Они могут иметь любое имя, но вам, вероятно, следует придерживаться какой-то логики в их именовании — например, UserTableSeeder и т.д. По умолчанию для вас уже определён класс DatabaseSeeder. Из этого класса вы можете вызывать метод `PHPcall()` для подключения других классов с данными, что позволит вам контролировать порядок их выполнения.
## Создание начальных данных
Для создания класса начальных данных используйте Artisan-команду `make:seeder`. Все сгенерированные фреймворком классы начальных данных будут помещены в папку database/seeds (для версии 5.1 — database/seeders):

```
php artisan make:seeder UsersTableSeeder
```
Класс начальных данных содержит в себе только один метод по умолчанию — `run()`. Этот метод вызывается, когда выполняется Artisan-команда `db:seed`. В методе `run()` вы можете вставить любые данные в БД: либо вручную с помощью конструктора запросов, либо воспользовавшись фабриками моделей Eloquent. В качестве примера давайте модифицируем стандартный класс DatabaseSeeder и добавим оператор вставки в БД в метод `run()`:

```
<?php

use Illuminate\Database\Seeder;
use Illuminate\Database\Eloquent\Model;

class DatabaseSeeder extends Seeder
{
    /**
     * Загрузка начальных данных.
     *
     * @return void
     */
    public function run()
    {
        DB::table('users')->insert([
            'name' => str_random(10),
            'email' => str_random(10).'@gmail.com',
            'password' => bcrypt('secret'),
        ]);
    }
}
```
### Использование фабрик моделей
Конечно, ручное определение признаков для каждой модели начальных данных затруднительно. Вместо этого вы можете использовать фабрики моделей для быстрой генерации больших объёмов данных. Во-первых, пересмотрите документацию по фабрике моделей, чтобы изучить, как определяются фабрики. Как только вы определите свои фабрики, вы можете использовать вспомогательную функцию `PHPfactory()` , чтобы вставлять записи в вашу базу данных.
Например, давайте создадим 50 пользователей и привяжем отношения к каждому из них:
```
/**
 * Загрузка начальных данных.
 *
 * @return void
 */
public function run()
{
    factory(App\User::class, 50)->create()->each(function ($u) {
        $u->posts()->save(factory(App\Post::class)->make());
    });
}
```
### Вызов дополнительной загрузки начальных данных
В классе DatabaseSeeder вы можете использовать метод `call()`, чтобы запустить дополнительные классы загрузки. Использование метода `call()` позволяет вам разбить свою загрузку начальных данных на несколько файлов, чтобы ни один отдельный класс загрузки не разрастался. Просто передайте название класса загрузки, который вы хотите выполнить:

```
/**
 * Загрузка начальных данных.
 *
 * @return void
 */
public function run()
{
    // для версии 5.1:
    // Model::unguard();

    $this->call(UsersTableSeeder::class);
    $this->call(PostsTableSeeder::class);
    $this->call(CommentsTableSeeder::class);

    // для версии 5.1:
    // Model::reguard();
}
```
## Запуск загрузки начальных данных
Как только вы написали свои классы загрузки, вы можете использовать Artisan-команду `db:seed` для запуска загрузки. По умолчанию команда `db:seed` вызывает класс DatabaseSeeder, который может быть использован для вызова других классов, заполняющих БД данными. Однако вы можете использовать параметр `--class` для указания конкретного класса для вызова:

```
php artisan db:seed

php artisan db:seed --class=UsersTableSeeder

# для версии 5.1:
# php artisan db:seed --class=UserTableSeeder
```

Вы также можете использовать для заполнения БД данными команду `migrate:refresh`, которая также откатит и заново применит все ваши миграции:

```
php artisan migrate:refresh --seed
```
# Redis
Redis — открытое продвинутое хранилище пар ключ/значение. Его часто называют сервисом структур данных, так как ключи могут содержать строки, хэши, списки, наборы и сортированные наборы.
Чтобы начать использовать Redis с Laravel, необходимо либо установить пакет predis/predis с помощью Composer:
```
composer require predis/predis
```
Либо, вы можете установить расширение для PHP PhpRedis через PECL. Расширение сложнее установить, но оно даёт большую производительность для приложений, которые активно используют Redis.
Настройки вашего подключения к Redis хранятся в файле config/database.php. В нём вы найдёте массив redis, содержащий список серверов, используемых приложением:
```
'redis' => [

    'client' => 'predis',

    'cluster' => false,

    'default' => [
        'host' => env('REDIS_HOST', 'localhost'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],

],
```
Значения по умолчанию должны подойти для разработки. Однако вы свободно можете изменять этот массив в зависимости от своего окружения. У каждого сервера Redis, определённого в файле конфигурации, должны быть имя, хост и порт.
Параметр cluster сообщает клиенту Redis Laravel, что нужно выполнить фрагментацию узлов Redis (client-side sharding), что позволит вам обращаться к ним и увеличить доступную RAM. Однако заметьте, что фрагментация не справляется с падениями, поэтому она в основном используется для кэшированных данных, которые доступны из основного источника.
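Для иллюстрации — примерный набросок такой конфигурации с несколькими серверами (имена `server1`/`server2` и адреса условные, это предположение, а не готовая конфигурация):

```
'redis' => [

    'client' => 'predis',

    // включить client-side sharding между перечисленными ниже серверами
    'cluster' => true,

    'server1' => [
        'host' => '10.0.0.1',
        'port' => 6379,
        'database' => 0,
    ],

    'server2' => [
        'host' => '10.0.0.2',
        'port' => 6379,
        'database' => 0,
    ],

],
```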
### Predis
Помимо стандартных опций настройки сервера host, port, database и password Predis поддерживает дополнительные параметры подключения, которые можно определить для каждого из ваших серверов Redis. Чтобы использовать эти дополнительные опции, просто добавьте их в конфигурацию вашего сервера Redis в файле config/database.php:
```
'default' => [
    'host' => env('REDIS_HOST', 'localhost'),
    'password' => env('REDIS_PASSWORD', null),
    'port' => env('REDIS_PORT', 6379),
    'database' => 0,
    'read_write_timeout' => 60,
],
```
### PhpRedis
Если ваше расширение Redis установлено через PECL, вам нужно переименовать псевдоним для Redis в файле config/app.php.
Чтобы использовать расширение PhpRedis, вы должны задать опции client в конфигурации Redis значение phpredis. Эта опция находится в файле config/database.php:
```
'redis' => [

    'client' => 'phpredis',

    // остальные настройки Redis...

],
```
Помимо стандартных опций настройки сервера host, port, database и password PhpRedis поддерживает следующие дополнительные параметры подключения: persistent, prefix, read_timeout и timeout. Вы можете добавить любые из этих опций в настройки своего сервера Redis в файле config/database.php:
```
'default' => [
    'host' => env('REDIS_HOST', 'localhost'),
    'password' => env('REDIS_PASSWORD', null),
    'port' => env('REDIS_PORT', 6379),
    'database' => 0,
    'read_timeout' => 60,
],
```
## Взаимодействие с Redis
Вы можете взаимодействовать с Redis, вызывая различные методы фасада Redis. Фасад Redis поддерживает динамические методы, а значит вы можете вызвать любую Redis-команду на фасаде, и команда будет передана прямо в Redis. В этом примере мы вызовем Redis-команду GET с помощью вызова метода `get()` фасада Redis:

```
<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\Redis;
// для версии 5.2 и ранее:
// use Redis;
use App\Http\Controllers\Controller;

class UserController extends Controller
{
    /**
     * Показать профиль данного пользователя.
     *
     * @param  int  $id
     * @return Response
     */
    public function showProfile($id)
    {
        $user = Redis::get('user:profile:'.$id);

        return view('user.profile', ['user' => $user]);
    }
}
```
Как уже было сказано, вы можете вызывать любые Redis-команды фасада Redis. Laravel использует магические методы PHP для передачи команд на сервер Redis, поэтому просто передайте необходимые аргументы Redis-команде:
```
Redis::set('name', 'Taylor');
```
$values = Redis::lrange('names', 5, 10);
добавлено в 5.0 ()
Когда у вас есть экземпляр клиента Redis, вы можете выполнить любую команду Redis.
```
$redis->set('name', 'Taylor');

$name = $redis->get('name');

$values = $redis->lrange('names', 5, 10);
```

В качестве альтернативы вы можете передавать команды на сервер методом `command()`, который принимает первым аргументом имя команды, а вторым — массив значений:
```
$values = Redis::command('lrange', ['name', 5, 10]);
```
```
$values = $redis->command('lrange', [5, 10]);
```
Использование нескольких подключений Redis
Вы можете получить экземпляр Redis методом `Redis::connection()`:
```
$redis = Redis::connection();
```
Так вы получите экземпляр подключения по умолчанию. Если вы не используете фрагментацию, то можно передать этому методу имя сервера для получения конкретного подключения, как оно определено в файле настроек:
```
$redis = Redis::connection('other');
```
Конвейер должен использоваться, когда вы отправляете много команд на сервер за одну операцию. Метод `PHPpipeline()` принимает один аргумент — замыкание, которое получает экземпляр Redis. Вы можете выполнить все ваши команды на этом экземпляре Redis, и все они будут выполнены в рамках одной операции:
```
Redis::pipeline(function ($pipe) {
    for ($i = 0; $i < 1000; $i++) {
        $pipe->set("key:$i", $i);
    }
});
```
## Издатель/подписчик (Pub/Sub)
Laravel предоставляет удобный интерфейс к Redis-командам publish и subscribe. Эти команды позволяют прослушивать сообщения на заданном «канале». Вы можете публиковать сообщения в канал из другого приложения или даже при помощи другого языка программирования, что обеспечивает простую связь между приложениями и процессами.
Сначала давайте настроим слушатель канала с помощью метода `subscribe()`. Мы поместим вызов этого метода в Artisan-команду, так как вызов метода `subscribe()` запускает длительный процесс:

```
<?php

namespace App\Console\Commands;

use Illuminate\Support\Facades\Redis;
// для версии 5.2 и ранее:
// use Redis;
use Illuminate\Console\Command;

class RedisSubscribe extends Command
{
    /**
     * Название и сигнатура терминальной команды.
     *
     * @var string
     */
    protected $signature = 'redis:subscribe';

    /**
     * Описание терминальной команды.
     *
     * @var string
     */
    protected $description = 'Subscribe to a Redis channel';

    /**
     * Выполнение терминальной команды.
     *
     * @return mixed
     */
    public function handle()
    {
        Redis::subscribe(['test-channel'], function ($message) {
            echo $message;
        });
    }
}
```

Теперь мы можем публиковать сообщения в канал методом `publish()`:
```
Route::get('publish', function () {
    // Логика маршрута...

    Redis::publish('test-channel', json_encode(['foo' => 'bar']));
});
```

С помощью метода `psubscribe()` вы можете подписаться на канал по маске, что может быть полезно для отлова всех сообщений на всех каналах. Название канала `$channel` будет передано вторым аргументом в предоставляемый обратный вызов замыкания:
```
Redis::psubscribe(['*'], function ($message, $channel) {
    echo $message;
});

Redis::psubscribe(['users.*'], function ($message, $channel) {
    echo $message;
});
```
# Отношения
## Определение отношений
Eloquent отношения определены как функции в ваших классах модели Eloquent. Как и сами модели Eloquent, отношения являются мощными конструкторами запросов, которые определяют отношения как функции, и обеспечивают мощную сцепку методов и возможности для запросов. Например, мы можем прицепить дополнительные ограничения к отношению этих posts:
```
$user->posts()->where('active', 1)->get();
```
Но прежде чем погрузиться в использование отношений, давайте узнаем, как определяется каждый тип отношений.
Связь вида «один к одному» является очень простой. К примеру, модель User может иметь один Phone. Чтобы определить такое отношение, мы помещаем метод `phone()` в модель User. Метод `phone()` должен вызвать метод `hasOne()` и вернуть его результат:

```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class User extends Model
{
    /**
     * Получить запись с номером телефона пользователя.
     */
    public function phone()
    {
        return $this->hasOne('App\Phone');
    }
}
```

Первый параметр, передаваемый `hasOne()`, — имя связанной модели. Как только отношение установлено, вы можете получить к нему доступ через динамические свойства Eloquent. Динамические свойства позволяют обращаться к методам отношений так, как будто они являются свойствами модели.
Eloquent определяет внешний ключ отношения по имени модели. В данном случае предполагается, что это user_id. Если вы хотите перекрыть стандартное имя, передайте второй параметр методу `hasOne()`:
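Минимальный набросок такого вызова (имя столбца foreign_key здесь условное и приведено для иллюстрации):

```
return $this->hasOne('App\Phone', 'foreign_key');
```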
Также Eloquent подразумевает, что внешний ключ должен иметь значение, привязанное к родительскому столбцу id (или другому $primaryKey). Другими словами, Eloquent будет искать значение столбца id пользователя в столбце user_id записи Phone. Кроме того вы можете передать в метод третий аргумент, чтобы указать свой столбец для объединения:
```
return $this->hasOne('App\Phone', 'foreign_key', 'local_key');
```
Итак, у нас есть доступ к модели Phone из нашего User. Теперь давайте определим отношение для модели Phone, которое даст доступ к User, владеющему этим телефоном. Для создания обратного отношения в модели Phone используйте метод `belongsTo()`:

```
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Phone extends Model
{
    /**
     * Получить пользователя, владеющего данным телефоном.
     */
    public function user()
    {
        return $this->belongsTo('App\User');
    }
}
```

В примере выше Eloquent будет искать связь между user_id в модели Phone и id в модели User. По умолчанию Eloquent определяет имя внешнего ключа по имени метода отношения, добавляя суффикс _id. Однако, если имя внешнего ключа модели Phone не user_id, передайте это имя вторым параметром в метод `belongsTo()`:

```
/**
 * Получить пользователя, владеющего данным телефоном.
 */
public function user()
{
    return $this->belongsTo('App\User', 'foreign_key');
}
```

Если ваша родительская модель не использует id в качестве первичного ключа, или вам бы хотелось присоединить дочернюю модель к другому столбцу, вы можете передать третий параметр в метод `belongsTo()`, который определяет имя связанного столбца в родительской таблице:

```
/**
 * Получить пользователя, владеющего данным телефоном.
 */
public function user()
{
    return $this->belongsTo('App\User', 'foreign_key', 'other_key');
}
```
Отношение «один ко многим» используется для определения отношений, где одна модель владеет некоторым количеством других моделей. Примером отношения «один ко многим» является статья в блоге, которая имеет «много» комментариев. Как и другие отношения Eloquent вы можете смоделировать это отношение таким образом:
`<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Post extends Model { /** * Получить комментарии статьи блога. */ public function comments() { return $this->hasMany('App\Comment'); } }
Помните, что Eloquent автоматически определяет столбец внешнего ключа в модели Comment. По соглашению, Eloquent возьмёт «snake case» названия владеющей модели плюс _id. Таким образом, для данного примера, Eloquent предполагает, что внешним ключом для модели Comment будет post_id.
После определения отношения мы можем получить доступ к коллекции комментариев, обратившись к свойству comments. Помните, что поскольку Eloquent поддерживает «динамические свойства», мы можем обращаться к функциям отношений, как если бы они были определены свойством модели:
```
$comments = App\Post::find(1)->comments;

foreach ($comments as $comment) {
    //
}
```

Конечно, так как отношения служат и в качестве конструкторов запросов, вы можете добавить дополнительные условия к комментариям, вызвав метод `comments()` и продолжив сцепку условий запроса:
```
$comments = App\Post::find(1)->comments()->where('title', 'foo')->first();
```
Как и для метода `hasOne()`, вы можете указать внешний и локальный ключи, передав дополнительные параметры в метод `hasMany()`:

```
return $this->hasMany('App\Comment', 'foreign_key', 'local_key');
```
### Один ко многим (Обратное отношение)
После получения доступа ко всем комментариям статьи давайте определим отношение, которое позволит комментарию получить доступ к его статье. Чтобы определить обратное отношение `PHPhasMany()` , определим функцию отношения на дочерней модели, которая вызывает метод `PHPbelongsTo()` : `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Comment extends Model { /** * Получить статью данного комментария. */ public function post() { return $this->belongsTo('App\Post'); } }
После определения отношений мы можем получить модель Post для Comment, обратившись к динамическому свойству post:
echo $comment->post->title; В примере выше Eloquent пробует связать post_id из модели Comment с id модели Post. По умолчанию Eloquent определяет внешний ключ по имени метода отношения плюс _id. Однако, если внешний ключ для модели Comment не post_id, вы можете передать своё имя вторым параметром в метод `PHPbelongsTo()` : `/**` * Получить статью данного комментария. */ public function post() { return $this->belongsTo('App\Post', 'foreign_key'); } Если ваша родительская модель не использует id в качестве первичного ключа, или вам бы хотелось присоединить дочернюю модель к другому столбцу, вы можете передать третий параметр в метод `PHPbelongsTo()` , который определяет имя связанного столбца в родительской таблице: `/**` * Получить статью данного комментария. */ public function post() { return $this->belongsTo('App\Post', 'foreign_key', 'other_key'); }
Отношения типа «многие ко многим» сложнее отношений `PHPhasOne()` и `PHPhasMany()` . Примером может служить пользователь, имеющий много ролей, где роли также относятся ко многим пользователям. Например, несколько пользователей могут иметь роль «Admin». Нужны три таблицы для этой связи: users, roles и role_user. Имя таблицы role_user получается из упорядоченных по алфавиту имён связанных моделей, она должна иметь поля user_id и role_id. Вы можете определить отношение «многие ко многим», написав метод, возвращающий результат метода `PHPbelongsToMany()` . Давайте определим метод `PHProles()` для модели User: `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Роли, принадлежащие пользователю. */ public function roles() { return $this->belongsToMany('App\Role'); } }
Теперь мы можем получить роли пользователя через динамическое свойство roles:
foreach ($user->roles as $role) { // } Естественно, как и для других типов отношений, вы можете вызвать метод `PHProles()` , продолжив конструировать запрос для отношения:
```
$roles = App\User::find(1)->roles()->orderBy('name')->get();
```
Как уже упоминалось ранее, чтобы определить имя для таблицы присоединения отношений, Eloquent соединит два названия взаимосвязанных моделей в алфавитном порядке. Тем не менее, вы можете переопределить имя, передав второй параметр методу `PHPbelongsToMany()` :
```
return $this->belongsToMany('App\Role', 'role_user');
```
В дополнение к заданию имени соединительной таблицы, вы можете также задать имена столбцов ключей в таблице, передав дополнительные параметры методу `PHPbelongsToMany()` . Третий аргумент — это имя внешнего ключа модели, на которой вы определяете отношения, в то время как четвертый аргумент — это внешний ключ модели, с которой вы собираетесь связаться:
```
return $this->belongsToMany('App\Role', 'role_user', 'user_id', 'role_id');
```
Чтобы определить обратное отношение «многие-ко-многим», просто поместите другой вызов `PHPbelongsToMany()` на вашу модель. Чтобы продолжить пример с ролями пользователя, давайте определим метод `PHPusers()` для модели Role: `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Role extends Model { /** * Пользователи, принадлежащие роли. */ public function users() { return $this->belongsToMany('App\User'); } } Как вы можете видеть, соотношение определяется точно так же, как и его для User, за исключением ссылки App\User. Так как мы повторно используем метод `PHPbelongsToMany()` , все обычные таблицы и параметры настройки ключей доступны при определении обратного отношения многих-ко-многим.
Получение промежуточных столбцов таблицы
Как вы уже запомнили, работа с отношением «многие-ко-многим» требует наличия промежуточной таблицы. Eloquent предоставляет некоторые очень полезные способы взаимодействия с такой таблицей. Например, давайте предположим, что наш объект User имеет много связанных с ним объектов Role. После получения доступа к этому отношению мы можем получить доступ к промежуточной таблице с помощью pivot атрибута модели:
foreach ($user->roles as $role) { echo $role->pivot->created_at; }
Обратите внимание на то, что каждой полученной модели Role автоматически присваивается атрибут pivot. Этот атрибут содержит модель, представляющую промежуточную таблицу, и может быть использован, как и любая другая модель Eloquent.
По умолчанию, только ключи модели будут представлять pivot объект. Если ваша «pivot» таблица содержит дополнительные атрибуты, вам необходимо указать их при определении отношения:
```
return $this->belongsToMany('App\Role')->withPivot('column1', 'column2');
```
Если вы хотите, чтобы ваша «pivot» таблица автоматически поддерживала временные метки created_at и updated_at, используйте метод `PHPwithTimestamps()` при определении отношений:
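Набросок такого определения (предположительно стандартный вызов из API Laravel):

```
return $this->belongsToMany('App\Role')->withTimestamps();
```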
Фильтрация отношений через столбцы промежуточной таблицы
Вы также можете отфильтровать результаты, возвращённые методом `PHPbelongsToMany()` , с помощью методов `PHPwherePivot()` и `PHPwherePivotIn()` при определении отношения
```
return $this->belongsToMany('App\Role')->wherePivot('approved', 1);
```
return $this->belongsToMany('App\Role')->wherePivotIn('priority', [1, 2]);
Связь «ко многим через» обеспечивает удобный короткий путь для доступа к удалённым отношениям через промежуточные. Например, модель Country может иметь много Post через модель User. В данном примере вы можете просто собрать все статьи для заданной country. Таблицы для этих отношений будут выглядеть так:
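Для наглядности — примерная структура таблиц для этого примера (набросок, повторяющий соглашения Eloquent по именованию столбцов):

```
countries
    id - integer
    name - string

users
    id - integer
    country_id - integer
    name - string

posts
    id - integer
    user_id - integer
    title - string
```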
Несмотря на то, что таблица posts не содержит столбца country_id, отношение «ко многим через» позволит нам получить доступ к posts через country с помощью `PHP$country->posts` . Для выполнения этого запроса Eloquent ищет country_id в промежуточной таблице users. После нахождения совпадающих ID пользователей они используются в запросе к таблице posts.
Теперь, когда мы рассмотрели структуру таблицы для отношений, давайте определим отношения для модели Country:
`<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Country extends Model { /** * Получить все статьи по заданной области. */ public function posts() { return $this->hasManyThrough('App\Post', 'App\User'); } } Первый параметр, переданный в метод `PHPhasManyThrough()` является именем конечной модели, которую мы получаем, а второй параметр — это имя промежуточной модели. Обычные соглашения для внешнего ключа Eloquent будут использоваться при выполнении запросов отношения. Если вы хотите настроить ключи отношения, вы можете передать их третьим и четвертым параметрами методу `PHPhasManyThrough()` . Третий параметр — имя внешнего ключа для промежуточной модели, четвертый параметр — имя внешнего ключа для конечной модели, а пятый аргумент (для версии 5.2 и выше) — локальный ключ:
{ public function posts() { return $this->hasManyThrough( 'App\Post', 'App\User', 'country_id', 'user_id', 'id' ); //для версии 5.1: //return $this->hasManyThrough('App\Post', 'App\User', 'country_id', 'user_id'); } }
### Полиморфные отношения
Полиморфные отношения позволяют модели быть связанной с более чем одной моделью. Например, предположим, пользователи вашего приложения могут «комментировать» и статьи, и видео. Используя полиморфные отношения, вы можете использовать единственную таблицу comments для обоих этих сценариев. Во-первых, давайте посмотрим на структуру таблицы, необходимую для таких отношений:
```
posts
id - integer
title - string
body - text
videos
id - integer
title - string
url - string
comments
id - integer
body - text
commentable_id - integer
commentable_type - string
```
Главные поля, на которые нужно обратить внимание: commentable_id и commentable_type в таблице comments. Первое содержит ID статьи или видео, а второе — имя класса-модели владельца. Это позволяет ORM определить, какой класс модели должен быть возвращён при использовании отношения commentable.
Теперь давайте рассмотрим, какие определения для модели нам нужны, чтобы построить её отношения:
`<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Comment extends Model { /** * Получить все модели, обладающие commentable. */ public function commentable() { return $this->morphTo(); } } class Post extends Model { /** * Получить все комментарии статьи. */ public function comments() { return $this->morphMany('App\Comment', 'commentable'); } } class Video extends Model { /** * Получить все комментарии видео. */ public function comments() { return $this->morphMany('App\Comment', 'commentable'); } }
После определения моделей и таблиц вы можете получить доступ к отношениям через модели. Например, чтобы получить все комментарии статьи, просто используйте динамической свойство comments:
foreach ($post->comments as $comment) { // } Вы можете также получить владельца полиморфного отношения от полиморфной модели, получив доступ к имени метода, который вызывает `PHPmorphTo()` . В нашем случае это метод `PHPcommentable()` для модели Comment. Так мы получим доступ к этому методу как к динамическому свойству:
$commentable = $comment->commentable; Отношение `PHPcommentable()` модели Comment вернёт либо объект Post, либо объект Video в зависимости от типа модели, которой принадлежит комментарий.
Пользовательские полиморфные типы
По умолчанию Laravel будет использовать полностью определённое имя класса для хранения типа связанной модели. Например, учитывая пример выше, где Comment может принадлежать Post или Video, значение по умолчанию для commentable_type было бы App\Post или App\Video соответственно. Однако вы можете захотеть отделить свою базу данных от внутренней структуры вашего приложения. В этом случае вы можете определить «карту полиморфных типов» методом `morphMap()`, чтобы дать команду Eloquent использовать заданное имя для каждой модели вместо имени класса:

```
Relation::morphMap([
    'posts' => 'App\Post',
    'videos' => 'App\Video',
]);
```

Вы можете зарегистрировать `morphMap()` в функции `boot()` своего AppServiceProvider или создать отдельный сервис-провайдер.
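Примерный набросок такой регистрации в сервис-провайдере (предполагается стандартный AppServiceProvider):

```
use Illuminate\Database\Eloquent\Relations\Relation;

/**
 * Начальная загрузка сервисов приложения.
 *
 * @return void
 */
public function boot()
{
    Relation::morphMap([
        'posts' => 'App\Post',
        'videos' => 'App\Video',
    ]);
}
```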
### Полиморфные связи многие ко многим
В дополнение к традиционным полиморфным связям вы можете также задать полиморфные связи многие ко многим. Например, модели блогов Post и Video могут разделять полиморфную связь с моделью Tag. Используя полиморфное отношение «многие-ко-многим», вы имеете единственный список уникальных тегов, которые совместно используются через сообщения в блоге и видео. Во-первых, давайте рассмотрим структуру таблиц:
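Примерная структура таблиц для такого случая (набросок; имя промежуточной таблицы taggables и её столбцы приведены в соответствии с соглашениями Laravel):

```
posts
    id - integer
    name - string

videos
    id - integer
    name - string

tags
    id - integer
    name - string

taggables
    tag_id - integer
    taggable_id - integer
    taggable_type - string
```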
Теперь мы готовы к установке связи с моделью. Обе модели Post и Video будут иметь связь `PHPmorphToMany()` в базовом классе Eloquent через метод `PHPtags()` : `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Post extends Model { /** * Получить все теги статьи. */ public function tags() { return $this->morphToMany('App\Tag', 'taggable'); } }
Определение обратного отношения
Теперь для модели Tag вы должны определить метод для каждой из моделей отношения. Для нашего примера мы определим метод `PHPposts()` и метод `PHPvideos()` : `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Tag extends Model { /** * Получить все сообщения, связанные с тегом. */ public function posts() { return $this->morphedByMany('App\Post', 'taggable'); } /** * Получить все видео, связанные с тегом. */ public function videos() { return $this->morphedByMany('App\Video', 'taggable'); } }
Как только ваша таблица и модели определены, вы можете получить доступ к отношениям через свои модели. Например, чтобы получить доступ ко всем тегам для сообщения, вы можете просто использовать динамическое свойство tag:
foreach ($post->tags as $tag) { // } Вы можете также получить владельца полиморфного отношения от полиморфной модели, получив доступ к имени метода, который выполняет вызов `PHPmorphedByMany()` . В нашем случае, это метод `PHPposts()` или `PHPvideos()` для модели Tag. Так вы получите доступ к этим методам как к динамическим свойствам:
```
$tag = App\Tag::find(1);
```
foreach ($tag->videos as $video) { // }
Поскольку все отношения определены функциями, мы можем вызывать эти функции, чтобы получить экземпляр отношения, фактически не выполняя запросы отношения. Кроме того, все типы Eloquent отношений также являются конструкторами запросов, позволяя вам сцеплять условия запроса отношения перед выполнением SQL-запроса к вашей базе данных.
Например, представьте систему блогов, в которой модель User имеет множество связей с моделью Post:
`<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Получить все статьи пользователя. */ public function posts() { return $this->hasMany('App\Post'); } } Вы можете запросить отношения `PHPposts()` и добавить дополнительные условия к запросу отношения так:
$user->posts()->where('active', 1)->get();
Вы можете использовать любой из методов конструктора запросов на отношении, поэтому не забудьте изучить документацию по конструктору запросов, где описаны все доступные вам методы.
### Методы отношений или динамические свойства
Если вам не нужно добавлять дополнительные ограничения к Eloquent запросу отношения, вы можете просто получить доступ к отношению, как будто это свойство. Например, продолжая использовать наши модели User и Post в качестве примера, мы можем получить доступ ко всем сообщениям пользователя так:
foreach ($user->posts as $post) { // }
Динамические свойства поддерживают «ленивую загрузку». Это означает, что они загрузят свои данные для отношения только в момент обращения к ним. Из-за этого разработчики часто используют нетерпеливую загрузку, чтобы предварительно загрузить отношения, для которых они знают, что доступ будет получен после загрузки модели. Нетерпеливая загрузка обеспечивает значительное сокращение SQL-запросов, которые должны быть выполнены, чтобы загрузить отношения модели.
### Проверка существования связей при выборке
При чтении отношений модели вам может быть нужно ограничить результаты в зависимости от существования отношения. Например, вы хотите получить все статьи в блоге, имеющие хотя бы один комментарий. Для этого можно использовать метод `PHPhas()` :
```
// Получить все статьи в блоге, имеющие хотя бы один комментарий...
$posts = App\Post::has('comments')->get();
```
Вы также можете указать оператор и число:
```
// Получить все статьи в блоге, имеющие три и более комментариев...
$posts = Post::has('comments', '>=', 3)->get();
```

Можно конструировать вложенные операторы `has()` с помощью точечной нотации. Например, вы можете получить все статьи, которые имеют хотя бы один комментарий и голос:
```
// Получить все статьи, которые имеют хотя бы один комментарий и голос...
$posts = Post::has('comments.votes')->get();
```

Если вам нужно ещё больше возможностей, вы можете использовать методы `whereHas()` и `orWhereHas()`, чтобы поместить условия «where» в ваши запросы `has()`. Эти методы позволяют добавить свои ограничения к отношению, например, проверку содержимого комментария:
```
// Получить все статьи с хотя бы одним комментарием, содержащим слово "foo"...
$posts = Post::whereHas('comments', function ($query) {
    $query->where('content', 'like', 'foo%');
})->get();
```
добавлено в 5.3 ()
### Выборка по отсутствию отношения
При получении записей модели бывает необходимо ограничить результаты выборки на основе отсутствия отношения. Например, если вы хотите получить все статьи, у которых нет комментариев. Для этого передайте имя отношения в метод `PHPdoesntHave()` :
```
$posts = App\Post::doesntHave('comments')->get();
```
Для ещё большего уточнения используйте метод `PHPwhereDoesntHave()` , чтобы добавить условия where в ваши запросы `PHPdoesntHave()` . Этот метод позволяет добавить дополнительные ограничения к отношению, например, проверку содержимого комментария:
```
$posts = Post::whereDoesntHave('comments', function ($query) {
    $query->where('content', 'like', 'foo%');
})->get();
```
### Подсчёт моделей в отношении
Если вы хотите посчитать число результатов отношения, не загружая их, используйте метод `PHPwithCount()` , который поместит столбец `PHP{relation}_count` в вашу результирующую модель. Например:
```
$posts = App\Post::withCount('comments')->get();

foreach ($posts as $post) {
    echo $post->comments_count;
}
```
Вы можете добавить «число» для нескольких отношений так же, как и добавить ограничения к запросам:
```
$posts = Post::withCount(['votes', 'comments' => function ($query) {
    $query->where('content', 'like', 'foo%');
}])->get();

echo $posts[0]->votes_count;
echo $posts[0]->comments_count;
```
## Нетерпеливая загрузка
При доступе к Eloquent отношениям как к свойствам отношения «лениво загружаются». Это означает, что данные отношения фактически не загружены, пока вы не обратитесь к свойству. Однако Eloquent может «нетерпеливо загружать» отношения в то время, когда вы запрашиваете родительскую модель. Нетерпеливая загрузка облегчает проблему N+1 запроса. Чтобы проиллюстрировать проблему N+1 запроса, рассмотрите модель Book, которая связана с Author:
`<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Book extends Model { /** * Получить автора книги. */ public function author() { return $this->belongsTo('App\Author'); } }
Теперь давайте получим все книги и их авторов:
```
$books = App\Book::all();

foreach ($books as $book) {
    echo $book->author->name;
}
```
Этот цикл выполнит 1 запрос, чтобы получить все книги по таблице, затем выполнится другой запрос для каждой книги, чтобы получить автора. Так, если бы у нас было 25 книг, этот цикл выполнил бы 26 запросов: 1 для исходной книги и 25 дополнительных запросов, чтобы получить автора каждой книги.
К счастью мы можем использовать нетерпеливую загрузку, чтобы уменьшить эту работу всего до 2 запросов. При запросах вы можете определить, какие отношения должны быть нетерпеливо загружены с использованием метода `PHPwith()` :
```
$books = App\Book::with('author')->get();

foreach ($books as $book) {
    echo $book->author->name;
}
```
Для данной операции будут выполнены только два запроса:
```
select * from books

select * from authors where id in (1, 2, 3, 4, 5, ...)
```
Нетерпеливо загружающиеся множественные отношения
Иногда вам, возможно, понадобится нетерпеливо загружать несколько различных отношений в единственной итерации. Для этого просто передайте дополнительные параметры в метод `PHPwith()` :
```
$books = App\Book::with('author', 'publisher')->get();
```
Вложенная нетерпеливая загрузка
Для вложенных отношений нетерпеливой загрузки вы можете использовать точечную нотацию. Например, давайте нетерпеливо загружать всех авторов книг и все личные контакты автора в одном Eloquent операторе:
```
$books = App\Book::with('author.contacts')->get();
```
### Ограничение нетерпеливых загрузок
Иногда вам может понадобиться нетерпеливо загрузить отношение и при этом задать дополнительные ограничения для запроса нетерпеливой загрузки. Вот пример:

```
$users = App\User::with(['posts' => function ($query) {
    $query->where('title', 'like', '%first%');
}])->get();
```

В данном примере Eloquent будет нетерпеливо загружать только те статьи, в которых столбец title содержит слово first. Конечно, вы можете вызвать другие методы конструктора запросов для дополнительной настройки операции нетерпеливой загрузки:

```
$users = App\User::with(['posts' => function ($query) {
    $query->orderBy('created_at', 'desc');
}])->get();
```
### Ленивая нетерпеливая загрузка
Иногда вам, возможно, понадобится нетерпеливо загружать отношение после того, как родительская модель уже была получена. Например, это может быть полезно, если вы должны динамично решить, загружать ли связанные модели:
```
$books = App\Book::all();

if ($someCondition) {
    $books->load('author', 'publisher');
}
```

Если вам нужно установить дополнительные ограничения на нетерпеливо загружаемый запрос, вы можете передать массив, ключами которого будут отношения, которые необходимо загрузить, а значениями — экземпляры `Closure`, получающие экземпляр запроса:
```
$books->load(['author' => function ($query) {
    $query->orderBy('published_date', 'asc');
}]);
```
### Метод Save
Eloquent предоставляет удобные методы для добавления новых моделей к отношениям. Например, если вам понадобится вставить новый Comment для модели Post. Вместо того, чтобы вручную установить атрибут post_id для Comment, вы можете вставить Comment непосредственно из метода `PHPsave()` отношения:
```
$comment = new App\Comment(['message' => 'A new comment.']);

$post = App\Post::find(1);

$post->comments()->save($comment);
```

Заметьте, что мы не обращались к отношению `comments()` как к динамическому свойству. Вместо этого мы вызвали метод `comments()`, чтобы получить экземпляр отношения. Метод `save()` автоматически добавит надлежащее значение post_id в новую модель Comment. Если вам нужно сохранить несколько связанных моделей, используйте метод `saveMany()`:

```
$post = App\Post::find(1);

$post->comments()->saveMany([
    new App\Comment(['message' => 'A new comment.']),
    new App\Comment(['message' => 'Another comment.']),
]);
```
### Метод Create
В дополнение к методам `PHPsave()` и `PHPsaveMany()` вы можете также использовать метод `PHPcreate()` , который принимает массив атрибутов, создает модель и вставляет её в базу данных. Различие между `PHPsave()` и `PHPcreate()` в том, что `PHPsave()` принимает экземпляр Eloquent модели целиком, в то время как `PHPcreate()` принимает простой PHP `PHParray` :
```
$comment = $post->comments()->create([
    'message' => 'A new comment.',
]);
```

Перед использованием метода `create()` пересмотрите документацию по массовому назначению атрибутов.
### Отношение «принадлежит к»
При обновлении отношения `PHPbelongsTo()` вы можете использовать метод `PHPassociate()` . Этот метод установит внешний ключ на дочерней модели:
```
$account = App\Account::find(10);

$user->account()->associate($account);

$user->save();
```

При удалении отношения `belongsTo()` вы можете использовать метод `dissociate()`. Этот метод сбросит внешний ключ отношения в `null`:
```
$user->account()->dissociate();
```
### Отношение многие-ко-многим
Также Eloquent предоставляет несколько дополнительных вспомогательных методов, чтобы сделать работу со связанными моделями более удобной. Например, давайте предположим, что у пользователя может быть много ролей, и у роли может быть много пользователей. Чтобы присоединить роль к пользователю вставкой записи в промежуточную таблицу, которая присоединяется к моделям, используйте метод `PHPattach()` :
$user->roles()->attach($roleId);
При присоединении отношения к модели вы можете также передать массив дополнительных данных, которые будут вставлены в промежуточную таблицу:
```
$user->roles()->attach($roleId, ['expires' => $expires]);
```
Конечно, иногда может быть необходимо отсоединить роль от пользователя. Чтобы удалить запись отношения многие-ко-многим, используйте метод `PHPdetach()` . Метод `PHPdetach()` удалит соответствующую запись из промежуточной таблицы. Однако, обе модели останутся в базе данных:
```
// Отсоединить одну роль от пользователя...
$user->roles()->detach($roleId);

// Отсоединить все роли от пользователя...
$user->roles()->detach();
```

Для удобства `attach()` и `detach()` также принимают массивы ID в качестве параметров:

```
$user->roles()->detach([1, 2, 3]);

$user->roles()->attach([1 => ['expires' => $expires], 2, 3]);
```
добавлено в 5.2 ()
Вы можете также использовать метод `PHPsync()` , чтобы создать ассоциации многие-ко-многим. Метод `PHPsync()` принимает массив ID, чтобы поместить его в промежуточную таблицу. Все ID, которые не находятся в данном массиве, будут удалены из промежуточной таблицы. После того как эта работа завершена, только ID из данного массива будут существовать в промежуточной таблице:
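Набросок такого вызова (ID ролей приведены условно):

```
$user->roles()->sync([1, 2, 3]);
```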
Также вы можете передать дополнительные значения промежуточной таблицы с массивом ID:
```
$user->roles()->sync([1 => ['expires' => true], 2, 3]);
```
добавлено в 5.3 ()
Отношение многие-ко-многим также предоставляет метод `PHPtoggle()` , который «переключает» состояние присоединений с заданными ID. Если данный ID сейчас присоединён, то он будет отсоединён. И наоборот, если сейчас он отсоединён, то будет присоединён:
```
$user->roles()->toggle([1, 2, 3]);
```
Сохранение дополнительных данных в сводной таблице
При работе с отношением многие-ко-многим метод `PHPsave()` принимает вторым аргументом массив дополнительных атрибутов промежуточной таблицы:
```
App\User::find(1)->roles()->save($role, ['expires' => $expires]);
```
Изменение записи в сводной таблице
Для изменения существующей строки в сводной таблице используйте метод `updateExistingPivot()`. Этот метод принимает внешний ключ сводной записи и массив атрибутов для изменения:

```
$user = App\User::find(1);

$user->roles()->updateExistingPivot($roleId, $attributes);
```
## Привязка родительских меток времени
Когда модель имеет связь `PHPbelongsTo()` или `PHPbelongsToMany()` с другими моделями, например, Comment, которая принадлежит Post, иногда полезно обновить метку времени родителя, когда дочерняя модель обновлена. Например, когда модель Comment обновлена, вы можете автоматически «привязать» метки времени updated_at на владеющий ей Post. Eloquent упрощает эту работу. Просто добавьте свойство touches, содержащее имена отношений к дочерней модели: `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Comment extends Model { /** * Все привязанные отношения. * * @var array */ protected $touches = ['post']; /** * Получить статью, к которой привязан комментарий. */ public function post() { return $this->belongsTo('App\Post'); } }
Теперь, когда вы обновляете Comment, у владеющего Post тоже обновится столбец updated_at, позволяя проще узнать, когда необходимо аннулировать кэш модели Post:
```
$comment = App\Comment::find(1);

$comment->text = 'Edit to this comment!';

$comment->save();
```
Все наборы результатов, возвращаемые Eloquent, являются экземплярами объекта Illuminate\Database\Eloquent\Collection, в том числе результаты, получаемые с помощью метода `PHPget()` или доступные через отношения. Объект коллекции Eloquent наследует базовую коллекцию Laravel. Поэтому он наследует десятки методов, используемых для гибкой работы с базовым набором моделей Eloquent.
Конечно же, все коллекции также служат в качестве итераторов, позволяя вам перебирать их в цикле, как будто они простые PHP массивы:
foreach ($users as $user) { echo $user->name; }
Тем не менее, коллекции гораздо мощнее, чем массивы и предоставляют различные варианты операций отображения/уменьшения, которые могут быть сцеплены с использованием интуитивно понятного интерфейса. Например, давайте удалим все неактивные модели и возвратим имена для каждого оставшегося пользователя:
```
$names = $users->reject(function ($user) {
    return $user->active === false;
})
->map(function ($user) {
    return $user->name;
});
```
Большинство методов для работы с коллекциями Eloquent возвращают новый экземпляр коллекции Eloquent, но методы `PHPpluck()` , `PHPkeys()` , `PHPzip()` , `PHPcollapse()` , `PHPflatten()` и `PHPflip()` возвращают экземпляр базовой коллекции. Более того, если операция `PHPmap()` вернёт коллекцию, в которой нет моделей Eloquent, она будет автоматически приведена к базовой коллекции.
### Базовая коллекция
Все Eloquent коллекции наследуют объект базовой коллекции Laravel; поэтому они наследуют все мощные методы, предоставляемые базовым классом коллекции: all, avg, chunk, collapse, combine, contains, count, diff, diffKeys, each, every, except, filter, first, flatMap, flatten, flip, forget, forPage, get, groupBy, has, implode, intersect, isEmpty, keyBy, keys, last, map, max, merge, min, only, pluck, pop, prepend, pull, push, put, random, reduce, reject, reverse, search, shift, shuffle, slice, sort, sortBy, sortByDesc, splice, sum, take, toArray, toJson, transform, union, unique, values, where, whereStrict, whereIn, whereInLoose, whereLoose, zip.
## Пользовательские коллекции
Если вам нужно использовать пользовательский объект Collection со своими собственными методами наследования, вы можете переопределить метод `PHPnewCollection()` в вашей модели: `<?php` namespace App; use App\CustomCollection; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Создание экземпляра новой Eloquent коллекции. * * @param array $models * @return \Illuminate\Database\Eloquent\Collection */ public function newCollection(array $models = []) { return new CustomCollection($models); } } После определения метода `PHPnewCollection()` вы получите экземпляр пользовательской коллекции при любом обращении к экземпляру Collection этой модели. Если вы хотите использовать собственную коллекцию для каждой модели в вашем приложении, вы должны переопределить метод `PHPnewCollection()` в базовом классе модели, наследуемой всеми вашими моделями.
# Преобразователи
Читатели и преобразователи позволяют вам форматировать значения атрибутов Eloquent при их чтении или записи в экземпляры моделей. Например, вы хотите использовать Laravel-шифратор, чтобы зашифровать значение, пока оно хранится в базе, и затем автоматически расшифровать атрибут, когда вы обращаетесь к нему в модели Eloquent.
В дополнение к обычным читателям и преобразователям Eloquent также автоматически преобразует поля с датами в экземпляры Carbon или даже преобразует текстовые поля в JSON.
### Определение читателя
Чтобы определить читателя, создайте метод `PHPgetFooAttribute()` в вашей модели, где Foo — отформатированное в соответствии со стилем «studly» название столбца, к которому вы хотите иметь доступ. В данном примере мы определим читателя для атрибута first_name. Читатель будет автоматически вызван Eloquent при попытке получить значение атрибута first_name: `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Получить имя пользователя. * * @param string $value * @return string */ public function getFirstNameAttribute($value) { return ucfirst($value); } }
Как видите, первоначальное значение столбца передается читателю, позволяя вам управлять значением и возвращать его. Чтобы получить доступ к значению читателя, вы можете просто обратиться к атрибуту first_name экземпляра модели:
$firstName = $user->first_name;
### Определение преобразователя
Чтобы определить преобразователь, определите метод `PHPsetFooAttribute()` для своей модели, где `PHPFoo` — отформатированное в соответствии со стилем «studly» название столбца, к которому вы хотите иметь доступ. И снова давайте определим преобразователь для атрибута first_name. Этот преобразователь будет автоматически вызван, когда мы попытаемся установить значение атрибута first_name в модели: `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Установить имя пользователя. * * @param string $value * @return void * //для версии 5.2 и ранее: * //@return string */ public function setFirstNameAttribute($value) { $this->attributes['first_name'] = strtolower($value); } } Преобразователь получает значение, которое устанавливается в атрибуте, позволяя вам управлять значением и изменять его во внутреннем свойстве `PHP$attributes` модели Eloquent. Так, например, если мы пытаемся установить атрибут first_name в значение Sally:
```
$user = App\User::find(1);

$user->first_name = 'Sally';
```

В этом примере функция `setFirstNameAttribute()` будет вызвана со значением Sally. Преобразователь применит функцию `strtolower()` к имени и установит полученное значение во внутреннем массиве `$attributes`.
## Преобразователи дат
По умолчанию Eloquent преобразует столбцы created_at и updated_at в экземпляры Carbon, которые наследуют PHP-класс DateTime и предоставляют ряд полезных методов. Вы можете сами настроить, какие поля автоматически будут преобразовываться, и даже полностью отключить их преобразование, изменив свойство `PHP$dates` вашей модели: `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Атрибуты, которые должны быть преобразованы к датам. * * @var array */ protected $dates = [ 'created_at', 'updated_at', 'deleted_at' ]; }
Когда столбец является датой, вы можете установить его значение в формат времени UNIX, в строку даты (Y-m-d), в строку даты-времени, и конечно в экземпляр DateTime/Carbon, и значение даты будет автоматически правильно сохранено в вашей базе данных:
$user->deleted_at = Carbon::now(); $user->save(); Как было отмечено выше, полученные атрибуты, которые перечислены в вашем свойстве `PHP$dates` , будут автоматически преобразованы к экземпляру Carbon, позволяя вам использовать любой из методов Carbon для ваших атрибутов:
return $user->deleted_at->getTimestamp(); По умолчанию метки времени отформатированы как 'Y-m-d H:i:s'. Если вам нужно настроить формат метки времени, установите значение `PHP$dateFormat` в своей модели. Это свойство определяет, как атрибуты даты хранятся в базе данных, а также их формат, когда модель преобразована в массив или JSON: `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class Flight extends Model { /** * Формат хранения столбцов с датами модели. * * @var string */ protected $dateFormat = 'U'; }
## Преобразование атрибутов
Свойство `PHP$casts` в вашей модели предоставляет удобный метод преобразования атрибутов к общим типам данных. Свойство `PHP$casts` должно быть массивом, где ключ — название преобразуемого атрибута, а значение — тип, в который вы хотите преобразовать столбец. Поддерживаемые типы для преобразования: `PHPinteger` , `PHPreal` , `PHPfloat` , `PHPdouble` , `PHPstring` , `PHPboolean` , `PHPobject` , `PHParray` , `PHPcollection` , `PHPdate` , `PHPdatetime` и с версии 5.2 `PHPtimestamp` . Например, давайте привяжем атрибут is_admin, который сохранен в нашей базе данных как `PHPinteger` ( `PHP0` или `PHP1'` к значению `PHPboolean` : `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Атрибуты, которые должны быть преобразованы к базовым типам. * * @var array */ protected $casts = [ 'is_admin' => 'boolean', ]; } Теперь атрибут is_admin будет всегда преобразовываться в boolean, когда вы обращаетесь к нему, даже если само значение хранится в базе данных как `PHPinteger` :
if ($user->is_admin) { // }
### Преобразование в массив и JSON
Тип `PHParray` особенно полезен для преобразования при работе со столбцами, которые хранятся в формате JSON. Например, если у вашей базы данных есть тип поля `PHPTEXT` или `PHPJSON` (начиная с версии 5.3), который содержит JSON данные, добавление преобразования в `PHParray` к этому атрибуту автоматически десериализует атрибут в PHP массив, во время доступа к нему из вашей модели Eloquent: `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Атрибуты, которые должны быть преобразованы к базовым типам. * * @var array */ protected $casts = [ 'options' => 'array', ]; }
После определения преобразования вы можете обратиться к атрибуту options, и он будет автоматически десериализован из JSON в PHP массив. Когда вы зададите значение атрибута options, данный массив будет автоматически преобразован обратно в JSON для хранения:
$options = $user->options; $options['key'] = 'value'; $user->options = $options; $user->save();
# Сериализация
При создании JSON API вам часто потребуется преобразовывать модели и отношения к массивам или формату JSON. Eloquent содержит методы для выполнения этих преобразований и управляет атрибутами, включенными в вашу сериализацию.
## Сериализация моделей и коллекций
### Сериализация в массивы
Для преобразования модели и её загруженных отношений в массив надо использовать метод `PHPtoArray()` . Этот метод рекурсивный, поэтому все атрибуты и все отношения (включая отношения отношений) будут конвертированы в массивы:
```
$user = App\User::with('roles')->first();
```
return $user->toArray();
Вы можете также преобразовывать целые коллекции моделей в массивы:
return $users->toArray();
### Сериализация в JSON
Для преобразования модели в JSON вам надо использовать метод `PHPtoJson()` . Метод `PHPtoJson()` рекурсивный, поэтому все атрибуты и отношения будут преобразованы в JSON:
return $user->toJson(); В качестве альтернативы, вы можете преобразовать модель или коллекцию в строку, что автоматически вызовет метод `PHPtoJson()` на модели или коллекции:
return (string) $user;
Поскольку модели и коллекции конвертируются в JSON при их преобразовании в строку, вы можете возвращать объекты Eloquent напрямую из ваших маршрутов или контроллеров:
return App\User::all(); });
## Скрытие атрибутов от JSON
Иногда вам может быть нужно ограничить список атрибутов, включённых в преобразованный массив или JSON-строку — например, скрыть пароли. Для этого добавьте в модель свойство `PHP$hidden` : `<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Атрибуты, которые должны быть невидимы для массива. * * @var array */ protected $hidden = ['password']; }
При скрытии отношений используйте имя метода отношения, а не имя его динамического свойства.
Вы также можете использовать свойство visible, чтобы определить белый список атрибутов, которые должны быть включены в ваш массив модели и преобразованный JSON. Все остальные атрибуты будут скрыты при конвертировании модели в массив или JSON:
`<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Атрибуты, которые должны быть видны в массиве. * * @var array */ protected $visible = ['first_name', 'last_name']; }
Временное изменение видимости атрибута
Используйте метод `PHPmakeVisible()` , чтобы сделать обычно скрытые атрибуты видимыми в данном экземпляре модели. Метод `PHPmakeVisible()` возвращает экземпляр модели для удобной сцепки методов:
```
return $user->makeVisible('attribute')->toArray();
```
А также, используйте метод `PHPmakeHidden()` , чтобы сделать обычно видимые атрибуты скрытыми в данном экземпляре модели.
```
return $user->makeHidden('attribute')->toArray();
```
## Добавление значений в JSON
Иногда, при конвертировании моделей в массив или JSON, вам может понадобиться добавить атрибуты, для которых нет соответствующих столбцов в вашей БД. Для этого просто определите для него читателя:
`<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Получить флаг администратора для пользователя. * * @return bool */ public function getIsAdminAttribute() { return $this->attributes['admin'] == 'yes'; } }
После создания читателя, добавьте имя атрибута в свойство модели appends. Обратите внимание на то, что имена атрибутов обычно указываются в стиле «snake case», хотя читатель определяется в стиле «camel case»:
`<?php` namespace App; use Illuminate\Database\Eloquent\Model; class User extends Model { /** * Читатель, добавленный к форме массива модели. * * @var array */ protected $appends = ['is_admin']; }
Когда атрибут добавлен в список appends, он будет включён в оба представления — и в массив модели, и в JSON. Атрибуты в массиве appends соответствуют настройкам модели visible и hidden.
# Консоль Artisan
Artisan — интерфейс командной строки, который поставляется с Laravel. Он содержит набор полезных команд, помогающих вам при разработке приложения. Для просмотра списка доступных команд используйте команду `list`:

```
php artisan list
```

Каждая команда имеет описание, в котором указаны её доступные аргументы и ключи. Для просмотра описания просто добавьте перед именем команды слово `help`:

```
php artisan help migrate
```
В дополнение к стандартным командам Artisan вы можете также создавать свои собственные команды. Обычно команды хранятся в папке app/Console/Commands, но вы можете поместить их в любое другое место, в котором их сможет найти и загрузить Composer.
### Генерирование команд
добавлено в 5.3 ()
Для создания новой команды используйте Artisan-команду `make:command`. Эта команда создаст новый класс команды в папке app/Console/Commands. Если этой папки не существует, она будет создана при первом запуске команды `make:command`. Сгенерированная команда будет содержать стандартный набор свойств и методов, присущих всем командам:

```
php artisan make:command SendEmails
```
Затем вам надо зарегистрировать команду, тогда она сможет быть запущена через командный интерфейс Artisan.
добавлено в 5.2 () 5.1 () 5.0 ()
Для создания новой команды можно использовать Artisan-команду `make:console`, которая создаст заглушку, с которой вы можете начать работать:

```
php artisan make:console SendEmails
```

Эта команда создаст класс в app/Console/Commands/SendEmails.php. При создании команды может быть использован ключ `--command` для назначения имени команды в терминале:

```
php artisan make:console SendEmails --command=emails:send
```
### Структура команды
После генерирования команды, вам нужно заполнить свойства signature и description в её классе, которые используются при отображении вашей команды в списке команд ( `shlist` ). При вызове вашей команды будет вызван метод `PHPhandle()` . В него вы можете поместить необходимую вам логику.
Для улучшения кода, с точки зрения его повторного использования, полезно сохранять ваши консольные команды простыми и использовать в них сервисы самого приложения для выполнения их задач. Обратите внимание, что в приведённом примере мы внедряем класс сервиса для выполнения «трудоёмкой» задачи отправки писем.
Давайте посмотрим на пример команды. Мы можем внедрить любые необходимые зависимости в конструкторе команды. Сервис-контейнер Laravel автоматически внедрит все указанные в конструкторе зависимости:
`<?php` namespace App\Console\Commands; use App\User; use App\DripEmailer; use Illuminate\Console\Command; class SendEmails extends Command { /** * Имя и аргументы консольной команды. * * @var string */ protected $signature = 'email:send {user}'; /** * Описание консольной команды. * * @var string */ protected $description = 'Send drip e-mails to a user'; /** * Служба "капельных" e-mail сообщений. * * @var DripEmailer */ protected $drip; /** * Создание нового экземпляра команды. * * @param DripEmailer $drip * @return void */ public function __construct(DripEmailer $drip) { parent::__construct(); $this->drip = $drip; } /** * Выполнение консольной команды. * * @return mixed */ public function handle() { $this->drip->send(User::find($this->argument('user'))); } }
добавлено в 5.3 ()
### Команды замыкания
Closure based commands provide an alternative to defining console commands as classes, in the same way that route Closures are an alternative to controllers. Within the `commands()` method of your app/Console/Kernel.php file, Laravel loads the routes/console.php file:

```
/**
 * Register the Closure based commands for the application.
 *
 * @return void
 */
protected function commands()
{
    require base_path('routes/console.php');
}
```

Even though this file does not define HTTP routes, it defines console based entry points (routes) into your application. Within this file, you may define all of your Closure based commands using the `Artisan::command()` method. The `command()` method accepts two arguments: the command signature and a Closure which receives the command's arguments and options:
$this->info("Building {$project}!"); });
The Closure is bound to the underlying command instance, so you have full access to all of the helper methods you would typically be able to access on a full command class.

In addition to receiving your command's arguments and options, command Closures may also type-hint additional dependencies that you would like resolved out of the service container:

```
use App\User;
use App\DripEmailer;

Artisan::command('email:send {user}', function (DripEmailer $drip, $user) {
    $drip->send(User::find($user));
});
```

When defining a Closure based command, you may use the `describe()` method to add a description to the command. This description will be displayed when you run the `php artisan list` or `php artisan help` commands:
$this->info("Building {$project}!"); })->describe('Build the project');
## Defining Input Expectations

When writing console commands, it is common to gather input from the user through arguments or options. Laravel makes it very convenient to define the input you expect from the user using the signature property on your commands. The signature property allows you to define the name, arguments, and options for the command in a single, expressive, route-like syntax.

### Arguments

All user supplied arguments and options are wrapped in curly braces. In the following example, the command defines one required argument, user:

```
/**
 * The name and signature of the console command.
 *
 * @var string
 */
protected $signature = 'email:send {user}';
```

You may also make arguments optional and define default values for arguments:

```
// Optional argument...
email:send {user?}

// Optional argument with default value...
email:send {user=foo}
```

### Options

Options, like arguments, are another form of user input. Options are prefixed by two hyphens (--) when they are specified on the command line. There are two types of options: those that receive a value and those that don't. Options that don't receive a value serve as a boolean "switch". Let's take a look at an example of this type of option:

```
/**
 * The name and signature of the console command.
 *
 * @var string
 */
protected $signature = 'email:send {user} {--queue}';
```

In this example, the --queue switch may be specified when calling the Artisan command. If the --queue switch is passed, the value of the option will be true; otherwise, it will be false:

```
php artisan email:send 1 --queue
```

Next, let's take a look at an option that expects a value. If the user must specify a value for an option, suffix the option name with an equals sign (=):

```
/**
 * The name and signature of the console command.
 *
 * @var string
 */
protected $signature = 'email:send {user} {--queue=}';
```

In this example, the user may pass a value for the option like so:

```
php artisan email:send 1 --queue=default
```
You may assign default values to options by specifying the default value after the option name. If no option value is passed by the user, the default value will be used:

```
email:send {user} {--queue=default}
```

To assign a shortcut when defining an option, you may specify it before the option name and use the | delimiter to separate the shortcut from the full option name:

```
email:send {user} {--Q|queue}
```

### Input Arrays

If you would like to define arguments or options to expect array inputs, you may use the * character. First, let's take a look at an example that specifies an array argument:

```
email:send {user*}
```

When calling this command, the user arguments may be passed in order to the command line. For example, the following command will set the value of user to ['foo', 'bar']:

```
php artisan email:send foo bar
```

When defining an option that expects an array input, each option value passed to the command should be prefixed with the option name:

```
email:send {user} {--id=*}

php artisan email:send --id=1 --id=2
```

### Input Descriptions

You may assign descriptions to input arguments and options by separating the parameter from the description using a colon. If you need a little extra room to define your command, feel free to spread the definition across multiple lines:

```
/**
 * The name and signature of the console command.
 *
 * @var string
 */
protected $signature = 'email:send
                        {user : The ID of the user}
                        {--queue= : Whether the job should be queued}';
```
## Command I/O

While your command is executing, you will obviously need to access the values of the arguments and options accepted by your command. To do so, you may use the `argument()` and `option()` methods. To retrieve the value of an argument, use the `argument()` method:

```
/**
 * Execute the console command.
 *
 * @return mixed
 */
public function handle()
{
    $userId = $this->argument('user');

    //
}
```

If you need to retrieve all of the arguments as an array, call the `arguments()` method (in version 5.2 and earlier, the `argument()` method with no parameters):

```
$arguments = $this->arguments();
```

Options may be retrieved just as easily as arguments using the `option()` method. To retrieve all of the options as an array, call the `options()` method (in version 5.2 and earlier, the `option()` method with no parameters):

```
// Retrieve a specific option...
$queueName = $this->option('queue');

// Retrieve all options...
$options = $this->options();
```

If the argument or option does not exist, `null` will be returned.

In addition to displaying output, you may also ask the user to provide input during the execution of your command. The `ask()` method will prompt the user with the given question, accept their input, and then return the user's input back to your command:

```
/**
 * Execute the console command.
 *
 * @return mixed
 */
public function handle()
{
    $name = $this->ask('What is your name?');
}
```

The `secret()` method is similar to `ask()`, but the user's input will not be visible to them as they type in the console. This method is useful when asking for sensitive information such as a password:

```
$password = $this->secret('What is the password?');
```

If you need to ask the user for a simple confirmation, you may use the `confirm()` method. By default, this method will return false. However, if the user enters y or yes in response to the prompt, the method will return true:

```
if ($this->confirm('Do you wish to continue?')) {
    //
}
```

The `anticipate()` method can be used to provide auto-completion for possible choices. The user can still provide any answer, regardless of the suggested choices:

```
$name = $this->anticipate('What is your name?', ['Taylor', 'Dayle']);
```

Multiple choice questions

If you need to give the user a predefined set of choices, you may use the `choice()` method. You may set the default value to be returned if no option is chosen:

```
$name = $this->choice('What is your name?', ['Taylor', 'Dayle'], $default);
```

For output, you may use the `line()`, `info()`, `comment()`, `question()` and `error()` methods. Each of these methods will use appropriate ANSI colors for their purpose. For example, let's display some general information to the user; typically, the `info()` method will display in the console as green text:

```
/**
 * Execute the console command.
 *
 * @return mixed
 */
public function handle()
{
    $this->info('Display this on the screen');
}
```

To display an error message, use the `error()` method. Error message text is typically displayed in red:
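The code snippet for that call did not survive extraction; a minimal equivalent (the message text is illustrative) is:

```
$this->error('Something went wrong!');
```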
If you would like to display plain console output without any special colors, use the `line()` method:

```
$this->line('Display this on the screen');
```

The `table()` method makes it easy to correctly format multiple rows/columns of data. Just pass the headers and rows to the method; the width and height will be dynamically calculated based on the given data:

```
$headers = ['Name', 'Email'];

$users = App\User::all(['name', 'email'])->toArray();

$this->table($headers, $users);
```

For long running tasks, it can be helpful to show a progress indicator. Using the output object, we can start, advance and stop the progress bar. You must define the number of steps when you start the progress bar, then advance the bar after each step:

```
$users = App\User::all();

$bar = $this->output->createProgressBar(count($users));

foreach ($users as $user) {
    $this->performTask($user);

    $bar->advance();
}

$bar->finish();
```

For more advanced options, check out the Symfony Progress Bar component documentation.

Specifying the environment for command execution

You may specify the environment that should be used while running a command using the --env option:

```
php artisan migrate --env=local
```

Viewing your current Laravel version

You may also view the current version of your Laravel installation using the --version option:

```
php artisan --version
```

Once your command is finished, you need to register it with Artisan. All commands are registered in the app/Console/Kernel.php file. Within this file, you will find a list of commands in the commands property. To register your command, simply add the command's class name to the list. When Artisan boots, all the commands listed in this property will be resolved by the service container and registered with Artisan:

```
protected $commands = [
    Commands\SendEmails::class,
];
```
## Programmatically Executing Commands

Sometimes you may wish to execute an Artisan command outside of the CLI. For example, you may wish to fire an Artisan command from a route or controller. You may use the `call()` method on the Artisan facade to accomplish this. The `call()` method accepts the name of the command as the first argument and an array of command parameters as the second argument. The exit code will be returned:

```
Route::get('/foo', function () {
    $exitCode = Artisan::call('email:send', [
        'user' => 1, '--queue' => 'default'
    ]);

    //
});
```

Using the `queue()` method on the Artisan facade, you may even queue Artisan commands so they are processed in the background by your queue workers. Before using this method, make sure you have configured your queue and are running a queue listener:

```
Route::get('/foo', function () {
    Artisan::queue('email:send', [
        'user' => 1, '--queue' => 'default'
    ]);

    //
});
```

If you need to specify the value of an option that does not accept string values, such as the --force flag on the migrate:refresh command, you may pass true or false:

```
$exitCode = Artisan::call('migrate:refresh', [
    '--force' => true,
]);
```

### Calling Commands From Other Commands

Sometimes you may wish to call other commands from an existing Artisan command. You may do so using the `call()` method. This method accepts the command name and an array of command parameters:

```
/**
 * Execute the console command.
 *
 * @return mixed
 */
public function handle()
{
    $this->call('email:send', [
        'user' => 1, '--queue' => 'default'
    ]);

    //
}
```

If you would like to call another console command and suppress all of its output, you may use the `callSilent()` method. The `callSilent()` method has the same signature as the `call()` method:

```
$this->callSilent('email:send', [
    'user' => 1, '--queue' => 'default'
]);
```
## Scheduling Artisan Commands

This section of the article applies only to version 5.0 and was removed in version 5.1.

In the past, developers had to generate a Cron entry for each console command they wished to schedule. This is a headache: your console schedule is no longer under version control, and you must SSH into your server to add the Cron entries. Let's make our lives easier. The Laravel command scheduler allows you to fluently and expressively define your command schedule within Laravel itself, and only a single Cron entry is needed on your server.

Your command schedule is stored in the app/Console/Kernel.php file. Within this class you will see the `schedule` method. To help you get started, a simple example is included with the method. You are free to add as many scheduled jobs as you wish to the `Schedule` object. The only Cron entry you need to add to your server is this:

```
* * * * * php /path/to/artisan schedule:run 1>> /dev/null 2>&1
```

This Cron will call the Laravel command scheduler every minute. Laravel will then evaluate your scheduled jobs and run the jobs that are due. It couldn't be easier!

### More Scheduling Examples

Let's look at a few more scheduling examples:

```
$schedule->call(function()
{
    // Do some task...
})->hourly();
```
Scheduling terminal commands
```
$schedule->exec('composer self-update')->daily();
```
```
$schedule->command('foo')->cron('* * * * *');
```
Run a command every five, ten, or thirty minutes, respectively:

```
$schedule->command('foo')->everyFiveMinutes();

$schedule->command('foo')->everyTenMinutes();

$schedule->command('foo')->everyThirtyMinutes();
```
```
$schedule->command('foo')->daily();
```
Daily jobs at a specific time (24-hour format)
```
$schedule->command('foo')->dailyAt('15:00');
```
Twice daily jobs
```
$schedule->command('foo')->twiceDaily();
```
```
$schedule->command('foo')->weekdays();
```
```
$schedule->command('foo')->weekly();
```
```
// Schedule a weekly job for a specific day (0-6) and time...
$schedule->command('foo')->weeklyOn(1, '8:00');
```
```
$schedule->command('foo')->monthly();
```
Jobs that run on a specific day of the week
```
$schedule->command('foo')->mondays();
$schedule->command('foo')->tuesdays();
$schedule->command('foo')->wednesdays();
$schedule->command('foo')->thursdays();
$schedule->command('foo')->fridays();
$schedule->command('foo')->saturdays();
$schedule->command('foo')->sundays();
```

By default, scheduled jobs will be run even if the previous instance of the job is still running. To prevent this, use the `withoutOverlapping` method:

```
$schedule->command('foo')->withoutOverlapping();
```
In this example, the foo command will be run every minute if it is not already running.
Limiting jobs to certain environments
```
$schedule->command('foo')->monthly()->environments('production');
```
Indicating that the job should run even when the application is in maintenance mode
```
$schedule->command('foo')->monthly()->evenInMaintenanceMode();
```
Only allow the job to run when the following callback returns `true`
```
$schedule->command('foo')->monthly()->when(function () {
    return true;
});
```
E-mailing the output of a scheduled job
```
$schedule->command('foo')->sendOutputTo($filePath)->emailOutputTo('<EMAIL>');
```
The output must be sent to a file before it can be e-mailed.
Sending the output of a scheduled job to a given location
```
$schedule->command('foo')->sendOutputTo($filePath);
```
Pinging a given URL after the job runs
```
$schedule->command('foo')->thenPing($url);
```
To use the `thenPing($url)` feature, the Guzzle HTTP library is required. You can add Guzzle 5 to your project by adding the following line to your composer.json file:
```
conf"guzzlehttp/guzzle": "~5.0"
```
# Creating Artisan Commands

This documentation article applies only to version 5.0 and was removed in version 5.1. The description of the command creation process has been moved to the Artisan Console article.

In addition to the commands provided with Artisan, you may also build your own custom commands for working with your application. You may store your custom commands in the app/Console/Commands directory, or in any other location that can be found by the autoloader based on your composer.json file.

## Building A Command

### Generating The Class

To create a new command, you may use the make:console Artisan command, which will generate a command stub to help you get started:

Generating a new command class

```
php artisan make:console FooCommand
```

The command above would generate a class at app/Console/Commands/FooCommand.php.

When creating the command, the --command option may be used to assign the terminal command name:

```
php artisan make:console AssignUsers --command=users:assign
```

### Writing The Command

Once your command is generated, you should fill out the name and description properties of the class, which are used when displaying your command on the list screen (`artisan list`). The `fire` method will be called when your command is executed; you may place any command logic in this method.

### Arguments & Options

The `getArguments()` and `getOptions()` methods are where you may define any arguments or options your command receives. Both of these methods return an array of entries, each of which is described by a list of array options. When defining arguments, the array definition values look like this:
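The example array that originally followed this sentence is missing from the text; a typical Laravel 5.0-style `getArguments()` definition (the argument name and description below are illustrative) looks like this:

```
protected function getArguments()
{
    return [
        // ['name', mode, 'description']
        ['example', InputArgument::REQUIRED, 'An example argument.'],
    ];
}
```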
The mode parameter here can be either `InputArgument::REQUIRED` or `InputArgument::OPTIONAL`. When defining options, the array definition values look like this:
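Again, the options array itself did not survive extraction; a representative Laravel 5.0-style `getOptions()` definition (names and description are illustrative) is:

```
protected function getOptions()
{
    return [
        // ['name', 'shortcut', mode, 'description', default]
        ['example', null, InputOption::VALUE_OPTIONAL, 'An example option.', null],
    ];
}
```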
For options, the mode parameter may be any of the following: `InputOption::VALUE_REQUIRED`, `InputOption::VALUE_OPTIONAL`, `InputOption::VALUE_IS_ARRAY`, `InputOption::VALUE_NONE`.

The VALUE_IS_ARRAY mode indicates that the switch may be used multiple times when calling the command:

```
InputOption::VALUE_REQUIRED | InputOption::VALUE_IS_ARRAY
```

Which would then allow for this command:

```
php artisan foo --option=bar --option=baz
```

The VALUE_NONE mode indicates that the option is simply used as a "switch":

```
php artisan foo --option
```
While your command is executing, you will obviously need to access the values of the arguments and options accepted by your command. To do so, you may use the `argument()` and `option()` methods:
```
$value = $this->argument('name');
```
```
$arguments = $this->argument();
```
```
$value = $this->option('name');
```
```
$options = $this->option();
```
To send output to the console, you may use the `info()`, `comment()`, `question()` and `error()` methods. Each of these methods will use the appropriate ANSI colors for their purpose.
```
$this->info('Display this on the screen');
```
You may also use the `ask()` and `confirm()` methods to prompt the user for input:
```
$name = $this->ask('What is your name?');
```
```
$password = $this->secret('What is the password?');
```
```
if ($this->confirm('Do you wish to continue? [yes|no]')) {
    //
}
```

You may also pass a default value to the `confirm()` method, which should be either true or false:
```
$this->confirm($question, true);
```
### Calling Other Commands

Sometimes you may wish to call other commands from your command. You may do so using the `call()` method:
```
$this->call('command:name', ['argument' => 'foo', '--option' => 'bar']);
```
## Registering An Artisan Command

Once your command is finished, you need to register it with Artisan so it will be available for use. This is typically done in the app/Console/Kernel.php file. Within this file, you will find a list of commands in the commands property. To register your command, simply add it to this list:

```
protected $commands = [
    'App\Console\Commands\FooCommand',
];
```

When Artisan boots, all the commands listed in this property will be resolved by the service container and registered with Artisan.
|
github.com/alexedwards/scs/badgerstore | go | Go | README
[¶](#section-readme)
---
### badgerstore
A [Badger](https://github.com/dgraph-io/badger) based session store for [SCS](https://github.com/alexedwards/scs).
#### Setup
You should follow the instructions to [install and open a database](https://github.com/dgraph-io/badger#installing), and pass the database to `badgerstore.New()` to establish the session store.
#### Example
```
package main
import (
	"io"
	"log"
	"net/http"

	"github.com/alexedwards/scs/v2"
	"github.com/alexedwards/scs/badgerstore"
	"github.com/dgraph-io/badger"
)
var sessionManager *scs.SessionManager
func main() {
// Open a Badger database.
db, err := badger.Open(badger.DefaultOptions("tmp/badger"))
if err != nil {
log.Fatal(err)
}
defer db.Close()
// Initialize a new session manager and configure it to use badgerstore as the session store.
sessionManager = scs.New()
sessionManager.Store = badgerstore.New(db)
mux := http.NewServeMux()
mux.HandleFunc("/put", putHandler)
mux.HandleFunc("/get", getHandler)
http.ListenAndServe(":4000", sessionManager.LoadAndSave(mux))
}
func putHandler(w http.ResponseWriter, r *http.Request) {
sessionManager.Put(r.Context(), "message", "Hello from a session!")
}
func getHandler(w http.ResponseWriter, r *http.Request) {
msg := sessionManager.GetString(r.Context(), "message")
io.WriteString(w, msg)
}
```
#### Expired Session Cleanup
Badger will [automatically remove](https://github.com/dgraph-io/badger#setting-time-to-livettl-and-user-metadata-on-keys) expired session keys.
#### Key Collisions
By default keys are in the form `scs:session:<token>`. For example:
```
"scs:session:ZnirGwi2FiLwXeVlP5nD77IpfJZMVr6un9oZu2qtJrg"
```
Because the token is highly unique, key collisions are not a concern. But if you're configuring *multiple session managers*, both of which use `badgerstore`, then you may want the keys to have a different prefix depending on which session manager wrote them. You can do this by using the `NewWithPrefix()` method like so:
```
db, err := badger.Open(badger.DefaultOptions("tmp/badger"))
if err != nil {
log.Fatal(err)
}
defer db.Close()
sessionManagerOne = scs.New()
sessionManagerOne.Store = badgerstore.NewWithPrefix(db, "scs:session:1:")
sessionManagerTwo = scs.New()
sessionManagerTwo.Store = badgerstore.NewWithPrefix(db, "scs:session:2:")
```
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [type BadgerStore](#BadgerStore)
* + [func New(db *badger.DB) *BadgerStore](#New)
+ [func NewWithPrefix(db *badger.DB, prefix string) *BadgerStore](#NewWithPrefix)
* + [func (bs *BadgerStore) All() (map[string][]byte, error)](#BadgerStore.All)
+ [func (bs *BadgerStore) Commit(token string, data []byte, expiry time.Time) error](#BadgerStore.Commit)
+ [func (bs *BadgerStore) Delete(token string) error](#BadgerStore.Delete)
+ [func (bs *BadgerStore) Find(token string) ([]byte, bool, error)](#BadgerStore.Find)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [BadgerStore](https://github.com/alexedwards/scs/blob/95fa2ac9d520/badgerstore/badgerstore.go#L10) [¶](#BadgerStore)
```
type BadgerStore struct {
// contains filtered or unexported fields
}
```
BadgerStore represents the session store.
####
func [New](https://github.com/alexedwards/scs/blob/95fa2ac9d520/badgerstore/badgerstore.go#L17) [¶](#New)
```
func New(db *[badger](/github.com/dgraph-io/badger).[DB](/github.com/dgraph-io/badger#DB)) *[BadgerStore](#BadgerStore)
```
New returns a new BadgerStore instance.
The db parameter should be a pointer to a badger store instance.
####
func [NewWithPrefix](https://github.com/alexedwards/scs/blob/95fa2ac9d520/badgerstore/badgerstore.go#L25) [¶](#NewWithPrefix)
```
func NewWithPrefix(db *[badger](/github.com/dgraph-io/badger).[DB](/github.com/dgraph-io/badger#DB), prefix [string](/builtin#string)) *[BadgerStore](#BadgerStore)
```
NewWithPrefix returns a new BadgerStore instance.
The db parameter should be a pointer to a badger store instance.
The prefix parameter controls the Badger key prefix,
which can be used to avoid naming clashes if necessary.
####
func (*BadgerStore) [All](https://github.com/alexedwards/scs/blob/95fa2ac9d520/badgerstore/badgerstore.go#L92) [¶](#BadgerStore.All)
```
func (bs *[BadgerStore](#BadgerStore)) All() (map[[string](/builtin#string)][][byte](/builtin#byte), [error](/builtin#error))
```
All returns a map containing the token and data for all active (i.e.
not expired) sessions in the BadgerStore instance.
####
func (*BadgerStore) [Commit](https://github.com/alexedwards/scs/blob/95fa2ac9d520/badgerstore/badgerstore.go#L58) [¶](#BadgerStore.Commit)
```
func (bs *[BadgerStore](#BadgerStore)) Commit(token [string](/builtin#string), data [][byte](/builtin#byte), expiry [time](/time).[Time](/time#Time)) [error](/builtin#error)
```
Commit adds a session token and data to the BadgerStore instance with the given expiry time. If the session token already exists then the data and expiry time are updated.
####
func (*BadgerStore) [Delete](https://github.com/alexedwards/scs/blob/95fa2ac9d520/badgerstore/badgerstore.go#L77) [¶](#BadgerStore.Delete)
```
func (bs *[BadgerStore](#BadgerStore)) Delete(token [string](/builtin#string)) [error](/builtin#error)
```
Delete removes a session token and corresponding data from the BadgerStore instance.
####
func (*BadgerStore) [Find](https://github.com/alexedwards/scs/blob/95fa2ac9d520/badgerstore/badgerstore.go#L35) [¶](#BadgerStore.Find)
```
func (bs *[BadgerStore](#BadgerStore)) Find(token [string](/builtin#string)) ([][byte](/builtin#byte), [bool](/builtin#bool), [error](/builtin#error))
```
Find returns the data for a given session token from the BadgerStore instance. If the session token is not found or is expired,
the returned exists flag will be set to false. |
github.com/blacktop/go-macho | go | Go | README
[¶](#section-readme)
---
### go-macho
[![Go](https://github.com/blacktop/go-macho/workflows/Go/badge.svg?branch=master)](https://github.com/blacktop/go-macho/actions) [![Go Reference](https://pkg.go.dev/badge/github.com/blacktop/go-macho.svg)](https://pkg.go.dev/github.com/blacktop/go-macho) [![License](http://img.shields.io/:license-mit-blue.svg)](http://doge.mit-license.org)
> Package macho implements access to and creation of Mach-O object files.
---
#### Why 🤔
This package goes beyond Go's standard `debug/macho` package to:
* Cover ALL load commands and architectures
* Provide nice summary string output
* Allow for creating custom MachO files
* Parse Objective-C runtime information
* Parse Swift runtime information
* Parse code signature information
* Parse fixup chain information
#### Install
```
$ go get github.com/blacktop/go-macho
```
#### Getting Started
```
package main
import "github.com/blacktop/go-macho"
func main() {
m, err := macho.Open("/path/to/macho")
if err != nil {
panic(err)
}
defer m.Close()
fmt.Println(m.FileTOC.String())
}
```
#### License
MIT Copyright (c) 2020-2023 **blacktop**
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package macho implements access to and creation of Mach-O object files.
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [type AtomInfo](#AtomInfo)
* [type BitstreamWrapperHeader](#BitstreamWrapperHeader)
* [type BuildVersion](#BuildVersion)
* + [func (b *BuildVersion) LoadSize() uint32](#BuildVersion.LoadSize)
+ [func (b *BuildVersion) MarshalJSON() ([]byte, error)](#BuildVersion.MarshalJSON)
+ [func (b *BuildVersion) String() string](#BuildVersion.String)
+ [func (b *BuildVersion) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#BuildVersion.Write)
* [type CodeSignature](#CodeSignature)
* + [func (l *CodeSignature) LoadSize() uint32](#CodeSignature.LoadSize)
+ [func (l *CodeSignature) MarshalJSON() ([]byte, error)](#CodeSignature.MarshalJSON)
+ [func (l *CodeSignature) String() string](#CodeSignature.String)
+ [func (l *CodeSignature) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#CodeSignature.Write)
* [type DataInCode](#DataInCode)
* + [func (l *DataInCode) LoadSize() uint32](#DataInCode.LoadSize)
+ [func (l *DataInCode) MarshalJSON() ([]byte, error)](#DataInCode.MarshalJSON)
+ [func (d *DataInCode) String() string](#DataInCode.String)
+ [func (l *DataInCode) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#DataInCode.Write)
* [type DyldChainedFixups](#DyldChainedFixups)
* [type DyldEnvironment](#DyldEnvironment)
* [type DyldExportsTrie](#DyldExportsTrie)
* [type DyldInfo](#DyldInfo)
* + [func (d *DyldInfo) LoadSize() uint32](#DyldInfo.LoadSize)
+ [func (l *DyldInfo) MarshalJSON() ([]byte, error)](#DyldInfo.MarshalJSON)
+ [func (d *DyldInfo) Put(b []byte, o binary.ByteOrder) int](#DyldInfo.Put)
+ [func (d *DyldInfo) String() string](#DyldInfo.String)
+ [func (l *DyldInfo) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#DyldInfo.Write)
* [type DyldInfoOnly](#DyldInfoOnly)
* [type Dylib](#Dylib)
* + [func (d *Dylib) LoadSize() uint32](#Dylib.LoadSize)
+ [func (d *Dylib) MarshalJSON() ([]byte, error)](#Dylib.MarshalJSON)
+ [func (d *Dylib) Put(b []byte, o binary.ByteOrder) int](#Dylib.Put)
+ [func (d *Dylib) String() string](#Dylib.String)
+ [func (d *Dylib) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Dylib.Write)
* [type DylibCodeSignDrs](#DylibCodeSignDrs)
* [type Dylinker](#Dylinker)
* + [func (d *Dylinker) LoadSize() uint32](#Dylinker.LoadSize)
+ [func (d *Dylinker) MarshalJSON() ([]byte, error)](#Dylinker.MarshalJSON)
+ [func (d *Dylinker) Put(b []byte, o binary.ByteOrder) int](#Dylinker.Put)
+ [func (d *Dylinker) String() string](#Dylinker.String)
+ [func (d *Dylinker) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Dylinker.Write)
* [type DylinkerID](#DylinkerID)
* [type Dysymtab](#Dysymtab)
* + [func (d *Dysymtab) LoadSize() uint32](#Dysymtab.LoadSize)
+ [func (d *Dysymtab) MarshalJSON() ([]byte, error)](#Dysymtab.MarshalJSON)
+ [func (d *Dysymtab) String() string](#Dysymtab.String)
+ [func (d *Dysymtab) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Dysymtab.Write)
* [type EncryptionInfo](#EncryptionInfo)
* + [func (e *EncryptionInfo) LoadSize() uint32](#EncryptionInfo.LoadSize)
+ [func (l *EncryptionInfo) MarshalJSON() ([]byte, error)](#EncryptionInfo.MarshalJSON)
+ [func (e *EncryptionInfo) Put(b []byte, o binary.ByteOrder) int](#EncryptionInfo.Put)
+ [func (e *EncryptionInfo) String() string](#EncryptionInfo.String)
+ [func (l *EncryptionInfo) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#EncryptionInfo.Write)
* [type EncryptionInfo64](#EncryptionInfo64)
* + [func (e *EncryptionInfo64) LoadSize() uint32](#EncryptionInfo64.LoadSize)
+ [func (e *EncryptionInfo64) MarshalJSON() ([]byte, error)](#EncryptionInfo64.MarshalJSON)
+ [func (e *EncryptionInfo64) Put(b []byte, o binary.ByteOrder) int](#EncryptionInfo64.Put)
+ [func (e *EncryptionInfo64) String() string](#EncryptionInfo64.String)
+ [func (e *EncryptionInfo64) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#EncryptionInfo64.Write)
* [type EntryPoint](#EntryPoint)
* + [func (e *EntryPoint) LoadSize() uint32](#EntryPoint.LoadSize)
+ [func (e *EntryPoint) MarshalJSON() ([]byte, error)](#EntryPoint.MarshalJSON)
+ [func (e *EntryPoint) Put(b []byte, o binary.ByteOrder) int](#EntryPoint.Put)
+ [func (e *EntryPoint) String() string](#EntryPoint.String)
+ [func (e *EntryPoint) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#EntryPoint.Write)
* [type FatArch](#FatArch)
* [type FatArchHeader](#FatArchHeader)
* [type FatFile](#FatFile)
* + [func CreateFat(name string, files ...string) (*FatFile, error)](#CreateFat)
+ [func NewFatFile(r io.ReaderAt) (*FatFile, error)](#NewFatFile)
+ [func OpenFat(name string) (*FatFile, error)](#OpenFat)
* + [func (ff *FatFile) Close() error](#FatFile.Close)
* [type FatHeader](#FatHeader)
* [type File](#File)
* + [func NewFile(r io.ReaderAt, config ...FileConfig) (*File, error)](#NewFile)
+ [func Open(name string) (*File, error)](#Open)
* + [func (f *File) BuildVersion() *BuildVersion](#File.BuildVersion)
+ [func (f *File) Close() error](#File.Close)
+ [func (f *File) CodeSign(config *codesign.Config) error](#File.CodeSign)
+ [func (f *File) CodeSignature() *CodeSignature](#File.CodeSignature)
+ [func (f *File) DWARF() (*dwarf.Data, error)](#File.DWARF)
+ [func (f *File) DataInCode() *DataInCode](#File.DataInCode)
+ [func (f *File) DyldChainedFixups() (*fixupchains.DyldChainedFixups, error)](#File.DyldChainedFixups)
+ [func (f *File) DyldExports() ([]trie.TrieExport, error)](#File.DyldExports)
+ [func (f *File) DyldExportsTrie() *DyldExportsTrie](#File.DyldExportsTrie)
+ [func (f *File) DyldInfo() *DyldInfo](#File.DyldInfo)
+ [func (f *File) DyldInfoOnly() *DyldInfoOnly](#File.DyldInfoOnly)
+ [func (f *File) DylibID() *IDDylib](#File.DylibID)
+ [func (f *File) Export(path string, dcf *fixupchains.DyldChainedFixups, baseAddress uint64, ...) (err error)](#File.Export)
+ [func (f *File) FileSets() []*FilesetEntry](#File.FileSets)
+ [func (f *File) FindAddressSymbols(addr uint64) ([]Symbol, error)](#File.FindAddressSymbols)
+ [func (f *File) FindSectionForVMAddr(vmAddr uint64) *types.Section](#File.FindSectionForVMAddr)
+ [func (f *File) FindSegmentForVMAddr(vmAddr uint64) *Segment](#File.FindSegmentForVMAddr)
+ [func (f *File) FindSymbolAddress(symbol string) (uint64, error)](#File.FindSymbolAddress)
+ [func (f *File) ForEachV2SplitSegReference(...) error](#File.ForEachV2SplitSegReference)
+ [func (f *File) FunctionStarts() *FunctionStarts](#File.FunctionStarts)
+ [func (f *File) GetBaseAddress() uint64](#File.GetBaseAddress)
+ [func (f *File) GetBindInfo() (types.Binds, error)](#File.GetBindInfo)
+ [func (f *File) GetBindName(pointer uint64) (string, error)](#File.GetBindName)
+ [func (f *File) GetCFStrings() ([]objc.CFString, error)](#File.GetCFStrings)
+ [func (f *File) GetCString(addr uint64) (string, error)](#File.GetCString)
+ [func (f *File) GetCStringAtOffset(strOffset int64) (string, error)](#File.GetCStringAtOffset)
+ [func (f *File) GetDyldExport(symbol string) (*trie.TrieExport, error)](#File.GetDyldExport)
+ [func (f *File) GetEmbeddedInfoPlist() ([]byte, error)](#File.GetEmbeddedInfoPlist)
+ [func (f *File) GetEmbeddedLLVMBitcode() (*xar.Reader, error)](#File.GetEmbeddedLLVMBitcode)
+ [func (f *File) GetExports() ([]trie.TrieExport, error)](#File.GetExports)
+ [func (f *File) GetFileSetFileByName(name string) (*File, error)](#File.GetFileSetFileByName)
+ [func (f *File) GetFunctionData(fn types.Function) ([]byte, error)](#File.GetFunctionData)
+ [func (f *File) GetFunctionForVMAddr(addr uint64) (types.Function, error)](#File.GetFunctionForVMAddr)
+ [func (f *File) GetFunctions(data ...byte) []types.Function](#File.GetFunctions)
+ [func (f *File) GetFunctionsForRange(start, end uint64) ([]types.Function, error)](#File.GetFunctionsForRange)
+ [func (f *File) GetLoadsByName(name string) []Load](#File.GetLoadsByName)
+ [func (f *File) GetObjC(addr uint64) (any, bool)](#File.GetObjC)
+ [func (f *File) GetObjCCategories() ([]objc.Category, error)](#File.GetObjCCategories)
+ [func (f *File) GetObjCClass(vmaddr uint64) (*objc.Class, error)](#File.GetObjCClass)
+ [func (f *File) GetObjCClass2(vmaddr uint64) (*objc.Class, error)](#File.GetObjCClass2)
+ [func (f *File) GetObjCClassInfo(vmaddr uint64) (*objc.ClassRO64, error)](#File.GetObjCClassInfo)
+ [func (f *File) GetObjCClassNames() (map[uint64]string, error)](#File.GetObjCClassNames)
+ [func (f *File) GetObjCClassReferences() (map[uint64]*objc.Class, error)](#File.GetObjCClassReferences)
+ [func (f *File) GetObjCClasses() ([]objc.Class, error)](#File.GetObjCClasses)
+ [func (f *File) GetObjCImageInfo() (*objc.ImageInfo, error)](#File.GetObjCImageInfo)
+ [func (f *File) GetObjCIntegerObjects() (map[uint64]*objc.IntObj, error)](#File.GetObjCIntegerObjects)
+ [func (f *File) GetObjCIvars(vmaddr uint64) ([]objc.Ivar, error)](#File.GetObjCIvars)
+ [func (f *File) GetObjCMethodLists() ([]objc.Method, error)](#File.GetObjCMethodLists)
+ [func (f *File) GetObjCMethodNames() (map[uint64]string, error)](#File.GetObjCMethodNames)
+ [func (f *File) GetObjCMethods(vmaddr uint64) ([]objc.Method, error)](#File.GetObjCMethods)
+ [func (f *File) GetObjCNonLazyCategories() ([]objc.Category, error)](#File.GetObjCNonLazyCategories)
+ [func (f *File) GetObjCNonLazyClasses() ([]objc.Class, error)](#File.GetObjCNonLazyClasses)
+ [func (f *File) GetObjCProperties(vmaddr uint64) ([]objc.Property, error)](#File.GetObjCProperties)
+ [func (f *File) GetObjCProtoReferences() (map[uint64]*objc.Protocol, error)](#File.GetObjCProtoReferences)
+ [func (f *File) GetObjCProtocols() ([]objc.Protocol, error)](#File.GetObjCProtocols)
+ [func (f *File) GetObjCSelectorReferences() (map[uint64]*objc.Selector, error)](#File.GetObjCSelectorReferences)
+ [func (f *File) GetObjCStubs(parse func(uint64, []byte) (map[uint64]*objc.Stub, error)) (map[uint64]*objc.Stub, error)](#File.GetObjCStubs)
+ [func (f *File) GetObjCSuperReferences() (map[uint64]*objc.Class, error)](#File.GetObjCSuperReferences)
+ [func (f *File) GetObjCToc() objc.Toc](#File.GetObjCToc)
+ [func (f *File) GetOffset(address uint64) (uint64, error)](#File.GetOffset)
+ [func (f *File) GetPointer(offset uint64) (uint64, error)](#File.GetPointer)
+ [func (f *File) GetPointerAtAddress(address uint64) (uint64, error)](#File.GetPointerAtAddress)
+ [func (f *File) GetRebaseInfo() ([]types.Rebase, error)](#File.GetRebaseInfo)
+ [func (f *File) GetSectionsForSegment(name string) []*types.Section](#File.GetSectionsForSegment)
+ [func (f *File) GetSwiftAccessibleFunctions() (*types.AccessibleFunctionsSection, error)](#File.GetSwiftAccessibleFunctions)
+ [func (f *File) GetSwiftAssociatedTypes() ([]swift.AssociatedTypeDescriptor, error)](#File.GetSwiftAssociatedTypes)
+ [func (f *File) GetSwiftBuiltinTypes() ([]swift.BuiltinType, error)](#File.GetSwiftBuiltinTypes)
+ [func (f *File) GetSwiftClosures() ([]swift.CaptureDescriptor, error)](#File.GetSwiftClosures)
+ [func (f *File) GetSwiftDynamicReplacementInfo() (*types.AutomaticDynamicReplacements, error)](#File.GetSwiftDynamicReplacementInfo)
+ [func (f *File) GetSwiftDynamicReplacementInfoForOpaqueTypes() (*types.AutomaticDynamicReplacementsSome, error)](#File.GetSwiftDynamicReplacementInfoForOpaqueTypes)
+ [func (f *File) GetSwiftEntry() (uint64, error)](#File.GetSwiftEntry)
+ [func (f *File) GetSwiftFields() ([]*fields.Field, error)](#File.GetSwiftFields)
+ [func (f *File) GetSwiftProtocolConformances() ([]types.ConformanceDescriptor, error)](#File.GetSwiftProtocolConformances)
+ [func (f *File) GetSwiftProtocols() ([]types.Protocol, error)](#File.GetSwiftProtocols)
+ [func (f *File) GetSwiftReflectionStrings() (map[uint64]string, error)](#File.GetSwiftReflectionStrings)
+ [func (f *File) GetSwiftTypeRefs() ([]string, error)](#File.GetSwiftTypeRefs)
+ [func (f *File) GetSwiftTypes() ([]*types.TypeDescriptor, error)](#File.GetSwiftTypes)
+ [func (f *File) GetVMAddress(offset uint64) (uint64, error)](#File.GetVMAddress)
+ [func (f *File) HasFixups() bool](#File.HasFixups)
+ [func (f *File) HasObjC() bool](#File.HasObjC)
+ [func (f *File) HasObjCMessageReferences() bool](#File.HasObjCMessageReferences)
+ [func (f *File) HasPlusLoadMethod() bool](#File.HasPlusLoadMethod)
+ [func (f *File) ImportedLibraries() []string](#File.ImportedLibraries)
+ [func (f *File) ImportedSymbolNames() ([]string, error)](#File.ImportedSymbolNames)
+ [func (f *File) ImportedSymbols() ([]Symbol, error)](#File.ImportedSymbols)
+ [func (f *File) IsCString(addr uint64) (string, bool)](#File.IsCString)
+ [func (f *File) LibraryOrdinalName(libraryOrdinal int) string](#File.LibraryOrdinalName)
+ [func (f *File) PutObjC(addr uint64, obj any)](#File.PutObjC)
+ [func (f *File) ReadAt(p []byte, off int64) (n int, err error)](#File.ReadAt)
+ [func (f *File) Save(outpath string) error](#File.Save)
+ [func (f *File) Section(segment, section string) *types.Section](#File.Section)
+ [func (f *File) Segment(name string) *Segment](#File.Segment)
+ [func (f *File) Segments() Segments](#File.Segments)
+ [func (f *File) SlidePointer(ptr uint64) uint64](#File.SlidePointer)
+ [func (f *File) SourceVersion() *SourceVersion](#File.SourceVersion)
+ [func (f *File) UUID() *UUID](#File.UUID)
* [type FileConfig](#FileConfig)
* [type FileTOC](#FileTOC)
* + [func (t *FileTOC) AddLoad(l Load) uint32](#FileTOC.AddLoad)
+ [func (t *FileTOC) AddSection(s *types.Section)](#FileTOC.AddSection)
+ [func (t *FileTOC) AddSegment(s *Segment)](#FileTOC.AddSegment)
+ [func (t *FileTOC) DerivedCopy(Type types.HeaderFileType, Flags types.HeaderFlag) *FileTOC](#FileTOC.DerivedCopy)
+ [func (t *FileTOC) FileSize() uint64](#FileTOC.FileSize)
+ [func (t *FileTOC) HdrSize() uint32](#FileTOC.HdrSize)
+ [func (t *FileTOC) LoadAlign() uint64](#FileTOC.LoadAlign)
+ [func (t *FileTOC) LoadSize() uint32](#FileTOC.LoadSize)
+ [func (t *FileTOC) MarshalJSON() ([]byte, error)](#FileTOC.MarshalJSON)
+ [func (t *FileTOC) ModifySizeCommands(prev, curr int32) int32](#FileTOC.ModifySizeCommands)
+ [func (t *FileTOC) Print(printer func(t *FileTOC) string) string](#FileTOC.Print)
+ [func (t *FileTOC) RemoveLoad(l Load) error](#FileTOC.RemoveLoad)
+ [func (t *FileTOC) String() string](#FileTOC.String)
+ [func (t *FileTOC) TOCSize() uint32](#FileTOC.TOCSize)
* [type FilesetEntry](#FilesetEntry)
* + [func (l *FilesetEntry) LoadSize() uint32](#FilesetEntry.LoadSize)
+ [func (l *FilesetEntry) MarshalJSON() ([]byte, error)](#FilesetEntry.MarshalJSON)
+ [func (f *FilesetEntry) String() string](#FilesetEntry.String)
+ [func (l *FilesetEntry) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#FilesetEntry.Write)
* [type FormatError](#FormatError)
* + [func (e *FormatError) Error() string](#FormatError.Error)
* [type FunctionStarts](#FunctionStarts)
* [type FvmFile](#FvmFile)
* + [func (l *FvmFile) LoadSize() uint32](#FvmFile.LoadSize)
+ [func (l *FvmFile) MarshalJSON() ([]byte, error)](#FvmFile.MarshalJSON)
+ [func (l *FvmFile) String() string](#FvmFile.String)
+ [func (l *FvmFile) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#FvmFile.Write)
* [type IDDylib](#IDDylib)
* [type IDFvmlib](#IDFvmlib)
* [type Ident](#Ident)
* + [func (i *Ident) LoadSize() uint32](#Ident.LoadSize)
+ [func (i *Ident) MarshalJSON() ([]byte, error)](#Ident.MarshalJSON)
+ [func (i *Ident) String() string](#Ident.String)
+ [func (i *Ident) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Ident.Write)
* [type LazyLoadDylib](#LazyLoadDylib)
* [type LinkEditData](#LinkEditData)
* + [func (l *LinkEditData) LoadSize() uint32](#LinkEditData.LoadSize)
+ [func (l *LinkEditData) MarshalJSON() ([]byte, error)](#LinkEditData.MarshalJSON)
+ [func (l *LinkEditData) String() string](#LinkEditData.String)
+ [func (l *LinkEditData) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#LinkEditData.Write)
* [type LinkerOptimizationHint](#LinkerOptimizationHint)
* [type LinkerOption](#LinkerOption)
* + [func (l *LinkerOption) LoadSize() uint32](#LinkerOption.LoadSize)
+ [func (l *LinkerOption) MarshalJSON() ([]byte, error)](#LinkerOption.MarshalJSON)
+ [func (l *LinkerOption) String() string](#LinkerOption.String)
+ [func (l *LinkerOption) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#LinkerOption.Write)
* [type Load](#Load)
* [type LoadBytes](#LoadBytes)
* + [func (b LoadBytes) Copy() LoadBytes](#LoadBytes.Copy)
+ [func (b LoadBytes) LoadSize() uint32](#LoadBytes.LoadSize)
+ [func (b LoadBytes) MarshalJSON() ([]byte, error)](#LoadBytes.MarshalJSON)
+ [func (b LoadBytes) Raw() []byte](#LoadBytes.Raw)
+ [func (b LoadBytes) String() string](#LoadBytes.String)
+ [func (b LoadBytes) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#LoadBytes.Write)
* [type LoadCmdBytes](#LoadCmdBytes)
* + [func (s LoadCmdBytes) Copy() LoadCmdBytes](#LoadCmdBytes.Copy)
+ [func (s LoadCmdBytes) String() string](#LoadCmdBytes.String)
* [type LoadDylib](#LoadDylib)
* [type LoadDylinker](#LoadDylinker)
* [type LoadFvmlib](#LoadFvmlib)
* + [func (l *LoadFvmlib) LoadSize() uint32](#LoadFvmlib.LoadSize)
+ [func (l *LoadFvmlib) MarshalJSON() ([]byte, error)](#LoadFvmlib.MarshalJSON)
+ [func (l *LoadFvmlib) String() string](#LoadFvmlib.String)
+ [func (l *LoadFvmlib) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#LoadFvmlib.Write)
* [type Note](#Note)
* + [func (n *Note) LoadSize() uint32](#Note.LoadSize)
+ [func (n *Note) MarshalJSON() ([]byte, error)](#Note.MarshalJSON)
+ [func (n *Note) String() string](#Note.String)
+ [func (n *Note) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Note.Write)
* [type PrebindCheckSum](#PrebindCheckSum)
* + [func (l *PrebindCheckSum) LoadSize() uint32](#PrebindCheckSum.LoadSize)
+ [func (l *PrebindCheckSum) MarshalJSON() ([]byte, error)](#PrebindCheckSum.MarshalJSON)
+ [func (l *PrebindCheckSum) String() string](#PrebindCheckSum.String)
+ [func (l *PrebindCheckSum) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#PrebindCheckSum.Write)
* [type PreboundDylib](#PreboundDylib)
* + [func (d *PreboundDylib) LoadSize() uint32](#PreboundDylib.LoadSize)
+ [func (d *PreboundDylib) MarshalJSON() ([]byte, error)](#PreboundDylib.MarshalJSON)
+ [func (d *PreboundDylib) Put(b []byte, o binary.ByteOrder) int](#PreboundDylib.Put)
+ [func (d *PreboundDylib) String() string](#PreboundDylib.String)
+ [func (d *PreboundDylib) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#PreboundDylib.Write)
* [type Prepage](#Prepage)
* + [func (c *Prepage) LoadSize() uint32](#Prepage.LoadSize)
+ [func (c *Prepage) MarshalJSON() ([]byte, error)](#Prepage.MarshalJSON)
+ [func (c *Prepage) String() string](#Prepage.String)
+ [func (c *Prepage) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Prepage.Write)
* [type ReExportDylib](#ReExportDylib)
* [type Regs386](#Regs386)
* [type RegsAMD64](#RegsAMD64)
* [type RegsARM](#RegsARM)
* [type RegsARM64](#RegsARM64)
* [type Routines](#Routines)
* + [func (l *Routines) LoadSize() uint32](#Routines.LoadSize)
+ [func (l *Routines) MarshalJSON() ([]byte, error)](#Routines.MarshalJSON)
+ [func (l *Routines) String() string](#Routines.String)
+ [func (l *Routines) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Routines.Write)
* [type Routines64](#Routines64)
* + [func (l *Routines64) LoadSize() uint32](#Routines64.LoadSize)
+ [func (l *Routines64) MarshalJSON() ([]byte, error)](#Routines64.MarshalJSON)
+ [func (l *Routines64) String() string](#Routines64.String)
+ [func (l *Routines64) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Routines64.Write)
* [type Rpath](#Rpath)
* + [func (r *Rpath) LoadSize() uint32](#Rpath.LoadSize)
+ [func (r *Rpath) MarshalJSON() ([]byte, error)](#Rpath.MarshalJSON)
+ [func (r *Rpath) String() string](#Rpath.String)
+ [func (r *Rpath) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Rpath.Write)
* [type Segment](#Segment)
* + [func (s *Segment) Copy() *Segment](#Segment.Copy)
+ [func (s *Segment) CopyZeroed() *Segment](#Segment.CopyZeroed)
+ [func (s *Segment) Data() ([]byte, error)](#Segment.Data)
+ [func (s *Segment) LessThan(o *Segment) bool](#Segment.LessThan)
+ [func (s *Segment) LoadSize() uint32](#Segment.LoadSize)
+ [func (s *Segment) MarshalJSON() ([]byte, error)](#Segment.MarshalJSON)
+ [func (s *Segment) Open() io.ReadSeeker](#Segment.Open)
+ [func (s *Segment) Put32(b []byte, o binary.ByteOrder) int](#Segment.Put32)
+ [func (s *Segment) Put64(b []byte, o binary.ByteOrder) int](#Segment.Put64)
+ [func (s *Segment) String() string](#Segment.String)
+ [func (s *Segment) UncompressedSize(t *FileTOC, align uint64) uint64](#Segment.UncompressedSize)
+ [func (s *Segment) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Segment.Write)
* [type SegmentHeader](#SegmentHeader)
* + [func (s *SegmentHeader) String() string](#SegmentHeader.String)
* [type Segments](#Segments)
* + [func (v Segments) Len() int](#Segments.Len)
+ [func (v Segments) Less(i, j int) bool](#Segments.Less)
+ [func (v Segments) Swap(i, j int)](#Segments.Swap)
* [type SourceVersion](#SourceVersion)
* + [func (s *SourceVersion) LoadSize() uint32](#SourceVersion.LoadSize)
+ [func (s *SourceVersion) MarshalJSON() ([]byte, error)](#SourceVersion.MarshalJSON)
+ [func (s *SourceVersion) String() string](#SourceVersion.String)
+ [func (s *SourceVersion) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#SourceVersion.Write)
* [type SplitInfo](#SplitInfo)
* + [func (l *SplitInfo) LoadSize() uint32](#SplitInfo.LoadSize)
+ [func (l *SplitInfo) MarshalJSON() ([]byte, error)](#SplitInfo.MarshalJSON)
+ [func (s *SplitInfo) String() string](#SplitInfo.String)
+ [func (l *SplitInfo) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#SplitInfo.Write)
* [type SubClient](#SubClient)
* + [func (l *SubClient) LoadSize() uint32](#SubClient.LoadSize)
+ [func (l *SubClient) MarshalJSON() ([]byte, error)](#SubClient.MarshalJSON)
+ [func (l *SubClient) String() string](#SubClient.String)
+ [func (l *SubClient) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#SubClient.Write)
* [type SubFramework](#SubFramework)
* + [func (l *SubFramework) LoadSize() uint32](#SubFramework.LoadSize)
+ [func (l *SubFramework) MarshalJSON() ([]byte, error)](#SubFramework.MarshalJSON)
+ [func (l *SubFramework) String() string](#SubFramework.String)
+ [func (l *SubFramework) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#SubFramework.Write)
* [type SubLibrary](#SubLibrary)
* + [func (l *SubLibrary) LoadSize() uint32](#SubLibrary.LoadSize)
+ [func (l *SubLibrary) MarshalJSON() ([]byte, error)](#SubLibrary.MarshalJSON)
+ [func (l *SubLibrary) String() string](#SubLibrary.String)
+ [func (l *SubLibrary) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#SubLibrary.Write)
* [type SubUmbrella](#SubUmbrella)
* + [func (l *SubUmbrella) LoadSize() uint32](#SubUmbrella.LoadSize)
+ [func (l *SubUmbrella) MarshalJSON() ([]byte, error)](#SubUmbrella.MarshalJSON)
+ [func (l *SubUmbrella) String() string](#SubUmbrella.String)
+ [func (l *SubUmbrella) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#SubUmbrella.Write)
* [type SymSeg](#SymSeg)
* + [func (s *SymSeg) LoadSize() uint32](#SymSeg.LoadSize)
+ [func (s *SymSeg) MarshalJSON() ([]byte, error)](#SymSeg.MarshalJSON)
+ [func (s *SymSeg) String() string](#SymSeg.String)
+ [func (s *SymSeg) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#SymSeg.Write)
* [type Symbol](#Symbol)
* + [func (s Symbol) GetLib(m *File) string](#Symbol.GetLib)
+ [func (s Symbol) GetType(m *File) string](#Symbol.GetType)
+ [func (s *Symbol) MarshalJSON() ([]byte, error)](#Symbol.MarshalJSON)
+ [func (s Symbol) String(m *File) string](#Symbol.String)
* [type Symtab](#Symtab)
* + [func (s *Symtab) LoadSize() uint32](#Symtab.LoadSize)
+ [func (s *Symtab) MarshalJSON() ([]byte, error)](#Symtab.MarshalJSON)
+ [func (s *Symtab) Put(b []byte, o binary.ByteOrder) int](#Symtab.Put)
+ [func (s *Symtab) Search(name string) (*Symbol, error)](#Symtab.Search)
+ [func (s *Symtab) String() string](#Symtab.String)
+ [func (s *Symtab) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Symtab.Write)
* [type Thread](#Thread)
* + [func (t *Thread) LoadSize() uint32](#Thread.LoadSize)
+ [func (t *Thread) MarshalJSON() ([]byte, error)](#Thread.MarshalJSON)
+ [func (t *Thread) String() string](#Thread.String)
+ [func (t *Thread) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#Thread.Write)
* [type TwolevelHints](#TwolevelHints)
* + [func (l *TwolevelHints) LoadSize() uint32](#TwolevelHints.LoadSize)
+ [func (l *TwolevelHints) MarshalJSON() ([]byte, error)](#TwolevelHints.MarshalJSON)
+ [func (l *TwolevelHints) String() string](#TwolevelHints.String)
+ [func (l *TwolevelHints) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#TwolevelHints.Write)
* [type UUID](#UUID)
* + [func (l *UUID) LoadSize() uint32](#UUID.LoadSize)
+ [func (l *UUID) MarshalJSON() ([]byte, error)](#UUID.MarshalJSON)
+ [func (l *UUID) String() string](#UUID.String)
+ [func (l *UUID) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#UUID.Write)
* [type UnixThread](#UnixThread)
* [type UpwardDylib](#UpwardDylib)
* [type VersionMin](#VersionMin)
* + [func (v *VersionMin) LoadSize() uint32](#VersionMin.LoadSize)
+ [func (v *VersionMin) MarshalJSON() ([]byte, error)](#VersionMin.MarshalJSON)
+ [func (v *VersionMin) String() string](#VersionMin.String)
+ [func (v *VersionMin) Write(buf *bytes.Buffer, o binary.ByteOrder) error](#VersionMin.Write)
* [type VersionMinMacOSX](#VersionMinMacOSX)
* [type VersionMinTvOS](#VersionMinTvOS)
* [type VersionMinWatchOS](#VersionMinWatchOS)
* [type VersionMiniPhoneOS](#VersionMiniPhoneOS)
* [type WeakDylib](#WeakDylib)
### Constants [¶](#pkg-constants)
```
const (
BitcodeWrapperMagic = 0x0b17c0de
RawBitcodeMagic = 0xdec04342 // 'BC' 0xc0de
)
```
### Variables [¶](#pkg-variables)
```
var ErrMachODyldInfoNotFound = [errors](/errors).[New](/errors#New)("LC_DYLD_INFO(_ONLY) not found")
```
```
var ErrMachOSectionNotFound = [errors](/errors).[New](/errors#New)("MachO missing required section")
```
```
var ErrNotFat = &[FormatError](#FormatError){0, "not a fat Mach-O file", [nil](/builtin#nil)}
```
ErrNotFat is returned from NewFatFile or OpenFat when the file is not a universal binary but may be a thin binary, based on its magic number.
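As a usage sketch (the file path and fallback flow below are assumptions for illustration, not taken from this documentation), a caller can branch on ErrNotFat to fall back to thin Mach-O parsing:

```
package main

import (
	"errors"
	"fmt"
	"log"

	"github.com/blacktop/go-macho"
)

func main() {
	// Try the file as a fat (universal) binary first.
	if ff, err := macho.OpenFat("/path/to/binary"); err == nil {
		defer ff.Close()
		fmt.Println("parsed as a fat Mach-O")
		return
	} else if !errors.Is(err, macho.ErrNotFat) {
		log.Fatal(err)
	}

	// Not fat: parse it as a thin Mach-O instead.
	f, err := macho.Open("/path/to/binary")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	fmt.Println(f.FileTOC.String())
}
```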
```
var ErrObjcSectionNEmpty = [errors](/errors).[New](/errors#New)("required ObjC section is empty")
```
```
var ErrObjcSectionNotFound = [errors](/errors).[New](/errors#New)("missing required ObjC section")
```
```
var ErrSwiftSectionError = [fmt](/fmt).[Errorf](/fmt#Errorf)("missing swift section")
```
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [AtomInfo](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2155) [¶](#AtomInfo)
added in v1.1.161
```
type AtomInfo struct {
[LinkEditData](#LinkEditData)
}
```
####
type [BitstreamWrapperHeader](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2088) [¶](#BitstreamWrapperHeader)
added in v1.1.130
```
type BitstreamWrapperHeader struct {
Magic [uint32](/builtin#uint32)
Version [uint32](/builtin#uint32)
Offset [uint32](/builtin#uint32)
Size [uint32](/builtin#uint32)
CPUType [uint32](/builtin#uint32)
}
```
####
type [BuildVersion](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2026) [¶](#BuildVersion)
```
type BuildVersion struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[BuildVersionCmd](/github.com/blacktop/[email protected]/types#BuildVersionCmd)
Tools [][types](/github.com/blacktop/[email protected]/types).[BuildVersionTool](/github.com/blacktop/[email protected]/types#BuildVersionTool)
}
```
A BuildVersion represents a Mach-O build for platform min OS version.
####
func (*BuildVersion) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2032) [¶](#BuildVersion.LoadSize)
added in v1.1.117
```
func (b *[BuildVersion](#BuildVersion)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*BuildVersion) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2065) [¶](#BuildVersion.MarshalJSON)
added in v1.1.117
```
func (b *[BuildVersion](#BuildVersion)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*BuildVersion) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2044) [¶](#BuildVersion.String)
added in v1.0.4
```
func (b *[BuildVersion](#BuildVersion)) String() [string](/builtin#string)
```
####
func (*BuildVersion) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2035) [¶](#BuildVersion.Write)
added in v1.1.117
```
func (b *[BuildVersion](#BuildVersion)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [CodeSignature](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1425) [¶](#CodeSignature)
```
type CodeSignature struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[CodeSignatureCmd](/github.com/blacktop/[email protected]/types#CodeSignatureCmd)
[codesign](/github.com/blacktop/[email protected]/pkg/codesign).[CodeSignature](/github.com/blacktop/[email protected]/pkg/codesign#CodeSignature)
}
```
A CodeSignature represents a Mach-O LC_CODE_SIGNATURE command.
####
func (*CodeSignature) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1431) [¶](#CodeSignature.LoadSize)
added in v1.1.117
```
func (l *[CodeSignature](#CodeSignature)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*CodeSignature) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1443) [¶](#CodeSignature.MarshalJSON)
added in v1.1.117
```
func (l *[CodeSignature](#CodeSignature)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*CodeSignature) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1440) [¶](#CodeSignature.String)
added in v1.0.8
```
func (l *[CodeSignature](#CodeSignature)) String() [string](/builtin#string)
```
####
func (*CodeSignature) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1434) [¶](#CodeSignature.Write)
added in v1.1.33
```
func (l *[CodeSignature](#CodeSignature)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [DataInCode](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1754) [¶](#DataInCode)
```
type DataInCode struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[DataInCodeCmd](/github.com/blacktop/[email protected]/types#DataInCodeCmd)
Entries [][types](/github.com/blacktop/[email protected]/types).[DataInCodeEntry](/github.com/blacktop/[email protected]/types#DataInCodeEntry)
}
```
A DataInCode represents a Mach-O LC_DATA_IN_CODE command.
####
func (*DataInCode) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1760) [¶](#DataInCode.LoadSize)
added in v1.1.117
```
func (l *[DataInCode](#DataInCode)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*DataInCode) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1782) [¶](#DataInCode.MarshalJSON)
added in v1.1.117
```
func (l *[DataInCode](#DataInCode)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*DataInCode) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1769) [¶](#DataInCode.String)
added in v1.0.8
```
func (d *[DataInCode](#DataInCode)) String() [string](/builtin#string)
```
####
func (*DataInCode) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1763) [¶](#DataInCode.Write)
added in v1.1.33
```
func (l *[DataInCode](#DataInCode)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [DyldChainedFixups](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2099) [¶](#DyldChainedFixups)
added in v1.0.6
```
type DyldChainedFixups struct {
[LinkEditData](#LinkEditData)
}
```
A DyldChainedFixups represents a dyld chained fixups payload carried in a linkedit_data_command.
####
type [DyldEnvironment](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1702) [¶](#DyldEnvironment)
added in v1.0.13
```
type DyldEnvironment struct {
[Dylinker](#Dylinker)
}
```
DyldEnvironment represents a Mach-O LC_DYLD_ENVIRONMENT command.
####
type [DyldExportsTrie](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2090) [¶](#DyldExportsTrie)
added in v1.0.6
```
type DyldExportsTrie struct {
[LinkEditData](#LinkEditData)
}
```
A DyldExportsTrie represents a dyld exports trie carried in a linkedit_data_command; its payload is the export trie.
####
type [DyldInfo](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1577) [¶](#DyldInfo)
```
type DyldInfo struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[DyldInfoCmd](/github.com/blacktop/[email protected]/types#DyldInfoCmd)
}
```
A DyldInfo represents a Mach-O LC_DYLD_INFO command.
####
func (*DyldInfo) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1582) [¶](#DyldInfo.LoadSize)
added in v1.0.8
```
func (d *[DyldInfo](#DyldInfo)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*DyldInfo) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1622) [¶](#DyldInfo.MarshalJSON)
added in v1.1.117
```
func (l *[DyldInfo](#DyldInfo)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*DyldInfo) [Put](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1585) [¶](#DyldInfo.Put)
added in v1.0.8
```
func (d *[DyldInfo](#DyldInfo)) Put(b [][byte](/builtin#byte), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [int](/builtin#int)
```
####
func (*DyldInfo) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1607) [¶](#DyldInfo.String)
added in v1.0.8
```
func (d *[DyldInfo](#DyldInfo)) String() [string](/builtin#string)
```
####
func (*DyldInfo) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1600) [¶](#DyldInfo.Write)
added in v1.1.33
```
func (l *[DyldInfo](#DyldInfo)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [DyldInfoOnly](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1657) [¶](#DyldInfoOnly)
added in v1.0.8
```
type DyldInfoOnly struct {
[DyldInfo](#DyldInfo)
}
```
DyldInfoOnly represents a Mach-O LC_DYLD_INFO_ONLY command (compressed dyld information only).
####
type [Dylib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2164) [¶](#Dylib)
```
type Dylib struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[DylibCmd](/github.com/blacktop/[email protected]/types#DylibCmd)
Name [string](/builtin#string)
}
```
A Dylib represents a Mach-O LC_ID_DYLIB, LC_LOAD_{,WEAK_}DYLIB, or LC_REEXPORT_DYLIB load command.
####
func (*Dylib) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2170) [¶](#Dylib.LoadSize)
added in v1.1.117
```
func (d *[Dylib](#Dylib)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Dylib) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2200) [¶](#Dylib.MarshalJSON)
added in v1.1.117
```
func (d *[Dylib](#Dylib)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Dylib) [Put](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2173) [¶](#Dylib.Put)
added in v1.1.117
```
func (d *[Dylib](#Dylib)) Put(b [][byte](/builtin#byte), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [int](/builtin#int)
```
####
func (*Dylib) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2197) [¶](#Dylib.String)
added in v1.0.4
```
func (d *[Dylib](#Dylib)) String() [string](/builtin#string)
```
####
func (*Dylib) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2182) [¶](#Dylib.Write)
added in v1.1.117
```
func (d *[Dylib](#Dylib)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [DylibCodeSignDrs](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1856) [¶](#DylibCodeSignDrs)
added in v1.1.33
```
type DylibCodeSignDrs struct {
[LinkEditData](#LinkEditData)
}
```
####
type [Dylinker](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2219) [¶](#Dylinker)
added in v1.1.117
```
type Dylinker struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[DylinkerCmd](/github.com/blacktop/[email protected]/types#DylinkerCmd)
Name [string](/builtin#string)
}
```
A Dylinker represents a Mach-O LC_ID_DYLINKER, LC_LOAD_DYLINKER or LC_DYLD_ENVIRONMENT load command.
####
func (*Dylinker) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2225) [¶](#Dylinker.LoadSize)
added in v1.1.117
```
func (d *[Dylinker](#Dylinker)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Dylinker) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2252) [¶](#Dylinker.MarshalJSON)
added in v1.1.117
```
func (d *[Dylinker](#Dylinker)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Dylinker) [Put](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2228) [¶](#Dylinker.Put)
added in v1.1.117
```
func (d *[Dylinker](#Dylinker)) Put(b [][byte](/builtin#byte), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [int](/builtin#int)
```
####
func (*Dylinker) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2249) [¶](#Dylinker.String)
added in v1.1.117
```
func (d *[Dylinker](#Dylinker)) String() [string](/builtin#string)
```
####
func (*Dylinker) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2234) [¶](#Dylinker.Write)
added in v1.1.117
```
func (d *[Dylinker](#Dylinker)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [DylinkerID](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L953) [¶](#DylinkerID)
added in v1.0.26
```
type DylinkerID struct {
[Dylinker](#Dylinker)
}
```
DylinkerID represents a Mach-O LC_ID_DYLINKER command.
####
type [Dysymtab](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L807) [¶](#Dysymtab)
```
type Dysymtab struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[DysymtabCmd](/github.com/blacktop/[email protected]/types#DysymtabCmd)
IndirectSyms [][uint32](/builtin#uint32) // indices into Symtab.Syms
}
```
A Dysymtab represents a Mach-O LC_DYSYMTAB command.
####
func (*Dysymtab) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L813) [¶](#Dysymtab.LoadSize)
added in v1.1.117
```
func (d *[Dysymtab](#Dysymtab)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Dysymtab) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L875) [¶](#Dysymtab.MarshalJSON)
added in v1.1.117
```
func (d *[Dysymtab](#Dysymtab)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Dysymtab) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L822) [¶](#Dysymtab.String)
added in v1.0.8
```
func (d *[Dysymtab](#Dysymtab)) String() [string](/builtin#string)
```
####
func (*Dysymtab) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L816) [¶](#Dysymtab.Write)
added in v1.1.33
```
func (d *[Dysymtab](#Dysymtab)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [EncryptionInfo](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1527) [¶](#EncryptionInfo)
added in v1.0.26
```
type EncryptionInfo struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[EncryptionInfoCmd](/github.com/blacktop/[email protected]/types#EncryptionInfoCmd)
}
```
An EncryptionInfo represents Mach-O 32-bit encrypted segment information.
####
func (*EncryptionInfo) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1532) [¶](#EncryptionInfo.LoadSize)
added in v1.0.26
```
func (e *[EncryptionInfo](#EncryptionInfo)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*EncryptionInfo) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1556) [¶](#EncryptionInfo.MarshalJSON)
added in v1.1.117
```
func (l *[EncryptionInfo](#EncryptionInfo)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*EncryptionInfo) [Put](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1535) [¶](#EncryptionInfo.Put)
added in v1.0.26
```
func (e *[EncryptionInfo](#EncryptionInfo)) Put(b [][byte](/builtin#byte), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [int](/builtin#int)
```
####
func (*EncryptionInfo) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1550) [¶](#EncryptionInfo.String)
added in v1.0.26
```
func (e *[EncryptionInfo](#EncryptionInfo)) String() [string](/builtin#string)
```
####
func (*EncryptionInfo) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1544) [¶](#EncryptionInfo.Write)
added in v1.1.33
```
func (l *[EncryptionInfo](#EncryptionInfo)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [EncryptionInfo64](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1865) [¶](#EncryptionInfo64)
added in v1.0.6
```
type EncryptionInfo64 struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[EncryptionInfo64Cmd](/github.com/blacktop/[email protected]/types#EncryptionInfo64Cmd)
}
```
An EncryptionInfo64 represents Mach-O 64-bit encrypted segment information.
####
func (*EncryptionInfo64) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1870) [¶](#EncryptionInfo64.LoadSize)
added in v1.0.6
```
func (e *[EncryptionInfo64](#EncryptionInfo64)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*EncryptionInfo64) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1895) [¶](#EncryptionInfo64.MarshalJSON)
added in v1.1.117
```
func (e *[EncryptionInfo64](#EncryptionInfo64)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*EncryptionInfo64) [Put](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1873) [¶](#EncryptionInfo64.Put)
added in v1.0.6
```
func (e *[EncryptionInfo64](#EncryptionInfo64)) Put(b [][byte](/builtin#byte), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [int](/builtin#int)
```
####
func (*EncryptionInfo64) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1889) [¶](#EncryptionInfo64.String)
added in v1.0.6
```
func (e *[EncryptionInfo64](#EncryptionInfo64)) String() [string](/builtin#string)
```
####
func (*EncryptionInfo64) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1883) [¶](#EncryptionInfo64.Write)
added in v1.1.33
```
func (e *[EncryptionInfo64](#EncryptionInfo64)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [EntryPoint](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1711) [¶](#EntryPoint)
added in v1.0.4
```
type EntryPoint struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[EntryPointCmd](/github.com/blacktop/[email protected]/types#EntryPointCmd)
}
```
EntryPoint represents a Mach-O LC_MAIN command.
####
func (*EntryPoint) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1716) [¶](#EntryPoint.LoadSize)
added in v1.0.4
```
func (e *[EntryPoint](#EntryPoint)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*EntryPoint) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1735) [¶](#EntryPoint.MarshalJSON)
added in v1.1.117
```
func (e *[EntryPoint](#EntryPoint)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*EntryPoint) [Put](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1719) [¶](#EntryPoint.Put)
added in v1.0.4
```
func (e *[EntryPoint](#EntryPoint)) Put(b [][byte](/builtin#byte), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [int](/builtin#int)
```
####
func (*EntryPoint) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1732) [¶](#EntryPoint.String)
added in v1.0.4
```
func (e *[EntryPoint](#EntryPoint)) String() [string](/builtin#string)
```
####
func (*EntryPoint) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1726) [¶](#EntryPoint.Write)
added in v1.1.33
```
func (e *[EntryPoint](#EntryPoint)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [FatArch](https://github.com/blacktop/go-macho/blob/v1.1.165/fat.go#L46) [¶](#FatArch)
```
type FatArch struct {
[FatArchHeader](#FatArchHeader)
*[File](#File)
// contains filtered or unexported fields
}
```
A FatArch is a Mach-O File inside a FatFile.
####
type [FatArchHeader](https://github.com/blacktop/go-macho/blob/v1.1.165/fat.go#L35) [¶](#FatArchHeader)
```
type FatArchHeader struct {
CPU [types](/github.com/blacktop/[email protected]/types).[CPU](/github.com/blacktop/[email protected]/types#CPU)
SubCPU [types](/github.com/blacktop/[email protected]/types).[CPUSubtype](/github.com/blacktop/[email protected]/types#CPUSubtype)
Offset [uint32](/builtin#uint32)
Size [uint32](/builtin#uint32)
Align [uint32](/builtin#uint32)
}
```
A FatArchHeader represents a fat header for a specific image architecture.
####
type [FatFile](https://github.com/blacktop/go-macho/blob/v1.1.165/fat.go#L23) [¶](#FatFile)
```
type FatFile struct {
[FatHeader](#FatHeader)
// contains filtered or unexported fields
}
```
A FatFile is a Mach-O universal binary that contains at least one architecture.
####
func [CreateFat](https://github.com/blacktop/go-macho/blob/v1.1.165/fat.go#L153) [¶](#CreateFat)
added in v1.1.123
```
func CreateFat(name [string](/builtin#string), files ...[string](/builtin#string)) (*[FatFile](#FatFile), [error](/builtin#error))
```
####
func [NewFatFile](https://github.com/blacktop/go-macho/blob/v1.1.165/fat.go#L59) [¶](#NewFatFile)
```
func NewFatFile(r [io](/io).[ReaderAt](/io#ReaderAt)) (*[FatFile](#FatFile), [error](/builtin#error))
```
NewFatFile creates a new FatFile for accessing all the Mach-O images in a universal binary. The Mach-O binary is expected to start at position 0 in the ReaderAt.
####
func [OpenFat](https://github.com/blacktop/go-macho/blob/v1.1.165/fat.go#L139) [¶](#OpenFat)
```
func OpenFat(name [string](/builtin#string)) (*[FatFile](#FatFile), [error](/builtin#error))
```
OpenFat opens the named file using os.Open and prepares it for use as a Mach-O universal binary.
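A minimal usage sketch, not taken from the package itself: it assumes the module is imported as `macho`, that OpenFat returns the ErrNotFat sentinel unwrapped when handed a thin binary (as the standard debug/macho does), and that the path is a placeholder for a real file on disk.
```
package main
import (
	"errors"
	"fmt"
	"log"
	macho "github.com/blacktop/go-macho"
)
func main() {
	path := "/path/to/universal.dylib" // hypothetical input
	ff, err := macho.OpenFat(path)
	if errors.Is(err, macho.ErrNotFat) {
		// Not a universal binary; fall back to the thin Mach-O reader.
		f, err := macho.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		fmt.Println("thin Mach-O")
		return
	} else if err != nil {
		log.Fatal(err)
	}
	defer ff.Close()
	// FatHeader is embedded in FatFile, so its fields are promoted: each
	// FatArch carries the FatArchHeader plus the per-architecture *File.
	for _, arch := range ff.Arches {
		fmt.Printf("cpu=%v subcpu=%v offset=%#x size=%#x\n",
			arch.CPU, arch.SubCPU, arch.Offset, arch.Size)
	}
}
```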
####
func (*FatFile) [Close](https://github.com/blacktop/go-macho/blob/v1.1.165/fat.go#L234) [¶](#FatFile.Close)
```
func (ff *[FatFile](#FatFile)) Close() [error](/builtin#error)
```
Close closes the Mach-O fat file.
####
type [FatHeader](https://github.com/blacktop/go-macho/blob/v1.1.165/fat.go#L28) [¶](#FatHeader)
added in v1.1.123
```
type FatHeader struct {
Magic [types](/github.com/blacktop/[email protected]/types).[Magic](/github.com/blacktop/[email protected]/types#Magic)
Count [uint32](/builtin#uint32)
Arches [][FatArch](#FatArch)
}
```
####
type [File](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L29) [¶](#File)
```
type File struct {
[FileTOC](#FileTOC)
Symtab *[Symtab](#Symtab)
Dysymtab *[Dysymtab](#Dysymtab)
// contains filtered or unexported fields
}
```
A File represents an open Mach-O file.
####
func [NewFile](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L125) [¶](#NewFile)
```
func NewFile(r [io](/io).[ReaderAt](/io#ReaderAt), config ...[FileConfig](#FileConfig)) (*[File](#File), [error](/builtin#error))
```
NewFile creates a new File for accessing a Mach-O binary in an underlying reader.
The Mach-O binary is expected to start at position 0 in the ReaderAt.
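A short sketch of parsing a Mach-O that is already in memory rather than on disk; `data` is assumed to hold raw Mach-O bytes obtained elsewhere, and the `macho` import alias is an assumption.
```
// bytes.Reader satisfies io.ReaderAt, so an in-memory image can be parsed
// directly. An optional FileConfig can supply a non-zero Offset or reader
// overrides when the image does not start at position 0.
f, err := macho.NewFile(bytes.NewReader(data))
if err != nil {
	log.Fatal(err)
}
defer f.Close()
```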
####
func [Open](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L97) [¶](#Open)
```
func Open(name [string](/builtin#string)) (*[File](#File), [error](/builtin#error))
```
Open opens the named file using os.Open and prepares it for use as a Mach-O binary.
####
func (*File) [BuildVersion](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1711) [¶](#File.BuildVersion)
```
func (f *[File](#File)) BuildVersion() *[BuildVersion](#BuildVersion)
```
BuildVersion returns the build version load command, or nil if no build version exists.
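For instance (a sketch; `f` is an already-opened *File, as in the Open/OpenFat sketches above):
```
if bv := f.BuildVersion(); bv != nil {
	fmt.Println(bv) // BuildVersion implements String
}
```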
####
func (*File) [Close](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L114) [¶](#File.Close)
```
func (f *[File](#File)) Close() [error](/builtin#error)
```
Close closes the File.
If the File was created using NewFile directly instead of Open,
Close has no effect.
####
func (*File) [CodeSign](https://github.com/blacktop/go-macho/blob/v1.1.165/export.go#L237) [¶](#File.CodeSign)
added in v1.1.119
```
func (f *[File](#File)) CodeSign(config *[codesign](/github.com/blacktop/[email protected]/pkg/codesign).[Config](/github.com/blacktop/[email protected]/pkg/codesign#Config)) [error](/builtin#error)
```
####
func (*File) [CodeSignature](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1868) [¶](#File.CodeSignature)
added in v1.0.12
```
func (f *[File](#File)) CodeSignature() *[CodeSignature](#CodeSignature)
```
CodeSignature returns the code signature, or nil if none exists.
####
func (*File) [DWARF](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2135) [¶](#File.DWARF)
```
func (f *[File](#File)) DWARF() (*dwarf.Data, [error](/builtin#error))
```
DWARF returns the DWARF debug information for the Mach-O file.
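A sketch of walking the debug info, assuming the returned *dwarf.Data exposes the same Reader/Next API as the standard library's debug/dwarf package, and that `f` is an opened *File built from a binary that actually carries DWARF:
```
d, err := f.DWARF()
if err != nil {
	log.Fatal(err)
}
r := d.Reader()
for {
	entry, err := r.Next()
	if err != nil {
		log.Fatal(err)
	}
	if entry == nil {
		break // end of the DWARF data
	}
	fmt.Println(entry.Tag)
}
```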
####
func (*File) [DataInCode](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1753) [¶](#File.DataInCode)
added in v1.1.59
```
func (f *[File](#File)) DataInCode() *[DataInCode](#DataInCode)
```
DataInCode returns the LC_DATA_IN_CODE, or nil if none exists.
####
func (*File) [DyldChainedFixups](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1946) [¶](#File.DyldChainedFixups)
added in v1.0.21
```
func (f *[File](#File)) DyldChainedFixups() (*[fixupchains](/github.com/blacktop/[email protected]/pkg/fixupchains).[DyldChainedFixups](/github.com/blacktop/[email protected]/pkg/fixupchains#DyldChainedFixups), [error](/builtin#error))
```
DyldChainedFixups returns the dyld chained fixups.
####
func (*File) [DyldExports](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1912) [¶](#File.DyldExports)
added in v1.1.8
```
func (f *[File](#File)) DyldExports() ([][trie](/github.com/blacktop/[email protected]/pkg/trie).[TrieExport](/github.com/blacktop/[email protected]/pkg/trie#TrieExport), [error](/builtin#error))
```
DyldExports returns the dyld export trie symbols
####
func (*File) [DyldExportsTrie](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1878) [¶](#File.DyldExportsTrie)
added in v1.0.23
```
func (f *[File](#File)) DyldExportsTrie() *[DyldExportsTrie](#DyldExportsTrie)
```
DyldExportsTrie returns the dyld exports trie load command, or nil if none exists.
####
func (*File) [DyldInfo](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1681) [¶](#File.DyldInfo)
```
func (f *[File](#File)) DyldInfo() *[DyldInfo](#DyldInfo)
```
DyldInfo returns the dyld info load command, or nil if no dyld info exists.
####
func (*File) [DyldInfoOnly](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1691) [¶](#File.DyldInfoOnly)
added in v1.1.63
```
func (f *[File](#File)) DyldInfoOnly() *[DyldInfoOnly](#DyldInfoOnly)
```
DyldInfoOnly returns the dyld info only load command, or nil if none exists.
####
func (*File) [DylibID](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1671) [¶](#File.DylibID)
```
func (f *[File](#File)) DylibID() *[IDDylib](#IDDylib)
```
DylibID returns the dylib ID load command, or nil if no dylib ID exists.
####
func (*File) [Export](https://github.com/blacktop/go-macho/blob/v1.1.165/export.go#L75) [¶](#File.Export)
added in v1.1.33
```
func (f *[File](#File)) Export(path [string](/builtin#string), dcf *[fixupchains](/github.com/blacktop/[email protected]/pkg/fixupchains).[DyldChainedFixups](/github.com/blacktop/[email protected]/pkg/fixupchains#DyldChainedFixups), baseAddress [uint64](/builtin#uint64), locals [][Symbol](#Symbol)) (err [error](/builtin#error))
```
Export exports an in-memory or cached dylib/kext MachO to a file.
####
func (*File) [FileSets](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1721) [¶](#File.FileSets)
added in v1.1.9
```
func (f *[File](#File)) FileSets() []*[FilesetEntry](#FilesetEntry)
```
FileSets returns an array of Fileset entries.
####
func (*File) [FindAddressSymbols](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2772) [¶](#File.FindAddressSymbols)
added in v1.0.24
```
func (f *[File](#File)) FindAddressSymbols(addr [uint64](/builtin#uint64)) ([][Symbol](#Symbol), [error](/builtin#error))
```
####
func (*File) [FindSectionForVMAddr](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1651) [¶](#File.FindSectionForVMAddr)
added in v1.0.12
```
func (f *[File](#File)) FindSectionForVMAddr(vmAddr [uint64](/builtin#uint64)) *[types](/github.com/blacktop/[email protected]/types).[Section](/github.com/blacktop/[email protected]/types#Section)
```
FindSectionForVMAddr returns the section containing a given virtual memory address.
####
func (*File) [FindSegmentForVMAddr](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1641) [¶](#File.FindSegmentForVMAddr)
added in v1.1.21
```
func (f *[File](#File)) FindSegmentForVMAddr(vmAddr [uint64](/builtin#uint64)) *[Segment](#Segment)
```
FindSegmentForVMAddr returns the segment containing a given virtual memory address.
####
func (*File) [FindSymbolAddress](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2751) [¶](#File.FindSymbolAddress)
```
func (f *[File](#File)) FindSymbolAddress(symbol [string](/builtin#string)) ([uint64](/builtin#uint64), [error](/builtin#error))
```
####
func (*File) [ForEachV2SplitSegReference](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1986) [¶](#File.ForEachV2SplitSegReference)
added in v1.1.113
```
func (f *[File](#File)) ForEachV2SplitSegReference(handler func(fromSectionIndex, fromSectionOffset, toSectionIndex, toSectionOffset [uint64](/builtin#uint64), kind [types](/github.com/blacktop/[email protected]/types).[SplitInfoKind](/github.com/blacktop/[email protected]/types#SplitInfoKind))) [error](/builtin#error)
```
####
func (*File) [FunctionStarts](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1763) [¶](#File.FunctionStarts)
added in v1.0.5
```
func (f *[File](#File)) FunctionStarts() *[FunctionStarts](#FunctionStarts)
```
FunctionStarts returns the function starts array, or nil if none exists.
####
func (*File) [GetBaseAddress](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1439) [¶](#File.GetBaseAddress)
added in v1.0.24
```
func (f *[File](#File)) GetBaseAddress() [uint64](/builtin#uint64)
```
GetBaseAddress returns the MachO's preferred load address
####
func (*File) [GetBindInfo](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2253) [¶](#File.GetBindInfo)
added in v1.1.77
```
func (f *[File](#File)) GetBindInfo() ([types](/github.com/blacktop/[email protected]/types).[Binds](/github.com/blacktop/[email protected]/types#Binds), [error](/builtin#error))
```
####
func (*File) [GetBindName](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1499) [¶](#File.GetBindName)
added in v1.1.3
```
func (f *[File](#File)) GetBindName(pointer [uint64](/builtin#uint64)) ([string](/builtin#string), [error](/builtin#error))
```
GetBindName returns the import name for a given dyld chained pointer
####
func (*File) [GetCFStrings](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1452) [¶](#File.GetCFStrings)
added in v1.1.1
```
func (f *[File](#File)) GetCFStrings() ([][objc](/github.com/blacktop/[email protected]/types/objc).[CFString](/github.com/blacktop/[email protected]/types/objc#CFString), [error](/builtin#error))
```
GetCFStrings returns the Objective-C CFStrings
####
func (*File) [GetCString](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1530) [¶](#File.GetCString)
added in v1.0.5
```
func (f *[File](#File)) GetCString(addr [uint64](/builtin#uint64)) ([string](/builtin#string), [error](/builtin#error))
```
GetCString returns a c-string at a given virtual address in the MachO
####
func (*File) [GetCStringAtOffset](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1548) [¶](#File.GetCStringAtOffset)
added in v1.0.32
```
func (f *[File](#File)) GetCStringAtOffset(strOffset [int64](/builtin#int64)) ([string](/builtin#string), [error](/builtin#error))
```
GetCStringAtOffset returns a c-string at a given offset into the MachO
####
func (*File) [GetDyldExport](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1888) [¶](#File.GetDyldExport)
added in v1.1.85
```
func (f *[File](#File)) GetDyldExport(symbol [string](/builtin#string)) (*[trie](/github.com/blacktop/[email protected]/pkg/trie).[TrieExport](/github.com/blacktop/[email protected]/pkg/trie#TrieExport), [error](/builtin#error))
```
GetDyldExport returns the dyld export trie entry for a given symbol.
####
func (*File) [GetEmbeddedInfoPlist](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2071) [¶](#File.GetEmbeddedInfoPlist)
added in v1.1.120
```
func (f *[File](#File)) GetEmbeddedInfoPlist() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*File) [GetEmbeddedLLVMBitcode](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2096) [¶](#File.GetEmbeddedLLVMBitcode)
added in v1.1.130
```
func (f *[File](#File)) GetEmbeddedLLVMBitcode() (*[xar](/github.com/blacktop/[email protected]/pkg/xar).[Reader](/github.com/blacktop/[email protected]/pkg/xar#Reader), [error](/builtin#error))
```
####
func (*File) [GetExports](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2355) [¶](#File.GetExports)
added in v1.1.77
```
func (f *[File](#File)) GetExports() ([][trie](/github.com/blacktop/[email protected]/pkg/trie).[TrieExport](/github.com/blacktop/[email protected]/pkg/trie#TrieExport), [error](/builtin#error))
```
####
func (*File) [GetFileSetFileByName](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1732) [¶](#File.GetFileSetFileByName)
added in v1.1.9
```
func (f *[File](#File)) GetFileSetFileByName(name [string](/builtin#string)) (*[File](#File), [error](/builtin#error))
```
GetFileSetFileByName returns the Fileset MachO for a given name.
####
func (*File) [GetFunctionData](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1859) [¶](#File.GetFunctionData)
added in v1.1.31
```
func (f *[File](#File)) GetFunctionData(fn [types](/github.com/blacktop/[email protected]/types).[Function](/github.com/blacktop/[email protected]/types#Function)) ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*File) [GetFunctionForVMAddr](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1839) [¶](#File.GetFunctionForVMAddr)
added in v1.1.21
```
func (f *[File](#File)) GetFunctionForVMAddr(addr [uint64](/builtin#uint64)) ([types](/github.com/blacktop/[email protected]/types).[Function](/github.com/blacktop/[email protected]/types#Function), [error](/builtin#error))
```
GetFunctionForVMAddr returns the function containing a given virtual address.
####
func (*File) [GetFunctions](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1773) [¶](#File.GetFunctions)
added in v1.1.21
```
func (f *[File](#File)) GetFunctions(data ...[byte](/builtin#byte)) [][types](/github.com/blacktop/[email protected]/types).[Function](/github.com/blacktop/[email protected]/types#Function)
```
GetFunctions returns the function array, or nil if none exists.
####
func (*File) [GetFunctionsForRange](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1849) [¶](#File.GetFunctionsForRange)
added in v1.1.101
```
func (f *[File](#File)) GetFunctionsForRange(start, end [uint64](/builtin#uint64)) ([][types](/github.com/blacktop/[email protected]/types).[Function](/github.com/blacktop/[email protected]/types#Function), [error](/builtin#error))
```
GetFunctionsForRange returns the functions contained in a given virtual address range.
####
func (*File) [GetLoadsByName](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1581) [¶](#File.GetLoadsByName)
added in v1.1.118
```
func (f *[File](#File)) GetLoadsByName(name [string](/builtin#string)) [][Load](#Load)
```
####
func (*File) [GetObjC](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L27) [¶](#File.GetObjC)
added in v1.1.107
```
func (f *[File](#File)) GetObjC(addr [uint64](/builtin#uint64)) ([any](/builtin#any), [bool](/builtin#bool))
```
####
func (*File) [GetObjCCategories](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L678) [¶](#File.GetObjCCategories)
added in v1.0.12
```
func (f *[File](#File)) GetObjCCategories() ([][objc](/github.com/blacktop/[email protected]/types/objc).[Category](/github.com/blacktop/[email protected]/types/objc#Category), [error](/builtin#error))
```
GetObjCCategories returns an array of Objective-C categories by parsing the __objc_catlist data
####
func (*File) [GetObjCClass](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L364) [¶](#File.GetObjCClass)
added in v1.0.12
```
func (f *[File](#File)) GetObjCClass(vmaddr [uint64](/builtin#uint64)) (*[objc](/github.com/blacktop/[email protected]/types/objc).[Class](/github.com/blacktop/[email protected]/types/objc#Class), [error](/builtin#error))
```
GetObjCClass parses an Objective-C class at a given virtual memory address
####
func (*File) [GetObjCClass2](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L521) [¶](#File.GetObjCClass2)
added in v1.1.107
```
func (f *[File](#File)) GetObjCClass2(vmaddr [uint64](/builtin#uint64)) (*[objc](/github.com/blacktop/[email protected]/types/objc).[Class](/github.com/blacktop/[email protected]/types/objc#Class), [error](/builtin#error))
```
GetObjCClass2 parses an Objective-C class at a given virtual memory address. (TODO: retire the older GetObjCClass.)
####
func (*File) [GetObjCClassInfo](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L155) [¶](#File.GetObjCClassInfo)
added in v1.0.12
```
func (f *[File](#File)) GetObjCClassInfo(vmaddr [uint64](/builtin#uint64)) (*[objc](/github.com/blacktop/[email protected]/types/objc).[ClassRO64](/github.com/blacktop/[email protected]/types/objc#ClassRO64), [error](/builtin#error))
```
GetObjCClassInfo returns the ClassRO64 (class_ro_t) for a given virtual memory address
####
func (*File) [GetObjCClassNames](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L179) [¶](#File.GetObjCClassNames)
added in v1.1.75
```
func (f *[File](#File)) GetObjCClassNames() (map[[uint64](/builtin#uint64)][string](/builtin#string), [error](/builtin#error))
```
GetObjCClassNames returns a map of section data virtual memory address to their class names
####
func (*File) [GetObjCClassReferences](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1281) [¶](#File.GetObjCClassReferences)
added in v1.1.29
```
func (f *[File](#File)) GetObjCClassReferences() (map[[uint64](/builtin#uint64)]*[objc](/github.com/blacktop/[email protected]/types/objc).[Class](/github.com/blacktop/[email protected]/types/objc#Class), [error](/builtin#error))
```
GetObjCClassReferences returns a map of class reference virtual memory addresses to their classes.
####
func (*File) [GetObjCClasses](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L247) [¶](#File.GetObjCClasses)
added in v1.0.12
```
func (f *[File](#File)) GetObjCClasses() ([][objc](/github.com/blacktop/[email protected]/types/objc).[Class](/github.com/blacktop/[email protected]/types/objc#Class), [error](/builtin#error))
```
GetObjCClasses returns an array of Objective-C classes
####
func (*File) [GetObjCImageInfo](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L127) [¶](#File.GetObjCImageInfo)
added in v1.0.12
```
func (f *[File](#File)) GetObjCImageInfo() (*[objc](/github.com/blacktop/[email protected]/types/objc).[ImageInfo](/github.com/blacktop/[email protected]/types/objc#ImageInfo), [error](/builtin#error))
```
GetObjCImageInfo returns the parsed __objc_imageinfo data
####
func (*File) [GetObjCIntegerObjects](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1509) [¶](#File.GetObjCIntegerObjects)
added in v1.1.109
```
func (f *[File](#File)) GetObjCIntegerObjects() (map[[uint64](/builtin#uint64)]*[objc](/github.com/blacktop/[email protected]/types/objc).[IntObj](/github.com/blacktop/[email protected]/types/objc#IntObj), [error](/builtin#error))
```
GetObjCIntegerObjects parses the __objc_intobj section and returns a map of virtual memory addresses to Objective-C integer objects.
####
func (*File) [GetObjCIvars](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1187) [¶](#File.GetObjCIvars)
added in v1.0.12
```
func (f *[File](#File)) GetObjCIvars(vmaddr [uint64](/builtin#uint64)) ([][objc](/github.com/blacktop/[email protected]/types/objc).[Ivar](/github.com/blacktop/[email protected]/types/objc#Ivar), [error](/builtin#error))
```
GetObjCIvars returns the Objective-C instance variables
####
func (*File) [GetObjCMethodLists](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1036) [¶](#File.GetObjCMethodLists)
added in v1.1.107
```
func (f *[File](#File)) GetObjCMethodLists() ([][objc](/github.com/blacktop/[email protected]/types/objc).[Method](/github.com/blacktop/[email protected]/types/objc#Method), [error](/builtin#error))
```
GetObjCMethodLists parses the method lists in the __objc_methlist section
####
func (*File) [GetObjCMethodNames](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L213) [¶](#File.GetObjCMethodNames)
added in v1.0.12
```
func (f *[File](#File)) GetObjCMethodNames() (map[[uint64](/builtin#uint64)][string](/builtin#string), [error](/builtin#error))
```
GetObjCMethodNames returns a map of section data virtual memory addresses to their method names
####
func (*File) [GetObjCMethods](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1017) [¶](#File.GetObjCMethods)
added in v1.0.12
```
func (f *[File](#File)) GetObjCMethods(vmaddr [uint64](/builtin#uint64)) ([][objc](/github.com/blacktop/[email protected]/types/objc).[Method](/github.com/blacktop/[email protected]/types/objc#Method), [error](/builtin#error))
```
####
func (*File) [GetObjCNonLazyCategories](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L779) [¶](#File.GetObjCNonLazyCategories)
added in v1.1.107
```
func (f *[File](#File)) GetObjCNonLazyCategories() ([][objc](/github.com/blacktop/[email protected]/types/objc).[Category](/github.com/blacktop/[email protected]/types/objc#Category), [error](/builtin#error))
```
GetObjCNonLazyCategories returns an array of Objective-C categories that implement +load.
####
func (*File) [GetObjCNonLazyClasses](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L296) [¶](#File.GetObjCNonLazyClasses)
added in v1.1.75
```
func (f *[File](#File)) GetObjCNonLazyClasses() ([][objc](/github.com/blacktop/[email protected]/types/objc).[Class](/github.com/blacktop/[email protected]/types/objc#Class), [error](/builtin#error))
```
GetObjCNonLazyClasses returns an array of Objective-C classes that implement +load
####
func (*File) [GetObjCProperties](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1240) [¶](#File.GetObjCProperties)
added in v1.0.12
```
func (f *[File](#File)) GetObjCProperties(vmaddr [uint64](/builtin#uint64)) ([][objc](/github.com/blacktop/[email protected]/types/objc).[Property](/github.com/blacktop/[email protected]/types/objc#Property), [error](/builtin#error))
```
GetObjCProperties returns the Objective-C properties
####
func (*File) [GetObjCProtoReferences](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1376) [¶](#File.GetObjCProtoReferences)
added in v1.1.29
```
func (f *[File](#File)) GetObjCProtoReferences() (map[[uint64](/builtin#uint64)]*[objc](/github.com/blacktop/[email protected]/types/objc).[Protocol](/github.com/blacktop/[email protected]/types/objc#Protocol), [error](/builtin#error))
```
GetObjCProtoReferences returns a map of protocol reference virtual memory addresses to their protocols.
####
func (*File) [GetObjCProtocols](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L983) [¶](#File.GetObjCProtocols)
added in v1.0.12
```
func (f *[File](#File)) GetObjCProtocols() ([][objc](/github.com/blacktop/[email protected]/types/objc).[Protocol](/github.com/blacktop/[email protected]/types/objc#Protocol), [error](/builtin#error))
```
GetObjCProtocols returns the Objective-C protocols
####
func (*File) [GetObjCSelectorReferences](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1413) [¶](#File.GetObjCSelectorReferences)
added in v1.0.12
```
func (f *[File](#File)) GetObjCSelectorReferences() (map[[uint64](/builtin#uint64)]*[objc](/github.com/blacktop/[email protected]/types/objc).[Selector](/github.com/blacktop/[email protected]/types/objc#Selector), [error](/builtin#error))
```
GetObjCSelectorReferences returns a map of selector reference virtual memory addresses to their selectors.
####
func (*File) [GetObjCStubs](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1536) [¶](#File.GetObjCStubs)
added in v1.1.107
```
func (f *[File](#File)) GetObjCStubs(parse func([uint64](/builtin#uint64), [][byte](/builtin#byte)) (map[[uint64](/builtin#uint64)]*[objc](/github.com/blacktop/[email protected]/types/objc).[Stub](/github.com/blacktop/[email protected]/types/objc#Stub), [error](/builtin#error))) (map[[uint64](/builtin#uint64)]*[objc](/github.com/blacktop/[email protected]/types/objc).[Stub](/github.com/blacktop/[email protected]/types/objc#Stub), [error](/builtin#error))
```
GetObjCStubs returns the Objective-C stubs
####
func (*File) [GetObjCSuperReferences](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L1329) [¶](#File.GetObjCSuperReferences)
added in v1.1.29
```
func (f *[File](#File)) GetObjCSuperReferences() (map[[uint64](/builtin#uint64)]*[objc](/github.com/blacktop/[email protected]/types/objc).[Class](/github.com/blacktop/[email protected]/types/objc#Class), [error](/builtin#error))
```
GetObjCSuperReferences returns a map of superclass reference virtual memory addresses to their classes.
####
func (*File) [GetObjCToc](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L88) [¶](#File.GetObjCToc)
added in v1.1.68
```
func (f *[File](#File)) GetObjCToc() [objc](/github.com/blacktop/[email protected]/types/objc).[Toc](/github.com/blacktop/[email protected]/types/objc#Toc)
```
GetObjCToc returns a table of contents of the ObjC objects in the MachO
####
func (*File) [GetOffset](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1411) [¶](#File.GetOffset)
added in v1.0.4
```
func (f *[File](#File)) GetOffset(address [uint64](/builtin#uint64)) ([uint64](/builtin#uint64), [error](/builtin#error))
```
GetOffset returns the file offset for a given virtual address
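For example (a sketch; `f` is an opened *File and the address is hypothetical), GetOffset and GetVMAddress (documented further below) convert in opposite directions:
```
off, err := f.GetOffset(0x100004000) // hypothetical virtual address
if err != nil {
	log.Fatal(err)
}
addr, err := f.GetVMAddress(off)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("offset=%#x addr=%#x\n", off, addr)
```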
####
func (*File) [GetPointer](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1444) [¶](#File.GetPointer)
added in v1.1.82
```
func (f *[File](#File)) GetPointer(offset [uint64](/builtin#uint64)) ([uint64](/builtin#uint64), [error](/builtin#error))
```
GetPointer returns pointer at a given offset
####
func (*File) [GetPointerAtAddress](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1456) [¶](#File.GetPointerAtAddress)
added in v1.1.82
```
func (f *[File](#File)) GetPointerAtAddress(address [uint64](/builtin#uint64)) ([uint64](/builtin#uint64), [error](/builtin#error))
```
GetPointerAtAddress returns pointer at a given virtual address
####
func (*File) [GetRebaseInfo](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2332) [¶](#File.GetRebaseInfo)
added in v1.1.77
```
func (f *[File](#File)) GetRebaseInfo() ([][types](/github.com/blacktop/[email protected]/types).[Rebase](/github.com/blacktop/[email protected]/types#Rebase), [error](/builtin#error))
```
####
func (*File) [GetSectionsForSegment](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1614) [¶](#File.GetSectionsForSegment)
added in v1.1.21
```
func (f *[File](#File)) GetSectionsForSegment(name [string](/builtin#string)) []*[types](/github.com/blacktop/[email protected]/types).[Section](/github.com/blacktop/[email protected]/types#Section)
```
GetSectionsForSegment returns all the segment's sections or nil if it doesn't have any
####
func (*File) [GetSwiftAccessibleFunctions](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L1096) [¶](#File.GetSwiftAccessibleFunctions)
added in v1.1.101
```
func (f *[File](#File)) GetSwiftAccessibleFunctions() (*[types](/github.com/blacktop/[email protected]/types/swift/types).[AccessibleFunctionsSection](/github.com/blacktop/[email protected]/types/swift/types#AccessibleFunctionsSection), [error](/builtin#error))
```
####
func (*File) [GetSwiftAssociatedTypes](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L622) [¶](#File.GetSwiftAssociatedTypes)
added in v1.0.33
```
func (f *[File](#File)) GetSwiftAssociatedTypes() ([][swift](/github.com/blacktop/[email protected]/types/swift).[AssociatedTypeDescriptor](/github.com/blacktop/[email protected]/types/swift#AssociatedTypeDescriptor), [error](/builtin#error))
```
GetSwiftAssociatedTypes parses all the associated types in the __TEXT.__swift5_assocty section
####
func (*File) [GetSwiftBuiltinTypes](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L876) [¶](#File.GetSwiftBuiltinTypes)
added in v1.0.33
```
func (f *[File](#File)) GetSwiftBuiltinTypes() ([][swift](/github.com/blacktop/[email protected]/types/swift).[BuiltinType](/github.com/blacktop/[email protected]/types/swift#BuiltinType), [error](/builtin#error))
```
GetSwiftBuiltinTypes parses all the built-in types in the __TEXT.__swift5_builtin section
####
func (*File) [GetSwiftClosures](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L923) [¶](#File.GetSwiftClosures)
added in v1.0.33
```
func (f *[File](#File)) GetSwiftClosures() ([][swift](/github.com/blacktop/[email protected]/types/swift).[CaptureDescriptor](/github.com/blacktop/[email protected]/types/swift#CaptureDescriptor), [error](/builtin#error))
```
GetSwiftClosures parses all the closure context objects in the __TEXT.__swift5_capture section
####
func (*File) [GetSwiftDynamicReplacementInfo](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L1034) [¶](#File.GetSwiftDynamicReplacementInfo)
added in v1.1.101
```
func (f *[File](#File)) GetSwiftDynamicReplacementInfo() (*[types](/github.com/blacktop/[email protected]/types/swift/types).[AutomaticDynamicReplacements](/github.com/blacktop/[email protected]/types/swift/types#AutomaticDynamicReplacements), [error](/builtin#error))
```
####
func (*File) [GetSwiftDynamicReplacementInfoForOpaqueTypes](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L1065) [¶](#File.GetSwiftDynamicReplacementInfoForOpaqueTypes)
added in v1.1.101
```
func (f *[File](#File)) GetSwiftDynamicReplacementInfoForOpaqueTypes() (*[types](/github.com/blacktop/[email protected]/types/swift/types).[AutomaticDynamicReplacementsSome](/github.com/blacktop/[email protected]/types/swift/types#AutomaticDynamicReplacementsSome), [error](/builtin#error))
```
####
func (*File) [GetSwiftEntry](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L1010) [¶](#File.GetSwiftEntry)
added in v1.1.100
```
func (f *[File](#File)) GetSwiftEntry() ([uint64](/builtin#uint64), [error](/builtin#error))
```
####
func (*File) [GetSwiftFields](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L505) [¶](#File.GetSwiftFields)
added in v1.0.33
```
func (f *[File](#File)) GetSwiftFields() ([]*[fields](/github.com/blacktop/[email protected]/types/swift/fields).[Field](/github.com/blacktop/[email protected]/types/swift/fields#Field), [error](/builtin#error))
```
GetSwiftFields parses all the fields in the __TEXT.__swift5_fields section
####
func (*File) [GetSwiftProtocolConformances](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L161) [¶](#File.GetSwiftProtocolConformances)
added in v1.0.33
```
func (f *[File](#File)) GetSwiftProtocolConformances() ([][types](/github.com/blacktop/[email protected]/types/swift/types).[ConformanceDescriptor](/github.com/blacktop/[email protected]/types/swift/types#ConformanceDescriptor), [error](/builtin#error))
```
GetSwiftProtocolConformances parses all the protocol conformances in the __TEXT.__swift5_proto section
####
func (*File) [GetSwiftProtocols](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L22) [¶](#File.GetSwiftProtocols)
added in v1.0.32
```
func (f *[File](#File)) GetSwiftProtocols() ([][types](/github.com/blacktop/[email protected]/types/swift/types).[Protocol](/github.com/blacktop/[email protected]/types/swift/types#Protocol), [error](/builtin#error))
```
GetSwiftProtocols parses all the protocols in the __TEXT.__swift5_protos section
####
func (*File) [GetSwiftReflectionStrings](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L1158) [¶](#File.GetSwiftReflectionStrings)
added in v1.1.100
```
func (f *[File](#File)) GetSwiftReflectionStrings() (map[[uint64](/builtin#uint64)][string](/builtin#string), [error](/builtin#error))
```
####
func (*File) [GetSwiftTypeRefs](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L1120) [¶](#File.GetSwiftTypeRefs)
added in v1.1.141
```
func (f *[File](#File)) GetSwiftTypeRefs() ([][string](/builtin#string), [error](/builtin#error))
```
####
func (*File) [GetSwiftTypes](https://github.com/blacktop/go-macho/blob/v1.1.165/swift.go#L268) [¶](#File.GetSwiftTypes)
added in v1.0.32
```
func (f *[File](#File)) GetSwiftTypes() ([]*[types](/github.com/blacktop/[email protected]/types/swift/types).[TypeDescriptor](/github.com/blacktop/[email protected]/types/swift/types#TypeDescriptor), [error](/builtin#error))
```
GetSwiftTypes parses all the types in the __TEXT.__swift5_types section
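A sketch (`f` is an opened *File); on binaries without Swift metadata the call is expected to fail, possibly with the ErrSwiftSectionError sentinel, though that mapping is an assumption:
```
typs, err := f.GetSwiftTypes()
if err != nil {
	log.Fatal(err)
}
fmt.Printf("parsed %d Swift type descriptors\n", len(typs))
```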
####
func (*File) [GetVMAddress](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1425) [¶](#File.GetVMAddress)
added in v1.0.4
```
func (f *[File](#File)) GetVMAddress(offset [uint64](/builtin#uint64)) ([uint64](/builtin#uint64), [error](/builtin#error))
```
GetVMAddress returns the virtual address for a given file offset.
####
func (*File) [HasFixups](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1936) [¶](#File.HasFixups)
added in v1.0.26
```
func (f *[File](#File)) HasFixups() [bool](/builtin#bool)
```
HasFixups reports whether the MachO contains an LC_DYLD_CHAINED_FIXUPS load command.
####
func (*File) [HasObjC](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L43) [¶](#File.HasObjC)
added in v1.0.12
```
func (f *[File](#File)) HasObjC() [bool](/builtin#bool)
```
HasObjC returns true if MachO contains a __objc_imageinfo section
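Typical usage pairs this check with the ObjC parsers, e.g. (a sketch; `f` is an opened *File):
```
if f.HasObjC() {
	classes, err := f.GetObjCClasses()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("parsed %d Objective-C classes\n", len(classes))
}
```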
####
func (*File) [HasObjCMessageReferences](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L76) [¶](#File.HasObjCMessageReferences)
added in v1.0.12
```
func (f *[File](#File)) HasObjCMessageReferences() [bool](/builtin#bool)
```
HasObjCMessageReferences returns true if MachO contains a __objc_msgrefs section
####
func (*File) [HasPlusLoadMethod](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L60) [¶](#File.HasPlusLoadMethod)
added in v1.0.12
```
func (f *[File](#File)) HasPlusLoadMethod() [bool](/builtin#bool)
```
HasPlusLoadMethod returns true if MachO contains a __objc_nlclslist or __objc_nlcatlist section
####
func (*File) [ImportedLibraries](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2707) [¶](#File.ImportedLibraries)
```
func (f *[File](#File)) ImportedLibraries() [][string](/builtin#string)
```
ImportedLibraries returns the paths of all libraries referred to by the binary f that are expected to be linked with the binary at dynamic link time.
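For example (a sketch; `f` is an opened *File), combined with ImportedSymbolNames documented below:
```
for _, lib := range f.ImportedLibraries() {
	fmt.Println(lib)
}
syms, err := f.ImportedSymbolNames()
if err != nil {
	log.Fatal(err)
}
fmt.Printf("%d imported symbols\n", len(syms))
```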
####
func (*File) [ImportedSymbolNames](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2689) [¶](#File.ImportedSymbolNames)
added in v1.0.24
```
func (f *[File](#File)) ImportedSymbolNames() ([][string](/builtin#string), [error](/builtin#error))
```
ImportedSymbolNames returns the names of all symbols referred to by the binary f that are expected to be satisfied by other libraries at dynamic load time.
####
func (*File) [ImportedSymbols](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2674) [¶](#File.ImportedSymbols)
```
func (f *[File](#File)) ImportedSymbols() ([][Symbol](#Symbol), [error](/builtin#error))
```
ImportedSymbols returns the names of all symbols referred to by the binary f that are expected to be satisfied by other libraries at dynamic load time.
####
func (*File) [IsCString](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1566) [¶](#File.IsCString)
added in v1.1.80
```
func (f *[File](#File)) IsCString(addr [uint64](/builtin#uint64)) ([string](/builtin#string), [bool](/builtin#bool))
```
IsCString returns the C string at a given virtual address if the address falls within a cstring literals section.
####
func (*File) [LibraryOrdinalName](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L2727) [¶](#File.LibraryOrdinalName)
added in v1.0.28
```
func (f *[File](#File)) LibraryOrdinalName(libraryOrdinal [int](/builtin#int)) [string](/builtin#string)
```
LibraryOrdinalName returns the name of the dependency library for a given library ordinal.
####
func (*File) [PutObjC](https://github.com/blacktop/go-macho/blob/v1.1.165/objc.go#L21) [¶](#File.PutObjC)
added in v1.1.107
```
func (f *[File](#File)) PutObjC(addr [uint64](/builtin#uint64), obj [any](/builtin#any))
```
####
func (*File) [ReadAt](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1406) [¶](#File.ReadAt)
added in v1.1.10
```
func (f *[File](#File)) ReadAt(p [][byte](/builtin#byte), off [int64](/builtin#int64)) (n [int](/builtin#int), err [error](/builtin#error))
```
ReadAt reads data at offset within MachO
####
func (*File) [Save](https://github.com/blacktop/go-macho/blob/v1.1.165/export.go#L356) [¶](#File.Save)
added in v1.1.118
```
func (f *[File](#File)) Save(outpath [string](/builtin#string)) [error](/builtin#error)
```
####
func (*File) [Section](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1631) [¶](#File.Section)
```
func (f *[File](#File)) Section(segment, section [string](/builtin#string)) *[types](/github.com/blacktop/[email protected]/types).[Section](/github.com/blacktop/[email protected]/types#Section)
```
Section returns the section with the given name in the given segment,
or nil if no such section exists.
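For example (a sketch; `f` is an opened *File, the standard __TEXT names are just illustrative, and Segment is documented next):
```
if seg := f.Segment("__TEXT"); seg != nil {
	fmt.Println("found __TEXT segment")
}
if sec := f.Section("__TEXT", "__text"); sec != nil {
	fmt.Println("found __TEXT.__text section")
}
```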
####
func (*File) [Segment](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1592) [¶](#File.Segment)
```
func (f *[File](#File)) Segment(name [string](/builtin#string)) *[Segment](#Segment)
```
Segment returns the first Segment with the given name, or nil if no such segment exists.
####
func (*File) [Segments](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1602) [¶](#File.Segments)
```
func (f *[File](#File)) Segments() [Segments](#Segments)
```
Segments returns all Segments.
####
func (*File) [SlidePointer](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1468) [¶](#File.SlidePointer)
added in v1.1.82
```
func (f *[File](#File)) SlidePointer(ptr [uint64](/builtin#uint64)) [uint64](/builtin#uint64)
```
SlidePointer returns the slid or un-chained pointer.
####
func (*File) [SourceVersion](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1701) [¶](#File.SourceVersion)
```
func (f *[File](#File)) SourceVersion() *[SourceVersion](#SourceVersion)
```
SourceVersion returns the source version load command, or nil if no source version exists.
####
func (*File) [UUID](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L1661) [¶](#File.UUID)
```
func (f *[File](#File)) UUID() *[UUID](#UUID)
```
UUID returns the UUID load command, or nil if no UUID exists.
####
type [FileConfig](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L86) [¶](#FileConfig)
added in v1.1.10
```
type FileConfig struct {
Offset [int64](/builtin#int64)
LoadIncluding [][types](/github.com/blacktop/[email protected]/types).[LoadCmd](/github.com/blacktop/[email protected]/types#LoadCmd)
LoadExcluding [][types](/github.com/blacktop/[email protected]/types).[LoadCmd](/github.com/blacktop/[email protected]/types#LoadCmd)
VMAddrConverter [types](/github.com/blacktop/[email protected]/types).[VMAddrConverter](/github.com/blacktop/[email protected]/types#VMAddrConverter)
SectionReader [types](/github.com/blacktop/[email protected]/types).[MachoReader](/github.com/blacktop/[email protected]/types#MachoReader)
CacheReader [types](/github.com/blacktop/[email protected]/types).[MachoReader](/github.com/blacktop/[email protected]/types#MachoReader)
RelativeSelectorBase [uint64](/builtin#uint64)
}
```
FileConfig is a MachO file config object
####
type [FileTOC](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L12) [¶](#FileTOC)
```
type FileTOC struct {
[types](/github.com/blacktop/[email protected]/types).[FileHeader](/github.com/blacktop/[email protected]/types#FileHeader)
ByteOrder [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)
Loads loads
Sections []*[types](/github.com/blacktop/[email protected]/types).[Section](/github.com/blacktop/[email protected]/types#Section)
// contains filtered or unexported fields
}
```
####
func (*FileTOC) [AddLoad](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L20) [¶](#FileTOC.AddLoad)
```
func (t *[FileTOC](#FileTOC)) AddLoad(l [Load](#Load)) [uint32](/builtin#uint32)
```
####
func (*FileTOC) [AddSection](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L58) [¶](#FileTOC.AddSection)
```
func (t *[FileTOC](#FileTOC)) AddSection(s *[types](/github.com/blacktop/[email protected]/types).[Section](/github.com/blacktop/[email protected]/types#Section))
```
AddSection adds the section to the most recently added Segment.
####
func (*FileTOC) [AddSegment](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L51) [¶](#FileTOC.AddSegment)
```
func (t *[FileTOC](#FileTOC)) AddSegment(s *[Segment](#Segment))
```
AddSegment adds segment s to the file table of contents,
and also zeroes out the segment information with the expectation that this will be added next.
####
func (*FileTOC) [DerivedCopy](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L75) [¶](#FileTOC.DerivedCopy)
```
func (t *[FileTOC](#FileTOC)) DerivedCopy(Type [types](/github.com/blacktop/[email protected]/types).[HeaderFileType](/github.com/blacktop/[email protected]/types#HeaderFileType), Flags [types](/github.com/blacktop/[email protected]/types).[HeaderFlag](/github.com/blacktop/[email protected]/types#HeaderFlag)) *[FileTOC](#FileTOC)
```
DerivedCopy returns a modified copy of the TOC, with empty loads and sections,
and with the specified header type and flags.
####
func (*FileTOC) [FileSize](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L128) [¶](#FileTOC.FileSize)
```
func (t *[FileTOC](#FileTOC)) FileSize() [uint64](/builtin#uint64)
```
FileSize returns the size in bytes of the header, load commands, and the in-file contents of all the segments and sections included in those load commands, accounting for their offsets within the file.
####
func (*FileTOC) [HdrSize](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L101) [¶](#FileTOC.HdrSize)
```
func (t *[FileTOC](#FileTOC)) HdrSize() [uint32](/builtin#uint32)
```
HdrSize returns the size in bytes of the Mach-O header for a given magic number (where the magic number has been appropriately byte-swapped).
####
func (*FileTOC) [LoadAlign](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L92) [¶](#FileTOC.LoadAlign)
```
func (t *[FileTOC](#FileTOC)) LoadAlign() [uint64](/builtin#uint64)
```
LoadAlign returns the required alignment of Load commands in a binary.
This is used to add padding for necessary alignment.
####
func (*FileTOC) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L116) [¶](#FileTOC.LoadSize)
```
func (t *[FileTOC](#FileTOC)) LoadSize() [uint32](/builtin#uint32)
```
LoadSize returns the size of all the load commands in a file's table of contents
(but not their associated data, e.g., sections and symbol tables).
####
func (*FileTOC) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L148) [¶](#FileTOC.MarshalJSON)
added in v1.1.117
```
func (t *[FileTOC](#FileTOC)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*FileTOC) [ModifySizeCommands](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L28) [¶](#FileTOC.ModifySizeCommands)
added in v1.1.118
```
func (t *[FileTOC](#FileTOC)) ModifySizeCommands(prev, curr [int32](/builtin#int32)) [int32](/builtin#int32)
```
####
func (*FileTOC) [Print](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L144) [¶](#FileTOC.Print)
added in v1.1.117
```
func (t *[FileTOC](#FileTOC)) Print(printer func(t *[FileTOC](#FileTOC)) [string](/builtin#string)) [string](/builtin#string)
```
####
func (*FileTOC) [RemoveLoad](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L33) [¶](#FileTOC.RemoveLoad)
added in v1.1.118
```
func (t *[FileTOC](#FileTOC)) RemoveLoad(l [Load](#Load)) [error](/builtin#error)
```
####
func (*FileTOC) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L140) [¶](#FileTOC.String)
```
func (t *[FileTOC](#FileTOC)) String() [string](/builtin#string)
```
####
func (*FileTOC) [TOCSize](https://github.com/blacktop/go-macho/blob/v1.1.165/toc.go#L86) [¶](#FileTOC.TOCSize)
```
func (t *[FileTOC](#FileTOC)) TOCSize() [uint32](/builtin#uint32)
```
TOCSize returns the size in bytes of the object file representation of the header and Load Commands (including Segments and Sections, but not their contents) at the beginning of a Mach-O file. This typically overlaps the text segment in the object file.
####
type [FilesetEntry](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2108) [¶](#FilesetEntry)
added in v1.0.9
```
type FilesetEntry struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[FilesetEntryCmd](/github.com/blacktop/[email protected]/types#FilesetEntryCmd)
EntryID [string](/builtin#string) // contained entry id
}
```
FilesetEntry represents a Mach-O LC_FILESET_ENTRY command (fileset_entry_command).
####
func (*FilesetEntry) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2114) [¶](#FilesetEntry.LoadSize)
added in v1.1.117
```
func (l *[FilesetEntry](#FilesetEntry)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*FilesetEntry) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2135) [¶](#FilesetEntry.MarshalJSON)
added in v1.1.117
```
func (l *[FilesetEntry](#FilesetEntry)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*FilesetEntry) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2132) [¶](#FilesetEntry.String)
added in v1.0.9
```
func (f *[FilesetEntry](#FilesetEntry)) String() [string](/builtin#string)
```
####
func (*FilesetEntry) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2117) [¶](#FilesetEntry.Write)
added in v1.1.33
```
func (l *[FilesetEntry](#FilesetEntry)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [FormatError](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L61) [¶](#FormatError)
```
type FormatError struct {
// contains filtered or unexported fields
}
```
FormatError is returned by some operations if the data does not have the correct format for an object file.
####
func (*FormatError) [Error](https://github.com/blacktop/go-macho/blob/v1.1.165/file.go#L67) [¶](#FormatError.Error)
```
func (e *[FormatError](#FormatError)) Error() [string](/builtin#string)
```
####
type [FunctionStarts](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1693) [¶](#FunctionStarts)
```
type FunctionStarts struct {
[LinkEditData](#LinkEditData)
}
```
A FunctionStarts represents a Mach-O function starts command.
####
type [FvmFile](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L729) [¶](#FvmFile)
added in v1.1.46
```
type FvmFile struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[FvmFileCmd](/github.com/blacktop/[email protected]/types#FvmFileCmd)
Name [string](/builtin#string)
}
```
A FvmFile represents a Mach-O LC_FVMFILE command.
####
func (*FvmFile) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L735) [¶](#FvmFile.LoadSize)
added in v1.1.117
```
func (l *[FvmFile](#FvmFile)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*FvmFile) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L756) [¶](#FvmFile.MarshalJSON)
added in v1.1.117
```
func (l *[FvmFile](#FvmFile)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*FvmFile) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L753) [¶](#FvmFile.String)
added in v1.1.46
```
func (l *[FvmFile](#FvmFile)) String() [string](/builtin#string)
```
####
func (*FvmFile) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L738) [¶](#FvmFile.Write)
added in v1.1.117
```
func (l *[FvmFile](#FvmFile)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [IDDylib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L935) [¶](#IDDylib)
added in v1.1.117
```
type IDDylib struct {
[Dylib](#Dylib)
}
```
An IDDylib represents a Mach-O LC_ID_DYLIB command.
####
type [IDFvmlib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L667) [¶](#IDFvmlib)
added in v1.1.46
```
type IDFvmlib struct {
[LoadFvmlib](#LoadFvmlib)
}
```
An IDFvmlib represents a Mach-O LC_IDFVMLIB command.
####
type [Ident](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L676) [¶](#Ident)
added in v1.1.46
```
type Ident struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[IdentCmd](/github.com/blacktop/[email protected]/types#IdentCmd)
StrTable [][string](/builtin#string)
}
```
An Ident represents a Mach-O LC_IDENT command.
####
func (*Ident) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L682) [¶](#Ident.LoadSize)
added in v1.1.117
```
func (i *[Ident](#Ident)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Ident) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L712) [¶](#Ident.MarshalJSON)
added in v1.1.117
```
func (i *[Ident](#Ident)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Ident) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L709) [¶](#Ident.String)
added in v1.1.46
```
func (i *[Ident](#Ident)) String() [string](/builtin#string)
```
####
func (*Ident) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L692) [¶](#Ident.Write)
added in v1.1.117
```
func (i *[Ident](#Ident)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [LazyLoadDylib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1518) [¶](#LazyLoadDylib)
added in v1.0.26
```
type LazyLoadDylib struct {
[Dylib](#Dylib)
}
```
A LazyLoadDylib represents a Mach-O LC_LAZY_LOAD_DYLIB command.
####
type [LinkEditData](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2299) [¶](#LinkEditData)
```
type LinkEditData struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[LinkEditDataCmd](/github.com/blacktop/[email protected]/types#LinkEditDataCmd)
}
```
A LinkEditData represents a Mach-O linkedit data command.
```
LC_CODE_SIGNATURE, LC_SEGMENT_SPLIT_INFO, LC_FUNCTION_STARTS, LC_DATA_IN_CODE,
LC_DYLIB_CODE_SIGN_DRS, LC_LINKER_OPTIMIZATION_HINT, LC_DYLD_EXPORTS_TRIE, or LC_DYLD_CHAINED_FIXUPS.
```
####
func (*LinkEditData) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2304) [¶](#LinkEditData.LoadSize)
added in v1.1.117
```
func (l *[LinkEditData](#LinkEditData)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*LinkEditData) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2316) [¶](#LinkEditData.MarshalJSON)
added in v1.1.117
```
func (l *[LinkEditData](#LinkEditData)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*LinkEditData) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2313) [¶](#LinkEditData.String)
added in v1.1.117
```
func (l *[LinkEditData](#LinkEditData)) String() [string](/builtin#string)
```
####
func (*LinkEditData) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2307) [¶](#LinkEditData.Write)
added in v1.1.33
```
func (l *[LinkEditData](#LinkEditData)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [LinkerOptimizationHint](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1961) [¶](#LinkerOptimizationHint)
added in v1.1.33
```
type LinkerOptimizationHint struct {
[LinkEditData](#LinkEditData)
}
```
####
type [LinkerOption](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1918) [¶](#LinkerOption)
added in v1.1.46
```
type LinkerOption struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[LinkerOptionCmd](/github.com/blacktop/[email protected]/types#LinkerOptionCmd)
Options [][string](/builtin#string)
}
```
A LinkerOption represents a Mach-O LC_LINKER_OPTION command.
####
func (*LinkerOption) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1924) [¶](#LinkerOption.LoadSize)
added in v1.1.117
```
func (l *[LinkerOption](#LinkerOption)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*LinkerOption) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1945) [¶](#LinkerOption.MarshalJSON)
added in v1.1.117
```
func (l *[LinkerOption](#LinkerOption)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*LinkerOption) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1942) [¶](#LinkerOption.String)
added in v1.1.46
```
func (l *[LinkerOption](#LinkerOption)) String() [string](/builtin#string)
```
####
func (*LinkerOption) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1931) [¶](#LinkerOption.Write)
added in v1.1.117
```
func (l *[LinkerOption](#LinkerOption)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [Load](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L18) [¶](#Load)
```
type Load interface {
Command() [types](/github.com/blacktop/[email protected]/types).[LoadCmd](/github.com/blacktop/[email protected]/types#LoadCmd)
LoadSize() [uint32](/builtin#uint32) // Need the TOC for alignment, sigh.
Raw() [][byte](/builtin#byte)
Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
String() [string](/builtin#string)
MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
}
```
A Load represents any Mach-O load command.
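Because every load command satisfies this interface, a slice of commands can be walked generically and narrowed with a type switch; the sketch below leaves obtaining the slice (e.g. from the parsed file's table of contents) to the caller, since that field layout is an assumption here.
```go
package main
import (
"fmt"
"github.com/blacktop/go-macho"
)
// printLoads summarizes load commands through the Load interface.
func printLoads(loads []macho.Load) {
for _, lc := range loads {
switch cmd := lc.(type) {
case *macho.Segment:
fmt.Printf("segment %-16s %2d sections\n", cmd.Name, cmd.Nsect)
case *macho.Symtab:
fmt.Printf("symtab with %d symbols\n", len(cmd.Syms))
case *macho.UUID:
fmt.Println("uuid:", cmd)
default:
fmt.Printf("%v (%d bytes)\n", cmd.Command(), cmd.LoadSize())
}
}
}
func main() {
_ = printLoads // wire this up with the load commands of a parsed *macho.File
}
```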
####
type [LoadBytes](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L44) [¶](#LoadBytes)
```
type LoadBytes [][byte](/builtin#byte)
```
A LoadBytes is the uninterpreted bytes of a Mach-O load command.
####
func (LoadBytes) [Copy](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L71) [¶](#LoadBytes.Copy)
```
func (b [LoadBytes](#LoadBytes)) Copy() [LoadBytes](#LoadBytes)
```
####
func (LoadBytes) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L72) [¶](#LoadBytes.LoadSize)
```
func (b [LoadBytes](#LoadBytes)) LoadSize() [uint32](/builtin#uint32)
```
####
func (LoadBytes) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L61) [¶](#LoadBytes.MarshalJSON)
added in v1.1.117
```
func (b [LoadBytes](#LoadBytes)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (LoadBytes) [Raw](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L70) [¶](#LoadBytes.Raw)
```
func (b [LoadBytes](#LoadBytes)) Raw() [][byte](/builtin#byte)
```
####
func (LoadBytes) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L46) [¶](#LoadBytes.String)
```
func (b [LoadBytes](#LoadBytes)) String() [string](/builtin#string)
```
####
func (LoadBytes) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L73) [¶](#LoadBytes.Write)
added in v1.1.33
```
func (b [LoadBytes](#LoadBytes)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [LoadCmdBytes](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L31) [¶](#LoadCmdBytes)
```
type LoadCmdBytes struct {
[types](/github.com/blacktop/[email protected]/types).[LoadCmd](/github.com/blacktop/[email protected]/types#LoadCmd)
[LoadBytes](#LoadBytes)
}
```
LoadCmdBytes is a command-tagged sequence of bytes.
It is used for load commands that are not (yet) interpreted in detail,
providing common behavior for all such commands.
####
func (LoadCmdBytes) [Copy](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L39) [¶](#LoadCmdBytes.Copy)
```
func (s [LoadCmdBytes](#LoadCmdBytes)) Copy() [LoadCmdBytes](#LoadCmdBytes)
```
####
func (LoadCmdBytes) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L36) [¶](#LoadCmdBytes.String)
```
func (s [LoadCmdBytes](#LoadCmdBytes)) String() [string](/builtin#string)
```
####
type [LoadDylib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L926) [¶](#LoadDylib)
added in v1.1.117
```
type LoadDylib struct {
[Dylib](#Dylib)
}
```
A LoadDylib represents a Mach-O LC_LOAD_DYLIB command.
####
type [LoadDylinker](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L944) [¶](#LoadDylinker)
added in v1.0.4
```
type LoadDylinker struct {
[Dylinker](#Dylinker)
}
```
A LoadDylinker represents a Mach-O LC_LOAD_DYLINKER command.
####
type [LoadFvmlib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L619) [¶](#LoadFvmlib)
added in v1.1.46
```
type LoadFvmlib struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[LoadFvmLibCmd](/github.com/blacktop/[email protected]/types#LoadFvmLibCmd)
Name [string](/builtin#string)
}
```
A LoadFvmlib represents a Mach-O LC_LOADFVMLIB command.
####
func (*LoadFvmlib) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L625) [¶](#LoadFvmlib.LoadSize)
added in v1.1.117
```
func (l *[LoadFvmlib](#LoadFvmlib)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*LoadFvmlib) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L646) [¶](#LoadFvmlib.MarshalJSON)
added in v1.1.117
```
func (l *[LoadFvmlib](#LoadFvmlib)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*LoadFvmlib) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L643) [¶](#LoadFvmlib.String)
added in v1.1.46
```
func (l *[LoadFvmlib](#LoadFvmlib)) String() [string](/builtin#string)
```
####
func (*LoadFvmlib) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L628) [¶](#LoadFvmlib.Write)
added in v1.1.117
```
func (l *[LoadFvmlib](#LoadFvmlib)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [Note](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1988) [¶](#Note)
added in v1.1.46
```
type Note struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[NoteCmd](/github.com/blacktop/[email protected]/types#NoteCmd)
}
```
A Note represents a Mach-O LC_NOTE command.
####
func (*Note) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1993) [¶](#Note.LoadSize)
added in v1.1.117
```
func (n *[Note](#Note)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Note) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2005) [¶](#Note.MarshalJSON)
added in v1.1.117
```
func (n *[Note](#Note)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Note) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2002) [¶](#Note.String)
added in v1.1.46
```
func (n *[Note](#Note)) String() [string](/builtin#string)
```
####
func (*Note) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1996) [¶](#Note.Write)
added in v1.1.117
```
func (n *[Note](#Note)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [PrebindCheckSum](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1268) [¶](#PrebindCheckSum)
added in v1.1.117
```
type PrebindCheckSum struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[PrebindCksumCmd](/github.com/blacktop/[email protected]/types#PrebindCksumCmd)
}
```
A PrebindCheckSum is a Mach-O LC_PREBIND_CKSUM command.
####
func (*PrebindCheckSum) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1273) [¶](#PrebindCheckSum.LoadSize)
added in v1.1.117
```
func (l *[PrebindCheckSum](#PrebindCheckSum)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*PrebindCheckSum) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1285) [¶](#PrebindCheckSum.MarshalJSON)
added in v1.1.117
```
func (l *[PrebindCheckSum](#PrebindCheckSum)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*PrebindCheckSum) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1282) [¶](#PrebindCheckSum.String)
added in v1.1.117
```
func (l *[PrebindCheckSum](#PrebindCheckSum)) String() [string](/builtin#string)
```
####
func (*PrebindCheckSum) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1276) [¶](#PrebindCheckSum.Write)
added in v1.1.117
```
func (l *[PrebindCheckSum](#PrebindCheckSum)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [PreboundDylib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L962) [¶](#PreboundDylib)
added in v1.1.46
```
type PreboundDylib struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[PreboundDylibCmd](/github.com/blacktop/[email protected]/types#PreboundDylibCmd)
Name [string](/builtin#string)
LinkedModulesBitVector [string](/builtin#string)
}
```
PreboundDylib represents a Mach-O LC_PREBOUND_DYLIB command.
####
func (*PreboundDylib) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L969) [¶](#PreboundDylib.LoadSize)
added in v1.1.117
```
func (d *[PreboundDylib](#PreboundDylib)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*PreboundDylib) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1001) [¶](#PreboundDylib.MarshalJSON)
added in v1.1.117
```
func (d *[PreboundDylib](#PreboundDylib)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*PreboundDylib) [Put](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L972) [¶](#PreboundDylib.Put)
added in v1.1.117
```
func (d *[PreboundDylib](#PreboundDylib)) Put(b [][byte](/builtin#byte), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [int](/builtin#int)
```
####
func (*PreboundDylib) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L998) [¶](#PreboundDylib.String)
added in v1.1.46
```
func (d *[PreboundDylib](#PreboundDylib)) String() [string](/builtin#string)
```
####
func (*PreboundDylib) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L980) [¶](#PreboundDylib.Write)
added in v1.1.117
```
func (d *[PreboundDylib](#PreboundDylib)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [Prepage](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L775) [¶](#Prepage)
added in v1.1.46
```
type Prepage struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[PrePageCmd](/github.com/blacktop/[email protected]/types#PrePageCmd)
}
```
A Prepage represents a Mach-O LC_PREPAGE command.
####
func (*Prepage) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L780) [¶](#Prepage.LoadSize)
added in v1.1.117
```
func (c *[Prepage](#Prepage)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Prepage) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L792) [¶](#Prepage.MarshalJSON)
added in v1.1.117
```
func (c *[Prepage](#Prepage)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Prepage) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L789) [¶](#Prepage.String)
added in v1.1.117
```
func (c *[Prepage](#Prepage)) String() [string](/builtin#string)
```
####
func (*Prepage) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L783) [¶](#Prepage.Write)
added in v1.1.117
```
func (c *[Prepage](#Prepage)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [ReExportDylib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1509) [¶](#ReExportDylib)
```
type ReExportDylib struct {
[Dylib](#Dylib)
}
```
A ReExportDylib represents a Mach-O LC_REEXPORT_DYLIB command.
####
type [Regs386](https://github.com/blacktop/go-macho/blob/v1.1.165/macho.go#L16) [¶](#Regs386)
```
type Regs386 struct {
AX [uint32](/builtin#uint32)
BX [uint32](/builtin#uint32)
CX [uint32](/builtin#uint32)
DX [uint32](/builtin#uint32)
DI [uint32](/builtin#uint32)
SI [uint32](/builtin#uint32)
BP [uint32](/builtin#uint32)
SP [uint32](/builtin#uint32)
SS [uint32](/builtin#uint32)
FLAGS [uint32](/builtin#uint32)
IP [uint32](/builtin#uint32)
CS [uint32](/builtin#uint32)
DS [uint32](/builtin#uint32)
ES [uint32](/builtin#uint32)
FS [uint32](/builtin#uint32)
GS [uint32](/builtin#uint32)
}
```
Regs386 is the Mach-O 386 register structure.
####
type [RegsAMD64](https://github.com/blacktop/go-macho/blob/v1.1.165/macho.go#L36) [¶](#RegsAMD64)
```
type RegsAMD64 struct {
AX [uint64](/builtin#uint64)
BX [uint64](/builtin#uint64)
CX [uint64](/builtin#uint64)
DX [uint64](/builtin#uint64)
DI [uint64](/builtin#uint64)
SI [uint64](/builtin#uint64)
BP [uint64](/builtin#uint64)
SP [uint64](/builtin#uint64)
R8 [uint64](/builtin#uint64)
R9 [uint64](/builtin#uint64)
R10 [uint64](/builtin#uint64)
R11 [uint64](/builtin#uint64)
R12 [uint64](/builtin#uint64)
R13 [uint64](/builtin#uint64)
R14 [uint64](/builtin#uint64)
R15 [uint64](/builtin#uint64)
IP [uint64](/builtin#uint64)
FLAGS [uint64](/builtin#uint64)
CS [uint64](/builtin#uint64)
FS [uint64](/builtin#uint64)
GS [uint64](/builtin#uint64)
}
```
RegsAMD64 is the Mach-O AMD64 register structure.
####
type [RegsARM](https://github.com/blacktop/go-macho/blob/v1.1.165/macho.go#L61) [¶](#RegsARM)
```
type RegsARM struct {
R0 [uint32](/builtin#uint32)
R1 [uint32](/builtin#uint32)
R2 [uint32](/builtin#uint32)
R3 [uint32](/builtin#uint32)
R4 [uint32](/builtin#uint32)
R5 [uint32](/builtin#uint32)
R6 [uint32](/builtin#uint32)
R7 [uint32](/builtin#uint32)
R8 [uint32](/builtin#uint32)
R9 [uint32](/builtin#uint32)
R10 [uint32](/builtin#uint32)
R11 [uint32](/builtin#uint32)
R12 [uint32](/builtin#uint32)
SP [uint32](/builtin#uint32)
LR [uint32](/builtin#uint32)
PC [uint32](/builtin#uint32)
CPSR [uint32](/builtin#uint32)
}
```
RegsARM is the Mach-O ARM register structure.
####
type [RegsARM64](https://github.com/blacktop/go-macho/blob/v1.1.165/macho.go#L82) [¶](#RegsARM64)
```
type RegsARM64 struct {
X0 [uint64](/builtin#uint64)
X1 [uint64](/builtin#uint64)
X2 [uint64](/builtin#uint64)
X3 [uint64](/builtin#uint64)
X4 [uint64](/builtin#uint64)
X5 [uint64](/builtin#uint64)
X6 [uint64](/builtin#uint64)
X7 [uint64](/builtin#uint64)
X8 [uint64](/builtin#uint64)
X9 [uint64](/builtin#uint64)
X10 [uint64](/builtin#uint64)
X11 [uint64](/builtin#uint64)
X12 [uint64](/builtin#uint64)
X13 [uint64](/builtin#uint64)
X14 [uint64](/builtin#uint64)
X15 [uint64](/builtin#uint64)
X16 [uint64](/builtin#uint64)
X17 [uint64](/builtin#uint64)
X18 [uint64](/builtin#uint64)
X19 [uint64](/builtin#uint64)
X20 [uint64](/builtin#uint64)
X21 [uint64](/builtin#uint64)
X22 [uint64](/builtin#uint64)
X23 [uint64](/builtin#uint64)
X24 [uint64](/builtin#uint64)
X25 [uint64](/builtin#uint64)
X26 [uint64](/builtin#uint64)
X27 [uint64](/builtin#uint64)
X28 [uint64](/builtin#uint64)
FP [uint64](/builtin#uint64)
LR [uint64](/builtin#uint64)
SP [uint64](/builtin#uint64)
PC [uint64](/builtin#uint64)
CPSR [uint32](/builtin#uint32)
PAD [uint32](/builtin#uint32)
}
```
RegsARM64 is the Mach-O ARM64 register structure.
####
type [Routines](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1022) [¶](#Routines)
added in v1.1.20
```
type Routines struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[RoutinesCmd](/github.com/blacktop/[email protected]/types#RoutinesCmd)
}
```
A Routines is a Mach-O LC_ROUTINES command.
####
func (*Routines) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1027) [¶](#Routines.LoadSize)
added in v1.1.117
```
func (l *[Routines](#Routines)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Routines) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1039) [¶](#Routines.MarshalJSON)
added in v1.1.117
```
func (l *[Routines](#Routines)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Routines) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1036) [¶](#Routines.String)
added in v1.1.20
```
func (l *[Routines](#Routines)) String() [string](/builtin#string)
```
####
func (*Routines) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1030) [¶](#Routines.Write)
added in v1.1.117
```
func (l *[Routines](#Routines)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [Routines64](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1311) [¶](#Routines64)
```
type Routines64 struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[Routines64Cmd](/github.com/blacktop/[email protected]/types#Routines64Cmd)
}
```
A Routines64 is a Mach-O LC_ROUTINES_64 command.
####
func (*Routines64) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1316) [¶](#Routines64.LoadSize)
added in v1.1.117
```
func (l *[Routines64](#Routines64)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Routines64) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1328) [¶](#Routines64.MarshalJSON)
added in v1.1.117
```
func (l *[Routines64](#Routines64)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Routines64) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1325) [¶](#Routines64.String)
added in v1.1.20
```
func (l *[Routines64](#Routines64)) String() [string](/builtin#string)
```
####
func (*Routines64) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1319) [¶](#Routines64.Write)
added in v1.1.117
```
func (l *[Routines64](#Routines64)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [Rpath](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1381) [¶](#Rpath)
```
type Rpath struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[RpathCmd](/github.com/blacktop/[email protected]/types#RpathCmd)
Path [string](/builtin#string)
}
```
A Rpath represents a Mach-O LC_RPATH command.
####
func (*Rpath) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1387) [¶](#Rpath.LoadSize)
added in v1.1.117
```
func (r *[Rpath](#Rpath)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Rpath) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1408) [¶](#Rpath.MarshalJSON)
added in v1.1.117
```
func (r *[Rpath](#Rpath)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Rpath) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1405) [¶](#Rpath.String)
added in v1.0.9
```
func (r *[Rpath](#Rpath)) String() [string](/builtin#string)
```
####
func (*Rpath) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1390) [¶](#Rpath.Write)
added in v1.1.117
```
func (r *[Rpath](#Rpath)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [Segment](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L112) [¶](#Segment)
```
type Segment struct {
[SegmentHeader](#SegmentHeader)
[LoadBytes](#LoadBytes)
// Embed ReaderAt for ReadAt method.
// Do not embed SectionReader directly
// to avoid having Read and Seek.
// If a client wants Read and Seek it must use
// Open() to avoid fighting over the seek offset
// with other clients.
[io](/io).[ReaderAt](/io#ReaderAt)
// contains filtered or unexported fields
}
```
A Segment represents a Mach-O 32-bit or 64-bit load segment command.
####
func (*Segment) [Copy](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L230) [¶](#Segment.Copy)
```
func (s *[Segment](#Segment)) Copy() *[Segment](#Segment)
```
####
func (*Segment) [CopyZeroed](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L234) [¶](#Segment.CopyZeroed)
```
func (s *[Segment](#Segment)) CopyZeroed() *[Segment](#Segment)
```
####
func (*Segment) [Data](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L207) [¶](#Segment.Data)
```
func (s *[Segment](#Segment)) Data() ([][byte](/builtin#byte), [error](/builtin#error))
```
Data reads and returns the contents of the segment.
####
func (*Segment) [LessThan](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L158) [¶](#Segment.LessThan)
added in v1.1.56
```
func (s *[Segment](#Segment)) LessThan(o *[Segment](#Segment)) [bool](/builtin#bool)
```
####
func (*Segment) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L247) [¶](#Segment.LoadSize)
```
func (s *[Segment](#Segment)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Segment) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L268) [¶](#Segment.MarshalJSON)
added in v1.1.117
```
func (s *[Segment](#Segment)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Segment) [Open](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L217) [¶](#Segment.Open)
```
func (s *[Segment](#Segment)) Open() [io](/io).[ReadSeeker](/io#ReadSeeker)
```
Open returns a new ReadSeeker reading the segment.
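To illustrate the difference between the two accessors: Data reads the whole segment into memory, while Open hands back an independent cursor, so several readers never fight over a shared offset. A rough sketch, with the binary path and segment name as placeholders (and Open assumed to behave like its debug/macho counterpart):
```go
package main
import (
"fmt"
"io"
"log"
"github.com/blacktop/go-macho"
)
func main() {
m, err := macho.Open("/bin/ls") // placeholder binary
if err != nil {
log.Fatal(err)
}
seg := m.Segment("__TEXT") // illustrative segment name
if seg == nil {
log.Fatal("segment not found")
}
// Data returns the entire segment contents in one slice.
raw, err := seg.Data()
if err != nil {
log.Fatal(err)
}
fmt.Println("segment holds", len(raw), "bytes")
// Open returns a fresh ReadSeeker over the same bytes.
r := seg.Open()
head := make([]byte, 16)
if _, err := io.ReadFull(r, head); err != nil {
log.Fatal(err)
}
fmt.Printf("first 16 bytes: % x\n", head)
}
```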
####
func (*Segment) [Put32](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L128) [¶](#Segment.Put32)
```
func (s *[Segment](#Segment)) Put32(b [][byte](/builtin#byte), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [int](/builtin#int)
```
####
func (*Segment) [Put64](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L143) [¶](#Segment.Put64)
```
func (s *[Segment](#Segment)) Put64(b [][byte](/builtin#byte), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [int](/builtin#int)
```
####
func (*Segment) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L254) [¶](#Segment.String)
```
func (s *[Segment](#Segment)) String() [string](/builtin#string)
```
####
func (*Segment) [UncompressedSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L221) [¶](#Segment.UncompressedSize)
```
func (s *[Segment](#Segment)) UncompressedSize(t *[FileTOC](#FileTOC), align [uint64](/builtin#uint64)) [uint64](/builtin#uint64)
```
UncompressedSize returns the size of the segment with its sections uncompressed, ignoring its offset within the file. The returned size is rounded up to the power of two in align.
####
func (*Segment) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L162) [¶](#Segment.Write)
added in v1.1.33
```
func (s *[Segment](#Segment)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [SegmentHeader](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L90) [¶](#SegmentHeader)
```
type SegmentHeader struct {
[types](/github.com/blacktop/[email protected]/types).[LoadCmd](/github.com/blacktop/[email protected]/types#LoadCmd)
Len [uint32](/builtin#uint32)
Name [string](/builtin#string)
Addr [uint64](/builtin#uint64)
Memsz [uint64](/builtin#uint64)
Offset [uint64](/builtin#uint64)
Filesz [uint64](/builtin#uint64)
Maxprot [types](/github.com/blacktop/[email protected]/types).[VmProtection](/github.com/blacktop/[email protected]/types#VmProtection)
Prot [types](/github.com/blacktop/[email protected]/types).[VmProtection](/github.com/blacktop/[email protected]/types#VmProtection)
Nsect [uint32](/builtin#uint32)
Flag [types](/github.com/blacktop/[email protected]/types).[SegFlag](/github.com/blacktop/[email protected]/types#SegFlag)
Firstsect [uint32](/builtin#uint32)
}
```
A SegmentHeader is the header for a Mach-O 32-bit or 64-bit load segment command.
####
func (*SegmentHeader) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L105) [¶](#SegmentHeader.String)
```
func (s *[SegmentHeader](#SegmentHeader)) String() [string](/builtin#string)
```
####
type [Segments](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L298) [¶](#Segments)
added in v1.1.56
```
type Segments []*[Segment](#Segment)
```
####
func (Segments) [Len](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L300) [¶](#Segments.Len)
added in v1.1.56
```
func (v [Segments](#Segments)) Len() [int](/builtin#int)
```
####
func (Segments) [Less](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L304) [¶](#Segments.Less)
added in v1.1.56
```
func (v [Segments](#Segments)) Less(i, j [int](/builtin#int)) [bool](/builtin#bool)
```
####
func (Segments) [Swap](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L308) [¶](#Segments.Swap)
added in v1.1.56
```
func (v [Segments](#Segments)) Swap(i, j [int](/builtin#int))
```
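Since Segments provides Len, Less, and Swap it satisfies sort.Interface, so the slice returned by (*File).Segments can be ordered with the standard library; a small sketch (the binary path is a placeholder and Open is assumed to parse a file from disk):
```go
package main
import (
"fmt"
"log"
"sort"
"github.com/blacktop/go-macho"
)
func main() {
m, err := macho.Open("/bin/ls") // placeholder binary
if err != nil {
log.Fatal(err)
}
segs := m.Segments()
sort.Sort(segs) // ordered by the comparison Segment.LessThan implements
for _, s := range segs {
fmt.Printf("%-16s %#x\n", s.Name, s.Addr)
}
}
```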
####
type [SourceVersion](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1823) [¶](#SourceVersion)
```
type SourceVersion struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[SourceVersionCmd](/github.com/blacktop/[email protected]/types#SourceVersionCmd)
}
```
A SourceVersion represents a Mach-O LC_SOURCE_VERSION command.
####
func (*SourceVersion) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1828) [¶](#SourceVersion.LoadSize)
added in v1.1.117
```
func (s *[SourceVersion](#SourceVersion)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*SourceVersion) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1840) [¶](#SourceVersion.MarshalJSON)
added in v1.1.117
```
func (s *[SourceVersion](#SourceVersion)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*SourceVersion) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1837) [¶](#SourceVersion.String)
added in v1.0.4
```
func (s *[SourceVersion](#SourceVersion)) String() [string](/builtin#string)
```
####
func (*SourceVersion) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1831) [¶](#SourceVersion.Write)
added in v1.1.117
```
func (s *[SourceVersion](#SourceVersion)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [SplitInfo](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1464) [¶](#SplitInfo)
```
type SplitInfo struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[SegmentSplitInfoCmd](/github.com/blacktop/[email protected]/types#SegmentSplitInfoCmd)
Version [uint8](/builtin#uint8)
}
```
A SplitInfo represents a Mach-O LC_SEGMENT_SPLIT_INFO command.
####
func (*SplitInfo) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1470) [¶](#SplitInfo.LoadSize)
added in v1.1.117
```
func (l *[SplitInfo](#SplitInfo)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*SplitInfo) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1488) [¶](#SplitInfo.MarshalJSON)
added in v1.1.117
```
func (l *[SplitInfo](#SplitInfo)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*SplitInfo) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1479) [¶](#SplitInfo.String)
added in v1.0.9
```
func (s *[SplitInfo](#SplitInfo)) String() [string](/builtin#string)
```
####
func (*SplitInfo) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1473) [¶](#SplitInfo.Write)
added in v1.1.33
```
func (l *[SplitInfo](#SplitInfo)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [SubClient](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1142) [¶](#SubClient)
```
type SubClient struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[SubClientCmd](/github.com/blacktop/[email protected]/types#SubClientCmd)
Name [string](/builtin#string)
}
```
A SubClient is a Mach-O LC_SUB_CLIENT command.
####
func (*SubClient) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1148) [¶](#SubClient.LoadSize)
added in v1.1.117
```
func (l *[SubClient](#SubClient)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*SubClient) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1169) [¶](#SubClient.MarshalJSON)
added in v1.1.117
```
func (l *[SubClient](#SubClient)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*SubClient) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1166) [¶](#SubClient.String)
added in v1.0.9
```
func (l *[SubClient](#SubClient)) String() [string](/builtin#string)
```
####
func (*SubClient) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1151) [¶](#SubClient.Write)
added in v1.1.117
```
func (l *[SubClient](#SubClient)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [SubFramework](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1058) [¶](#SubFramework)
```
type SubFramework struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[SubFrameworkCmd](/github.com/blacktop/[email protected]/types#SubFrameworkCmd)
Framework [string](/builtin#string)
}
```
A SubFramework is a Mach-O LC_SUB_FRAMEWORK command.
####
func (*SubFramework) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1064) [¶](#SubFramework.LoadSize)
added in v1.1.117
```
func (l *[SubFramework](#SubFramework)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*SubFramework) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1083) [¶](#SubFramework.MarshalJSON)
added in v1.1.117
```
func (l *[SubFramework](#SubFramework)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*SubFramework) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1082) [¶](#SubFramework.String)
added in v1.0.24
```
func (l *[SubFramework](#SubFramework)) String() [string](/builtin#string)
```
####
func (*SubFramework) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1067) [¶](#SubFramework.Write)
added in v1.1.117
```
func (l *[SubFramework](#SubFramework)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [SubLibrary](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1186) [¶](#SubLibrary)
added in v1.1.46
```
type SubLibrary struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[SubLibraryCmd](/github.com/blacktop/[email protected]/types#SubLibraryCmd)
Library [string](/builtin#string)
}
```
A SubLibrary is a Mach-O LC_SUB_LIBRARY command.
####
func (*SubLibrary) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1192) [¶](#SubLibrary.LoadSize)
added in v1.1.117
```
func (l *[SubLibrary](#SubLibrary)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*SubLibrary) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1211) [¶](#SubLibrary.MarshalJSON)
added in v1.1.117
```
func (l *[SubLibrary](#SubLibrary)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*SubLibrary) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1210) [¶](#SubLibrary.String)
added in v1.1.46
```
func (l *[SubLibrary](#SubLibrary)) String() [string](/builtin#string)
```
####
func (*SubLibrary) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1195) [¶](#SubLibrary.Write)
added in v1.1.117
```
func (l *[SubLibrary](#SubLibrary)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [SubUmbrella](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1100) [¶](#SubUmbrella)
added in v1.1.46
```
type SubUmbrella struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[SubUmbrellaCmd](/github.com/blacktop/[email protected]/types#SubUmbrellaCmd)
Umbrella [string](/builtin#string)
}
```
A SubUmbrella is a Mach-O LC_SUB_UMBRELLA command.
####
func (*SubUmbrella) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1106) [¶](#SubUmbrella.LoadSize)
added in v1.1.117
```
func (l *[SubUmbrella](#SubUmbrella)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*SubUmbrella) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1125) [¶](#SubUmbrella.MarshalJSON)
added in v1.1.117
```
func (l *[SubUmbrella](#SubUmbrella)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*SubUmbrella) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1124) [¶](#SubUmbrella.String)
added in v1.1.46
```
func (l *[SubUmbrella](#SubUmbrella)) String() [string](/builtin#string)
```
####
func (*SubUmbrella) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1109) [¶](#SubUmbrella.Write)
added in v1.1.117
```
func (l *[SubUmbrella](#SubUmbrella)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [SymSeg](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L520) [¶](#SymSeg)
added in v1.1.46
```
type SymSeg struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[SymsegCmd](/github.com/blacktop/[email protected]/types#SymsegCmd)
}
```
A SymSeg represents a Mach-O LC_SYMSEG command.
####
func (*SymSeg) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L525) [¶](#SymSeg.LoadSize)
added in v1.1.117
```
func (s *[SymSeg](#SymSeg)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*SymSeg) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L537) [¶](#SymSeg.MarshalJSON)
added in v1.1.117
```
func (s *[SymSeg](#SymSeg)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*SymSeg) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L534) [¶](#SymSeg.String)
added in v1.1.46
```
func (s *[SymSeg](#SymSeg)) String() [string](/builtin#string)
```
####
func (*SymSeg) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L528) [¶](#SymSeg.Write)
added in v1.1.117
```
func (s *[SymSeg](#SymSeg)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [Symbol](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L376) [¶](#Symbol)
```
type Symbol struct {
Name [string](/builtin#string)
Type [types](/github.com/blacktop/[email protected]/types).[NType](/github.com/blacktop/[email protected]/types#NType)
Sect [uint8](/builtin#uint8)
Desc [types](/github.com/blacktop/[email protected]/types).[NDescType](/github.com/blacktop/[email protected]/types#NDescType)
Value [uint64](/builtin#uint64)
}
```
A Symbol is a Mach-O 32-bit or 64-bit symbol table entry.
####
func (Symbol) [GetLib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L477) [¶](#Symbol.GetLib)
added in v1.1.150
```
func (s [Symbol](#Symbol)) GetLib(m *[File](#File)) [string](/builtin#string)
```
####
func (Symbol) [GetType](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L384) [¶](#Symbol.GetType)
added in v1.1.150
```
func (s [Symbol](#Symbol)) GetType(m *[File](#File)) [string](/builtin#string)
```
####
func (*Symbol) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L499) [¶](#Symbol.MarshalJSON)
added in v1.1.117
```
func (s *[Symbol](#Symbol)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (Symbol) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L496) [¶](#Symbol.String)
added in v1.0.24
```
func (s [Symbol](#Symbol)) String(m *[File](#File)) [string](/builtin#string)
```
####
type [Symtab](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L317) [¶](#Symtab)
```
type Symtab struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[SymtabCmd](/github.com/blacktop/[email protected]/types#SymtabCmd)
Syms [][Symbol](#Symbol)
}
```
A Symtab represents a Mach-O LC_SYMTAB command.
####
func (*Symtab) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L323) [¶](#Symtab.LoadSize)
```
func (s *[Symtab](#Symtab)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Symtab) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L355) [¶](#Symtab.MarshalJSON)
added in v1.1.117
```
func (s *[Symtab](#Symtab)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Symtab) [Put](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L326) [¶](#Symtab.Put)
```
func (s *[Symtab](#Symtab)) Put(b [][byte](/builtin#byte), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [int](/builtin#int)
```
####
func (*Symtab) [Search](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L341) [¶](#Symtab.Search)
added in v1.1.85
```
func (s *[Symtab](#Symtab)) Search(name [string](/builtin#string)) (*[Symbol](#Symbol), [error](/builtin#error))
```
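A hedged sketch of looking a symbol up by name; how the *Symtab is obtained from a parsed file is left open, and the symbol name is purely illustrative.
```go
package main
import (
"fmt"
"github.com/blacktop/go-macho"
)
// lookup reports the address of a named symbol in a symbol table.
func lookup(st *macho.Symtab, name string) {
sym, err := st.Search(name)
if err != nil {
fmt.Println("not found:", err)
return
}
fmt.Printf("%s -> %#x (section %d)\n", sym.Name, sym.Value, sym.Sect)
}
func main() {
_ = lookup // e.g. lookup(st, "_main") with a real *macho.Symtab
}
```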
####
func (*Symtab) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L349) [¶](#Symtab.String)
```
func (s *[Symtab](#Symtab)) String() [string](/builtin#string)
```
####
func (*Symtab) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L335) [¶](#Symtab.Write)
added in v1.1.33
```
func (s *[Symtab](#Symtab)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [Thread](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L556) [¶](#Thread)
added in v1.1.46
```
type Thread struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[ThreadCmd](/github.com/blacktop/[email protected]/types#ThreadCmd)
Threads [][types](/github.com/blacktop/[email protected]/types).[ThreadState](/github.com/blacktop/[email protected]/types#ThreadState)
// contains filtered or unexported fields
}
```
A Thread represents a Mach-O LC_THREAD command.
####
func (*Thread) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L563) [¶](#Thread.LoadSize)
added in v1.1.117
```
func (t *[Thread](#Thread)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*Thread) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L593) [¶](#Thread.MarshalJSON)
added in v1.1.117
```
func (t *[Thread](#Thread)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*Thread) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L583) [¶](#Thread.String)
added in v1.1.46
```
func (t *[Thread](#Thread)) String() [string](/builtin#string)
```
####
func (*Thread) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L566) [¶](#Thread.Write)
added in v1.1.117
```
func (t *[Thread](#Thread)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [TwolevelHints](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1228) [¶](#TwolevelHints)
added in v1.1.46
```
type TwolevelHints struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[TwolevelHintsCmd](/github.com/blacktop/[email protected]/types#TwolevelHintsCmd)
Hints [][types](/github.com/blacktop/[email protected]/types).[TwolevelHint](/github.com/blacktop/[email protected]/types#TwolevelHint)
}
```
A TwolevelHints is a Mach-O LC_TWOLEVEL_HINTS command.
####
func (*TwolevelHints) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1234) [¶](#TwolevelHints.LoadSize)
added in v1.1.117
```
func (l *[TwolevelHints](#TwolevelHints)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*TwolevelHints) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1249) [¶](#TwolevelHints.MarshalJSON)
added in v1.1.117
```
func (l *[TwolevelHints](#TwolevelHints)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*TwolevelHints) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1246) [¶](#TwolevelHints.String)
added in v1.1.46
```
func (l *[TwolevelHints](#TwolevelHints)) String() [string](/builtin#string)
```
####
func (*TwolevelHints) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1237) [¶](#TwolevelHints.Write)
added in v1.1.117
```
func (l *[TwolevelHints](#TwolevelHints)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [UUID](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1347) [¶](#UUID)
```
type UUID struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[UUIDCmd](/github.com/blacktop/[email protected]/types#UUIDCmd)
}
```
UUID represents a Mach-O LC_UUID command.
####
func (*UUID) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1352) [¶](#UUID.LoadSize)
```
func (l *[UUID](#UUID)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*UUID) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1364) [¶](#UUID.MarshalJSON)
added in v1.1.117
```
func (l *[UUID](#UUID)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*UUID) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1361) [¶](#UUID.String)
```
func (l *[UUID](#UUID)) String() [string](/builtin#string)
```
####
func (*UUID) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1355) [¶](#UUID.Write)
added in v1.1.117
```
func (l *[UUID](#UUID)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [UnixThread](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L610) [¶](#UnixThread)
```
type UnixThread struct {
[Thread](#Thread)
}
```
A UnixThread represents a Mach-O LC_UNIXTHREAD command.
####
type [UpwardDylib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1666) [¶](#UpwardDylib)
```
type UpwardDylib struct {
[Dylib](#Dylib)
}
```
An UpwardDylib represents a Mach-O LC_LOAD_UPWARD_DYLIB load command.
####
type [VersionMin](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2265) [¶](#VersionMin)
added in v1.1.117
```
type VersionMin struct {
[LoadBytes](#LoadBytes)
[types](/github.com/blacktop/[email protected]/types).[VersionMinCmd](/github.com/blacktop/[email protected]/types#VersionMinCmd)
}
```
A VersionMin represents a Mach-O LC_VERSION_MIN_* command.
####
func (*VersionMin) [LoadSize](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2270) [¶](#VersionMin.LoadSize)
added in v1.1.117
```
func (v *[VersionMin](#VersionMin)) LoadSize() [uint32](/builtin#uint32)
```
####
func (*VersionMin) [MarshalJSON](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2282) [¶](#VersionMin.MarshalJSON)
added in v1.1.117
```
func (v *[VersionMin](#VersionMin)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
####
func (*VersionMin) [String](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2279) [¶](#VersionMin.String)
added in v1.1.117
```
func (v *[VersionMin](#VersionMin)) String() [string](/builtin#string)
```
####
func (*VersionMin) [Write](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L2273) [¶](#VersionMin.Write)
added in v1.1.117
```
func (v *[VersionMin](#VersionMin)) Write(buf *[bytes](/bytes).[Buffer](/bytes#Buffer), o [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder)) [error](/builtin#error)
```
####
type [VersionMinMacOSX](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1675) [¶](#VersionMinMacOSX)
added in v1.0.9
```
type VersionMinMacOSX struct {
[VersionMin](#VersionMin)
}
```
VersionMinMacOSX represents the minimum MacOSX OS version the binary was built for.
####
type [VersionMinTvOS](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1970) [¶](#VersionMinTvOS)
added in v1.0.9
```
type VersionMinTvOS struct {
[VersionMin](#VersionMin)
}
```
VersionMinTvOS represents the minimum AppleTV OS version the binary was built for.
####
type [VersionMinWatchOS](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1979) [¶](#VersionMinWatchOS)
added in v1.0.9
```
type VersionMinWatchOS struct {
[VersionMin](#VersionMin)
}
```
VersionMinWatchOS represents the minimum WatchOS version the binary was built for.
####
type [VersionMiniPhoneOS](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1684) [¶](#VersionMiniPhoneOS)
added in v1.0.9
```
type VersionMiniPhoneOS struct {
[VersionMin](#VersionMin)
}
```
VersionMiniPhoneOS represents the minimum iPhoneOS version the binary was built for.
####
type [WeakDylib](https://github.com/blacktop/go-macho/blob/v1.1.165/cmds.go#L1302) [¶](#WeakDylib)
```
type WeakDylib struct {
[Dylib](#Dylib)
}
```
A WeakDylib represents a Mach-O LC_LOAD_WEAK_DYLIB command.
Package ‘rtop’
March 31, 2023
Type Package
Title Interpolation of Data with Variable Spatial Support
Version 0.6-6
Date 2023-03-31
Maintainer <NAME> <<EMAIL>>
Imports gstat, graphics, stats, methods, utils, grDevices, sf, units,
sp
Suggests parallel, intamap, rgeos, spacetime, data.table, rgdal,
reshape2
Description Data with irregular spatial support, such as runoff related data or data from administrative
units, can with 'rtop' be interpolated to locations without observations with the top-kriging
method. A description of the package is given by Skøien et al (2014) <doi:10.1016/j.cageo.2014.02.009>.
License GPL (>= 2)
NeedsCompilation yes
Encoding UTF-8
Author <NAME> [aut, cre],
<NAME> [ctb] (For the original FORTRAN code of SCE-UA, translated
and modified to R in this package.)
Repository CRAN
Date/Publication 2023-03-31 18:10:02 UTC
R topics documented:
rtop-package
checkVario
createRtopObject
downloadRtopExampleData
gDist
getRtopParams
plot.rtopVariogramCloud
readAreaInfo
readAreas
rtopCluster
rtopDisc
rtopFitVariogram
rtopKrige
rtopSim
rtopVariogram
sceua
useRtopWithIntamap
variogramModel
varMat
rtop-package A package providing methods for analysis and spatial interpolation of data with an irregular support
Description
This package provides geostatistical methods for analysis and interpolation of data that has an irregular support, such as runoff characteristics or population health data. The methods in this package are based on the top-kriging approach suggested in Skoien et al (2006), with some extensions from Gottschalk (1993). This package can be used as an add-on package for the automatic interpolation package developed within the intamap project (www.intamap.org).
Workflow
The workflow within the package suggests that the user is interested in a prediction of a process at a series of locations where observations have not been made. The example below shows a regionalization of mean annual runoff in Austria.
Although it is possible to perform each step with all necessary arguments, the easiest interface to the method is to store all variables (such as observations, prediction locations and parameters) in an rtop-object, which is created by a call to createRtopObject. The element params below consists of changes to the default parameters. A further description can be found in getRtopParams. The changes below mean that the functions will use geostatistical distance instead of full regularization, and that the variogram model will be fitted to the variogram cloud. Most other functions in the rtop-package can take this object as an argument, and will add the results as one or more new element(s) to this object.
The data in the example below are stored as shape-files in the extdata-directory of the rtop-package; use the directory of your own data instead. The observations consist of mean summer runoff from 138 catchments in Upper Austria. The predictionLocations are 863 catchments in the same region. observations and predictionLocations can either be stored as SpatialPolygonsDataFrame-objects, or as sf-polygons.
rpath = system.file("extdata",package="rtop")
if (require("rgdal")) {
observations = readOGR(rpath,"observations")
predictionLocations = readOGR(rpath,"predictionLocations")
} else {
library(sf)
observations = st_read(rpath, "observations")
predictionLocations = st_read(rpath,"predictionLocations")
}
# Create a column with the specific runoff:
observations$obs = observations$QSUMMER_OB/observations$AREASQKM
params = list(gDist = TRUE, cloud = TRUE)
rtopObj = createRtopObject(observations, predictionLocations,
params = params)
There are help-methods available in cases when data are not available as shape-files, or when the
observations are not part of the shape-files. See readAreaInfo and readAreas.
A call to rtopVariogram adds the sample variogram to the object, whereas
rtopFitVariogram fits a variogram model. The last function will call rtopVariogram if rtopObj
does not contain a sample variogram.
rtopObj = rtopVariogram(rtopObj)
rtopObj = rtopFitVariogram(rtopObj, maxn = 2000)
The function checkVario is useful to produce some diagnostic plots for the sample variogram and
the fitted variogram model.
checkVario(rtopObj)
The interpolation function (rtopKrige) solves the kriging system based on the computed regularized semivariances. The covariance matrices are created in a separate regularization function
(varMat), and are stored in the rtop-object for easier access if it is necessary to redo parts of the
analysis, as this is the computationally expensive part of the interpolation. Cross-validation can be
called with the argument cv=TRUE, either in params or in the call to rtopKrige.
rtopObj = rtopKrige(rtopObj)
if (is(rtopObj$predictions, "Spatial")) {
spplot(rtopObj$predictions, col.regions = bpy.colors(), c("var1.pred"))
} else {
# the plotting order to get small polygons on top is not automatic with sf,
# but here is a method that works without modifying the predictions
library(dplyr)
library(ggplot2)
# Arrange according to areas attribute in descending order
rtopObj$predictions |> arrange(desc(AREASQKM)) |>
# Make ggplot and set fill color to var1.pred
ggplot(aes(fill = var1.pred)) + geom_sf()
}
rtopObj = rtopKrige(rtopObj, cv = TRUE)
if (is(rtopObj$predictions, "Spatial")) {
spplot(rtopObj$predictions, col.regions = bpy.colors(), c("var1.pred","var1.var"))
} else {
# Here is an alternative method for plotting small polygons on top of the larger ones,
# modifying the predictions
rtopObj$predictions = rtopObj$predictions[order(rtopObj$predictions$AREASQKM, decreasing = TRUE), ]
# It is also possible to change the order of the polygons
ggplot(rtopObj$predictions) + aes(fill = var1.pred) + geom_sf() + scale_fill_distiller(palette = "YlO
}
References
<NAME>. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7:269-281, 1993.
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
checkVario Plot variogram fitted to data with support
Description
The function will create diagnostic plots for analysis of the variograms fitted to sample variograms
of data with support
Usage
## S3 method for class 'rtop'
checkVario(object, acor = 1, log = "xy", cloud = FALSE,
gDist = TRUE, acomp = NULL, curveSmooth = FALSE, params = list(), ...)
## S3 method for class 'rtopVariogramModel'
checkVario(object,
sampleVariogram = NULL, observations = NULL,
areas = NULL, dists = NULL, acomp = NULL,
params = list(), compVars = list(), acor = 1,
log = "xy", legx = NULL, legy = NULL,
plotNugg = TRUE, curveSmooth = FALSE, ...)
Arguments
object either an object of class rtop (see rtop-package), or an object of type rtopVariogramModel
acor unit correction factor in the key, e.g. to see numbers more easily interpretable for large areas. As an example, acor = 0.000001 when the area is given in square meters and should rather be shown as square kilometers. Note that this parameter also changes the value of the nugget to the new unit.
log text variable for log-plots, default to log-log "xy", can otherwise be set to "x",
"y" or ""
cloud logical; whether to look at the cloud variogram instead of the binned variogram
gDist logical; whether to use Ghosh-distance for semivariogram regularization instead of full integration of the semivariogram
sampleVariogram
a sample variogram of the data
observations a set of observations
areas either an array of areas that should be used as examples, or the number of areas per order of magnitude (similar to the parameter amul; see getRtopParams). amul from rtopObj or from the default parameter set will be used if not defined here.
dists either an array of distances that should be used as examples, or the number of distances per order of magnitude (similar to the parameter amul; see getRtopParams). amul from rtopObj or from the default parameter set will be used if not defined here.
acomp either a matrix with the area bins that should be visualized, or a number giving
the number of pairs to show. If a sample variogram is given, the acomp pairs
with highest number of pairs will be used
curveSmooth logical or numerical; describing whether the curves in the last plot should be
smoothed or not. If numeric, it gives the degrees of freedom (df) for the splines
used for smoothing. See also smooth.spline
params list of parameters to modify the default parameters of rtopObj or the default
parameters found from getRtopParams
compVars a list of variograms of gstat-type for comparison, see vgm. The names of the
variograms in the list will be used in the key.
legx x-coordinate of the legend for fine-tuning of position, see x-argument of
legend
legy y-coordinate of the legend for fine-tuning of position, see y-argument of
legend
plotNugg logical; whether the nugget effect should be added to the plot or not
... arguments to lower level functions
Value
The function gives diagnostic plots for the fitted variograms, where the regularized variograms are
shown together with the sample variograms and possibly also user defined variograms. In addition,
if an rtopObject is submitted, the function will also give plots of the relationship between variance
and area size and a scatter plot of the fit of the observed and regularized variogram values. The sizes
of the dots are relative to the number of pairs in each group.
Author(s)
<NAME>
References
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
Examples
library(gstat)
rpath = system.file("extdata",package="rtop")
if (require("rgdal")) {
observations = readOGR(rpath,"observations")
predictionLocations = readOGR(rpath,"predictionLocations")
} else {
library(sf)
observations = st_read(rpath, "observations")
predictionLocations = st_read(rpath,"predictionLocations")
}
# Create a column with the specific runoff:
observations$obs = observations$QSUMMER_OB/observations$AREASQKM
params = list(cloud = TRUE, gDist = TRUE)
rtopObj = createRtopObject(observations, predictionLocations,
params = params)
# Fit a variogram (function also creates it)
rtopObj = rtopFitVariogram(rtopObj, maxn = 2000)
checkVario(rtopObj,
compVars = list(first = vgm(5e-6, "Sph", 30000,5e-8),
second = vgm(2e-6, "Sph", 30000,5e-8)))
rtopObj = checkVario(rtopObj, acor = 0.000001,
acomp = data.frame(acl1 = c(2,2,2,2,3,3,3,4,4),
acl2 = c(2,3,4,5,3,4,5,4,5)))
rtopObj = checkVario(rtopObj, cloud = TRUE, identify = TRUE,
acor = 0.000001)
createRtopObject Create an object for interpolation within the rtop package
Description
This is a help function for creating an object (see rtop-package) to be used for interpolation within the rtop package.
Usage
createRtopObject(observations, predictionLocations,
formulaString, params=list(), overlapObs,
overlapPredObs, ...)
Arguments
observations SpatialPolygonsDataFrame or sf-polygons with observations
predictionLocations
a SpatialPolygons, SpatialPolygonsDataFrame-object or sf-polygons with
prediction locations
formulaString formula that defines the dependent variable as a linear model of independent
variables; suppose the dependent variable has name z, for ordinary and sim-
ple kriging use the formula z~1; for universal kriging, suppose z is linearly
dependent on x and y, use the formula z~x+y. The formulaString defaults to
"value~1" if value is a part of the data set. If not, the first column of the
data set is used. Universal kriging is not yet properly implemented in the rtop-
package, this element is mainly used for defining the dependent variable.
params parameters to modify the default parameters of the rtop-package, set internally
in this function by a call to getRtopParams
overlapObs matrix with observations that overlap each other
overlapPredObs matrix with observations and predictionLocations that overlap each other
... Extra parameters to getRtopParams and the possibility to pass deprecated arguments
Value
An object of class rtop with observations, prediction locations, parameters and possibly other elements useful for interpolation in the rtop-package. Most other externally visible functions in the package will be able to work with this object, and add the results as a new element.
Author(s)
<NAME>
References
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
See Also
getRtopParams
Examples
rpath = system.file("extdata",package="rtop")
# As rgdal is about to be retired:
if (require("rgdal")) {
library(rgdal)
observations = readOGR(rpath,"observations")
predictionLocations = readOGR(rpath,"predictionLocations")
} else {
library(sf)
observations = st_read(rpath, "observations")
predictionLocations = st_read(rpath,"predictionLocations")
}
# Create a column with the specific runoff:
observations$obs = observations$QSUMMER_OB/observations$AREASQKM
# Setting some parameters
params = list(gDist = TRUE, cloud = FALSE)
# Build an object
rtopObj = createRtopObject(observations, predictionLocations,
params = params)
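If the dependent variable is not named value, the formulaString argument can be given explicitly when building the object. A minimal sketch, reusing the objects created above (the column obs comes from the example; the formula itself is only an illustration):
# Build an object with an explicit formula for the dependent variable
rtopObj2 = createRtopObject(observations, predictionLocations,
formulaString = "obs~1", params = params)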
downloadRtopExampleData Download additional example data
Description
Download additional example data from Vienna University of Technology
Usage
downloadRtopExampleData(folder = system.file("extdata",
package="rtop"))
Arguments
folder the folder to which the downloaded data set will be copied
Value
The function will have as a side effect that additional example data is downloaded from Vienna University of Technology. This will for the default case replace the existing example data-set in the rtop package. Alternatively the user can specify a separate directory for the data set.
Author(s)
<NAME>
References
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
Examples
## Not run:
downloadRtopExampleData()
rpath = system.file("extdata",package="rtop")
observations = readOGR(rpath,"observations")
## End(Not run)
gDist calculate geostatistical distances between areas
Description
Calculate geostatistical distances (Ghosh-distances) between areas
Usage
## S3 method for class 'rtop'
gDist(object, params = list(), ...)
## S3 method for class 'SpatialPolygonsDataFrame'
gDist(object, object2 = NULL, ...)
## S3 method for class 'SpatialPolygons'
gDist(object, object2 = NULL, ...)
## S3 method for class 'list'
gDist(object, object2 = NULL, diag = FALSE, debug.level = 0, ...)
Arguments
object object of class SpatialPolygons or SpatialPolygonsDataFrame with boundaries of areas; or list of discretized areas, typically from a call to rtopDisc; or object of class rtop with such boundaries and/or discretized elements (the individual areas)
params a set of parameters, used to modify the default parameters for the rtop package,
set in getRtopParams. The argument params can also be used for the other
methods, through the ...-argument.
object2 an object of same type as object, except for rtop; for calculation of geostatistical distances also between the elements in the two different objects
diag logical; if TRUE only calculate the geostatistical distances between each element and itself, only when the objects are lists of discretized areas and object2 = object or object2 = NULL
debug.level debug.level = 0 will suppress output from the call to varMat, done for calculation
of the geostatistical distances
... other parameters, for gDist.list when calling one of the other methods, or for
varMat, in which the calculations take place
Value
If called with one list of discretized elements, a matrix with the geostatistical distances between the elements within the list. If called with two lists of discretized elements, a matrix with the geostatistical distances between the elements in the two lists. If called with diag = TRUE, the function returns an array of the geostatistical distance within each of the elements in the list.
If called with one SpatialPolygons or SpatialPolygonsDataFrame object, the function returns a list with one matrix with geostatistical distances between the elements of the object. If called with two objects, the list will also contain a matrix of the geostatistical distances between the elements of the two objects, and an array of the geostatistical distances within the elements of the second object.
If called with an rtop-object, the function will return the object, amended with the list above.
Note
The geostatistical distance can be seen as the average distance between points in two elements, or the average distance within points in a single element. The distance measure is also sometimes referred to as Ghosh-distance, from Ghosh (1951) who found analytical expressions for these distances between blocks with regular geometry.
The use of geostatistical distances within rtop is based on an idea from Gottschalk (1993), who suggested to replace the traditional regularization of variograms within block-kriging (as done in the original top-kriging application of Skoien et al (2006)) with covariances of the geostatistical distance. The covariance between two areas can then be found as C(a1,a2) = cov(gd), where gd is the geostatistical distance between the two areas a1 and a2, instead of an integration of the covariance function between the two areas.
rtop is based on semivariograms instead of covariances, and the semivariogram value between the two areas can be found as gamma(a1,a2) = g(gd) - 0.5 * (g(gd1) + g(gd2)), where g is a semivariogram valid for point support, and gd1 and gd2 are the geostatistical distances within each of the two areas.
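A minimal numerical sketch of the relation above, with a hypothetical exponential point-support semivariogram g and made-up geostatistical distances (in practice gd, gd1 and gd2 would come from gDist):
# Hypothetical point-support semivariogram (sill and range are illustrative)
g = function(h, sill = 5e-6, range = 30000) sill * (1 - exp(-h / range))
gd = 25000 # geostatistical distance between areas a1 and a2
gd1 = 4000 # geostatistical distance within a1
gd2 = 7000 # geostatistical distance within a2
# Semivariogram value between the two areas, following the formula above
gamma12 = g(gd) - 0.5 * (g(gd1) + g(gd2))
gamma12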
Author(s)
<NAME>
References
<NAME>. 1951. Random distances within a rectangle and between two rectangles. Bull. Calcutta
Math. Soc., 43, 17-24.
<NAME>. 1993. Correlation and covariance of runoff. Stochastic Hydrology and Hydraulics,
7, 85-101.
<NAME>., <NAME>, and <NAME>. 2006. Top-kriging - geostatistics on stream networks.
Hydrology and Earth System Sciences, 10, 277-287.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
Examples
rpath = system.file("extdata",package="rtop")
if (require("rgdal")) {
observations = readOGR(rpath,"observations")
} else {
library(sf)
observations = st_read(rpath, "observations")
}
gDist = gDist(observations)
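The function can also be called with two objects, as described in the Value section; a usage sketch reusing predictionLocations from the workflow example above (the returned list then also contains the distances between observation and prediction areas):
gDistAll = gDist(observations, predictionLocations)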
getRtopParams Setting parameters for the rtop package
Description
This function sets a range of the parameters for the rtop package, to be included in the object described in rtop-package
Usage
getRtopParams(params,newPar, observations, formulaString, ...)
Arguments
params An existing set of parameters for the interpolation process, of class rtopParams, or a list of parameters for modification of the default parameters
newPar A list of parameters for updating params or for modification of the default
parameters. Possible parameters with their defaults are given below
observations SpatialPolygonsDataFrame with observations, used for setting some of the
default parameters
formulaString formula that defines the dependent variable as a linear model of independent
variables, see e.g. createRtopObject for more details.
... Individual parameters for updating params or for modification of the default
parameters. Possible parameters with their defaults are given below
• model = "Ex1" - variogram model type. Currently the following models are
implemented:
– Exp - Exponential model
– Ex1 - Multiplication of a modified exponential and fractal model, the
same model as used in Skoien et al(2006).
– Gau - Gaussian model
– Ga1 - Multiplication of gaussian and fractal model
– Sph - Spherical model
– Sp1 - Multiplication of spherical and fractal model
– Fra - Fractal model
• parInit - the initial parameters and the limits of the variogram model to be
fitted, given as a matrix with three columns, where the first column is the
lower limit, the second column is the upper limit and the third column are
starting values.
• nugget = FALSE - logical; if TRUE, nugget effect should be estimated
• unc = TRUE - logical; if TRUE, the variance of the observations is given in the column unc
• rresol = 100 - minimum number of discretization points in each area
• hresol = 5 - number of discretization points in one direction for elements in
binned variograms
• cloud = FALSE - logical; if TRUE use the cloud variogram for variogram
fitting
• amul = 1 - defines the number of areal bins within one order of magnitude.
Numbers between 1 and 3 are possible, as this parameter refers to the axp
parameter of axTicks.
• dmul = 3 - defines the number of distance bins within one order of magni-
tude. Numbers between 1 and 3 are possible, as this parameter refers to the
axp parameter of axTicks.
• fit.method = 9 - defines the type of Least Square method for fitting of vari-
ogram. The methods 1-7 correspond to the similar methods in fit.variogram
of gstat.
– 1 - weighted least squares with number of pairs per bin:
err = n * (yobs-ymod)^2
– 2 - weighted least squares difference according to Cressie (1985):
err2 = abs(yobs/ymod-1)
– 6 - ordinary least squares difference: err = (yobs-ymod)^2
– 7 - similar to the default of gstat, where higher weights are given to shorter distances: err = n/h^2 * (yobs-ymod)^2
– 8 - Opposite of weighted least squares difference according to Cressie
(1985): err3=abs(ymod/yobs-1)
– 9 - neutral WLS-method - err = min(err2,err3)
• gDistEst = FALSE - use geostatistical distance when fitting variograms
• gDistPred = FALSE - use geostatistical distance for semivariogram matrices
• gDist - parameter to set jointly gDistEst = gDistPred = gDist
• nmax = 10 - for local kriging: the number of nearest observations that should
be used for a kriging prediction or simulation, where nearest is defined in
terms of the space of the spatial locations. By default, 10 observations are
used.
• maxdist = Inf - for local kriging: only observations within a distance of
maxdist from the prediction location are used for prediction or simulation;
if combined with nmax, both criteria apply
• hstype = "regular" - sampling type for binned variograms
• rstype = "rtop" - sampling type for the elements, see also rtopDisc
• nclus = 1 - number of CPUs to use if parallel processing is wanted; nclus =
1 means no parallelization
• cnAreas = 100 - limit whether parallel processing should be applied; the
minimum number of areas in varMat, and also controlling when to use
parallel processing in rtopDisc, when
nAreas*params$rresol/100 > cnAreas
• clusType = NULL - the cluster type to be started for parallel processing; uses the default type of the system when clusType = NULL
• outfile = NULL - file where output can be printed during parallel execution
• varClean = FALSE - logical; if TRUE it will remove highly correlated areas from the covariance matrix during simulation
• wlim = 1.5 - an upper limit for the norm of the weights in kriging, see
rtopKrige
• wlimMethod = "all"which method to use for reducing the norm of the
weights if necessary. Either "all", which modifies all weights equally or
"neg" which reduces negative weights and large weights more than the
smallest weights
• singularSolve - logical; When TRUE, the kriging function will attempt
to solve singular kriging matrices by removing catchments that have the
same correlations. This will usually happen when two catchments are al-
most overlapping, and they are discretized with the same points. See also
rtopKrige.
• cv = FALSE - logical; for cross-validation of observations
• debug.level = 1 - used in some functions for giving additional output. See
individual functions for more information.
• partialOverlap = FALSE - whether to work with partially overlapping areas
• olim = 1e-4 - smallest overlapping area to be used for partial overlap, relative to the smallest of the areas
• nclus = 1 - option to use parallel processing; nclus > 1 defines the number of workers to be started
• clusType = NA - which cluster type to start if nclus > 1; the default is used if clusType = NA
• cnAreas = 200 - the minimum number of observations or observations plus predictions allowing parallelization in the creation of the covariance matrix
• cDlim = 1e6 - the minimum number of discretization points for allowing parallelization in the discretization process
• observations - used for initial values of parameters if supplied
• formulaString - used for initial values of parameters if supplied
Value
A list of the parameters with class rtopParams to be included in the object described in rtop-
package
Note
This function will mainly be called by createRtopObject, but can also be called by the user to
create a parameter set or update an existing parameter set. If none of the arguments is a list of
class rtopParams, the function will assume that the argument(s) are modifications to the default set
of parameters. The function can also be called by other functions in the rtop-package if the user chooses not to work with an object of class rtop.
If the function is called with two lists of parameters (but the first one is not of class rtopParams)
they are both seen as modifications to the default parameter set. If they share some parameters, the
parameter values from the second list will be applied.
Parallel processing has been included for some of the functions. The default is no parallel processing,
and the package also attempts to decide whether it is sensible to start a set of clusters and distribute
jobs to them based on the size of the job. The default limit might not be the best for every system.
Author(s)
<NAME>
References
Cressie, N. 1985. Fitting variogram models by weighted least squares. Mathematical Geology, 17
(5), 563-586
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
See Also
createRtopObject and rtop-package
Examples
# Create a new set of rtop parameters, with default values:
params = getRtopParams()
# Make modifications to the default list of parameters
params = getRtopParams(newPar = list(gDist = TRUE, nugget = FALSE))
# Make modifications to an existing list of parameters
params = getRtopParams(params = params, newPar = list(gDist = TRUE,
nugget = FALSE))
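As a sketch of the neutral WLS objective used by fit.method = 9, with made-up sample (yobs) and regularized model (ymod) semivariances that are not tied to any data set:
yobs = c(2.1e-6, 3.5e-6, 4.0e-6)  # hypothetical sample semivariances
ymod = c(2.5e-6, 3.2e-6, 4.4e-6)  # hypothetical regularized model values
err2 = abs(yobs / ymod - 1)       # method 2: Cressie (1985) relative difference
err3 = abs(ymod / yobs - 1)       # method 8: the opposite relative difference
err  = pmin(err2, err3)           # method 9: the neutral WLS error
sum(err)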
plot.rtopVariogramCloud Plot and Identify Data Pairs on Sample Variogram Cloud
Description
Plot a sample variogram cloud, possibly with identification of individual point pairs
Usage
## S3 method for class 'rtopVariogramCloud'
plot(x, ...)
Arguments
x object of class rtopVariogramCloud
... parameters that are passed through to plot.variogramCloud. The most important are:
• identify logical; if TRUE, the plot allows identification of a series of indi-
vidual point pairs that correspond to individual variogram cloud points (use
left mouse button to select; right mouse button ends)
• digitize logical; if TRUE, select point pairs by digitizing a region with the
mouse (left mouse button adds a point, right mouse button ends)
• xlim limits of x-axis
• ylim limits of y-axis
• xlab x axis label
• ylab y axis label
• keep logical; if TRUE and identify is TRUE, the labels identified and their
position are kept and glued to object x, which is returned. Subsequent calls
to plot this object will now have the labels shown, e.g. to plot to hardcopy
Note
This function is mainly a wrapper around plot.variogramCloud, necessary because of different
column names and different class names. The description of arguments and value can therefore be
found in the help page of plot.variogramCloud.
Author(s)
<NAME>
References
http://www.gstat.org/
See Also
plot.gstatVariogram
Examples
rpath = system.file("extdata",package="rtop")
if (require("rgdal")) {
observations = readOGR(rpath,"observations")
} else {
library(sf)
observations = st_read(rpath, "observations")
}
observations$obs = observations$QSUMMER_OB/observations$AREASQKM
# Create the sample variogram
rtopVario = rtopVariogram(observations, params = list(cloud = TRUE))
plot(rtopVario)
readAreaInfo create SpatialPointsDataFrame with observations of data with a spatial support
Description
readAreaInfo will read a text file with observations and descriptions of data with a spatial support.
Usage
readAreaInfo(fname = "ainfo.txt", id = "id",
iobs = "iobs", obs = "obs", unc = "unc",
filenames= "filenames", sep = "\t",
debug.level = 1, moreCols = list(NULL))
Arguments
fname name of file with areal information
id name of column with observation id
iobs name of column with number of observations
obs name of column with observations
unc name of column with possible uncertainty of observation
filenames name of column with filenames of areas if different names than id should be
used.
sep separator in csv-file
debug.level used for giving additional output
moreCols names of other columns the user wants included in ainfo
Details
The function is of particular use when data are not available as shape-files, or when the observations
are not part of the shape-files. This function is mainly for compatibility with the former FORTRAN-
version. The simplest way to read the data in that case is through readShapePoly in the maptools-
package or readOGR in the rgdal-package. See also rtop-package.
Value
SpatialPointsDataFrame with information about observations and/or predictionLocations.
Author(s)
<NAME>
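A minimal usage sketch; the file name and column names below are hypothetical and not part of the example data shipped with rtop:
# Read a tab-separated text file with one row per observation
ainfo = readAreaInfo(fname = "ainfo.txt", id = "id", obs = "obs", sep = "\t")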
readAreas help file for creating SpatialPolygonsDataFrame with observations and/or predictionLocations of data with a spatial support
Description
readAreas will read area-files, add observations and convert the result to
SpatialPolygonsDataFrame
Usage
readAreas(object, adir=".",ftype = "xy",projection = NA, ...)
Arguments
object either name of file with areal information or SpatialPointsDataFrame with
observations
adir directory where the files with areal information are to be found
ftype type of file, the only type supported currently is "xy", referring to x- and y-
coordinates of boundaries
projection add projection to the object if input is boundary-files
... further parameters to be passed to readAreaInfo
Details
If object is a file name, readAreaInfo will be called. If it is a
SpatialPointsDataFrame with observations and/or predictionLocations, the function will read
areal data from files according to the ID associated with each observation/predictionLocation.
The function is of particular use when data are not available as shape-files, or when the observations
are not part of the shape-files. This function is mainly for compatibility with the former FORTRAN-
version. The simplest way to read the data in that case is through readShapePoly in the maptools-
package or readOGR in the rgdal-package. See also rtop-package.
Value
The function creates a SpatialPolygonsDataFrame of observations and/or predictionLocations,
depending on the information given in object.
Author(s)
<NAME>
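Continuing the hypothetical sketch from readAreaInfo above, the boundaries can then be read from xy-files named after the observation ids; the directory "areas" is an assumption, not part of the shipped example data:
# Convert the point information plus boundary files to a SpatialPolygonsDataFrame
areas = readAreas(ainfo, adir = "areas", ftype = "xy")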
rtopCluster start, access, stop or restart a cluster for parallel computation with rtop
Description
Convenience function for using parallel computation with rtop. The function is usually not called
by the user.
Usage
rtopCluster(nclus, ..., action = "start", type, outfile = NULL)
Arguments
nclus The number of workers in the cluster
... Arguments for clusterEvalQ; commands to be evaluated for each worker, such
as library-calls
action Defines the action of the function. There are three options:
• "start"Starts a new cluster if necessary, reuses an existing if it has already
been started
• "restart"Stops the cluster and starts it again. To be used in case there are
difficulties with the cluster, or if the user wants to change the type of the
cluster
type The type of cluster; see makeCluster for more details. The default of makeCluster is used if type is missing or NA.
outfile File to direct the output to; see makeCluster for more details.
Details
It is usually not necessary for the user to call this function for starting or accessing a cluster. This
is done automatically by the different rtop-functions when needed if the parameter nclus is larger
than one (see getRtopParams). If the user actually starts the cluster by a call to this function, it will
also be necessary to set the nclus parameter to a value larger than one for the cluster to be used by
different functions.
Restarting the cluster might be necessary if the cluster has a problem (e.g. does not return memory)
or if the user wants to change to a different cluster type.
Stopping the cluster is useful when the user does not want to continue with parallel computation
and wants to close down the workers.
Value
If the function is called with action equal to "start" or "restart", the result is a cluster with nclus
workers. The cluster is also added to the global options with the name rtopCluster
(getOption("rtopCluster")).
If the function is called with action equal to "stop", the function stops the cluster, sets the rtopCluster
of options to NULL and returns NULL to the user.
Author(s)
<NAME>
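A minimal usage sketch, assuming the parallel package is available; the worker count and the library call evaluated on each worker are only examples:
# Start (or reuse) a cluster with four workers and load rtop on each of them
cl = rtopCluster(4, library(rtop), action = "start")
# ... call rtop functions with params = list(nclus = 4) to use the workers ...
# Shut the workers down when parallel computation is no longer needed
rtopCluster(4, action = "stop")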
rtopDisc Discretize areas
Description
rtopDisc will discretize an area for regularization or calculation of Ghosh-distance
Usage
## S3 method for class 'rtop'
rtopDisc(object, params = list(),...)
## S3 method for class 'SpatialPolygonsDataFrame'
rtopDisc(object, params = list(), bb = bbox(object), ...)
## S3 method for class 'SpatialPolygons'
rtopDisc(object, params = list(), bb = bbox(object), ...)
## S3 method for class 'rtopVariogram'
rtopDisc(object, params = list(), ...)
Arguments
object object of class SpatialPolygons or SpatialPolygonsDataFrame or rtopVariogram,
or an object with class rtop that includes one of the above
bb boundary box, usually modified to be the common boundary box for two spatial objects
params possibility to pass parameters to modify the default parameters for the rtop
package, set in getRtopParams. Typical parameters to modify for this function
are:
• rresol = 100; minimum number of discretization points in areas
• hresol = 5; number of discretization points in one direction for areas in
binned variograms
• hstype = "regular"; sampling type for binned variograms
• rstype = "rtop"; sampling type for real areas
... Possibility to pass individual parameters
Details
There are different options for discretizing the objects. When the areas from the bins are discretized, the options are random or regular sampling; regular sampling is the default.
For the real areas, regular sampling appears to have computational advantages compared with random sampling. In addition to the traditional regular sampling, rtop also offers a third type of sampling which assures that the same discretization points are used for overlapping areas.
Starting with a coarse grid covering the region of interest, this will for a certain support be refined till a requested minimum number of points from the grid is within the support. In this way, for areal supports, the number of points in the area with the largest number of points will be approximately four times the requested minimum number of points. This method also assures that points used to discretize a large support will be reused when discretizing smaller supports within the large one, e.g. subcatchments within larger catchments.
Value
The function returns a list of discretized areas, or if called with an rtop-object as argument, the
object with lists of discretizations of the observations and prediction locations (if part of the object).
If the function is called with an rtopVariogram (usually this is an internal call), the list contains
discretized pairs of hypothetical objects from each bin of the semivariogram with a centre-to-centre
distance equal to the average distance between the objects in a certain bin.
Author(s)
<NAME>
References
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
See Also
rtopVariogram
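A minimal usage sketch, reusing the rtopObj built in the earlier examples; the rresol value is only an illustration:
# Discretize the observation and prediction areas stored in the rtop object,
# requesting at least 50 discretization points per area
rtopObj = rtopDisc(rtopObj, params = list(rresol = 50))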
rtopFitVariogram Fit variogram model to sample variogram of data with spatial support
Description
rtopFitVariogram will fit a variogram model to the estimated binned variogram or cloud variogram
of data with an areal support.
Usage
## S3 method for class 'rtop'
rtopFitVariogram(object, params = list(), ...)
## S3 method for class 'SpatialPolygonsDataFrame'
rtopFitVariogram(object, params=list(), ...)
## S3 method for class 'SpatialPointsDataFrame'
rtopFitVariogram(object, params=list(), ...)
## S3 method for class 'rtopVariogram'
rtopFitVariogram(object, observations, dists = NULL,
params=list(), mr = FALSE, aOver = NULL, iprint = 0, ...)
## S3 method for class 'rtopVariogramCloud'
rtopFitVariogram(object, observations, dists = NULL,
aOver = NULL, params=list(), mr = FALSE, iprint = 0, ...)
Arguments
object object of class rtopVariogram or rtopVariogramCloud, or an object with class
rtop that includes the sample variograms.
The object can also be of class SpatialPolygonsDataFrame or
SpatialPointsDataFrame with observations. If object is a
SpatialPointsDataFrame, it must have a column with name area.
observations the observations, passed as a Spatial*DataFrame object, if object is an
rtopVariogram or rtopVariogramCloud
params a set of parameters, used to modify the default parameters for the rtop package,
set in getRtopParams. The argument params can also be used for the other
methods, through the ...-argument.
dists either a matrix with geostatistical distances (created by a call to the function gDist) or a list with the areas discretized (from a call to rtopDisc)
mr logical; defining whether the function should return a list with discretized elements and geostatistical distances, even if it was not called with an rtop-object as argument.
aOver a matrix with the overlapping areas of the observations, used for computation of
the nugget effect. It will normally be recomputed by the function if it is NULL
and necessary
iprint print flag that is passed to sceua
... Other parameters to functions called from rtopFitVariogram
Value
The function creates an object with the fitted variogram model (variogramModel) and a
data.frame (varFit) with the differences between the sample semivariances and the regularized
semivariances. If mr = TRUE, the function also returns other objects (discretized elements and
geostatistical distances, if created) as a part of the returned object. If the function is called with an
rtop-object as argument, it will return an rtop-object with variogramModel and varFit added to
the object, in addition to other objects created.
Note
There are several options for fitting of the variogramModel, where the parameters can be set in
params, which is a list of parameters for modification of the default parameters of the rtop-package
given in a call to getRtopParams. The first choice is between individual fitting and binned fitting.
This is based on the type of variogram submitted, individual fitting is done if a cloud variogram
(of class rtopVariogramCloud) is passed as argument, and binned fitting if the submitted vari-
ogram is of class rtopVariogram. If the function is called with an object of class rtop, having
both variogram and variogramCloud among its arguments, the variogram model is fitted to the
variogram which is consistent with the parameter cloud.
Author(s)
<NAME>
References
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>. and <NAME>. Spatio-Temporal Top-Kriging of Runoff Time Series. Water Resources Research 43:W09419, 2007.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
Examples
rpath = system.file("extdata",package="rtop")
if (require("rgdal")) {
observations = readOGR(rpath,"observations")
predictionLocations = readOGR(rpath,"predictionLocations")
} else {
library(sf)
observations = st_read(rpath, "observations")
predictionLocations = st_read(rpath,"predictionLocations")
}
# Create a column with the specific runoff:
observations$obs = observations$QSUMMER_OB/observations$AREASQKM
# Setting some parameters
params = list(gDist = TRUE, cloud = FALSE)
# Build an object
rtopObj = createRtopObject(observations,predictionLocations,
params = params)
# Fit a variogram (function also creates it)
rtopObj = rtopFitVariogram(rtopObj)
rtopObj$variogramModel
rtopKrige Spatial interpolation of data with spatial support
Description
rtopKrige performs spatial interpolation or cross validation of data with areal support.
Usage
## S3 method for class 'rtop'
rtopKrige(object, varMatUpdate = FALSE, params = list(), ...)
## S3 method for class 'SpatialPolygonsDataFrame'
rtopKrige(object, predictionLocations = NULL,
varMatObs, varMatPredObs, varMat, params = list(),
formulaString, sel, ...)
## S3 method for class 'STSDF'
rtopKrige(object, predictionLocations = NULL,
varMatObs, varMatPredObs, varMat, params = list(),
formulaString, sel, olags = NULL, plags = NULL,
lagExact = TRUE, ...)
## Default S3 method:
rtopKrige(object, predictionLocations = NULL,
varMatObs, varMatPredObs, varMat, params = list(),
formulaString, sel, wret = FALSE, ...)
Arguments
object object of class rtop or SpatialPolygonsDataFrame or STSDF
varMatUpdate logical; if TRUE, also existing variance matrices will be recomputed, if FALSE,
only missing variance matrices will be computed, see also varMat
predictionLocations
SpatialPolygons or SpatialPolygonsDataFrame or STSDF with prediction
locations. NULL if cross validation is to be performed.
varMatObs covariance matrix of observations, where diagonal must consist of internal vari-
ance, typically generated from call to varMat
varMatPredObs covariance matrix between observation locations and prediction locations, typi-
cally generated from call to varMat
varMat list covariance matrices including the two above
params a set of parameters, used to modify the default parameters for the rtop package, set in getRtopParams. Additionally, it is possible to overrule some of the parameters in object$params by passing them as separate arguments.
formulaString formula that defines the dependent variable as a linear model of independent
variables, see e.g. createRtopObject for more details.
sel array of prediction location numbers, if only a limited number of locations are
to be interpolated/crossvalidated
wret logical; if TRUE, return a matrix of weights instead of the predictions, useful
for batch processing of time series, see also details
olags A vector describing the relative lag which should be applied for the observation
locations. See also details
plags A vector describing the relative lag which should be applied for the predictionLocations. See also details
lagExact logical; whether differences in lag time should be computed exactly or approximately
... from rtopKrige.rtop, arguments to be passed to rtopKrige.default. In
rtopKrige.default, parameters for modification of the object parameters or
default parameters. Of particular interest are cv, a logical for doing cross-
validation, nmax, and maxdist for maximum number of neighbours and maximum distance to neighbours, respectively, and wlim, the limit for the absolute
values of the weights. It can also be useful to set singularSolve if some of the
areas are almost similar, see also details below.
Details
This function is the interpolation routine of the rtop-package. The simplest way of calling the
function is with an rtop-object that contains the fitted variogram model and all the other necessary
data (see createRtopObject or rtop-package).
The function will, if called with covariance matrices between observations and between observations and prediction locations, use these for the interpolation. If the function is called without these
matrices, varMat will be called to create them. These matrices can therefore be reused if necessary,
an advantage as it is computationally expensive to create them.
The interpolation that takes place within rtopKrige.default is based on the semivariance matrices
between observations and between observations and prediction locations. It is therefore possible to
use this function also to interpolate data where the matrices have been created in other ways, e.g.
based on distances in physiographical space or distances along a stream.
The function returns the weights rather than the predictions if wret = TRUE. This is useful for batch
processing of time series, e.g. once the weights are created, they can be used to compute the
interpolated values for each time step.
rtop is able to take some advantage of multiple CPUs, which can be invoked with the parameter
nclus. When it gets a number larger than one, rtopKrige will start a cluster with nclus workers,
if the parallel-package has been installed.
The parameter singularSolve can be used when some areas are almost completely overlapping. In
this case, the discretization of them might be equal, and the covariances to other areas will also be
equal. The kriging matrix will in this case be singular. When singularSolve = TRUE, rtopKrige
will remove one of the neighbours, and instead work with the mean of the two observations. An
overview of removed neighbours can be seen in the resulting object, under the name removed.
Kriging of time series is possible when observations and predictionLocations are spatiotemporal objects of type STSDF. The interpolation is still spatial, in the sense that the regularisation of the variograms is done using only the spatial extent of the observations, not a possible temporal extent, such as done by Skoien and Bloschl (2007). However, it is possible to make predictions
based on observations from different time steps, through the use of the lag-vectors. These vectors
describe a typical "delay" for each observation and prediction location. This delay could for runoff
related variables be similar to travel time to each gauging location. For a certain prediction location,
earlier time steps would be picked for neighbours with shorter travel time and later time steps for
neighbours with slower travel times.
The lagExact parameter indicates whether to use a weighted average of two time steps, or just the
time step which is closest to the difference in lag times.
The use of lag times should in theory increase the computation time, but might, due to different
computation methods, even speed up the computation when the number of neighbours to be used
(parameter nmax) is small compared to the number of observations. If computation is slow, it can
be useful to test olags = rep(0, dim(observations)[1]) and similar for predictionLocations.
Value
If called with SpatialPolygonsDataFrame, the function returns a
SpatialPolygonsDataFrame with predictions, either at the locations defined in
predictionLocations, or as leave-one-out cross-validation predictions at the same locations as in
object if cv = TRUE
If called with an rtop-object, the function returns the same object with the predictions added to the
object.
Author(s)
<NAME>
References
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>. and <NAME>. Spatio-Temporal Top-Kriging of Runoff Time Series. Water Resources Research 43:W09419, 2007.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
Examples
# The following command will download the complete example data set
# downloadRtopExampleData()
# observations$obs = observations$QSUMMER_OB/observations$AREASQKM
rpath = system.file("extdata",package="rtop")
if (require("rgdal")) {
observations = readOGR(rpath,"observations")
predictionLocations = readOGR(rpath,"predictionLocations")
} else {
library(sf)
observations = st_read(rpath, "observations")
predictionLocations = st_read(rpath,"predictionLocations")
}
# Setting some parameters; nclus > 1 will start a cluster with nclus
# workers for parallel processing
params = list(gDist = TRUE, cloud = FALSE, nclus = 1, rresol = 25)
# Create a column with the specific runoff:
observations$obs = observations$QSUMMER_OB/observations$AREASQKM
# Build an object
rtopObj = createRtopObject(observations, predictionLocations,
params = params)
# Fit a variogram (function also creates it)
rtopObj = rtopFitVariogram(rtopObj)
# Predicting at prediction locations
rtopObj = rtopKrige(rtopObj)
# Cross-validation
rtopObj = rtopKrige(rtopObj,cv=TRUE)
cor(rtopObj$predictions$observed,rtopObj$predictions$var1.pred)
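The neighbourhood parameters mentioned above can be passed directly in the call; a sketch with illustrative values only, assuming the coordinates are in metres:
# Local kriging: use at most the 10 nearest observations within 50 km,
# and cap the norm of the kriging weights at 2
rtopObj = rtopKrige(rtopObj, nmax = 10, maxdist = 50000, wlim = 2)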
rtopSim Spatial simulation of data with spatial support
Description
rtopSim will conditionally or unconditionally simulate data with areal support. This function should be seen as experimental; some issues are described below.
Usage
## S3 method for class 'rtop'
rtopSim(object, varMatUpdate = FALSE, beta = NA,
largeFirst = TRUE, replace = FALSE, params = list(),
dump = NULL, debug.level, ...)
## Default S3 method:
rtopSim(object = NULL, predictionLocations,
varMatObs, varMatPredObs, varMatPred, variogramModel, ...)
Arguments
object object of class rtop or SpatialPolygonsDataFrame or sf (st_sf) or NULL
varMatUpdate logical; if TRUE, also existing variance matrices will be recomputed, if FALSE,
only missing variance matrices will be computed, see also varMat
beta The expected mean of the data, for unconditional simulations
largeFirst Although the simulation method follows a random path around the predictionLocations, simulating the largest area first will assure that the true mean of the simulated values will be closer to beta
replace logical; if observation locations are also present as predictions, should they be replaced? This is particularly useful when doing conditional simulations for a set of observations with uncertainty.
params a set of parameters, used to modify the standard parameters for the rtop pack-
age, set in getRtopParams. The argument params can also be used for the other
methods, through the ...-argument.
dump file name for saving the updated object, after adding variance matrices. Useful if
there are problems with the simulation, particularly if it for some reason crashes.
debug.level logical; controls some output; will override the object parameters
predictionLocations
SpatialPolygons or SpatialPolygonsDataFrame or sf-object with locations
for simulations.
varMatObs covariance matrix of possible observations, where diagonal must consist of internal variance, typically generated from call to varMat
varMatPredObs covariance matrix between possible observation locations and simulation locations, typically generated from call to varMat
varMatPred covariance matrix between simulation locations, typically generated from a call
to varMat
variogramModel a variogram model of type rtopVariogramModel
... possible modification of the object parameters or default parameters.
Details
This function can do conditional or unconditional simulation for areas. The simplest way of calling the function is with an rtop-object that contains the fitted variogram model and all the other necessary data (see createRtopObject or rtop-package). rtopSim is the only function in rtop which does not need observations. However, a variogram model is still necessary to perform simulations.
The arguments beta and largeFirst are only used for unconditional simulations.
The function is still in an experimental stage, and might change in the future. There are some issues
with the current implementation:
• Numerical issues can in some cases give negative estimation variances, which will result in
an invalid distribution for the simulation. This will result in simulated NA values for these
locations.
• The variability of simulated values for small areas (such as small headwater catchments) will
be relatively high based on the statistical uncertainty. This could be overestimated compared
to the uncertainty which is possible based on rainfall.
Value
If called with SpatialPolygons or sf as predictionLocations and either
SpatialPolygonsDataFrame, sf or NULL for observations, the function returns a
SpatialPolygonsDataFrame or sf with simulations at the locations defined in
predictionLocations
If called with an rtop-object, the function returns the same object with the simulations added to the
object.
Author(s)
<NAME>
References
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
Examples
# The following command will download the complete example data set
# downloadRtopExampleData()
rpath = system.file("extdata",package="rtop")
if (require("rgdal") & FALSE) { # Preparing for retirement of rgdal
observations = readOGR(rpath,"observations")
predictionLocations = readOGR(rpath,"predictionLocations")
} else {
library(sf)
observations = st_read(rpath, "observations")
predictionLocations = st_read(rpath,"predictionLocations")
}
# Setting some parameters; nclus > 1 will start a cluster with nclus
# workers for parallel processing
params = list(gDist = TRUE, cloud = FALSE, nclus = 1, rresol = 25)
# Create a column with the specific runoff:
observations$obs = observations$QSUMMER_OB/observations$AREASQKM
# Build an object
rtopObj = createRtopObject(observations, predictionLocations,
params = params, formulaString = "obs~1")
# Fit a variogram (function also creates it)
rtopObj = rtopFitVariogram(rtopObj)
# Conditional simulations for two new locations
rtopObj10 = rtopSim(rtopObj, nsim = 5)
rtopObj11 = rtopObj
# Unconditional simulation at the observation locations
# (These are moved to the predictionLocations)
rtopObj11$predictionLocations = rtopObj11$observations
rtopObj11$observations = NULL
# Setting varMatUpdate to TRUE, to make sure that covariance
# matrices are recomputed
rtopObj12 = rtopSim(rtopObj11, nsim = 10, beta = 0.01,
varMatUpdate = TRUE)
summary(data.frame(rtopObj10$simulations))
summary(data.frame(rtopObj12$simulations))
rtopVariogram create variogram for data with spatial support
Description
rtopVariogram will create a binned variogram or cloud variogram of data with an areal support.
Usage
## S3 method for class 'rtop'
rtopVariogram(object, params = list(), ...)
## S3 method for class 'SpatialPolygonsDataFrame'
rtopVariogram(object, ...)
## S3 method for class 'SpatialPointsDataFrame'
rtopVariogram(object, formulaString, params=list(), cloud,
abins, dbins, ...)
## S3 method for class 'STSDF'
rtopVariogram(object, formulaString, params=list(), cloud,
abins, dbins, data.table = FALSE, ...)
Arguments
object object of class rtop (see rtop-package) or a
SpatialPolygonsDataFrame or SpatialPointsDataFrame with information
about observations. If
object is a
SpatialPointsDataFrame, it must have a column with name area.
formulaString formula that defines the dependent variable as a linear model of independent
variables; suppose the dependent variable has name z, for ordinary and sim-
ple kriging use the formula z~1; for universal kriging, suppose z is linearly
dependent on x and y, use the formula z~x+y. The formulaString defaults to
"value~1" if value is a part of the data set. If not, the first column of the data
set is used.
params a set of parameters, used to modify the default parameters for the rtop package,
set in getRtopParams.
cloud logical; if TRUE, calculate the semivariogram cloud. Can be used to overrule the
cloud parameter in params.
abins possibility to set areal bins (not yet implemented)
dbins possibility to set distance bins (not yet implemented)
data.table an option to use data.table internally for the variogram computation for STSDF-
objects
... parameters to other functions called, e.g. gstat’s variogram-function and to
rtopVariogram.SpatialPointsDataFrame when the method is called with an
object of a different class
Value
The function creates a variogram, either of type rtopVariogram or rtopVariogramCloud. This
variogram is based on the variogram function from gstat, but has additional information about the
spatial size or length of the observations. An rtop-object with the variogram added is returned if the
function is called with an rtop-object as argument.
For spatio-temporal objects (STSDF), the variogram is the spatial variogram, averaged over all time
steps. It is possible to use data.table internally in this function, which can improve computation
time in some cases.
Note
The variogram cloud is similar to the variogram cloud from gstat, with the area/length added to
the resulting data.frame. The binned variogram is also based on the area or length, in addition to
the distance between observations. The bins are equally spaced in the log10-space of the distances
and areas (lengths). The size of the bins is decided from the parameters amul and dmul, defining
the number of bins per order of magnitude (1:10, 10:100, and so on).
The distances between areas are in this function based on the centre of gravity.
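As a hedged illustration of this binning scheme (not the package's internal code), log10-equidistant bin edges with, say, three distance bins per order of magnitude could be constructed as follows; amul would work analogously for the areas:
dmul = 3 # illustrative number of distance bins per order of magnitude
edges = 10^seq(log10(100), log10(100000), by = 1/dmul) # bin edges between 100 and 100000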
Author(s)
<NAME>
References
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
Examples
## Not run:
library(rgdal)
rpath = system.file("extdata",package="rtop")
observations = readOGR(rpath,"observations")
# Create a column with the specific runoff:
observations$obs = observations$QSUMMER_OB/observations$AREASQKM
vario = rtopVariogram(observations, cloud = TRUE)
## End(Not run)
sceua Optimisation with the Shuffle Complex Evolution method
Description
Calibration function which searches a parameter set which is minimizing the value of an objective
function
Usage
sceua(OFUN, pars, lower, upper, maxn = 10000, kstop = 5, pcento = 0.01,
ngs = 5, npg = 2 * length(pars) + 1, nps = length(pars) + 1,
nspl = 2 * length(pars) + 1, mings = ngs, iniflg = 1, iprint = 0, iround = 3,
peps = 0.0001, plog = rep(FALSE,length(pars)), implicit = NULL, timeout = NULL, ...)
Arguments
OFUN A function to be minimized, with first argument the vector of parameters over
which minimization is to take place. It should return a scalar result as an indica-
tor of the error for a certain parameter set
pars a vector with the initial guess of the parameters
lower the lower boundary for the parameters
upper the upper boundary for the parameters
maxn the maximum number of function evaluations
kstop number of shuffling loops in which the criterion value must change by the given
percentage before optimization is terminated
pcento percentage by which the criterion value must change in given number (kstop) of
shuffling loops to continue optimization
ngs number of complexes in the initial population
npg number of points in each complex
nps number of points in a sub-complex
nspl number of evolution steps allowed for each complex before complex shuffling
mings minimum number of complexes required, if the number of complexes is allowed
to reduce as the optimization proceeds
iniflg flag on whether to include the initial point in the population. iniflg = 0: not included.
iniflg = 1: included
iprint flag for controlling print-out after each shuffling loop. iprint < 0: no output.
iprint = 0: print information on the best point of the population. iprint > 0: print
information on every point of the population
iround number of significant digits in print-out
peps convergence level for parameter set (lower number means smaller difference
between parameters of the population required for stop)
plog whether optimization should be done in log10-domain. Either a single TRUE
value for all parameters, or a vector with TRUE/FALSE for the different param-
eters
implicit function for implicit boundaries for the parameters (e.g. sum(pars[4]+pars[5]) <
1). See below for details
timeout if different from NULL: maximum time in seconds for execution before the
optimization returns with the parameters so far.
... arguments for the objective function, must be named
Details
sceua is an R-implementation of the Shuffle Complex Evolution - University of Arizona (Duan et
al., 1992), a global optimization method which "combines the strengths of the simplex procedure of
Nelder and Mead (1965) with the concepts of controlled random search (Price, 1987), competitive
evolution (Holland, 1975)" with the concept of complex shuffling, developed by Duan et al. (1992).
This implementation follows the Fortran implementation relatively closely, but adds the possibility
of searching in log-space for one or more of the parameters, and it uses the capability of R to pass
functions as arguments, making it possible to pass implicit conditions to the parameter selection.
The objective function OFUN is a function which should give an error value for each parameter set. It
should never return non-numeric values such as NA, NULL, or Inf. If some parameter combinations
can give such values, the return value should rather be a large number.
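A minimal sketch of such a guard, where mymodel is a hypothetical model function supplied by the user:
OFUN = function(pars, obs) {
 err = sum((mymodel(pars) - obs)^2) # hypothetical model evaluation
 if (!is.finite(err)) err = 1e12 # replace NA/NaN/Inf by a large penalty value
 err
}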
The function works with fixed upper and lower boundaries for the parameters. If the possible range
of a parameter might span several orders of magnitude, it might be better to search in log-space for
the optimal parameter, to reduce the risk of being trapped in local optima. This can be set with the
argument plog, which is either a single value (FALSE/TRUE) or a vector for all parameters. plog
= c(TRUE, FALSE, FALSE, TRUE, TRUE) means that the search for parameters 1,4 and 5 should be
in log10-space, whereas the search for parameters 2 and 3 are in normal space.
Implicit boundaries can be invoked by passing a function implicit to sceua. This function should
give 0 when the parameters are acceptable and 1 if not. If, for example, the condition is that the
sum of parameters four and five should be limited:
pars[4] + pars[5] <= 1
then the function will be implicit = function(pars) (pars[4] + pars[5]) > 1
Value
The function returns a list with the following elements
• par - a vector of the best parameters combination
• value - the value of the objective function for this parameter set
• convergence - a list of two values
– funConvergence - the function convergence relative to pcento
– parConvergence - the parameter convergence relative to peps
• counts - the number of function evaluations
• iterations - the number of shuffling loops
• timeout - logical; TRUE if the optimization was aborted because the timeout time was reached,
FALSE otherwise
There are also two elements returned as attributes:
• parset - the entire set of parameters from the last evolution step
• xf - the values of the objective function from the last evolution step
The last two can be accessed as attr(sceuares, "parset") and attr(sceuares, "xf"), if the
result is stored as sceuares.
Author(s)
<NAME>
References
<NAME>., <NAME>., and <NAME>., 1992. Effective and efficient global optimization for
conceptual rainfall-runoff models. Water Resour. Res. 28 (4), 1015-1031.
<NAME>., 1975. Adaptation in natural and artificial systems, University of Michigan Press,
Ann Arbor.
<NAME>. and <NAME>., 1965. A simplex method for function minimization, Comput. J., 7(4),
308-313.
<NAME>., 1987. Global optimization algorithms for a CAD workstation, J. Optim. Theory Appl.,
55(1), 133-146.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
Examples
set.seed(1)
# generate example data from a function with three parameters
# with some random noise
fun = function(x, pars) pars[2]*sin(x*pars[1])+pars[3]
x = rnorm(50, sd = 3)
y = fun(x, pars = c(5, 2, 3)) + rnorm(length(x), sd = 0.3)
plot(x,y)
# Objective function, summing up squared differences
OFUN = function(pars, x, yobs) {
yvals = fun(x, pars)
sum((yvals-yobs)^2)
}
sceuares = sceua(OFUN, pars = c(0.1,0.1,0.1), lower = c(-10,0,-10),
upper = c(10,10,10), x = x, yobs = y)
sceuares
xx = seq(min(x), max(x), 0.1)
lines(xx, fun(xx, pars = sceuares$par))
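As a hedged follow-up sketch, the same problem can be re-run with an illustrative implicit constraint (the bound of 5 on pars[2] + pars[3] is arbitrary):
# Re-run the optimisation with an implicit boundary on the parameters
sceuares2 = sceua(OFUN, pars = c(0.1,0.1,0.1), lower = c(-10,0,-10),
 upper = c(10,10,10), x = x, yobs = y,
 implicit = function(pars) (pars[2] + pars[3]) > 5)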
useRtopWithIntamap Integrates the rtop package with the intamap package
Description
This function makes it possible to use rtop-objects in the functions of the intamap package. It is necessary
to load the intamap-package before calling this function.
Usage
useRtopWithIntamap()
Value
The function will have as side effect that the intamap package is loaded, and that rtop-methods are
registered for the intamap-functions estimateParameters, spatialPredict and methodParameters.
Author(s)
<NAME>
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>.,
<NAME>., <NAME>. INTAMAP: The design and implementation of an interoperable automated
interpolation Web Service. Computers and Geosciences 37 (3), 2011.
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
Examples
library(intamap)
useRtopWithIntamap()
variogramModel create or update variogram model
Description
This gives an easier interface to the parameters of the variogram model
Usage
rtopVariogramModel(model = "Ex1", sill = NULL, range = NULL,
exp = NULL, nugget = NULL, exp0 = NULL,
observations = NULL, formulaString = obs~1)
## S3 method for class 'rtop'
updateRtopVariogram(object, ...)
## S3 method for class 'rtopVariogramModel'
updateRtopVariogram(object, action = "mult", ...,
checkVario = FALSE,
sampleVariogram = NULL, observations = NULL)
Arguments
model variogram model; currently "Ex1" is the only one implemented, see Skoien et al
(2006)
sill sill of variogram
range range of variogram
exp the exponent of the fractal part of the variogram, see Skoien et al (2006)
exp0 gives the angle of the first part of the variogram in a log-log plot (weibull type),
should be between 0 and 2. See Skoien et al (2006)
nugget nugget of point variogram
formulaString formula that defines the dependent variable as a linear model of independent
variables, see e.g. createRtopObject for more details.
object either: object of class rtop (see rtop-package), or an rtopVariogramModel.
action character variable defining whether the new parameters should be add(-ed),
mult(-iplied) or replace the former parameters. Leaving the parameters equal
to NULL will cause no change.
checkVario logical, will issue a call to checkVario if TRUE
sampleVariogram
a sample variogram of the data
observations a set of observations
... parameters to lower level functions
Value
The function helps create and update the parameters of the variogram, using common names
and simple update methods. This is mainly intended for manual fitting of the variogram. The automatic call
to checkVario makes it easier to visualize the effect of the changes to the variogram.
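For example, a model can be created directly with illustrative parameter values (a hedged sketch; the numeric values are arbitrary):
vmod = rtopVariogramModel(model = "Ex1", sill = 0.007, range = 350000,
 exp = 0.9, nugget = 0.00001, exp0 = 1000)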
Author(s)
<NAME>
See Also
rtop-package
Examples
## Not run:
library(rgdal)
rpath = system.file("extdata",package="rtop")
observations = readOGR(rpath,"observations")
# Create a column with the specific runoff:
observations$obs = observations$QSUMMER_OB/observations$AREASQKM
predictionLocations = readOGR(rpath,"predictionLocations")
rtopObj = createRtopObject(observations,predictionLocations)
# Fit a variogram (function also creates it)
rtopObj = rtopFitVariogram(rtopObj)
rtopObj = updateRtopVariogram(rtopObj, exp = 1.5, action = "mult",
checkVario = TRUE)
## End(Not run)
varMat create a semivariogram matrix between a set of locations, or semi-
variogram matrices between and within two sets of locations
Description
varMat will create a semivariogram matrix between all the supports in a set of locations (observa-
tions or prediction locations) or semivariogram matrices between all the supports in one or two sets
of locations, and also between them.
Usage
## S3 method for class 'rtop'
varMat(object, varMatUpdate = FALSE, fullPred = FALSE, params = list(), ...)
## S3 method for class 'SpatialPolygonsDataFrame'
varMat(object, object2 = NULL,...)
## S3 method for class 'SpatialPolygons'
varMat(object, object2 = NULL, variogramModel,
overlapObs, overlapPredObs, ...)
## S3 method for class 'list'
varMat(object, object2 = NULL, coor1, coor2, maxdist = Inf,
variogramModel, diag = FALSE, sub1, sub2,
debug.level = ifelse(interactive(), 1, 0), ...)
Arguments
object either: 1) an object of class rtop (see rtop-package) or 2) a
SpatialPolygonsDataFrame, or SpatialPolygons, or 3) a
matrix with geostatistical distances (see gDist), or 4) a list with discretized
supports
varMatUpdate logical; if TRUE, also existing variance matrices will be recomputed, if FALSE,
only missing variance matrices will be computed
fullPred logical; whether to create the full covariance matrix also for the predictions,
mainly used for simulations
params a set of parameters, used to modify the default parameters for the rtop package,
set in getRtopParams.
object2 if object is not an object of class rtop; an object of the same class as object
with a possible second set of locations with support
variogramModel variogramModel to be used in calculation of the semivariogram matrix (matri-
ces)
... typical parameters to modify from the default parameters of the rtop-package
(or modifications of the previously set parameters for the rtop-object), see also
getRtopParams. These can also be passed in a list named params, as for the
rtop-method. Typical parameters to modify for this function:
• rresol = 100: minimum number of discretization points, in call to rtopDisc
if necessary
• rstype = "rtop": sampling type from areas, in call to rtopDisc if necessary
• gDistPred = FALSE: use geostatistical distance for semivariogram matrices
• gDist: parameter to set jointly gDistEst = gDistPred = gDist
overlapObs matrix with observations that overlap each other
overlapPredObs matrix with observations and predictionLocations that overlap each other
coor1 coordinates of centroids of object
coor2 coordinates of centre-of-gravity of object2
maxdist maximum distance between areas for inclusion in the semivariogram matrix
diag logical; if TRUE only the semivariogram values along the diagonal will be cal-
culated, typical for semivariogram matrix of prediction locations
sub1 semivariogram array for subtraction of inner variances of areas
sub2 semivariogram array for subtraction of inner variances of areas
debug.level debug.level >= 1 will give output for every element
Value
The lower level versions of the function calculates a semivariogram matrix between locations in
object or between the locations in object and the locations in object2. The method for object
of type rtop calculates semivariogram matrices between observation locations, between prediction
locations, and between observation locations and prediction locations, and adds these to object.
Note
The argument varMatUpdate is typically used to avoid repeated computations of the same vari-
ance matrices. Default is FALSE, which will avoid recomputation of the variance matrix for the
observations if the procedure is cross-validation before interpolation. Should be set to TRUE if
the variogram model has been changed, or if observation and/or prediction locations have been
changed.
If an rtop-object contains observations and/or predictionLocations of type STSDF, the covariance
matrix is computed based on the spatial properties of the object.
Author(s)
<NAME>
References
<NAME>., <NAME>, and <NAME>. Top-kriging - geostatistics on stream networks. Hydrology
and Earth System Sciences, 10:277-287, 2006.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., 2014. Rtop: An
R package for interpolation of data with a variable spatial support, with an example from river
networks. Computers & Geosciences, 67.
See Also
gDist
Examples
## Not run:
library(rgdal)
rpath = system.file("extdata",package="rtop")
observations = readOGR(rpath,"observations")
vmod = list(model = "Ex1", params = c(0.00001,0.007,350000,0.9,1000))
vm = varMat(observations, variogramModel = vmod)
## End(Not run)
LorenzRegression | cran | R | Package ‘LorenzRegression’
February 28, 2023
Type Package
Title Lorenz and Penalized Lorenz Regressions
Version 1.0.0
Description
Inference for the Lorenz and penalized Lorenz regressions. More broadly, the package pro-
poses functions to assess inequality and graphically represent it. The Lorenz Regression proce-
dure is introduced in Heuchenne and Jacquemain (2022) <doi:10.1016/j.csda.2021.107347>.
License GPL-3
Encoding UTF-8
Depends R (>= 3.3.1)
LazyData true
Imports stats, ggplot2, parallel, doParallel, foreach, MASS, GA,
locpol, Rearrangement, Rcpp (>= 0.11.0), knitr
RoxygenNote 7.2.2
Suggests rmarkdown
LinkingTo Rcpp, RcppArmadillo
NeedsCompilation yes
Author <NAME> [aut, cre]
(<https://orcid.org/0000-0001-9349-780X>),
<NAME> [ctb] (Author of an R implementation of the FABS algorithm
available at https://github.com/shuanggema/Fabs, of which function
Lorenz.FABS is derived)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-02-28 17:32:34 UTC
R topics documented:
.Fitness_cpp
boot.confint
coef.LR
coef.PLR
confint.LR
confint.PLR
Data.Incomes
Gini.coef
Lorenz.boot
Lorenz.curve
Lorenz.FABS
Lorenz.GA
Lorenz.graphs
Lorenz.Population
Lorenz.Reg
Lorenz.SCADFABS
LorenzRegression
plot.LR
plot.PLR
PLR.BIC
PLR.CV
PLR.normalize
PLR.wrap
print.LR
print.PLR
Rearrangement.estimation
summary.LR
summary.PLR
.Fitness_cpp Computes the fitness used in the GA
Description
Computes the fitness of a candidate in the genetic algorithm displayed in function Lorenz.GA.cpp
Usage
.Fitness_cpp(x, Y, X, Z, pi)
Arguments
x vector of size (p-1) giving the proposed candidate, where p is the number of
covariates
Y vector of size n gathering the response, where n is the sample size
X matrix of dimension (n*p) gathering the covariates
Z vector of size n gathering iid repetitions of a U[0,1]
pi vector of size n gathering the observation weights (notice that sum(pi)=1)
Value
Fitness of candidate x
boot.confint Bootstrap confidence intervals
Description
boot.confint computes bootstrap confidence intervals given an estimation on the original sample
and on the bootstrap samples
Usage
boot.confint(x.hat, x.star, alpha, boot.method)
Arguments
x.hat estimator on the original sample.
x.star vector gathering the estimation on the bootstrapped sample.
alpha 1-level of the confidence interval
boot.method bootstrap method.
Value
A vector of dimension two with the desired confidence interval
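A minimal sketch, assuming boot.confint can be called directly on user-supplied bootstrap estimates (the data here are simulated):
set.seed(1)
x <- rnorm(100)
x.hat <- mean(x) # estimator on the original sample
x.star <- replicate(200, mean(sample(x, replace = TRUE))) # bootstrap estimates
boot.confint(x.hat, x.star, alpha = 0.05, boot.method = "Perc") # percentile bootstrap CI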
coef.LR Estimated coefficients for the Lorenz Regression
Description
coef.LR provides the estimated coefficients for an object of class LR.
Usage
## S3 method for class 'LR'
coef(object, ...)
Arguments
object Output of a call to Lorenz.Reg, where penalty="none".
... Additional arguments.
Value
a vector gathering the estimated coefficients
See Also
Lorenz.Reg
Examples
data(Data.Incomes)
NPLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "none")
coef(NPLR)
coef.PLR Estimated coefficients for the Penalized Lorenz Regression
Description
coef.PLR provides the estimated coefficients for an object of class PLR.
Usage
## S3 method for class 'PLR'
coef(object, renormalize = TRUE, ...)
Arguments
object Output of a call to Lorenz.Reg, where penalty!="none".
renormalize whether the coefficient vector should be re-normalized to match the represen-
tation where the first category of each categorical variable is omitted. Default
value is TRUE
... Additional arguments
Value
If the PLR was fitted with only one selection method, the output is a vector gathering the estimated
coefficients. If several selection methods were selected, it outputs a list of vectors, where each
element of the list corresponds to a different selection method.
See Also
Lorenz.Reg
Examples
data(Data.Incomes)
PLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "SCAD",
h.grid = nrow(Data.Incomes)^(-1/5.5), sel.choice = c("BIC","CV"),
eps = 0.01, seed.CV = 123, nfolds = 5)
coef(PLR)
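The re-normalization can also be switched off, for instance:
coef(PLR, renormalize = FALSE)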
confint.LR Confidence intervals for the Lorenz Regression
Description
confint.LR provides confidence intervals for the explained Gini coefficient, Lorenz-R2 and theta
vector for an object of class LR.
Usage
## S3 method for class 'LR'
confint(
object,
parm = c("Gini", "LR2", "theta"),
level = 0.95,
boot.method = c("Param", "Basic", "Perc"),
...
)
Arguments
object Output of a call to Lorenz.Reg, where penalty="none" and Boot.inference=TRUE.
parm Determines whether the confidence interval is computed for the explained Gini
coefficient, for the Lorenz-R2 or for the vector of theta coefficients. Possible
values are "Gini" (default, for the explained Gini),"LR2" (for the Lorenz-R2)
and "theta" (for the vector theta).
level level of the confidence interval
boot.method What bootstrap method is used to construct the confidence interval. Default
value is "Param", which exploits the asymptotic normality and only bootstraps
the variance. Other possible values are "Perc" (percentile bootstrap) and "Ba-
sic" (basic bootstrap). Percentile bootstrap directly plugs the quantiles of the
bootstrap distribution. Basic bootstrap is based on bootstrapping the whole dis-
tribution of the estimator.
... Additional arguments.
Details
Use this function only if Boot.inference was set to TRUE in the call to Lorenz.Reg. Otherwise,
bootstrap was not computed and the confidence intervals cannot be determined.
Value
The desired confidence interval. If parm is set to either "Gini" or "LR2", the output is a vector. If
parm is set to "theta", it is a matrix where each row corresponds to a different coefficient.
See Also
Lorenz.Reg
Examples
# The following piece of code might take several minutes
data(Data.Incomes)
set.seed(123)
Data <- Data.Incomes[sample(1:nrow(Data.Incomes),50),]
NPLR <- Lorenz.Reg(Income ~ ., data = Data, penalty = "none",
seed.boot = 123, B = 40, Boot.inference = TRUE)
confint(NPLR)
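Other quantities and bootstrap methods can be requested on the same fitted object, for instance:
confint(NPLR, parm = "LR2", boot.method = "Perc") # percentile-bootstrap CI for the Lorenz-R2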
confint.PLR Confidence intervals for the Penalized Lorenz Regression
Description
confint.PLR provides confidence intervals for the explained Gini coefficient and Lorenz-R2 for an
object of class PLR.
Usage
## S3 method for class 'PLR'
confint(
object,
parm = c("Gini", "LR2"),
level = 0.95,
boot.method = c("Param", "Basic", "Perc"),
which.pars = NULL,
...
)
Arguments
object Output of a call to Lorenz.Reg, where penalty!="none" and Boot.inference=TRUE.
parm Determines whether the confidence interval is computed for the explained Gini
coefficient or for the Lorenz-R2. Possible values are "Gini" (default, for the
explained Gini) and "LR2" (for the Lorenz-R2).
level level of the confidence interval
boot.method What bootstrap method is used to construct the confidence interval. Default
value is "Param", which exploits the asymptotic normality and only bootstraps
the variance. Other possible values are "Perc" (percentile bootstrap) and "Ba-
sic" (basic bootstrap). Percentile bootstrap directly plugs the quantiles of the
bootstrap distribution. Basic bootstrap is based on bootstrapping the whole dis-
tribution of the estimator.
which.pars Which values of the bandwidth h and the penalty parameter lambda should be
used. Default is NULL, in which case the optimal values are used.
... Additional arguments.
Details
Use this function only if Boot.inference was set to TRUE in the call to Lorenz.Reg. Otherwise,
bootstrap was not computed and the confidence intervals cannot be determined.
Value
A matrix gathering the desired confidence intervals. Each row corresponds to a different selection
method for the pair (h,lambda).
See Also
Lorenz.Reg
Examples
data(Data.Incomes)
set.seed(123)
Data <- Data.Incomes[sample(1:nrow(Data.Incomes),50),]
PLR <- Lorenz.Reg(Income ~ ., data = Data, h.grid = nrow(Data)^(-1/5.5),
penalty = "SCAD", eps = 0.02, seed.boot = 123, B = 40, Boot.inference = TRUE)
confint(PLR)
Data.Incomes Simulated income data
Description
Fictitious cross-sectional dataset used to illustrate the Lorenz regression methodology. It covers 7
variables for 200 individuals aged between 25 and 30 years.
Usage
data(Data.Incomes)
Format
A data frame with 200 rows and 7 columns:
Income Individual’s labor income
Sex Sex (0=Female, 1=Male)
Health.level Variable ranging from 0 to 10 indicating the individual's health level (0 is worst, 10
is best)
Age Individual’s age in years, ranging from 25 to 30
Work.Hours Individual’s weekly work hours
Education Individual’s highest grade completed in years
Seniority Length of service in years with the individual’s employer
Gini.coef Concentration index of y wrt x
Description
Gini.coef computes the concentration index of a vector y with respect to another vector x. If y and
x are identical, the obtained concentration index boils down to the Gini coefficient.
Usage
Gini.coef(
y,
x = y,
na.rm = TRUE,
ties.method = c("mean", "random"),
seed = NULL,
weights = NULL
)
Arguments
y variable of interest.
x variable to use for the ranking. By default x = y, and the obtained concentration
index is the Gini coefficient of y.
na.rm should missing values be deleted. Default value is TRUE. If FALSE is selected,
missing values generate an error message
ties.method What method should be used to break the ties in the rank index. Possible values
are "mean" (default value) or "random". If "random" is selected, the ties are
broken by further ranking in terms of a uniformly distributed random variable.
If "mean" is selected, the average rank method is used.
seed fixes what seed is imposed for the generation of the vector of uniform random
variables used to break the ties. Default is NULL, in which case no seed is
imposed.
weights vector of sample weights. By default, each observation is given the same weight.
Value
The value of the concentration index (or Gini coefficient)
See Also
Lorenz.curve, Lorenz.graphs
Examples
data(Data.Incomes)
# We first compute the Gini coefficient of Income
Y <- Data.Incomes$Income
Gini.coef(y = Y)
# Then we compute the concentration index of Income with respect to Age
X <- Data.Incomes$Age
Gini.coef(y = Y, x = X)
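Sample weights can also be supplied; a hedged sketch with arbitrary weights:
w <- runif(length(Y))
Gini.coef(y = Y, weights = w / sum(w))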
Lorenz.boot Produces bootstrap-based inference for (penalized) Lorenz regression
Description
Lorenz.boot determines bootstrap estimators for the weight vector, explained Gini coefficient and
Lorenz-R2 and, if applicable, selects the regularization parameter.
Usage
Lorenz.boot(
formula,
data,
standardize = TRUE,
weights = NULL,
LR.est = NULL,
penalty = c("none", "SCAD", "LASSO"),
h = NULL,
eps = 0.005,
B = 500,
bootID = NULL,
seed.boot = NULL,
parallel = FALSE,
...
)
Arguments
formula A formula object of the form response ~ other_variables.
data A data frame containing the variables displayed in the formula.
standardize Should the variables be standardized before the estimation process? Default
value is TRUE.
weights vector of sample weights. By default, each observation is given the same weight.
LR.est Estimation on the original sample. Output of a call to Lorenz.GA or PLR.wrap.
penalty should the regression include a penalty on the coefficients size. If "none" is cho-
sen, a non-penalized Lorenz regression is computed using function Lorenz.GA.
If "SCAD" is chosen, a penalized Lorenz regression with SCAD penalty is com-
puted using function Lorenz.SCADFABS. If "LASSO" is chosen, a penalized
Lorenz regression with LASSO penalty is computed using function Lorenz.FABS.
h Only used if penalty="SCAD" or penalty="LASSO". Bandwidth of the kernel,
determining the smoothness of the approximation of the indicator function. De-
fault value is NULL (unpenalized case) but has to be specified if penalty="LASSO"
or penalty="SCAD".
eps Only used if penalty="SCAD" or penalty="LASSO". Step size in the FABS or
SCADFABS algorithm. Default value is 0.005.
B Number of bootstrap resamples. Default is 500.
bootID matrix where each row provides the ID of the observations selected in each
bootstrap resample. Default is NULL, in which case these are defined internally.
seed.boot Should a specific seed be used in the definition of the bootstrap resamples. Default value is
NULL in which case no seed is imposed.
parallel Whether parallel computing should be used to distribute the B computations on
different CPUs. Either a logical value determining whether parallel computing
is used (TRUE) or not (FALSE, the default value). Or a numerical value deter-
mining the number of cores to use.
... Additional parameters corresponding to arguments passed in Lorenz.GA, Lorenz.SCADFABS
or Lorenz.FABS depending on the argument chosen in penalty.
Value
A list with several components:
LR.est Estimation on the original sample.
Gi.star In the unpenalized case, a vector gathering the bootstrap estimators of the explained Gini
coefficient. In the penalized case, it becomes a list of vectors. Each element of the list corre-
sponds to a different value of the penalization parameter
LR2.star In the unpenalized case, a vector gathering the bootstrap estimators of the Lorenz-R2 .
In the penalized case, it becomes a list of vectors.
theta.star In the unpenalized case, a matrix gathering the bootstrap estimators of theta (rows cor-
respond to bootstrap iterations and columns refer to the different coefficients). In the penalized
case, it becomes a list of matrices.
OOB.total In the penalized case only. Vector gathering the OOB-score for each lambda value.
OOB.best In the penalized case only. index of the lambda value attaining the highest OOB-score.
References
<NAME>. and <NAME> (2022). Inference for monotone single-index conditional means:
A Lorenz regression approach. Computational Statistics & Data Analysis 167(C). Jacquemain,
A., <NAME>, and <NAME> (2022). A penalised bootstrap estimation procedure for the
explained Gini coefficient.
See Also
Lorenz.Reg, Lorenz.GA, Lorenz.SCADFABS, Lorenz.FABS, PLR.wrap
Examples
data(Data.Incomes)
set.seed(123)
Data <- Data.Incomes[sample(1:nrow(Data.Incomes),50),]
Lorenz.boot(Income ~ ., data = Data,
penalty = "SCAD", h = nrow(Data)^(-1/5.5),
eps = 0.02, B = 40, seed.boot = 123)
Lorenz.curve Concentration curve of y with respect to x
Description
Lorenz.curve computes the concentration curve index of a vector y with respect to another vector
x. If y and x are identical, the obtained concentration curve boils down to the Lorenz curve.
Usage
Lorenz.curve(
y,
x = y,
graph = FALSE,
na.rm = TRUE,
ties.method = c("mean", "random"),
seed = NULL,
weights = NULL
)
Arguments
y variable of interest.
x variable to use for the ranking. By default x = y, and the obtained concentration
curve is the Lorenz curve of y.
graph whether a graph of the obtained concentration curve should be traced. Default
value is FALSE.
na.rm should missing values be deleted. Default value is TRUE. If FALSE is selected,
missing values generate an error message
ties.method What method should be used to break the ties in the rank index. Possible values
are "mean" (default value) or "random". If "random" is selected, the ties are
broken by further ranking in terms of a uniformly distributed random variable.
If "mean" is selected, the average rank method is used.
seed seed imposed for the generation of the vector of uniform random variables used
to break the ties. Default is NULL, in which case no seed is imposed.
weights vector of sample weights. By default, each observation is given the same weight.
Value
A function corresponding to the estimated Lorenz or concentration curve. If graph is TRUE, the
curve is also plotted.
See Also
Lorenz.graphs, Gini.coef
Examples
data(Data.Incomes)
# We first compute the Lorenz curve of Income
Y <- Data.Incomes$Income
Lorenz.curve(y = Y, graph = TRUE)
# Then we compute the concentration curve of Income with respect to Age
X <- Data.Incomes$Age
Lorenz.curve(y = Y, x = X, graph = TRUE)
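Since the returned value is a function, it can be evaluated directly; for instance:
LC <- Lorenz.curve(y = Y)
LC(0.5) # share of total income held by the poorest 50%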
Lorenz.FABS Solves the Penalized Lorenz Regression with Lasso penalty
Description
Lorenz.FABS solves the penalized Lorenz regression with (adaptive) Lasso penalty on a grid of
lambda values. For each value of lambda, the function returns estimates for the vector of parameters
and for the estimated explained Gini coefficient, as well as the Lorenz-R2 of the regression.
Usage
Lorenz.FABS(
YX_mat,
weights = NULL,
h,
w.adaptive = NULL,
eps,
iter = 10^4,
lambda = "Shi",
lambda.min = 1e-07,
gamma = 0.05
)
Arguments
YX_mat a matrix with the first column corresponding to the response vector, the remain-
ing ones being the explanatory variables.
weights vector of sample weights. By default, each observation is given the same weight.
h bandwidth of the kernel, determining the smoothness of the approximation of
the indicator function.
w.adaptive vector of size equal to the number of covariates where each entry indicates the
weight in the adaptive Lasso. By default, each covariate is given the same weight
(Lasso).
eps step size in the FABS algorithm.
iter maximum number of iterations. Default value is 10^4.
lambda this parameter relates to the regularization parameter. Several options are avail-
able.
grid If lambda="grid", lambda is defined on a grid, equidistant in the logarith-
mic scale.
Shi If lambda="Shi", lambda is defined within the algorithm, as in Shi et al
(2018).
supplied If the user wants to supply the lambda vector himself
lambda.min lower bound of the penalty parameter. Only used if lambda="Shi".
gamma value of the Lagrange multiplier in the loss function
Details
The regression is solved using the FABS algorithm developed by Shi et al (2018) and adapted to
our case. For a comprehensive explanation of the Penalized Lorenz Regression, see Jacquemain et
al. In order to ensure identifiability, theta is forced to have a L2-norm equal to one.
Value
A list with several components:
iter number of iterations attained by the algorithm.
direction vector providing the direction (-1 = backward step, 1 = forward step) for each iteration.
lambda value of the regularization parameter for each iteration.
h value of the bandwidth.
theta matrix where column i provides the estimated parameter vector for iteration i.
LR2 the Lorenz-R2 of the regression.
Gi.expl the estimated explained Gini coefficient.
References
<NAME>., <NAME>, and <NAME> (2022). A penalised bootstrap estimation proce-
dure for the explained Gini coefficient. <NAME>., <NAME>, <NAME>, and <NAME> (2018). A Forward
and Backward Stagewise Algorithm for Nonconvex Loss Function with Adaptive Lasso, Computa-
tional Statistics & Data Analysis 124, 235-251.
See Also
Lorenz.Reg, PLR.wrap, Lorenz.SCADFABS
Examples
data(Data.Incomes)
YX_mat <- Data.Incomes[,-2]
Lorenz.FABS(YX_mat, h = nrow(Data.Incomes)^(-1/5.5), eps = 0.005)
Lorenz.GA Estimates the parameter vector in Lorenz regression using a genetic
algorithm
Description
Lorenz.GA estimates the vector of parameters in Lorenz regression using the unit-norm normalization.
It also returns the Lorenz-R2 of the regression as well as the estimated explained Gini
coefficient.
Usage
Lorenz.GA(
YX_mat,
standardize = TRUE,
popSize = 50,
maxiter = 1500,
run = 150,
ties.method = c("random", "mean"),
ties.Gini = c("random", "mean"),
seed.random = NULL,
weights = NULL,
parallel = FALSE
)
Arguments
YX_mat A matrix with the first column corresponding to the response vector, the remain-
ing ones being the explanatory variables.
standardize Should the variables be standardized before the estimation process? Default
value is TRUE.
popSize Size of the population of candidates in the genetic algorithm. Default value is
50.
maxiter Maximum number of iterations in the genetic algorithm. Default value is 1500.
run Number of iterations without improvement in the best fitness necessary for the
algorithm to stop. Default value is 150.
ties.method What method should be used to break the ties in optimization program. Possible
values are "random" (default value) or "mean". If "random" is selected, the
ties are broken by further ranking in terms of a uniformly distributed random
variable. If "mean" is selected, the average rank method is used.
ties.Gini what method should be used to break the ties in the computation of the Gini
coefficient at the end of the algorithm. Possible values and default choice are
the same as above.
seed.random seed.random imposed for the generation of the vector of uniform random vari-
ables used to break the ties. Default is NULL, in which case no seed.random is
imposed.
weights vector of sample weights. By default, each observation is given the same weight.
parallel Whether parallel computing should be used to distribute the computations in the
genetic algorithm. Either a logical value determining whether parallel comput-
ing is used (TRUE) or not (FALSE, the default value). Or a numerical value
determining the number of cores to use.
Details
The genetic algorithm is solved using function ga from the GA package. The fitness function is
coded in Rcpp to speed up computation time. When discrete covariates are introduced and ties
occur in the index, the default option randomly breaks them, as advised in Section 3 of Heuchenne
and Jacquemain (2020)
Value
A list with several components:
theta the estimated vector of parameters.
LR2 the Lorenz-R2 of the regression.
Gi.expl the estimated explained Gini coefficient.
niter number of iterations attained by the genetic algorithm.
fit value attained by the fitness function at the optimum.
References
Heuchenne, C. and <NAME> (2022). Inference for monotone single-index conditional means:
A Lorenz regression approach. Computational Statistics & Data Analysis 167(C).
See Also
Lorenz.Reg, ga
Examples
data(Data.Incomes)
YX_mat <- cbind(Data.Incomes$Income, Data.Incomes$Age, Data.Incomes$Work.Hours)
Lorenz.GA(YX_mat, popSize = 40)
Lorenz.graphs Graphs of concentration curves
Description
Lorenz.graphs traces the Lorenz curve of a response and the concentration curve of the response
and each of a series of covariates.
Usage
Lorenz.graphs(formula, data, ...)
Arguments
formula A formula object of the form response ~ other_variables.
data A dataframe containing the variables of interest
... other arguments (see Section ’Arguments’ in Lorenz.curve).
Value
A plot comprising
• The Lorenz curve of response
• The concentration curves of response with respect to each element of other_variables
See Also
Lorenz.curve, Gini.coef
Examples
data(Data.Incomes)
Lorenz.graphs(Income ~ Age + Work.Hours, data = Data.Incomes)
Lorenz.Population Defines the population used in the genetic algorithm
Description
Lorenz.Population creates the initial population of the genetic algorithm used to solve the Lorenz
regression.
Usage
Lorenz.Population(object)
Arguments
object An object of class "ga", resulting from a call to function ga.
Details
Note that this population produces an initial solution ensuring a unit norm.
Value
A matrix of dimension object@popSize times the number of explanatory variables minus one,
gathering the initial population.
See Also
Lorenz.GA
Lorenz.Reg Undertakes a Lorenz regression
Description
Lorenz.Reg performs the Lorenz regression of a response with respect to several covariates.
Usage
Lorenz.Reg(
formula,
data,
standardize = TRUE,
weights = NULL,
parallel = FALSE,
penalty = c("none", "SCAD", "LASSO"),
h.grid = c(0.1, 0.2, 1, 2, 5) * nrow(data)^(-1/5.5),
eps = 0.005,
sel.choice = c("BIC", "CV", "Boot")[1],
nfolds = 10,
seed.CV = NULL,
foldID = NULL,
Boot.inference = FALSE,
B = 500,
bootID = NULL,
seed.boot = NULL,
LR = NULL,
LR.boot = NULL,
...
)
Arguments
formula A formula object of the form response ~ other_variables.
data A data frame containing the variables displayed in the formula.
standardize Should the variables be standardized before the estimation process? Default
value is TRUE.
weights vector of sample weights. By default, each observation is given the same weight.
parallel Whether parallel computing should be used to distribute the computations on
different CPUs. Either a logical value determining whether parallel computing
is used (TRUE) or not (FALSE, the default value). Or a numerical value deter-
mining the number of cores to use.
penalty should the regression include a penalty on the coefficients size. If "none" is cho-
sen, a non-penalized Lorenz regression is computed using function Lorenz.GA.
If "SCAD" is chosen, a penalized Lorenz regression with SCAD penalty is com-
puted using function Lorenz.SCADFABS. If "LASSO" is chosen, a penalized
Lorenz regression with LASSO penalty is computed using function Lorenz.FABS.
h.grid Only used if penalty="SCAD" or penalty="LASSO". Grid of values for the
bandwidth of the kernel, determining the smoothness of the approximation of
the indicator function. Default value is (0.1,0.2,1,2,5)*n^(-1/5.5), where n is
sample size.
eps Only used if penalty="SCAD" or penalty="LASSO". Step size in the FABS or
SCADFABS algorithm. Default value is 0.005.
sel.choice Only used if penalty="SCAD" or penalty="LASSO". Determines what method
is used to determine the optimal regularization parameter. Possibles values are
any subvector of c("BIC","CV","Boot"). Default is "BIC". Notice that "Boot"
is necessarily added if Boot.inference is set to TRUE.
nfolds Only used if sel.choice contains "CV". Number of folds in the cross-validation.
seed.CV Only used if sel.choice contains "CV". Should a specific seed be used in the
definition of the folds. Default value is NULL in which case no seed is imposed.
foldID vector taking value from 1 to nfolds specifying the fold index of each observa-
tion. Default value is NULL in which case the folds are defined internally.
Boot.inference should bootstrap inference be produced ? Default is FALSE. It is automatically
turned to TRUE if sel.choice contains "Boot".
B Only used if Boot.inference is TRUE. Number of bootstrap resamples. Default
is 500.
bootID Only used if Boot.inference is TRUE. matrix where each row provides the ID
of the observations selected in each bootstrap resample. Default is NULL, in
which case these are defined internally.
seed.boot Only used if Boot.inference is TRUE. Should a specific seed be used in the
definition of the bootstrap resamples. Default value is NULL in which case no seed is imposed.
LR Estimation on the original sample. Output of a call to Lorenz.GA or PLR.wrap.
LR.boot Estimation on the bootstrap resamples. In the non-penalized case, it is the output
of a call to Lorenz.boot. In the penalized case, it is a list of size length(h.grid),
where each element is the output of a call to Lorenz.boot and uses a different
value of the bandwidth.
... Additional parameters corresponding to arguments passed in Lorenz.GA, Lorenz.SCADFABS
or Lorenz.FABS depending on the argument chosen in penalty.
Value
For the Non-penalized Lorenz Regression, a list with the following elements :
theta the estimated vector of parameters.
pval.theta Only returned if Boot.inference is TRUE. the pvalues associated to each element of
the parameter vector.
summary a vector including the estimated explained Gini coefficient and the Lorenz-R2 .
Gi.expl the estimated explained Gini coefficient
LR2 the Lorenz-R2 of the regression.
MRS the matrix of estimated marginal rates of substitution. More precisely, if we want the MRS of
X1 (numerator) with respect to X2 (denominator), we should look for row corresponding to
X1 and column corresponding to X2.
Fit A data frame containing the response (first column) and the estimated index (second column).
Gi.star Only returned if Boot.inference is TRUE. A vector gathering the bootstrap estimators of
the explained Gini coefficient.
LR2.star Only returned if Boot.inference is TRUE. A vector gathering the bootstrap estimators of
the Lorenz-R2 .
theta.star Only returned if Boot.inference is TRUE. A matrix gathering the bootstrap estimators
of theta (rows refer to bootstrap iterations and columns refer to the different coefficients).
For the Penalized Lorenz Regression, a list with the following elements.
path a list where the different elements correspond to the values of h.grid. Each element is a
matrix where the first line displays the path of regularization parameters. The second and
third lines display the evolution of the Lorenz-R2 and explained Gini coefficient along that
path. The next lines display the evolution of the scores of the methods chosen in sel.choice.
The remaining lines display the evolution of the estimated parameter vector.
theta a matrix where the different lines correspond to the methods chosen in sel.choice. Each
line provides the estimated vector of parameters at the optimal value of the regularization
parameter.
summary a matrix where the different lines correspond to the methods chosen in sel.choice. Each
line provides the estimated explained Gini coefficient, the Lorenz-R2 , the optimal lambda, the
optimal bandwidth, the number of selected variables and the scores at the optimal value of the
regularization parameter.
Gi.expl a vector providing the estimated explained Gini coefficient at the optimal value of the
regularization parameter for each method in sel.choice.
LR2 a vector providing the Lorenz-R2 at the optimal value of the regularization parameter for each
method in sel.choice.
MRS a list where the different elements correspond to a method in sel.choice. Each element is
a matrix of estimated marginal rates of substitution for non-zero coefficients at the optimal
value of the regularization parameter.
Fit A data frame containing the response (first column). The remaining columns give the esti-
mated index at the optimal value of the regularization parameter, for each method chosen in
sel.choice.
which.h a vector providing the index of the optimal bandwidth for each method in sel.choice.
which.lambda a vector providing the index of the optimal lambda for each method in sel.choice.
Gi.star Only returned if Boot.inference is TRUE. A list (each element a different value of the
bandwidth h) of lists (each element a different value of the penalty parameter) of vectors
(each element a bootstrap iteration) gathering the bootstrap estimators of the explained Gini
coefficient.
LR2.star Only returned if Boot.inference is TRUE. Similarly for the Lorenz-R2
theta.star Only returned if Boot.inference is TRUE. A list (each element a different value of
the bandwidth h) of lists (each element a different value of the penalty parameter) of matrices
(rows are bootstrap iterations and columns refer to the coefficients) gathering the bootstrap
estimators of theta.
In both cases, the list also contains technical information, namely the formula, data, weights and call.
References
<NAME>. and <NAME> (2022). Inference for monotone single-index conditional means:
A Lorenz regression approach. Computational Statistics & Data Analysis 167(C). Jacquemain,
A., <NAME>, and <NAME> (2022). A penalised bootstrap estimation procedure for the
explained Gini coefficient.
See Also
Lorenz.GA, Lorenz.SCADFABS, Lorenz.FABS, PLR.wrap, Lorenz.boot
Examples
data(Data.Incomes)
set.seed(123)
Data <- Data.Incomes[sample(1:nrow(Data.Incomes),50),]
# 1. Non-penalized regression
NPLR <- Lorenz.Reg(Income ~ ., data = Data, penalty = "none",
popSize = 30)
# 2. Penalized regression
PLR <- Lorenz.Reg(Income ~ ., data = Data, penalty = "SCAD",
h.grid = nrow(Data.Incomes)^(-1/5.5),
sel.choice = c("BIC","CV"), eps = 0.01, nfolds = 5)
# Comparison
NPLR$theta;PLR$theta
NPLR$summary;PLR$summary
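The fitted non-penalized object also contains the matrix of estimated marginal rates of substitution:
NPLR$MRS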
Lorenz.SCADFABS Solves the Penalized Lorenz Regression with SCAD penalty
Description
Lorenz.SCADFABS solves the penalized Lorenz regression with SCAD penalty on a grid of lambda
values. For each value of lambda, the function returns estimates for the vector of parameters and
for the estimated explained Gini coefficient, as well as the Lorenz-R2 of the regression.
Usage
Lorenz.SCADFABS(
YX_mat,
weights = NULL,
h,
eps,
a = 3.7,
iter = 10^4,
lambda = "Shi",
lambda.min = 1e-07,
gamma = 0.05
)
Arguments
YX_mat a matrix with the first column corresponding to the response vector, the remain-
ing ones being the explanatory variables.
weights vector of sample weights. By default, each observation is given the same weight.
h bandwidth of the kernel, determining the smoothness of the approximation of
the indicator function.
eps step size in the FABS algorithm.
a parameter of the SCAD penalty. Default value is 3.7.
iter maximum number of iterations. Default value is 10^4.
lambda this parameter relates to the regularization parameter. Several options are avail-
able.
grid If lambda="grid", lambda is defined on a grid, equidistant in the logarith-
mic scale.
Shi If lambda="Shi", lambda is defined within the algorithm, as in Shi et al
(2018).
supplied If the user wants to supply the lambda vector himself
lambda.min lower bound of the penalty parameter. Only used if lambda="Shi".
gamma value of the Lagrange multiplier in the loss function
Details
The regression is solved using the SCAD-FABS algorithm developed by Jacquemain et al and
adapted to our case. For a comprehensive explanation of the Penalized Lorenz Regression, see
Heuchenne et al. In order to ensure identifiability, theta is forced to have a L2-norm equal to one.
Value
A list with several components:
iter number of iterations attained by the algorithm.
direction vector providing the direction (-1 = backward step, 1 = forward step) for each iteration.
lambda value of the regularization parameter for each iteration.
h value of the bandwidth.
theta matrix where column i provides the non-normalized estimated parameter vector for iteration
i.
LR2 vector where element i provides the Lorenz-R2 of the regression for iteration i.
Gi.expl vector where element i provides the estimated explained Gini coefficient for iteration i.
References
<NAME>., <NAME>, and <NAME> (2022). A penalised bootstrap estimation proce-
dure for the explained Gini coefficient.
See Also
Lorenz.Reg, PLR.wrap, Lorenz.FABS
Examples
data(Data.Incomes)
YX_mat <- Data.Incomes[,-2]
Lorenz.SCADFABS(YX_mat, h = nrow(Data.Incomes)^(-1/5.5), eps = 0.005)
LorenzRegression LorenzRegression : A package to estimate and interpret Lorenz regres-
sions
Description
The LorenzRegression package proposes a toolbox to estimate, produce inference on and interpret
Lorenz regressions. As argued in Heuchenne and Jacquemain (2020), these regressions are used to
determine the explanatory power of a set of covariates on the inequality of a response variable. In
a nutshell, each variable is given a weight in order to maximize the concentration index of the re-
sponse with respect to a weighted sum of the covariates. The obtained concentration index is called
the explained Gini coefficient. If a single-index model with increasing link function is assumed,
the explained Gini boils down to the Gini coefficient of the fitted part of the model. This package
rests on two main functions: Lorenz.Reg for the estimation process and Lorenz.boot for more
complete inference (tests and confidence intervals).
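As a hedged illustration of this definition, the explained Gini coefficient reported by a (non-penalized) Lorenz regression should coincide, up to tie-breaking, with the concentration index of the response with respect to the estimated index:
data(Data.Incomes)
NPLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "none")
Gini.coef(y = NPLR$Fit[,1], x = NPLR$Fit[,2]) # concentration index of the response w.r.t. the index
NPLR$Gi.expl # explained Gini reported by Lorenz.Reg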
Details
We direct the user to Heuchenne and Jacquemain (2020) for a rigorous exposition of the methodol-
ogy and to the vignette Learning Lorenz regressions with examples for a motivational introduction
of the LorenzRegression package.
References
<NAME>. and <NAME> (2022). Inference for monotone single-index conditional means:
A Lorenz regression approach. Computational Statistics & Data Analysis 167(C).
plot.LR Plots for the Unpenalized Lorenz Regression
Description
plot.LR provides plots for an object of class LR.
Usage
## S3 method for class 'LR'
plot(x, ...)
Arguments
x Output of a call to Lorenz.Reg, where penalty=="none".
... Additional arguments
Value
The Lorenz curve of the response and concentration curve of the response with respect to the esti-
mated index
See Also
Lorenz.Reg
Examples
data(Data.Incomes)
NPLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "none")
plot(NPLR)
plot.PLR Plots for the Penalized Lorenz Regression
Description
plot.PLR provides plots for an object of class PLR.
Usage
## S3 method for class 'PLR'
plot(x, ...)
Arguments
x Output of a call to Lorenz.Reg, where penalty!="none".
... Additional arguments.
Value
Three types of plots are produced. The first is the Lorenz curve of the response and concentration curves of the
response with respect to the estimated index (obtained with each selection method). In each of the
remaining graphs, the horizontal axis is -log(lambda), lambda being the value of the regularization
parameter. The second type of plot is a traceplot, where the vertical axis gives the size of the
coefficient attached to each covariate. The third type of plot shows the evolution of the score(s) for
each of the selection method chosen in the PLR object. For comparability reasons, the scores are
normalized such that the larger the better and the optimum is attained in 1. Since the whole path
depends on the chosen bandwidth for the kernel, and the optimal bandwidth may depend on the
selection method, the plots are produced for each selection method used in the PLR object
See Also
Lorenz.Reg
Examples
data(Data.Incomes)
PLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "SCAD",
sel.choice = c("BIC","CV"), h.grid = nrow(Data.Incomes)^(-1/5.5),
eps = 0.01, seed.CV = 123, nfolds = 5)
plot(PLR)
PLR.BIC Determines the regularization parameter (lambda) in a PLR via opti-
mization of an information criterion.
Description
PLR.BIC takes as input a matrix of estimated parameter vectors, where each row corresponds to a
covariate and each column corresponds to a value of lambda, and returns the index of the optimal
column by optimizing an information criterion. By default the BIC is used.
Usage
PLR.BIC(YX_mat, theta, weights = NULL, IC = c("BIC", "AIC"))
Arguments
YX_mat A matrix with the first column corresponding to the response vector, the remain-
ing ones being the explanatory variables.
theta matrix gathering the path of estimated parameter vectors. Each row corresponds
to a given covariate. Each column corresponds to a given value of lambda
weights vector of sample weights. By default, each observation is given the same weight.
IC indicates which information criterion is used. Possibles values are "BIC" (de-
fault) or "AIC".
Value
A list with two components
val vector indicating the value attained by the information criterion for each value of lambda.
best index of the value of lambda where the optimum is attained.
References
<NAME>., <NAME>, and <NAME> (2022). A penalised bootstrap estimation proce-
dure for the explained Gini coefficient.
See Also
Lorenz.Reg, PLR.wrap, Lorenz.FABS, Lorenz.SCADFABS
Examples
data(Data.Incomes)
YX_mat <- Data.Incomes[,-2]
PLR <- PLR.wrap(YX_mat, h = nrow(YX_mat)^(-1/5.5), eps = 0.005)
PLR.BIC(YX_mat, PLR$theta)
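The AIC can be used instead of the BIC, for instance:
PLR.BIC(YX_mat, PLR$theta, IC = "AIC")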
PLR.CV Determines the regularization parameter (lambda) in a PLR via cross-
validation
Description
PLR.CV undertakes k-fold cross-validation for a Penalized Lorenz Regression. It returns the CV-
score associated to each value of the regularization parameter and the index of the optimum.
Usage
PLR.CV(
formula,
data,
penalty = "SCAD",
h,
PLR.est = NULL,
standardize = TRUE,
weights = NULL,
eps,
nfolds = 10,
foldID = NULL,
seed.CV = NULL,
parallel = FALSE,
...
)
Arguments
formula A formula object of the form response ~ other_variables.
data A data frame containing the variables displayed in the formula.
penalty penalty used in the Penalized Lorenz Regression. Possible values are "SCAD"
(default) or "LASSO".
h bandwidth of the kernel, determining the smoothness of the approximation of
the indicator function.
PLR.est Output of a call to PLR.wrap corresponding to the estimation of the Penalized
Lorenz Regression on the full sample. Default value is NULL in which case the
estimation on the full sample is run internally.
standardize Should the variables be standardized before the estimation process? Default
value is TRUE.
weights vector of sample weights. By default, each observation is given the same weight.
eps Step size in the FABS or SCADFABS algorithm. Default value is 0.005.
nfolds Number of folds. Default value is 10.
foldID vector taking values from 1 to nfolds specifying the fold index of each observation. Default value is NULL, in which case the folds are defined internally.
seed.CV Should a specific seed be used in the definition of the folds? Default value is NULL, in which case no seed is imposed.
parallel Whether parallel computing should be used to distribute the nfolds computations on different CPUs. Either a logical value determining whether parallel computing is used (TRUE) or not (FALSE, the default value), or a numerical value determining the number of cores to use.
... Additional parameters corresponding to arguments passed in Lorenz.SCADFABS
or Lorenz.FABS depending on the argument chosen in penalty.
Value
A list with two components
val vector indicating the CV-score for each value of lambda.
best index where the optimum is attained.
References
<NAME>., <NAME>, and <NAME> (2022). A penalised bootstrap estimation procedure for the explained Gini coefficient.
See Also
Lorenz.Reg, PLR.wrap, Lorenz.FABS, Lorenz.SCADFABS
Examples
YX_mat <- Data.Incomes[,-2]
PLR <- PLR.wrap(YX_mat, h = nrow(YX_mat)^(-1/5.5), eps=0.01)
PLR.CV(Income ~ ., Data.Incomes, PLR.est = PLR,
h = nrow(Data.Incomes)^(-1/5.5), eps = 0.01, nfolds = 5)
PLR.normalize Re-normalizes the estimated coefficients of a penalized Lorenz regression
Description
PLR.normalize transforms the estimated coefficients of a penalized Lorenz regression to match the
model where the first category of each categorical variable is omitted.
Usage
PLR.normalize(PLR)
Arguments
PLR Output of a call to Lorenz.Reg, where penalty!="none".
Value
A matrix of re-normalized coefficients.
See Also
Lorenz.Reg
Examples
data(Data.Incomes)
PLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "SCAD",
sel.choice = c("BIC","CV"), h.grid = nrow(Data.Incomes)^(-1/5.5),
eps = 0.01, seed.CV = 123, nfolds = 5)
PLR.normalize(PLR)
PLR.wrap Wrapper for the Lorenz.SCADFABS and Lorenz.FABS functions
Description
PLR.wrap standardizes the covariates, runs the penalized regression, and returns the path of parameter vectors.
Usage
PLR.wrap(
YX_mat,
standardize = TRUE,
weights = NULL,
penalty = c("SCAD", "LASSO"),
h,
eps = 0.005,
...
)
Arguments
YX_mat a matrix with the first column corresponding to the response vector, the remaining ones being the explanatory variables.
standardize Should the variables be standardized before the estimation process? Default
value is TRUE.
weights vector of sample weights. By default, each observation is given the same weight.
penalty penalty used in the Penalized Lorenz Regression. Possible values are "SCAD"
(default) or "LASSO".
h bandwidth of the kernel, determining the smoothness of the approximation of
the indicator function.
eps Only used if penalty="SCAD" or penalty="LASSO". Step size in the FABS or
SCADFABS algorithm. Default value is 0.005.
... Additional parameters corresponding to arguments passed in Lorenz.SCADFABS
or Lorenz.FABS depending on the argument chosen in penalty.
Value
A list with several components:
lambda vector gathering the different values of the regularization parameter
theta matrix where column i provides the normalized estimated parameter vector corresponding
to value lambda[i] of the regularization parameter.
LR2 vector where element i provides the Lorenz-R2 of the regression related to value lambda[i] of
the regularization parameter.
Gi.expl vector where element i provides the estimated explained Gini coefficient related to value
lambda[i] of the regularization parameter.
See Also
Lorenz.SCADFABS, Lorenz.FABS
Examples
data(Data.Incomes)
YX_mat <- Data.Incomes[,-2]
PLR.wrap(YX_mat, h = nrow(Data.Incomes)^(-1/5.5), eps = 0.005)
print.LR Printing method for the Lorenz Regression
Description
print.LR prints the arguments and estimated coefficients of an object of class LR.
Usage
## S3 method for class 'LR'
print(x, ...)
Arguments
x Output of a call to Lorenz.Reg, where penalty="none".
... Additional arguments.
Value
No return value, called for printing an object of class LR to the console
See Also
Lorenz.Reg
Examples
data(Data.Incomes)
NPLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "none")
print(NPLR)
print.PLR Printing method for the Penalized Lorenz Regression
Description
print.PLR prints the arguments and estimated coefficients of an object of class PLR.
Usage
## S3 method for class 'PLR'
print(x, ...)
Arguments
x Output of a call to Lorenz.Reg, where penalty!="none".
... Additional arguments.
Value
No return value, called for printing an object of class PLR to the console
See Also
Lorenz.Reg
Examples
data(Data.Incomes)
PLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "SCAD",
sel.choice = c("BIC","CV"), h.grid = nrow(Data.Incomes)^(-1/5.5),
eps = 0.01, seed.CV = 123, nfolds = 5)
print(PLR)
Rearrangement.estimation
Estimates a monotonic regression curve via Chernozhukov et al (2009)
Description
Rearrangement.estimation estimates the increasing link function of a single index model via the
methodology proposed in Chernozhukov et al (2009).
Usage
Rearrangement.estimation(Y, Index, t = Index, weights = NULL, degree.pol = 1)
Arguments
Y The response variable.
Index The estimated index. The user may obtain it using function Lorenz.Reg.
t A vector of points over which the link function H(.) should be estimated. Default is the estimated index.
weights vector of sample weights. By default, each observation is given the same weight.
degree.pol degree of the polynomial used in the local polynomial regression. Default value
is 1.
Details
A first estimator of the link function, neglecting the assumption of monotonicity, is obtained with
function locpol from the locpol package. The final estimator is obtained through the rearrangement
operation explained in Chernozhukov et al (2009). This operation is carried out with function
rearrangement from package Rearrangement.
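As a conceptual illustration only (not the package implementation, which relies on locpol and Rearrangement as described above), the rearrangement step amounts to sorting a possibly non-monotone first-stage estimate over its grid of evaluation points:
t.grid <- seq(0, 1, length.out = 100)
H.first <- t.grid + 0.1 * sin(10 * t.grid) ## a first-stage, possibly non-monotone estimate
H.rearranged <- sort(H.first) ## increasing rearrangement over the same grid
all(diff(H.rearranged) >= 0) ## TRUE: the rearranged curve is monotone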
Value
A list with the following components
t the points over which the estimation has been undertaken.
H the estimated link function evaluated at t.
References
Chernozhukov, V., <NAME>, and <NAME> (2009). Improving Point and Interval Estimators of Monotone Functions by Rearrangement. Biometrika 96 (3), 559–75.
See Also
Lorenz.Reg, locpol, rearrangement
Examples
data(Data.Incomes)
PLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "SCAD",
h.grid = nrow(Data.Incomes)^(-1/5.5), eps = 0.01)
Y <- PLR$Fit[,1]
Index <- PLR$Fit[,2]
Rearrangement.estimation(Y = Y, Index = Index)
summary.LR Summary for the Lorenz Regression
Description
summary.LR provides a summary for an object of class LR.
Usage
## S3 method for class 'LR'
summary(object, ...)
Arguments
object Output of a call to Lorenz.Reg, where penalty=="none".
... Additional arguments
Value
A summary displaying the explained Gini coefficient, Lorenz-R2 and a table gathering the estimated
coefficients, including p-values if bootstrap was performed.
See Also
Lorenz.Reg
Examples
data(Data.Incomes)
NPLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "none")
summary(NPLR)
summary.PLR Summary for the Penalized Lorenz Regression
Description
summary.PLR provides a summary for an object of class PLR.
Usage
## S3 method for class 'PLR'
summary(object, renormalize = TRUE, ...)
Arguments
object Output of a call to Lorenz.Reg, where penalty!="none".
renormalize whether the coefficient vector should be re-normalized to match the representation where the first category of each categorical variable is omitted. Default value is TRUE.
... Additional arguments.
Value
A summary displaying two tables: a summary of the model and the estimated coefficients.
See Also
Lorenz.Reg
Examples
data(Data.Incomes)
PLR <- Lorenz.Reg(Income ~ ., data = Data.Incomes, penalty = "SCAD",
sel.choice = c("BIC","CV"), h.grid = nrow(Data.Incomes)^(-1/5.5),
eps = 0.01, seed.CV = 123, nfolds = 5)
summary(PLR) |
dharithri-sc | rust | Rust | Function dharithri_sc::storage::storage_set::storage_clear
===
```
pub fn storage_clear<A>(key: ManagedRef<'_, A, StorageKey<A>>)
where
    A: StorageWriteApi + ManagedTypeApi + ErrorApi,
```
Useful for storage mappers.
Calls to it are also generated by macros.
Function dharithri_sc::storage::storage_get::storage_get_len
===
```
pub fn storage_get_len<A>(key: ManagedRef<'_, A, StorageKey<A>>) -> usize
where
    A: StorageReadApi + ManagedTypeApi + ErrorApi,
```
Useful for storage mappers.
Calls to it are also generated by macros.
Macro dharithri_sc::derive_imports
===
```
macro_rules! derive_imports {
() => { ... };
}
```
Imports required for deriving serialization and TypeAbi.
Macro dharithri_sc::imports
===
```
macro_rules! imports {
() => { ... };
}
```
Getting all imports needed for a smart contract.
Macro dharithri_sc::non_zero_usize
===
```
macro_rules! non_zero_usize {
($input: expr, $error_msg:expr) => { ... };
}
```
Converts usize to NonZeroUsize or returns SCError.
Macro dharithri_sc::only_owner
===
```
macro_rules! only_owner {
($trait_self: expr, $error_msg:expr) => { ... };
}
```
👎Deprecated since 0.26.0: Replace with the `#[only_owner]` attribute that can be placed on an endpoint. That one is more compact and shows up in the ABI.
Very compact way of not allowing anyone but the owner to call a function.
It can only be used in a function that returns `SCResult<_>` where _ can be any type.
```
fn only_callable_by_owner(&self) -> SCResult<()> {
only_owner!(self, "Caller must be owner");
Ok(())
}
```
Macro dharithri_sc::require
===
```
macro_rules! require {
($expression:expr, $($msg_tokens:tt),+ $(,)?) => { ... };
}
```
Allows us to write Solidity style `require!(<condition>, <error_msg>)` and avoid if statements.
The most common way to use it is to provide a string message with optional format arguments.
It is also possible to give the error as a variable of types such as `&str`, `&[u8]` or `ManagedBuffer`.
Examples:
```
fn only_accept_positive(&self, x: i32) {
require!(x > 0, "only positive values accepted");
}
fn only_accept_negative(&self, x: i32) {
require!(x < 0, "only negative values accepted, {} is not negative", x);
}
fn only_accept_zero(&self, x: i32, message: &ManagedBuffer<Self::Api>) {
require!(x == 0, message,);
}
```
Macro dharithri_sc::require_old
===
```
macro_rules! require_old {
($expression:expr, $error_msg:expr) => { ... };
}
```
Allows us to write Solidity style `require!(<condition>, <error_msg>)` and avoid if statements.
It can only be used in a function that returns `SCResult<_>` where _ can be any type.
Example:
```
fn only_accept_positive_old(&self, x: i32) -> SCResult<()> {
require_old!(x > 0, "only positive values accepted");
Ok(())
}
```
Macro dharithri_sc::sc_error
===
```
macro_rules! sc_error {
($s:expr) => { ... };
}
```
Compact way of returning a static error message.
Macro dharithri_sc::sc_try
===
```
macro_rules! sc_try {
($s:expr) => { ... };
}
```
👎Deprecated since 0.16.0: The `?` operator can now be used on `SCResult`, please use it instead.
Equivalent to the `?` operator for SCResult.
epca | cran | R | Package ‘epca’
July 10, 2023
Type Package
Title Exploratory Principal Component Analysis
Version 1.1.0
Date 2023-07-10
Description Exploratory principal component analysis for large-scale datasets, including sparse principal component analysis and sparse matrix approximation.
URL https://github.com/fchen365/epca
BugReports https://github.com/fchen365/epca/issues
License GPL-3
Depends R (>= 3.5),
Imports clue, irlba, Matrix, GPArotation,
Suggests elasticnet, ggcorrplot, tidyverse, rmarkdown, reshape2,
markdown, RSpectra, matlabr, knitr, PMA, testthat (>= 3.0.0)
VignetteBuilder knitr, rmarkdown
Encoding UTF-8
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-4508-6023>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-07-10 19:50:11 UTC
R topics documented:
epca-package
absmin
absmin.criteria
cpve
dist.matrix
distance
exp.frac
hard
inner
labelCluster
misClustRate
norm.Lp
permColumn
pitprops
polar
print.sca
print.sma
prs
pve
rootmatrix
rotation
sca
shrinkage
sma
soft
varimax
varimax.criteria
vgQ.absmin
epca-package Exploratory Principal Component Analysis
Description
epca is for comprehending any data matrix that contains low-rank and sparse underlying signals
of interest. The package currently features two key tools: (1) sca for sparse principal component
analysis and (2) sma for sparse matrix approximation, a two-way data analysis for simultaneous row and column dimensionality reduction.
References
<NAME>. and <NAME>. (2020) "A New Basis for Sparse PCA".
absmin Absmin Rotation
Description
Given a p x k matrix x, finds the orthogonal matrix (rotation) that minimizes the absmin.criteria.
Usage
absmin(x, r0 = diag(ncol(x)), normalize = FALSE, eps = 1e-05, maxit = 1000L)
Arguments
x a matrix or Matrix, initial factor loadings matrix for which the rotation criterion is to be optimized.
r0 matrix, initial rotation matrix.
normalize logical. Should Kaiser normalization be performed? If so the rows of x are
re-scaled to unit length before rotation, and scaled back afterwards.
eps The tolerance for stopping: the relative change in the sum of singular values.
maxit integer, maximum number of iterations (default 1,000).
Value
A list with three elements:
rotated the rotated matrix.
rotmat the (orthogonal) rotation matrix.
n.iter the number of iterations taken.
See Also
GPArotation::GPForth
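A minimal usage sketch (not part of the package examples), mirroring the factanal-based example given for rotation() further below; it assumes absmin is exported:
fa <- factanal(~., 2, data = swiss, rotation = "none")
out <- absmin(loadings(fa))
out$rotated ## the rotated loadings matrix
out$n.iter ## number of iterations taken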
absmin.criteria Absmin Criteria
Description
Calculate the absmin criteria. This is a helper function for absmin.
Usage
absmin.criteria(x)
Arguments
x a matrix or Matrix, initial factor loadings matrix for which the rotation criterion is to be optimized.
cpve Cumulative Proportion of Variance Explained (CPVE)
Description
Calculate the CPVE.
Usage
cpve(x, v, is.cov = FALSE)
Arguments
x matrix or Matrix, the original data matrix or the Gram matrix.
v matrix or Matrix, coefficients of linear transformation, e.g., loadings (in PCA).
is.cov logical, whether the input matrix is a covariance matrix (or a Gram matrix).
Value
a numeric vector of length ncol(v), the i-th value is the CPVE of the first i columns in v.
See Also
pve
Examples
## use the "swiss" data
## find two sparse PCs
s.sca <- sca(swiss, 2, gamma = sqrt(ncol(swiss)))
ld <- loadings(s.sca)
cpve(as.matrix(swiss), ld)
dist.matrix Matrix Column Distance
Description
Compute the distance between two matrices. The distance between two matrices is defined as the
sum of distances between column pairs. This function matches the columns of two matrices, such
that the matrix distance (i.e., the sum of paired column distances) is minimized. This is accomplished by solving an optimization over column permutation. Given two matrices, x and y, find
permutation p() that minimizes sum_i similarity(x[,p(i)], y[,i]), where the similarity() can
be "euclidean" distance, 1 - "cosine", or "maximum" difference (manhattan distance). The solution
is computed by clue::solve_LSAP().
Usage
dist.matrix(x, y, method = "euclidean")
Arguments
x, y matrix or Matrix, of the same number of rows. The columns of x and y will be
scaled to unit length.
method distance measure, "maximum", "cosine", or "euclidean" are implemented.
Value
a list of four components:
dist dist, the distance matrix.
match solve_LSAP, the column matches.
value numeric vector, the distance between pairs of columns.
method character, the distance measure used.
nrow integer, the dimension of the input matrices, i.e., nrow(x).
See Also
clue::solve_LSAP
Examples
x <- diag(4)
y <- x + rnorm(16, sd = 0.05) # add some noise
y = t(t(y) / sqrt(colSums(y ^ 2))) ## normalize the columns
## euclidean distance between column pairs, with minimal matches
dist.matrix(x, y, "euclidean")
distance Matrix Distance
Description
Matrix Distance
Usage
distance(x, y, method = "euclidean")
Arguments
x, y matrix or Matrix, of the same number of rows. The columns of x and y will be
scaled to unit length.
method distance measure, "maximum", "cosine", or "euclidean" are implemented.
Value
numeric, the distance between two matrices.
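A minimal usage sketch (illustrative, not from the manual), assuming distance is exported:
x <- diag(4)
y <- x + rnorm(16, sd = 0.05) ## a noisy copy of x
distance(x, y, method = "euclidean") ## a single numeric summarizing the matrix distance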
exp.frac Calculate fractional exponent/power
Description
Calculate fractional exponent/power, a^(num/den), where a could be negative.
Usage
## S3 method for class 'frac'
exp(a, num, den)
Arguments
a numeric(1), base (could be negative).
num a positive integer, numerator of the exponent.
den a positive integer, denominator of the exponent.
Value
numeric, the evaluated a^(num/den)
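For context (plain R arithmetic, not a call into the package): naive exponentiation of a negative base with a fractional exponent returns NaN, which is the situation this helper addresses. The sign-preserving formula below is just one convention for odd denominators, not necessarily the package's exact rule:
(-8)^(1/3) ## NaN in plain R arithmetic
sign(-8) * abs(-8)^(1/3) ## -2, the real cube root of -8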
hard Hard-thresholding
Description
Perform hard-thresholding given the cut-off value.
Usage
hard(x, t)
Arguments
x any numerical matrix or vector.
t numeric, the amount to hard-threshold, i.e., sgn(x_ij) (|x_ij| − t)_+.
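A minimal usage sketch (illustrative; the exact output depends on the package's thresholding rule), assuming hard is exported:
x <- c(-3, -0.4, 0.2, 2)
hard(x, t = 1) ## elements smaller in magnitude than t are expected to become zero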
inner Matrix Inner Product
Description
Calculate the custom matrix inner product z of two matrices, x and y, where z[i,j] = FUN(x[,i],
y[,j]).
Usage
inner(x, y, FUN = "crossprod", ...)
Arguments
x, y matrix or Matrix.
FUN function or a character(1) name of base function. The function should take
in two vectors as input and output a numeric(1) result.
... additional parameters for FUN.
Value
matrix, inner product of x and y.
Examples
x <- matrix(1:6, 2, 3)
y <- matrix(7:12, 2, 3)
## The default is equivalent to `crossprod(x, y)`
inner(x, y)
## We can compute the pair-wise Euclidean distance of columns.
EuclideanDistance = function(x, y) crossprod(x, y)^2
inner(x, y, EuclideanDistance)
labelCluster Label Cluster
Description
Assign cluster labels to each row from the membership matrix.
Usage
labelCluster(x, ties.method = "random")
Arguments
x matrix with non-negative entries, where x[i,j] is the estimated likelihood (or any equivalent measure) that node i belongs to block j. The higher, the more likely.
ties.method character, how should ties be handled, "random", "first", "last" are allowed.
See base::rank() for details.
Value
integer vector of the same length as x. Each entry is one of 1, 2, ..., ncol(x).
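A minimal usage sketch (illustrative, not from the manual), assuming labelCluster is exported:
membership <- matrix(c(0.9, 0.1,
                       0.2, 0.8,
                       0.4, 0.6), nrow = 3, byrow = TRUE)
labelCluster(membership) ## expected to assign each row to its highest-scoring block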
misClustRate Mis-Classification Rate (MCR)
Description
Compute the empirical MCR, assuming that #clusters = #blocks. This calculation allows a permutation on clusters.
Usage
misClustRate(cluster, truth)
Arguments
cluster vector of integer or factor, estimated cluster membership.
truth a vector of the same length as clusters, the true cluster labels.
Value
numeric, the MCR.
Examples
truth = rep(1:3, each = 30)
cluster = rep(3:1, times = c(25, 32, 33))
misClustRate(cluster, truth)
norm.Lp Element-wise Matrix Norm
Description
Compute element-wise matrix Lp-norm. This is a helper function to shrinkage().
Usage
norm.Lp(x, p = 1)
Arguments
x a matrix or Matrix.
p numeric(1), the p for defining the Lp norm.
Value
numeric(1), the absolute sum of all elements.
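A minimal usage sketch (illustrative, not from the manual), assuming norm.Lp is exported; with the default p = 1 the value is the absolute sum of all entries:
m <- matrix(c(1, -2, 3, -4), 2, 2)
norm.Lp(m, p = 1) ## 10, the absolute sum of all entries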
permColumn Permute columns of a block membership matrix
Description
Perform column permutation of block membership matrix for aesthetic visualization. That is, the
k-th column gives k-th cluster. This is done by ranking the column sums of squares (by default).
Usage
permColumn(x, s = 2)
Arguments
x a non-negative matrix, nNode x nBlock,
s integer, order of non-linear
pitprops Pitprops correlation data
Description
The pitprops data is a correlation matrix that was calculated from 180 observations. There are 13
explanatory variables. Jeffers (1967) tried to interpret the first six PCs. This is a classical example
showing the difficulty of interpreting principal components.
References
<NAME>. (1967) "Two case studies in the application of principal component", Applied Statistics,
16, 225-236.
Examples
## NOT TEST
data(pitprops)
ggcorrplot::ggcorrplot(pitprops)
polar Polar Decomposition
Description
Perform the polar decomposition of an n x p (n > p) matrix x into two parts: u and h, where u is an
n x p unitary matrix with orthogonal columns (i.e. crossprod(u) is the identity matrix), and h is
a p x p positive-semidefinite Hermitian matrix. The function returns the u matrix. This is a helper
function of prs().
Usage
polar(x)
Arguments
x a matrix or Matrix, which is presumed full-rank.
Value
a matrix of the unitary part of the polar decomposition.
References
<NAME>. and <NAME>. (2020) "A New Basis for Sparse Principal Component Analysis."
Examples
x <- matrix(1:6, nrow = 3)
polar_x <- polar(x)
print.sca Print SCA
Description
Print SCA
Usage
## S3 method for class 'sca'
print(x, verbose = FALSE, ...)
Arguments
x an sca object.
verbose logical(1), whether to print out loadings.
... additional input to generic print.
Value
Print an sca object interactively.
print.sma Print SMA
Description
Print SMA
Usage
## S3 method for class 'sma'
print(x, verbose = FALSE, ...)
Arguments
x an sma object.
verbose logical(1), whether to print out loadings.
... additional input to generic print.
Value
Print an sma object interactively.
prs Polar-Rotate-Shrink
Description
This function is a helper function of sma(). It performs polar decomposition, orthogonal rotation,
and soft-thresholding shrinkage in order. The three steps together enable sparse estimates of the
SMA and SCA.
Usage
prs(x, z.hat, gamma, rotate, shrink, normalize, order, flip, epsilon)
Arguments
x, z.hat the matrix product crossprod(x, z.hat) is the actual Polar-Rotate-Shrink object. x and z.hat are input separately because the former is additionally used to compute the proportion of variance explained, in the case when order = TRUE.
gamma numeric, the sparsity parameter.
rotate character(1), rotation method. Two options are currently available: "varimax"
(default) or "absmin" (see details).
shrink character(1), shrinkage method, either "soft"- (default) or "hard"-thresholding
(see details).
normalize logical, whether row normalization should be done before the rotation and undone afterward (see details).
order logical, whether to re-order the columns of the estimates (see Details below).
flip logical, whether to flip the signs of the columns of estimates such that all
columns are positive-skewed (see details).
epsilon numeric, tolerance of convergence precision (default to 0.00001).
Details
rotate: The rotate option specifies the rotation technique to use. Currently, there are two built-in options—“varimax” and “absmin”. The “varimax” rotation maximizes the element-wise L4 norm of the rotated matrix. It is faster and computationally more stable. The “absmin” rotation minimizes the absolute sum of the rotated matrix. It is sharper (as it directly minimizes the L1 norm) but slower and computationally less stable.
shrink: The shrink option specifies the shrinkage operator to use. Currently, there are two built-in options—“soft”- and “hard”-thresholding. The “soft”-thresholding universally reduces all elements and sets the small elements to zeros. The “hard”-thresholding only sets the small elements to zeros.
normalize: The argument normalize gives an indication of if and how any normalization should be done before rotation, and then undone after rotation. If normalize is FALSE (the default) no normalization is done. If normalize is TRUE then Kaiser normalization is done. (So squared row entries of normalized x sum to 1.0. This is sometimes called Horst normalization.) For rotate="absmin", if normalize is a vector of length equal to the number of indicators (i.e., the number of rows of x), then the columns are divided by normalize before rotation and multiplied by normalize after rotation. Also, if normalize is a function, it should take x as an argument and return a vector which is used like the vector above.
order: In PCA (and SVD), the principal components (and the singular vectors) are ordered. For this, we order the sparse components (i.e., the columns of z or y) by their explained variance in the data, which is defined as sum((x %*% y)^2), where y is a column of the sparse component. Note: not to be confused with the cumulative proportion of variance explained by y (and z), particularly when y (and z) may not be strictly orthogonal.
flip: The argument flip indicates whether the columns of the estimated sparse component should be flipped. Note that the estimated (sparse) loadings, i.e., the weights on original variables, are column-wise invariant to sign flipping. This is because flipping a principal direction does not influence the amount of variance explained by the component. If flip=TRUE, the columns of loadings will be flipped accordingly, such that each column is positive-skewed. This means that for each column, the sum of cubic elements (i.e., sum(x^3)) is non-negative.
Value
a matrix of the sparse estimate, of the same dimension as crossprod(x, z.hat).
References
<NAME>. and <NAME>. (2020) "A New Basis for Sparse Principal Component Analysis."
See Also
sma, sca, polar, rotation, shrinkage
pve Proportion of Variance Explained (PVE)
Description
Calculate the Proportion of variance explained by a set of linear transformation, (e.g. eigenvectors).
Usage
pve(x, v, is.cov = FALSE)
Arguments
x matrix or Matrix, the original data matrix or the Gram matrix.
v matrix or Matrix, coefficients of linear transformation, e.g., loadings (in PCA).
is.cov logical, whether the input matrix is a covariance matrix (or a Gram matrix).
Value
a numeric value between 0 and 1, the proportion of total variance in x explained by the PCs whose
loadings are in v.
References
<NAME>., & <NAME>. (2008). "Sparse principal component analysis via regularized low rank
matrix approximation." Journal of multivariate analysis, 99(6), 1015-1034.
Examples
## use the "swiss" data
## find two sparse PCs
s.sca <- sca(swiss, 2, gamma = sqrt(ncol(swiss)))
ld <- loadings(s.sca)
pve(as.matrix(swiss), ld)
rootmatrix Find root matrix
Description
Find the root matrix (x) from the Gram matrix (i.e., crossprod(x)). This is also useful when the
input is a covariance matrix, up to a scaling factor of n-1, where n is the sample size.
Usage
rootmatrix(x)
Arguments
x a symmetric matrix (will trigger error if not symmetric).
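A minimal usage sketch (illustrative, not from the manual), assuming rootmatrix is exported:
m <- matrix(rnorm(12), 4, 3)
g <- crossprod(m) ## a Gram matrix
r <- rootmatrix(g) ## a root of g
all.equal(crossprod(r), g) ## crossprod(r) should reproduce g up to numerical error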
rotation Varimax Rotation
Description
Perform varimax rotation. Flip the signs of columns so that the resulting matrix is positive-skewed.
Usage
rotation(
x,
rotate = c("varimax", "absmin"),
normalize = FALSE,
flip = TRUE,
eps = 1e-06
)
Arguments
x a matrix or Matrix.
rotate character(1), rotation method. Two options are currently available: "varimax"
(default) or "absmin" (see details).
normalize logical, whether row normalization should be done before the rotation and undone afterward (see details).
flip logical, whether to flip the signs of the columns of estimates such that all
columns are positive-skewed (see details).
eps numeric precision tolerance.
Details
rotate: The rotate option specifies the rotation technique to use. Currently, there are two built-in options—“varimax” and “absmin”. The “varimax” rotation maximizes the element-wise L4 norm of the rotated matrix. It is faster and computationally more stable. The “absmin” rotation minimizes the absolute sum of the rotated matrix. It is sharper (as it directly minimizes the L1 norm) but slower and computationally less stable.
normalize: The argument normalize gives an indication of if and how any normalization should be done before rotation, and then undone after rotation. If normalize is FALSE (the default) no normalization is done. If normalize is TRUE then Kaiser normalization is done. (So squared row entries of normalized x sum to 1.0. This is sometimes called Horst normalization.) For rotate="absmin", if normalize is a vector of length equal to the number of indicators (i.e., the number of rows of x), then the columns are divided by normalize before rotation and multiplied by normalize after rotation. Also, if normalize is a function, it should take x as an argument and return a vector which is used like the vector above.
flip: The argument flip indicates whether the columns of the estimated sparse component should be flipped. Note that the estimated (sparse) loadings, i.e., the weights on original variables, are column-wise invariant to sign flipping. This is because flipping a principal direction does not influence the amount of variance explained by the component. If flip=TRUE, the columns of loadings will be flipped accordingly, such that each column is positive-skewed. This means that for each column, the sum of cubic elements (i.e., sum(x^3)) is non-negative.
Value
the rotated matrix of the same dimension as x.
References
<NAME>. and <NAME>. (2020) "A New Basis for Sparse Principal Component Analysis."
See Also
prs, varimax
Examples
## use the "swiss" data
fa <- factanal( ~., 2, data = swiss, rotation = "none")
rotation(loadings(fa))
sca Sparse Component Analysis
Description
sca performs sparse principal components analysis on the given numeric data matrix. Choices of
rotation techniques and shrinkage operators are available.
Usage
sca(
x,
k = min(5, dim(x)),
gamma = NULL,
is.cov = FALSE,
rotate = c("varimax", "absmin"),
shrink = c("soft", "hard"),
center = TRUE,
scale = FALSE,
normalize = FALSE,
order = TRUE,
flip = TRUE,
max.iter = 1000,
epsilon = 1e-05,
quiet = TRUE
)
Arguments
x matrix or Matrix to be analyzed.
k integer, rank of approximation.
gamma numeric(1), sparsity parameter, default to sqrt(pk), where n x p is the dimension of x.
is.cov logical, default to FALSE, whether the x is a covariance matrix (or Gram matrix,
i.e., crossprod() of some design matrix). If TRUE, both center and scale will
be ignored/skipped.
rotate character(1), rotation method. Two options are currently available: "varimax"
(default) or "absmin" (see details).
shrink character(1), shrinkage method, either "soft"- (default) or "hard"-thresholding
(see details).
center logical, whether to center columns of x (see scale()).
scale logical, whether to scale columns of x (see scale()).
normalize logical, whether row normalization should be done before the rotation and undone afterward (see details).
order logical, whether to re-order the columns of the estimates (see Details below).
flip logical, whether to flip the signs of the columns of estimates such that all
columns are positive-skewed (see details).
max.iter integer, maximum number of iterations (default 1,000).
epsilon numeric, tolerance of convergence precision (default to 0.00001).
quiet logical, whether to mute the process report (default to TRUE)
Details
rotate: The rotate option specifies the rotation technique to use. Currently, there are two built-in options—“varimax” and “absmin”. The “varimax” rotation maximizes the element-wise L4 norm of the rotated matrix. It is faster and computationally more stable. The “absmin” rotation minimizes the absolute sum of the rotated matrix. It is sharper (as it directly minimizes the L1 norm) but slower and computationally less stable.
shrink: The shrink option specifies the shrinkage operator to use. Currently, there are two built-in options—“soft”- and “hard”-thresholding. The “soft”-thresholding universally reduces all elements and sets the small elements to zeros. The “hard”-thresholding only sets the small elements to zeros.
normalize: The argument normalize gives an indication of if and how any normalization should be done before rotation, and then undone after rotation. If normalize is FALSE (the default) no normalization is done. If normalize is TRUE then Kaiser normalization is done. (So squared row entries of normalized x sum to 1.0. This is sometimes called Horst normalization.) For rotate="absmin", if normalize is a vector of length equal to the number of indicators (i.e., the number of rows of x), then the columns are divided by normalize before rotation and multiplied by normalize after rotation. Also, if normalize is a function, it should take x as an argument and return a vector which is used like the vector above.
order: In PCA (and SVD), the principal components (and the singular vectors) are ordered. For this, we order the sparse components (i.e., the columns of z or y) by their explained variance in the data, which is defined as sum((x %*% y)^2), where y is a column of the sparse component. Note: not to be confused with the cumulative proportion of variance explained by y (and z), particularly when y (and z) may not be strictly orthogonal.
flip: The argument flip indicates whether the columns of the estimated sparse component should be flipped. Note that the estimated (sparse) loadings, i.e., the weights on original variables, are column-wise invariant to sign flipping. This is because flipping a principal direction does not influence the amount of variance explained by the component. If flip=TRUE, the columns of loadings will be flipped accordingly, such that each column is positive-skewed. This means that for each column, the sum of cubic elements (i.e., sum(x^3)) is non-negative.
Value
an sca object that contains:
loadings matrix, sparse loadings of PCs.
scores an n x k matrix, the component scores, calculated using centered (and/or scaled)
x. This will only be available when is.cov = FALSE.
cpve a numeric vector of length k, cumulative proportion of variance in x explained
by the top PCs (after center and/or scale).
center logical, this records the center parameter.
scale logical, this records the scale parameter.
n.iter integer, number of iterations taken.
n.obs integer, sample size, that is, nrow(x).
References
<NAME>. and <NAME>. (2020) "A New Basis for Sparse Principal Component Analysis."
See Also
sma, prs
Examples
## ------ example 1 ------
## simulate a low-rank data matrix with some additive Gaussian noise
n <- 300
p <- 50
k <- 5 ## rank
z <- shrinkage(polar(matrix(runif(n * k), n, k)), sqrt(n))
b <- diag(5) * 3
y <- shrinkage(polar(matrix(runif(p * k), p, k)), sqrt(p))
e <- matrix(rnorm(n * p, sd = .01), n, p)
x <- scale(z %*% b %*% t(y) + e)
## perform sparse PCA
s.sca <- sca(x, k)
s.sca
## ------ example 2 ------
## use the `pitprops` data from the `elasticnet` package
data(pitprops)
## find 6 sparse PCs
s.sca <- sca(pitprops, 6, gamma = 6, is.cov = TRUE)
print(s.sca, verbose = TRUE)
shrinkage Shrinkage
Description
Shrink a matrix using soft-thresholding or hard-thresholding.
Usage
shrinkage(x, gamma, shrink = c("soft", "hard"), epsilon = 1e-11)
Arguments
x matrix or Matrix, to be threshold.
gamma numeric, the constraint on the Lp norm, i.e., ||x|| ≤ γ.
shrink character(1), shrinkage method, either "soft"- (default) or "hard"-thresholding
(see details).
epsilon numeric, precision tolerance. This should be greater than .Machine$double.eps.
Details
A binary search to find the cut-off value.
shrink: The shrink option specifies the shrinkage operator to use. Currently, there are two build-in
options—“soft”- and “hard”-thresholding. The “soft”-thresholding universally reduce all elements
and sets the small elements to zeros. The “hard”-thresholding only sets the small elements to zeros.
Value
a list with two components:
matrix matrix, the matrix that results from soft-thresholding
norm numeric, the norm of the matrix after soft-thresholding. This value is close to
constraint if using the second option.
References
<NAME>. and <NAME>. (2020) "A New Basis for Sparse Principal Component Analysis."
See Also
prs
Examples
x <- matrix(1:6, nrow = 3)
shrink_x <- shrinkage(x, 1)
sma Sparse Matrix Approximation
Description
Perform the sparse matrix approximation (SMA) of a data matrix x as three multiplicative components: z, b, and t(y), where z and y are sparse, and b is low-rank but not necessarily diagonal.
Usage
sma(
x,
k = min(5, dim(x)),
gamma = NULL,
rotate = c("varimax", "absmin"),
shrink = c("soft", "hard"),
center = FALSE,
scale = FALSE,
normalize = FALSE,
order = FALSE,
flip = FALSE,
max.iter = 1000,
epsilon = 1e-05,
quiet = TRUE
)
Arguments
x matrix or Matrix to be analyzed.
k integer, rank of approximation.
gamma numeric(2), sparsity parameters. If gamma is numeric(1), it is used for both the left and right sparsity components (i.e., z and y). If absent, the two parameters are set (by default) to sqrt(nk) and sqrt(pk) for z and y respectively, where n x p is the dimension of x.
rotate character(1), rotation method. Two options are currently available: "varimax"
(default) or "absmin" (see details).
shrink character(1), shrinkage method, either "soft"- (default) or "hard"-thresholding
(see details).
center logical, whether to center columns of x (see scale()).
scale logical, whether to scale columns of x (see scale()).
normalize logical, whether row normalization should be done before the rotation and undone afterward (see details).
order logical, whether to re-order the columns of the estimates (see Details below).
flip logical, whether to flip the signs of the columns of estimates such that all
columns are positive-skewed (see details).
max.iter integer, maximum number of iterations (default 1,000).
epsilon numeric, tolerance of convergence precision (default to 0.00001).
quiet logical, whether to mute the process report (default to TRUE)
Details
rotate: The rotate option specifies the rotation technique to use. Currently, there are two built-in options—“varimax” and “absmin”. The “varimax” rotation maximizes the element-wise L4 norm of the rotated matrix. It is faster and computationally more stable. The “absmin” rotation minimizes the absolute sum of the rotated matrix. It is sharper (as it directly minimizes the L1 norm) but slower and computationally less stable.
shrink: The shrink option specifies the shrinkage operator to use. Currently, there are two built-in options—“soft”- and “hard”-thresholding. The “soft”-thresholding universally reduces all elements and sets the small elements to zeros. The “hard”-thresholding only sets the small elements to zeros.
normalize: The argument normalize gives an indication of if and how any normalization should be done before rotation, and then undone after rotation. If normalize is FALSE (the default) no normalization is done. If normalize is TRUE then Kaiser normalization is done. (So squared row entries of normalized x sum to 1.0. This is sometimes called Horst normalization.) For rotate="absmin", if normalize is a vector of length equal to the number of indicators (i.e., the number of rows of x), then the columns are divided by normalize before rotation and multiplied by normalize after rotation. Also, if normalize is a function, it should take x as an argument and return a vector which is used like the vector above.
order: In PCA (and SVD), the principal components (and the singular vectors) are ordered. For this, we order the sparse components (i.e., the columns of z or y) by their explained variance in the data, which is defined as sum((x %*% y)^2), where y is a column of the sparse component. Note: not to be confused with the cumulative proportion of variance explained by y (and z), particularly when y (and z) may not be strictly orthogonal.
flip: The argument flip indicates whether the columns of the estimated sparse component should be flipped. Note that the estimated (sparse) loadings, i.e., the weights on original variables, are column-wise invariant to sign flipping. This is because flipping a principal direction does not influence the amount of variance explained by the component. If flip=TRUE, the columns of loadings will be flipped accordingly, such that each column is positive-skewed. This means that for each column, the sum of cubic elements (i.e., sum(x^3)) is non-negative.
Value
an sma object that contains:
z, b, t(y) the three parts in the SMA. z is a sparse n x k matrix that contains the row components (loadings). The row names of z inherit the row names of x. b is a k x k matrix that contains the scores of SMA; the Frobenius norm of b equals the total variance explained by the SMA. y is a sparse p x k matrix that contains the column components (loadings). The row names of y inherit the column names of x.
score the total variance explained by the SMA. This is the optimal objective value
obtained.
n.iter integer, the number of iterations taken.
References
<NAME>. and <NAME>. (2020) "A New Basis for Sparse Principal Component Analysis."
See Also
sca, prs
Examples
## simulate a rank-5 data matrix with some additive Gaussian noise
n <- 300
p <- 50
k <- 5 ## rank
z <- shrinkage(polar(matrix(runif(n * k), n, k)), sqrt(n))
b <- diag(5) * 3
y <- shrinkage(polar(matrix(runif(p * k), p, k)), sqrt(p))
e <- matrix(rnorm(n * p, sd = .01), n, p)
x <- scale(z %*% b %*% t(y) + e)
## perform sparse matrix approximation
s.sma <- sma(x, k)
s.sma
soft Soft-thresholding
Description
Perform soft-thresholding given the cut-off value.
Usage
soft(x, t)
Arguments
x any numerical matrix or vector.
t numeric, the amount to soft-threshold, i.e., sgn(x_ij) (|x_ij| − t)_+.
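A minimal usage sketch (illustrative, not from the manual), assuming soft is exported; under the thresholding rule above, entries are shrunk toward zero by t and those below t in magnitude become zero:
x <- c(-3, -0.4, 0.2, 2)
soft(x, t = 1) ## expected to give approximately -2, 0, 0, 1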
varimax Varimax Rotation
Description
This is a re-implementation of stats::varimax, which (1) adds a parameter for the maximum number of iterations, (2) sets the default normalize parameter to FALSE, (3) outputs the number of iterations taken, and (4) returns a regular matrix rather than an object of class loadings.
Usage
varimax(x, normalize = FALSE, eps = 1e-05, maxit = 1000L)
Arguments
x A loadings matrix, with p rows and k < p columns
normalize logical. Should Kaiser normalization be performed? If so the rows of x are
re-scaled to unit length before rotation, and scaled back afterwards.
eps The tolerance for stopping: the relative change in the sum of singular values.
maxit integer, maximum number of iterations (default 1,000).
Value
A list with three elements:
rotated the rotated matrix.
rotmat the (orthogonal) rotation matrix.
n.iter the number of iterations taken.
See Also
stats::varimax
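A minimal usage sketch (not part of the package examples), mirroring the factanal-based examples elsewhere in this manual:
fa <- factanal(~., 2, data = swiss, rotation = "none")
vm <- varimax(loadings(fa), normalize = FALSE)
vm$rotated ## a regular matrix of rotated loadings
vm$n.iter ## number of iterations taken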
varimax.criteria The varimax criterion
Description
Calculate the varimax criterion
Usage
varimax.criteria(x)
Arguments
x a matrix or Matrix.
Value
a numeric of evaluated varimax criterion.
References
Varimax rotation (Wikipedia)
Examples
## use the "swiss" data
fa <- factanal( ~., 2, data = swiss, rotation = "none")
lds <- loadings(fa)
## compute varimax criterion:
varimax.criteria(lds)
## compute varimax criterion (after the varimax rotation):
rlds <- rotation(lds, rotate = "varimax")
varimax.criteria(rlds)
vgQ.absmin Gradient of Absmin Criterion
Description
This is a helper function for absmin and is not to be used directly by users.
Usage
vgQ.absmin(x)
Arguments
x a matrix or Matrix, initial factor loadings matrix for which the rotation criterion is to be optimized.
Value
a list required by GPArotation::GPForth for the absmin rotation.
Examples
## Not run:
## NOT RUN
## NOT for users to call.
## End(Not run) |
github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository | go | Go | None
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package serverlessapplicationrepository provides the API client, operations,
and parameter types for AWSServerlessApplicationRepository.
The AWS Serverless Application Repository makes it easy for developers and enterprises to quickly find and deploy serverless applications in the AWS Cloud.
For more information about serverless applications, see Serverless Computing and Applications on the AWS website.
The AWS Serverless Application Repository is deeply integrated with the AWS Lambda console, so that developers of all levels can get started with serverless computing without needing to learn anything new.
You can use category keywords to browse for applications such as web and mobile backends, data processing applications, or chatbots. You can also search for applications by name, publisher, or event source. To use an application, you simply choose it, configure any required fields, and deploy it with a few clicks. You can also easily publish applications, sharing them publicly with the community at large, or privately within your team or across your organization.
To publish a serverless application (or app), you can use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs to upload the code.
Along with the code, you upload a simple manifest file, also known as the AWS Serverless Application Model (AWS SAM) template. For more information about AWS SAM, see AWS Serverless Application Model (AWS SAM) on the AWS Labs GitHub repository.
The AWS Serverless Application Repository Developer Guide contains more information about the two developer experiences available:
* Consuming Applications – Browse for applications and view information about them, including source code and readme files. Also install, configure, and deploy applications of your choosing.
* Publishing Applications – Configure and upload applications to make them available to other developers, and publish new versions of applications.
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [func NewDefaultEndpointResolver() *internalendpoints.Resolver](#NewDefaultEndpointResolver)
* [func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options)](#WithAPIOptions)
* [func WithEndpointResolver(v EndpointResolver) func(*Options)](#WithEndpointResolver)deprecated
* [func WithEndpointResolverV2(v EndpointResolverV2) func(*Options)](#WithEndpointResolverV2)
* [type Client](#Client)
* + [func New(options Options, optFns ...func(*Options)) *Client](#New)
+ [func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client](#NewFromConfig)
* + [func (c *Client) CreateApplication(ctx context.Context, params *CreateApplicationInput, optFns ...func(*Options)) (*CreateApplicationOutput, error)](#Client.CreateApplication)
+ [func (c *Client) CreateApplicationVersion(ctx context.Context, params *CreateApplicationVersionInput, ...) (*CreateApplicationVersionOutput, error)](#Client.CreateApplicationVersion)
+ [func (c *Client) CreateCloudFormationChangeSet(ctx context.Context, params *CreateCloudFormationChangeSetInput, ...) (*CreateCloudFormationChangeSetOutput, error)](#Client.CreateCloudFormationChangeSet)
+ [func (c *Client) CreateCloudFormationTemplate(ctx context.Context, params *CreateCloudFormationTemplateInput, ...) (*CreateCloudFormationTemplateOutput, error)](#Client.CreateCloudFormationTemplate)
+ [func (c *Client) DeleteApplication(ctx context.Context, params *DeleteApplicationInput, optFns ...func(*Options)) (*DeleteApplicationOutput, error)](#Client.DeleteApplication)
+ [func (c *Client) GetApplication(ctx context.Context, params *GetApplicationInput, optFns ...func(*Options)) (*GetApplicationOutput, error)](#Client.GetApplication)
+ [func (c *Client) GetApplicationPolicy(ctx context.Context, params *GetApplicationPolicyInput, ...) (*GetApplicationPolicyOutput, error)](#Client.GetApplicationPolicy)
+ [func (c *Client) GetCloudFormationTemplate(ctx context.Context, params *GetCloudFormationTemplateInput, ...) (*GetCloudFormationTemplateOutput, error)](#Client.GetCloudFormationTemplate)
+ [func (c *Client) ListApplicationDependencies(ctx context.Context, params *ListApplicationDependenciesInput, ...) (*ListApplicationDependenciesOutput, error)](#Client.ListApplicationDependencies)
+ [func (c *Client) ListApplicationVersions(ctx context.Context, params *ListApplicationVersionsInput, ...) (*ListApplicationVersionsOutput, error)](#Client.ListApplicationVersions)
+ [func (c *Client) ListApplications(ctx context.Context, params *ListApplicationsInput, optFns ...func(*Options)) (*ListApplicationsOutput, error)](#Client.ListApplications)
+ [func (c *Client) PutApplicationPolicy(ctx context.Context, params *PutApplicationPolicyInput, ...) (*PutApplicationPolicyOutput, error)](#Client.PutApplicationPolicy)
+ [func (c *Client) UnshareApplication(ctx context.Context, params *UnshareApplicationInput, optFns ...func(*Options)) (*UnshareApplicationOutput, error)](#Client.UnshareApplication)
+ [func (c *Client) UpdateApplication(ctx context.Context, params *UpdateApplicationInput, optFns ...func(*Options)) (*UpdateApplicationOutput, error)](#Client.UpdateApplication)
* [type CreateApplicationInput](#CreateApplicationInput)
* [type CreateApplicationOutput](#CreateApplicationOutput)
* [type CreateApplicationVersionInput](#CreateApplicationVersionInput)
* [type CreateApplicationVersionOutput](#CreateApplicationVersionOutput)
* [type CreateCloudFormationChangeSetInput](#CreateCloudFormationChangeSetInput)
* [type CreateCloudFormationChangeSetOutput](#CreateCloudFormationChangeSetOutput)
* [type CreateCloudFormationTemplateInput](#CreateCloudFormationTemplateInput)
* [type CreateCloudFormationTemplateOutput](#CreateCloudFormationTemplateOutput)
* [type DeleteApplicationInput](#DeleteApplicationInput)
* [type DeleteApplicationOutput](#DeleteApplicationOutput)
* [type EndpointParameters](#EndpointParameters)
* + [func (p EndpointParameters) ValidateRequired() error](#EndpointParameters.ValidateRequired)
+ [func (p EndpointParameters) WithDefaults() EndpointParameters](#EndpointParameters.WithDefaults)
* [type EndpointResolver](#EndpointResolver)
* + [func EndpointResolverFromURL(url string, optFns ...func(*aws.Endpoint)) EndpointResolver](#EndpointResolverFromURL)
* [type EndpointResolverFunc](#EndpointResolverFunc)
* + [func (fn EndpointResolverFunc) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error)](#EndpointResolverFunc.ResolveEndpoint)
* [type EndpointResolverOptions](#EndpointResolverOptions)
* [type EndpointResolverV2](#EndpointResolverV2)
* + [func NewDefaultEndpointResolverV2() EndpointResolverV2](#NewDefaultEndpointResolverV2)
* [type GetApplicationInput](#GetApplicationInput)
* [type GetApplicationOutput](#GetApplicationOutput)
* [type GetApplicationPolicyInput](#GetApplicationPolicyInput)
* [type GetApplicationPolicyOutput](#GetApplicationPolicyOutput)
* [type GetCloudFormationTemplateInput](#GetCloudFormationTemplateInput)
* [type GetCloudFormationTemplateOutput](#GetCloudFormationTemplateOutput)
* [type HTTPClient](#HTTPClient)
* [type HTTPSignerV4](#HTTPSignerV4)
* [type ListApplicationDependenciesAPIClient](#ListApplicationDependenciesAPIClient)
* [type ListApplicationDependenciesInput](#ListApplicationDependenciesInput)
* [type ListApplicationDependenciesOutput](#ListApplicationDependenciesOutput)
* [type ListApplicationDependenciesPaginator](#ListApplicationDependenciesPaginator)
* + [func NewListApplicationDependenciesPaginator(client ListApplicationDependenciesAPIClient, ...) *ListApplicationDependenciesPaginator](#NewListApplicationDependenciesPaginator)
* + [func (p *ListApplicationDependenciesPaginator) HasMorePages() bool](#ListApplicationDependenciesPaginator.HasMorePages)
+ [func (p *ListApplicationDependenciesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListApplicationDependenciesOutput, error)](#ListApplicationDependenciesPaginator.NextPage)
* [type ListApplicationDependenciesPaginatorOptions](#ListApplicationDependenciesPaginatorOptions)
* [type ListApplicationVersionsAPIClient](#ListApplicationVersionsAPIClient)
* [type ListApplicationVersionsInput](#ListApplicationVersionsInput)
* [type ListApplicationVersionsOutput](#ListApplicationVersionsOutput)
* [type ListApplicationVersionsPaginator](#ListApplicationVersionsPaginator)
* + [func NewListApplicationVersionsPaginator(client ListApplicationVersionsAPIClient, params *ListApplicationVersionsInput, ...) *ListApplicationVersionsPaginator](#NewListApplicationVersionsPaginator)
* + [func (p *ListApplicationVersionsPaginator) HasMorePages() bool](#ListApplicationVersionsPaginator.HasMorePages)
+ [func (p *ListApplicationVersionsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListApplicationVersionsOutput, error)](#ListApplicationVersionsPaginator.NextPage)
* [type ListApplicationVersionsPaginatorOptions](#ListApplicationVersionsPaginatorOptions)
* [type ListApplicationsAPIClient](#ListApplicationsAPIClient)
* [type ListApplicationsInput](#ListApplicationsInput)
* [type ListApplicationsOutput](#ListApplicationsOutput)
* [type ListApplicationsPaginator](#ListApplicationsPaginator)
* + [func NewListApplicationsPaginator(client ListApplicationsAPIClient, params *ListApplicationsInput, ...) *ListApplicationsPaginator](#NewListApplicationsPaginator)
* + [func (p *ListApplicationsPaginator) HasMorePages() bool](#ListApplicationsPaginator.HasMorePages)
+ [func (p *ListApplicationsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListApplicationsOutput, error)](#ListApplicationsPaginator.NextPage)
* [type ListApplicationsPaginatorOptions](#ListApplicationsPaginatorOptions)
* [type Options](#Options)
* + [func (o Options) Copy() Options](#Options.Copy)
* [type PutApplicationPolicyInput](#PutApplicationPolicyInput)
* [type PutApplicationPolicyOutput](#PutApplicationPolicyOutput)
* [type ResolveEndpoint](#ResolveEndpoint)
* + [func (m *ResolveEndpoint) HandleSerialize(ctx context.Context, in middleware.SerializeInput, ...) (out middleware.SerializeOutput, metadata middleware.Metadata, err error)](#ResolveEndpoint.HandleSerialize)
+ [func (*ResolveEndpoint) ID() string](#ResolveEndpoint.ID)
* [type UnshareApplicationInput](#UnshareApplicationInput)
* [type UnshareApplicationOutput](#UnshareApplicationOutput)
* [type UpdateApplicationInput](#UpdateApplicationInput)
* [type UpdateApplicationOutput](#UpdateApplicationOutput)
### Constants [¶](#pkg-constants)
```
const ServiceAPIVersion = "2017-09-08"
```
```
const ServiceID = "ServerlessApplicationRepository"
```
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [NewDefaultEndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L33) [¶](#NewDefaultEndpointResolver)
```
func NewDefaultEndpointResolver() *[internalendpoints](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints).[Resolver](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints#Resolver)
```
NewDefaultEndpointResolver constructs a new service endpoint resolver
####
func [WithAPIOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_client.go#L153) [¶](#WithAPIOptions)
added in v1.0.0
```
func WithAPIOptions(optFns ...func(*[middleware](/github.com/aws/smithy-go/middleware).[Stack](/github.com/aws/smithy-go/middleware#Stack)) [error](/builtin#error)) func(*[Options](#Options))
```
WithAPIOptions returns a functional option for setting the Client's APIOptions option.
####
func [WithEndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_client.go#L164)
deprecated
```
func WithEndpointResolver(v [EndpointResolver](#EndpointResolver)) func(*[Options](#Options))
```
Deprecated: EndpointResolver and WithEndpointResolver. Providing a value for this field will likely prevent you from using any endpoint-related service features released after the introduction of EndpointResolverV2 and BaseEndpoint.
To migrate an EndpointResolver implementation that uses a custom endpoint, set the client option BaseEndpoint instead.
####
func [WithEndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_client.go#L172) [¶](#WithEndpointResolverV2)
added in v1.13.0
```
func WithEndpointResolverV2(v [EndpointResolverV2](#EndpointResolverV2)) func(*[Options](#Options))
```
WithEndpointResolverV2 returns a functional option for setting the Client's EndpointResolverV2 option.
### Types [¶](#pkg-types)
####
type [Client](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_client.go#L30) [¶](#Client)
```
type Client struct {
// contains filtered or unexported fields
}
```
Client provides the API client to make operations call for AWSServerlessApplicationRepository.
####
func [New](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_client.go#L37) [¶](#New)
```
func New(options [Options](#Options), optFns ...func(*[Options](#Options))) *[Client](#Client)
```
New returns an initialized Client based on the functional options. Provide additional functional options to further configure the behavior of the client,
such as changing the client's endpoint or adding custom middleware behavior.
####
func [NewFromConfig](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_client.go#L281) [¶](#NewFromConfig)
```
func NewFromConfig(cfg [aws](/github.com/aws/aws-sdk-go-v2/aws).[Config](/github.com/aws/aws-sdk-go-v2/aws#Config), optFns ...func(*[Options](#Options))) *[Client](#Client)
```
NewFromConfig returns a new client from the provided config.
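A minimal construction sketch using the shared configuration loader from the `config` package; region and credentials are resolved by the standard default chain:
```
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository"
)

func main() {
	// Load region, credentials, and other settings from the default sources.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}

	// Construct the Serverless Application Repository client from that config.
	client := serverlessapplicationrepository.NewFromConfig(cfg)
	_ = client
}
```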
####
func (*Client) [CreateApplication](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateApplication.go#L21) [¶](#Client.CreateApplication)
```
func (c *[Client](#Client)) CreateApplication(ctx [context](/context).[Context](/context#Context), params *[CreateApplicationInput](#CreateApplicationInput), optFns ...func(*[Options](#Options))) (*[CreateApplicationOutput](#CreateApplicationOutput), [error](/builtin#error))
```
Creates an application, optionally including an AWS SAM file to create the first application version in the same call.
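A hedged sketch of a CreateApplication call; only the three required members (Author, Description, Name) are set, and all values are placeholders:
```
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository"
)

// createApplication publishes a new application with placeholder metadata.
func createApplication(ctx context.Context, client *serverlessapplicationrepository.Client) (*serverlessapplicationrepository.CreateApplicationOutput, error) {
	return client.CreateApplication(ctx, &serverlessapplicationrepository.CreateApplicationInput{
		Author:      aws.String("example-author"),         // required
		Description: aws.String("An example application"), // required
		Name:        aws.String("example-app"),            // required
		// Optionally attach a packaged SAM template to create the first version,
		// via TemplateUrl or TemplateBody (only one of the two may be set).
	})
}
```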
####
func (*Client) [CreateApplicationVersion](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateApplicationVersion.go#L20) [¶](#Client.CreateApplicationVersion)
```
func (c *[Client](#Client)) CreateApplicationVersion(ctx [context](/context).[Context](/context#Context), params *[CreateApplicationVersionInput](#CreateApplicationVersionInput), optFns ...func(*[Options](#Options))) (*[CreateApplicationVersionOutput](#CreateApplicationVersionOutput), [error](/builtin#error))
```
Creates an application version.
####
func (*Client) [CreateCloudFormationChangeSet](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateCloudFormationChangeSet.go#L20) [¶](#Client.CreateCloudFormationChangeSet)
```
func (c *[Client](#Client)) CreateCloudFormationChangeSet(ctx [context](/context).[Context](/context#Context), params *[CreateCloudFormationChangeSetInput](#CreateCloudFormationChangeSetInput), optFns ...func(*[Options](#Options))) (*[CreateCloudFormationChangeSetOutput](#CreateCloudFormationChangeSetOutput), [error](/builtin#error))
```
Creates an AWS CloudFormation change set for the given application.
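A sketch of creating a change set; ApplicationId and StackName are the required members, and the capability shown is just one of the accepted values listed in the input type below:
```
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository"
)

// createChangeSet requests a CloudFormation change set for the application ARN.
func createChangeSet(ctx context.Context, client *serverlessapplicationrepository.Client, appARN, stackName string) (*serverlessapplicationrepository.CreateCloudFormationChangeSetOutput, error) {
	return client.CreateCloudFormationChangeSet(ctx, &serverlessapplicationrepository.CreateCloudFormationChangeSetInput{
		ApplicationId: aws.String(appARN),    // required
		StackName:     aws.String(stackName), // required
		Capabilities:  []string{"CAPABILITY_IAM"},
	})
}
```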
####
func (*Client) [CreateCloudFormationTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateCloudFormationTemplate.go#L20) [¶](#Client.CreateCloudFormationTemplate)
```
func (c *[Client](#Client)) CreateCloudFormationTemplate(ctx [context](/context).[Context](/context#Context), params *[CreateCloudFormationTemplateInput](#CreateCloudFormationTemplateInput), optFns ...func(*[Options](#Options))) (*[CreateCloudFormationTemplateOutput](#CreateCloudFormationTemplateOutput), [error](/builtin#error))
```
Creates an AWS CloudFormation template.
####
func (*Client) [DeleteApplication](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_DeleteApplication.go#L19) [¶](#Client.DeleteApplication)
```
func (c *[Client](#Client)) DeleteApplication(ctx [context](/context).[Context](/context#Context), params *[DeleteApplicationInput](#DeleteApplicationInput), optFns ...func(*[Options](#Options))) (*[DeleteApplicationOutput](#DeleteApplicationOutput), [error](/builtin#error))
```
Deletes the specified application.
####
func (*Client) [GetApplication](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_GetApplication.go#L20) [¶](#Client.GetApplication)
```
func (c *[Client](#Client)) GetApplication(ctx [context](/context).[Context](/context#Context), params *[GetApplicationInput](#GetApplicationInput), optFns ...func(*[Options](#Options))) (*[GetApplicationOutput](#GetApplicationOutput), [error](/builtin#error))
```
Gets the specified application.
####
func (*Client) [GetApplicationPolicy](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_GetApplicationPolicy.go#L20) [¶](#Client.GetApplicationPolicy)
```
func (c *[Client](#Client)) GetApplicationPolicy(ctx [context](/context).[Context](/context#Context), params *[GetApplicationPolicyInput](#GetApplicationPolicyInput), optFns ...func(*[Options](#Options))) (*[GetApplicationPolicyOutput](#GetApplicationPolicyOutput), [error](/builtin#error))
```
Retrieves the policy for the application.
####
func (*Client) [GetCloudFormationTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_GetCloudFormationTemplate.go#L20) [¶](#Client.GetCloudFormationTemplate)
```
func (c *[Client](#Client)) GetCloudFormationTemplate(ctx [context](/context).[Context](/context#Context), params *[GetCloudFormationTemplateInput](#GetCloudFormationTemplateInput), optFns ...func(*[Options](#Options))) (*[GetCloudFormationTemplateOutput](#GetCloudFormationTemplateOutput), [error](/builtin#error))
```
Gets the specified AWS CloudFormation template.
####
func (*Client) [ListApplicationDependencies](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationDependencies.go#L20) [¶](#Client.ListApplicationDependencies)
```
func (c *[Client](#Client)) ListApplicationDependencies(ctx [context](/context).[Context](/context#Context), params *[ListApplicationDependenciesInput](#ListApplicationDependenciesInput), optFns ...func(*[Options](#Options))) (*[ListApplicationDependenciesOutput](#ListApplicationDependenciesOutput), [error](/builtin#error))
```
Retrieves the list of applications nested in the containing application.
####
func (*Client) [ListApplicationVersions](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationVersions.go#L20) [¶](#Client.ListApplicationVersions)
```
func (c *[Client](#Client)) ListApplicationVersions(ctx [context](/context).[Context](/context#Context), params *[ListApplicationVersionsInput](#ListApplicationVersionsInput), optFns ...func(*[Options](#Options))) (*[ListApplicationVersionsOutput](#ListApplicationVersionsOutput), [error](/builtin#error))
```
Lists versions for the specified application.
####
func (*Client) [ListApplications](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplications.go#L20) [¶](#Client.ListApplications)
```
func (c *[Client](#Client)) ListApplications(ctx [context](/context).[Context](/context#Context), params *[ListApplicationsInput](#ListApplicationsInput), optFns ...func(*[Options](#Options))) (*[ListApplicationsOutput](#ListApplicationsOutput), [error](/builtin#error))
```
Lists applications owned by the requester.
####
func (*Client) [PutApplicationPolicy](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_PutApplicationPolicy.go#L22) [¶](#Client.PutApplicationPolicy)
```
func (c *[Client](#Client)) PutApplicationPolicy(ctx [context](/context).[Context](/context#Context), params *[PutApplicationPolicyInput](#PutApplicationPolicyInput), optFns ...func(*[Options](#Options))) (*[PutApplicationPolicyOutput](#PutApplicationPolicyOutput), [error](/builtin#error))
```
Sets the permission policy for an application. For the list of actions supported for this operation, see Application Permissions (<https://docs.aws.amazon.com/serverlessrepo/latest/devguide/access-control-resource-based.html#application-permissions>).
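A hedged sketch of sharing an application with a single AWS account. The input's Statements member is assumed to follow the types.ApplicationPolicyStatement shape shown for GetApplicationPolicyOutput; the "Deploy" action and account ID are placeholders:
```
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository/types"
)

// shareWithAccount grants another account permission to deploy the application.
func shareWithAccount(ctx context.Context, client *serverlessapplicationrepository.Client, appARN, accountID string) error {
	_, err := client.PutApplicationPolicy(ctx, &serverlessapplicationrepository.PutApplicationPolicyInput{
		ApplicationId: aws.String(appARN),
		Statements: []types.ApplicationPolicyStatement{
			{
				Actions:    []string{"Deploy"},  // placeholder action name
				Principals: []string{accountID}, // AWS account ID to share with
			},
		},
	})
	return err
}
```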
####
func (*Client) [UnshareApplication](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_UnshareApplication.go#L20) [¶](#Client.UnshareApplication)
```
func (c *[Client](#Client)) UnshareApplication(ctx [context](/context).[Context](/context#Context), params *[UnshareApplicationInput](#UnshareApplicationInput), optFns ...func(*[Options](#Options))) (*[UnshareApplicationOutput](#UnshareApplicationOutput), [error](/builtin#error))
```
Unshares an application from an AWS Organization. This operation can be called only from the organization's master account.
####
func (*Client) [UpdateApplication](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_UpdateApplication.go#L20) [¶](#Client.UpdateApplication)
```
func (c *[Client](#Client)) UpdateApplication(ctx [context](/context).[Context](/context#Context), params *[UpdateApplicationInput](#UpdateApplicationInput), optFns ...func(*[Options](#Options))) (*[UpdateApplicationOutput](#UpdateApplicationOutput), [error](/builtin#error))
```
Updates the specified application.
####
type [CreateApplicationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateApplication.go#L36) [¶](#CreateApplicationInput)
```
type CreateApplicationInput struct {
// The name of the author publishing the app.Minimum length=1. Maximum
// length=127.Pattern "^[a-z0-9](([a-z0-9]|-(?!-))*[a-z0-9])?$";
//
// This member is required.
Author *[string](/builtin#string)
// The description of the application.Minimum length=1. Maximum length=256
//
// This member is required.
Description *[string](/builtin#string)
// The name of the application that you want to publish.Minimum length=1. Maximum
// length=140Pattern: "[a-zA-Z0-9\\-]+";
//
// This member is required.
Name *[string](/builtin#string)
// A URL with more information about the application, for example the location of
// your GitHub repository for the application.
HomePageUrl *[string](/builtin#string)
// Labels to improve discovery of apps in search results.Minimum length=1. Maximum
// length=127. Maximum number of labels: 10Pattern: "^[a-zA-Z0-9+\\-_:\\/@]+$";
Labels [][string](/builtin#string)
// A local text file that contains the license of the app that matches the
// spdxLicenseID value of your application. The file has the format
// file://<path>/<filename>.Maximum size 5 MBYou can specify only one of
// licenseBody and licenseUrl; otherwise, an error results.
LicenseBody *[string](/builtin#string)
// A link to the S3 object that contains the license of the app that matches the
// spdxLicenseID value of your application.Maximum size 5 MBYou can specify only
// one of licenseBody and licenseUrl; otherwise, an error results.
LicenseUrl *[string](/builtin#string)
// A local text readme file in Markdown language that contains a more detailed
// description of the application and how it works. The file has the format
// file://<path>/<filename>.Maximum size 5 MBYou can specify only one of readmeBody
// and readmeUrl; otherwise, an error results.
ReadmeBody *[string](/builtin#string)
// A link to the S3 object in Markdown language that contains a more detailed
// description of the application and how it works.Maximum size 5 MBYou can specify
// only one of readmeBody and readmeUrl; otherwise, an error results.
ReadmeUrl *[string](/builtin#string)
// The semantic version of the application: <https://semver.org/> (<https://semver.org/>)
SemanticVersion *[string](/builtin#string)
// A link to the S3 object that contains the ZIP archive of the source code for
// this version of your application.Maximum size 50 MB
SourceCodeArchiveUrl *[string](/builtin#string)
// A link to a public repository for the source code of your application, for
// example the URL of a specific GitHub commit.
SourceCodeUrl *[string](/builtin#string)
// A valid identifier from <https://spdx.org/licenses/> (<https://spdx.org/licenses/>) .
SpdxLicenseId *[string](/builtin#string)
// The local raw packaged AWS SAM template file of your application. The file has
// the format file://<path>/<filename>.You can specify only one of templateBody and
// templateUrl; otherwise an error results.
TemplateBody *[string](/builtin#string)
// A link to the S3 object containing the packaged AWS SAM template of your
// application.You can specify only one of templateBody and templateUrl; otherwise
// an error results.
TemplateUrl *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [CreateApplicationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateApplication.go#L112) [¶](#CreateApplicationOutput)
```
type CreateApplicationOutput struct {
// The application Amazon Resource Name (ARN).
ApplicationId *[string](/builtin#string)
// The name of the author publishing the app.Minimum length=1. Maximum
// length=127.Pattern "^[a-z0-9](([a-z0-9]|-(?!-))*[a-z0-9])?$";
Author *[string](/builtin#string)
// The date and time this resource was created.
CreationTime *[string](/builtin#string)
// The description of the application.Minimum length=1. Maximum length=256
Description *[string](/builtin#string)
// A URL with more information about the application, for example the location of
// your GitHub repository for the application.
HomePageUrl *[string](/builtin#string)
// Whether the author of this application has been verified. This means that
// AWS has made a good faith review, as a reasonable and prudent service provider,
// of the information provided by the requester and has confirmed that the
// requester's identity is as claimed.
IsVerifiedAuthor [bool](/builtin#bool)
// Labels to improve discovery of apps in search results.Minimum length=1. Maximum
// length=127. Maximum number of labels: 10Pattern: "^[a-zA-Z0-9+\\-_:\\/@]+$";
Labels [][string](/builtin#string)
// A link to a license file of the app that matches the spdxLicenseID value of
// your application.Maximum size 5 MB
LicenseUrl *[string](/builtin#string)
// The name of the application.Minimum length=1. Maximum length=140Pattern:
// "[a-zA-Z0-9\\-]+";
Name *[string](/builtin#string)
// A link to the readme file in Markdown language that contains a more detailed
// description of the application and how it works.Maximum size 5 MB
ReadmeUrl *[string](/builtin#string)
// A valid identifier from <https://spdx.org/licenses/>.
SpdxLicenseId *[string](/builtin#string)
// The URL to the public profile of a verified author. This URL is submitted by
// the author.
VerifiedAuthorUrl *[string](/builtin#string)
// Version information about the application.
Version *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Version](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Version)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [CreateApplicationVersionInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateApplicationVersion.go#L35) [¶](#CreateApplicationVersionInput)
```
type CreateApplicationVersionInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The semantic version of the new version.
//
// This member is required.
SemanticVersion *[string](/builtin#string)
// A link to the S3 object that contains the ZIP archive of the source code for
// this version of your application.Maximum size 50 MB
SourceCodeArchiveUrl *[string](/builtin#string)
// A link to a public repository for the source code of your application, for
// example the URL of a specific GitHub commit.
SourceCodeUrl *[string](/builtin#string)
// The raw packaged AWS SAM template of your application.
TemplateBody *[string](/builtin#string)
// A link to the packaged AWS SAM template of your application.
TemplateUrl *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [CreateApplicationVersionOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateApplicationVersion.go#L64) [¶](#CreateApplicationVersionOutput)
```
type CreateApplicationVersionOutput struct {
// The application Amazon Resource Name (ARN).
ApplicationId *[string](/builtin#string)
// The date and time this resource was created.
CreationTime *[string](/builtin#string)
// An array of parameter types supported by the application.
ParameterDefinitions [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ParameterDefinition](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ParameterDefinition)
// A list of values that you must specify before you can deploy certain
// applications. Some applications might include resources that can affect
// permissions in your AWS account, for example, by creating new AWS Identity and
// Access Management (IAM) users. For those applications, you must explicitly
// acknowledge their capabilities by specifying this parameter.The only valid
// values are CAPABILITY_IAM, CAPABILITY_NAMED_IAM, CAPABILITY_RESOURCE_POLICY, and
// CAPABILITY_AUTO_EXPAND.The following resources require you to specify
// CAPABILITY_IAM or CAPABILITY_NAMED_IAM: AWS::IAM::Group (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html>)
// , AWS::IAM::InstanceProfile (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html>)
// , AWS::IAM::Policy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-policy.html>)
// , and AWS::IAM::Role (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html>)
// . If the application contains IAM resources, you can specify either
// CAPABILITY_IAM or CAPABILITY_NAMED_IAM. If the application contains IAM
// resources with custom names, you must specify CAPABILITY_NAMED_IAM.The following
// resources require you to specify CAPABILITY_RESOURCE_POLICY:
// AWS::Lambda::Permission (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-permission.html>)
// , AWS::IAM:Policy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-policy.html>)
// , AWS::ApplicationAutoScaling::ScalingPolicy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalingpolicy.html>)
// , AWS::S3::BucketPolicy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-policy.html>)
// , AWS::SQS::QueuePolicy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-policy.html>)
// , and AWS::SNS::TopicPolicy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sns-policy.html>)
// .Applications that contain one or more nested applications require you to
// specify CAPABILITY_AUTO_EXPAND.If your application template contains any of the
// above resources, we recommend that you review all permissions associated with
// the application before deploying. If you don't specify this parameter for an
// application that requires capabilities, the call will fail.
RequiredCapabilities [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Capability](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Capability)
// Whether all of the AWS resources contained in this application are supported in
// the region in which it is being retrieved.
ResourcesSupported [bool](/builtin#bool)
// The semantic version of the application: <https://semver.org/> (<https://semver.org/>)
SemanticVersion *[string](/builtin#string)
// A link to the S3 object that contains the ZIP archive of the source code for
// this version of your application.Maximum size 50 MB
SourceCodeArchiveUrl *[string](/builtin#string)
// A link to a public repository for the source code of your application, for
// example the URL of a specific GitHub commit.
SourceCodeUrl *[string](/builtin#string)
// A link to the packaged AWS SAM template of your application.
TemplateUrl *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [CreateCloudFormationChangeSetInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateCloudFormationChangeSet.go#L35) [¶](#CreateCloudFormationChangeSetInput)
```
type CreateCloudFormationChangeSetInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// This property corresponds to the parameter of the same name for the AWS
// CloudFormation CreateChangeSet (<https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateChangeSet>)
// API.
//
// This member is required.
StackName *[string](/builtin#string)
// A list of values that you must specify before you can deploy certain
// applications. Some applications might include resources that can affect
// permissions in your AWS account, for example, by creating new AWS Identity and
// Access Management (IAM) users. For those applications, you must explicitly
// acknowledge their capabilities by specifying this parameter.The only valid
// values are CAPABILITY_IAM, CAPABILITY_NAMED_IAM, CAPABILITY_RESOURCE_POLICY, and
// CAPABILITY_AUTO_EXPAND.The following resources require you to specify
// CAPABILITY_IAM or CAPABILITY_NAMED_IAM: AWS::IAM::Group (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html>)
// , AWS::IAM::InstanceProfile (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html>)
// , AWS::IAM::Policy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-policy.html>)
// , and AWS::IAM::Role (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html>)
// . If the application contains IAM resources, you can specify either
// CAPABILITY_IAM or CAPABILITY_NAMED_IAM. If the application contains IAM
// resources with custom names, you must specify CAPABILITY_NAMED_IAM.The following
// resources require you to specify CAPABILITY_RESOURCE_POLICY:
// AWS::Lambda::Permission (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-permission.html>)
// , AWS::IAM:Policy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-policy.html>)
// , AWS::ApplicationAutoScaling::ScalingPolicy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalingpolicy.html>)
// , AWS::S3::BucketPolicy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-policy.html>)
// , AWS::SQS::QueuePolicy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-policy.html>)
// , and AWS::SNS:TopicPolicy (<https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sns-policy.html>)
// .Applications that contain one or more nested applications require you to
// specify CAPABILITY_AUTO_EXPAND.If your application template contains any of the
// above resources, we recommend that you review all permissions associated with
// the application before deploying. If you don't specify this parameter for an
// application that requires capabilities, the call will fail.
Capabilities [][string](/builtin#string)
// This property corresponds to the parameter of the same name for the AWS
// CloudFormation CreateChangeSet (<https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateChangeSet>)
// API.
ChangeSetName *[string](/builtin#string)
// This property corresponds to the parameter of the same name for the AWS
// CloudFormation CreateChangeSet (<https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateChangeSet>)
// API.
ClientToken *[string](/builtin#string)
// This property corresponds to the parameter of the same name for the AWS
// CloudFormation CreateChangeSet (<https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateChangeSet>)
// API.
Description *[string](/builtin#string)
// This property corresponds to the parameter of the same name for the AWS
// CloudFormation CreateChangeSet (<https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateChangeSet>)
// API.
NotificationArns [][string](/builtin#string)
// A list of parameter values for the parameters of the application.
ParameterOverrides [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ParameterValue](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ParameterValue)
// This property corresponds to the parameter of the same name for the AWS
// CloudFormation CreateChangeSet (<https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateChangeSet>)
// API.
ResourceTypes [][string](/builtin#string)
// This property corresponds to the parameter of the same name for the AWS
// CloudFormation CreateChangeSet (<https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateChangeSet>)
// API.
RollbackConfiguration *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[RollbackConfiguration](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#RollbackConfiguration)
// The semantic version of the application: <https://semver.org/> (<https://semver.org/>)
SemanticVersion *[string](/builtin#string)
// This property corresponds to the parameter of the same name for the AWS
// CloudFormation CreateChangeSet (<https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/CreateChangeSet>)
// API.
Tags [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Tag](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Tag)
// The UUID returned by CreateCloudFormationTemplate.Pattern:
// [0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}
TemplateId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [CreateCloudFormationChangeSetOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateCloudFormationChangeSet.go#L125) [¶](#CreateCloudFormationChangeSetOutput)
```
type CreateCloudFormationChangeSetOutput struct {
// The application Amazon Resource Name (ARN).
ApplicationId *[string](/builtin#string)
// The Amazon Resource Name (ARN) of the change set.Length constraints: Minimum
// length of 1.Pattern: ARN:[-a-zA-Z0-9:/]*
ChangeSetId *[string](/builtin#string)
// The semantic version of the application: <https://semver.org/> (<https://semver.org/>)
SemanticVersion *[string](/builtin#string)
// The unique ID of the stack.
StackId *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [CreateCloudFormationTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateCloudFormationTemplate.go#L35) [¶](#CreateCloudFormationTemplateInput)
```
type CreateCloudFormationTemplateInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The semantic version of the application: <https://semver.org/> (<https://semver.org/>)
SemanticVersion *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [CreateCloudFormationTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_CreateCloudFormationTemplate.go#L48) [¶](#CreateCloudFormationTemplateOutput)
```
type CreateCloudFormationTemplateOutput struct {
// The application Amazon Resource Name (ARN).
ApplicationId *[string](/builtin#string)
// The date and time this resource was created.
CreationTime *[string](/builtin#string)
// The date and time this template expires. Templates expire 1 hour after creation.
ExpirationTime *[string](/builtin#string)
// The semantic version of the application: <https://semver.org/> (<https://semver.org/>)
SemanticVersion *[string](/builtin#string)
// Status of the template creation workflow.Possible values: PREPARING | ACTIVE |
// EXPIRED
Status [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Status](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Status)
// The UUID returned by CreateCloudFormationTemplate.Pattern:
// [0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}
TemplateId *[string](/builtin#string)
// A link to the template that can be used to deploy the application using AWS
// CloudFormation.
TemplateUrl *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteApplicationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_DeleteApplication.go#L34) [¶](#DeleteApplicationInput)
```
type DeleteApplicationInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeleteApplicationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_DeleteApplication.go#L44) [¶](#DeleteApplicationOutput)
```
type DeleteApplicationOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [EndpointParameters](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L265) [¶](#EndpointParameters)
added in v1.13.0
```
type EndpointParameters struct {
// The AWS region used to dispatch the request.
//
// Parameter is required.
//
// AWS::Region
Region *[string](/builtin#string)
// When true, use the dual-stack endpoint. If the configured endpoint does not
// support dual-stack, dispatching the request MAY return an error.
//
// Defaults to false if no value is provided.
//
// AWS::UseDualStack
UseDualStack *[bool](/builtin#bool)
// When true, send this request to the FIPS-compliant regional endpoint. If the
// configured endpoint does not have a FIPS compliant endpoint, dispatching the
// request will return an error.
//
// Defaults to false if no value is provided.
//
// AWS::UseFIPS
UseFIPS *[bool](/builtin#bool)
// Override the endpoint used to send this request
//
// Parameter is required.
//
// SDK::Endpoint
Endpoint *[string](/builtin#string)
}
```
EndpointParameters provides the parameters that influence how endpoints are resolved.
####
func (EndpointParameters) [ValidateRequired](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L303) [¶](#EndpointParameters.ValidateRequired)
added in v1.13.0
```
func (p [EndpointParameters](#EndpointParameters)) ValidateRequired() [error](/builtin#error)
```
ValidateRequired validates required parameters are set.
####
func (EndpointParameters) [WithDefaults](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L317) [¶](#EndpointParameters.WithDefaults)
added in v1.13.0
```
func (p [EndpointParameters](#EndpointParameters)) WithDefaults() [EndpointParameters](#EndpointParameters)
```
WithDefaults returns a shallow copy of EndpointParameters with default values applied to members where applicable.
####
type [EndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L26) [¶](#EndpointResolver)
```
type EndpointResolver interface {
ResolveEndpoint(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) ([aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), [error](/builtin#error))
}
```
EndpointResolver interface for resolving service endpoints.
####
func [EndpointResolverFromURL](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L51) [¶](#EndpointResolverFromURL)
added in v1.1.0
```
func EndpointResolverFromURL(url [string](/builtin#string), optFns ...func(*[aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint))) [EndpointResolver](#EndpointResolver)
```
EndpointResolverFromURL returns an EndpointResolver configured using the provided endpoint url. By default, the resolved endpoint resolver uses the client region as signing region, and the endpoint source is set to EndpointSourceCustom. You can provide functional options to configure endpoint values for the resolved endpoint.
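An illustrative sketch pairing EndpointResolverFromURL with the (deprecated) WithEndpointResolver option; the endpoint URL is a placeholder:
```
import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository"
)

// newClientWithURLResolver builds a client whose endpoint is fixed to a URL.
func newClientWithURLResolver(cfg aws.Config) *serverlessapplicationrepository.Client {
	resolver := serverlessapplicationrepository.EndpointResolverFromURL("https://serverlessrepo.example.com")
	return serverlessapplicationrepository.NewFromConfig(cfg,
		serverlessapplicationrepository.WithEndpointResolver(resolver),
	)
}
```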
####
type [EndpointResolverFunc](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L40) [¶](#EndpointResolverFunc)
```
type EndpointResolverFunc func(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) ([aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), [error](/builtin#error))
```
EndpointResolverFunc is a helper utility that wraps a function so it satisfies the EndpointResolver interface. This is useful when you want to add additional endpoint resolving logic, or stub out specific endpoints with custom values.
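A sketch of wrapping a plain function as an EndpointResolver; the host pattern is a placeholder:
```
import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository"
)

// regionalResolver derives the endpoint URL from the requested region.
var regionalResolver = serverlessapplicationrepository.EndpointResolverFunc(
	func(region string, options serverlessapplicationrepository.EndpointResolverOptions) (aws.Endpoint, error) {
		return aws.Endpoint{
			URL:           fmt.Sprintf("https://serverlessrepo.%s.example.com", region),
			SigningRegion: region,
		}, nil
	},
)
```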
####
func (EndpointResolverFunc) [ResolveEndpoint](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L42) [¶](#EndpointResolverFunc.ResolveEndpoint)
```
func (fn [EndpointResolverFunc](#EndpointResolverFunc)) ResolveEndpoint(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) (endpoint [aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), err [error](/builtin#error))
```
####
type [EndpointResolverOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L23) [¶](#EndpointResolverOptions)
added in v0.29.0
```
type EndpointResolverOptions = [internalendpoints](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints).[Options](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints#Options)
```
EndpointResolverOptions is the service endpoint resolver options
####
type [EndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L329) [¶](#EndpointResolverV2)
added in v1.13.0
```
type EndpointResolverV2 interface {
// ResolveEndpoint attempts to resolve the endpoint with the provided options,
// returning the endpoint if found. Otherwise an error is returned.
ResolveEndpoint(ctx [context](/context).[Context](/context#Context), params [EndpointParameters](#EndpointParameters)) (
[smithyendpoints](/github.com/aws/smithy-go/endpoints).[Endpoint](/github.com/aws/smithy-go/endpoints#Endpoint), [error](/builtin#error),
)
}
```
EndpointResolverV2 provides the interface for resolving service endpoints.
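A hedged sketch of a custom EndpointResolverV2 that always returns a fixed endpoint, wired in through WithEndpointResolverV2; the URL is a placeholder:
```
import (
	"context"
	"net/url"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository"
	smithyendpoints "github.com/aws/smithy-go/endpoints"
)

// staticResolverV2 always resolves to the same endpoint URI.
type staticResolverV2 struct{}

func (staticResolverV2) ResolveEndpoint(ctx context.Context, params serverlessapplicationrepository.EndpointParameters) (smithyendpoints.Endpoint, error) {
	u, err := url.Parse("https://serverlessrepo.example.com")
	if err != nil {
		return smithyendpoints.Endpoint{}, err
	}
	return smithyendpoints.Endpoint{URI: *u}, nil
}

// newClientWithResolverV2 installs the custom resolver on the client.
func newClientWithResolverV2(cfg aws.Config) *serverlessapplicationrepository.Client {
	return serverlessapplicationrepository.NewFromConfig(cfg,
		serverlessapplicationrepository.WithEndpointResolverV2(staticResolverV2{}),
	)
}
```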
####
func [NewDefaultEndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L340) [¶](#NewDefaultEndpointResolverV2)
added in v1.13.0
```
func NewDefaultEndpointResolverV2() [EndpointResolverV2](#EndpointResolverV2)
```
####
type [GetApplicationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_GetApplication.go#L35) [¶](#GetApplicationInput)
```
type GetApplicationInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The semantic version of the application to get.
SemanticVersion *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetApplicationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_GetApplication.go#L48) [¶](#GetApplicationOutput)
```
type GetApplicationOutput struct {
// The application Amazon Resource Name (ARN).
ApplicationId *[string](/builtin#string)
// The name of the author publishing the app.Minimum length=1. Maximum
// length=127.Pattern "^[a-z0-9](([a-z0-9]|-(?!-))*[a-z0-9])?$";
Author *[string](/builtin#string)
// The date and time this resource was created.
CreationTime *[string](/builtin#string)
// The description of the application.Minimum length=1. Maximum length=256
Description *[string](/builtin#string)
// A URL with more information about the application, for example the location of
// your GitHub repository for the application.
HomePageUrl *[string](/builtin#string)
// Whether the author of this application has been verified. This means that
// AWS has made a good faith review, as a reasonable and prudent service provider,
// of the information provided by the requester and has confirmed that the
// requester's identity is as claimed.
IsVerifiedAuthor [bool](/builtin#bool)
// Labels to improve discovery of apps in search results.Minimum length=1. Maximum
// length=127. Maximum number of labels: 10Pattern: "^[a-zA-Z0-9+\\-_:\\/@]+$";
Labels [][string](/builtin#string)
// A link to a license file of the app that matches the spdxLicenseID value of
// your application.Maximum size 5 MB
LicenseUrl *[string](/builtin#string)
// The name of the application.Minimum length=1. Maximum length=140Pattern:
// "[a-zA-Z0-9\\-]+";
Name *[string](/builtin#string)
// A link to the readme file in Markdown language that contains a more detailed
// description of the application and how it works.Maximum size 5 MB
ReadmeUrl *[string](/builtin#string)
// A valid identifier from <https://spdx.org/licenses/>.
SpdxLicenseId *[string](/builtin#string)
// The URL to the public profile of a verified author. This URL is submitted by
// the author.
VerifiedAuthorUrl *[string](/builtin#string)
// Version information about the application.
Version *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Version](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Version)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetApplicationPolicyInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_GetApplicationPolicy.go#L35) [¶](#GetApplicationPolicyInput)
```
type GetApplicationPolicyInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetApplicationPolicyOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_GetApplicationPolicy.go#L45) [¶](#GetApplicationPolicyOutput)
```
type GetApplicationPolicyOutput struct {
// An array of policy statements applied to the application.
Statements [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ApplicationPolicyStatement](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ApplicationPolicyStatement)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetCloudFormationTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_GetCloudFormationTemplate.go#L35) [¶](#GetCloudFormationTemplateInput)
```
type GetCloudFormationTemplateInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The UUID returned by CreateCloudFormationTemplate.Pattern:
// [0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}
//
// This member is required.
TemplateId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetCloudFormationTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_GetCloudFormationTemplate.go#L51) [¶](#GetCloudFormationTemplateOutput)
```
type GetCloudFormationTemplateOutput struct {
// The application Amazon Resource Name (ARN).
ApplicationId *[string](/builtin#string)
// The date and time this resource was created.
CreationTime *[string](/builtin#string)
// The date and time this template expires. Templates expire 1 hour after creation.
ExpirationTime *[string](/builtin#string)
// The semantic version of the application: <https://semver.org/> (<https://semver.org/>)
SemanticVersion *[string](/builtin#string)
// Status of the template creation workflow.Possible values: PREPARING | ACTIVE |
// EXPIRED
Status [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Status](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Status)
// The UUID returned by CreateCloudFormationTemplate.Pattern:
// [0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}
TemplateId *[string](/builtin#string)
// A link to the template that can be used to deploy the application using AWS
// CloudFormation.
TemplateUrl *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [HTTPClient](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_client.go#L178) [¶](#HTTPClient)
```
type HTTPClient interface {
Do(*[http](/net/http).[Request](/net/http#Request)) (*[http](/net/http).[Response](/net/http#Response), [error](/builtin#error))
}
```
####
type [HTTPSignerV4](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_client.go#L426) [¶](#HTTPSignerV4)
```
type HTTPSignerV4 interface {
SignHTTP(ctx [context](/context).[Context](/context#Context), credentials [aws](/github.com/aws/aws-sdk-go-v2/aws).[Credentials](/github.com/aws/aws-sdk-go-v2/aws#Credentials), r *[http](/net/http).[Request](/net/http#Request), payloadHash [string](/builtin#string), service [string](/builtin#string), region [string](/builtin#string), signingTime [time](/time).[Time](/time#Time), optFns ...func(*[v4](/github.com/aws/aws-sdk-go-v2/aws/signer/v4).[SignerOptions](/github.com/aws/aws-sdk-go-v2/aws/signer/v4#SignerOptions))) [error](/builtin#error)
}
```
####
type [ListApplicationDependenciesAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationDependencies.go#L145) [¶](#ListApplicationDependenciesAPIClient)
added in v0.30.0
```
type ListApplicationDependenciesAPIClient interface {
ListApplicationDependencies([context](/context).[Context](/context#Context), *[ListApplicationDependenciesInput](#ListApplicationDependenciesInput), ...func(*[Options](#Options))) (*[ListApplicationDependenciesOutput](#ListApplicationDependenciesOutput), [error](/builtin#error))
}
```
ListApplicationDependenciesAPIClient is a client that implements the ListApplicationDependencies operation.
####
type [ListApplicationDependenciesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationDependencies.go#L35) [¶](#ListApplicationDependenciesInput)
```
type ListApplicationDependenciesInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The total number of items to return.
MaxItems [int32](/builtin#int32)
// A token to specify where to start paginating.
NextToken *[string](/builtin#string)
// The semantic version of the application to get.
SemanticVersion *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListApplicationDependenciesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationDependencies.go#L54) [¶](#ListApplicationDependenciesOutput)
```
type ListApplicationDependenciesOutput struct {
// An array of application summaries nested in the application.
Dependencies [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ApplicationDependencySummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ApplicationDependencySummary)
// The token to request the next page of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListApplicationDependenciesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationDependencies.go#L164) [¶](#ListApplicationDependenciesPaginator)
added in v0.30.0
```
type ListApplicationDependenciesPaginator struct {
// contains filtered or unexported fields
}
```
ListApplicationDependenciesPaginator is a paginator for ListApplicationDependencies
####
func [NewListApplicationDependenciesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationDependencies.go#L174) [¶](#NewListApplicationDependenciesPaginator)
added in v0.30.0
```
func NewListApplicationDependenciesPaginator(client [ListApplicationDependenciesAPIClient](#ListApplicationDependenciesAPIClient), params *[ListApplicationDependenciesInput](#ListApplicationDependenciesInput), optFns ...func(*[ListApplicationDependenciesPaginatorOptions](#ListApplicationDependenciesPaginatorOptions))) *[ListApplicationDependenciesPaginator](#ListApplicationDependenciesPaginator)
```
NewListApplicationDependenciesPaginator returns a new ListApplicationDependenciesPaginator
####
func (*ListApplicationDependenciesPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationDependencies.go#L198) [¶](#ListApplicationDependenciesPaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListApplicationDependenciesPaginator](#ListApplicationDependenciesPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListApplicationDependenciesPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationDependencies.go#L203) [¶](#ListApplicationDependenciesPaginator.NextPage)
added in v0.30.0
```
func (p *[ListApplicationDependenciesPaginator](#ListApplicationDependenciesPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListApplicationDependenciesOutput](#ListApplicationDependenciesOutput), [error](/builtin#error))
```
NextPage retrieves the next ListApplicationDependencies page.
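A typical pagination loop over the nested applications of one application ARN; printing the ApplicationId of each dependency is just an example of reading the page contents:
```
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository"
)

// listAllDependencies walks every page of ListApplicationDependencies results.
func listAllDependencies(ctx context.Context, client *serverlessapplicationrepository.Client, appARN string) error {
	p := serverlessapplicationrepository.NewListApplicationDependenciesPaginator(client,
		&serverlessapplicationrepository.ListApplicationDependenciesInput{
			ApplicationId: aws.String(appARN),
		})
	for p.HasMorePages() {
		page, err := p.NextPage(ctx)
		if err != nil {
			return err
		}
		for _, dep := range page.Dependencies {
			fmt.Println(aws.ToString(dep.ApplicationId))
		}
	}
	return nil
}
```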
####
type [ListApplicationDependenciesPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationDependencies.go#L153) [¶](#ListApplicationDependenciesPaginatorOptions)
added in v0.30.0
```
type ListApplicationDependenciesPaginatorOptions struct {
// The total number of items to return.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListApplicationDependenciesPaginatorOptions is the paginator options for ListApplicationDependencies
####
type [ListApplicationVersionsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationVersions.go#L142) [¶](#ListApplicationVersionsAPIClient)
added in v0.30.0
```
type ListApplicationVersionsAPIClient interface {
ListApplicationVersions([context](/context).[Context](/context#Context), *[ListApplicationVersionsInput](#ListApplicationVersionsInput), ...func(*[Options](#Options))) (*[ListApplicationVersionsOutput](#ListApplicationVersionsOutput), [error](/builtin#error))
}
```
ListApplicationVersionsAPIClient is a client that implements the ListApplicationVersions operation.
####
type [ListApplicationVersionsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationVersions.go#L35) [¶](#ListApplicationVersionsInput)
```
type ListApplicationVersionsInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The total number of items to return.
MaxItems [int32](/builtin#int32)
// A token to specify where to start paginating.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListApplicationVersionsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationVersions.go#L51) [¶](#ListApplicationVersionsOutput)
```
type ListApplicationVersionsOutput struct {
// The token to request the next page of results.
NextToken *[string](/builtin#string)
// An array of version summaries for the application.
Versions [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[VersionSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#VersionSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListApplicationVersionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationVersions.go#L160) [¶](#ListApplicationVersionsPaginator)
added in v0.30.0
```
type ListApplicationVersionsPaginator struct {
// contains filtered or unexported fields
}
```
ListApplicationVersionsPaginator is a paginator for ListApplicationVersions
####
func [NewListApplicationVersionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationVersions.go#L170) [¶](#NewListApplicationVersionsPaginator)
added in v0.30.0
```
func NewListApplicationVersionsPaginator(client [ListApplicationVersionsAPIClient](#ListApplicationVersionsAPIClient), params *[ListApplicationVersionsInput](#ListApplicationVersionsInput), optFns ...func(*[ListApplicationVersionsPaginatorOptions](#ListApplicationVersionsPaginatorOptions))) *[ListApplicationVersionsPaginator](#ListApplicationVersionsPaginator)
```
NewListApplicationVersionsPaginator returns a new ListApplicationVersionsPaginator
####
func (*ListApplicationVersionsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationVersions.go#L194) [¶](#ListApplicationVersionsPaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListApplicationVersionsPaginator](#ListApplicationVersionsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListApplicationVersionsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationVersions.go#L199) [¶](#ListApplicationVersionsPaginator.NextPage)
added in v0.30.0
```
func (p *[ListApplicationVersionsPaginator](#ListApplicationVersionsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListApplicationVersionsOutput](#ListApplicationVersionsOutput), [error](/builtin#error))
```
NextPage retrieves the next ListApplicationVersions page.
####
type [ListApplicationVersionsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplicationVersions.go#L150) [¶](#ListApplicationVersionsPaginatorOptions)
added in v0.30.0
```
type ListApplicationVersionsPaginatorOptions struct {
// The total number of items to return.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListApplicationVersionsPaginatorOptions is the paginator options for ListApplicationVersions
####
type [ListApplicationsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplications.go#L134) [¶](#ListApplicationsAPIClient)
added in v0.30.0
```
type ListApplicationsAPIClient interface {
ListApplications([context](/context).[Context](/context#Context), *[ListApplicationsInput](#ListApplicationsInput), ...func(*[Options](#Options))) (*[ListApplicationsOutput](#ListApplicationsOutput), [error](/builtin#error))
}
```
ListApplicationsAPIClient is a client that implements the ListApplications operation.
####
type [ListApplicationsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplications.go#L35) [¶](#ListApplicationsInput)
```
type ListApplicationsInput struct {
// The total number of items to return.
MaxItems [int32](/builtin#int32)
// A token to specify where to start paginating.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListApplicationsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplications.go#L46) [¶](#ListApplicationsOutput)
```
type ListApplicationsOutput struct {
// An array of application summaries.
Applications [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ApplicationSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ApplicationSummary)
// The token to request the next page of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListApplicationsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplications.go#L151) [¶](#ListApplicationsPaginator)
added in v0.30.0
```
type ListApplicationsPaginator struct {
// contains filtered or unexported fields
}
```
ListApplicationsPaginator is a paginator for ListApplications
####
func [NewListApplicationsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplications.go#L160) [¶](#NewListApplicationsPaginator)
added in v0.30.0
```
func NewListApplicationsPaginator(client [ListApplicationsAPIClient](#ListApplicationsAPIClient), params *[ListApplicationsInput](#ListApplicationsInput), optFns ...func(*[ListApplicationsPaginatorOptions](#ListApplicationsPaginatorOptions))) *[ListApplicationsPaginator](#ListApplicationsPaginator)
```
NewListApplicationsPaginator returns a new ListApplicationsPaginator
####
func (*ListApplicationsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplications.go#L184) [¶](#ListApplicationsPaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListApplicationsPaginator](#ListApplicationsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListApplicationsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplications.go#L189) [¶](#ListApplicationsPaginator.NextPage)
added in v0.30.0
```
func (p *[ListApplicationsPaginator](#ListApplicationsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListApplicationsOutput](#ListApplicationsOutput), [error](/builtin#error))
```
NextPage retrieves the next ListApplications page.
####
type [ListApplicationsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_ListApplications.go#L141) [¶](#ListApplicationsPaginatorOptions)
added in v0.30.0
```
type ListApplicationsPaginatorOptions struct {
// The total number of items to return.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListApplicationsPaginatorOptions is the paginator options for ListApplications
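No usage example accompanies these paginator types in this reference, so here is a minimal, hypothetical sketch of the intended pattern: build a client from the environment's AWS configuration, construct the paginator, and drain it with HasMorePages/NextPage. The same pattern applies to ListApplicationVersionsPaginator. The region, page size, and printed fields are arbitrary choices, not part of this package's documentation.
```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository"
)

func main() {
	ctx := context.TODO()

	// Load credentials and region from the environment or shared config files.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := serverlessapplicationrepository.NewFromConfig(cfg)

	// Page through all applications, 25 summaries at a time.
	paginator := serverlessapplicationrepository.NewListApplicationsPaginator(client,
		&serverlessapplicationrepository.ListApplicationsInput{MaxItems: 25},
		func(o *serverlessapplicationrepository.ListApplicationsPaginatorOptions) {
			o.StopOnDuplicateToken = true
		})

	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			log.Fatal(err)
		}
		for _, app := range page.Applications {
			fmt.Println(*app.ApplicationId, *app.Name)
		}
	}
}
```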
####
type [Options](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_client.go#L61) [¶](#Options)
```
type Options struct {
// Set of options to modify how an operation is invoked. These apply to all
// operations invoked for this client. Use functional options on operation call to
// modify this list for per operation behavior.
APIOptions []func(*[middleware](/github.com/aws/smithy-go/middleware).[Stack](/github.com/aws/smithy-go/middleware#Stack)) [error](/builtin#error)
// The optional application specific identifier appended to the User-Agent header.
AppID [string](/builtin#string)
// This endpoint will be given as input to an EndpointResolverV2. It is used for
// providing a custom base endpoint that is subject to modifications by the
// processing EndpointResolverV2.
BaseEndpoint *[string](/builtin#string)
// Configures the events that will be sent to the configured logger.
ClientLogMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[ClientLogMode](/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode)
// The credentials object to use when signing requests.
Credentials [aws](/github.com/aws/aws-sdk-go-v2/aws).[CredentialsProvider](/github.com/aws/aws-sdk-go-v2/aws#CredentialsProvider)
// The configuration DefaultsMode that the SDK should use when constructing the
// clients initial default settings.
DefaultsMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[DefaultsMode](/github.com/aws/aws-sdk-go-v2/aws#DefaultsMode)
// The endpoint options to be used when attempting to resolve an endpoint.
EndpointOptions [EndpointResolverOptions](#EndpointResolverOptions)
// The service endpoint resolver.
//
// Deprecated: EndpointResolver and WithEndpointResolver. Providing a
// value for this field will likely prevent you from using any endpoint-related
// service features released after the introduction of EndpointResolverV2 and
// BaseEndpoint. To migrate an EndpointResolver implementation that uses a custom
// endpoint, set the client option BaseEndpoint instead.
EndpointResolver [EndpointResolver](#EndpointResolver)
// Resolves the endpoint used for a particular service. This should be used over
// the deprecated EndpointResolver
EndpointResolverV2 [EndpointResolverV2](#EndpointResolverV2)
// Signature Version 4 (SigV4) Signer
HTTPSignerV4 [HTTPSignerV4](#HTTPSignerV4)
// The logger writer interface to write logging messages to.
Logger [logging](/github.com/aws/smithy-go/logging).[Logger](/github.com/aws/smithy-go/logging#Logger)
// The region to send requests to. (Required)
Region [string](/builtin#string)
// RetryMaxAttempts specifies the maximum number of attempts an API client will
// make when calling an operation that fails with a retryable error. A value of 0 is
// ignored, and will not be used to configure the API client's default retryer, or
// modify a per operation call's retry max attempts. When creating a new API client,
// this member will only be used if the Retryer Options member is nil. This value
// will be ignored if Retryer is not nil. If specified in an operation call's functional
// options with a value that is different than the constructed client's Options,
// the Client's Retryer will be wrapped to use the operation's specific
// RetryMaxAttempts value.
RetryMaxAttempts [int](/builtin#int)
// RetryMode specifies the retry mode the API client will be created with, if the
// Retryer option is not also specified. When creating a new API client, this
// member will only be used if the Retryer Options member is nil. This value will
// be ignored if Retryer is not nil. Per operation call overrides are currently not
// supported, but may be in the future.
RetryMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[RetryMode](/github.com/aws/aws-sdk-go-v2/aws#RetryMode)
// Retryer guides how HTTP requests should be retried in case of recoverable
// failures. When nil the API client will use a default retryer. The kind of
// default retry created by the API client can be changed with the RetryMode
// option.
Retryer [aws](/github.com/aws/aws-sdk-go-v2/aws).[Retryer](/github.com/aws/aws-sdk-go-v2/aws#Retryer)
// The RuntimeEnvironment configuration, only populated if the DefaultsMode is set
// to DefaultsModeAuto and is initialized using config.LoadDefaultConfig . You
// should not populate this structure programmatically, or rely on the values here
// within your applications.
RuntimeEnvironment [aws](/github.com/aws/aws-sdk-go-v2/aws).[RuntimeEnvironment](/github.com/aws/aws-sdk-go-v2/aws#RuntimeEnvironment)
// The HTTP client to invoke API calls with. Defaults to client's default HTTP
// implementation if nil.
HTTPClient [HTTPClient](#HTTPClient)
// contains filtered or unexported fields
}
```
####
func (Options) [Copy](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_client.go#L183) [¶](#Options.Copy)
```
func (o [Options](#Options)) Copy() [Options](#Options)
```
Copy creates a clone where the APIOptions list is deep copied.
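Functional options can also be passed on an individual operation call; the client copies its Options for that call (which is what Copy supports), so the override does not affect other operations. A small, hypothetical sketch, reusing a client constructed as in the paginator example above:
```
// listInRegion overrides the region for this single call; the client itself is unchanged.
func listInRegion(ctx context.Context, client *serverlessapplicationrepository.Client) (*serverlessapplicationrepository.ListApplicationsOutput, error) {
	return client.ListApplications(ctx,
		&serverlessapplicationrepository.ListApplicationsInput{},
		func(o *serverlessapplicationrepository.Options) { o.Region = "eu-west-1" })
}
```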
####
type [PutApplicationPolicyInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_PutApplicationPolicy.go#L37) [¶](#PutApplicationPolicyInput)
```
type PutApplicationPolicyInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// An array of policy statements applied to the application.
//
// This member is required.
Statements [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ApplicationPolicyStatement](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ApplicationPolicyStatement)
// contains filtered or unexported fields
}
```
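For illustration, a hypothetical sketch of how this input is typically populated to share an application with one other AWS account. It assumes the aws and types import paths of this module and the Actions/Principals members of types.ApplicationPolicyStatement; the ARN and account ID are placeholders.
```
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository"
	"github.com/aws/aws-sdk-go-v2/service/serverlessapplicationrepository/types"
)

// shareWithAccount grants another account permission to deploy the application.
func shareWithAccount(ctx context.Context, client *serverlessapplicationrepository.Client) error {
	_, err := client.PutApplicationPolicy(ctx, &serverlessapplicationrepository.PutApplicationPolicyInput{
		ApplicationId: aws.String("arn:aws:serverlessrepo:us-east-1:123456789012:applications/my-app"),
		Statements: []types.ApplicationPolicyStatement{{
			Actions:    []string{"Deploy"},
			Principals: []string{"111122223333"},
		}},
	})
	return err
}
```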
####
type [PutApplicationPolicyOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_PutApplicationPolicy.go#L52) [¶](#PutApplicationPolicyOutput)
```
type PutApplicationPolicyOutput struct {
// An array of policy statements applied to the application.
Statements [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[ApplicationPolicyStatement](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#ApplicationPolicyStatement)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ResolveEndpoint](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L67) [¶](#ResolveEndpoint)
```
type ResolveEndpoint struct {
Resolver [EndpointResolver](#EndpointResolver)
Options [EndpointResolverOptions](#EndpointResolverOptions)
}
```
####
func (*ResolveEndpoint) [HandleSerialize](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L76) [¶](#ResolveEndpoint.HandleSerialize)
```
func (m *[ResolveEndpoint](#ResolveEndpoint)) HandleSerialize(ctx [context](/context).[Context](/context#Context), in [middleware](/github.com/aws/smithy-go/middleware).[SerializeInput](/github.com/aws/smithy-go/middleware#SerializeInput), next [middleware](/github.com/aws/smithy-go/middleware).[SerializeHandler](/github.com/aws/smithy-go/middleware#SerializeHandler)) (
out [middleware](/github.com/aws/smithy-go/middleware).[SerializeOutput](/github.com/aws/smithy-go/middleware#SerializeOutput), metadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata), err [error](/builtin#error),
)
```
####
func (*ResolveEndpoint) [ID](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/endpoints.go#L72) [¶](#ResolveEndpoint.ID)
```
func (*[ResolveEndpoint](#ResolveEndpoint)) ID() [string](/builtin#string)
```
####
type [UnshareApplicationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_UnshareApplication.go#L35) [¶](#UnshareApplicationInput)
```
type UnshareApplicationInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The AWS Organization ID to unshare the application from.
//
// This member is required.
OrganizationId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [UnshareApplicationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_UnshareApplication.go#L50) [¶](#UnshareApplicationOutput)
```
type UnshareApplicationOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UpdateApplicationInput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_UpdateApplication.go#L35) [¶](#UpdateApplicationInput)
```
type UpdateApplicationInput struct {
// The Amazon Resource Name (ARN) of the application.
//
// This member is required.
ApplicationId *[string](/builtin#string)
// The name of the author publishing the app.Minimum length=1. Maximum
// length=127.Pattern "^[a-z0-9](([a-z0-9]|-(?!-))*[a-z0-9])?$";
Author *[string](/builtin#string)
// The description of the application.Minimum length=1. Maximum length=256
Description *[string](/builtin#string)
// A URL with more information about the application, for example the location of
// your GitHub repository for the application.
HomePageUrl *[string](/builtin#string)
// Labels to improve discovery of apps in search results.Minimum length=1. Maximum
// length=127. Maximum number of labels: 10Pattern: "^[a-zA-Z0-9+\\-_:\\/@]+$";
Labels [][string](/builtin#string)
// A text readme file in Markdown language that contains a more detailed
// description of the application and how it works.Maximum size 5 MB
ReadmeBody *[string](/builtin#string)
// A link to the readme file in Markdown language that contains a more detailed
// description of the application and how it works.Maximum size 5 MB
ReadmeUrl *[string](/builtin#string)
// contains filtered or unexported fields
}
```
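A short, hypothetical sketch of an update call using only fields documented above; ApplicationId is the only required member, and every value shown is a placeholder.
```
// updateMetadata changes the description and author of an application, leaving other fields untouched.
func updateMetadata(ctx context.Context, client *serverlessapplicationrepository.Client) (*serverlessapplicationrepository.UpdateApplicationOutput, error) {
	return client.UpdateApplication(ctx, &serverlessapplicationrepository.UpdateApplicationInput{
		ApplicationId: aws.String("arn:aws:serverlessrepo:us-east-1:123456789012:applications/my-app"),
		Description:   aws.String("Updated description of my application"),
		Author:        aws.String("example-author"),
	})
}
```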
####
type [UpdateApplicationOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/serverlessapplicationrepository/v1.14.2/service/serverlessapplicationrepository/api_op_UpdateApplication.go#L68) [¶](#UpdateApplicationOutput)
```
type UpdateApplicationOutput struct {
// The application Amazon Resource Name (ARN).
ApplicationId *[string](/builtin#string)
// The name of the author publishing the app.Minimum length=1. Maximum
// length=127.Pattern "^[a-z0-9](([a-z0-9]|-(?!-))*[a-z0-9])?$";
Author *[string](/builtin#string)
// The date and time this resource was created.
CreationTime *[string](/builtin#string)
// The description of the application.Minimum length=1. Maximum length=256
Description *[string](/builtin#string)
// A URL with more information about the application, for example the location of
// your GitHub repository for the application.
HomePageUrl *[string](/builtin#string)
// Whether the author of this application has been verified. This means that
// AWS has made a good faith review, as a reasonable and prudent service provider,
// of the information provided by the requester and has confirmed that the
// requester's identity is as claimed.
IsVerifiedAuthor [bool](/builtin#bool)
// Labels to improve discovery of apps in search results.Minimum length=1. Maximum
// length=127. Maximum number of labels: 10Pattern: "^[a-zA-Z0-9+\\-_:\\/@]+$";
Labels [][string](/builtin#string)
// A link to a license file of the app that matches the spdxLicenseID value of
// your application.Maximum size 5 MB
LicenseUrl *[string](/builtin#string)
// The name of the application.Minimum length=1. Maximum length=140Pattern:
// "[a-zA-Z0-9\\-]+";
Name *[string](/builtin#string)
// A link to the readme file in Markdown language that contains a more detailed
// description of the application and how it works.Maximum size 5 MB
ReadmeUrl *[string](/builtin#string)
// A valid identifier from <https://spdx.org/licenses/>.
SpdxLicenseId *[string](/builtin#string)
// The URL to the public profile of a verified author. This URL is submitted by
// the author.
VerifiedAuthorUrl *[string](/builtin#string)
// Version information about the application.
Version *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Version](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Version)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
unidecode | hex | Erlang | API Reference
===
Modules
---
[Unidecode](Unidecode.html)
This library provides functions to transliterate Unicode characters to an ASCII approximation.
[Unidecode.Decoder](Unidecode.Decoder.html)
This module takes care of transliterating a single grapheme.
It is only documented so you can use a better strategy to transliterate larger texts.
Unidecode
===
This library provides functions to transliterate Unicode characters to an ASCII approximation.
Design Philosophy (taken from the original Unidecode Perl library)
---
Unidecode's ability to transliterate from a given language is limited by two factors:
* The amount and quality of data in the written form of the original language. So if you have Hebrew data that has no vowel points in it, then Unidecode cannot guess what vowels should appear in a pronunciation.
S f y hv n vwls n th npt, y wn't gt ny vwls n th tpt.
(This is a specific application of the general principle of "Garbage In, Garbage Out".)
* Basic limitations in the Unidecode design. Writing a real and clever transliteration algorithm for any single language usually requires a lot of time, and at least a passable knowledge of the language involved.
But Unicode text can convey more languages than I could possibly learn (much less create a transliterator for) in the entire rest of my lifetime.
So I put a cap on how intelligent Unidecode could be, by insisting that it support only context-insensitive transliteration.
That means missing the finer details of any given writing system, while still hopefully being useful.
Unidecode, in other words, is quick and dirty.
Sometimes the output is not so dirty at all: Russian and Greek seem to work passably; and while Thaana (Divehi, AKA Maldivian) is a definitely non-Western writing system, setting up a mapping from it to Roman letters seems to work pretty well.
But sometimes the output is very dirty: Unidecode does quite badly on Japanese and Thai.
If you want a smarter transliteration for a particular language than Unidecode provides, then you should look for (or write) a transliteration algorithm specific to that language, and apply it instead of (or at least before) applying Unidecode.
In other words, Unidecode's approach is broad (knowing about dozens of writing systems), but shallow (not being meticulous about any of them).
Summary
===
[Functions](#functions)
---
[decode(string)](#decode/1)
Returns string with its UTF-8 characters transliterated to ASCII ones.
[unidecode(string)](#unidecode/1)
Returns string with its UTF-8 characters transliterated to ASCII ones.
Functions
===
Unidecode.Decoder
===
This module takes care of transliterating a single grapheme.
It is only documented so you can use a better strategy to transliterate larger texts.
Summary
===
[Functions](#functions)
---
[decode(c)](#decode/1)
Returns the transliteration of a single grapheme.
Functions
=== |
rvg | cran | R | Package ‘rvg’
May 10, 2023
Type Package
Title R Graphics Devices for 'Office' Vector Graphics Output
Version 0.3.3
Description Vector Graphics devices for 'Microsoft PowerPoint' and
'Microsoft Excel'. Functions extending package 'officer' are provided to
embed 'DrawingML' graphics into 'Microsoft PowerPoint' presentations and
'Microsoft Excel' workbooks.
SystemRequirements libpng
License GPL-3
Encoding UTF-8
Depends R (>= 3.0)
Imports grDevices, Rcpp (>= 0.12.12), officer (>= 0.6.2), gdtools (>=
0.3.3), xml2 (>= 1.0.0), rlang
LinkingTo Rcpp, gdtools
Suggests testthat, grid
URL https://ardata-fr.github.io/officeverse/,
https://davidgohel.github.io/rvg/
BugReports https://github.com/davidgohel/rvg/issues
RoxygenNote 7.2.3
NeedsCompilation yes
Author <NAME> [aut, cre],
ArData [cph],
<NAME> [ctb] (the javascript code used by function set_attr),
<NAME> [ctb] (clipping algorithms)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-05-10 09:20:02 UTC
R topics documented:
dml
dml_pptx
dml_xlsx
ph_with.dml
xl_add_vg
dml Wrap plot instructions for DrawingML plotting in Powerpoint
Description
A simple wrapper to mark the plot instructions as Vector Graphics instructions. It produces an
object of class 'dml' with a corresponding method ph_with.
The function enables usage of any R plot with argument code and of ggplot objects with argument
ggobj.
Usage
dml(
code,
ggobj = NULL,
bg = "white",
fonts = list(),
pointsize = 12,
editable = TRUE,
...
)
Arguments
code plotting instructions
ggobj ggplot object to print. Argument code will be ignored if this argument is supplied.
bg, fonts, pointsize, editable
Parameters passed to dml_pptx
... unused arguments
background color
When dealing with a ggplot object, argument bg will have no effect because the ggplot theme
specifies the background color; don't forget to define the colors you want in the theme:
theme(
panel.background = element_rect(fill = "#EFEFEF"),
plot.background = element_rect(fill = "wheat"))
See Also
ph_with.dml
Examples
anyplot <- dml(code = barplot(1:5, col = 2:6), bg = "wheat")
library(officer)
doc <- read_pptx()
doc <- add_slide(doc, "Title and Content", "Office Theme")
doc <- ph_with(doc, anyplot, location = ph_location_fullsize())
fileout <- tempfile(fileext = ".pptx")
# fileout <- "vg.pptx"
print(doc, target = fileout)
dml_pptx DrawingML graphic device for Microsoft PowerPoint
Description
Graphics devices for Microsoft PowerPoint DrawingML format.
Usage
dml_pptx(
file = "Rplots.dml",
width = 6,
height = 6,
offx = 1,
offy = 1,
bg = "white",
fonts = list(),
pointsize = 12,
editable = TRUE,
id = 1L,
last_rel_id = 1L,
raster_prefix = "raster_",
standalone = TRUE
)
Arguments
file the file where output will appear.
height, width Height and width in inches.
offx, offy top and left origin of the plot
bg Default background color for the plot (defaults to "white").
fonts Named list of font names to be aliased with fonts installed on your system. If
unspecified, the R default families sans, serif, mono and symbol are aliased to
the family returned by match_family().
When you use specific fonts, you will need that font installed on your system.
This can be checked with package gdtools and function gdtools::font_family_exists().
An example: list(sans = "Roboto", serif = "Times", mono = "Courier").
pointsize default point size.
editable should vector graphics elements (points, text, etc.) be editable.
id specifies a unique identifier (integer) within the slide that will contain the
DrawingML instructions.
last_rel_id specifies the last unique identifier (integer) within relationship file that will be
used to reference embedded raster images if any.
raster_prefix string value used as prefix for png files produced when raster objects are printed
on the graphical device.
standalone produce a standalone drawingml file? If FALSE, omits xml header and
namespaces.
See Also
Devices
Examples
dml_pptx(file = tempfile())
plot(1:11, (-5:5)^2, type = "b", main = "Simple Example")
dev.off()
dml_xlsx DrawingML graphic device for Microsoft Excel
Description
Graphics devices for Microsoft Excel DrawingML format.
Usage
dml_xlsx(
file = "Rplots.dml",
width = 6,
height = 6,
offx = 1,
offy = 1,
bg = "white",
fonts = list(),
pointsize = 12,
editable = TRUE,
id = 1L,
last_rel_id = 1L,
raster_prefix = "raster_",
standalone = TRUE
)
Arguments
file the file where output will appear.
height, width Height and width in inches.
offx, offy top and left origin of the plot
bg Default background color for the plot (defaults to "white").
fonts Named list of font names to be aliased with fonts installed on your system. If
unspecified, the R default families sans, serif, mono and symbol are aliased to
the family returned by match_family().
pointsize default point size.
editable should vector graphics elements (points, text, etc.) be editable.
id specifies a unique identifier (integer) within the slide that will contain the
DrawingML instructions.
last_rel_id specifies the last unique identifier (integer) within relationship file that will be
used to reference embedded raster images if any.
raster_prefix string value used as prefix for png files produced when raster objects are printed
on the graphical device.
standalone produce a standalone drawingml file? If FALSE, omits xml header and
namespaces.
See Also
Devices
Examples
dml_xlsx(file = tempfile())
plot(1:11, (-5:5)^2, type = "b", main = "Simple Example")
dev.off()
ph_with.dml add a plot output as vector graphics into a PowerPoint object
Description
produces a vector graphics output from R plot instructions stored in a dml object and adds the result
to an rpptx object produced by read_pptx.
Usage
## S3 method for class 'dml'
ph_with(x, value, location, ...)
Arguments
x a pptx device
value dml object
location a location for a placeholder.
... Arguments to be passed to methods
Examples
anyplot <- dml(code = barplot(1:5, col = 2:6), bg = "wheat")
library(officer)
doc <- read_pptx()
doc <- add_slide(doc, "Title and Content", "Office Theme")
doc <- ph_with(doc, anyplot, location = ph_location_fullsize())
fileout <- tempfile(fileext = ".pptx")
print(doc, target = fileout)
xl_add_vg add a plot output as vector graphics into an Excel object
Description
produces a vector graphics output from R plot instructions and adds the result to an Excel sheet in a
workbook produced by read_xlsx.
Usage
xl_add_vg(x, sheet, code, left, top, width, height, ...)
Arguments
x an rxlsx object produced by officer::read_xlsx
sheet sheet label/name
code plot instructions
left, top left and top origin of the plot on the slide in inches.
height, width Height and width in inches.
... arguments passed on to dml_xlsx.
Examples
library(officer)
my_ws <- read_xlsx()
my_ws <- xl_add_vg(my_ws,
sheet = "Feuil1",
code = barplot(1:5, col = 2:6), width = 6, height = 6, left = 1, top = 2
)
fileout <- tempfile(fileext = ".xlsx")
print(my_ws, target = fileout)
MBSP | cran | R | Package ‘MBSP’
May 17, 2023
Type Package
Title Multivariate Bayesian Model with Shrinkage Priors
Version 4.0
Date 2023-05-17
Author <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description Gibbs sampler for fitting multivariate Bayesian linear regression with shrinkage
priors (MBSP), using the three parameter beta normal family. The method is
described in Bai and Ghosh (2018) <doi:10.1016/j.jmva.2018.04.010>.
License GPL-3
Depends R (>= 3.6.0)
Imports stats, MCMCpack, GIGrvg, utils, mvtnorm
NeedsCompilation yes
Repository CRAN
Date/Publication 2023-05-17 21:00:03 UTC
R topics documented:
matrix_normal
MBSP
matrix_normal Matrix-Normal Distribution
Description
This function provides a way to draw a sample from the matrix-normal distribution, given the mean
matrix, the covariance structure of the rows, and the covariance structure of the columns.
Usage
matrix_normal(M, U, V)
Arguments
M mean a × b matrix
U a × a covariance matrix (covariance of rows).
V b × b covariance matrix (covariance of columns).
Details
This function provides a way to draw a random a × b matrix from the matrix-normal distribution,
MN(M, U, V),
where M is the a × b mean matrix, U is an a × a covariance matrix, and V is a b × b covariance
matrix.
Value
A randomly drawn a × b matrix from MN(M, U, V).
Author(s)
<NAME> and <NAME>
Examples
# Draw a random 50x20 matrix from MN(O,U,V),
# where:
# O = zero matrix of dimension 50x20
# U has AR(1) structure,
# V has sigma^2*I structure
# Specify Mean.mat
p <- 50
q <- 20
Mean_mat <- matrix(0, nrow=p, ncol=q)
# Construct U
rho <- 0.5
times <- 1:p
H <- abs(outer(times, times, "-"))
U <- rho^H
# Construct V
sigma_sq <- 2
V <- sigma_sq*diag(q)
# Draw from MN(Mean_mat, U, V)
mn_draw <- matrix_normal(Mean_mat, U, V)
MBSP MBSP Model with Three Parameter Beta Normal (TPBN) Family
Description
This function provides a fully Bayesian approach for obtaining a (nearly) sparse estimate of the
p × q regression coefficients matrix B in the multivariate linear regression model,
Y = XB + E,
using the three parameter beta normal (TPBN) family. Here Y is the n × q matrix with n samples of
q response variables, X is the n × p design matrix with n samples of p covariates, and E is the n × q
noise matrix with independent rows. The complete model is described in Bai and Ghosh (2018).
If there are r confounding variables which must remain in the model and should not be regularized,
then these can be included in the model by putting them in a separate n × r confounding matrix Z.
Then the model that is fit is
Y = XB + ZC + E,
where C is the r × q regression coefficients matrix corresponding to the confounders. In this case,
we put a flat prior on C. By default, confounders are not included.
If the user desires, two information criteria can be computed: the Deviance Information Criterion
(DIC) of Spiegelhalter et al. (2002) and the widely applicable information criterion (WAIC) of
Watanabe (2010).
Usage
MBSP(Y, X, confounders=NULL, u=0.5, a=0.5, tau=NA,
max_steps=6000, burnin=1000, save_samples=TRUE,
model_criteria=FALSE)
Arguments
Y Response matrix of n samples and q response variables.
X Design matrix of n samples and p covariates. The MBSP model regularizes the
regression coefficients B corresponding to X.
confounders Optional design matrix Z of n samples of r confounding variables. By default,
confounders are not included in the model (confounders=NULL). However, if
there are some confounders that must remain in the model and should not be
regularized, then the user can include them here.
u The first parameter in the TPBN family. Defaults to u = 0.5 for the horseshoe
prior.
a The second parameter in the TPBN family. Defaults to a = 0.5 for the horseshoe
prior.
tau The global parameter. If the user does not specify this (tau=NA), the Gibbs
sampler will use τ = 1/(p ∗ n ∗ log(n)). The user may also specify any value
for τ strictly greater than 0; otherwise it defaults to 1/(p ∗ n ∗ log(n)).
max_steps The total number of iterations to run in the Gibbs sampler. Defaults to 6000.
burnin The number of burn-in iterations for the Gibbs sampler. Defaults to 1000.
save_samples A Boolean variable for whether to save all of the posterior samples of the
regression coefficients matrix B and the covariance matrix Sigma. Defaults to
"TRUE".
model_criteria A Boolean variable for whether to compute the following information criteria:
DIC (Deviance Information Criterion) and WAIC (widely applicable information
criterion). Can be used to compare models with (for example) different
choices of u, a, or tau. Defaults to "FALSE".
Details
The function performs (nearly) sparse estimation of the regression coefficients matrix B and
variable selection from the p covariates. The lower and upper endpoints of the 95 percent posterior
credible intervals for each of the pq elements of B are also returned so that the user may assess
uncertainty quantification.
In the three parameter beta normal (TPBN) family, (u, a) = (0.5, 0.5) corresponds to the horseshoe
prior, (u, a) = (1, 0.5) corresponds to the Strawderman-Berger prior, and (u, a) = (1, a), a > 0
corresponds to the normal-exponential-gamma (NEG) prior. This function uses the horseshoe prior
as the default shrinkage prior.
The user also has the option of including an n × r matrix with r confounding variables. These
confounders are variables which are included in the model but should not be regularized.
Finally, if the user specifies model_criteria=TRUE, then the MBSP function will compute two model
selection criteria: the Deviance Information Criterion (DIC) of Spiegelhalter et al. (2002) and the
widely applicable information criterion (WAIC) of Watanabe (2010). This permits model
comparisons between (for example) different choices of u, a, and tau. The default horseshoe prior and
choice of tau performs well, but the user may wish to experiment with u, a, and tau. In general,
models with lower DIC or WAIC are preferred.
Value
The function returns a list containing the following components:
B_est The point estimate of the p × q matrix B (taken as the componentwise posterior
median for all pq entries).
B_CI_lower The 2.5th percentile of the posterior density (or the lower endpoint of the 95
percent credible interval) for all pq entries of B.
B_CI_upper The 97.5th percentile of the posterior density (or the upper endpoint of the 95
percent credible interval) for all pq entries of B.
active_predictors
The row indices of the active (nonzero) covariates chosen by our model from
the p total predictors.
B_samples All max_steps-burnin samples of B.
C_est The point estimate of the r × q matrix C corresponding to the confounders
(taken as the componentwise posterior median for all rq entries). This matrix is
not returned if there are no confounders (i.e. confounders=NULL).
C_CI_lower The 2.5th percentile of the posterior density (or the lower endpoint of the 95
percent credible interval) for all rq entries of C. This is not returned if there are
no confounders (i.e. confounders=NULL).
C_CI_upper The 97.5th percentile of the posterior density (or the upper endpoint of the 95
percent credible interval) for all rq entries of C. This is not returned if there are
no confounders (i.e. confounders=NULL)
C_samples All max_steps-burnin samples of C. This is not returned if there are no
confounders (i.e. confounders=NULL)
Sigma_est The point estimate of the q×q covariance matrix Σ (taken as the componentwise
posterior median for all q 2 entries).
Sigma_CI_lower The 2.5th percentile of the posterior density (or the lower endpoint of the 95
percent credible interval) for all q 2 entries of Σ.
Sigma_CI_upper The 97.5th percentile of the posterior density (or the upper endpoint of the 95
percent credible interval) for all q 2 entries of Σ.
Sigma_samples All max_steps-burnin samples of Sigma.
DIC The Deviance Information Criterion (DIC), which can be used for model
comparison. Models with smaller DIC are preferred. This only returns a numeric
value if model_criteria=TRUE is specified.
WAIC The widely applicable information criterion (WAIC), which can be used for
model comparison. Models with smaller WAIC are preferred. This only
returns a numeric value if model_criteria=TRUE is specified. The WAIC tends
to be more stable than DIC.
Author(s)
<NAME> and <NAME>
References
<NAME>., <NAME>., and <NAME>. (2011) Generalized beta mixtures of Gaussians. In
<NAME>, <NAME>, <NAME>, <NAME>, and <NAME> (Eds.) Advances in Neural
Information Processing Systems 24, 523-531.
<NAME>. and <NAME>. (2018). High-dimensional multivariate posterior consistency under global-
local shrinkage priors. Journal of Multivariate Analysis, 167: 157-170.
<NAME>. (1980). A robust generalized Bayes estimator and confidence region for a multivariate
normal mean. Annals of Statistics, 8(4): 716-761.
<NAME>., <NAME>., and <NAME>. (2010). The horseshoe estimator for sparse signals.
Biometrika, 97(2): 465-480.
<NAME>., <NAME>., <NAME>., and <NAME>. (2002). Bayesian measures of
model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 64(4): 583-639.
<NAME>. (1971). Proper Bayes Minimax Estimators of the Multivariate Normal Mean.
Annals of Mathematical Statistics, 42(1): 385-388.
Watanabe, S. (2010). Asymptotic equivalence of Bayes cross validation and widely applicable
information criterion in singular learning theory. Journal of Machine Learning Research, 11: 3571-
3594.
Examples
###################################
# Set n, p, q, and sparsity level #
###################################
n <- 100
p <- 40
q <- 3 # number of response variables is 3
p_act <- 5 # number of active (nonzero) predictors is 5
#############################
# Generate design matrix X. #
#############################
set.seed(1234)
times <- 1:p
rho <- 0.5
H <- abs(outer(times, times, "-"))
V <- rho^H
mu <- rep(0, p)
# Rows of X are simulated from MVN(0,V)
X <- mvtnorm::rmvnorm(n, mu, V)
# Center X
X <- scale(X, center=TRUE, scale=FALSE)
############################################
# Generate true coefficient matrix B_true. #
############################################
# Entries in nonzero rows are drawn from Unif[(-5,-0.5)U(0.5,5)]
B_act <- runif(p_act*q,-5,4)
disjoint <- function(x){
if(x <= -0.5)
return(x)
else
return(x+1)
}
B_act <- matrix(sapply(B_act, disjoint),p_act,q)
# Set rest of the rows equal to 0
B_true <- rbind(B_act,matrix(0,p-p_act,q))
B_true <- B_true[sample(1:p),] # permute the rows
#########################################
# Generate true error covariance Sigma. #
#########################################
sigma_sq=2
times <- 1:q
H <- abs(outer(times, times, "-"))
Sigma <- sigma_sq * rho^H
############################
# Generate noise matrix E. #
############################
mu <- rep(0,q)
E <- mvtnorm::rmvnorm(n, mu, Sigma)
##############################
# Generate response matrix Y #
##############################
Y <- crossprod(t(X),B_true) + E
# Note that there are no confounding variables in this synthetic example
#########################################
# Fit the MBSP model on synthetic data. #
#########################################
# Should use default of max_steps=6000, burnin=1000 in practice.
mbsp_model = MBSP(Y=Y, X=X, max_steps=1000, burnin=500, model_criteria=FALSE)
# Recommended to use the default, i.e. can simply use: mbsp_model = MBSP(Y, X)
# If you want to return the DIC and WAIC, have to set model_criteria=TRUE.
# indices of the true nonzero rows
true_active_predictors <- which(rowSums(B_true)!=0)
true_active_predictors
# variables selected by the MBSP model
mbsp_model$active_predictors
# true regression coefficients in the true nonzero rows
B_true[true_active_predictors, ]
# the MBSP model's estimates of the nonzero rows
mbsp_model$B_est[true_active_predictors, ]
GITHUB_edsomjr_TEP.zip_unzipped_OJ_10171.pdf | free_programming_book | Unknown | OJ 10171 Meeting Prof. Miguel...
Prof. <NAME> Faculdade UnB Gama
I have always thought that someday I will meet Professor Miguel, who has allowed me to arrange so many contests. But I have managed to miss all the opportunities in reality. At last, with the help of a magician, I have managed to meet him in the magical City of Hope. The City of Hope has many roads. Some of them are bi-directional and others are unidirectional. Another important property of these streets is that some of the streets are for people whose age is less than thirty and the rest are for the others. This is to give the minors freedom in their activities.
Each street has a certain length. Given the description of such a city and our initial positions, you will have to find the most suitable place where we can meet.
The most suitable place is the place where our combined effort of reaching is minimum. You can assume that I am 25 years old and Prof. Miguel is 40+.
Figure: <NAME> and <NAME> (Shanghai, 2005). First meeting after five years of collaboration
Input The input contains several descriptions of cities. Each description of city is started by a integer N , which indicates how many streets are there. The next N lines contain the description of N streets.
The description of each street consists of four uppercase letters and an integer.
The first letter is either Y (indicating that the street is for the young) or M
(indicating that the road is for people aged 30 or more), and the second character is either U (indicating that the street is unidirectional) or B (indicating that the street is bi-directional). The third and fourth characters, X and Y, can be any uppercase letters and indicate that the places named X and Y of the city are connected (in the unidirectional case it means that there is a one-way street from X to Y), and the last non-negative integer C indicates the energy required to walk through the street. If we are in the same place we can meet each other at zero cost anyhow. Every energy value is less than 500.
After the description of the city, the last line of each input contains two place names, which are the initial positions of me and Prof. Miguel, respectively.
A value of zero for N indicates the end of input.
Output For each set of input, print the minimum energy cost and the place which is most suitable for us to meet. If there is more than one place to meet, print all of them in lexicographical order on the same line, separated by a single space. If there are no such places where we can meet, then print the line You will never meet.
Example input and output (slides): the deck steps through the first sample case, annotating each field of a street description — number of streets; who may use the street (Y/M); traffic direction (U/B); start point; end point; energy cost — and drawing the corresponding graph. The recoverable sample input is:

4
Y U A B 4
Y U C A 1
M U D B 6
M B C D 2
A D
2
Y U A B 10
M U C D 20
A D

followed by a line containing 0 to terminate the input.
Solution (slides): the figure shows a sample graph with vertices A to F, drawn separately for the streets each person may use. Two all-pairs shortest-distance matrices are computed with Floyd-Warshall — M = (m_ij), the minimum distances for Prof. Miguel, and S = (s_ij), the minimum distances for Prof. Shahriar — and they are combined entrywise as

d_ij = m_ij + s_ij

so that the best meeting places are the ones minimizing the combined cost from the two starting positions.
// Assumes the omitted scaffolding of the original solution: edge is a (u, v, w)
// triple, oo is a large sentinel value ("infinity") and MAX is the number of
// possible places (one per uppercase letter).
vector<vector<int>> floyd_warshall(int N, const vector<edge>& edges)
{
    vector<vector<int>> dist(N + 1, vector<int>(N + 1, oo));

    for (const auto& [u, v, w] : edges)
        dist[u][v] = w;

    for (int u = 0; u < N; ++u)
        dist[u][u] = 0;

    for (int k = 0; k < N; ++k)
        for (int u = 0; u < N; ++u)
            for (int v = 0; v < N; ++v)
                dist[u][v] = min(dist[u][v], dist[u][k] + dist[k][v]);

    return dist;
}

vector<int>
solve(int m, int r, const vector<edge>& ys, const vector<edge>& ms)
{
    // All-pairs shortest paths on the young streets and on the 30+ streets.
    auto distM = floyd_warshall(MAX, ys);
    auto distS = floyd_warshall(MAX, ms);

    int min_cost = oo;
    vector<int> ans;

    for (int u = 0; u < MAX; ++u)
    {
        if (distM[r][u] == oo or distS[m][u] == oo)
            continue;

        auto cost = distM[r][u] + distS[m][u];

        if (cost > min_cost)
            continue;

        if (cost == min_cost)
            ans.push_back(u);
        else
        {
            // New best meeting cost: restart the answer with the cost and this place.
            min_cost = cost;
            ans = vector<int> { min_cost, u };
        }
    }

    return ans;
}
@strapi/strapi | npm | JavaScript | ### [Open-source headless CMS, self-hosted or Cloud you’re in control.](#open-source-headless-cms-self-hosted-or-cloud-youre-in-control)
The leading open-source headless CMS, 100% JavaScript/TypeScript, flexible and fully customizable.
[Cloud](https://cloud.strapi.io/signups?source=github1) · [Try live demo](https://strapi.io/demo)
Strapi Community Edition is a free and open-source headless CMS enabling you to manage any content, anywhere.
* **Self-hosted or Cloud**: You can host and scale Strapi projects the way you want. You can save time by deploying to [Strapi Cloud](https://cloud.strapi.io/signups?source=github1) or deploy to the hosting platform of your choice: AWS, Azure, Google Cloud, DigitalOcean.
* **Modern Admin Panel**: Elegant, entirely customizable and a fully extensible admin panel.
* **Multi-database support**: You can choose the database you prefer: PostgreSQL, MySQL, MariaDB, and SQLite.
* **Customizable**: You can quickly build your logic by fully customizing APIs, routes, or plugins to fit your needs perfectly.
* **Blazing Fast and Robust**: Built on top of Node.js and TypeScript, Strapi delivers reliable and solid performance.
* **Front-end Agnostic**: Use any front-end framework (React, Next.js, Vue, Angular, etc.), mobile apps or even IoT.
* **Secure by default**: Reusable policies, CORS, CSP, P3P, Xframe, XSS, and more.
* **Powerful CLI**: Scaffold projects and APIs on the fly.
[Getting Started](#getting-started)
---
[Read the Getting Started tutorial](https://docs.strapi.io/developer-docs/latest/getting-started/quick-start.html) or follow the steps below:
### [⏳ Installation](#-installation)
Install Strapi with this **Quickstart** command to create a Strapi project instantly:
* (Use **yarn** to install the Strapi project (recommended). [Install yarn with these docs](https://yarnpkg.com/lang/en/docs/install/).)
```
yarn create strapi-app my-project --quickstart
```
**or**
* (Use npm/npx to install the Strapi project.)
```
npx create-strapi-app my-project --quickstart
```
This command generates a brand new project with the default features (authentication, permissions, content management, content type builder & file upload). The **Quickstart** command installs Strapi using a **SQLite** database which is used for prototyping in development.
Enjoy 🎉
### [🖐 Requirements](#-requirements)
Complete installation requirements can be found in the documentation under [Installation Requirements](https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/deployment.html).
**Supported operating systems**:
* Ubuntu LTS/Debian 9.x
* CentOS/RHEL 8
* macOS Mojave
* Windows 10
* Docker
(Please note that Strapi may work on other operating systems, but these are not tested nor officially supported at this time.)
**Node:**
Strapi only supports maintenance and LTS versions of Node.js. Please refer to the [Node.js release schedule](https://nodejs.org/en/about/releases/) for more information. NPM versions installed by default with Node.js are supported. Generally it's recommended to use yarn over npm where possible.
| Strapi Version | Recommended | Minimum |
| --- | --- | --- |
| 4.11.0 and up | 18.x | 16.x |
| 4.3.9 to 4.10.x | 18.x | 14.x |
| 4.0.x to 4.3.8 | 16.x | 14.x |
**Database:**
| Database | Recommended | Minimum |
| --- | --- | --- |
| MySQL | 8.0 | 5.7.8 |
| MariaDB | 10.6 | 10.3 |
| PostgreSQL | 14.0 | 11.0 |
| SQLite | 3 | 3 |
**We recommend always using the latest version of Strapi stable to start your new projects**.
[Features](#features)
---
* **Content Types Builder**: Build the most flexible publishing experience for your content managers, by giving them the freedom to create any page on the go with [fields](https://docs.strapi.io/user-docs/content-manager/writing-content#filling-up-fields), components and [Dynamic Zones](https://docs.strapi.io/user-docs/content-manager/writing-content#dynamic-zones).
* **Media Library**: Upload your images, videos, audio or documents to the media library. Easily find the right asset, edit and reuse it.
* **Internationalization**: The Internationalization (i18n) plugin allows Strapi users to create, manage and distribute localized content in different languages, called "locales".
* **Role Based Access Control**: Create an unlimited number of custom roles and permissions for admin and end users.
* **GraphQL or REST**: Consume the API using REST or GraphQL
You can unlock additional features such as SSO, Audit Logs, Review Workflows in [Strapi Cloud](https://cloud.strapi.io/login?source=github1) or [Strapi Enterprise](https://strapi.io/enterprise?source=github1).
**[See more on our website](https://strapi.io/overview)**.
[Contributing](#contributing)
---
Please read our [Contributing Guide](https://github.com/strapi/strapi/blob/HEAD/CONTRIBUTING.md) before submitting a Pull Request to the project.
[Community support](#community-support)
---
For general help using Strapi, please refer to [the official Strapi documentation](https://docs.strapi.io). For additional help, you can use one of these channels to ask a question:
* [Discord](https://discord.strapi.io) (For live discussion with the Community and Strapi team)
* [GitHub](https://github.com/strapi/strapi) (Bug reports, Contributions)
* [Community Forum](https://forum.strapi.io) (Questions and Discussions)
* [Feedback section](https://feedback.strapi.io) (Roadmap, Feature requests)
* [Twitter](https://twitter.com/strapijs) (Get the news fast)
* [Facebook](https://www.facebook.com/Strapi-616063331867161)
* [YouTube Channel](https://www.youtube.com/strapi) (Learn from Video Tutorials)
[Migration](#migration)
---
Follow our [migration guides](https://docs.strapi.io/developer-docs/latest/update-migration-guides/migration-guides.html) on the documentation to keep your projects up-to-date.
[Roadmap](#roadmap)
---
Check out our [roadmap](https://feedback.strapi.io) to get informed of the latest features released and the upcoming ones. You may also give us insights and vote for a specific feature.
[Documentation](#documentation)
---
See our dedicated [repository](https://github.com/strapi/documentation) for the Strapi documentation, or view our documentation live:
* [Developer docs](https://docs.strapi.io/developer-docs/latest/getting-started/introduction.html)
* [User guide](https://docs.strapi.io/user-docs/latest/getting-started/introduction.html)
[Try live demo](#try-live-demo)
---
See for yourself what's under the hood by getting access to a [hosted Strapi project](https://strapi.io/demo) with sample data.
[License](#license)
---
See the [LICENSE](https://github.com/strapi/strapi/blob/HEAD/LICENSE) file for licensing information.
Readme
---
### Keywords
* strapi
* cms
* cmf
* content management system
* content management framework
* admin panel
* dashboard
* api
* auth
* framework
* http
* json
* koa
* koajs
* helmet
* mvc
* oauth
* oauth2
* orm
* rest
* restful
* security
* jam
* jamstack
* javascript
* headless
* MySQL
* MariaDB
* PostgreSQL
* SQLite
* graphqL
* infrastructure
* backend
* open source
* self hosted
* lerna
* lernajs
* react
* reactjs
trust-dns-resolver | rust | Rust | Crate trust_dns_resolver
===
The Resolver is responsible for performing recursive queries to lookup domain names.
This is a 100% in process DNS resolver. It *does not* use the Host OS’ resolver. If what is desired is to use the Host OS’ resolver, generally in the system’s libc, then the
`std::net::ToSocketAddrs` variant over `&str` should be used.
Unlike the `trust-dns-client`, this tries to provide a simpler interface to perform DNS queries. For update options, i.e. Dynamic DNS, the `trust-dns-client` crate must be used instead. The Resolver library is capable of searching multiple domains (this can be disabled by using an FQDN during lookup), dual-stack IPv4/IPv6 lookups, performing chained CNAME lookups,
and features connection metric tracking for attempting to pick the best upstream DNS resolver.
There are two types for performing DNS queries, `Resolver` and `AsyncResolver`. `Resolver`
is the easiest to work with, it is a wrapper around `AsyncResolver`. `AsyncResolver` is a
`Tokio` based async resolver, and can be used inside any `Tokio` based system.
This as best as possible attempts to abide by the DNS RFCs, please file issues at https://github.com/bluejekyll/trust-dns.
Usage
---
### Declare dependency
```
[dependencies]
trust-dns-resolver = "*"
```
### Using the Synchronous Resolver
This uses the default configuration, which sets the Google Public DNS as the upstream resolvers. Please see their privacy statement for important information about what they track, many ISP’s track similar information in DNS.
```
use std::net::*;
use trust_dns_resolver::Resolver;
use trust_dns_resolver::config::*;
// Construct a new Resolver with default configuration options
let resolver = Resolver::new(ResolverConfig::default(), ResolverOpts::default()).unwrap();
// Lookup the IP addresses associated with a name.
// The final dot forces this to be an FQDN, otherwise the search rules as specified
// in `ResolverOpts` will take effect. FQDN's are generally cheaper queries.
let response = resolver.lookup_ip("www.example.com.").unwrap();
// There can be many addresses associated with the name,
// this can return IPv4 and/or IPv6 addresses
let address = response.iter().next().expect("no addresses returned!");
if address.is_ipv4() {
assert_eq!(address, IpAddr::V4(Ipv4Addr::new(93, 184, 216, 34)));
} else {
assert_eq!(address, IpAddr::V6(Ipv6Addr::new(0x2606, 0x2800, 0x220, 0x1, 0x248, 0x1893, 0x25c8, 0x1946)));
}
```
### Using the host system config
On Unix systems, the `/etc/resolv.conf` can be used for configuration. Not all options specified in the host system's `resolv.conf` are applicable or compatible with this software. In addition, there may be additional options supported here which the host system does not support. Example:
```
// Use the host OS'es `/etc/resolv.conf`
let resolver = Resolver::from_system_conf().unwrap();
let response = resolver.lookup_ip("www.example.com.").unwrap();
```
### Using the Tokio/Async Resolver
For more advanced asynchronous usage, the `AsyncResolver` is integrated with Tokio. In fact,
the `AsyncResolver` is used by the synchronous Resolver for all lookups.
```
use std::net::*;
use tokio::runtime::Runtime;
use trust_dns_resolver::TokioAsyncResolver;
use trust_dns_resolver::config::*;
// We need a Tokio Runtime to run the resolver
// this is responsible for running all Future tasks and registering interest in IO channels
let mut io_loop = Runtime::new().unwrap();
// Construct a new Resolver with default configuration options
let resolver = io_loop.block_on(async {
TokioAsyncResolver::tokio(
ResolverConfig::default(),
ResolverOpts::default())
});
// Lookup the IP addresses associated with a name.
// This returns a future that will lookup the IP addresses; it must be run on the
// Runtime to get the actual result.
let lookup_future = resolver.lookup_ip("www.example.com.");
// Run the lookup until it resolves or errors
let mut response = io_loop.block_on(lookup_future).unwrap();
// There can be many addresses associated with the name,
// this can return IPv4 and/or IPv6 addresses
let address = response.iter().next().expect("no addresses returned!");
if address.is_ipv4() {
assert_eq!(address, IpAddr::V4(Ipv4Addr::new(93, 184, 216, 34)));
} else {
assert_eq!(address, IpAddr::V6(Ipv6Addr::new(0x2606, 0x2800, 0x220, 0x1, 0x248, 0x1893, 0x25c8, 0x1946)));
}
```
Generally after a lookup in an asynchronous context, there would probably be a connection made to a server, for example:
```
let ips = io_loop.block_on(resolver.lookup_ip("www.example.com.")).unwrap();
let result = io_loop.block_on(async {
let ip = ips.iter().next().unwrap();
TcpStream::connect((ip, 443))
})
.and_then(|conn| Ok(conn) /* do something with the connection... */)
.unwrap();
```
It’s beyond the scope of these examples to show how to deal with connection failures, looping, etc. But if you wanted to, say, try a different address from the result set after a connection failure, it would be necessary to create a type that implements the `Future` trait.
Inside the `Future::poll` method would be the place to implement a loop over the different IP addresses.
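For illustration only, here is a minimal sketch (not part of the crate's own examples) that retries each resolved address in turn using plain `async`/`await` rather than a hand-written `Future::poll` loop; it assumes the `io_loop` and `resolver` bindings from the Tokio example above and Tokio's `TcpStream`.

```
use tokio::net::TcpStream;

let ips = io_loop.block_on(resolver.lookup_ip("www.example.com.")).unwrap();
let conn = io_loop.block_on(async {
    let mut last_err = None;
    for ip in ips.iter() {
        // Try each address until one accepts the connection.
        match TcpStream::connect((ip, 443)).await {
            Ok(stream) => return Ok(stream),
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.expect("lookup returned no addresses"))
});
```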
### DNS-over-TLS and DNS-over-HTTPS
DNS-over-TLS and DNS-over-HTTPS are supported in the Trust-DNS Resolver library. The underlying implementations are available as addon libraries. *WARNING* The trust-dns developers make no claims on the security and/or privacy guarantees of this implementation.
To use DNS-over-TLS one of the `dns-over-tls` features must be enabled at compile time. There are three: `dns-over-openssl`, `dns-over-native-tls`, and `dns-over-rustls`. For DNS-over-HTTPS only rustls is supported, via the `dns-over-https-rustls` feature; this implicitly enables support for DNS-over-TLS as well. The reason for each is to make the Trust-DNS libraries flexible for different deployments and/or security concerns. The easiest to use will generally be
`dns-over-rustls` which utilizes the `*ring*` Rust cryptography library (a rework of the
`boringssl` project), this should compile and be usable on most ARM and x86 platforms.
`dns-over-native-tls` will utilize the host's TLS implementation where available, or fall back to
`openssl` where not supported. `dns-over-openssl` will specify that `openssl` should be used
(which is a perfectly fine option if required). If more than one is specified, the precedence will be in this order (i.e. only one can be used at a time) `dns-over-rustls`,
`dns-over-native-tls`, and then `dns-over-openssl`. *NOTICE* the trust-dns developers are not responsible for any choice of library that does not meet required security requirements.
#### Example
Enable the TLS library through the dependency on `trust-dns-resolver`:
```
trust-dns-resolver = { version = "*", features = ["dns-over-rustls"] }
```
A default TLS configuration is available for Cloudflare’s `1.1.1.1` DNS service (Quad9 as well):
```
use trust_dns_resolver::Resolver;
use trust_dns_resolver::config::*;
// Construct a new Resolver with default configuration options
let mut resolver = Resolver::new(ResolverConfig::cloudflare_tls(), ResolverOpts::default()).unwrap();
// see example above...
```
### mDNS (multicast DNS)
Multicast DNS is an experimental feature in Trust-DNS at the moment. Its support on different platforms is not yet ideal. Initial support is only for IPv4 mDNS, as there are some complexities to figure out with IPv6. Once enabled, an mDNS `NameServer` will automatically be added to the `Resolver` and used for any lookups performed in the `.local.` zone.
Re-exports
---
* `pub extern crate trust_dns_proto as proto;`
* `pub use name_server::TokioHandle;`
Modules
---
* `caching_client`: Caching related functionality for the Resolver.
* `config`: Configuration for a resolver
* `dns_lru`: An LRU cache designed for work with DNS lookups
* `error`: Error types for the crate
* `lookup`: Lookup result from a resolution of ipv4 and ipv6 records with a Resolver.
* `lookup_ip`: LookupIp result from a resolution of ipv4 and ipv6 records with a Resolver.
* `name_server`: A module with associated items for working with nameservers
* `system_conf`: System configuration loading
* `testing` (crate feature `testing`): Unit tests compatible with different runtimes.
Structs
---
* `AsyncResolver`: An asynchronous resolver for DNS generic over async Runtimes.
* `Hosts`: Configuration for the local hosts file
* `Name`: A domain name
* `Resolver` (crate feature `tokio-runtime`): The Resolver is used for performing DNS queries.
Traits
---
* `IntoName`: Conversion into a Name
* `TryParseIp`: Types implementing this trait can be attempted for conversion to an IP address
Functions
---
* `version`: Returns a version as specified in Cargo.toml
Type Aliases
---
* `ResolverFuture` (deprecated, crate feature `tokio-runtime`): An alias for `AsyncResolver`, which replaced the type previously called `ResolverFuture`.
* `TokioAsyncResolver` (crate feature `tokio-runtime`): An AsyncResolver used with Tokio
Struct trust_dns_resolver::Resolver
===
```
pub struct Resolver { /* private fields */ }
```
Available on **crate feature `tokio-runtime`** only.
The Resolver is used for performing DNS queries.
For forward (A) lookups, hostname -> IP address, see: `Resolver::lookup_ip`
Special note about resource consumption: the Resolver and all Trust-DNS software is built around the Tokio async-io library. This synchronous Resolver is intended to be a simpler wrapper for the `AsyncResolver`. To allow the `Resolver` to be `Send` + `Sync`, the construction of the `AsyncResolver` is lazy; this means some of the features of the `AsyncResolver`, like performance-based resolution via the most efficient `NameServer`, will be lost (the lookup cache is shared across invocations of the `Resolver`). If these other features of the Trust-DNS Resolver are desired, please use the Tokio-based `AsyncResolver`.
*Note: Threaded/Sync usage*: In multithreaded scenarios, the internal Tokio Runtime will block on an internal Mutex for the tokio Runtime in use. For higher performance, it’s recommended to use the `AsyncResolver`.
Implementations
---
### impl Resolver
#### pub fn new(config: ResolverConfig, options: ResolverOpts) -> Result<Self>
Constructs a new Resolver with the specified configuration.
##### Arguments
* `config` - configuration for the resolver
* `options` - resolver options for performing lookups
##### Returns
A new `Resolver` or an error if there was an error with the configuration.
#### pub fn default() -> Result<Self>
Constructs a new Resolver with default config and default options.
See `ResolverConfig::default` and `ResolverOpts::default` for more information.
##### Returns
A new `Resolver` or an error if there was an error with the configuration.
#### pub fn from_system_conf() -> Result<Self>
Available on **crate feature `system-config` and (Unix or Windows)** only.
Constructs a new Resolver with the system configuration.
This will use `/etc/resolv.conf` on Unix OSes and the registry on Windows.
#### pub fn clear_cache(&self)
Flushes/Removes all entries from the cache
#### pub fn lookup<N: IntoName>(
&self,
name: N,
record_type: RecordType
) -> ResolveResult<Lookup>
Generic lookup for any RecordType
*WARNING* This interface may change in the future, please use `Self::lookup_ip` or another variant for more stable interfaces.
##### Arguments
* `name` - name of the record to lookup, if name is not a valid domain name, an error will be returned
* `record_type` - type of record to lookup
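A minimal sketch of a generic lookup (assuming the `resolver` constructed as in the crate-level example; `RecordType` comes from the re-exported `proto` crate):

```
use trust_dns_resolver::Resolver;
use trust_dns_resolver::config::*;
use trust_dns_resolver::proto::rr::RecordType;

let resolver = Resolver::new(ResolverConfig::default(), ResolverOpts::default()).unwrap();
// Ask for TXT records; any other RecordType works the same way.
let txt = resolver.lookup("example.com.", RecordType::TXT).unwrap();
for rdata in txt.iter() {
    println!("{:?}", rdata);
}
```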
#### pub fn lookup_ip<N: IntoName + TryParseIp>(
&self,
host: N
) -> ResolveResult<LookupIp>
Performs a dual-stack DNS lookup for the IP for the given hostname.
See the configuration and options parameters for controlling the way in which A(Ipv4) and AAAA(Ipv6) lookups will be performed. For the least expensive query a fully-qualified-domain-name, FQDN, which ends in a final `.`, e.g. `www.example.com.`, will only issue one query. Anything else will always incur the cost of querying the `ResolverConfig::domain` and `ResolverConfig::search`.
##### Arguments
* `host` - string hostname, if this is an invalid hostname, an error will be returned.
#### pub fn reverse_lookup(&self, query: IpAddr) -> ResolveResult<ReverseLookup>
Performs a lookup for the associated type.
##### Arguments
* `query` - a type which can be converted to `Name` via `From`.
#### pub fn ipv4_lookup<N: IntoName>(&self, query: N) -> ResolveResult<Ipv4Lookup>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a `&str` which parses to a domain name, failure to parse will return an error
#### pub fn ipv6_lookup<N: IntoName>(&self, query: N) -> ResolveResult<Ipv6Lookup>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a `&str` which parses to a domain name, failure to parse will return an error
#### pub fn mx_lookup<N: IntoName>(&self, query: N) -> ResolveResult<MxLookup>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a `&str` which parses to a domain name, failure to parse will return an error
#### pub fn ns_lookup<N: IntoName>(&self, query: N) -> ResolveResult<NsLookup>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a `&str` which parses to a domain name, failure to parse will return an error
#### pub fn soa_lookup<N: IntoName>(&self, query: N) -> ResolveResult<SoaLookup>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a `&str` which parses to a domain name, failure to parse will return an error
#### pub fn srv_lookup<N: IntoName>(&self, query: N) -> ResolveResult<SrvLookup>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a `&str` which parses to a domain name, failure to parse will return an error
#### pub fn tlsa_lookup<N: IntoName>(&self, query: N) -> ResolveResult<TlsaLookup>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a `&str` which parses to a domain name, failure to parse will return an error
#### pub fn txt_lookup<N: IntoName>(&self, query: N) -> ResolveResult<TxtLookup>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a `&str` which parses to a domain name, failure to parse will return an error
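The typed lookups above all follow the same pattern; a minimal sketch using `mx_lookup` (reusing the `resolver` from the constructor example, and assuming the usual MX record accessors from `trust-dns-proto`):

```
let mx = resolver.mx_lookup("example.com.").unwrap();
for record in mx.iter() {
    // preference() is the MX priority, exchange() the mail server name
    println!("{} {}", record.preference(), record.exchange());
}
```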
Auto Trait Implementations
---
### impl !RefUnwindSafe for Resolver
### impl Send for Resolver
### impl Sync for Resolver
### impl Unpin for Resolver
### impl !UnwindSafe for Resolver
Struct trust_dns_resolver::AsyncResolver
===
```
pub struct AsyncResolver<P: ConnectionProvider> { /* private fields */ }
```
An asynchronous resolver for DNS generic over async Runtimes.
Creating a `AsyncResolver` returns a new handle and a future that should be spawned on an executor to drive the background work. The lookup methods on `AsyncResolver` request lookups from the background task.
The futures returned by a `AsyncResolver` and the corresponding background task need not be spawned on the same executor, or be in the same thread.
Additionally, one background task may have any number of handles; calling
`clone()` on a handle will create a new handle linked to the same background task.
*NOTE* If lookup futures returned by a `AsyncResolver` and the background future are spawned on two separate `CurrentThread` executors, one thread cannot run both executors simultaneously, so the `run` or `block_on`
functions will cause the thread to deadlock. If both the background work and the lookup futures are intended to be run on the same thread, they should be spawned on the same executor.
The background task manages the name server pool and other state used to drive lookups. When this future is spawned on an executor, it will first construct and configure the necessary client state, before checking for any incoming lookup requests, handling them, and yielding. It will continue to do so as long as there are still any `AsyncResolver` handle linked to it. When all of its `AsyncResolver`s have been dropped, the background future will finish.
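A minimal sketch of driving the resolver from async code (assuming the `tokio-runtime` feature and a Tokio entry point such as `#[tokio::main]`):

```
use trust_dns_resolver::TokioAsyncResolver;
use trust_dns_resolver::config::*;

#[tokio::main]
async fn main() {
    // The handle can be cloned cheaply and shared across tasks.
    let resolver = TokioAsyncResolver::tokio(ResolverConfig::default(), ResolverOpts::default());
    let response = resolver.lookup_ip("www.example.com.").await.unwrap();
    for addr in response.iter() {
        println!("{}", addr);
    }
}
```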
Implementations
---
### impl AsyncResolver<GenericConnector<TokioRuntimeProvider>>
#### pub fn tokio(config: ResolverConfig, options: ResolverOpts) -> Self
Available on **crate feature `tokio-runtime`** only.
Construct a new Tokio based `AsyncResolver` with the provided configuration.
##### Arguments
* `config` - configuration, name_servers, etc. for the Resolver
* `options` - basic lookup options for the resolver
##### Returns
A tuple containing the new `AsyncResolver` and a future that drives the background task that runs resolutions for the `AsyncResolver`. See the documentation for `AsyncResolver` for more information on how to use the background future.
#### pub fn tokio_from_system_conf() -> Result<Self, ResolveError>
Available on **crate feature `system-config` and (Unix or Windows) and crate feature `tokio-runtime`** only.
Constructs a new Tokio based Resolver with the system configuration.
This will use `/etc/resolv.conf` on Unix OSes and the registry on Windows.
### impl<R: ConnectionProvider> AsyncResolver<R>
#### pub fn new(config: ResolverConfig, options: ResolverOpts, provider: R) -> Self
Construct a new generic `AsyncResolver` with the provided configuration.
see [TokioAsyncResolver::tokio(..)] instead.
##### Arguments
* `config` - configuration, name_servers, etc. for the Resolver
* `options` - basic lookup options for the resolver
##### Returns
A tuple containing the new `AsyncResolver` and a future that drives the background task that runs resolutions for the `AsyncResolver`. See the documentation for `AsyncResolver` for more information on how to use the background future.
#### pub fn from_system_conf(runtime: R) -> Result<Self, ResolveError>
Available on **crate feature `system-config` and (Unix or Windows)** only.
Constructs a new Resolver with the system configuration.
see [TokioAsyncResolver::tokio_from_system_conf(..)] instead.
This will use `/etc/resolv.conf` on Unix OSes and the registry on Windows.
#### pub fn clear_cache(&self)
Flushes/Removes all entries from the cache
### impl<P: ConnectionProvider> AsyncResolver<P>
#### pub fn new_with_conn(
config: ResolverConfig,
options: ResolverOpts,
conn_provider: P
) -> Self
Construct a new `AsyncResolver` with the provided configuration.
##### Arguments
* `config` - configuration, name_servers, etc. for the Resolver
* `options` - basic lookup options for the resolver
##### Returns
A tuple containing the new `AsyncResolver` and a future that drives the background task that runs resolutions for the `AsyncResolver`. See the documentation for `AsyncResolver` for more information on how to use the background future.
#### pub fn from_system_conf_with_provider(
conn_provider: P
) -> Result<Self, ResolveError>
Available on **crate feature `system-config` and (Unix or Windows)** only.
Constructs a new Resolver with the system configuration.
This will use `/etc/resolv.conf` on Unix OSes and the registry on Windows.
#### pub async fn lookup<N: IntoName>(
&self,
name: N,
record_type: RecordType
) -> Result<Lookup, ResolveError>
Generic lookup for any RecordType
*WARNING* this interface may change in the future, see if one of the specializations would be better.
##### Arguments
* `name` - name of the record to lookup, if name is not a valid domain name, an error will be returned
* `record_type` - type of record to lookup, all RecordData responses will be filtered to this type
##### Returns
#### pub async fn lookup_ip<N: IntoName + TryParseIp>(
&self,
host: N
) -> Result<LookupIp, ResolveError>
Performs a dual-stack DNS lookup for the IP for the given hostname.
See the configuration and options parameters for controlling the way in which A(Ipv4) and AAAA(Ipv6) lookups will be performed. For the least expensive query a fully-qualified-domain-name, FQDN, which ends in a final `.`, e.g. `www.example.com.`, will only issue one query. Anything else will always incur the cost of querying the `ResolverConfig::domain` and `ResolverConfig::search`.
##### Arguments
* `host` - string hostname, if this is an invalid hostname, an error will be returned.
#### pub fn set_hosts(&mut self, hosts: Option<Hosts>)
Customizes the static hosts used in this resolver.
#### pub async fn reverse_lookup(
&self,
query: IpAddr
) -> Result<ReverseLookup, ResolveError>
Performs a lookup for the associated type.
##### Arguments
* `query` - a type which can be converted to `Name` via `From`.
#### pub async fn ipv4_lookup<N: IntoName>(
&self,
query: N
) -> Result<Ipv4Lookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn ipv6_lookup<N: IntoName>(
&self,
query: N
) -> Result<Ipv6Lookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn mx_lookup<N: IntoName>(
&self,
query: N
) -> Result<MxLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn ns_lookup<N: IntoName>(
&self,
query: N
) -> Result<NsLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn soa_lookup<N: IntoName>(
&self,
query: N
) -> Result<SoaLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn srv_lookup<N: IntoName>(
&self,
query: N
) -> Result<SrvLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn tlsa_lookup<N: IntoName>(
&self,
query: N
) -> Result<TlsaLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn txt_lookup<N: IntoName>(
&self,
query: N
) -> Result<TxtLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
Trait Implementations
---
### impl<P: Clone + ConnectionProvider> Clone for AsyncResolver<P>
#### fn clone(&self) -> AsyncResolver<P>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl<P> !RefUnwindSafe for AsyncResolver<P>
### impl<P> Send for AsyncResolver<P>
### impl<P> Sync for AsyncResolver<P>
### impl<P> Unpin for AsyncResolver<P>
### impl<P> !UnwindSafe for AsyncResolver<P>
Struct trust_dns_resolver::name_server::TokioHandle
===
```
pub struct TokioHandle { /* private fields */ }
```
Available on **crate feature `tokio-runtime`** only.
A handle to the Tokio runtime
Trait Implementations
---
### impl Clone for TokioHandle
#### fn clone(&self) -> TokioHandle
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn default() -> TokioHandle
Returns the “default value” for a type.
#### fn spawn_bg<F>(&mut self, future: F)where
F: Future<Output = Result<(), ProtoError>> + Send + 'static,
Spawn a future in the background.
Auto Trait Implementations
---
### impl RefUnwindSafe for TokioHandle
### impl Send for TokioHandle
### impl Sync for TokioHandle
### impl Unpin for TokioHandle
### impl UnwindSafe for TokioHandle
Module trust_dns_resolver::caching_client
===
Caching related functionality for the Resolver.
Module trust_dns_resolver::config
===
Configuration for a resolver
Structs
---
* `NameServerConfig`: Configuration for the NameServer
* `NameServerConfigGroup`: A set of name_servers to associate with a `ResolverConfig`.
* `ResolverConfig`: Configuration for the upstream nameservers to use for resolution
* `ResolverOpts`: Configuration for the Resolver
* `TlsClientConfig` (crate feature `dns-over-rustls`): A compatibility wrapper around rustls ClientConfig
Enums
---
* `LookupIpStrategy`: The lookup ip strategy
* `Protocol`: The protocol on which a NameServer should be communicated with
* `ServerOrderingStrategy`: The strategy for establishing the query order of name servers in a pool.
Constants
---
* `CLOUDFLARE_IPS`: IP addresses for Cloudflare’s 1.1.1.1 DNS service
* `GOOGLE_IPS`: IP addresses for Google Public DNS
* `QUAD9_IPS`: IP addresses for the Quad9 DNS service
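A minimal sketch of building a custom configuration from the items above (assuming `NameServerConfigGroup::quad9` and `ResolverConfig::from_parts` are available in your version; check the generated docs for the exact signatures):

```
use trust_dns_resolver::Resolver;
use trust_dns_resolver::config::*;

// Resolve through the Quad9 servers instead of the defaults;
// no domain and no search list are configured here.
let name_servers = NameServerConfigGroup::quad9();
let config = ResolverConfig::from_parts(None, vec![], name_servers);
let resolver = Resolver::new(config, ResolverOpts::default()).unwrap();
let response = resolver.lookup_ip("www.example.com.").unwrap();
```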
Module trust_dns_resolver::dns_lru
===
An LRU cache designed for work with DNS lookups
Structs
---
* `DnsLru`: An LRU eviction cache specifically for storing DNS records
* `TtlConfig`: The time-to-live, TTL, configuration for use by the cache.
Module trust_dns_resolver::error
===
Error types for the crate
Structs
---
* `ResolveError`: The error type for errors that get returned in the crate
Enums
---
* `ResolveErrorKind`: The error kind for errors that get returned in the crate
Type Aliases
---
* `ResolveResult`: An alias for results returned by functions of this crate
Module trust_dns_resolver::lookup
===
Lookup result from a resolution of ipv4 and ipv6 records with a Resolver.
Structs
---
* `Ipv4Lookup`: Contains the results of a lookup for the associated RecordType
* `Ipv4LookupIntoIter`: Borrowed view of set of RDatas returned from a Lookup
* `Ipv4LookupIter`: An iterator over the Lookup type
* `Ipv6Lookup`: Contains the results of a lookup for the associated RecordType
* `Ipv6LookupIntoIter`: Borrowed view of set of RDatas returned from a Lookup
* `Ipv6LookupIter`: An iterator over the Lookup type
* `Lookup`: Result of a DNS query when querying for any record type supported by the Trust-DNS Proto library.
* `LookupIntoIter`: Borrowed view of set of `RData`s returned from a `Lookup`.
* `LookupIter`: Borrowed view of set of `RData`s returned from a Lookup
* `LookupRecordIter`: Borrowed view of set of `Record`s returned from a Lookup
* `MxLookup`: Contains the results of a lookup for the associated RecordType
* `MxLookupIntoIter`: Borrowed view of set of RDatas returned from a Lookup
* `MxLookupIter`: An iterator over the Lookup type
* `NsLookup`: Contains the results of a lookup for the associated RecordType
* `NsLookupIntoIter`: Borrowed view of set of RDatas returned from a Lookup
* `NsLookupIter`: An iterator over the Lookup type
* `ReverseLookup`: Contains the results of a lookup for the associated RecordType
* `ReverseLookupIntoIter`: Borrowed view of set of RDatas returned from a Lookup
* `ReverseLookupIter`: An iterator over the Lookup type
* `SoaLookup`: Contains the results of a lookup for the associated RecordType
* `SoaLookupIntoIter`: Borrowed view of set of RDatas returned from a Lookup
* `SoaLookupIter`: An iterator over the Lookup type
* `SrvLookup`: The result of an SRV lookup
* `SrvLookupIntoIter`: Borrowed view of set of RDatas returned from a Lookup
* `SrvLookupIter`: An iterator over the Lookup type
* `TlsaLookup`: Contains the results of a lookup for the associated RecordType
* `TlsaLookupIntoIter`: Borrowed view of set of RDatas returned from a Lookup
* `TlsaLookupIter`: An iterator over the Lookup type
* `TxtLookup`: Contains the results of a lookup for the associated RecordType
* `TxtLookupIntoIter`: Borrowed view of set of RDatas returned from a Lookup
* `TxtLookupIter`: An iterator over the Lookup type
Module trust_dns_resolver::name_server
===
A module with associated items for working with nameservers
Structs
---
* `GenericConnection`: A connected DNS handle
* `GenericConnector`: Default connector for `GenericConnection`
* `NameServer`: This struct is used to create `DnsHandle` with the help of `P`.
* `NameServerPool`: Abstract interface for mocking purpose
* `TokioHandle` (crate feature `tokio-runtime`): A handle to the Tokio runtime
* `TokioRuntimeProvider` (crate feature `tokio-runtime`): The Tokio Runtime for async execution
Traits
---
* `ConnectionProvider`: Create `DnsHandle` with the help of `RuntimeProvider`. This trait is designed for customization.
* `RuntimeProvider`: Defines which async runtime handles IO and timers.
* `Spawn`: A type that defines the Handle which can spawn a future.
Type Aliases
---
* `GenericNameServer`: Specifies the details of a remote NameServer used for lookups
* `GenericNameServerPool`: A pool of NameServers
* `TokioConnectionProvider` (crate feature `tokio-runtime`): Default ConnectionProvider with `GenericConnection`.
Module trust_dns_resolver::system_conf
===
System configuration loading
This module is responsible for parsing and returning the configuration from the host system. It will read from the default location on each operating system, e.g. most Unixes have this written to `/etc/resolv.conf`
Functions
---
* `parse_resolv_conf` (`system-config` and Unix only)
* `read_system_conf` (`system-config` and Unix only)
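A minimal sketch using `read_system_conf` from the list above (assuming the `system-config` feature on a Unix host, and that it returns a `(ResolverConfig, ResolverOpts)` pair as in current versions):

```
use trust_dns_resolver::Resolver;
use trust_dns_resolver::system_conf::read_system_conf;

// Load /etc/resolv.conf, then inspect or tweak the result before building a resolver.
let (config, opts) = read_system_conf().unwrap();
println!("{} upstream name servers configured", config.name_servers().len());
let resolver = Resolver::new(config, opts).unwrap();
```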
Module trust_dns_resolver::testing
===
Available on **crate feature `testing`** only.
Unit tests compatible with different runtime.
Functions
---
* `domain_search_test`: Test domain search.
* `fqdn_test`: Test fqdn.
* `hosts_lookup_test` (`system-config`): Test AsyncResolver created from system configuration with host lookups.
* `idna_test`: Test idna.
* `ip_lookup_across_threads_test`: Test IP lookup from IP literals across threads.
* `ip_lookup_test`: Test IP lookup from IP literals.
* `large_ndots_test`: Test large ndots with non-fqdn.
* `localhost_ipv4_test`: Test ipv4 localhost.
* `localhost_ipv6_test`: Test ipv6 localhost.
* `lookup_test`: Test IP lookup from URLs.
* `ndots_test`: Test ndots with non-fqdn.
* `search_ipv4_large_ndots_test`: Test ipv4 search with large ndots.
* `search_ipv6_large_ndots_test`: Test ipv6 search with large ndots.
* `search_ipv6_name_parse_fails_test`: Test ipv6 name parse fails.
* `search_list_test`: Test search lists.
* `sec_lookup_fails_test` (`dnssec`): Test IP lookup from domains that exist but are unsigned, with DNSSEC validation.
* `sec_lookup_test` (`dnssec`): Test IP lookup from URLs with DNSSEC validation.
* `system_lookup_test` (`system-config`): Test AsyncResolver created from system configuration with IP lookup.
Struct trust_dns_resolver::Hosts
===
```
pub struct Hosts { /* private fields */ }
```
Configuration for the local hosts file
Implementations
---
### impl Hosts
#### pub fn new() -> Self
Creates a new configuration from the system hosts file,
only works for Windows and Unix-like OSes,
will return empty configuration on others
#### pub fn lookup_static_host(&self, query: &Query) -> Option<Lookup>
Look up the addresses for the given host from the system hosts file.
#### pub fn insert(&mut self, name: Name, record_type: RecordType, lookup: Lookup)
Insert a new Lookup for the associated `Name` and `RecordType`
#### pub fn read_hosts_conf(self, src: impl Read) -> Result<Self>
Parse configuration from `src`
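A minimal sketch of `read_hosts_conf` and `lookup_static_host` (the inline hosts data and the `printer.local` name are purely illustrative; `Query` comes from the re-exported `proto` crate):

```
use std::io::Cursor;
use std::str::FromStr;
use trust_dns_resolver::{Hosts, Name};
use trust_dns_resolver::proto::op::Query;
use trust_dns_resolver::proto::rr::RecordType;

// Parse a hosts-style entry from an in-memory buffer instead of /etc/hosts.
let hosts = Hosts::default()
    .read_hosts_conf(Cursor::new("127.0.0.1 printer.local\n"))
    .unwrap();
let query = Query::query(Name::from_str("printer.local.").unwrap(), RecordType::A);
if let Some(lookup) = hosts.lookup_static_host(&query) {
    println!("{:?}", lookup.iter().next());
}
```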
Trait Implementations
---
### impl Debug for Hosts
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Hosts
Returns the “default value” for a type.
Auto Trait Implementations
---
### impl RefUnwindSafe for Hosts
### impl Send for Hosts
### impl Sync for Hosts
### impl Unpin for Hosts
### impl UnwindSafe for Hosts
Struct trust_dns_resolver::Name
===
```
pub struct Name { /* private fields */ }
```
A domain name
Implementations
---
### impl Name
#### pub fn new() -> Name
Create a new domain::Name, i.e. label
#### pub fn root() -> Name
Returns the root label, i.e. no labels, can probably make this better in the future.
#### pub fn is_root(&self) -> bool
Returns true if there are no labels, i.e. it’s empty.
In DNS the root is represented by `.`
##### Examples
```
use trust_dns_proto::rr::domain::Name;
let root = Name::root();
assert_eq!(&root.to_string(), ".");
```
#### pub fn is_fqdn(&self) -> bool
Returns true if the name is a fully qualified domain name.
If this is true, it has effects like only querying for this single name, as opposed to building up a search list in resolvers.
*warning: this interface is unstable and may change in the future*
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
let name = Name::from_str("www").unwrap();
assert!(!name.is_fqdn());
let name = Name::from_str("www.example.com").unwrap();
assert!(!name.is_fqdn());
let name = Name::from_str("www.example.com.").unwrap();
assert!(name.is_fqdn());
```
#### pub fn set_fqdn(&mut self, val: bool)
Specifies this name is a fully qualified domain name
*warning: this interface is unstable and may change in the future*
#### pub fn iter(&self) -> LabelIter<'_>
Returns an iterator over the labels
#### pub fn append_label<L>(self, label: L) -> Result<Name, ProtoError>where
L: IntoLabel,
Appends the label to the end of this name
##### Example
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
let name = Name::from_str("www.example").unwrap();
let name = name.append_label("com").unwrap();
assert_eq!(name, Name::from_str("www.example.com").unwrap());
```
#### pub fn from_labels<I, L>(labels: I) -> Result<Name, ProtoError>where
I: IntoIterator<Item = L>,
L: IntoLabel,
Creates a new Name from the specified labels
##### Arguments
* `labels` - vector of items which will be stored as Strings.
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
// From strings, uses utf8 conversion
let from_labels = Name::from_labels(vec!["www", "example", "com"]).unwrap();
assert_eq!(from_labels, Name::from_str("www.example.com").unwrap());
// Force a set of bytes into labels (this is non-standard and potentially dangerous)
let from_labels = Name::from_labels(vec!["bad chars".as_bytes(), "example".as_bytes(), "com".as_bytes()]).unwrap();
assert_eq!(from_labels.iter().next(), Some(&b"bad chars"[..]));
let root = Name::from_labels(Vec::<&str>::new()).unwrap();
assert!(root.is_root());
```
#### pub fn append_name(self, other: &Name) -> Result<Name, ProtoError>
Appends `other` to `self`, returning a new `Name`
Carries forward `is_fqdn` from `other`.
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
let local = Name::from_str("www").unwrap();
let domain = Name::from_str("example.com").unwrap();
assert!(!domain.is_fqdn());
let name = local.clone().append_name(&domain).unwrap();
assert_eq!(name, Name::from_str("www.example.com").unwrap());
assert!(!name.is_fqdn());
// see also `Name::append_domain`
let domain = Name::from_str("example.com.").unwrap();
assert!(domain.is_fqdn());
let name = local.append_name(&domain).unwrap();
assert_eq!(name, Name::from_str("www.example.com.").unwrap());
assert!(name.is_fqdn());
```
#### pub fn append_domain(self, domain: &Name) -> Result<Name, ProtoError>
Appends the `domain` to `self`, making the new `Name` an FQDN
This is an alias for `append_name` with the added effect of marking the new `Name` as a fully-qualified-domain-name.
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
let local = Name::from_str("www").unwrap();
let domain = Name::from_str("example.com").unwrap();
let name = local.append_domain(&domain).unwrap();
assert_eq!(name, Name::from_str("www.example.com").unwrap());
assert!(name.is_fqdn())
```
#### pub fn to_lowercase(&self) -> Name
Creates a new Name with all labels lowercased
##### Examples
```
use std::cmp::Ordering;
use std::str::FromStr;
use trust_dns_proto::rr::domain::{Label, Name};
let example_com = Name::from_ascii("Example.Com").unwrap();
assert_eq!(example_com.cmp_case(&Name::from_str("example.com").unwrap()), Ordering::Less);
assert!(example_com.to_lowercase().eq_case(&Name::from_str("example.com").unwrap()));
```
#### pub fn base_name(&self) -> Name
Trims off the first part of the name, to help with searching for the domain piece
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
let example_com = Name::from_str("example.com.").unwrap();
assert_eq!(example_com.base_name(), Name::from_str("com.").unwrap());
assert_eq!(Name::from_str("com.").unwrap().base_name(), Name::root());
assert_eq!(Name::root().base_name(), Name::root());
```
#### pub fn trim_to(&self, num_labels: usize) -> Name
Trims to the number of labels specified
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
let example_com = Name::from_str("example.com.").unwrap();
assert_eq!(example_com.trim_to(2), Name::from_str("example.com.").unwrap());
assert_eq!(example_com.trim_to(1), Name::from_str("com.").unwrap());
assert_eq!(example_com.trim_to(0), Name::root());
assert_eq!(example_com.trim_to(3), Name::from_str("example.com.").unwrap());
```
#### pub fn zone_of_case(&self, name: &Name) -> bool
same as `zone_of` allows for case sensitive call
#### pub fn zone_of(&self, name: &Name) -> bool
returns true if the name components of self are all present at the end of name
##### Example
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
let name = Name::from_str("www.example.com").unwrap();
let name = Name::from_str("www.example.com").unwrap();
let zone = Name::from_str("example.com").unwrap();
let another = Name::from_str("example.net").unwrap();
assert!(zone.zone_of(&name));
assert!(!name.zone_of(&zone));
assert!(!another.zone_of(&name));
```
#### pub fn num_labels(&self) -> u8
Returns the number of labels in the name, discounting `*`.
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
let root = Name::root();
assert_eq!(root.num_labels(), 0);
let example_com = Name::from_str("example.com").unwrap();
assert_eq!(example_com.num_labels(), 2);
let star_example_com = Name::from_str("*.example.com.").unwrap();
assert_eq!(star_example_com.num_labels(), 2);
```
#### pub fn len(&self) -> usize
returns the length in bytes of the labels. ‘.’ counts as 1
This can be used as an estimate, when serializing labels, they will often be compressed and/or escaped causing the exact length to be different.
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
assert_eq!(Name::from_str("www.example.com.").unwrap().len(), 16);
assert_eq!(Name::from_str(".").unwrap().len(), 1);
assert_eq!(Name::root().len(), 1);
```
#### pub fn is_empty(&self) -> bool
Returns whether the length of the labels, in bytes is 0. In practice, since ‘.’ counts as 1, this is never the case so the method returns false.
#### pub fn parse(local: &str, origin: Option<&Name>) -> Result<Name, ProtoError>
Attempts to parse a name such as `"example.com."` or `"subdomain.example.com."`
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::domain::Name;
let name = Name::from_str("example.com.").unwrap();
assert_eq!(name.base_name(), Name::from_str("com.").unwrap());
assert_eq!(name.iter().next(), Some(&b"example"[..]));
```
#### pub fn from_ascii<S>(name: S) -> Result<Name, ProtoError>where
S: AsRef<str>,
Will convert the string to a name only allowing ascii as valid input
This method will also preserve the case of the name where that’s desirable
##### Examples
```
use trust_dns_proto::rr::Name;
let bytes_name = Name::from_labels(vec!["WWW".as_bytes(), "example".as_bytes(), "COM".as_bytes()]).unwrap();
let ascii_name = Name::from_ascii("WWW.example.COM.").unwrap();
let lower_name = Name::from_ascii("www.example.com.").unwrap();
assert!(bytes_name.eq_case(&ascii_name));
assert!(!lower_name.eq_case(&ascii_name));
// escaped values
let bytes_name = Name::from_labels(vec!["email.name".as_bytes(), "example".as_bytes(), "com".as_bytes()]).unwrap();
let name = Name::from_ascii("email\\.name.example.com.").unwrap();
assert_eq!(bytes_name, name);
let bytes_name = Name::from_labels(vec!["bad.char".as_bytes(), "example".as_bytes(), "com".as_bytes()]).unwrap();
let name = Name::from_ascii("bad\\056char.example.com.").unwrap();
assert_eq!(bytes_name, name);
```
#### pub fn from_utf8<S>(name: S) -> Result<Name, ProtoError>where
S: AsRef<str>,
Will convert the string to a name using IDNA, punycode, to encode the UTF8 as necessary
When making names IDNA compatible, there is a side-effect of lowercasing the name.
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::Name;
let bytes_name = Name::from_labels(vec!["WWW".as_bytes(), "example".as_bytes(), "COM".as_bytes()]).unwrap();
// from_str calls through to from_utf8
let utf8_name = Name::from_str("WWW.example.COM.").unwrap();
let lower_name = Name::from_str("www.example.com.").unwrap();
assert!(!bytes_name.eq_case(&utf8_name));
assert!(lower_name.eq_case(&utf8_name));
```
#### pub fn from_str_relaxed<S>(name: S) -> Result<Name, ProtoError>where
S: AsRef<str>,
First attempts to decode via `from_utf8`, if that fails IDNA checks, then falls back to ascii decoding.
##### Examples
```
use std::str::FromStr;
use trust_dns_proto::rr::Name;
// Ok, underscore in the beginning of a name
assert!(Name::from_utf8("_allows.example.com.").is_ok());
// Error, underscore in the end
assert!(Name::from_utf8("dis_allowed.example.com.").is_err());
// Ok, relaxed mode
assert!(Name::from_str_relaxed("allow_in_.example.com.").is_ok());
```
#### pub fn emit_as_canonical(
&self,
encoder: &mut BinEncoder<'_>,
canonical: bool
) -> Result<(), ProtoError>
Emits the canonical version of the name to the encoder.
In canonical form, there will be no pointers written to the encoder (i.e. no compression).
#### pub fn emit_with_lowercase(
&self,
encoder: &mut BinEncoder<'_>,
lowercase: bool
) -> Result<(), ProtoError>
Writes the labels, as lower case, to the encoder
##### Arguments
* `encoder` - encoder for writing this name
* `lowercase` - if true the name will be lowercased, otherwise it will not be changed when writing
#### pub fn cmp_case(&self, other: &Name) -> Ordering
Case sensitive comparison
#### pub fn eq_case(&self, other: &Name) -> bool
Compares the Names, in a case sensitive manner
#### pub fn to_ascii(&self) -> String
Converts this name into an ascii safe string.
If the name is an IDNA name, then the name labels will be returned with the `xn--` prefix.
see `to_utf8` or the `Display` impl for methods which convert labels to utf8.
#### pub fn to_utf8(&self) -> String
Converts the Name labels to the utf8 String form.
This converts the name to an unescaped format that could be used with parse. If the name is followed by the final `.`, e.g. as in `www.example.com.`, it represents a fully qualified Name.
#### pub fn parse_arpa_name(&self) -> Result<IpNet, ProtoError>
Converts a *.arpa Name in a PTR record back into an IpNet if possible.
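A minimal sketch (the address and the expected `/32` prefix are illustrative):

```
use std::str::FromStr;
use trust_dns_proto::rr::Name;
// A fully specified in-addr.arpa name maps back to a single /32 network.
let ptr_name = Name::from_str("4.3.2.1.in-addr.arpa.").unwrap();
let net = ptr_name.parse_arpa_name().unwrap();
println!("{}", net); // e.g. 1.2.3.4/32
```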
#### pub fn is_localhost(&self) -> bool
Returns true if the `Name` is either localhost or in the localhost zone.
##### Example
```
use std::str::FromStr;
use trust_dns_proto::rr::Name;
let name = Name::from_str("localhost").unwrap();
assert!(name.is_localhost());
let name = Name::from_str("localhost.").unwrap();
assert!(name.is_localhost());
let name = Name::from_str("my.localhost.").unwrap();
assert!(name.is_localhost());
```
#### pub fn is_wildcard(&self) -> bool
True if the first label of this name is the wildcard, i.e. ‘*’
##### Example
```
use std::str::FromStr;
use trust_dns_proto::rr::Name;
let name = Name::from_str("www.example.com").unwrap();
assert!(!name.is_wildcard());
let name = Name::from_str("*.example.com").unwrap();
assert!(name.is_wildcard());
let name = Name::root();
assert!(!name.is_wildcard());
```
#### pub fn into_wildcard(self) -> Name
Converts a name to a wildcard, by replacing the first label with `*`
##### Example
```
use std::str::FromStr;
use trust_dns_proto::rr::Name;
let name = Name::from_str("www.example.com").unwrap().into_wildcard();
assert_eq!(name, Name::from_str("*.example.com.").unwrap());
// does nothing if the root
let name = Name::root().into_wildcard();
assert_eq!(name, Name::root());
```
Trait Implementations
---
### impl<'r> BinDecodable<'r> for Name
#### fn read(decoder: &mut BinDecoder<'r>) -> Result<Name, ProtoError>
Parses the chain of labels. A name has a maximum of 255 octets, with each label being less than 63 octets.
All names will be stored lowercase internally.
This will consume the portions of the `Vec` which it is reading…
#### fn from_bytes(bytes: &'r [u8]) -> Result<Self, ProtoError>
Returns the object in binary form.
### impl BinEncodable for Name
#### fn emit(&self, encoder: &mut BinEncoder<'_>) -> Result<(), ProtoError>
Write the type to the stream.
#### fn to_bytes(&self) -> Result<Vec<u8, Global>, ProtoError>
Returns the object in binary form.
### impl Clone for Name
#### fn clone(&self) -> Name
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Name
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Default for Name
#### fn default() -> Name
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for Name
#### fn deserialize<D>(deserializer: D) -> Result<Name, <D as Deserializer<'de>>::Error> where D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.
### impl Display for Name
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl<'a> From<&'a LowerName> for Name
#### fn from(name: &'a LowerName) -> Name
Converts to this type from the input type.
### impl From<IpAddr> for Name
#### fn from(addr: IpAddr) -> Name
Converts to this type from the input type.
### impl From<Ipv4Addr> for Name
#### fn from(addr: Ipv4Addr) -> Name
Converts to this type from the input type.
### impl From<Ipv6Addr> for Name
#### fn from(addr: Ipv6Addr) -> Name
Converts to this type from the input type.
### impl From<LowerName> for Name
#### fn from(name: LowerName) -> Name
Converts to this type from the input type.
### impl FromStr for Name
#### fn from_str(s: &str) -> Result<Name, <Name as FromStr>::Err>
Uses the Name::from_utf8 conversion on this string; see Name::from_ascii for ascii-only parsing or for preserving case.
#### type Err = ProtoError
The associated error which can be returned from parsing.
### impl Hash for Name
#### fn hash<H>(&self, state: &mut H) where H: Hasher
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized
Feeds a slice of this type into the given `Hasher`.
### impl<'a> IntoIterator for &'a Name
#### type Item = &'a [u8]
The type of the elements being iterated over.
#### type IntoIter = LabelIter<'a>
Which kind of iterator are we turning this into?
#### fn into_iter(self) -> <&'a Name as IntoIterator>::IntoIter
Creates an iterator from a value.
### impl Ord for Name
#### fn cmp(&self, other: &Name) -> Ordering
Case insensitive comparison, see `Name::cmp_case` for case sensitive comparisons
RFC 4034 DNSSEC Resource Records March 2005
```
6.1. Canonical DNS Name Order
For the purposes of DNS security, owner names are ordered by treating
individual labels as unsigned left-justified octet strings. The
absence of a octet sorts before a zero value octet, and uppercase
US-ASCII letters are treated as if they were lowercase US-ASCII
letters.
To compute the canonical ordering of a set of DNS names, start by
sorting the names according to their most significant (rightmost)
labels. For names in which the most significant label is identical,
continue sorting according to their next most significant label, and
so forth.
For example, the following names are sorted in canonical DNS name
order. The most significant label is "example". At this level,
"example" sorts first, followed by names ending in "a.example", then
by names ending "z.example". The names within each level are sorted
in the same way.
example
a.example
yljkjljk.a.example
Z.a.example
zABC.a.EXAMPLE
z.example
\001.z.example
*.z.example
\200.z.example
```
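Since this `Ord` implementation follows the canonical ordering quoted above, a plain sort over `Name`s should reproduce the RFC's example order; a small sketch, not taken from the original docs:
```
use std::str::FromStr;
use trust_dns_proto::rr::Name;

// Sorting uses cmp, i.e. RFC 4034 canonical order, case-insensitively.
let mut names = vec![
    Name::from_str("z.example.").unwrap(),
    Name::from_str("Z.a.example.").unwrap(),
    Name::from_str("a.example.").unwrap(),
    Name::from_str("example.").unwrap(),
];
names.sort();
assert_eq!(names[0], Name::from_str("example.").unwrap());
assert_eq!(names[1], Name::from_str("a.example.").unwrap());
assert_eq!(names[2], Name::from_str("Z.a.example.").unwrap());
assert_eq!(names[3], Name::from_str("z.example.").unwrap());
```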
#### fn max(self, other: Self) -> Self where Self: Sized
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self where Self: Sized
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>
Restrict a value to a certain interval.
### impl PartialEq<Name> for Name
#### fn eq(&self, other: &Name) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<Name> for Name
#### fn partial_cmp(&self, other: &Name) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl Serialize for Name
#### fn serialize<S>(&self, serializer: S) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error> where S: Serializer
Serialize this value into the given Serde serializer.
### impl TryParseIp for Name
#### fn try_parse_ip(&self) -> Option<RData>
Always returns `None` for Name; it assumes that something which is already a name wants to remain a name.
### impl Eq for Name
### impl StructuralEq for Name
Auto Trait Implementations
---
### impl RefUnwindSafe for Name
### impl Send for Name
### impl Sync for Name
### impl Unpin for Name
### impl UnwindSafe for Name
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<Q, K> Equivalent<K> for Q where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> IntoName for T where T: Into<Name>
#### fn into_name(self) -> Result<Name, ProtoError>
Convert this into Name.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T> ToString for T where T: Display + ?Sized
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where V: MultiLane<T>
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Trait trust_dns_resolver::IntoName
===
```
pub trait IntoName: Sized {
// Required method
fn into_name(self) -> Result<Name, ProtoError>;
}
```
Conversion into a Name
Required Methods
---
#### fn into_name(self) -> Result<Name, ProtoError>
Convert this into Name.
Implementations on Foreign Types
---
### impl<'a> IntoName for &'a str
#### fn into_name(self) -> Result<Name, ProtoError>
Performs a utf8, IDNA or punycode, translation of the `str` into `Name`.
### impl IntoName for &String
#### fn into_name(self) -> Result<Name, ProtoError>
Performs a utf8, IDNA or punycode, translation of the `&String` into `Name`.
### impl IntoName for String
#### fn into_name(self) -> Result<Name, ProtoError>
Performs a utf8, IDNA or punycode, translation of the `String` into `Name`.
Implementors
---
### impl<T> IntoName for T where T: Into<Name>
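A brief usage sketch (assumed, not from the crate docs) showing why this trait exists: it is what lets the resolver's lookup methods accept `&str`, `String`, or an existing `Name` interchangeably:
```
use trust_dns_proto::rr::Name;
use trust_dns_resolver::IntoName;

// &str and String go through the same utf8/IDNA translation as Name::from_utf8.
let from_str: Name = "www.example.com.".into_name().unwrap();
let from_string: Name = String::from("www.example.com.").into_name().unwrap();
assert_eq!(from_str, from_string);
```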
Function trust_dns_resolver::version
===
```
pub fn version() -> &'static str
```
returns a version as specified in Cargo.toml
Type Alias trust_dns_resolver::ResolverFuture
===
```
pub type ResolverFuture = TokioAsyncResolver;
```
👎Deprecated: use [`trust_dns_resolver::AsyncResolver`] instead.
Available on **crate feature `tokio-runtime`** only.
This is an alias for `AsyncResolver`, which replaced the type previously called `ResolverFuture`.
Note
---
For users of `ResolverFuture`, the return type for `ResolverFuture::new`
has changed since version 0.9 of `trust-dns-resolver`. It now returns a tuple of an `AsyncResolver` *and* a background future, which must be spawned on a reactor before any lookup futures will run.
See the `AsyncResolver` documentation for more information on how to use the background future.
Aliased Type
---
```
struct ResolverFuture { /* private fields */ }
```
Implementations
---
### impl AsyncResolver<GenericConnector<TokioRuntimeProvider>>
#### pub fn tokio(config: ResolverConfig, options: ResolverOpts) -> Self
Construct a new Tokio based `AsyncResolver` with the provided configuration.
##### Arguments
* `config` - configuration, name_servers, etc. for the Resolver
* `options` - basic lookup options for the resolver
##### Returns
A tuple containing the new `AsyncResolver` and a future that drives the background task that runs resolutions for the `AsyncResolver`. See the documentation for `AsyncResolver` for more information on how to use the background future.
#### pub fn tokio_from_system_conf() -> Result<Self, ResolveError>
Available on **crate feature `system-config` and (Unix or Windows)** only.
Constructs a new Tokio based Resolver with the system configuration.
This will use `/etc/resolv.conf` on Unix OSes and the registry on Windows.
### impl<R: ConnectionProvider> AsyncResolver<R>
#### pub fn new(config: ResolverConfig, options: ResolverOpts, provider: R) -> Self
Construct a new generic `AsyncResolver` with the provided configuration.
see [TokioAsyncResolver::tokio(..)] instead.
##### Arguments
* `config` - configuration, name_servers, etc. for the Resolver
* `options` - basic lookup options for the resolver
##### Returns
A tuple containing the new `AsyncResolver` and a future that drives the background task that runs resolutions for the `AsyncResolver`. See the documentation for `AsyncResolver` for more information on how to use the background future.
#### pub fn from_system_conf(runtime: R) -> Result<Self, ResolveError>
Available on **crate feature `system-config` and (Unix or Windows)** only.
Constructs a new Resolver with the system configuration.
see [TokioAsyncResolver::tokio_from_system_conf(..)] instead.
This will use `/etc/resolv.conf` on Unix OSes and the registry on Windows.
#### pub fn clear_cache(&self)
Flushes/Removes all entries from the cache
### impl<P: ConnectionProvider> AsyncResolver<P>
#### pub fn new_with_conn(
config: ResolverConfig,
options: ResolverOpts,
conn_provider: P
) -> Self
Construct a new `AsyncResolver` with the provided configuration.
##### Arguments
* `config` - configuration, name_servers, etc. for the Resolver
* `options` - basic lookup options for the resolver
##### Returns
A tuple containing the new `AsyncResolver` and a future that drives the background task that runs resolutions for the `AsyncResolver`. See the documentation for `AsyncResolver` for more information on how to use the background future.
#### pub fn from_system_conf_with_provider(
conn_provider: P
) -> Result<Self, ResolveError>
Available on **crate feature `system-config` and (Unix or Windows)** only.
Constructs a new Resolver with the system configuration.
This will use `/etc/resolv.conf` on Unix OSes and the registry on Windows.
#### pub async fn lookup<N: IntoName>(
&self,
name: N,
record_type: RecordType
) -> Result<Lookup, ResolveError>
Generic lookup for any RecordType.
*WARNING* this interface may change in the future, see if one of the specializations would be better.
##### Arguments
* `name` - name of the record to lookup, if name is not a valid domain name, an error will be returned
* `record_type` - type of record to lookup, all RecordData responses will be filtered to this type
##### Returns
#### pub async fn lookup_ip<N: IntoName + TryParseIp>(
&self,
host: N
) -> Result<LookupIp, ResolveError>
Performs a dual-stack DNS lookup for the IP for the given hostname.
See the configuration and options parameters for controlling the way in which A(Ipv4) and AAAA(Ipv6) lookups will be performed. For the least expensive query a fully-qualified-domain-name, FQDN, which ends in a final `.`, e.g. `www.example.com.`, will only issue one query. Anything else will always incur the cost of querying the `ResolverConfig::domain` and `ResolverConfig::search`.
##### Arguments
* `host` - string hostname, if this is an invalid hostname, an error will be returned.
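A minimal end-to-end sketch of this method, assuming the `tokio-runtime` feature and the `ResolverConfig::google()` / `ResolverOpts::default()` constructors; it illustrates the FQDN note above rather than being an official example:
```
use trust_dns_resolver::TokioAsyncResolver;
use trust_dns_resolver::config::{ResolverConfig, ResolverOpts};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed defaults: Google public servers, default lookup options.
    let resolver = TokioAsyncResolver::tokio(ResolverConfig::google(), ResolverOpts::default());

    // The trailing '.' makes this an FQDN, so only a single query is issued.
    let response = resolver.lookup_ip("www.example.com.").await?;
    for ip in response.iter() {
        println!("resolved: {ip}");
    }
    Ok(())
}
```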
#### pub fn set_hosts(&mut self, hosts: Option<Hosts>)
Customizes the static hosts used in this resolver.
#### pub async fn reverse_lookup(
&self,
query: IpAddr
) -> Result<ReverseLookup, ResolveError>
Performs a lookup for the associated type.
##### Arguments
* `query` - a type which can be converted to `Name` via `From`.
#### pub async fn ipv4_lookup<N: IntoName>(
&self,
query: N
) -> Result<Ipv4Lookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn ipv6_lookup<N: IntoName>(
&self,
query: N
) -> Result<Ipv6Lookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn mx_lookup<N: IntoName>(
&self,
query: N
) -> Result<MxLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn ns_lookup<N: IntoName>(
&self,
query: N
) -> Result<NsLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn soa_lookup<N: IntoName>(
&self,
query: N
) -> Result<SoaLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn srv_lookup<N: IntoName>(
&self,
query: N
) -> Result<SrvLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn tlsa_lookup<N: IntoName>(
&self,
query: N
) -> Result<TlsaLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn txt_lookup<N: IntoName>(
&self,
query: N
) -> Result<TxtLookup, ResolveError>
Performs a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
Trait Implementations
---
### impl<P: Clone + ConnectionProvider> Clone for AsyncResolver<P>
#### fn clone(&self) -> AsyncResolver<P>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Formats the value using the given formatter.
Type Alias trust_dns_resolver::TokioAsyncResolver
===
```
pub type TokioAsyncResolver = AsyncResolver<TokioConnectionProvider>;
```
Available on **crate feature `tokio-runtime`** only.
An AsyncResolver used with Tokio.
Aliased Type
---
```
struct TokioAsyncResolver { /* private fields */ }
```
Implementations
---
### impl TokioAsyncResolver
#### pub fn tokio(config: ResolverConfig, options: ResolverOpts) -> Self
Construct a new Tokio based `AsyncResolver` with the provided configuration.
##### Arguments
* `config` - configuration, name_servers, etc. for the Resolver
* `options` - basic lookup options for the resolver
##### Returns
A tuple containing the new `AsyncResolver` and a future that drives the background task that runs resolutions for the `AsyncResolver`. See the documentation for `AsyncResolver` for more information on how to use the background future.
#### pub fn tokio_from_system_conf() -> Result<Self, ResolveErrorAvailable on **crate feature `system-config` and (Unix or Windows)** only.Constructs a new Tokio based Resolver with the system configuration.
This will use `/etc/resolv.conf` on Unix OSes and the registry on Windows.
### impl<R: ConnectionProvider> AsyncResolver<R#### pub fn new(config: ResolverConfig, options: ResolverOpts, provider: R) -> Self
Construct a new generic `AsyncResolver` with the provided configuration.
see [TokioAsyncResolver::tokio(..)] instead.
##### Arguments
* `config` - configuration, name_servers, etc. for the Resolver
* `options` - basic lookup options for the resolver
##### Returns
A tuple containing the new `AsyncResolver` and a future that drives the background task that runs resolutions for the `AsyncResolver`. See the documentation for `AsyncResolver` for more information on how to use the background future.
#### pub fn from_system_conf(runtime: R) -> Result<Self, ResolveErrorAvailable on **crate feature `system-config` and (Unix or Windows)** only.Constructs a new Resolver with the system configuration.
see [TokioAsyncResolver::tokio_from_system_conf(..)] instead.
This will use `/etc/resolv.conf` on Unix OSes and the registry on Windows.
#### pub fn clear_cache(&self)
Flushes/Removes all entries from the cache
### impl<P: ConnectionProvider> AsyncResolver<P#### pub fn new_with_conn(
config: ResolverConfig,
options: ResolverOpts,
conn_provider: P
) -> Self
Construct a new `AsyncResolver` with the provided configuration.
##### Arguments
* `config` - configuration, name_servers, etc. for the Resolver
* `options` - basic lookup options for the resolver
##### Returns
A tuple containing the new `AsyncResolver` and a future that drives the background task that runs resolutions for the `AsyncResolver`. See the documentation for `AsyncResolver` for more information on how to use the background future.
#### pub fn from_system_conf_with_provider(
conn_provider: P
) -> Result<Self, ResolveErrorAvailable on **crate feature `system-config` and (Unix or Windows)** only.Constructs a new Resolver with the system configuration.
This will use `/etc/resolv.conf` on Unix OSes and the registry on Windows.
#### pub async fn lookup<N: IntoName>(
&self,
name: N,
record_type: RecordType
) -> Result<Lookup, ResolveErrorGeneric lookup for any RecordType
*WARNING* this interface may change in the future, see if one of the specializations would be better.
##### Arguments
* `name` - name of the record to lookup, if name is not a valid domain name, an error will be returned
* `record_type` - type of record to lookup, all RecordData responses will be filtered to this type
##### Returns
#### pub async fn lookup_ip<N: IntoName + TryParseIp>(
&self,
host: N
) -> Result<LookupIp, ResolveErrorPerforms a dual-stack DNS lookup for the IP for the given hostname.
See the configuration and options parameters for controlling the way in which A(Ipv4) and AAAA(Ipv6) lookups will be performed. For the least expensive query a fully-qualified-domain-name, FQDN, which ends in a final `.`, e.g. `www.example.com.`, will only issue one query. Anything else will always incur the cost of querying the `ResolverConfig::domain` and `ResolverConfig::search`.
##### Arguments
* `host` - string hostname, if this is an invalid hostname, an error will be returned.
#### pub fn set_hosts(&mut self, hosts: Option<Hosts>)
Customizes the static hosts used in this resolver.
#### pub async fn reverse_lookup(
&self,
query: IpAddr
) -> Result<ReverseLookup, ResolveErrorPerforms a lookup for the associated type.
##### Arguments
* `query` - a type which can be converted to `Name` via `From`.
#### pub async fn ipv4_lookup<N: IntoName>(
&self,
query: N
) -> Result<Ipv4Lookup, ResolveErrorPerforms a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn ipv6_lookup<N: IntoName>(
&self,
query: N
) -> Result<Ipv6Lookup, ResolveErrorPerforms a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn mx_lookup<N: IntoName>(
&self,
query: N
) -> Result<MxLookup, ResolveErrorPerforms a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn ns_lookup<N: IntoName>(
&self,
query: N
) -> Result<NsLookup, ResolveErrorPerforms a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn soa_lookup<N: IntoName>(
&self,
query: N
) -> Result<SoaLookup, ResolveErrorPerforms a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn srv_lookup<N: IntoName>(
&self,
query: N
) -> Result<SrvLookup, ResolveErrorPerforms a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn tlsa_lookup<N: IntoName>(
&self,
query: N
) -> Result<TlsaLookup, ResolveErrorPerforms a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
#### pub async fn txt_lookup<N: IntoName>(
&self,
query: N
) -> Result<TxtLookup, ResolveErrorPerforms a lookup for the associated type.
*hint* queries that end with a ‘.’ are fully qualified names and are cheaper lookups
##### Arguments
* `query` - a string which parses to a domain name, failure to parse will return an error
Trait Implementations
---
### impl<P: Clone + ConnectionProvider> Clone for AsyncResolver<P#### fn clone(&self) -> AsyncResolver<PReturns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Formats the value using the given formatter. Read more |
oaColors | cran | R | Package ‘oaColors’
October 14, 2022
Type Package
Title OpenAnalytics Colors Package
Version 0.0.4
Date 2015-11-28
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Provides carefully chosen color palettes as used a.o. at OpenAnalytics <http://www.openanalytics.eu>.
URL http://www.openanalytics.eu
Imports MASS, grid, RColorBrewer
License GPL-3 + file LICENSE
Collate 'oaColorFunctions.R'
NeedsCompilation no
Repository CRAN
Date/Publication 2015-11-30 14:51:32
R topics documented:
colorWheel
oaColors
oaPalette
colorWheel Generate a Color Wheel
Description
Generate a Color Wheel
Usage
colorWheel(colVec = NULL)
Arguments
colVec vector of colors
Author(s)
<NAME>
oaColors Pick One or More OA Colors
Description
Pick One or More OA Colors
Usage
oaColors(color = NULL, alpha = 1)
Arguments
color a character vector of color names; possible values are "red", "orange", "yellow",
"green", "cyan", "blue", "pink", "limegreen", "purple", "black", "white", "grey"
or "gray"
alpha transparency level for the color(s)
Value
character vector of colors
Author(s)
<NAME>
oaPalette Generate a Palette of OA Colors
Description
Generate a Palette of OA Colors
Usage
oaPalette(numColors = NULL, alpha = 1)
Arguments
numColors number of colors to be contained in the palette
alpha transparency level of the colors
Value
vector of colors
Author(s)
<NAME> |
kraftkit.sh/unikraft/target | go | Go | None
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
SPDX-License-Identifier: BSD-3-Clause Copyright (c) 2022, Unikraft GmbH and The KraftKit Authors.
Licensed under the BSD-3-Clause License (the "License").
You may not use this file except in compliance with the License.
SPDX-License-Identifier: BSD-3-Clause Copyright (c) 2022, Unikraft GmbH and The KraftKit Authors.
Licensed under the BSD-3-Clause License (the "License").
You may not use this file except in compliance with the License.
SPDX-License-Identifier: BSD-3-Clause Copyright (c) 2022, Unikraft GmbH and The KraftKit Authors.
Licensed under the BSD-3-Clause License (the "License").
You may not use this file except in compliance with the License.
SPDX-License-Identifier: BSD-3-Clause Copyright (c) 2022, Unikraft GmbH and The KraftKit Authors.
Licensed under the BSD-3-Clause License (the "License").
You may not use this file except in compliance with the License.
### Index [¶](#pkg-index)
* [func KernelDbgName(target TargetConfig) (string, error)](#KernelDbgName)
* [func KernelName(target TargetConfig) (string, error)](#KernelName)
* [func TargetPlatArchName(target Target) string](#TargetPlatArchName)
* [func TransformFromSchema(ctx context.Context, data interface{}) (interface{}, error)](#TransformFromSchema)
* [type Command](#Command)
* [type Target](#Target)
* + [func Filter(targets []Target, arch, plat, targ string) []Target](#Filter)
+ [func NewTargetFromOptions(opts ...TargetOption) (Target, error)](#NewTargetFromOptions)
+ [func Select(targets []Target) (Target, error)](#Select)
* [type TargetConfig](#TargetConfig)
* + [func (tc *TargetConfig) Architecture() arch.Architecture](#TargetConfig.Architecture)
+ [func (tc *TargetConfig) Command() []string](#TargetConfig.Command)
+ [func (tc *TargetConfig) ConfigFilename() string](#TargetConfig.ConfigFilename)
+ [func (tc *TargetConfig) Format() pack.PackageFormat](#TargetConfig.Format)
+ [func (tc *TargetConfig) Initrd() *initrd.InitrdConfig](#TargetConfig.Initrd)
+ [func (tc *TargetConfig) IsUnpacked() bool](#TargetConfig.IsUnpacked)
+ [func (tc *TargetConfig) KConfig() kconfig.KeyValueMap](#TargetConfig.KConfig)
+ [func (tc *TargetConfig) KConfigTree(_ context.Context, env ...*kconfig.KeyValue) (*kconfig.KConfigFile, error)](#TargetConfig.KConfigTree)
+ [func (tc *TargetConfig) Kernel() string](#TargetConfig.Kernel)
+ [func (tc *TargetConfig) KernelDbg() string](#TargetConfig.KernelDbg)
+ [func (tc TargetConfig) MarshalYAML() (interface{}, error)](#TargetConfig.MarshalYAML)
+ [func (tc *TargetConfig) Name() string](#TargetConfig.Name)
+ [func (tc *TargetConfig) Path() string](#TargetConfig.Path)
+ [func (tc *TargetConfig) Platform() plat.Platform](#TargetConfig.Platform)
+ [func (tc *TargetConfig) PrintInfo(ctx context.Context) string](#TargetConfig.PrintInfo)
+ [func (tc *TargetConfig) Source() string](#TargetConfig.Source)
+ [func (tc *TargetConfig) Type() unikraft.ComponentType](#TargetConfig.Type)
+ [func (tc *TargetConfig) Version() string](#TargetConfig.Version)
* [type TargetOption](#TargetOption)
* + [func WithArchitecture(arch arch.ArchitectureConfig) TargetOption](#WithArchitecture)
+ [func WithCommand(command []string) TargetOption](#WithCommand)
+ [func WithFormat(format pack.PackageFormat) TargetOption](#WithFormat)
+ [func WithInitrd(initrd *initrd.InitrdConfig) TargetOption](#WithInitrd)
+ [func WithKConfig(kconfig kconfig.KeyValueMap) TargetOption](#WithKConfig)
+ [func WithKernel(kernel string) TargetOption](#WithKernel)
+ [func WithKernelDbg(kernelDbg string) TargetOption](#WithKernelDbg)
+ [func WithName(name string) TargetOption](#WithName)
+ [func WithPlatform(platform plat.PlatformConfig) TargetOption](#WithPlatform)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [KernelDbgName](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L185) [¶](#KernelDbgName)
added in v0.3.0
```
func KernelDbgName(target [TargetConfig](#TargetConfig)) ([string](/builtin#string), [error](/builtin#error))
```
KernelDbgName is identical to KernelName but is used to access the symbolic kernel image which has not been stripped.
####
func [KernelName](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L170) [¶](#KernelName)
added in v0.3.0
```
func KernelName(target [TargetConfig](#TargetConfig)) ([string](/builtin#string), [error](/builtin#error))
```
KernelName returns the name of the kernel image based on the standard pattern baked into Unikraft's build system, see for example `KVM_IMAGE`.
If we do not have a target name, an error is returned.
####
func [TargetPlatArchName](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L196) [¶](#TargetPlatArchName)
added in v0.5.0
```
func TargetPlatArchName(target [Target](#Target)) [string](/builtin#string)
```
TargetPlatArchName returns the canonical name for the platform and architecture combination.
####
func [TransformFromSchema](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/transform.go#L18) [¶](#TransformFromSchema)
added in v0.4.0
```
func TransformFromSchema(ctx [context](/context).[Context](/context#Context), data interface{}) (interface{}, [error](/builtin#error))
```
### Types [¶](#pkg-types)
####
type [Command](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/command.go#L34) [¶](#Command)
```
type Command [][string](/builtin#string)
```
####
type [Target](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L21) [¶](#Target)
added in v0.4.0
```
type Target interface {
[component](/[email protected]/unikraft/component).[Component](/[email protected]/unikraft/component#Component)
// Architecture is the component architecture for this target.
Architecture() [arch](/[email protected]/unikraft/arch).[Architecture](/[email protected]/unikraft/arch#Architecture)
// Platform is the component platform for this target.
Platform() [plat](/[email protected]/unikraft/plat).[Platform](/[email protected]/unikraft/plat#Platform)
// Format is the desired package implementation for this target.
Format() [pack](/[email protected]/pack).[PackageFormat](/[email protected]/pack#PackageFormat)
// Kernel is the path to the kernel for this target.
Kernel() [string](/builtin#string)
// KernelDbg is the path to the symbolic (unstripped) kernel for this target.
KernelDbg() [string](/builtin#string)
// Initrd contains the initramfs configuration for this target.
Initrd() *[initrd](/[email protected]/initrd).[InitrdConfig](/[email protected]/initrd#InitrdConfig)
// Command is the command-line arguments set for this target.
Command() [][string](/builtin#string)
// ConfigFilename returns the target-specific `.config` file which contains
// all the proclaimed KConfig key values which is formatted
// `.config.<TARGET-NAME>`
ConfigFilename() [string](/builtin#string)
}
```
####
func [Filter](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/select.go#L57) [¶](#Filter)
added in v0.6.5
```
func Filter(targets [][Target](#Target), arch, plat, targ [string](/builtin#string)) [][Target](#Target)
```
Filter returns a subset of `targets` based in input strings `arch`,
`plat` and/or `targ`
####
func [NewTargetFromOptions](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L81) [¶](#NewTargetFromOptions)
added in v0.5.1
```
func NewTargetFromOptions(opts ...[TargetOption](#TargetOption)) ([Target](#Target), [error](/builtin#error))
```
NewTargetFromOptions is a constructor for TargetConfig.
####
func [Select](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/select.go#L17) [¶](#Select)
added in v0.6.5
```
func Select(targets [][Target](#Target)) ([Target](#Target), [error](/builtin#error))
```
Select is a utility method used in a CLI context to prompt the user for a specific application's target.
####
type [TargetConfig](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L51) [¶](#TargetConfig)
```
type TargetConfig struct {
// contains filtered or unexported fields
}
```
####
func (*TargetConfig) [Architecture](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L105) [¶](#TargetConfig.Architecture)
```
func (tc *[TargetConfig](#TargetConfig)) Architecture() [arch](/[email protected]/unikraft/arch).[Architecture](/[email protected]/unikraft/arch#Architecture)
```
####
func (*TargetConfig) [Command](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L137) [¶](#TargetConfig.Command)
```
func (tc *[TargetConfig](#TargetConfig)) Command() [][string](/builtin#string)
```
####
func (*TargetConfig) [ConfigFilename](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L155) [¶](#TargetConfig.ConfigFilename)
added in v0.5.0
```
func (tc *[TargetConfig](#TargetConfig)) ConfigFilename() [string](/builtin#string)
```
####
func (*TargetConfig) [Format](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L125) [¶](#TargetConfig.Format)
```
func (tc *[TargetConfig](#TargetConfig)) Format() [pack](/[email protected]/pack).[PackageFormat](/[email protected]/pack#PackageFormat)
```
####
func (*TargetConfig) [Initrd](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L121) [¶](#TargetConfig.Initrd)
```
func (tc *[TargetConfig](#TargetConfig)) Initrd() *[initrd](/[email protected]/initrd).[InitrdConfig](/[email protected]/initrd#InitrdConfig)
```
####
func (*TargetConfig) [IsUnpacked](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L141) [¶](#TargetConfig.IsUnpacked)
added in v0.4.0
```
func (tc *[TargetConfig](#TargetConfig)) IsUnpacked() [bool](/builtin#bool)
```
####
func (*TargetConfig) [KConfig](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L145) [¶](#TargetConfig.KConfig)
added in v0.4.0
```
func (tc *[TargetConfig](#TargetConfig)) KConfig() [kconfig](/[email protected]/kconfig).[KeyValueMap](/[email protected]/kconfig#KeyValueMap)
```
####
func (*TargetConfig) [KConfigTree](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L159) [¶](#TargetConfig.KConfigTree)
added in v0.4.0
```
func (tc *[TargetConfig](#TargetConfig)) KConfigTree(_ [context](/context).[Context](/context#Context), env ...*[kconfig](/[email protected]/kconfig).[KeyValue](/[email protected]/kconfig#KeyValue)) (*[kconfig](/[email protected]/kconfig).[KConfigFile](/[email protected]/kconfig#KConfigFile), [error](/builtin#error))
```
####
func (*TargetConfig) [Kernel](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L113) [¶](#TargetConfig.Kernel)
```
func (tc *[TargetConfig](#TargetConfig)) Kernel() [string](/builtin#string)
```
####
func (*TargetConfig) [KernelDbg](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L117) [¶](#TargetConfig.KernelDbg)
```
func (tc *[TargetConfig](#TargetConfig)) KernelDbg() [string](/builtin#string)
```
####
func (TargetConfig) [MarshalYAML](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L205) [¶](#TargetConfig.MarshalYAML)
added in v0.6.4
```
func (tc [TargetConfig](#TargetConfig)) MarshalYAML() (interface{}, [error](/builtin#error))
```
MarshalYAML makes TargetConfig implement yaml.Marshaller
####
func (*TargetConfig) [Name](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L93) [¶](#TargetConfig.Name)
```
func (tc *[TargetConfig](#TargetConfig)) Name() [string](/builtin#string)
```
####
func (*TargetConfig) [Path](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L133) [¶](#TargetConfig.Path)
added in v0.4.0
```
func (tc *[TargetConfig](#TargetConfig)) Path() [string](/builtin#string)
```
####
func (*TargetConfig) [Platform](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L109) [¶](#TargetConfig.Platform)
```
func (tc *[TargetConfig](#TargetConfig)) Platform() [plat](/[email protected]/unikraft/plat).[Platform](/[email protected]/unikraft/plat#Platform)
```
####
func (*TargetConfig) [PrintInfo](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L163) [¶](#TargetConfig.PrintInfo)
```
func (tc *[TargetConfig](#TargetConfig)) PrintInfo(ctx [context](/context).[Context](/context#Context)) [string](/builtin#string)
```
####
func (*TargetConfig) [Source](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L97) [¶](#TargetConfig.Source)
```
func (tc *[TargetConfig](#TargetConfig)) Source() [string](/builtin#string)
```
####
func (*TargetConfig) [Type](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L129) [¶](#TargetConfig.Type)
```
func (tc *[TargetConfig](#TargetConfig)) Type() [unikraft](/[email protected]/unikraft).[ComponentType](/[email protected]/unikraft#ComponentType)
```
####
func (*TargetConfig) [Version](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/target.go#L101) [¶](#TargetConfig.Version)
```
func (tc *[TargetConfig](#TargetConfig)) Version() [string](/builtin#string)
```
####
type [TargetOption](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/options.go#L16) [¶](#TargetOption)
added in v0.5.1
```
type TargetOption func(*[TargetConfig](#TargetConfig)) [error](/builtin#error)
```
TargetOption is a function that modifies a TargetConfig.
####
func [WithArchitecture](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/options.go#L27) [¶](#WithArchitecture)
added in v0.5.1
```
func WithArchitecture(arch [arch](/[email protected]/unikraft/arch).[ArchitectureConfig](/[email protected]/unikraft/arch#ArchitectureConfig)) [TargetOption](#TargetOption)
```
WithArchitecture sets the architecture of the target.
####
func [WithCommand](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/options.go#L83) [¶](#WithCommand)
added in v0.5.1
```
func WithCommand(command [][string](/builtin#string)) [TargetOption](#TargetOption)
```
WithCommand sets the command of the target.
####
func [WithFormat](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/options.go#L51) [¶](#WithFormat)
added in v0.5.1
```
func WithFormat(format [pack](/[email protected]/pack).[PackageFormat](/[email protected]/pack#PackageFormat)) [TargetOption](#TargetOption)
```
WithFormat sets the format of the target.
####
func [WithInitrd](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/options.go#L75) [¶](#WithInitrd)
added in v0.5.1
```
func WithInitrd(initrd *[initrd](/[email protected]/initrd).[InitrdConfig](/[email protected]/initrd#InitrdConfig)) [TargetOption](#TargetOption)
```
WithInitrd sets the initrd of the target.
####
func [WithKConfig](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/options.go#L43) [¶](#WithKConfig)
added in v0.5.1
```
func WithKConfig(kconfig [kconfig](/[email protected]/kconfig).[KeyValueMap](/[email protected]/kconfig#KeyValueMap)) [TargetOption](#TargetOption)
```
WithKConfig sets the kconfig of the target.
####
func [WithKernel](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/options.go#L59) [¶](#WithKernel)
added in v0.5.1
```
func WithKernel(kernel [string](/builtin#string)) [TargetOption](#TargetOption)
```
WithKernel sets the kernel of the target.
####
func [WithKernelDbg](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/options.go#L67) [¶](#WithKernelDbg)
added in v0.5.1
```
func WithKernelDbg(kernelDbg [string](/builtin#string)) [TargetOption](#TargetOption)
```
WithKernelDbg sets the kernel debug of the target.
####
func [WithName](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/options.go#L19) [¶](#WithName)
added in v0.5.1
```
func WithName(name [string](/builtin#string)) [TargetOption](#TargetOption)
```
WithName sets the name of the target.
####
func [WithPlatform](https://github.com/unikraft/kraftkit/blob/v0.6.7/unikraft/target/options.go#L35) [¶](#WithPlatform)
added in v0.5.1
```
func WithPlatform(platform [plat](/[email protected]/unikraft/plat).[PlatformConfig](/[email protected]/unikraft/plat#PlatformConfig)) [TargetOption](#TargetOption)
```
WithPlatform sets the platform of the target. |
RobKF | cran | R | Package ‘RobKF’
October 12, 2022
Type Package
Title Innovative and/or Additive Outlier Robust Kalman Filtering
Version 1.0.2
Date 2021-07-15
Description Implements a series of robust Kalman filtering approaches. It implements the additive outlier robust filters of Ruckdeschel et al. (2014) <arXiv:1204.3358> and Agamennoni et al. (2018) <doi:10.1109/ICRA.2011.5979605>, the innovative outlier robust filter of Ruckdeschel et al. (2014) <arXiv:1204.3358>, as well as the innovative and additive outlier robust filter of Fisch et al. (2020) <arXiv:2007.03238>.
License GPL
Imports Rcpp (>= 1.0.2), Rdpack, ggplot2, reshape2, Matrix
RdMacros Rdpack
LinkingTo Rcpp, RcppEigen
RoxygenNote 7.1.1
NeedsCompilation yes
Author <NAME> [aut],
<NAME> [aut, cre],
<NAME> [aut, ths],
<NAME> [aut, ths],
<NAME> [aut, ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-07-15 09:40:02 UTC
R topics documented:
AORKF_huber
AORKF_t
Generate_Data
IOAORKF
IORKF_huber
KF
plot
print
summary
AORKF_huber A huberisation based additive outlier robust Kalman filter
Description
An additive outlier robust Kalman filter, based on the work by Ruckdeschel et al. (2014). This
function assumes that the additions are potentially polluted by a heavy tailed process. The update
equations are made robust to these via huberisation.
Usage
AORKF_huber(
Y,
mu_0,
Sigma_0 = NULL,
A,
C,
Sigma_Add,
Sigma_Inn,
h = 2,
epsilon = 1e-06
)
Arguments
Y A list of matrices containing the observations to be filtered.
mu_0 A matrix indicating the mean of the prior for the hidden states.
Sigma_0 A matrix indicating the variance of the prior for the hidden states. It defaults to
the limit of the variance of the Kalman filter.
A A matrix giving the updates for the hidden states.
C A matrix mapping the hidden states to the observed states.
Sigma_Add A positive definite matrix giving the additive noise covariance.
Sigma_Inn A positive definite matrix giving the innovative noise covariance.
h A numeric giving the huber threshold. It defaults to 2.
epsilon A positive numeric giving the precision to which the limit of the covariance is
to be computed. It defaults to 0.000001.
Value
An rkf S3 class.
References
<NAME>, <NAME>, <NAME> (2014). “Robust Kalman tracking and smoothing with
propagating and non-propagating outliers.” Statistical Papers, 55(1), 93–123.
Examples
library(RobKF)
set.seed(2019)
A = matrix(c(1), nrow = 1, ncol = 1)
C = matrix(c(1), nrow = 1, ncol = 1)
Sigma_Inn = diag(1,1)*0.01
Sigma_Add = diag(1,1)
mu_0 = matrix(0,nrow=1,ncol=1)
Y_list = Generate_Data(1000,A,C,Sigma_Add,Sigma_Inn,mu_0,anomaly_loc = c(100,400,700),
anomaly_type = c("Add","Add","Add"),anomaly_comp = c(1,1,1),
anomaly_strength = c(10,10,10))
Output = AORKF_huber(Y_list,mu_0,Sigma_0=NULL,A,C,Sigma_Add,Sigma_Inn)
plot(Output,conf_level = 0.9999)
AORKF_t A t-distribution based additive outlier robust Kalman filter
Description
An additive outlier robust Kalman filter, based on the work by Agamennoni et al. (2018). This
function assumes that the additions are potentially polluted by a heavy tailed process, which is
approximated by a t-distribution. Variational inference is used to approximate the posterior.
Usage
AORKF_t(
Y,
mu_0,
Sigma_0 = NULL,
A,
C,
Sigma_Add,
Sigma_Inn,
s = 2,
epsilon = 1e-06
)
Arguments
Y A list of matrices containing the observations to be filtered.
mu_0 A matrix indicating the mean of the prior for the hidden states.
Sigma_0 A matrix indicating the Variance of the prior for the hidden states. It defaults to
the limit of the variance of the Kalman filter.
A A matrix giving the updates for the hidden states.
C A matrix mapping the hidden states to the observed states.
Sigma_Add A positive definite matrix giving the additive noise covariance.
Sigma_Inn A positive definite matrix giving the innovative noise covariance.
s A numeric giving the shape of the t-distribution to be considered. It defaults to
2.
epsilon A positive numeric giving the precision to which the limit of the covariance, and
the variational inferences is to be computed. It defaults to 0.000001.
Value
An rkf S3 class.
References
<NAME>, <NAME>, <NAME> (2011). “An outlier-robust Kalman filter.” In 2011 IEEE
International Conference on Robotics and Automation, 1551–1558. IEEE.
Examples
library(RobKF)
set.seed(2019)
A = matrix(c(1), nrow = 1, ncol = 1)
C = matrix(c(1), nrow = 1, ncol = 1)
Sigma_Inn = diag(1,1)*0.01
Sigma_Add = diag(1,1)
mu_0 = matrix(0,nrow=1,ncol=1)
Y_list = Generate_Data(1000,A,C,Sigma_Add,Sigma_Inn,mu_0,anomaly_loc = c(100,400,700),
anomaly_type = c("Add","Add","Add"),anomaly_comp = c(1,1,1),
anomaly_strength = c(10,10,10))
Output = AORKF_t(Y_list,mu_0,Sigma_0=NULL,A,C,Sigma_Add,Sigma_Inn)
plot(Output,conf_level = 0.9999)
Generate_Data Simulate data from a Kalman model
Description
This function simulates data obeying a Kalman model whilst allowing the user to add innovative
and additive anomalies.
Usage
Generate_Data(
n,
A,
C,
Sigma_Add,
Sigma_Inn,
mu_0 = NULL,
anomaly_loc = integer(0),
anomaly_type = character(0),
anomaly_comp = integer(0),
anomaly_strength = NULL
)
Arguments
n A positive integer giving the number of observations desired
A A matrix giving the updates for the hidden states.
C A matrix mapping the hidden states to the observed states.
Sigma_Add A positive definite diagonal matrix giving the additive noise covariance.
Sigma_Inn A positive definite diagonal matrix giving the innovative noise covariance.
mu_0 A matrix indicating the mean of the prior for the hidden states. It defaults to a
zero-vector.
anomaly_loc A vector of integers giving the locations of anomalies.
anomaly_type A vector of strings, either "Add" or "Inn" indicating whether the anomaly is
additive or innovative.
anomaly_comp A vector of integers giving the component affected by the anomalies.
anomaly_strength
A vector of numerics giving the strength of the anomalies (in sigmas).
Value
A list of matrices, each corresponding to an observation.
Examples
library(RobKF)
library(ggplot2)
set.seed(2018)
A = diag(2)*0.99
A[1,2] = -0.05
C = matrix(c(10,0.1),nrow=1)
mu = matrix(c(0,0),nrow=2)
Sigma_Inn = diag(c(1,0.01)*0.00001,nrow=2)
Sigma_Add = diag(c(1)*0.1,nrow=1)
Y_list = Generate_Data(100,A,C,Sigma_Add,Sigma_Inn, mu_0 = mu, anomaly_loc = c(10,30,50),
anomaly_type = c("Inn","Add","Inn"),
anomaly_comp = c(1,1,2), anomaly_strength = c(400,-10,3000))
qplot(1:100,unlist(Y_list),xlab="time",ylab="observation")+theme_minimal()
IOAORKF An innovative and additive outlier robust Kalman filter
Description
An implementation of Computationally Efficient Bayesian Anomaly detection by Sequential Sam-
pling (CE-BASS) by Fisch et al. (2020). This function assumes that both the innovations and addi-
tions are potentially polluted by a heavy tailed process, which is approximated by a t-distribution.
To approximate the posterior, particles for the precision (inverse variance) are sampled using a ro-
bust approximation to the posterior. Conditionally on those samples, the classical Kalman updates
are used.
Usage
IOAORKF(
Y,
mu_0,
Sigma_0 = NULL,
A,
C,
Sigma_Add,
Sigma_Inn,
Particles,
Descendants = 1,
s = 2,
anom_add_prob = NULL,
anom_inn_prob = NULL,
epsilon = 1e-06,
horizon_matrix = NULL
)
Arguments
Y A list of matrices containing the observations to be filtered.
mu_0 A matrix indicating the mean of the prior for the hidden states.
Sigma_0 A matrix indicating the variance of the prior for the hidden states. It defaults to
the limit of the variance of the Kalman filter.
A A matrix giving the updates for the hidden states.
C A matrix mapping the hidden states to the observed states.
Sigma_Add A positive definite diagonal matrix giving the additive noise covariance.
Sigma_Inn A positive definite diagonal matrix giving the innovative noise covariance.
Particles An integer giving the number of particles to be maintained at each step. More
particles lead to more accuracy, but also require more memory and CPU time.
The parameter should be at least p + q + 1, where p is the dimension of the
observations and q the dimension of the hidden states.
Descendants An integer giving the number of descendants to be sampled for each of the possi-
ble anomalies. Increasing Descendants leads to higher accuracy but also higher
memory and CPU requirements. The default value is 1.
s A numeric giving the shape of the t-distribution to be considered. It defaults to
2.
anom_add_prob A vector of probabilities with length equal to the dimension of the observations
giving the probabilities of additive outliers in each of the components. It defaults
to 1/10000.
anom_inn_prob A vector of probabilities with length equal to the dimension of the hidden state
giving the probabilities of innovative outliers in each of the components. It
defaults to 1/10000.
epsilon A positive numeric giving the precision to which the limit of the covariance is
to be computed. It defaults to 0.000001.
horizon_matrix A matrix of 0s and 1s giving the horizon’s at which innovative particles are to be
resampled. It defaults to a k by q matrix, where k is the number of observations
required for observability of the system and q is the dimension of the hidden
states.
Value
An ioaorkf S3 class.
References
<NAME>, <NAME>, <NAME> (2020). “Innovative And Additive Outlier Robust Kalman Filtering
With A Robust Particle Filter.” arXiv preprint arXiv:2007.03238.
Examples
library(RobKF)
set.seed(2018)
A = diag(2)*0.99
A[1,2] = -0.05
C = matrix(c(10,0.1),nrow=1)
mu = matrix(c(0,0),nrow=2)
Sigma_Inn = diag(c(1,0.01)*0.00001,nrow=2)
Sigma_Add = diag(c(1)*0.1,nrow=1)
Y_list = Generate_Data(100,A,C,Sigma_Add,Sigma_Inn, mu_0 = mu, anomaly_loc = c(10,30,50),
anomaly_type = c("Inn","Add","Inn"),
anomaly_comp = c(1,1,2), anomaly_strength = c(400,-10,3000))
horizon_matrix = matrix(1,nrow = 3 ,ncol = 2)
Particle_List = IOAORKF(Y_list,mu,Sigma_0=NULL,A,C,Sigma_Add,Sigma_Inn,Particles=20,
horizon_matrix=horizon_matrix)
plot(Particle_List)
summary(Particle_List)
IORKF_huber A huberisation based innovative outlier robust Kalman filter
Description
An innovative outlier robust Kalman filter, based on the work by <NAME> et al. (2014). This
function assumes that the innovations are potentially polluted by a heavy tailed process. The update
equations are made robust to these via huberisation.
Usage
IORKF_huber(
Y,
mu_0,
Sigma_0 = NULL,
A,
C,
Sigma_Add,
Sigma_Inn,
h = 2,
epsilon = 1e-06
)
Arguments
Y A list of matrices containing the observations to be filtered.
mu_0 A matrix indicating the mean of the prior for the hidden states.
Sigma_0 A matrix indicating the variance of the prior for the hidden states. It defaults to
the limit of the variance of the Kalman filter.
A A matrix giving the updates for the hidden states.
C A matrix mapping the hidden states to the observed states.
Sigma_Add A positive definite matrix giving the additive noise covariance.
Sigma_Inn A positive definite matrix giving the innovative noise covariance.
h A numeric giving the huber threshold. It defaults to 2.
epsilon A positive numeric giving the precision to which the limit of the covariance is
to be computed. It defaults to 0.000001.
Value
An rkf S3 class.
References
<NAME>, <NAME>, <NAME> (2014). “Robust Kalman tracking and smoothing with
propagating and non-propagating outliers.” Statistical Papers, 55(1), 93–123.
Examples
library(RobKF)
set.seed(2019)
A = matrix(c(1), nrow = 1, ncol = 1)
C = matrix(c(1), nrow = 1, ncol = 1)
Sigma_Inn = diag(1,1)*0.01
Sigma_Add = diag(1,1)
mu_0 = matrix(0,nrow=1,ncol=1)
Y_list = Generate_Data(1000,A,C,Sigma_Add,Sigma_Inn,mu_0,anomaly_loc = c(100,400,700),
anomaly_type = c("Inn","Inn","Inn"),anomaly_comp = c(1,1,1),
anomaly_strength = c(50,80,-100))
Output = IORKF_huber(Y_list,mu_0,Sigma_0=NULL,A,C,Sigma_Add,Sigma_Inn,h=2)
plot(Output,conf_level = 0.9999)
KF The classical Kalman filter
Description
The classical Kalman filter.
Usage
KF(Y, mu_0, Sigma_0 = NULL, A, C, Sigma_Add, Sigma_Inn, epsilon = 1e-06)
Arguments
Y A list of matrices containing the observations to be filtered.
mu_0 A matrix indicating the mean of the prior for the hidden states.
Sigma_0 A matrix indicating the variance of the prior for the hidden states. It defaults to
the limit of the variance of the Kalman filter.
A A matrix giving the updates for the hidden states.
C A matrix mapping the hidden states to the observed states.
Sigma_Add A positive definite matrix giving the additive noise covariance.
Sigma_Inn A positive definite matrix giving the innovative noise covariance.
epsilon A positive numeric giving the precision to which the limit of the covariance is
to be computed. It defaults to 0.000001.
Value
An rkf S3 class.
References
Kalman RE (1960). “A New Approach to Linear Filtering and Prediction Problems.” Transactions
of the ASME–Journal of Basic Engineering, 82(Series D), 35–45.
Examples
library(RobKF)
set.seed(2019)
A = matrix(c(1), nrow = 1, ncol = 1)
C = matrix(c(1), nrow = 1, ncol = 1)
Sigma_Inn = diag(1,1)*0.01
Sigma_Add = diag(1,1)
mu_0 = matrix(0,nrow=1,ncol=1)
Y_list = Generate_Data(1000,A,C,Sigma_Add,Sigma_Inn,mu_0)
Output = KF(Y_list,mu_0,Sigma_0=NULL,A,C,Sigma_Add,Sigma_Inn)
plot(Output)
plot plot
Description
A function to plot the output produced by AORKF_t, AORKF_huber, IORKF_huber or IOAORKF. One
can specify a time during the run for which the output should be displayed.
Usage
## S3 method for class 'ioaorkf'
plot(x, time = NULL, horizon = NULL, subset = NULL, ...)
## S3 method for class 'rkf'
plot(x, time = NULL, subset = NULL, conf_level = 0.95, ...)
Arguments
x An instance of an ioaorkf or rkf S3 class.
time A positive integer giving the time at which the output is to be displayed. It
defaults to the number of observations.
horizon A positive integer giving the smoothing horizon that is to be used. It must be at
least equal to the number of rows of the horizonmatrix used to obtain the ioaorkf
object.
subset A list of integers indicating the components of observations which are to be
plotted.
... Ignored.
conf_level A probability between 0 and 1 giving the confidence level at which the series
are to be tested against anomalies. It defaults to 0.95.
Value
A ggplot object.
print print
Description
A function to print the output produced by AORKF_t, AORKF_huber, IORKF_huber or IOAORKF. One
can specify a time during the run for which the output should be displayed.
Usage
## S3 method for class 'ioaorkf'
print(x, time = NULL, horizon = NULL, ...)
## S3 method for class 'rkf'
print(x, time = NULL, conf_level = 0.95, ...)
Arguments
x An instance of an ioaorkf or rkf S3 class.
time A positive integer giving the time at which the output is to be displayed. It
defaults to the number of observations.
horizon A positive integer giving the smoothing horizon that is to be used. It must be at
least equal to the number of rows of the horizon matrix used to obtain the ioaorkf
object.
... Ignored.
conf_level A probability between 0 and 1 giving the confidence level at which the series
are to be tested against anomalies. It defaults to 0.95.
summary Summary
Description
A function to summarise the output produced by AORKF_t, AORKF_huber, IORKF_huber, or IOAORKF.
One can specify a time during the run for which the output should be displayed.
Usage
## S3 method for class 'ioaorkf'
summary(object, time = NULL, horizon = NULL, ...)
## S3 method for class 'rkf'
summary(object, time = NULL, conf_level = 0.95, ...)
Arguments
object An instance of an ioaorkf or rkf S3 class.
time A positive integer giving the time at which the output is to be displayed. It
defaults to the number of observations.
horizon A positive integer giving the smoothing horizon that is to be used. It must be at
least equal to the number of rows of the horizon matrix used to obtain the ioaorkf
object.
... Ignored.
conf_level A probability between 0 and 1 giving the confidence level at which the series
are to be tested against anomalies. It defaults to 0.95. |
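A short usage sketch for the print and summary methods (again assuming the Output object from the KF example above; the chosen time and confidence level are illustrative):
summary(Output, time = 1000, conf_level = 0.99)
print(Output, time = 1000)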
Livro-Introdu%C3%A7%C3%A3o-a-Vis%C3%A3o-Computacional-com-Python-e-OpenCV-3.pdf | free_programming_book | Unknown | Introdução à Visão Computacional com Python e OpenCV (Introduction to Computer Vision with Python and OpenCV), version 0.8, uncorrected. <NAME> - www.antonello.com.br
About the author
<NAME> holds a master's degree in Computer Science from the Federal University of Santa Catarina (UFSC) and a bachelor's degree in Computer Science from the Centro Universitário de Brasília (UniCEUB). He holds the Sun Certified Java Programmer (SCJP) and Sun Certified Web Component Developer (SCWCD) certifications. He began his career in 2000 at large institutions in the Brazilian financial market and has been a university lecturer since 2006. At the Universidade do Oeste de Santa Catarina (Unoesc) he coordinated the Technological Innovation Center (NIT) and the business pre-incubator, and also coordinated the activities of the Vale do Rio do Peixe Innovation Hub (Inovale). He is currently a full-time professor of programming languages at the Instituto Federal Catarinense (IFC), Luzerna campus, where he works on research and outreach projects in artificial intelligence, image processing, and autonomous robots.
Contact: <EMAIL> or <EMAIL>. More information on the blog: www.antonello.com.br
Preface
The five human senses are smell, touch, hearing, taste, and vision. Of all of them, we have to agree that vision is the most important, or at least one of the most important. For this reason, giving the sense of vision to a machine produces an impressive result. Images are everywhere, and the ability to recognize objects, landscapes, faces, signs, and gestures makes machines far more useful.
That is why this book matters: it was created to be used in an elective course called "Special topics in computer vision", taught by me in the Control and Automation Engineering program at the Instituto Federal Catarinense (IFC), Luzerna campus.
There is plenty of literature on the subject in English, but in Portuguese we have little material, so my first suggestion to the reader is to LEARN English as soon as possible. However, given that many students cannot yet read English fluently, the need arose to create this work, in which I also took the opportunity to include my practical experience with algorithms and research projects on computer vision.
I would like to thank Dr. <NAME> of Unicamp, where I took my first extension course in computer vision, and <NAME>, who, besides holding a PhD in Computer Science from the University of Maryland, maintains the site www.pyimagesearch.com, which helped me a great deal with examples on the subject.
<NAME>
Contents
1 Welcome to the world of computer vision ... 6
2 Coordinate system and pixel manipulation ... 9
3 Slicing and drawing on the image ... 12
4 Transformations and masks ... 16
4.1 Cropping an image / Crop ... 16
4.2 Resizing / Resize ... 16
4.3 Mirroring an image / Flip ... 18
4.4 Rotating an image / Rotate ... 19
4.5 Masks ... 20
5 Color systems ... 23
5.1 Channels of a color image ... 24
6 Histograms and image equalization ... 26
6.1 Histogram equalization ... 29
7 Image smoothing ... 33
7.1 Smoothing by averaging ... 33
7.2 Gaussian smoothing ... 34
7.3 Median smoothing ... 35
7.4 Smoothing with the bilateral filter ... 36
8 Thresholding (binarization) ... 38
8.1 Adaptive thresholding ... 39
8.2 Thresholding with Otsu and Riddler-Calvard ... 40
9 Segmentation and edge-detection methods ... 41
9.1 Sobel ... 41
9.2 Laplacian filter ... 42
9.3 Canny edge detector ... 43
10 Identifying and counting objects ... 45
11 Face detection in images ... 49
12 Face detection in videos ... 53
13 Object tracking in videos ... 55
14 Character recognition ... 56
14.1 The PyTesseract library ... 56
14.2 License-plate standardization ... 56
14.3 OpenCV filters ... 57
14.4 Building the dictionary ... 58
14.5 jTessBoxEditor ... 58
14.6 Serak Tesseract Trainer ... 58
14.7 Results ... 60
15 Training object detection with Haar cascades ... 63
15.1 Collecting the image database ... 63
15.2 Organizing the negative images ... 63
15.3 Cropping and marking positive images ... 64
15.4 Creating a vector of positive images ... 64
15.5 Haar training ... 65
15.6 Creating the XML file ... 67
16 Building an object detector with <NAME> ... 68
1 Welcome to the world of computer vision
In the preface, which I recommend you read, we already began explaining what computer vision is. But it is worth bringing up a classic definition: "Computer vision is the science and technology of machines that see. It develops theory and technology for building artificial systems that obtain information from images or any multidimensional data."
Yes, that is the Wikipedia definition and it has probably already been changed, after all the field of computer vision is in constant evolution. At this point it is important to stress that, besides making machines see and recognize objects, landscapes, gestures, faces, and patterns, the same algorithms can be used for pattern recognition in large databases that are not necessarily made of images.
When it comes to images, however, significant advances are already built into many apps and other systems we use. Facebook automatically recognizes objects to classify your photos and also points out where the people are in the picture so you can tag them. The same goes for smartphones that trigger the shot when people are smiling, since they can recognize such expressions.
In addition, other recognition systems, such as the popularly named OCR of vehicle license plates, are already spread across Brazil: vehicle images are captured, the plate is recognized and converted into text and numbers that can be compared directly against databases of stolen vehicles or overdue fees. In short, computer vision is already here!
I will not go into the debate over what computer vision is and how it differs from another widely used term, image processing. Read about it on the web and draw your own conclusions. My opinion is that applying a filter to equalize an image's histogram is image processing, whereas recognizing an object present in a photo is computer vision.
We will use the Python language in this book, specifically version 3.4, together with the OpenCV library version 3.0, which you can download and configure through several tutorials on the internet. At the end of the book there is an appendix with detailed instructions. Some Python libraries are also required, such as NumPy (Numeric Python), SciPy (Scientific Python), and Matplotlib, a plotting library with a syntax similar to Matlab's.
For the screenshots in this book we used the Linux/Ubuntu operating system running in a virtual machine with Oracle VirtualBox, which I strongly recommend: you configure the environment once and reuse it for tests on several other machines, since setting up the environment is laborious.
With the configuration described in Appendix A done, let us run our first program, a "Hello world!" of computer vision, in which we simply open an image file from disk and display it on screen. After that the code waits for a key press to close the window and end the program.
# Import the libraries
import cv2

# Read the image with imread()
imagem = cv2.imread('entrada.jpg')
print('Largura em pixels: ', end='')
print(imagem.shape[1])  # image width
print('Altura em pixels: ', end='')
print(imagem.shape[0])  # image height
print('Qtde de canais: ', end='')
print(imagem.shape[2])

# Show the image with imshow()
cv2.imshow("Nome da janela", imagem)
cv2.waitKey(0)  # wait for any key press

# Save the image to disk with imwrite()
cv2.imwrite("saida.jpg", imagem)
This program opens an image, shows its width and height in pixels, shows the number of channels used, displays the image on screen, waits for a key press to close the window, and saves the same image to disk under the name saida.jpg. Let us explain the code in detail below:

# Import the libraries
import cv2

# Read the image with imread()
imagem = cv2.imread('entrada.jpg')

Importing the core OpenCV library is mandatory in order to use its functions. The first function used opens the image through cv2.imread(), which takes the file name on disk as its argument.
The image is read and stored in imagem, a variable that gives access to the image object, which is nothing more than a 3-dimensional matrix (3 channels), each dimension holding one of the 3 colors of the RGB standard (red, green, blue). In the case of a black-and-white image we have only one channel, that is, a single 2-dimensional matrix.
To make this easier to understand, we can think of a spreadsheet, with rows and columns, hence a 2-dimensional matrix. Each cell of this matrix is a pixel, which in black-and-white images holds a value from 0 to 255, with 0 for black and 255 for white. Each cell therefore contains an unsigned 8-bit integer, which in Python is defined as uint8.
Figure 1: A black-and-white image represented as a matrix of integers, where each cell is an unsigned 8-bit integer holding a value from 0 (black) to 255 (white). Note the various shades of gray at intermediate values such as 30 (dark gray) and 210 (light gray).
A black-and-white image is therefore composed of a single two-dimensional matrix, as in the figure above. Color images instead have three of these two-dimensional matrices, each representing one of the colors of the RGB system. Each pixel is thus formed by a tuple of three unsigned 8-bit integers in the (R, G, B) system, where (0,0,0) represents black and (255,255,255) white. The most common colors are:
White - RGB (255,255,255);
Blue - RGB (0,0,255);
Red - RGB (255,0,0);
Green - RGB (0,255,0);
Yellow - RGB (255,255,0);
Magenta - RGB (255,0,255);
Cyan - RGB (0,255,255);
Black - RGB (0,0,0).
Color images are therefore normally composed of 3 matrices of unsigned 8-bit integers; joining the 3 matrices produces a color image capable of reproducing 16.7 million colors, since 8 bits allow 256 values and 256 raised to the 3rd power is about 16.7 million.
Figure 2: An example of the 3 matrices that make up the RGB system (the full grids of sample values are omitted here). Each pixel of the image is therefore composed of 3 unsigned 8-bit components, which gives 256 combinations per color; 256 x 256 x 256, or 256 to the 3rd power, equals about 16.7 million colors.
2 Coordinate system and pixel manipulation
As we saw in the previous chapter, each pixel of a color image is represented by 3 colors in the RGB system. We can change the color of each pixel individually, that is, we can manipulate every pixel of the image one by one.
For that it is important to understand the (row, column) coordinate system, in which the top-left pixel of the image is at position (0,0): row zero, column zero. In an image 300 pixels wide (300 columns) and 200 pixels tall (200 rows), the pixel (199,299) is the bottom-right pixel of the image.
     C0  C1  C2  C3  C4  C5  C6  C7  C8  C9
L0    0   0   0   0   0   0   0   0   0   0
L1    0  50  50  50  50  50  50  50  50   0
L2    0  50 100 100 100 100 100 100  50   0
L3    0  50 100 150 150 150 150 100  50   0
L4    0  50 100 150 200 200 150 100  50   0
L5    0  50 100 150 200 200 150 100  50   0
L6    0  50 100 150 150 150 150 100  50   0
L7    0  50 100 100 100 100 100 100  50   0
L8    0  50  50  50  50  50  50  50  50   0
L9    0   0   0   0   0   0   0   0   0   0
Figure 3: The coordinate system is based on rows and columns, and indices start at zero. The pixel (2,8), that is, row index 2 (the third row) and column index 8 (the ninth column), has the value 50.
Once the coordinate system is understood, it is possible to change each pixel individually or read the information of a single pixel, as below:

import cv2
imagem = cv2.imread('ponte.jpg')
(b, g, r) = imagem[0, 0]  # note the order is BGR, not RGB

Images are NumPy arrays, in this case returned by the imread method and kept in memory through the variable imagem, as above. Remember that the top-left pixel is (0,0). The code returns in the tuple (b, g, r) the respective color values of the top-left pixel. Note that the method returns the sequence BGR, not RGB as one might expect. With the integer values of each color in hand, it is possible to display them on screen with the code below:

print('O pixel (0, 0) tem as seguintes cores:')
print('Vermelho:', r, 'Verde:', g, 'Azul:', b)
Another possibility is to use two nested loops to sweep all the pixels of the image, row by row, as in the code below. Note that this strategy may not perform well, since sweeping the whole image pixel by pixel is a slow process.

import cv2
imagem = cv2.imread('ponte.jpg')
for y in range(0, imagem.shape[0]):
    for x in range(0, imagem.shape[1]):
        imagem[y, x] = (255, 0, 0)
cv2.imshow("Imagem modificada", imagem)
cv2.waitKey(0)

The result is an image with every pixel replaced by the blue color (255,0,0):
Figure 4: A completely blue image obtained by setting every pixel to (255,0,0). Remember that in OpenCV the RGB standard is actually stored as BGR, i.e. the tuple is (B, G, R).
Another interesting piece of code follows below, where we use the row and column variables as the color components. Since each color component must take a value between 0 and 255, we use the remainder of the division by 256 to keep the result in that range.

import cv2
imagem = cv2.imread('ponte.jpg')
for y in range(0, imagem.shape[0]):      # iterate over rows
    for x in range(0, imagem.shape[1]):  # iterate over columns
        imagem[y, x] = (x % 256, y % 256, x % 256)
cv2.imshow("Imagem modificada", imagem)
cv2.waitKey(0)

Figure 5: Changing the color components of the image according to the row and column coordinates generates the image above.
With a minimal change to the code, specifically in the green component, we get the image below. Note that we use the row value multiplied by the column (x*y) in the G component of the tuple that forms each pixel's color, and leave the blue and red components at zero. The dynamics of the changing rows and columns produce this image.

import cv2
imagem = cv2.imread('ponte.jpg')
for y in range(0, imagem.shape[0], 1):      # iterate over rows
    for x in range(0, imagem.shape[1], 1):  # iterate over columns
        imagem[y, x] = (0, (x * y) % 256, 0)
cv2.imshow("Imagem modificada", imagem)
cv2.waitKey(0)

Figure 6: Dynamically changing the color of each pixel produces this nice image. The original image is lost, since every pixel was modified.
With one more small modification we get the code below. The goal now is to jump 10 pixels at a time while iterating over the rows and another 10 pixels while iterating over the columns. At each jump a 5x5-pixel yellow square is drawn. This time part of the original image is preserved and we can still see the bridge beneath the grid of yellow squares.

import cv2
imagem = cv2.imread('ponte.jpg')
for y in range(0, imagem.shape[0], 10):      # iterate over rows
    for x in range(0, imagem.shape[1], 10):  # iterate over columns
        imagem[y:y+5, x:x+5] = (0, 255, 255)
cv2.imshow("Imagem modificada", imagem)
cv2.waitKey(0)

Figure 7: The code drew 5x5-pixel yellow squares over the whole image. Go ahead and create your own formulas to generate new images.
3 Slicing and drawing on the image
In the previous chapter we saw that it is possible to change a single pixel of the image with the code below:

imagem[0, 0] = (0, 255, 0)  # sets pixel (0,0) to green

In the case above, the first pixel of the image becomes green. We can also use the slicing technique to change several pixels of the image at once, as below:

image[30:50, :] = (255, 0, 0)

This code creates a blue rectangle from row 31 to row 50 of the image, occupying the entire available width, that is, all columns.
The code below opens an image and draws several colored rectangles over it.

import cv2
image = cv2.imread('ponte.jpg')
# Blue rectangle across the full width of the image
image[30:50, :] = (255, 0, 0)
# Red square
image[100:150, 50:100] = (0, 0, 255)
# Yellow rectangle across the full height of the image
image[:, 200:220] = (0, 255, 255)
# Green rectangle from row 150 to 300, columns 250 to 350
image[150:300, 250:350] = (0, 255, 0)
# Cyan square from row 300 to 400, columns 50 to 150
image[300:400, 50:150] = (255, 255, 0)
# White square
image[250:350, 300:400] = (255, 255, 255)
# Black rectangle
image[70:100, 300:450] = (0, 0, 0)
cv2.imshow("Imagem alterada", image)
cv2.imwrite("alterada.jpg", image)
cv2.waitKey(0)

Several colored rectangles are drawn over the image with the code above. Below, the original ponte.jpg image and the modified image.
Figure 8: Original image.
Figure 9: The image after running the code above, with the colored rectangles added by the slicing technique.
With the slicing technique it is possible to create squares and rectangles; for other geometric shapes we can use OpenCV's drawing functions. This is especially useful for drawing circles and text over the image, see:

import numpy as np
import cv2
imagem = cv2.imread('ponte.jpg')
vermelho = (0, 0, 255)
verde = (0, 255, 0)
azul = (255, 0, 0)
cv2.line(imagem, (0, 0), (100, 200), verde)
cv2.line(imagem, (300, 200), (150, 150), vermelho, 5)
cv2.rectangle(imagem, (20, 20), (120, 120), azul, 10)
cv2.rectangle(imagem, (200, 50), (225, 125), verde, -1)
(X, Y) = (imagem.shape[1] // 2, imagem.shape[0] // 2)
for raio in range(0, 175, 15):
    cv2.circle(imagem, (X, Y), raio, vermelho)
cv2.imshow("Desenhando sobre a imagem", imagem)
cv2.waitKey(0)

Figure 10: Using OpenCV functions to draw on the image.
Another very useful function writes text over the image. Keep in mind that the text coordinate refers to the baseline where the characters start to be written, so for precise placement study the getTextSize function. The example below produces the result shown next:

import numpy as np
import cv2
imagem = cv2.imread('ponte.jpg')
fonte = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(imagem, 'OpenCV', (15, 65), fonte, 2, (255, 255, 255), 2, cv2.LINE_AA)
cv2.imshow("Ponte", imagem)
cv2.waitKey(0)

Figure 11: Example of writing over the image. It is possible to choose font, size, and position.
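The getTextSize function mentioned above is worth a quick illustration. The sketch below is not part of the book's original examples; the file name and font scale are simply assumptions consistent with the previous listing. It uses getTextSize to center the text in the image:

import cv2

imagem = cv2.imread('ponte.jpg')
fonte = cv2.FONT_HERSHEY_SIMPLEX
texto = 'OpenCV'
# getTextSize returns the rendered text size (width, height) and the baseline offset
(larg_txt, alt_txt), baseline = cv2.getTextSize(texto, fonte, 2, 2)
x = (imagem.shape[1] - larg_txt) // 2   # horizontal center
y = (imagem.shape[0] + alt_txt) // 2    # the y coordinate is the text baseline
cv2.putText(imagem, texto, (x, y), fonte, 2, (255, 255, 255), 2, cv2.LINE_AA)
cv2.imshow("Texto centralizado", imagem)
cv2.waitKey(0)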
4 Transformations and masks
On many occasions it is necessary to apply transformations to the image. Operations such as resizing, cropping, or rotating an image are frequently needed. This processing can be done in several ways, as we will see in the following examples.

4.1 Cropping an image / Crop
The same technique already used for slicing can be used to create a new image cropped from the original; the English term is crop. See the code below, where we create a new image from a piece of the original image and save it to disk.

import cv2
imagem = cv2.imread('ponte.jpg')
recorte = imagem[100:200, 100:200]
cv2.imshow("Recorte da imagem", recorte)
cv2.imwrite("recorte.jpg", recorte)  # save to disk

Using the same ponte.jpg image from the previous examples, we get the result below, covering row 101 to row 200 and column 101 to column 200:
Figure 12: Image cropped from the original and saved to a file on disk.
4.2 Resizing / Resize
To shrink or enlarge an image there is a ready-made OpenCV function, resize, shown below. Note that you need to compute the height-to-width ratio of the new image, otherwise it may end up distorted.

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
cv2.imshow("Original", img)
largura = img.shape[1]
altura = img.shape[0]
proporcao = float(altura / largura)
largura_nova = 320  # in pixels
altura_nova = int(largura_nova * proporcao)
tamanho_novo = (largura_nova, altura_nova)
img_redimensionada = cv2.resize(img, tamanho_novo, interpolation=cv2.INTER_AREA)
cv2.imshow('Resultado', img_redimensionada)
cv2.waitKey(0)

Figure 13: In the lower-left corner you can see the resized image.
Note that the resize function takes a parameter, here set to cv2.INTER_AREA, which specifies the interpolation method used to resize the image. Even so, if the image is enlarged, some loss of quality must be expected.
Another way to resize the image to a smaller (or larger) size is to use the slicing technique. In this case it is easy to cut the image size in half with the code below:

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
cv2.imshow("Original", img)
img_redimensionada = img[::2, ::2]
cv2.imshow("Imagem redimensionada", img_redimensionada)
cv2.waitKey(0)

The code basically rebuilds the image by skipping rows and columns: it takes the first row, ignores the second, takes the third, ignores the fourth, and so on. The same is done with the columns. We then have an image that is exactly one quarter of the original, with half the height and half the width. See the result below:
Figure 14: Image generated with the slicing technique.
4.3 Mirroring an image / Flip
To mirror an image, just invert its rows, its columns, or both. Inverting the rows gives the horizontal flip and inverting the columns gives the vertical flip.
We can do the mirroring/flip either with a function provided by OpenCV (the flip function) or by directly manipulating the matrices that make up the image. Below are the two equivalent commands for each case.

import cv2
img = cv2.imread('ponte.jpg')
cv2.imshow("Original", img)
flip_horizontal = img[::-1, :]   # equivalent command below
# flip_horizontal = cv2.flip(img, 0)
cv2.imshow("Flip Horizontal", flip_horizontal)
flip_vertical = img[:, ::-1]     # equivalent command below
# flip_vertical = cv2.flip(img, 1)
cv2.imshow("Flip Vertical", flip_vertical)
flip_hv = img[::-1, ::-1]        # equivalent command below
# flip_hv = cv2.flip(img, -1)
cv2.imshow("Flip Horizontal e Vertical", flip_hv)
cv2.waitKey(0)

Figure 15: Result of the horizontal flip, the vertical flip, and both flips applied to the same image.
4.4 Rotating an image / Rotate
In Latin, affinis means "connected with". That is why one of the most famous transformations in geometry, also used in image processing, is called the affine transformation.
An affine transformation, or affine map, is a function between affine spaces that preserves points, straight lines, and planes. Moreover, parallel lines remain parallel after an affine transformation. The transformation does not necessarily preserve distances between points, but it does preserve the ratios of distances between points lying on a straight line. A rotation is one kind of affine transformation.

import cv2
img = cv2.imread('ponte.jpg')
(alt, lar) = img.shape[:2]         # capture height and width
centro = (lar // 2, alt // 2)      # find the center
M = cv2.getRotationMatrix2D(centro, 30, 1.0)  # 30 degrees
img_rotacionada = cv2.warpAffine(img, M, (lar, alt))
cv2.imshow("Imagem rotacionada em 30 graus", img_rotacionada)
cv2.waitKey(0)

Figure 16: Image rotated by 30 degrees.
4.5 Masks
Now that we have seen a few kinds of processing, let us move on to masks. First, it is important to define that a mask is nothing more than an image in which each pixel is either on or off, that is, the mask contains only black and white pixels. Here is an example:
Figure 17: Original image on the left and, on the right, the result of applying the mask.
The required code is below:

import cv2
import numpy as np
img = cv2.imread('ponte.jpg')
cv2.imshow("Original", img)
mascara = np.zeros(img.shape[:2], dtype="uint8")
(cX, cY) = (img.shape[1] // 2, img.shape[0] // 2)
cv2.circle(mascara, (cX, cY), 100, 255, -1)
img_com_mascara = cv2.bitwise_and(img, img, mask=mascara)
cv2.imshow("Máscara aplicada à imagem", img_com_mascara)
cv2.waitKey(0)

A second example builds a more elaborate mask, a filled circle surrounded by a thick ring, and applies it in the same way:

import cv2
import numpy as np
img = cv2.imread('ponte.jpg')
cv2.imshow("Original", img)
mascara = np.zeros(img.shape[:2], dtype="uint8")
(cX, cY) = (img.shape[1] // 2, img.shape[0] // 2)
cv2.circle(mascara, (cX, cY), 180, 255, 70)
cv2.circle(mascara, (cX, cY), 70, 255, -1)
cv2.imshow("Máscara", mascara)
img_com_mascara = cv2.bitwise_and(img, img, mask=mascara)
cv2.imshow("Máscara aplicada à imagem", img_com_mascara)
cv2.waitKey(0)
5 Color systems
We already know the traditional RGB (Red, Green, Blue) color space, which in OpenCV is actually BGR, given the need to put blue as the first element and red as the third element of the tuple that makes up a pixel's color.
Figure 18: RGB color system.
However, there are other color spaces, such as plain black and white (grayscale) and other color spaces such as L*a*b* and HSV. Below is an example of how our bridge image would look in these other color spaces.
Figure 19: Other color spaces applied to the same image.
The code that generates the result seen above is the following:

import cv2
img = cv2.imread('ponte.jpg')
cv2.imshow("Original", img)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("Gray", gray)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
cv2.imshow("HSV", hsv)
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
cv2.imshow("L*a*b*", lab)
cv2.waitKey(0)
5.1 Channels of a color image
As we already know, a color image in RGB format has 3 channels, one per color. OpenCV provides functions that let us separate and view these channels individually. See:

import cv2
img = cv2.imread('ponte.jpg')
(canalAzul, canalVerde, canalVermelho) = cv2.split(img)
cv2.imshow("Vermelho", canalVermelho)
cv2.imshow("Verde", canalVerde)
cv2.imshow("Azul", canalAzul)
cv2.waitKey(0)

The split function does the hard work of separating the channels. We can then display them in grayscale, as shown in the image below:
Figure 20: Notice how the yellow line (made of green and red) is almost invisible in the blue channel.
It is also possible to modify the NumPy arrays that form each channel individually and then merge them to rebuild the image. For that, use the command:

resultado = cv2.merge([canalAzul, canalVerde, canalVermelho])

It is also possible to display the channels in their original colors, as below:

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
(canalAzul, canalVerde, canalVermelho) = cv2.split(img)
zeros = np.zeros(img.shape[:2], dtype="uint8")
cv2.imshow("Vermelho", cv2.merge([zeros, zeros, canalVermelho]))
cv2.imshow("Verde", cv2.merge([zeros, canalVerde, zeros]))
cv2.imshow("Azul", cv2.merge([canalAzul, zeros, zeros]))
cv2.imshow("Original", img)
cv2.waitKey(0)

Figure 21: Displaying the channels separately.
6 Histograms and image equalization
A histogram is a bar or line chart that represents the distribution of the pixel values of an image, that is, how many pixels are lighter (close to 255) and how many are darker (close to 0).
The X axis of the chart usually spans 0 to 255 and shows the pixel value (intensity), and the Y axis shows the number of pixels at each intensity.
Figure 22: Original image already converted to grayscale.
Figure 23: Histogram of the grayscale image.
Notice that the histogram has a peak near the middle of the chart, between 100 and 150, showing the large number of pixels in that range: the road, which occupies a large part of the image, has pixels in that range.
The code that generates the histogram follows:

from matplotlib import pyplot as plt
import cv2
img = cv2.imread('ponte.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
cv2.imshow("Imagem P&B", img)
# calcHist computes the histogram of the image
h = cv2.calcHist([img], [0], None, [256], [0, 256])
plt.figure()
plt.title("Histograma P&B")
plt.xlabel("Intensidade")
plt.ylabel("Qtde de Pixels")
plt.plot(h)
plt.xlim([0, 256])
plt.show()
cv2.waitKey(0)

It is also possible to plot the histogram another way, with the help of the ravel() function. In this case the X axis extends past 255, up to 300, a range where there are no pixels.

plt.hist(img.ravel(), 256, [0, 256])
plt.show()

Figure 24: Bar histogram.
Besides the histogram of the grayscale image, it is possible to plot a histogram of the color image. In that case we will have three lines, one per channel. The required code is below. It is worth noting that the zip function creates a list of tuples formed by pairing up the given lists; it has nothing to do with file compression, as one might expect.

from matplotlib import pyplot as plt
import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
cv2.imshow("Imagem Colorida", img)
# Split the channels
canais = cv2.split(img)
cores = ("b", "g", "r")
plt.figure()
plt.title("Histograma Colorido")
plt.xlabel("Intensidade")
plt.ylabel("Numero de Pixels")
for (canal, cor) in zip(canais, cores):
    # This loop runs 3 times, once per channel
    hist = cv2.calcHist([canal], [0], None, [256], [0, 256])
    plt.plot(hist, color=cor)
    plt.xlim([0, 256])
plt.show()

Figure 25: Color histogram of the image. The 3 RGB channels are plotted.
6.1 Histogram equalization
It is possible to perform a mathematical computation on the pixel distribution to increase the contrast of the image. The intention is to spread the pixel intensities more uniformly over the image. In the histogram it is possible to see the difference, since the accumulation of pixels around certain values is smoothed out. See the difference between the original and the equalized histogram below:
Figure 26: Histogram of the original image.
Figure 27: Histogram of the image after equalization.
The code used to generate the two histograms follows:

from matplotlib import pyplot as plt
import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h_eq = cv2.equalizeHist(img)
plt.figure()
plt.title("Histograma Equalizado")
plt.xlabel("Intensidade")
plt.ylabel("Qtde de Pixels")
plt.hist(h_eq.ravel(), 256, [0, 256])
plt.xlim([0, 256])
plt.show()
plt.figure()
plt.title("Histograma Original")
plt.xlabel("Intensidade")
plt.ylabel("Qtde de Pixels")
plt.hist(img.ravel(), 256, [0, 256])
plt.xlim([0, 256])
plt.show()
cv2.waitKey(0)

The difference is also visible in the image itself, see:
Figure 28: Original image (top) and the image whose histogram was equalized (bottom). The equalized image shows more contrast.
However, as we can see in the image, distortions and changes to the colors of the equalized image may occur, so the image does not always keep its original characteristics. Still, when there is a need to highlight details, equalization can be a great ally; this is commonly done in images used for object identification, satellite imagery of land areas, and pattern identification in medical images, for example.
The code for equalizing the image's histogram was shown above; the computation is done by the equalizeHist function provided by OpenCV.
The algorithm used by the function is explained extremely well in the OpenCV documentation itself¹, which shows the following example:
Figure 29: Example taken from the OpenCV documentation showing the histogram of a low-contrast image.
Figure 30: Example taken from the OpenCV documentation showing the histogram of the same image, this time equalized with the equalizeHist function.
Note that equalization makes the distribution of pixel intensities from 0 to 255 roughly uniform, so we end up with about the same number of pixels in the 0 to 10 range (very dark pixels) as in the 245 to 255 range (very bright pixels).
The function uses the following algorithm:
Step 1: Compute the histogram H of the image.
Step 2: Normalize the histogram so that the intensity values remain in the 0 to 255 range.
Step 3: Compute the cumulative histogram: H'(i) = sum of H(j) for 0 <= j <= i.
Step 4: Transform the image using the cumulative histogram as a lookup table: dst(x, y) = H'(src(x, y)).
This way we get a more uniform distribution of pixel intensities across the image. Remember that details can even be lost with this processing; what is guaranteed, however, is the increase in contrast.

¹ Available at: <http://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html>. Accessed: 11 Mar. 2017.
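To make the four steps concrete, the sketch below (not from the book; the file name follows the earlier examples) reproduces the algorithm directly with NumPy. It is a close approximation of what cv2.equalizeHist does internally:

import cv2
import numpy as np

img = cv2.imread('ponte.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Step 1: histogram H of the image (256 bins)
H = np.bincount(img.ravel(), minlength=256)
# Step 3: cumulative histogram H'(i) = H(0) + ... + H(i)
H_acc = H.cumsum()
# Steps 2 and 4: normalize H' to the 0..255 range and use it as a lookup table
lut = np.round(255.0 * H_acc / H_acc[-1]).astype("uint8")
equalizada = lut[img]   # dst(x, y) = H'(src(x, y))
cv2.imshow("Equalizada (manual)", np.hstack([img, equalizada]))
cv2.waitKey(0)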
7 Image smoothing
Image smoothing, also called blur or blurring, is the effect we notice in out-of-focus photographs, where everything looks blurred.
This effect can be created digitally: simply change the color of each pixel by mixing it with the colors of the surrounding pixels. The effect is very useful when we use object-identification algorithms, because edge-detection processes, for instance, work better after a smoothing pass over the image.

7.1 Smoothing by averaging
In this case a box of pixels is created around the pixel in question in order to compute its new value. The new pixel value will be the simple average of the values of the pixels inside the box, that is, of the neighborhood. Some authors call this box the calculation window or kernel.
Figure 31: A 3x3-pixel box. The number of rows and columns of the box must be odd so that there is always a central pixel, which is the target of the calculation.
The new pixel value is therefore the average of its neighborhood, which smooths the image as a whole.
In the code below, the method used for mean smoothing is OpenCV's blur method. Its parameters are the image to be smoothed and the smoothing window. We use odd numbers for the calculation boxes, so there is no doubt about which central pixel has its value updated.
Note that we use the vstack (vertical stack) and hstack (horizontal stack) functions to join the images into a single output image, showing the original image followed by versions with calculation boxes of 3x3, 5x5, 7x7, 9x9, and 11x11. Notice that the larger the box, the stronger the blur effect on the image.
import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = img[::2, ::2]  # shrink the image
suave = np.vstack([
    np.hstack([img,                   cv2.blur(img, ( 3,  3))]),
    np.hstack([cv2.blur(img, (5, 5)), cv2.blur(img, ( 7,  7))]),
    np.hstack([cv2.blur(img, (9, 9)), cv2.blur(img, (11, 11))]),
])
cv2.imshow("Imagens suavizadas (Blur)", suave)
cv2.waitKey(0)

Figure 32: The original image followed, left to right and top to bottom, by images with calculation boxes of 3x3, 5x5, 7x7, 9x9, and 11x11. Notice that the larger the box, the stronger the blur effect on the image.
7.2 Gaussian smoothing
Instead of the box filter, a Gaussian kernel is used. This is computed with the cv2.GaussianBlur() function. The function requires a width and height given as odd numbers and, optionally, the standard deviations in the X and Y directions (horizontal and vertical) can be specified.

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = img[::2, ::2]  # shrink the image
suave = np.vstack([
    np.hstack([img,                               cv2.GaussianBlur(img, ( 3,  3), 0)]),
    np.hstack([cv2.GaussianBlur(img, ( 5, 5), 0), cv2.GaussianBlur(img, ( 7,  7), 0)]),
    np.hstack([cv2.GaussianBlur(img, ( 9, 9), 0), cv2.GaussianBlur(img, (11, 11), 0)]),
])
cv2.imshow("Imagem original e suavizadas pelo filtro Gaussiano", suave)
cv2.waitKey(0)

Figure 33: The original image followed, left to right and top to bottom, by images smoothed with Gaussian kernels of 3x3, 5x5, 7x7, 9x9, and 11x11.
Notice in the images how the Gaussian kernel filter produces less blur, but also a more natural effect, and reduces noise in the image.
7.3 Median smoothing
Just as in the previous calculations, here a square box or window is computed around a central pixel, but the median is used to compute the final pixel value. The median is similar to the mean, but it disregards values that are too high or too low, which could distort the result: the median is the value exactly in the middle of the sorted interval.
The function used is cv2.medianBlur(img, 3) and its only extra argument is the size of the box or window.
It is important to note that this method does not create new colors, as can happen with the previous ones, because it always replaces the current pixel's value with one of the values from its neighborhood.
Here is the code used:

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = img[::2, ::2]  # shrink the image
suave = np.vstack([
    np.hstack([img,                     cv2.medianBlur(img,  3)]),
    np.hstack([cv2.medianBlur(img, 5),  cv2.medianBlur(img,  7)]),
    np.hstack([cv2.medianBlur(img, 9),  cv2.medianBlur(img, 11)]),
])
cv2.imshow("Imagem original e suavizadas pela mediana", suave)
cv2.waitKey(0)

Figure 34: Likewise, the original image followed by images processed with the median filter using window sizes 3, 5, 7, 9, and 11.
7.4 Smoothing with the bilateral filter
This method is slower to compute than the previous ones, but it has the advantage of preserving edges while still removing noise.
To accomplish this, besides a Gaussian filter over the space around the pixel, another Gaussian filter is used that takes into account the difference in intensity between pixels; as a result, the edges of the image are much better preserved. The function used is cv2.bilateralFilter() and the code follows:

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = img[::2, ::2]  # shrink the image
suave = np.vstack([
    np.hstack([img,                                  cv2.bilateralFilter(img,  3, 21, 21)]),
    np.hstack([cv2.bilateralFilter(img, 5, 35, 35),  cv2.bilateralFilter(img,  7, 49, 49)]),
    np.hstack([cv2.bilateralFilter(img, 9, 63, 63),  cv2.bilateralFilter(img, 11, 77, 77)])
])
cv2.imshow("Imagem original e suavizadas pelo filtro bilateral", suave)
cv2.waitKey(0)

Figure 35: The original image and images processed with the bilateral filter. Notice how, even with the strong smoothing of the bottom-right image, the edges are preserved.
8 Thresholding (binarization)
Thresholding (in Portuguese, limiarização) is, in image processing, most often used to binarize an image. Normally we convert grayscale images into black-and-white images where every pixel has either 0 or 255 as its intensity value.

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
suave = cv2.GaussianBlur(img, (7, 7), 0)  # apply blur
(T, bin) = cv2.threshold(suave, 160, 255, cv2.THRESH_BINARY)
(T, binI) = cv2.threshold(suave, 160, 255, cv2.THRESH_BINARY_INV)
resultado = np.vstack([
    np.hstack([suave, bin]),
    np.hstack([binI, cv2.bitwise_and(img, img, mask=binI)])
])
cv2.imshow("Binarizacao da imagem", resultado)
cv2.waitKey(0)

In the code we smooth the image, binarize it with a threshold of 160, and invert the binarized image.
Figure 36: From left to right and top to bottom: the image, the smoothed image, the binarized image, and the inverted binarized image.
In the case of roads, this is one of the techniques used by autonomous cars to identify the lane. The same technique is also used for object identification.
8.1 Adaptive thresholding
The intensity value of 160 used for the binarization above was chosen arbitrarily; however, it is possible to optimize this value mathematically. That is the idea of adaptive thresholding. For that we need to give the size of the window or calculation box within which the threshold is computed from the neighboring pixels. Another parameter is an integer that is subtracted from the mean computed inside the box to produce the final threshold.
Figure 37: Adaptive threshold. From left to right and top to bottom: the image, the smoothed image, the image binarized with the mean, and the image binarized with the Gaussian method.

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
suave = cv2.GaussianBlur(img, (7, 7), 0)  # apply blur
bin1 = cv2.adaptiveThreshold(suave, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 21, 5)
bin2 = cv2.adaptiveThreshold(suave, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 21, 5)
resultado = np.vstack([
    np.hstack([img, suave]),
    np.hstack([bin1, bin2])
])
cv2.imshow("Binarizacao adaptativa da imagem", resultado)
cv2.waitKey(0)
8.2 Thresholding with Otsu and Riddler-Calvard
Another method that automatically finds a threshold for the image is Otsu's method. It analyzes the image's histogram to find the two largest intensity peaks and then computes a value that best separates those two peaks.
Figure 38: Thresholding with Otsu and Riddler-Calvard.

import mahotas
import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
suave = cv2.GaussianBlur(img, (7, 7), 0)  # apply blur
T = mahotas.thresholding.otsu(suave)
temp = img.copy()
temp[temp > T] = 255
temp[temp < 255] = 0
temp = cv2.bitwise_not(temp)
T = mahotas.thresholding.rc(suave)
temp2 = img.copy()
temp2[temp2 > T] = 255
temp2[temp2 < 255] = 0
temp2 = cv2.bitwise_not(temp2)
resultado = np.vstack([
    np.hstack([img, suave]),
    np.hstack([temp, temp2])
])
cv2.imshow("Binarizacao com metodo Otsu e Riddler-Calvard", resultado)
cv2.waitKey(0)
9 Segmentation and edge-detection methods
One of the most important tasks in computer vision is identifying objects. For that identification, one of the main techniques is the use of edge detectors to find the outlines of the objects present in the image.
When we talk about segmentation and edge detection, the most common algorithms are Canny, Sobel, and their variations. Basically, in these and other methods, edge detection is done by identifying the gradient, in this case abrupt variations in the pixel intensities of a region of the image.
OpenCV provides implementations of 3 gradient (high-pass) filters: Sobel, Scharr, and Laplacian. The respective functions are cv2.Sobel(), cv2.Scharr(), and cv2.Laplacian().

9.1 Sobel
We will not go into the mathematical details of each method, but it is important to note that Sobel is directional, so we have to combine the horizontal and the vertical filters to obtain a complete transformation, see:

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
sobelX = cv2.Sobel(img, cv2.CV_64F, 1, 0)
sobelY = cv2.Sobel(img, cv2.CV_64F, 0, 1)
sobelX = np.uint8(np.absolute(sobelX))
sobelY = np.uint8(np.absolute(sobelY))
sobel = cv2.bitwise_or(sobelX, sobelY)
resultado = np.vstack([
    np.hstack([img,    sobelX]),
    np.hstack([sobelY, sobel])
])
cv2.imshow("Sobel", resultado)
cv2.waitKey(0)

Note that, because of how Sobel works, we need to process the image as 64-bit floating point (which supports positive and negative values) and only then convert back to uint8.
Figure 39: From left to right and top to bottom: the original image, horizontal Sobel (sobelX), vertical Sobel (sobelY), and the combined Sobel image, which is the final result.
A nice example of a Sobel result is in the OpenCV documentation, see:
Figure 40: Example of vertical and horizontal Sobel processing available in the OpenCV documentation.
9.2 Laplacian filter
The Laplacian filter does not require separate horizontal and vertical passes like Sobel; a single step is enough to generate the image below. It is, however, also necessary to work with the pixels represented as signed 64-bit floating point and then convert back to unsigned 8-bit integers.
Figure 41: Laplacian filter.
The code follows:

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
lap = cv2.Laplacian(img, cv2.CV_64F)
lap = np.uint8(np.absolute(lap))
resultado = np.vstack([img, lap])
cv2.imshow("Filtro Laplaciano", resultado)
cv2.waitKey(0)
9.3 Canny edge detector
In English, "canny" means clever, and the Canny edge detector really is smarter than the others. It actually builds on other techniques, such as Sobel, and performs multiple steps to reach the final result.
Basically, Canny involves:
1. Applying a Gaussian filter to smooth the image and remove noise.
2. Finding the intensity gradients of the image.
3. Applying non-maximum suppression and a double threshold to determine potential edges.
4. Applying the hysteresis process to check whether a pixel belongs to a strong edge, suppressing all other edges that are weak and not connected to strong edges.
Two parameters must be supplied to the cv2.Canny() function: threshold 1 and threshold 2, used in the final hysteresis step. Any gradient value above threshold 2 is considered an edge. Any value below threshold 1 is not considered an edge. Values between threshold 1 and threshold 2 are classified as edges or non-edges based on how they are connected.
Figure 42: Canny with different parameters. On the left we used lower thresholds (20, 120) and on the right the image was generated with higher thresholds (70, 200).

import numpy as np
import cv2
img = cv2.imread('ponte.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
suave = cv2.GaussianBlur(img, (7, 7), 0)
canny1 = cv2.Canny(suave, 20, 120)
canny2 = cv2.Canny(suave, 70, 200)
resultado = np.vstack([
    np.hstack([img,    suave]),
    np.hstack([canny1, canny2])
])
cv2.imshow("Detector de Bordas Canny", resultado)
cv2.waitKey(0)
10 Identifying and counting objects
As everyone knows, rolling dice is a very useful activity: useful for playing RPGs, Yahtzee-style games, and others. With the system presented below, though, it will no longer be necessary to click the mouse or press a key to roll against the computer. You will be able to roll real dice and the computer will see your score.
For that we need to identify:
1. Where the dice are in the image.
2. How many dice were rolled.
3. Which side is facing up.
Initially we will identify the dice and count how many there are in the image; in a second step we will identify their values. The image we have is below. It is not an easy image: besides the dice being red, with less contrast than white dice on a black table, for example, they also sit on a white surface with grooves, that is, a non-uniform surface. This will make our job harder.
Figure 43: Original image. The grooved white surface will complicate the process.
The steps shown in the sequence of images below are:
1. Convert the image to grayscale.
2. Apply blur to remove noise and make edge detection easier.
3. Binarize the image, leaving only black and white pixels.
4. Apply an edge detector to identify the objects.
5. With the edges identified, count the external contours to find the number of dice in the image.
Figure 44: Steps for identifying and counting the dice in the image.
Figure 45: Result over the original image.
The code that generates the outputs above follows, with comments:

import numpy as np
import cv2
import mahotas

# Helper function to write text on the images
def escreve(img, texto, cor=(255, 0, 0)):
    fonte = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(img, texto, (10, 20), fonte, 0.5, cor, 0, cv2.LINE_AA)

imgColorida = cv2.imread('dados.jpg')  # load the image
# If needed, the image could be resized here.

# Step 1: convert to grayscale
img = cv2.cvtColor(imgColorida, cv2.COLOR_BGR2GRAY)
# Step 2: blur/smooth the image
suave = cv2.blur(img, (7, 7))
# Step 3: binarization, leaving only black and white pixels
T = mahotas.thresholding.otsu(suave)
bin = suave.copy()
bin[bin > T] = 255
bin[bin < 255] = 0
bin = cv2.bitwise_not(bin)
# Step 4: edge detection with Canny
bordas = cv2.Canny(bin, 70, 150)
# Step 5: identify and count the contours in the image
# cv2.RETR_EXTERNAL = count only the external contours
(lx, objetos, lx) = cv2.findContours(bordas.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# The lx ("lixo"/trash) variable receives data that is not used
escreve(img, "Imagem em tons de cinza", 0)
escreve(suave, "Suavizacao com Blur", 0)
escreve(bin, "Binarizacao com Metodo Otsu", 255)
escreve(bordas, "Detector de bordas Canny", 255)
temp = np.vstack([
    np.hstack([img, suave]),
    np.hstack([bin, bordas])
])
cv2.imshow("Quantidade de objetos: " + str(len(objetos)), temp)
cv2.waitKey(0)
imgC2 = imgColorida.copy()
cv2.imshow("Imagem Original", imgColorida)
cv2.drawContours(imgC2, objetos, -1, (255, 0, 0), 2)
escreve(imgC2, str(len(objetos)) + " objetos encontrados!")
cv2.imshow("Resultado", imgC2)
cv2.waitKey(0)

The cv2.findContours() function has not been shown earlier in this book; we encourage the reader to study it further in the OpenCV documentation. In short, it searches the image for closed contours and returns a vector containing the objects found. Here this vector is stored in the objetos variable, which is why we use len(objetos) to count how many objects were found. The third argument of cv2.drawContours(), set to -1, means that all object contours will be drawn; we can also draw a specific contour by passing 0 for the first object, 1 for the second, and so on.
Now we need to identify which side of each die is facing up. For that we will need to count how many white pips there are on the face of the die. Several techniques can be used to find a solution; we leave the implementation and testing of this activity to you, dear reader ;)
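As a starting point for that exercise, one possible approach (a rough sketch, not the book's solution; the area limits and the assumption that the pips come out dark in the binarized image may need adjusting for your own photos) is to crop each die from the binarized image and count the small internal contours:

# assumes 'bin' (binarized image) and 'objetos' (die contours) from the listing above
for i, c in enumerate(objetos):
    (x, y, w, h) = cv2.boundingRect(c)
    face = bin[y:y + h, x:x + w]
    face_inv = cv2.bitwise_not(face)   # pips become white blobs
    cnts = cv2.findContours(face_inv, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[1]
    pips = [p for p in cnts if 30 < cv2.contourArea(p) < 0.2 * w * h]  # illustrative limits
    print("Dado", i + 1, "->", len(pips), "pontos (estimativa)")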
11 Face detection in images
One of the great abilities of human beings is the capacity to quickly identify patterns in images. This was undoubtedly crucial for the survival of humanity up to the present day. Computer vision seeks to give computers the same ability, and several techniques have been created in recent years with this goal.
The current state of the art is deep learning, techniques that involve artificial intelligence algorithms and neural networks to train detectors.
Another widely used and very important technique is Haar-like cascade features. The word Haar has no translation: the name derives from the Haar wavelets that were used in the first real-time face detector. The technique was introduced by <NAME> and <NAME> in the 2001 paper "Rapid Object Detection using a Boosted Cascade of Simple Features" and improved by <NAME> and <NAME> in the 2002 paper "An Extended Set of Haar-like Features for Rapid Object Detection".
The two references follow:
<NAME> and <NAME>. Rapid Object Detection using a Boosted Cascade of Simple Features. IEEE CVPR, 2001. Available online at http://research.microsoft.com/enus/um/people/viola/Pubs/Detect/violaJones_CVPR2001.pdf
<NAME> and <NAME>. An Extended Set of Haar-like Features for Rapid Object Detection. IEEE ICIP 2002, Vol. 1, pp. 900-903, Sep. 2002. This paper, as well as the extended technical report, can be retrieved at http://www.multimediacomputing.de/mediawiki//images/5/52/MRL-TR-May02-revisedDec02.pdf
The main advantage of the technique is the small amount of processing needed to identify objects, which translates into high detection speed.
Historicamnete os algoritmos sempre trabalharam apenas com a intensidades dos pixels da imagem. Contudo, uma publicao de Oren Papageorgio "A general framework for object detection" publicada em 1998 mostrou um recurso alternativo baseado em Haar wavelets em vez das intensidades de imagem. Viola e Jones ento adaptaram a idia de usar ondas Haar e desenvolveram as chamadas Haar-like features ou caractersticas Haar. Uma caracterstica Haar considera as regies retangulares adjacentes num local especfico (janela de deteco) da imagem, ento se processa a intensidades dos pixel em cada regio e se calcula a diferena entre estas somas. Esta diferena ento usada para categorizar subsees de uma imagem.
Por exemplo, digamos que temos imagens com faces humanas. uma caracterstica
50 comum que entre todas as faces a regio dos olhos mais escura do que a regio das bochechas. Portanto, uma caracterstica Haar comum para a deteco de face um conjunto de dois retngulos adjacentes que ficam na regio dos olhos e acima da regio das bochechas.
A posio desses retngulos definida em relao a uma janela de deteco que age como uma caixa delimitadora para o objeto alvo (a face, neste caso).
Na fase de deteco da estrutura de deteco de objetos Viola-Jones, uma janela do tamanho do alvo movida sobre a imagem de entrada, e para cada subseo da imagem
calculada a caracterstica do tipo Haar. Essa diferena ento comparada a um limiar aprendido que separa no-objetos de objetos. Como essa caracterstica Haar apenas um classificador fraco (sua qualidade de deteco ligeiramente melhor que a suposio aleatria), um grande nmero de caractersticas semelhantes a Haar so necessrias para descrever um objeto com suficiente preciso. Na estrutura de deteco de objetos Viola-Jones,
as caractersticas de tipo Haar so, portanto, organizadas em algo chamado cascata de classificadores para formar classificador forte. A principal vantagem de um recurso semelhante ao Haar sobre a maioria dos outros recursos a velocidade de clculo. Devido ao uso de imagens integrais, um recurso semelhante a Haar de qualquer tamanho pode ser calculado em tempo constante (aproximadamente 60 instrues de microprocessador para um recurso de 2 retngulos).
Uma caracterstica Haar-like pode ser definida como a diferena da soma de pixels de reas dentro do retngulo, que pode ser em qualquer posio e escala dentro da imagem original. Esse conjunto de caractersticas modificadas chamado de caractersticas de 2 retngulos. Viola e Jones tambm definiram caractersticas de 3 retngulos e caractersticas de 4 retngulos. Cada tipo de recurso pode indicar a existncia (ou ausncia) de certos padres na imagem, como bordas ou alteraes na textura. Por exemplo, um recurso de 2 retngulos pode indicar onde a borda est entre uma regio escura e uma regio clara.
OpenCV already ships with a ready-made algorithm for detecting Haar-like features; what we need are the XML files that are the source of the patterns used to identify objects. OpenCV provides ready-made files that identify patterns such as faces and eyes. At github.com/opencv/opencv/tree/master/data/haarcascades you can find other files for identifying other objects.
Below is an example of code that uses OpenCV with Python to detect faces:
import cv2

#Load the file and convert it to grayscale
i = cv2.imread('imagem.jpg')
iPB = cv2.cvtColor(i, cv2.COLOR_BGR2GRAY)
#Create the face detector
df = cv2.CascadeClassifier('xml/frontalface.xml')
#Run the detection
faces = df.detectMultiScale(iPB,
    scaleFactor = 1.05, minNeighbors = 7,
    minSize = (30,30), flags = cv2.CASCADE_SCALE_IMAGE)
#Draw yellow rectangles on the original (color) image
for (x, y, w, h) in faces:
    cv2.rectangle(i, (x, y), (x + w, y + h), (0, 255, 255), 7)
#Show the image. The window title shows the number of faces found
cv2.imshow(str(len(faces)) + ' face(s) found.', i)
cv2.waitKey(0)
Figure 46 – Frontal face detection.
Figure 47 – Frontal face detection.
The arguments that must be passed to the detection method, besides the XML file containing the object description, are listed below:
scaleFactor: how much the image size is reduced at each pass over the image. This is needed because there may be large (close) objects or smaller (more distant) objects in the image. A value of 1.05 means the image is reduced by 5% each time.
minNeighbors: how many neighbors each window must have for the area in the window to be considered a face. The cascade classifier detects several windows around a face, and this parameter controls how many positive windows are required for it to count as a valid face.
minSize: a tuple of width and height (in pixels) giving the minimum window size; boxes smaller than this size are ignored.
Below is a complete program with directory scanning: you can pass it a directory to search and the algorithm will go through every image in the directory, detect the faces, and draw a rectangle over each face found:
import os, cv2

#Scan the 'imagens' directory looking for JPG, JPEG and PNG files.
diretorio = 'imagens'
arquivos = os.listdir(diretorio)
for a in arquivos:
    if a.lower().endswith('.jpg') or a.lower().endswith('.png') or a.lower().endswith('.jpeg'):
        imgC = cv2.imread(diretorio+'/'+a)
        imgPB = cv2.cvtColor(imgC, cv2.COLOR_BGR2GRAY)
        df = cv2.CascadeClassifier('xml/haarcascade_frontalface_default.xml')
        faces = df.detectMultiScale(imgPB,
            scaleFactor = 1.2, minNeighbors = 7,
            minSize = (30,30), flags = cv2.CASCADE_SCALE_IMAGE)
        for (x, y, w, h) in faces:
            cv2.rectangle(imgC, (x, y), (x + w, y + h), (0, 255, 255), 7)
        alt = int(imgC.shape[0]/imgC.shape[1]*640)
        imgC = cv2.resize(imgC, (640, alt), interpolation = cv2.INTER_CUBIC)
        cv2.imshow(str(len(faces)) + ' face(s) found.', imgC)
        cv2.waitKey(0)
12 Face detection in videos

The process of detecting objects in videos is very similar to detection in images. In fact, a video is nothing more than a continuous stream of images sent from a source such as a webcam or an mp4 video file.
A loop is needed to process the continuous stream of images from the video. To make this easier to follow, see the example below:
import cv2

def redim(img, largura):
    #helper function to resize an image keeping its aspect ratio
    alt = int(img.shape[0]/img.shape[1]*largura)
    img = cv2.resize(img, (largura, alt), interpolation = cv2.INTER_AREA)
    return img

#Create the face detector from the XML file
df = cv2.CascadeClassifier('xml/haarcascade_frontalface_default.xml')
#Open a video recorded on disk
camera = cv2.VideoCapture('video.mp4')
#It is also possible to open the system's own webcam;
#to do that, use the line below instead
#camera = cv2.VideoCapture(0)
while True:
    #read() returns 1 - whether it succeeded and 2 - the frame itself
    (sucesso, frame) = camera.read()
    if not sucesso: #end of the video
        break
    #reduce the frame size to speed up processing
    frame = redim(frame, 320)
    #convert to grayscale
    frame_pb = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    #detect the faces in the frame
    faces = df.detectMultiScale(frame_pb, scaleFactor = 1.1,
        minNeighbors=3, minSize=(20,20), flags=cv2.CASCADE_SCALE_IMAGE)
    frame_temp = frame.copy()
    for (x, y, lar, alt) in faces:
        cv2.rectangle(frame_temp, (x, y), (x + lar, y + alt), (0, 255, 255), 2)
    #Show a resized frame (with some loss of quality)
    cv2.imshow("Finding faces...", redim(frame_temp, 640))
    #Wait for the 's' key to be pressed to quit
    if cv2.waitKey(1) & 0xFF == ord("s"):
        break
#close the stream
camera.release()
cv2.destroyAllWindows()
Figure 48 – Detection in a video.
13 Object tracking in videos

Object tracking is very useful in real applications. Tracking involves identifying an object and then following its trajectory. One way to identify an object is to use the Haar-like features technique. Another, even simpler, way is to simply define a specific color to track the object.
The code below performs this task. The goal is to identify and follow a blue object on the screen. Note that blue comes in many shades, which is why the cv2.inRange() function is so important. It receives a light blue and a dark blue, and everything between those two shades is identified as part of our object. The function returns a binarized image. See the example below:
import numpy as np
import cv2

azulEscuro = np.array([100, 67, 0], dtype = "uint8")
azulClaro = np.array([255, 128, 50], dtype = "uint8")
#camera = cv2.VideoCapture(args["video"])
camera = cv2.VideoCapture('video.mp4')
while True:
    (sucesso, frame) = camera.read()
    if not sucesso:
        break
    obj = cv2.inRange(frame, azulEscuro, azulClaro)
    obj = cv2.GaussianBlur(obj, (3, 3), 0)
    (_, cnts, _) = cv2.findContours(obj.copy(),
        cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if len(cnts) > 0:
        cnt = sorted(cnts, key = cv2.contourArea, reverse = True)[0]
        rect = np.int32(cv2.boxPoints(cv2.minAreaRect(cnt)))
        cv2.drawContours(frame, [rect], -1, (0, 255, 255), 2)
    cv2.imshow("Tracking", frame)
    cv2.imshow("Binary", obj)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
camera.release()
cv2.destroyAllWindows()
The line sorted(cnts, key = cv2.contourArea, reverse = True)[0] ensures that only the largest contour is tracked.
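An equivalent and slightly more direct way to pick the largest contour (a small sketch of ours, not part of the original code) is to use max instead of sorting the whole list:

```
# Picks the contour with the largest area without sorting the whole list.
cnt = max(cnts, key=cv2.contourArea)
```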
Another possibility is to use a Haar-like classifier to track an object. The procedure is analogous to the algorithm shown above, but instead of the inRange() function we use the detectMultiScale() function.
14 Character recognition

For this chapter we had the collaboration of <NAME>, through the article of which I was a co-author, "Automatic identification of vehicle license plates through image processing and computer vision". The article grew out of a research project that I supervised and is in the process of being published; in future versions of this book we will include the link to the publication. Part of the article is transcribed below, notably the part dealing with the PyTesseract library and with the identification and recognition of vehicle license plate characters.
14.1 The PyTesseract library

Tesseract-OCR is an open-source library developed by Google, originally for the C++ language. Its goal is to read text and characters from an image: it is able to turn the image of a text into a .txt file of that text.
Thanks to the community of programmers, the library was adapted to work from Python, and for that reason the name of this library for the Python language became PyTesseract.
PyTesseract relies on several external files, called dictionaries. These files contain different font types and different combinations of words, which makes it possible for the program to read any phrase or character that matches those files.
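A minimal usage sketch (not part of the article; it assumes the Tesseract engine plus the pytesseract and Pillow packages are installed, and uses a hypothetical image file name) shows how a dictionary is selected through the lang argument:

```
import pytesseract
from PIL import Image

# 'lang' selects which trained dictionary (.traineddata) Tesseract should use.
img = Image.open('exemplo.png')
texto = pytesseract.image_to_string(img, lang='eng')
print(texto)
```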
14.2 License plate standardization

Brazil has gone through several license plate standards since cars first appeared in the country in 1901. At that time plates contained only numbers and a single letter, which indicated whether the vehicle was private, for hire, among other classifications. A characteristic of that period is that the plates were issued by the city governments, which made identical plates possible in different parts of Brazil.
Figure 49 – An old car (2017) with a plate from the first standardization system.
From the 1990s onward, Brazil changed to the standard we still have today. The reason for the change was the number of cars in the country. This new plate model is made up of 3 letters and 4 digits, allowing roughly 150 million possible combinations and, in theory, that same number of cars.
Figure 50 – Current Brazilian vehicle plate standard (ITARO, 2016).
Brazilian plates carry at the top the city and state of origin, and below that the 3 letters and 4 numbers. Note that accented characters and the cedilla are not used among the 3 letters. All of these characters are written in the Mandatory typeface, the same one used by the nations of the European Union.
14.3 OpenCV filters

License plate images contain intense noise which, for an identification program, is extremely harmful to reading. To correct this, the threshold function is used; in this case it acts as a kind of filter, intensifying what is important in the image.
The filtering code can be seen below.
import cv2
import pytesseract
from PIL import Image
import string
import re

img2 = cv2.imread("/home/pyimagesearch/Desktop/placa-carro.jpg")
img = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
(T,Thresh1) = cv2.threshold(img, 44, 54, cv2.THRESH_TRUNC)
(T,Thresh3) = cv2.threshold(Thresh1, 43, 44, cv2.THRESH_BINARY)
(T,Thresh2) = cv2.threshold(Thresh3, 0, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C)
(T,Thresh4) = cv2.threshold(Thresh2, 30, 255, cv2.CALIB_CB_ADAPTIVE_THRESH)
cv2.imshow("Imagem 01", Thresh4)
cv2.waitKey(0)
Figure 51 – Plate without the filter (by the author).
Figure 52 – Plate with the filter (by the author).
14.4 Creating the dictionary

National plates are written in the Mandatory typeface, as mentioned earlier. Unfortunately there is no PyTesseract dictionary intended for reading those characters, so programs external to the library, created by the community, were used. The programs used were:
jTessBoxEditor: responsible for saving the patterns of each character in a .TIFF file.
Serak Tesseract Trainer: responsible for joining all the necessary .TIFF files into a .traineddata file, the standard PyTesseract format.
14.5 jTessBoxEditor

The first step of this process is to put the characters into a .txt file and then import it into the program. The program opens this file, reads each character, and saves its coordinates so that it can later locate them in an image. This process can be seen in image 06:
Figure 53 – The jTessBoxEditor program in operation (by the author).
For this project only the letters of the alphabet, the digits 0 through 9, and the hyphen were used, since these are the only characters present on a car plate.
14.6 Serak Tesseract Trainer

This program was also created by the community and its source code is open to everyone. Its name comes from its creator, <NAME>. Its role is to turn the file created by jTessBoxEditor into a .traineddata file for the PyTesseract library. The program in operation can be seen in image 07.
Figure 54 – The Serak Tesseract Trainer program in operation (by the author).
This program also lets you register result patterns, so that the program always tries those patterns against the images. Brazilian plates use a pattern of 3 letters, a hyphen and 4 digits: ABC-XXXX.
For that, a basic Python program was written to randomly generate thousands of plates, so that the program always tries to find results similar to one of those plates. The code can be seen below, and the output of running it comes next.
from random import *

letras = ["A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q",
          "R","S","T","U","V","W","X","Y","Z"]
num = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
a = 0
while(a>=0):
    x = randint (0,25)
    y = randint (0,25)
    z = randint (0,25)
    a = randint (0, 9)
    b = randint (0, 9)
    c = randint (0, 9)
    d = randint (0, 9)
    res = letras[x]+letras[y]+letras[z]+"-"+num[a]+num[b]+num[c]+num[d]
    print(res)
    a+=1
Figure 55 – Some random plates created by the program (by the author).
Finally, the program creates the .traineddata file with all of the information mentioned above, which makes reading the plates possible.
14.7 Results

With everything needed in hand, a base program for reading plates was written, containing one part for filtering and another for character recognition.
In the first tests an identification problem was noticed: the letters I and O look the same as the digits 1 and 0, respectively. Because of that, instead of identifying a plate as PLA-0000 the program identified it as PLA-OOOO. To solve this, the string had to be split into two parts: the part with letters and the part with numbers. From there, it is just a matter of adding a few character-substitution lines to the code: if the part meant for digits contains a letter, such as O, it is replaced by the character 0. The result of this run can be seen in image 10.
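The substitution idea described above can be isolated in a small helper (a sketch of ours, not part of the original article; it only covers the swaps mentioned in the text):

```
def corrige_placa(caracs):
    # Splits the raw OCR output into letters and digits and fixes
    # the characters that Tesseract commonly confuses on plates.
    letras, num = caracs[:3], caracs[4:8]
    for errado, certo in (('O', '0'), ('I', '1')):
        num = num.replace(errado, certo)
    for errado, certo in (('0', 'O'), ('1', 'I')):
        letras = letras.replace(errado, certo)
    return letras + '-' + num
```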
Figure 56 – Plate to be identified.
Figure 57 – Result.
Note that the program has not yet been aimed at reading the city characters; for now its only goal is to read the identification characters.
The final code can be seen below:
import cv2
import pytesseract
from PIL import Image
import string
import re

img2 = cv2.imread("/home/pyimagesearch/Desktop/placa-carro.jpg")
img = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
(T,Thresh1) = cv2.threshold(img, 40, 50, cv2.THRESH_TRUNC)
(T,Thresh3) = cv2.threshold(Thresh1, 26, 44, cv2.THRESH_BINARY)
(T,Thresh2) = cv2.threshold(Thresh3, 0, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C)
(T,Thresh4) = cv2.threshold(Thresh2, 30, 255, cv2.CALIB_CB_ADAPTIVE_THRESH)
pronta = cv2.resize(Thresh4, (300, 200))
pronta1 = cv2.GaussianBlur(pronta, (9,9), 1000)
cv2.imwrite("placa.jpg", pronta1)
caracs = pytesseract.image_to_string(Image.open("placa.jpg"), lang="mdt")
letras = caracs[:3]
num = caracs[4:8]
num = num.replace('O', "0")
num = num.replace('I', "1")
letras = letras.replace('0', "O")
letras = letras.replace('1', "I")
num = num.replace('G', "6")
letras = letras.replace('6', "G")
num = num.replace('B', "3")
letras = letras.replace('3', "B")
num = num.replace('T', "1")
letras = letras.replace('1', "T")
print("Plate (without corrections): " + caracs)
print("Plate (with corrections): " + letras + '-' + num)
References used in this chapter:
<NAME> (Comp.). Década de 10. Available at: <http://www.carroantigo.com/portugues/conteudo/fotos_10.htm>. Accessed: 20 Apr. 2017.
ITARO (Comp.). Conheça os significados das placas de carro. 2016. Available at: <https://www.itaro.com.br/blog/2016/02/conheca-o-significado-das-placas-de-carro/>. Accessed: 20 Apr. 2017.
15 Training object identification with Haar Cascades

In chapter 11 we used Haar-like cascade features to identify objects. On that occasion, however, we used classifiers (XML files) that already contained the features defined for certain kinds of objects, such as faces. On the web it is possible to find other classifiers (XML files) created to identify other objects. Even so, to build a custom application it is essential that we learn to create our own files containing the Haar-like features, that is, we need to learn to create our own classifiers.
For this chapter the main source is the excellent tutorial, written in article form, entitled Creating a Cascade of Haar-Like Classifiers: Step by Step, by <NAME> of the Department of Computer Science of the University of Auckland, New Zealand. In the article the author gives his e-mail <EMAIL> and his website www.MahdiRezaei.com for reference.
The sequence of steps we need to carry out to create our Haar-like classifier file (XML file) is:
1. Build our collection of positive and negative image files.
2. Mark the positive files with the coordinates of exactly where the objects the classifier should find are located. We do this with the objectmarker.exe utility or the ImageClipper tool.
3. Create a vector file (.vec) based on the markings of the positive images, using the createsamples.exe utility.
4. Train the classifier using the haartraining.exe utility.
5. Run the classifier using cvHaarDetectObjects().
15.1 Collecting the image database

Positive images are images that contain the object, and negative images are images that do not contain the object. The larger the number of positive and negative images, the better the performance of your classifier.
15.2 Organizing the negative images

Negative images are images that contain an ordinary background scene but do not contain the object we want to find. Place these images in the ../training/negative folder and run the create_list.bat utility to process them. This file contains a single command, dir /b *.jpg >bg.txt, which generates a text file listing the names of the images in the folder. This file will be needed to train the classifier.
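On systems without the Windows dir command, a small Python script can produce the same bg.txt listing (a sketch of ours; the folder name follows the tutorial's layout):

```
import os

# Writes bg.txt with one negative-image file name per line,
# mirroring what `dir /b *.jpg >bg.txt` produces on Windows.
pasta = 'training/negative'
nomes = sorted(f for f in os.listdir(pasta) if f.lower().endswith('.jpg'))
with open(os.path.join(pasta, 'bg.txt'), 'w') as saida:
    saida.write('\n'.join(nomes) + '\n')
```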
Below are some examples of negative images; remember that they must be in grayscale.
Figure 58 – Examples of negative images, that is, images that do not contain the object being searched for.
15.3 Cropping and marking the positive images

Now we need to create a data file (a vector file) containing the names of the positive images and the location of the training target object inside each positive image. The location is given by coordinates in a generated text file that contains, on each line, the file name followed by the coordinates of the target objects.
To automate this process you can use Objectmarker or Image Clipper, both of which come with the material linked at the beginning of this chapter. The first is faster and simpler; the second is more versatile but takes a little more time to configure and run. We will use Objectmarker in this example.
Before proceeding, check that the files are in .bmp format, since that is the supported format.
Place your images in the ../training/positive/rawdata folder. In the ../training/positive folder there is the objectmaker.exe file, which you need to run to mark the images. Note that the cv.dll and highgui.dll libraries must be in the same folder.
When processing starts, the image appears on screen and you need to click the upper-left corner and drag to the lower-right corner to mark the object. Press [Space] to record the marking. You can repeat the marking if the image contains more than one object. When finished, press [Enter]. If you want to quit before going through every file, press [Esc]. The info.txt file is generated. Note that each time you run objectmarker.exe the info.txt file is overwritten, so keep a backup copy.
15.4 Creating a vector of positive images

In the ../training folder there is a .bat file named samples_creation.bat. This file contains the following command: createsamples.exe -info positive/info.txt -vec vector/facevector.vec -num 200 -w 24 -h 24, with the following parameters:
Parameter 1, -info positive/info.txt: path to the index file of positive images.
Parameter 2, -vec vector/facevector.vec: path for the output vector that will be generated.
Parameter 3, -num 200: number of positive objects to be packed into the vector.
Parameter 4, -w 24: width of the objects.
Parameter 5, -h 24: height of the objects.
Note that parameter 3 must be set to the total number of objects available in the positive images. For example, if you have 100 files but each file contains 3 objects (marked in step 3), then this parameter must be set to 300.
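Assuming info.txt follows the usual OpenCV positive-description format (each line holds the file name, the number of objects, and then one x y w h group per object), a small sketch of ours can add up the marked objects so -num matches the real total:

```
# Sums the per-image object counts in info.txt to find the value for -num.
total = 0
with open('training/positive/info.txt') as info:
    for linha in info:
        campos = linha.split()
        if len(campos) >= 2:
            total += int(campos[1])  # 2nd field = number of objects in that image
print('Use -num', total)
```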
Remember that the cv097.dll, cxcore097.dll, highgui097.dll and libguide40.dll files must be in the ..\training folder before running createsamples.exe.
15.5 Haar training

In the ../training folder you can edit the haartraining.bat file to change the parameters as described below.
Command: haartraining.exe -data cascades -vec vector/facevector.vec -bg negative/bg.txt -npos 204 -nneg 200 -nstages 15 -mem 1024 -mode ALL -w 24 -h 24 -nonsym
Parameters:
-data cascades: path where the cascade is stored
-vec data/vector.vec: path of the vector
-bg negative/bg.txt: path of the list of negative files
-npos 200: number of positive examples (at most the number of BMP files)
-nneg 200: number of negative examples (at least npos)
-nstages 15: number of training stages
-mem 1024: amount of memory allocated, in megabytes
-mode ALL: see the literature for more information
-w 24 -h 24: sample size; must match the values used in samples_creation.bat
-nonsym: use only if the objects are not horizontally symmetric
The haartraining.exe command collects a new set of negative examples for each stage, and -nneg sets the limit for the size of that set. The program uses information from the previous stages to determine which of the candidate examples are misclassified. Training ends when the ratio of misclassified examples to candidate examples is smaller than FR(stage number); this is the stopping condition.
Regardless of the number of stages -nstages you defined, the program may finish earlier if the stopping condition is reached. Reaching this condition is normally a good sign, unless you have too few positive files for training (usually fewer than 500 is considered low).
Remember that to run haartraining.exe you need the cv097.dll, cxcore097.dll and highgui097.dll files in the ..\training folder.
The information displayed during training, as in the image that follows, is listed below:
Parent node: indicates the current stage of the training process
N: number of features used in this stage
%SMP: percentage of samples used for this feature
F: '+' when symmetry is applied and '-' when symmetry is not applied
ST.THR: stage threshold
HR: hit rate based on the threshold
FA: false-alarm rate based on the threshold
EXP. ERR: exponential error of the strong classifier
These parameters can be seen in the following image, which shows an example training run.
Figure 59 – Training process.
15.6 Creating the XML file

After training finishes you need to generate the XML file. In the ../training/cascades folder you will find a set of folders. If everything went well there will be one folder for each processed stage. Remember that training may finish before reaching the number of stages defined at the start. For example, in the previous image the training run was started with 15 stages, but only 12 stages were needed to reach the minimum error and training stopped, so only 12 folders were generated.
Each folder will contain a file called AdaBoostCARTHaarClassifier.txt. Now you need to copy all of the folders to ../cascade2xml/data in order to generate the XML.
The next step is to run the convert.bat file in ../cascade2xml, which contains the command haarconv.exe data detector_de_objeto.xml 24 24; the XML file name can be changed freely. Remember to keep the width and height (24 and 24) the same as those used in training. The XML file will be created in the same folder.
Now you need to test the XML file by running a detection check.
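A quick way to run that check is to load the generated XML with OpenCV's Python bindings and detect on a test image (a sketch of ours; the file names are placeholders, and detectMultiScale is the modern equivalent of the cvHaarDetectObjects() call mentioned in the steps):

```
import cv2

detector = cv2.CascadeClassifier('detector_de_objeto.xml')
img = cv2.imread('teste.jpg')
cinza = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
objetos = detector.detectMultiScale(cinza, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in objetos:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 255), 2)
cv2.imshow(str(len(objetos)) + ' object(s) found', img)
cv2.waitKey(0)
```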
16 Creating an object detector with Haar Cascades

Under construction... |
Player | cocoapods | Objective-C | player
===
Introduction
---
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam dignissim, arcu vitae interdum ultrices, tortor arcu fringilla leo, vitae cursus enim leo sed enim. Aliquam cursus, metus a interdum hendrerit, odio velit bibendum nunc, sed pellentesque lorem turpis vitae ipsum.
Installation
---
To install the Player library, you can follow the steps below:
1. Download the latest version of the library from the official website or GitHub repository.
2. Unzip the downloaded file to your desired location.
3. Open your terminal or command prompt and navigate to the unzipped folder.
4. Run the command `npm install` to install the necessary dependencies.
5. Start using the Player library in your project!
Getting Started
---
Once you have installed the Player library, you can start integrating it into your project. Follow the steps below to get started:
1. Import the necessary modules into your project using the provided imports:
```
import { Player, Controls } from 'player';
```
2. Create a new Player instance:
```
const player = new Player();
```
3. Configure the player settings as per your requirements:
```
player.setConfig({
autoplay: true,
loop: false,
// Additional configuration options...
});
```
4. Add the player to your desired HTML element:
```
player.mount('#player-container');
```
5. Use the provided controls (optional):
```
const controls = new Controls(player);
```
Configuration Options
---
The Player library provides various configuration options to customize the behavior and appearance of the player. You can use the `setConfig` method to update these options. Below is a list of available configuration options:
* **autoplay** (boolean): Determines whether the player should start automatically when loaded.
* **loop** (boolean): Specifies whether the player should loop the media playback.
* **volume** (number): Sets the initial volume of the player (0 to 1).
* **controls** (boolean): Displays the default media controls.
* … (add any other relevant configuration options)
API Documentation
---
The Player library provides an extensive API that allows you to interact with the player programmatically. Below is a summary of the available methods:
### Player Methods
* `play()`: Starts the media playback.
* `pause()`: Pauses the media playback.
* `stop()`: Stops the media playback.
* `seek(time: number)`: Seeks to the specified time in the media playback.
* … (list any other relevant player methods)
### Controls Methods
* `show()`: Shows the media controls.
* `hide()`: Hides the media controls.
* `toggle()`: Toggles the visibility of the media controls.
* … (list any other relevant controls methods)
Examples
---
Here are a few examples to showcase how to use the Player library:
### Example 1: Basic Integration
```
// Import required modules
import { Player } from 'player';

// Create a new Player instance
const player = new Player();

// Configure player settings
player.setConfig({
  autoplay: true,
  loop: false,
});

// Mount the player to a HTML element
player.mount('#player-container');
```
### Example 2: Custom Controls
```
// Import required modules
import { Player, Controls } from 'player';

// Create a new Player instance
const player = new Player();

// Configure player settings
player.setConfig({
  autoplay: true,
  loop: false,
});

// Mount the player to a HTML element
player.mount('#player-container');

// Create custom controls for the player
const controls = new Controls(player);
// Add event listeners or additional functionalities to custom controls
```
Conclusion
---
By following the steps and examples provided in this documentation, you should be able to successfully integrate and utilize the Player library in your project. If you encounter any issues or require further assistance, feel free to consult the official documentation or GitHub repository for additional resources. |
@fluent/bundle | npm | JavaScript | @fluent/bundle
===
`@fluent/bundle` is a JavaScript implementation of [Project Fluent](https://projectfluent.org),
optimized for runtime performance.
Installation
---
`@fluent/bundle` can be used both on the client-side and the server-side. You can install it from the npm registry or use it as a standalone script (as the
`FluentBundle` global).
```
npm install @fluent/bundle
```
How to use
---
The `FluentBundle` constructor provides the core functionality of formatting translations from FTL files.
```
import { FluentBundle, FluentResource } from "@fluent/bundle";
let resource = new FluentResource(`
-brand-name = Foo 3000
welcome = Welcome, {$name}, to {-brand-name}!
`);
let bundle = new FluentBundle("en-US");
let errors = bundle.addResource(resource);
if (errors.length) {
// Syntax errors are per-message and don't break the whole resource
}
let welcome = bundle.getMessage("welcome");
if (welcome.value) {
bundle.formatPattern(welcome.value, { name: "Anna" });
// → "Welcome, Anna, to Foo 3000!"
}
```
The API reference is available at <https://projectfluent.org/fluent.js/bundle>.
Compatibility
---
`@fluent/bundle` requires the following `Intl` formatters:
* `Intl.DateTimeFormat` (standard, well-supported)
* `Intl.NumberFormat` (standard, well-supported)
* `Intl.PluralRules` (standard, new in ECMAScript 2018)
`Intl.PluralRules` may already be available in some engines. In most cases,
however, a polyfill will be required. We recommend [intl-pluralrules](https://www.npmjs.com/package/intl-pluralrules).
```
import "intl-pluralrules";
import { FluentBundle } from "@fluent/bundle";
```
See also the [Compatibility](https://github.com/projectfluent/fluent.js/wiki/Compatibility) article on the `fluent.js` wiki.
Readme
---
### Keywords
* localization
* l10n
* internationalization
* i18n
* ftl
* plural
* gender
* locale
* language
* formatting
* translate
* translation
* format
* parser |
@nathantreid/dockerode | npm | JavaScript | dockerode
===
Not another Node.js Docker Remote API module.
Why `dockerode` is different from other Docker node.js modules:
* **streams** - `dockerode` does NOT break any stream, it passes them to you allowing for some stream voodoo.
* **stream demux** - Supports optional demultiplexing.
* **entities** - containers, images and execs are defined entities and not random static methods.
* **run** - `dockerode` allow you to seamless run commands in a container ala `docker run`.
* **tests** - `dockerode` really aims to have a good test set, allowing to follow `Docker` changes easily, quickly and painlessly.
* **feature-rich** - There's a real effort in keeping **All** `Docker` Remote API features implemented and tested.
* **interfaces** - Features a **callback** and a **promise** based interfaces, making everyone happy :)
Installation
---
`npm install dockerode`
Usage
---
* Input options are directly passed to Docker. Check [Docker Remote API documentation](https://docs.docker.com/engine/reference/api/docker_remote_api/) for more details.
* Return values are unchanged from Docker, official Docker documentation will also apply to them.
* Check the tests and examples folder for more examples.
###
Getting started
To use `dockerode` first you need to instantiate it:
```
var Docker = require('dockerode');
var docker = new Docker({socketPath: '/var/run/docker.sock'});
var docker1 = new Docker(); //defaults to above if env variables are not used
var docker2 = new Docker({host: 'http://192.168.1.10', port: 3000});
var docker3 = new Docker({protocol:'http', host: '127.0.0.1', port: 3000});
var docker4 = new Docker({host: '127.0.0.1', port: 3000}); //defaults to http
//protocol http vs https is automatically detected
var docker5 = new Docker({
host: '192.168.1.10',
port: process.env.DOCKER_PORT || 2375,
ca: fs.readFileSync('ca.pem'),
cert: fs.readFileSync('cert.pem'),
key: fs.readFileSync('key.pem'),
version: 'v1.25' // required when Docker >= v1.13, https://docs.docker.com/engine/api/version-history/
});
var docker6 = new Docker({
protocol: 'https', //you can enforce a protocol
host: '192.168.1.10',
port: process.env.DOCKER_PORT || 2375,
ca: fs.readFileSync('ca.pem'),
cert: fs.readFileSync('cert.pem'),
key: fs.readFileSync('key.pem')
});
//using a different promise library (default is the native one)
var docker7 = new Docker({
Promise: require('bluebird')
//...
});
//...
```
###
Manipulating a container:
```
// create a container entity. does not query API
var container = docker.getContainer('71501a8ab0f8');

// query API for container info
container.inspect(function (err, data) {
console.log(data);
});
container.start(function (err, data) {
console.log(data);
});
container.remove(function (err, data) {
console.log(data);
});
// promises are supported
docker.createContainer({
Image: 'ubuntu',
AttachStdin: false,
AttachStdout: true,
AttachStderr: true,
Tty: true,
Cmd: ['/bin/bash', '-c', 'tail -f /var/log/dmesg'],
OpenStdin: false,
StdinOnce: false
}).then(function(container) {
return container.start();
}).then(function(container) {
return container.resize({
h: process.stdout.rows,
w: process.stdout.columns
});
}).then(function(container) {
return container.stop();
}).then(function(container) {
return container.remove();
}).then(function(data) {
console.log('container removed');
}).catch(function(err) {
console.log(err);
});
```
You may also specify default options for each container's operations, which will always be used for the specified container and operation.
```
container.defaultOptions.start.Binds = ["/tmp:/tmp:rw"];
```
###
Stopping all containers on a host
```
docker.listContainers(function (err, containers) {
containers.forEach(function (containerInfo) {
docker.getContainer(containerInfo.Id).stop(cb);
});
});
```
###
Building an Image
```
docker.buildImage('archive.tar', {t: imageName}, function (err, response){
//...
});
docker.buildImage({
context: __dirname,
src: ['Dockerfile', 'file1', 'file2']
}, {t: imageName}, function (err, response) {
//...
});
```
###
Creating a container:
```
docker.createContainer({Image: 'ubuntu', Cmd: ['/bin/bash'], name: 'ubuntu-test'}, function (err, container) {
container.start(function (err, data) {
//...
});
});
//...
```
###
Streams goodness:
```
//tty:true
docker.createContainer({ /*...*/ Tty: true /*...*/ }, function(err, container) {
/* ... */
container.attach({stream: true, stdout: true, stderr: true}, function (err, stream) {
stream.pipe(process.stdout);
});
/* ... */
}
//tty:false
docker.createContainer({ /*...*/ Tty: false /*...*/ }, function(err, container) {
/* ... */
container.attach({stream: true, stdout: true, stderr: true}, function (err, stream) {
//dockerode may demultiplex attach streams for you :)
container.modem.demuxStream(stream, process.stdout, process.stderr);
});
/* ... */
}
docker.createImage({fromImage: 'ubuntu'}, function (err, stream) {
stream.pipe(process.stdout);
});
//...
```
There is also support for [HTTP connection hijacking](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#3-2-hijacking),
which allows for cleaner interactions with commands that work with stdin and stdout separately.
```
docker.createContainer({Tty: false, /*... other options */}, function(err, container) {
container.exec({Cmd: ['shasum', '-'], AttachStdin: true, AttachStdout: true}, function(err, exec) {
exec.start({hijack: true, stdin: true}, function(err, stream) {
// shasum can't finish until after its stdin has been closed, telling it that it has
// read all the bytes it needs to sum. Without a socket upgrade, there is no way to
// close the write-side of the stream without also closing the read-side!
fs.createReadStream('node-v5.1.0.tgz', 'binary').pipe(stream);
// Fortunately, we have a regular TCP socket now, so when the readstream finishes and closes our
// stream, it is still open for reading and we will still get our results :-)
docker.modem.demuxStream(stream, process.stdout, process.stderr);
});
});
});
```
###
Equivalent of `docker run` in `dockerode`:
* `image` - container image
* `cmd` - command to be executed
* `stream` - stream(s) which will be used for execution output.
* `create_options` - options used for container creation. (optional)
* `start_options` - options used for container start. (optional)
* `callback` - callback called when execution ends (optional, promise will be returned if not used).
```
//callback
docker.run('ubuntu', ['bash', '-c', 'uname -a'], process.stdout, function (err, data, container) {
console.log(data.StatusCode);
});
//promise
docker.run(testImage, ['bash', '-c', 'uname -a'], process.stdout).then(function(container) {
console.log(container.output.StatusCode);
return container.remove();
}).then(function(data) {
console.log('container removed');
}).catch(function(err) {
console.log(err);
});
```
or, if you want to split stdout and stderr (you must to pass `Tty:false` as an option for this to work)
```
docker.run('ubuntu', ['bash', '-c', 'uname -a'], [process.stdout, process.stderr], {Tty:false}, function (err, data, container) {
console.log(data.StatusCode);
});
```
If you provide a callback, `run` will return an EventEmitter supporting the following events: container, stream, data.
If a callback isn't provided a promise will be returned.
```
docker.run('ubuntu', ['bash', '-c', 'uname -a'], [process.stdout, process.stderr], {Tty:false}, function (err, data, container) {
//...
}).on('container', function (container) {
container.defaultOptions.start.Binds = ["/tmp:/tmp:rw"];
});
```
###
Equivalent of `docker pull` in `dockerode`:
* `repoTag` - container image name (optionally with tag)
`myrepo/myname:withtag`
* `opts` - extra options passed to create image.
* `callback` - callback called when execution ends.
```
docker.pull('myrepo/myname:tag', function (err, stream) {
// streaming output from pull...
});
```
####
Pull from private repos
`docker-modem` already base64 encodes the necessary auth object for you.
```
var auth = {
username: 'username',
password: 'password',
auth: '',
email: '<EMAIL>',
serveraddress: 'https://index.docker.io/v1'
};
docker.pull('tag', {'authconfig': auth}, function (err, stream) {
//...
});
```
If you already have a base64 encoded auth object, you can use it directly:
```
var auth = { key: '<KEY>' };
```
Helper functions
---
* `followProgress` - allows to fire a callback only in the end of a stream based process. (build, pull, ...)
```
//followProgress(stream, onFinished, [onProgress])
docker.pull(repoTag, function(err, stream) {
//...
docker.modem.followProgress(stream, onFinished, onProgress);
function onFinished(err, output) {
//output is an array with output json parsed objects
//...
}
function onProgress(event) {
//...
}
});
```
* `demuxStream` - demux stdout and stderr
```
//demuxStream(stream, stdout, stderr)
container.attach({
stream: true,
stdout: true,
stderr: true
}, function handler(err, stream) {
//...
container.modem.demuxStream(stream, process.stdout, process.stderr);
//...
});
```
Tests
---
* `docker pull ubuntu:latest` to prepare your system for the tests.
* Tests are implemented using `mocha` and `chai`. Run them with `npm test`.
Examples
---
Check the examples folder for more specific use cases examples.
License
---
<NAME> - [@pedromdias](https://twitter.com/pedromdias)
Licensed under the Apache license, version 2.0 (the "license"); You may not use this file except in compliance with the license. You may obtain a copy of the license at:
```
http://www.apache.org/licenses/LICENSE-2.0.html
```
Unless required by applicable law or agreed to in writing, software distributed under the license is distributed on an "as is" basis, without warranties or conditions of any kind, either express or implied. See the license for the specific language governing permissions and limitations under the license.
Readme
---
### Keywords
* docker
* docker.io |
ConSpline | cran | R | Package ‘ConSpline’
October 12, 2022
Type Package
Title Partial Linear Least-Squares Regression using Constrained
Splines
Version 1.2
Date 2017-10-02
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Given response y, continuous predictor x, and covariate matrix, the relationship between E(y) and x is estimated with a shape constrained regression spline. Function outputs fits and various types of inference.
License GPL-2 | GPL-3
Depends graphics, grDevices, stats, utils, coneproj (>= 1.12)
NeedsCompilation no
Repository CRAN
Date/Publication 2017-10-02 18:49:42 UTC
R topics documented:
ConSpline-package
conspline
GAVoting
WhiteSpruce
ConSpline-package Partial Linear Least-squares Regression with Constrained Splines
Description
Given a continuous response y and a continuous predictor x, and a design matrix Z of parametrically-modeled covariates, the model y=f(x)+Zb+e is fit using least-squares cone projection. The function f is smooth and has one of eight user-defined shapes: increasing, decreasing, convex, concave, or combinations of monotonicity and convexity. Quadratic splines are used for increasing and decreasing, while cubic splines are used for the other six shapes.
Details
Package: ConSpline
Type: Package
Version: 1.1
Date: 2015-08-27
License: GPL-2 | GPL-3
The function conspline fits the partial linear model. Given a response variable y, a continuous predictor x, and a design matrix Z of parametrically modeled covariates, this function solves a least-squares regression assuming that y=f(x)+Zb+e, where f is a smooth function with a user-defined shape. The shape is assigned with the argument type, where 1=increasing, 2=decreasing, 3=convex, 4=concave, 5=increasing and convex, 6=decreasing and convex, 7=increasing and concave, 8=decreasing and concave.
Author(s)
<NAME>
Maintainer: <NAME> <<EMAIL>>
References
Meyer, M.C. (2008) Shape-Restricted Regression Splines, Annals of Applied Statistics, 2(3),1013-
1033.
Examples
data(WhiteSpruce)
plot(WhiteSpruce$Diameter,WhiteSpruce$Height)
ans=conspline(WhiteSpruce$Height,WhiteSpruce$Diameter,7)
lines(sort(WhiteSpruce$Diameter),ans$muhat[order(WhiteSpruce$Diameter)])
conspline Partial Linear Least-Squares with Constrained Regression Splines
Description
Given a response variable y, a continuous predictor x, and a design matrix Z of parametrically modeled covariates, this function solves a least-squares regression assuming that y=f(x)+Zb+e, where f is a smooth function with a user-defined shape. The shape is assigned with the argument type, where 1=increasing, 2=decreasing, 3=convex, 4=concave, 5=increasing and convex, 6=decreasing and convex, 7=increasing and concave, 8=decreasing and concave.
Usage
conspline(y,x,type,zmat=0,wt=0,knots=0,
test=FALSE,c=1.2,nsim=10000)
Arguments
y A continuous response variable
x A continuous predictor variable. The length of x must equal the length of y.
type An integer 1-8 describing the shape of the regression function in x. 1=increasing, 2=decreasing, 3=convex, 4=concave, 5=increasing and convex, 6=decreasing and convex, 7=increasing and concave, 8=decreasing and concave.
zmat An optional design matrix of covariates to be modeled parametrically. The number of rows of zmat must be the length of y.
wt Optional weight vector, must be positive and of the same length as y.
knots Optional user-defined knots for the spline function. The range of the knots must
contain the range of x.
test If test=TRUE, a test for the "significance" of x is performed. For convex and
concave shapes, the null hypothesis is that the relationship between y and x is
linear, for any of the other shapes, the null hypothesis is that the expected value
of y is constant in x.
c An optional parameter for the variance estimation. Must be between 1 and 2
inclusive.
nsim An optional specification of the number of simulated data sets to make the mixing distribution for the test statistic if test=TRUE.
Details
A cone projection is used to fit the least-squares regression model. The test for the significance of
x is exact, while the inference for the covariates represented by the Z columns uses statistics that
have approximate t-distributions.
Value
muhat The fitted values at the design points, i.e. an estimate of E(y).
fhat The estimated regression function, evaluated at the x-values, describing the relationship between E(y) and x; see above description of the model.
fslope The slope of fhat, evaluated at the x-values.
knots The knots used in the spline function estimation.
pvalx If test=TRUE, this is the p-value for the test involving the predictor x. For convex and concave shapes, the null hypothesis is that the relationship between y and x is linear, versus the alternative that it has the assigned shape. For any of the other shapes, the null hypothesis is that the expected value of y is constant in x, versus the assigned shape.
zcoef The estimated coefficients for the components of the regression function given
by the columns of Z. An "intercept" is given if the column space of Z did not
contain the constant vectors.
sighat The estimate of the model variance. Calculated as SSR/(n-cD), where SSR is the sum of squared residuals of the fit, n is the length of y, D is the observed degrees of freedom of the fit, and c is a parameter between 1 and 2.
zhmat The hat matrix corresponding to the columns of Z, used to compute p-values for contrasts, for example.
sez The standard errors for the Z coefficient estimates. These are square roots of the diagonal values of zhmat, times the square root of sighat.
pvalz Approximate p-values for the null hypotheses that the coefficients for the covariates represented by the Z columns are zero.
Author(s)
<NAME>, Professor, Statistics Department, Colorado State University
References
<NAME>. (2008) Shape-Restricted Regression Splines, Annals of Applied Statistics, 2(3),1013-
1033.
Examples
n=60
x=1:n/n
z=sample(0:1,n,replace=TRUE)
mu=1:n*0+4
mu[x>1/2]=4+5*(x[x>1/2]-1/2)^2
mu=mu+z/4
y=mu+rnorm(n)/4
plot(x,y,col=z+1)
ans=conspline(y,x,5,z,test=TRUE)
points(x,ans$muhat,pch=20,col=z+1)
lines(x,ans$fhat)
lines(x,ans$fhat+ans$zcoef, col=2)
ans$pvalz ## p-val for test of significance of z parameter
ans$pvalx ## p-val for test for linear vs convex regression function
GAVoting Voting Data for Counties in Georgia, for the 2000 U.S. Presidential
Election
Description
Voting data by county, for the 159 counties in the state of Georgia, in the Bush vs Gore 2000 presidential election.
Usage
data("GAVoting")
Format
A data frame with 159 observations on the following 9 variables.
county the county name
method the voting method: OS-CC (optical scan, central count); OS-PC (optical scan, precinct
count); LEVER (lever); PUNCH (punch card); PAPER (paper ballot)
econ the economic level of the county according to OneGeorgia: poor; middle; rich
percent.black proportion of registered voters who are black
gore number of votes recorded for Mr Gore
bush number of votes recorded for Mr Bush
other number of votes recorded for a third candidate
votes number of votes recorded
ballots number of ballots received
Details
The uncounted votes in the 2000 presidential election were a concern in the state of Florida, where 2.9 percent of the ballots did not have a vote for president recorded. Because the election was close in that state, the voting methods and other issues were scrutinized. In the state of Georgia, 3.5 percent of the votes were uncounted. This data set gives votes by county, along with other data including voting method. A properly weighted ANOVA will show that proportions of uncounted votes are significantly higher in counties using the punch card method.
References
<NAME>. (2002). Uncounted Votes: Does Voting Equipment Matter? Chance Magazine, 15(4),
pp33-38.
Examples
data(GAVoting)
obs1=1:5
obs2=1:3
meth=1:159
econ=1:159
types=unique(GAVoting$method)
econs=unique(GAVoting$econ)
for(i in 1:159){
meth[i]=obs1[GAVoting$method[i]==types]
econ[i]=obs2[GAVoting$econ[i]==econs]
}
punc=100*(1-GAVoting$votes/GAVoting$ballots)
par(mar=c(4,4,1,1))
plot(GAVoting$percent.black,punc,xlab="Proportion of black voters",
ylab="percent uncounted votes",col=meth,pch=econ)
legend(0,18.5,pch=1:3,legend=c("poor","middle","rich"))
legend(.63,18.5,pch=c(1,1,1,1,1),col=1:5,
legend=c("lever","OS-CC","OS-PC","punch","paper"))
zmat=matrix(0,ncol=4,nrow=159)
for(i in 1:4){zmat[meth==i+1,i]=1}
ans1=conspline(punc,GAVoting$percent.black,1,zmat,wt=GAVoting$ballots)
lines(sort(GAVoting$percent.black),
ans1$fhat[order(GAVoting$percent.black)],col=1)
for(i in 1:4){
lines(sort(GAVoting$percent.black),
ans1$fhat[order(GAVoting$percent.black)]+ans1$zcoef[i],col=i+1)
}
WhiteSpruce Height and Diameter of 36 White Spruce trees.
Description
A standard scatterplot example from various statistics textbooks, representing height versus diameter of White Spruce trees.
Usage
data("WhiteSpruce")
Format
A data frame with 36 observations on the following 2 variables.
Diameter Diameter at "breast height" of tree
Height Height of tree
Examples
data(WhiteSpruce)
plot(WhiteSpruce$Diameter,WhiteSpruce$Height)
ans=conspline(WhiteSpruce$Height,WhiteSpruce$Diameter,7)
lines(sort(WhiteSpruce$Diameter),ans$muhat[order(WhiteSpruce$Diameter)]) |
GITHUB_papers-we-love_papers-we-love.zip_unzipped_p5-a-protocal-for-scalable-anonymous-communication.pdf | free_programming_book | Unknown | : A Protocol for Scalable Anonymous Communication
<NAME> <NAME> University of Maryland, College Park, Maryland, USA capveg, bobby, <EMAIL>
individual senders (or receivers) cannot determine the destination (or origin) of messages beyond a certain set of hosts in the network.
Our system, , provides both receiver and sender anonymity and also provides sender-receiver anonymity. Specifically, we assume that an adversary in our system may passively monitor every packet on every link of a network, and is able to correlate individual packets across links. Thus, the adversary can mount any passive attack on the underlying networking infrastructure. However, the adversary is not able to invert encryptions and read encrypted messages. The adversary can also read all signaling messages in the system. Our system provides receiver and sender anonymity under this rather strong adversarial model and provides the sender-receiver anonymity (or unlinkability) property. Thus, the adversary cannot determine if (or when) any two parties in the system are communicating.
maintains anonymity even if one party of a communication colludes with the adversary who can now identify specific packets sent to or received from the other end of the communication).
Unlike previous known solutions,
can be used to implement a scalable wide-area system with many thousand active participants, all of whom may communicate simultaneously.
Abstract
We present a protocol for anonymous communication over the
(Peer-to-Peer Personal PriInternet. Our protocol, called vacy Protocol) provides sender-, receiver-, and sender-receiver anonymity.
is designed to be implemented over the current Internet protocols, and does not require any special infrastructure support. A novel feature of is that it allows individual participants to trade-off degree of anonymity for communication efficiency, and hence can be used to scalably implement large anonymous groups. We present a description of
, an analysis of its anonymity and communication efficiency, and evaluate its performance using detailed packet-level simulations.
1 Introduction
We present the Peer-to-Peer Personal Privacy Protocol ( )
which can be used for scalable anonymous communication over the Internet.
provides sender-, receiver-, and sender-receiver anonymity, and can be implemented over the current Internet protocols.
can scale to provide anonymity for hundreds of thousands of users all communicating simultaneously and anonymously.
A system provides receiver anonymity if and only if it is not possible to ascertain who the receiver of a particular message is
(even though the receiver may be able to identify the sender).
Analogously, a system provides sender anonymity if and only if it is not possible for the receiver of a message to identify the original sender. It is not possible to provide perfect anonymity in a communication system since it is usually possible to enumerate all possible senders or recipients of a particular message.
In general, the degree of sender/receiver anonymity is measured by the size of the set of people who could have sent/received a particular message. There have been a number of systems designed to provide receiver anonymity [2, 5], and a number of systems that provide sender anonymity [8, 9]. In these systems,
1.1 A naive solution Consider a global broadcast channel. All participants in the anonymous communication send fixed length packets onto this channel at a fixed rate. These packets are encrypted such that only the recipient of the message may decrypt the packet, e.g.,
by using the receivers published public key. Assume that there is a mechanism to hide, spoof, or re-write sender addresses, e.g.,
by implementing the broadcast using an application-layer peerto-peer ring, and that all messages are sent to the entire group.
Lastly, every message is hop-by-hop encrypted, and thus, it is not possible to map a specific incoming message to a node to a particular outgoing message. (In essence, every node acts as a mix [1]). It is possible that a node may not be actively communicating at any given time, but in order to maintain the fixed communication rate, it would have to send a packet anyway. Such a packet would be a noise packet, and any packet destined for a particular receiver would be a signal packet.
This system provides receiver anonymity, since the sender does not know where in the broadcast group the receiver is or The first author is with the Department of Computer Science, University of Maryland at College Park. The second and third authors are with the Department of Computer Science and the University of Maryland Institute for Advanced Computer Studies, University of Maryland at College Park. This work was supported by a grant (ANI 0092806)
from the National Science Foundation.
1
which host or address the receiver is using; the sender only knows that the receiver is part of the broadcast group. This system also provides sender anonymity, since all messages to a given receiver (in case of a ring) come from a single upstream node, and the receiver cannot determine the original sender of a message. Lastly, this solution also provides unlinkability from a passive adversary since the adversary is not able to gain any extra information from monitoring any (or all) network links. For example, suppose node is sending messages to node . The adversary sees the same number of messages from node whether it were conversing with or not, and all of the messages are sent to the same broadcast address. Similarly, whether talks to or not, receives the same number of messages from over any suitably large interval. Note that the adversary is not able to trace a message from the sender to a receiver or vice-versa because of the hop-by-hop encryption, and thus, even if one end of a communication colludes with the adversary, the anonymity of the other party is not compromised.
This naive solution does not scale due to its broadcast nature.
As the number of people in the channel increases, the available bandwidth for any useful communication decreases linearly, and end-to-end reliability decreases exponentially. It is possible to increase the bandwidth utilization and reliability by limiting the number of people in a broadcast group, but then two parties who want to communicate may end up in different groups.
is based upon this basic broadcast channel principle; we scale the system by creating a hierarchy of broadcast channels.
Clearly, any broadcast-based system, including will not provide high bandwidth efficiency, both in terms of how many bits it takes a senderreceiver pair to exchange a bit of information,
and how many extra bits the network carries to carry one bit of useful information.
allows users to choose how inefficient the communication is, and provides a scalable control structure for securely and anonymously connecting users in different lognext.
ical broadcast groups. We present an overview of is [5]. This system also allows a tradeest prior work to off between bandwidth and communication efficiency, but like in [2], in this system only one sender-receiver pair may simultaneously communicate in this system. Thus, these systems cannot be used to implement large anonymous communication groups. A number of recent protocols [8, 4] also provide anonymous communication over the Internet. These protocols have the same underlying systems assumptions as
; however, unlike these protocols cannot withstand an all-powerful passive adversary.
1.3
The rest of this paper is structured as follows: we discuss related work in Section 2. We describe the algorithm in detail in Section 3, and present a set of analytic bounds on performance in Section 5. In Section 6, we analyze results from packet-level simulator. We discuss future work and conclude in Section 7.
2 Related Work
We discuss prior work related to . We begin with a discussion of the Dining Cryptographers problem, and discuss some other systems that provide anonymity over the Internet.
2.1 Dining Cryptographers and Mixes

Dining Cryptographers. The Dining Cryptographers (DC-net) protocol [2] provides sender anonymity under an adversary model similar to ours. DC-net assumes a public-key infrastructure, and users send encrypted broadcasts to the entire group, thus achieving receiver anonymity. However, unlike our system, all members of the group are made aware of when a message is sent, so Dining Cryptographers does not have the same level of sender-receiver anonymity. Also, in DC-net, only one user can send at a time, so additional bandwidth is needed to handle collisions and contention [10]. Lastly, a DC-net participant fixes its anonymity vs. bandwidth trade-off when joining the system, and there are no provisions to rescale that trade-off when others join the system.
1.2 Solution overview

Our system scales the naive solution by creating a broadcast hierarchy. Different levels of the hierarchy provide different levels of anonymity (and unlinkability), at the cost of communication bandwidth and reliability. Users of the system locally select a level of anonymity and communication efficiency, and can locally map themselves to a level which provides the requisite performance. At any time, it is possible for individual users to decrease anonymity by choosing a more communication-efficient channel. (Unfortunately, it is not possible to regain stronger anonymity.) Obviously, it is possible to choose a set of parameters that is not supported by the system (e.g., mutually incompatible levels of bandwidth utilization and anonymity).
Chaum introduced sender anonymity in [1], and sender-receiver anonymity (as the dining cryptographers problem) in [2]. The solution presented in [2] provides sender anonymity, receiver anonymity, and unlinkability. However, it does not provide a bandwidth vs. communication efficiency trade-off, and only one conversation can be sustained at any one time.
Mixes. A mix is a process that provides anonymity via packet re-shuffling. Mixes were introduced by Chaum in [1]. Mixes work best in series, and need a constant amount of traffic to avoid delay while preserving anonymity. Our system does both by creating a hierarchy of mixes, and its constant stream of signal and noise packets serves to keep the mixes operational.
2.2 Recent Anonymous Internet-based Communications Work

We describe four anonymity protocols that can be implemented over the Internet.
Though not strictly necessary, the addition of the bitmask significantly eases our exposition. We use the notation (b/m) to represent the contents of a group, where b is the bitstring and m is the number of valid bits. The root of the tree consists of the null bitstring and a zero-length mask; we represent the root with the label (/0). The left child of the root contains the group (0/1), and the right child is (1/1). The rest of the tree is constructed as shown in Figure 1; for example, the group (000/3) represents the bitstring 000, and the group (01/2) represents the bitstring 01. Each group in the tree corresponds to a broadcast channel in our system.
A message sent to a group is (unreliably) forwarded to a subset of all members of the system. Suppose a user sends a message to group (b/m). This message will be forwarded to a user in group (b'/m') if and only if the most significant min(m, m') bits of b and b' are the same. We call this common-prefix test the min-common-prefix check. Thus, a message sent to a group is forwarded to three distinct regions of the tree, enumerated below.
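To make the delivery rule concrete, here is a minimal sketch of the min-common-prefix check in Go. The Group type, its field names, and the bit-by-bit comparison are our own illustration rather than anything prescribed by the paper.

```go
package main

import "fmt"

// Group identifies a broadcast channel by its bitstring and mask length,
// mirroring the (b/m) notation: only the first Mask bits of Bits are valid.
type Group struct {
	Bits string // e.g. "010"; a string of '0'/'1' for clarity
	Mask int    // number of valid leading bits
}

// minCommonPrefix reports whether a message sent to group a is forwarded
// to members of group b: the first min(a.Mask, b.Mask) bits must agree.
func minCommonPrefix(a, b Group) bool {
	k := a.Mask
	if b.Mask < k {
		k = b.Mask
	}
	for i := 0; i < k; i++ {
		if a.Bits[i] != b.Bits[i] {
			return false
		}
	}
	return true
}

func main() {
	root := Group{"", 0}
	g010 := Group{"010", 3}
	g01 := Group{"01", 2}
	g1 := Group{"1", 1}
	fmt.Println(minCommonPrefix(root, g010)) // true: the root reaches everyone
	fmt.Println(minCommonPrefix(g01, g010))  // true: ancestor-descendant pair
	fmt.Println(minCommonPrefix(g1, g010))   // false: disjoint subtrees
}
```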
Xor-Trees. Like our system, Xor-Trees [5] provide sender, receiver, and sender-receiver anonymity. However, unlike our system, Xor-Trees do not admit a per-user anonymity vs. communication efficiency trade-off. Also, as in DC-net, only a single user may send at any one time in an Xor-Tree. Thus, in an Xor-Tree, performance degrades due to collisions as the number of users increases.
Crowds, Hordes, and Onion Routing. Both Crowds [8] and the more recent Hordes [9] provide sender anonymity. The basic idea in both systems is similar to Onion Routing [6], in which messages between communicating users are routed on an application-layer overlay using paths other than the shortest path. The receiver cannot resolve the sender of a particular message, since messages take different, potentially randomly chosen, routes through the network. However, neither system can provide anonymity when confronted by a passive observer who can mount statistical attacks by tracing and correlating packets throughout the network. None of these systems provides receiver anonymity.
- Local: A message sent to group (b/m) is broadcast to all members of the (b/m) group.

- Path to root: For each i < m, this message is also broadcast to all members of the group (b_i/i), where b_i denotes the i-bit prefix of b.

- Subtree: Lastly, for all m' > m, this message is also sent to all groups (b'/m'), where b' is any bitstring of length m' that begins with b.

For example, any message sent to the root (/0) is forwarded to all members of all groups, while a message sent to a group deeper in the tree is delivered to the members of that group (local), to all members of every group on the path from that group to the root, and to every member of every group in the subtree below it. In general, when a message is sent to group (b/m), members of group (b'/m') are forwarded this message if and only if the nodes of the tree corresponding to (b/m) and (b'/m') have an ancestor-descendant relationship.

FreeNet. Freenet [4] provides an anonymous publish-subscribe system over the Internet using an application-layer overlay, much like our system. However, FreeNet is designed for anonymous storage and retrieval, and the anonymity issues for such a system are different from those of a system that provides anonymity while the communicating parties are on-line. There is no notion of noise or signal, and the major issues in FreeNet are decoupling/hiding the authorship of a particular document, and providing fault-tolerant anonymous availability for a set of static documents.

3 The Protocol

Our protocol is based upon public-key cryptography. It does not require a global public-key infrastructure; however, we do assume that if two parties wish to communicate, they can ascertain each other's public keys using an out-of-band mechanism.
Assume that some number of individuals wish to form an anonymous communication system using our protocol, and that each of these users (or group members) has a public key. (It is entirely possible that these keys belong to fewer distinct individuals than there are keys; we discuss this issue in Section 3.5.) We use these public keys, called communication keys, to create a logical broadcast hierarchy. Each user in the system joins a set of such broadcast groups.

Note that these broadcast groups should be implemented as peer-to-peer unicast trees in the underlying network (and not multicast trees). These channels may lose messages, and require no particular consistency, reliability, or quality-of-service guarantees. We describe the precise networking and systems requirements and the underlying protocols in Section 4.3.

In general, communication efficiency increases as a group's mask size increases; however, as we shall see, this increase in efficiency comes at the expense of reduced anonymity. The depth of the tree is defined at run-time and depends on the number of people in the system and on the security parameters chosen by individual users. However, we need to fix a maximum depth for the tree. Choosing this parameter a priori is not difficult, since a tree of maximum depth D can accommodate approximately c · 2^D users, where c is the least number of people in any channel. In our implementation, we have chosen the maximum depth to be 32.

3.1 The logical broadcast hierarchy

The logical broadcast hierarchy is a binary tree T which is constructed using the public keys. Each node of T consists of a bitstring of a specified length. We present the algorithm assuming each node of T contains both a bitstring and a bitmask; the bitmask specifies how many of the most significant bits in the bitstring are valid.
[Figure 1: Form of the logical broadcast hierarchy T. Each node is labeled with its bitstring/mask pair: the root (/0); its children (0/1) and (1/1); the depth-two groups (00/2), (01/2), (10/2), (11/2); and the depth-three groups (000/3), (001/3), (010/3), (011/3), (100/3), (101/3), (110/3), (111/3).]
3.2 Mapping users to the logical broadcast tree

We use a secure public hash function H to map users to a node (group) of the tree. Consider a user A with public key K_A, and let h = H(K_A). User A will join some group of the form (h_m/m), where h_m is an m-bit prefix of h. The length m of the mask is chosen independently and randomly by user A according to a local security policy, as described in Section 3.4. The choice of this parameter should be secret, and it should not be possible to determine which precise group a user has joined. Thus, given a public key, it is public knowledge which set of groups a user may be in, but it is difficult to determine which specific group in this set the user has chosen.

Suppose users A and B are mapped to some arbitrary groups in the tree. We say a channel is common between A and B if and only if messages sent to it are forwarded to both A and B. Suppose A and B join groups derived from their respective hashes, and assume both know each other's public key. Since A knows K_B, it can determine H(K_B); however, A does not know B's mask length. Even without any knowledge of the masks, A and B can begin to communicate using the root channel. Unfortunately, this communication channel can be quite lossy, since messages have a higher probability of getting lost in the channels higher up in the tree, and the root is the most lossy communication channel of all.

The communication efficiency can be improved as follows. Instead of sending messages through the root, A could try to send messages to some group whose bitstring is a longer prefix of H(K_B). B would receive these messages, and may reply back to A. However, in doing so, A sacrifices some anonymity, since B can now map A to a smaller set of users. (B maps A to a smaller set using a Difference Attack, described in Section 3.5.) In general, B can choose to communicate back to A using any length mask; however, longer masks trade anonymity for communication efficiency. B can selectively trust some users and reveal longer masks to these trusted users, but in general, the anonymity of B is bounded by the longest mask that it has revealed to any other member. Lastly, note that the communication efficiency is upper bounded by the smaller of the two masks revealed by either of the participants in a communication.

Suppose A and B have agreed upon the length of a mask and wish to communicate. This means A is joined to some group and B is joined to some group, where both mask lengths are at least the agreed length. They can now communicate by sending messages to a lowest common ancestor channel that has both of their groups as descendants. However, if A and B only join one group each, and the public keys are uniformly hashed to channels on the tree, then with high probability the root will be the only channel that A and B have in common. Similarly, for any given node, an exponentially large number of nodes will be far away on the logical broadcast tree, and, in general, communication in the system will not be very effective. Once again, we would be reduced to using the inefficient global broadcast channel for most communications.

Our solution to this inefficient routing is as follows: each user joins a small number of groups on the logical tree. For each joined group, users generate another public-private key pair, called a routing key. These routing keys are generated locally, and do not require any global coordination. In fact, it should not be possible to map a user's routing key to their communication key; otherwise, the user's anonymity can be compromised (using an Intersection Attack, Section 3.5).

When a user joins a group, it periodically sends a message to the channel listing the other channels that it is joined to. This message serves as a routing advertisement, and is used to efficiently send messages along the lower levels of the tree. In general, the advertisements from a node contain the set of channels it can directly reach, the set of channels it can reach using one other node, and so on. In effect, these routing keys generate lateral edges in the tree. In Section 5, we show that typically each user needs to join only a few groups for any two users to have short paths (a small number of channel crossings) between them with high probability.
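The mapping from a communication key to a candidate group can be sketched as follows. The choice of SHA-256, the key encoding, and the helper name are our own assumptions, not part of the protocol specification.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/rand"
)

// groupFor maps a public key to the label "bits/mask" of the group a user
// would join, given the mask length m the user has chosen locally.
func groupFor(pubKey []byte, m int) string {
	sum := sha256.Sum256(pubKey) // stand-in for the secure public hash H
	bits := make([]byte, m)
	for i := 0; i < m; i++ {
		if sum[i/8]&(1<<(7-uint(i%8))) != 0 {
			bits[i] = '1'
		} else {
			bits[i] = '0'
		}
	}
	return fmt.Sprintf("%s/%d", bits, m)
}

func main() {
	// The mask length is chosen locally (and kept secret); here we simply
	// draw one at random up to an assumed maximum tree depth of 32.
	m := rand.Intn(33)
	fmt.Println(groupFor([]byte("example communication key"), m))
}
```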
We note that a user joins its set of groups only when it enters the system, and should not change the set of channels it is part of; otherwise, yet another intersection attack becomes feasible that can compromise its anonymity. Each group joined by a user corresponds to a one-hop peering in the underlying network (these are one-hop transport-level peerings, not single physical-hop peerings); we call the set of these peerings the physical connectivity graph.

Message Dropping Algorithms. In any communication system without explicit feedback, such as our channel broadcasts, message queues may build up at slow nodes or at nodes with high degree. In our system, members may simply drop any message they do not have the bandwidth or processing capacity to handle. The global properties of the system depend upon how messages are dropped. We have considered two different dropping algorithms:

- Uniform drop: This is the simplest scheme, in which messages from the input queue are dropped with equal probability until the input queue size is below the maximum threshold.

- Non-uniform drop: In this scheme, messages destined for a channel higher up in the tree are dropped preferentially. We have experimented with several variations of this scheme; the specific variant we use in our simulations drops packets destined for higher nodes with exponentially higher probability. If most of the end-to-end paths in the network can use lateral edges, i.e. edges between channels at the same logical height, then this scheme yields a lower drop rate; however, any communication that must use higher channels suffers proportionately higher drops.
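A minimal sketch of the non-uniform dropping discipline follows. The paper states only that higher channels are dropped with exponentially higher probability; the base-2-per-level weighting and the type names below are our own assumptions.

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// Packet carries the mask length (tree depth) of its destination channel;
// a smaller mask means a channel higher up in the hierarchy.
type Packet struct {
	DestMask int
	Payload  []byte
}

// dropNonUniform trims the queue to maxLen, discarding packets destined for
// higher channels with exponentially higher probability.
func dropNonUniform(queue []Packet, maxLen, maxDepth int) []Packet {
	for len(queue) > maxLen {
		// Weight each packet by 2^(height of its destination channel).
		weights := make([]float64, len(queue))
		total := 0.0
		for i, p := range queue {
			weights[i] = math.Pow(2, float64(maxDepth-p.DestMask))
			total += weights[i]
		}
		// Pick a victim proportionally to its weight and remove it.
		r := rand.Float64() * total
		victim := 0
		for i, w := range weights {
			if r < w {
				victim = i
				break
			}
			r -= w
		}
		queue = append(queue[:victim], queue[victim+1:]...)
	}
	return queue
}

func main() {
	q := []Packet{{0, nil}, {3, nil}, {1, nil}, {3, nil}, {2, nil}}
	fmt.Println(len(dropNonUniform(q, 3, 32))) // queue trimmed to 3 packets
}
```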
3.3 Signal and Noise
Our description of the protocol is nearly complete; however, we still need a crucial piece. Assuming packet sources cannot be traced from the broadcast messages (see Section 4 for the precise packet format), the protocol as described provides sender and receiver anonymity. We assume that each message is of the same size and is encrypted per hop, and thus it is not possible to map an outgoing message (packet) to a specific packet that the node received in the past. However, a passive observer can still mount an easy statistical attack and trace a communication by correlating a packet stream from a communicating source to a sink.

Thus, we add the notion of noise to the system. The noise packets should be added such that a passive correlation attack becomes infeasible. There are many possible good noise-generation algorithms, and we use the following simple scheme. Each user, at all times, generates a fixed amount of traffic destined to a channel chosen uniformly at random. A packet transmitted from a node is one of the following:

- A packet (noise or signal) that was received from some incoming interface and that this node is forwarding onto some other channel(s). (The precise forwarding rule is described in Section 4.)

- A signal packet that has been locally generated.

- A noise packet that has been locally generated.

Note that to an external observer, there is no discernible difference between these three cases. In general, only the source and destination of a communication can distinguish between noise and signal packets; they are treated with equal disdain at all other nodes.

3.4 Anonymity Analysis

Assume a node has joined the tree at channel c, and let R(c) be the set of members who receive a broadcast message sent to channel c. (Recall that this set includes all members of c, all members of the groups below c, and all members of the groups on the path from c to the root of the tree.)

Claim 3.1 The anonymity of a node communicating using channel c is equivalent to the set of members who are part of R(c).

Proof. We consider the sender-, receiver-, and sender-receiver anonymity cases separately.

- Sender Anonymity: Sender anonymity is the size of the set of nodes that could have sent a particular packet to a given host. A receiver who can only monitor its own links cannot determine the source of a packet, since this information is never included in the packet. A receiver in collusion with an all-powerful passive adversary, however, can enumerate the set of nodes that could have sent the packet by computing the closure of the set of nodes that have a causal relationship with the receiver. (In this case, two nodes are causally related if one sent a packet to the other.) This closure would be computed over some finite time window on the order of the end-to-end latency in the system. However, in our system, every user connected to a channel sends packets at a constant rate (signal or noise), and these packets are received by all users in the channel's receiver set; thus, there is a causal relationship between every pair of users in a broadcast group. Further, inter-channel routers transmit packets between channels, and over time every node in the system becomes causally related to every other node. Suppose a malicious receiver tries to expose a sender. It can, at best (assuming there are no other cross-channel packets), causally relate packets to its own broadcast
group. Further, if the receiver is able to determine and compromise the router node, then the sender anonymity becomes the effective broadcast group of the sender.
In case the receiver cannot compromise the channel router node, the sender's anonymity is the size of the entire system, even in the presence of an all-powerful passive adversary.
- Receiver Anonymity: When a node A sends a packet to node B on channel c, every member of R(c) receives the packet. From the perspective of an external observer, the behavior of the system is exactly the same whether B receives the packet or not. Thus, B's receiver anonymity is exactly equivalent to the set of all users who receive the packet, namely R(c).

- Sender-receiver anonymity: Since all nodes in the system send at a constant rate, and all packets are pair-wise encrypted between each hop, we claim that it is impossible for a passive observer to distinguish noise from signal packets. Since the observer cannot distinguish signal packets, it cannot discern if or when a node communicates, and thus it cannot determine when that node is communicating with any other node.

Assume that the rate at which a user sends packets does not change when it is sending signal versus noise packets. In this case, the distribution of packets, whether they are signal or noise, does not affect the security of the node. Thus, a nice property of our system is that the anonymity of any node depends only upon the length of the mask that it is willing to respond to.

3.5 Attacks

In this section, we outline a number of attacks that a system like ours must guard against, and show how our system defends against them.

- Correlation Attack: We have already alluded to this attack, in which a passive observer is able to (statistically) track signal packets from a source to a destination, thus violating sender-receiver anonymity. In our system, the noise packets thwart this attack.

- Intersection Attack: If an adversary knows that a user is in two different sets, then the anonymity of the user is reduced to the intersection of those sets. If users are uniformly distributed across such sets that can be intersected, then the anonymity for any user reduces exponentially with the number of intersecting sets. For example, suppose users communicated using both their routing keys and their communication key. With each key, there is a corresponding set of users who may own that key, and this leads to an intersection attack. This is the reason a user communicates with only one key in our system, and routing keys cannot be mapped back to the communication keys. Note that this is also the reason users cannot increase their anonymity beyond the smallest set they have ever been mapped to.

- Difference Attack: If an adversary can map the user to some set and can assert that the user is not in some other set, then it can map the user to the difference between these two sets. For example, suppose a user has revealed an m-bit mask. In that case, it should not respond or react to packets sent to any group with a mask longer than m. If the user does respond to such packets, then it is divulging where in the tree it is not: the receiver set of the longer-mask group is smaller than its own receiver set, and the adversary now knows that the user is not in the remainder of the larger set.

- DoS attack: Suppose a malicious user wants to reduce the efficiency of the system by sending a large number of useless packets. Our system can withstand this type of attack, since we impose a per-link queue limit, and all the extra packets from the malicious user will be dropped at the very first hop. Note that even the local broadcast group is not affected by a DoS attack as long as the first non-colluding hop correctly implements its queue limits.

- Mob attack: In this case, a set of malicious users (the mob) colludes to try to expose some user. The mob members can all join the same channel as the target and reduce the efficiency of that channel. This can cause the target to expose more bits of its mask, or cause other legitimate users to leave the channel; in either case, the mob has reduced the target's anonymity to the set of remaining legitimate users. This is a difficult attack to handle in a system that provides anonymity, since it is (hopefully) not possible to map public keys back to individuals. In our system, this attack can be handled by choosing a security parameter larger than the size of the largest mob. Note that in practice it may be possible for a single attacker to spoof multiple addresses. However, since we use unicast and each member of the group must communicate with the others, all of these addresses must actually exist on the network. In practice, it is unlikely that an attacker can co-opt addresses from many different Autonomous Systems (ASs), and a user can thwart this attack by ensuring that enough different ASs are represented in the channel that it responds on. However, an all-powerful active attacker can mount this attack from enough different ASs to expose any user. Thus, our system is susceptible to adversaries who can actively manipulate (e.g. by generating packets from arbitrary ASs) large parts of the network.

4 Details

In this section, we present details from our implementation. We have implemented the protocol in a packet-level simulator, but the details from our implementation would be useful in a real implementation as well.
Packet Format. We use fixed-length packets of size 1 KB. The fixed packet length eliminates any information an adversary could gain by monitoring packet lengths. The header contains only the identifier for the first-hop destination channel (a bitstring/mask pair). (It could, equivalently, contain the ultimate destination, but we chose to place only the first-hop destination in cleartext.) In our implementation, the bitstring is a 32-bit unsigned integer, and the mask is a 6-bit integer. If the packet is a signal packet, the rest of the packet is encrypted using the next-hop receiver's public key. Since packets may need to be encapsulated, and each packet is the same size, each packet also contains a padding field the size of the header.

The data part of the packet contains a set of fixed-size chunks, each of which is encrypted with the receiver's public key. These chunks are formed naturally by multiple public-key encryptions. The decrypted data part of the packet contains a checksum which the receiver uses to determine whether a packet is destined for itself or not. Each chunk can be decrypted independently; thus, a receiver does not need to decrypt an entire noise packet, and can discard a packet as soon as the first chunk fails its checksum. For efficiency, the first chunk of a signal packet may also include a symmetric cipher key for use in decrypting the other chunks, as symmetric decryptions tend to be faster than asymmetric ones.
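As an illustration of this layout, here is a rough Go sketch. The field names, chunk size, and padding encoding are our own assumptions; the paper specifies the layout only in prose.

```go
package main

import "fmt"

const (
	packetSize = 1024 // fixed 1 KB packets, so lengths leak no information
	chunkSize  = 128  // assumed chunk size; the paper fixes only the packet size
)

// Header names the first-hop destination channel in cleartext: the bitstring
// fits in 32 bits and the mask length in 6 bits.
type Header struct {
	ChannelBits uint32
	ChannelMask uint8 // only values 0..32 are meaningful
}

// Packet sketches the on-the-wire layout: a cleartext header, header-sized
// padding (to allow encapsulation), and encrypted fixed-size chunks.
type Packet struct {
	Hdr     Header
	Padding [8]byte // roughly the size of the header (assumed encoding)
	Chunks  [][chunkSize]byte
}

func main() {
	p := Packet{Hdr: Header{ChannelBits: 0b010 << 29, ChannelMask: 3}}
	fmt.Printf("packet to channel %03b/%d\n", p.Hdr.ChannelBits>>29, p.Hdr.ChannelMask)
}
```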
Regardless of whether a packet decrypts properly, the receiver schedules each packet for further delivery within the local channel using the forwarding rule described below.
The first chunk of a signal packet contains an encrypted bit which determines whether a packet should be forwarded onto some other channel, or whether the packet is destined for the current node. It also contains a channel identifier for the ultimate destination for the packet, which the current node uses to choose an outgoing channel.
When a node receives a packet with the forward bit set, it interprets the rest of the data as another packet and, if possible, forwards it onto the specified channel. In the forwarding step, the processing at a channel router differs depending on whether the packet is at its ultimate channel or not:

- If the packet is not at its final channel, the current node replaces the first chunk with a new chunk in which the forward bit is set, sets the proper ultimate destination channel, and encrypts this chunk with the public key of the next hop.

- If the packet is already at its destination channel, then the data part of the packet is already formatted with the proper address and has a valid first chunk encrypted with the public key of the intended recipient. The current node adds a last chunk at the end of the packet, filled with random bits, to pad the packet back up to the fixed system size.

If the forward bit is not set, then the signal packet is delivered locally.

Note that it is important that the output order of packets not be determined by their input order; otherwise it becomes possible to correlate packets across successive nodes and trace the communication between two parties. In other words, each node should act like a mix [1].

Forwarding within a channel. Since each logical channel is a tree, each node can use the following simple forwarding algorithm for a packet sent to some arbitrary channel: forward the packet to a peer on the channel if and only if the packet did not come in on that peering, and the destination passes the min-common-prefix check with respect to the peer's channel.

4.1 Member Security and Join Procedure

Analogous to the definition in [8], we define the anonymity of a user as the set of users in the group who are indistinguishable from that user, i.e., no other user or passive adversary can resolve messages from the user to a granularity finer than this set.

We assume that each user requires a minimal acceptable level of anonymity, i.e. each user requires their corresponding set to be of a minimum size; we call this minimum set size the security parameter. Each user may also define a maximum required level of security (i.e. a maximum size of the corresponding set), since this provides a bound on the communication inefficiency; we call this the efficiency parameter. A user joins a group whose receiver set size lies between these two bounds in the following manner:

- The user initially joins the root group (/0). If the size of the root's receiver set already lies between the security and efficiency parameters, we are done. If it is below the security parameter, the entire system does not have enough members to provide the requisite anonymity; in this case, the user forwards packets for other nodes and sends noise packets, but does not directly communicate with other nodes.

- If the current group has too many members (its receiver set is larger than the efficiency parameter), the user joins the appropriate child channel, extending the mask by one bit along its hashed bitstring.

- The user repeats this procedure until it finds a group whose receiver set size lies between the security and efficiency parameters.

Clearly, it is possible to choose incompatible values of the security and efficiency parameters, such that no group satisfies both bounds. In this case, the user can either change their security or efficiency parameter, or wait in the smallest group that still satisfies the security parameter until enough members have joined.

In our simulations, each user can determine the number of people in a group by consulting an oracle which maintains an up-to-date list of channel memberships. In an implementation, this information can be maintained in a secure distributed manner, either by the underlying application-layer multicast primitive, or at a well-known centralized topology server.

The topology server construct is needed if it is not possible to infer approximate group sizes. The topology server keeps pairs of the form (channel, group size). It is possible for a topology server to expose a user by providing false information (reporting that a group is large when it is in fact not). For extra security, the topology information can be replicated at several topology servers, and a user considers only the minimum group size reported by all topology servers; this way, a user can withstand colluding malicious topology servers as long as at least one of the servers it consults is honest. Similar techniques can be used to handle malicious topology servers that return a value smaller than the actual group size (to make communication inefficient). Lastly, note that the topology servers can also be used to find the users on a specified channel, which is needed when a new user joins a channel.
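A minimal sketch of this join loop in Go follows. The group-size oracle is abstracted as a function argument, and all names and the toy parameter values are our own illustration.

```go
package main

import "fmt"

// joinGroup descends the broadcast hierarchy along the user's hashed
// bitstring until the receiver-set size reported by the oracle falls between
// the security parameter (minimum anonymity set) and the efficiency parameter
// (maximum acceptable set). groupSize stands in for the topology-server or
// oracle lookup; maxDepth caps the descent.
func joinGroup(hashedBits string, security, efficiency, maxDepth int,
	groupSize func(prefix string) int) (string, bool) {
	prefix := "" // start at the root (/0)
	for depth := 0; depth <= maxDepth; depth++ {
		n := groupSize(prefix)
		if n < security {
			// Not enough members: forward and send noise, but do not
			// communicate yet.
			return prefix, false
		}
		if n <= efficiency {
			return prefix, true // this group satisfies both bounds
		}
		if depth == maxDepth {
			break
		}
		prefix += string(hashedBits[depth]) // descend toward the leaf
	}
	return prefix, false
}

func main() {
	// Toy oracle: pretend each extra bit of prefix halves the group size.
	sizes := func(prefix string) int { return 1 << (13 - len(prefix)) }
	g, ok := joinGroup("0110010", 100, 600, 7, sizes)
	fmt.Println(g, ok) // expect a prefix of length 4 (size 512), ok = true
}
```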
4.2 Migration up and down the tree

Suppose a user is connected to group (b/m) and initially satisfies both of its parameters. As users join and leave the system, it is possible for the channel (b/m) to begin violating the user's security or efficiency parameter. If the efficiency parameter is violated, the user can migrate down to a longer-prefix group. Unfortunately, a user does not regain any security by migrating up (i.e. by decreasing m), since an intersection attack fixes its anonymity to the size of the smallest channel that it has been part of since it initially joined. Thus, in common use, we assume users do not leave once they join, and if a group becomes too small, all remaining users have to recreate a new hierarchy. Note that they must then use new communication keys, since a passive observer along with a colluding receiver could otherwise mount an intersection attack.

4.3 The Network Abstraction and Processing Requirements

In this section we describe the precise networking requirements of our system. It was our design goal for the system to be easily implementable using current Internet protocols; as such, our networking requirements are meager. The system requires the implementation of broadcast channels in which the source address cannot easily be determined. This can be efficiently implemented using an application-layer multicast protocol. The transport-level requirements are minimal, and UDP would suffice as the transport protocol for edges in the topology. Lastly, note that during normal operation, members remain joined to the same set of channels; they only change channels if the overall security policy changes or if the group dynamics change drastically. Thus, the signaling load is low, and since the topology within each channel is relatively static, the trees can be optimized to map efficiently onto the underlying physical topology.

Host Requirements. The system requires state, processing, and link bandwidth at each host. We discuss these requirements in turn:

- State: Suppose a member communicates using a channel at depth m in the tree. In order to communicate within the group, it may have to maintain next-hop information about other groups. For small m, the state requirements are minimal. Note that, unlike an IP router, a member does not have to search its routing table for every packet; it only searches this table when it initiates a new data connection. Thus, this table can be maintained in secondary storage. If the number of peer channels is large, it may not be feasible for a node to maintain information about all of them. In this case, the node could maintain a small cache of advertisements, and if a new communication requires a channel that is not in the cache, it would have to wait until it hears another advertisement for that channel.

- Processing: There is a single public-key decryption for every packet that a member receives. Further, a member has to encrypt every signal packet during communication. However, in general, it does not have to encrypt noise packets; it is only necessary that the adversary not be able to distinguish noise packets from signal packets. Thus, it is feasible to generate noise packets using a good local random number source.

- Bandwidth: The broadcast nature of the system requires individual group members to (potentially) devote more bandwidth to communication than pure unicast (or systems such as Crowds [8]). However, the extra bandwidth directly results in enhanced anonymity, and, obviously, individual users may choose a more bandwidth-efficient channel if they are willing to sacrifice some anonymity. We analyze the actual bandwidth usage and how it scales with the number of group members and different security parameters in Section 6.

5 The Random Channels Model

In this section, we present an analysis of the paths in a network built by our protocol. Let n be the number of users in the system, and suppose there are C channels available in total. For some integer t (which is typically small), each user independently chooses t random channels without replacement; we denote this random set of channels for user u by S_u. Two central parameters for us will be: (i) the maximum hop-count between any two users, which is the maximum communication distance between any two users, and (ii) the minimum and maximum load (number of users) on any channel. Note that the loads are different from the security parameters: they count only the set of users local to a channel, while the security parameters count the set of users in all channels that provide anonymity for a given user. We next discuss these two parameters.

We say that there is a path between two users u and v if there is a sequence of users beginning with u and ending with v such that each consecutive pair chooses a channel in common. The length of such a path is the number of such steps, and the minimum length of any path between u and v is the distance or hop-count between u and v; if there is no such path, this hop-count is defined to be infinite. We are interested in bounding the maximum hop-count between any two users.

Define the load on a channel to be the number of users who chose it among their t random channels; let the minimum and maximum loads over all channels be the two load parameters. A lower bound on the former is needed to guarantee the security of the system, and an upper bound on the latter is required to show that the bandwidth overhead is not significant.
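The random-channels model is easy to explore empirically. The following sketch (with illustrative parameter values of our own choosing, not the paper's) draws each user's t channels uniformly at random and reports the observed minimum and maximum loads and the number of user pairs at hop-count one.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Each user picks t distinct channels out of C uniformly at random; we then
// measure the minimum and maximum channel load and count how many user pairs
// share a channel directly (hop-count 1).
func main() {
	const (
		n = 2000 // users
		C = 64   // channels
		t = 3    // channels chosen per user
	)
	choices := make([][]int, n)
	load := make([]int, C)
	for u := range choices {
		choices[u] = rand.Perm(C)[:t] // t distinct channels, uniformly at random
		for _, ch := range choices[u] {
			load[ch]++
		}
	}
	minLoad, maxLoad := load[0], load[0]
	for _, l := range load {
		if l < minLoad {
			minLoad = l
		}
		if l > maxLoad {
			maxLoad = l
		}
	}
	// Count unordered pairs that share at least one channel (distance 1).
	direct := 0
	for u := 0; u < n; u++ {
		for v := u + 1; v < n; v++ {
			shared := false
			for _, a := range choices[u] {
				for _, b := range choices[v] {
					if a == b {
						shared = true
					}
				}
			}
			if shared {
				direct++
			}
		}
	}
	fmt.Printf("load: min=%d max=%d (expected %d); direct pairs: %d of %d\n",
		minLoad, maxLoad, n*t/C, direct, n*(n-1)/2)
}
```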
Given the above discussion, our basic goals will be as follows. Clearly, we simultaneously want a small maximum hop-count, a minimum load that is not too small, and a maximum load that is not too high. It is easy to see that the expected load on any given channel is exactly nt/C; thus, nt/C is a natural parameter with which to study our system. So, we will consider scenarios where the maximum hop-count is constrained to be at most some given value and the minimum load is required to be at least some given value, and for any choice of parameters satisfying these constraints, we aim to show that the system has the above-sketched satisfactory properties with respect to the hop-count and the channel loads.
More concretely, we will proceed as follows. Fix
,
say. Let denote the desirable event that (i) all the channel loads are within of the expected value , and (ii)
.
We derive the following sufficient conditions for to hold (for
) with a probability of at least
:
)DR #2
#2
.2
&
B
#2 2
2 $
B
)
& &'"
1. , : two sufficient conditions are
(P1) ) K , &'"#" , and * & "" ; or
(P2) ) J , " , and *K#"" .
2. K : two sufficient conditions are
(P3) ) K , &'"#" , and * $& &#" ; or
(P4) ) J , " , and * ."" .
,*K
B
/
*
E ,
E
K
/
$ ,
2 )*
We next study the two requirements of most interest: the two hop-count requirements treated in Sections 5.2 and 5.3. As can be expected, the second case involves more work than the first. Our basic plan is as follows. Lemma 5.1 gives an upper bound on the probability that the first requirement fails, and Lemma 5.2 gives an upper bound for the second. Then, letting the complement of the desirable event be denoted as usual, we see by the union bound that its probability is upper-bounded by the sum of (1) and the probability bound given by Lemma 5.1 (respectively, by the sum of (1) and the bound given by Lemma 5.2). We put these pieces together in Section 5.4.
5.2
"! K The Requirement 02143
* ,
Here, we want sufficient conditions for
to hold with high probability. In other words, we want to show that for any two users, there is a path of length at most between them.
To do so, we fix distinct users and , and upper-bound the probability that there is no path of length between them;
then, by the union bound, the probability of is at most
, since there are only choices for the unordered pair
. Thus, we need to show that is negligible in comparison
, which we proceed to do now.
with Our plan is to condition on the values of and
. For each such choice, we will upper-bound the probability that there is no user (among the remaining
) who chose a channel that intersects both and
. Then, the maximum such probability is an upper-bound on . Fix and
. If
, then and are at distance ; so suppose
. In particular, we may assume that
.
Consider any other user . What is the probability of s random choice
intersecting and
? The total number of possible choices for is
. The number of possible intersection patterns can be counted as follows. Suppose
and
, where and
. The remaining elements of
are selected at random from outside
. Thus,
5
76 R $ 5 $
6 5
76
,
,E ,
:$ D $
,$
D $ D $ D $ D $
& 2 ,)
We now prove that these conditions are indeed sufficient, in the D $
D $ D $ $8;:% 9 9 rest of this section.
<
<
D
< $
D
$
D
$
5>= ?
5.1 Analysis Approach
<
In our analysis, we will frequently use the union bound, or Boole's inequality: Pr[A_1 ∪ A_2 ∪ ... ∪ A_k] ≤ Pr[A_1] + Pr[A_2] + ... + Pr[A_k].
) @$
< $
and
are much more tractable The parameters
:
$
C B
$
than , so we handle them first. Let F $ denote . It is an easy consequence of large-deviations bounds such as the
FE %2: ) $3
$8 < $ D $ D9 $ - D < $ $ ;9 $ &
Chernoff-Hoeffding bounds [3, 7] that for any given channel "
and any parameter "!# "I'& , its load "'$ satisfies:
where
$ " &
$ !#% 8( & $ R ) .2 (& $ R ) #2 '
? 5 ? 5 H ? 5 ? = ? H
G
H I
)
7 H
K E %2 ) $
6
* ()F )* #, R* #2T$; $
* J 5 = ? 6 6
()F ) + #, ,) $ R+ .2 $;$
6
* ()F ) #,$ F ) + #, ,) $;$3 Thus, since different users < make their random inde;E %2: ) $;$ * choices
(
)
F
pendently, we get that * (&
Now, a simple application of the union bound yields E
,#$ %2 );$;$ . Thus, as discussed above, a union bound yields B
B
@
E E , * ,$ R F ,$ E %2: ) $;$3 (2)
$8
(& 3$ R ) #2T$ -
(& .3$ R ) #2T$
* R F ) , $ R ) , , $;$ (1) In order to see what this bound says for various concrete values of our parameters / ) , we develop:
?
We next turn to bounding . We cannot directly draw on the
,. ! ) , ) &)$;$ and , .
rich random graphs literature, since we are working here with Lemma 5.1 Suppose E
a certain model of random hypergraphs with possibly repeated Then,
, * ;%L $ #,$ R "! !) , ) &'$ # $ .
9
5>= ? ? 5>= ?
E
Proof. (Sketch.) Since %2 );$ )
C6 6 , bound (2) 5.4 Putting It Together Now, as described at the end of Section 5.1, we just do routine
calculations to verify the following. First, if , and any one E
, * ,#$ R ) 2 ) ,$, ) 2 ) of (P1) and (P2)
holds, then the sum of (1) and the probability bound given by Lemma 5.1 is at most & "
. Similarly, if K and
any one
of
(P3)
and
(P4)
holds,
then the
sum of (1) and the We can now do a calculation to show that subject to our conprobability bound
given by
Lemma 5.2 is
at most &'"
. This straints (which includes the constraint that 2 , ) ), this bound
concludes our
proof sketch
about these
sufficient conditions
for is maximized when 2 and
. Further simplification shows that
and , respectively to hold with high probability.
then leads to the bound of the lemma.
6 1
5.3 The Requirement 0
G E G J
2
L! - $
%- $
! D $;$ D $ - ;
% 9 $3
%- $
We now adopt a different approach to get an upper bound on the probability that the second hop-count requirement fails. Let C denote our set of channels. For a set S of channels, define f(S) to be the set of all channels for which the condition described below holds.

6 Simulation Results

In this section, we present results from a packet-level simulator. Our simulator is written in C, and can simulate the entire protocol with thousands of participants. We designed and implemented five basic experiments:

- Measure system performance as the number of participants increases; specifically, we measure the end-to-end bandwidth, latency, and packet drop rates as the number of users in the system is increased.

- Measure the effect of the security and efficiency parameters on communication efficiency.

- Estimate the amount of time it takes systems of a given number of participants to converge (i.e., how long it takes a user to find a channel that satisfies their security and efficiency constraints).

- Measure the effects of different noise generation rates and queuing disciplines.

- Measure how the system behaves when increasing numbers of nodes engage in end-to-end communication.
In other words, f(S) is the set of channels that lie outside of S, but which lie in some user's chosen channel set that intersects S. Thus, we need to show that with high probability, at least one of the following four conditions holds for each pair of users u and v: (i) their channel sets intersect; (ii) some other user's channel set intersects both; (iii) f(S_u) intersects S_v; or (iv) S_u intersects f(S_v). To do so, we will instead show that, with high probability, every channel set S has a suitably large f(S); it can be verified that this implies that at least one of conditions (i)-(iv) holds for every pair of users. Fixing such an S, a calculation bounds the probability that f(S) is small; this bound is inequality (3). We summarize with:

Lemma 5.2 Suppose n, t, and the channel loads satisfy conditions analogous to those of Lemma 5.1. Then the probability that some pair of users has no path of length at most two between them is small.

Proof. (Sketch.) A union bound using (3) yields the stated bound. A calculation shows that, subject to our constraints, this bound is maximized at extreme values of the parameters; further simplification completes the proof.

Simulation Methodology. For each experiment, we generated a random physical topology. We did not model different propagation delays between pairs of nodes; instead, we assumed
unit propagation delay between any two nodes. Since all inter-node latencies are the same, our simulation proceeds using a synchronous clock. At every tick, all packets sent from every node are received at their destinations.
We assume an unbounded input queue length and a bounded output queue. All packets received at a node at a given time step are processed. Some of these packets may be queued at appropriate output queues, and a subset of them may be delivered locally. Next, each node generates a set of outgoing packets and enqueues these on the output queues. We then impose the output queue limit and according to the queuing discipline, discard packets if any output queue is larger than its maximum specified size. Note that during the discard phase, the node does not discriminate whether it is dropping its own packets or packets from some other node. All remaining packets at an output queue are delivered in the next time step to the next hop node.
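The synchronous simulation loop described above can be sketched as follows; the node structure, the queue limit, and the delivery details are our own simplifications of the description in the text (simple truncation stands in for the dropping disciplines).

```go
package main

import "fmt"

// Node holds packets that arrived this tick and packets queued for sending.
type Node struct {
	Inbox    []int   // packets received at the current tick (payload = hop count)
	Outbox   [][]int // one bounded output queue per neighbor
	Peers    []int   // indices of next-hop nodes
	MaxQueue int
}

// tick processes one synchronous step: everything queued at the previous step
// is delivered, each node processes its inbox and enqueues outgoing packets,
// and the per-queue limit is then enforced by discarding the excess.
func tick(nodes []*Node) {
	// Deliver everything queued at the previous step (unit propagation delay).
	for _, n := range nodes {
		for qi, peer := range n.Peers {
			nodes[peer].Inbox = append(nodes[peer].Inbox, n.Outbox[qi]...)
			n.Outbox[qi] = nil
		}
	}
	// Process inboxes and forward packets.
	for _, n := range nodes {
		for _, p := range n.Inbox {
			for qi := range n.Peers {
				n.Outbox[qi] = append(n.Outbox[qi], p+1) // forward with hop count +1
			}
		}
		n.Inbox = nil
		// Enforce the bounded output queue.
		for qi := range n.Outbox {
			if len(n.Outbox[qi]) > n.MaxQueue {
				n.Outbox[qi] = n.Outbox[qi][:n.MaxQueue]
			}
		}
	}
}

func main() {
	a := &Node{Peers: []int{1}, Outbox: make([][]int, 1), MaxQueue: 10}
	b := &Node{Peers: []int{0}, Outbox: make([][]int, 1), MaxQueue: 10}
	a.Outbox[0] = []int{0} // seed one packet from a to b
	nodes := []*Node{a, b}
	for i := 0; i < 3; i++ {
		tick(nodes)
	}
	fmt.Println("queued at a:", len(a.Outbox[0]), "queued at b:", len(b.Outbox[0]))
}
```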
Since all packets queued at a node are delivered at the next time step, the output queue size serves as a measure of both the processing and bandwidth requirements at a node. We instrumented the simulator to record the end-to-end latencies (number of simulator ticks), drop rates and bandwidth, end-to-end hop counts, number of channel crossings, and convergence times. In the rest of this section, we report results from individual experiments.
6.1 Scalability

[Figure 2: Loss rate vs. number of users, for sending rates 1/S and log S/S.]

[Figure 3: Average number of channels crossed vs. group size.]

In Figure 2, we plot the end-to-end loss rate as the number of users in the system is increased. In each case, there is only one pair of communicating nodes; all other nodes only send noise packets. For each group size, we chose ten different random seeds and created ten different topologies. For each topology, we chose three different sender-receiver pairs. Each point in Figure 2 is an average over all these runs. Unless otherwise noted, we choose fixed values of the security parameters and implement non-uniform queuing. All nodes in the system were connected to two channels, i.e. they had one communication key and one routing key.

There are two different curves, each corresponding to a different sending rate. In the Sending Rate = 1/S case, each node generates a packet at each time step with probability 1/S, where S is the size of its current broadcast group. Analogously, in the Sending Rate = log S/S case, each node generates a packet at each time step with probability log S/S. In both cases, the queue sizes at each node were very small.

From the plot, it is clear that the sending rate can be sustained in the system, and almost no signal (or noise) packets are lost. However, as the sending rate is increased, the queues in the system saturate, and drop rates increase with group size.

In Figure 3, we plot the average number of channels that the signal packets have to cross in order to reach their destination. Note that in this case, each user connects to only two channels. As predicted by the analysis in Section 5, the average channel-level hop count is very small, and is less than 1 for all our runs. The average end-to-end hop count in these runs was about 13. The worst-case inter-channel distance that we encountered in these runs was 2; this occurred in 4 out of 6000 signal packets sent.

Analysis. In these experiments, the average queue size at each node is around the minimum (10). In the worst case, each queue is always full, and the nodes have to handle two full queues' worth (20 packets) per tick. Each tick in our system corresponds to an end-to-end propagation delay; assume that this delay is on average on the order of 100 ms. Then these nodes would have to handle up to 200 packets per second. At 1000 bytes per packet, this translates to 1.6 Mbps of bandwidth and the ability to perform two hundred 1000-byte public-key decryptions per second. The processing capability required is trivial compared to the power of current processors. The bandwidth required is somewhat more of a concern; however, note that individual users can always reduce their bandwidth requirement by migrating lower in the tree. The Sending Rate = 1 case corresponds roughly to each node sending at 16 Kbps (with essentially no packet loss). If nodes are willing to incur higher packet loss, then the sending rate can be much higher: for example, in the 8192-user case, users can send at up to 200 Kbps if they are willing to handle up to 40% packet losses. Note that these losses accumulate over about 13 hops, which is the average end-to-end path length in these simulations.
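As a quick check of the arithmetic in the preceding paragraph (200 packets per second at 1000 bytes per packet):

\[
200\ \tfrac{\text{packets}}{\text{s}} \times 1000\ \tfrac{\text{bytes}}{\text{packet}} \times 8\ \tfrac{\text{bits}}{\text{byte}} = 1.6\ \text{Mbps}.
\]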
Thus, in an 8192-node network, any subset of users can anonymously communicate at hundreds of kilobits per second (with relatively high packet losses), if they invest approximately 2 Mbps of bandwidth. Clearly, the loss rate is high compared to the communication media we are used to, but we have to remember that in our system the user is gaining anonymity among at least 100 other users. In a pure broadcast system with 8192 users and these same parameters, the average end-to-end loss rate would be so high that communication would, essentially, be impossible. (In such a system, to achieve a 50% average end-to-end loss rate, the sending rate per user would have to be 1 packet per 8 seconds. Of course, each user is also gaining anonymity from all other users in the system.)
In Section 6.3, we discuss the effects of varying the sending rate while the bandwidth and group sizes are fixed.
6.2 Convergence Times

Table 1: Convergence times

  Group size (security parameter fixed):   64   128   256   512   1024   2048   4096   8192
  No. of rounds to converge:                0     0     1     2      3      4      5      6

  Security parameter (1024 users):          16    32    64   128    256    512
  No. of rounds to converge:                 6     5     4     3      2      1

[Figure 4: Loss rate vs. sending-rate multiplier, for uniform and non-uniform queuing at base rates 1 and log S/S.]

In Table 1, we present the convergence times measured in our simulator. There are results from two different experiments in the table: in the first experiment we fixed the security parameters and varied the number of users; in the second experiment, we fixed the number of users at 1024 and varied the security parameter. In all experiments, all the users join simultaneously at time 0. Each user migrates down the tree in rounds. Each round consists of 10 simulator ticks, and each user makes only a single migration decision in any one round. As expected, the convergence times increase as the number of users increases (or as the security parameter is decreased), since each user settles lower down in the tree. However, in all cases the number of rounds to converge grows only logarithmically.

6.3 Noise and Signal Generation

In Figure 4, we vary the sending rate while keeping all other parameters fixed. We monitor a single sender-receiver pair, and report the observed packet loss. We use the two base sending rates from Section 6.1, and linearly increase these rates by the rate multipliers plotted on the x-axis, while keeping the link bandwidths constant. As expected, the drop rates increase linearly with increases in the sending rate. Interestingly, non-uniform dropping performs slightly better as the sending rates increase.
6.3 Noise and Signal Generation In Figure 4, we vary the sending rate while keeping all other parameter fixed. We monitor a single senderreceiver pair, and report the observed packet loss. We use the two base sending rates from Section 6.1, and linearly increase these rates by the rate multipliers plotted on the -axis, while keeping the link bandwidths constant. As expected, the drop rates increase linearly with increases in sending rate. Interestingly, the non-uniform drop rates perform slightly better as the sending rates increase.
In Figure 5, we repeat the same experiment and vary the number of sender-receiver pairs in the system. The rest of the users still generate noise at the same rate ( ). We plot the average drop rate across all of the sender-receiver pairs. As expected, the drop rate is not affected by the number of sender receiver pairs, and thus, no extra information is divulged to a
100 Sending Rate Multiplier 50
100 150
200 Number of simultaneous sender-receiver pairs 250
Figure 5: Loss rate vs. number of senders passive observer when more people in the system communicate.
We have also experimented with different values of and .
As expected, the drop rate increases as users choose higher values of the security parameters, since they are mapped to larger broadcast groups.
7 Conclusions
We divide our conclusions into two parts.

7.1 Observations

Our system is a protocol for anonymous communication over the Internet. It allows secure anonymous connections within a hierarchy of progressively smaller broadcast groups, and allows individual users to trade off anonymity for communication efficiency.

In developing the system, we found an interesting property relating communication latency, bandwidth usage, and anonymity. In general, we found it was easy to construct protocols that provided two out of these three properties. For example, consider plain unicast communication: it provides low latency and efficient bandwidth usage, but does not provide anonymity. Now consider multicasting to a set (using per-source shortest-path trees) in which the message is intended for only one member of the group: this solution provides low latency, but the bandwidth utility decreases as the anonymity and unlinkability increase. Our system has the interesting property that it allows individual users to trade off these three properties on-line.

We designed the system to be scalable and compatible with current Internet protocols. Our simulations show that it can scale to large groups, and our analysis shows that it will maintain its short-paths property with very little extra overhead for extremely large groups. Our current work is to adopt lower-overhead noise-generation algorithms to further improve scalability; to provide better reliability by considering more connected structures within individual groups; and to build a prototype for deployment around the Internet.

7.2 A Note on Ethics

There may be some questions about why a system like ours is needed. Clearly, privacy over the Internet is an important and open issue, and our system is a first step towards a truly scalable anonymous network layer over IP. There are a number of applications, e.g. anonymous web transactions and anonymous re-mailers, where sender- and receiver-privacy is all that is required. Our system, however, also provides sender-receiver privacy, and like all technologies, this can be used in a malicious manner. We have decided to include sender-receiver privacy for the following reasons:

- We believe it is important to study these protocols, simply to learn what levels of anonymity are feasible over a public network such as the Internet.

- The protocol steps that provide sender-receiver anonymity can be decoupled from the rest of the protocol, and the system can be used in sender-/receiver-anonymity mode only. It is an orthogonal ethical (and possibly political) decision as to whether the system should be deployed to provide sender-receiver anonymity.

- We describe an attack that can be mounted by a powerful active adversary, specifically an adversary who can inject packets on an arbitrary set of network links; our system fails under such an attack.

Acknowledgments. We thank the referees for their helpful comments.

References

[1] <NAME>. Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms. Communications of the ACM, 24(2), 1981.

[2] <NAME>. The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability. Journal of Cryptology, 1(1):65-75, 1988.

[3] <NAME>. A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations. Annals of Mathematical Statistics, 1952.

[4] <NAME>, <NAME>, <NAME>, and <NAME>. Freenet: A Distributed Anonymous Information Storage and Retrieval System. In International Workshop on Design Issues in Anonymity and Unobservability, LNCS 2009, 2001.

[5] <NAME> and <NAME>. Xor-Trees for Efficient Anonymous Multicast Reception. In Advances in Cryptology - CRYPTO '97, 1997.

[6] <NAME>, <NAME>, and <NAME>. Onion Routing for Anonymous and Private Internet Connections. Communications of the ACM, 42(2), February 1999.

[7] <NAME>. Probability Inequalities for Sums of Bounded Random Variables. American Statistical Association Journal, 58, 1963.

[8] <NAME> and <NAME>. Crowds: Anonymity for Web Transactions. ACM Transactions on Information and System Security, 1(1):66-92, 1998.

[9] <NAME> and <NAME>. A Protocol for Anonymous Communication over the Internet. In Proceedings of the 7th ACM Conference on Computer and Communications Security (CCS-00), pages 33-42, November 2000. ACM Press.

[10] <NAME> and <NAME>. The Dining Cryptographers in the Disco: Unconditional Sender and Recipient Untraceability with Computationally Secure Serviceability. In J.-J. Quisquater and J. Vandewalle, editors, Advances in Cryptology - EUROCRYPT '89, volume 434 of Lecture Notes in Computer Science, page 690, April 1989.
@kyfe/ksui | npm | JavaScript | KSUI
===
Mobile UI Components built on Vue
🔥 [Documentation Site](https://vant-contrib.gitee.io/vant)
🇨🇳 [Chinese README](https://github.com/youzan/vant/blob/HEAD/README.zh-CN.md)
🚀 [WeChat Mini Program Version](https://github.com/youzan/vant-weapp)
---
Features
---
* 🚀 1KB Component average size (min+gzip)
* 🚀 65+ High quality components
* 💪 90%+ Unit test coverage
* 💪 Written in TypeScript
* 📖 Extensive documentation and demos
* 📖 Provide Sketch and Axure design resources
* 🍭 Support Vue 2 & Vue 3
* 🍭 Support Tree Shaking
* 🍭 Support Custom Theme
* 🍭 Support i18n
* 🌍 Support SSR
Install
---
```
# Install the latest KSUI
npm i @kyfe/ksui
```
Quickstart
---
```
import Vue from 'vue';
import { Button } from '@kyfe/ksui';
Vue.use(Button);
```
See more in [Quickstart](https://youzan.github.io/vant#/en-US/quickstart).
Contribution
---
Please make sure to read the [Contributing Guide](https://github.com/youzan/vant/blob/HEAD/.github/CONTRIBUTING.md) before making a pull request.
Browser Support
---
Vant 2 supports modern browsers and Android >= 4.0, iOS >= 8.0.
Vant 3 supports modern browsers and Chrome >= 51, iOS >= 10.0 (same as Vue 3).
Official Ecosystem
---
| Project | Description |
| --- | --- |
| [vant-weapp](https://github.com/youzan/vant-weapp) | WeChat MiniProgram UI |
| [vant-demo](https://github.com/youzan/vant-demo) | Collection of Vant demos |
| [vant-cli](https://github.com/youzan/vant/tree/dev/packages/vant-cli) | Scaffold for UI library |
| [vant-icons](https://github.com/youzan/vant/tree/dev/packages/vant-icons) | Vant icons |
| [vant-touch-emulator](https://github.com/youzan/vant/tree/dev/packages/vant-touch-emulator) | Using vant in desktop browsers |
Community Ecosystem
---
| Project | Description |
| --- | --- |
| [3lang3/react-vant](https://github.com/3lang3/react-vant) | React mobile UI Components based on Vant |
| [mxdi9i7/vant-react](https://github.com/mxdi9i7/vant-react) | Mobile UI Components built on React and TS, inspired by Vant |
| [vant-aliapp](https://github.com/ant-move/Vant-Aliapp) | Alipay MiniProgram UI |
| [taroify](https://gitee.com/mallfoundry/taroify) | Vant Taro |
| [vant-theme](https://github.com/Aisen60/vant-theme) | Online theme preview built on Vant UI |
| [@antmjs/vantui](https://github.com/antmjs/vantui) | Mobile UI Components based on Vant, supporting Taro and React |
| [@formily/vant](https://github.com/formilyjs/vant) | Form solution based on Vant and Formily |
Links
---
* [Documentation](https://youzan.github.io/vant)
* [Changelog](https://youzan.github.io/vant#/en-US/changelog)
* [Gitter](https://gitter.im/vant-contrib/discuss?utm_source=share-link&utm_medium=link&utm_campaign=share-link)
Preview
---
You can scan the following QR code to access the demo:
LICENSE
---
[MIT](https://en.wikipedia.org/wiki/MIT_License)
Readme
---
### Keywords
* ui
* vue
* frontend
* mobile ui
* component
* components |
github.com/bemasher/rtlamr | go | Go | README
[¶](#section-readme)
---
### Purpose
Utilities often use "smart meters" to optimize their residential meter reading infrastructure. Smart meters transmit consumption information in the various ISM bands, allowing utilities to simply send readers driving through neighborhoods to collect commodity consumption information. One protocol in particular, Encoder Receiver Transmitter (ERT) by Itron, is fairly straightforward to decode and operates in the 900MHz ISM band, well within the tunable range of inexpensive rtl-sdr dongles.
This project is a software defined radio receiver for these messages. We make use of an inexpensive rtl-sdr dongle to allow users to non-invasively record and analyze the commodity consumption of their household.
There's now experimental support for data collection and aggregation with [rtlamr-collect](https://github.com/bemasher/rtlamr-collect)!
[![Build Status](https://travis-ci.org/bemasher/rtlamr.svg?branch=master&style=flat)](https://travis-ci.org/bemasher/rtlamr)
[![AGPLv3 License](https://img.shields.io/badge/license-AGPLv3-blue.svg?style=flat)](http://choosealicense.com/licenses/agpl-3.0/)
### Requirements
* Go >= 1.3 (Go build environment setup guide: <http://golang.org/doc/code.html>)
* rtl-sdr
+ Windows: [pre-built binaries](https://ftp.osmocom.org/binaries/windows/rtl-sdr/)
+ Linux: [source and build instructions](http://sdr.osmocom.org/trac/wiki/rtl-sdr)
### Building
This project requires the package [`github.com/bemasher/rtltcp`](http://godoc.org/github.com/bemasher/rtltcp), which provides a means of controlling and sampling from rtl-sdr dongles via the `rtl_tcp` tool. This package will be automatically downloaded and installed when getting rtlamr. The following command should be all that is required to install rtlamr.
```
go get github.com/bemasher/rtlamr
```
This will produce the binary `$GOPATH/bin/rtlamr`. For convenience it's common to add `$GOPATH/bin` to the path.
### Usage
See the wiki page [Configuration](https://github.com/bemasher/rtlamr/wiki/Configuration) for details on configuring rtlamr.
Running the receiver is as simple as starting an `rtl_tcp` instance and then starting the receiver:
```
# Terminal A
$ rtl_tcp
# Terminal B
$ rtlamr
```
If you want to run the spectrum server (`rtl_tcp`) on a different machine than the receiver, use the `-a` option of `rtl_tcp` to specify a listen address that is accessible from the machine `rtlamr` will run on.
### Message Types
The following message types are supported by rtlamr:
* **scm**: Standard Consumption Message. Simple packet that reports total consumption.
* **scm+**: Similar to SCM, allows greater precision and longer meter IDs.
* **idm**: Interval Data Message. Provides differential consumption data for previous 47 intervals at 5 minutes per interval.
* **netidm**: Similar to IDM, except net meters (type 8) have a different internal packet structure, number of intervals and precision. Also reports total power production.
* **r900**: Message type used by Neptune R900 transmitters, provides total consumption and leak flags.
* **r900bcd**: Some Neptune R900 meters report consumption as binary-coded digits.
### Sensitivity
Using a NooElec NESDR Nano R820T with the provided antenna, I can reliably receive standard consumption messages from ~300 different meters and intermittently from another ~600 meters. These figures are calculated from the number of messages received during a 25 minute window. Reliably in this case means receiving at least 10 of the expected 12 messages and intermittently means 3-9 messages.
### Compatibility
Currently the only tested meter is the Itron C1SR. However, the protocol is designed to be useful for several different commodities and should be capable of receiving messages from any ERT capable smart meter.
Check out the table of meters I've been compiling from various internet sources: [ERT Compatible Meters](https://github.com/bemasher/rtlamr/blob/master/meters.md)
If you've got a meter not on the list that you've successfully received messages from, you can submit this info via a form available at the link above.
### Ethics
*Do not use this for malicious purposes.* If you do, I don't want to know about it, I am not and will not be responsible for your actions. However, if you find a clever non-evil use for this, by all means, share.
### Use Cases
These are a few examples of ways this tool could be used:
**Ethical**
* Track down stray appliances.
* Track power generated vs. power consumed.
* Find a water leak with rtlamr rather than from your bill.
* Optimize your thermostat to reduce energy consumption.
* Mass collection for research purposes. (*Please* anonymize your data.)
**Unethical**
* Using data collected to determine living patterns of specific persons with the intent to act on this data, particularly without express permission to do so.
### License
The source of this project is licensed under Affero GPL v3.0. According to <http://choosealicense.com/licenses/agpl-3.0/> you may:
#### Required:
* **Disclose Source:** Source code must be made available when distributing the software. In the case of LGPL, the source for the library (and not the entire program) must be made available.
* **License and copyright notice:** Include a copy of the license and copyright notice with the code.
* **Network Use is Distribution:** Users who interact with the software via network are given the right to receive a copy of the corresponding source code.
* **State Changes:** Indicate significant changes made to the code.
#### Permitted:
* **Commercial Use:** This software and derivatives may be used for commercial purposes.
* **Distribution:** You may distribute this software.
* **Modification:** This software may be modified.
* **Patent Grant:** This license provides an express grant of patent rights from the contributor to the recipient.
* **Private Use:** You may use and modify the software without distributing it.
#### Forbidden:
* **Hold Liable:** Software is provided without warranty and the software author/license owner cannot be held liable for damages.
* **Sublicensing:** You may not grant a sublicense to modify and distribute this software to third parties not included in the license.
### Feedback
If you have any questions, comments, feedback or bugs, please submit an issue.
Documentation
[¶](#section-documentation)
---
There is no documentation for this package. |
ritehash | rust | Rust | Crate ritehash[][src]
===
RiteLabs’ Fx Hash
---
This hashing algorithm was extracted from the Rustc compiler. This is the same hashing algorithm used for some internal operations in Firefox. The strength of this algorithm is in hashing 8 bytes at a time on 64-bit platforms, where the FNV algorithm works on one byte at a time.
### Disclaimer
It is **not a cryptographically secure** hash, so it is strongly recommended that you do not use this hash for cryptographic purposes. Furthermore, this hashing algorithm was not designed to prevent any attacks for determining collisions which could be used to potentially cause quadratic behavior in `HashMap`s. So it is not recommended to expose this hash in places where collisions or DDOS attacks may be a concern.
Structs
---
* `FxHasher` - This hashing algorithm was extracted from the Rustc compiler. This is the same hashing algorithm used for some internal operations in Firefox. The strength of this algorithm is in hashing 8 bytes at a time on 64-bit platforms, where the FNV algorithm works on one byte at a time.
* `FxHasher32` - This hashing algorithm was extracted from the Rustc compiler. This is the same hashing algorithm used for some internal operations in Firefox. The strength of this algorithm is in hashing 4 bytes at a time on any platform, where the FNV algorithm works on one byte at a time.
* `FxHasher64` - This hashing algorithm was extracted from the Rustc compiler. This is the same hashing algorithm used for some internal operations in Firefox. The strength of this algorithm is in hashing 8 bytes at a time on any platform, where the FNV algorithm works on one byte at a time.
Functions
---
* `hash` - A convenience function for when you need a quick usize hash.
* `hash32` - A convenience function for when you need a quick 32-bit hash.
* `hash64` - A convenience function for when you need a quick 64-bit hash.
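A minimal usage sketch of the items listed above (the `BuildHasherDefault` wiring is ordinary standard-library plumbing, not something re-exported by this crate):

```
use std::collections::HashMap;
use std::hash::BuildHasherDefault;

use ritehash::{hash, hash32, hash64, FxHasher};

fn main() {
    // One-off hashes of any `Hash` value.
    let a: usize = hash("hello");
    let b: u32 = hash32(&42u64);
    let c: u64 = hash64(&[1u8, 2, 3]);
    println!("{a} {b} {c}");

    // Plug FxHasher into a HashMap via BuildHasherDefault.
    let mut map: HashMap<&str, i32, BuildHasherDefault<FxHasher>> = HashMap::default();
    map.insert("key", 1);
    assert_eq!(map.get("key"), Some(&1));
}
```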
Struct ritehash::FxHasher[][src]
===
```
pub struct FxHasher { /* fields omitted */ }
```
This hashing algorithm was extracted from the Rustc compiler.
This is the same hashing algorithm used for some internal operations in Firefox.
The strength of this algorithm is in hashing 8 bytes at a time on 64-bit platforms,
where the FNV algorithm works on one byte at a time.
This hashing algorithm should not be used for cryptographic purposes, or in scenarios where DOS attacks are a concern.
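As a small sketch of driving the hasher directly through the standard `Hasher` trait (relying only on the `Default` and `Hasher` impls listed below):

```
use std::hash::Hasher;

use ritehash::FxHasher;

fn main() {
    let mut hasher = FxHasher::default();
    hasher.write(b"some bytes");
    hasher.write_u32(7);
    let digest: u64 = hasher.finish();
    println!("digest = {digest:#x}");
}
```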
Trait Implementations
---
[src]### impl Clone for FxHasher
[src]#### fn clone(&self) -> FxHasher
Returns a copy of the value. Read more
1.0.0[src]#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
[src]### impl Debug for FxHasher
[src]#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
[src]### impl Default for FxHasher
[src]#### fn default() -> FxHasher
Returns the “default value” for a type. Read more
[src]### impl Hasher for FxHasher
[src]#### fn write(&mut self, bytes: &[u8])
Writes some data into this `Hasher`. Read more
[src]#### fn write_u8(&mut self, i: u8)
Writes a single `u8` into this hasher.
[src]#### fn write_u16(&mut self, i: u16)
Writes a single `u16` into this hasher.
[src]#### fn write_u32(&mut self, i: u32)
Writes a single `u32` into this hasher.
[src]#### fn write_u64(&mut self, i: u64)
Writes a single `u64` into this hasher.
[src]#### fn write_usize(&mut self, i: usize)
Writes a single `usize` into this hasher.
[src]#### fn finish(&self) -> u64
Returns the hash value for the values written so far. Read more
1.26.0[src]#### fn write_u128(&mut self, i: u128)
Writes a single `u128` into this hasher.
1.3.0[src]#### fn write_i8(&mut self, i: i8)
Writes a single `i8` into this hasher.
1.3.0[src]#### fn write_i16(&mut self, i: i16)
Writes a single `i16` into this hasher.
1.3.0[src]#### fn write_i32(&mut self, i: i32)
Writes a single `i32` into this hasher.
1.3.0[src]#### fn write_i64(&mut self, i: i64)
Writes a single `i64` into this hasher.
1.26.0[src]#### fn write_i128(&mut self, i: i128)
Writes a single `i128` into this hasher.
1.3.0[src]#### fn write_isize(&mut self, i: isize)
Writes a single `isize` into this hasher.
Auto Trait Implementations
---
### impl RefUnwindSafe for FxHasher
### impl Send for FxHasher
### impl Sync for FxHasher
### impl Unpin for FxHasher
### impl UnwindSafe for FxHasher
Blanket Implementations
---
[src]### impl<T> Any for T where T: 'static + ?Sized,
[src]#### pub fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
[src]### impl<T> Borrow<T> for T where T: ?Sized,
[src]#### pub fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
[src]### impl<T> BorrowMut<T> for T where T: ?Sized,
[src]#### pub fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
[src]### impl<T> From<T> for T
[src]#### pub fn from(t: T) -> T
Performs the conversion.
[src]### impl<T, U> Into<U> for T where U: From<T>,
[src]#### pub fn into(self) -> U
Performs the conversion.
[src]### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
[src]#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
[src]### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
[src]#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct ritehash::FxHasher32[][src]
===
```
pub struct FxHasher32 { /* fields omitted */ }
```
This hashing algorithm was extracted from the Rustc compiler.
This is the same hashing algorithm used for some internal operations in Firefox.
The strength of this algorithm is in hashing 4 bytes at a time on any platform,
where the FNV algorithm works on one byte at a time.
This hashing algorithm should not be used for cryptographic purposes, or in scenarios where DOS attacks are a concern.
Trait Implementations
---
[src]### impl Clone for FxHasher32
[src]#### fn clone(&self) -> FxHasher32
Returns a copy of the value. Read more
1.0.0[src]#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
[src]### impl Debug for FxHasher32
[src]#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
[src]### impl Default for FxHasher32
[src]#### fn default() -> FxHasher32
Returns the “default value” for a type. Read more
[src]### impl Hasher for FxHasher32
[src]#### fn write(&mut self, bytes: &[u8])
Writes some data into this `Hasher`. Read more
[src]#### fn write_u8(&mut self, i: u8)
Writes a single `u8` into this hasher.
[src]#### fn write_u16(&mut self, i: u16)
Writes a single `u16` into this hasher.
[src]#### fn write_u32(&mut self, i: u32)
Writes a single `u32` into this hasher.
[src]#### fn write_u64(&mut self, i: u64)
Writes a single `u64` into this hasher.
[src]#### fn write_usize(&mut self, i: usize)
Writes a single `usize` into this hasher.
[src]#### fn finish(&self) -> u64
Returns the hash value for the values written so far. Read more
1.26.0[src]#### fn write_u128(&mut self, i: u128)
Writes a single `u128` into this hasher.
1.3.0[src]#### fn write_i8(&mut self, i: i8)
Writes a single `i8` into this hasher.
1.3.0[src]#### fn write_i16(&mut self, i: i16)
Writes a single `i16` into this hasher.
1.3.0[src]#### fn write_i32(&mut self, i: i32)
Writes a single `i32` into this hasher.
1.3.0[src]#### fn write_i64(&mut self, i: i64)
Writes a single `i64` into this hasher.
1.26.0[src]#### fn write_i128(&mut self, i: i128)
Writes a single `i128` into this hasher.
1.3.0[src]#### fn write_isize(&mut self, i: isize)
Writes a single `isize` into this hasher.
Auto Trait Implementations
---
### impl RefUnwindSafe for FxHasher32
### impl Send for FxHasher32
### impl Sync for FxHasher32
### impl Unpin for FxHasher32
### impl UnwindSafe for FxHasher32
Blanket Implementations
---
[src]### impl<T> Any for T where T: 'static + ?Sized,
[src]#### pub fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
[src]### impl<T> Borrow<T> for T where T: ?Sized,
[src]#### pub fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
[src]### impl<T> BorrowMut<T> for T where T: ?Sized,
[src]#### pub fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
[src]### impl<T> From<T> for T
[src]#### pub fn from(t: T) -> T
Performs the conversion.
[src]### impl<T, U> Into<U> for T where U: From<T>,
[src]#### pub fn into(self) -> U
Performs the conversion.
[src]### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
[src]#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
[src]### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
[src]#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct ritehash::FxHasher64[][src]
===
```
pub struct FxHasher64 { /* fields omitted */ }
```
This hashing algorithm was extracted from the Rustc compiler.
This is the same hashing algorithm used for some internal operations in Firefox.
The strength of this algorithm is in hashing 8 bytes at a time on any platform,
where the FNV algorithm works on one byte at a time.
This hashing algorithm should not be used for cryptographic purposes, or in scenarios where DOS attacks are a concern.
Trait Implementations
---
[src]### impl Clone for FxHasher64
[src]#### fn clone(&self) -> FxHasher64
Returns a copy of the value. Read more
1.0.0[src]#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
[src]### impl Debug for FxHasher64
[src]#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
[src]### impl Default for FxHasher64
[src]#### fn default() -> FxHasher64
Returns the “default value” for a type. Read more
[src]### impl Hasher for FxHasher64
[src]#### fn write(&mut self, bytes: &[u8])
Writes some data into this `Hasher`. Read more
[src]#### fn write_u8(&mut self, i: u8)
Writes a single `u8` into this hasher.
[src]#### fn write_u16(&mut self, i: u16)
Writes a single `u16` into this hasher.
[src]#### fn write_u32(&mut self, i: u32)
Writes a single `u32` into this hasher.
[src]#### fn write_u64(&mut self, i: u64)
Writes a single `u64` into this hasher.
[src]#### fn write_usize(&mut self, i: usize)
Writes a single `usize` into this hasher.
[src]#### fn finish(&self) -> u64
Returns the hash value for the values written so far. Read more
1.26.0[src]#### fn write_u128(&mut self, i: u128)
Writes a single `u128` into this hasher.
1.3.0[src]#### fn write_i8(&mut self, i: i8)
Writes a single `i8` into this hasher.
1.3.0[src]#### fn write_i16(&mut self, i: i16)
Writes a single `i16` into this hasher.
1.3.0[src]#### fn write_i32(&mut self, i: i32)
Writes a single `i32` into this hasher.
1.3.0[src]#### fn write_i64(&mut self, i: i64)
Writes a single `i64` into this hasher.
1.26.0[src]#### fn write_i128(&mut self, i: i128)
Writes a single `i128` into this hasher.
1.3.0[src]#### fn write_isize(&mut self, i: isize)
Writes a single `isize` into this hasher.
Auto Trait Implementations
---
### impl RefUnwindSafe for FxHasher64
### impl Send for FxHasher64
### impl Sync for FxHasher64
### impl Unpin for FxHasher64
### impl UnwindSafe for FxHasher64
Blanket Implementations
---
[src]### impl<T> Any for T where T: 'static + ?Sized,
[src]#### pub fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
[src]### impl<T> Borrow<T> for T where T: ?Sized,
[src]#### pub fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
[src]### impl<T> BorrowMut<T> for T where T: ?Sized,
[src]#### pub fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
[src]### impl<T> From<T> for T
[src]#### pub fn from(t: T) -> T
Performs the conversion.
[src]### impl<T, U> Into<U> for T where U: From<T>,
[src]#### pub fn into(self) -> U
Performs the conversion.
[src]### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
[src]#### pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
[src]### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
[src]#### pub fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Function ritehash::hash[][src]
===
```
pub fn hash<T: Hash + ?Sized>(v: &T) -> usize
```
A convenience function for when you need a quick usize hash.
Function ritehash::hash32[][src]
===
```
pub fn hash32<T: Hash + ?Sized>(v: &T) -> u32
```
A convenience function for when you need a quick 32-bit hash.
Function ritehash::hash64[][src]
===
```
pub fn hash64<T: Hash + ?Sized>(v: &T) -> u64
```
A convenience function for when you need a quick 64-bit hash. |
frame-support-procedural | rust | Rust | Crate frame_support_procedural
===
Proc macro of Support code for the runtime.
Macros
---
* `__create_tt_macro` - Internal macro used by `frame_support` to create tt-call-compliant macros.
* `__generate_dummy_part_checker` - Internal macro used by `frame_support` to generate a dummy part checker for old pallet declarations.
* `construct_runtime` - Construct a runtime, with the given name and the given pallets.
* `crate_to_crate_version`
* `impl_key_prefix_for_tuples` - This macro is meant to be used by frame-support only. It implements the traits `HasKeyPrefix` and `HasReversibleKeyPrefix` for tuples of `Key`.
* `match_and_insert` - Macro that inserts some tokens after the first match of some pattern.
Attribute Macros
---
* `benchmark` - An attribute macro used to declare a benchmark within a benchmarking module. Must be attached to a function definition containing an `#[extrinsic_call]` or `#[block]` attribute.
* `benchmarks` - An attribute macro that can be attached to a (non-empty) module declaration. Doing so will designate that module as a benchmarking module.
* `block` - An attribute macro used to specify that a block should be the measured portion of the enclosing benchmark function. This attribute is also used as a boundary designating where the benchmark setup code ends, and the benchmark verification code begins.
* `call_index` - Each dispatchable may also be annotated with the `#[pallet::call_index($idx)]` attribute, which explicitly defines the codec index for the dispatchable function in the `Call` enum.
* `compact` - Compact encoding for arguments can be achieved via `#[pallet::compact]`. The function must return a `DispatchResultWithPostInfo` or `DispatchResult`.
* `composite_enum` - The `#[pallet::composite_enum]` attribute allows you to define an enum that gets composed as an aggregate enum by `construct_runtime`. This is similar in principle to `#[pallet::event]` and `#[pallet::error]`.
* `config` - The mandatory attribute `#[pallet::config]` defines the configurable options for the pallet.
* `constant` - The `#[pallet::constant]` attribute can be used to add an associated type trait bounded by `Get` from `pallet::config` into metadata.
* `derive_impl` - This attribute can be used to derive a full implementation of a trait based on a local partial impl and an external impl containing defaults that can be overridden in the local impl.
* `disable_frame_system_supertrait_check` - To bypass the `frame_system::Config` supertrait check, use the attribute `pallet::disable_frame_system_supertrait_check`.
* `error` - The `#[pallet::error]` attribute allows you to define an error enum that will be returned from the dispatchable when an error occurs. The information for this error type is then stored in metadata.
* `event` - The `#[pallet::event]` attribute allows you to define pallet events. Pallet events are stored under the `system` / `events` key when the block is applied (and then replaced when the next block writes its events).
* `extra_constants` - Allows you to define some extra constants to be added into constant metadata.
* `extrinsic_call` - An attribute macro used to specify the extrinsic call inside a benchmark function, and also used as a boundary designating where the benchmark setup code ends, and the benchmark verification code begins.
* `generate_deposit` - The attribute `#[pallet::generate_deposit($visibility fn deposit_event)]` generates a helper function on `Pallet` that handles deposit events.
* `generate_store` - To generate a `Store` trait associating all storages, annotate your `Pallet` struct with the attribute `#[pallet::generate_store($vis trait Store)]`.
* `genesis_build` - **Rust-Analyzer users**: See the documentation of the Rust item in `frame_support::pallet_macros::genesis_build`.
* `genesis_config` - **Rust-Analyzer users**: See the documentation of the Rust item in `frame_support::pallet_macros::genesis_config`.
* `getter` - The optional attribute `#[pallet::getter(fn $my_getter_fn_name)]` allows you to define a getter function on `Pallet`.
* `hooks` - The `#[pallet::hooks]` attribute allows you to specify a `Hooks` implementation for `Pallet` that specifies pallet-specific logic.
* `import_section` - An attribute macro that can be attached to a module declaration. Doing so will import the contents of the specified external pallet section that was defined previously using `#[pallet_section]`.
* `inherent` - The `#[pallet::inherent]` attribute allows the pallet to provide some inherent. An inherent is some piece of data that is inserted by a block authoring node at block creation time and can either be accepted or rejected by validators based on whether the data falls within an acceptable range.
* `inject_runtime_type`
* `instance_benchmarks` - An attribute macro that can be attached to a (non-empty) module declaration. Doing so will designate that module as an instance benchmarking module.
* `no_default` - The optional attribute `#[pallet::no_default]` can be attached to trait items within a `Config` trait impl that has `#[pallet::config(with_default)]` attached.
* `no_default_bounds` - The optional attribute `#[pallet::no_default_bounds]` can be attached to trait items within a `Config` trait impl that has `#[pallet::config(with_default)]` attached.
* `origin` - The `#[pallet::origin]` attribute allows you to define some origin for the pallet.
* `pallet` - The pallet struct placeholder `#[pallet::pallet]` is mandatory and allows you to specify pallet information.
* `pallet_section` - Can be attached to a module. Doing so will declare that module as importable into a pallet via `#[import_section]`.
* `register_default_impl` - Attach this attribute to an impl statement that you want to use with `#[derive_impl(..)]`.
* `require_transactional`
* `storage` - The `#[pallet::storage]` attribute lets you define some abstract storage inside of runtime storage and also set its metadata. This attribute can be used multiple times.
* `storage_alias`
* `storage_prefix` - The optional attribute `#[pallet::storage_prefix = "SomeName"]` allows you to define the storage prefix to use. This is helpful if you wish to rename the storage field but don’t want to perform a migration.
* `storage_version` - Because the `pallet::pallet` macro implements `GetStorageVersion`, the current storage version needs to be communicated to the macro. This can be done by using the `pallet::storage_version` attribute.
* `transactional` - Execute the annotated function in a new storage transaction.
* `type_value` - The `#[pallet::type_value]` attribute lets you define a struct implementing the `Get` trait to ease the use of storage types. This attribute is meant to be used alongside `#[pallet::storage]` to define a storage’s default value. This attribute can be used multiple times.
* `unbounded` - The optional attribute `#[pallet::unbounded]` declares the storage as unbounded. When implementing the storage info (when `#[pallet::generate_storage_info]` is specified on the pallet struct placeholder), the size of the storage will be declared as unbounded. This can be useful for storage which can never go into PoV (Proof of Validity).
* `validate_unsigned` - The `#[pallet::validate_unsigned]` attribute allows the pallet to validate some unsigned transaction.
* `weight` - Each dispatchable needs to define a weight with the `#[pallet::weight($expr)]` attribute; the first argument must be `origin: OriginFor<T>`.
* `whitelist_storage` - The optional attribute `#[pallet::whitelist_storage]` will declare the storage as whitelisted from benchmarking. Doing so will exclude reads of that value’s storage key from counting towards weight calculations during benchmarking.
Derive Macros
---
* `CloneNoBound` - Derive `Clone` but do not bound any generic. Docs are at `frame_support::CloneNoBound`.
* `DebugNoBound` - Derive `Debug` but do not bound any generics. Docs are at `frame_support::DebugNoBound`.
* `DefaultNoBound` - Derive `Default` but do not bound any generic. Docs are at `frame_support::DefaultNoBound`.
* `EqNoBound` - Derive `Eq` but do not bound any generic. Docs are at `frame_support::EqNoBound`.
* `PalletError`
* `PartialEqNoBound` - Derive `PartialEq` but do not bound any generic. Docs are at `frame_support::PartialEqNoBound`.
* `RuntimeDebugNoBound` - Derive `Debug`; if `std` is enabled it uses `frame_support::DebugNoBound`, if `std` is not enabled it just returns `"<wasm:stripped>"`. This behaviour is useful to prevent bloating the runtime WASM blob with unneeded code.
Macro frame_support_procedural::__create_tt_macro
===
```
__create_tt_macro!() { /* proc-macro */ }
```
Internal macro used by `frame_support` to create tt-call-compliant macros
Macro frame_support_procedural::__generate_dummy_part_checker
===
```
__generate_dummy_part_checker!() { /* proc-macro */ }
```
Internal macro used by frame_support to generate a dummy part checker for old pallet declarations
Macro frame_support_procedural::construct_runtime
===
```
construct_runtime!() { /* proc-macro */ }
```
Construct a runtime, with the given name and the given pallets.
The parameters here are specific types for `Block`, `NodeBlock`, and `UncheckedExtrinsic`
and the pallets that are used by the runtime.
`Block` is the block type that is used in the runtime and `NodeBlock` is the block type that is used in the node. For instance they can differ in the extrinsics type.
Example:
---
```
construct_runtime!(
pub enum Runtime where
Block = Block,
NodeBlock = node::Block,
UncheckedExtrinsic = UncheckedExtrinsic
{
System: frame_system::{Pallet, Call, Event<T>, Config<T>} = 0,
Test: path::to::test::{Pallet, Call} = 1,
// Pallets with instances.
Test2_Instance1: test2::<Instance1>::{Pallet, Call, Storage, Event<T, I>, Config<T, I>, Origin<T, I>},
Test2_DefaultInstance: test2::{Pallet, Call, Storage, Event<T>, Config<T>, Origin<T>} = 4,
// Pallets declared with `pallet` attribute macro: no need to define the parts
Test3_Instance1: test3::<Instance1>,
Test3_DefaultInstance: test3,
// with `exclude_parts` keyword some part can be excluded.
Test4_Instance1: test4::<Instance1> exclude_parts { Call, Origin },
Test4_DefaultInstance: test4 exclude_parts { Storage },
// with `use_parts` keyword, a subset of the pallet parts can be specified.
Test4_Instance1: test4::<Instance1> use_parts { Pallet, Call},
Test4_DefaultInstance: test4 use_parts { Pallet },
}
)
```
Each pallet is declared as such:
* `Identifier`: name given to the pallet that uniquely identifies it.
* `:`: colon separator
* `path::to::pallet`: identifiers separated by colons which declare the path to a pallet definition.
* `::<InstanceN>` optional: specify the instance of the pallet to use. If not specified it will use the default instance (or the only instance in case of non-instantiable pallets).
* `::{ Part1, Part2<T>, .. }` optional if pallet declared with `frame_support::pallet`: Comma separated parts declared with their generic. If a pallet is declared with
`frame_support::pallet` macro then the parts can be automatically derived if not explicitly provided. We provide support for the following module parts in a pallet:
+ `Pallet` - Required for all pallets
+ `Call` - If the pallet has callable functions
+ `Storage` - If the pallet uses storage
+ `Event` or `Event<T>` (if the event is generic) - If the pallet emits events
+ `Origin` or `Origin<T>` (if the origin is generic) - If the pallet has instantiable origins
+ `Config` or `Config<T>` (if the config is generic) - If the pallet builds the genesis
storage with `GenesisConfig`
+ `Inherent` - If the pallet provides/can check inherents.
+ `ValidateUnsigned` - If the pallet validates unsigned extrinsics.
It is important to list these parts here to export them correctly in the metadata or to make the pallet usable in the runtime.
* `exclude_parts { Part1, Part2 }` optional: comma separated parts without generics. I.e. one of
`Pallet`, `Call`, `Storage`, `Event`, `Origin`, `Config`, `Inherent`, `ValidateUnsigned`. It is incompatible with `use_parts`. This specifies the parts to exclude, in order to select a subset of the pallet parts.
For example excluding the part `Call` can be useful if the runtime doesn’t want to make the pallet calls available.
* `use_parts { Part1, Part2 }` optional: comma separated parts without generics. I.e. one of
`Pallet`, `Call`, `Storage`, `Event`, `Origin`, `Config`, `Inherent`, `ValidateUnsigned`. It is incompatible with `exclude_parts`. This specifies the parts to use, in order to select a subset of the pallet parts.
For example not using the part `Call` can be useful if the runtime doesn’t want to make the pallet calls available.
* `= $n` optional: number to define at which index the pallet variants in `OriginCaller`, `Call`
and `Event` are encoded, and to define the ModuleToIndex value.
If `= $n` is not given, then the index is resolved in the same way as for a fieldless enum in Rust
(i.e. incremented from the previous index):
```
pallet1 .. = 2,
pallet2 .., // Here pallet2 is given index 3
pallet3 .. = 0,
pallet4 .., // Here pallet4 is given index 1
```
Note
---
The population of the genesis storage depends on the order of pallets. So, if one of your pallets depends on another pallet, the pallet that is depended upon needs to come before the pallet depending on it.
Type definitions
---
* The macro generates a type alias for each pallet to their `Pallet`. E.g. `type System = frame_system::Pallet<Runtime>`
Macro frame_support_procedural::impl_key_prefix_for_tuples
===
```
impl_key_prefix_for_tuples!() { /* proc-macro */ }
```
This macro is meant to be used by frame-support only.
It implements the traits `HasKeyPrefix` and `HasReversibleKeyPrefix` for tuples of `Key`.
Macro frame_support_procedural::match_and_insert
===
```
match_and_insert!() { /* proc-macro */ }
```
Macro that inserts some tokens after the first match of some pattern.
Example:
---
```
match_and_insert!(
target = [{ Some content with { at some point match pattern } other match pattern are ignored }]
pattern = [{ match pattern }] // the match pattern cannot contain any group: `[]`, `()`, `{}`
// can relax this constraint, but will require modifying the match logic in code
tokens = [{ expansion tokens }] // content inside braces can be anything including groups
);
```
will generate:
```
Some content with { at some point match pattern expansion tokens } other match patterns are
ignored
```
Attribute Macro frame_support_procedural::benchmark
===
```
#[benchmark]
```
An attribute macro used to declare a benchmark within a benchmarking module. Must be attached to a function definition containing an `#[extrinsic_call]` or `#[block]`
attribute.
See `frame_benchmarking::v2` for more info.
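For orientation, a hedged sketch of what such a benchmark might look like under the v2 syntax (the pallet, its `do_something` dispatchable, and the `Something` storage item are hypothetical; consult `frame_benchmarking::v2` for the authoritative API):

```
#[benchmarks]
mod benchmarks {
    use super::*;

    #[benchmark]
    fn do_something() {
        // Setup code: runs before the measured portion.
        let caller: T::AccountId = whitelisted_caller();

        // The extrinsic call is the measured portion; `_` calls the
        // dispatchable with the same name as the benchmark function.
        #[extrinsic_call]
        _(RawOrigin::Signed(caller), 42u32);

        // Verification code: runs after the measured portion.
        assert!(Something::<T>::get().is_some());
    }
}
```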
Attribute Macro frame_support_procedural::benchmarks
===
```
#[benchmarks]
```
An attribute macro that can be attached to a (non-empty) module declaration. Doing so will designate that module as a benchmarking module.
See `frame_benchmarking::v2` for more info.
Attribute Macro frame_support_procedural::block
===
```
#[block]
```
An attribute macro used to specify that a block should be the measured portion of the enclosing benchmark function. This attribute is also used as a boundary designating where the benchmark setup code ends, and the benchmark verification code begins.
See `frame_benchmarking::v2` for more info.
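A minimal hedged sketch of `#[block]` for a benchmark that does not reduce to a single extrinsic call (the `do_expensive_work` helper is hypothetical):

```
#[benchmark]
fn heavy_path() {
    // Setup code: not measured.
    let input = vec![0u8; 1024];

    #[block]
    {
        // Only this block is measured.
        Pallet::<T>::do_expensive_work(&input);
    }

    // Verification code may follow the block.
}
```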
Attribute Macro frame_support_procedural::call_index
===
```
#[call_index]
```
Each dispatchable may also be annotated with the `#[pallet::call_index($idx)]` attribute,
which explicitly defines the codec index for the dispatchable function in the `Call` enum.
All call indices start from 0 until a dispatchable function with an explicitly defined call index is encountered. The dispatchable function that lexically follows a function with a defined call index will have that call index incremented by 1; e.g. if there are 3 dispatchable functions `fn foo`, `fn bar` and `fn qux` in that order, and only `fn bar`
has a call index of 10, then `fn qux` will have an index of 11, instead of 2.
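To make the ordering rule concrete, a hedged sketch (weight attributes and real bodies are elided for brevity; the resulting indices are shown in comments):

```
#[pallet::call]
impl<T: Config> Pallet<T> {
    pub fn foo(origin: OriginFor<T>) -> DispatchResult { Ok(()) } // call index 0

    #[pallet::call_index(10)]
    pub fn bar(origin: OriginFor<T>) -> DispatchResult { Ok(()) } // call index 10 (explicit)

    pub fn qux(origin: OriginFor<T>) -> DispatchResult { Ok(()) } // call index 11 (continues from bar)
}
```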
All arguments must implement `Debug`, `PartialEq`, `Eq`, `Decode`, `Encode`, and
`Clone`. For ease of use, these are bound by the trait `frame_support::pallet_prelude::Member`.
If no `#[pallet::call]` exists, then a default implementation corresponding to the following code is automatically generated:
```
#[pallet::call]
impl<T: Config> Pallet<T> {}
```
**WARNING**: modifying dispatchables, changing their order, removing some, etc., must be done with care. Indeed this will change the outer runtime call type (which is an enum with one variant per pallet), this outer runtime call can be stored on-chain (e.g. in
`pallet-scheduler`). Thus migration might be needed. To mitigate against some of this, the
`#[pallet::call_index($idx)]` attribute can be used to fix the order of the dispatchable so that the `Call` enum encoding does not change after modification. As a general rule of thumb, it is therefore advantageous to always add new calls to the end so you can maintain the existing order of calls.
#### Macro expansion
The macro creates an enum `Call` with one variant per dispatchable. This enum implements:
`Clone`, `Eq`, `PartialEq`, `Debug` (with stripped implementation in `not("std")`),
`Encode`, `Decode`, `GetDispatchInfo`, `GetCallName`, `GetCallIndex` and
`UnfilteredDispatchable`.
The macro implements the `Callable` trait on `Pallet` and a function `call_functions`
which returns the dispatchable metadata.
Attribute Macro frame_support_procedural::compact
===
```
#[compact]
```
Compact encoding for arguments can be achieved via `#[pallet::compact]`. The function must return a `DispatchResultWithPostInfo` or `DispatchResult`.
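A hedged sketch of a dispatchable taking a compact-encoded argument (the call name, weight value, and body are placeholders):

```
#[pallet::call]
impl<T: Config> Pallet<T> {
    #[pallet::call_index(0)]
    #[pallet::weight(10_000)]
    pub fn store_value(
        origin: OriginFor<T>,
        #[pallet::compact] value: u128,
    ) -> DispatchResult {
        let _who = ensure_signed(origin)?;
        // `value` is compact-encoded on the wire but an ordinary u128 here.
        Ok(())
    }
}
```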
Attribute Macro frame_support_procedural::composite_enum
===
```
#[composite_enum]
```
The `#[pallet::composite_enum]` attribute allows you to define an enum that gets composed as an aggregate enum by `construct_runtime`. This is similar in principle with `#[pallet::event]` and
`#[pallet::error]`.
The attribute currently only supports enum definitions, and identifiers that are named
`FreezeReason`, `HoldReason`, `LockId` or `SlashReason`. Arbitrary identifiers for the enum are not supported. The aggregate enum generated by `construct_runtime` will have the name of
`RuntimeFreezeReason`, `RuntimeHoldReason`, `RuntimeLockId` and `RuntimeSlashReason`
respectively.
NOTE: The aggregate enum generated by `construct_runtime` generates a conversion function from the pallet enum to the aggregate enum, and automatically derives the following traits:
```
Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Encode, Decode, MaxEncodedLen, TypeInfo,
RuntimeDebug
```
For ease of usage, when no `#[derive]` attributes are found for the enum under
`#[pallet::composite_enum]`, the aforementioned traits are automatically derived for it. The inverse is also true: if there are any `#[derive]` attributes found for the enum, then no traits will automatically be derived for it.
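A hedged sketch of a pallet-local hold reason that `construct_runtime` would fold into the aggregate `RuntimeHoldReason` enum (the variant name is illustrative):

```
#[pallet::composite_enum]
pub enum HoldReason {
    /// Funds are held while a submission is under review.
    SubmissionDeposit,
}
```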
Attribute Macro frame_support_procedural::config
===
```
#[config]
```
The mandatory attribute `#[pallet::config]` defines the configurable options for the pallet.
Item must be defined as:
```
#[pallet::config]
pub trait Config: frame_system::Config + $optionally_some_other_supertraits
$optional_where_clause
{
...
}
```
I.e. a regular trait definition named `Config`, with the supertrait
`frame_system::pallet::Config`, and optionally other supertraits and a where clause.
(Specifying other supertraits here is known as tight coupling)
The associated type `RuntimeEvent` is reserved. If defined, it must have the bounds
`From<Event>` and `IsType<<Self as frame_system::Config>::RuntimeEvent>`.
`pallet::event` must be present if `RuntimeEvent` exists as a config item in your `#[pallet::config]`.
### Optional: `with_default`
An optional `with_default` argument may also be specified. Doing so will automatically generate a `DefaultConfig` trait inside your pallet which is suitable for use with
`#[derive_impl(..)]` to derive a default testing config:
```
#[pallet::config(with_default)]
pub trait Config: frame_system::Config {
type RuntimeEvent: Parameter
+ Member
+ From<Event<Self>>
+ Debug
+ IsType<<Self as frame_system::Config>::RuntimeEvent>;
#[pallet::no_default]
type BaseCallFilter: Contains<Self::RuntimeCall>;
// ...
}
```
As shown above, you may also attach the `#[pallet::no_default]`
attribute to specify that a particular trait item *cannot* be used as a default when a test
`Config` is derived using the `#[derive_impl(..)]` attribute macro.
This will cause that particular trait item to simply not appear in default testing configs based on this config (the trait item will not be included in `DefaultConfig`).
#### `DefaultConfig` Caveats
The auto-generated `DefaultConfig` trait:
* is always a *subset* of your pallet’s `Config` trait.
* can only contain items that don’t rely on externalities, such as `frame_system::Config`.
Trait items that *do* rely on externalities should be marked with
`#[pallet::no_default]`
Consequently:
* Any items that rely on externalities *must* be marked with
`#[pallet::no_default]` or your trait will fail to compile when used with `derive_impl`.
* Items marked with `#[pallet::no_default]` are entirely excluded from the
`DefaultConfig` trait, and therefore any impl of `DefaultConfig` doesn’t need to implement such items.
For more information, see `derive_impl`.
Attribute Macro frame_support_procedural::constant
===
```
#[constant]
```
The `#[pallet::constant]` attribute can be used to add an associated type trait bounded by `Get`
from `pallet::config` into metadata, e.g.:
```
#[pallet::config]
pub trait Config: frame_system::Config {
#[pallet::constant]
type Foo: Get<u32>;
}
```
Attribute Macro frame_support_procedural::derive_impl
===
```
#[derive_impl]
```
This attribute can be used to derive a full implementation of a trait based on a local partial impl and an external impl containing defaults that can be overridden in the local impl.
For a full end-to-end example, see below.
Usage
---
The attribute should be attached to an impl block (strictly speaking a `syn::ItemImpl`) for which we want to inject defaults in the event of missing trait items in the block.
The attribute minimally takes a single `default_impl_path` argument, which should be the module path to an impl registered via `#[register_default_impl]` that contains the default trait items we want to potentially inject, with the general form:
```
#[derive_impl(default_impl_path)]
impl SomeTrait for SomeStruct {
...
}
```
Optionally, a `disambiguation_path` can be specified as follows by providing `as path::here`
after the `default_impl_path`:
```
#[derive_impl(default_impl_path as disambiguation_path)]
impl SomeTrait for SomeStruct {
...
}
```
The `disambiguation_path`, if specified, should be the path to a trait that will be used to qualify all default entries that are injected into the local impl. For example if your
`default_impl_path` is `some::path::TestTraitImpl` and your `disambiguation_path` is
`another::path::DefaultTrait`, any items injected into the local impl will be qualified as
`<some::path::TestTraitImpl as another::path::DefaultTrait>::specific_trait_item`.
If you omit the `as disambiguation_path` portion, the `disambiguation_path` will internally default to `A` from the `impl A for B` part of the default impl. This is useful for scenarios where all of the relevant types are already in scope via `use` statements.
Conversely, the `default_impl_path` argument is required and cannot be omitted.
Optionally, `no_aggregated_types` can be specified as follows:
```
#[derive_impl(default_impl_path as disambiguation_path, no_aggregated_types)]
impl SomeTrait for SomeStruct {
...
}
```
If specified, this indicates that the aggregated types (as denoted by impl items attached with [`#[inject_runtime_type]`]) should not be injected with the respective concrete types. By default, all such types are injected.
You can also make use of `#[pallet::no_default]` on specific items in your default impl that you want to ensure will not be copied over but that you nonetheless want to use locally in the context of the foreign impl and the pallet (or context) in which it is defined.
### Use-Case Example: Auto-Derive Test Pallet Config Traits
The `#[derive_impl(..)]` attribute can be used to derive a test pallet `Config` based on an existing pallet `Config` that has been marked with
`#[pallet::config(with_default)]` (which under the hood, generates a
`DefaultConfig` trait in the pallet in which the macro was invoked).
In this case, the `#[derive_impl(..)]` attribute should be attached to an `impl` block that implements a compatible `Config` such as `frame_system::Config` for a test/mock runtime, and should receive as its first argument the path to a `DefaultConfig` impl that has been registered via `#[register_default_impl]`, and as its second argument, the path to the auto-generated `DefaultConfig` for the existing pallet `Config` we want to base our test config off of.
The following is what the `basic` example pallet would look like with a default testing config:
```
#[derive_impl(frame_system::config_preludes::TestDefaultConfig as frame_system::pallet::DefaultConfig)]
impl frame_system::Config for Test {
// These are all defined by system as mandatory.
type BaseCallFilter = frame_support::traits::Everything;
type RuntimeEvent = RuntimeEvent;
type RuntimeCall = RuntimeCall;
type RuntimeOrigin = RuntimeOrigin;
type OnSetCode = ();
type PalletInfo = PalletInfo;
type Block = Block;
// We decide to override this one.
type AccountData = pallet_balances::AccountData<u64>;
}
```
where `TestDefaultConfig` was defined and registered as follows:
```
pub struct TestDefaultConfig;
#[register_default_impl(TestDefaultConfig)]
impl DefaultConfig for TestDefaultConfig {
type Version = ();
type BlockWeights = ();
type BlockLength = ();
type DbWeight = ();
type Nonce = u64;
type BlockNumber = u64;
type Hash = sp_core::hash::H256;
type Hashing = sp_runtime::traits::BlakeTwo256;
type AccountId = AccountId;
type Lookup = IdentityLookup<AccountId>;
type BlockHashCount = frame_support::traits::ConstU64<10>;
type AccountData = u32;
type OnNewAccount = ();
type OnKilledAccount = ();
type SystemWeightInfo = ();
type SS58Prefix = ();
type MaxConsumers = frame_support::traits::ConstU32<16>;
}
```
The above call to `derive_impl` would expand to roughly the following:
```
impl frame_system::Config for Test {
use frame_system::config_preludes::TestDefaultConfig;
use frame_system::pallet::DefaultConfig;
type BaseCallFilter = frame_support::traits::Everything;
type RuntimeEvent = RuntimeEvent;
type RuntimeCall = RuntimeCall;
type RuntimeOrigin = RuntimeOrigin;
type OnSetCode = ();
type PalletInfo = PalletInfo;
type Block = Block;
type AccountData = pallet_balances::AccountData<u64>;
type Version = <TestDefaultConfig as DefaultConfig>::Version;
type BlockWeights = <TestDefaultConfig as DefaultConfig>::BlockWeights;
type BlockLength = <TestDefaultConfig as DefaultConfig>::BlockLength;
type DbWeight = <TestDefaultConfig as DefaultConfig>::DbWeight;
type Nonce = <TestDefaultConfig as DefaultConfig>::Nonce;
type BlockNumber = <TestDefaultConfig as DefaultConfig>::BlockNumber;
type Hash = <TestDefaultConfig as DefaultConfig>::Hash;
type Hashing = <TestDefaultConfig as DefaultConfig>::Hashing;
type AccountId = <TestDefaultConfig as DefaultConfig>::AccountId;
type Lookup = <TestDefaultConfig as DefaultConfig>::Lookup;
type BlockHashCount = <TestDefaultConfig as DefaultConfig>::BlockHashCount;
type OnNewAccount = <TestDefaultConfig as DefaultConfig>::OnNewAccount;
type OnKilledAccount = <TestDefaultConfig as DefaultConfig>::OnKilledAccount;
type SystemWeightInfo = <TestDefaultConfig as DefaultConfig>::SystemWeightInfo;
type SS58Prefix = <TestDefaultConfig as DefaultConfig>::SS58Prefix;
type MaxConsumers = <TestDefaultConfig as DefaultConfig>::MaxConsumers;
}
```
You can then use the resulting `Test` config in test scenarios.
Note that items that are *not* present in our local impl are automatically copied from the foreign impl (in this case the `DefaultConfig` impl for `TestDefaultConfig`) into the local trait impl (in this case the `Config` impl for `Test`), unless the trait item is marked with
`#[pallet::no_default]`, in which case no default is supplied for it and it must be specified in the local impl; relying on a default for such an item results in a compiler error.
See `frame/examples/default-config/tests.rs` for a runnable end-to-end example pallet that makes use of `derive_impl` to derive its testing config.
More information and caveats about the auto-generated `DefaultConfig` trait can be found in the documentation for `#[pallet::config(with_default)]`.
### Optional Conventions
Note that as an optional convention, we encourage creating a `config_preludes` module inside of your pallet. This is the convention we follow for `frame_system`’s `TestDefaultConfig` which, as shown above, is located at `frame_system::config_preludes::TestDefaultConfig`. This is just a suggested convention – there is nothing in the code that expects modules with these names to be in place, so there is no imperative to follow this pattern unless desired.
In `config_preludes`, you can place types named like:
* `TestDefaultConfig`
* `ParachainDefaultConfig`
* `SolochainDefaultConfig`
These names signify the context in which each default config is intended to be used.
Advanced Usage
---
### Expansion
The `#[derive_impl(default_impl_path as disambiguation_path)]` attribute will expand to the local impl, with any extra items from the foreign impl that aren’t present in the local impl also included. In the case of a colliding trait item, the version of the item that exists in the local impl will be retained. All imported items are qualified by the `disambiguation_path`, as discussed above.
### Handling of Unnamed Trait Items
Items that lack a `syn::Ident` for whatever reason are first checked to see if they exist,
verbatim, in the local/destination trait before they are copied over, so you should not need to worry about collisions between identical unnamed items.
Attribute Macro frame_support_procedural::disable_frame_system_supertrait_check
===
```
#[disable_frame_system_supertrait_check]
```
To bypass the `frame_system::Config` supertrait check, use the attribute
`pallet::disable_frame_system_supertrait_check`, e.g.:
```
#[pallet::config]
#[pallet::disable_frame_system_supertrait_check]
pub trait Config: pallet_timestamp::Config {}
```
NOTE: Bypassing the `frame_system::Config` supertrait check is typically desirable when you want to write an alternative to the `frame_system` pallet.
Attribute Macro frame_support_procedural::error
===
```
#[error]
```
The `#[pallet::error]` attribute allows you to define an error enum that will be returned from the dispatchable when an error occurs. The information for this error type is then stored in metadata.
Item must be defined as:
```
#[pallet::error]
pub enum Error<T> {
/// $some_optional_doc
$SomeFieldLessVariant,
/// $some_more_optional_doc
$SomeVariantWithOneField(FieldType),
...
}
```
I.e. a regular enum named `Error`, with generic `T` and fieldless or multiple-field variants.
Any field type in the enum variants must implement `TypeInfo` in order to be properly used in the metadata, and its encoded size should be as small as possible, preferably 1 byte in size in order to reduce storage size. The error enum itself has an absolute maximum encoded size specified by `MAX_MODULE_ERROR_ENCODED_SIZE`.
(1 byte can still be 256 different errors. The more specific the error, the easier it is to diagnose problems and give a better experience to the user. Don’t skimp on having lots of individual error conditions.)
Field types in enum variants must also implement `PalletError`, otherwise the pallet will fail to compile. Rust primitive types have already implemented the `PalletError` trait along with some commonly used stdlib types such as `Option` and `PhantomData`, and hence in most use cases, a manual implementation is not necessary and is discouraged.
The generic `T` must not be bounded and a `where` clause is not allowed. That said,
bounds and/or a `where` clause should not be needed for any use case.
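As a quick illustration (the variant names are hypothetical, not part of any real pallet), an error enum and the typical way it is raised from a dispatchable look like this:
```
#[pallet::error]
pub enum Error<T> {
	/// The supplied value exceeds the allowed maximum.
	ValueTooLarge,
	/// No entry exists for the given account.
	UnknownAccount,
}

// Inside a `#[pallet::call]` dispatchable body, an error is usually raised via:
// ensure!(value <= 100, Error::<T>::ValueTooLarge);
// or returned explicitly:
// return Err(Error::<T>::UnknownAccount.into());
```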
### Macro expansion
The macro implements the `Debug` trait, as well as the functions `as_u8` (based on the variant position) and
`as_str` (based on the variant documentation).
The macro also implements `From<Error<T>>` for `&'static str` and `From<Error<T>>` for
`DispatchError`.
Attribute Macro frame_support_procedural::event
===
```
#[event]
```
The `#[pallet::event]` attribute allows you to define pallet events. Pallet events are stored under the `system` / `events` key when the block is applied (and then replaced when the next block writes its events).
The Event enum must be defined as follows:
```
#[pallet::event]
#[pallet::generate_deposit($visibility fn deposit_event)] // Optional
pub enum Event<$some_generic> $optional_where_clause {
/// Some doc
$SomeName($SomeType, $YetanotherType, ...),
...
}
```
I.e. an enum (with named or unnamed field variants), named `Event`, with generics: none, `T`, or
`T: Config`, and an optional where clause.
Each field must implement `Clone`, `Eq`, `PartialEq`, `Encode`, `Decode`, and
`Debug` (on std only). For ease of use, bound by the trait `Member`, available in
`frame_support::pallet_prelude`.
Attribute Macro frame_support_procedural::extra_constants
===
```
#[extra_constants]
```
Allows you to define some extra constants to be added into constant metadata.
Item must be defined as:
```
#[pallet::extra_constants]
impl<T: Config> Pallet<T> where $optional_where_clause {
/// $some_doc
$vis fn $fn_name() -> $some_return_type {
...
}
...
}
```
I.e. a regular rust `impl` block with some optional where clause and functions with 0 args,
0 generics, and some return type.
### Macro expansion
The macro adds some extra constants to the pallet constant metadata.
Attribute Macro frame_support_procedural::extrinsic_call
===
```
#[extrinsic_call]
```
An attribute macro used to specify the extrinsic call inside a benchmark function, and also used as a boundary designating where the benchmark setup code ends, and the benchmark verification code begins.
See `frame_benchmarking::v2` for more info.
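For orientation only, a benchmark written in the v2 syntax typically looks like the sketch below; the pallet call `do_something`, its arguments, and the `Something` storage item are hypothetical stand-ins:
```
use frame_benchmarking::v2::*;

#[benchmarks]
mod benches {
	use super::*;

	#[benchmark]
	fn do_something() {
		// Everything before the extrinsic call is setup code.
		let caller: T::AccountId = frame_benchmarking::whitelisted_caller();

		#[extrinsic_call]
		do_something(frame_system::RawOrigin::Signed(caller), 42u32);

		// Everything after the extrinsic call is verification code.
		assert!(Something::<T>::get().is_some());
	}
}
```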
Attribute Macro frame_support_procedural::generate_deposit
===
```
#[generate_deposit]
```
The attribute `#[pallet::generate_deposit($visibility fn deposit_event)]` generates a helper function on `Pallet` that handles deposit events.
NOTE: For instantiable pallets, the event must be generic over `T` and `I`.
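A minimal sketch of the usual pattern (the event name and fields are illustrative, not prescribed by the macro):
```
#[pallet::event]
#[pallet::generate_deposit(pub(super) fn deposit_event)]
pub enum Event<T: Config> {
	/// Something was stored by `who`.
	SomethingStored { value: u32, who: T::AccountId },
}

// Later, inside a dispatchable:
// Self::deposit_event(Event::SomethingStored { value, who });
```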
### Macro expansion
The macro will add on enum `Event` the attributes:
* `#[derive(frame_support::CloneNoBound)]`
* `#[derive(frame_support::EqNoBound)]`
* `#[derive(frame_support::PartialEqNoBound)]`
* `#[derive(frame_support::RuntimeDebugNoBound)]`
* `#[derive(codec::Encode)]`
* `#[derive(codec::Decode)]`
The macro implements `From<Event<..>>` for ().
The macro implements a metadata function on `Event` returning the `EventMetadata`.
If `#[pallet::generate_deposit]` is present then the macro implements `fn deposit_event` on
`Pallet`.
Attribute Macro frame_support_procedural::generate_store
===
```
#[generate_store]
```
To generate a `Store` trait associating all storages, annotate your `Pallet` struct with the attribute `#[pallet::generate_store($vis trait Store)]`, e.g.:
```
#[pallet::pallet]
#[pallet::generate_store(pub(super) trait Store)]
pub struct Pallet<T>(_);
```
More precisely, the `Store` trait contains an associated type for each storage. It is implemented for `Pallet`, allowing access to the storage from the pallet struct.
Thus when defining a storage named `Foo`, it can later be accessed from `Pallet` using
`<Pallet as Store>::Foo`.
NOTE: this attribute is only valid when applied *directly* to your `Pallet` struct definition.
Attribute Macro frame_support_procedural::genesis_build
===
```
#[genesis_build]
```
**Rust-Analyzer users**: See the documentation of the Rust item in
`frame_support::pallet_macros::genesis_build`.
Attribute Macro frame_support_procedural::genesis_config
===
```
#[genesis_config]
```
**Rust-Analyzer users**: See the documentation of the Rust item in
`frame_support::pallet_macros::genesis_config`.
Attribute Macro frame_support_procedural::getter
===
```
#[getter]
```
The optional attribute `#[pallet::getter(fn $my_getter_fn_name)]` allows you to define a getter function on `Pallet`.
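For example, assuming a storage item named `Something` (purely illustrative), the getter becomes callable as `Pallet::<T>::something()`:
```
#[pallet::storage]
#[pallet::getter(fn something)]
pub type Something<T> = StorageValue<_, u32, OptionQuery>;

// Elsewhere in the pallet:
// let current: Option<u32> = Pallet::<T>::something();
```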
Also see `pallet::storage`
Attribute Macro frame_support_procedural::hooks
===
```
#[hooks]
```
The `#[pallet::hooks]` attribute allows you to specify a `Hooks` implementation for
`Pallet` that specifies pallet-specific logic.
The item the attribute attaches to must be defined as follows:
```
#[pallet::hooks]
impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> $optional_where_clause {
...
}
```
I.e. a regular trait implementation with generic bound: `T: Config`, for the trait
`Hooks<BlockNumberFor<T>>` (they are defined in preludes), for the type `Pallet<T>` and with an optional where clause.
If no `#[pallet::hooks]` exists, then the following default implementation is automatically generated:
```
#[pallet::hooks]
impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> {}
```
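A non-empty implementation only needs to override the hooks it cares about; the sketch below (illustrative, not from any real pallet) overrides two of them:
```
#[pallet::hooks]
impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> {
	fn on_initialize(_n: BlockNumberFor<T>) -> Weight {
		// Per-block setup logic; return the weight it consumed.
		Weight::zero()
	}

	fn on_finalize(_n: BlockNumberFor<T>) {
		// Per-block cleanup logic.
	}
}
```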
### Macro expansion
The macro implements the traits `OnInitialize`, `OnIdle`, `OnFinalize`, `OnRuntimeUpgrade`,
`OffchainWorker`, and `IntegrityTest` using the provided `Hooks` implementation.
NOTE: `OnRuntimeUpgrade` is implemented with `Hooks::on_runtime_upgrade` and some additional logic. E.g. logic to write the pallet version into storage.
NOTE: The macro also adds some tracing logic when implementing the above traits. The following hooks emit traces: `on_initialize`, `on_finalize` and `on_runtime_upgrade`.
Attribute Macro frame_support_procedural::import_section
===
```
#[import_section]
```
An attribute macro that can be attached to a module declaration. Doing so imports the contents of the specified external pallet section that was defined previously using `#[pallet_section]`.
### Example
```
#[import_section(some_section)]
#[pallet]
pub mod pallet {
// ...
}
```
where `some_section` was defined elsewhere via:
```
#[pallet_section]
pub mod some_section {
// ...
}
```
This will result in the contents of `some_section` being *verbatim* imported into the pallet above. Note that since the tokens for `some_section` are essentially copy-pasted into the target pallet, you cannot refer to imports that don’t also exist in the target pallet, but this is easily resolved by including all relevant
`use` statements within your pallet section, so they are imported as well, or by otherwise ensuring that you have the same imports on the target pallet.
It is perfectly permissible to import multiple pallet sections into the same pallet,
which can be done by having multiple `#[import_section(something)]` attributes attached to the pallet.
Note that sections are imported by their module name/ident, and should be referred to by their *full path* from the perspective of the target pallet.
Attribute Macro frame_support_procedural::pallet_section
===
```
#[pallet_section]
```
Can be attached to a module. Doing so will declare that module as importable into a pallet via `#[import_section]`.
Note that sections are imported by their module name/ident, and should be referred to by their *full path* from the perspective of the target pallet. Do not attempt to make use of `use` statements to bring pallet sections into scope, as this will not work (unless you do so as part of a wildcard import, in which case it will work).
### Naming Logistics
Also note that because of how `#[pallet_section]` works, pallet section names must be globally unique *within the crate in which they are defined*. For more information on why this must be the case, see macro_magic’s
`#[export_tokens]` macro.
Optionally, you may provide an argument to `#[pallet_section]` such as
`#[pallet_section(some_ident)]`, in the event that there is another pallet section in same crate with the same ident/name. The ident you specify can then be used instead of the module’s ident name when you go to import it via `#[import_section]`.
Attribute Macro frame_support_procedural::inherent
===
```
#[inherent]
```
The `#[pallet::inherent]` attribute allows the pallet to provide some inherent.
An inherent is some piece of data that is inserted by a block authoring node at block creation time and can either be accepted or rejected by validators based on whether the data falls within an acceptable range.
The most common inherent is the `timestamp` that is inserted into every block. Since there is no way to validate timestamps, validators simply check that the timestamp reported by the block authoring node falls within an acceptable range.
Item must be defined as:
```
#[pallet::inherent]
impl<T: Config> ProvideInherent for Pallet<T> {
// ... regular trait implementation
}
```
I.e. a trait implementation with bound `T: Config`, of trait `ProvideInherent` for type
`Pallet<T>`, and some optional where clause.
### Macro expansion
The macro currently makes no use of this information, but it might use this information in the future to give information directly to `construct_runtime`.
Attribute Macro frame_support_procedural::instance_benchmarks
===
```
#[instance_benchmarks]
```
An attribute macro that can be attached to a (non-empty) module declaration. Doing so will designate that module as an instance benchmarking module.
See `frame_benchmarking::v2` for more info.
Attribute Macro frame_support_procedural::no_default
===
```
#[no_default]
```
The optional attribute `#[pallet::no_default]` can be attached to trait items within a
`Config` trait impl that has `#[pallet::config(with_default)]` attached.
Attaching this attribute to a trait item ensures that that trait item will not be used as a default with the `#[derive_impl(..)]` attribute macro.
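A small hypothetical sketch of how this looks in a `Config` trait (the item names are invented for illustration):
```
#[pallet::config(with_default)]
pub trait Config: frame_system::Config {
	/// Excluded from the generated `DefaultConfig`; every impl using
	/// `#[derive_impl(..)]` must still specify this item explicitly.
	#[pallet::no_default]
	type SomeLocalOnlyType: Get<u32>;

	/// Included in the generated `DefaultConfig` as usual.
	type SomeOtherType: Get<u32>;
}
```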
Attribute Macro frame_support_procedural::no_default_bounds
===
```
#[no_default_bounds]
```
The optional attribute `#[pallet::no_default_bounds]` can be attached to trait items within a
`Config` trait impl that has `#[pallet::config(with_default)]` attached.
Attaching this attribute to a trait item ensures that the generated trait `DefaultConfig`
will not have any bounds for this trait item.
As an example, if you have a trait item `type AccountId: SomeTrait;` in your `Config` trait,
the generated `DefaultConfig` will only have `type AccountId;` with no trait bound.
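Restating that example as code (with `SomeTrait` standing in for any bound), including a simplified view of what the generated trait would contain:
```
#[pallet::config(with_default)]
pub trait Config: frame_system::Config {
	#[pallet::no_default_bounds]
	type AccountId: SomeTrait;
}

// Generated `DefaultConfig` (simplified):
// pub trait DefaultConfig {
//     type AccountId; // the `SomeTrait` bound is dropped
// }
```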
Attribute Macro frame_support_procedural::origin
===
```
#[origin]
```
The `#[pallet::origin]` attribute allows you to define some origin for the pallet.
Item must be either a type alias, an enum, or a struct. It needs to be public.
E.g.:
```
#[pallet::origin]
pub struct Origin<T>(PhantomData<(T)>);
```
**WARNING**: modifying origin changes the outer runtime origin. This outer runtime origin can be stored on-chain (e.g. in `pallet-scheduler`), thus any change must be done with care as it might require some migration.
NOTE: for instantiable pallets, the origin must be generic over `T` and `I`.
Attribute Macro frame_support_procedural::pallet
===
```
#[pallet]
```
The pallet struct placeholder `#[pallet::pallet]` is mandatory and allows you to specify pallet information.
The struct must be defined as follows:
```
#[pallet::pallet]
pub struct Pallet<T>(_);
```
I.e. a regular struct definition named `Pallet`, with generic T and no where clause.
### Macro expansion:
The macro adds this attribute to the struct definition:
```
#[derive(
frame_support::CloneNoBound,
frame_support::EqNoBound,
frame_support::PartialEqNoBound,
frame_support::RuntimeDebugNoBound,
)]
```
and replaces the type `_` with `PhantomData<T>`. It also implements on the pallet:
* `GetStorageVersion`
* `OnGenesis`: contains some logic to write the pallet version into storage.
* `PalletErrorTypeInfo`: provides the type information for the pallet error, if defined.
It declares the type alias `type Module` for `Pallet`, which is used by `construct_runtime`.
It implements `PalletInfoAccess` on `Pallet` to ease access to pallet information given by
`frame_support::traits::PalletInfo`. (The implementation uses the associated type
`frame_system::Config::PalletInfo`).
It implements `StorageInfoTrait` on `Pallet`, which gives information about all storages.
If the attribute `generate_store` is set then the macro creates the trait `Store` and implements it on `Pallet`.
If the attribute `set_storage_max_encoded_len` is set then the macro calls
`StorageInfoTrait` for each storage in the implementation of `StorageInfoTrait` for the pallet. Otherwise it implements `StorageInfoTrait` for the pallet using the
`PartialStorageInfoTrait` implementation of storages.
### Dev Mode (`#[pallet(dev_mode)]`)
Specifying the argument `dev_mode` will allow you to enable dev mode for a pallet. The aim of dev mode is to loosen some of the restrictions and requirements placed on production pallets for easy tinkering and development. Dev mode pallets should not be used in production. Enabling dev mode has the following effects:
* Weights no longer need to be specified on every `#[pallet::call]` declaration. By default, dev mode pallets will assume a weight of zero (`0`) if a weight is not specified. This is equivalent to specifying `#[weight(0)]` on all calls that do not specify a weight.
* Call indices no longer need to be specified on every `#[pallet::call]` declaration. By default, dev mode pallets will assume a call index based on the order of the call.
* All storages are marked as unbounded, meaning you do not need to implement `MaxEncodedLen` on storage types. This is equivalent to specifying `#[pallet::unbounded]` on all storage type definitions.
* Storage hashers no longer need to be specified and can be replaced by `_`. In dev mode, these will be replaced by `Blake2_128Concat`. In case of explicit key-binding, `Hasher` can simply be ignored when in `dev_mode`.
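Putting the above together, a dev-mode pallet skeleton might look like the following sketch (names and logic are illustrative only):
```
#[frame_support::pallet(dev_mode)]
pub mod pallet {
	use frame_support::pallet_prelude::*;
	use frame_system::pallet_prelude::*;

	#[pallet::pallet]
	pub struct Pallet<T>(_);

	#[pallet::config]
	pub trait Config: frame_system::Config {}

	// Storages are implicitly treated as unbounded in dev mode.
	#[pallet::storage]
	pub type Value<T> = StorageValue<_, u32>;

	#[pallet::call]
	impl<T: Config> Pallet<T> {
		// No `#[pallet::weight(..)]` or `#[pallet::call_index(..)]` needed here.
		pub fn store(origin: OriginFor<T>, value: u32) -> DispatchResult {
			frame_system::ensure_signed(origin)?;
			Value::<T>::put(value);
			Ok(())
		}
	}
}
```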
Note that the `dev_mode` argument can only be supplied to the `#[pallet]` or
`#[frame_support::pallet]` attribute macro that encloses your pallet module. This argument cannot be specified anywhere else, including but not limited to the `#[pallet::pallet]`
attribute macro.
```
**WARNING**:
You should not deploy or use dev mode pallets in production. Doing so can break your chain and therefore should never be done. Once you are done tinkering, you should remove the
'dev_mode' argument from your #[pallet] declaration and fix any compile errors before attempting to use your pallet in a production scenario.
```
See `frame_support::pallet` docs for more info.
### Runtime Metadata Documentation
The documentation added to this pallet is included in the runtime metadata.
The documentation can be defined in the following ways:
```
#[pallet::pallet]
/// Documentation for pallet 1
#[doc = "Documentation for pallet 2"]
#[doc = include_str!("../README.md")]
#[pallet_doc("../doc1.md")]
#[pallet_doc("../doc2.md")]
pub mod pallet {}
```
The runtime metadata for this pallet contains the following
* “Documentation for pallet 1” (captured from `///`)
* “Documentation for pallet 2” (captured from `#[doc]`)
* content of ../README.md (captured from `#[doc]` with `include_str!`)
* content of “../doc1.md” (captured from `pallet_doc`)
* content of “../doc2.md” (captured from `pallet_doc`)
#### `doc` attribute
The value of the `doc` attribute is included in the runtime metadata, as well as expanded on the pallet module. The previous example is expanded to:
```
/// Documentation for pallet 1
/// Documentation for pallet 2
/// Content of README.md
pub mod pallet {}
```
If you want to specify the file from which the documentation is loaded, you can use the
`include_str` macro. However, if you only want the documentation to be included in the runtime metadata, use the `pallet_doc` attribute.
#### `pallet_doc` attribute
Unlike the `doc` attribute, the documentation provided to the `pallet_doc` attribute is not inserted on the module.
The `pallet_doc` attribute can only be provided with one argument,
which is the file path that holds the documentation to be added to the metadata.
This approach is beneficial when you use the `include_str` macro at the beginning of the file and want that documentation to extend to the runtime metadata, without reiterating the documentation on the pallet module itself.
Attribute Macro frame_support_procedural::register_default_impl
===
```
#[register_default_impl]
```
Attach this attribute to an impl statement that you want to use with
`#[derive_impl(..)]`.
You must also provide an identifier/name as the attribute’s argument. This is the name you must provide to `#[derive_impl(..)]` when you import this impl via the `default_impl_path` argument. This name should be unique at the crate-level.
### Example
```
pub struct ExampleTestDefaultConfig;
#[register_default_impl(ExampleTestDefaultConfig)]
impl DefaultConfig for ExampleTestDefaultConfig {
type Version = ();
type BlockWeights = ();
type BlockLength = ();
...
type SS58Prefix = ();
type MaxConsumers = frame_support::traits::ConstU32<16>;
}
```
### Advanced Usage
This macro acts as a thin wrapper around macro_magic’s `#[export_tokens]`. See the macro_magic documentation for more info.
There are some caveats when applying a `use` statement to bring a
`#[register_default_impl]` item into scope. If you have a `#[register_default_impl]`
defined in `my_crate::submodule::MyItem`, it is currently not sufficient to do something like:
```
use my_crate::submodule::MyItem;
#[derive_impl(MyItem as Whatever)]
```
This will fail with a mysterious message about `__export_tokens_tt_my_item` not being defined.
You can, however, do any of the following:
```
// partial path works
use my_crate::submodule;
#[derive_impl(submodule::MyItem as Whatever)]
```
```
// full path works
#[derive_impl(my_crate::submodule::MyItem as Whatever)]
```
```
// wild-cards work
use my_crate::submodule::*;
#[derive_impl(MyItem as Whatever)]
```
Attribute Macro frame_support_procedural::storage
===
```
#[storage]
```
The `#[pallet::storage]` attribute lets you define some abstract storage inside of runtime storage and also set its metadata. This attribute can be used multiple times.
Item should be defined as:
```
#[pallet::storage]
#[pallet::getter(fn $getter_name)] // optional
$vis type $StorageName<$some_generic> $optional_where_clause
= $StorageType<$generic_name = $some_generics, $other_name = $some_other, ...>;
```
or with unnamed generic:
```
#[pallet::storage]
#[pallet::getter(fn $getter_name)] // optional
$vis type $StorageName<$some_generic> $optional_where_clause
= $StorageType<_, $some_generics, ...>;
```
I.e. it must be a type alias, with generics: `T` or `T: Config`. The aliased type must be one of `StorageValue`, `StorageMap`, `CountedStorageMap`, or `StorageDoubleMap`. The generic arguments of the storage type can be given in two manners: named and unnamed. For named generic arguments,
the name for each argument should match the name defined for it on the storage struct:
* `StorageValue` expects `Value` and optionally `QueryKind` and `OnEmpty`,
* `StorageMap` expects `Hasher`, `Key`, `Value` and optionally `QueryKind` and `OnEmpty`,
* `CountedStorageMap` expects `Hasher`, `Key`, `Value` and optionally `QueryKind` and `OnEmpty`,
* `StorageDoubleMap` expects `Hasher1`, `Key1`, `Hasher2`, `Key2`, `Value` and optionally
`QueryKind` and `OnEmpty`.
For unnamed generic arguments: the first generic must be `_`, as it is replaced by the macro, and the other generics must be declared as normal generic type declarations.
The `Prefix` generic written by the macro is generated using
`PalletInfo::name::<Pallet<..>>()` and the name of the storage type. E.g. if the runtime names the pallet “MyExample” then the storage `type Foo<T> = ...` should use the prefix:
`Twox128(b"MyExample") ++ Twox128(b"Foo")`.
For the `CountedStorageMap` variant, the `Prefix` also implements
`CountedStorageMapInstance`. It also associates a `CounterPrefix`, which is implemented the same as above, but the storage prefix is prepended with `"CounterFor"`. E.g. if the runtime names the pallet “MyExample” then the storage `type Foo<T> = CountedStorageMap<...>` will store its counter at the prefix: `Twox128(b"MyExample") ++ Twox128(b"CounterForFoo")`.
E.g:
```
#[pallet::storage]
#[pallet::storage_prefix = "OtherName"]
pub(super) type MyStorage<T> = StorageMap<Hasher = Blake2_128Concat, Key = u32, Value = u32>;
```
In this case the final prefix used by the map is `Twox128(b"MyExample") ++ Twox128(b"OtherName")`.
Attribute Macro frame_support_procedural::storage_prefix
===
```
#[storage_prefix]
```
The optional attribute `#[pallet::storage_prefix = "SomeName"]` allows you to define the storage prefix to use. This is helpful if you wish to rename the storage field but don’t want to perform a migration.
E.g:
```
#[pallet::storage]
#[pallet::storage_prefix = "foo"]
#[pallet::getter(fn my_storage)]
pub(super) type MyStorage<T> = StorageMap<Hasher = Blake2_128Concat, Key = u32, Value = u32>;
```
or
```
#[pallet::storage]
#[pallet::getter(fn my_storage)]
pub(super) type MyStorage<T> = StorageMap<_, Blake2_128Concat, u32, u32>;
```
Attribute Macro frame_support_procedural::storage_version
===
```
#[storage_version]
```
Because the `pallet::pallet` macro implements `GetStorageVersion`, the current storage version needs to be communicated to the macro. This can be done by using the
`pallet::storage_version` attribute:
```
const STORAGE_VERSION: StorageVersion = StorageVersion::new(5);
#[pallet::pallet]
#[pallet::storage_version(STORAGE_VERSION)]
pub struct Pallet<T>(_);
```
If not present, the current storage version is set to the default value.
Attribute Macro frame_support_procedural::transactional
===
```
#[transactional]
```
Execute the annotated function in a new storage transaction.
The return type of the annotated function must be `Result`. All changes to storage performed by the annotated function are discarded if it returns `Err`, or committed if `Ok`.
Example
---
```
#[transactional]
fn value_commits(v: u32) -> result::Result<u32, &'static str> {
Value::set(v);
Ok(v)
}
#[transactional]
fn value_rollbacks(v: u32) -> result::Result<u32, &'static str> {
Value::set(v);
Err("nah")
}
```
Attribute Macro frame_support_procedural::type_value
===
```
#[type_value]
```
The `#[pallet::type_value]` attribute lets you define a struct implementing the `Get` trait to ease the use of storage types. This attribute is meant to be used alongside
`#[pallet::storage]` to define a storage’s default value. This attribute can be used multiple times.
Item must be defined as:
```
#[pallet::type_value]
fn $MyDefaultName<$some_generic>() -> $default_type $optional_where_clause { $expr }
```
I.e.: a function definition with generics none or `T: Config` and a returned type.
E.g.:
```
#[pallet::type_value]
fn MyDefault<T: Config>() -> T::Balance { 3.into() }
```
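For instance, the generated `Get` implementor can be plugged into a storage item’s `OnEmpty`/default position (the names below are illustrative):
```
#[pallet::type_value]
fn DefaultInterest<T: Config>() -> T::Balance { 3.into() }

#[pallet::storage]
pub(super) type Interest<T: Config> =
	StorageValue<_, T::Balance, ValueQuery, DefaultInterest<T>>;
```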
### Macro expansion
The macro renames the function to some internal name, generates a struct with the original name of the function and its generic, and implements `Get<$ReturnType>` by calling the user defined function.
Attribute Macro frame_support_procedural::unbounded
===
```
#[unbounded]
```
The optional attribute `#[pallet::unbounded]` declares the storage as unbounded. When implementing the storage info (when `#[pallet::generate_storage_info]` is specified on the pallet struct placeholder), the size of the storage will be declared as unbounded. This can be useful for storage which can never go into PoV (Proof of Validity).
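For example, a storage item whose value type has no `MaxEncodedLen` implementation (illustrative names):
```
#[pallet::storage]
#[pallet::unbounded]
pub(super) type LargeBlob<T: Config> = StorageValue<_, Vec<u8>>;
```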
Attribute Macro frame_support_procedural::validate_unsigned
===
```
#[validate_unsigned]
```
The `#[pallet::validate_unsigned]` attribute allows the pallet to validate some unsigned transaction:
Item must be defined as:
```
#[pallet::validate_unsigned]
impl<T: Config> ValidateUnsigned for Pallet<T> {
// ... regular trait implementation
}
```
I.e. a trait implementation with bound `T: Config`, of trait `ValidateUnsigned` for type
`Pallet<T>`, and some optional where clause.
NOTE: There is also the `sp_runtime::traits::SignedExtension` trait that can be used to add some specific logic for transaction validation.
### Macro expansion
The macro currently makes no use of this information, but it might use this information in the future to give information directly to `construct_runtime`.
Attribute Macro frame_support_procedural::weight
===
```
#[weight]
```
Each dispatchable needs to define a weight with the `#[pallet::weight($expr)]` attribute; the first argument of the dispatchable must be `origin: OriginFor<T>`.
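A minimal sketch (the call name, argument, and the literal weight expression are illustrative; real pallets normally use benchmarked `WeightInfo` functions):
```
#[pallet::call]
impl<T: Config> Pallet<T> {
	#[pallet::weight(Weight::from_parts(10_000, 0))]
	pub fn do_something(origin: OriginFor<T>, value: u32) -> DispatchResult {
		let _who = frame_system::ensure_signed(origin)?;
		let _ = value; // use `value` here
		Ok(())
	}
}
```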
Attribute Macro frame_support_procedural::whitelist_storage
===
```
#[whitelist_storage]
```
The optional attribute `#[pallet::whitelist_storage]` will declare the storage as whitelisted from benchmarking. Doing so will exclude reads of that value’s storage key from counting towards weight calculations during benchmarking.
This attribute should only be attached to storages that are known to be read/used in every block. This will result in a more accurate benchmarking weight.
#### Example
```
#[pallet::storage]
#[pallet::whitelist_storage]
pub(super) type Number<T: Config> = StorageValue<_, frame_system::pallet_prelude::BlockNumberFor::<T>, ValueQuery>;
```
NOTE: As with all `pallet::*` attributes, this one *must* be written as
`#[pallet::whitelist_storage]` and can only be placed inside a `pallet` module in order for it to work properly.
Derive Macro frame_support_procedural::CloneNoBound
===
```
#[derive(CloneNoBound)]
```
Derive `Clone` but do not bound any generic. Docs are at `frame_support::CloneNoBound`.
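A short sketch of why this matters: with a plain `#[derive(Clone)]`, the generated impl would require `T: Clone` even though no `T` value is stored; `CloneNoBound` only requires the field types themselves to be `Clone`:
```
use core::marker::PhantomData;
use frame_support::CloneNoBound;

#[derive(CloneNoBound)]
pub struct Tracker<T> {
	count: u32,
	_marker: PhantomData<T>,
}
```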
Derive Macro frame_support_procedural::DebugNoBound
===
```
#[derive(DebugNoBound)]
```
Derive `Debug` but do not bound any generics. Docs are at `frame_support::DebugNoBound`.
Derive Macro frame_support_procedural::DefaultNoBound
===
```
#[derive(DefaultNoBound)]
{
// Attributes available to this derive:
#[default]
}
```
Derive `Default` but do not bound any generic. Docs are at `frame_support::DefaultNoBound`.
Derive Macro frame_support_procedural::EqNoBound
===
```
#[derive(EqNoBound)]
```
Derive `Eq` but do not bound any generic. Docs are at `frame_support::EqNoBound`.
Derive Macro frame_support_procedural::PartialEqNoBound
===
```
#[derive(PartialEqNoBound)]
```
Derive `PartialEq` but do not bound any generic. Docs are at
`frame_support::PartialEqNoBound`.
Derive Macro frame_support_procedural::RuntimeDebugNoBound
===
```
#[derive(RuntimeDebugNoBound)]
```
Derive `Debug`, if `std` is enabled it uses `frame_support::DebugNoBound`, if `std` is not enabled it just returns `"<wasm:stripped>"`.
This behaviour is useful to prevent bloating the runtime WASM blob from unneeded code. |
futures-loco-protocol | rust | Rust | Struct futures_loco_protocol::LocoClient
===
```
pub struct LocoClient<T> { /* private fields */ }
```
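A minimal usage sketch, not taken from the crate's own documentation: it assumes the `AsyncRead` bound below refers to the `futures` crate's trait and that the `Result` in the method signatures is `std::io::Result`; adjust both if the crate defines them differently.
```
use futures::io::AsyncRead;
use futures_loco_protocol::LocoClient;

async fn read_commands<T>(transport: T) -> std::io::Result<()>
where
    T: AsyncRead + Unpin,
{
    // Wrap any async transport (TCP stream, TLS stream, in-memory pipe, ...).
    let mut client = LocoClient::new(transport);

    loop {
        // Resolves once a complete command has been received; a single read
        // is capped by `LocoClient::MAX_READ_SIZE`.
        let _command = client.read().await?;
        // ... dispatch on the received command here ...
    }
}
```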
Implementations
---
### impl<T> LocoClient<T>
#### pub const MAX_READ_SIZE: u64 = 16_777_216u64
#### pub const fn new(inner: T) -> Self
#### pub const fn inner(&self) -> &T
#### pub fn inner_mut(&mut self) -> &mut T
#### pub fn inner_pin_mut(self: Pin<&mut Self>) -> Pin<&mut T>
#### pub fn into_inner(self) -> T
### impl<T: AsyncRead> LocoClient<T>
#### pub async fn read(&mut self) -> Result<BoxedCommand> where T: Unpin
#### pub fn poll_read(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<BoxedCommand>>
### impl<T: AsyncWrite> LocoClient<T>
#### pub async fn send(&mut self, method: Method, data: &[u8]) -> Result<u32> where T: Unpin
#### pub fn write(self: Pin<&mut Self>, method: Method, data: &[u8]) -> u32
#### pub fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<()>>
### impl<T: AsyncRead + AsyncWrite + Unpin> LocoClient<T>
#### pub async fn request(&mut self, method: Method, data: &[u8]) -> Result<impl Future<Output = Result<BoxedCommand>> + '_>
Trait Implementations
---
### impl<T: Debug> Debug for LocoClient<T>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<'__pin, T> Unpin for LocoClient<T> where __Origin<'__pin, T>: Unpin
Auto Trait Implementations
---
### impl<T> RefUnwindSafe for LocoClient<T> where T: RefUnwindSafe
### impl<T> Send for LocoClient<T> where T: Send
### impl<T> Sync for LocoClient<T> where T: Sync
### impl<T> UnwindSafe for LocoClient<T> where T: UnwindSafe
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where V: MultiLane<T>
#### fn vzip(self) -> V |
pmdb | ctan | TeX | ## The pmdb Package
D. P. Story
Table of Contents
* 1 Introduction
* 2 Requirements and options
* 3 The DB stage
* 4 The production stage
* 5 Package commands
* 6 Comments on portability
* 7 Final comments
Introduction
This package addresses the issue of a poor-man's database (pmdb).1 Educators who use LaTeX to construct exams and homework sometimes have a collection of problems. Each problem is in its own TEX file. When the educator creates a new exam or homework document, a "common" workflow is to \input several of these prepared questions. This package attempts to provide a visual "user interface" to the questions and to provide a mechanism for viewing and selecting questions that are to be included in the document.
Footnote 1: The basic concept for this package was suggested to me by <NAME>.
**How does this package operate?** For a document that inputs content using the LaTeX command \input, the same content can be input using the command \pmInput, a command defined in this package. When content is input by \pmInput, a checkbox is created in the margin at the insertion point of the content. The checkboxes so created can be checked (to select the associated content) or cleared (to de-select the content). When the user clicks on a push button provided by this package, a list of all _selected_ \input statements is displayed in the JavaScript console. This list can then be copied and pasted into another document the author is developing. If you Ctrl+Click on a checkbox, the associated content is opened in the default browser. For this workflow, the document author can see a typeset version of the content and decide whether the content should be included in the developing document, and can optionally view the source file to edit it by clicking on the button or link to the right of the checkbox.
2. Requirements and options
**Folder JavaScript.** The 'Ctrl+Click' action feature requires the installation of the folder JavaScript file aeb-reader.js2 found in the folder-js folder of this distribution. This file comes with the distribution of the pondo package. If you already have the aeb_pro package, you've already installed the file aeb_pro.js, which includes the special JavaScript functions used by pmdb; however, version 1.7 or later of aeb_pro.js is required. Download the latest version of aeb_pro if Version 1.7 is not on your system already. The installation procedure of folder JavaScript files is described in the file docs/install_jsfiles.pdf. The folder JS file aeb-reader.js (or aeb_pro.js) enables the 'Ctrl+Click' to be operational for both Adobe Acrobat (AA) and Adobe Acrobat Reader (AR). However, when the command \editSourceOn is expanded, a pushbutton appears to the right of the checkbox; clicking this pushbutton opens the content in the default editor. The use of the button generated by \editSourceOn requires neither aeb-reader.js nor aeb_pro.js.
Footnote 2: If you don’t see a need for this feature, the installation of aeb-reader.js is not essential.
**Options.** There are four options: dbmode and !dbmode, and tight and !tight.3 The default options are dbmode and !tight. When dbmode is in effect, checkboxes appear in the margins at each \pmInput point; for the option !dbmode, the checkboxes are not produced. The use of an exclamation point (!) makes it convenient to turn on or off the creation of the marginal checkboxes. The default location of the checkboxes is flush left. The tight option places the checkboxes "tight" up against the text area. Refer to the Marginal Checkboxes paragraph for more information.
**Requirements.** The eforms package is required for the creation of checkboxes and pushbuttons.
3. The DB stage
When you have a collection of questions (or content) in various files, this package enables you to build a document that displays these questions (or content) in a single 'DB' document. Once your DB document is built, you can use the checkboxes in the margin to select content you want to include in another document; you can use the Ctrl+Click feature to view the source file of that content as well.
The following comments are apropos to the creation of a DB document:
* **PDF creators:** Any PDF creator current in the LaTeX world is valid for use with this package.
* **PDF viewers:** The ideal viewer is AA; however, AR and PDF-XChange Editor can also be used. In the case of Adobe Reader, there is an annoying security dialog box that appears each time you use the Ctrl+Click feature of the checkbox;4 the Ctrl+Click feature _does not work_ with PDF-XChange Editor.
Footnote 4: This assumes the file aeb-reader.js is properly installed.
_To remove the security warning_
* For AR, the annoying security dialog mentioned above is emitted when AR is in Protected View. To avoid the security dialog, exit Protected View as follows: (1) open Preferences (Ctrl+K) of AR; (2) select Security (Enhanced) from the left panel; and (3) clear the Enable Protected Mode at startup checkbox. For AA, Protected View is set to Off by default.
**Outline of a DB file.** A DB file is just a LaTeX document that uses the pmdb package. The document itself uses \pmInput to input its content.
\documentclass{article}
\usepackage[forcolorpaper]{web} % optional
% Additional packages that may be required by any content
% that is input with \pmInput. For example,...
\usepackage{exerquiz} % if needed
\usepackage[dbmode]{pmdb}
...
\editSourceOn % or, \editSourceOff
%\useEditLnk % \useEditBtn, the default
...
\begin{document}
% Declares input for quiz items
%\InputQuizItems
% Declares input paragraph content
%\InputParas
% Declares input items
%\InputItems
\pmInput{(path_1)}
\pmInput{(path_2)}
...
\pmInput{(path_n)}
\displayChoices{}{}\quad\clrChoices{}{}
\end{document}
Descriptions of the various commands \pmInput, \InputQuizItems, \InputParas, \InputItems, \InputProbs, \editSourceOn, \displayChoices, and \clrChoices appear later in this documentation.
These methods are not restricted to inputting quiz items or whole chapters. This paragraph was input into the main document with \pmInput{sample-para.tex}. Note the check box in the left margin. If you have aeb-reader.js or aeb_pro.js properly installed, you can Ctrl+Click to see this paragraph at the source.
**Functionality of the boxes in the margin.** By default, only one box appears in the margins, a checkbox. If the command \editSourceOn is in effect, a small pushbutton appears to the right of the checkbox.
checkbox: Selecting the checkbox (a check mark appears) declares that you want that problem (or item) in the document you are creating. Ctrl+Click opens the source file for viewing (not editing) in the default browser. A Shift+Click action jumps--if \InputQuizItems is in effect--to the solution of the selected item, if a solution is provided. The Ctrl+Click functionality requires the successful installation of the aeb-reader.js or aeb_pro.js JavaScript file.
pushbutton: If \editSourceOn has been expanded prior, a little pushbutton appears to the right of the checkbox. Clicking the pushbutton opens the default viewer (for a TEX file) and the source file is loaded into the viewer for possible editing.
link annotation: If \editSourceOn and \useEditLnk are expanded, a link annotation having the same functionality as the pushbutton appears in the margins, in place of the pushbutton.
The checkbox/pushbutton pair above have been disabled for this documentation. Experience the functionality with the example files, listed below.
**Sample files.** The sample files are found in the examples folder:
* tst-qzdb.tex: The example that motivated the creation of this package. Input various quiz questions into a quiz environment of exerquiz.
* tst-paras.tex: For the book class, we input chapters of the book using the command \pmInput.
* tst-qzdb-paras.tex: A combination of the two example files above, where we declare \InputQuizItems to input content of a quiz, and \InputParas to input chapters.
* tst-items: An example that demonstrates the \InputItems input mode.
* tst-eegdb.tex: An example that demonstrates the \InputProbs input mode.
4. The production stage
After your DB document has been assembled (using pmdb), you are ready to use your DB document to select questions (for quizzes) or other content for insertion into a new document.
Open your DB document in AA or AR, and open the source file of your developing document in your LaTeX editor. Within the DB document, select questions or content by checking any of the checkboxes in the margin. Now press the \displayChoices push button. The AA (AR) console window opens and displays your choices; for example,
\input{probs/probl.tex}
\input{probs/probl.tex}
\input{probs/probl5.tex}
These can be copied and pasted into your document. Use the Ctrl+Click feature to view the sources of your choices. It may be you want to modify the source for your document; (1) edit the DB source snippet you are inputting; or (2) copy and paste the whole content into your document and make the needed changes there. Once all content has been referenced by your developing document, you can compile into a PDF. Done!
5. Package commands
The package defines several commands and these are discussed now.
\pmInput*[(arg)]{(path)}
Within a DB source document, content is inserted using \pmInput. This command both inputs the referenced (path) (using the LaTeX command \input) and places a checkbox in the margin. The (path) can be a relative or full path reference. If the (path) contains any spaces, the path needs to be enclosed in _double quotes_ ("); for example,
\pmInput{"C:/Users/Public/Documents/My Tex Files/tex/%
latex/aeb/pmdb/examples/chapters/doc2.tex"}
The optional argument (arg) is only obeyed when \InputItems is active and \pmInput is expanded within a list environment; (arg) is passed to the underlying \item in the list (\item[(arg)]). When the * option is taken, the rest of the arguments are gobbled and the command does nothing; this is a convenient way of _not inputting_ a (path).
**Important requirement:** Unlike the normal \input command, we require the file name to include the extension, '.tex' in the above example.
**Input modes.** There are four 'input modes':
\InputParas \InputQuizItems \InputItems \InputProbs
A brief description of each follows.
\InputParas: Sets the input mode to input 'paragraph content'. This input mode is suitable for exercises created by the exercise environment of exerquiz, whole paragraphs, or whole chapters.
\InputQuizItems: Sets the input mode to input items in a quiz (as created by the quiz environment).
\InputItems: Sets the input mode to input items in a list environment.
\InputProbs: Sets the input mode to input problems for an exam created by the eqexam package.
In all cases, the LaTeX command \marginpar is used; as a result, the checkbox appears in the margins when the \marginpar command is supported; in particular, \marginpar does not work in a tabular environment or a multicols environment, for example. Below is an example of \InputItems.5
Footnote 5: The marginal form fields have been made readonly for this documentation.
* This is content destined for a list environment and was input by \pmInput.
* Another item, not input by \pmInput E
* This is content destined for a list environment and was input by \pmInput.
The verbatim listing is,
\begin{itemize}\pmdbrighttrue\InputItems
\pmInput{sample-item.tex}
\item Another item, not input by \verb|\pmInput|.
\useEditLnk % Use marginal link for illustrative purposes
\pmInput[*]{sample-item.tex}
\end{itemize}
Note the use of the optional argument for the last \pmInput. Note also that both \pmdbrighttrue and \InputItems are expanded locally. When \pmdbrighttrue is in effect, the checkboxes appear "tight" against the text box margin (flush right, in this case). You can (locally) move the checkboxes to the right margin by expanding \normalmarginpar within the itemize environment group.
**Marginal Checkboxes.** The document produces checkboxes in the margins, you can set the appearance of the checkboxes using the \pmCBPresets command.
\pmCBPresets{(opts)}
Pass eforms key-value pairs to the checkboxes through the argument (opts); for example, this document was compiled with \pmCBPresets{\textColor{red}} declared in the preamble; as a result, the checks are colored red.
The checkboxes are placed in the margins and hopefully correctly aligned at the insertion point. For article class-type documents, use of \reversemarginpar is recommended. In this case, the checkboxes appear in the left margin at the extreme left (as seen above). For book class-type document, the checkboxes alternate between the left and right margins. The option tight can be used to move the checkboxes to the inner margins of the text block.
**Marginal pushbuttons.** When \editSourceOn is in effect, a pushbutton appears to the right of the marginal checkbox. The action of this pushbutton is to open the source file in the default editor. Modify the appearance using the \editSourceBtn command:
\editSourceOn (required for the button to appear)
\useEditBtn (the default)
\editSourceBtn[(opts)]{(wd)}{(ht)}
The first line \editSourceOn is required for the button to appear; usually this command is expanded in the preamble, but it can be expanded in the body of the document to turn on or off (\editSourceOff). The second line specifies that pushbutton form field should be used (the default). The third line is the general syntax; here, (opts) are key-values that are passed to the underlying \pushButton command of eforms.
\editSourceBtn[\TU{View in default editor}\S{S}]{11bp}{11bp} (the default)
The above is the default definition for the marginal pushbutton.
**Marginal links.** As an alternative to using marginal pushbuttons, pmdb also provides link annotations. When \editSourceOn is expanded, marginal buttons appear in the margin, by default. To obtain link annotations also expand the macro \useEditLnk (\useEditBtn is the default). Use \editSourceLn to customize the link:
\editSourceOn
\useEditLnk
\editSourceLn[(opts)]{(wd)}{(ht)}{(txt)}
where (opts) are key-values that are passed to the underlying link command \setLink of eforms. The dimensions provided should be the same as those used by the marginal checkboxes so they are properly aligned.
\editSourceLn[\linktxtcolor{red}\H{N}]{11bp}{11bp}{E} (the default)
**Pushbuttons.** The package defines two push buttons that should be utilized in your DB document. They can be placed at the end of the document, or in a running footer.
\displayChoices[(opts)]{(wd)}{(ht)}
\clrChoices[(opts)]{(wd)}{(ht)}
The \displayChoices command displays the choices made in the console window of AA/AR, while \clrChoices clears all the marginal checkboxes. For example,
\displayChoices{}{11bp}\clrChoices{}{11bp}
The (opts) argument is to modify the appearance of the buttons. There are several supporting, convenience commands associated with \displayChoices and \clrChoices:
\displayChoiceCA{(string)} (Display Choices)
\displayChoiceTU{(string)} (Display all choices in the console window)
\clrChoicesCA{(string)} (Clear Choices)
\clrChoicesTU{(string)}
The 'CA' commands place captions on the buttons; the 'TU' commands define tool tips for the buttons. The strings shown in parentheses to the right are the default declarations for each of the commands.
6. Comments on portability
The functionality of the document (with the exception of the Ctrl+Click feature) is platform independent; however, to be of any value, the DB files must accompany the DB document. As long as all references to DB files are _relative paths_ the DB document can be ported elsewhere along with the supporting DB files. Just ZIP the whole folder containing the DB document and all DB files. They can now be moved to another computer system, unzipped, and total functionality attained. For the Ctrl+Click feature to work, the aeb-reader.js file must also be installed. However, recall that the use of the marginal edit button or link annotation does not require installation of a JavaScript file.
7. Final comments
The method of producing the checkboxes in the margins works for many of the situations that arise in producing a LaTeX document; the four 'input modes' \InputParas, \InputQuizItems, \InputProbs, and \InputItems, however, may fail in some situations. By studying the DTX file perhaps you can create more input modes that solve your problem.
It has been lovely, but now I must return to my retirement. |
@alan-ai/alan-sdk-react-native | npm | JavaScript | Alan voice assistant SDK for React Native
===
[Alan Platform](https://alan.app/) • [Alan Studio](https://studio.alan.app/register) • [Docs](https://alan.app/docs) • [FAQ](https://alan.app/docs/usage/additional/faq) •
[Blog](https://alan.app/blog/) • [Twitter](https://twitter.com/alanvoiceai)
Quickly add voice to your app. Create an in-app voice assistant to enable human-like conversations and provide a personalized voice experience for every user.
Alan is a Voice AI Platform
---
Alan is a conversational voice AI platform that lets you create an intelligent voice assistant for your app. It offers all necessary tools to design, embed and host your voice solutions:
#### Alan Studio
A powerful web-based IDE where you can write, test and debug dialog scenarios for your voice assistant or chatbot.
#### Alan Client SDKs
Alan's lightweight SDKs to quickly embed a voice assistant to your app.
#### Alan Cloud
Alan's AI-backend powered by the industry’s best Automatic Speech Recognition (ASR), Natural Language Understanding (NLU) and Speech Synthesis. The Alan Cloud provisions and handles the infrastructure required to maintain your voice deployments and perform all the voice processing tasks.
To get more details on how Alan works, see [Alan Platform](https://alan.app/platform).
Why Alan?
---
* **No or minimum changes to your UI**: To voice-enable your app, you only need to get the Alan Client SDK and drop it into your app.
* **Serverless environment**: No need to plan for, deploy and maintain any infrastructure or speech components - the Alan Platform does the bulk of the work.
* **On-the-fly updates**: All changes to the dialogs become available immediately.
* **Voice flow testing and analytics**: Alan Studio provides advanced tools for testing your dialog flows and getting the analytics data on users' interactions, all in the same console.
How to start
---
To create a voice assistant for your React Native app:
1. [Sign up for Alan Studio](https://studio.alan.app/register) to build voice scripts in JavaScript and test them.
2. Use the Alan React Native SDK to embed a voice assistant to your application. For details, see [Alan AI documentation](https://alan.app/docs/client-api/cross-platform/react-native).
Example apps
---
In the [Examples](https://github.com/alan-ai/alan-sdk-reactnative/tree/master/examples) folder, you can find example apps integrated with the Alan voice SDK for React Native. Launch the app, tap the Alan button and start giving voice commands. For example, you can ask: "Hello" or "What does this app do?"
Other platforms
---
You may also want to try Alan Client SDKs for the following platforms:
* [Web](https://github.com/alan-ai/alan-sdk-web)
* [iOS](https://github.com/alan-ai/alan-sdk-ios)
* [Android](https://github.com/alan-ai/alan-sdk-android)
* [Flutter](https://github.com/alan-ai/alan-sdk-flutter)
* [Ionic](https://github.com/alan-ai/alan-sdk-ionic)
* [Apache Cordova](https://github.com/alan-ai/alan-sdk-cordova)
* [PowerApps](https://github.com/alan-ai/alan-sdk-pcf)
Have questions?
---
If you have any questions or something is missing in the documentation:
* Join [Alan AI Slack community](https://app.slack.com/client/TL55N530A) for support
* Contact us at [<EMAIL>](mailto:<EMAIL>)
Readme
---
### Keywords
* react-native
* react-component
* alan
* alanai
* alansdk
* voice |
github.com/nsqio/nsq | go | Go | README
[¶](#section-readme)
---
![](https://nsq.io/static/img/nsq_blue.png)
* **Source**: https://github.com/nsqio/nsq
* **Issues**: https://github.com/nsqio/nsq/issues
* **Mailing List**: [<EMAIL>](https://groups.google.com/d/forum/nsq-users)
* **IRC**: #nsq on freenode
* **Docs**: https://nsq.io
* **Twitter**: [@nsqio](https://twitter.com/nsqio)
[![Build Status](https://github.com/nsqio/nsq/workflows/tests/badge.svg)](https://github.com/nsqio/nsq/actions) [![GitHub release](https://img.shields.io/github/release/nsqio/nsq.svg)](https://github.com/nsqio/nsq/releases/latest) [![Coverage Status](https://coveralls.io/repos/github/nsqio/nsq/badge.svg?branch=master)](https://coveralls.io/github/nsqio/nsq?branch=master)
**NSQ** is a realtime distributed messaging platform designed to operate at scale, handling billions of messages per day.
It promotes *distributed* and *decentralized* topologies without single points of failure,
enabling fault tolerance and high availability coupled with a reliable message delivery guarantee. See [features & guarantees](https://nsq.io/overview/features_and_guarantees.html).
Operationally, **NSQ** is easy to configure and deploy (all parameters are specified on the command line and compiled binaries have no runtime dependencies). For maximum flexibility, it is agnostic to data format (messages can be JSON, MsgPack, Protocol Buffers, or anything else). Official Go and Python libraries are available out of the box (as well as many other [client libraries](https://nsq.io/clients/client_libraries.html)) and, if you're interested in building your own, there's a [protocol spec](https://nsq.io/clients/tcp_protocol_spec.html).
We publish [binary releases](https://nsq.io/deployment/installing.html) for linux, darwin, freebsd and windows as well as an official [Docker image](https://nsq.io/deployment/docker.html).
NOTE: master is our *development* branch and may not be stable at all times.
### In Production
[![](https://nsq.io/static/img/bitly_logo.png)](https://bitly.com/)
[![](https://nsq.io/static/img/life360_logo.png)](https://www.life360.com/)
[![](https://nsq.io/static/img/simplereach_logo.png)](https://www.simplereach.com/)
[![](https://nsq.io/static/img/moz_logo.png)](https://moz.com/)
[![](https://nsq.io/static/img/segment_logo.png)](https://segment.com/)
[![](https://nsq.io/static/img/eventful_logo.png)](https://eventful.com/events)
[![](https://nsq.io/static/img/energyhub_logo.png)](https://www.energyhub.com/)
[![](https://nsq.io/static/img/project_fifo.png)](https://project-fifo.net/)
[![](https://nsq.io/static/img/trendrr_logo.png)](https://trendrr.com/)
[![](https://nsq.io/static/img/reonomy_logo.png)](https://reonomy.com/)
[![](https://nsq.io/static/img/heavy_water.png)](https://hw-ops.com/)
[![](https://nsq.io/static/img/lytics.png)](https://www.getlytics.com/)
[![](https://nsq.io/static/img/rakuten.png)](https://mediaforge.com/)
[![](https://nsq.io/static/img/wistia_logo.png)](https://wistia.com/)
[![](https://nsq.io/static/img/stripe_logo.png)](https://stripe.com/)
[![](https://nsq.io/static/img/shipwire_logo.png)](https://www.shipwire.com/)
[![](https://nsq.io/static/img/digg_logo.png)](https://digg.com/)
[![](https://nsq.io/static/img/scalabull_logo.png)](https://www.scalabull.com/)
[![](https://nsq.io/static/img/soundest_logo.png)](https://www.soundest.com/)
[![](https://nsq.io/static/img/docker_logo.png)](https://www.docker.com/)
[![](https://nsq.io/static/img/weave_logo.png)](https://www.getweave.com/)
[![](https://nsq.io/static/img/augury_logo.png)](https://www.augury.com/)
[![](https://nsq.io/static/img/buzzfeed_logo.png)](https://www.buzzfeed.com/)
[![](https://nsq.io/static/img/eztable_logo.png)](https://eztable.com/)
[![](https://nsq.io/static/img/dotabuff_logo.png)](https://www.dotabuff.com/)
[![](https://nsq.io/static/img/fastly_logo.png)](https://www.fastly.com/)
[![](https://nsq.io/static/img/talky_logo.png)](https://talky.io/)
[![](https://nsq.io/static/img/groupme_logo.png)](https://groupme.com/)
[![](https://nsq.io/static/img/wiredcraft_logo.jpg)](https://wiredcraft.com/)
[![](https://nsq.io/static/img/sproutsocial_logo.png)](https://sproutsocial.com/)
[![](https://nsq.io/static/img/fandom_logo.svg)](https://fandom.wikia.com/)
[![](https://nsq.io/static/img/gitee_logo.svg)](https://gitee.com/)
[![](https://nsq.io/static/img/bytedance_logo.png)](https://bytedance.com/)
### Code of Conduct
Help us keep NSQ open and inclusive. Please read and follow our [Code of Conduct](https://github.com/nsqio/nsq/blob/v1.2.1/CODE_OF_CONDUCT.md).
### Authors
NSQ was designed and developed by <NAME> ([@imsnakes](https://twitter.com/imsnakes)) and <NAME>
([@jehiah](https://twitter.com/jehiah)) but wouldn't have been possible without the support of [Bitly](https://bitly.com),
maintainers ([<NAME>](https://github.com/ploxiln)), and all our [contributors](https://github.com/nsqio/nsq/graphs/contributors).
Logo created by <NAME> ([@kisalow](https://twitter.com/kisalow)).
None |
fcaR | cran | R | Package ‘fcaR’
April 27, 2023
Title Formal Concept Analysis
Version 1.2.1
Maintainer <NAME> <<EMAIL>>
Description Provides tools to perform fuzzy formal concept
analysis, presented in Wille (1982) <doi:10.1007/978-3-642-01815-2_23>
and in Ganter and Obiedkov (2016) <doi:10.1007/978-3-662-49291-8>. It
provides functions to load and save a formal context, extract its
concept lattice and implications. In addition, one can use the
implications to compute semantic closures of fuzzy sets and, thus,
build recommendation systems.
License GPL-3
URL https://github.com/Malaga-FCA-group/fcaR
BugReports https://github.com/Malaga-FCA-group/fcaR/issues
Depends R (>= 3.1)
Imports forcats, fractional, glue, grDevices, Matrix, methods, POSetR,
R6, Rcpp, registry, settings, stringr, tibble, tikzDevice,
magrittr, purrr
Suggests arules, covr, hasseDiagram, knitr, markdown, rmarkdown,
testthat (>= 2.1.0), tictoc, parallel
LinkingTo Rcpp
VignetteBuilder knitr
Encoding UTF-8
LazyData true
RoxygenNote 7.2.3
NeedsCompilation yes
Author <NAME> [aut, cre]
(<https://orcid.org/0000-0002-0172-1585>),
<NAME> [aut],
<NAME> [aut],
<NAME> [aut],
<NAME> [ctb]
Repository CRAN
Date/Publication 2023-04-27 19:50:06 UTC
R topics documented:
as_Set
as_vector
cobre32
cobre61
Concept
ConceptLattice
ConceptSet
equivalencesRegistry
fcaR
fcaR_options
FormalContext
ImplicationSet
parse_implication
parse_implications
planets
scalingRegistry
Set
vegas
%&%
%entails%
%==%
%-%
%holds_in%
%<=%
%or%
%respects%
%~%
as_Set Convert Named Vector to Set
Description
Convert Named Vector to Set
Usage
as_Set(A)
Arguments
A A named vector or matrix to build a new Set.
Value
A Set object.
Examples
A <- c(a = 0.1, b = 0.2, p = 0.3, q = 0)
as_Set(A)
as_vector Convert Set to vector
Description
Convert Set to vector
Usage
as_vector(v)
Arguments
v A Set to convert to vector.
Value
A vector.
Examples
A <- c(a = 0.1, b = 0.2, p = 0.3, q = 0)
v <- as_Set(A)
A2 <- as_vector(v)
all(A == A2)
cobre32 Data for Differential Diagnosis for Schizophrenia
Description
A subset of the COBRE dataset has been retrieved by querying SchizConnect for 105 patients with
neurological and clinical symptoms, also collecting their corresponding diagnoses.
Usage
cobre32
Format
A matrix with 105 rows and 32 columns. Column names are related to different scales for depression
and Schizophrenia:
COSAS_n The Simpson-Angus Scale, 7 items to evaluate Parkinsonism-like alterations, related to
schizophrenia, in an individual.
FICAL_n The Calgary Depression Scale for Schizophrenia, 9 items (attributes) assessing the level
of depression in schizophrenia, differentiating between positive and negative aspects of the
disease.
SCIDII_n The Structured Clinical Interview for DSM-III-R Personality Disorders, with 14 variables related to the presence of signs affecting personality.
dx_ss if TRUE, the diagnosis is strict schizophrenia.
dx_other if TRUE, the diagnosis is other than schizophrenia, including schizoaffective, bipolar disorder and major depression.
In summary, the dataset consists of the previous 30 attributes related to signs or symptoms, and
2 attributes related to diagnosis (these diagnoses are mutually exclusive, thus only one of them is
assigned to each patient). This makes a dataset with 105 objects (patients) and 32 attributes to
explore. The symptom attributes are multi-valued.
Thus, according to the specific scales used, all attributes are fuzzy and graded. For a given attribute
(symptom), the available grades range from absent to extreme, with minimal, mild, moderate, moderate severe and severe in between.
These fuzzy attributes are mapped to values in the interval [0, 1].
Source
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & Liu,
J. (2017). Multimodal neuroimaging in schizophrenia: description and dissemination. Neuroinformatics, 15(4), 343-364. https://pubmed.ncbi.nlm.nih.gov/26142271/
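As a hedged sketch (assuming the fcaR package is attached), this dataset can be loaded directly into a formal context and explored:
fc <- FormalContext$new(cobre32)
fc$dim() # 105 objects, 32 attributes
# Extract the implication basis (this can take a while on a fuzzy context)
fc$find_implications()
fc$implications$cardinality()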
cobre61 Data for Differential Diagnosis for Schizophrenia
Description
A subset of the COBRE dataset has been retrieved by querying SchizConnect for 105 patients with
neurological and clinical symptoms, also collecting their corresponding diagnoses.
Usage
cobre61
Format
A matrix with 105 rows and 61 columns. Column names are related to different scales for depression
and Schizophrenia:
COSAS_n The Simpson-Angus Scale, 7 items to evaluate Parkinsonism-like alterations, related to
schizophrenia, in an individual.
FIPAN_n The Positive and Negative Syndrome Scale, a set of 29 attributes measuring different
aspects and symptoms in schizophrenia.
FICAL_n The Calgary Depression Scale for Schizophrenia, 9 items (attributes) assessing the level
of depression in schizophrenia, differentiating between positive and negative aspects of the
disease.
SCIDII_n The Structured Clinical Interview for DSM-III-R Personality Disorders, with 14 variables related to the presence of signs affecting personality.
dx_ss if TRUE, the diagnosis is strict schizophrenia.
dx_other if TRUE, the diagnosis is other than schizophrenia, including schizoaffective, bipolar disorder and major depression.
In summary, the dataset consists of the previous 59 attributes related to signs or symptoms, and
2 attributes related to diagnosis (these diagnoses are mutually exclusive, thus only one of them is
assigned to each patient). This makes a dataset with 105 objects (patients) and 61 attributes to
explore. The symptom attributes are multi-valued.
Thus, according to the specific scales used, all attributes are fuzzy and graded. For a given attribute
(symptom), the available grades range from absent to extreme, with minimal, mild, moderate, moderate severe and severe in between.
These fuzzy attributes are mapped to values in the interval [0, 1].
Source
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & Liu,
J. (2017). Multimodal neuroimaging in schizophrenia: description and dissemination. Neuroinformatics, 15(4), 343-364. https://pubmed.ncbi.nlm.nih.gov/26142271/
Concept R6 class for a fuzzy concept with sparse internal representation
Description
This class implements the data structure and methods for fuzzy concepts.
Methods
Public methods:
• Concept$new()
• Concept$get_extent()
• Concept$get_intent()
• Concept$print()
• Concept$to_latex()
• Concept$clone()
Method new(): Creator for objects of class Concept
Usage:
Concept$new(extent, intent)
Arguments:
extent (Set) The extent of the concept.
intent (Set) The intent of the concept.
Returns: An object of class Concept.
Method get_extent(): Internal Set for the extent
Usage:
Concept$get_extent()
Returns: The Set representation of the extent.
Method get_intent(): Internal Set for the intent
Usage:
Concept$get_intent()
Returns: The Set representation of the intent.
Method print(): Prints the concept to console
Usage:
Concept$print()
Returns: A string with the elements of the set and their grades between brackets.
Method to_latex(): Write the concept in LaTeX format
Usage:
Concept$to_latex(print = TRUE)
Arguments:
print (logical) Print to output?
Returns: The fuzzy concept in LaTeX.
Method clone(): The objects of this class are cloneable with this method.
Usage:
Concept$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
# Build a formal context and find its concepts
fc_planets <- FormalContext$new(planets)
fc_planets$find_concepts()
# Print the first three concepts
fc_planets$concepts[1:3]
# Select the first concept:
C <- fc_planets$concepts$sub(1)
# Get its extent and intent
C$get_extent()
C$get_intent()
ConceptLattice R6 class for a concept lattice
Description
This class implements the data structure and methods for concept lattices.
Super class
fcaR::ConceptSet -> ConceptLattice
Methods
Public methods:
• ConceptLattice$new()
• ConceptLattice$plot()
• ConceptLattice$sublattice()
• ConceptLattice$top()
• ConceptLattice$bottom()
• ConceptLattice$join_irreducibles()
• ConceptLattice$meet_irreducibles()
• ConceptLattice$decompose()
• ConceptLattice$supremum()
• ConceptLattice$infimum()
• ConceptLattice$subconcepts()
• ConceptLattice$superconcepts()
• ConceptLattice$lower_neighbours()
• ConceptLattice$upper_neighbours()
• ConceptLattice$clone()
Method new(): Create a new ConceptLattice object.
Usage:
ConceptLattice$new(extents, intents, objects, attributes, I = NULL)
Arguments:
extents (dgCMatrix) The extents of all concepts
intents (dgCMatrix) The intents of all concepts
objects (character vector) Names of the objects in the formal context
attributes (character vector) Names of the attributes in the formal context
I (dgCMatrix) The matrix of the formal context
Returns: A new ConceptLattice object.
Method plot(): Plot the concept lattice
Usage:
ConceptLattice$plot(object_names = TRUE, to_latex = FALSE, ...)
Arguments:
object_names (logical) If TRUE, plot object names, otherwise omit them from the diagram.
to_latex (logical) If TRUE, export the plot as a tikzpicture environment that can be included
in a LaTeX file.
... Other parameters to be passed to the tikzDevice that renders the lattice in LaTeX, or for
the figure caption. See Details.
Details: Particular parameters that control the size of the tikz output are: width, height
(both in inches), and pointsize (in points), that should be set to the font size used in the
documentclass header in the LaTeX file where the code is to be inserted.
If a caption is provided, the whole tikz picture will be wrapped by a figure environment and
the caption set.
Returns: If to_latex is FALSE, it returns nothing, just plots the graph of the concept lattice.
Otherwise, this function returns the LaTeX code to reproduce the concept lattice.
Method sublattice(): Sublattice
Usage:
ConceptLattice$sublattice(...)
Arguments:
... See Details.
Details: As argument, one can provide both integer indices or Concepts, separated by commas.
The corresponding concepts are used to generate a sublattice.
Returns: The generated sublattice as a new ConceptLattice object.
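A minimal, hedged sketch (the concept indices below are arbitrary and assume the planets context has already been processed):
fc <- FormalContext$new(planets)
fc$find_concepts()
# Sublattice generated by the concepts with indices 2, 3 and 5
sub <- fc$concepts$sublattice(2, 3, 5)
sub$size()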
Method top(): Top of a Lattice
Usage:
ConceptLattice$top()
Returns: The top of the Concept Lattice
Examples:
fc <- FormalContext$new(planets)
fc$find_concepts()
fc$concepts$top()
Method bottom(): Bottom of a Lattice
Usage:
ConceptLattice$bottom()
Returns: The bottom of the Concept Lattice
Examples:
fc <- FormalContext$new(planets)
fc$find_concepts()
fc$concepts$bottom()
Method join_irreducibles(): Join-irreducible Elements
Usage:
ConceptLattice$join_irreducibles()
Returns: The join-irreducible elements in the concept lattice.
Method meet_irreducibles(): Meet-irreducible Elements
Usage:
ConceptLattice$meet_irreducibles()
Returns: The meet-irreducible elements in the concept lattice.
Method decompose(): Decompose a concept as the supremum of meet-irreducible concepts
Usage:
ConceptLattice$decompose(C)
Arguments:
C A list of Concepts
Returns: A list, each field is the set of meet-irreducible elements whose supremum is the
corresponding element in C.
Method supremum(): Supremum of Concepts
Usage:
ConceptLattice$supremum(...)
Arguments:
... See Details.
Details: As argument, one can provide both integer indices or Concepts, separated by commas.
The corresponding concepts are used to compute their supremum in the lattice.
Returns: The supremum of the list of concepts.
Method infimum(): Infimum of Concepts
Usage:
ConceptLattice$infimum(...)
Arguments:
... See Details.
Details: As argument, one can provide both integer indices or Concepts, separated by commas.
The corresponding concepts are used to compute their infimum in the lattice.
Returns: The infimum of the list of concepts.
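A hedged sketch of both lattice operations (the indices are chosen arbitrarily):
fc <- FormalContext$new(planets)
fc$find_concepts()
# Join and meet of the concepts with indices 4 and 5
fc$concepts$supremum(4, 5)
fc$concepts$infimum(4, 5)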
Method subconcepts(): Subconcepts of a Concept
Usage:
ConceptLattice$subconcepts(C)
Arguments:
C (numeric or SparseConcept) The concept for which to determine all its subconcepts.
Returns: A list with the subconcepts.
Method superconcepts(): Superconcepts of a Concept
Usage:
ConceptLattice$superconcepts(C)
Arguments:
C (numeric or SparseConcept) The concept for which to determine all its superconcepts.
Returns: A list with the superconcepts.
Method lower_neighbours(): Lower Neighbours of a Concept
Usage:
ConceptLattice$lower_neighbours(C)
Arguments:
C (SparseConcept) The concept for which to find its lower neighbours
Returns: A list with the lower neighbours of C.
Method upper_neighbours(): Upper Neighbours of a Concept
Usage:
ConceptLattice$upper_neighbours(C)
Arguments:
C (SparseConcept) The concept for which to find its upper neighbours
Returns: A list with the upper neighbours of C.
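The following hedged sketch shows how these navigation methods can be combined (the concept index is arbitrary):
fc <- FormalContext$new(planets)
fc$find_concepts()
# Pick one concept and inspect its neighbourhood in the lattice
C <- fc$concepts$sub(3)
fc$concepts$subconcepts(C)
fc$concepts$superconcepts(C)
fc$concepts$lower_neighbours(C)
fc$concepts$upper_neighbours(C)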
Method clone(): The objects of this class are cloneable with this method.
Usage:
ConceptLattice$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
# Build a formal context
fc_planets <- FormalContext$new(planets)
# Find the concepts
fc_planets$find_concepts()
# Find join- and meet- irreducible elements
fc_planets$concepts$join_irreducibles()
fc_planets$concepts$meet_irreducibles()
# Get concept support
fc_planets$concepts$support()
## ------------------------------------------------
## Method `ConceptLattice$top`
## ------------------------------------------------
fc <- FormalContext$new(planets)
fc$find_concepts()
fc$concepts$top()
## ------------------------------------------------
## Method `ConceptLattice$bottom`
## ------------------------------------------------
fc <- FormalContext$new(planets)
fc$find_concepts()
fc$concepts$bottom()
ConceptSet R6 class for a set of concepts
Description
This class implements the data structure and methods for concept sets.
Methods
Public methods:
• ConceptSet$new()
• ConceptSet$size()
• ConceptSet$is_empty()
• ConceptSet$extents()
• ConceptSet$intents()
• ConceptSet$print()
• ConceptSet$to_latex()
• ConceptSet$to_list()
• ConceptSet$[()
• ConceptSet$sub()
• ConceptSet$support()
• ConceptSet$clone()
Method new(): Create a new ConceptSet object.
Usage:
ConceptSet$new(extents, intents, objects, attributes, I = NULL)
Arguments:
extents (dgCMatrix) The extents of all concepts
intents (dgCMatrix) The intents of all concepts
objects (character vector) Names of the objects in the formal context
attributes (character vector) Names of the attributes in the formal context
I (dgCMatrix) The matrix of the formal context
Returns: A new ConceptSet object.
Method size(): Size of the Lattice
Usage:
ConceptSet$size()
Returns: The number of concepts in the lattice.
Method is_empty(): Is the lattice empty?
Usage:
ConceptSet$is_empty()
Returns: TRUE if the lattice has no concepts.
Method extents(): Concept Extents
Usage:
ConceptSet$extents()
Returns: The extents of all concepts, as a dgCMatrix.
Method intents(): Concept Intents
Usage:
ConceptSet$intents()
Returns: The intents of all concepts, as a dgCMatrix.
Method print(): Print the Concept Set
Usage:
ConceptSet$print()
Returns: Nothing, just prints the concepts
Method to_latex(): Write in LaTeX
Usage:
ConceptSet$to_latex(print = TRUE, ncols = 1, numbered = TRUE, align = TRUE)
Arguments:
print (logical) Print to output?
ncols (integer) Number of columns of the output.
numbered (logical) Number the concepts?
align (logical) Align objects and attributes independently?
Returns: The LaTeX code to list all concepts.
Method to_list(): Returns a list with all the concepts
Usage:
ConceptSet$to_list()
Returns: A list of concepts.
Method [(): Subsets a ConceptSet
Usage:
ConceptSet$[(indices)
Arguments:
indices (numeric or logical vector) The indices of the concepts to return as a list of Concepts.
It can be a vector of logicals where TRUE elements are to be retained.
Returns: Another ConceptSet.
Method sub(): Individual Concepts
Usage:
ConceptSet$sub(index)
Arguments:
index (numeric) The index of the concept to return.
Returns: The Concept.
Method support(): Get support of each concept
Usage:
ConceptSet$support()
Returns: A vector with the support of each concept.
Method clone(): The objects of this class are cloneable with this method.
Usage:
ConceptSet$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
# Build a formal context
fc_planets <- FormalContext$new(planets)
# Find the concepts
fc_planets$find_concepts()
# Find join- and meet- irreducible elements
fc_planets$concepts$join_irreducibles()
fc_planets$concepts$meet_irreducibles()
equivalencesRegistry Equivalence Rules Registry
Description
Equivalence Rules Registry
Usage
equivalencesRegistry
Format
An object of class equivalence_registry (inherits from registry) of length 6.
Details
This is a registry that stores the equivalence rules that can be applied using the apply_rules()
method in an ImplicationSet.
One can obtain the list of available equivalence operators by: equivalencesRegistry$get_entry_names()
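For example (a direct use of the call mentioned above, assuming fcaR is attached):
library(fcaR)
# Names of the equivalence rules available to apply_rules()
equivalencesRegistry$get_entry_names()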
fcaR fcaR: Tools for Formal Concept Analysis
Description
The aim of this package is to provide tools to perform fuzzy formal concept analysis (FCA) from
within R. It provides functions to load and save a Formal Context, extract its concept lattice and
implications. In addition, one can use the implications to compute semantic closures of fuzzy sets
and, thus, build recommendation systems.
Details
The fcaR package provides data structures which allow the user to work seamlessly with formal
contexts and sets of implications. More explicitly, three main classes are implemented, using the R6
object-oriented-programming paradigm in R:
• FormalContext encapsulates the definition of a formal context (G, M, I), where G is the set of objects, M the set of attributes and I the (fuzzy) relationship matrix, and provides methods to operate on the context using FCA tools.
• ImplicationSet represents a set of implications over a specific formal context.
• ConceptLattice represents the set of concepts and their relationships, including methods to
operate on the lattice.
Two additional helper classes are implemented:
• Set is a class solely used for visualization purposes, since it encapsulates in sparse format a
(fuzzy) set.
• Concept encapsulates internally both extent and intent of a formal concept as Set.
Since fcaR is an extension of the data model in the arules package, most of the methods and classes implemented interoperate with the main S4 classes in arules (transactions and rules).
References
<NAME>, <NAME> (1986). “Familles minimales d’implications informatives résultant d’un
tableau de données binaires.” Mathématiques et Sciences humaines, 95, 5-18.
<NAME>, <NAME> (1999). Formal concept analysis : mathematical foundations. Springer. ISBN
3540627715.
<NAME>, <NAME>, <NAME>, <NAME> (2002). “SLFD Logic: Elimination of Data
Redundancy in Knowledge Representation.” Advances in Artificial Intelligence - IBERAMIA 2002,
2527, 141-150. doi: 10.1007/3-540-36131-6_15 (URL: http://doi.org/10.1007/3-540-36131-6_15).
<NAME> (2002). “Algorithms for fuzzy concept lattices.” In Proc. Fourth Int. Conf. on Recent
Advances in Soft Computing. Nottingham, United Kingdom, 200-205.
<NAME>, <NAME>, <NAME> (2005). “arules - a computational environment for mining association
rules and frequent item sets.” J Stat Softw, 14, 1-25.
<NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2012). “Closure via functional dependence simplification.” International Journal of Computer Mathematics, 89(4), 510-526.
Belohlavek R, <NAME>, <NAME>, <NAME>, <NAME> (2016). “Automated prover for attribute dependencies in data with grades.” International Journal of Approximate Reasoning, 70, 51-67.
Examples
# Build a formal context
fc_planets <- FormalContext$new(planets)
# Find its concepts and implications
fc_planets$find_implications()
# Print the extracted implications
fc_planets$implications
fcaR_options Set or get options for fcaR
Description
Set or get options for fcaR
Usage
fcaR_options(...)
Arguments
... Option names to retrieve option values or [key]=[value] pairs to set options.
Supported options
The following options are supported
• decimal_places (numeric; 2) The number of decimal places to show when printing or exporting to LaTeX sets, implications, concepts, etc.
• latex_size (character; "normalsize") Size to use when exporting to LaTeX.
• reduced_lattice (logical; TRUE) Plot the reduced concept lattice?
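A small, hedged sketch of getting and setting options (the values below are only illustrative):
library(fcaR)
# Retrieve the current value of an option
fcaR_options("decimal_places")
# Set one or more options
fcaR_options(decimal_places = 3, latex_size = "small")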
FormalContext R6 class for a formal context
Description
This class implements the data structure and methods for formal contexts.
Public fields
I The table of the formal context as a matrix.
attributes The attributes of the formal context.
objects The objects of the formal context.
grades_set The set of degrees (in [0, 1]) the whole set of attributes can take.
expanded_grades_set The set of degrees (in [0, 1]) each attribute can take.
concepts The concept lattice associated to the formal context as a ConceptLattice.
implications A set of implications on the formal context as an ImplicationSet.
Methods
Public methods:
• FormalContext$new()
• FormalContext$is_empty()
• FormalContext$scale()
• FormalContext$get_scales()
• FormalContext$background_knowledge()
• FormalContext$dual()
• FormalContext$intent()
• FormalContext$uparrow()
• FormalContext$extent()
• FormalContext$downarrow()
• FormalContext$closure()
• FormalContext$obj_concept()
• FormalContext$att_concept()
• FormalContext$is_concept()
• FormalContext$is_closed()
• FormalContext$clarify()
• FormalContext$reduce()
• FormalContext$standardize()
• FormalContext$find_concepts()
• FormalContext$find_implications()
• FormalContext$to_transactions()
• FormalContext$save()
• FormalContext$load()
• FormalContext$dim()
• FormalContext$print()
• FormalContext$to_latex()
• FormalContext$incidence()
• FormalContext$subcontext()
• FormalContext$[()
• FormalContext$plot()
• FormalContext$use_logic()
• FormalContext$get_logic()
• FormalContext$use_connection()
• FormalContext$get_connection()
• FormalContext$clone()
Method new(): Creator for the Formal Context class
Usage:
FormalContext$new(I, filename, remove_const = FALSE)
Arguments:
I (numeric matrix) The table of the formal context.
filename (character) Path of a file to import.
remove_const (logical) If TRUE, remove constant columns. The default is FALSE.
Details: Columns of I should be named, since they are the names of the attributes of the formal
context.
If no I is used, the resulting FormalContext will be empty and not usable except for loading a previously saved one. In this case, one can provide a filename to import. Only RDS, CSV and CXT files are currently supported.
Returns: An object of the FormalContext class.
Method is_empty(): Check if the FormalContext is empty
Usage:
FormalContext$is_empty()
Returns: TRUE if the FormalContext is empty, that is, has not been provided with a matrix, and
FALSE otherwise.
Method scale(): Scale the context
Usage:
FormalContext$scale(attributes, type, ...)
Arguments:
attributes The attributes to scale
type Type of scaling.
...
Details: The types of scaling are implemented in a registry, so that scalingRegistry$get_entries()
returns all types.
Returns: The scaled formal context
Examples:
filename <- system.file("contexts", "aromatic.csv", package = "fcaR")
fc <- FormalContext$new(filename)
fc$scale("nitro", "ordinal", comparison = `>=`, values = 1:3)
fc$scale("OS", "nominal", c("O", "S"))
fc$scale(attributes = "ring", type = "nominal")
Method get_scales(): Scales applied to the formal context
Usage:
FormalContext$get_scales(attributes = names(private$scales))
Arguments:
attributes (character) Name of the attributes for which scales (if applied) are returned.
Returns: The scales that have been applied to the specified attributes of the formal context. If
no attributes are passed, then all applied scales are returned.
Examples:
filename <- system.file("contexts", "aromatic.csv", package = "fcaR")
fc <- FormalContext$new(filename)
fc$scale("nitro", "ordinal", comparison = `>=`, values = 1:3)
fc$scale("OS", "nominal", c("O", "S"))
fc$scale(attributes = "ring", type = "nominal")
fc$get_scales()
Method background_knowledge(): Background knowledge of a scaled formal context
Usage:
FormalContext$background_knowledge()
Returns: An ImplicationSet with the implications extracted from the application of scales.
Examples:
filename <- system.file("contexts", "aromatic.csv", package = "fcaR")
fc <- FormalContext$new(filename)
fc$scale("nitro", "ordinal", comparison = `>=`, values = 1:3)
fc$scale("OS", "nominal", c("O", "S"))
fc$scale(attributes = "ring", type = "nominal")
fc$background_knowledge()
Method dual(): Get the dual formal context
Usage:
FormalContext$dual()
Returns: A FormalContext where objects and attributes have interchanged their roles.
Method intent(): Get the intent of a fuzzy set of objects
Usage:
FormalContext$intent(S)
Arguments:
S (Set) The set of objects to compute the intent for.
Returns: A Set with the intent.
Method uparrow(): Get the intent of a fuzzy set of objects
Usage:
FormalContext$uparrow(S)
Arguments:
S (Set) The set of objects to compute the intent for.
Returns: A Set with the intent.
Method extent(): Get the extent of a fuzzy set of attributes
Usage:
FormalContext$extent(S)
Arguments:
S (Set) The set of attributes to compute the extent for.
Returns: A Set with the extent.
Method downarrow(): Get the extent of a fuzzy set of attributes
Usage:
FormalContext$downarrow(S)
Arguments:
S (Set) The set of attributes to compute the extent for.
Returns: A Set with the extent.
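A hedged sketch of both directions of the derivation operators on the planets context (object and attribute names are taken from the planets dataset documented below):
fc <- FormalContext$new(planets)
# Intent: attributes shared by a set of objects
S <- Set$new(attributes = fc$objects)
S$assign(Earth = 1, Mars = 1)
fc$intent(S) # equivalently fc$uparrow(S)
# Extent: objects sharing a set of attributes
A <- Set$new(attributes = fc$attributes)
A$assign(moon = 1)
fc$extent(A) # equivalently fc$downarrow(A)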
Method closure(): Get the closure of a fuzzy set of attributes
Usage:
FormalContext$closure(S)
Arguments:
S (Set) The set of attributes to compute the closure for.
Returns: A Set with the closure.
Method obj_concept(): Object Concept
Usage:
FormalContext$obj_concept(object)
Arguments:
object (character) Name of the object to compute its associated concept
Returns: The object concept associated to the object given.
Method att_concept(): Attribute Concept
Usage:
FormalContext$att_concept(attribute)
Arguments:
attribute (character) Name of the attribute to compute its associated concept
Returns: The attribute concept associated to the attribute given.
Method is_concept(): Is a Concept?
Usage:
FormalContext$is_concept(C)
Arguments:
C A Concept object
Returns: TRUE if C is a concept.
Method is_closed(): Testing closure of attribute sets
Usage:
FormalContext$is_closed(S)
Arguments:
S A Set of attributes
Returns: TRUE if the set S is closed in this formal context.
Method clarify(): Clarify a formal context
Usage:
FormalContext$clarify(copy = FALSE)
Arguments:
copy (logical) If TRUE, a new FormalContext object is created with the clarified context, oth-
erwise the current one is overwritten.
Returns: The clarified FormalContext.
Method reduce(): Reduce a formal context
Usage:
FormalContext$reduce(copy = FALSE)
Arguments:
copy (logical) If TRUE, a new FormalContext object is created with the clarified and reduced
context, otherwise the current one is overwritten.
Returns: The clarified and reduced FormalContext.
Method standardize(): Build the Standard Context
Usage:
FormalContext$standardize()
Details: All concepts must be previously computed.
Returns: The standard context using the join- and meet- irreducible elements.
Method find_concepts(): Use Ganter Algorithm to compute concepts
Usage:
FormalContext$find_concepts(verbose = FALSE)
Arguments:
verbose (logical) TRUE will provide a verbose output.
Returns: A list with all the concepts in the formal context.
Method find_implications(): Use modified Ganter algorithm to compute both concepts and
implications
Usage:
FormalContext$find_implications(save_concepts = TRUE, verbose = FALSE)
Arguments:
save_concepts (logical) TRUE will also compute and save the concept lattice. FALSE is usually
faster, since it only computes implications.
verbose (logical) TRUE will provide a verbose output.
Returns: Nothing, just updates the internal fields concepts and implications.
Method to_transactions(): Convert the formal context to object of class transactions from
the arules package
Usage:
FormalContext$to_transactions()
Returns: A transactions object.
Method save(): Save a FormalContext to RDS or CXT format
Usage:
FormalContext$save(filename = tempfile(fileext = ".rds"))
Arguments:
filename (character) Path of the file where to store the FormalContext.
Details: The format is inferred from the extension of the filename.
Returns: Invisibly the current FormalContext.
Method load(): Load a FormalContext from a file
Usage:
FormalContext$load(filename)
Arguments:
filename (character) Path of the file to load the FormalContext from.
Details: Currently, only RDS, CSV and CXT files are supported.
Returns: The loaded FormalContext.
Method dim(): Dimensions of the formal context
Usage:
FormalContext$dim()
Returns: A vector with (number of objects, number of attributes).
Method print(): Prints the formal context
Usage:
FormalContext$print()
Returns: Prints information regarding the formal context.
Method to_latex(): Write the context in LaTeX format
Usage:
FormalContext$to_latex(table = TRUE, label = "", caption = "")
Arguments:
table (logical) If TRUE, surrounds everything between \begin{table} and \end{table}.
label (character) The label for the table environment.
caption (character) The caption of the table.
fraction (character) If none, no fractions are produced. Otherwise, if it is frac, dfrac or
sfrac, decimal numbers are represented as fractions with the corresponding LaTeX typesetting.
Returns: A table environment in LaTeX.
Method incidence(): Incidence matrix of the formal context
Usage:
FormalContext$incidence()
Returns: The incidence matrix of the formal context
Examples:
fc <- FormalContext$new(planets)
fc$incidence()
Method subcontext(): Subcontext of the formal context
Usage:
FormalContext$subcontext(objects, attributes)
Arguments:
objects (character array) Name of the objects to keep.
attributes (character array) Names of the attributes to keep.
Details: A warning will be issued if any of the names is not present in the list of objects or
attributes of the formal context.
If objects or attributes is empty, then it is assumed to represent the whole set of objects or
attributes of the original formal context.
Returns: Another FormalContext that is a subcontext of the original one, with only the objects
and attributes selected.
Examples:
fc <- FormalContext$new(planets)
fc$subcontext(attributes = c("moon", "no_moon"))
Method [(): Subcontext of the formal context
Usage:
FormalContext$[(objects, attributes)
Arguments:
objects (character array) Name of the objects to keep.
attributes (character array) Names of the attributes to keep.
Details: A warning will be issued if any of the names is not present in the list of objects or
attributes of the formal context.
If objects or attributes is empty, then it is assumed to represent the whole set of objects or
attributes of the original formal context.
Returns: Another FormalContext that is a subcontext of the original one, with only the objects
and attributes selected.
Examples:
fc <- FormalContext$new(planets)
fc[, c("moon", "no_moon")]
Method plot(): Plot the formal context table
Usage:
FormalContext$plot(to_latex = FALSE, ...)
Arguments:
to_latex (logical) If TRUE, export the plot as a tikzpicture environment that can be included
in a LaTeX file.
... Other parameters to be passed to the tikzDevice that renders the lattice in LaTeX, or for
the figure caption. See Details.
Details: Particular parameters that control the size of the tikz output are: width, height
(both in inches), and pointsize (in points), that should be set to the font size used in the
documentclass header in the LaTeX file where the code is to be inserted.
If a caption is provided, the whole tikz picture will be wrapped by a figure environment and
the caption set.
Returns: If to_latex is FALSE, it returns nothing, just plots the graph of the formal context.
Otherwise, this function returns the LaTeX code to reproduce the formal context plot.
Method use_logic(): Sets the logic to use
Usage:
FormalContext$use_logic(name = available_logics())
Arguments:
name The name of the logic to use. To see the available names, run available_logics().
Method get_logic(): Gets the logic used
Usage:
FormalContext$get_logic()
Returns: A string with the name of the logic.
Method use_connection(): Sets the name of the Galois connection to use
Usage:
FormalContext$use_connection(connection)
Arguments:
connection The name of the Galois connection. Available connections are "standard" (antitone), "benevolent1" and "benevolent2" (isotone).
Method get_connection(): Gets the name of the Galois connection
Usage:
FormalContext$get_connection()
Returns: A string with the name of the Galois connection
Method clone(): The objects of this class are cloneable with this method.
Usage:
FormalContext$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
References
Guigues J, <NAME> (1986). “Familles minimales d’implications informatives résultant d’un
tableau de données binaires.” Mathématiques et Sciences humaines, 95, 5-18.
<NAME>, <NAME> (1999). Formal concept analysis : mathematical foundations. Springer. ISBN
3540627715.
Belohlavek R (2002). “Algorithms for fuzzy concept lattices.” In Proc. Fourth Int. Conf. on Recent
Advances in Soft Computing. Nottingham, United Kingdom, 200-205.
<NAME>, <NAME>, <NAME> (2005). “arules - a computational environment for mining association
rules and frequent item sets.” J Stat Softw, 14, 1-25.
Examples
# Build and print the formal context
fc_planets <- FormalContext$new(planets)
print(fc_planets)
# Define a set of attributes
S <- Set$new(attributes = fc_planets$attributes)
S$assign(moon = 1, large = 1)
# Compute the closure of S
Sc <- fc_planets$closure(S)
# Is Sc a closed set?
fc_planets$is_closed(Sc)
# Clarify and reduce the formal context
fc2 <- fc_planets$reduce(TRUE)
# Find implications
fc_planets$find_implications()
# Read a formal context from CSV
filename <- system.file("contexts", "airlines.csv", package = "fcaR")
fc <- FormalContext$new(filename)
# Read a formal context from a CXT file
filename <- system.file("contexts", "lives_in_water.cxt", package = "fcaR")
fc <- FormalContext$new(filename)
## ------------------------------------------------
## Method `FormalContext$scale`
## ------------------------------------------------
filename <- system.file("contexts", "aromatic.csv", package = "fcaR")
fc <- FormalContext$new(filename)
fc$scale("nitro", "ordinal", comparison = `>=`, values = 1:3)
fc$scale("OS", "nominal", c("O", "S"))
fc$scale(attributes = "ring", type = "nominal")
## ------------------------------------------------
## Method `FormalContext$get_scales`
## ------------------------------------------------
filename <- system.file("contexts", "aromatic.csv", package = "fcaR")
fc <- FormalContext$new(filename)
fc$scale("nitro", "ordinal", comparison = `>=`, values = 1:3)
fc$scale("OS", "nominal", c("O", "S"))
fc$scale(attributes = "ring", type = "nominal")
fc$get_scales()
## ------------------------------------------------
## Method `FormalContext$background_knowledge`
## ------------------------------------------------
filename <- system.file("contexts", "aromatic.csv", package = "fcaR")
fc <- FormalContext$new(filename)
fc$scale("nitro", "ordinal", comparison = `>=`, values = 1:3)
fc$scale("OS", "nominal", c("O", "S"))
fc$scale(attributes = "ring", type = "nominal")
fc$background_knowledge()
## ------------------------------------------------
## Method `FormalContext$incidence`
## ------------------------------------------------
fc <- FormalContext$new(planets)
fc$incidence()
## ------------------------------------------------
## Method `FormalContext$subcontext`
## ------------------------------------------------
fc <- FormalContext$new(planets)
fc$subcontext(attributes = c("moon", "no_moon"))
## ------------------------------------------------
## Method `FormalContext$[`
## ------------------------------------------------
fc <- FormalContext$new(planets)
fc[, c("moon", "no_moon")]
ImplicationSet R6 Class for Set of implications
Description
This class implements the structure needed to store implications and the methods associated.
Methods
Public methods:
• ImplicationSet$new()
• ImplicationSet$get_attributes()
• ImplicationSet$[()
• ImplicationSet$to_arules()
• ImplicationSet$add()
• ImplicationSet$cardinality()
• ImplicationSet$is_empty()
• ImplicationSet$size()
• ImplicationSet$closure()
• ImplicationSet$recommend()
• ImplicationSet$apply_rules()
• ImplicationSet$to_basis()
• ImplicationSet$print()
• ImplicationSet$to_latex()
• ImplicationSet$get_LHS_matrix()
• ImplicationSet$get_RHS_matrix()
• ImplicationSet$filter()
• ImplicationSet$support()
• ImplicationSet$clone()
Method new(): Initialize with an optional name
Usage:
ImplicationSet$new(...)
Arguments:
... See Details.
Details: Creates and initializes a new ImplicationSet object. It can be done in two ways:
initialize(name, attributes, lhs, rhs) or initialize(rules)
In the first way, the only mandatory argument is attributes (character vector), which is a vector of names of the attributes on which we define the implications. Optional arguments are: name (character string), the name of the implication set; lhs (a dgCMatrix), the initial LHS of the stored implications; and the analogous rhs.
The other way is used to initialize the ImplicationSet object from a rules object from package arules.
Returns: A new ImplicationSet object.
Method get_attributes(): Get the names of the attributes
Usage:
ImplicationSet$get_attributes()
Returns: A character vector with the names of the attributes used in the implications.
Method [(): Get a subset of the implication set
Usage:
ImplicationSet$[(idx)
Arguments:
idx (integer or logical vector) Indices of the implications to extract or remove. If logical vector,
only TRUE elements are retained and the rest discarded.
Returns: A new ImplicationSet with only the rules given by the idx indices (if all idx > 0), or all but those rules (if all idx < 0).
Method to_arules(): Convert to arules format
Usage:
ImplicationSet$to_arules(quality = TRUE)
Arguments:
quality (logical) Compute the interest measures for each rule?
Returns: A rules object as used by package arules.
Method add(): Add a precomputed implication set
Usage:
ImplicationSet$add(...)
Arguments:
... An ImplicationSet object, a rules object, or a pair lhs, rhs of Set objects or dgCMatrix.
The implications to add to this formal context.
Returns: Nothing, just updates the internal implications field.
Method cardinality(): Cardinality: Number of implications in the set
Usage:
ImplicationSet$cardinality()
Returns: The cardinality of the implication set.
Method is_empty(): Empty set
Usage:
ImplicationSet$is_empty()
Returns: TRUE if the set of implications is empty, FALSE otherwise.
Method size(): Size: number of attributes in each of LHS and RHS
Usage:
ImplicationSet$size()
Returns: A vector with two components: the number of attributes present in each of the LHS
and RHS of each implication in the set.
Method closure(): Compute the semantic closure of a fuzzy set with respect to the implication
set
Usage:
ImplicationSet$closure(S, reduce = FALSE, verbose = FALSE)
Arguments:
S (a Set object) Fuzzy set to compute its closure. Use class Set to build it.
reduce (logical) Reduce the implications using simplification logic?
verbose (logical) Show verbose output?
Returns: If reduce == FALSE, the output is a fuzzy set corresponding to the closure of S. If
reduce == TRUE, a list with two components: closure, with the closure as above, and implications,
the reduced set of implications.
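A hedged sketch of computing a semantic closure against an extracted basis:
fc <- FormalContext$new(planets)
fc$find_implications()
S <- Set$new(attributes = fc$attributes)
S$assign(moon = 1, large = 1)
# Plain closure
fc$implications$closure(S)
# Closure together with the simplified implication set
res <- fc$implications$closure(S, reduce = TRUE)
res$closure
res$implications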
Method recommend(): Generate a recommendation for a subset of the attributes
Usage:
ImplicationSet$recommend(S, attribute_filter)
Arguments:
S (a vector) Vector with the grades of each attribute (a fuzzy set).
attribute_filter (character vector) Names of the attributes to get recommendation for.
Returns: A fuzzy set describing the values of the attributes in attribute_filter within the
closure of S.
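A hedged sketch of a recommendation query; S is passed here as a Set, although the argument description above mentions a grades vector, so the accepted form may depend on the package version:
fc <- FormalContext$new(planets)
fc$find_implications()
# Known attribute grades for a partially described object
S <- Set$new(attributes = fc$attributes)
S$assign(moon = 1, far = 1)
# Ask only about the size-related attributes
fc$implications$recommend(S, attribute_filter = c("small", "medium", "large"))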
Method apply_rules(): Apply rules to remove redundancies
Usage:
ImplicationSet$apply_rules(
rules = c("composition", "generalization"),
batch_size = 25000L,
parallelize = FALSE,
reorder = FALSE
)
Arguments:
rules (character vector) Names of the rules to use. See details.
batch_size (integer) If the number of rules is large, apply the rules by batches of this size.
parallelize (logical) If possible, should we parallelize the computation among different batches?
reorder (logical) Should the rules be randomly reordered previous to the computation?
Details: Currently, the implemented rules are "generalization", "simplification", "reduction"
and "composition".
Returns: Nothing, just updates the internal matrices for LHS and RHS.
Method to_basis(): Convert Implications to Canonical Basis
Usage:
ImplicationSet$to_basis()
Returns: The canonical basis of implications obtained from the current ImplicationSet
Method print(): Print all implications to text
Usage:
ImplicationSet$print()
Returns: A string with all the implications in the set.
Method to_latex(): Export to LaTeX
Usage:
ImplicationSet$to_latex(
print = TRUE,
ncols = 1,
numbered = TRUE,
numbers = seq(self$cardinality())
)
Arguments:
print (logical) Print to output?
ncols (integer) Number of columns for the output.
numbered (logical) If TRUE (default), implications will be numbered in the output.
numbers (vector) If numbered, use these elements to enumerate the implications. The default
is to enumerate 1, 2, ..., but can be changed.
Returns: A string in LaTeX format that prints nicely all the implications.
Method get_LHS_matrix(): Get internal LHS matrix
Usage:
ImplicationSet$get_LHS_matrix()
Returns: A sparse matrix representing the LHS of the implications in the set.
Method get_RHS_matrix(): Get internal RHS matrix
Usage:
ImplicationSet$get_RHS_matrix()
Returns: A sparse matrix representing the RHS of the implications in the set.
Method filter(): Filter implications by attributes in LHS and RHS
Usage:
ImplicationSet$filter(
lhs = NULL,
not_lhs = NULL,
rhs = NULL,
not_rhs = NULL,
drop = FALSE
)
Arguments:
lhs (character vector) Names of the attributes to filter the LHS by. If NULL, no filtering is done
on the LHS.
not_lhs (character vector) Names of the attributes to not include in the LHS. If NULL (the
default), it is not considered at all.
rhs (character vector) Names of the attributes to filter the RHS by. If NULL, no filtering is done
on the RHS.
not_rhs (character vector) Names of the attributes to not include in the RHS. If NULL (the
default), it is not considered at all.
drop (logical) Remove the rest of attributes in RHS?
Returns: An ImplicationSet that is a subset of the current set, containing only those rules which have the attributes in lhs and rhs in their LHS and RHS, respectively.
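A brief, hedged example of filtering the planets implication basis (the chosen attributes are arbitrary, so the result may well be empty):
fc_planets <- FormalContext$new(planets)
fc_planets$find_implications()
# Keep only implications with "moon" in the LHS and "far" in the RHS
fc_planets$implications$filter(lhs = "moon", rhs = "far")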
Method support(): Compute support of each implication
Usage:
ImplicationSet$support()
Returns: A vector with the support of each implication
Method clone(): The objects of this class are cloneable with this method.
Usage:
ImplicationSet$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
References
<NAME>, <NAME> (2016). Conceptual Exploration. Springer. https://doi.org/10.1007/978-3-662-49291-8
<NAME>, <NAME>, <NAME> (2005). “arules - a computational environment for mining association
rules and frequent item sets.” J Stat Softw, 14, 1-25.
<NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2016). “Automated prover for attribute
dependencies in data with grades.” International Journal of Approximate Reasoning, 70, 51-67.
<NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2012). “Closure via functional dependence
simplification.” International Journal of Computer Mathematics, 89(4), 510-526.
Examples
# Build a formal context
fc_planets <- FormalContext$new(planets)
# Find its implication basis
fc_planets$find_implications()
# Print implications
fc_planets$implications
# Cardinality and mean size in the ruleset
fc_planets$implications$cardinality()
sizes <- fc_planets$implications$size()
colMeans(sizes)
# Simplify the implication set
fc_planets$implications$apply_rules("simplification")
parse_implication Parses a string into an implication
Description
Parses a string into an implication
Usage
parse_implication(string, attributes)
Arguments
string (character) The string to be parsed
attributes (character vector) The attributes’ names
Value
Two vectors as sparse matrices representing the LHS and RHS of the implication
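A hedged sketch, using the string format described for parse_implications below (attributes separated by commas, LHS and RHS separated by ->):
attrs <- c("moon", "large", "far", "near")
# LHS and RHS of the implication, returned as sparse vectors
parse_implication("moon, large -> far", attributes = attrs)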
parse_implications Parses several implications given as a string
Description
Parses several implications given as a string
Usage
parse_implications(input)
Arguments
input (character) The string with the implications or a file containing the implications
Details
The format for the input file is:
• Every implication in its own line or separated by semicolon (;)
• Attributes are separated by commas (,)
• The LHS and RHS of each implication are separated by an arrow (->)
Value
An ImplicationSet
Examples
input <- system.file("implications", "ex_implications", package = "fcaR")
imps <- parse_implications(input)
planets Planets data
Description
This dataset records some properties of the planets in our solar system.
Usage
planets
Format
A matrix with 9 rows (the planets) and 7 columns, representing additional features of the planets:
small 1 if the planet is small, 0 otherwise.
medium 1 if the planet is medium-sized, 0 otherwise.
large 1 if the planet is large, 0 otherwise.
near 1 if the planet belongs in the inner solar system, 0 otherwise.
far 1 if the planet belongs in the outer solar system, 0 otherwise.
moon 1 if the planet has a natural moon, 0 otherwise.
no_moon 1 if the planet has no moon, 0 otherwise.
Source
Wille R (1982). “Restructuring Lattice Theory: An Approach Based on Hierarchies of Concepts.”
In Ordered Sets, pp. 445–470. Springer.
scalingRegistry Scaling Registry
Description
Scaling Registry
Usage
scalingRegistry
Format
An object of class scaling_registry (inherits from registry) of length 6.
Details
This is a registry that stores the implemented scales that can be applied using the scale() method in a FormalContext.
One can obtain the list of available scales by: scalingRegistry$get_entry_names()
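For example (assuming fcaR is attached):
library(fcaR)
# Scaling types understood by FormalContext$scale()
scalingRegistry$get_entry_names()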
Set R6 class for a fuzzy set with sparse internal representation
Description
This class implements the data structure and methods for fuzzy sets.
Methods
Public methods:
• Set$new()
• Set$assign()
• Set$[()
• Set$cardinal()
• Set$get_vector()
• Set$get_attributes()
• Set$length()
• Set$print()
• Set$to_latex()
• Set$clone()
Method new(): Creator for objects of class Set
Usage:
Set$new(attributes, M = NULL, ...)
Arguments:
attributes (character vector) Names of the attributes that will be available in the fuzzy set.
M (numeric vector or column Matrix) Values (grades) to be assigned to the attributes.
... key = value pairs, where the value value is assigned to the key attribute name.
Details: If M is omitted and no key = value pairs are given, the fuzzy set is the empty set. Later, one can use the assign method to assign grades to any of its attributes.
Returns: An object of class Set.
Method assign(): Assign grades to attributes in the set
Usage:
Set$assign(attributes = c(), values = c(), ...)
Arguments:
attributes (character vector) Names of the attributes to assign a grade to.
values (numeric vector) Grades to be assigned to the previous attributes.
... key = value pairs, where the value value is assigned to the key attribute name.
Details: One can use either of: S$assign(A = 1, B = 0.3) or S$assign(attributes = c("A", "B"), values = c(1, 0.3)).
Method [(): Get elements by index
Usage:
Set$[(indices)
Arguments:
indices (numeric, logical or character vector) The indices of the elements to return. It can be
a vector of logicals where TRUE elements are to be retained.
Returns: A Set but with only the required elements.
Method cardinal(): Cardinal of the Set
Usage:
Set$cardinal()
Returns: the cardinal of the Set, counted as the sum of the degrees of each element.
Method get_vector(): Internal Matrix
Usage:
Set$get_vector()
Returns: The internal sparse Matrix representation of the set.
Method get_attributes(): Attributes defined for the set
Usage:
Set$get_attributes()
Returns: A character vector with the names of the attributes.
Method length(): Number of attributes
Usage:
Set$length()
Returns: The number of attributes that are defined for this fuzzy set.
Method print(): Prints the set to console
Usage:
Set$print(eol = TRUE)
Arguments:
eol (logical) If TRUE, adds an end of line to the output.
Returns: A string with the elements of the set and their grades between brackets.
Method to_latex(): Write the set in LaTeX format
Usage:
Set$to_latex(print = TRUE)
Arguments:
print (logical) Print to output?
Returns: The fuzzy set in LaTeX.
Method clone(): The objects of this class are cloneable with this method.
Usage:
Set$clone(deep = FALSE)
Arguments:
deep Whether to make a deep clone.
Examples
S <- Set$new(attributes = c("A", "B", "C"))
S$assign(A = 1)
print(S)
S$to_latex()
S <- Set$new(c("A", "B", "C"), C = 1, B = 0.5)
S
vegas Data for Tourist Destination in Las Vegas
Description
The dataset vegas is the binary translation of the Las Vegas Strip dataset (Moro et al., 2017; see Source below), which records more than 500 TripAdvisor reviews of hotels on the Las Vegas Strip. The uninformative attributes (such as the user continent or the weekday of the review) are removed.
Usage
vegas
Format
A matrix with 504 rows and 25 binary columns. Column names are related to different features of
the hotels:
Period of Stay 4 categories are present in the original data, which produces as many binary variables: Period of stay=Dec-Feb, Period of stay=Mar-May, Period of stay=Jun-Aug and
Period of stay=Sep-Nov.
Traveler type Five binary categories are created from the original data: Traveler type=Business,
Traveler type=Couples, Traveler type=Families, Traveler type=Friends and Traveler
type=Solo.
Pool, Gym, Tennis court, Spa, Casino, Free internet Binary variables for the services offered by
each destination hotel
Stars Five binary variables are created, according to the number of stars of the hotel, Stars=3,
Stars=3.5, Stars=4, Stars=4.5 and Stars=5.
Score The score assigned in the review, from Score=1 to Score=5.
Source
<NAME>., <NAME>., & <NAME>. (2017). Stripping customers’ feedback on hotels through data
mining: The case of Las Vegas Strip. Tourism Management Perspectives, 23, 41-52.
%&% Intersection (Logical AND) of Fuzzy Sets
Description
Intersection (Logical AND) of Fuzzy Sets
Usage
S1 %&% S2
Arguments
S1 A Set
S2 A Set
Details
Both S1 and S2 must be Sets.
Value
Returns the intersection of S1 and S2.
Examples
# Build two sparse sets
S <- Set$new(attributes = c("A", "B", "C"))
S$assign(A = 1, B = 1)
T <- Set$new(attributes = c("A", "B", "C"))
T$assign(A = 1, C = 1)
# Intersection
S %&% T
%entails% Entailment between implication sets
Description
Entailment between implication sets
Usage
imps %entails% imps2
Arguments
imps (ImplicationSet) A set of implications.
imps2 (ImplicationSet) A set of implications which is tested to check if it follows
semantically from imps.
Value
A logical vector, where element k is TRUE if the k-th implication in imps2 follows from imps.
Examples
fc <- FormalContext$new(planets)
fc$find_implications()
imps <- fc$implications[1:4]$clone()
imps2 <- fc$implications[3:6]$clone()
imps %entails% imps2
%==% Equality in Sets and Concepts
Description
Equality in Sets and Concepts
Usage
C1 %==% C2
Arguments
C1 A Set or Concept
C2 A Set or Concept
Details
Both C1 and C2 must be of the same class.
Value
Returns TRUE if C1 is equal to C2.
Examples
# Build two sparse sets
S <- Set$new(attributes = c("A", "B", "C"))
S$assign(A = 1)
T <- Set$new(attributes = c("A", "B", "C"))
T$assign(A = 1)
# Test whether S and T are equal
S %==% T
%-% Difference in Sets
Description
Difference in Sets
Usage
S1 %-% S2
Arguments
S1 A Set
S2 A Set
Details
Both S1 and S2 must be Sets.
Value
Returns the difference S1 - S2.
Examples
# Build two sparse sets
S <- Set$new(attributes = c("A", "B", "C"))
S$assign(A = 1, B = 1)
T <- Set$new(attributes = c("A", "B", "C"))
T$assign(A = 1)
# Difference
S %-% T
%holds_in% Implications that hold in a Formal Context
Description
Implications that hold in a Formal Context
Usage
imps %holds_in% fc
Arguments
imps (ImplicationSet) The set of implications to test if hold in the formal context.
fc (FormalContext) A formal context where to test if the implications hold.
Value
A logical vector, indicating if each implication holds in the formal context.
Examples
fc <- FormalContext$new(planets)
fc$find_implications()
imps <- fc$implications$clone()
imps %holds_in% fc
%<=% Partial Order in Sets and Concepts
Description
Partial Order in Sets and Concepts
Usage
C1 %<=% C2
Arguments
C1 A Set or Concept
C2 A Set or Concept
Details
Both C1 and C2 must be of the same class.
Value
Returns TRUE if concept C1 is subconcept of C2 or if set C1 is subset of C2.
Examples
# Build two sparse sets
S <- Set$new(attributes = c("A", "B", "C"))
S$assign(A = 1)
T <- Set$new(attributes = c("A", "B", "C"))
T$assign(A = 1, B = 1)
# Test whether S is subset of T
S %<=% T
%or% Union (Logical OR) of Fuzzy Sets
Description
Union (Logical OR) of Fuzzy Sets
Usage
S1 %|% S2
Arguments
S1 A Set
S2 A Set
Details
Both S1 and S2 must be Sets.
Value
Returns the union of S1 and S2.
Examples
# Build two sparse sets
S <- Set$new(attributes = c("A", "B", "C"))
S$assign(A = 1, B = 1)
T <- Set$new(attributes = c("A", "B", "C"))
T$assign(C = 1)
# Union
S %|% T
%respects% Check if Set or FormalContext respects an ImplicationSet
Description
Check if Set or FormalContext respects an ImplicationSet
Usage
set %respects% imps
Arguments
set (list of Sets, or a FormalContext) The sets of attributes to check whether they
respect the ImplicationSet.
imps (ImplicationSet) The set of implications to check.
Value
A logical matrix with as many rows as Sets and as many columns as implications in the ImplicationSet.
A TRUE in element (i, j) of the result means that the i-th Set respects the j-th implication of the
ImplicationSet.
Examples
fc <- FormalContext$new(planets)
fc$find_implications()
imps <- fc$implications$clone()
fc %respects% imps
%~% Equivalence of sets of implications
Description
Equivalence of sets of implications
Usage
imps %~% imps2
Arguments
imps A ImplicationSet.
imps2 Another ImplicationSet.
Value
TRUE if and only if imps and imps2 are equivalent, that is, if every implication in imps follows
from imps2 and vice versa.
Examples
fc <- FormalContext$new(planets)
fc$find_implications()
imps <- fc$implications$clone()
imps2 <- imps$clone()
imps2$apply_rules(c("simp", "rsimp"))
imps %~% imps2
imps %~% imps2[1:9]
gollum 0.6.0-dev documentation
---
Welcome to Gollum’s documentation![¶](#welcome-to-gollum-s-documentation)
===
What is Gollum?[¶](#what-is-gollum)
---
Gollum is an n:m multiplexer that gathers messages from different sources and broadcasts them to a set of destinations.
Gollum originally started as a tool to **MUL**-tiplex **LOG**-files (read it backwards to get the name).
It quickly evolved to a one-way router for all kinds of messages, not limited to just logs.
Gollum is written in Go to make it scalable and easy to extend without the need to use a scripting language.
Terminology[¶](#terminology)
---
The main components of Gollum are consumers, streams and producers. To explain these, it helps to imagine looking at Gollum “from the outside”.
| Message: | A single set of data passing over a stream is called a message. |
| Metadata: | An optional part of a message. It can contain key/value pairs with additional information or content. |
| Stream: | A stream defines a path between one or more consumers, routers and producers. |
| Consumer: | A consumer creates messages by “consuming” a specific data source. This can be anything: files, ports, external services and so on. |
| Producer: | A producer processes received messages and “produces” something with them, such as writing to files or ports, or sending to external services. |
| Router: | A router receives messages from specific source stream(s) and forwards them to target stream(s). |
| Modulator: | A modulator can be a Filter or Formatter which “modulates” a message. |
| Formatter: | A formatter can modulate the payload of a message like convert a plain-text to JSON. |
| Filter: | A filter can inspect a message to decide whether to drop the message or to let it pass. |
These main components (consumers, routers, producers, filters and formatters) are built upon a plugin architecture.
This allows each component to be exchanged and configured individually with a different set of options.
Configuration[¶](#configuration)
---
A Gollum configuration file is written in YAML and may contain any number of plugins.
Multiple plugins of the same type are possible, too.
The Gollum core does not make any assumption over the type of data you are processing.
Plugins however may do that. So it is up to the person configuring Gollum to ensure valid data is passed from consumers to producers.
Formatters can help to achieve this.
Table of contents[¶](#table-of-contents)
---
### Instructions[¶](#instructions)
#### Installation[¶](#installation)
##### Latest Release[¶](#latest-release)
You can download a compressed pre-compiled binary from [github releases](https://github.com/trivago/gollum/releases):
```
# linux based example
curl -L https://github.com/trivago/gollum/releases/download/v0.4.5/gollum-0.4.5-Linux_x64.zip -o gollum.zip
unzip -o gollum.zip
chmod 0755 gollum
./gollum --help
```
##### From source[¶](#from-source)
Installation from source requires the installation of the [go toolchain](http://golang.org/).
Gollum needs at least Go version 1.7 and supports the Go 1.5 vendor experiment, which is automatically enabled when using the provided makefile.
With Go 1.7 and later you can also use **go build** directly without additional modifications.
Builds with Go 1.6 or earlier versions are not officially supported and might require additional steps and modifications.
```
# checkout
mkdir -p $(GOPATH)/src/github.com/trivago
cd $(GOPATH)/src/github.com/trivago
git clone git@github.com:trivago/gollum.git
cd gollum
# run tests and compile
make test
./gollum --help
```
You can use the makefile that comes with gollum to trigger cross-platform builds.
Make will produce ready to deploy .zip files with the corresponding platform builds inside the dist folder.
###### Build[¶](#build)
Building gollum is as easy as make or go build.
If you want to do cross platform builds use make all or specify one of the following platforms instead of “all”:
| current: | build for current OS (default) |
| freebsd: | build for FreeBSD |
| linux: | build for Linux x64 |
| mac: | build for MacOS X |
| pi: | build for Linux ARM |
| win: | build for Windows |
| debug: | build for current OS with debug compiler flags |
| clean: | clean all artifacts created by the build process |
##### Docker[¶](#docker)
The repository contains a Dockerfile which enables you to build and run gollum inside a Docker container.
```
docker build -t trivago/gollum .
docker run -it --rm trivago/gollum -c config/profile.conf -ps -ll 3
```
To use your own configuration you could run:
```
docker run -it --rm -v /path/to/config.conf:/etc/gollum/gollum.conf:ro trivago/gollum -c /etc/gollum/gollum.conf
```
#### Usage[¶](#usage)
##### Commandline[¶](#commandline)
Gollum goes into an infinite loop once started.
You can shut down gollum by sending SIG_INT (i.e. Ctrl+C), SIG_TERM or SIG_KILL.
Gollum has several commandline options that can be accessed by starting Gollum without any parameters:
| `-h, -help` | Print this help message. |
| `-v, -version` | Print version information and quit. |
| `-r, -runtime` | Print runtime information and quit. |
| `-l, -list` | Print plugin information and quit. |
| `-c, -config` | Use a given configuration file. |
| `-tc, -testconfig` |
| | Test the given configuration file and exit. |
| `-ll, -loglevel` | Set the loglevel [0-3] as in {0=Error, 1=+Warning, 2=+Info, 3=+Debug}. |
| `-lc, -log-colors` |
| | Use Logrus’s “colored” log format. One of “never”, “auto” (default), “always” |
| `-n, -numcpu` | Number of CPUs to use. Set 0 for all CPUs (respects cgroup limits). |
| `-p, -pidfile` | Write the process id into a given file. |
| `-m, -metrics` | Address to use for metric queries. Disabled by default. |
| `-hc, -healthcheck` |
| | Listening address ([IP]:PORT) to use for healthcheck HTTP endpoint. Disabled by default. |
| `-pc, -profilecpu` |
| | Write CPU profiler results to a given file. |
| `-pm, -profilemem` |
| | Write heap profile results to a given file. |
| `-ps, -profilespeed` |
| | Write msg/sec measurements to log. |
| `-pt, -profiletrace` |
| | Write profile trace results to a given file. |
| `-t, -trace` | Write message trace results to the _TRACE_ stream. |
##### Running Gollum[¶](#running-gollum)
Typically you start Gollum with the configuration file that defines your pipeline.
Configuration files are written in the YAML format and have to be loaded via command line switch.
Each plugin has a different set of configuration options which are currently described in the plugin itself, i.e. you can find examples in the [github wiki](https://github.com/trivago/gollum/wiki).
```
# starts a gollum process
gollum -c path/to/your/config.yaml
```
Here is a minimal console example to run Gollum:
```
# create a minimal config
echo \
{StdIn: {Type: consumer.Console, Streams: console}, StdOut: {Type: producer.Console, Streams: console}} \
> example_conf.yaml

# starts a gollum process
gollum -c example_conf.yaml -ll 3
```
#### Metrics[¶](#metrics)
Gollum provides various metrics which can be used for monitoring or controlling.
##### Collecting metrics[¶](#collecting-metrics)
To collect metrics you need to start the gollum process with the “-m <address:port>” [option](http://gollum.readthedocs.io/en/latest/src/instructions/usage.html#commandline).
If gollum is running with the “-m” option, you can retrieve all collected metrics via a TCP request; the response is in JSON format.
Example request:
```
# start gollum on host
gollum -m 8080 -c /my/config/file.conf

# get metrics by curl and prettify response by python
curl 127.0.0.1:8080 | python -m json.tool

# alternative by netcat
nc -d 127.0.0.1 8080 | python -m json.tool
```
Example response
```
{
"Consumers": 3,
"GoMemoryAllocated": 21850200,
"GoMemoryGCEnabled": 1,
"GoMemoryNumObjects": 37238,
"GoRoutines": 20,
"GoVersion": 10803,
"Messages:Discarded": 0,
"Messages:Discarded:AvgPerSec": 0,
"Messages:Enqueued": 13972236,
"Messages:Enqueued:AvgPerSec": 1764931,
"Messages:Routed": 13972233,
"Messages:Routed:AvgPerSec": 1764930,
"Plugins:ActiveWorkers": 3,
"Plugins:State:Active": 4,
"Plugins:State:Dead": 0,
"Plugins:State:Initializing": 0,
"Plugins:State:PrepareStop": 0,
"Plugins:State:Stopping": 0,
"Plugins:State:Waiting": 0,
"ProcessStart": 1501855102,
"Producers": 1,
"Routers": 2,
"Routers:Fallback": 2,
"Stream:profile:Messages:Discarded": 0,
"Stream:profile:Messages:Discarded:AvgPerSec": 0,
"Stream:profile:Messages:Routed": 13972239,
"Stream:profile:Messages:Routed:AvgPerSec": 1768878,
"Version": 500
}
```
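For programmatic monitoring the same data can be read by a small client. The following is a minimal Go sketch (not part of Gollum itself); it assumes a gollum instance started with “-m 127.0.0.1:8080” and that the endpoint writes a single JSON document per connection, as the nc example above suggests:
```
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net"
	"time"
)

func main() {
	// Assumption: gollum was started with "-m 127.0.0.1:8080".
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8080", 2*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Read the single JSON document the metrics endpoint writes per connection.
	raw, err := ioutil.ReadAll(conn)
	if err != nil {
		panic(err)
	}

	metrics := map[string]int64{}
	if err := json.Unmarshal(raw, &metrics); err != nil {
		panic(err)
	}
	fmt.Println("Messages:Routed =", metrics["Messages:Routed"])
}
```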
##### Metrics overview[¶](#metrics-overview)
###### Global metrics[¶](#global-metrics)
**Consumers**
> Number of current active consumers.
**GoMemoryAllocated**
> Current allocated memory in bytes.
**GoMemoryGCEnabled**
> Indicates that GC is enabled.
**GoMemoryNumObjects**
> The number of allocated heap objects.
**GoRoutines**
> The number of active go routines.
**GoVersion**
> The golang version in number format.
**Messages:Discarded**
> The count of discarded messages over all.
**Messages:Discarded:AvgPerSec**
> The average of discarded messages from the last seconds.
**Messages:Enqueued**
> The count of enqueued messages over all.
**Messages:Enqueued:AvgPerSec**
> The average of enqueued messages from the last seconds.
**Messages:Routed**
> The count of routed messages over all.
**Messages:Routed:AvgPerSec**
> The average of routed messages from the last seconds.
**Plugins:ActiveWorkers**
> Number of active worker (plugin) processes.
**Plugins:State:<STATE>**
> Number of plugins in specific states. The following states are possible for plugins:
> * Active
> * Dead
> * Initializing
> * PrepareStop
> * Stopping
> * Waiting
**ProcessStart**
> Timestamp of the process start time.
**Producers**
> Number of current active producers.
**Routers**
> Number of current active routers.
**Routers:Fallback**
> Number of current active “fallback” (auto created) routers.
**Version**
> Gollum version as numeric value.
###### Stream based metrics[¶](#stream-based-metrics)
**Stream:<STREAM_NAME>:Messages:Discarded**
> The count of discarded messages for a specific stream.
**Stream:<STREAM_NAME>:Messages:Discarded:AvgPerSec**
> The average of discarded messages from the last seconds for a specific stream.
**Stream:<STREAM_NAME>:Messages:Routed**
> The count of routed messages for a specific stream.
**Stream:<STREAM_NAME>:Messages:Routed:AvgPerSec**
> The average of routed messages from the last seconds for a specific stream.
#### Health checks[¶](#health-checks)
Gollum provides optional HTTP endpoints for health checks.
To activate the health check endpoints you need to start the gollum process with the “-hc <address:port>” [option](http://gollum.readthedocs.io/en/latest/src/instructions/usage.html#commandline).
If gollum is running with the “-hc” option, you can request different HTTP endpoints to get global and per-plugin health status.
```
# start gollum on host with health check endpoints
gollum -hc 8080 -c /my/config/file.conf
```
##### Endpoints[¶](#endpoints)
**/_ALL_**
Request:
```
curl -i 127.0.0.1:8080/_ALL_
```
Response:
```
HTTP/1.1 200 OK
Date: Fri, 04 Aug 2017 16:03:22 GMT
Content-Length: 191
Content-Type: text/plain; charset=utf-8

/pluginID-A/pluginState 200 ACTIVE: Active
/pluginID-B/pluginState 200 ACTIVE: Active
/pluginID-C/pluginState 200 ACTIVE: Active
/pluginID-D/pluginState 200 ACTIVE: Active
/_PING_ 200 PONG
```
**/_PING_**
Request:
```
curl -i 127.0.0.1:8080/_PING_
```
Response:
```
HTTP/1.1 200 OK
Date: Fri, 04 Aug 2017 15:46:34 GMT
Content-Length: 5
Content-Type: text/plain; charset=utf-8

PONG
```
**/<PLUGIN_ID>/pluginState**
Request:
```
# example request with active `producer.benchmark`
curl -i 127.0.0.1:8080/pluginID-A/pluginState
```
Response:
```
HTTP/1.1 200 OK
Date: Fri, 04 Aug 2017 15:47:45 GMT
Content-Length: 15
Content-Type: text/plain; charset=utf-8

ACTIVE: Active
```
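For automated monitoring the _PING_ endpoint can be polled by a small client. The following is a minimal Go sketch (not part of Gollum itself), assuming a gollum instance started with “-hc 8080” as in the example above:
```
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Assumption: gollum was started with "-hc 8080".
	client := &http.Client{Timeout: 2 * time.Second}

	resp, err := client.Get("http://127.0.0.1:8080/_PING_")
	if err != nil {
		fmt.Println("unhealthy:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "PONG" {
		fmt.Println("healthy")
	} else {
		fmt.Println("unhealthy:", resp.Status)
	}
}
```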
#### Best practice[¶](#best-practice)
##### Managing your own plugins in a separate git repository[¶](#managing-own-plugins-in-a-seperate-git-repository)
You can add your own plugin module simply by using a git submodule:
```
git submodule add -f https://github.com/YOUR_NAMESPACE/YOUR_REPO.git contrib/YOUR_NAMESPACE
```
The .gitmodules file created by git will be ignored by the gollum repository.
To activate your plugin you need to create a contrib_loader.go so that gollum can be compiled with your own plugins.
```
package main
// This is a stub file to enable registration of vendor specific plugins that
// are placed in sub folders of this folder.
import (
_ "github.com/trivago/gollum/contrib/myPackage"
)
func init() {
}
```
You can also copy the existing contrib_loader.go.dist to contrib_loader.go and update the import path to your package:
```
cp contrib_loader.go.dist contrib_loader.go
# open contrib_loader.go with an editor
# update package path with your plugin's path
make build
```
You can also change the version string of your Gollum builds to include the version of your plugin.
Set the GOLLUM_RELEASE_SUFFIX variable either in the environment or as an argument to `make`:
```
# build Gollum with myPackage version suffixed to the Gollum version
# e.g.: 0.5.3-pkg0a01d7b6
make all GOLLUM_RELEASE_SUFFIX=pkg$(git -C contrib/myPackage describe --tags --always)
```
##### Use more Gollum processes for complex pipelines[¶](#use-more-gollum-processes-for-complex-pipelines)
If your pipeline contains many steps, also consider the separation of concerns (SoC) principle in your setup.
Split your configuration into smaller parts and start multiple Gollum processes to handle the pipeline steps.
#### Developing[¶](#developing)
##### Testing[¶](#testing)
Gollum provides unit tests, integration tests and a couple of linter checks, which also run regularly on [travis-ci](https://travis-ci.org/trivago/gollum).
You can run the tests with:
```
# run tests
make test

# run unit-test only
make unit

# run integration-test only
make integration
```
Here is an overview of all tests provided by the Makefile:
| make test: | Run go vet, golint, gofmt and go test |
| make unit: | Run go test -tags unit |
| make integration: |
| | Run go test -tags integration |
| make vet: | Run go vet |
| make lint: | Run golint |
| make fmt-check: | Run gofmt -l |
| make ineffassign: |
| | Install and run [ineffassign](https://github.com/gordonklaus/ineffassign) |
##### Debugging[¶](#debugging)
If you want to use [Delve](https://github.com/derekparker/delve) for debugging you need to build gollum with some additional flags.
You can use the predefined make command make debug:
```
# build for current OS with debug compiler flags
make debug
# or go build
# go build -ldflags='-s -linkmode=internal' -gcflags='-N -l'
```
With this debug build you are able to start a [Delve](https://github.com/derekparker/delve) remote debugger:
```
# for the gollum arguments please use this format: ./gollum -- -c my/config.conf
dlv --listen=:2345 --headless=true --api-version=2 --log exec ./gollum -- -c testing/configs/test_router.conf -ll 3
```
##### Profiling[¶](#profiling)
To test Gollum you can use the internal [profiler consumer](../gen/consumer/profiler.html) and the [benchmark producer](../gen/producer/benchmark.html).
* The [profiler consumer](../gen/consumer/profiler.html) allows you to create automatically messages with random payload.
* The [benchmark producer](../gen/producer/benchmark.html) is able to measure the processed messages.
With some optional parameters you can get additional information:
| -ps: | Profile the processed message per second. |
| -ll 3: | Set the log level to debug |
| -m 8080: | Activate metrics endpoint on port 8080 |
Here is a simple config example showing how to set up a [profiler consumer](../gen/consumer/profiler.html) with a [benchmark producer](../gen/producer/benchmark.html).
By default this test profiles the theoretical maximum throughput of 256 byte messages:
```
Profiler:
Type: consumer.Profiler
Runs: 100000
Batches: 100
Message: "%256s"
Streams: profile
KeepRunning: true
Benchmark:
Type: producer.Benchmark
Streams: profile
```
```
# start Gollum for profiling
gollum -ps -ll 3 -m 8080 -c config/profile.conf

# get metrics
nc -d 127.0.0.1 8080 | python -m json.tool
```
You can enable different producers in that config to test the write performance of these producers, too.
##### Dependencies[¶](#dependencies)
To handle external Go packages and libraries, Gollum uses [dep](https://github.com/golang/dep). As in other Go projects, the vendor folder is also checked in on github.com. All dependencies can be found in the [Gopkg.toml](https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md) file.
To update the external dependencies we also provide a make command:
```
# update external dependencies
make update-vendor
```
#### Writing plugins[¶](#writing-plugins)
When starting to write a plugin it's probably a good idea to have a look at already existing plugins.
A good starting point is the console plugin as it is very lightweight.
If you plan to write a special purpose plugin you should place it into “contrib/yourCompanyName”.
Plugins that can be used for general purpose should be placed into the main package folders like “consumer” or “producer”.
To enable a contrib plugin you will need to extend the file “contrib/loader.go”.
Add an anonymous import to the list of imports like this:
```
import (
_ "./yourCompanyName" // this is ok for local extensions
_ "github.com/trivago/gollum/contrib/yourCompanyName" // if you plan to contribute
)
```
##### Configuration[¶](#configuration)
All plugins have to implement the “core/Plugin” interface.
This interface requires a type to implement the Configure method which can be used to read data from the config file passed to Gollum.
To make it possible for Gollum to instantiate your plugin by name, it has to be registered.
This should be done by adding a line to the init() method of the file.
```
import (
"github.com/trivago/gollum/core"
"github.com/trivago/tgo"
)
type MyPlugin struct {
}
func init() {
core.TypeRegistry.Register(MyPlugin{}) // Register the new plugin type
}
func (cons *MyPlugin) Configure(conf core.PluginConfig) error {
	// ... read custom options ...
	return nil
}
```
The Configure method is also called when just testing the configuration via gollum -tc.
Because of this, this function should never open any sockets or other kinds of resources.
This should be done when a plugin is explicitly started, so that proper closing of resources is assured, too.
If your plugin derives from another plugin it is advisable to call Configure() of the base type before checking your configuration options.
There are several convenience functions in the PluginConfig type that make it easy to obtain configuration values and set default values.
Please refer to Gollum’s GoDoc API documentation for more details on this.
```
func (plugin *MyPlugin) Configure(conf core.PluginConfig) error {
	err := plugin.MyPluginBase.Configure(conf)
if err != nil {
return err
}
// ... read custom options ...
return nil
}
```
##### Configuring nested plugins[¶](#configuring-nested-plugins)
Some plugins may want to configure “nested” plugins such as a formatter or filter.
Nested plugins can be instantiated by using the type registry, passing on the config that was given to the Configure method.
```
func (plugin *MyPlugin) Configure(conf core.PluginConfig) error {
formatter, err := core.NewPluginWithType(conf.GetString("Formatter", "format.Forward"), conf)
if err != nil {
return err // ### return, plugin load error ###
}
// ... do something with your formatter ...
return nil
}
```
##### How to document plugins[¶](#how-to-document-plugins)
###### How to document plugins[¶](#how-to-document-plugins)
####### Background[¶](#background)
Documentation for each plugin is sourced from a single location - the plugin’s source code -
and presented in two locations.
* Standard godocs based documentation is targeted at developers interested in writing Gollum plugins or developing Gollum itself and describes implementation-level details of Gollum’s plugin API and internals.
* The Plugins chapter of Gollum’s documentation on readthedocs.org describes the built-in plugins’ features and configuration from the viewpoint of an end user or operator.
The readthedocs.org documentation is generated by a custom tool (./docs/generator) that converts the godoc formatted comments into RST. Consequently, plugin source code comments must satisfy the syntax requirements of both godoc and Gollum’s rst_generator.
####### Syntax[¶](#syntax)
Each plugin’s documentation is contained in the comment block immediately preceding the plugin’s Go struct definition, formatted according to godoc rules. Other comments in the plugin source are ignored by the RST generator.
Single-line comments (“// ….”) are used for inline documentation. For the purpose of this document, the initial “// ” sequences on each line are not part of the comment block; an “empty line” means a line solely consisting of the comment sequence and optional trailing whitespace.
The following types of elements are recognized by the RST generator:
1. Heading: A heading is defined by text that is surrounded by blank lines.
If a heading is starting a documentation block, the preceding blank line is omitted.
Please note that a heading is only detected as a heading by godoc if there is regular text following the blank line after the heading.
2. Enumeration: A “- ” as the first non-space sequence on a line begins an enumeration list item,
which continues over zero or more further lines, terminated by a Heading or another enumeration list item. Continuation lines must match the indentation of the beginning line.
3. Keyed enumeration: A colon (“:”) on the first line of an enumeration specifies a keyed enumeration list item. The text between “- ” and “:” defines the item’s key; the rest its contents.
4. Nested enumerations: Enumerations may be nested; each level is indented by a single space. Indentation must begin at 0.
####### Contents[¶](#contents)
The contents of a documentation block consists of 4 parts:
1. General description
The heading of this section starts with the name of the plugin followed by its type e.g.
“Console consumer”.
The contents of this section should describe what the plugin does, including any special considerations.
2. Metadata fields
The heading of this section is “Metadata”.
The contents of this section is an enumeration (of keyed and/or unkeyed elements)
of all metadata fields consumed or generated by the plugin.
The RST generator will automatically inherit the Metadata sections from plugins embedded by this type, so there is no need to duplicate their documentation in your own plugin. You can, however, override inherited fields’ documentation if needed.
Fields that have specific keys should be documented using keyed enumerations,
e.g. “- key: stores the key of the request”.
3. Configuration parameters
The heading of this section is “Parameters”.
The contents of this section is a keyed enumeration of configuration parameters recognized by the plugin.
The RST generator will automatically inherit the Parameters sections from plugins embedded by this type, so there is no need to duplicate their documentation in your own plugin. You can, however, override inherited parameters’ documentation if needed.
Default values and units are picked up automatically from struct tags for struct properties that have the ‘config’ and one or both of ‘default’ and ‘metric’ tags.
4. Configuration Examples
The heading of this section is “Examples”.
This section should contain at least one configuration example and a short description for each example provided.
The example is meant to be a working demonstration of the plugin’s usage, not an exhaustive listing of all possible features and parameters. The user should be able to copy-paste it into a config file as-is. It doesn’t need to be self-contained and should not include any boilerplate configuration for other plugins, with the exception of nested plugins (filters and formatters), which should be contained in the following stub:
```
ExampleConsumer:
Type: consumer.Console
Streams: console
Modulators:
```
####### Struct Tags[¶](#struct-tags)
The same Go struct tags used by Gollum to parse and set plugin configuration parameters are supported by the RST generator:
* config:”<ConfigParameter>” maps this struct property to the <ConfigParameter> parameter
* default:”<defval>” specifies <defval> as the parameter’s default value
* metric:”<unit>” specifies <unit> as the parameter’s unit
To inherit an embedded struct type’s Metadata and Parameters documentation in your own plugin’s documentation, add the gollumdoc:”embed_type” struct tag to the embedded field.
####### Structure[¶](#structure)
The general structure of a plugin documentation block looks like this:
```
// <BLOCK HEADING>
//
// <CHAPTER CONTENTS>
// <CHAPTER CONTENTS>
//
// <CHAPTER HEADING>
//
// <CHAPTER CONTENTS>
// <CHAPTER CONTENTS>
//
// <CHAPTER HEADING>
//
// <CHAPTER CONTENTS>
// <CHAPTER CONTENTS>
//
// Metadata
//
// - <METADATA KEY>: <DESCRIPTION
// DESCRIPTION CONTINUED ...>
// - <METADATA KEY>: <DESCRIPTION>
// - <VARIABLE-NAMED METADATA FIELD
// DESCRIPTION CONTINUED ...>
//
// Parameters
//
// - <PARAMETER NAME>: <PARAMETER DESCRIPTION>
// - <PARAMETER NAME>: <PARAMETER DESCRIPTION
// DESCRIPTION CONTINUED ...>
// - <NESTED NAME>: <NESTED VALUE OR OPTION>
// DESCRIPTION CONTINUED ...>
// - <NESTED VALUE OR OPTION>
// - <NESTED VALUE OR OPTION>
//
// Examples
//
// <CONFIGURATION EXAMPLE>
// <CONFIGURATION EXAMPLE>
```
####### Example[¶](#example)
```
// Console consumer
//
// This consumer reads from stdin or a named pipe. A message is generated after
// each newline character.
//
// Metadata
//
// - pipe: name of the pipe the message was received on
//
// Parameters
//
// - Pipe: Defines the pipe to read from. This can be "stdin" or the path
// of a named pipe. A named pipe is created if not existing.
//
// - Permissions: Accepts an octal number string containing the unix file
// permissions used when creating a named pipe.
//
// - ExitOnEOF: Can be set to true to trigger an exit signal if the pipe is closed
// i.e. when EOF is detected.
//
// Examples
//
// This configuration reads data from standard-in.
//
// ConsoleIn:
// Type: consumer.Console
// Streams: console
//    Pipe: stdin
type Console struct {
core.SimpleConsumer `gollumdoc:"embed_type"`
autoExit bool `config:"ExitOnEOF" default:"true"`
pipeName string `config:"Pipe" default:"stdin"`
pipePerm uint32 `config:"Permissions" default:"0644"`
pipe *os.File
}
```
##### Plugin types[¶](#plugin-types)
###### Writing consumers[¶](#writing-consumers)
When writing a new consumer it is advisable to have a look at existing consumers.
A good starting point are the Console and File consumers.
####### Requirements[¶](#requirements)
All consumers have to implement the “core/Consumer” as well as the “core/Plugin” interface.
The most convenient way to do this is to derive from the “core/ConsumerBase” type as it will provide implementations of the most common methods required.
In addition to this, every plugin has to register at the plugin registry to be available as a config option.
This is explained in the general [plugin section](index.html#document-src/instructions/writingPlugins).
####### ConsumerBase[¶](#consumerbase)
Consumers deriving from “core/ConsumerBase” have to implement the “Consume” method from the “core/Consumer” interface.
In addition to that most plugins might also want to overload the “Configure” function from the “core/Plugin” interface.
The Consume() function will be called as a separate go routine and should do two things.
1. Listen to the control channel
2. Process incoming data
As Consume() is called as a separate go routine you can decide whether to spawn additional go routines to handle both tasks or to let Consume() handle everything.
ConsumerBase gives you two convenience loop functions to handle control commands:
**ControlLoop**
Will loop until a stop is received and can trigger a callback if a log rotation is requested (SIG_HUP is sent).
The log rotation callback can be set e.g. in the Configure method by using the SetRollBack function.
Other possible callback functions are SetPrepareStopCallback and SetStopCallback.
**TickerControlLoop**
Gives you an additional callback that is triggered in regular intervals.
Both loops only cover control message handling and are blocking calls.
Because of their blocking nature you will probably want to spawn a separate go routine handling incoming messages when using these loops.
It is highly recommended to use at least one of these functions in your plugin implementation.
By doing this you can be sure that changes to message streaming and control handling are automatically used by your plugin after a Gollum update.
A typical consume function will look like this:
```
func (cons *MyConsumer) Configure(conf core.PluginConfig) error {
	cons.SetStopCallback(cons.close) // Register close to the control message handler
	return nil
}
func (cons *MyConsumer) close() {
cons.WorkerDone()
}
func (cons *MyConsumer) Consume(workers *sync.WaitGroup) {
cons.AddMainWorker(workers) // New go routine = new worker
go cons.readData() // Run until close is called
cons.ControlLoop() // Blocks
}
```
As we want to run a new go routine we also add a new worker. As this is the first worker we use AddMainWorker().
Additional workers can be added by using AddWorker().
This enables the shutdown routine to wait until all consumers have properly stopped.
However - to avoid a hang during shutdown, make sure that all workers added are properly closed during the shutdown sequence.
After we made sure all workers are registered, the core function readData() is called as a separate go routine.
This is necessary as the ControlLoop will block Consume() until a shutdown is requested.
When a stop control message is received, the StopCallback is executed.
You can use this callback to signal your readData function to stop or you can check the pluginState inside your readData function.
The pluginState will switch to PluginStateStopping after a stop control has been triggered.
####### Configuration[¶](#configuration)
If your consumer requires additional configuration options you should implement the Configure method.
Please refer to the [Plugin documentation](index.html#document-src/instructions/writingPlugins) for further details.
####### Sending messages[¶](#sending-messages)
Messages can be sent by using either the Enqueue() or EnqueueCopy() method.
Both functions will make sure that the message is sent to all streams and the correct stream ID is set.
The function Enqueue() will reference the data you pass to it, while EnqueueCopy() will copy the data to the new message.
The latter allows you to e.g. safely recycle internal buffers without changing messages that have not yet been processed by all producers.
Both methods expect a sequence number to be passed.
This sequence number is meant to be a runtime unique ID that may allow future checks on duplicate messages.
The most common sequence number is an incrementing 64-bit integer.
```
func (cons *MyConsumer) readData() {
var data []byte
for cons.IsActive() {
// ... read data
cons.Enqueue(data, cons.sequence) // This call may block
cons.sequence++ // Increment your sequence number
}
}
```
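If you recycle a read buffer between iterations, EnqueueCopy() is the safer choice, as described above. A minimal sketch of this pattern (readInto is a hypothetical helper of this example consumer; EnqueueCopy is assumed to take the same data and sequence arguments as Enqueue):
```
func (cons *MyConsumer) readBuffered() {
	buffer := make([]byte, 4096) // reused between iterations
	for cons.IsActive() {
		n := cons.readInto(buffer)                  // hypothetical helper filling the buffer
		cons.EnqueueCopy(buffer[:n], cons.sequence) // copies the data, so the buffer can be reused
		cons.sequence++
	}
}
```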
####### Writing bare bone consumers[¶](#writing-bare-bone-consumers)
Sometimes it might be useful not to derive from ConsumerBase.
If you decide to go this way please have a look at Gollum’s GoDoc API documentation as well as the source of ConsumerBase.
###### Writing producers[¶](#writing-producers)
When writing a new producer it is advisable to have a look at existing producers.
A good starting point are the Console and File producers.
####### Requirements[¶](#requirements)
All producers have to implement the “core/Producer” as well as the “core/Plugin” interface.
The most convenient way to do this is to derive from the “core/ProducerBase” type as it will provide implementations of the most common methods required.
In addition to this, every plugin has to register at the plugin registry to be available as a config option.
This is explained in the general [plugin section](index.html#document-src/instructions/writingPlugins).
####### ProducerBase[¶](#producerbase)
Producers deriving from core/ProducerBase have to implement the “Produce” method from the “core/Producer” interface.
In addition to that most plugins might also want to overload the “Configure” function from the “core/Plugin” interface.
The Produce() function will be called as a separate go routine and should provide two things.
1. Listen to the control channel
2. Listen to incoming messages
As Produce() is called as a separate go routine you can decide whether to spawn additional go routines to handle both tasks or to let Produce() handle everything.
ProducerBase gives you three convenience loop functions to handle control commands:
**ControlLoop**
Will only listen to control messages and trigger the corresponding callbacks that can be registered during Configure.
Stop control messages will cause this loop to end.
**MessageControlLoop**
In addition to the functionality of ControlLoop this will also check for incoming messages.
Messages from the internal message channel are passed to the given message handler.
The log rotation callback can be set e.g. in the Configure method by using the SetRollBack function.
Other possible callbacks functions are SetPrepareStopCallback and SetStopCallback.
**TickerMessageControlLoop**
Gives you an additional callback that is triggered in regular intervals.
It is highly recommended to use at least one of these functions in your plugin implementation.
By doing this you can be sure that changes to message streaming and control handling are automatically used by your plugin after a Gollum update.
A typical produce function will look like this:
```
func (prod *MyProducer) close() {
prod.CloseMessageChannel(prod.processData) // Close the internal channel and flush remaining messages
prod.WorkerDone() // Signal that we're done now
}
func (prod *MyProducer) Configure(conf core.PluginConfig) error {
	prod.SetStopCallback(prod.close) // Call close upon shutdown
	prod.SetRollCallback(prod.onRoll) // Call onRoll when SIG_HUP is sent to the process
	return nil
}
func (prod *MyProducer) processData(msg core.Message) {
// Do something with the message
}
func (prod *MyProducer) Produce(workers *sync.WaitGroup) {
prod.AddMainWorker(workers)
prod.MessageControlLoop(prod.processData)
}
```
The framework will call the registered StopCallback function when the control loop receives a stop.
As the shutdown procedure needs to wait until all messages from this producer have been sent (to avoid data loss), at least one worker should always be registered.
The shutdown procedure will wait until all producer workers have finished before exiting.
Because of this you have to make sure that all AddWorker calls are followed by a WorkerDone() call during shutdown.
If this does not happen the shutdown procedure will block.
If your producer sends messages to other producers you can manually set dependencies between receiving producers and this producer by using StreamRegistry.LinkDependencies.
DropStream dependencies are automatically added during startup.
####### Configuration[¶](#configuration)
If your producer requires additional configuration options you should implement the Configure method.
Please refer to the [Plugin documentation](index.html#document-src/instructions/writingPlugins) for further details.
####### Working with slow services[¶](#working-with-slow-services)
Messages are passed to the producer one-by-one.
Certain services however might perform better when messages are not sent one-by-one but as a batch of messages.
Gollum gives you several tools to handle these kind of message batches.
A good example for this is the socket producer.
This producer takes advantage of the “core/MessageBatch” type.
This allows storing messages in a double-buffered queue and provides callback based methods to flush the queue asynchronously.
The following code illustrates a best practice approach on how to use the MessageBatch.
You may of course change details if required.
```
buffer := NewMessageBatch(8192) // Hold up to 8192*2 messages (front and backbuffer)
for {
// Append the given message
// - If the buffer is full call the sendBatch method and wait for flush
// - If the producers is not active or if it is shutting down pass the message to prod.Drop
buffer.AppendOrFlush(message, prod.sendBatch, prod.IsActiveOrStopping, prod.Drop)
// ...
if buffer.ReachedSizeThreshold(2048) { // Check if at least 2 KB have been written
buffer.Flush(prod.sendBatch) // Send all buffered messages via sendBatch
buffer.WaitForFlush() // Wait until done
}
}
```
####### Filtering messages[¶](#filtering-messages)
Producers are able to filter messages just like streams do.
In contrast to streams, messages are filtered before they are sent to the internal message channel, i.e. before formatting.
As formatting is an implementation detail (and may also not happen), a plugin that needs filtering after formatting has to implement it by itself.
####### Formatting messages[¶](#formatting-messages)
Messages are not automatically formatted when passed to the producer.
If you wish to enable producer based formatting you need to call ProducerBase.Format() at an appropriate point inside your plugin.
All producers deriving from ProducerBase - and that have called ProducerBase.Configure() - may have a formatter set and should thus provide this possibility.
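A minimal sketch of such a call, assuming ProducerBase.Format mirrors the Formatter signature described in the “Writing formatters” section (i.e. it returns the formatted payload and the target stream ID):
```
func (prod *MyProducer) processData(msg core.Message) {
	// Apply the formatter configured for this producer, if any.
	payload, streamID := prod.ProducerBase.Format(msg)

	// writeToService is a hypothetical helper of this example plugin.
	prod.writeToService(streamID, payload)
}
```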
####### Writing bare bone producers[¶](#writing-bare-bone-producers)
Sometimes it might be useful not to derive from ProducerBase.
An example for this is the Null producer which is extremely lightweight.
If you decide to go this way please have a look at Gollum’s GoDoc API documentation as well as the source of ConsumerBase.
###### Writing filters[¶](#writing-filters)
####### Requirements[¶](#requirements)
All filters have to implement the “core/Filter” as well as the “core/Plugin” interface.
In addition to this, every plugin has to register at the plugin registry to be available as a config option.
This is explained in the general [plugin section](index.html#document-src/instructions/writingPlugins).
####### Attention[¶](#attention)
Filters are called in a multithreaded context, so you have to make sure that any internal state is secured by either a mutex or by using atomic functions.
####### Filtering messages[¶](#filtering-messages)
The Accepts method is fairly simple to implement.
If the method returns true the message is passed; if it returns false the message is rejected.
You can inspect the message in question from the parameter passed to the Accepts method.
The following example filter will reject all messages that have no content:
```
func (filter *MyFilter) Accepts(msg core.Message) bool {
return len(msg.Data) > 0
}
```
###### Writing formatters[¶](#writing-formatters)
####### Requirements[¶](#requirements)
All formatters have to implement the “core/Formatter” as well as the “core/Plugin” interface.
In addition to this, every plugin has to register at the plugin registry to be available as a config option.
This is explained in the general [plugin section](index.html#document-src/instructions/writingPlugins).
####### Attention[¶](#attention)
Formatters are called in a multithreaded context, so you have to make sure that any internal state is secured by either a mutex or by using atomic functions.
####### Transforming messages[¶](#transforming-messages)
The Format method is fairly simple to implement.
It accepts the message to modify and returns the new content plus the stream the message should be sent to.
The message itself cannot be changed directly.
The following example adds a newline to each message:
```
func (format *MyFormatter) Format(msg core.Message) ([]byte, core.MessageStreamID) {
return append(msg.Data, '\n'), msg.StreamID
}
```
###### Writing routers[¶](#writing-routers)
When writing a new router it is advisable to have a look at existing routers.
A good starting point is the Random router.
####### Requirements[¶](#requirements)
All routers have to implement the “core/Router” as well as the “core/Plugin” interface.
The most convenient way to do this is to derive from the “core/RouterBase” type as it will provide implementations of the most common methods required as well as message metrics.
In addition to this, every plugin has to register at the plugin registry to be available as a config option.
This is explained in the general [plugin section](index.html#document-src/instructions/writingPlugins).
####### RouterBase[¶](#routerbase)
Routers deriving from “core/RouterBase” have to implement a custom method that has to be hooked to the “Distribute” callback during Configure().
This allows RouterBase to check and format the message before actually distributing it.
In addition to that a message count metric is updated.
The following example implements a router that sends messages only to the first producer in the list.
```
func (router *MyRouter) myDistribute(msg core.Message) {
	router.RouterBase.Producers[0].Enqueue(msg)
}

func (router *MyRouter) Configure(conf core.PluginConfig) error {
	if err := router.RouterBase.Configure(conf); err != nil {
		return err
	}
	router.RouterBase.Distribute = router.myDistribute
	return nil
}
```
####### Sending messages[¶](#sending-messages)
Messages are sent directly to a producer by calling the Enqueue method.
This call may block, as either the underlying channel is completely filled or the producer plugin implements Enqueue as a blocking method.
Routers that derive from RouterBase may also be paused.
In that case messages are not passed to the custom distributor function but to a temporary function.
These messages will be sent to the custom distributor function after the router is resumed.
A Pause() call is normally done from producers that encounter a connection loss or an unavailable resource in general.
### Plugins[¶](#plugins)
The main components, [consumers](../gen/consumer/index.html), [routers](../gen/router/index.html), [producers](../gen/producer/index.html), [filters](../gen/filter/index.html) and [formatters](../gen/formatter/index.html)
are built upon a plugin architecture.
This allows each component to be exchanged and configured individually with different sets of options.
**Plugin types:**
#### Consumers[¶](#consumers)
Consumers are plugins that read data from external sources.
Data is packed into messages and passed to a [router](index.html#document-src/plugins/router).
**Example file consumer:**
**List of available consumers:**
##### AwsKinesis[¶](#awskinesis)
This consumer reads messages from an AWS Kinesis stream.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**KinesisStream** (default: default)
> This value defines the stream to read from.
> By default this parameter is set to “default”.
**OffsetFile**
> This value defines a file to store the current offset per shard.
> To disable this parameter, set it to “”. If the parameter is set and the file
> is found, consuming will start after the offset stored in the file.
> By default this parameter is set to “”.
**RecordsPerQuery** (default: 100)
> This value defines the number of records to pull per query.
> By default this parameter is set to “100”.
**RecordMessageDelimiter**
> This value defines the string to delimit messages
> within a record. To disable this parameter, set it to “”.
> By default this parameter is set to “”.
**QuerySleepTimeMs** (default: 1000, unit: ms)
> This value defines the number of milliseconds to sleep
> before trying to pull new records from a shard that did not return any records.
> By default this parameter is set to “1000”.
**RetrySleepTimeSec**
> This value defines the number of seconds to wait after
> trying to reconnect to a shard.
> By default this parameter is set to “4”.
**CheckNewShardsSec** (default: 0, unit: sec)
> This value sets a timer to update shards in Kinesis.
> You can set this parameter to “0” for disabling.
> By default this parameter is set to “0”.
**DefaultOffset**
> This value defines the message index to start reading from.
> Valid values are either “newest”, “oldest”, or a number.
> By default this parameter is set to “newest”.
###### Parameters (from components.AwsCredentials)[¶](#parameters-from-components-awscredentials)
**Credential/Type** (default: none)
> This value defines the credentials that are to be used when
> connecting to aws. Available values are listed below. See
> <https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/#Credentials>
> for more information.
> **environment**
> > > Retrieves credentials from the environment variables of
> > the running process
> **static**
> > > Retrieves credentials value for individual credential fields
> **shared**
> > > Retrieves credentials from the current user’s home directory
> **none**
> > > Use an anonymous login to aws
**Credential/Id**
> is used for “static” type and is used as the AccessKeyID
**Credential/Token**
> is used for “static” type and is used as the SessionToken
**Credential/Secret**
> is used for “static” type and is used as the SecretAccessKey
**Credential/File**
> is used for “shared” type and is used as the path to your
> shared Credentials file (~/.aws/credentials)
**Credential/Profile** (default: default)
> is used for “shared” type and is used for the profile
**Credential/AssumeRole**
> This value is used to assume an IAM role.
> By default this is set to “”.
###### Parameters (from components.AwsMultiClient)[¶](#parameters-from-components-awsmulticlient)
**Region** (default: us-east-1)
> This value defines the used aws region.
> By default this is set to “us-east-1”
**Endpoint**
> This value defines the used aws api endpoint. If no endpoint is set
> the client needs to set the right endpoint for the used region.
> By default this is set to “”.
###### Parameters (from core.SimpleConsumer)[¶](#parameters-from-core-simpleconsumer)
**Streams**
> Defines a list of streams a consumer will send to. This parameter
> is mandatory. When using “*” messages will be sent only to the internal “*”
> stream. It will NOT send messages to all streams.
> By default this parameter is set to an empty list.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a consumer is
> allowed to take to shut down. After this timeout the consumer is always
> considered to have shut down.
> By default this parameter is set to 1000.
**Modulators**
> Defines a list of modulators to be applied to a message before
> it is sent to the list of streams. If a modulator specifies a stream, the
> message is only sent to that specific stream. A message is saved as original
> after all modulators have been applied.
> By default this parameter is set to an empty list.
**ModulatorRoutines**
> Defines the number of go routines reserved for
> modulating messages. Setting this parameter to 0 will use as many go routines
> as the specific consumer plugin is using for fetching data. Any other value
> will force the given number of go routines to be used.
> By default this parameter is set to 0
**ModulatorQueueSize**
> Defines the size of the channel used to buffer messages
> before they are fetched by the next free modulator go routine. If the
> ModulatorRoutines parameter is set to 0 this parameter is ignored.
> By default this parameter is set to 1024.
###### Examples[¶](#examples)
This example consumes a kinesis stream “myStream” and creates messages:
```
KinesisIn:
Type: consumer.AwsKinesis
Credential:
Type: shared
File: /Users/<USERNAME>/.aws/credentials
Profile: default
Region: "eu-west-1"
KinesisStream: myStream
```
##### Console[¶](#console)
This consumer reads from stdin or a named pipe. A message is generated after each newline character.
###### Metadata[¶](#metadata)
*NOTE: The metadata will only be set if the parameter `SetMetadata` is active.*
**pipe**
> Name of the pipe the message was received on (set)
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Pipe** (default: stdin)
> Defines the pipe to read from. This can be “stdin” or the path
> to a named pipe. If the named pipe doesn’t exist, it will be created.
> By default this parameter is set to “stdin”.
**Permissions** (default: 0644)
> Defines the UNIX filesystem permissions used when creating
> the named pipe as an octal number.
> By default this parameter is set to “0644”.
**ExitOnEOF** (default: true)
> If set to true, the plugin triggers an exit signal if the
> pipe is closed, i.e. when EOF is detected.
> By default this parameter is set to “true”.
**SetMetadata** (default: false)
> When this value is set to “true”, the fields mentioned in the metadata
> section will be added to each message. Adding metadata will have a
> performance impact on systems with high throughput.
> By default this parameter is set to “false”.
###### Parameters (from core.SimpleConsumer)[¶](#parameters-from-core-simpleconsumer)
**Streams**
> Defines a list of streams a consumer will send to. This parameter
> is mandatory. When using “*” messages will be sent only to the internal “*”
> stream. It will NOT send messages to all streams.
> By default this parameter is set to an empty list.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a consumer is
> allowed to take to shut down. After this timeout the consumer is always
> considered to have shut down.
> By default this parameter is set to 1000.
**Modulators**
> Defines a list of modulators to be applied to a message before
> it is sent to the list of streams. If a modulator specifies a stream, the
> message is only sent to that specific stream. A message is saved as original
> after all modulators have been applied.
> By default this parameter is set to an empty list.
**ModulatorRoutines**
> Defines the number of go routines reserved for
> modulating messages. Setting this parameter to 0 will use as many go routines
> as the specific consumer plugin is using for fetching data. Any other value
> will force the given number of go routines to be used.
> By default this parameter is set to 0
**ModulatorQueueSize**
> Defines the size of the channel used to buffer messages
> before they are fetched by the next free modulator go routine. If the
> ModulatorRoutines parameter is set to 0 this parameter is ignored.
> By default this parameter is set to 1024.
###### Examples[¶](#examples)
This config reads data from stdin e.g. when starting gollum via unix pipe.
```
ConsoleIn:
Type: consumer.Console
Streams: console
Pipe: stdin
```
##### File[¶](#file)
The File consumer reads messages from a file, looking for a customizable delimiter sequence that marks the end of a message. If the file is part of e.g. a log rotation, the consumer can be set to read from a symbolic link pointing to the current file and (optionally) be told to reopen the file by sending a SIGHUP. A symlink to a file will automatically be reopened if the underlying file is changed.
###### Metadata[¶](#metadata)
*NOTE: The metadata will only be set if the parameter `SetMetadata` is active.*
**file**
> The file name of the consumed file (set)
**dir**
> The directory of the consumed file (set)
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**File**
> This value is a mandatory setting and contains the name of the
> file to read. This field supports glob patterns.
> If the file pointed to is a symlink, changes to the symlink will be
> detected. The file will be watched for changes, so active logfiles can
> be scraped, too.
**OffsetFilePath**
> This value defines a path where the individual, current
> file offsets are stored. The filename will be the name and extension of the
> source file plus the extension “.offset”. If the consumer is restarted,
> these offset files are used to continue reading from the previous position.
> To disable this setting, set it to “”.
> By default this parameter is set to “”.
**Delimiter** (default: \n)
> This value defines the delimiter sequence to expect at the
> end of each message in the file.
> By default this parameter is set to “\n”.
**ObserveMode** (default: poll)
> This value selects how the source file is observed. Available
> values are poll and watch. NOTE: The watch implementation uses
> the [fsnotify/fsnotify](<https://github.com/fsnotify/fsnotify>) package.
> If your source file is rotated (moved or removed), please verify that
> your file system and distribution support the RENAME and REMOVE events;
> the consumer’s stability depends on them.
> By default this parameter is set to poll.
**DefaultOffset** (default: newest)
> This value defines the default offset from which to start
> reading within the file. Valid values are “oldest” and “newest”. If OffsetFile
> is defined and the file exists, the DefaultOffset parameter is ignored.
> By default this parameter is set to “newest”.
**PollingDelayMs** (default: 100, unit: ms)
> This value defines the duration in milliseconds the consumer
> waits between checking the source file for new content after hitting the
> end of file (EOF). NOTE: This settings only takes effect if the consumer is
> running in poll mode!
> By default this parameter is set to “100”.
**RetryDelaySec** (default: 3, unit: s)
> This value defines the duration in seconds the consumer waits
> between retries, e.g. after not being able to open a file.
> By default this parameter is set to “3”.
**DirScanIntervalSec** (default: 10, unit: s)
> Only applies when using globs. This setting will define the
> interval in seconds in which the glob will be re-evaluated and new files can be
> scraped. By default this parameter is set to “10”.
**SetMetadata** (default: false)
> When this value is set to “true”, the fields mentioned in the metadata
> section will be added to each message. Adding metadata will have a
> performance impact on systems with high throughput.
> By default this parameter is set to “false”.
**BlackList**
> A regular expression matching file paths to NOT read. When both
> BlackList and WhiteList are defined, the WhiteList takes precedence.
> This setting is only used when glob expressions (*, ?) are present in the
> filename. The path checked is the one before symlink evaluation.
> By default this parameter is set to “”.
**WhiteList**
> A regular expression matching file paths to read. When both
> BlackList and WhiteList are defined, the WhiteList takes precedence.
> This setting is only used when glob expressions (*, ?) are present in the
> filename. The path checked is the one before symlink evaluation.
> By default this parameter is set to “”.
**Files** (default: /var/log/*.log)
> (no documentation available)
###### Parameters (from core.SimpleConsumer)[¶](#parameters-from-core-simpleconsumer)
**Streams**
> Defines a list of streams a consumer will send to. This parameter
> is mandatory. When using “*” messages will be sent only to the internal “*”
> stream. It will NOT send messages to all streams.
> By default this parameter is set to an empty list.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a consumer is
> allowed to take to shut down. After this timeout the consumer is always
> considered to have shut down.
> By default this parameter is set to 1000.
**Modulators**
> Defines a list of modulators to be applied to a message before
> it is sent to the list of streams. If a modulator specifies a stream, the
> message is only sent to that specific stream. A message is saved as original
> after all modulators have been applied.
> By default this parameter is set to an empty list.
**ModulatorRoutines**
> Defines the number of go routines reserved for
> modulating messages. Setting this parameter to 0 will use as many go routines
> as the specific consumer plugin is using for fetching data. Any other value
> will force the given number of go routines to be used.
> By default this parameter is set to 0
**ModulatorQueueSize**
> Defines the size of the channel used to buffer messages
> before they are fetched by the next free modulator go routine. If the
> ModulatorRoutines parameter is set to 0 this parameter is ignored.
> By default this parameter is set to 1024.
###### Examples[¶](#examples)
This example will read all .log files in /var/log/ into one stream and create a message for each new entry. Files whose names start with “sys” are ignored.
```
FileIn:
  Type: consumer.File
  Files: /var/log/*.log
  BlackList: '^sys.*'
  DefaultOffset: newest
  OffsetFilePath: ""
  Delimiter: "\n"
  ObserveMode: poll
  PollingDelayMs: 100
```
##### HTTP[¶](#http)
This consumer opens up an HTTP 1.1 server and processes the contents of any incoming HTTP request.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Address** (default: :80)
> Defines the TCP port and optional IP address to listen on.
> Sets http.Server.Addr; for details, see its Go documentation.
> Syntax: [hostname|address]:<port>
**ReadTimeoutSec** (default: 3, unit: sec)
> Defines the maximum duration in seconds before timing out
> the HTTP read request. Sets http.Server.ReadTimeout; for details, see its
> Go documentation.
**WithHeaders** (default: true)
> If true, relays the complete HTTP request to the generated
> Gollum message. If false, relays only the HTTP request body and ignores
> headers.
**Htpasswd**
> Path to an htpasswd-formatted password file. If defined, turns
> on HTTP Basic Authentication in the server.
**BasicRealm**
> Defines the Authentication Realm for HTTP Basic Authentication.
> Meaningful only in conjunction with Htpasswd.
**Certificate**
> Path to an X509 formatted certificate file. If defined, turns on
> SSL/TLS support in the HTTP server. Requires PrivateKey to be set.
**PrivateKey**
> Path to an X509 formatted private key file. Meaningful only in
> conjunction with Certificate.
###### Parameters (from core.SimpleConsumer)[¶](#parameters-from-core-simpleconsumer)
**Streams**
> Defines a list of streams a consumer will send to. This parameter
> is mandatory. When using “*” messages will be sent only to the internal “*”
> stream. It will NOT send messages to all streams.
> By default this parameter is set to an empty list.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a consumer is
> allowed to take to shut down. After this timeout the consumer is always
> considered to have shut down.
> By default this parameter is set to 1000.
**Modulators**
> Defines a list of modulators to be applied to a message before
> it is sent to the list of streams. If a modulator specifies a stream, the
> message is only sent to that specific stream. A message is saved as original
> after all modulators have been applied.
> By default this parameter is set to an empty list.
**ModulatorRoutines**
> Defines the number of go routines reserved for
> modulating messages. Setting this parameter to 0 will use as many go routines
> as the specific consumer plugin is using for fetching data. Any other value
> will force the given number of go routines to be used.
> By default this parameter is set to 0.
**ModulatorQueueSize**
> Defines the size of the channel used to buffer messages
> before they are fetched by the next free modulator go routine. If the
> ModulatorRoutines parameter is set to 0 this parameter is ignored.
> By default this parameter is set to 1024.
###### Examples[¶](#examples)
This example listens on port 9090 and writes to the stream “http_in_00”.
```
"HttpIn00":
Type: "consumer.HTTP"
Streams: "http_in_00"
Address: "localhost:9090"
WithHeaders: false
```
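For illustration, the following sketch enables TLS and HTTP Basic Authentication on the same consumer; the certificate, key and htpasswd paths are placeholders for files created beforehand.
```
"HttpsIn00":
    Type: "consumer.HTTP"
    Streams: "http_in_00"
    Address: ":9443"
    WithHeaders: false
    Certificate: "/etc/gollum/server.crt"   # placeholder X509 certificate
    PrivateKey: "/etc/gollum/server.key"    # placeholder matching private key
    Htpasswd: "/etc/gollum/htpasswd"        # placeholder htpasswd file enabling basic auth
    BasicRealm: "gollum"
```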
##### Kafka[¶](#kafka)
This consumer reads data from a kafka topic. It is based on the sarama library; most settings are mapped to the settings from this library.
###### Metadata[¶](#metadata)
*NOTE: The metadata will only be set if the parameter `SetMetadata` is active.*
**topic**
> Contains the name of the kafka topic
**key**
> Contains the key of the kafka message
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Servers**
> Defines the list of all kafka brokers to initially connect to when
> querying topic metadata. This list requires at least one broker to work and
> ideally contains all the brokers in the cluster.
> By default this parameter is set to [“localhost:9092”].
**Topic** (default: default)
> Defines the kafka topic to read from.
> By default this parameter is set to “default”.
**ClientId**
> Sets the client id used in requests by this consumer.
> By default this parameter is set to “gollum”.
**GroupId**
> Sets the consumer group of this consumer. If empty, consumer
> groups are not used. This setting requires Kafka version >= 0.9.
> By default this parameter is set to “”.
**Version**
> Defines the kafka protocol version to use. Common values are 0.8.2,
> 0.9.0 or 0.10.0. Values of the form “A.B” are allowed as well as “A.B.C”
> and “A.B.C.D”. If the version given is not known, the closest possible
> version is chosen. If GroupId is set and the version given is below “0.9”, “0.9.0.1” will be used.
> By default this parameter is set to “0.8.2”.
**SetMetadata** (default: false)
> When this value is set to “true”, the fields mentioned in the metadata
> section will be added to each message. Adding metadata will have a
> performance impact on systems with high throughput.
> By default this parameter is set to “false”.
**DefaultOffset**
> Defines the initial offset when starting to read the topic.
> Valid values are “oldest” and “newest”. If OffsetFile
> is defined and the file exists, the DefaultOffset parameter is ignored.
> If GroupId is defined, this setting will only be used for the first request.
> By default this parameter is set to “newest”.
**OffsetFile**
> Defines the path to a file that holds the current offset of a
> given partition. If the consumer is restarted, reading continues from that
> offset. To disable this setting, set it to “”. Please note that offsets
> stored in the file might be outdated. In that case DefaultOffset “oldest”
> will be used.
> By default this parameter is set to “”.
**FolderPermissions** (default: 0755)
> Used to create the path to the offset file if necessary.
> By default this parameter is set to “0755”.
**Ordered**
> Forces partitions to be read one-by-one in a round robin fashion
> instead of reading them all in parallel. Please note that this may restore
> the original ordering but does not necessarily do so. The term “ordered” refers
> to an ordered reading of all partitions, as opposed to reading them randomly.
> By default this parameter is set to false.
**MaxOpenRequests**
> Defines the number of simultaneous connections to a
> broker at a time.
> By default this parameter is set to 5.
**ServerTimeoutSec**
> Defines the time after which a connection will time out.
> By default this parameter is set to 30.
**MaxFetchSizeByte**
> Sets the maximum size of a message to fetch. Larger
> messages will be ignored. When set to 0 size of the messages is ignored.
> By default this parameter is set to 0.
**MinFetchSizeByte**
> Defines the minimum amount of data to fetch from Kafka per
> request. If less data is available the broker will wait.
> By default this parameter is set to 1.
**DefaultFetchSizeByte**
> Defines the average amount of data to fetch per
> request. This value must be greater than 0.
> By default this parameter is set to 32768.
**FetchTimeoutMs**
> Defines the time in milliseconds to wait on reaching
> MinFetchSizeByte before fetching new data regardless of size.
> By default this parameter is set to 250.
**MessageBufferCount**
> Sets the internal channel size for the kafka client.
> By default this parameter is set to 8192.
**PresistTimoutMs** (default: 5000, unit: ms)
> Defines the interval in milliseconds in which data is
> written to the OffsetFile. A short duration reduces the amount of duplicate
> messages after a crash but increases I/O. When using GroupId this setting
> controls the pause time after receiving errors.
> By default this parameter is set to 5000.
**ElectRetries**
> Defines how many times to retry fetching the new master
> partition during a leader election.
> By default this parameter is set to 3.
**ElectTimeoutMs**
> Defines the number of milliseconds to wait for the cluster
> to elect a new leader.
> By default this parameter is set to 250.
**MetadataRefreshMs**
> Defines the interval in milliseconds used for fetching
> kafka metadata from the cluster (e.g. number of partitions).
> By default this parameter is set to 10000.
**TlsEnable**
> Defines whether to use TLS based authentication when
> communicating with brokers.
> By default this parameter is set to false.
**TlsKeyLocation**
> Defines the path to the client’s PEM-formatted private key
> used for TLS based authentication.
> By default this parameter is set to “”.
**TlsCertificateLocation**
> Defines the path to the client’s PEM-formatted
> public key used for TLS based authentication.
> By default this parameter is set to “”.
**TlsCaLocation**
> Defines the path to the CA certificate(s) for verifying a
> broker’s key when using TLS based authentication.
> By default this parameter is set to “”.
**TlsServerName**
> Defines the expected hostname used by hostname verification
> when using TlsInsecureSkipVerify.
> By default this parameter is set to “”.
**TlsInsecureSkipVerify**
> If set to true, verification of the server’s certificate
> chain and host name is skipped.
> By default this parameter is set to false.
**SaslEnable**
> Defines whether to use SASL based authentication when
> communicating with brokers.
> By default this parameter is set to false.
**SaslUsername**
> Defines the username for SASL/PLAIN authentication.
> By default this parameter is set to “gollum”.
**SaslPassword**
> Defines the password for SASL/PLAIN authentication.
> By default this parameter is set to “”.
###### Parameters (from core.SimpleConsumer)[¶](#parameters-from-core-simpleconsumer)
**Streams**
> Defines a list of streams a consumer will send to. This parameter
> is mandatory. When using “*” messages will be sent only to the internal “*”
> stream. It will NOT send messages to all streams.
> By default this parameter is set to an empty list.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a consumer is
> allowed to take to shut down. After this timeout the consumer is always
> considered to have shut down.
> By default this parameter is set to 1000.
**Modulators**
> Defines a list of modulators to be applied to a message before
> it is sent to the list of streams. If a modulator specifies a stream, the
> message is only sent to that specific stream. A message is saved as original
> after all modulators have been applied.
> By default this parameter is set to an empty list.
**ModulatorRoutines**
> Defines the number of go routines reserved for
> modulating messages. Setting this parameter to 0 will use as many go routines
> as the specific consumer plugin is using for fetching data. Any other value
> will force the given number of go routines to be used.
> By default this parameter is set to 0.
**ModulatorQueueSize**
> Defines the size of the channel used to buffer messages
> before they are fetched by the next free modulator go routine. If the
> ModulatorRoutines parameter is set to 0 this parameter is ignored.
> By default this parameter is set to 1024.
###### Examples[¶](#examples)
This config reads the topic “logs” from a cluster with 4 brokers.
```
kafkaIn:
Type: consumer.Kafka
Streams: logs
Topic: logs
ClientId: "gollum log reader"
DefaultOffset: newest
OffsetFile: /var/gollum/logs.offset
Servers:
- "kafka0:9092"
- "kafka1:9092"
- "kafka2:9092"
- "kafka3:9092"
```
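As a further, hypothetical sketch, the configuration below reads the same topic through a consumer group over TLS; the group name, broker addresses and CA path are placeholders and assume a cluster running Kafka 0.9 or newer with TLS enabled.
```
kafkaGroupIn:
  Type: consumer.Kafka
  Streams: logs
  Topic: logs
  GroupId: "gollum-log-readers"               # consumer groups require Kafka >= 0.9
  Version: "0.10.0"
  TlsEnable: true
  TlsCaLocation: "/etc/gollum/kafka-ca.pem"   # CA used to verify the brokers
  Servers:
    - "kafka0:9093"
    - "kafka1:9093"
```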
##### Profiler[¶](#profiler)
The “Profiler” consumer plugin autogenerates messages in user-defined quantity,
size and density. It can be used to profile producers and configurations and to provide a message source for testing.
Before startup, [TemplateCount] template payloads are generated based on the format specifier [Message], using characters from [Characters]. The length of each template is determined by format size specifier(s) in [Message].
During execution, [Batches] batches of [Runs] messages are generated, with a
[DelayMs] ms delay between each message. Each message’s payload is randomly selected from the set of template payloads above.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Runs** (default: 10000)
> Defines the number of messages per batch.
**Batches** (default: 10)
> Defines the number of batches to generate.
**TemplateCount**
> Defines the number of message templates to generate.
> Templates are generated in advance and a random message template is chosen
> from this set every time a message is sent.
**Characters** (default: abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890)
> Defines the set of characters to use when generating templates.
**Message** (default: %256s)
> Defines a go format string to use for generating the message
> templates. The length of the values generated will be deduced from the
> format size parameter - “%200d” will generate a number between 0 and 200,
> “%10s” will generate a string with 10 characters, etc.
**DelayMs** (default: 0, unit: ms)
> Defines the number of milliseconds to sleep between messages.
**KeepRunning**
> If set to true, Gollum keeps running after Batches * Runs messages
> have been generated instead of shutting down. This can be used to e.g. read metrics after a profile run.
###### Parameters (from core.SimpleConsumer)[¶](#parameters-from-core-simpleconsumer)
**Streams**
> Defines a list of streams a consumer will send to. This parameter
> is mandatory. When using “*” messages will be sent only to the internal “*”
> stream. It will NOT send messages to all streams.
> By default this parameter is set to an empty list.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a consumer is
> allowed to take to shut down. After this timeout the consumer is always
> considered to have shut down.
> By default this parameter is set to 1000.
**Modulators**
> Defines a list of modulators to be applied to a message before
> it is sent to the list of streams. If a modulator specifies a stream, the
> message is only sent to that specific stream. A message is saved as original
> after all modulators have been applied.
> By default this parameter is set to an empty list.
**ModulatorRoutines**
> Defines the number of go routines reserved for
> modulating messages. Setting this parameter to 0 will use as many go routines
> as the specific consumer plugin is using for fetching data. Any other value
> will force the given number of go routines to be used.
> By default this parameter is set to 0.
**ModulatorQueueSize**
> Defines the size of the channel used to buffer messages
> before they are fetched by the next free modulator go routine. If the
> ModulatorRoutines parameter is set to 0 this parameter is ignored.
> By default this parameter is set to 1024.
###### Examples[¶](#examples)
Generate a short message every 0.5s, useful for testing and debugging:
```
JunkGenerator:
Type: "consumer.Profiler"
Message: "%20s"
Streams: "junkstream"
Characters: "abcdefghijklmZ"
KeepRunning: true
Runs: 10000
Batches: 3000000
DelayMs: 500
```
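For comparison, a hypothetical configuration using a numeric format specifier as described above; “%200d” makes every generated template a number between 0 and 200.
```
NumberGenerator:
  Type: "consumer.Profiler"
  Streams: "junkstream"
  Message: "%200d"     # numeric templates instead of strings
  TemplateCount: 16    # pre-generate 16 payload templates
  Runs: 1000
  Batches: 10
  DelayMs: 100
```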
##### Proxy[¶](#proxy)
This consumer reads messages from a given socket like consumer.Socket but allows reverse communication, too. Producers which require this kind of communication can access message.GetSource to write data back to the client sending the message. See producer.Proxy as an example target producer.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Address**
> Defines the protocol, host and port or the unix domain socket to
> listen to. This can either be any ip address and port like “localhost:5880”
> or a file like “unix:///var/gollum.socket”. Only unix and tcp protocols are
> supported.
> By default this parameter is set to “:5880”.
**Partitioner**
> Defines the algorithm used to read messages from the router.
> The messages will be sent as a whole, no cropping or removal will take place.
> By default this parameter is set to “delimiter”.
> **delimiter**
> > > Separates messages by looking for a delimiter string.
> > The delimiter is removed from the message.
> **ascii**
> > > Reads an ASCII number at a given offset until a given delimiter is
> > found. Everything to the left of and including the delimiter is removed
> > from the message.
> **binary**
> > > reads a binary number at a given offset and size.
> > The number is removed from the message.
> **binary_le**
> > > is an alias for “binary”.
> **binary_be**
> > > acts like “binary_le” but uses big endian encoding.
> **fixed**
> > > assumes fixed size messages.
**Delimiter** (default: \n)
> Defines the delimiter string used to separate messages if
> partitioner is set to “delimiter” or the string used to separate the message
> length if partitioner is set to “ascii”.
> By default this parameter is set to “\n”.
**Offset** (default: 0)
> Defines an offset in bytes used to read the length provided for
> partitioner “binary” and “ascii”.
> By default this parameter is set to 0.
**Size** (default: 4)
> Defines the size of the length prefix used by partitioner “binary”
> or the message total size when using partitioner “fixed”.
> When using partitioner “binary” this parameter can be set to 1,2,4 or 8 when
> using uint8,uint16,uint32 or uint64 length prefixes.
> By default this parameter is set to 4.
###### Parameters (from core.SimpleConsumer)[¶](#parameters-from-core-simpleconsumer)
**Streams**
> Defines a list of streams a consumer will send to. This parameter
> is mandatory. When using “*” messages will be sent only to the internal “*”
> stream. It will NOT send messages to all streams.
> By default this parameter is set to an empty list.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a consumer is
> allowed to take to shut down. After this timeout the consumer is always
> considered to have shut down.
> By default this parameter is set to 1000.
**Modulators**
> Defines a list of modulators to be applied to a message before
> it is sent to the list of streams. If a modulator specifies a stream, the
> message is only sent to that specific stream. A message is saved as original
> after all modulators have been applied.
> By default this parameter is set to an empty list.
**ModulatorRoutines**
> Defines the number of go routines reserved for
> modulating messages. Setting this parameter to 0 will use as many go routines
> as the specific consumer plugin is using for fetching data. Any other value
> will force the given number of go routines to be used.
> By default this parameter is set to 0.
**ModulatorQueueSize**
> Defines the size of the channel used to buffer messages
> before they are fetched by the next free modulator go routine. If the
> ModulatorRoutines parameter is set to 0 this parameter is ignored.
> By default this parameter is set to 1024.
###### Examples[¶](#examples)
This example accepts 64 bit length encoded data on TCP port 5880.
```
proxyReceive:
Type: consumer.Proxy
Streams: proxyData
Address: ":5880"
Partitioner: binary
Size: 8
```
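A second, hypothetical setup uses the default delimiter partitioner on a unix domain socket; the socket path is a placeholder.
```
proxyUnixIn:
  Type: consumer.Proxy
  Streams: proxyData
  Address: "unix:///var/gollum.socket"   # placeholder socket path
  Partitioner: delimiter
  Delimiter: "\n"
```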
##### Socket[¶](#socket)
The socket consumer reads messages as-is from a given network or filesystem socket. Messages are separated from the stream by using a specific partitioner method.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Address**
> This value defines the protocol, host and port or socket to bind to.
> This can either be any ip address and port like “localhost:5880” or a file
> like “unix:///var/gollum.socket”. Valid protocols can be derived from the
> golang net package documentation. Common values are “udp”, “tcp” and “unix”.
> By default this parameter is set to “tcp://0.0.0.0:5880”.
**Permissions** (default: 0770)
> This value sets the filesystem permissions for UNIX domain
> sockets as a four-digit octal number.
> By default this parameter is set to “0770”.
**Acknowledge**
> This value can be set to a non-empty value to inform the writer
> that data has been accepted. On success, the given string is sent. Any error
> will close the connection. Acknowledge does not work with UDP based sockets.
> By default this parameter is set to “”.
**Partitioner**
> This value defines the algorithm used to read messages from the
> router. By default this is set to “delimiter”. The following options are available:
> **“delimiter”**
> > > Separates messages by looking for a delimiter string.
> > The delimiter is removed from the message.
> **“ascii”**
> > > Reads an ASCII number at a given offset until a given delimiter is found.
> > Everything to the right of and including the delimiter is removed from the message.
> **“binary”**
> > > Reads a binary number at a given offset and size.
> **“binary_le”**
> > > An alias for “binary”.
> **“binary_be”**
> > > The same as “binary” but uses big endian encoding.
> **“fixed”**
> > > Assumes fixed size messages.
**Delimiter** (default: \n)
> This value defines the delimiter used by the text and delimiter
> partitioners.
> By default this parameter is set to “\n”.
**Offset** (default: 0)
> This value defines the offset used by the binary and text partitioners.
> This setting is ignored by the fixed partitioner.
> By default this parameter is set to “0”.
**Size**
> This value defines the size in bytes used by the binary and fixed
> partitioners. For binary, this can be set to 1,2,4 or 8. The default value
> is 4. For fixed, this defines the size of a message. By default this parameter
> is set to “1”.
**ReconnectAfterSec** (default: 2, unit: sec)
> This value defines the number of seconds to wait before a
> connection is retried.
> By default this parameter is set to “2”.
**AckTimeoutSec** (default: 1, unit: sec)
> This value defines the number of seconds to wait for acknowledges
> to succeed.
> By default this parameter is set to “1”.
**ReadTimeoutSec** (default: 2, unit: sec)
> This value defines the number of seconds to wait for data
> to be received. This setting affects the maximum shutdown duration of this consumer.
> By default this parameter is set to “2”.
**RemoveOldSocket** (default: true)
> If set to true, any existing file with the same name as the
> socket (unix://<path>) is removed prior to connecting.
> By default this parameter is set to “true”.
###### Parameters (from core.SimpleConsumer)[¶](#parameters-from-core-simpleconsumer)
**Streams**
> Defines a list of streams a consumer will send to. This parameter
> is mandatory. When using “*” messages will be sent only to the internal “*”
> stream. It will NOT send messages to all streams.
> By default this parameter is set to an empty list.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a consumer is
> allowed to take to shut down. After this timeout the consumer is always
> considered to have shut down.
> By default this parameter is set to 1000.
**Modulators**
> Defines a list of modulators to be applied to a message before
> it is sent to the list of streams. If a modulator specifies a stream, the
> message is only sent to that specific stream. A message is saved as original
> after all modulators have been applied.
> By default this parameter is set to an empty list.
**ModulatorRoutines**
> Defines the number of go routines reserved for
> modulating messages. Setting this parameter to 0 will use as many go routines
> as the specific consumer plugin is using for fetching data. Any other value
> will force the given number fo go routines to be used.
> By default this parameter is set to 0
**ModulatorQueueSize**
> Defines the size of the channel used to buffer messages
> before they are fetched by the next free modulator go routine. If the
> ModulatorRoutines parameter is set to 0 this parameter is ignored.
> By default this parameter is set to 1024.
###### Examples[¶](#examples)
This example opens a socket and expects messages with a fixed length of 256 bytes:
```
socketIn:
Type: consumer.Socket
Address: unix:///var/gollum.socket
Partitioner: fixed
Size: 256
```
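For illustration, a hypothetical TCP variant using the delimiter partitioner and an acknowledge string; the instance name and acknowledge value are placeholders.
```
tcpIn:
  Type: consumer.Socket
  Streams: socketData
  Address: "tcp://0.0.0.0:5880"
  Partitioner: delimiter
  Delimiter: "\n"
  Acknowledge: "OK\n"    # sent back to the writer on success; not supported for UDP
  ReadTimeoutSec: 5
```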
##### Syslogd[¶](#syslogd)
The syslogd consumer creates a syslogd-compatible log server and receives messages on a TCP or UDP port or a UNIX filesystem socket.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Address**
> Defines the IP address or UNIX socket to listen to.
> This can take one of the four forms below, to listen on a TCP, UDP
> or UNIX domain socket. However, see the “Format” option for details on
> transport support by different formats.
> * [hostname|ip]:<tcp-port>
> * tcp://<hostname|ip>:<tcp-port>
> * udp://<hostname|ip>:<udp-port>
> * unix://<filesystem-path>
> By default this parameter is set to “udp://0.0.0.0:514”
**Format**
> Defines which syslog standard the server will support.
> Three standards, listed below, are currently available. All
> standards support listening to UDP and UNIX domain sockets.
> RFC6587 additionally supports TCP sockets. Default: “RFC6587”.
> * RFC3164 (<https://tools.ietf.org/html/rfc3164>) - unix, udp
> * RFC5424 (<https://tools.ietf.org/html/rfc5424>) - unix, udp
> * RFC6587 (<https://tools.ietf.org/html/rfc6587>) - unix, udp, tcp
> By default this parameter is set to “RFC6587”.
**Permissions** (default: 0770)
> This value sets the filesystem permissions
> as a four-digit octal number in case the address is a Unix domain socket
> (i.e. unix://<filesystem-path>).
> By default this parameter is set to “0770”.
**SetMetadata** (default: false)
> When set to true, syslog based metadata will be attached to
> the message. The metadata fields added depend on the protocol version used.
> RFC3164 supports: tag, timestamp, hostname, priority, facility, severity.
> RFC5424 and RFC6587 support: app_name, version, proc_id, msg_id, timestamp,
> hostname, priority, facility, severity.
> By default this parameter is set to “false”.
**TimestampFormat** (default: 2006-01-02T15:04:05.000 MST)
> When using SetMetadata this string denotes the go time
> format used to convert syslog timestamps into strings.
> By default this parameter is set to “2006-01-02T15:04:05.000 MST”.
###### Parameters (from core.SimpleConsumer)[¶](#parameters-from-core-simpleconsumer)
**Streams**
> Defines a list of streams a consumer will send to. This parameter
> is mandatory. When using “*” messages will be sent only to the internal “*”
> stream. It will NOT send messages to all streams.
> By default this parameter is set to an empty list.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a consumer is
> allowed to take to shut down. After this timeout the consumer is always
> considered to have shut down.
> By default this parameter is set to 1000.
**Modulators**
> Defines a list of modulators to be applied to a message before
> it is sent to the list of streams. If a modulator specifies a stream, the
> message is only sent to that specific stream. A message is saved as original
> after all modulators have been applied.
> By default this parameter is set to an empty list.
**ModulatorRoutines**
> Defines the number of go routines reserved for
> modulating messages. Setting this parameter to 0 will use as many go routines
> as the specific consumer plugin is using for fetching data. Any other value
> will force the given number of go routines to be used.
> By default this parameter is set to 0.
**ModulatorQueueSize**
> Defines the size of the channel used to buffer messages
> before they are fetched by the next free modulator go routine. If the
> ModulatorRoutines parameter is set to 0 this parameter is ignored.
> By default this parameter is set to 1024.
###### Examples[¶](#examples)
Replace the system’s standard syslogd with Gollum
```
SyslogdSocketConsumer:
Type: consumer.Syslogd
Streams: "system_syslog"
Address: "unix:///dev/log"
Format: "RFC3164"
```
Listen on a TCP socket
```
SyslogdTCPSocketConsumer:
Type: consumer.Syslogd
Streams: "tcp_syslog"
Address: "tcp://0.0.0.0:5599"
Format: "RFC6587"
```
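As an additional, hypothetical variant, listen on a non-privileged UDP port with RFC5424 parsing and syslog metadata attached to each message.
```
SyslogdUDPConsumer:
  Type: consumer.Syslogd
  Streams: "udp_syslog"
  Address: "udp://0.0.0.0:5514"   # placeholder non-privileged port
  Format: "RFC5424"
  SetMetadata: true
```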
#### Producers[¶](#producers)
Producers are plugins that transfer messages to external services.
Data arrives in the form of messages and can be converted by using a [formatter](index.html#document-src/plugins/formatter).
**Example producer setup:**
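As a minimal, illustrative sketch (instance and stream names are arbitrary), a profiler consumer can be wired to a console producer through a shared stream:
```
JunkSource:
  Type: consumer.Profiler
  Streams: "profile_data"
  Runs: 100
  Batches: 1
StdOutPrinter:
  Type: producer.Console
  Streams: "profile_data"
  Console: stdout
```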
**List of available producers:**
##### AwsCloudwatchLogs[¶](#awscloudwatchlogs)
The AwsCloudwatchLogs producer plugin sends messages to the AWS Cloudwatch Logs service. Credentials are obtained by gollum automatically.
Parameters
* LogStream: Stream name in cloudwatch logs.
* LogGroup: Group name in cloudwatch logs.
* Region: Amazon region to stream logs to. Defaults to “eu-west-1”.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**LogStream**
> (no documentation available)
**LogGroup**
> (no documentation available)
###### Parameters (from components.AwsCredentials)[¶](#parameters-from-components-awscredentials)
**Credential/Type** (default: none)
> This value defines the credentials that are to be used when
> connecting to aws. Available values are listed below. See
> <https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/#Credentials>
> for more information.
> **environment**
> > > Retrieves credentials from the environment variables of
> > the running process
> **static**
> > > Retrieves credentials value for individual credential fields
> **shared**
> > > Retrieves credentials from the current user’s home directory
> **none**
> > > Use an anonymous login to aws
**Credential/Id**
> is used for “static” type and is used as the AccessKeyID
**Credential/Token**
> is used for “static” type and is used as the SessionToken
**Credential/Secret**
> is used for “static” type and is used as the SecretAccessKey
**Credential/File**
> is used for “shared” type and is used as the path to your
> shared Credentials file (~/.aws/credentials)
**Credential/Profile** (default: default)
> is used for “shared” type and is used for the profile
**Credential/AssumeRole**
> This value defines an IAM role to assume.
> By default this is set to “”.
###### Parameters (from components.AwsMultiClient)[¶](#parameters-from-components-awsmulticlient)
**Region** (default: us-east-1)
> This value defines the used aws region.
> By default this is set to “us-east-1”
**Endpoint**
> This value defines the used aws api endpoint. If no endpoint is set
> the client needs to set the right endpoint for the used region.
> By default this is set to “”.
###### Parameters (from core.BatchedProducer)[¶](#parameters-from-core-batchedproducer)
**Batch/MaxCount** (default: 8192)
> Defines the maximum number of messages per batch. If this
> limit is reached a flush is always triggered.
> By default this parameter is set to 8192.
**Batch/FlushCount** (default: 4096)
> Defines the minimum number of messages required to flush
> a batch. If this limit is reached a flush might be triggered.
> By default this parameter is set to 4096.
**Batch/TimeoutSec** (default: 5, unit: sec)
> Defines the maximum time in seconds messages can stay in
> the internal buffer before being flushed.
> By default this parameter is set to 5.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This configuration sends messages to the Cloudwatch log stream stream_name in log group group_name using shared credentials.
```
CwLogs:
  Type: producer.AwsCloudwatchLogs
  LogStream: stream_name
  LogGroup: group_name
  Credential:
    Type: shared
```
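A hypothetical variant using static credentials instead of a shared credentials file; the id and secret values are placeholders.
```
CwLogsStatic:
  Type: producer.AwsCloudwatchLogs
  LogStream: stream_name
  LogGroup: group_name
  Credential:
    Type: static
    Id: "EXAMPLE_ACCESS_KEY_ID"    # placeholder access key id
    Secret: "EXAMPLE_SECRET_KEY"   # placeholder secret access key
```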
##### AwsFirehose[¶](#awsfirehose)
This producer sends data to an AWS Firehose stream.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**StreamMapping**
> This value defines a translation from gollum stream names
> to firehose stream names. If no mapping is given, the gollum stream name is
> used as the firehose stream name.
> By default this parameter is set to “empty”
**RecordMaxMessages** (default: 1)
> This value defines the number of messages to send
> in one record to aws firehose.
> By default this parameter is set to “1”.
**RecordMessageDelimiter** (default: \n)
> This value defines the delimiter string to use between
> messages within a firehose record.
> By default this parameter is set to “\n”.
**SendTimeframeMs** (default: 1000, unit: ms)
> This value defines the timeframe in milliseconds in which a second
> batch send can be triggered.
> By default this parameter is set to “1000”.
###### Parameters (from components.AwsCredentials)[¶](#parameters-from-components-awscredentials)
**Credential/Type** (default: none)
> This value defines the credentials that are to be used when
> connecting to aws. Available values are listed below. See
> <https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/#Credentials>
> for more information.
> **environment**
> > > Retrieves credentials from the environment variables of
> > the running process
> **static**
> > > Retrieves credentials value for individual credential fields
> **shared**
> > > Retrieves credentials from the current user’s home directory
> **none**
> > > Use an anonymous login to aws
**Credential/Id**
> is used for “static” type and is used as the AccessKeyID
**Credential/Token**
> is used for “static” type and is used as the SessionToken
**Credential/Secret**
> is used for “static” type and is used as the SecretAccessKey
**Credential/File**
> is used for “shared” type and is used as the path to your
> shared Credentials file (~/.aws/credentials)
**Credential/Profile** (default: default)
> is used for “shared” type and is used for the profile
**Credential/AssumeRole**
> This value defines an IAM role to assume.
> By default this is set to “”.
###### Parameters (from components.AwsMultiClient)[¶](#parameters-from-components-awsmulticlient)
**Region** (default: us-east-1)
> This value defines the used aws region.
> By default this is set to “us-east-1”
**Endpoint**
> This value defines the used aws api endpoint. If no endpoint is set
> the client needs to set the right endpoint for the used region.
> By default this is set to “”.
###### Parameters (from core.BatchedProducer)[¶](#parameters-from-core-batchedproducer)
**Batch/MaxCount** (default: 8192)
> Defines the maximum number of messages per batch. If this
> limit is reached a flush is always triggered.
> By default this parameter is set to 8192.
**Batch/FlushCount** (default: 4096)
> Defines the minimum number of messages required to flush
> a batch. If this limit is reached a flush might be triggered.
> By default this parameter is set to 4096.
**Batch/TimeoutSec** (default: 5, unit: sec)
> Defines the maximum time in seconds messages can stay in
> the internal buffer before being flushed.
> By default this parameter is set to 5.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This example sets up a simple aws firehose producer:
```
firehoseOut:
Type: producer.AwsFirehose
Streams: "*"
StreamMapping:
"*": default
Credential:
Type: shared
File: /Users/<USERNAME>/.aws/credentials
Profile: default
Region: eu-west-1
RecordMaxMessages: 1
RecordMessageDelimiter: "\n"
SendTimeframeSec: 1
```
##### AwsKinesis[¶](#awskinesis)
This producer sends data to an AWS kinesis stream.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**StreamMapping**
> This value defines a translation from gollum stream names
> to kinesis stream names. If no mapping is given the gollum stream name is
> used as the kinesis stream name.
> By default this parameter is set to “empty”
**RecordMaxMessages** (default: 1)
> This value defines the maximum number of messages to join into
> a kinesis record.
> By default this parameter is set to “500”.
**RecordMessageDelimiter** (default: \n)
> This value defines the delimiter string to use between
> messages within a kinesis record.
> By default this parameter is set to “\n”.
**SendTimeframeMs** (default: 1000, unit: ms)
> This value defines the timeframe in milliseconds in which a second
> batch send can be triggered.
> By default this parameter is set to “1000”.
###### Parameters (from components.AwsCredentials)[¶](#parameters-from-components-awscredentials)
**Credential/Type** (default: none)
> This value defines the credentials that are to be used when
> connecting to aws. Available values are listed below. See
> <https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/#Credentials>
> for more information.
> **environment**
> > > Retrieves credentials from the environment variables of
> > the running process
> **static**
> > > Retrieves credentials value for individual credential fields
> **shared**
> > > Retrieves credentials from the current user’s home directory
> **none**
> > > Use an anonymous login to aws
**Credential/Id**
> is used for “static” type and is used as the AccessKeyID
**Credential/Token**
> is used for “static” type and is used as the SessionToken
**Credential/Secret**
> is used for “static” type and is used as the SecretAccessKey
**Credential/File**
> is used for “shared” type and is used as the path to your
> shared Credentials file (~/.aws/credentials)
**Credential/Profile** (default: default)
> is used for “shared” type and is used for the profile
**Credential/AssumeRole**
> This value defines an IAM role to assume.
> By default this is set to “”.
###### Parameters (from components.AwsMultiClient)[¶](#parameters-from-components-awsmulticlient)
**Region** (default: us-east-1)
> This value defines the used aws region.
> By default this is set to “us-east-1”
**Endpoint**
> This value defines the used aws api endpoint. If no endpoint is set
> the client needs to set the right endpoint for the used region.
> By default this is set to “”.
###### Parameters (from core.BatchedProducer)[¶](#parameters-from-core-batchedproducer)
**Batch/MaxCount** (default: 8192)
> Defines the maximum number of messages per batch. If this
> limit is reached a flush is always triggered.
> By default this parameter is set to 8192.
**Batch/FlushCount** (default: 4096)
> Defines the minimum number of messages required to flush
> a batch. If this limit is reached a flush might be triggered.
> By default this parameter is set to 4096.
**Batch/TimeoutSec** (default: 5, unit: sec)
> Defines the maximum time in seconds messages can stay in
> the internal buffer before being flushed.
> By default this parameter is set to 5.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This example sets up a simple aws Kinesis producer:
```
KinesisOut:
Type: producer.AwsKinesis
Streams: "*"
StreamMapping:
"*": default
Credential:
Type: shared
File: /Users/<USERNAME>/.aws/credentials
Profile: default
Region: eu-west-1
RecordMaxMessages: 1
RecordMessageDelimiter: "\n"
SendTimeframeSec: 1
```
##### AwsS3[¶](#awss3)
This producer sends messages to Amazon S3.
Each “file” uses a configurable batch and sends the content by a multipart upload to s3. This principle avoids temporary storage on disk.
Please keep in mind that Amazon S3 does not support appending to existing objects. Therefore rotation is mandatory in this producer.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Bucket**
> The S3 bucket to upload to
**File** (default: gollum_*.log)
> This value is used as a template for final file names. The string
> “*” will be replaced with the active stream name.
> By default this parameter is set to “gollum_*.log”.
###### Parameters (from components.AwsCredentials)[¶](#parameters-from-components-awscredentials)
**Credential/Type** (default: none)
> This value defines the credentials that are to be used when
> connecting to aws. Available values are listed below. See
> <https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/#Credentials>
> for more information.
> **environment**
> > > Retrieves credentials from the environment variables of
> > the running process
> **static**
> > > Retrieves credentials value for individual credential fields
> **shared**
> > > Retrieves credentials from the current user’s home directory
> **none**
> > > Use an anonymous login to aws
**Credential/Id**
> is used for “static” type and is used as the AccessKeyID
**Credential/Token**
> is used for “static” type and is used as the SessionToken
**Credential/Secret**
> is used for “static” type and is used as the SecretAccessKey
**Credential/File**
> is used for “shared” type and is used as the path to your
> shared Credentials file (~/.aws/credentials)
**Credential/Profile** (default: default)
> is used for “shared” type and is used for the profile
**Credential/AssumeRole**
> This value defines an IAM role to assume.
> By default this is set to “”.
###### Parameters (from components.AwsMultiClient)[¶](#parameters-from-components-awsmulticlient)
**Region** (default: us-east-1)
> This value defines the used aws region.
> By default this is set to “us-east-1”
**Endpoint**
> This value defines the used aws api endpoint. If no endpoint is set
> the client needs to set the right endpoint for the used region.
> By default this is set to “”.
###### Parameters (from components.BatchedWriterConfig)[¶](#parameters-from-components-batchedwriterconfig)
**Batch/TimeoutSec** (default: 5, unit: sec)
> This value defines the maximum number of seconds to wait after the last
> message arrived before a batch is flushed automatically.
> By default this parameter is set to “5”.
**Batch/MaxCount** (default: 8192)
> This value defines the maximum number of messages that can be buffered
> before a flush is mandatory. If the buffer is full and a flush is still
> underway or cannot be triggered for other reasons, the producer will block.
> By default this parameter is set to “8192”.
**Batch/FlushCount** (default: 4096)
> This value defines the number of messages to be buffered before they are
> written to disk. This setting is clamped to “BatchMaxCount”.
> By default this parameter is set to “BatchMaxCount / 2”.
**Batch/FlushTimeoutSec** (default: 0, unit: sec)
> This value defines the maximum number of seconds to wait before
> a flush is aborted during shutdown. Setting this parameter to “0” means the
> flushing procedure is never aborted.
> By default this parameter is set to “0”.
###### Parameters (from components.RotateConfig)[¶](#parameters-from-components-rotateconfig)
**Rotation/Enable** (default: false)
> If this value is set to “true” the logs will rotate after reaching certain thresholds.
> By default this parameter is set to “false”.
**Rotation/TimeoutMin** (default: 1440, unit: min)
> This value defines a timeout in minutes that will cause the logs to
> rotate. Can be set in parallel with Rotation/SizeMB.
> By default this parameter is set to “1440”.
**Rotation/SizeMB** (default: 1024, unit: mb)
> This value defines the maximum file size in MB that triggers a file rotate.
> Files can get bigger than this size.
> By default this parameter is set to “1024”.
**Rotation/Timestamp** (default: 2006-01-02_15)
> This value sets the timestamp added to the filename when file rotation
> is enabled. The format is based on Go’s time.Format function.
> By default this parameter is set to “2006-01-02_15”.
**Rotation/ZeroPadding** (default: 0)
> This value sets the number of leading zeros when rotating files with
> an existing name. Setting this parameter to 0 won’t add zeros; any other
> number defines the number of leading zeros to be used.
> By default this parameter is set to “0”.
**Rotation/Compress** (default: false)
> This value defines if a rotated logfile is to be gzip compressed or not.
> By default this parameter is set to “false”.
**Rotation/At**
> This value defines a specific time for rotation in hh:mm format.
> By default this parameter is set to “”.
**Rotation/AtHour** (default: -1)
> (no documentation available)
**Rotation/AtMin** (default: -1)
> (no documentation available)
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This example sends all received messages from all streams to S3, creating a separate file for each stream:
```
S3Out:
Type: producer.AwsS3
Streams: "*"
Credential:
Type: shared
File: /Users/<USERNAME>/.aws/credentials
Profile: default
Region: eu-west-1
Bucket: gollum-s3-test
Batch:
TimeoutSec: 60
MaxCount: 1000
FlushCount: 500
FlushTimeoutSec: 0
Rotation:
Timestamp: 2006-01-02T15:04:05.999999999Z07:00
TimeoutMin: 1
SizeMB: 20
Modulators:
- format.Envelope:
Postfix: "\n"
```
##### Benchmark[¶](#benchmark)
This producer is meant to provide more meaningful results in benchmark situations than producer.Null, as it is based on core.BufferedProducer.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
###### Parameters (from core.BufferedProducer)[¶](#parameters-from-core-bufferedproducer)
**Channel**
> This value defines the capacity of the message buffer.
> By default this parameter is set to “8192”.
**ChannelTimeoutMs** (default: 0, unit: ms)
> This value defines a timeout for each message
> before the message is discarded. To disable the timeout, set this
> parameter to 0.
> By default this parameter is set to “0”.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
```
benchmark:
Type: producer.Benchmark
Streams: "*"
```
##### Console[¶](#console)
The console producer writes messages to standard output or standard error.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Console**
> Chooses the output device; either “stdout” or “stderr”.
> By default this is set to “stdout”.
###### Parameters (from core.BufferedProducer)[¶](#parameters-from-core-bufferedproducer)
**Channel**
> This value defines the capacity of the message buffer.
> By default this parameter is set to “8192”.
**ChannelTimeoutMs** (default: 0, unit: ms)
> This value defines a timeout for each message
> before the message is discarded. To disable the timeout, set this
> parameter to 0.
> By default this parameter is set to “0”.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
```
StdErrPrinter:
Type: producer.Console
Streams: myerrorstream
Console: stderr
```
##### ElasticSearch[¶](#elasticsearch)
The ElasticSearch producer sends messages to elastic search using the bulk http API. The producer expects a json payload.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Retry/Count**
> Sets the number of retries before an Elasticsearch request
> finally fails.
> By default this parameter is set to “3”.
**Retry/TimeToWaitSec**
> This value denotes the time in seconds after which a
> failed dataset will be transmitted again.
> By default this parameter is set to “3”.
**SetGzip**
> This value enables or disables gzip compression for Elasticsearch
> requests (disabled by default). This option is passed one-to-one to the library
> package. See <http://godoc.org/gopkg.in/olivere/elastic.v5#SetGzip>
> By default this parameter is set to “false”.
**Servers**
> This value defines a list of servers to connect to.
**User**
> This value is used as the username for the elasticsearch server.
> By default this parameter is set to “”.
**Password**
> This value is used as the password for the elasticsearch server.
> By default this parameter is set to “”.
**StreamProperties**
> This value defines the mapping and settings for each stream.
> Use the stream name as the index here.
**StreamProperties/<streamName>/Index**
> The value defines the Elasticsearch
> index used for the stream.
**StreamProperties/<streamName>/Type**
> This value defines the document type
> used for the stream.
**StreamProperties/<streamName>/TimeBasedIndex**
> This value can be set to “true”
> to append the date of the message to the index as in “<index>_<TimeBasedFormat>”.
> NOTE: This setting incurs a performance penalty because it is necessary to
> check if an index exists for each message!
> By default this parameter is set to “false”.
**StreamProperties/<streamName>/TimeBasedFormat**
> This value can be set to a valid
> go time format string to be used with TimeBasedIndex.
> By default this parameter is set to “2006-01-02”.
**StreamProperties/<streamName>/Mapping**
> This value is a map which is used
> for the document field mapping. As document type, the already defined type is
> reused for the field mapping. See
> <https://www.elastic.co/guide/en/elasticsearch/reference/5.4/indices-create-index.html#mappings>
**StreamProperties/<streamName>/Settings**
> This value is a map which is used
> for the index settings. See
> <https://www.elastic.co/guide/en/elasticsearch/reference/5.4/indices-create-index.html#mappings>
###### Parameters (from core.BatchedProducer)[¶](#parameters-from-core-batchedproducer)
**Batch/MaxCount** (default: 8192)
> Defines the maximum number of messages per batch. If this
> limit is reached a flush is always triggered.
> By default this parameter is set to 8192.
**Batch/FlushCount** (default: 4096)
> Defines the minimum number of messages required to flush
> a batch. If this limit is reached a flush might be triggered.
> By default this parameter is set to 4096.
**Batch/TimeoutSec** (default: 5, unit: sec)
> Defines the maximum time in seconds messages can stay in
> the internal buffer before being flushed.
> By default this parameter is set to 5.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This example starts a simple Twitter example producer for a locally running Elasticsearch:
```
producerElasticSearch:
Type: producer.ElasticSearch
Streams: tweets_stream
SetGzip: true
Servers:
- http://127.0.0.1:9200
StreamProperties:
tweets_stream:
Index: twitter
DayBasedIndex: true
Type: tweet
Mapping:
# index mapping for payload
user: keyword
message: text
Settings:
number_of_shards: 1
number_of_replicas: 1
```
##### File[¶](#file)
The file producer writes messages to a file. This producer also allows log rotation and compression of the rotated logs. Folders in the file path will be created if necessary.
Each target file will be handled with separate batch processing.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**File**
> This value contains the path to the log file to write. The wildcard character “*”
> can be used as a placeholder for the stream name.
> By default this parameter is set to “/var/log/gollum.log”.
**FileOverwrite**
> This value causes the file to be overwritten instead of appending new data
> to it.
> By default this parameter is set to “false”.
**Permissions** (default: 0644)
> Defines the UNIX filesystem permissions used when creating
> the named file as an octal number.
> By default this parameter is set to “0644”.
**FolderPermissions** (default: 0755)
> Defines the UNIX filesystem permissions used when creating
> the folders as an octal number.
> By default this parameter is set to “0755”.
###### Parameters (from components.BatchedWriterConfig)[¶](#parameters-from-components-batchedwriterconfig)
**Batch/TimeoutSec** (default: 5, unit: sec)
> This value defines the maximum number of seconds to wait after the last
> message arrived before a batch is flushed automatically.
> By default this parameter is set to “5”.
**Batch/MaxCount** (default: 8192)
> This value defines the maximum number of messages that can be buffered
> before a flush is mandatory. If the buffer is full and a flush is still
> underway or cannot be triggered for other reasons, the producer will block.
> By default this parameter is set to “8192”.
**Batch/FlushCount** (default: 4096)
> This value defines the number of messages to be buffered before they are
> written to disk. This setting is clamped to “BatchMaxCount”.
> By default this parameter is set to “BatchMaxCount / 2”.
**Batch/FlushTimeoutSec** (default: 0, unit: sec)
> This value defines the maximum number of seconds to wait before
> a flush is aborted during shutdown. Setting this parameter to “0” disables aborting
> the flushing procedure.
> By default this parameter is set to “0”.
###### Parameters (from components.RotateConfig)[¶](#parameters-from-components-rotateconfig)
**Rotation/Enable** (default: false)
> If this value is set to “true” the logs will rotate after reaching certain thresholds.
> By default this parameter is set to “false”.
**Rotation/TimeoutMin** (default: 1440, unit: min)
> This value defines a timeout in minutes that will cause the logs to
> rotate. Can be set in parallel with RotateSizeMB.
> By default this parameter is set to “1440”.
**Rotation/SizeMB** (default: 1024, unit: mb)
> This value defines the maximum file size in MB that triggers a file rotate.
> Files can get bigger than this size.
> By default this parameter is set to “1024”.
**Rotation/Timestamp** (default: 2006-01-02_15)
> This value sets the timestamp added to the filename when file rotation
> is enabled. The format is based on Go’s time.Format function.
> By default this parameter is set to “2006-01-02_15”.
**Rotation/ZeroPadding** (default: 0)
> This value sets the number of leading zeros when rotating files with
> an existing name. Setting this setting to 0 won’t add zeros, every other
> number defines the number of leading zeros to be used.
> By default this parameter is set to “0”.
**Rotation/Compress** (default: false)
> This value defines if a rotated logfile is to be gzip compressed or not.
> By default this parameter is set to “false”.
**Rotation/At**
> This value defines a specific time for rotation in hh:mm format.
> By default this parameter is set to “”.
**Rotation/AtHour** (default: -1)
> (no documentation available)
**Rotation/AtMin** (default: -1)
> (no documentation available)
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Parameters (from file.Pruner)[¶](#parameters-from-file-pruner)
**Prune/Count** (default: 0)
> This value removes old logfiles upon rotate so that only the given
> number of logfiles remain. Logfiles are located by the name defined by “File”
> and are pruned by date (followed by name). Set this value to “0” to disable pruning by count.
> By default this parameter is set to “0”.
**Prune/AfterHours** (default: 0)
> This value removes old logfiles that are older than a given number
> of hours. Set this value to “0” to disable pruning by lifetime.
> By default this parameter is set to “0”.
**Prune/TotalSizeMB** (default: 0, unit: mb)
> This value removes old logfiles upon rotate so that only the
> given number of MBs are used by logfiles. Logfiles are located by the name
> defined by “File” and are pruned by date (followed by name).
> Set this value to “0” to disable pruning by file size.
> By default this parameter is set to “0”.
###### Examples[¶](#examples)
This example will write the messages from all streams to /tmp/gollum.log after every 64 messages or after 60 seconds:
```
fileOut:
Type: producer.File
Streams: "*"
File: /tmp/gollum.log
Batch:
MaxCount: 128
FlushCount: 64
TimeoutSec: 60
FlushTimeoutSec: 3
```
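The example above only covers batching. The following sketch additionally enables the rotation and pruning parameters listed above; the plugin name, stream and paths are made up, and the nested “Rotation”/“Prune” keys are assumed to map to the documented “Rotation/…” and “Prune/…” parameters in the same way the “Batch” block does:
```
accessLogOut:
  Type: producer.File
  Streams: accesslogs
  File: /var/log/gollum/access.log
  Rotation:
    Enable: true
    SizeMB: 512
    Compress: true
  Prune:
    Count: 10
```
With this configuration the file is rotated once it exceeds 512 MB, rotated files are gzip compressed, and only the ten newest files are kept.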
##### HTTPRequest[¶](#httprequest)
The HTTPRequest producer sends messages as HTTP requests to a given webserver.
In RawData mode, incoming messages are expected to contain complete HTTP requests in “wire format”, such as:
```
POST /foo/bar HTTP/1.0
Content-type: text/plain
Content-length: 24

Dummy test
Request data
```
In this mode, the message’s contents are parsed as an HTTP request and sent to the destination server (virtually) unchanged. If the message cannot be parsed as an HTTP request, an error is logged. Only the scheme,
host and port components of the “Address” URL are used; any path and query parameters are ignored. The “Encoding” parameter is ignored.
If RawData mode is off, a POST request is made to the destination server for each incoming message, using the complete URL in “Address”. The incoming message’s contents are delivered in the POST request’s body and Content-type is set to the value of “Encoding”.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Address**
> Defines the URL to send HTTP requests to. If the value doesn’t
> contain “://”, it is prepended with “http://”, so short forms like
> “localhost:8088” are accepted. The default value is “http://localhost:80”.
**RawData** (default: true)
> Turns “RawData” mode on. See the description above.
**Encoding** (default: text/plain; charset=utf-8)
> Defines the payload encoding when RawData is set to false.
###### Parameters (from core.BufferedProducer)[¶](#parameters-from-core-bufferedproducer)
**Channel**
> This value defines the capacity of the message buffer.
> By default this parameter is set to “8192”.
**ChannelTimeoutMs** (default: 0, unit: ms)
> This value defines a timeout for each message
> before the message will discarded. To disable the timeout, set this
> parameter to 0.
> By default this parameter is set to “0”.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
```
HttpOut01:
Type: producer.HTTPRequest
Streams: http_01
Address: "http://localhost:8099/test"
RawData: true
```
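As a counterpart to the RawData example, the following hedged sketch shows the non-RawData mode described above; the plugin name, stream name and endpoint are hypothetical:
```
HttpJsonOut:
  Type: producer.HTTPRequest
  Streams: json_events
  Address: "http://localhost:8099/ingest"
  RawData: false
  Encoding: "application/json; charset=utf-8"
```
Here each incoming message becomes the body of a POST request to the full URL in Address, with Content-type taken from Encoding.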
##### InfluxDB[¶](#influxdb)
This producer writes data to an InfluxDB endpoint. Data is not converted to the correct InfluxDB format automatically; proper formatting might be required.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Version**
> Defines the InfluxDB protocol version to use. This can either be
> 80-89 for 0.8.x, 90 for 0.9.0 or 91-100 for 0.9.1 or later.
> By default this parameter is set to 100.
**Host**
> Defines the host (and port) of the InfluxDB master.
> By default this parameter is set to “localhost:8086”.
**User**
> Defines the InfluxDB username to use. If this is empty,
> credentials are not used.
> By default this parameter is set to “”.
**Password**
> Defines the InfluxDB password to use.
> By default this parameter is set to “”.
**Database**
> Sets the InfluxDB database to write to.
> By default this parameter is set to “default”.
**TimeBasedName**
> When set to true, the Database parameter is treated as a
> template for time.Format and the resulting string is used as the database
> name. You can e.g. use “default-2006-01-02” to switch databases each day.
> By default this parameter is set to “true”.
**RetentionPolicy**
> Only available for Version 90. This setting defines the
> InfluxDB retention policy allowed with this protocol version.
> By default this parameter is set to “”.
###### Parameters (from core.BatchedProducer)[¶](#parameters-from-core-batchedproducer)
**Batch/MaxCount** (default: 8192)
> Defines the maximum number of messages per batch. If this
> limit is reached a flush is always triggered.
> By default this parameter is set to 8192.
**Batch/FlushCount** (default: 4096)
> Defines the minimum number of messages required to flush
> a batch. If this limit is reached a flush might be triggered.
> By default this parameter is set to 4096.
**Batch/TimeoutSec** (default: 5, unit: sec)
> Defines the maximum time in seconds messages can stay in
> the internal buffer before being flushed.
> By default this parameter is set to 5.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
```
metricsToInflux:
Type: producer.InfluxDB
Streams: metrics
Host: "influx01:8086"
Database: "metrics"
TimeBasedName: false
Batch:
MaxCount: 2000
FlushCount: 100
TimeoutSec: 5
```
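Assuming the TimeBasedName behaviour described above, the following sketch (names are hypothetical) switches to a new database every day:
```
dailyMetricsToInflux:
  Type: producer.InfluxDB
  Streams: metrics
  Host: "influx01:8086"
  Database: "metrics-2006-01-02"
  TimeBasedName: true
```
The Database value is treated as a time.Format template, so on 2017-06-01 data would be written to a database named “metrics-2017-06-01”.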
##### Kafka[¶](#kafka)
This producer writes messages to a kafka cluster. This producer is backed by the sarama library (<https://github.com/Shopify/sarama>) so most settings directly relate to the settings of that library.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Servers**
> Defines a list of ideally all brokers in the cluster. At least one
> broker is required.
> By default this parameter is set to an empty list.
**Version**
> Defines the kafka protocol version to use. Common values are 0.8.2,
> 0.9.0 or 0.10.0. Values of the form “A.B” are allowed as well as “A.B.C”
> and “A.B.C.D”. If the version given is not known, the closest possible
> version is chosen. If GroupId is set to a value < “0.9”, “0.9.0.1” will be used.
> By default this parameter is set to “0.8.2”.
**Topics**
> Defines a stream to topic mapping. If a stream is not mapped the
> stream name is used as topic. You can define the wildcard stream (*) here,
> too. If defined, all streams that do not have a specific mapping will go to
> this topic (including _GOLLUM_).
> By default this parameter is set to an empty list.
**ClientId** (default: gollum)
> Sets the kafka client id used by this producer.
> By default this parameter is set to “gollum”.
**Partitioner**
> Defines the distribution algorithm to use. Valid values are:
> Random, Roundrobin and Hash.
> By default this parameter is set to “Roundrobin”.
**PartitionHasher**
> Defines the hash algorithm to use when Partitioner is set
> to “Hash”. Accepted values are “fnv1-a” and “murmur2”.
**KeyFrom**
> Defines the metadata field that contains the string to be used as
> the key passed to kafka. When set to an empty string no key is used.
> By default this parameter is set to “”.
**Compression**
> Defines the compression algorithm to use.
> Possible values are “none”, “zip” and “snappy”.
> By default this parameter is set to “none”.
**RequiredAcks**
> Defines the numbers of acknowledgements required until a
> message is marked as “sent”. When set to -1 all replicas must acknowledge a
> message.
> By default this parameter is set to 1.
**TimeoutMs**
> Denotes the maximum time the broker will wait for acks. This
> setting becomes active when RequiredAcks is set to wait for multiple commits.
> By default this parameter is set to 10000.
**GracePeriodMs** (default: 100, unit: ms)
> Defines the number of milliseconds to wait for Sarama to
> accept a single message. After this period a message is sent to the fallback.
> This setting mitigates a conceptual problem in the sarama API which can lead
> to long blocking times during startup.
> By default this parameter is set to 100.
**MaxOpenRequests**
> Defines the maximum number of simultaneous connections
> opened to a single broker at a time.
> By default this parameter is set to 5.
**ServerTimeoutSec**
> Defines the time after which a connection is set to timed
> out.
> By default this parameter is set to 30.
**SendTimeoutMs**
> Defines the number of milliseconds to wait for a broker
> before marking a message as timed out.
> By default this parameter is set to 250.
**SendRetries**
> Defines how many times a message should be sent again before a
> broker is marked as not reachable. Please note that this setting should never
> be 0. See <https://github.com/Shopify/sarama/issues/294>.
> By default this parameter is set to 1.
**AllowNilValue** (default: false)
> When enabled messages containing an empty or nil payload
> will not be rejected.
> By default this parameter is set to false.
**Batch/MinCount**
> Sets the minimum number of messages required to send a
> request.
> By default this parameter is set to 1.
**Batch/MaxCount**
> Defines the maximum number of messages buffered before a
> request is sent. A value of 0 will remove this limit.
> By default this parameter is set to 0.
**Batch/MinSizeByte**
> Defines the minimum number of bytes to buffer before
> sending a request.
> By default this parameter is set to 8192.
**Batch/SizeMaxKB**
> Defines the maximum allowed message size in KB.
> Messages bigger than this limit will be rejected.
> By default this parameter is set to 1024.
**Batch/TimeoutMs**
> Defines the maximum time in milliseconds after which a
> new request will be sent, ignoring Batch/MinCount and Batch/MinSizeByte.
> By default this parameter is set to 3.
**ElectRetries**
> Defines how many times a metadata request is to be retried
> during a leader election phase.
> By default this parameter is set to 3.
**ElectTimeoutMs**
> Defines the number of milliseconds to wait for the cluster
> to elect a new leader.
> By default this parameter is set to 250.
**MetadataRefreshMs**
> Defines the interval in milliseconds for refetching
> cluster metadata.
> By default this parameter is set to 600000.
**TlsEnable**
> Enables TLS communication with brokers.
> By default this parameter is set to false.
**TlsKeyLocation**
> Path to the client’s private key (PEM) used for TLS based
> authentication.
> By default this parameter is set to “”.
**TlsCertificateLocation**
> Path to the client’s public key (PEM) used for TLS
> based authentication.
> By default this parameter is set to “”.
**TlsCaLocation**
> Path to the CA certificate(s) used for verifying the
> broker’s key.
> By default this parameter is set to “”.
**TlsServerName**
> Used to verify the hostname on the server’s certificate
> unless TlsInsecureSkipVerify is true.
> By default this parameter is set to “”.
**TlsInsecureSkipVerify**
> Disables server certificate chain and host name
> verification.
> By default this parameter is set to false.
**SaslEnable**
> Enables SASL based authentication.
> By default this parameter is set to false.
**SaslUsername**
> Sets the user name used for SASL/PLAIN authentication.
> By default this parameter is set to “”.
**SaslPassword**
> Sets the password used for SASL/PLAIN authentication.
> By default this parameter is set to “”.
**MessageBufferCount**
> Sets the internal channel size for the kafka client.
> By default this is set to 8192.
###### Parameters (from core.BufferedProducer)[¶](#parameters-from-core-bufferedproducer)
**Channel**
> This value defines the capacity of the message buffer.
> By default this parameter is set to “8192”.
**ChannelTimeoutMs** (default: 0, unit: ms)
> This value defines a timeout for each message
> before the message will discarded. To disable the timeout, set this
> parameter to 0.
> By default this parameter is set to “0”.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
```
kafkaWriter:
Type: producer.Kafka
Streams: logs
Compression: zip
Servers:
- "kafka01:9092"
- "kafka02:9092"
- "kafka03:9092"
- "kafka04:9092"
```
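A second, hedged sketch showing the Topics mapping and hash partitioning by a metadata key; the broker, stream and metadata field names are made up:
```
kafkaKeyedWriter:
  Type: producer.Kafka
  Streams: "*"
  Servers:
    - "kafka01:9092"
  Version: "0.10.0"
  Partitioner: Hash
  KeyFrom: "user_id"
  RequiredAcks: -1
  Topics:
    "accesslogs": "access"
    "*": "catchall"
```
With RequiredAcks set to -1 all replicas have to acknowledge a message before it counts as sent, and the wildcard topic mapping catches every stream without an explicit entry.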
##### Null[¶](#null)
This producer is meant to be used as a sink for data. It will throw away all messages without notice.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
```
TrashCan:
Type: producer.Null
Streams: trash
```
##### Proxy[¶](#proxy)
This producer is compatible with the Proxy consumer plugin.
Responses to messages sent to the given address are sent back to the original consumer if it is a compatible message source. As with consumer.proxy the returned messages are partitioned by common message length algorithms.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Address**
> This value stores the identifier to connect to.
> This can either be any ip address and port like “localhost:5880” or a file
> like “unix:///var/gollum.Proxy”.
> By default this parameter is set to “:5880”.
**ConnectionBufferSizeKB** (default: 1024, unit: kb)
> This value sets the connection buffer size in KB.
> This also defines the size of the buffer used by the message parser.
> By default this parameter is set to “1024”.
**TimeoutSec** (default: 1, unit: sec)
> This value defines the maximum time in seconds a client is allowed to take
> for a response.
> By default this parameter is set to “1”.
**Partitioner**
> This value defines the algorithm used to read messages from the stream.
> The messages will be sent as a whole, no cropping or removal will take place.
> By default this parameter is set to “delimiter”.
> **delimiter**
> > separates messages by looking for a delimiter string. The
> > delimiter is included into the left hand message.
> **ascii**
> > reads an ASCII encoded number at a given offset until a given
> > delimiter is found.
> **binary**
> > reads a binary number at a given offset and size.
> **binary_le**
> > is an alias for “binary”.
> **binary_be**
> > is the same as “binary” but uses big endian encoding.
> **fixed**
> > assumes fixed size messages.
**Delimiter**
> This value defines the delimiter used by the text and delimiter partitioner.
> By default this parameter is set to “\n”.
**Offset**
> This value defines the offset used by the binary and text partitioner.
> This setting is ignored by the fixed partitioner.
> By default this parameter is set to “0”.
**Size**
> This value defines the size in bytes used by the binary or fixed partitioner.
> For binary this can be set to 1,2,4 or 8, for fixed this defines the size of a message.
> By default this parameter is set to “4” for binary or “1” for the fixed partitioner.
###### Parameters (from core.BufferedProducer)[¶](#parameters-from-core-bufferedproducer)
**Channel**
> This value defines the capacity of the message buffer.
> By default this parameter is set to “8192”.
**ChannelTimeoutMs** (default: 0, unit: ms)
> This value defines a timeout for each message
> before the message will discarded. To disable the timeout, set this
> parameter to 0.
> By default this parameter is set to “0”.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This example will send 64bit length encoded data on TCP port 5880.
```
proxyOut:
Type: producer.Proxy
Address: ":5880"
Partitioner: binary
Size: 8
```
##### Redis[¶](#redis)
This producer sends messages to a redis server. Different redis storage types and database indexes are supported. This producer does not implement support for redis 3.0 cluster.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Address**
> Stores the identifier to connect to.
> This can either be any ip address and port like “localhost:6379” or a file
> like “unix:///var/redis.socket”. By default this is set to “:6379”.
**Database** (default: 0)
> Defines the redis database to connect to.
**Key**
> Defines the redis key to store the values in.
> This field is ignored when “KeyFrom” is set.
> By default this is set to “default”.
**Storage**
> Defines the type of the storage to use. Valid values are: “hash”,
> “list”, “set”, “sortedset”, “string”. By default this is set to “hash”.
**KeyFrom**
> Defines the name of the metadata field used as a key for messages
> sent to redis. If the name is an empty string no key is sent. By default
> this value is set to an empty string.
**FieldFrom**
> Defines the name of the metadata field used as a field for messages
> sent to redis. If the name is an empty string no field is sent. By default
> this value is set to an empty string.
**Password**
> (no documentation available)
###### Parameters (from core.BufferedProducer)[¶](#parameters-from-core-bufferedproducer)
**Channel**
> This value defines the capacity of the message buffer.
> By default this parameter is set to “8192”.
**ChannelTimeoutMs** (default: 0, unit: ms)
> This value defines a timeout for each message
> before the message will discarded. To disable the timeout, set this
> parameter to 0.
> By default this parameter is set to “0”.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
```
RedisProducer00:
Type: producer.Redis
Address: ":6379"
Key: "mykey"
Storage: "hash"
```
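A hedged variant using the metadata-based KeyFrom and FieldFrom parameters with hash storage; the stream and metadata field names are hypothetical:
```
RedisSessionStore:
  Type: producer.Redis
  Streams: sessions
  Address: "redis01:6379"
  Database: 1
  Storage: "hash"
  KeyFrom: "session_id"
  FieldFrom: "attribute"
```
The payload is then stored under the hash key taken from the “session_id” metadata field, in the field named by the “attribute” metadata field.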
##### Scribe[¶](#scribe)
This producer allows sending messages to Facebook’s scribe service.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Address**
> Defines the host and port of a scribe endpoint.
> By default this parameter is set to “localhost:1463”.
**ConnectionBufferSizeKB** (default: 1024, unit: kb)
> Sets the connection socket buffer size in KB.
> By default this parameter is set to 1024.
**HeartBeatIntervalSec** (default: 5, unit: sec)
> Defines the interval in seconds used to query scribe
> for status updates.
> By default this parameter is set to 5.
**WindowSize** (default: 2048)
> Defines the maximum number of messages sent to scribe in one
> call. The WindowSize will be reduced when scribe is returning “try later” to
> reduce load on the scribe server. It will slowly rise again for each
> successful write until WindowSize is reached again.
> By default this parameter is set to 2048.
**ConnectionTimeoutSec** (default: 5, unit: sec)
> Defines the time in seconds after which a connection
> timeout is assumed. This can happen during writes or status reads.
> By default this parameter is set to 5.
**Category**
> Maps a stream to a scribe category. You can define the wildcard
> stream (*) here, too. When set, all streams that do not have a specific
> mapping will go to this category (including reserved streams like _GOLLUM_).
> If no category mappings are set the stream name is used as category.
> By default this parameter is set to an empty list.
###### Parameters (from core.BatchedProducer)[¶](#parameters-from-core-batchedproducer)
**Batch/MaxCount** (default: 8192)
> Defines the maximum number of messages per batch. If this
> limit is reached a flush is always triggered.
> By default this parameter is set to 8192.
**Batch/FlushCount** (default: 4096)
> Defines the minimum number of messages required to flush
> a batch. If this limit is reached a flush might be triggered.
> By default this parameter is set to 4096.
**Batch/TimeoutSec** (default: 5, unit: sec)
> Defines the maximum time in seconds messages can stay in
> the internal buffer before being flushed.
> By default this parameter is set to 5.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
```
logs:
  Type: producer.Scribe
  Streams: ["*", "_GOLLUM_"]
Address: "scribe01:1463"
HeartBeatIntervalSec: 10
Category:
"access" : "accesslogs"
"error" : "errorlogs"
"_GOLLUM_" : "gollumlogs"
```
##### Socket[¶](#socket)
The socket producer connects to a service over TCP, UDP or a UNIX domain socket.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Address**
> Defines the address to connect to. This can either be any ip
> address and port like “localhost:5880” or a file like “unix:///var/gollum.socket”.
> By default this parameter is set to “:5880”.
**ConnectionBufferSizeKB** (default: 1024, unit: kb)
> This value sets the connection buffer size in KB.
> By default this parameter is set to “1024”.
**Batch/MaxCount** (default: 8192)
> This value defines the maximum number of messages that can be buffered
> before a flush is mandatory. If the buffer is full and a flush is still
> underway or cannot be triggered for other reasons, the producer will block.
> By default this parameter is set to “8192”.
**Batch/FlushCount** (default: 4096)
> This value defines the number of messages to be buffered before they are
> written to disk. This setting is clamped to BatchMaxCount.
> By default this parameter is set to “Batch/MaxCount / 2”.
**Batch/TimeoutSec** (default: 5, unit: sec)
> This value defines the maximum number of seconds to wait after the last
> message arrived before a batch is flushed automatically.
> By default this parameter is set to “5”.
**Acknowledge**
> This value can be set to a non-empty value to expect the given string as a
> response from the server after a batch has been sent.
> If Acknowledge is enabled and an IP address is given to Address, TCP is used
> to open the connection, otherwise UDP is used.
> By default this parameter is set to “”.
**AckTimeoutMs** (default: 2000, unit: ms)
> This value defines the time in milliseconds to wait for a response from the
> server. After this timeout the send is marked as failed.
> By default this parameter is set to “2000”.
###### Parameters (from core.BufferedProducer)[¶](#parameters-from-core-bufferedproducer)
**Channel**
> This value defines the capacity of the message buffer.
> By default this parameter is set to “8192”.
**ChannelTimeoutMs** (default: 0, unit: ms)
> This value defines a timeout for each message
> before the message will discarded. To disable the timeout, set this
> parameter to 0.
> By default this parameter is set to “0”.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This example starts a socket producer on localhost port 5880:
```
SocketOut:
Type: producer.Socket
Address: ":5880"
  Batch:
MaxCount: 1024
FlushCount: 512
TimeoutSec: 3
AckTimeoutMs: 1000
```
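A hedged sketch that additionally waits for an acknowledgement; per the description above, combining an IP address with a non-empty Acknowledge string makes the producer use TCP. Host and stream names are made up:
```
SocketAckOut:
  Type: producer.Socket
  Streams: audit
  Address: "logsink01:5880"
  Acknowledge: "OK"
  AckTimeoutMs: 500
```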
##### Spooling[¶](#spooling)
This producer is meant to be used as a fallback if another producer fails to send messages, e.g. because a service is down. It does not really produce messages to some other service; it buffers them on disk for a certain time and inserts them back into the system after this period.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Path** (default: /var/run/gollum/spooling)
> Sets the output directory for spooling files. Spooling files will
> be stored as “<path>/<stream name>/<number>.spl”.
> By default this parameter is set to “/var/run/gollum/spooling”.
**MaxFileSizeMB** (default: 512, unit: mb)
> Sets the size limit in MB that causes a spool file rotation.
> Reading messages back into the system will start only after a file is
> rotated.
> By default this parameter is set to 512.
**MaxFileAgeMin** (default: 1, unit: min)
> Defines the duration in minutes after which a spool file
> rotation is triggered (regardless of MaxFileSizeMB). Reading messages back
> into the system will start only after a file is rotated.
> By default this parameter is set to 1.
**MaxMessagesSec**
> Sets the maximum number of messages that will be respooled
> per second. Setting this value to 0 will cause respooling to send as fast as
> possible.
> By default this parameter is set to 100.
**RespoolDelaySec** (default: 10, unit: sec)
> Defines the number of seconds to wait before trying to
> load existing spool files from disk after a restart. This setting can be used
> to define a safe timeframe for gollum to set up all required connections and
> resources before putting additional load on it.
> By default this parameter is set to 10.
**RevertStreamOnFallback** (default: false)
> This allows the spooling fallback to handle the
> messages that would have been sent back by the spooler if it would have
> handled the message. When set to true it will revert the stream of the
> message to the previous stream ID before sending it to the Fallback stream.
> By default this parameter is set to false.
**BufferSizeByte** (default: 8192)
> Defines the initial size of the buffer that is used to read
> messages from a spool file. If a message is larger than this size, the buffer
> will be resized.
> By default this parameter is set to 8192.
**Batch/MaxCount** (default: 100)
> defines the maximum number of messages stored in memory before
> a write to file is triggered.
> By default this parameter is set to 100.
**Batch/TimeoutSec** (default: 5, unit: sec)
> defines the maximum number of seconds to wait after the last
> message arrived before a batch is flushed automatically.
> By default this parameter is set to 5.
###### Parameters (from components.RotateConfig)[¶](#parameters-from-components-rotateconfig)
**Rotation/Enable** (default: false)
> If this value is set to “true” the logs will rotate after reaching certain thresholds.
> By default this parameter is set to “false”.
**Rotation/TimeoutMin** (default: 1440, unit: min)
> This value defines a timeout in minutes that will cause the logs to
> rotate. Can be set in parallel with RotateSizeMB.
> By default this parameter is set to “1440”.
**Rotation/SizeMB** (default: 1024, unit: mb)
> This value defines the maximum file size in MB that triggers a file rotate.
> Files can get bigger than this size.
> By default this parameter is set to “1024”.
**Rotation/Timestamp** (default: 2006-01-02_15)
> This value sets the timestamp added to the filename when file rotation
> is enabled. The format is based on Go’s time.Format function.
> By default this parameter is set to “2006-01-02_15”.
**Rotation/ZeroPadding** (default: 0)
> This value sets the number of leading zeros when rotating files with
> an existing name. Setting this setting to 0 won’t add zeros, every other
> number defines the number of leading zeros to be used.
> By default this parameter is set to “0”.
**Rotation/Compress** (default: false)
> This value defines if a rotated logfile is to be gzip compressed or not.
> By default this parameter is set to “false”.
**Rotation/At**
> This value defines a specific time for rotation in hh:mm format.
> By default this parameter is set to “”.
**Rotation/AtHour** (default: -1)
> (no documentation available)
**Rotation/AtMin** (default: -1)
> (no documentation available)
###### Parameters (from core.BufferedProducer)[¶](#parameters-from-core-bufferedproducer)
**Channel**
> This value defines the capacity of the message buffer.
> By default this parameter is set to “8192”.
**ChannelTimeoutMs** (default: 0, unit: ms)
> This value defines a timeout for each message
> before the message will discarded. To disable the timeout, set this
> parameter to 0.
> By default this parameter is set to “0”.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This example will collect messages from the fallback stream and buffer them for 10 minutes. After 10 minutes the first messages will be written back to the system as fast as possible.
```
spooling:
Type: producer.Spooling
  Streams: fallback
MaxMessagesSec: 0
MaxFileAgeMin: 10
```
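To actually feed the spooler, other producers have to route failed messages to its stream via FallbackStream. The following hedged sketch (producer and broker names are made up) would work together with the spooling example above:
```
kafkaOut:
  Type: producer.Kafka
  Streams: logs
  FallbackStream: fallback
  Servers:
    - "kafka01:9092"
```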
##### StatsdMetrics[¶](#statsdmetrics)
This producer samples the messages it receives and sends metrics about them to statsd.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Server**
> Defines the server and port to send statsd metrics to.
> By default this parameter is set to “localhost:8125”.
**Prefix**
> Defines a string that is prepended to every statsd metric name.
> By default this parameter is set to “gollum.”.
**StreamMapping**
> Defines a translation from gollum stream to statsd metric
> name. If no mapping is given the gollum stream name is used as the metric
> name.
> By default this parameter is set to an empty list.
**UseMessage** (default: false)
> Switch between just counting all messages arriving at this
> producer or summing up the message content. If UseMessage is set to true, the
> contents will be parsed as an integer, i.e. a string containing a human
> readable number is expected.
> By default the parameter is set to false.
**UseGauge** (default: false)
> When set to true the statsd data format will switch from counter
> to gauge. Every stream that does not receive any message but is listed in
> StreamMapping will have a gauge value of 0.
> By default this parameter is set to false.
**Batch/MaxMessages**
> Defines the maximum number of messages to collect per
> batch.
> By default this parameter is set to 500.
**Batch/TimeoutSec** (default: 10, unit: sec)
> Defines the number of seconds after which a batch is
> processed, regardless of MaxMessages being reached or not.
> By default this parameter is set to 10.
###### Parameters (from core.BufferedProducer)[¶](#parameters-from-core-bufferedproducer)
**Channel**
> This value defines the capacity of the message buffer.
> By default this parameter is set to “8192”.
**ChannelTimeoutMs** (default: 0, unit: ms)
> This value defines a timeout for each message
> before the message will discarded. To disable the timeout, set this
> parameter to 0.
> By default this parameter is set to “0”.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This example will collect all messages going through gollum and send metrics about the different data streams to statsd at least every 5 seconds.
Metrics will be sent as “logs.streamName”.
```
metricsCollector:
Type: producer.StatsdMetrics
  Streams: "*"
Server: "stats01:8125"
  Batch:
    TimeoutSec: 5
Prefix: "logs."
UseGauge: true
```
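A hedged sketch using StreamMapping to rename the metrics per stream; the stream and metric names are hypothetical:
```
requestMetrics:
  Type: producer.StatsdMetrics
  Streams:
    - accesslogs
    - errorlogs
  Server: "stats01:8125"
  Prefix: "gollum."
  StreamMapping:
    accesslogs: "http_access"
    errorlogs: "http_errors"
  UseGauge: true
```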
##### Websocket[¶](#websocket)
The websocket producer opens up a websocket.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Address** (default: :81)
> This value defines the host and port to bind to.
> This can be any IP address/DNS name and port like “localhost:5880”.
> By default this parameter is set to “:81”.
**Path** (default: /)
> This value defines the url path to listen for.
> By default this parameter is set to “/”.
**ReadTimeoutSec** (default: 3, unit: sec)
> This value specifies the maximum duration in seconds before timing out
> the read of the request.
> By default this parameter is set to “3” seconds.
**IgnoreOrigin** (default: false)
> Ignore origin check from websocket server.
> By default this parameter is set to “false”.
###### Parameters (from core.BufferedProducer)[¶](#parameters-from-core-bufferedproducer)
**Channel**
> This value defines the capacity of the message buffer.
> By default this parameter is set to “8192”.
**ChannelTimeoutMs** (default: 0, unit: ms)
> This value defines a timeout for each message
> before the message will discarded. To disable the timeout, set this
> parameter to 0.
> By default this parameter is set to “0”.
###### Parameters (from core.SimpleProducer)[¶](#parameters-from-core-simpleproducer)
**Streams**
> Defines a list of streams the producer will receive from. This
> parameter is mandatory. Specifying “*” causes the producer to receive messages
> from all streams except internal ones (e.g. _GOLLUM_).
> By default this parameter is set to an empty list.
**FallbackStream**
> Defines a stream to route messages to if delivery fails.
> The message is reset to its original state before being routed, i.e. all
> modifications done to the message after leaving the consumer are removed.
> Setting this parameter to “” will cause messages to be discarded when delivery
> fails.
**ShutdownTimeoutMs** (default: 1000, unit: ms)
> Defines the maximum time in milliseconds a producer is
> allowed to take to shut down. After this timeout the producer is always
> considered to have shut down. Decreasing this value may lead to lost
> messages during shutdown. Raising it may increase shutdown time.
**Modulators**
> Defines a list of modulators to be applied to a message when
> it arrives at this producer. If a modulator changes the stream of a message
> the message is NOT routed to this stream anymore.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This example starts a default Websocket producer on port 8080:
```
WebsocketOut:
Type: producer.Websocket
Address: ":8080"
```
#### Routers[¶](#routers)
Routers manage the transfer of messages between [consumers](index.html#document-src/plugins/consumer) and [producers](index.html#document-src/plugins/producer) by streams.
Routers can act as a kind of proxy that may filter and define the distribution algorithm of messages.
Streams can be referred to by cleartext names. These stream names are free to choose, but there are several reserved names for internal or special purposes:
| Stream | Description |
| --- | --- |
| _GOLLUM_ | is used for internal log messages |
| * | is a placeholder for “all routers but the internal routers”. In some cases “*” means “all routers” without exceptions. This is denoted in the corresponding documentation whenever this is the case. |
**Basic router setups:**
**List of available Routers:**
##### Broadcast[¶](#broadcast)
This router implements the default behavior of routing all messages to all producers registered to the configured stream.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
###### Parameters (from core.SimpleRouter)[¶](#parameters-from-core-simplerouter)
**Stream**
> This value specifies the name of the stream this plugin is supposed to
> read messages from.
**Filters**
> This value defines an optional list of Filter plugins to connect to
> this router.
**TimeoutMs** (default: 0, unit: ms)
> This value sets a timeout in milliseconds until a message should be
> handled by the router. You can disable this behavior by setting it to “0”.
> By default this parameter is set to “0”.
###### Examples[¶](#examples)
```
rateLimiter:
Type: router.Broadcast
Stream: errorlogs
Filters:
- filter.Rate:
MessagesPerSec: 200
```
##### Distribute[¶](#distribute)
The “Distribute” plugin provides 1:n stream remapping by duplicating messages.
During startup, it creates a set of streams with names listed in [TargetStreams]. During execution, it consumes messages from the stream [Stream] and enqueues copies of these messages onto each of the streams listed in [TargetStreams].
When routing to multiple routers, the incoming stream has to be listed explicitly to be used.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**TargetStreams**
> List of streams to route the incoming messages to.
###### Parameters (from core.SimpleRouter)[¶](#parameters-from-core-simplerouter)
**Stream**
> This value specifies the name of the stream this plugin is supposed to
> read messages from.
**Filters**
> This value defines an optional list of Filter plugins to connect to
> this router.
**TimeoutMs** (default: 0, unit: ms)
> This value sets a timeout in milliseconds until a message should be
> handled by the router. You can disable this behavior by setting it to “0”.
> By default this parameter is set to “0”.
###### Examples[¶](#examples)
This example routes incoming messages from streamA to streamB and streamC (duplication):
```
JunkRouterDist:
Type: router.Distribute
Stream: streamA
TargetStreams:
- streamB
- streamC
```
##### Metadata[¶](#metadata)
This router routes the message to a stream given in a specified metadata field. If the field is not set, the message will be passed along.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
**Key** (default: Stream)
> The metadata field to read from.
> By default this parameter is set to “Stream”
###### Parameters (from core.SimpleRouter)[¶](#parameters-from-core-simplerouter)
**Stream**
> This value specifies the name of the stream this plugin is supposed to
> read messages from.
**Filters**
> This value defines an optional list of Filter plugins to connect to
> this router.
**TimeoutMs** (default: 0, unit: ms)
> This value sets a timeout in milliseconds until a message should be
> handled by the router. You can disable this behavior by setting it to “0”.
> By default this parameter is set to “0”.
###### Examples[¶](#examples)
```
switchRoute:
Type: router.Metadata
Stream: errorlogs
Key: key
```
##### Random[¶](#random)
The “Random” router relays each message sent to the stream [Stream] to exactly one of the producers connected to [Stream]. The receiving producer is chosen randomly for each message.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
###### Parameters (from core.SimpleRouter)[¶](#parameters-from-core-simplerouter)
**Stream**
> This value specifies the name of the stream this plugin is supposed to
> read messages from.
**Filters**
> This value defines an optional list of Filter plugins to connect to
> this router.
**TimeoutMs** (default: 0, unit: ms)
> This value sets a timeout in milliseconds until a message should be
> handled by the router. You can disable this behavior by setting it to “0”.
> By default this parameter is set to “0”.
###### Examples[¶](#examples)
This example will randomly send messages to one of the two console producers.
```
randomRouter:
Type: router.Random
Stream: randomStream
```
```
JunkPrinter00:
Type: producer.Console
Streams: randomStream
Modulators:
- format.Envelope:
Prefix: "[junk_00] "
```
```
JunkPrinter01:
Type: producer.Console
Streams: randomStream
Modulators:
- format.Envelope:
Prefix: "[junk_01] "
```
##### RoundRobin[¶](#roundrobin)
This router implements round robin routing. Messages are routed to exactly one of the producers registered to the given stream. The producer is switched in a round robin fashion after each message.
This router can be useful for load balancing, e.g. when the target service does not support sharding by itself.
###### Parameters[¶](#parameters)
**Enable** (default: true)
> Switches this plugin on or off.
###### Parameters (from core.SimpleRouter)[¶](#parameters-from-core-simplerouter)
**Stream**
> This value specifies the name of the stream this plugin is supposed to
> read messages from.
**Filters**
> This value defines an optional list of Filter plugins to connect to
> this router.
**TimeoutMs** (default: 0, unit: ms)
> This value sets a timeout in milliseconds until a message should
> be handled by the router. You can disable this behavior by setting it to “0”.
> By default this parameter is set to “0”.
###### Examples[¶](#examples)
This example will send messages to the two console producers in an alternating fashion.
```
loadBalancer:
Type: router.RoundRobin
Stream: logs
```
```
JunkPrinter00:
Type: producer.Console
Streams: randomStream
Modulators:
- format.Envelope:
Prefix: "[junk_00] "
```
```
JunkPrinter01:
Type: producer.Console
Streams: randomStream
Modulators:
- format.Envelope:
Prefix: "[junk_01] "
```
#### Filters[¶](#filters)
Filters are plugins that are embedded into [router plugins](index.html#document-src/plugins/router).
Filters can analyze messages and decide whether to let them pass to a [producer](index.html#document-src/plugins/producer) or to block them.
##### Any[¶](#any)
This plugin takes a list of filters and applies each of them to incoming messages until an accepting filter is found. If any of the listed filters accepts the message, it is passed through; otherwise, the message is dropped.
###### Parameters[¶](#parameters)
**AnyFilters**
> Defines a list of filters that should be checked before filtering
> a message. Filters are checked in order, and if the message passes
> then no further filters are checked.
###### Parameters (from core.SimpleFilter)[¶](#parameters-from-core-simplefilter)
**FilteredStream**
> This value defines the stream filtered messages get sent to.
> You can disable this behavior by setting the value to “”.
> By default this parameter is set to “”.
###### Examples[¶](#examples)
This example will accept valid JSON or messages from “exceptionStream”:
```
ExampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- filter.Any:
AnyFilters:
- filter.JSON
- filter.Stream:
Only: exceptionStream
```
##### None[¶](#none)
This filter blocks all messages.
###### Parameters (from core.SimpleFilter)[¶](#parameters-from-core-simplefilter)
**FilteredStream**
> This value defines the stream filtered messages get sent to.
> You can disable this behavior by setting the value to “”.
> By default this parameter is set to “”.
###### Examples[¶](#examples)
This example starts a Console consumer and blocks all incoming messages:
```
exampleConsumer:
Type: consumer.Console
Streams: console
Modulators:
- filter.None
```
##### Rate[¶](#rate)
This plugin blocks messages after a certain number of messages per second has been reached.
###### Parameters[¶](#parameters)
**MessagesPerSec** (default: 100)
> This value defines the maximum number of messages per second allowed
> to pass through this filter.
> By default this parameter is set to “100”.
**Ignore**
> Defines a list of streams that should not be affected by
> rate limiting. This is useful for e.g. producers listening to “*”.
> By default this parameter is set to “empty”.
###### Parameters (from core.SimpleFilter)[¶](#parameters-from-core-simplefilter)
**FilteredStream**
> This value defines the stream filtered messages get sent to.
> You can disable this behavior by setting the value to “”.
> By default this parameter is set to “”.
###### Examples[¶](#examples)
This example accepts ~10 messages per second, except for the “noLimit” stream:
```
ExampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- filter.Rate:
MessagesPerSec: 10
Ignore:
- noLimit
```
##### RegExp[¶](#regexp)
This filter rejects or accepts messages based on regular expressions.
###### Parameters[¶](#parameters)
**Expression**
> Messages matching this expression are passed on.
> This parameter is ignored when set to “”. Expression is checked
> after ExpressionNot.
> By default this parameter is set to “”.
**ExpressionNot**
> Messages *not* matching this expression are
> passed on. This parameter is ignored when set to “”. ExpressionNot
> is checked before Expression.
> By default this parameter is set to “”.
**ApplyTo**
> Defines which part of the message the filter is applied to.
> When set to “”, this filter is applied to the message’s payload. All
> other values denotes a metadata key.
> By default this parameter is set to “”.
###### Parameters (from core.SimpleFilter)[¶](#parameters-from-core-simplefilter)
**FilteredStream**
> This value defines the stream filtered messages get sent to.
> You can disable this behavior by setting the value to “”.
> By default this parameter is set to “”.
###### Examples[¶](#examples)
This example accepts only accesslog entries with a return status of 2xx or 3xx that did not originate from staging systems.
```
ExampleConsumer:
Type: consumer.Console
Streams: console
Modulators:
- filter.RegExp:
ExpressionNot: " stage\\."
Expression: "HTTP/1\\.1\\\" [23]\\d\\d"
```
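To illustrate with invented accesslog lines: entries with a 2xx/3xx status pass, while entries with other status codes or entries mentioning a staging host are filtered:
```
GET /index.html HTTP/1.1" 200 512                    -> passes
GET /index.html HTTP/1.1" 500 0                      -> filtered (Expression does not match)
GET /index.html HTTP/1.1" 200 512  stage.example.com -> filtered (ExpressionNot matches " stage.")
```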
##### Sample[¶](#sample)
This plugin can be used to get n out of m messages (downsample).
This allows you to reduce the amount of messages; the plugin starts blocking after a certain number of messages has been reached.
###### Parameters[¶](#parameters)
**SampleRatePerGroup** (default: 1)
> This value defines how many messages are passed through
> the filter in each group.
> By default this parameter is set to “1”.
**SampleGroupSize** (default: 2)
> This value defines how many messages make up a group. Messages over
> SampleRatePerGroup within a group are filtered.
> By default this parameter is set to “2”.
**SampleRateIgnore**
> This value defines a list of streams that should not be affected by
> sampling. This is useful for e.g. producers listening to “*”.
> By default this parameter is set to an empty list.
###### Examples[¶](#examples)
This example will block 8 out of every 10 messages:
```
exampleConsumer:
  Type: consumer.Console
  Streams: "*"
  Modulators:
    - filter.Sample:
        SampleRatePerGroup: 2
        SampleGroupSize: 10
        SampleRateIgnore:
          - foo
          - bar
```
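In other words, within each group of 10 consecutive messages only the first 2 pass; using invented message numbers:
```
msg 01, msg 02  -> pass
msg 03 .. msg 10 -> filtered
msg 11, msg 12  -> pass
...
```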
##### Stream[¶](#stream)
The “Stream” filter filters messages by applying black and white lists to the names of the messages’ streams.
The blacklist is applied first; messages not rejected by the blacklist are checked against the whitelist. An empty white list matches all streams.
###### Parameters[¶](#parameters)
**Block**
> Defines a list of stream names that are blocked. If a message’s
> stream is not in that list, the “Only” list is tested. By default this
> parameter is empty.
**Only**
> Defines a list of streams that may pass. Messages from streams
> that are not in this list are blocked unless the list is empty.
> By default this parameter is empty.
###### Parameters (from core.SimpleFilter)[¶](#parameters-from-core-simplefilter)
**FilteredStream**
> This value defines the stream filtered messages get sent to.
> You can disable this behavior by setting the value to “”.
> By default this parameter is set to “”.
###### Examples[¶](#examples)
This example accepts ALL messages except ones from stream “foo”:
```
ExampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- filter.Stream:
Block:
- foo
```
This example only accepts messages from stream “foo”:
```
ExampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- filter.Stream:
Only:
- foo
```
#### Formatters[¶](#formatters)
Formatters are plugins that are embedded into [routers](index.html#document-src/plugins/router) or [producers](index.html#document-src/plugins/producer).
Formatters can convert messages into another format or append additional information.
##### Agent[¶](#agent)
This formatter parses a user agent string and outputs it as metadata fields to the set target.
###### Parameters[¶](#parameters)
**Fields**
> An array of the fields to extract from the user agent.
> Available fields are: “mozilla”, “platform”, “os”, “localization”, “engine”,
> “engine-version”, “browser”, “browser-version”, “bot”, “mobile”.
> By default this is set to [“platform”,”os”,”localization”,”browser”].
**Prefix**
> Defines a prefix for each of the keys generated.
> By default this is set to “”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
```
exampleConsumer:
  Type: consumer.Console
  Streams: stdin
  Modulators:
    - format.Agent:
        Source: user_agent
```
##### Aggregate[¶](#aggregate)
Aggregate is a formatter which can group further formatters.
The Source setting will be passed on to all child formatters, overwriting any source value there (if set).
This plugin can be useful to set up complex configs with metadata handling in a more readable format.
###### Parameters[¶](#parameters)
**Source**
> This value chooses the part of the message that should be
> formatted. Use “” to use the message payload; other values specify the
> name of a metadata field to use.
> This value is forced upon all child modulators.
> By default this parameter is set to “”.
**Modulators**
> Defines a list of child modulators to be applied to a message
> when it arrives at this formatter. Please note that everything is still one
> message. I.e. applying filters twice might not make sense.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example shows a useful case for the format.Aggregate plugin:
```
exampleConsumerA:
Type: consumer.Console
Streams: "foo"
Modulators:
- format.Aggregate:
Target: bar
Modulators:
- format.Copy
- format.Envelope:
Postfix: "\n"
- format.Aggregate:
Target: foo
Modulators:
- format.Copy
- format.Base64Encode
- format.Double
- format.Envelope:
Postfix: "\n"
```
```
# same config as exampleConsumerA
exampleConsumerB:
Type: consumer.Console
Streams: "bar"
Modulators:
- format.Copy:
Target: bar
- format.Envelope:
Target: bar
Postfix: "\n"
- format.Copy:
Target: foo
- format.Base64Encode:
Target: foo
- format.Double:
Target: foo
- format.Envelope:
Postfix: "\n"
Target: foo
```
##### Base64Decode[¶](#base64decode)
Base64Decode is a formatter that decodes base64 encoded messages.
If a message is not or only partly base64 encoded an error will be logged and the decoded part is returned. RFC 4648 is expected.
###### Parameters[¶](#parameters)
**Base64Dictionary**
> This value defines the 64-character base64 lookup
> dictionary to use. When left empty, a dictionary as defined by RFC4648 is used.
> By default this parameter is set to “”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example expects base64 strings from the console and decodes them before transmitting the message payload.
```
exampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- format.Base64Decode
```
##### Base64Encode[¶](#base64encode)
Base64Encode is a formatter that encodes messages as base64 strings. Custom dictionaries are supported; by default the RFC 4648 standard encoding is used.
###### Parameters[¶](#parameters)
**Base64Dictionary**
> Defines the 64-character base64 lookup dictionary to use.
> When left empty a RFC 4648 standard encoding is used.
> By default this parameter is set to “”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example uses RFC 4648 URL encoding to format incoming data.
```
ExampleConsumer:
  Type: consumer.Console
  Streams: console
  Modulators:
    - format.Base64Encode:
        Base64Dictionary: "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"
```
##### Cast[¶](#cast)
This formatter casts a given metadata field into another type.
###### Parameters[¶](#parameters)
**ToType** (default: string)
> The type to cast to. Can be either string, bytes, float or int.
> By default this parameter is set to “string”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example casts the key “bar” to string.
```
exampleConsumer:
  Type: consumer.Console
  Streams: stdin
  Modulators:
    - format.Cast:
        ApplyTo: bar
        ToType: "string"
```
##### ConvertTime[¶](#converttime)
This formatter converts one time format into another.
###### Parameters[¶](#parameters)
**FromFormat**
> Defines the format of the incoming timestamp. When left empty, a unix time is
> expected. Otherwise a go compatible timestamp has to be given.
> See <https://golang.org/pkg/time/#pkg-constants>.
> By default this is set to “”.
**ToFormat**
> Defines the format of the outgoing timestamp. When left empty, the output will
> be unix time. Otherwise a go compatible timestamp has to be given.
> See <https://golang.org/pkg/time/#pkg-constants>.
> By default this is set to “”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example applies format.ConvertTime to messages produced by consumer.Console; with both formats left empty, unix time is used for input and output.
```
exampleConsumer:
Type: consumer.Console
Streams: stdin
Modulators:
- format.ConvertTime:
FromFormat: ""
ToFormat: ""
```
##### Copy[¶](#copy)
This formatter sets metadata fields by copying data from the message’s payload or from other metadata fields.
###### Parameters[¶](#parameters)
**Source**
> Defines the key to copy, i.e. the “source” of a copy operation.
> Target will define the target of the copy, i.e. the “destination”.
> An empty string will use the message payload as source.
> By default this parameter is set to an empty string (i.e. payload).
**Mode**
> Defines the copy mode to use. This can be one of “append”,
> “prepend” or “replace”.
> By default this parameter is set to “replace”.
**Separator**
> When using mode prepend or append, defines the characters
> inserted between source and destination.
> By default this parameter is set to an empty string.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example copies the payload to the metadata field “key” and then applies a hash to that field, so that “key” contains a hash over the complete payload.
```
exampleConsumer:
  Type: consumer.Console
  Streams: "*"
  Modulators:
    - format.Copy:
        Target: key
    - format.Identifier:
        Generator: hash
        Target: key
```
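The “Mode” and “Separator” parameters can also be used to merge fields. The following sketch assumes a metadata field “host” has been set by an earlier modulator; it appends that field’s value to the payload, separated by “ @ ”:
```
exampleProducer:
  Type: producer.Console
  Streams: "*"
  Modulators:
    - format.Copy:
        Source: host
        Mode: append
        Separator: " @ "
```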
##### Delete[¶](#delete)
This formatter erases the message payload or deletes a metadata key.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example removes the “pipe” key from the metadata produced by consumer.Console.
```
exampleConsumer:
  Type: consumer.Console
  Streams: stdin
  Modulators:
    - format.Delete:
        Target: pipe
```
##### Double[¶](#double)
Double is a formatter that duplicates a message and applies two different sets of formatters to both sides. After both messages have been processed,
the value of the field defined as “source” by the double formatter will be copied from both copies and merged into the “target” field of the original message using a given separator.
###### Parameters[¶](#parameters)
**Separator** (default: :)
> This value sets the separator string placed between both parts.
> This parameter is set to “:” by default.
**UseLeftStreamID** (default: false)
> When set to “true”, use the stream id of the left side
> (after formatting) as the streamID for the resulting message.
> This parameter is set to “false” by default.
**Left**
> An optional list of formatters. The first copy of the message (left
> of the delimiter) is passed through these filters.
> This parameter is set to an empty list by default.
**Right**
> An optional list of formatters. The second copy of the message (right
> of the delimiter) is passed through these filters.
> This parameter is set to an empty list by default.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example creates a message of the form “<orig>|<hash>”, where <orig> is the original console input and <hash> its hash.
```
exampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- format.Double:
Separator: "|"
Right:
- format.Identifier:
Generator: hash
```
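With this configuration a console input of “hello” (an invented sample) would roughly produce the following; the concrete hash value is only a placeholder:
```
hello|<fnv1a hash of "hello" as hex>
```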
##### Envelope[¶](#envelope)
This formatter adds content to the beginning and/or end of a message.
###### Parameters[¶](#parameters)
**Prefix**
> Defines a string that is added to the front of the message.
> Special characters like \n, \r or \t can be used without additional escaping.
> By default this parameter is set to “”.
**Postfix** (default: \n)
> Defines a string that is added to the end of the message.
> Special characters like \n, \r or \t can be used without additional escaping.
> By default this parameter is set to “\n”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example adds a line number and a newline character to each message printed to the console.
```
exampleProducer:
Type: producer.Console
Streams: "*"
Modulators:
- format.Sequence
- format.Envelope
```
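A more direct sketch of the Prefix and Postfix parameters (producer name and stream are placeholders) wraps each payload in brackets and terminates it with a newline:
```
exampleProducer:
  Type: producer.Console
  Streams: "*"
  Modulators:
    - format.Envelope:
        Prefix: "["
        Postfix: "]\n"
```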
##### Flatten[¶](#flatten)
This formatter takes a metadata tree and moves all subkeys to the same level as the root of the tree. Fields will be named according to their hierarchy, joining all keys in the path with a given separator.
###### Parameters[¶](#parameters)
**Separator** (default: .)
> Defines the separator used when joining keys.
> By default this parameter is set to “.”
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This will flatten all elements below the key “tree” on the root level.
A key /tree/a/b will become /tree.a.b
```
ExampleConsumer:
Type: consumer.Console
Streams: console
Modulators:
- format.Flatten:
Source: tree
```
##### GeoIP[¶](#geoip)
This formatter parses an IP and outputs its geo information as metadata fields to the set target.
###### Parameters[¶](#parameters)
**GeoIPFile**
> Defines a GeoIP file to load. This setting is mandatory. Files
> can be found e.g. at <http://dev.maxmind.com/geoip/geoip2/geolite2/>.
> By default this parameter is set to “”.
**Fields**
> An array of the fields to extract from the GeoIP.
> Available fields are: “city”, “country-code”, “country”, “continent-code”,
> “continent”, “timezone”, “proxy”, “satellite”, “location”, “location-hash”
> By default this is set to [“city”,”country”,”continent”,”location-hash”].
**Prefix**
> Defines a prefix for each of the keys generated.
> By default this is set to “”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
```
exampleConsumer:
  Type: consumer.Console
  Streams: stdin
  Modulators:
    - format.GeoIP:
        Source: client-ip
```
##### Grok[¶](#grok)
Grok is a formatter that applies regex filters to messages and stores the result as metadata fields. If the target key does not exist, it will be created. If the target key exists but is not a map, it will be replaced.
It works by combining text patterns into something that matches your logs.
See <https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics>
for more information about Grok.
###### Parameters[¶](#parameters)
**RemoveEmptyValues**
> When set to true, empty captures will not be returned.
> By default this parameter is set to “true”.
**NamedCapturesOnly**
> When set to true, only named captures will be returned.
> By default this parameter is set to “true”.
**SkipDefaultPatterns**
> When set to true, standard grok patterns will not be
> included in the list of patterns.
> By default this parameter is set to “true”.
**Patterns**
> A list of grok patterns that will be applied to messages.
> The first matching pattern will be used to parse the message.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example transforms unstructured input into a structured json output.
Input:
```
us-west.servicename.webserver0.this.is.the.measurement 12.0 1497003802
```
Output:
```
{
"datacenter": "us-west",
"service": "servicename",
"host": "webserver0",
"measurement": "this.is.the.measurement",
"value": "12.0",
"time": "1497003802"
}
```
Config:
```
exampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- format.Grok:
Patterns:
- ^(?P<datacenter>[^\.]+?)\.(?P<service>[^\.]+?)\.(?P<host>[^\.]+?)\.statsd\.gauge-(?P<application>[^\.]+?)\.(?P<measurement>[^\s]+?)\s%{NUMBER:value_gauge:float}\s*%{INT:time}
- ^(?P<datacenter>[^\.]+?)\.(?P<service>[^\.]+?)\.(?P<host>[^\.]+?)\.statsd\.latency-(?P<application>[^\.]+?)\.(?P<measurement>[^\s]+?)\s%{NUMBER:value_latency:float}\s*%{INT:time}
- ^(?P<datacenter>[^\.]+?)\.(?P<service>[^\.]+?)\.(?P<host>[^\.]+?)\.statsd\.derive-(?P<application>[^\.]+?)\.(?P<measurement>[^\s]+?)\s%{NUMBER:value_derive:float}\s*%{INT:time}
- ^(?P<datacenter>[^\.]+?)\.(?P<service>[^\.]+?)\.(?P<host>[^\.]+?)\.(?P<measurement>[^\s]+?)\s%{NUMBER:value:float}\s*%{INT:time}
- format.ToJSON: {}
```
##### Hostname[¶](#hostname)
This formatter prefixes the message or metadata with the hostname of the machine gollum is running on.
###### Parameters[¶](#parameters)
**Separator** (default: :)
> Defines the separator string placed between hostname and data.
> By default this parameter is set to “:”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example inserts the hostname into an existing JSON payload.
```
exampleProducer:
  Type: producer.Console
  Streams: "*"
  Modulators:
    - format.Trim:
        LeftSeparator: "{"
        RightSeparator: "}"
    - format.Hostname:
        Separator: ","
    - format.Envelope:
        Prefix: "{\"host\":"
        Postfix: "}"
```
##### Identifier[¶](#identifier)
This formatter generates a (mostly) unique 64 bit identifier number from the message payload, timestamp and/or sequence number. The number is converted to a human readable form.
###### Parameters[¶](#parameters)
**Generator**
> Defines which algorithm to use when generating the identifier.
> This may be one of the following values.
> By default this parameter is set to “time”
> **hash**
> > > The message payload will be hashed using fnv1a and returned as hex.
> **time**
> > > The id will be formatted YYMMDDHHmmSSxxxxxxx where x denotes the
> > current sequence number modulo 10000000. I.e. 10.000.000 messages per second
> > are possible before a collision occurs.
> **seq**
> > > The sequence number will be used.
> **seqhex**
> > > The hex encoded sequence number will be used.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example will generate a payload checksum and store it to a metadata field called “checksum”.
```
ExampleConsumer:
  Type: consumer.Console
  Streams: console
  Modulators:
    - format.Identifier:
        Generator: hash
        Target: checksum
```
##### JSON[¶](#json)
This formatter parses json data into metadata.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example parses the payload as JSON and stores it below the key
“data”.
```
exampleConsumer:
  Type: consumer.Console
  Streams: stdin
  Modulators:
    - format.JSON:
        Target: data
```
##### Move[¶](#move)
This formatter moves data from one location to another. When targeting a metadata key, the target key will be created or overwritten. When the source is the payload, it will be cleared.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example moves the payload produced by consumer.Console to the metadata key data.
```
exampleConsumer:
  Type: consumer.Console
  Streams: stdin
  Modulators:
    - format.Move:
        Target: data
```
##### Override[¶](#override)
This formatter sets a given value to a metadata field or payload.
###### Parameters[¶](#parameters)
**Value**
> (no documentation available)
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example sets the value “foo” on the key “bar”.
```
exampleConsumer:
  Type: consumer.Console
  Streams: stdin
  Modulators:
    - format.Override:
        Target: bar
        Value: "foo"
```
##### RegExp[¶](#regexp)
This formatter parses a message using a regular expression, performs string (template) replacement and returns the result.
###### Parameters[¶](#parameters)
**Posix**
> Set to true to compile the regular expression using posix semantics.
> By default this parameter is set to true.
**Expression**
> Defines the regular expression used for parsing.
> For details on the regexp syntax see <https://golang.org/pkg/regexp/syntax>.
> By default this parameter is set to “(.*)”
**Template** (default: ${1})
> Defines the result string. Regexp matching groups can be referred
> to using “${n}”, with n being the group’s index. For other possible
> reference semantics, see <https://golang.org/pkg/regexp/#Regexp.Expand>.
> By default this parameter is set to “${1}”
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example extracts time and host from an imaginary log message format.
```
exampleConsumer:
  Type: consumer.Console
  Streams: stdin
  Modulators:
    - format.RegExp:
        Expression: "^(\\d+) (\\w+): "
        Template: "time: ${1}, host: ${2}"
```
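Assuming an invented log line of the form “1497003802 web01: GET /index.html”, the expression above captures “1497003802” and “web01” and the template rewrites the message to:
```
time: 1497003802, host: web01
```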
##### Replace[¶](#replace)
This formatter replaces all occurrences of a string with another.
###### Parameters[¶](#parameters)
**Search**
> Defines the string to search for. When left empty, the target will
> be completely replaced by ReplaceWith.
> By default this is set to “”.
**ReplaceWith**
> Defines the string to replace all occurrences of “Search” with.
> By default this is set to “”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
```
ExampleConsumer:
Type: consumer.Console
Streams: console
Modulators:
- format.Replace:
Search: "foo"
ReplaceWith: "bar"
```
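With this configuration an invented payload such as “foo fighters” is rewritten as:
```
bar fighters
```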
##### Runlength[¶](#runlength)
Runlength is a formatter that prepends the length of the message, followed by a “:”. The actual message is formatted by a nested formatter.
###### Parameters[¶](#parameters)
**Separator** (default: :)
> This value is used as separator.
> By default this parameter is set to “:”.
**StoreRunlengthOnly** (default: false)
> If this value is set to “true” only the runlength will be
> stored. This option is useful to e.g. create metadata fields only containing
> the length of the payload. When set to “true” the Separator parameter will
> be ignored.
> By default this parameter is set to false.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example will store the length of the payload in a separate metadata field.
```
exampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- format.MetadataCopy:
CopyToKeys: ["length"]
- format.Runlength:
Target: length
StoreRunlengthOnly: true
```
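With default settings (Separator “:”, StoreRunlengthOnly disabled) the formatter simply prepends the payload length; e.g. an invented payload “hello” becomes:
```
5:hello
```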
##### Sequence[¶](#sequence)
This formatter prefixes data with a sequence number managed by the formatter. All messages passing through an instance of the formatter will get a unique number. The number is not persisted,
i.e. it restarts at 0 after each restart of gollum.
###### Parameters[¶](#parameters)
**Separator** (default: :)
> Defines the separator string placed between number and data.
> By default this parameter is set to “:”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example will insert the sequence number into an existing JSON payload.
```
exampleProducer:
  Type: producer.Console
  Streams: "*"
  Modulators:
    - format.Trim:
        LeftSeparator: "{"
        RightSeparator: "}"
    - format.Sequence:
        Separator: ","
    - format.Envelope:
        Prefix: "{\"seq\":"
        Postfix: "}"
```
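With only the default separator “:” and no other modulators, consecutive messages are simply numbered; e.g. invented payloads “foo” and “bar” become:
```
0:foo
1:bar
```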
##### Split[¶](#split)
This formatter splits data into an array by using the given delimiter and stores it at the metadata key denoted by Target. Targeting the payload (by not giving a target or passing an empty string) will result in an error.
###### Parameters[¶](#parameters)
**Delimiter** (default: ,)
> Defines the delimiter to use when splitting the data.
> By default this parameter is set to “,”
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
```
ExampleConsumer:
Type: consumer.Console
Streams: console
Modulators:
- format.Split:
Target: values
Delimiter: ":"
```
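With this configuration an invented payload of “a:b:c” is split and stored under the metadata key “values” as:
```
values: ["a", "b", "c"]
```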
##### SplitPick[¶](#splitpick)
This formatter splits data into an array by using the given delimiter and extracts the given index from that array. The value of that index will be written back.
###### Parameters[¶](#parameters)
**Delimiter** (default: ,)
> Defines the delimiter to use when splitting the data.
> By default this parameter is set to “,”
**Index** (default: 0)
> Defines the index to pick.
> By default this parameter is set to 0.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
```
ExampleConsumer:
Type: consumer.Console
Streams: console
Modulators:
- format.SplitPick:
Index: 2
Delimiter: ","
```
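With Delimiter “,” and Index 2 (assuming zero-based indexing), an invented payload of “alpha,beta,gamma” is reduced to its third element:
```
gamma
```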
##### SplitToFields[¶](#splittofields)
This formatter splits data into an array by using the given delimiter and stores it at the metadata key denoted by Fields.
###### Parameters[¶](#parameters)
**Delimiter** (default: ,)
> Defines the delimiter to use when splitting the data.
> By default this parameter is set to “,”
**Fields**
> Defines an index-to-key mapping for storing the resulting list into
> Metadata. If there are less entries in the resulting array than fields, the
> remaining fields will not be set. If there are more entries, the additional
> indexes will not be handled.
> By default this parameter is set to an empty list.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example will split the payload by “:” and write up to three elements as the fields “first”, “second” and “third” below the field “values”.
```
ExampleProducer:
  Type: producer.Console
  Streams: console
  Modulators:
    - format.SplitToFields:
        Target: values
        Delimiter: ":"
        Fields: [first,second,third]
```
##### StreamName[¶](#streamname)
This formatter prefixes data with the name of the current or previous stream.
###### Parameters[¶](#parameters)
**UsePrevious**
> Set to true to use the name of the previous stream.
> By default this parameter is set to false.
**Separator** (default: :)
> Defines the separator string used between stream name and data.
> By default this parameter is set to “:”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example prefixes the message with the most recent routing history.
```
exampleProducer:
Type: producer.Console
Streams: "*"
Modulators:
- format.StreamName:
Separator: ", "
UsePrevious: true
- format.StreamName:
Separator: ": "
```
##### StreamRevert[¶](#streamrevert)
This formatter gets the previously used stream from a message and sets it as the new target stream.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
```
ExampleConsumer:
Type: consumer.Console
Streams: console
Modulators:
- format.StreamRevert
```
##### StreamRoute[¶](#streamroute)
StreamRoute is a formatter that modifies a message’s stream by reading a prefix from the message’s data (and discarding it).
The prefix is defined as everything before a given delimiter in the message. If no delimiter is found or the prefix is empty the message stream is not changed.
###### Parameters[¶](#parameters)
**Delimiter** (default: :)
> This value defines the delimiter to search when extracting the stream name.
> By default this parameter is set to “:”.
**StreamModulator**
> A list of further modulators to format and filter the extracted stream name.
> By default this parameter is “empty”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example sets the stream of messages like “<error>:a message string” to “error” and keeps “a message string” as the payload:
```
exampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- format.StreamRoute:
Delimiter: ":"
StreamModulator:
- format.Trim:
LeftSeparator: <
RightSeparator: >
```
##### Template[¶](#template)
This formatter allows applying go templating to a message based on the currently set metadata. The template language is described in the go documentation: <https://golang.org/pkg/text/template/#hdr-Actions>
###### Parameters[¶](#parameters)
**Template**
> Defines the go template to apply.
> By default this parameter is set to “”.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example writes the fields “Name” and “Surname” from metadata as the new payload.
```
exampleProducer:
Type: producer.Console
Streams: "*"
Modulators:
- format.Template:
Template: "{{.Name}} {{.Surname}}"
```
##### Timestamp[¶](#timestamp)
Timestamp is a formatter that allows prefixing messages with a timestamp
(time of arrival at gollum). The timestamp format is freely configurable and can e.g. contain a delimiter sequence at the end.
###### Parameters[¶](#parameters)
**Timestamp** (default: 2006-01-02 15:04:05 MST | )
> This value defines a Go time format string that is used to format the timestamp.
> By default this parameter is set to “2006-01-02 15:04:05 MST | “.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example will write a time string to the metadata field `time`:
```
exampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- format.Timestamp:
Timestamp: "2006-01-02T15:04:05.000 MST"
Target: time
```
##### ToCSV[¶](#tocsv)
ToCSV converts a set of metadata keys to CSV and applies it to Target.
###### Parameters[¶](#parameters)
**Keys**
> List of strings specifying the keys to write as CSV.
> Note that these keys can be paths.
> By default this parameter is set to an empty list.
**Separator** (default: ,)
> The delimiter string to insert between each value in the generated
> string. By default this parameter is set to “,”.
**KeepLastSeparator**
> When set to true, the last separator will not be removed.
> By default this parameter is set to false.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example gets the foo and bar keys from the metadata of a message and sets them as the new payload.
```
exampleProducer:
Type: producer.Console
Streams: "*"
Modulators:
- format.ToCSV:
Separator: ';'
Keys:
- 'foo'
- 'bar'
```
##### ToJSON[¶](#tojson)
This formatter converts metadata to JSON and stores it where applied.
###### Parameters[¶](#parameters)
**Root**
> The metadata key to transform to json. When left empty, all
> metadata is assumed. By default this is set to ‘’.
**Ignore**
> A list of keys or paths to exclude from marshalling.
> Please note that this is currently a quite expensive operation, as
> all metadata below root is cloned during the process.
> By default this is set to an empty list.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example transforms all metadata below the “foo” key to JSON and stores the result as the new payload.
```
exampleProducer:
Type: producer.Console
Streams: stdin
Modulators:
- format.ToJSON:
Root: "foo"
```
##### Trim[¶](#trim)
Trim removes a set of characters from the beginning and end of a metadata value or the payload.
###### Parameters[¶](#parameters)
**Characters** (default: \t\r\n\v\f)
> This value defines which characters should be removed from
> both ends of the data. The data to operate on is expected to be a string.
> By default this is set to " \t\r\n\v\f".
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example will trim spaces from the message payload:
```
exampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- format.Trim: {}
```
##### TrimToBounds[¶](#trimtobounds)
This formatter searches for separator strings and removes all data left of the left separator and right of the right separator.
###### Parameters[¶](#parameters)
**LeftBounds**
> The string to search for. Searching starts from the left
> side of the data. If an empty string is given this parameter is ignored.
> By default this parameter is set to “”.
**RightBounds**
> The string to search for. Searching starts from the right
> side of the data. If an empty string is given this parameter is ignored.
> By default this parameter is set to “”.
**LeftOffset** (default: 0)
> Defines the search start index when using LeftBounds.
> By default this parameter is set to 0.
**RightOffset** (default: 0)
> Defines the search start index when using RightBounds.
> Counting starts from the right side of the message.
> By default this parameter is set to 0.
###### Parameters (from core.SimpleFormatter)[¶](#parameters-from-core-simpleformatter)
**Source**
> This value chooses the part of the message the data to be formatted
> should be read from. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**Target**
> This value chooses the part of the message the formatted data
> should be stored to. Use “” to target the message payload; other values
> specify the name of a metadata field to target.
> By default this parameter is set to “”.
**ApplyTo**
> Use this to set Source and Target to the same value. This setting
> will be ignored if either Source or Target is set to something else but “”.
> By default this parameter is set to “”.
**SkipIfEmpty**
> When set to true, this formatter will not be applied to data
> that is empty or - in case of metadata - not existing.
> By default this parameter is set to false
###### Examples[¶](#examples)
This example will reduce data like “foo[bar[foo]bar]foo” to “bar[foo]bar”.
```
exampleConsumer:
Type: consumer.Console
Streams: "*"
Modulators:
- format.TrimToBounds:
LeftBounds: "["
RightBounds: "]"
```
#### Aggregate plugins[¶](#aggregate-plugins)
To simplify complex pipeline configs you are able to aggregate plugin configurations.
That means that all settings which are defined in the aggregation scope will be injected into each defined "sub-plugin".
To define an aggregation use the keyword **Aggregate** as plugin type.
##### Parameters[¶](#parameters)
**Plugins**
> List of plugins which will be instantiated and get the aggregate settings injected.
##### Examples[¶](#examples)
In this example both consumers get the streams and modulators injected from the aggregation settings:
```
AggregatePipeline:
Type: Aggregate
Streams: console
Modulators:
- format.Envelope:
Postfix: "\n"
Plugins:
consumerFoo:
Type: consumer.File
File: /tmp/foo.log
consumerBar:
Type: consumer.File
File: /tmp/bar.log
```
This example shows a second use case that makes it easier to reuse server settings:
```
consumerConsole:
Type: consumer.Console
Streams: write
kafka:
Type: Aggregate
Servers:
- kafka0:9092
- kafka1:9093
- kafka2:9094
Plugins:
producer:
Type: producer.Kafka
Streams: write
Compression: zip
Topics:
write: test
consumer:
Type: consumer.Kafka
Streams: read
Topic: test
DefaultOffset: Oldest
producerConsole:
Type: producer.Console
Streams: read
```
### Examples and Cookbooks[¶](#examples-and-cookbooks)
Here you can find some examples and cookbooks showing how you can run Gollum.
#### Examples[¶](#examples)
##### Hello World Examples[¶](#hello-world-examples)
###### Hello World[¶](#hello-world)
This example sets up a simple console consumer and producer that will simply echo everything you type back to the console.
As messages have no newline appended by default, an Envelope formatter is used to add one before writing to the console.
Make sure to start Gollum with gollum -ll 3 to see all log messages.
```
'StdIn':
Type: 'consumer.Console'
Streams: 'console'
'StdOut':
Type: 'producer.Console'
Streams: 'console'
Modulators:
- 'format.Envelope': {}
```
###### Loadbalancer[¶](#loadbalancer)
This example extends the Hello World example by introducing a route configuration.
All messages from the console consumer will be sent to a round robin loadbalancer that will forward messages to one of the two attached producers.
Make sure to start Gollum with gollum -ll 3 to see all log messages.
```
'StdIn':
Type: 'consumer.Console'
Streams: 'console'
'loadbalancer':
Type: 'router.RoundRobin'
Stream: 'console'
'StdOut1':
Type: 'producer.Console'
Streams: 'console'
Modulators:
- 'format.Envelope':
Prefix: '1: '
'StdOut2':
Type: 'producer.Console'
Streams: 'console'
Modulators:
- 'format.Envelope':
Prefix: '2: '
```
When you remove the router from the config you will see each message reach both producers.
###### Hello World filtered[¶](#hello-world-filtered)
This example extends the previous example by setting up a filter to only echo sentences that end with the word “gollum”.
A regular expression filter is used to achieve this.
Note that this filter does not apply to standard log messages.
Make sure to start Gollum with gollum -ll 3 to see all log messages.
```
'StdIn':
Type: 'consumer.Console'
Streams: 'console'
'loadbalancer':
Type: 'router.RoundRobin'
Stream: 'console'
Filters:
- 'filter.RegExp':
FilterExpression: ".*gollum$"
'StdOut1':
Type: 'producer.Console'
Streams: 'console'
Modulators:
- 'format.Envelope':
Prefix: '1: '
'StdOut2':
Type: 'producer.Console'
Streams: 'console'
Modulators:
- 'format.Envelope':
Prefix: '2: '
```
You can also attach filters to the modulators section of a consumer or a producer.
Please note that routers can filter but not modify messages.
###### Hello World splitter[¶](#hello-world-splitter)
This example extends the first example by introducing a stream split.
This time we will print the console output twice, encoded as XML and as JSON.
Make sure to start Gollum with gollum -ll 3 to see all log messages.
```
'StdIn':
Type: 'consumer.Console'
Streams: 'console'
'StdOutXML':
Type: 'producer.Console'
Streams: 'console'
Modulators:
- 'format.Envelope':
Prefix: '<msg>'
Postfix: '</msg>\n'
'StdOutJSON':
Type: 'producer.Console'
Streams: 'console'
Modulators:
- 'format.Envelope':
Prefix: '{"msg":"'
Postfix: '"}\n'
```
You can also do this in a slightly different way by utilizing two streams.
When doing this you can filter or route both streams differently.
In this extended example, every second message will be output only as JSON.
```
'StdIn':
Type: 'consumer.Console'
Streams:
- 'consoleJSON'
- 'consoleXML'
'xmlFilter':
Type: 'router.Broadcast'
Stream: 'consoleXML'
Filters:
- 'filter.Sample': {}
'StdOutXML':
Type: 'producer.Console'
Streams: 'consoleXML'
Modulators:
- 'format.Envelope':
Prefix: '<msg>'
Postfix: '</msg>\n'
'StdOutJSON':
Type: 'producer.Console'
Streams: 'consoleJSON'
Modulators:
- 'format.Envelope':
Prefix: '{"msg":"'
Postfix: '"}\n'
```
###### Chat server[¶](#chat-server)
This example requires two Gollum instances to run.
The first one acts as the “chat client” while the second one acts as the “chat server”.
Messages entered on the client will be sent to the server using runlength encoding.
When the message reaches the server, it will be decoded and written to the console.
If the server does not respond, the message will be sent to the fallback and displayed as an error.
Make sure to start Gollum with gollum -ll 3 to see all log messages.
**Client**
```
'StdIn':
Type: 'consumer.Console'
Streams: 'console'
'SocketOut':
Type: 'producer.Socket'
Streams: 'console'
Address: ':5880'
Acknowledge: 'OK'
FallbackStream: 'failed'
Modulators:
- 'format.Runlength': {}
'Failed':
Type: 'producer.Console'
Streams: 'failed'
Modulators:
- 'format.Envelope':
Prefix: 'Failed to send: '
```
**Server**
```
'SocketIn':
Type: 'consumer.Socket'
Streams: 'socket'
Address: ":5880"
Acknowledge: 'OK'
Partitioner: 'ascii'
Delimiter: ':'
'StdOut':
Type: 'producer.Console'
Streams: 'socket'
Modulators:
- 'format.Envelope': {}
```
###### Profiling[¶](#profiling)
This configuration will test Gollum for its theoretical maximum message throughput.
You can of course modify this example to test e.g. file producer performance.
Make sure to start Gollum with gollum -ll 3 -ps to see all log messages as well as intermediate profiling results.
```
'Profiler':
Type: 'consumer.Profiler'
Streams: 'profile'
Runs: 100000
Batches: 100
Characters: 'abcdefghijklmnopqrstuvwxyz .,!;:-_'
Message: '%256s'
KeepRunning: false
ModulatorRoutines: 0
'Benchmark':
Type: 'producer.Benchmark'
Streams: 'profile'
```
#### Cookbooks[¶](#cookbooks)
##### Socket to Kafka[¶](#socket-to-kafka)
This example creates a unix domain socket /tmp/kafka.socket that accepts the following protocol:
`<topic>:<message_base64>\n`
The message will be base64 decoded and written to the topic mentioned at the start of the message. This example also shows how to apply rate limiting per topic.
**Configuration (v0.4.x):**
```
# Socket accepts <topic>:<message_base64>
- "consumer.Socket":
Stream: "raw"
Address: "unix:///tmp/kafka.socket"
Permissions: "0777"
# Stream "raw" to stream "<topic>" conversion
# Decoding of <message_base64> to <message>
- "stream.Broadcast":
Stream: "raw"
Formatter: "format.StreamRoute"
StreamRouteDelimiter: ":"
StreamRouteFormatter: "format.Base64Decode"
# Listening to all streams as streams are generated at runtime
# Use ChannelTimeoutMs to be non-blocking
- "producer.Kafka":
Stream: "*"
Filter: "filter.Rate"
RateLimitPerSec: 100
ChannelTimeoutMs: 10
Servers:
- "kafka1:9092"
- "kafka2:9092"
- "kafka3:9092"
```
##### Kafka roundtrip[¶](#kafka-roundtrip)
This example can be used for developing or testing kafka consumers and producers.
###### gollum config[¶](#gollum-config)
With the following config gollum will create a `console.consumer` with a `kafka.producer`, and a `kafka.consumer` with a
`console.producer`. All data written to the console will be sent to Kafka. The second `kafka.consumer` will read all data back from Kafka and send it to your console via the `console.producer`:
####### gollum >= v0.5.0[¶](#gollum-v0-5-0)
```
consumerConsole:
type: consumer.Console
Streams: "write"
producerKafka:
type: producer.Kafka
Streams: "write"
Compression: "zip"
Topics:
"write" : "test"
Servers:
- kafka0:9092
- kafka1:9093
- kafka2:9094
consumerKafka:
type: consumer.Kafka
Streams: "read"
Topic: "test"
DefaultOffset: "Oldest"
MaxFetchSizeByte: 100
Servers:
- kafka0:9092
- kafka1:9093
- kafka2:9094
producerConsole:
type: producer.Console
Streams: "read"
Modulators:
- format.Envelope:
Postfix: "\n"
```
This config example can also be found [here](https://github.com/trivago/gollum/blob/master/config/kafka_roundtrip.conf)
###### kafka setup for docker[¶](#kafka-setup-for-docker)
Here you find a docker-compose setup which works for the gollum config example.
####### /etc/hosts entry[¶](#etc-hosts-entry)
You need a valid `/etc/hosts` entry to be able to use the configured hostnames:
```
# you can not use 127.0.0.1 or localhost here
<YOUR PUBLIC IP> kafka0 kafka1 kafka2
```
####### docker-compose file[¶](#docker-compose-file)
```
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
kafkaone:
image: wurstmeister/kafka:0.10.0.0
ports:
- "9092:9092"
links:
- zookeeper:zookeeper
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka0
KAFKA_ZOOKEEPER_CONNECT: "zookeeper"
KAFKA_BROKER_ID: "21"
KAFKA_CREATE_TOPICS: "test:1:3,Topic2:1:1:compact"
kafkatwo:
image: wurstmeister/kafka:0.10.0.0
ports:
- "9093:9092"
links:
- zookeeper:zookeeper
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka1
KAFKA_ZOOKEEPER_CONNECT: "zookeeper"
KAFKA_BROKER_ID: "22"
KAFKA_CREATE_TOPICS: "test:1:3,Topic2:1:1:compact"
kafkathree:
image: wurstmeister/kafka:0.10.0.0
ports:
- "9094:9092"
links:
- zookeeper:zookeeper
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka2
KAFKA_ZOOKEEPER_CONNECT: "zookeeper"
KAFKA_BROKER_ID: "23"
KAFKA_CREATE_TOPICS: "test:1:3,Topic2:1:1:compact"
```
This docker-compose file can be run by:
```
docker-compose -f docker-compose-kafka.yml -p kafka010 up
```
##### Write to Elasticsearch (ElasticSearch producer)[¶](#write-to-elasticsearch-elasticsearch-producer)
###### Description[¶](#description)
This example can be used for developing or testing the ElasticSearch producer.
###### gollum config[¶](#gollum-config)
With the following config gollum will create a `console.consumer` with an `ElasticSearch.producer`. All data written to the console will be sent to ElasticSearch.
This payload can be used for the configured setup:
```
{"user" : "olivere", "message" : "It's a Raggy Waltz"}
```
####### gollum >= v0.5.0[¶](#gollum-v0-5-0)
```
consumerConsole:
type: consumer.Console
Streams: "write"
producerElastic:
Type: producer.ElasticSearch
Streams: write
User: elastic
Password: changeme
Servers:
- http://127.0.0.1:9200
Retry:
Count: 3
TimeToWaitSec: 5
SetGzip: true
StreamProperties:
write:
Index: twitter
DayBasedIndex: true
Type: tweet
Mapping:
user: keyword
message: text
Settings:
number_of_shards: 1
number_of_replicas: 1
```
This config example can also be found [here](https://github.com/trivago/gollum/blob/master/config/console_elastic.conf)
###### ElasticSearch setup for docker[¶](#elasticsearch-setup-for-docker)
Here you find a docker-compose setup which works for the config example:
```
version: '2'
services:
elasticsearch1:
image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
container_name: elasticsearch1
environment:
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
mem_limit: 1g
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- esnet
elasticsearch2:
image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
environment:
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "discovery.zen.ping.unicast.hosts=elasticsearch1"
ulimits:
memlock:
soft: -1
hard: -1
mem_limit: 1g
volumes:
- esdata2:/usr/share/elasticsearch/data
networks:
- esnet
volumes:
esdata1:
driver: local
esdata2:
driver: local
networks:
esnet:
```
This docker-compose file can be run by:
```
docker-compose -f docker-compose-elastic.yml up
```
### Release Notes[¶](#release-notes)
#### Performance tests[¶](#performance-tests)
##### History[¶](#history)
All tests were executed by calling `time gollum -c profile.conf -ll 1`.
| test | ver | user | sys | cpu | msg/sec |
| --- | --- | --- | --- | --- | --- |
| Raw pipeline | 0.6.0 | 11,45s | 3,13s | 200% | 1.316.153 |
| | 0.5.0 | 11,85s | 3,69s | 173% | 1.116.071 |
| | 0.4.6 | 10,48s | 3,01s | 178% | 1.320.132 |
| Basic formatting | 0.6.0 | 37,63s | 4,01s | 520% | 1.173.945 |
| | 0.5.0 | 39,70s | 6,09s | 532% | 1.163.602 |
| | 0.4.6 [[1]](#id2) | 21,84s | 5,78s | 206% | 746.881 |
| 8 consumers | 0.6.0 | 325,18s | 18,32s | 511% | 1.137.784 |
| | 0.5.0 | 344,33s | 28,24s | 673% | 1.446.157 |
| | 0.4.6 | 319,44s | 72,22s | 574% | 1.173.536 |
| JSON pipeline | 0.6.0 | 14,98s | 4,77s | 173% | 78.377 |
| | 0.5.0 | 28,23s | 6,33s | 138% | 40.033 |
| | 0.4.6 | 28,30s | 6,30s | 150% | 43.400 |
| [[1]](#id1) | this version does not use parallel formatting |
##### v0.6.0[¶](#v0-6-0)
###### JSON pipeline[¶](#json-pipeline)
Intel Core i7-7700HQ CPU @ 2.80GHz, 16 GB RAM go1.12.3 darwin/amd64
> * 14,98s user
> * 4,77s system
> * 173% cpu
> * 11,388s total
> * 73.846 msg/sec
```
"Profiler":
Type: consumer.Profiler
Runs: 10000
Batches: 100
Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
Message: "{\"test\":\"%64s\",\"foo\":\"%32s|%32s\",\"bar\":\"%64s\",\"thisisquitealongstring\":\"%64s\"}"
Streams: "profile"
KeepRunning: false
ModulatorRoutines: 0
Modulators:
- format.JSON: {}
- format.Move:
Source: "test"
Target: "foobar"
- format.Delete:
Target: "bar"
- format.SplitToFields:
Source: "foo"
Delimiter: "|"
Fields: ["foo1","foo2"]
- format.Copy:
Source: "thisisquitealongstring"
"Benchmark":
Type: "producer.Benchmark"
Streams: "profile"
```
##### v0.5.0[¶](#v0-5-0)
###### Raw pipeline[¶](#raw-pipeline)
Intel Core i7-4770HQ CPU @ 2.20GHz, 16 GB RAM go1.8.3 darwin/amd64
> * 11,85s user
> * 3,69s system
> * 173% cpu
> * 8,960s total
> * 1.116.071 msg/sec
```
"Profiler":
Type: "consumer.Profiler"
Runs: 100000
Batches: 100
Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
Message: "%256s"
Streams: "profile"
KeepRunning: false
ModulatorRoutines: 0
"Benchmark":
Type: "producer.Benchmark"
Streams: "profile"
```
###### Basic formatting[¶](#basic-formatting)
Intel Core i7-4770HQ CPU @ 2.20GHz, 16 GB RAM go1.8.3 darwin/amd64
Please note that from this version on, formatting is done in parallel.
> * 39,70s user
> * 6,09s system
> * 532% cpu
> * 8,594s total
> * 1.163.602 msg/sec
```
"Profiler":
Type: "consumer.Profiler"
Runs: 100000
Batches: 100
Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
Message: "%256s"
Streams: "profile"
KeepRunning: false
ModulatorRoutines: 4
Modulators:
- format.Envelope
- format.Timestamp
"Benchmark":
Type: "producer.Benchmark"
Streams: "profile"
```
###### 8 consumers with formatting[¶](#consumers-with-formatting)
Intel Core i7-4770HQ CPU @ 2.20GHz, 16 GB RAM go1.8.3 darwin/amd64
> * 344,33s user
> * 28,24s system
> * 673% cpu
> * 55,319s total
> * 1.446.157 msg/sec
```
"Profiler":
Type: Aggregate
Runs: 100000
Batches: 100
Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
Message: "%256s"
Streams: "profile"
KeepRunning: false
ModulatorRoutines: 0
Modulators:
- format.Envelope
- format.Timestamp
Plugins:
P01:
Type: "consumer.Profiler"
P02:
Type: "consumer.Profiler"
P03:
Type: "consumer.Profiler"
P04:
Type: "consumer.Profiler"
P05:
Type: "consumer.Profiler"
P06:
Type: "consumer.Profiler"
P07:
Type: "consumer.Profiler"
P08:
Type: "consumer.Profiler"
"Benchmark":
Type: "producer.Benchmark"
Streams: "profile"
```
###### JSON pipeline[¶](#id3)
Intel Core i7-4770HQ CPU @ 2.20GHz, 16 GB RAM go1.8.3 darwin/amd64
> * 28,23s user
> * 6,33s system
> * 138% cpu
> * 24,979s total
> * 40.033 msg/sec
```
"Profiler":
Type: consumer.Profiler
Runs: 10000
Batches: 100
Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
Message: "{\"test\":\"%64s\",\"foo\":\"%32s|%32s\",\"bar\":\"%64s\",\"thisisquitealongstring\":\"%64s\"}"
Streams: "profile"
KeepRunning: false
ModulatorRoutines: 0
Modulators:
- format.ProcessJSON:
Directives:
- "test:rename:foobar"
- "bar:remove"
- "foo:split:|:foo1:foo2"
- format.ExtractJSON:
Field: thisisquitealongstring
"Benchmark":
Type: "producer.Benchmark"
Streams: "profile"
```
##### v0.4.6[¶](#v0-4-6)
###### Raw pipeline[¶](#id4)
Intel Core i7-4770HQ CPU @ 2.20GHz, 16 GB RAM go1.8.3 darwin/amd64
> * 10,48s user
> * 3,01s system
> * 178% cpu
> * 7,575s total
> * 1.320.132 msg/sec
```
- "consumer.Profiler":
Runs: 100000
Batches: 100
Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
Message: "{\"test\":\"%64s\",\"foo\":\"%32s|%32s\",\"bar\":\"%64s\",\"thisisquitealongstring\":\"%64s\"}"
Stream: "profile"
KeepRunning: false
- "producer.Benchmark":
Stream: "profile"
```
###### Basic formatting[¶](#id5)
Intel Core i7-4770HQ CPU @ 2.20GHz, 16 GB RAM go1.8.3 darwin/amd64
> * 21,84s user
> * 5,78s system
> * 206% cpu
> * 13,389s total
> * 746.881 msg/sec
```
- "consumer.Profiler":
Runs: 100000
Batches: 100
Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
Message: "%256s"
Stream: "profile"
KeepRunning: false
- "stream.Broadcast":
Stream: "profile"
Formatter: format.Timestamp
TimestampFormatter: format.Envelope
- "producer.Benchmark":
Stream: "profile"
```
###### 8 consumers with formatting[¶](#id6)
Intel Core i7-4770HQ CPU @ 2.20GHz, 16 GB RAM go1.8.3 darwin/amd64
> * 319,44s user
> * 72,22s system
> * 574% cpu
> * 68,17s total
> * 1.173.536 msg/sec
```
- "consumer.Profiler":
Instances: 8
Runs: 100000
Batches: 100
Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
Message: "%256s"
Stream: "profile"
KeepRunning: false
- "stream.Broadcast":
Stream: "profile"
Formatter: format.Timestamp
TimestampFormatter: format.Envelope
- "producer.Benchmark":
Stream: "profile"
```
###### JSON pipeline[¶](#id7)
Intel Core i7-4770HQ CPU @ 2.20GHz, 16 GB RAM go1.8.3 darwin/amd64
> * 28,30s user
> * 6,30s system
> * 150% cpu
> * 23,041s total
> * 43.400 msg/sec
```
- "consumer.Profiler":
Runs: 10000
Batches: 100
Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
Message: "%256s"
Stream: "profile"
KeepRunning: false
- "stream.Broadcast":
Stream: "profile"
Formatter: format.ExtractJSON
ExtractJSONdataFormatter: format.ProcessJSON
ProcessJSONDirectives:
- "test:rename:foobar"
- "bar:remove"
- "foo:split:|:foo1:foo2"
ExtractJSONField: thisisquitealongstring
- "producer.Benchmark":
Stream: "profile"
```
#### v0.5.0[¶](#v0-5-0)
##### Breaking changes 0.4.x to 0.5.0[¶](#breaking-changes-0-4-x-to-0-5-0)
###### Configuration[¶](#configuration)
The goal of this breaking change was to make Gollum configuration files easier to maintain and easier to merge. In addition to that several quirks and inconsistencies have been resolved.
####### Plugin header[¶](#plugin-header)
This change allows configs to be merged more easily, which is a requirement for future features.
As of this change a new, mandatory field “Type” has been added.
**From**
```
- "plugin.Type":
ID: "pluginId"
```
**To**
```
"pluginId":
Type: "plugin.Type"
```
####### Plural form[¶](#plural-form)
In previous versions, fields did not follow a consistent rule for when to use plural or singular. In 0.5.0, plural means "one or more values" while singular means "only one value".
**From**
```
- "plugin.Type":
ID: "pluginId"
Category:
- "Foo"
- "Bar"
Streams:
- "foo"
- "bar"
```
**To**
```
"pluginId":
type: "plugin.Type"
categories:
- "Foo"
- "Bar"
streams:
- "foo"
- "bar"
```
####### Formatters and filters are now modulators[¶](#formatters-and-filters-are-now-modulators)
In earlier versions chaining formatters was done by nesting them via options. This was confusing as the order was "upside down". In addition, each formatter could only be used once. The new modulator concept introduces a more natural order and allows formatters to be reused as often as necessary. In addition to that, filters and formatters have been merged into the same list. This solves the problem of applying filters before or after formatters, which previously required adding e.g. a "FilterAfterFormat" field.
**From**
```
- "plugin.Type":
ID: "pluginId"
Filter: "filter.Before"
FilterAfterFormat: "filter.After"
Formatter: "format.SECOND"
SECONDOption: "foobar"
SECONDFormatter: "format.FIRST"
```
**To**
```
"pluginId":
Type: "plugin.Type"
Modulators:
- "filter.Before"
- "format.FIRST"
- "format.SECOND"
Option: "foobar"
- "filter.After"
```
####### Nested options[¶](#nested-options)
Some plugins had a set of options starting with the same prefix
(e.g. file.Producer). These options have now been grouped.
**From**
```
- "plugin.Type":
ID: "pluginId"
RotateAfterHours: 10
RotateSizeMB: 1024
RotateAt: "00:00"
```
**To**
```
"pluginId":
Type: "plugin.Type"
Rotate:
AfterHours: 10
SizeMB: 1024
At: "00:00"
```
###### Plugins[¶](#plugins)
The plugin system has been refactored to make plugins more consistent and to reduce the amount of work required to write a new plugin. This change introduced new subclasses and changed some of the basic interfaces.
The shutdown process has been revamped to give plugins a better chance to cleanly shut down and to get rid of all their messages without the system having to care about stream loops.
###### Renaming of streams to routers[¶](#renaming-of-streams-to-routers)
A “stream” in 0.4.x has a double meaning. It denotes a stream of data,
as well as a type of plugin that is used to route messages from one stream to another or simply to configure a certain stream of data in terms of formatting.
To make it easier to talk about these two things, the routing/configuring part (the plugins) has been renamed to "router".
**From**
```
- "stream.Broadcast":
ID: "Splitter"
Stream: "foo"
```
**To**
```
"Splitter":
Type: "router.Broadcast"
Stream: "foo"
```
###### Removal of gollum/shared[¶](#removal-of-gollum-shared)
All types from the `github.com/trivago/gollum/shared` package have been moved to the new `github.com/trivago/tgo` package and subpackages. This allows us to re-use these types in other projects more easily and introduces a better structure. This package is meant to be an extension to the Golang standard library and follows a “t-prefix” naming convention. Everything that you would expect in e.g. the `sync`
package will be placed in `tgo/tsync`.
**From**
```
c := shared.MaxI(a,b)
spin := shared.NewSpinner(shared.SpinPriorityLow)
```
**To**
```
c := tmath.MaxI(a,b)
spin := tsync.NewSpinner(tsync.SpinPriorityLow)
```
###### Base classes[¶](#base-classes)
In version 0.4.x and earlier not all plugins had a base class. In 0.5.0 all plugins have base classes and existing base classes have been renamed.
**renamed**
```
core.ConsumerBase -> core.SimpleConsumer
core.ProducerBase -> core.BufferedProducer
core.StreamBase -> core.SimpleRouter
```
**new**
```
core.SimpleConsumer Consumer base class
core.SimpleFilter Filter base class
core.SimpleFormatter Formatter base class
core.SimpleProducer Producer base class
core.SimpleRouter Router base class
core.DirectProducer A producer that directly accepts messages without buffering
core.BufferedProducer A producer that reads messages from a channel
core.BatchedProducer A producer that collects messages and processes them in a batch
```
###### Metrics[¶](#metrics)
Metrics have been moved from gollum/shared to the tgo package. As of this change, `shared.Metric.*` has to be replaced by `tgo.Metric.*` and the package "github.com/trivago/tgo" has to be imported instead of
“github.com/trivago/gollum/shared”.
Please note that “per second” metrics can now be added without additional overhead by using
`tgo.Metric.NewRate(metricName, rateMetricName, time.Second, 10, 3, true)`.
All custom “per second” metrics should be replaced with this function.
###### Logging[¶](#logging)
Version 0.5.0 introduces logrus-based scoped logging to give error messages a clearer context. As of this change, every plugin has a "Logger"
member in its base class.
**From**
```
Log.Error.Print("MyPlugin: Something's wrong", err)
```
**To**
```
plugin.Logger.WithError(err).Error("Something's wrong")
```
###### Configure[¶](#configure)
Error handling has been improved so that a plugin automatically reacts on missing or invalid values. Errors are now collected in a stack attached to the config reader and processed as a batch after configure returns. In addition to that, simple types can now be configured using struct tags.
**From**
```
type Console struct {
core.ConsumerBase
autoExit bool
pipeName string
pipePerm uint32
pipe *os.File
}
func (cons *Console) Configure(conf core.PluginConfig) error {
cons.autoExit = conf.GetBool("ExitOnEOF", true)
inputConsole := conf.GetString("Console", "stdin")
switch strings.ToLower(inputConsole) {
case "stdin":
cons.pipe = os.Stdin
cons.pipeName = "stdin"
case "stdin":
return fmt.Errorf("Cannot read from stderr")
default:
cons.pipe = nil
cons.pipeName = inputConsole
if perm, err := strconv.ParseInt(conf.GetString("Permissions", "0664"), 8, 32); err != nil {
Log.Error.Printf("Error parsing named pipe permissions: %s", err)
} else {
cons.pipePerm = uint32(perm)
}
}
return cons.ConsumerBase.Configure(conf)
}
```
**To**
```
type Console struct {
core.SimpleConsumer
autoExit bool `config:"ExitOnEOF" default:"true"`
pipeName string `config:"Pipe" default:"stdin"`
pipePerm uint32 `config:"Permissions" default:"0644"`
pipe *os.File
}
func (cons *Console) Configure(conf core.PluginConfigReader) {
switch strings.ToLower(cons.pipeName) {
case "stdin":
cons.pipe = os.Stdin
cons.pipeName = "stdin"
case "stderr":
conf.Errors.Pushf("Cannot read from stderr")
default:
cons.pipe = nil
}
}
```
###### Message handling[¶](#message-handling)
Message handling has changed from the way 0.4.x does it.
Messages now support MetaData and contain a copy of the “original” data next to the actual payload.
In addition to this, messages are now backed by a memory pool and are passed around using pointers.
All this is reflected in new function signatures and new message member functions.
**From**
```
func (format *Sequence) Format(msg core.Message) ([]byte, core.MessageStreamID) {
basePayload, stream := format.base.Format(msg)
baseLength := len(basePayload)
sequenceStr := strconv.FormatUint(msg.Sequence, 10) + format.separator
payload := make([]byte, len(sequenceStr)+baseLength)
len := copy(payload, []byte(sequenceStr))
copy(payload[len:], basePayload)
return payload, stream
}
```
**To**
```
func (format *Sequence) ApplyFormatter(msg *core.Message) error {
seq := atomic.AddInt64(format.seq, 1)
sequenceStr := strconv.FormatInt(seq, 10)
content := format.GetAppliedContent(msg)
dataSize := len(sequenceStr) + len(format.separator) + len(content)
payload := core.MessageDataPool.Get(dataSize)
offset := copy(payload, []byte(sequenceStr))
offset += copy(payload[offset:], format.separator)
copy(payload[offset:], content)
format.SetAppliedContent(msg, payload)
return nil
}
```
This example shows most of the changes related to the new message structure.
1. As the sequence number has been removed from the message struct, plugins relying on it need to implement it themselves.
2. As messages now support metadata, you need to specify whether you want to affect metadata or the payload.
In formatter plugins this is reflected by the GetAppliedContent method, which is backed by the “ApplyTo” config parameter.
3. If you require a new payload buffer you should now utilize core.MessageDataPool.
Things that you don’t see in this example are the following:
1. Buffers returned by core.MessageDataPool tend to be overallocated, i.e. they can be resized without reallocation in most cases.
As of this change, methods to resize the payload have been added.
2. If you need to create a copy of the complete message, use the Clone() method.
###### Formatting pipeline[¶](#formatting-pipeline)
In version 0.4.x you had to take care of message changes yourself on many different occasions.
With 0.5.0 the message flow has been moved completely to the core framework.
As of this change you don't need to worry about routing, or about resetting data to its original state. The framework will do this for you.
**From**
```
func (prod *Redis) getValueAndKey(msg core.Message) (v []byte, k string) {
value, _ := prod.Format(msg) // Creates a copy and we must not forget this step
if prod.keyFormat == nil {
return value, prod.key
}
if prod.keyFromParsed { // Ordering is crucial here
keyMsg := msg
keyMsg.Data = value
key, _ := prod.keyFormat.Format(keyMsg)
return value, string(key)
}
key, _ := prod.keyFormat.Format(msg)
return value, string(key)
}
func (prod *Redis) storeString(msg core.Message) {
value, key := prod.getValueAndKey(msg)
result := prod.client.Set(key, string(value), 0)
if result.Err() != nil {
Log.Error.Print("Redis: ", result.Err())
prod.Drop(msg) // Good thing we stored a copy of the message ...
}
}
```
**To**
```
func (prod *Redis) getValueFieldAndKey(msg *core.Message) (v, f, k []byte) {
meta := msg.GetMetadata()
key := meta.GetValue(prod.key) // Due to metadata fields...
field := meta.GetValue(prod.field) // ... this is now a lot easier
return msg.GetPayload(), field, key
}
func (prod *Redis) storeString(msg *core.Message) {
// The message arrives here after formatting
value, _, key := prod.getValueFieldAndKey(msg)
result := prod.client.Set(string(key), string(value), time.Duration(0))
if result.Err() != nil {
prod.Logger.WithError(result.Err()).Error("Failed to set value")
prod.TryFallback(msg) // Will send the original (unformatted) message. Always.
}
}
```
##### New features[¶](#new-features)
* Filters and Formatters have been merged into one list
* You can now use a filter or formatter more than once in the same plugin
* Consumers can now do filtering and formatting, too
* Messages can now store metadata. Formatters can affect the payload or a metadata field
* All plugins now have an automatic log scope
* Message payloads are now backed by a memory pool
* Messages now store the original message, i.e. a backup of the payload state after consumer processing
* Gollum now provides per-stream metrics
* Plugins are now able to implement health checks that can be queried via http
* There is a new pseudo plugin type “Aggregate” that can be used to share configuration between multiple plugins
* New base types for producers: Direct, Buffered, Batched
* Plugin configurations now support nested structures
* The configuration process has been simplified a lot by adding automatic error handling and struct tags
* Added a new formatter format.GrokToJSON
* Added a new formatter format.JSONToInflux10
* Added a new formatter format.Double
* Added a new formatter format.MetadataCopy
* Added a new formatter format.Trim
* Consumer.File now supports filesystem events
* Consumers can now define the number of go routines used for formatting/filtering
* All AWS plugins now support role switching
* All AWS plugins are now based on the same credentials code
##### Bugfixes[¶](#bugfixes)
* The plugin lifecycle has been reimplemented to avoid gollum being stuck waiting for plugins to change state
* Any errors during the configuration phase will cause gollum to exit
* Integration test suite added
* Producer.HTTPRequest port handling fixed
* The test-config command will now produce more meaningful results
* Duplicating messages now properly duplicates the whole message and not just the struct
* Several race conditions have been fixed
* Producer.ElasticSearch is now based on a more up-to-date library
* Producer.AwsS3 is now behaving more like producer.File
* Gollum metrics can now bind to a specific address instead of just a port
##### Breaking changes[¶](#breaking-changes)
* The config format has changed to improve automatic processing
* A lot of plugins have been renamed to avoid confusion and to better reflect their behavior
* A lot of plugin parameters have been renamed
* The instances plugin parameter has been removed
* Most of gollum’s metrics have been renamed
* Plugin base types have been renamed
* All message handling function signatures have changed to use pointers
* All formatters don’t daisy chain anymore as they can now be listed in proper order
* Stream plugins have been renamed to Router plugins
* Routers are not allowed to modify message content anymore
* filter.All and format.Forward have been removed as they are not required anymore
* Producer formatter lists dedicated to formatting a key or similar constructs have been removed
* Logging framework switched to logrus
* The package gollum.shared has been removed in favor of trivago.tgo
* Fuses have been removed from all plugins
* The general message sequence number has been removed
* The term "drop" has been replaced by the term "fallback" to emphasise its use
* The _DROPPED_ stream has been removed. Messages are discarded if no fallback is set
* Formatters can still change the stream of a message but cannot trigger routing by themselves
* Compiling contrib plugins now requires a specific loader.go to be added
* The docker file on docker hub is now a lot smaller and only contains the gollum binary
### License[¶](#license)
This project is released under the terms of the [Apache 2.0 license](http://www.apache.org/licenses/LICENSE-2.0).
```
Copyright 2015-2018 <NAME>.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
``` |
aco-pants | readthedoc | Python | Pants 0.5.2 documentation
[Pants](index.html#document-index)
---
Welcome to the documentation for Pants![¶](#module-pants)
===
A Python3 implementation of the Ant Colony Optimization Meta-Heuristic.
**Pants** provides you with the ability to quickly determine how to visit a collection of interconnected nodes such that the work done is minimized. Nodes can be any arbitrary collection of data while the edges represent the amount of “work” required to travel between two nodes.
Thus, **Pants** is a tool for solving traveling salesman problems.
The world is built from a list of nodes and a function responsible for returning the length of the edge between any two given nodes. The length function need not return actual length. Instead, "length" refers to
the amount of "work" involved in moving from the first node to the second node - whatever that "work" may be. For a silly, random example, it could even be the number of dishes one must wash before moving to the next
station in a competition to wash the fewest dishes.
Solutions are found through an iterative process. In each iteration,
several ants are allowed to find a solution that “visits” every node of the world. The amount of pheromone on each edge is updated according to the length of the solutions in which it was used. The ant that traveled the least distance is considered to be the local best solution. If the local solution has a shorter distance than the best from any previous iteration, it then becomes the global best solution. The elite ant(s)
then deposit their pheromone along the path of the global best solution to strengthen it further, and the process repeats.
You can read more about [Ant Colony Optimization on Wikipedia](http://en.wikipedia.org/wiki/Ant_colony_optimization_algorithms).
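Putting these pieces together, a minimal end-to-end sketch might look like the following. It assumes the classes are importable from the `pants.world` and `pants.solver` modules documented below and that `Solver.solve()` returns the best `Ant`; the node data and the `euclidean` length function are purely illustrative.
```
import math
import random

from pants.world import World
from pants.solver import Solver

# Nodes can be any data the length function understands; here, 2-D points.
nodes = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(20)]

def euclidean(a, b):
    # "Length" is simply the work required to travel from node a to node b.
    return math.hypot(a[0] - b[0], a[1] - b[1])

world = World(nodes, euclidean)
solver = Solver()            # default ACO parameters
best = solver.solve(world)   # assumed to return the best Ant found

print(best.distance)         # total length of the best tour
print(best.tour)             # nodes in the order they were visited
```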
World module[¶](#module-pants.world)
---
*class* `pants.world.``Edge`(*start*, *end*, *length=None*, *pheromone=None*)[[source]](_modules/pants/world.html#Edge)[¶](#pants.world.Edge)
This class represents the link between starting and ending nodes.
In addition to *start* and *end* nodes, every `Edge` has *length*
and *pheromone* properties. *length* represents the static, *a priori*
information, whereas *pheromone* level represents the dynamic, *a posteriori* information.
| Parameters: | * **start** ([*node*](#pants.ant.Ant.node)) – the node at the start of the `Edge`
* **end** ([*node*](#pants.ant.Ant.node)) – the node at the end of the `Edge`
* **length** ([*float*](https://docs.python.org/2/library/functions.html#float)) – the length of the `Edge` (default=1)
* **pheromone** ([*float*](https://docs.python.org/2/library/functions.html#float)) – the amount of pheromone on the `Edge`
(default=0.1)
|
*class* `pants.world.``World`(*nodes*, *lfunc*, ***kwargs*)[[source]](_modules/pants/world.html#World)[¶](#pants.world.World)
The nodes and edges of a particular problem.
Each `World` is created from a list of nodes, a length function, and optionally, a name and a description. Additionally, each `World` has a UID. The length function must accept nodes as its first two parameters,
and is responsible for returning the distance between them. It is the
responsibility of the `create_edges()` method to generate the required
`Edge`s and initialize them with the correct *length* as returned by the length function.
Once created, `World` objects convert the actual nodes into node IDs, since solving does not rely on the actual data in the nodes. These are accessible via the `nodes` property. To access the actual nodes,
simply pass an ID obtained from `nodes` to the `data()` method,
which will return the node associated with the specified ID.
`Edge`s are accessible in much the same way, except two node IDs must be passed to the `data()` method to indicate which nodes start and end the `Edge`. For example:
```
ids = world.nodes
assert len(ids) > 1
node0 = world.data(ids[0])
node1 = world.data(ids[1])
edge01 = world.data(ids[0], ids[1])
assert edge01.start == node0
assert edge01.end == node1
```
The `reset_pheromone()` method provides an easy way to reset the pheromone levels of every `Edge` contained in a `World` to a given *level*. It should be invoked before attempting to solve a
`World` unless a “blank slate” is not desired. Also note that it should *not* be called between iterations of the `Solver` because it effectively erases the memory of the `Ant` colony solving it.
| Parameters: | * **nodes** (*list*) – a list of nodes
* **lfunc** ([*callable*](https://docs.python.org/2/library/functions.html#callable)) – a function that calculates the distance between two nodes
* **name** ([*str*](https://docs.python.org/2/library/functions.html#str)) – the name of the world (default is “world#”, where
“#” is the `uid` of the world)
* **description** ([*str*](https://docs.python.org/2/library/functions.html#str)) – a description of the world (default is None)
|
`create_edges`()[[source]](_modules/pants/world.html#World.create_edges)[¶](#pants.world.World.create_edges)
Create edges from the nodes.
The job of this method is to map node ID pairs to `Edge`
instances that describe the edge between the nodes at the given indices. Note that all of the `Edge`s are created within this method.
| Returns: | a mapping of node ID pairs to `Edge` instances. |
| Return type: | [`dict`](https://docs.python.org/2/library/stdtypes.html#dict) |
`data`(*idx*, *idy=None*)[[source]](_modules/pants/world.html#World.data)[¶](#pants.world.World.data)
Return the node data of a single id or the edge data of two ids.
If only *idx* is specified, return the node with the ID *idx*. If *idy*
is also specified, return the `Edge` between nodes with indices
*idx* and *idy*.
| Parameters: | * **idx** ([*int*](https://docs.python.org/2/library/functions.html#int)) – the id of the first node
* **idy** ([*int*](https://docs.python.org/2/library/functions.html#int)) – the id of the second node (default is None)
|
| Returns: | the node with ID *idx* or the `Edge` between nodes with IDs *idx* and *idy*. |
| Return type: | node or `Edge` |
`nodes`[¶](#pants.world.World.nodes)
Node IDs.
`reset_pheromone`(*level=0.01*)[[source]](_modules/pants/world.html#World.reset_pheromone)[¶](#pants.world.World.reset_pheromone)
Reset the amount of pheromone on every edge to some base *level*.
Each time a new set of solutions is to be found, the amount of pheromone on every edge should be equalized to ensure un-biased initial conditions.
| Parameters: | **level** ([*float*](https://docs.python.org/2/library/functions.html#float)) – amount of pheromone to set on each edge
(default=0.01) |
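For example, re-solving the same world from a clean slate might look like this sketch (assuming `world` and `solver` exist as in the earlier example):
```
# Equalize pheromone on every Edge before starting a fresh, unbiased run.
world.reset_pheromone(level=0.01)
best_again = solver.solve(world)
```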
Ant module[¶](#module-pants.ant)
---
*class* `pants.ant.``Ant`(*alpha=1*, *beta=3*)[[source]](_modules/pants/ant.html#Ant)[¶](#pants.ant.Ant)
A single independent finder of solutions to a `World`.
Each `Ant` finds a solution to a world one move at a time. They also represent the solution they find, and are capable of reporting which nodes and edges they visited, in what order they were visited, and the total length of the solution.
Two properties govern the decisions each `Ant` makes while finding a solution: *alpha* and *beta*. *alpha* controls the importance placed on pheromone while *beta* controls the importance placed on distance. In
general, *beta* should be greater than *alpha* for best results.
`Ant`s also have a *uid* property that can be used to identify a particular instance.
Using the `initialize()` method, each `Ant` *must be
initialized* to a particular `World`, and optionally may be given an initial node from which to start finding a solution. If a starting node is not given, one is chosen at random. Thus a few examples of instantiation and initialization might look like:
```
ant = Ant()
ant.initialize(world)
```
```
ant = Ant().initialize(world)
```
```
ant = Ant(alpha=0.5, beta=2.25)
ant.initialize(world, start=world.nodes[0])
```
Note
The examples above assume the world has already been created!
Once an `Ant` has found a solution (or at any time), the solution may be obtained and inspected by accessing its `tour` property, which returns the nodes visited in order, or its `path` property, which
returns the edges visited in order. Also, the total distance of the
solution can be accessed through its `distance` property. `Ant`s are even sortable by their distance:
```
ants = [Ant() for ...]
# ... have each ant in the list solve a world
ants = sorted(ants)
for i in range(1, len(ants)):
    assert ants[i - 1].distance < ants[i].distance
```
`Ant`s may be cloned, which will return a shallow copy while not
preserving the *uid* property. If this behavior is not desired, simply use the [`copy.copy()`](https://docs.python.org/2/library/copy.html#copy.copy) or [`copy.deepcopy()`](https://docs.python.org/2/library/copy.html#copy.deepcopy) methods as necessary.
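A small sketch of that difference, assuming the `uid` behaviour described above and a `world` created beforehand:
```
import copy
from pants.ant import Ant

ant = Ant(alpha=0.5, beta=2.25).initialize(world)

twin = ant.clone()      # shallow copy with a new uid
exact = copy.copy(ant)  # shallow copy that keeps the uid

assert twin.uid != ant.uid
assert exact.uid == ant.uid
```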
The remaining methods mainly govern the mechanics of making each move.
`can_move()` determines whether all possible moves have been made,
`remaining_moves()` returns the moves not yet made, `choose_move()`
returns a single move from a list of moves, `make_move()` actually performs the move, and `weigh()` returns the weight of a given move.
The `move()` method governs the move-making process by gathering the remaining moves, choosing one of them, making the chosen move, and
returning the move that was made.
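Taken together, a hand-rolled solving loop for a single ant might look like this sketch (only methods and properties documented in this module are used; `world` is assumed to exist):
```
from pants.ant import Ant

ant = Ant().initialize(world)
while ant.can_move():
    edge = ant.move()   # gathers, chooses, and makes a single move

print(ant.tour)         # nodes visited, in order
print(ant.path)         # edges traveled, in order
print(ant.distance)     # total length of the solution found
```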
`can_move`()[[source]](_modules/pants/ant.html#Ant.can_move)[¶](#pants.ant.Ant.can_move)
Return `True` if there are moves that have not yet been made.
| Return type: | [bool](https://docs.python.org/2/library/functions.html#bool) |
`choose_move`(*choices*)[[source]](_modules/pants/ant.html#Ant.choose_move)[¶](#pants.ant.Ant.choose_move)
Choose a move from all possible moves.
| Parameters: | **choices** (*list*) – a list of all possible moves |
| Returns: | the chosen element from *choices* |
| Return type: | [node](#pants.ant.Ant.node) |
`clone`()[[source]](_modules/pants/ant.html#Ant.clone)[¶](#pants.ant.Ant.clone)
Return a shallow copy with a new UID.
If an exact copy (including the uid) is desired, use the
[`copy.copy()`](https://docs.python.org/2/library/copy.html#copy.copy) method.
| Returns: | a clone |
| Return type: | `Ant` |
`initialize`(*world*, *start=None*)[[source]](_modules/pants/ant.html#Ant.initialize)[¶](#pants.ant.Ant.initialize)
Reset everything so that a new solution can be found.
| Parameters: | * **world** ([*World*](#pants.world.World)) – the world to solve
* **start** (*Node*) – the starting node (default is chosen randomly)
|
| Returns: | self |
| Return type: | `Ant` |
`make_move`(*dest*)[[source]](_modules/pants/ant.html#Ant.make_move)[¶](#pants.ant.Ant.make_move)
Move to the *dest* node and return the edge traveled.
When *dest* is `None`, an attempt to take the final move back to the starting node is made. If that is not possible (because it has
previously been done), then `None` is returned.
| Parameters: | **dest** ([*node*](#pants.ant.Ant.node)) – the destination node for the move |
| Returns: | the edge taken to get to *dest* |
| Return type: | `Edge` |
`move`()[[source]](_modules/pants/ant.html#Ant.move)[¶](#pants.ant.Ant.move)
Choose, make, and return a move from the remaining moves.
| Returns: | the `Edge` taken to make the move chosen |
| Return type: | `Edge` |
`node`[¶](#pants.ant.Ant.node)
Most recently visited node.
`path`[¶](#pants.ant.Ant.path)
Edges traveled by the `Ant` in order.
`remaining_moves`()[[source]](_modules/pants/ant.html#Ant.remaining_moves)[¶](#pants.ant.Ant.remaining_moves)
Return the moves that remain to be made.
| Return type: | list |
`tour`[¶](#pants.ant.Ant.tour)
Nodes visited by the `Ant` in order.
`weigh`(*edge*)[[source]](_modules/pants/ant.html#Ant.weigh)[¶](#pants.ant.Ant.weigh)
Calculate the weight of the given *edge*.
The weight of an edge is simply a representation of its perceived value in finding a shorter solution. Larger weights increase the odds of the edge being taken, whereas smaller weights decrease those odds.
| Parameters: | **edge** ([*Edge*](#pants.world.Edge)) – the edge to weigh |
| Returns: | the weight of *edge* |
| Return type: | [float](https://docs.python.org/2/library/functions.html#float) |
Solver module[¶](#module-pants.solver)
---
*class* `pants.solver.``Solver`(***kwargs*)[[source]](_modules/pants/solver.html#Solver)[¶](#pants.solver.Solver)
This class contains the functionality for finding one or more solutions for a given `World`.
| Parameters: | * **alpha** ([*float*](https://docs.python.org/2/library/functions.html#float)) – relative importance of pheromone (default=1)
* **beta** ([*float*](https://docs.python.org/2/library/functions.html#float)) – relative importance of distance (default=3)
* **rho** ([*float*](https://docs.python.org/2/library/functions.html#float)) – percent evaporation of pheromone (0..1, default=0.8)
* **q** ([*float*](https://docs.python.org/2/library/functions.html#float)) – total pheromone deposited by each `Ant` after each iteration is complete (>0, default=1)
* **t0** ([*float*](https://docs.python.org/2/library/functions.html#float)) – initial pheromone level along each `Edge` of the
`World` (>0, default=0.01)
* **limit** ([*int*](https://docs.python.org/2/library/functions.html#int)) – number of iterations to perform (default=100)
* **ant_count** ([*float*](https://docs.python.org/2/library/functions.html#float)) – how many `Ant`s will be used
(default=10)
* **elite** ([*float*](https://docs.python.org/2/library/functions.html#float)) – multiplier of the pheromone deposited by the elite
`Ant` (default=0.5)
|
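All of these are keyword arguments, so a tuned solver can be created as in the sketch below; the specific values are illustrative, not recommendations.
```
from pants.solver import Solver

solver = Solver(
    alpha=1,       # relative importance of pheromone
    beta=5,        # relative importance of distance
    rho=0.6,       # pheromone evaporation per iteration
    q=1,           # pheromone deposited by each Ant per iteration
    t0=0.01,       # initial pheromone level on each Edge
    limit=200,     # number of iterations to perform
    ant_count=25,  # number of Ants per iteration
    elite=0.75,    # multiplier for the elite Ant's deposit
)
```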
`aco`(*colony*)[[source]](_modules/pants/solver.html#Solver.aco)[¶](#pants.solver.Solver.aco)
Return the best solution by performing the ACO meta-heuristic.
This method lets every `Ant` in the colony find a solution,
updates the pheromone levels according to the solutions found, and returns the Ant with the best solution.
This method is not meant to be called directly. Instead, call either
`solve()` or `solutions()`.
| Parameters: | **colony** (*list*) – the Ants to use in finding a solution |
| Returns: | the best solution found |
| Return type: | `Ant` |
`create_colony`(*world*)[[source]](_modules/pants/solver.html#Solver.create_colony)[¶](#pants.solver.Solver.create_colony)
Create a set of `Ant`s and initialize them to the given
*world*.
If *ant_count* is less than 1, `round_robin_ants()` is used and the number of `Ant`s will equal the number of nodes. Otherwise, `random_ants()` is used instead, and the
number of `Ant`s will equal *ant_count*.
| Parameters: | **world** ([*World*](#pants.world.World)) – the world from which the `Ant`s will be given starting nodes. |
| Returns: | list of `Ant`s |
| Return type: | list |
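The dispatch described above amounts to a small branch on *ant_count*; a hedged sketch (the node-count accessor is an assumption):
```python
def create_colony(self, world):
    # Fewer than one ant requested: one ant per node, assigned round-robin.
    if self.ant_count < 1:
        return self.round_robin_ants(world, len(world.nodes))  # world.nodes assumed
    # Otherwise, create exactly ant_count ants at random starting nodes.
    return self.random_ants(world, self.ant_count)
```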
`find_solutions`(*ants*)[[source]](_modules/pants/solver.html#Solver.find_solutions)[¶](#pants.solver.Solver.find_solutions)
Let each `Ant` find a solution.
Makes each `Ant` move until each can no longer move.
| Parameters: | **ants** (*list*) – the ants to use for solving |
`global_update`(*ants*)[[source]](_modules/pants/solver.html#Solver.global_update)[¶](#pants.solver.Solver.global_update)
Update the amount of pheromone on each edge according to the fitness of solutions that use it.
This accomplishes the global update performed at the end of each solving iteration.
Note
This method should never let the pheromone on an edge decrease to
less than its initial level.
| Parameters: | **ants** (*list*) – the ants to use for solving |
`local_update`(*edge*)[[source]](_modules/pants/solver.html#Solver.local_update)[¶](#pants.solver.Solver.local_update)
Evaporate some of the pheromone on the given *edge*.
Note
This method should never let the pheromone on an edge decrease to
less than its initial level.
| Parameters: | **edge** ([*Edge*](#pants.world.Edge)) – the `Edge` to be updated |
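Together, `local_update()` and `global_update()` roughly follow the usual ACO scheme: evaporation on every step, deposits proportional to solution quality, and a floor at the initial pheromone level `t0`. A hedged sketch of that scheme (the `pheromone` and `distance` attribute names are assumptions, and the exact update Pants performs may differ):
```python
def local_update(edge, rho=0.8, t0=0.01):
    # Evaporate a fraction rho of the pheromone, but never drop below t0.
    edge.pheromone = max(t0, edge.pheromone * (1.0 - rho))

def global_update(ants, q=1.0, t0=0.01):
    # Deposit pheromone along every solution; shorter tours deposit more per edge.
    for ant in ants:
        deposit = q / ant.distance      # ant.distance assumed: total tour length
        for edge in ant.path:           # path: the edges traveled (documented above)
            edge.pheromone = max(t0, edge.pheromone + deposit)
```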
`random_ants`(*world*, *count*, *even=False*)[[source]](_modules/pants/solver.html#Solver.random_ants)[¶](#pants.solver.Solver.random_ants)
Returns a list of `Ant`s distributed to the nodes of the
world in a random fashion.
Note that this does not ensure at least one `Ant` begins at each node unless there are exactly as many `Ant`s as there are nodes. This method is used to create the `Ant`s before solving if *ant_count* is **not** `0`.
| Parameters: | * **world** ([*World*](#pants.world.World)) – the `World` in which to create the ants.
* **count** ([*int*](https://docs.python.org/2/library/functions.html#int)) – the number of `Ant`s to create
* **even** ([*bool*](https://docs.python.org/2/library/functions.html#bool)) – `True` if [`random.random()`](https://docs.python.org/2/library/random.html#random.random) should avoid
choosing the same starting node multiple times
(default is `False`)
|
| Returns: | the `Ant`s initialized to nodes in the `World` |
| Return type: | list |
`reset_colony`(*colony*)[[source]](_modules/pants/solver.html#Solver.reset_colony)[¶](#pants.solver.Solver.reset_colony)
Reset the *colony* of `Ant`s such that each `Ant` is ready to find a new solution.
Essentially, this method re-initializes all `Ant`s in the colony to the `World` that they were initialized to last.
Internally, this method is called after each iteration of the
`Solver`.
| Parameters: | **colony** (*list*) – the `Ant`s to reset |
`round_robin_ants`(*world*, *count*)[[source]](_modules/pants/solver.html#Solver.round_robin_ants)[¶](#pants.solver.Solver.round_robin_ants)
Returns a list of `Ant`s distributed to the nodes of the
world in a round-robin fashion.
Note that this does not ensure at least one `Ant` begins at each node unless there are exactly as many `Ant`s as there are nodes. However, if *ant_count* is `0` then *ant_count* is set to the number of nodes in the `World` and this method is used to create the `Ant`s before solving.
| Parameters: | * **world** ([*World*](#pants.world.World)) – the `World` in which to create the
`Ant`s
* **count** ([*int*](https://docs.python.org/2/library/functions.html#int)) – the number of `Ant`s to create
|
| Returns: | the `Ant`s initialized to nodes in the `World` |
| Return type: | list |
`solutions`(*world*)[[source]](_modules/pants/solver.html#Solver.solutions)[¶](#pants.solver.Solver.solutions)
Return successively shorter paths through the given *world*.
Unlike `solve()`, this method returns one solution for each
improvement of the best solution found thus far.
| Parameters: | **world** ([*World*](#pants.world.World)) – the `World` to solve |
| Returns: | successively shorter solutions as `Ant`s |
| Return type: | list |
`solve`(*world*)[[source]](_modules/pants/solver.html#Solver.solve)[¶](#pants.solver.Solver.solve)
Return the single shortest path found through the given *world*.
| Parameters: | **world** ([*World*](#pants.world.World)) – the `World` to solve |
| Returns: | the single best solution found |
| Return type: | `Ant` |
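Both entry points take the same `World`; a short hedged sketch contrasting them:
```python
# One-shot: only the single best Ant found across all iterations.
best = solver.solve(world)

# Incremental: one Ant per improvement over the previous best,
# useful for reporting progress or plotting convergence.
for improved in solver.solutions(world):
    print(improved.tour)
```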
`trace_elite`(*ant*)[[source]](_modules/pants/solver.html#Solver.trace_elite)[¶](#pants.solver.Solver.trace_elite)
Deposit pheromone along the path of a particular ant.
This method is used to deposit the pheromone of the elite `Ant`
at the end of each iteration.
Note
This method should never let the pheromone on an edge decrease to
less than its initial level.
| Parameters: | **ant** ([*Ant*](#pants.ant.Ant)) – the elite `Ant` |
Indices and tables[¶](#indices-and-tables)
===
* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html) |
waywiser | cran | R | Package ‘waywiser’
July 20, 2023
Type Package
Title Ergonomic Methods for Assessing Spatial Models
Version 0.4.2
Description Assessing predictive models of spatial data can be
challenging, both because these models are typically built for
extrapolating outside the original region represented by training data
and due to potential spatially structured errors, with "hot spots" of
higher than expected error clustered geographically due to spatial
structure in the underlying data. Methods are provided for assessing
models fit to spatial data, including approaches for measuring the
spatial structure of model errors, assessing model predictions at
multiple spatial scales, and evaluating where predictions can be made
safely. Methods are particularly useful for models fit using the
'tidymodels' framework. Methods include Moran's I ('Moran' (1950)
<doi:10.2307/2332142>), Geary's C ('Geary' (1954)
<doi:10.2307/2986645>), Getis-Ord's G ('Ord' and 'Getis' (1995)
<doi:10.1111/j.1538-4632.1995.tb00912.x>), agreement coefficients from
'Ji' and Gallo (2006) (<doi:10.14358/PERS.72.7.823>), agreement
metrics from 'Willmott' (1981) (<doi:10.1080/02723646.1981.10642213>)
and 'Willmott' 'et' 'al'. (2012) (<doi:10.1002/joc.2419>), an
implementation of the area of applicability methodology from 'Meyer'
and 'Pebesma' (2021) (<doi:10.1111/2041-210X.13650>), and an
implementation of multi-scale assessment as described in 'Riemann'
'et' 'al'. (2010) (<doi:10.1016/j.rse.2010.05.010>).
License MIT + file LICENSE
URL https://github.com/ropensci/waywiser,
https://docs.ropensci.org/waywiser/
BugReports https://github.com/ropensci/waywiser/issues
Depends R (>= 4.0)
Imports dplyr (>= 1.1.0), fields, FNN, glue, hardhat, Matrix, purrr,
rlang (>= 1.1.0), sf (>= 1.0-0), spdep (>= 1.1-9), stats,
tibble, tidyselect, vctrs, yardstick (>= 1.2.0)
Suggests applicable, caret, CAST, covr, exactextractr, ggplot2, knitr,
modeldata, recipes, rmarkdown, rsample, spatialsample, terra,
testthat (>= 3.0.0), tidymodels, tidyr, tigris, vip, whisker,
withr
VignetteBuilder knitr
Config/Needs/website kableExtra
Config/testthat/edition 3
Config/testthat/parallel true
Encoding UTF-8
Language en-US
LazyData true
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-2402-304X>),
<NAME> [ctb] (<https://orcid.org/0000-0002-7953-0260>),
<NAME> [rev] (Virgilio reviewed the package (v.
0.2.0.9000) for rOpenSci, see
<https://github.com/ropensci/software-review/issues/571>),
<NAME> [rev] (Jakub reviewed the package (v. 0.2.0.9000) for
rOpenSci, see
<https://github.com/ropensci/software-review/issues/571>),
Posit Software, PBC [cph, fnd]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-07-20 14:30:02 UTC
R topics documented:
guerry
ny_trees
predict.ww_area_of_applicability
worldclim_simulation
ww_agreement_coefficient
ww_area_of_applicability
ww_build_neighbors
ww_build_weights
ww_global_geary_c
ww_global_moran_i
ww_local_geary_c
ww_local_getis_ord_g
ww_local_moran_i
ww_make_point_neighbors
ww_make_polygon_neighbors
ww_multi_scale
ww_willmott_d
guerry Guerry "Moral Statistics" (1830s)
Description
This data and description are taken from the geodaData R package. Classic social science foundational
study by Andre-Michel Guerry on crime, suicide, literacy and other “moral statistics” in
1830s France. Data from the R package Guerry (Michael Friendly and Stephane Dray).
Usage
guerry
Format
An sf data frame with 85 rows, 23 variables, and a geometry column:
dept Department ID: Standard numbers for the departments
Region Region of France (’N’=’North’, ’S’=’South’, ’E’=’East’, ’W’=’West’, ’C’=’Central’).
Corsica is coded as NA.
Dprtmnt Department name: Departments are named according to usage in 1830, but without
accents. A factor with levels Ain Aisne Allier ... Vosges Yonne
Crm_prs Population per Crime against persons.
Crm_prp Population per Crime against property.
Litercy Percent of military conscripts who can read and write.
Donatns Donations to the poor.
Infants Population per illegitimate birth.
Suicids Population per suicide.
Maincty Size of principal city (’1:Sm’, ’2:Med’, ’3:Lg’), used as a surrogate for population
density. Large refers to the top 10, small to the bottom 10; all the rest are classed Medium.
Wealth Per capita tax on personal property. A ranked index based on taxes on personal and
movable property per inhabitant.
Commerc Commerce and Industry, measured by the rank of the number of patents / population.
Clergy Distribution of clergy, measured by the rank of the number of Catholic priests in active
service population.
Crim_prn Crimes against parents, measured by the rank of the ratio of crimes against parents to
all crimes – Average for the years 1825-1830.
Infntcd Infanticides per capita. A ranked ratio of number of infanticides to population – Average
for the years 1825-1830.
Dntn_cl Donations to the clergy. A ranked ratio of the number of bequests and donations inter
vivios to population – Average for the years 1815-1824.
Lottery Per capita wager on Royal Lottery. Ranked ratio of the proceeds bet on the royal lottery
to population — Average for the years 1822-1826.
Desertn Military desertion, ratio of number of young soldiers accused of desertion to the force of
the military contingent, minus the deficit produced by the insufficiency of available billets –
Average of the years 1825-1827.
Instrct Instruction. Ranks recorded from Guerry’s map of Instruction. Note: this is inversely
related to Literacy
Prsttts Number of prostitutes registered in Paris from 1816 to 1834, classified by the department
of their birth
Distanc Distance to Paris (km). Distance of each department centroid to the centroid of the Seine
(Paris)
Area Area (1000 km^2).
Pop1831 Population in 1831, in 1000s
Details
Sf object, units in m. EPSG 27572: NTF (Paris) / Lambert zone II.
Source
• <NAME>. (1836). Essai sur la Statistique de la Population française Paris: F. Doufour.
• <NAME>. (1833). Essai sur la statistique morale de la France Paris: Crochard. English
translation: <NAME> and <NAME>, <NAME>. : <NAME> Press,
2002.
• Parent-Duchatelet, A. (1836). De la prostitution dans la ville de Paris, 3rd ed, 1857, p. 32, 36
https://geodacenter.github.io/data-and-lab/Guerry/
Examples
if (requireNamespace("sf", quietly = TRUE)) {
library(sf)
data(guerry)
plot(guerry["Donatns"])
}
ny_trees Number of trees and aboveground biomass for Forest Inventory and
Analysis plots in New York State
Description
The original data is derived from the Forest Inventory and Analysis program, implemented by the
US Department of Agriculture’s Forest Service.
Usage
ny_trees
Format
An sf object using EPSG 5070: NAD83 / Conus Albers (in meters), with 5,303 rows and 5 columns:
yr The year measurements were taken.
plot A unique identifier signifying the plot measurements were taken at.
n_trees The number of trees present on a plot.
agb The total aboveground biomass at the plot location, in pounds.
geometry The centroid of the plot location.
predict.ww_area_of_applicability
Predict from a ww_area_of_applicability
Description
Predict from a ww_area_of_applicability
Usage
## S3 method for class 'ww_area_of_applicability'
predict(object, new_data, ...)
Arguments
object A ww_area_of_applicability object.
new_data A data frame or matrix of new samples.
... Not used.
Details
The function computes the distance indices of the new data and whether or not they are "inside" the
area of applicability.
Value
A tibble of predictions, with two columns: di, numeric, contains the "dissimilarity index" of each
point in new_data, while aoa, logical, contains whether a row is inside (TRUE) or outside (FALSE)
the area of applicability.
Note that this function is often called using terra::predict(), in which case aoa will be
converted to numeric implicitly; 1 values correspond to cells "inside" the area of applicability and 0
corresponds to cells "outside" the AOA.
The number of rows in the tibble is guaranteed to be the same as the number of rows in new_data.
Rows with NA predictor values will have NA di and aoa values.
See Also
Other area of applicability functions: ww_area_of_applicability()
Examples
library(vip)
train <- gen_friedman(1000, seed = 101) # ?vip::gen_friedman
test <- train[701:1000, ]
train <- train[1:700, ]
pp <- stats::ppr(y ~ ., data = train, nterms = 11)
metric_name <- ifelse(
packageVersion("vip") > package_version("0.3.2"),
"rsq",
"rsquared"
)
importance <- vip::vi_permute(
pp,
target = "y",
metric = metric_name,
pred_wrapper = predict,
train = train
)
aoa <- ww_area_of_applicability(y ~ ., train, test, importance = importance)
predict(aoa, test)
worldclim_simulation Simulated data based on WorldClim Bioclimatic variables
Description
This data is adapted from the CAST vignette vignette("cast02-AOA-tutorial", package =
"CAST"). The original data is derived from the Worldclim global climate variables.
Usage
worldclim_simulation
Format
An sf object with 10,000 rows and 6 columns:
bio2 Mean Diurnal Range (Mean of monthly (max temp - min temp))
bio10 Mean Temperature of Warmest Quarter
bio13 Precipitation of Wettest Month
bio19 Precipitation of Coldest Quarter
geometry The location of the sampled point.
response A virtual species distribution, generated using the generateSpFromPCA() function from
the virtualspecies package.
Source
https://www.worldclim.org
ww_agreement_coefficient
Agreement coefficients and related methods
Description
These functions calculate the agreement coefficient and mean product difference (MPD), as well as
their systematic and unsystematic components, from Ji and Gallo (2006). The agreement coefficient
provides a useful measure of agreement between two data sets which is bounded, symmetrical,
and can be decomposed into systematic and unsystematic components; however, it assumes a linear
relationship between the two data sets and treats both "truth" and "estimate" as being of equal
quality, and as such may not be a useful metric in all scenarios.
Usage
ww_agreement_coefficient(data, ...)
## S3 method for class 'data.frame'
ww_agreement_coefficient(data, truth, estimate, na_rm = TRUE, ...)
ww_agreement_coefficient_vec(truth, estimate, na_rm = TRUE, ...)
ww_systematic_agreement_coefficient(data, ...)
## S3 method for class 'data.frame'
ww_systematic_agreement_coefficient(data, truth, estimate, na_rm = TRUE, ...)
ww_systematic_agreement_coefficient_vec(truth, estimate, na_rm = TRUE, ...)
ww_unsystematic_agreement_coefficient(data, ...)
## S3 method for class 'data.frame'
ww_unsystematic_agreement_coefficient(data, truth, estimate, na_rm = TRUE, ...)
ww_unsystematic_agreement_coefficient_vec(truth, estimate, na_rm = TRUE, ...)
ww_unsystematic_mpd(data, ...)
## S3 method for class 'data.frame'
ww_unsystematic_mpd(data, truth, estimate, na_rm = TRUE, ...)
ww_unsystematic_mpd_vec(truth, estimate, na_rm = TRUE, ...)
ww_systematic_mpd(data, ...)
## S3 method for class 'data.frame'
ww_systematic_mpd(data, truth, estimate, na_rm = TRUE, ...)
ww_systematic_mpd_vec(truth, estimate, na_rm = TRUE, ...)
ww_unsystematic_rmpd(data, ...)
## S3 method for class 'data.frame'
ww_unsystematic_rmpd(data, truth, estimate, na_rm = TRUE, ...)
ww_unsystematic_rmpd_vec(truth, estimate, na_rm = TRUE, ...)
ww_systematic_rmpd(data, ...)
## S3 method for class 'data.frame'
ww_systematic_rmpd(data, truth, estimate, na_rm = TRUE, ...)
ww_systematic_rmpd_vec(truth, estimate, na_rm = TRUE, ...)
Arguments
data A data.frame containing the columns specified by the truth and estimate
arguments.
... Not currently used.
truth The column identifier for the true results (that is numeric). This should be an
unquoted column name although this argument is passed by expression and sup-
ports quasiquotation (you can unquote column names). For _vec() functions, a
numeric vector.
estimate The column identifier for the predicted results (that is also numeric). As with
truth this can be specified different ways but the primary method is to use an
unquoted variable name. For _vec() functions, a numeric vector.
na_rm A logical value indicating whether NA values should be stripped before the
computation proceeds.
Details
Agreement coefficient values range from 0 to 1, with 1 indicating perfect agreement. truth and
estimate must be the same length. This function is not explicitly spatial and as such can be applied
to data with any number of dimensions and any coordinate reference system.
Value
A tibble with columns .metric, .estimator, and .estimate and 1 row of values. For grouped data
frames, the number of rows returned will be the same as the number of groups. For _vec() func-
tions, a single value (or NA).
References
<NAME>. and <NAME>. 2006. "An Agreement Coefficient for Image Comparison." Photogrammetric
Engineering & Remote Sensing 72(7), pp 823–833, doi: 10.14358/PERS.72.7.823.
See Also
Other agreement metrics: ww_willmott_d()
Other yardstick metrics: ww_global_geary_c(), ww_global_moran_i(), ww_local_geary_c(),
ww_local_getis_ord_g(), ww_local_moran_i(), ww_willmott_d()
Examples
# Calculated values match Ji and Gallo 2006:
x <- c(6, 8, 9, 10, 11, 14)
y <- c(2, 3, 5, 5, 6, 8)
ww_agreement_coefficient_vec(x, y)
ww_systematic_agreement_coefficient_vec(x, y)
ww_unsystematic_agreement_coefficient_vec(x, y)
ww_systematic_mpd_vec(x, y)
ww_unsystematic_mpd_vec(x, y)
ww_systematic_rmpd_vec(x, y)
ww_unsystematic_rmpd_vec(x, y)
example_df <- data.frame(x = x, y = y)
ww_agreement_coefficient(example_df, x, y)
ww_systematic_agreement_coefficient(example_df, x, y)
ww_unsystematic_agreement_coefficient(example_df, x, y)
ww_systematic_mpd(example_df, x, y)
ww_unsystematic_mpd(example_df, x, y)
ww_systematic_rmpd(example_df, x, y)
ww_unsystematic_rmpd(example_df, x, y)
ww_area_of_applicability
Find the area of applicability
Description
This function calculates the "area of applicability" of a model, as introduced by <NAME> Pebesma
(2021). While the initial paper introducing this method focused on spatial models, there is nothing
inherently spatial about the method; it can be used with any type of data (and, because it does not
care about the spatial arrangement of your data, can be used with 2D or 3D spatial data, and with
geographic or projected CRS).
Usage
ww_area_of_applicability(x, ...)
## S3 method for class 'data.frame'
ww_area_of_applicability(x, testing = NULL, importance, ..., na_rm = FALSE)
## S3 method for class 'matrix'
ww_area_of_applicability(x, testing = NULL, importance, ..., na_rm = FALSE)
## S3 method for class 'formula'
ww_area_of_applicability(
x,
data,
testing = NULL,
importance,
...,
na_rm = FALSE
)
## S3 method for class 'recipe'
ww_area_of_applicability(
x,
data,
testing = NULL,
importance,
...,
na_rm = FALSE
)
## S3 method for class 'rset'
ww_area_of_applicability(x, y = NULL, importance, ..., na_rm = FALSE)
Arguments
x Either a data frame, matrix, formula (specifying predictor terms on the right-
hand side), recipe (from recipes::recipe(), or rset object, produced by re-
sampling functions from rsample or spatialsample.
If x is a recipe, it should be the same one used to pre-process the data used in
your model. If the recipe used to build the area of applicability doesn’t match the
one used to build the model, the returned area of applicability won’t be correct.
... Not currently used.
testing A data frame or matrix containing the data used to validate your model. This
should be the same data as used to calculate all model accuracy metrics.
If this argument is NULL, then this function will use the training data (from x
or data) to calculate within-sample distances. This may result in the area of
applicability threshold being set too high, with the result that too many points
are classed as "inside" the area of applicability.
importance Either:
• A data.frame with two columns: term, containing the names of each vari-
able in the training and testing data, and estimate, containing the (raw or
scaled) feature importance for each variable.
• An object of class vi with at least two columns, Variable and Importance.
All variables in the training data (x or data, depending on the context) must
have a matching importance estimate, and all terms with importance estimates
must be in the training data.
na_rm A logical of length 1, indicating whether observations (in both training and test-
ing) with NA values in predictors should be removed. Only predictor variables
are considered, and this value has no impact on predictions (where NA values
produce NA predictions). If na_rm = FALSE and NA values are found, this func-
tion returns an error.
data The data frame representing your "training" data, when using the formula or
recipe methods.
y Optional: a recipe (from recipes::recipe()) or formula.
If y is a recipe, it should be the same one used to pre-process the data used in
your model. If the recipe used to build the area of applicability doesn’t match the
one used to build the model, the returned area of applicability won’t be correct.
Details
Predictions made on points "inside" the area of applicability should be as accurate as predictions
made on the data provided to testing. That means that generally testing should be your final
hold-out set so that predictions on points inside the area of applicability are accurately described by
your reported model metrics. When passing an rset object to x, predictions made on points "inside"
the area of applicability instead should be as accurate as predictions made on the assessment sets
during cross-validation.
This method assumes your model was fit using dummy variables in the place of any non-numeric
predictor, and that you have one importance score per dummy variable. Having non-numeric pre-
dictors will cause this function to fail.
Value
A ww_area_of_applicability object, which can be used with predict() to calculate the distance
of new data to the original training data, and determine if new data is within a model’s area of
applicability.
Differences from CAST
This implementation differs from Meyer and Pebesma (2021) (and therefore from CAST) when
using cross-validated data in order to minimize data leakage. Namely, in order to calculate the
dissimilarity index DI_k, CAST:
1. Rescales all data used for cross validation at once, lumping assessment folds in with analysis
data.
2. Calculates a single d̄ as the mean distance between all points in the rescaled data set, including
between points in the same assessment fold.
3. For each point k that’s used in an assessment fold, calculates d_k as the minimum distance
between k and any point in its corresponding analysis fold.
4. Calculates DI_k by dividing d_k by d̄ (which was partially calculated as the distance between k
and the rest of the rescaled data).
Because assessment data is used to calculate constants for rescaling analysis data and d̄, the
assessment data may appear too "similar" to the analysis data when calculating DI_k. As such, waywiser
treats each fold in an rset independently:
1. Each analysis set is rescaled independently.
2. A separate d̄ is calculated for each fold, as the mean distance between all points in the analysis
set for that fold.
3. Identically to CAST, d_k is the minimum distance between a point k in the assessment fold and
any point in the corresponding analysis fold.
4. DI_k is then found by dividing d_k by d̄, which was calculated independently from k.
Predictions are made using the full training data set, rescaled once (in the same way as CAST), and
the mean d̄ across folds, under the assumption that the "final" model in use will be retrained using
the entire data set.
In practice, this means waywiser produces very slightly higher d̄ values than CAST and a slightly
higher area of applicability threshold than CAST when using rset objects.
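In symbols, both implementations compute the same per-point index; they differ only in which points contribute to the rescaling and to d̄:
```latex
DI_k = \frac{d_k}{\bar{d}}
```
where d_k is the minimum distance from assessment point k to any point in its analysis fold, and d̄ is the mean pairwise distance within the rescaled analysis set (waywiser) or within the full rescaled data set (CAST).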
References
<NAME> and <NAME>. 2021. "Predicting into unknown space? Estimating the area of
applicability of spatial prediction models," Methods in Ecology and Evolution 12(9), pp 1620 - 1633, doi:
10.1111/2041-210X.13650.
See Also
Other area of applicability functions: predict.ww_area_of_applicability()
Examples
train <- vip::gen_friedman(1000, seed = 101) # ?vip::gen_friedman
test <- train[701:1000, ]
train <- train[1:700, ]
pp <- stats::ppr(y ~ ., data = train, nterms = 11)
metric_name <- ifelse(
packageVersion("vip") > package_version("0.3.2"),
"rsq",
"rsquared"
)
importance <- vip::vi_permute(
pp,
target = "y",
metric = metric_name,
pred_wrapper = predict,
train = train
)
aoa <- ww_area_of_applicability(y ~ ., train, test, importance = importance)
predict(aoa, test)
# Equivalent methods for calculating AOA:
ww_area_of_applicability(train[2:11], test[2:11], importance)
ww_area_of_applicability(
as.matrix(train[2:11]),
as.matrix(test[2:11]),
importance
)
ww_build_neighbors Make ’nb’ objects from sf objects
Description
These functions can be used for geographic or projected coordinate reference systems and expect
2D data.
Usage
ww_build_neighbors(data, nb = NULL, ..., call = rlang::caller_env())
Arguments
data An sf object (of class "sf" or "sfc").
nb An object of class "nb" (in which case it will be returned unchanged), or a func-
tion to create an object of class "nb" from data and ..., or NULL. See details.
... Arguments passed to the neighbor-creating function.
call The execution environment of a currently running function, e.g. call = caller_env().
The corresponding function call is retrieved and mentioned in error messages as
the source of the error.
You only need to supply call when throwing a condition from a helper function
which wouldn’t be relevant to mention in the message.
Can also be NULL or a defused function call to respectively not display any call
or hard-code a code to display.
For more information about error calls, see Including function calls in error
messages.
Details
When nb = NULL, the method used to create neighbors from data is dependent on what geometry
type data is:
• If nb = NULL and data is a point geometry (classes "sfc_POINT" or "sfc_MULTIPOINT") the
"nb" object will be created using ww_make_point_neighbors().
• If nb = NULL and data is a polygon geometry (classes "sfc_POLYGON" or "sfc_MULTIPOLYGON")
the "nb" object will be created using ww_make_polygon_neighbors().
• If nb = NULL and data is any other geometry type, the "nb" object will be created using the
centroids of the data as points, with a warning.
Value
An object of class "nb".
Examples
ww_build_neighbors(guerry)
ww_build_weights Build "listw" objects of spatial weights
Description
These functions can be used for geographic or projected coordinate reference systems and expect
2D data.
Usage
ww_build_weights(x, wt = NULL, include_self = FALSE, ...)
Arguments
x Either an sf object or a "nb" neighbors list object. If an sf object, will be con-
verted into a neighbors list via ww_build_neighbors().
wt Either a "listw" object (which will be returned unchanged), a function for creat-
ing a "listw" object from x, or NULL, in which case weights will be constructed
via spdep::nb2listw().
include_self Include each region itself in its own list of neighbors?
... Arguments passed to the weight constructing function.
Value
A listw object.
Examples
ww_build_weights(guerry)
ww_global_geary_c Global Geary’s C statistic
Description
Calculate the global Geary’s C statistic for model residuals. ww_global_geary_c() returns the
statistic itself, while ww_global_geary_pvalue() returns the associated p value. These functions
are meant to help assess model predictions, for instance by identifying if there are clusters of higher
residuals than expected. For statistical testing and inference applications, use spdep::geary.test()
instead.
Usage
ww_global_geary_c(data, ...)
ww_global_geary_c_vec(truth, estimate, wt, na_rm = FALSE, ...)
ww_global_geary_pvalue(data, ...)
ww_global_geary_pvalue_vec(truth, estimate, wt = NULL, na_rm = FALSE, ...)
Arguments
data A data.frame containing the columns specified by the truth and estimate
arguments.
... Additional arguments passed to spdep::geary() (for ww_global_geary_c())
or spdep::geary.test() (for ww_global_geary_pvalue()).
truth The column identifier for the true results (that is numeric). This should be an
unquoted column name although this argument is passed by expression and sup-
ports quasiquotation (you can unquote column names). For _vec() functions, a
numeric vector.
estimate The column identifier for the predicted results (that is also numeric). As with
truth this can be specified different ways but the primary method is to use an
unquoted variable name. For _vec() functions, a numeric vector.
wt A listw object, for instance as created with ww_build_weights(). For data.frame
input, may also be a function that takes data and returns a listw object.
na_rm A logical value indicating whether NA values should be stripped before the
computation proceeds.
Details
These functions can be used for geographic or projected coordinate reference systems and expect
2D data.
Value
A tibble with columns .metric, .estimator, and .estimate and 1 row of values. For grouped data
frames, the number of rows returned will be the same as the number of groups. For _vec() func-
tions, a single value (or NA).
References
<NAME>. (1954). "The Contiguity Ratio and Statistical Mapping". The Incorporated Statistician.
5 (3): 115–145. doi:10.2307/2986645.
<NAME>., <NAME>. 1981 Spatial processes, Pion, p. 17.
See Also
Other autocorrelation metrics: ww_global_moran_i(), ww_local_geary_c(), ww_local_getis_ord_g(),
ww_local_moran_i()
Other yardstick metrics: ww_agreement_coefficient(), ww_global_moran_i(), ww_local_geary_c(),
ww_local_getis_ord_g(), ww_local_moran_i(), ww_willmott_d()
Examples
guerry_model <- guerry
guerry_lm <- lm(Crm_prs ~ Litercy, guerry_model)
guerry_model$predictions <- predict(guerry_lm, guerry_model)
ww_global_geary_c(guerry_model, Crm_prs, predictions)
ww_global_geary_pvalue(guerry_model, Crm_prs, predictions)
wt <- ww_build_weights(guerry_model)
ww_global_geary_c_vec(
guerry_model$Crm_prs,
guerry_model$predictions,
wt = wt
)
ww_global_geary_pvalue_vec(
guerry_model$Crm_prs,
guerry_model$predictions,
wt = wt
)
ww_global_moran_i Global Moran’s I statistic
Description
Calculate the global Moran’s I statistic for model residuals. ww_global_moran_i() returns the
statistic itself, while ww_global_moran_pvalue() returns the associated p value. These functions
are meant to help assess model predictions, for instance by identifying if there are clusters of higher
residuals than expected. For statistical testing and inference applications, use spdep::moran.test()
instead.
Usage
ww_global_moran_i(data, ...)
ww_global_moran_i_vec(truth, estimate, wt = NULL, na_rm = FALSE, ...)
ww_global_moran_pvalue(data, ...)
ww_global_moran_pvalue_vec(truth, estimate, wt = NULL, na_rm = FALSE, ...)
Arguments
data A data.frame containing the columns specified by the truth and estimate
arguments.
... Additional arguments passed to spdep::moran() (for ww_global_moran_i())
or spdep::moran.test() (for ww_global_moran_pvalue()).
truth The column identifier for the true results (that is numeric). This should be an
unquoted column name although this argument is passed by expression and sup-
ports quasiquotation (you can unquote column names). For _vec() functions, a
numeric vector.
estimate The column identifier for the predicted results (that is also numeric). As with
truth this can be specified different ways but the primary method is to use an
unquoted variable name. For _vec() functions, a numeric vector.
wt A listw object, for instance as created with ww_build_weights(). For data.frame
input, may also be a function that takes data and returns a listw object.
na_rm A logical value indicating whether NA values should be stripped before the
computation proceeds.
Details
These functions can be used for geographic or projected coordinate reference systems and expect
2D data.
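For reference, the statistic these functions compute follows the standard global Moran’s I (Moran 1950), applied to the model residuals e_i = truth_i - estimate_i with weights w_ij taken from the supplied listw object; a sketch of the definition:
```latex
I = \frac{n}{\sum_{i}\sum_{j} w_{ij}}
\cdot
\frac{\sum_{i}\sum_{j} w_{ij}\,(e_i - \bar{e})(e_j - \bar{e})}{\sum_{i}(e_i - \bar{e})^{2}}
```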
Value
A tibble with columns .metric, .estimator, and .estimate and 1 row of values. For grouped data
frames, the number of rows returned will be the same as the number of groups. For _vec() func-
tions, a single value (or NA).
References
<NAME>. (1950). "Notes on Continuous Stochastic Phenomena." Biometrika, 37(1/2), pp 17.
doi: 10.2307/2332142
<NAME>., <NAME>. 1981 Spatial processes, Pion, p. 17.
See Also
Other autocorrelation metrics: ww_global_geary_c(), ww_local_geary_c(), ww_local_getis_ord_g(),
ww_local_moran_i()
Other yardstick metrics: ww_agreement_coefficient(), ww_global_geary_c(), ww_local_geary_c(),
ww_local_getis_ord_g(), ww_local_moran_i(), ww_willmott_d()
Examples
guerry_model <- guerry
guerry_lm <- lm(Crm_prs ~ Litercy, guerry_model)
guerry_model$predictions <- predict(guerry_lm, guerry_model)
ww_global_moran_i(guerry_model, Crm_prs, predictions)
ww_global_moran_pvalue(guerry_model, Crm_prs, predictions)
wt <- ww_build_weights(guerry_model)
ww_global_moran_i_vec(
guerry_model$Crm_prs,
guerry_model$predictions,
wt = wt
)
ww_global_moran_pvalue_vec(
guerry_model$Crm_prs,
guerry_model$predictions,
wt = wt
)
ww_local_geary_c Local Geary’s C statistic
Description
Calculate the local Geary’s C statistic for model residuals. ww_local_geary_c() returns the statis-
tic itself, while ww_local_geary_pvalue() returns the associated p value. These functions are
meant to help assess model predictions, for instance by identifying clusters of higher residuals than
expected. For statistical testing and inference applications, use spdep::localC_perm() instead.
Usage
ww_local_geary_c(data, ...)
ww_local_geary_c_vec(truth, estimate, wt, na_rm = FALSE, ...)
ww_local_geary_pvalue(data, ...)
ww_local_geary_pvalue_vec(truth, estimate, wt = NULL, na_rm = FALSE, ...)
Arguments
data A data.frame containing the columns specified by the truth and estimate
arguments.
... Additional arguments passed to spdep::localC() (for ww_local_geary_c())
or spdep::localC_perm() (for ww_local_geary_pvalue()).
truth The column identifier for the true results (that is numeric). This should be an
unquoted column name although this argument is passed by expression and sup-
ports quasiquotation (you can unquote column names). For _vec() functions, a
numeric vector.
estimate The column identifier for the predicted results (that is also numeric). As with
truth this can be specified different ways but the primary method is to use an
unquoted variable name. For _vec() functions, a numeric vector.
wt A listw object, for instance as created with ww_build_weights(). For data.frame
input, may also be a function that takes data and returns a listw object.
na_rm A logical value indicating whether NA values should be stripped before the
computation proceeds.
Details
These functions can be used for geographic or projected coordinate reference systems and expect
2D data.
Value
A tibble with columns .metric, .estimator, and .estimate and nrow(data) rows of values. For
_vec() functions, a numeric vector of length(truth) (or NA).
References
<NAME>. 1995. Local indicators of spatial association, Geographical Analysis, 27, pp 93–115.
doi: 10.1111/j.1538-4632.1995.tb00338.x.
Anselin, L. 2019. A Local Indicator of Multivariate Spatial Association: Extending Geary’s C.
Geographical Analysis, 51, pp 133-150. doi: 10.1111/gean.12164
See Also
Other autocorrelation metrics: ww_global_geary_c(), ww_global_moran_i(), ww_local_getis_ord_g(),
ww_local_moran_i()
Other yardstick metrics: ww_agreement_coefficient(), ww_global_geary_c(), ww_global_moran_i(),
ww_local_getis_ord_g(), ww_local_moran_i(), ww_willmott_d()
Examples
guerry_model <- guerry
guerry_lm <- lm(Crm_prs ~ Litercy, guerry_model)
guerry_model$predictions <- predict(guerry_lm, guerry_model)
ww_local_geary_c(guerry_model, Crm_prs, predictions)
ww_local_geary_pvalue(guerry_model, Crm_prs, predictions)
wt <- ww_build_weights(guerry_model)
ww_local_geary_c_vec(
guerry_model$Crm_prs,
guerry_model$predictions,
wt = wt
)
ww_local_geary_pvalue_vec(
guerry_model$Crm_prs,
guerry_model$predictions,
wt = wt
)
ww_local_getis_ord_g Local Getis-Ord G and G* statistic
Description
Calculate the local Getis-Ord G and G* statistic for model residuals. ww_local_getis_ord_g()
returns the statistic itself, while ww_local_getis_ord_pvalue() returns the associated p value.
These functions are meant to help assess model predictions, for instance by identifying clusters of
higher residuals than expected. For statistical testing and inference applications, use spdep::localG_perm()
instead.
Usage
ww_local_getis_ord_g(data, ...)
ww_local_getis_ord_g_vec(truth, estimate, wt, na_rm = FALSE, ...)
ww_local_getis_ord_g_pvalue(data, ...)
ww_local_getis_ord_g_pvalue_vec(truth, estimate, wt, na_rm = FALSE, ...)
Arguments
data A data.frame containing the columns specified by the truth and estimate
arguments.
... Additional arguments passed to spdep::localG() (for ww_local_getis_ord_g())
or spdep::localG_perm() (for ww_local_getis_ord_pvalue()).
truth The column identifier for the true results (that is numeric). This should be an
unquoted column name although this argument is passed by expression and sup-
ports quasiquotation (you can unquote column names). For _vec() functions, a
numeric vector.
estimate The column identifier for the predicted results (that is also numeric). As with
truth this can be specified different ways but the primary method is to use an
unquoted variable name. For _vec() functions, a numeric vector.
wt A listw object, for instance as created with ww_build_weights(). For data.frame
input, may also be a function that takes data and returns a listw object.
na_rm A logical value indicating whether NA values should be stripped before the
computation proceeds.
Details
These functions can be used for geographic or projected coordinate reference systems and expect
2D data.
Value
A tibble with columns .metric, .estimator, and .estimate and nrow(data) rows of values. For
_vec() functions, a numeric vector of length(truth) (or NA).
References
<NAME>. and <NAME>. 1995. Local spatial autocorrelation statistics: distributional issues and an
application. Geographical Analysis, 27, 286–306. doi: 10.1111/j.1538-4632.1995.tb00912.x
See Also
Other autocorrelation metrics: ww_global_geary_c(), ww_global_moran_i(), ww_local_geary_c(),
ww_local_moran_i()
Other yardstick metrics: ww_agreement_coefficient(), ww_global_geary_c(), ww_global_moran_i(),
ww_local_geary_c(), ww_local_moran_i(), ww_willmott_d()
Examples
guerry_model <- guerry
guerry_lm <- lm(Crm_prs ~ Litercy, guerry_model)
guerry_model$predictions <- predict(guerry_lm, guerry_model)
ww_local_getis_ord_g(guerry_model, Crm_prs, predictions)
ww_local_getis_ord_g_pvalue(guerry_model, Crm_prs, predictions)
wt <- ww_build_weights(guerry_model)
ww_local_getis_ord_g_vec(
guerry_model$Crm_prs,
guerry_model$predictions,
wt = wt
)
ww_local_getis_ord_g_pvalue_vec(
guerry_model$Crm_prs,
guerry_model$predictions,
wt = wt
)
ww_local_moran_i Local Moran’s I statistic
Description
Calculate the local Moran’s I statistic for model residuals. ww_local_moran_i() returns the statis-
tic itself, while ww_local_moran_pvalue() returns the associated p value. These functions are
meant to help assess model predictions, for instance by identifying clusters of higher residuals than
expected. For statistical testing and inference applications, use spdep::localmoran_perm() in-
stead.
Usage
ww_local_moran_i(data, ...)
ww_local_moran_i_vec(truth, estimate, wt, na_rm = FALSE, ...)
ww_local_moran_pvalue(data, ...)
ww_local_moran_pvalue_vec(truth, estimate, wt = NULL, na_rm = FALSE, ...)
Arguments
data A data.frame containing the columns specified by the truth and estimate
arguments.
... Additional arguments passed to spdep::localmoran().
truth The column identifier for the true results (that is numeric). This should be an
unquoted column name although this argument is passed by expression and sup-
ports quasiquotation (you can unquote column names). For _vec() functions, a
numeric vector.
estimate The column identifier for the predicted results (that is also numeric). As with
truth this can be specified different ways but the primary method is to use an
unquoted variable name. For _vec() functions, a numeric vector.
wt A listw object, for instance as created with ww_build_weights(). For data.frame
input, may also be a function that takes data and returns a listw object.
na_rm A logical value indicating whether NA values should be stripped before the
computation proceeds.
Details
These functions can be used for geographic or projected coordinate reference systems and expect
2D data.
Value
A tibble with columns .metric, .estimator, and .estimate and nrow(data) rows of values. For
_vec() functions, a numeric vector of length(truth) (or NA).
References
<NAME>. 1995. Local indicators of spatial association, Geographical Analysis, 27, pp 93–115.
doi: 10.1111/j.1538-4632.1995.tb00338.x.
<NAME>, <NAME>. and <NAME>. 1998. Local Spatial Autocorrelation in a Biological
Model. Geographical Analysis, 30, pp 331–354. doi: 10.1111/j.1538-4632.1998.tb00406.x
See Also
Other autocorrelation metrics: ww_global_geary_c(), ww_global_moran_i(), ww_local_geary_c(),
ww_local_getis_ord_g()
Other yardstick metrics: ww_agreement_coefficient(), ww_global_geary_c(), ww_global_moran_i(),
ww_local_geary_c(), ww_local_getis_ord_g(), ww_willmott_d()
Examples
guerry_model <- guerry
guerry_lm <- lm(Crm_prs ~ Litercy, guerry_model)
guerry_model$predictions <- predict(guerry_lm, guerry_model)
ww_local_moran_i(guerry_model, Crm_prs, predictions)
ww_local_moran_pvalue(guerry_model, Crm_prs, predictions)
wt <- ww_build_weights(guerry_model)
ww_local_moran_i_vec(
guerry_model$Crm_prs,
guerry_model$predictions,
wt = wt
)
ww_local_moran_pvalue_vec(
guerry_model$Crm_prs,
guerry_model$predictions,
wt = wt
)
ww_make_point_neighbors
Make ’nb’ objects from point geometries
Description
This function uses spdep::knearneigh() and spdep::knn2nb() to create a "nb" neighbors list.
Usage
ww_make_point_neighbors(data, k = 1, sym = FALSE, ...)
Arguments
data An sfc_POINT or sfc_MULTIPOINT object.
k How many nearest neighbors to use in spdep::knearneigh().
sym Force the output neighbors list (from spdep::knn2nb()) to symmetry.
... Other arguments passed to spdep::knearneigh().
Details
These functions can be used for geographic or projected coordinate reference systems and expect
2D data.
Value
An object of class "nb"
Examples
ww_make_point_neighbors(ny_trees)
ww_make_polygon_neighbors
Make ’nb’ objects from polygon geometries
Description
This function is an extremely thin wrapper around spdep::poly2nb(), renamed to use the
waywiser "ww" prefix.
Usage
ww_make_polygon_neighbors(data, ...)
Arguments
data An sfc_POLYGON or sfc_MULTIPOLYGON object.
... Additional arguments passed to spdep::poly2nb().
Details
These functions can be used for geographic or projected coordinate reference systems and expect
2D data.
Value
An object of class "nb"
Examples
ww_make_polygon_neighbors(guerry)
ww_multi_scale Evaluate metrics at multiple scales of aggregation
Description
Evaluate metrics at multiple scales of aggregation
Usage
ww_multi_scale(
data = NULL,
truth,
estimate,
metrics = list(yardstick::rmse, yardstick::mae),
grids = NULL,
...,
na_rm = TRUE,
aggregation_function = "mean",
autoexpand_grid = TRUE,
progress = TRUE
)
Arguments
data Either: a point geometry sf object containing the columns specified by the
truth and estimate arguments; a SpatRaster from the terra package con-
taining layers specified by the truth and estimate arguments; or NULL if truth
and estimate are SpatRaster objects.
truth, estimate
If data is an sf object, the names (optionally unquoted) for the columns in data
containing the true and predicted values, respectively. If data is a SpatRaster
object, either layer names or indices which will select the true and predicted
layers, respectively, via terra::subset() If data is NULL, SpatRaster objects
with a single layer containing the true and predicted values, respectively.
metrics Either a yardstick::metric_set() object, or a list of functions which will
be used to construct a yardstick::metric_set() object specifying the perfor-
mance metrics to evaluate at each scale.
grids Optionally, a list of pre-computed sf or sfc objects specifying polygon bound-
aries to use for assessments.
... Arguments passed to sf::st_make_grid(). You almost certainly should pro-
vide these arguments as lists. For instance, passing n = list(c(1, 2)) will
create a single 1x2 grid; passing n = c(1, 2) will create a 1x1 grid and a 2x2
grid.
na_rm Boolean: Should polygons with NA values be removed before calculating met-
rics? Note that this does not impact how values are aggregated to polygons:
if you want to remove NA values before aggregating, provide a function to
aggregation_function which will remove NA values.
aggregation_function
The function to use to aggregate predictions and true values at various scales,
by default mean(). For the sf method, you can pass any function which takes
a single vector and returns a scalar. For raster methods, any function accepted
by exactextractr::exact_extract() (note that built-in function names must
be quoted). Note that this function does not pay attention to the value of na_rm;
any NA handling you want to do during aggregation should be handled by this
function directly.
autoexpand_grid
Boolean: if data is in geographic coordinates and grids aren’t provided, the
grids generated by sf::st_make_grid() may not contain all observations. If
TRUE, this function will automatically expand generated grids by a tiny factor to
attempt to capture all observations.
progress Boolean: if data is NULL, should aggregation via exactextractr::exact_extract()
show a progress bar? Separate progress bars will be shown for each time truth
and estimate are aggregated.
Value
A tibble with six columns: .metric, with the name of the metric that the row describes; .estimator,
with the name of the estimator used, .estimate, with the output of the metric function; .grid_args,
with the arguments passed to sf::st_make_grid() via ... (if any), .grid, containing the grids
used to aggregate predictions, as well as the aggregated values of truth and estimate as well as
the count of non-NA values for each, and .notes, which (if data is an sf object) will indicate any
observations which were not used in a given assessment.
Raster inputs
If data is NULL, then truth and estimate should both be SpatRaster objects, as created via
terra::rast(). These rasters will then be aggregated to each grid using exactextractr::exact_extract().
If data is a SpatRaster object, then truth and estimate should be indices to select the appropri-
ate layers of the raster via terra::subset().
Grids are calculated using the bounding box of truth, under the assumption that you may have
extrapolated into regions which do not have matching "true" values. This function does not check
that truth and estimate overlap at all, or that they are at all contained within the grid.
Creating grid blocks
The grid blocks can be controlled by passing arguments to sf::st_make_grid() via .... Some
particularly useful arguments include:
• cellsize: Target cellsize, expressed as the "diameter" (shortest straight-line distance between
opposing sides; two times the apothem) of each block, in map units.
• n: The number of grid blocks in the x and y direction (columns, rows).
• square: A logical value indicating whether to create square (TRUE) or hexagonal (FALSE)
cells.
If both cellsize and n are provided, then the number of blocks requested by n of sizes specified by
cellsize will be returned, likely not lining up with the bounding box of data. If only cellsize is
provided, this function will return as many blocks of size cellsize as fit inside the bounding box
of data. If only n is provided, then cellsize will be automatically adjusted to create the requested
number of cells.
Grids are created by mapping over each argument passed via ... simultaneously, in a similar man-
ner to mapply() or purrr::pmap(). This means that, for example, passing n = list(c(1, 2)) will
create a single 1x2 grid, while passing n = c(1, 2) will create a 1x1 grid and a 2x2 grid. It also
means that arguments will be recycled using R’s standard vector recycling rules, so that passing n =
c(1, 2) and square = FALSE will create two separate grids of hexagons.
This function can be used for geographic or projected coordinate reference systems and expects 2D
data.
References
<NAME>., <NAME>., <NAME>., and <NAME>. (2010). "An effective assessment protocol for
continuous geospatial datasets of forest characteristics using USFS Forest Inventory and Analysis
(FIA) data." Remote Sensing of Environment 114(10), pp 2337-2352, doi: 10.1016/j.rse.2010.05.010
.
Examples
data(ames, package = "modeldata")
ames_sf <- sf::st_as_sf(ames, coords = c("Longitude", "Latitude"), crs = 4326)
ames_model <- lm(Sale_Price ~ Lot_Area, data = ames_sf)
ames_sf$predictions <- predict(ames_model, ames_sf)
ww_multi_scale(
ames_sf,
Sale_Price,
predictions,
n = list(
c(10, 10),
c(1, 1)
),
square = FALSE
)
# or, mostly equivalently
# (there will be a slight difference due to `autoexpand_grid = TRUE`)
grids <- list(
sf::st_make_grid(ames_sf, n = c(10, 10), square = FALSE),
sf::st_make_grid(ames_sf, n = c(1, 1), square = FALSE)
)
ww_multi_scale(ames_sf, Sale_Price, predictions, grids = grids)
ww_willmott_d Willmott’s d and related values
Description
These functions calculate Willmott’s d value, a proposed replacement for R2 which better
differentiates between types and magnitudes of possible covariations. Additional functions calculate
systematic and unsystematic components of MSE and RMSE; the sum of the systematic and
unsystematic components of MSE equals total MSE (though the same is not true for RMSE).
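In other words, writing MSE_s and MSE_u for the systematic and unsystematic components and RMSE_s, RMSE_u for their square roots (per Willmott 1981):
```latex
\mathrm{MSE} = \mathrm{MSE}_s + \mathrm{MSE}_u,
\qquad
\mathrm{RMSE} = \sqrt{\mathrm{MSE}_s + \mathrm{MSE}_u} \neq \mathrm{RMSE}_s + \mathrm{RMSE}_u
```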
Usage
ww_willmott_d(data, ...)
## S3 method for class 'data.frame'
ww_willmott_d(data, truth, estimate, na_rm = TRUE, ...)
ww_willmott_d_vec(truth, estimate, na_rm = TRUE, ...)
ww_willmott_d1(data, ...)
## S3 method for class 'data.frame'
ww_willmott_d1(data, truth, estimate, na_rm = TRUE, ...)
ww_willmott_d1_vec(truth, estimate, na_rm = TRUE, ...)
ww_willmott_dr(data, ...)
## S3 method for class 'data.frame'
ww_willmott_dr(data, truth, estimate, na_rm = TRUE, ...)
ww_willmott_dr_vec(truth, estimate, na_rm = TRUE, ...)
ww_systematic_mse(data, ...)
## S3 method for class 'data.frame'
ww_systematic_mse(data, truth, estimate, na_rm = TRUE, ...)
ww_systematic_mse_vec(truth, estimate, na_rm = TRUE, ...)
ww_unsystematic_mse(data, ...)
## S3 method for class 'data.frame'
ww_unsystematic_mse(data, truth, estimate, na_rm = TRUE, ...)
ww_unsystematic_mse_vec(truth, estimate, na_rm = TRUE, ...)
ww_systematic_rmse(data, ...)
## S3 method for class 'data.frame'
ww_systematic_rmse(data, truth, estimate, na_rm = TRUE, ...)
ww_systematic_rmse_vec(truth, estimate, na_rm = TRUE, ...)
ww_unsystematic_rmse(data, ...)
## S3 method for class 'data.frame'
ww_unsystematic_rmse(data, truth, estimate, na_rm = TRUE, ...)
ww_unsystematic_rmse_vec(truth, estimate, na_rm = TRUE, ...)
Arguments
data A data.frame containing the columns specified by the truth and estimate
arguments.
... Not currently used.
truth The column identifier for the true results (that is numeric). This should be an
unquoted column name although this argument is passed by expression and sup-
ports quasiquotation (you can unquote column names). For _vec() functions, a
numeric vector.
estimate The column identifier for the predicted results (that is also numeric). As with
truth this can be specified different ways but the primary method is to use an
unquoted variable name. For _vec() functions, a numeric vector.
na_rm A logical value indicating whether NA values should be stripped before the
computation proceeds.
Details
Values of d and d1 range from 0 to 1, with 1 indicating perfect agreement. Values of dr range from
-1 to 1, with 1 similarly indicating perfect agreement. Values of RMSE are in the same units as
truth and estimate, while values of MSE are in squared units. truth and estimate must be
the same length. This function is not explicitly spatial and as such can be applied to data with any
number of dimensions and any coordinate reference system.
Value
A tibble with columns .metric, .estimator, and .estimate and 1 row of values. For grouped data
frames, the number of rows returned will be the same as the number of groups. For _vec() func-
tions, a single value (or NA).
References
<NAME>. 1981. "On the Validation of Models". Physical Geography 2(2), pp 184-194, doi:
10.1080/02723646.1981.10642213.
<NAME>. 1982. "Some Comments on the Evaluation of Model Performance". Bulletin of the
American Meteorological Society 63(11), pp 1309-1313, doi: 10.1175/1520-0477(1982)063<1309:SCOTEO>2.0.CO;2.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. 1985. "Statistics for the evaluation of model performance." Journal of Geophysical
Research 90(C5): 8995–9005, doi: 10.1029/jc090ic05p08995
<NAME>., <NAME>., and <NAME>. "A refined index of model performance". Inter-
national Journal of Climatology 32, pp 2088-2094, doi: 10.1002/joc.2419.
See Also
Other agreement metrics: ww_agreement_coefficient()
Other yardstick metrics: ww_agreement_coefficient(), ww_global_geary_c(), ww_global_moran_i(),
ww_local_geary_c(), ww_local_getis_ord_g(), ww_local_moran_i()
Examples
x <- c(6, 8, 9, 10, 11, 14)
y <- c(2, 3, 5, 5, 6, 8)
ww_willmott_d_vec(x, y)
ww_willmott_d1_vec(x, y)
ww_willmott_dr_vec(x, y)
ww_systematic_mse_vec(x, y)
ww_unsystematic_mse_vec(x, y)
ww_systematic_rmse_vec(x, y)
ww_unsystematic_rmse_vec(x, y)
example_df <- data.frame(x = x, y = y)
ww_willmott_d(example_df, x, y)
ww_willmott_d1(example_df, x, y)
ww_willmott_dr(example_df, x, y)
ww_systematic_mse(example_df, x, y)
ww_unsystematic_mse(example_df, x, y)
ww_systematic_rmse(example_df, x, y)
ww_unsystematic_rmse(example_df, x, y) |
go-parser | rust | Rust | Crate go_parser
===
This crate is part of the Goscript project. Please refer to https://goscript.dev for more information.
It’s a port of the parser from the Go standard library https://github.com/golang/go/tree/release-branch.go1.12/src/go/parser
Usage:
---
```
fn parse_file() {
let source = "package main ...";
let mut fs = go_parser::FileSet::new();
let o = &mut go_parser::AstObjects::new();
let el = &mut go_parser::ErrorList::new();
let (p, _) = go_parser::parse_file(o, &mut fs, el, "./main.go", source, false);
print!("{}", p.get_errors());
}
```
Feature
---
* `btree_map`: Make it use BTreeMap instead of HashMap
Modules
---
* ast
* scope
* visitor
Macros
---
* piggy_key_type
Structs
---
* AssignStmtKey
* AstObjects
* EntityKey
* Error
* ErrorList
* FieldKey
* File
* FilePos
* FilePosErrors
* FileSet
* FileSetIter
* FuncDeclKey
* FuncTypeKey
* IdentKey
* LabeledStmtKey
* Parser
* PiggyVec: A vec that you can only insert into, so that the index can be used as a key
* PiggyVecIter
* ScopeKey
* SpecKey
* TokenData
Enums
---
* Token
* TokenType
Traits
---
* PiggyVecKey
Functions
---
* parse_file
Type Aliases
---
* AssignStmts
* Entitys
* Fields
* FuncDecls
* FuncTypes
* Idents
* LabeledStmts
* Map
* MapIter
* Pos
* Scopes
* Specs
Struct go_parser::AssignStmtKey
===
```
#[repr(transparent)]pub struct AssignStmtKey(/* private fields */);
```
Implementations
---
### impl AssignStmtKey
#### pub fn null() -> Self
Trait Implementations
---
### impl Clone for AssignStmtKey
#### fn clone(&self) -> AssignStmtKey
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> AssignStmtKey
Returns the “default value” for a type.
#### fn from(k: usize) -> Self
Converts to this type from the input type.### impl Hash for AssignStmtKey
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn cmp(&self, other: &AssignStmtKey) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &AssignStmtKey) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<AssignStmtKey> for AssignStmtKey
#### fn partial_cmp(&self, other: &AssignStmtKey) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### fn as_usize(&self) -> usize
### impl Copy for AssignStmtKey
### impl Eq for AssignStmtKey
### impl StructuralEq for AssignStmtKey
### impl StructuralPartialEq for AssignStmtKey
Auto Trait Implementations
---
### impl RefUnwindSafe for AssignStmtKey
### impl Send for AssignStmtKey
### impl Sync for AssignStmtKey
### impl Unpin for AssignStmtKey
### impl UnwindSafe for AssignStmtKey
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::AstObjects
===
```
pub struct AstObjects {
pub l_stmts: LabeledStmts,
pub a_stmts: AssignStmts,
pub specs: Specs,
pub fdecls: FuncDecls,
pub ftypes: FuncTypes,
pub idents: Idents,
pub fields: Fields,
pub entities: Entitys,
pub scopes: Scopes,
}
```
Fields
---
`l_stmts: LabeledStmts``a_stmts: AssignStmts``specs: Specs``fdecls: FuncDecls``ftypes: FuncTypes``idents: Idents``fields: Fields``entities: Entitys``scopes: Scopes`Implementations
---
### impl AstObjects
#### pub fn new() -> AstObjects
Auto Trait Implementations
---
### impl RefUnwindSafe for AstObjects
### impl !Send for AstObjects
### impl !Sync for AstObjects
### impl Unpin for AstObjects
### impl UnwindSafe for AstObjects
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::EntityKey
===
```
#[repr(transparent)]pub struct EntityKey(/* private fields */);
```
Implementations
---
### impl EntityKey
#### pub fn null() -> Self
Trait Implementations
---
### impl Clone for EntityKey
#### fn clone(&self) -> EntityKey
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> EntityKey
Returns the “default value” for a type.
#### fn from(k: usize) -> Self
Converts to this type from the input type.### impl Hash for EntityKey
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn cmp(&self, other: &EntityKey) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &EntityKey) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<EntityKey> for EntityKey
#### fn partial_cmp(&self, other: &EntityKey) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### fn as_usize(&self) -> usize
### impl Copy for EntityKey
### impl Eq for EntityKey
### impl StructuralEq for EntityKey
### impl StructuralPartialEq for EntityKey
Auto Trait Implementations
---
### impl RefUnwindSafe for EntityKey
### impl Send for EntityKey
### impl Sync for EntityKey
### impl Unpin for EntityKey
### impl UnwindSafe for EntityKey
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::Error
===
```
pub struct Error {
pub pos: FilePos,
pub msg: String,
pub soft: bool,
pub by_parser: bool,
/* private fields */
}
```
Fields
---
`pos: FilePos``msg: String``soft: bool``by_parser: bool`Trait Implementations
---
### impl Clone for Error
#### fn clone(&self) -> Error
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)The lower-level source of this error, if any. Read more1.0.0 · source#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string() Read more1.0.0 · source#### fn cause(&self) -> Option<&dyn Error👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting#### fn provide<'a>(&'a self, request: &mut Request<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`)Provides type based access to context intended for error reports. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for Error
### impl !Send for Error
### impl !Sync for Error
### impl Unpin for Error
### impl UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::ErrorList
===
```
pub struct ErrorList { /* private fields */ }
```
Implementations
---
### impl ErrorList
#### pub fn new() -> ErrorList
#### pub fn add(&self, p: Option<FilePos>, msg: String, soft: bool, by_parser: bool)
#### pub fn len(&self) -> usize
#### pub fn sort(&self)
#### pub fn borrow(&self) -> Ref<'_, Vec<Error>>
Trait Implementations
---
### impl Clone for ErrorList
#### fn clone(&self) -> ErrorList
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)The lower-level source of this error, if any. Read more1.0.0 · source#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string() Read more1.0.0 · source#### fn cause(&self) -> Option<&dyn Error👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting#### fn provide<'a>(&'a self, request: &mut Request<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`)Provides type based access to context intended for error reports. Read moreAuto Trait Implementations
---
### impl !RefUnwindSafe for ErrorList
### impl !Send for ErrorList
### impl !Sync for ErrorList
### impl Unpin for ErrorList
### impl !UnwindSafe for ErrorList
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::FieldKey
===
```
#[repr(transparent)]pub struct FieldKey(/* private fields */);
```
Implementations
---
### impl FieldKey
#### pub fn null() -> Self
Trait Implementations
---
### impl Clone for FieldKey
#### fn clone(&self) -> FieldKey
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> FieldKey
Returns the “default value” for a type.
#### fn from(k: usize) -> Self
Converts to this type from the input type.### impl Hash for FieldKey
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn pos(&self, objs: &AstObjects) -> Pos
#### fn end(&self, objs: &AstObjects) -> Pos
#### fn id(&self) -> NodeId
### impl Ord for FieldKey
#### fn cmp(&self, other: &FieldKey) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &FieldKey) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<FieldKey> for FieldKey
#### fn partial_cmp(&self, other: &FieldKey) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### fn as_usize(&self) -> usize
### impl Copy for FieldKey
### impl Eq for FieldKey
### impl StructuralEq for FieldKey
### impl StructuralPartialEq for FieldKey
Auto Trait Implementations
---
### impl RefUnwindSafe for FieldKey
### impl Send for FieldKey
### impl Sync for FieldKey
### impl Unpin for FieldKey
### impl UnwindSafe for FieldKey
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::File
===
```
pub struct File { /* private fields */ }
```
Implementations
---
### impl File
#### pub fn new(name: String) -> File
#### pub fn name(&self) -> &str
#### pub fn base(&self) -> usize
#### pub fn size(&self) -> usize
#### pub fn line_count(&self) -> usize
#### pub fn add_line(&mut self, offset: usize)
#### pub fn merge_line(&mut self, line: usize)
#### pub fn set_lines(&mut self, lines: Vec<usize>) -> bool
#### pub fn set_lines_for_content(&mut self, content: &mut Chars<'_>)
#### pub fn line_start(&self, line: usize) -> usize
#### pub fn pos(&self, offset: usize) -> Pos
#### pub fn position(&self, p: Pos) -> FilePos
Trait Implementations
---
### impl Debug for File
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for File
### impl !Send for File
### impl !Sync for File
### impl Unpin for File
### impl UnwindSafe for File
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::FilePos
===
```
pub struct FilePos {
pub filename: Rc<String>,
pub offset: usize,
pub line: usize,
pub column: usize,
}
```
Fields
---
`filename: Rc<String>``offset: usize``line: usize``column: usize`Implementations
---
### impl FilePos
#### pub fn is_valid(&self) -> bool
#### pub fn null() -> FilePos
Trait Implementations
---
### impl Clone for FilePos
#### fn clone(&self) -> FilePos
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for FilePos
### impl !Send for FilePos
### impl !Sync for FilePos
### impl Unpin for FilePos
### impl UnwindSafe for FilePos
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::FilePosErrors
===
```
pub struct FilePosErrors<'a> { /* private fields */ }
```
Implementations
---
### impl<'a> FilePosErrors<'a>
#### pub fn new(file: &'a File, elist: &'a ErrorList) -> FilePosErrors<'a>
#### pub fn add(&self, pos: Pos, msg: String, soft: bool)
#### pub fn add_str(&self, pos: Pos, s: &str, soft: bool)
#### pub fn parser_add(&self, pos: Pos, msg: String)
#### pub fn parser_add_str(&self, pos: Pos, s: &str)
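A hedged sketch of reporting a diagnostic through `FilePosErrors`, using only the constructor and `add_str` shown above; it assumes `file` was previously registered with a `FileSet`, and the offset and message are illustrative:
```
fn report(file: &go_parser::File, errs: &go_parser::ErrorList) {
    // Bind the error list to a specific file so positions resolve against it.
    let fpe = go_parser::FilePosErrors::new(file, errs);
    // Convert a file-local byte offset into a set-wide Pos, then record an error there.
    let pos = file.pos(0);
    fpe.add_str(pos, "illustrative message", false);
}
```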
Trait Implementations
---
### impl<'a> Clone for FilePosErrors<'a#### fn clone(&self) -> FilePosErrors<'aReturns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Formats the value using the given formatter. Read moreAuto Trait Implementations
---
### impl<'a> !RefUnwindSafe for FilePosErrors<'a### impl<'a> !Send for FilePosErrors<'a### impl<'a> !Sync for FilePosErrors<'a### impl<'a> Unpin for FilePosErrors<'a### impl<'a> !UnwindSafe for FilePosErrors<'aBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::FileSet
===
```
pub struct FileSet { /* private fields */ }
```
Implementations
---
### impl FileSet
#### pub fn new() -> FileSet
#### pub fn base(&self) -> usize
#### pub fn iter(&self) -> FileSetIter<'_>
#### pub fn file(&self, p: Pos) -> Option<&File>
#### pub fn position(&self, p: Pos) -> Option<FilePos>
#### pub fn index_file(&mut self, i: usize) -> Option<&mut File>
#### pub fn recent_file(&mut self) -> Option<&mut File>
#### pub fn add_file(&mut self, name: String, base: Option<usize>, size: usize) -> &mut File
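A hedged sketch of the position bookkeeping, using only the `FileSet` and `File` methods documented here; the file name, size, and byte offsets are illustrative values rather than output from a real parse:
```
fn positions() {
    let mut fs = go_parser::FileSet::new();
    // Register a 20-byte file; `None` lets the set choose the next base offset.
    let file = fs.add_file("main.go".to_string(), None, 20);
    // Record that a new line starts at byte offset 10 of this file.
    file.add_line(10);
    // Convert a file-local byte offset into a set-wide Pos...
    let pos = file.pos(12);
    // ...and map it back to file name, line, and column.
    if let Some(fp) = fs.position(pos) {
        println!("{}:{}:{}", fp.filename, fp.line, fp.column);
    }
}
```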
Trait Implementations
---
### impl Debug for FileSet
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read moreAuto Trait Implementations
---
### impl RefUnwindSafe for FileSet
### impl !Send for FileSet
### impl !Sync for FileSet
### impl Unpin for FileSet
### impl UnwindSafe for FileSet
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::FileSetIter
===
```
pub struct FileSetIter<'a> { /* private fields */ }
```
Trait Implementations
---
### impl<'a> Iterator for FileSetIter<'a#### type Item = &'a File
The type of the elements being iterated over.#### fn next(&mut self) -> Option<&'a FileAdvances the iterator and returns the next value.
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (`iter_next_chunk`)Advances the iterator and returns an array containing the next `N` values. Read more1.0.0 · source#### fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source#### fn count(self) -> usizewhere
Self: Sized,
Consumes the iterator, counting the number of iterations and returning it. Read more1.0.0 · source#### fn last(self) -> Option<Self::Item>where
Self: Sized,
Consumes the iterator, returning the last element.
Self: Sized,
Creates an iterator starting at the same point, but stepping by the given amount at each iteration. Read more1.0.0 · source#### fn chain<U>(self, other: U) -> Chain<Self, <U as IntoIterator>::IntoIter>where
Self: Sized,
U: IntoIterator<Item = Self::Item>,
Takes two iterators and creates a new iterator over both in sequence. Read more1.0.0 · source#### fn zip<U>(self, other: U) -> Zip<Self, <U as IntoIterator>::IntoIter>where
Self: Sized,
U: IntoIterator,
‘Zips up’ two iterators into a single iterator of pairs.
Self: Sized,
G: FnMut() -> Self::Item,
🔬This is a nightly-only experimental API. (`iter_intersperse`)Creates a new iterator which places an item generated by `separator`
between adjacent items of the original iterator. Read more1.0.0 · source#### fn map<B, F>(self, f: F) -> Map<Self, F>where
Self: Sized,
F: FnMut(Self::Item) -> B,
Takes a closure and creates an iterator which calls that closure on each element. Read more1.21.0 · source#### fn for_each<F>(self, f: F)where
Self: Sized,
F: FnMut(Self::Item),
Calls a closure on each element of an iterator. Read more1.0.0 · source#### fn filter<P>(self, predicate: P) -> Filter<Self, P>where
Self: Sized,
P: FnMut(&Self::Item) -> bool,
Creates an iterator which uses a closure to determine if an element should be yielded. Read more1.0.0 · source#### fn filter_map<B, F>(self, f: F) -> FilterMap<Self, F>where
Self: Sized,
F: FnMut(Self::Item) -> Option<B>,
Creates an iterator that both filters and maps. Read more1.0.0 · source#### fn enumerate(self) -> Enumerate<Self>where
Self: Sized,
Creates an iterator which gives the current iteration count as well as the next value. Read more1.0.0 · source#### fn peekable(self) -> Peekable<Self>where
Self: Sized,
Creates an iterator which can use the `peek` and `peek_mut` methods to look at the next element of the iterator without consuming it. See their documentation for more information. Read more1.0.0 · source#### fn skip_while<P>(self, predicate: P) -> SkipWhile<Self, P>where
Self: Sized,
P: FnMut(&Self::Item) -> bool,
Creates an iterator that `skip`s elements based on a predicate. Read more1.0.0 · source#### fn take_while<P>(self, predicate: P) -> TakeWhile<Self, P>where
Self: Sized,
P: FnMut(&Self::Item) -> bool,
Creates an iterator that yields elements based on a predicate. Read more1.57.0 · source#### fn map_while<B, P>(self, predicate: P) -> MapWhile<Self, P>where
Self: Sized,
P: FnMut(Self::Item) -> Option<B>,
Creates an iterator that both yields elements based on a predicate and maps. Read more1.0.0 · source#### fn skip(self, n: usize) -> Skip<Self>where
Self: Sized,
Creates an iterator that skips the first `n` elements. Read more1.0.0 · source#### fn take(self, n: usize) -> Take<Self>where
Self: Sized,
Creates an iterator that yields the first `n` elements, or fewer if the underlying iterator ends sooner. Read more1.0.0 · source#### fn scan<St, B, F>(self, initial_state: St, f: F) -> Scan<Self, St, F>where
Self: Sized,
F: FnMut(&mut St, Self::Item) -> Option<B>,
An iterator adapter which, like `fold`, holds internal state, but unlike `fold`, produces a new iterator. Read more1.0.0 · source#### fn flat_map<U, F>(self, f: F) -> FlatMap<Self, U, F>where
Self: Sized,
U: IntoIterator,
F: FnMut(Self::Item) -> U,
Creates an iterator that works like map, but flattens nested structure.
Self: Sized,
F: FnMut(&[Self::Item; N]) -> R,
🔬This is a nightly-only experimental API. (`iter_map_windows`)Calls the given function `f` for each contiguous window of size `N` over
`self` and returns an iterator over the outputs of `f`. Like `slice::windows()`,
the windows during mapping overlap as well. Read more1.0.0 · source#### fn fuse(self) -> Fuse<Self>where
Self: Sized,
Creates an iterator which ends after the first `None`. Read more1.0.0 · source#### fn inspect<F>(self, f: F) -> Inspect<Self, F>where
Self: Sized,
F: FnMut(&Self::Item),
Does something with each element of an iterator, passing the value on. Read more1.0.0 · source#### fn by_ref(&mut self) -> &mut Selfwhere
Self: Sized,
Borrows an iterator, rather than consuming it. Read more1.0.0 · source#### fn collect<B>(self) -> Bwhere
B: FromIterator<Self::Item>,
Self: Sized,
Transforms an iterator into a collection.
E: Extend<Self::Item>,
Self: Sized,
🔬This is a nightly-only experimental API. (`iter_collect_into`)Collects all the items from an iterator into a collection. Read more1.0.0 · source#### fn partition<B, F>(self, f: F) -> (B, B)where
Self: Sized,
B: Default + Extend<Self::Item>,
F: FnMut(&Self::Item) -> bool,
Consumes an iterator, creating two collections from it.
Self: Sized,
P: FnMut(Self::Item) -> bool,
🔬This is a nightly-only experimental API. (`iter_is_partitioned`)Checks if the elements of this iterator are partitioned according to the given predicate,
such that all those that return `true` precede all those that return `false`. Read more1.27.0 · source#### fn try_fold<B, F, R>(&mut self, init: B, f: F) -> Rwhere
Self: Sized,
F: FnMut(B, Self::Item) -> R,
R: Try<Output = B>,
An iterator method that applies a function as long as it returns successfully, producing a single, final value. Read more1.27.0 · source#### fn try_for_each<F, R>(&mut self, f: F) -> Rwhere
Self: Sized,
F: FnMut(Self::Item) -> R,
R: Try<Output = ()>,
An iterator method that applies a fallible function to each item in the iterator, stopping at the first error and returning that error. Read more1.0.0 · source#### fn fold<B, F>(self, init: B, f: F) -> Bwhere
Self: Sized,
F: FnMut(B, Self::Item) -> B,
Folds every element into an accumulator by applying an operation,
returning the final result. Read more1.51.0 · source#### fn reduce<F>(self, f: F) -> Option<Self::Item>where
Self: Sized,
F: FnMut(Self::Item, Self::Item) -> Self::Item,
Reduces the elements to a single one, by repeatedly applying a reducing operation.
&mut self,
f: F
) -> <<R as Try>::Residual as Residual<Option<<R as Try>::Output>>>::TryTypewhere
Self: Sized,
F: FnMut(Self::Item, Self::Item) -> R,
R: Try<Output = Self::Item>,
<R as Try>::Residual: Residual<Option<Self::Item>>,
🔬This is a nightly-only experimental API. (`iterator_try_reduce`)Reduces the elements to a single one by repeatedly applying a reducing operation. If the closure returns a failure, the failure is propagated back to the caller immediately. Read more1.0.0 · source#### fn all<F>(&mut self, f: F) -> boolwhere
Self: Sized,
F: FnMut(Self::Item) -> bool,
Tests if every element of the iterator matches a predicate. Read more1.0.0 · source#### fn any<F>(&mut self, f: F) -> boolwhere
Self: Sized,
F: FnMut(Self::Item) -> bool,
Tests if any element of the iterator matches a predicate. Read more1.0.0 · source#### fn find<P>(&mut self, predicate: P) -> Option<Self::Item>where
Self: Sized,
P: FnMut(&Self::Item) -> bool,
Searches for an element of an iterator that satisfies a predicate. Read more1.30.0 · source#### fn find_map<B, F>(&mut self, f: F) -> Option<B>where
Self: Sized,
F: FnMut(Self::Item) -> Option<B>,
Applies function to the elements of iterator and returns the first non-none result.
&mut self,
f: F
) -> <<R as Try>::Residual as Residual<Option<Self::Item>>>::TryTypewhere
Self: Sized,
F: FnMut(&Self::Item) -> R,
R: Try<Output = bool>,
<R as Try>::Residual: Residual<Option<Self::Item>>,
🔬This is a nightly-only experimental API. (`try_find`)Applies function to the elements of iterator and returns the first true result or the first error. Read more1.0.0 · source#### fn position<P>(&mut self, predicate: P) -> Option<usize>where
Self: Sized,
P: FnMut(Self::Item) -> bool,
Searches for an element in an iterator, returning its index. Read more1.6.0 · source#### fn max_by_key<B, F>(self, f: F) -> Option<Self::Item>where
B: Ord,
Self: Sized,
F: FnMut(&Self::Item) -> B,
Returns the element that gives the maximum value from the specified function. Read more1.15.0 · source#### fn max_by<F>(self, compare: F) -> Option<Self::Item>where
Self: Sized,
F: FnMut(&Self::Item, &Self::Item) -> Ordering,
Returns the element that gives the maximum value with respect to the specified comparison function. Read more1.6.0 · source#### fn min_by_key<B, F>(self, f: F) -> Option<Self::Item>where
B: Ord,
Self: Sized,
F: FnMut(&Self::Item) -> B,
Returns the element that gives the minimum value from the specified function. Read more1.15.0 · source#### fn min_by<F>(self, compare: F) -> Option<Self::Item>where
Self: Sized,
F: FnMut(&Self::Item, &Self::Item) -> Ordering,
Returns the element that gives the minimum value with respect to the specified comparison function. Read more1.0.0 · source#### fn unzip<A, B, FromA, FromB>(self) -> (FromA, FromB)where
FromA: Default + Extend<A>,
FromB: Default + Extend<B>,
Self: Sized + Iterator<Item = (A, B)>,
Converts an iterator of pairs into a pair of containers. Read more1.36.0 · source#### fn copied<'a, T>(self) -> Copied<Self>where
T: 'a + Copy,
Self: Sized + Iterator<Item = &'a T>,
Creates an iterator which copies all of its elements. Read more1.0.0 · source#### fn cloned<'a, T>(self) -> Cloned<Self>where
T: 'a + Clone,
Self: Sized + Iterator<Item = &'a T>,
Creates an iterator which `clone`s all of its elements.
Self: Sized,
🔬This is a nightly-only experimental API. (`iter_array_chunks`)Returns an iterator over `N` elements of the iterator at a time. Read more1.11.0 · source#### fn sum<S>(self) -> Swhere
Self: Sized,
S: Sum<Self::Item>,
Sums the elements of an iterator. Read more1.11.0 · source#### fn product<P>(self) -> Pwhere
Self: Sized,
P: Product<Self::Item>,
Iterates over the entire iterator, multiplying all the elements
Self: Sized,
I: IntoIterator,
F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Ordering,
🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn partial_cmp<I>(self, other: I) -> Option<Ordering>where
I: IntoIterator,
Self::Item: PartialOrd<<I as IntoIterator>::Item>,
Self: Sized,
Lexicographically compares the `PartialOrd` elements of this `Iterator` with those of another. The comparison works like short-circuit evaluation, returning a result without comparing the remaining elements.
As soon as an order can be determined, the evaluation stops and a result is returned.
Self: Sized,
I: IntoIterator,
F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Option<Ordering>,
🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn eq<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialEq<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are equal to those of another.
Self: Sized,
I: IntoIterator,
F: FnMut(Self::Item, <I as IntoIterator>::Item) -> bool,
🔬This is a nightly-only experimental API. (`iter_order_by`)Determines if the elements of this `Iterator` are equal to those of another with respect to the specified equality function. Read more1.5.0 · source#### fn ne<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialEq<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are not equal to those of another. Read more1.5.0 · source#### fn lt<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialOrd<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are lexicographically less than those of another. Read more1.5.0 · source#### fn le<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialOrd<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are lexicographically less or equal to those of another. Read more1.5.0 · source#### fn gt<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialOrd<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are lexicographically greater than those of another. Read more1.5.0 · source#### fn ge<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialOrd<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are lexicographically greater than or equal to those of another.
Self: Sized,
F: FnMut(&Self::Item, &Self::Item) -> Option<Ordering>,
🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given comparator function.
Self: Sized,
F: FnMut(Self::Item) -> K,
K: PartialOrd<K>,
🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given key extraction function. Read moreAuto Trait Implementations
---
### impl<'a> RefUnwindSafe for FileSetIter<'a### impl<'a> !Send for FileSetIter<'a### impl<'a> !Sync for FileSetIter<'a### impl<'a> Unpin for FileSetIter<'a### impl<'a> UnwindSafe for FileSetIter<'aBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<I> IntoIterator for Iwhere
I: Iterator,
#### type Item = <I as Iterator>::Item
The type of the elements being iterated over.#### type IntoIter = I
Which kind of iterator are we turning this into?const: unstable · source#### fn into_iter(self) -> I
Creates an iterator from a value.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::FuncDeclKey
===
```
#[repr(transparent)]pub struct FuncDeclKey(/* private fields */);
```
Implementations
---
### impl FuncDeclKey
#### pub fn null() -> Self
Trait Implementations
---
### impl Clone for FuncDeclKey
#### fn clone(&self) -> FuncDeclKey
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> FuncDeclKey
Returns the “default value” for a type.
#### fn from(k: usize) -> Self
Converts to this type from the input type.### impl Hash for FuncDeclKey
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn cmp(&self, other: &FuncDeclKey) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &FuncDeclKey) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<FuncDeclKey> for FuncDeclKey
#### fn partial_cmp(&self, other: &FuncDeclKey) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### fn as_usize(&self) -> usize
### impl Copy for FuncDeclKey
### impl Eq for FuncDeclKey
### impl StructuralEq for FuncDeclKey
### impl StructuralPartialEq for FuncDeclKey
Auto Trait Implementations
---
### impl RefUnwindSafe for FuncDeclKey
### impl Send for FuncDeclKey
### impl Sync for FuncDeclKey
### impl Unpin for FuncDeclKey
### impl UnwindSafe for FuncDeclKey
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::FuncTypeKey
===
```
#[repr(transparent)]pub struct FuncTypeKey(/* private fields */);
```
Implementations
---
### impl FuncTypeKey
#### pub fn null() -> Self
Trait Implementations
---
### impl Clone for FuncTypeKey
#### fn clone(&self) -> FuncTypeKey
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> FuncTypeKey
Returns the “default value” for a type.
#### fn from(k: usize) -> Self
Converts to this type from the input type.### impl Hash for FuncTypeKey
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn pos(&self, objs: &AstObjects) -> Pos
#### fn end(&self, objs: &AstObjects) -> Pos
#### fn id(&self) -> NodeId
### impl Ord for FuncTypeKey
#### fn cmp(&self, other: &FuncTypeKey) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &FuncTypeKey) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<FuncTypeKey> for FuncTypeKey
#### fn partial_cmp(&self, other: &FuncTypeKey) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### fn as_usize(&self) -> usize
### impl Copy for FuncTypeKey
### impl Eq for FuncTypeKey
### impl StructuralEq for FuncTypeKey
### impl StructuralPartialEq for FuncTypeKey
Auto Trait Implementations
---
### impl RefUnwindSafe for FuncTypeKey
### impl Send for FuncTypeKey
### impl Sync for FuncTypeKey
### impl Unpin for FuncTypeKey
### impl UnwindSafe for FuncTypeKey
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct go_parser::IdentKey
===
```
#[repr(transparent)]pub struct IdentKey(/* private fields */);
```
Implementations
---
### impl IdentKey
#### pub fn null() -> Self
Trait Implementations
---
### impl Clone for IdentKey
#### fn clone(&self) -> IdentKey
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> IdentKey
Returns the “default value” for a type.
#### fn from(k: usize) -> Self
Converts to this type from the input type.### impl Hash for IdentKey
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn cmp(&self, other: &IdentKey) -> Ordering
This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
#### fn eq(&self, other: &IdentKey) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<IdentKey> for IdentKey
#### fn partial_cmp(&self, other: &IdentKey) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
operator.
#### fn as_usize(&self) -> usize
### impl Copy for IdentKey
### impl Eq for IdentKey
### impl StructuralEq for IdentKey
### impl StructuralPartialEq for IdentKey
Auto Trait Implementations
---
### impl RefUnwindSafe for IdentKey
### impl Send for IdentKey
### impl Sync for IdentKey
### impl Unpin for IdentKey
### impl UnwindSafe for IdentKey
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct go_parser::LabeledStmtKey
===
```
#[repr(transparent)]pub struct LabeledStmtKey(/* private fields */);
```
Implementations
---
### impl LabeledStmtKey
#### pub fn null() -> Self
Trait Implementations
---
### impl Clone for LabeledStmtKey
#### fn clone(&self) -> LabeledStmtKey
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for LabeledStmtKey
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for LabeledStmtKey
#### fn default() -> LabeledStmtKey
Returns the “default value” for a type.
### impl From<usize> for LabeledStmtKey
#### fn from(k: usize) -> Self
Converts to this type from the input type.
### impl Hash for LabeledStmtKey
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
### impl Ord for LabeledStmtKey
#### fn cmp(&self, other: &LabeledStmtKey) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
### impl PartialEq<LabeledStmtKey> for LabeledStmtKey
#### fn eq(&self, other: &LabeledStmtKey) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<LabeledStmtKey> for LabeledStmtKey
#### fn partial_cmp(&self, other: &LabeledStmtKey) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl PiggyVecKey for LabeledStmtKey
#### fn as_usize(&self) -> usize
### impl Copy for LabeledStmtKey
### impl Eq for LabeledStmtKey
### impl StructuralEq for LabeledStmtKey
### impl StructuralPartialEq for LabeledStmtKey
Auto Trait Implementations
---
### impl RefUnwindSafe for LabeledStmtKey
### impl Send for LabeledStmtKey
### impl Sync for LabeledStmtKey
### impl Unpin for LabeledStmtKey
### impl UnwindSafe for LabeledStmtKey
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct go_parser::Parser
===
```
pub struct Parser<'a> { /* private fields */ }
```
Implementations
---
### impl<'a> Parser<'a>
#### pub fn new(
objs: &'a mut AstObjects,
file: &'a mut File,
el: &'a ErrorList,
src: &'a str,
trace: bool
) -> Parser<'a>
#### pub fn get_errors(&self) -> &ErrorList
#### pub fn deref(x: &Expr) -> &Expr
#### pub fn unparen(x: &Expr) -> &Expr
#### pub fn parse_file(&mut self) -> Option<File>
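A minimal sketch of driving an already-constructed `Parser`; only the methods documented above are used, and how the `AstObjects`, source `File`, and `ErrorList` passed to `new` are built is left out because this reference does not show it:

```
use go_parser::Parser;

// Hedged sketch: run a parser that the caller has already assembled with
// Parser::new (see the signature above) and report whether an AST was built.
fn run(mut parser: Parser<'_>) -> bool {
    let ast = parser.parse_file();   // documented: Option<File> (the parsed AST)
    let _errs = parser.get_errors(); // documented: &ErrorList of collected errors
    ast.is_some()
}
```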
Auto Trait Implementations
---
### impl<'a> !RefUnwindSafe for Parser<'a>
### impl<'a> !Send for Parser<'a>
### impl<'a> !Sync for Parser<'a>
### impl<'a> Unpin for Parser<'a>
### impl<'a> !UnwindSafe for Parser<'a>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct go_parser::PiggyVec
===
```
pub struct PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,{ /* private fields */ }
```
A vec that you can only insert into, so that the index can be used as a key
Implementations
---
### impl<K, V> PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn insert(&mut self, v: V) -> K
#### pub fn vec(&self) -> &Vec<V>
#### pub fn iter<'a>(&'a self) -> PiggyVecIter<'a, K, V>
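A short sketch of the insert-then-index pattern, assuming `IdentKey` (documented above) satisfies the `PiggyVecKey + From<usize>` bound; the `String` value type is purely for illustration:

```
use go_parser::{IdentKey, PiggyVec};

fn main() {
    // Insert-only vector: insert() returns the key of the new slot, and the
    // Index impl documented below looks the value up again by that key.
    let mut names: PiggyVec<IdentKey, String> = PiggyVec::with_capacity(4);
    let key = names.insert("alpha".to_string());
    assert_eq!(names[key], "alpha");
    // iter() yields references to the stored values.
    for name in names.iter() {
        println!("{}", name);
    }
}
```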
Trait Implementations
---
### impl<K, V: Debug> Debug for PiggyVec<K, V>where
K: PiggyVecKey + From<usize> + Debug,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<K, V> From<Vec<V>> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn from(vec: Vec<V>) -> Self
Converts to this type from the input type.
### impl<K, V> Index<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### type Output = V
The returned type after indexing.
#### fn index(&self, index: K) -> &Self::Output
Performs the indexing (`container[index]`) operation.
### impl<K, V> IndexMut<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn index_mut(&mut self, index: K) -> &mut Self::Output
Performs the mutable indexing (`container[index]`) operation.
Auto Trait Implementations
---
### impl<K, V> RefUnwindSafe for PiggyVec<K, V>where
K: RefUnwindSafe,
V: RefUnwindSafe,
### impl<K, V> Send for PiggyVec<K, V>where
K: Send,
V: Send,
### impl<K, V> Sync for PiggyVec<K, V>where
K: Sync,
V: Sync,
### impl<K, V> Unpin for PiggyVec<K, V>where
K: Unpin,
V: Unpin,
### impl<K, V> UnwindSafe for PiggyVec<K, V>where
K: UnwindSafe,
V: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct go_parser::PiggyVecIter
===
```
pub struct PiggyVecIter<'a, K, V>where
K: PiggyVecKey + From<usize>,{ /* private fields */ }
```
Trait Implementations
---
### impl<'a, K, V> Iterator for PiggyVecIter<'a, K, V>where
K: PiggyVecKey + From<usize>,
#### type Item = &'a V
The type of the elements being iterated over.
#### fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value.
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (`iter_next_chunk`)Advances the iterator and returns an array containing the next `N` values. Read more1.0.0 · source#### fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source#### fn count(self) -> usizewhere
Self: Sized,
Consumes the iterator, counting the number of iterations and returning it. Read more1.0.0 · source#### fn last(self) -> Option<Self::Item>where
Self: Sized,
Consumes the iterator, returning the last element.
Self: Sized,
Creates an iterator starting at the same point, but stepping by the given amount at each iteration. Read more1.0.0 · source#### fn chain<U>(self, other: U) -> Chain<Self, <U as IntoIterator>::IntoIter>where
Self: Sized,
U: IntoIterator<Item = Self::Item>,
Takes two iterators and creates a new iterator over both in sequence. Read more1.0.0 · source#### fn zip<U>(self, other: U) -> Zip<Self, <U as IntoIterator>::IntoIter>where
Self: Sized,
U: IntoIterator,
‘Zips up’ two iterators into a single iterator of pairs.
Self: Sized,
G: FnMut() -> Self::Item,
🔬This is a nightly-only experimental API. (`iter_intersperse`)Creates a new iterator which places an item generated by `separator`
between adjacent items of the original iterator. Read more1.0.0 · source#### fn map<B, F>(self, f: F) -> Map<Self, F>where
Self: Sized,
F: FnMut(Self::Item) -> B,
Takes a closure and creates an iterator which calls that closure on each element. Read more1.21.0 · source#### fn for_each<F>(self, f: F)where
Self: Sized,
F: FnMut(Self::Item),
Calls a closure on each element of an iterator. Read more1.0.0 · source#### fn filter<P>(self, predicate: P) -> Filter<Self, P>where
Self: Sized,
P: FnMut(&Self::Item) -> bool,
Creates an iterator which uses a closure to determine if an element should be yielded. Read more1.0.0 · source#### fn filter_map<B, F>(self, f: F) -> FilterMap<Self, F>where
Self: Sized,
F: FnMut(Self::Item) -> Option<B>,
Creates an iterator that both filters and maps. Read more1.0.0 · source#### fn enumerate(self) -> Enumerate<Self>where
Self: Sized,
Creates an iterator which gives the current iteration count as well as the next value. Read more1.0.0 · source#### fn peekable(self) -> Peekable<Self>where
Self: Sized,
Creates an iterator which can use the `peek` and `peek_mut` methods to look at the next element of the iterator without consuming it. See their documentation for more information. Read more1.0.0 · source#### fn skip_while<P>(self, predicate: P) -> SkipWhile<Self, P>where
Self: Sized,
P: FnMut(&Self::Item) -> bool,
Creates an iterator that `skip`s elements based on a predicate. Read more1.0.0 · source#### fn take_while<P>(self, predicate: P) -> TakeWhile<Self, P>where
Self: Sized,
P: FnMut(&Self::Item) -> bool,
Creates an iterator that yields elements based on a predicate. Read more1.57.0 · source#### fn map_while<B, P>(self, predicate: P) -> MapWhile<Self, P>where
Self: Sized,
P: FnMut(Self::Item) -> Option<B>,
Creates an iterator that both yields elements based on a predicate and maps. Read more1.0.0 · source#### fn skip(self, n: usize) -> Skip<Self>where
Self: Sized,
Creates an iterator that skips the first `n` elements. Read more1.0.0 · source#### fn take(self, n: usize) -> Take<Self>where
Self: Sized,
Creates an iterator that yields the first `n` elements, or fewer if the underlying iterator ends sooner. Read more1.0.0 · source#### fn scan<St, B, F>(self, initial_state: St, f: F) -> Scan<Self, St, F>where
Self: Sized,
F: FnMut(&mut St, Self::Item) -> Option<B>,
An iterator adapter which, like `fold`, holds internal state, but unlike `fold`, produces a new iterator. Read more1.0.0 · source#### fn flat_map<U, F>(self, f: F) -> FlatMap<Self, U, F>where
Self: Sized,
U: IntoIterator,
F: FnMut(Self::Item) -> U,
Creates an iterator that works like map, but flattens nested structure.
Self: Sized,
F: FnMut(&[Self::Item; N]) -> R,
🔬This is a nightly-only experimental API. (`iter_map_windows`)Calls the given function `f` for each contiguous window of size `N` over
`self` and returns an iterator over the outputs of `f`. Like `slice::windows()`,
the windows during mapping overlap as well. Read more1.0.0 · source#### fn fuse(self) -> Fuse<Self>where
Self: Sized,
Creates an iterator which ends after the first `None`. Read more1.0.0 · source#### fn inspect<F>(self, f: F) -> Inspect<Self, F>where
Self: Sized,
F: FnMut(&Self::Item),
Does something with each element of an iterator, passing the value on. Read more1.0.0 · source#### fn by_ref(&mut self) -> &mut Selfwhere
Self: Sized,
Borrows an iterator, rather than consuming it. Read more1.0.0 · source#### fn collect<B>(self) -> Bwhere
B: FromIterator<Self::Item>,
Self: Sized,
Transforms an iterator into a collection.
E: Extend<Self::Item>,
Self: Sized,
🔬This is a nightly-only experimental API. (`iter_collect_into`)Collects all the items from an iterator into a collection. Read more1.0.0 · source#### fn partition<B, F>(self, f: F) -> (B, B)where
Self: Sized,
B: Default + Extend<Self::Item>,
F: FnMut(&Self::Item) -> bool,
Consumes an iterator, creating two collections from it.
Self: Sized,
P: FnMut(Self::Item) -> bool,
🔬This is a nightly-only experimental API. (`iter_is_partitioned`)Checks if the elements of this iterator are partitioned according to the given predicate,
such that all those that return `true` precede all those that return `false`. Read more1.27.0 · source#### fn try_fold<B, F, R>(&mut self, init: B, f: F) -> Rwhere
Self: Sized,
F: FnMut(B, Self::Item) -> R,
R: Try<Output = B>,
An iterator method that applies a function as long as it returns successfully, producing a single, final value. Read more1.27.0 · source#### fn try_for_each<F, R>(&mut self, f: F) -> Rwhere
Self: Sized,
F: FnMut(Self::Item) -> R,
R: Try<Output = ()>,
An iterator method that applies a fallible function to each item in the iterator, stopping at the first error and returning that error. Read more1.0.0 · source#### fn fold<B, F>(self, init: B, f: F) -> Bwhere
Self: Sized,
F: FnMut(B, Self::Item) -> B,
Folds every element into an accumulator by applying an operation,
returning the final result. Read more1.51.0 · source#### fn reduce<F>(self, f: F) -> Option<Self::Item>where
Self: Sized,
F: FnMut(Self::Item, Self::Item) -> Self::Item,
Reduces the elements to a single one, by repeatedly applying a reducing operation.
&mut self,
f: F
) -> <<R as Try>::Residual as Residual<Option<<R as Try>::Output>>>::TryTypewhere
Self: Sized,
F: FnMut(Self::Item, Self::Item) -> R,
R: Try<Output = Self::Item>,
<R as Try>::Residual: Residual<Option<Self::Item>>,
🔬This is a nightly-only experimental API. (`iterator_try_reduce`)Reduces the elements to a single one by repeatedly applying a reducing operation. If the closure returns a failure, the failure is propagated back to the caller immediately. Read more1.0.0 · source#### fn all<F>(&mut self, f: F) -> boolwhere
Self: Sized,
F: FnMut(Self::Item) -> bool,
Tests if every element of the iterator matches a predicate. Read more1.0.0 · source#### fn any<F>(&mut self, f: F) -> boolwhere
Self: Sized,
F: FnMut(Self::Item) -> bool,
Tests if any element of the iterator matches a predicate. Read more1.0.0 · source#### fn find<P>(&mut self, predicate: P) -> Option<Self::Item>where
Self: Sized,
P: FnMut(&Self::Item) -> bool,
Searches for an element of an iterator that satisfies a predicate. Read more1.30.0 · source#### fn find_map<B, F>(&mut self, f: F) -> Option<B>where
Self: Sized,
F: FnMut(Self::Item) -> Option<B>,
Applies function to the elements of iterator and returns the first non-none result.
&mut self,
f: F
) -> <<R as Try>::Residual as Residual<Option<Self::Item>>>::TryTypewhere
Self: Sized,
F: FnMut(&Self::Item) -> R,
R: Try<Output = bool>,
<R as Try>::Residual: Residual<Option<Self::Item>>,
🔬This is a nightly-only experimental API. (`try_find`)Applies function to the elements of iterator and returns the first true result or the first error. Read more1.0.0 · source#### fn position<P>(&mut self, predicate: P) -> Option<usize>where
Self: Sized,
P: FnMut(Self::Item) -> bool,
Searches for an element in an iterator, returning its index. Read more1.6.0 · source#### fn max_by_key<B, F>(self, f: F) -> Option<Self::Item>where
B: Ord,
Self: Sized,
F: FnMut(&Self::Item) -> B,
Returns the element that gives the maximum value from the specified function. Read more1.15.0 · source#### fn max_by<F>(self, compare: F) -> Option<Self::Item>where
Self: Sized,
F: FnMut(&Self::Item, &Self::Item) -> Ordering,
Returns the element that gives the maximum value with respect to the specified comparison function. Read more1.6.0 · source#### fn min_by_key<B, F>(self, f: F) -> Option<Self::Item>where
B: Ord,
Self: Sized,
F: FnMut(&Self::Item) -> B,
Returns the element that gives the minimum value from the specified function. Read more1.15.0 · source#### fn min_by<F>(self, compare: F) -> Option<Self::Item>where
Self: Sized,
F: FnMut(&Self::Item, &Self::Item) -> Ordering,
Returns the element that gives the minimum value with respect to the specified comparison function. Read more1.0.0 · source#### fn unzip<A, B, FromA, FromB>(self) -> (FromA, FromB)where
FromA: Default + Extend<A>,
FromB: Default + Extend<B>,
Self: Sized + Iterator<Item = (A, B)>,
Converts an iterator of pairs into a pair of containers. Read more1.36.0 · source#### fn copied<'a, T>(self) -> Copied<Self>where
T: 'a + Copy,
Self: Sized + Iterator<Item = &'a T>,
Creates an iterator which copies all of its elements. Read more1.0.0 · source#### fn cloned<'a, T>(self) -> Cloned<Self>where
T: 'a + Clone,
Self: Sized + Iterator<Item = &'a T>,
Creates an iterator which `clone`s all of its elements.
Self: Sized,
🔬This is a nightly-only experimental API. (`iter_array_chunks`)Returns an iterator over `N` elements of the iterator at a time. Read more1.11.0 · source#### fn sum<S>(self) -> Swhere
Self: Sized,
S: Sum<Self::Item>,
Sums the elements of an iterator. Read more1.11.0 · source#### fn product<P>(self) -> Pwhere
Self: Sized,
P: Product<Self::Item>,
Iterates over the entire iterator, multiplying all the elements
Self: Sized,
I: IntoIterator,
F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Ordering,
🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn partial_cmp<I>(self, other: I) -> Option<Ordering>where
I: IntoIterator,
Self::Item: PartialOrd<<I as IntoIterator>::Item>,
Self: Sized,
Lexicographically compares the `PartialOrd` elements of this `Iterator` with those of another. The comparison works like short-circuit evaluation, returning a result without comparing the remaining elements.
As soon as an order can be determined, the evaluation stops and a result is returned.
Self: Sized,
I: IntoIterator,
F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Option<Ordering>,
🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn eq<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialEq<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are equal to those of another.
Self: Sized,
I: IntoIterator,
F: FnMut(Self::Item, <I as IntoIterator>::Item) -> bool,
🔬This is a nightly-only experimental API. (`iter_order_by`)Determines if the elements of this `Iterator` are equal to those of another with respect to the specified equality function. Read more1.5.0 · source#### fn ne<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialEq<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are not equal to those of another. Read more1.5.0 · source#### fn lt<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialOrd<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are lexicographically less than those of another. Read more1.5.0 · source#### fn le<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialOrd<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are lexicographically less or equal to those of another. Read more1.5.0 · source#### fn gt<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialOrd<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are lexicographically greater than those of another. Read more1.5.0 · source#### fn ge<I>(self, other: I) -> boolwhere
I: IntoIterator,
Self::Item: PartialOrd<<I as IntoIterator>::Item>,
Self: Sized,
Determines if the elements of this `Iterator` are lexicographically greater than or equal to those of another.
Self: Sized,
F: FnMut(&Self::Item, &Self::Item) -> Option<Ordering>,
🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given comparator function.
Self: Sized,
F: FnMut(Self::Item) -> K,
K: PartialOrd<K>,
🔬This is a nightly-only experimental API. (`is_sorted`) Checks if the elements of this iterator are sorted using the given key extraction function.
Auto Trait Implementations
---
### impl<'a, K, V> RefUnwindSafe for PiggyVecIter<'a, K, V>where
K: RefUnwindSafe,
V: RefUnwindSafe,
### impl<'a, K, V> Send for PiggyVecIter<'a, K, V>where
K: Send,
V: Sync,
### impl<'a, K, V> Sync for PiggyVecIter<'a, K, V>where
K: Sync,
V: Sync,
### impl<'a, K, V> Unpin for PiggyVecIter<'a, K, V>where
K: Unpin,
### impl<'a, K, V> UnwindSafe for PiggyVecIter<'a, K, V>where
K: UnwindSafe,
V: RefUnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<I> IntoIterator for Iwhere
I: Iterator,
#### type Item = <I as Iterator>::Item
The type of the elements being iterated over.
#### type IntoIter = I
Which kind of iterator are we turning this into?
#### fn into_iter(self) -> I
Creates an iterator from a value.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct go_parser::ScopeKey
===
```
#[repr(transparent)]pub struct ScopeKey(/* private fields */);
```
Implementations
---
### impl ScopeKey
#### pub fn null() -> Self
Trait Implementations
---
### impl Clone for ScopeKey
#### fn clone(&self) -> ScopeKey
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ScopeKey
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ScopeKey
#### fn default() -> ScopeKey
Returns the “default value” for a type.
### impl From<usize> for ScopeKey
#### fn from(k: usize) -> Self
Converts to this type from the input type.
### impl Hash for ScopeKey
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
### impl Ord for ScopeKey
#### fn cmp(&self, other: &ScopeKey) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
### impl PartialEq<ScopeKey> for ScopeKey
#### fn eq(&self, other: &ScopeKey) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<ScopeKey> for ScopeKey
#### fn partial_cmp(&self, other: &ScopeKey) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl PiggyVecKey for ScopeKey
#### fn as_usize(&self) -> usize
### impl Copy for ScopeKey
### impl Eq for ScopeKey
### impl StructuralEq for ScopeKey
### impl StructuralPartialEq for ScopeKey
Auto Trait Implementations
---
### impl RefUnwindSafe for ScopeKey
### impl Send for ScopeKey
### impl Sync for ScopeKey
### impl Unpin for ScopeKey
### impl UnwindSafe for ScopeKey
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct go_parser::SpecKey
===
```
#[repr(transparent)]pub struct SpecKey(/* private fields */);
```
Implementations
---
### impl SpecKey
#### pub fn null() -> Self
Trait Implementations
---
### impl Clone for SpecKey
#### fn clone(&self) -> SpecKey
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for SpecKey
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for SpecKey
#### fn default() -> SpecKey
Returns the “default value” for a type.
### impl From<usize> for SpecKey
#### fn from(k: usize) -> Self
Converts to this type from the input type.
### impl Hash for SpecKey
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
### impl Ord for SpecKey
#### fn cmp(&self, other: &SpecKey) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Selfwhere
Self: Sized,
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Selfwhere
Self: Sized + PartialOrd<Self>,
Restrict a value to a certain interval.
### impl PartialEq<SpecKey> for SpecKey
#### fn eq(&self, other: &SpecKey) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<SpecKey> for SpecKey
#### fn partial_cmp(&self, other: &SpecKey) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl PiggyVecKey for SpecKey
#### fn as_usize(&self) -> usize
### impl Copy for SpecKey
### impl Eq for SpecKey
### impl StructuralEq for SpecKey
### impl StructuralPartialEq for SpecKey
Auto Trait Implementations
---
### impl RefUnwindSafe for SpecKey
### impl Send for SpecKey
### impl Sync for SpecKey
### impl Unpin for SpecKey
### impl UnwindSafe for SpecKey
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct go_parser::TokenData
===
```
pub struct TokenData(/* private fields */);
```
Implementations
---
### impl TokenData
#### pub fn as_bool(&self) -> &bool
#### pub fn as_str(&self) -> &String
#### pub fn as_str_mut(&mut self) -> &mut String
#### pub fn as_str_str(&self) -> (&String, &String)
#### pub fn as_str_char(&self) -> (&String, &char)
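A small sketch of constructing and reading `TokenData` through the conversions listed below; that `From<String>` pairs with `as_str` (and `From<bool>` with `as_bool`) is implied by the API but not stated in this reference:

```
use go_parser::TokenData;

fn main() {
    // Build payloads via the documented From impls and read them back with
    // the as_* accessors above; construction and access are kept paired
    // because mismatched access is assumed to be an error.
    let lit = TokenData::from(String::from("3.14"));
    assert_eq!(lit.as_str(), "3.14");

    let flag = TokenData::from(true);
    assert_eq!(*flag.as_bool(), true);
}
```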
Trait Implementations
---
### impl AsMut<String> for TokenData
#### fn as_mut(&mut self) -> &mut String
Converts this type into a mutable reference of the (usually inferred) input type.
### impl AsRef<String> for TokenData
#### fn as_ref(&self) -> &String
Converts this type into a shared reference of the (usually inferred) input type.
### impl AsRef<bool> for TokenData
#### fn as_ref(&self) -> &bool
Converts this type into a shared reference of the (usually inferred) input type.
### impl Clone for TokenData
#### fn clone(&self) -> TokenData
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for TokenData
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl From<(String, String)> for TokenData
#### fn from(ss: (String, String)) -> Self
Converts to this type from the input type.
### impl From<(String, char)> for TokenData
#### fn from(ss: (String, char)) -> Self
Converts to this type from the input type.
### impl From<String> for TokenData
#### fn from(s: String) -> Self
Converts to this type from the input type.
### impl From<bool> for TokenData
#### fn from(b: bool) -> Self
Converts to this type from the input type.
### impl Hash for TokenData
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
### impl PartialEq<TokenData> for TokenData
#### fn eq(&self, other: &TokenData) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Eq for TokenData
### impl StructuralEq for TokenData
### impl StructuralPartialEq for TokenData
Auto Trait Implementations
---
### impl RefUnwindSafe for TokenData
### impl Send for TokenData
### impl Sync for TokenData
### impl Unpin for TokenData
### impl UnwindSafe for TokenData
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum go_parser::Token
===
```
pub enum Token {
NONE,
ILLEGAL(TokenData),
EOF,
COMMENT(TokenData),
IDENT(TokenData),
INT(TokenData),
FLOAT(TokenData),
IMAG(TokenData),
CHAR(TokenData),
STRING(TokenData),
ADD,
SUB,
MUL,
QUO,
REM,
AND,
OR,
XOR,
SHL,
SHR,
AND_NOT,
ADD_ASSIGN,
SUB_ASSIGN,
MUL_ASSIGN,
QUO_ASSIGN,
REM_ASSIGN,
AND_ASSIGN,
OR_ASSIGN,
XOR_ASSIGN,
SHL_ASSIGN,
SHR_ASSIGN,
AND_NOT_ASSIGN,
LAND,
LOR,
ARROW,
INC,
DEC,
EQL,
LSS,
GTR,
ASSIGN,
NOT,
NEQ,
LEQ,
GEQ,
DEFINE,
ELLIPSIS,
LPAREN,
LBRACK,
LBRACE,
COMMA,
PERIOD,
RPAREN,
RBRACK,
RBRACE,
SEMICOLON(TokenData),
COLON,
BREAK,
CASE,
CHAN,
CONST,
CONTINUE,
DEFAULT,
DEFER,
ELSE,
FALLTHROUGH,
FOR,
FUNC,
GO,
GOTO,
IF,
IMPORT,
INTERFACE,
MAP,
PACKAGE,
RANGE,
RETURN,
SELECT,
STRUCT,
SWITCH,
TYPE,
VAR,
}
```
Variants
---
### NONE
### ILLEGAL(TokenData)
### EOF
### COMMENT(TokenData)
### IDENT(TokenData)
### INT(TokenData)
### FLOAT(TokenData)
### IMAG(TokenData)
### CHAR(TokenData)
### STRING(TokenData)
### ADD
### SUB
### MUL
### QUO
### REM
### AND
### OR
### XOR
### SHL
### SHR
### AND_NOT
### ADD_ASSIGN
### SUB_ASSIGN
### MUL_ASSIGN
### QUO_ASSIGN
### REM_ASSIGN
### AND_ASSIGN
### OR_ASSIGN
### XOR_ASSIGN
### SHL_ASSIGN
### SHR_ASSIGN
### AND_NOT_ASSIGN
### LAND
### LOR
### ARROW
### INC
### DEC
### EQL
### LSS
### GTR
### ASSIGN
### NOT
### NEQ
### LEQ
### GEQ
### DEFINE
### ELLIPSIS
### LPAREN
### LBRACK
### LBRACE
### COMMA
### PERIOD
### RPAREN
### RBRACK
### RBRACE
### SEMICOLON(TokenData)
### COLON
### BREAK
### CASE
### CHAN
### CONST
### CONTINUE
### DEFAULT
### DEFER
### ELSE
### FALLTHROUGH
### FOR
### FUNC
### GO
### GOTO
### IF
### IMPORT
### INTERFACE
### MAP
### PACKAGE
### RANGE
### RETURN
### SELECT
### STRUCT
### SWITCH
### TYPE
### VAR
Implementations
---
### impl Token
#### pub fn token_property(&self) -> (TokenType, &str)
#### pub fn ident_token(ident: String) -> Token
#### pub fn int1() -> Token
#### pub fn precedence(&self) -> usize
#### pub fn text(&self) -> &str
#### pub fn is_literal(&self) -> bool
#### pub fn is_operator(&self) -> bool
#### pub fn is_keyword(&self) -> bool
#### pub fn get_literal(&self) -> &str
#### pub fn is_stmt_start(&self) -> bool
#### pub fn is_decl_start(&self) -> bool
#### pub fn is_expr_end(&self) -> bool
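A sketch of the classification helpers above; the identifier-as-literal classification and the relative precedence of `*` versus `+` mirror Go's `go/token` conventions and are assumptions here rather than statements from this reference:

```
use go_parser::{Token, TokenType};

fn main() {
    // An identifier token built from its text.
    let ident = Token::ident_token(String::from("foo"));
    assert!(ident.is_literal());
    let (kind, _text) = ident.token_property();
    assert!(matches!(kind, TokenType::Literal));

    // Operators expose a binding power used by the expression parser.
    assert!(Token::MUL.precedence() > Token::ADD.precedence());
    assert!(Token::VAR.is_keyword());
    assert!(!Token::VAR.is_operator());
}
```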
Trait Implementations
---
### impl Clone for Token
#### fn clone(&self) -> Token
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Token
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for Token
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Hash for Token
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
### impl PartialEq<Token> for Token
#### fn eq(&self, other: &Token) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Eq for Token
### impl StructuralEq for Token
### impl StructuralPartialEq for Token
Auto Trait Implementations
---
### impl RefUnwindSafe for Token
### impl Send for Token
### impl Sync for Token
### impl Unpin for Token
### impl UnwindSafe for Token
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T> ToString for Twhere
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum go_parser::TokenType
===
```
pub enum TokenType {
Literal,
Operator,
Keyword,
Other,
}
```
Variants
---
### Literal
### Operator
### Keyword
### Other
Auto Trait Implementations
---
### impl RefUnwindSafe for TokenType
### impl Send for TokenType
### impl Sync for TokenType
### impl Unpin for TokenType
### impl UnwindSafe for TokenType
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Type Alias go_parser::AssignStmts
===
```
pub type AssignStmts = PiggyVec<AssignStmtKey, AssignStmt>;
```
Aliased Type
---
```
struct AssignStmts { /* private fields */ }
```
Implementations
---
### impl<K, V> PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn insert(&mut self, v: V) -> K
#### pub fn vec(&self) -> &Vec<V>
#### pub fn iter<'a>(&'a self) -> PiggyVecIter<'a, K, V>
Trait Implementations
---
### impl<K, V: Debug> Debug for PiggyVec<K, V>where
K: PiggyVecKey + From<usize> + Debug,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<K, V> From<Vec<V>> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn from(vec: Vec<V>) -> Self
Converts to this type from the input type.
### impl<K, V> Index<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### type Output = V
The returned type after indexing.
#### fn index(&self, index: K) -> &Self::Output
Performs the indexing (`container[index]`) operation.
### impl<K, V> IndexMut<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn index_mut(&mut self, index: K) -> &mut Self::Output
Performs the mutable indexing (`container[index]`) operation.
Type Alias go_parser::Entitys
===
```
pub type Entitys = PiggyVec<EntityKey, Entity>;
```
Aliased Type
---
```
struct Entitys { /* private fields */ }
```
Implementations
---
### impl<K, V> PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn insert(&mut self, v: V) -> K
#### pub fn vec(&self) -> &Vec<V>
#### pub fn iter<'a>(&'a self) -> PiggyVecIter<'a, K, V>
Trait Implementations
---
### impl<K, V: Debug> Debug for PiggyVec<K, V>where
K: PiggyVecKey + From<usize> + Debug,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<K, V> From<Vec<V>> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn from(vec: Vec<V>) -> Self
Converts to this type from the input type.
### impl<K, V> Index<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### type Output = V
The returned type after indexing.
#### fn index(&self, index: K) -> &Self::Output
Performs the indexing (`container[index]`) operation.
### impl<K, V> IndexMut<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn index_mut(&mut self, index: K) -> &mut Self::Output
Performs the mutable indexing (`container[index]`) operation.
Type Alias go_parser::Fields
===
```
pub type Fields = PiggyVec<FieldKey, Field>;
```
Aliased Type
---
```
struct Fields { /* private fields */ }
```
Implementations
---
### impl<K, V> PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn insert(&mut self, v: V) -> K
#### pub fn vec(&self) -> &Vec<V>
#### pub fn iter<'a>(&'a self) -> PiggyVecIter<'a, K, V>
Trait Implementations
---
### impl<K, V: Debug> Debug for PiggyVec<K, V>where
K: PiggyVecKey + From<usize> + Debug,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<K, V> From<Vec<V>> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn from(vec: Vec<V>) -> Self
Converts to this type from the input type.
### impl<K, V> Index<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### type Output = V
The returned type after indexing.
#### fn index(&self, index: K) -> &Self::Output
Performs the indexing (`container[index]`) operation.
### impl<K, V> IndexMut<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn index_mut(&mut self, index: K) -> &mut Self::Output
Performs the mutable indexing (`container[index]`) operation.
Type Alias go_parser::FuncDecls
===
```
pub type FuncDecls = PiggyVec<FuncDeclKey, FuncDecl>;
```
Aliased Type
---
```
struct FuncDecls { /* private fields */ }
```
Implementations
---
### impl<K, V> PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn insert(&mut self, v: V) -> K
#### pub fn vec(&self) -> &Vec<V>
#### pub fn iter<'a>(&'a self) -> PiggyVecIter<'a, K, V>
Trait Implementations
---
### impl<K, V: Debug> Debug for PiggyVec<K, V>where
K: PiggyVecKey + From<usize> + Debug,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<K, V> From<Vec<V>> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn from(vec: Vec<V>) -> Self
Converts to this type from the input type.
### impl<K, V> Index<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### type Output = V
The returned type after indexing.
#### fn index(&self, index: K) -> &Self::Output
Performs the indexing (`container[index]`) operation.
### impl<K, V> IndexMut<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn index_mut(&mut self, index: K) -> &mut Self::Output
Performs the mutable indexing (`container[index]`) operation.
Type Alias go_parser::FuncTypes
===
```
pub type FuncTypes = PiggyVec<FuncTypeKey, FuncType>;
```
Aliased Type
---
```
struct FuncTypes { /* private fields */ }
```
Implementations
---
### impl<K, V> PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn insert(&mut self, v: V) -> K
#### pub fn vec(&self) -> &Vec<V>
#### pub fn iter<'a>(&'a self) -> PiggyVecIter<'a, K, V>
Trait Implementations
---
### impl<K, V: Debug> Debug for PiggyVec<K, V>where
K: PiggyVecKey + From<usize> + Debug,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<K, V> From<Vec<V>> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn from(vec: Vec<V>) -> Self
Converts to this type from the input type.
### impl<K, V> Index<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### type Output = V
The returned type after indexing.
#### fn index(&self, index: K) -> &Self::Output
Performs the indexing (`container[index]`) operation.
### impl<K, V> IndexMut<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn index_mut(&mut self, index: K) -> &mut Self::Output
Performs the mutable indexing (`container[index]`) operation.
Type Alias go_parser::Idents
===
```
pub type Idents = PiggyVec<IdentKey, Ident>;
```
Aliased Type
---
```
struct Idents { /* private fields */ }
```
Implementations
---
### impl<K, V> PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn insert(&mut self, v: V) -> K
#### pub fn vec(&self) -> &Vec<V>
#### pub fn iter<'a>(&'a self) -> PiggyVecIter<'a, K, V>
Trait Implementations
---
### impl<K, V: Debug> Debug for PiggyVec<K, V>where
K: PiggyVecKey + From<usize> + Debug,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<K, V> From<Vec<V>> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn from(vec: Vec<V>) -> Self
Converts to this type from the input type.
### impl<K, V> Index<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### type Output = V
The returned type after indexing.
#### fn index(&self, index: K) -> &Self::Output
Performs the indexing (`container[index]`) operation.
### impl<K, V> IndexMut<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn index_mut(&mut self, index: K) -> &mut Self::Output
Performs the mutable indexing (`container[index]`) operation.
Type Alias go_parser::LabeledStmts
===
```
pub type LabeledStmts = PiggyVec<LabeledStmtKey, LabeledStmt>;
```
Aliased Type
---
```
struct LabeledStmts { /* private fields */ }
```
Implementations
---
### impl<K, V> PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn insert(&mut self, v: V) -> K
#### pub fn vec(&self) -> &Vec<V>
#### pub fn iter<'a>(&'a self) -> PiggyVecIter<'a, K, V>
Trait Implementations
---
### impl<K, V: Debug> Debug for PiggyVec<K, V>where
K: PiggyVecKey + From<usize> + Debug,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<K, V> From<Vec<V>> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn from(vec: Vec<V>) -> Self
Converts to this type from the input type.
### impl<K, V> Index<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### type Output = V
The returned type after indexing.
#### fn index(&self, index: K) -> &Self::Output
Performs the indexing (`container[index]`) operation.
### impl<K, V> IndexMut<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn index_mut(&mut self, index: K) -> &mut Self::Output
Performs the mutable indexing (`container[index]`) operation.
Type Alias go_parser::Scopes
===
```
pub type Scopes = PiggyVec<ScopeKey, Scope>;
```
Aliased Type
---
```
struct Scopes { /* private fields */ }
```
Implementations
---
### impl<K, V> PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn insert(&mut self, v: V) -> K
#### pub fn vec(&self) -> &Vec<V>
#### pub fn iter<'a>(&'a self) -> PiggyVecIter<'a, K, V>
Trait Implementations
---
### impl<K, V: Debug> Debug for PiggyVec<K, V>where
K: PiggyVecKey + From<usize> + Debug,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<K, V> From<Vec<V>> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn from(vec: Vec<V>) -> Self
Converts to this type from the input type.
### impl<K, V> Index<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### type Output = V
The returned type after indexing.
#### fn index(&self, index: K) -> &Self::Output
Performs the indexing (`container[index]`) operation.
### impl<K, V> IndexMut<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn index_mut(&mut self, index: K) -> &mut Self::Output
Performs the mutable indexing (`container[index]`) operation.
Type Alias go_parser::Specs
===
```
pub type Specs = PiggyVec<SpecKey, Spec>;
```
Aliased Type
---
```
struct Specs { /* private fields */ }
```
Implementations
---
### impl<K, V> PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn insert(&mut self, v: V) -> K
#### pub fn vec(&self) -> &Vec<V>
#### pub fn iter<'a>(&'a self) -> PiggyVecIter<'a, K, V>
Trait Implementations
---
### impl<K, V: Debug> Debug for PiggyVec<K, V>where
K: PiggyVecKey + From<usize> + Debug,
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<K, V> From<Vec<V>> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn from(vec: Vec<V>) -> Self
Converts to this type from the input type.
### impl<K, V> Index<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### type Output = V
The returned type after indexing.
#### fn index(&self, index: K) -> &Self::Output
Performs the indexing (`container[index]`) operation.
### impl<K, V> IndexMut<K> for PiggyVec<K, V>where
K: PiggyVecKey + From<usize>,
#### fn index_mut(&mut self, index: K) -> &mut Self::Output
Performs the mutable indexing (`container[index]`) operation.
chi | cran | R | Package ‘chi’
October 12, 2022
Type Package
Title The Chi Distribution
Version 0.1
URL https://github.com/dkahle/chi
BugReports https://github.com/dkahle/chi/issues
Description Light weight implementation of the standard distribution
functions for the chi distribution, wrapping those for the chi-squared
distribution in the stats package.
License GPL-2
RoxygenNote 6.0.1
NeedsCompilation no
Author <NAME> [aut, cre, cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2017-05-07 05:22:54 UTC
R topics documented:
chi
invchi
chi The Chi Distribution
Description
Density, distribution function, quantile function and random generation for the chi distribution.
Usage
dchi(x, df, ncp = 0, log = FALSE)
pchi(q, df, ncp = 0, lower.tail = TRUE, log.p = FALSE)
qchi(p, df, ncp = 0, lower.tail = TRUE, log.p = FALSE)
rchi(n, df, ncp = 0)
Arguments
x, q vector of quantiles.
df degrees of freedom (non-negative, but can be non-integer).
ncp non-centrality parameter (non-negative).
log, log.p logical; if TRUE, probabilities p are given as log(p).
lower.tail logical; if TRUE (default), probabilities are P[X <= x] otherwise, P[X > x].
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
Details
The functions (d/p/q/r)chi simply wrap those of the standard (d/p/q/r)chisq R implementation, so
look at, say, dchisq for details.
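The wrapping follows from the usual square-root relationship between a chi and a chi-squared
random variable; a minimal illustrative check (not the package source) is:
x <- 1.3; df <- 7
all.equal(dchi(x, df), 2 * x * dchisq(x^2, df)) # density via change of variables
all.equal(pchi(x, df), pchisq(x^2, df)) # distribution function
all.equal(qchi(0.9, df), sqrt(qchisq(0.9, df))) # quantile function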
See Also
dchisq; these functions just wrap the (d/p/q/r)chisq functions.
Examples
s <- seq(0, 5, .01)
plot(s, dchi(s, 7), type = 'l')
f <- function(x) dchi(x, 7)
q <- 2
integrate(f, 0, q)
(p <- pchi(q, 7))
qchi(p, 7) # = q
mean(rchi(1e5, 7) <= q)
samples <- rchi(1e5, 7)
plot(density(samples))
curve(f, add = TRUE, col = "red")
invchi The Inverse Chi Distribution
Description
Density, distribution function, quantile function and random generation for the inverse chi distribution.
Usage
dinvchi(x, df, ncp = 0, log = FALSE)
pinvchi(q, df, ncp = 0, lower.tail = TRUE, log.p = FALSE)
qinvchi(p, df, ncp = 0, lower.tail = TRUE, log.p = FALSE)
rinvchi(n, df, ncp = 0)
Arguments
x, q vector of quantiles.
df degrees of freedom (non-negative, but can be non-integer).
ncp non-centrality parameter (non-negative).
log, log.p logical; if TRUE, probabilities p are given as log(p).
lower.tail logical; if TRUE (default), probabilities are P[X <= x] otherwise, P[X > x].
p vector of probabilities.
n number of observations. If length(n) > 1, the length is taken to be the number
required.
See Also
dchi
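Assuming the usual convention that the inverse chi distribution is that of 1/X with X chi distributed,
the following illustrative identities (not the package source) should hold:
y <- 0.4; df <- 7
all.equal(dinvchi(y, df), dchi(1 / y, df) / y^2) # density via change of variables
all.equal(pinvchi(y, df), pchi(1 / y, df, lower.tail = FALSE)) # upper tail of the chi CDF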
Examples
s <- seq(0, 2, .01)
plot(s, dinvchi(s, 7), type = 'l')
f <- function(x) dinvchi(x, 7)
q <- .5
integrate(f, 0, q)
(p <- pinvchi(q, 7))
qinvchi(p, 7) # = q
mean(rinvchi(1e5, 7) <= q)
samples <- rinvchi(1e5, 7)
plot(density(samples))
curve(f, add = TRUE, col = "red") |
gambin | cran | R | Package ‘gambin’
October 13, 2022
Type Package
Title Fit the Gambin Model to Species Abundance Distributions
Version 2.5.0
Description Fits unimodal and multimodal gambin distributions to species-abundance distributions
from ecological data, as in Matthews et al. (2014) <DOI:10.1111/ecog.00861>.
'gambin' is short for 'gamma-binomial'. The main function is fit_abundances(), which estimates
the 'alpha' parameter(s) of the gambin distribution using maximum likelihood. Functions are
also provided to generate the gambin distribution and for calculating likelihood statistics.
Imports stats, graphics, doParallel, gtools, foreach, parallel
Suggests testthat, knitr, rmarkdown
Depends R(>= 3.0.0)
URL https://github.com/txm676/gambin/
BugReports https://github.com/txm676/gambin/issues
License GPL-3
RoxygenNote 7.1.1
VignetteBuilder knitr
Encoding UTF-8
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [aut],
<NAME> [aut],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-04-16 18:10:05 UTC
R topics documented:
gambin-package
AIC.gambin
categ
create_octaves
deconstruct_modes
dgambin
fit_abundances
fly
moths
mult_abundances
summary.gambin
gambin-package Fit the gambin model to species abundance distributions
Description
This package provides functions for fitting unimodal and multimodal gambin distributions to species-
abundance distributions from ecological data. The main function is fit_abundances(), which
estimates the ’alpha’ parameter(s) of the gambin distribution using maximum likelihood.
Details
The gambin distribution is a sample distribution based on a stochastic model of species abundances,
and has been demonstrated to fit empirical data better than the most commonly used species-
abundance models (see references). Gambin is a stochastic model which combines the gamma
distribution with a binomial sampling method. To fit the gambin distribution, the abundance data
is first binned into octaves. The expected abundance octave of a species is given by the number
of successful consecutive Bernoulli trials with a given parameter p. The parameter p of each species is
assumed to be distributed according to a gamma distribution. This approach can be viewed as linking
the gamma distribution with the probability of success in a binomial process with x trials. Use the
fit_abundances() function to fit the gambin model to a vector of species abundances, optionally
using a subsample of the individuals. Use the mult_abundances() function to fit the gambin model
to multiple sites / samples and return the alpha values for each model fit (both the raw values and the
alpha values standardised by the number of individuals). The package estimates the alpha (shape)
parameter with associated confidence intervals. Methods are provided for plotting the results, and
for calculating the likelihood of fits.
The package now provides functionality to fit multimodal gambin distributions (i.e. a gambin dis-
tribution with more than one mode), and to deconstruct and examine a multimodal gambin model
fit (deconstruct_modes).
References
<NAME>., <NAME>., <NAME>., <NAME>, <NAME>., <NAME>. and Whittaker,
R.J. (2014) The gambin model provides a superior fit to species abundance distributions with a
single free parameter: evidence, implementation and interpretation. Ecography 37: 1002-1011.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. & <NAME>. (2019) Extension of the gambin model
to multimodal species abundance distributions. Methods in Ecology and Evolution, 10, 432-437.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. & Whittaker, R.J.
(2007). Modelling dimensionality in species abundance distributions: description and evaluation of
the Gambin model. Evolutionary Ecology Research, 9, 313-324.
See Also
https://github.com/txm676/gambin
Examples
data(moths, package = "gambin")
fit = fit_abundances(moths)
barplot(fit)
lines(fit)
AIC(fit)
AIC.gambin Likelihood statistics for the GamBin model
Description
Uses likelihood and information theoretical approaches to reveal the degree of fit of the GamBin
model to empirical species abundance distributions.
Usage
## S3 method for class 'gambin'
AIC(object, ...)
AICc(object, ...)
## S3 method for class 'gambin'
AICc(object, ...)
## S3 method for class 'gambin'
BIC(object, ...)
## S3 method for class 'gambin'
logLik(object, ...)
Arguments
object An object of type gambin
... Further arguments to pass to the function
Value
logLik returns an R object of type logLik. The other functions return the numerical value of the
statistic.
References
Akaike, Hirotugu. "A new look at the statistical model identification." Automatic Control, IEEE
Transactions on 19.6 (1974): 716-723.
Examples
data(moths)
fit = fit_abundances(moths)
AIC(fit)
categ Simulated bird SAD dataset with species classification data
Description
A randomly generated bird SAD dataset where each species has been randomly classified according
to its origin (native, exotic or invasive).
Format
A dataframe with three columns: 1) ’abundances’ = the abundance of each species, 2) ’species’ =
the species names, and 3) ’status’ = the species origin classification. Regarding (3), each species is
classified as either native (N), exotic (E) or invasive (I).
Source
This package.
Examples
data(categ, package = "gambin")
create_octaves Reclassify a vector of species’ abundances into abundance octaves
Description
Creates abundance octaves by a log2 transform that doubles the number of abundance classes within
each octave (method 3 of Gray, Bjoergesaeter & Ugland 2006). Octave 0 contains the number of
species with 1 individual, octave 1 the number of species with 2 or 3 individuals, octave 2 the
number of species with 4 to 7 individuals, and so forth.
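A minimal illustration of this binning rule (not the package implementation): under this scheme the
octave of a species with abundance n is floor(log2(n)).
octave_of <- function(n) floor(log2(n))
octave_of(c(1, 2, 3, 4, 7, 8)) # 0 1 1 2 2 3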
Usage
create_octaves(abundances, subsample = 0)
Arguments
abundances A numerical vector of species abundances in a community.
subsample If > 0, the community is subsampled by this number of individuals before cre-
ating octaves. This is useful for analyses where alpha is estimated from a stan-
dardized number of individuals.
Value
A data.frame with two variables: octave with the name of each octave and species with the
number of species in that octave.
References
<NAME>., <NAME>. & Ugland, K.I. (2006) On plotting species abundance distributions.
Journal of Animal Ecology, 75, 752-756.
Examples
data(moths)
create_octaves(moths)
deconstruct_modes Deconstruct a multimodal gambin model fit
Description
Deconstruct a multimodal gambin model fit by locating the modal octaves and (if species classifi-
cation data are provided) determining the proportion of different types of species in each octave.
Usage
deconstruct_modes(
fit,
dat,
peak_val = NULL,
abundances = "abundances",
species = "species",
categ = NULL,
plot_modes = TRUE,
col.statu = NULL,
plot_legend = TRUE,
legend_location = "topright"
)
Arguments
fit A gambin model fit where the number of components is greater than one (see
fit_abundances).
dat A matrix or dataframe with at least two columns, including the abundance data
used to fit the multimodal gambin model and the species names. An optional
third column can be provided that contains species classification data.
peak_val A vector of modal octave values. If peak_val = NULL, the modal octave values
are taken from the model fit object.
abundances The name of the column in dat that contains the abundance data (default =
"abundances").
species The name of the column in dat that contains the species names (default =
"species").
categ Either NULL if no species classification data are provided, or the name of the
column in dat that contains the species classification data.
plot_modes A logical argument specifying whether a barplot of the model fit with high-
lighted octaves should be generated. If categ = NULL a barplot is produced
whereby just the modal octaves are highlighted in red. If categ is provided
a barplot is produced whereby the bar for each octave is split into n parts, where
n equals the number of species categories.
col.statu A vector of colours (of length n) for the split barplot, where n equals the number
of species categories.
plot_legend Should the barplot include a legend. Only applicable when plot_modes = TRUE
and categ is not NULL.
legend_location
If plot_legend = TRUE, where should the legend be located. Should be one of
“bottomright”, “bottom”, “bottomleft”, “left”, “topleft”, “top”, “topright” (de-
fault), “right”, or “center”.
Details
The function enables greater exploration of a multimodal gambin model fit. If no species classifica-
tion data are available (i.e. categ = NULL) the function returns the modal octaves of the n-component
distributions and the names of the species located in each octave. If plot_modes = TRUE a plot is
returned with the modal octaves highlighted in red. If species classification data are provided the
function also returns a summary table with the number of each species category in each octave
provided. The user can then use these data to run different tests to test whether, for example, the
number of species in each category in the modal octaves is significantly different than expected by
chance. If plot_modes = TRUE a split barplot is returned whereby each bar (representing an octave)
is split into the n species categories.
Species classification data should be of type character (e.g. native or invasive).
Occasionally, some of the component distributions in a multimodal gambin model fit have the same
modal octave; this is more common when fitting the 3-component model. When this occurs a
warning is produced, but it is not a substantive issue.
Value
An object of class deconstruct. The object is a list with either two or three elements. If categ =
NULL, the list has two elements: 1) ’Peak_locations’, which contains the modal octave values, and 2)
’Species_per_octave’, which is a list where each element contains the species names in an octave.
If categ != NULL, the returned object has a third element: 3) ’Summary_table’, which contains a
dataframe (frequency table) with the numbers of each category of species in each octave.
Author(s)
<NAME> & <NAME>
Examples
data(categ)
fits2 = fit_abundances(categ$abundances, no_of_components = 2)
#without species classification data
deconstruct_modes(fits2, dat = categ, peak_val = NULL, abundances = "abundances",
species = "species", categ = NULL, plot_modes = TRUE)
#with species classification data
deconstruct_modes(fits2, dat = categ, categ = "status", col.statu = c("green", "red", "blue"))
#manually choose modal octaves
deconstruct_modes(fits2, dat = categ, peak_val = c(0,1))
dgambin The mixture gambin distribution
Description
Density, distribution function, quantile function and random generation for the mixture gambin
distribution.
Usage
dgambin(x, alpha, maxoctave, w = 1, log = FALSE)
pgambin(q, alpha, maxoctave, w = 1, lower.tail = TRUE, log.p = FALSE)
rgambin(n, alpha, maxoctave, w = 1)
qgambin(p, alpha, maxoctave, w = 1, lower.tail = TRUE, log.p = FALSE)
gambin_exp(alpha, maxoctave, w = 1, total_species)
Arguments
x vector of (non-negative integer) quantiles.
alpha The shape parameter of the GamBin distribution.
maxoctave The scale parameter of the GamBin distribution - which octave is the highest in
the empirical dataset?
w A vector of weights. Default: a single weight. This vector must be of the same
length as alpha.
log logical; If TRUE, probabilities p are given as log(p).
q vector of quantiles.
lower.tail logical; if TRUE (default), probabilities are P[X <= x], otherwise, P[X > x].
log.p logical; if TRUE, probabilities p are given as log(p).
n number of random values to return.
p vector of probabilities.
total_species The total number of species in the empirical dataset
Details
dgambin gives the probability of each octave under a mixture gambin, so all octaves sum to 1. gambin_exp
multiplies this by the total number of species to give the expected GamBin distribution in units of
species, for comparison with empirical data.
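An illustrative check of this relationship (not the package source), assuming a single-component
gambin with maxoctave = 4:
p <- dgambin(0:4, alpha = 1, maxoctave = 4)
sum(p) # the octave probabilities sum to (approximately) 1
gambin_exp(1, 4, total_species = 200) # approximately 200 * p, i.e. probabilities scaled to species counts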
Value
A vector with length MaxOctave + 1 of the expected number of species in each octave
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., . . .
<NAME>. (2019) Extension of the gambin model to multimodal species abundance distribu-
tions. Methods in Ecology and Evolution, doi:10.1111/2041-210X.13122
<NAME>., <NAME>., <NAME>., <NAME>, <NAME>., <NAME>. and Whittaker,
R.J. (2014) The gambin model provides a superior fit to species abundance distributions with a
single free parameter: evidence, implementation and interpretation. Ecography 37: 1002-1011.
Examples
## maxoctave is 4. So zero for x = 5
dgambin(0:5, 1, 4)
## Equal weightings between components
dgambin(0:5, alpha = c(1,2), maxoctave = c(4, 4))
## Zero weight on the second component, i.e. a 1 component model
dgambin(0:5, alpha = c(1,2), maxoctave = c(4, 4), w = c(1, 0))
expected = gambin_exp(4, 13, total_species = 200)
plot(expected, type = "l")
##draw random values from a gambin distribution
x = rgambin(1e6, alpha = 2, maxoctave = 7)
x = table(x)
freq = as.vector(x)
values = as.numeric(as.character(names(x)))
abundances = data.frame(octave=values, species = freq)
fit_abundances(abundances, no_of_components = 1)
fit_abundances Fit a unimodal or multimodal gambin model to a species abundance
distribution
Description
Uses maximum likelihood methods to fit the GamBin model (with a given number of modes) to
binned species abundances. To control for the effect of sample size, the abundances may be sub-
sampled prior to fitting.
Usage
fit_abundances(abundances, subsample = 0, no_of_components = 1, cores = 1)
fitGambin(abundances, subsample = 0)
Arguments
abundances Either a vector of abundances of all species in the sample/community; or the
result of create_octaves
subsample The number of individuals to sample from the community before fitting the
GamBin model. If subsample == 0 the entire community is used
no_of_components
Number of components (i.e. modes) to fit.The default (no_of_components ==
1) fits the standard unimodal gambin model.
cores No of cores to use when fitting. Use parallel::detectCores() to detect the
number of cores on your machine.
Details
The gambin distribution is fit to the number of species in abundance octaves, as specified by the
create_octaves function. Because the shape of species abundance distributions depend on sam-
ple size, abundances of different communities should be compared on equally large samples. The
sample size can be set by the subsample parameter. To estimate alpha from a standardised sample,
the function must be run several times; see the examples. The no_of_components parameter en-
ables multimodal gambin distributions to be fitted. For example, setting no_of_components equal
to 2, the bimodal gambin model is fitted. When a multimodal gambin model is fitted (with g modes),
the return values are the alpha parameters of the g different component distributions, the max octave
values for the g component distributions (as the max octave values for the g-1 component distribu-
tions are allowed to vary), and the weight parameter(s), which denote the fraction of objects
within each g component distribution. When fitting multimodal gambin models (particularly on
large datasets), the optimisation algorithm can be slow. In such cases, the process can be speeded
up by using the cores parameter to enable parallel computing.
The plot method creates a barplot showing the observed number of species in octaves, with the
fitted GamBin distribution shown as black dots. The summary.gambin method provides additional
useful information such as confidence intervals around the model parameter estimates.
Value
The fit_abundances function returns an object of class gambin, with the alpha, w and MaxOctave
parameters of the gambin mixture distribution, the likelihood of the fit, and the empirical distribution
over octaves.
Examples
data(moths)
fit = fit_abundances(moths)
barplot(fit)
lines(fit, col=2)
summary(fit)
# gambin parameters based on a standardized sample size of 1000 individuals
stand_fit <- replicate(20, fit_abundances(moths, 1000)$alpha) #may take a while on slower computers
print(c(mean = mean(stand_fit), sd = sd(stand_fit)))
# a bimodal gambin model
biMod <- fit_abundances(moths, no_of_components = 2)
fly Brazilian Horse Fly Data
Description
Horse flies captured using various sampling methods at different sites across Brazil.
Format
A list with two elements. The first element contains a numerical vector with the abundance of 164
fly species sampled at various sites across Brazil. The second element contains a numerical vector
with the abundance of 58 fly species sampled at a single site within Brazil using just canopy traps.
Source
This package.
Examples
data(fly, package = "gambin")
moths Williams’ Rothamsted moth data
Description
Macro-Lepidoptera captured in a light trap at Rothamsted Experimental Station during 1935.
Format
A numerical vector with the abundance of 195 moth species.
Source
<NAME>. (1964) Patterns in the balance of nature. Academic Press, London.
Examples
data(moths, package = "gambin")
mult_abundances Fit a unimodal gambin model to multiple species abundance distribu-
tions
Description
Fits the unimodal gambin model to the SADs from multiple sites and returns the standardised and
unstandardised alpha values.
Usage
mult_abundances(mult, N = 100, subsample = NULL, r = 3)
Arguments
mult Either a matrix, dataframe or list containing the species abundance data of a
set of sites. In the case of a matrix or dataframe, a given column contains the
abundance data for a given site (i.e. columns are sites and rows are species; each
cell is the abundance of a given species in a given site). In the case of a list, each
element in the list contains the abundance data (i.e. a vector of abundances) for
a given site.
N The number of times to subsample the abundance data in each site to calculate
mean standardised alpha.
subsample The number of individuals to sample from each site before fitting the gambin
model. The default is subsample = NULL, in which case subsample is set to
equal the number of individuals in the site with the fewest individuals.
r The number of decimal points to round the returned alpha values to (default is r
= 3)
Details
Because the alpha parameter of the gambin model is dependent on sample size, when comparing
the alpha values between sites it can be useful to first standardise the number of individuals in all
sites. By default, the mult_abundances function calculates the total number of individuals in each
site and selects the minimum value for standardising. This minimum number of individuals is then
sampled from each site and the gambin model fitted to this subsample using fit_abundances, and
the alpha value stored. This process is then repeated N times and the mean alpha value is calculated
for each site. The number of individuals to be subsampled can be manually set using the subsample
argument. The function returns a list, in which the first two elements are the mean standardised
alpha values for each site, and the raw unstandardized alpha values for each site, respectively. The
full set of N alpha values and X2 P-values for each site are also returned. As an input, the SAD data
can be in the form of a matrix or dataframe, or a list. A matrix/dataframe is only for when each
site (column) has abundance data for the same set of species (rows). For example, an abundance
matrix of bird species on a set of islands in an archipelago. A list should be used in cases where the
number of species varies between sites; for example, when comparing the SADs of samples from
different countries. In this case, each element of the list contains an abundance vector from a given
site.
At present, the mult_abundances function only fits the unimodal gambin model.
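A minimal sketch of the standardisation idea using fit_abundances directly (not the package source),
assuming mult is a list of abundance vectors:
standardised_alpha <- function(mult, N = 10) {
n_min <- min(sapply(mult, sum)) # number of individuals in the smallest site
sapply(mult, function(site) {
mean(replicate(N, fit_abundances(site, subsample = n_min)$alpha))
})
}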
Examples
#simulate a matrix containing the SAD data for 10 sites (50 sp. in each)
mult <- matrix(0, nrow = 50, ncol = 10)
mult <- apply(mult, 2, function(x) ceiling(rlnorm(length(x), 0, 2.5)))
#run the mult_abundances function and view the alpha values
mm <- mult_abundances(mult, N = 10, subsample = NULL)
mm[1:2]
plot(mm$Mean.Stan.Alpha, mm$Unstan.Alpha)
#simulate a list containing the SAD of 5 sites (with varying numbers of sp.)
mult2 <- vector("list", length = 5)
for (i in 1:length(mult2)){
dum <- sample(mult[, i], replace = TRUE)
rm <- round(runif(1, 0, 5), 0)
if (rm > 0){
rm2 <- sample(1:length(dum), rm, replace = FALSE)
dum <- dum[-rm2]
}
mult2[[i]] <- dum
}
#run the mult_abundances function on the list
mm2 <- mult_abundances(mult2, N = 5, subsample = NULL)
mm2[1:2]
summary.gambin Summarising the results of a gambin model fit
Description
S3 method for class ’gambin’. summary.gambin creates summary statistics for objects of class
’gambin’. The summary method generates more useful information (e.g. confidence intervals) for
the user than the standard model fitting function. Another S3 method (print.summary.gambin;
not documented) is used to print the output.
Usage
## S3 method for class 'gambin'
summary(object, confint = FALSE, n = 50, ...)
Arguments
object A gambin model fit object from fit_abundances
confint A logical argument specifying whether confidence intervals should be calcu-
lated (via bootstrapping) for the parameters of gambin models with more than 1
component (confidence intervals for 1 component gambin models are calculated
automatically)
n The number of bootstrap samples to use in generating the confidence intervals
(for multimodal gambin models)
... Further arguments to pass
Details
For the one-component gambin model the confidence interval for the alpha parameter is calculated
automatically using an analytical solution.
For gambin models with more than one component no analytical solution for deriving the confi-
dence intervals is known. Instead, a bootstrapping procedure can be used (using the confint and n
arguments) to generate confidence intervals around the alpha and max octave parameters. However,
the process can be time-consuming, particularly for gambin models with more than two compo-
nents. Thus, the default is that confidence intervals are not automatically calculated for gambin
models with more than one component (i.e. confint == FALSE).
In addition, it should be noted that in certain cases the confidence intervals around the alpha pa-
rameters in multi-component gambin models can be quite wide. This is due to changes in the max
octaves of the component distributions in the bootstrapped samples. It can be useful to make a plot
(e.g. a dependency boxplot) of the n alpha values against the max octave values.
Value
A list of class ’summary.gambin’ with nine elements, containing useful information about the model
fit.
Examples
## Not run:
data(moths)
fit = fit_abundances(moths)
summary(fit)
# multimodal gambin models with confidence intervals
biMod <- fit_abundances(moths, no_of_components = 2)
summary(biMod, confint = TRUE, n = 5) #large n takes a long time to run
## End(Not run) |
github.com/chai2010/awesome-go-zh | go | Go | README
[¶](#section-readme)
---
### Curated Go Resources (Chinese Edition)
* *KusionStack, a one-stop programmable configuration tech stack (Go): <https://github.com/KusionStack/kusion>*
* *KCL configuration language (Rust): <https://github.com/KusionStack/KCLVM>*
* *凹语言™ (the Wa language): <https://github.com/wa-lang/wa>*
---
Go China discussion group: <https://groups.google.com/forum/#!forum/golang-china>
开发者头条 (Developer Toutiao): <https://toutiao.io/subjects/318517>
---
#### Inclusion criteria
* In-depth or timely original or translated articles (single pieces or series; no reposts)
* Awesome-style collection or list projects on GitHub must be strongly Go-related and have at least 1k stars
#### Other awesome lists
* <https://github.com/avelino/awesome-go>
#### Go contributors in China
* [go-contributors.md](https://github.com/chai2010/awesome-go-zh/blob/4c1631e1516f/go-contributors.md)
---
#### From getting started to mastery
1. A Tour of Go: <https://tour.golang.org>, <https://tour.go-zh.org/>
2. The Go "bible" (The Go Programming Language): <https://gopl.io>, <https://github.com/golang-china/gopl-zh>
3. Advanced Go: <https://github.com/chai2010/advanced-go-programming-book>
---
#### Official documentation (the inspiration for all third-party docs)
1. Website: <https://golang.org/>, <https://golang.google.cn/>
2. Blog: <https://blog.golang.org/>
3. Talks: <https://talks.golang.org/>
4. Home repository: <https://github.com/golang/>
5. Wiki: <https://github.com/golang/go/wiki>
6. Docs: <https://godoc.org/>
Core developers' blogs:
* <NAME>: <https://twitter.com/_rsc>, <https://swtch.com/~rsc>
* <NAME>: <https://twitter.com/rob_pike>
* <NAME>: <https://twitter.com/davecheney>, <https://dave.cheney.net>
Go2 drafts:
* <https://blog.golang.org/go2draft>
* <https://go.googlesource.com/proposal/+/master/design/go2draft.md>
* <https://github.com/golang/proposal/blob/master/design/go2draft-generics-overview.md>
* <https://github.com/golang/proposal/blob/master/design/go2draft-error-handling.md>
* [Introduction to the Go2 design drafts](https://github.com/songtianyi/songtianyi.github.io/blob/master/mds/techniques/go2-design-draft-introduction.md)
---
#### Discussion groups
Global forums (access from mainland China may require getting past the Great Firewall):
1. Official discussion group: <https://groups.google.com/forum/#!forum/golang-nuts>
2. Go China discussion group: <https://groups.google.com/forum/#!forum/golang-china>
Forums within China:
1. <https://gocn.vip/>
2. <https://golangtc.com/>
3. <https://studygolang.com/>
4. <https://zhihu.com/topic/19625982/>
5. <https://www.newsmth.net/bbsdoc.php?board=Golang>
---
#### Talks
1. [Introduction to Go & WebAssembly](https://talks.godoc.org/github.com/chai2010/awesome-go-zh/chai2010/chai2010-golang-wasm.slide) - [chai2010](https://github.com/chai2010/awesome-go-zh/tree/master/chai2010) [Go·夜读](https://github.com/developer-learning/reading-go/issues/445) 2019/08/15
2. [Revisiting the prospects of Go on the front end](https://www.jiqizhixin.com/articles/2019-01-02-35) - 许式伟 2019/01/02
3. [An introduction to the Go language](https://talks.godoc.org/github.com/chai2010/awesome-go-zh/chai2010/chai2010-golang-intro.slide) - [chai2010](https://github.com/chai2010/awesome-go-zh/tree/master/chai2010) 武汉·黄鹤会 2018/12/16
4. [GIAC 2018 - Where is Go headed?](https://github.com/chai2010/awesome-go-zh/blob/4c1631e1516f/chai2010/giac2018) - [chai2010](https://github.com/chai2010/awesome-go-zh/tree/master/chai2010) GIAC Global Internet Architecture Conference, Shanghai, 2018/11/23
5. [Concurrent programming in Go](https://talks.godoc.org/github.com/chai2010/awesome-go-zh/chai2010/chai2010-golang-concurrency.slide) - [chai2010](https://github.com/chai2010/awesome-go-zh/tree/master/chai2010) 武汉·光谷猫友会 2018/09/16, [notes 01](https://mp.weixin.qq.com/s/UaY9gJU85dq-dXlOhLYY1Q)/[notes 02](https://mp.weixin.qq.com/s/_aKNO-H11GEDA-l0rycfQQ)
6. [A deep dive into CGO programming](https://github.com/chai2010/gopherchina2018-cgo-talk) - [chai2010](https://github.com/chai2010/awesome-go-zh/tree/master/chai2010) GopherChina 2018, Shanghai, 2018/04/15
---
#### Go e-books
1. 胡文Go.ogle: <https://github.com/chai2010/gopherchina2018-cgo-talk/blob/master/go.ogle.pdf>
2. Advanced Go Programming (Go高级编程): <https://github.com/chai2010/advanced-go-programming-book>
3. Build Web Applications with Go (Go Web编程): <https://github.com/astaxie/build-web-application-with-golang>
4. Introduction to the Go AST (Go语法树入门): <https://github.com/chai2010/go-ast-book>
5. Implementing the µGo language (µGo语言实现): <https://github.com/wa-lang/ugo-compiler-book>
6. Books listed on the official wiki: <https://github.com/golang/go/wiki/Books>
Go2 books:
1. The Go2 Programming Guide (Go2编程指南): <https://github.com/chai2010/go2-book>
#### Published Go books
* First Chinese-language book: [Go语言·云动力](http://www.ituring.com.cn/book/1040), fango, 2012
* The official Go book: [The Go Programming Language](https://gopl.io), D&K, 2016
* Advanced Go book: [Go语言高级编程](https://book.douban.com/subject/34442131/): 柴树杉 & 曹春晖
Books rated **7 or higher** on Douban (published in China) ([link](https://book.douban.com/subject_search?search_text=go%E8%AF%AD%E8%A8%80)):
1. [Go程序设计语言(英文版)](https://book.douban.com/subject/26859123/): D&K
2. [Go语言高级编程](https://book.douban.com/subject/34442131/): 柴树杉 & 曹春晖
3. [Go程序设计语言(中文版)](https://book.douban.com/subject/27044219/): D&K
4. [Go学习笔记](https://book.douban.com/subject/26832468/): 雨痕
5. [Go语言实战](https://book.douban.com/subject/27015617/): WilliaKennedy
6. [Go语言编程](https://book.douban.com/subject/11577300/): 许式伟
7. [Go并发编程实战](https://book.douban.com/subject/27016236/): 郝林
8. [Go语言程序设计](https://book.douban.com/subject/24869910/): <NAME>
Original Chinese works (excluding compiled/edited volumes):
1. [Go语言高级编程](https://book.douban.com/subject/34442131/): 柴树杉 & 曹春晖, 2019, 89 CNY
2. [Go语言核心编程](http://product.china-pub.com/8052655): 李文塔, 2018, 79 CNY
3. [Go学习笔记](http://product.china-pub.com/4971695): 雨痕, 2016, 89 CNY
4. [Go并发编程实战](https://book.douban.com/subject/27016236/): 郝林, 2015, 79 CNY
5. [Go Web 编程](http://product.china-pub.com/3767290): 谢孟军, 2013, 65 CNY
6. [Go语言编程](http://www.ituring.com.cn/book/967): 许式伟, 2012, 49 CNY
7. [Go语言·云动力](http://www.ituring.com.cn/book/1040): fango, 2012, 39 CNY
Chinese compiled/edited volumes:
1. [Go程序员面试算法宝典](http://product.china-pub.com/8058587): 猿媛之家 (compiled), 2019, 69 CNY
2. [Go语言编程实战](http://product.china-pub.com/8062288): 强彦 & 王军红 (eds.), 2019, 59 CNY
3. [GO语言公链开发实战](http://product.china-pub.com/8061295): 郑东旭 et al. (compiled), 2019, 89 CNY
4. [Go语言编程入门与实战技巧](http://product.china-pub.com/8051880): 黄靖钧 (compiled), 2018, 79 CNY
5. [Go语言从入门到进阶实战:视频教学版](http://product.china-pub.com/8014297): 徐波 (compiled), 2018, 99 CNY
6. [Go语言程序设计](http://product.china-pub.com/4076269): 王鹏 (compiled), 2014, 39 CNY
Translated from abroad:
1. [Go语言实战](http://product.china-pub.com/8057254): <NAME>, <NAME>, 2019, 79
2. [Go语言并发之道(中文翻译)](http://www.oreilly.com.cn/index.php?func=book&isbn=978-7-5198-2494-5): <NAME>, 2018, 58 CNY
3. [Go语言入门经典(中文翻译)](https://www.epubit.com/book/detail/7239): [英]乔治 奥尔波, 2018, 59 CNY
4. [Go Web编程(中文翻译)](https://www.amazon.cn/dp/B078CN7XSS): 郑兆雄, 2017, 79 CNY
5. [Go程序设计语言(中文翻译)](http://product.china-pub.com/5576736): D&K, 2017, 79 CNY
6. [Go语言实战(中文翻译)](http://product.china-pub.com/5294458): WilliaKennedy, 2017, 59 CNY
7. [Go语言程序设计(中文翻译)](http://product.china-pub.com/3768290): <NAME>, 2013, 69 CNY
English reprint editions:
1. [Go程序设计语言(英文)](http://product.china-pub.com/4912464): D&K, 2016, 79 CNY
#### Other books (Go-related)
Original Chinese works:
1. [分布式缓存——原理、架构及Go语言实现](https://www.epubit.com/book/detail/39324): 胡世杰, 2018, 49 CNY
2. [自己动手实现Lua](https://github.com/zxh0/luago-book): 张秀宏, 2018, 89 CNY
3. [分布式对象存储 原理 架构及Go语言实现](http://product.china-pub.com/8016311): 胡世杰, 2018, 59 CNY
4. [自己动手写Java虚拟机](https://github.com/zxh0/jvmgo-book): 张秀宏, 2016, 69 CNY
Translated from abroad:
1. [机器学习:Go语言实现](http://product.china-pub.com/8053220): [美] 丹尼尔·怀特纳克, 2018, 59 CNY
2. [Cloud Native Go:构建基于Go和React的云原生Web应用与微服务(中文翻译)](http://product.china-pub.com/6170789): <NAME>, 2017, 69 CNY
---
#### Go2
* <https://github.com/golang/go/wiki/Go2>
* <https://github.com/golang/go/labels/Go2>
#### Go Modules
**Official docs**
* <https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more>
* <https://research.swtch.com/vgo>
* <https://github.com/golang/go/wiki/vgo>
* <https://github.com/golang/go/wiki/Modules>
**Other docs**
* <https://lingchao.xin/tags/vgo/>
* <https://tonybai.com/2018/07/15/hello-go-module/>
* <https://roberto.selbach.ca/intro-to-go-modules>
* <https://www.cnblogs.com/apocelipes/p/9609895.html>
* <https://arslan.io/2018/08/26/using-go-modules-with-vendor-support-on-travis-ci/>
* <https://colobu.com/2018/08/27/learn-go-module/>
* <https://www.komu.engineer/blogs/go-modules-early-peek>
---
#### Libraries from the Go & Google teams
*Go team:*
* <https://github.com/golang/protobuf/>
* <https://github.com/golang/oauth2>
* <https://github.com/golang/glog>
* <https://github.com/golang/geo>
* <https://github.com/golang/groupcache>
* <https://github.com/golang/snappy>
* <https://github.com/golang/freetype>
* <https://github.com/rsc/goversion>
*Google team:*
* <https://github.com/googleapis/googleapis>
* <https://github.com/google/btree>
* <https://github.com/google/go-cloud>
* <https://github.com/google/gops>
* <https://github.com/google/gvisor>
* <https://github.com/google/google-api-go-client>
* <https://github.com/grpc/grpc-go>
---
#### New programming languages
* The Go+ language, by 许式伟
+ <https://github.com/goplus/gop>
* 凹语言 (the Wa language), by 柴树杉/丁尔男/史斌
+ <https://github.com/wa-lang/wa>
---
#### WebAssembly
**Official Go material**
* <https://tip.golang.org/pkg/syscall/js>
* <https://github.com/golang/go/tree/master/misc/wasm>
**Curated wasm resources**
* <https://github.com/chai2010/awesome-wasm-zh>
* <https://github.com/mbasso/awesome-wasm>
* <https://gopry.rice.sh/>
**WebAssembly books**
1. [WebAssembly标准入门](https://www.epubit.com/book/detail/40619) - Posts & Telecom Press, 49 CNY
2. [C/C++面向WebAssembly编程](https://github.com/3dgen/cppwasm-book) - open-source book, in progress
3. [Learn WebAssembly](https://www.packtpub.com/web-development/learn-webassembly) - English
4. [Programming WebAssembly with Rust](https://medium.com/@KevinHoffman/programming-webassembly-with-rust-the-book-7c4a890fcf97) - English, in progress
---
#### Cloud computing
* <https://github.com/google/go-cloud>
* [Portable Cloud Programming with Go Cloud](https://blog.golang.org/go-cloud), [Chinese translation](http://www.53it.net/show/1708.html)
* <https://cloud.google.com/appengine/docs/standard/go/>
* [Google Cloud Functions for Go](https://medium.com/google-cloud/google-cloud-functions-for-go-57e4af9b10da)
---
#### Databases & ORM
**BoltDB**
* <https://github.com/boltdb/bolt>
* <https://github.com/etcd-io/bbolt>
**LevelDB**
* <https://github.com/dgraph-io/badger>
* <https://github.com/syndtr/goleveldb>
**SQLite3**
* <https://github.com/mattn/go-sqlite3>
**ORM**
* <https://github.com/rsc/dbstore>
* <https://github.com/go-gorp/gorp>
* <https://github.com/jinzhu/gorm>
---
#### DevOps
* KusionStack, a programmable configuration tech stack
+ <https://github.com/KusionStack/kusion>
---
#### Other
None
github.com/obsidiandynamics/goharvest | go | Go | README
[¶](#section-readme)
---
### logo
![Go version](https://img.shields.io/github/go-mod/go-version/obsidiandynamics/goharvest)
[![Build](https://travis-ci.org/obsidiandynamics/goharvest.svg?branch=master)](https://travis-ci.org/obsidiandynamics/goharvest)
![Release](https://img.shields.io/github/v/release/obsidiandynamics/goharvest?color=ff69b4)
[![Codecov](https://codecov.io/gh/obsidiandynamics/goharvest/branch/master/graph/badge.svg)](https://codecov.io/gh/obsidiandynamics/goharvest)
[![Go Report Card](https://goreportcard.com/badge/github.com/obsidiandynamics/goharvest)](https://goreportcard.com/report/github.com/obsidiandynamics/goharvest)
[![Total alerts](https://img.shields.io/lgtm/alerts/g/obsidiandynamics/goharvest.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/obsidiandynamics/goharvest/alerts/)
[![GoDoc Reference](https://img.shields.io/badge/docs-GoDoc-blue.svg)](https://pkg.go.dev/github.com/obsidiandynamics/goharvest?tab=doc)
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Example [¶](#example-package)
```
const dataSource = "host=localhost port=5432 user=postgres password= dbname=postgres sslmode=disable"
// Optional: Ensure the database table exists before we start harvesting.
func() {
db, err := sql.Open("postgres", dataSource)
if err != nil {
panic(err)
}
defer db.Close()
_, err = db.Exec(`
CREATE TABLE IF NOT EXISTS outbox (
id BIGSERIAL PRIMARY KEY,
create_time TIMESTAMP WITH TIME ZONE NOT NULL,
kafka_topic VARCHAR(249) NOT NULL,
kafka_key VARCHAR(100) NOT NULL, -- pick your own key size
kafka_value VARCHAR(10000), -- pick your own value size
kafka_header_keys TEXT[] NOT NULL,
kafka_header_values TEXT[] NOT NULL,
leader_id UUID
)
`)
if err != nil {
panic(err)
}
}()
// Configure the harvester. It will use its own database and Kafka connections under the hood.
config := Config{
BaseKafkaConfig: KafkaConfigMap{
"bootstrap.servers": "localhost:9092",
},
DataSource: dataSource,
}
// Create a new harvester.
harvest, err := New(config)
if err != nil {
panic(err)
}
// Start it.
err = harvest.Start()
if err != nil {
panic(err)
}
// Wait indefinitely for it to end.
log.Fatal(harvest.Await())
```
Example (WithCustomLogger) [¶](#example-package-WithCustomLogger)
```
// Example: Configure GoHarvest with a Logrus binding for Scribe.
log := logrus.StandardLogger()
log.SetLevel(logrus.DebugLevel)
// Configure the custom logger using a binding.
config := Config{
BaseKafkaConfig: KafkaConfigMap{
"bootstrap.servers": "localhost:9092",
},
Scribe: scribe.New(scribelogrus.Bind()),
DataSource: "host=localhost port=5432 user=postgres password= dbname=postgres sslmode=disable",
}
// Create a new harvester.
harvest, err := New(config)
if err != nil {
panic(err)
}
// Start it.
err = harvest.Start()
if err != nil {
panic(err)
}
// Wait indefinitely for it to end.
log.Fatal(harvest.Await())
```
Example (WithEventHandler) [¶](#example-package-WithEventHandler)
```
// Example: Registering a custom event handler to get notified of leadership changes and metrics.
log := logrus.StandardLogger()
log.SetLevel(logrus.TraceLevel)
config := Config{
BaseKafkaConfig: KafkaConfigMap{
"bootstrap.servers": "localhost:9092",
},
DataSource: "host=localhost port=5432 user=postgres password= dbname=postgres sslmode=disable",
Scribe: scribe.New(scribelogrus.Bind()),
}
// Create a new harvester and register an event hander.
harvest, err := New(config)
if err != nil {
panic(err)
}
// Register a handler callback, invoked when an event occurs within goharvest.
// The callback is completely optional; it lets the application piggy-back on leader
// status updates, in case it needs to schedule some additional work (other than
// harvesting outbox records) that should only be run on one process at any given time.
harvest.SetEventHandler(func(e Event) {
switch event := e.(type) {
case LeaderAcquired:
// The application may initialise any state necessary to perform work as a leader.
log.Infof("Got event: leader acquired: %v", event.LeaderID())
case LeaderRefreshed:
// Indicates that a new leader ID was generated, as a result of having to remark
// a record (typically as due to an earlier delivery error). This is purely
// informational; there is nothing an application should do about this, other
// than taking note of the new leader ID if it has come to rely on it.
log.Infof("Got event: leader refreshed: %v", event.LeaderID())
case LeaderRevoked:
// The application may block the callback until it wraps up any in-flight
// activity. Only upon returning from the callback, will a new leader be elected.
log.Infof("Got event: leader revoked")
case LeaderFenced:
// The application must immediately terminate any ongoing activity, on the assumption
// that another leader may be imminently elected. Unlike the handling of LeaderRevoked,
// blocking in the callback will not prevent a new leader from being elected.
log.Infof("Got event: leader fenced")
case MeterRead:
// Periodic statistics regarding the harvester's throughput.
log.Infof("Got event: meter read: %v", event.Stats())
}
})
// Start harvesting in the background.
err = harvest.Start()
if err != nil {
panic(err)
}
// Wait indefinitely for it to end.
log.Fatal(harvest.Await())
```
Example (WithSaslSslAndCustomProducerConfig) [¶](#example-package-WithSaslSslAndCustomProducerConfig)
```
// Example: Using Kafka with sasl_ssl for authentication and encryption.
config := Config{
BaseKafkaConfig: KafkaConfigMap{
"bootstrap.servers": "localhost:9094",
"security.protocol": "sasl_ssl",
"ssl.ca.location": "ca-cert.pem",
"sasl.mechanism": "SCRAM-SHA-512",
"sasl.username": "alice",
"sasl.password": "alice-secret",
},
ProducerKafkaConfig: KafkaConfigMap{
"compression.type": "lz4",
},
DataSource: "host=localhost port=5432 user=postgres password= dbname=postgres sslmode=disable",
}
// Create a new harvester.
harvest, err := New(config)
if err != nil {
panic(err)
}
// Start harvesting in the background.
err = harvest.Start()
if err != nil {
panic(err)
}
// Wait indefinitely for the harvester to end.
log.Fatal(harvest.Await())
```
### Index [¶](#pkg-index)
* [func Duration(d time.Duration) *time.Duration](#Duration)
* [func Int(i int) *int](#Int)
* [func String(str string) *string](#String)
* [type Config](#Config)
* + [func Unmarshal(in []byte) (Config, error)](#Unmarshal)
* + [func (c *Config) SetDefaults()](#Config.SetDefaults)
+ [func (c Config) String() string](#Config.String)
+ [func (c Config) Validate() error](#Config.Validate)
* [type DatabaseBinding](#DatabaseBinding)
* + [func NewPostgresBinding(dataSource string, outboxTable string) (DatabaseBinding, error)](#NewPostgresBinding)
* [type DatabaseBindingProvider](#DatabaseBindingProvider)
* + [func StandardPostgresBindingProvider() DatabaseBindingProvider](#StandardPostgresBindingProvider)
* [type Event](#Event)
* [type EventHandler](#EventHandler)
* [type Harvest](#Harvest)
* + [func New(config Config) (Harvest, error)](#New)
* [type KafkaConfigMap](#KafkaConfigMap)
* [type KafkaConsumer](#KafkaConsumer)
* [type KafkaConsumerProvider](#KafkaConsumerProvider)
* + [func StandardKafkaConsumerProvider() KafkaConsumerProvider](#StandardKafkaConsumerProvider)
* [type KafkaHeader](#KafkaHeader)
* + [func (h KafkaHeader) String() string](#KafkaHeader.String)
* [type KafkaHeaders](#KafkaHeaders)
* [type KafkaProducer](#KafkaProducer)
* [type KafkaProducerProvider](#KafkaProducerProvider)
* + [func StandardKafkaProducerProvider() KafkaProducerProvider](#StandardKafkaProducerProvider)
* [type LeaderAcquired](#LeaderAcquired)
* + [func (e LeaderAcquired) LeaderID() uuid.UUID](#LeaderAcquired.LeaderID)
+ [func (e LeaderAcquired) String() string](#LeaderAcquired.String)
* [type LeaderFenced](#LeaderFenced)
* + [func (e LeaderFenced) String() string](#LeaderFenced.String)
* [type LeaderRefreshed](#LeaderRefreshed)
* + [func (e LeaderRefreshed) LeaderID() uuid.UUID](#LeaderRefreshed.LeaderID)
+ [func (e LeaderRefreshed) String() string](#LeaderRefreshed.String)
* [type LeaderRevoked](#LeaderRevoked)
* + [func (e LeaderRevoked) String() string](#LeaderRevoked.String)
* [type Limits](#Limits)
* + [func (l *Limits) SetDefaults()](#Limits.SetDefaults)
+ [func (l Limits) String() string](#Limits.String)
+ [func (l Limits) Validate() error](#Limits.Validate)
* [type MeterRead](#MeterRead)
* + [func (e MeterRead) Stats() metric.MeterStats](#MeterRead.Stats)
+ [func (e MeterRead) String() string](#MeterRead.String)
* [type NeliProvider](#NeliProvider)
* + [func StandardNeliProvider() NeliProvider](#StandardNeliProvider)
* [type OutboxRecord](#OutboxRecord)
* + [func (rec OutboxRecord) String() string](#OutboxRecord.String)
* [type State](#State)
#### Examples [¶](#pkg-examples)
* [Package](#example-package)
* [Package (WithCustomLogger)](#example-package-WithCustomLogger)
* [Package (WithEventHandler)](#example-package-WithEventHandler)
* [Package (WithSaslSslAndCustomProducerConfig)](#example-package-WithSaslSslAndCustomProducerConfig)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [Duration](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L15) [¶](#Duration)
```
func Duration(d [time](/time).[Duration](/time#Duration)) *[time](/time).[Duration](/time#Duration)
```
Duration is a convenience for deriving a pointer from a given Duration argument.
####
func [Int](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L20) [¶](#Int)
```
func Int(i [int](/builtin#int)) *[int](/builtin#int)
```
Int is a convenience for deriving a pointer from a given int argument.
####
func [String](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/db.go#L36) [¶](#String)
added in v0.2.0
```
func String(str [string](/builtin#string)) *[string](/builtin#string)
```
String is a convenience function that returns a pointer to the given str argument, for use with setting OutboxRecord.Value.
### Types [¶](#pkg-types)
####
type [Config](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L118) [¶](#Config)
```
type Config struct {
BaseKafkaConfig [KafkaConfigMap](#KafkaConfigMap) `yaml:"baseKafkaConfig"`
ProducerKafkaConfig [KafkaConfigMap](#KafkaConfigMap) `yaml:"producerKafkaConfig"`
LeaderTopic [string](/builtin#string) `yaml:"leaderTopic"`
LeaderGroupID [string](/builtin#string) `yaml:"leaderGroupID"`
DataSource [string](/builtin#string) `yaml:"dataSource"`
OutboxTable [string](/builtin#string) `yaml:"outboxTable"`
Limits [Limits](#Limits) `yaml:"limits"`
KafkaConsumerProvider [KafkaConsumerProvider](#KafkaConsumerProvider)
KafkaProducerProvider [KafkaProducerProvider](#KafkaProducerProvider)
DatabaseBindingProvider [DatabaseBindingProvider](#DatabaseBindingProvider)
NeliProvider [NeliProvider](#NeliProvider)
Scribe [scribe](/github.com/obsidiandynamics/libstdgo/scribe).[Scribe](/github.com/obsidiandynamics/libstdgo/scribe#Scribe)
Name [string](/builtin#string) `yaml:"name"`
}
```
Config encapsulates configuration for Harvest.
####
func [Unmarshal](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L210) [¶](#Unmarshal)
```
func Unmarshal(in [][byte](/builtin#byte)) ([Config](#Config), [error](/builtin#error))
```
Unmarshal a configuration from a byte slice, returning the configuration struct with pre-initialised defaults,
or an error if unmarshalling failed. The configuration is not validated prior to returning, in case further amendments are required by the caller. The caller should call Validate() independently.
####
func (*Config) [SetDefaults](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L170) [¶](#Config.SetDefaults)
```
func (c *[Config](#Config)) SetDefaults()
```
SetDefaults assigns the default values to optional fields.
####
func (Config) [String](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L152) [¶](#Config.String)
```
func (c [Config](#Config)) String() [string](/builtin#string)
```
Obtains a textual representation of the configuration.
####
func (Config) [Validate](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L135) [¶](#Config.Validate)
```
func (c [Config](#Config)) Validate() [error](/builtin#error)
```
Validate the Config, returning an error if invalid.
####
type [DatabaseBinding](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/db.go#L52) [¶](#DatabaseBinding)
```
type DatabaseBinding interface {
Mark(leaderID [uuid](/github.com/google/uuid).[UUID](/github.com/google/uuid#UUID), limit [int](/builtin#int)) ([][OutboxRecord](#OutboxRecord), [error](/builtin#error))
Purge(id [int64](/builtin#int64)) ([bool](/builtin#bool), [error](/builtin#error))
Reset(id [int64](/builtin#int64)) ([bool](/builtin#bool), [error](/builtin#error))
Dispose()
}
```
DatabaseBinding is an abstraction over the data access layer, allowing goharvest to use arbitrary database implementations.
####
func [NewPostgresBinding](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/postgres.go#L67) [¶](#NewPostgresBinding)
```
func NewPostgresBinding(dataSource [string](/builtin#string), outboxTable [string](/builtin#string)) ([DatabaseBinding](#DatabaseBinding), [error](/builtin#error))
```
NewPostgresBinding creates a Postgres binding for the given dataSource and outboxTable args.
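Mostly for completeness, a sketch of constructing a binding directly; in normal use goharvest builds it internally from Config.DataSource and Config.OutboxTable, and the connection string below is a placeholder.
```
package main
import (
	"log"
	"github.com/obsidiandynamics/goharvest"
)
func main() {
	// Placeholder data source; Dispose releases the underlying connection.
	binding, err := goharvest.NewPostgresBinding("postgres://user:pass@localhost/my_db", "outbox")
	if err != nil {
		log.Fatal(err)
	}
	defer binding.Dispose()
}
```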
####
type [DatabaseBindingProvider](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/db.go#L60) [¶](#DatabaseBindingProvider)
```
type DatabaseBindingProvider func(dataSource [string](/builtin#string), outboxTable [string](/builtin#string)) ([DatabaseBinding](#DatabaseBinding), [error](/builtin#error))
```
DatabaseBindingProvider is a factory for creating instances of a DatabaseBinding.
####
func [StandardPostgresBindingProvider](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/postgres.go#L62) [¶](#StandardPostgresBindingProvider)
```
func StandardPostgresBindingProvider() [DatabaseBindingProvider](#DatabaseBindingProvider)
```
StandardPostgresBindingProvider returns a DatabaseBindingProvider that connects to a real Postgres database.
####
type [Event](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L14) [¶](#Event)
```
type Event interface {
[fmt](/fmt).[Stringer](/fmt#Stringer)
}
```
Event encapsulates a GoHarvest event.
####
type [EventHandler](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L11) [¶](#EventHandler)
```
type EventHandler func(e [Event](#Event))
```
EventHandler is a callback function for handling GoHarvest events.
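As an illustration only, a handler can type-switch on the event types documented further down; this sketch assumes events are delivered by value, as the value receivers on their methods suggest, and attachLogger is a hypothetical helper, not part of the package.
```
package example
import (
	"log"
	"github.com/obsidiandynamics/goharvest"
)
// attachLogger is a hypothetical helper that logs selected goharvest events.
func attachLogger(h goharvest.Harvest) {
	h.SetEventHandler(func(e goharvest.Event) {
		// Assumes events are delivered by value, matching their value receivers.
		switch evt := e.(type) {
		case goharvest.LeaderAcquired:
			log.Printf("became leader %v", evt.LeaderID())
		case goharvest.LeaderRevoked:
			log.Print("leadership revoked")
		case goharvest.LeaderFenced:
			log.Print("leader fenced")
		case goharvest.MeterRead:
			log.Printf("throughput: %v", evt.Stats())
		default:
			log.Printf("event: %s", e)
		}
	})
}
```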
####
type [Harvest](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/harvest.go#L49) [¶](#Harvest)
```
type Harvest interface {
Start() [error](/builtin#error)
Stop()
Await() [error](/builtin#error)
State() [State](#State)
IsLeader() [bool](/builtin#bool)
LeaderID() *[uuid](/github.com/google/uuid).[UUID](/github.com/google/uuid#UUID)
InFlightRecords() [int](/builtin#int)
InFlightRecordKeys() [][string](/builtin#string)
SetEventHandler(eventHandler [EventHandler](#EventHandler))
}
```
Harvest performs background harvesting of a transactional outbox table.
####
func [New](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/harvest.go#L84) [¶](#New)
```
func New(config [Config](#Config)) ([Harvest](#Harvest), [error](/builtin#error))
```
New creates a new Harvest instance from the supplied config.
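A minimal, hedged sketch of the lifecycle around New: start the harvester and block on Await. The configuration shown is a placeholder and may need more fields for your environment.
```
package main
import (
	"log"
	"github.com/obsidiandynamics/goharvest"
)
func main() {
	// Placeholder settings; substitute your own Kafka and Postgres details.
	cfg := goharvest.Config{
		BaseKafkaConfig: goharvest.KafkaConfigMap{
			"bootstrap.servers": "localhost:9092",
		},
		DataSource:  "postgres://user:pass@localhost/my_db",
		OutboxTable: "outbox",
	}
	h, err := goharvest.New(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := h.Start(); err != nil {
		log.Fatal(err)
	}
	defer h.Stop()
	// Await blocks until the harvester terminates.
	if err := h.Await(); err != nil {
		log.Fatal(err)
	}
}
```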
####
type [KafkaConfigMap](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L115) [¶](#KafkaConfigMap)
```
type KafkaConfigMap map[[string](/builtin#string)]interface{}
```
KafkaConfigMap represents the Kafka key-value configuration.
####
type [KafkaConsumer](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/kafka.go#L15) [¶](#KafkaConsumer)
```
type KafkaConsumer interface {
Subscribe(topic [string](/builtin#string), rebalanceCb [kafka](/gopkg.in/confluentinc/confluent-kafka-go.v1/kafka).[RebalanceCb](/gopkg.in/confluentinc/confluent-kafka-go.v1/kafka#RebalanceCb)) [error](/builtin#error)
ReadMessage(timeout [time](/time).[Duration](/time#Duration)) (*[kafka](/gopkg.in/confluentinc/confluent-kafka-go.v1/kafka).[Message](/gopkg.in/confluentinc/confluent-kafka-go.v1/kafka#Message), [error](/builtin#error))
Close() [error](/builtin#error)
}
```
KafkaConsumer specifies the methods of a minimal consumer.
####
type [KafkaConsumerProvider](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/kafka.go#L22) [¶](#KafkaConsumerProvider)
```
type KafkaConsumerProvider func(conf *[KafkaConfigMap](#KafkaConfigMap)) ([KafkaConsumer](#KafkaConsumer), [error](/builtin#error))
```
KafkaConsumerProvider is a factory for creating KafkaConsumer instances.
####
func [StandardKafkaConsumerProvider](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/kafka.go#L39) [¶](#StandardKafkaConsumerProvider)
```
func StandardKafkaConsumerProvider() [KafkaConsumerProvider](#KafkaConsumerProvider)
```
StandardKafkaConsumerProvider returns a factory for creating a conventional KafkaConsumer, backed by the real client API.
####
type [KafkaHeader](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/db.go#L11) [¶](#KafkaHeader)
```
type KafkaHeader struct {
Key [string](/builtin#string)
Value [string](/builtin#string)
}
```
KafkaHeader is a key-value tuple representing a single header entry.
####
func (KafkaHeader) [String](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/db.go#L17) [¶](#KafkaHeader.String)
```
func (h [KafkaHeader](#KafkaHeader)) String() [string](/builtin#string)
```
String obtains a textual representation of a KafkaHeader.
####
type [KafkaHeaders](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/db.go#L22) [¶](#KafkaHeaders)
```
type KafkaHeaders [][KafkaHeader](#KafkaHeader)
```
KafkaHeaders is a slice of KafkaHeader tuples.
####
type [KafkaProducer](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/kafka.go#L25) [¶](#KafkaProducer)
```
type KafkaProducer interface {
Events() chan [kafka](/gopkg.in/confluentinc/confluent-kafka-go.v1/kafka).[Event](/gopkg.in/confluentinc/confluent-kafka-go.v1/kafka#Event)
Produce(msg *[kafka](/gopkg.in/confluentinc/confluent-kafka-go.v1/kafka).[Message](/gopkg.in/confluentinc/confluent-kafka-go.v1/kafka#Message), deliveryChan chan [kafka](/gopkg.in/confluentinc/confluent-kafka-go.v1/kafka).[Event](/gopkg.in/confluentinc/confluent-kafka-go.v1/kafka#Event)) [error](/builtin#error)
Close()
}
```
KafkaProducer specifies the methods of a minimal producer.
####
type [KafkaProducerProvider](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/kafka.go#L32) [¶](#KafkaProducerProvider)
```
type KafkaProducerProvider func(conf *[KafkaConfigMap](#KafkaConfigMap)) ([KafkaProducer](#KafkaProducer), [error](/builtin#error))
```
KafkaProducerProvider is a factory for creating KafkaProducer instances.
####
func [StandardKafkaProducerProvider](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/kafka.go#L46) [¶](#StandardKafkaProducerProvider)
```
func StandardKafkaProducerProvider() [KafkaProducerProvider](#KafkaProducerProvider)
```
StandardKafkaProducerProvider returns a factory for creating a conventional KafkaProducer, backed by the real client API.
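For testing or instrumentation, any function with the provider signature can be assigned to Config.KafkaProducerProvider. The wrapper below is purely illustrative: it logs the effective configuration and then delegates to the standard provider.
```
package main
import (
	"log"
	"github.com/obsidiandynamics/goharvest"
)
func main() {
	// Illustrative provider that logs the config before delegating to the
	// standard, real-client-backed provider.
	loggingProvider := func(conf *goharvest.KafkaConfigMap) (goharvest.KafkaProducer, error) {
		log.Printf("creating producer with config: %v", *conf)
		return goharvest.StandardKafkaProducerProvider()(conf)
	}
	cfg := goharvest.Config{
		KafkaProducerProvider: loggingProvider,
		// Remaining fields omitted for brevity.
	}
	_ = cfg
}
```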
####
type [LeaderAcquired](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L19) [¶](#LeaderAcquired)
```
type LeaderAcquired struct {
// contains filtered or unexported fields
}
```
LeaderAcquired is emitted upon successful acquisition of leader status.
####
func (LeaderAcquired) [LeaderID](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L29) [¶](#LeaderAcquired.LeaderID)
```
func (e [LeaderAcquired](#LeaderAcquired)) LeaderID() [uuid](/github.com/google/uuid).[UUID](/github.com/google/uuid#UUID)
```
LeaderID returns the local UUID of the elected leader.
####
func (LeaderAcquired) [String](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L24) [¶](#LeaderAcquired.String)
```
func (e [LeaderAcquired](#LeaderAcquired)) String() [string](/builtin#string)
```
String obtains a textual representation of the LeaderAcquired event.
####
type [LeaderFenced](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L57) [¶](#LeaderFenced)
```
type LeaderFenced struct{}
```
LeaderFenced is emitted when the leader status has been revoked.
####
func (LeaderFenced) [String](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L60) [¶](#LeaderFenced.String)
```
func (e [LeaderFenced](#LeaderFenced)) String() [string](/builtin#string)
```
String obtains a textual representation of the LeaderFenced event.
####
type [LeaderRefreshed](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L34) [¶](#LeaderRefreshed)
```
type LeaderRefreshed struct {
// contains filtered or unexported fields
}
```
LeaderRefreshed is emitted when a new leader ID is generated as a result of a remarking request.
####
func (LeaderRefreshed) [LeaderID](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L44) [¶](#LeaderRefreshed.LeaderID)
```
func (e [LeaderRefreshed](#LeaderRefreshed)) LeaderID() [uuid](/github.com/google/uuid).[UUID](/github.com/google/uuid#UUID)
```
LeaderID returns the local UUID of the elected leader.
####
func (LeaderRefreshed) [String](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L39) [¶](#LeaderRefreshed.String)
```
func (e [LeaderRefreshed](#LeaderRefreshed)) String() [string](/builtin#string)
```
String obtains a textual representation of the LeaderRefreshed event.
####
type [LeaderRevoked](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L49) [¶](#LeaderRevoked)
```
type LeaderRevoked struct{}
```
LeaderRevoked is emitted when the leader status has been revoked.
####
func (LeaderRevoked) [String](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L52) [¶](#LeaderRevoked.String)
```
func (e [LeaderRevoked](#LeaderRevoked)) String() [string](/builtin#string)
```
String obtains a textual representation of the LeaderRevoked event.
####
type [Limits](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L25) [¶](#Limits)
```
type Limits struct {
IOErrorBackoff *[time](/time).[Duration](/time#Duration) `yaml:"ioErrorBackoff"`
PollDuration *[time](/time).[Duration](/time#Duration) `yaml:"pollDuration"`
MinPollInterval *[time](/time).[Duration](/time#Duration) `yaml:"minPollInterval"`
MaxPollInterval *[time](/time).[Duration](/time#Duration) `yaml:"maxPollInterval"`
HeartbeatTimeout *[time](/time).[Duration](/time#Duration) `yaml:"heartbeatTimeout"`
DrainInterval *[time](/time).[Duration](/time#Duration) `yaml:"drainInterval"`
QueueTimeout *[time](/time).[Duration](/time#Duration) `yaml:"queueTimeout"`
MarkBackoff *[time](/time).[Duration](/time#Duration) `yaml:"markBackoff"`
MaxInFlightRecords *[int](/builtin#int) `yaml:"maxInFlightRecords"`
SendConcurrency *[int](/builtin#int) `yaml:"sendConcurrency"`
SendBuffer *[int](/builtin#int) `yaml:"sendBuffer"`
MarkQueryRecords *[int](/builtin#int) `yaml:"markQueryRecords"`
MinMetricsInterval *[time](/time).[Duration](/time#Duration) `yaml:"minMetricsInterval"`
}
```
Limits configuration.
####
func (*Limits) [SetDefaults](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L54) [¶](#Limits.SetDefaults)
```
func (l *[Limits](#Limits)) SetDefaults()
```
SetDefaults assigns the defaults for optional values.
####
func (Limits) [String](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L96) [¶](#Limits.String)
```
func (l [Limits](#Limits)) String() [string](/builtin#string)
```
String obtains a textual representation of Limits.
####
func (Limits) [Validate](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/config.go#L76) [¶](#Limits.Validate)
```
func (l [Limits](#Limits)) Validate() [error](/builtin#error)
```
Validate the Limits configuration, returning an error if invalid.
####
type [MeterRead](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L65) [¶](#MeterRead)
```
type MeterRead struct {
// contains filtered or unexported fields
}
```
MeterRead is emitted when the internal throughput Meter has been read.
####
func (MeterRead) [Stats](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L75) [¶](#MeterRead.Stats)
```
func (e [MeterRead](#MeterRead)) Stats() [metric](/github.com/obsidiandynamics/[email protected]/metric).[MeterStats](/github.com/obsidiandynamics/[email protected]/metric#MeterStats)
```
Stats embedded in the MeterRead event.
####
func (MeterRead) [String](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/event.go#L70) [¶](#MeterRead.String)
```
func (e [MeterRead](#MeterRead)) String() [string](/builtin#string)
```
String obtains a textual representation of the MeterRead event.
####
type [NeliProvider](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/neli.go#L6) [¶](#NeliProvider)
```
type NeliProvider func(config [goneli](/github.com/obsidiandynamics/goneli).[Config](/github.com/obsidiandynamics/goneli#Config), barrier [goneli](/github.com/obsidiandynamics/goneli).[Barrier](/github.com/obsidiandynamics/goneli#Barrier)) ([goneli](/github.com/obsidiandynamics/goneli).[Neli](/github.com/obsidiandynamics/goneli#Neli), [error](/builtin#error))
```
NeliProvider is a factory for creating Neli instances.
####
func [StandardNeliProvider](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/neli.go#L9) [¶](#StandardNeliProvider)
```
func StandardNeliProvider() [NeliProvider](#NeliProvider)
```
StandardNeliProvider returns a factory for creating a conventional Neli instance, backed by the real client API.
####
type [OutboxRecord](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/db.go#L25) [¶](#OutboxRecord)
```
type OutboxRecord struct {
ID [int64](/builtin#int64)
CreateTime [time](/time).[Time](/time#Time)
KafkaTopic [string](/builtin#string)
KafkaKey [string](/builtin#string)
KafkaValue *[string](/builtin#string)
KafkaHeaders [KafkaHeaders](#KafkaHeaders)
LeaderID *[uuid](/github.com/google/uuid).[UUID](/github.com/google/uuid#UUID)
}
```
OutboxRecord depicts a single entry in the outbox table. It can be used for both reading and writing operations.
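The sketch below is not taken from the package documentation; it only shows how outbox row fields map onto OutboxRecord. In practice the application inserts rows into the outbox table within its business transaction and the harvester reads them back, while ID, CreateTime and LeaderID are assigned by the database and the harvester respectively.
```
package main
import (
	"fmt"
	"github.com/obsidiandynamics/goharvest"
)
func main() {
	// Hypothetical record contents; the value uses the String helper because
	// KafkaValue is a nullable *string.
	rec := goharvest.OutboxRecord{
		KafkaTopic: "customer.events",
		KafkaKey:   "customer-42",
		KafkaValue: goharvest.String(`{"event":"created"}`),
		KafkaHeaders: goharvest.KafkaHeaders{
			{Key: "source", Value: "billing-service"},
		},
	}
	fmt.Println(rec)
}
```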
####
func (OutboxRecord) [String](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/db.go#L41) [¶](#OutboxRecord.String)
```
func (rec [OutboxRecord](#OutboxRecord)) String() [string](/builtin#string)
```
String provides a textual representation of an OutboxRecord.
####
type [State](https://github.com/obsidiandynamics/goharvest/blob/v0.3.1/harvest.go#L23) [¶](#State)
```
type State [int](/builtin#int)
```
State of the Harvest instance.
```
const (
// Created — initialised (configured) but not started.
Created [State](#State) = [iota](/builtin#iota)
// Running — currently running.
Running
// Stopping — in the process of being stopped. I.e. Stop() has been invoked, but workers are still running.
Stopping
// Stopped — has been completely disposed of.
Stopped
)
```
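A small, hypothetical monitoring helper combining the State constants with the leadership and in-flight accessors on Harvest; it is a sketch only and assumes h has already been started.
```
package example
import (
	"log"
	"time"
	"github.com/obsidiandynamics/goharvest"
)
// monitor periodically reports leadership and in-flight counts while the
// harvester remains in the Running state.
func monitor(h goharvest.Harvest) {
	for h.State() == goharvest.Running {
		if h.IsLeader() {
			log.Printf("leading as %v, %d records in flight",
				h.LeaderID(), h.InFlightRecords())
		}
		time.Sleep(10 * time.Second)
	}
	log.Printf("harvester left the Running state: %v", h.State())
}
```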
Package ‘spectralR’
August 24, 2023
Type Package
Title Obtain and Visualize Spectral Reflectance Data for Earth Surface
Polygons
Version 0.1.3
Description Tools for obtaining, processing, and visualizing spectral reflectance data for the
user-defined land or water surface classes for visual exploring in which wavelength the classes
differ. Input should be a shapefile with polygons of surface classes (it might be different
habitat types, crops, vegetation, etc.). The Sentinel-2 L2A satellite mission optical bands pixel
data are obtained through the Google Earth Engine service (<https://earthengine.google.com/>)
and used as a source of spectral data.
Depends R (>= 4.1.0)
Imports rgee (>= 1.1.3), geojsonio (>= 0.9.4), sf (>= 1.0-7), dplyr
(>= 1.0.9), ggplot2 (>= 3.3.5), reshape2 (>= 1.4.0), rlang (>=
1.0.0), tibble (>= 3.1.0), tidyr (>= 1.2.0)
Suggests covr, tinytest (>= 1.3.0)
License GPL-3
URL https://github.com/olehprylutskyi/spectralR/
BugReports https://github.com/olehprylutskyi/spectralR/issues
Encoding UTF-8
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [ctb],
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-24 09:20:02 UTC
R topics documented:
get.pixel.data
prepare.vector.data
spectral.curves.plot
spectralR
stat.summary.plot
violin.plot
get.pixel.data Obtain Sentinel-2 spectral reflectance data for user-defined vector
polygons
Description
The function takes an sf polygon object, created through the prepare.vector.data function, and retrieves
a data frame with brightness values for each pixel intersected with the polygons, for each optical band
of the Sentinel-2 sensor, marked according to the label of surface class from the polygons.
Usage
get.pixel.data(sf_data, startday, endday, cloud_threshold, scale_value)
Arguments
sf_data polygons of surface classes as a sf object, created through prepare.vector.data
startday starting day for Sentinel image collection, as "YYYY-MM-DD". See Note 1
below
endday final day for Sentinel image collection, as "YYYY-MM-DD"
cloud_threshold
maximum percent of cloud-covered pixels per image, by which individual images are filtered out of the collection
scale_value the scale of resulting satellite images in meters (pixel size). See Note 2 below
Value
A dataframe (non-spatial) with unscaled reflectance data for each pixel of median satellite image,
for each optical band of Sentinel-2 sensor, marked according to the label of surface class from the
polygons.
Note 1. Particular satellite imagery is typically not ready for instant analysis - it contains clouds,
cloud shadows, atmospheric aerosols, and may not cover all the territory of interest. Another
issue is that each particular pixel slightly differs in reflectance between images taken on different
days due to differences in atmospheric conditions and the angle of sunlight at the moments the images
were taken. Google Earth Engine has its own built-in algorithms for image pre-processing, atmospheric
correction and mosaicking, which allow a ready-to-use, rectified image to be obtained. The approach
used in this script is to find a median value for each pixel across several images within each of
the 10 optical bands and thereby make a composite image. To define the set of images over which
the median is calculated, a timespan must be set by its starting and final days. The Sentinel-2
apparatus takes a picture once every 5 days, so with a month-long timespan each pixel value can be
expected to be calculated from 5 to 6 values.
Note 2. You may set up any image resolution (pixel size) for satellite imagery with GEE, but it is
hardly reasonable to set a finer resolution than the finest one offered by the satellite source. The
finest resolution for Sentinel data is 10 m, and using a higher scale_value requires less computational
resources and returns a smaller resulting dataframe. Although sampling of satellite data is performed
in the cloud, there are some memory limitations imposed by GEE itself. If you are about to sample really
large areas, consider setting a higher ’scale’ value (100, 1000). More about GEE best practices:
https://developers.google.com/earth-engine/guides/best_practices
Examples
## Not run:
# Download spectral reflectance data
reflectance <- get.pixel.data(
sf_data = sf_df,
startday = "2019-05-15",
endday = "2019-06-30",
cloud_threshold = 10,
scale_value = 100)
head(reflectance)
## End(Not run)
prepare.vector.data Prepare vector data for further reflectance data sampling
Description
The function takes a shapefile with polygons of different surface classes (habitats, crops, vegetation,
etc.), and retrieves a ready-for-sampling sf object.
Usage
prepare.vector.data(shapefile_name, label_field)
Arguments
shapefile_name shapefile name (should be within the working directory; absolute paths were not tested)
label_field name of the field which contains class labels
Value
sf object with label (character) and class (integer) variables, as well as the geometry of each polygon,
ready for further processing by rgee.
Examples
# Load example data
load(system.file("testdata/reflectance_test_data.RData", package = "spectralR"))
# Prepare vector data
sf_df <- prepare.vector.data(
shapefile_name = system.file("extdata/test_shapefile.shp", package = "spectralR"),
label_field = "veget_type")
head(sf_df)
spectral.curves.plot Make spectral reflectance curves for defined classes of surface
Description
Make spectral reflectance curves for defined classes of surface
Usage
spectral.curves.plot(data, target_classes = NULL)
Arguments
data reflectance data as dataframe with pixel values for Sentinel optical bands B2,
B3, B4, B5, B6, B7, B8, B8A, B11, B12
target_classes list of the classes of surface which should be highlighted; others will be turned
gray, as a background. Default is NULL.
Value
ggplot2 object with basic visual aesthetics, represents smoother lines with confidence intervals for
each surface class. Default aesthetic is smoother curve (geom_smooth). May be time-consuming
depending on input dataframe size. See https://ggplot2.tidyverse.org/reference/geom_smooth.html
for more details.
Examples
# Load example data
load(system.file("testdata/reflectance_test_data.RData", package = "spectralR"))
# Create a plot
p <- spectral.curves.plot(data = reflectance)
# Customize a plot
p +
ggplot2::labs(x = 'Wavelength, nm', y = 'Reflectance',
colour = "Surface classes",
fill = "Surface classes",
title = "Spectral reflectance curves for different classes of surface",
caption = 'Data: Sentinel-2 Level-2A')+
ggplot2::theme_minimal()
# Highlight only specific target classes
spectral.curves.plot(
data = reflectance,
target_classes = list("meadow", "coniferous_forest")
)
spectralR spectralR: A package for obtaining and visualizing spectral re-
flectance data for earth surface polygons
Description
This package aims to obtain, process, and visualize spectral reflectance data for the user-defined
land or water surface classes for visual exploring in which wavelength the classes differ. Input
should be a shapefile with polygons of surface classes (it might be different habitat types, crops,
vegetation, etc.). The Sentinel-2 L2A satellite mission optical bands pixel data are obtained through
the Google Earth Engine service and used as a source of spectral data.
Currently the spectralR package provides several main functions:
get.pixel.data, prepare.vector.data, spectral.curves.plot, stat.summary.plot, violin.plot
Author(s)
Maintainer: <NAME> <<EMAIL>>
Other contributors:
• <NAME> <<EMAIL>> [contributor]
• <NAME> <<EMAIL>> [contributor]
See Also
Useful links:
• https://github.com/olehprylutskyi/spectralR/
• Report bugs at https://github.com/olehprylutskyi/spectralR/issues
stat.summary.plot Statistical summary plot of reflectance values
Description
Make a plot with statistical summary of reflectance values (mean, mean-standard deviation, mean+standard
deviation) for defined classes of surface.
Usage
stat.summary.plot(
data,
target_classes = NULL,
point_size = 0.6,
fatten = 4,
x_dodge = 0.2
)
Arguments
data reflectance data as dataframe with pixel values for Sentinel optical bands B2,
B3, B4, B5, B6, B7, B8, B8A, B11, B12
target_classes list of the classes of surface which should be highlighted; others will be turned
gray, as a background. Default is NULL.
point_size Size of points on a plot
fatten A multiplicative factor used to increase the size of points in comparison with
standard deviation lines
x_dodge Position adjustment of points along the X-axis
Value
ggplot2 object with basic visual aesthetics. Default aesthetics are a line with a statistical summary for
each satellite band ([geom_line()] + [geom_pointrange()]). See the [geom_linerange](https://ggplot2.tidyverse.org/reference/geom_linerange.html)
and [geom_path](https://ggplot2.tidyverse.org/reference/geom_path.html) documentation for more
details.
Wavelength values (nm) are acquired from the mean known value for each optical band of the Sentinel-2
sensor: https://en.wikipedia.org/wiki/Sentinel-2
Examples
# Load example data
load(system.file("testdata/reflectance_test_data.RData", package = "spectralR"))
# Create a summary plot
p <- stat.summary.plot(data = reflectance)
# Customize a plot
p +
ggplot2::labs(x = 'Sentinel-2 bands', y = 'Reflectance',
colour = "Surface classes",
title = "Reflectance for different surface classes",
caption='Data: Sentinel-2 Level-2A\nmean ± standard deviation')+
ggplot2::theme_minimal()
# Highlight only specific target classes
stat.summary.plot(
data = reflectance,
target_classes = list("meadow", "coniferous_forest")
)
violin.plot Create violin plots of reflectance per band for each surface class
Description
Create violin plots of reflectance per band for each surface class
Usage
violin.plot(data)
Arguments
data reflectance data as dataframe with pixel values for Sentinel optical bands B2,
B3, B4, B5, B6, B7, B8, B8A, B11, B12
Value
ggplot2 object with basic visual aesthetics. Default aesthetics is violin plot for each satellite band
(geom_violin). See https://ggplot2.tidyverse.org/reference/geom_violin.html for more details.
Examples
# Load example data
load(system.file("testdata/reflectance_test_data.RData", package = "spectralR"))
# Create a plot
p3 <- violin.plot(data = reflectance)
# Customize a plot
p3 +
ggplot2::labs(x='Surface class',y='Reflectance',
fill="Surface classes",
title = "Reflectance for different surface classes",
caption='Data: Sentinel-2 Level-2A')+
ggplot2::theme_minimal()
Package ‘fastText’
October 13, 2022
Type Package
Title Efficient Learning of Word Representations and Sentence
Classification
Version 1.0.3
Date 2022-10-08
URL https://github.com/mlampros/fastText
BugReports https://github.com/mlampros/fastText/issues
Description An interface to the 'fastText' <https://github.com/facebookresearch/fastText> library
for efficient learning of word representations and sentence classification. The 'fastText' algorithm
is explained in detail in (i) ``Enriching Word Vectors with subword Information'', <NAME>, <NAME>,
<NAME>, <NAME>, 2017, <doi:10.1162/tacl_a_00051>; (ii) ``Bag of Tricks for Efficient Text
Classification'', <NAME>, <NAME>, <NAME>, <NAME>, 2017, <doi:10.18653/v1/e17-2068>;
(iii) ``FastText.zip: Compressing text classification models'', <NAME>, <NAME>, <NAME>, <NAME>,
<NAME>, <NAME>, 2016, <arXiv:1612.03651>.
License MIT + file LICENSE
SystemRequirements Generally, fastText builds on modern Mac OS and
Linux distributions. Since it uses some C++11 features, it
requires a compiler with good C++11 support. These include a
(g++-4.7.2 or newer) or a (clang-3.3 or newer).
Encoding UTF-8
Imports Rcpp (>= 1.0.0), ggplot2, grid, utils, glue, data.table, stats
Depends R(>= 3.2.3)
LinkingTo Rcpp
Suggests testthat, covr, knitr, rmarkdown
VignetteBuilder knitr
RoxygenNote 7.2.1
NeedsCompilation yes
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-8024-1546>),
Facebook Inc [cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-10-08 06:30:02 UTC
R topics documented:
fasttext_interface
language_identification
plot_progress_logs
printAnalogiesUsage
printDumpUsage
printNNUsage
printPredictUsage
printPrintNgramsUsage
printPrintSentenceVectorsUsage
printPrintWordVectorsUsage
printQuantizeUsage
printTestLabelUsage
printTestUsage
printUsage
print_parameters
fasttext_interface Interface for the fasttext library
Description
Interface for the fasttext library
Usage
fasttext_interface(
list_params,
path_output = "",
MilliSecs = 100,
path_input = "",
remove_previous_file = TRUE,
print_process_time = FALSE
)
Arguments
list_params a list of valid parameters
path_output a character string specifying the file path where the process-logs (or output in
generally) should be saved
MilliSecs an integer specifying the delay in milliseconds when printing the results to the
specified path_output
path_input a character string specifying the path to the input data file
remove_previous_file
a boolean. If TRUE, in case that the path_output is not an empty string (""),
then an existing file with the same output name will be removed
print_process_time
a boolean. If TRUE then the processing time of the function will be printed out
in the R session
Details
This function allows the user to run the various methods included in the fasttext library from within
R
The "output" parameter which exists in the named list (see examples section) and is passed to the
"list_params" parameter of the "fasttext_interface()" function, is a file path and not a directory name
and will actually return two files (a *.vec* and a *.bin*) to the output directory.
Value
a vector of class character that includes the parameters and file paths used as input to the function
References
https://github.com/facebookresearch/fastText
https://github.com/facebookresearch/fastText/blob/master/docs/supervised-tutorial.md
Examples
## Not run:
library(fastText)
####################################################################################
# If the user intends to run the following examples then he / she must replace #
# the 'input', 'output', 'path_input', 'path_output', 'model' and 'test_data' file #
# paths depending on where the data are located or should be saved! #
# ( 'tempdir()' is used here as an example folder ) #
####################################################################################
# ------------------------------------------------
# print information for the Usage of each function [ parameters ]
# ------------------------------------------------
fastText::printUsage()
fastText::printTestUsage()
fastText::printTestLabelUsage()
fastText::printQuantizeUsage()
fastText::printPrintWordVectorsUsage()
fastText::printPrintSentenceVectorsUsage()
fastText::printPrintNgramsUsage()
fastText::printPredictUsage()
fastText::printNNUsage()
fastText::printDumpUsage()
fastText::printAnalogiesUsage()
fastText::print_parameters(command = "supervised")
# -----------------------------------------------------------------------
# In case that the 'command' is one of 'cbow', 'skipgram' or 'supervised'
# -----------------------------------------------------------------------
list_params = list(command = 'cbow',
lr = 0.1,
dim = 200,
input = file.path(tempdir(), "doc.txt"),
output = tempdir(),
verbose = 2,
thread = 1)
res = fasttext_interface(list_params,
path_output = file.path(tempdir(),"model_logs.txt"),
MilliSecs = 100)
# ---------------------
# 'supervised' training
# ---------------------
list_params = list(command = 'supervised',
lr = 0.1,
dim = 200,
input = file.path(tempdir(), "cooking.train"),
output = file.path(tempdir(), "model_cooking"),
verbose = 2,
thread = 1)
res = fasttext_interface(list_params,
path_output = file.path(tempdir(), 'logs_supervise.txt'),
MilliSecs = 5)
# ---------------------------------------
# In case that the 'command' is 'predict'
# ---------------------------------------
list_params = list(command = 'predict',
model = file.path(tempdir(), 'model_cooking.bin'),
test_data = file.path(tempdir(), 'cooking.valid'),
k = 1,
th = 0.0)
res = fasttext_interface(list_params,
path_output = file.path(tempdir(), 'predict_valid.txt'))
# ------------------------------------
# In case that the 'command' is 'test' [ k = 5 , means that precision and recall are at 5 ]
# ------------------------------------
list_params = list(command = 'test',
model = file.path(tempdir(), 'model_cooking.bin'),
test_data = file.path(tempdir(), 'cooking.valid'),
k = 5,
th = 0.0)
res = fasttext_interface(list_params) # It only prints 'Precision', 'Recall' to the R session
# ------------------------------------------
# In case that the 'command' is 'test-label' [ k = 5 , means that precision and recall are at 5 ]
# ------------------------------------------
list_params = list(command = 'test-label',
model = file.path(tempdir(), 'model_cooking.bin'),
test_data = file.path(tempdir(), 'cooking.valid'),
k = 5,
th = 0.0)
res = fasttext_interface(list_params, # prints also 'Precision', 'Recall' to R session
path_output = file.path(tempdir(), "test_valid.txt"))
# -----------------
# quantize function [ it will take a .bin file and return an .ftz file ]
# -----------------
# the quantize function is currently (01/02/2019) single-threaded
# https://github.com/facebookresearch/fastText/issues/353#issuecomment-342501742
list_params = list(command = 'quantize',
input = file.path(tempdir(), 'model_cooking.bin'),
output = file.path(tempdir(), gsub('.bin', '.ftz', 'model_cooking.bin')))
res = fasttext_interface(list_params)
# -----------------
# quantize function [ by using the optional parameters 'qnorm' and 'qout' ]
# -----------------
list_params = list(command = 'quantize',
input = file.path(tempdir(), 'model_cooking.bin'),
output = file.path(tempdir(), gsub('.bin', '.ftz', 'model_cooking.bin')),
qnorm = TRUE,
qout = TRUE)
res = fasttext_interface(list_params)
# ------------------
# print-word-vectors [ each line of the 'queries.txt' must be a single word ]
# ------------------
list_params = list(command = 'print-word-vectors',
model = file.path(tempdir(), 'model_cooking.bin'))
res = fasttext_interface(list_params,
path_input = file.path(tempdir(), 'queries.txt'),
path_output = file.path(tempdir(), 'print_vecs_file.txt'))
# ----------------------
# print-sentence-vectors [ See also the comments in the main.cc file about the input-file ]
# ----------------------
list_params = list(command = 'print-sentence-vectors',
model = file.path(tempdir(), 'model_cooking.bin'))
res = fasttext_interface(list_params,
path_input = file.path(tempdir(), 'text.txt'),
path_output = file.path(tempdir(), 'SENTENCE_VECs.txt'))
# ------------
# print-ngrams [ print to console or to output-file ]
# ------------
list_params = list(command = 'skipgram', lr = 0.1, dim = 200,
input = file.path(tempdir(), "doc.txt"),
output = tempdir(), verbose = 2, thread = 1,
minn = 2, maxn = 2)
res = fasttext_interface(list_params,
path_output = file.path(tempdir(), "ngram_out.txt"),
MilliSecs = 5)
list_params = list(command = 'print-ngrams',
model = file.path(tempdir(), 'ngram_out.bin'),
word = 'word') # print n-grams for specific word
res = fasttext_interface(list_params, path_output = "") # print output to console
res = fasttext_interface(list_params,
path_output = file.path(tempdir(), "NGRAMS.txt")) # output to file
# -------------
# 'nn' function
# -------------
list_params = list(command = 'nn',
model = file.path(tempdir(), 'model_cooking.bin'),
k = 20,
query_word = 'word') # a 'query_word' is required
res = fasttext_interface(list_params,
path_output = file.path(tempdir(), "nn_output.txt"))
# ---------
# analogies [ in the output file each analogy-triplet-result is separated with a newline ]
# ---------
list_params = list(command = 'analogies',
model = file.path(tempdir(), 'model_cooking.bin'),
k = 5)
res = fasttext_interface(list_params,
path_input = file.path(tempdir(), 'analogy_queries.txt'),
path_output = file.path(tempdir(), 'analogies_output.txt'))
# -------------
# dump function [ the 'option' param should be one of 'args', 'dict', 'input' or 'output' ]
# -------------
list_params = list(command = 'dump',
model = file.path(tempdir(), 'model_cooking.bin'),
option = 'args')
res = fasttext_interface(list_params,
path_output = file.path(tempdir(), "DUMP.txt"))
## End(Not run)
language_identification
Language Identification using fastText
Description
Language Identification using fastText
Usage
language_identification(
input_obj,
pre_trained_language_model_path,
k = 1,
th = 0,
threads = 1,
verbose = FALSE
)
Arguments
input_obj either a valid character string to a valid path where each line represents a different
text extract or a vector of text extracts
pre_trained_language_model_path
a valid character string to the pre-trained language identification model path, for
more info see https://fasttext.cc/docs/en/language-identification.html
k predict top k labels (1 by default)
th probability threshold (0.0 by default)
threads an integer specifying the number of threads to run in parallel. This parameter
applies only if k > 1
verbose if TRUE then information will be printed out in the console
Value
an object of class data.table which includes two or more columns with the names ’iso_lang_N’ and
’prob_N’, where ’N’ ranges from 1 to the ’k’ input parameter
References
https://fasttext.cc/docs/en/language-identification.html
https://becominghuman.ai/a-handy-pre-trained-model-for-language-identification-cadd89db9db8
Examples
library(fastText)
vec_txt = c("Incapaz de distinguir la luna y la cara de esta chica,
Las estrellas se ponen nerviosas en el cielo",
"Unable to tell apart the moon and this girl's face,
Stars are flustered up in the sky.")
file_pretrained = system.file("language_identification/lid.176.ftz", package = "fastText")
dtbl_out = language_identification(input_obj = vec_txt,
pre_trained_language_model_path = file_pretrained,
k = 3,
th = 0.0,
verbose = TRUE)
dtbl_out
plot_progress_logs Plot the progress of loss, learning-rate and word-counts
Description
Plot the progress of loss, learning-rate and word-counts
Usage
plot_progress_logs(path_logs = "progress_data.txt", plot = FALSE)
Arguments
path_logs a character string specifying a valid path to a file where the progress-logs are
saved
plot a boolean specifying if the loss, learning-rate and word-counts should be plotted
Value
an object of class data.frame that includes the progress logs with columns ’progress’, ’words_sec_thread’,
’learning_rate’ and ’loss’
References
http://www.cookbook-r.com/Graphs/Multiple_graphs_on_one_page_(ggplot2)/
Examples
## Not run:
library(fastText)
#-----------------------------------------------------------------
# the 'progress_data.txt' file corresponds to the 'path_output'
# parameter of the 'fasttext_interface()'. Therefore the user has
# to run first the 'fasttext_interface()' function to save the
# 'progress_data.txt' file to the desired folder.
#-----------------------------------------------------------------
res = plot_progress_logs(path_logs = file.path(tempdir(), "progress_data.txt"),
plot = TRUE)
## End(Not run)
printAnalogiesUsage Print Usage Information when the command equals to ’analogies’
Description
Print Usage Information when the command equals to ’analogies’
Usage
printAnalogiesUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printAnalogiesUsage’
function in the R session
Examples
library(fastText)
printAnalogiesUsage()
printDumpUsage Print Usage Information when the command equals to ’dump’
Description
Print Usage Information when the command equals to ’dump’
Usage
printDumpUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printDumpUsage’ function
in the R session
Examples
library(fastText)
printDumpUsage()
printNNUsage Print Usage Information when the command equals to ’nn’
Description
Print Usage Information when the command equals to ’nn’
Usage
printNNUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printNNUsage’ function
in the R session
Examples
library(fastText)
printNNUsage()
printPredictUsage Print Usage Information when the command equals to ’predict’ or
’predict-prob’
Description
Print Usage Information when the command equals to ’predict’ or ’predict-prob’
Usage
printPredictUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printPredictUsage’ function
in the R session
Examples
library(fastText)
printPredictUsage()
printPrintNgramsUsage Print Usage Information when the command equals to ’print-ngrams’
Description
Print Usage Information when the command equals to ’print-ngrams’
Usage
printPrintNgramsUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printPrintNgramsUsage’
function in the R session
Examples
library(fastText)
printPrintNgramsUsage()
printPrintSentenceVectorsUsage
Print Usage Information when the command equals to ’print-sentence-
vectors’
Description
Print Usage Information when the command equals to ’print-sentence-vectors’
Usage
printPrintSentenceVectorsUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printPrintSentenceVector-
sUsage’ function in the R session
Examples
library(fastText)
printPrintSentenceVectorsUsage()
printPrintWordVectorsUsage
Print Usage Information when the command equals to ’print-word-
vectors’
Description
Print Usage Information when the command equals to ’print-word-vectors’
Usage
printPrintWordVectorsUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printPrintWordVector-
sUsage’ function in the R session
Examples
library(fastText)
printPrintWordVectorsUsage()
printQuantizeUsage Print Usage Information when the command equals to ’quantize’
Description
Print Usage Information when the command equals to ’quantize’
Usage
printQuantizeUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printQuantizeUsage’ func-
tion in the R session
Examples
library(fastText)
printQuantizeUsage()
printTestLabelUsage Print Usage Information when the command equals to ’test-label’
Description
Print Usage Information when the command equals to ’test-label’
Usage
printTestLabelUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printTestLabelUsage’
function in the R session
Examples
library(fastText)
printTestLabelUsage()
printTestUsage Print Usage Information when the command equals to ’test’
Description
Print Usage Information when the command equals to ’test’
Usage
printTestUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printTestUsage’ function
in the R session
Examples
library(fastText)
printTestUsage()
printUsage Print Usage Information for all parameters
Description
Print Usage Information for all parameters
Usage
printUsage(verbose = TRUE)
Arguments
verbose if TRUE then information will be printed in the console
Value
It does not return a value but only prints the available parameters of the ’printUsage’ function in the
R session
Examples
library(fastText)
printUsage()
print_parameters Print the parameters for a specific command
Description
Print the parameters for a specific command
Usage
print_parameters(command = "supervised")
Arguments
command a character string specifying the command for which the parameters should be
printed in the R session. It should be one of "skipgram", "cbow", "supervised",
"test", "test-label" or "quantize"
Value
It does not return a value but only prints the available parameters in the R session
References
https://github.com/facebookresearch/fastText#full-documentation
https://github.com/facebookresearch/fastText/issues/341#issuecomment-339783130
Examples
## Not run:
library(fastText)
print_parameters(command = 'supervised')
## End(Not run)
README
[¶](#section-readme)
---
### [17mon](http://www.ipip.net/) IP location data for Golang
[![Circle CI](https://circleci.com/gh/wangtuanjie/ip17mon.svg?style=svg)](https://circleci.com/gh/wangtuanjie/ip17mon)
#### Features
* dat/datx only supports IPv4
* ipdb supports IPv4/IPv6
#### Installation
```
go get github.com/wangtuanjie/ip17mon@latest
```
#### Usage
```
import (
"fmt"
"github.com/wangtuanjie/ip17mon"
)
func init() {
ip17mon.Init("your data file")
}
func main() {
loc, err := ip17mon.Find("116.228.111.18")
if err != nil {
fmt.Println("err:", err)
return
}
fmt.Println(loc)
}
```
For more, see the [example](https://github.com/wangtuanjie/ip17mon/tree/master/cmd/qip)
#### License
Released under the [MIT](https://github.com/wangtuanjie/ip17mon/raw/master/LICENSE) license
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [func Init(dataFile string)](#Init)
* [func InitWithDatx(b []byte)](#InitWithDatx)
* [func InitWithIpdb(b []byte)](#InitWithIpdb)
* [type LocationInfo](#LocationInfo)
* + [func Find(ipstr string) (*LocationInfo, error)](#Find)
* [type Locator](#Locator)
* + [func New(dataFile string) (loc Locator, err error)](#New)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [Init](https://github.com/wangtuanjie/ip17mon/blob/v1.5.2/ip17mon.go#L20) [¶](#Init)
```
func Init(dataFile [string](/builtin#string))
```
####
func [InitWithDatx](https://github.com/wangtuanjie/ip17mon/blob/v1.5.2/ip17mon.go#L28) [¶](#InitWithDatx)
added in v1.5.0
```
func InitWithDatx(b [][byte](/builtin#byte))
```
####
func [InitWithIpdb](https://github.com/wangtuanjie/ip17mon/blob/v1.5.2/ip17mon.go#L32) [¶](#InitWithIpdb)
added in v1.5.0
```
func InitWithIpdb(b [][byte](/builtin#byte))
```
### Types [¶](#pkg-types)
####
type [LocationInfo](https://github.com/wangtuanjie/ip17mon/blob/v1.5.2/ip17mon.go#L17) [¶](#LocationInfo)
```
type LocationInfo = [proto](/github.com/wangtuanjie/[email protected]/internal/proto).[LocationInfo](/github.com/wangtuanjie/[email protected]/internal/proto#LocationInfo)
```
####
func [Find](https://github.com/wangtuanjie/ip17mon/blob/v1.5.2/ip17mon.go#L40) [¶](#Find)
```
func Find(ipstr [string](/builtin#string)) (*[LocationInfo](#LocationInfo), [error](/builtin#error))
```
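A hedged sketch combining InitWithIpdb with Find; the data-file path is a placeholder for your own ipdb file.
```
package main
import (
	"fmt"
	"os"
	"github.com/wangtuanjie/ip17mon"
)
func main() {
	// Placeholder path; load the ipdb bytes yourself and hand them to the package.
	b, err := os.ReadFile("path/to/your.ipdb")
	if err != nil {
		panic(err)
	}
	ip17mon.InitWithIpdb(b)
	loc, err := ip17mon.Find("116.228.111.18")
	if err != nil {
		fmt.Println("err:", err)
		return
	}
	fmt.Println(loc)
}
```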
####
type [Locator](https://github.com/wangtuanjie/ip17mon/blob/v1.5.2/ip17mon.go#L16) [¶](#Locator)
```
type Locator = [proto](/github.com/wangtuanjie/[email protected]/internal/proto).[Locator](/github.com/wangtuanjie/[email protected]/internal/proto#Locator)
```
####
func [New](https://github.com/wangtuanjie/ip17mon/blob/v1.5.2/ip17mon.go#L44) [¶](#New)
added in v1.5.0
```
func New(dataFile [string](/builtin#string)) (loc [Locator](#Locator), err [error](/builtin#error))
```
Package ‘MOQA’
October 12, 2022
Type Package
Title Basic Quality Data Assurance for Epidemiological Research
Version 2.0.0
Date 2017-06-21
Author
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description With the provision of several tools and templates the MOSAIC project (DFG-Grant
Number HO 1937/2-1) supports the implementation of a central data management in epidemiological
research projects. The 'MOQA' package enables epidemiologists with none or low experience in R
to generate basic data quality reports for a wide range of application scenarios. See
<https://mosaic-greifswald.de/> for more information. Please read and cite the corresponding open
access publication (using the former package-name) in METHODS OF INFORMATION IN MEDICINE by
<NAME>, <NAME>, <NAME>, <NAME>, <NAME> and <NAME> (2017) <doi:10.3414/ME16-01-0123>.
<https://methods.schattauer.de/en/contents/most-recent-articles/issue/2483/issue/special/manuscript/27573/show.html>.
License AGPL-3
Depends psych, gplots, grid, readr
NeedsCompilation no
Repository CRAN
Date/Publication 2017-06-22 13:23:11 UTC
R topics documented:
codelist
footnoteString
labelCounts
labelPercentage
label_boxplot
label_description
label_normalverteilung
label_qnormplot
label_unit
MOQA
MOQA.env
mosaic.addFootnote
mosaic.beginPlot
mosaic.countValue
mosaic.createSimplePdfCategorical
mosaic.createSimplePdfCategoricalDataframe
mosaic.createSimplePdfMetric
mosaic.createSimplePdfMetricDataframe
mosaic.finishPlot
mosaic.generateCategoricalPlot
mosaic.generateMetricPlots
mosaic.generateMetricTablePlot
mosaic.getTimestamp
mosaic.importToolboxSpssDataFile
mosaic.info
mosaic.loadCsvData
mosaic.preProcessCategoricalData
mosaic.preProcessMetricData
mosaic.setGlobalCodelist
mosaic.setGlobalDescription
mosaic.setGlobalMissingTreshold
mosaic.setGlobalUnit
outputPrefix
qualifiedMissingsTreshold
codelist codelist
Description
internal data variable
Note
internal data variable
Author(s)
The MOSAIC Project, <NAME>
footnoteString 3
footnoteString footnoteString
Description
internal data variable
Note
internal data variable
Author(s)
The MOSAIC Project, <NAME>
labelCounts labelCounts
Description
internal label for data variable
Note
internal label for data variable
Author(s)
The MOSAIC Project, <NAME>
labelPercentage labelPercentage
Description
internal label for data variable
Note
internal label for data variable
Author(s)
The MOSAIC Project, <NAME>
label_boxplot label_boxplot
Description
internal label for data variable
Note
internal label for data variable
Author(s)
The MOSAIC Project, <NAME>
label_description label_description
Description
internal label for data variable
Note
internal label for data variable
Author(s)
The MOSAIC Project, <NAME>
label_normalverteilung
label_normalverteilung
Description
internal label for data variable
Note
internal label for data variable
Author(s)
The MOSAIC Project, <NAME>
label_qnormplot label_qnormplot
Description
internal label for data variable
Note
internal label for data variable
Author(s)
The MOSAIC Project, <NAME>
label_unit label_unit
Description
internal label for data variable
Note
internal label for data variable
Author(s)
The MOSAIC Project, <NAME>
MOQA Basic Quality Data Assurance for Epidemiological Research
Description
With the provision of several tools and templates the MOSAIC project (DFG-Grant Number HO
1937/2-1) supports the implementation of a central data management in epidemiological research
projects. The 'MOQA' package enables epidemiologists with none or low experience in R to generate
basic data quality reports for a wide range of application scenarios. See <https://mosaic-greifswald.de/>
for more information. Please read and cite the corresponding open access publication (using the
former package-name) in METHODS OF INFORMATION IN MEDICINE by M. Bialke, <NAME>, <NAME>, <NAME>,
<NAME> and <NAME> (2017) <doi:10.3414/ME16-01-0123>.
<https://methods.schattauer.de/en/contents/most-recent-articles/issue/2483/issue/special/manuscript/27573/show.html>.
Details
The DESCRIPTION file:
Package: MOQA
Type: Package
Title: Basic Quality Data Assurance for Epidemiological Research
Version: 2.0.0
Date: 2017-06-21
Author: <NAME> <<EMAIL>>, <NAME> <<EMAIL>
Maintainer: <NAME> <<EMAIL>>
Description: With the provision of several tools and templates the MOSAIC project (DFG-Grant Number HO 1937/2
License: AGPL-3
Depends: psych, gplots, grid, readr
NeedsCompilation: no
Repository: CRAN
Index of help topics:
MOQA.env MOQA.env
codelist codelist
footnoteString footnoteString
labelCounts labelCounts
labelPercentage labelPercentage
label_boxplot label_boxplot
label_description label_description
label_normalverteilung
label_normalverteilung
label_qnormplot label_qnormplot
label_unit label_unit
moqa Basic Quality Data Assurance for
Epidemiological Research
mosaic.addFootnote addFootnote
mosaic.beginPlot beginPlot
mosaic.countValue countValue
mosaic.createSimplePdfCategorical
createSimplePdfCategorical
mosaic.createSimplePdfCategoricalDataframe
createSimplePdfCategoricalDataframe
mosaic.createSimplePdfMetric
createSimplePdfMetric
mosaic.createSimplePdfMetricDataframe
createSimplePdfMetricDataframe
mosaic.finishPlot finishPlot
mosaic.generateCategoricalPlot
generateCategoricalPlot
mosaic.generateMetricPlots
generateMetricPlots
mosaic.generateMetricTablePlot
generateMetricTablePlot
mosaic.getTimestamp getTimestamp
mosaic.importToolboxSpssDataFile
importToolboxSpssDataFile
mosaic.info info
mosaic.loadCsvData loadCsvData
mosaic.preProcessCategoricalData
preProcessCategoricalData
mosaic.preProcessMetricData
preProcessMetricData
mosaic.setGlobalCodelist
setGlobalCodelist
mosaic.setGlobalDescription
setGlobalDescription
mosaic.setGlobalMissingTreshold
setGlobalMissingTreshold
mosaic.setGlobalUnit setGlobalUnit
outputPrefix outputPrefix
qualifiedMissingsTreshold
qualifiedMissingsTreshold
The aim of the MOQA R-Package is to provide a basic assessment of data quality and to generate
a set of informative graphs. Especially, there should be no demand for the potential researcher to
master R. This R-package enables researchers to generate reports for various kinds of metric and
categorical data. Additionally, general reports for multivariate input data and, if needed, detailed
results for single-variable data can be produced.
CSV-files as well as dataframes can be used as input format to create a report. The results are
instantly saved in an automatically generated PDF-file. For each study variable within the data
input file a separate PDF-file with standard or, if applicable, customized plots and tables is produced.
These standard reports enable the user to monitor and report the data integrity and completeness.
However, for more specific reports the knowledge of metadata is necessary, including definition of
units, variables, descriptions, code lists and categories of qualified missings.
Version 1.2
-----------
ADDED Support for metric and categorical dataframes
BUGFIX Aborted report generation in case of non-existent missings in data column
Version 2.0
-----------
RENAME Official renaming of former package-name mosaicQA to MOQA
ADDED New function importToolboxSpssDataFile
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>-
<EMAIL>>, <NAME> <<EMAIL>>
Maintainer: <NAME> <<EMAIL>>
See Also
mosaic-greifswald.de
Examples
## Example 1: Generate pdf with graphs for a single metric data column, e.g. data of body height
# load MOQA package
library('MOQA')
# specify the csv import file with metric data, use one column per variable
metric_datafile='c:/mosaic/metric_single_var.csv'
#specify output folder
outputFolder='c:/mosaic/outputs/'
#set missing threshold, optional, default is 99900
mosaic.setGlobalMissingTreshold(99900)
#set variable unit, optional
mosaic.setGlobalUnit('(cm)')
#set variable description, optional; if not set, the name of the variable is displayed in the
#table heading
mosaic.setGlobalDescription('Height')
#create PDF-report,
#uncomment to start report-generation
#mosaic.createSimplePdfMetric(metric_datafile, outputFolder)
## Example 2: Generate pdf with graphs for a single categorical data column
# load MOQA package
library('MOQA')
# specify the import file with Categorical data
# first row has to contain variable names without special characters
Categorical_datafile='c:/mosaic/cat_single_var_en.csv'
#specify output folder
outputFolder='c:/mosaic/outputs/'
#set threshold to detect missings, default is 99900 (adjust this line to change this global value,
#but be careful)
mosaic.setGlobalMissingTreshold(99900)
#set description of var
mosaic.setGlobalCodelist(c('1=yes','2=no','99996=not specified','99997=not acquired'))
# create simple pdf file foreach variable column in Categorical data file,
# uncomment to start report-generation
# mosaic.createSimplePdfCategorical(Categorical_datafile,outputFolder)
## Example 3: Generate pdf with graphs for a multiple metric data columns, generates one pdf for
# each column using the variable name for table headings
# load MOQA package
library('MOQA')
# specify the import file with metric data
# use one column per variable, first row should contain variable name, following rows should
# contain data, csv Files with multiple rows are supported, decimal values should be formated
# for example : 25.4
metric_datafile='c:/mosaic/metric_multi_var.csv'
#specify output folder
outputFolder="c:/mosaic/outputs/"
# set threshold to detect missings, default is 99900 (adjust this line to change this global value
# but be careful)
mosaic.setGlobalMissingTreshold(99900)
# create PDF-Files for vars,
# uncomment to start report-generation
#mosaic.createSimplePdfMetric(metric_datafile, outputFolder)
## Example 4: Generate pdf with graphs for a metric dataframe with multiple columns, generates one pdf for
# each column using the variable name for table headings
# load MOQA package
library('MOQA')
# specify the metric dataframe with 1-n columns, here sample data is generated
metric_data=data.frame(matrix(rnorm(20), nrow=10))
#specify output folder
outputFolder="c:/mosaic/outputs/"
# set threshold to detect missings, default is 99900 (adjust this line to change this global value
# but be careful)
mosaic.setGlobalMissingTreshold(99900)
# create PDF-Files for vars,
# uncomment to start report-generation
#mosaic.createSimplePdfMetricDataframe(metric_data, outputFolder)
## Example 5: Import data from SPSS Export file generated by Toolbox for Research
# and generate report for specific variable
# load MOQA package
library('MOQA')
# specify import dat-file
importfile="c:/mosaic/import/all_in_one.dat"
# specify output folder
outputFolder="c:/mosaic/outputs/"
# import data
#importdata=mosaic.importToolboxSpssDataFile(importfile)
# generate report for a specific variable, e.g. patient.age
# pass data as dataframe to use already given column name for a more descriptive output
#mosaic.createSimplePdfMetricDataframe(as.data.frame(importdata$ve_temperature_ear),outputFolder)
MOQA.env MOQA.env
Description
local environment to handle MOQA-internal variables
Note
local environment
Author(s)
The MOSAIC Project, <NAME>
mosaic.addFootnote addFootnote
Description
Add a Footnote to plot using footnotestring and current timestamp.
Usage
mosaic.addFootnote()
Note
Function call type: internal
Author(s)
The MOSAIC Project, <NAME>
mosaic.beginPlot beginPlot
Description
begin plotting the configured graphs for loaded data and generate the output PDF-File.
Usage
mosaic.beginPlot(varname,outputfolder)
Arguments
varname name of the studyitem or csv column loaded to plot graphs for.
outputfolder name of the output folder
Note
Function call type: internal
Author(s)
The MOSAIC Project, <NAME>
mosaic.countValue countValue
Description
Count occurrence of search value in data column
Usage
mosaic.countValue(searchvalue, data_column)
Arguments
searchvalue value to search for
data_column name of study item or data column to search in
Details
useful to find qualified missings in data column
Value
count of occurrences of the specified value in the specified data column
Note
Function call type: internal
Author(s)
The MOSAIC Project, <NAME>
mosaic.createSimplePdfCategorical
createSimplePdfCategorical
Description
Create simple PDF-file for categorical data
Usage
mosaic.createSimplePdfCategorical(inputfile, outputfolder)
Arguments
inputfile path to input csv-file
outputfolder path to output folder
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
Examples
# load MOQA package
library('MOQA')
# specify the import file with categorical data
# first row has to contain variable names without special characters
categorical_datafile='c:/mosaic/cat_single_var_en.csv'
# specify output folder
outputFolder='c:/mosaic/outputs/'
# set threshold to detect missings, default is 99900 (adjust this line to change this global value,
# but be careful)
mosaic.setGlobalMissingTreshold(99900)
# set description of var
mosaic.setGlobalCodelist(c('1=yes','2=no','99996=not specified','99997=not acquired'))
# create simple pdf file for each variable column in categorical data file, uncomment to start
# report-generation
# mosaic.createSimplePdfCategorical(categorical_datafile,outputFolder)
mosaic.createSimplePdfCategoricalDataframe
createSimplePdfCategoricalDataframe
Description
Create simple PDF-file for categorical data
Usage
mosaic.createSimplePdfCategoricalDataframe(df, outputfolder)
Arguments
df dataframe
outputfolder path to output folder
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
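Examples
The following sketch is not part of the original manual; it mirrors Example 4 above
with generated categorical sample data (variable names, codes and paths are illustrative).
# load MOQA package
library('MOQA')
# specify a categorical dataframe with 1-n columns, here sample data is generated
categorical_data=data.frame(sex=sample(c(1,2,99996), 20, replace=TRUE))
#specify output folder
outputFolder='c:/mosaic/outputs/'
# set threshold to detect missings, default is 99900
mosaic.setGlobalMissingTreshold(99900)
# set code list used in plot descriptions
mosaic.setGlobalCodelist(c('1=male','2=female','99996=not specified'))
# create simple pdf file for each variable column in the dataframe,
# uncomment to start report-generation
#mosaic.createSimplePdfCategoricalDataframe(categorical_data, outputFolder)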
mosaic.createSimplePdfMetric
createSimplePdfMetric
Description
Create simple PDF-file for metric data
Usage
mosaic.createSimplePdfMetric(inputfile, outputfolder)
Arguments
inputfile path to input csv file
outputfolder path to output folder
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
Examples
# load MOQA package
library('MOQA')
# specify the csv import file with metric data, use one column per variable
metric_datafile='c:/mosaic/metric_single_var.csv'
#specify output folder
outputFolder='c:/mosaic/output/'
#set missing threshold, optional, default is 99900
mosaic.setGlobalMissingTreshold(99900)
#set variable unit, optional
mosaic.setGlobalUnit('(cm)')
#set variable description, optional
mosaic.setGlobalDescription('Height')
#create PDF-report, uncomment to start report-generation
#mosaic.createSimplePdfMetric(metric_datafile, outputFolder)
mosaic.createSimplePdfMetricDataframe
createSimplePdfMetricDataframe
Description
Create simple PDF-file for metric data
Usage
mosaic.createSimplePdfMetricDataframe(df, outputfolder)
Arguments
df dataframe containing metric data
outputfolder path to output folder
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
Examples
# load MOQA package
library('MOQA')
# specify the metric dataframe with 1-n columns, here sample data is generated
metric_data=data.frame(matrix(rnorm(20), nrow=10))
#specify output folder
outputFolder="c:/mosaic/outputs/"
# set threshold to detect missings, default is 99900 (adjust this line to change this global value
# but be careful)
mosaic.setGlobalMissingTreshold(99900)
# create PDF-Files for vars,
# uncomment to start report-generation
#mosaic.createSimplePdfMetricDataframe(metric_data, outputFolder)
mosaic.finishPlot finishPlot
Description
Finish plotting, close PDF-file
Usage
mosaic.finishPlot()
Note
Function call type: internal
Author(s)
The MOSAIC Project, <NAME>
mosaic.generateCategoricalPlot
generateCategoricalPlot
Description
Generate Statistics and Create plots for categorical data
Usage
mosaic.generateCategoricalPlot(dataframe, varname)
Arguments
dataframe data table with one or more columns (first row should contain column names/study
item names/variable names)
varname selected column/study item/variable to plot graph for
Note
Function call type: internal
Author(s)
The MOSAIC Project, <NAME>
mosaic.generateMetricPlots
generateMetricPlots
Description
calculate statistics and generate graphs for metric data
Usage
mosaic.generateMetricPlots(data_snippet, var_name)
Arguments
data_snippet data table with one or more columns (first row should contain column names/study
item names/variable names)
var_name selected column/study item/variable to plot graph for
Note
Function call type: internal
Author(s)
The MOSAIC Project, <NAME>
mosaic.generateMetricTablePlot
generateMetricTablePlot
Description
Generate missing-ratio table for metric data (data, num of columns, column index, varname)
Usage
mosaic.generateMetricTablePlot(data, num_of_columns, index, varname)
Arguments
data preprocessed data frame including ’valid value markers’
num_of_columns total number of data columns to be processed
index current column to be processed
varname current name of variable to be used in table heading
Note
Function call type: internal
Author(s)
The MOSAIC Project, <NAME>
mosaic.getTimestamp getTimestamp
Description
get a current timestamp formatted as %Y_%m_%d_%H%M%S
Usage
mosaic.getTimestamp()
Value
timestamp, e.g. ’2016_09_09_143458’
Note
Function call type: internal
Author(s)
The MOSAIC Project, <NAME>
mosaic.importToolboxSpssDataFile
importToolboxSpssDataFile
Description
Load a dat-file from a 'Toolbox for Research' SPSS export (tab-separated, with n columns) into a dataframe
Usage
mosaic.importToolboxSpssDataFile(filename)
Arguments
filename filename or a complete path to a dat-file
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
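Examples
A minimal sketch, not part of the original manual; it repeats Example 5 above
(the import path is illustrative).
# load MOQA package
library('MOQA')
# specify the dat-file exported by Toolbox for Research
importfile="c:/mosaic/import/all_in_one.dat"
# import data into a dataframe, uncomment to run
#importdata=mosaic.importToolboxSpssDataFile(importfile)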
mosaic.info info
Description
MOSAIC Information
Usage
mosaic.info()
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
mosaic.loadCsvData loadCsvData
Description
Load data from a csv-file with one or more columns. The first row should contain the name of the study
item, e.g. 'height'
Usage
mosaic.loadCsvData(filename)
Arguments
filename filename or a complete path to a file
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
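Examples
A minimal sketch, not part of the original manual (the csv path is illustrative).
# load MOQA package
library('MOQA')
# load a csv-file whose first row contains the study item names,
# uncomment to run
#data=mosaic.loadCsvData('c:/mosaic/metric_single_var.csv')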
mosaic.preProcessCategoricalData
preProcessCategoricalData
Description
Identify unique values in data column, get absolute, percentage and cumulative statistics
Usage
mosaic.preProcessCategoricalData(data)
Arguments
data data frame to be processed containing categorical data
Note
Function call type: internal
Author(s)
The MOSAIC Project, <NAME>
mosaic.preProcessMetricData
preProcessMetricData
Description
Pre-process metric data to allow missing-ratio table
Usage
mosaic.preProcessMetricData(data)
Arguments
data data frame to be preprocessed containing metric data
Note
Function call type: internal
Author(s)
The MOSAIC Project, <NAME>
mosaic.setGlobalCodelist
setGlobalCodelist
Description
set and parse a global code list for categorical data to be used in categorical plot descriptions
Usage
mosaic.setGlobalCodelist(coding)
Arguments
coding list of code and value pairs, see example for details
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
Examples
mosaic.setGlobalCodelist(c('1=yes','2=no', '99996=no information'))
mosaic.setGlobalDescription
setGlobalDescription
Description
Set Global Description for a variable (description text). Especially useful when plotting graphs
for a selected data column
Usage
mosaic.setGlobalDescription(value)
Arguments
value string value to be used as study item description, e.g. ’waist circumference’
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
Examples
mosaic.setGlobalDescription('waist circumference')
mosaic.setGlobalMissingTreshold
setGlobalMissingTreshold
Description
Set Global Threshold for Missings, e.g. 99000
Usage
mosaic.setGlobalMissingTreshold(value)
Arguments
value threshold to separate missings from valid values
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
Examples
mosaic.setGlobalMissingTreshold(99000)
mosaic.setGlobalUnit setGlobalUnit
Description
Set Global Unit Label to be used in graphs, e.g. '(cm)'
Usage
mosaic.setGlobalUnit(value)
Arguments
value unit string to be used in graphs
Note
Function call type: user
Author(s)
The MOSAIC Project, <NAME>
Examples
mosaic.setGlobalUnit('(cm)')
outputPrefix outputPrefix
Description
internal data variable
Note
internal data variable
Author(s)
The MOSAIC Project, <NAME>
qualifiedMissingsTreshold
qualifiedMissingsTreshold
Description
internal data variable
Note
internal data variable
Author(s)
The MOSAIC Project, <NAME> |
tidylda | cran | R | Package ‘tidylda’
July 14, 2023
Type Package
Title Latent Dirichlet Allocation Using 'tidyverse' Conventions
Version 0.0.3
Description Implements an algorithm for Latent Dirichlet Allocation (LDA), Blei et al. (2003) <https://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf>, using style conventions from the 'tidyverse', Wickham et al. (2019) <doi:10.21105/joss.01686>, and 'tidymodels', Kuhn et al. <https://tidymodels.github.io/model-implementation-principles/>.
Fitting is done via collapsed Gibbs sampling.
Also implements several novel features for LDA such as guided models and
transfer learning based on ongoing and, as yet, unpublished research.
License MIT + file LICENSE
URL https://github.com/TommyJones/tidylda/
BugReports https://github.com/TommyJones/tidylda/issues
Depends R (>= 3.5.0)
Imports dplyr, generics, gtools, Matrix, methods, mvrsquared (>=
0.1.0), Rcpp (>= 1.0.2), rlang, stats, stringr, tibble, tidyr,
tidytext
Suggests ggplot2, knitr, parallel, quanteda, testthat, tm, slam,
spelling, covr, rmarkdown
LinkingTo Rcpp, RcppArmadillo, RcppProgress, RcppThread
Encoding UTF-8
RoxygenNote 7.2.2
Language en-US
LazyData true
VignetteBuilder knitr
NeedsCompilation yes
Author <NAME> [aut, cre] (<https://orcid.org/0000-0001-6457-2452>),
<NAME> [ctb] (<https://orcid.org/0000-0003-3284-4972>),
Barum Park [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-07-14 20:10:02 UTC
R topics documented:
augment.tidylda
calc_prob_coherence
glance.tidylda
nih
posterior
predict.tidylda
print.tidylda
refit.tidylda
tidy.tidylda
tidylda
augment.tidylda Augment method for tidylda objects
Description
augment appends observation level model outputs.
Usage
## S3 method for class 'tidylda'
augment(
x,
data,
type = c("class", "prob"),
document_col = "document",
term_col = "term",
...
)
Arguments
x an object of class tidylda
data a tidy tibble containing one row per original document-token pair, such as is returned by tdm_tidiers with column names c("document", "term") at a minimum.
type one of either "class" or "prob"
document_col character specifying the name of the column that corresponds to document IDs.
Defaults to "document".
term_col character specifying the name of the column that corresponds to term/token IDs.
Defaults to "term".
... other arguments passed to methods, currently not used
Details
The key statistic for augment is P(topic | document, token) = P(topic | token) * P(token | document).
P(topic | token) are the entries of the ’lambda’ matrix in the tidylda object passed with x. P(token
| document) is taken to be the frequency of each token normalized within each document.
Value
augment returns a tidy tibble containing one row per document-token pair, with one or more
columns appended, depending on the value of type.
If type = 'prob', then one column per topic is appended. Its value is P(topic | document, token).
If type = 'class', then the most-probable topic for each document-token pair is returned. If multiple topics are equally probable, then the topic with the smallest index is returned by default.
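Examples
The following example is not part of the original manual. It is a minimal sketch that
builds the document-token tibble by hand from the sparse dtm (via summary()) instead of
using tdm_tidiers; the column names follow the defaults "document" and "term".
# load a pre-formatted dtm
data(nih_sample_dtm)
library(Matrix)
library(tibble)
dtm <- nih_sample_dtm[1:20, ]
# fit a model
set.seed(12345)
m <- tidylda(data = dtm, k = 5, iterations = 100, burnin = 75)
# one row per stored (non-zero) document-token pair
trip <- summary(dtm) # data frame with columns i, j, x
tidy_docs <- tibble(
document = rownames(dtm)[trip$i],
term = colnames(dtm)[trip$j]
)
# append one column of P(topic | document, token) per topic
aug <- augment(m, data = tidy_docs, type = "prob")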
calc_prob_coherence Probabilistic coherence of topics
Description
Calculates the probabilistic coherence of a topic or topics. This approximates semantic coherence
or human understandability of a topic.
Usage
calc_prob_coherence(beta, data, m = 5)
Arguments
beta A numeric matrix or a numeric vector. The vector, or rows of the matrix represent the numeric relationship between topic(s) and terms. For example, this relationship may be p(word|topic) or p(topic|word).
data A document term matrix or term co-occurrence matrix. The preferred class is a
dgCMatrix-class. However there is support for any Matrix-class object as
well as several other commonly-used classes such as matrix, dfm, DocumentTermMatrix,
and simple_triplet_matrix
m An integer for the number of words to be used in the calculation. Defaults to 5
Details
For each pair of words a, b in the top M words in a topic, probabilistic coherence calculates P(b|a) - P(b), where a is more probable than b in the topic. For example, suppose the top 4 words in a topic are a, b, c, d. Then, we calculate
1. P(a|b) - P(b), P(a|c) - P(c), P(a|d) - P(d)
2. P(b|c) - P(c), P(b|d) - P(d)
3. P(c|d) - P(d)
All 6 differences are averaged together.
Value
Returns an object of class numeric corresponding to the probabilistic coherence of the input topic(s).
Examples
# Load a pre-formatted dtm and topic model
data(nih_sample_dtm)
# fit a model
set.seed(12345)
model <- tidylda(
data = nih_sample_dtm[1:20, ], k = 5,
iterations = 100, burnin = 50
)
calc_prob_coherence(beta = model$beta, data = nih_sample_dtm, m = 5)
glance.tidylda Glance method for tidylda objects
Description
glance constructs a single-row summary "glance" of a tidylda topic model.
Usage
## S3 method for class 'tidylda'
glance(x, ...)
Arguments
x an object of class tidylda
... other arguments passed to methods, currently not used
Value
glance returns a one-row tibble with the following columns:
num_topics: the number of topics in the model
num_documents: the number of documents used for fitting
num_tokens: the number of tokens covered by the model
iterations: number of total Gibbs iterations run
burnin: number of burn-in Gibbs iterations run
Examples
dtm <- nih_sample_dtm
lda <- tidylda(data = dtm, k = 10, iterations = 100, burnin = 75)
glance(lda)
nih Abstracts and metadata from NIH research grants awarded in 2014
Description
This dataset holds information on research grants awarded by the National Institutes of Health
(NIH) in 2014. The data set was downloaded in approximately January of 2015 from https:
//exporter.nih.gov/ExPORTER_Catalog.aspx. It includes both ’projects’ and ’abstracts’ files.
Usage
data("nih_sample")
Format
For nih_sample, a tibble of 100 randomly-sampled grants’ abstracts and metadata. For nih_sample_dtm,
a dgCMatrix-class representing the document term matrix of abstracts from 100 randomly-sampled
grants.
Source
National Institutes of Health ExPORTER https://exporter.nih.gov/ExPORTER_Catalog.aspx
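Examples
A brief usage sketch, not part of the original manual.
data("nih_sample")
data("nih_sample_dtm")
dim(nih_sample)
dim(nih_sample_dtm)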
posterior Draw from the marginal posteriors of a tidylda topic model
Description
Sample from the marginal posteriors of a tidylda topic model. This is useful for quantifying
uncertainty around the parameters of beta or theta.
Usage
posterior(x, ...)
## S3 method for class 'tidylda'
posterior(x, matrix, which, times, ...)
Arguments
x An object of class tidylda.
... Other arguments, currently not used.
matrix A character of either ’theta’ or ’beta’, indicating from which matrix to draw
posterior samples.
which Row index of theta, for document, or beta, for topic, from which to draw samples. which may also be a vector of indices to sample from multiple documents or topics simultaneously.
times Integer, number of samples to draw.
Value
posterior returns a tibble with one row per parameter per sample.
Returns a data frame where each row is a single sample from the posterior. Each column is the
distribution over a single parameter. The variable var is a facet for subsetting by document (for
theta) or topic (for beta).
References
<NAME>. (2005) Parameter estimation for text analysis. Technical report. http://www.arbylon.net/publications/text-est.pdf
Examples
# load some data
data(nih_sample_dtm)
# fit a model
set.seed(12345)
m <- tidylda(
data = nih_sample_dtm[1:20, ], k = 5,
iterations = 200, burnin = 175
)
# sample from the marginal posterior corresponding to topic 1
t1 <- posterior(
x = m,
matrix = "beta",
which = 1,
times = 100
)
# sample from the marginal posterior corresponding to documents 5 and 6
d5 <- posterior(
x = m,
matrix = "theta",
which = c(5, 6),
times = 100
)
predict.tidylda Get predictions from a Latent Dirichlet Allocation model
Description
Obtains predictions of topics for new documents from a fitted LDA model
Usage
## S3 method for class 'tidylda'
predict(
object,
new_data,
type = c("prob", "class", "distribution"),
method = c("gibbs", "dot"),
iterations = NULL,
burnin = -1,
no_common_tokens = c("default", "zero", "uniform"),
times = 100,
threads = 1,
verbose = TRUE,
...
)
Arguments
object a fitted object of class tidylda
new_data a DTM or TCM of class dgCMatrix or a numeric vector
type one of "prob", "class", or "distribution". Defaults to "prob".
method one of either "gibbs" or "dot". If "gibbs" Gibbs sampling is used and iterations
must be specified.
iterations If method = "gibbs", an integer number of iterations for the Gibbs sampler to
run. A future version may include automatic stopping criteria.
burnin If method = "gibbs", an integer number of burnin iterations. If burnin is greater than -1, the entries of the resulting "theta" matrix are an average over all iterations greater than burnin. Behavior is the same as documented in tidylda.
no_common_tokens
behavior when encountering documents that have no tokens in common with the
model. Options are "default", "zero", or "uniform". See ’details’, below for
explanation of behavior.
times Integer, number of samples to draw if type = "distribution". Ignored if type
is "class" or "prob". Defaults to 100.
threads Number of parallel threads, defaults to 1. Note: currently ignored; only single-
threaded prediction is implemented.
verbose Logical. Do you want to print a progress bar out to the console? Only active if
method = "gibbs". Defaults to TRUE.
... Additional arguments, currently unused
Details
If predict.tidylda encounters documents that have no tokens in common with the model in
object it will engage in one of three behaviors based on the setting of no_common_tokens.
default (the default) sets all topics to 0 for offending documents. This enables continued computations downstream in a way that NA would not. However, if no_common_tokens == "default", then predict.tidylda will emit a warning for every such document it encounters.
zero has the same behavior as default but it emits a message instead of a warning.
uniform sets all topics to 1/k for every topic for offending documents. It does not emit a warning or message.
Value
type gives different outputs depending on whether the user selects "prob", "class", or "distribution".
If "prob", the default, returns a a "theta" matrix with one row per document and one column per
topic. If "class", returns a vector with the topic index of the most likely topic in each document. If
"distribution", returns a tibble with one row per parameter per sample. Number of samples is set by
the times argument.
Examples
# load some data
data(nih_sample_dtm)
# fit a model
set.seed(12345)
m <- tidylda(
data = nih_sample_dtm[1:20, ], k = 5,
iterations = 200, burnin = 175
)
str(m)
# predict on held-out documents using gibbs sampling "fold in"
p1 <- predict(m, nih_sample_dtm[21:100, ],
method = "gibbs",
iterations = 200, burnin = 175
)
# predict on held-out documents using the dot product
p2 <- predict(m, nih_sample_dtm[21:100, ], method = "dot")
# compare the methods
barplot(rbind(p1[1, ], p2[1, ]), beside = TRUE, col = c("red", "blue"))
# predict classes on held out documents
p3 <- predict(m, nih_sample_dtm[21:100, ],
method = "gibbs",
type = "class",
iterations = 100, burnin = 75
)
# predict distribution on held out documents
p4 <- predict(m, nih_sample_dtm[21:100, ],
method = "gibbs",
type = "distribution",
iterations = 100, burnin = 75,
times = 10
)
print.tidylda Print Method for tidylda
Description
Print a summary for objects of class tidylda
Usage
## S3 method for class 'tidylda'
print(x, digits = max(3L, getOption("digits") - 3L), n = 5, ...)
Arguments
x an object of class tidylda
digits minimal number of significant digits
n Number of rows to show in each displayed tibble.
... further arguments passed to or from other methods
Value
Silently returns x
Examples
dtm <- nih_sample_dtm
lda <- tidylda(data = dtm, k = 10, iterations = 100)
print(lda)
lda
print(lda, digits = 2)
refit.tidylda Update a Latent Dirichlet Allocation topic model
Description
Update an LDA model using collapsed Gibbs sampling.
Usage
## S3 method for class 'tidylda'
refit(
object,
new_data,
iterations = NULL,
burnin = -1,
prior_weight = 1,
additional_k = 0,
additional_eta_sum = 250,
optimize_alpha = FALSE,
calc_likelihood = FALSE,
calc_r2 = FALSE,
return_data = FALSE,
threads = 1,
verbose = TRUE,
...
)
Arguments
object a fitted object of class tidylda.
new_data A document term matrix or term co-occurrence matrix of class dgCMatrix.
iterations Integer number of iterations for the Gibbs sampler to run.
burnin Integer number of burnin iterations. If burnin is greater than -1, the resulting "beta" and "theta" matrices are an average over all iterations greater than burnin.
prior_weight Numeric, 0 or greater or NA. The weight of the beta as a prior from the base
model. See Details, below.
additional_k Integer number of topics to add, defaults to 0.
additional_eta_sum
Numeric magnitude of prior for additional topics. Ignored if additional_k is
0. Defaults to 250.
optimize_alpha Logical. Experimental. Do you want to optimize alpha every iteration? Defaults
to FALSE.
calc_likelihood
Logical. Do you want to calculate the log likelihood every iteration? Useful for
assessing convergence. Defaults to FALSE.
calc_r2 Logical. Do you want to calculate R-squared after the model is trained? Defaults
to FALSE.
return_data Logical. Do you want new_data returned as part of the model object?
threads Number of parallel threads, defaults to 1.
verbose Logical. Do you want to print a progress bar out to the console? Defaults to
TRUE.
... Additional arguments, currently unused
Details
refit allows you to (a) update the probabilities (i.e. weights) of a previously-fit model with new
data or additional iterations and (b) optionally use beta of a previously-fit LDA topic model as the
eta prior for the new model. This is tuned by setting beta_as_prior = FALSE or beta_as_prior
= TRUE respectively.
prior_weight tunes how strong the base model is represented in the prior. If prior_weight =
1, then the tokens from the base model’s training data have the same relative weight as tokens in
new_data. In other words, it is like just adding training data. If prior_weight is less than 1, then
tokens in new_data are given more weight. If prior_weight is greater than 1, then the tokens from
the base model’s training data are given more weight.
If prior_weight is NA, then the new eta is equal to eta from the old model, with new tokens
folded in. (For handling of new tokens, see below.) Effectively, this just controls how the sampler
initializes (described below), but does not give prior weight to the base model.
Instead of initializing token-topic assignments in the manner for new models (see tidylda), the
update initializes in 2 steps:
First, topic-document probabilities (i.e. theta) are obtained by a call to predict.tidylda using method = "dot" for the documents in new_data. Next, both beta and theta are passed to an internal function, initialize_topic_counts, which assigns topics to tokens in a manner approximately proportional to the posteriors and executes a single Gibbs iteration.
refit handles the addition of new vocabulary by adding a flat prior over new tokens. Specifically,
each entry in the new prior is equal to the 10th percentile of eta from the old model. The resulting
model will have the total vocabulary of the old model plus any new vocabulary tokens. In other
words, after running refit.tidylda ncol(beta) >= ncol(new_data) where beta is from the
new model and new_data is the additional data.
You can add additional topics by setting the additional_k parameter to an integer greater than
zero. New entries to alpha have a flat prior equal to the median value of alpha in the old model.
(Note that if alpha itself is a flat prior, i.e. scalar, then the new topics have the same value for their
prior.) New entries to eta have a shape from the average of all previous topics in eta and scaled by
additional_eta_sum.
Value
Returns an S3 object of class c("tidylda").
Note
Updates are, as of this writing, almost-surely useful, but their behaviors have not been optimized
or well-studied. Caveat emptor!
Examples
# load a document term matrix
data(nih_sample_dtm)
d1 <- nih_sample_dtm[1:50, ]
d2 <- nih_sample_dtm[51:100, ]
# fit a model
m <- tidylda(d1,
k = 10,
iterations = 200, burnin = 175
)
# update an existing model by adding documents using old model as prior
m2 <- refit(
object = m,
new_data = rbind(d1, d2),
iterations = 200,
burnin = 175,
prior_weight = 1
)
# use an old model to initialize new model and not use old model as prior
m3 <- refit(
object = m,
new_data = d2, # new documents only
iterations = 200,
burnin = 175,
prior_weight = NA
)
# add topics while updating a model by adding documents
m4 <- refit(
object = m,
new_data = rbind(d1, d2),
additional_k = 3,
iterations = 200,
burnin = 175
)
tidy.tidylda Tidy a matrix from a tidylda topic model
Description
Tidy the result of a tidylda topic model
Usage
## S3 method for class 'tidylda'
tidy(x, matrix, log = FALSE, ...)
## S3 method for class 'matrix'
tidy(x, matrix, log = FALSE, ...)
Arguments
x an object of class tidylda or an individual beta, theta, or lambda matrix.
matrix the matrix to tidy; one of 'beta', 'theta', or 'lambda'
log do you want to have the result on a log scale? Defaults to FALSE
... other arguments passed to methods, currently not used
Value
Returns a tibble.
If matrix = "beta" then the result is a table of one row per topic and token with the following
columns: topic, token, beta
If matrix = "theta" then the result is a table of one row per document and topic with the following
columns: document, topic, theta
If matrix = "lambda" then the result is a table of one row per topic and token with the following
columns: topic, token, lambda
Functions
• tidy(matrix): Tidy an individual matrix. Useful for predictions and called from tidy.tidylda
Note
If log = TRUE then "log_" will be appended to the name of the third column of the resulting table.
e.g "beta" becomes "log_beta".
Examples
dtm <- nih_sample_dtm
lda <- tidylda(data = dtm, k = 10, iterations = 100, burnin = 75)
tidy_beta <- tidy(lda, matrix = "beta")
tidy_theta <- tidy(lda, matrix = "theta")
tidy_lambda <- tidy(lda, matrix = "lambda")
tidylda Fit a Latent Dirichlet Allocation topic model
Description
Fit a Latent Dirichlet Allocation topic model using collapsed Gibbs sampling.
Usage
tidylda(
data,
k,
iterations = NULL,
burnin = -1,
alpha = 0.1,
eta = 0.05,
optimize_alpha = FALSE,
calc_likelihood = TRUE,
calc_r2 = FALSE,
threads = 1,
return_data = FALSE,
verbose = TRUE,
...
)
Arguments
data A document term matrix or term co-occurrence matrix. The preferred class is a
dgCMatrix-class. However there is support for any Matrix-class object as
well as several other commonly-used classes such as matrix, dfm, DocumentTermMatrix,
and simple_triplet_matrix
k Integer number of topics.
iterations Integer number of iterations for the Gibbs sampler to run.
burnin Integer number of burnin iterations. If burnin is greater than -1, the resulting "beta" and "theta" matrices are an average over all iterations greater than burnin.
alpha Numeric scalar or vector of length k. This is the prior for topics over documents.
eta Numeric scalar, numeric vector of length ncol(data), or numeric matrix with
k rows and ncol(data) columns. This is the prior for words over topics.
optimize_alpha Logical. Do you want to optimize alpha every iteration? Defaults to FALSE. See
’details’ below for more information.
calc_likelihood
Logical. Do you want to calculate the log likelihood every iteration? Useful for
assessing convergence. Defaults to TRUE.
calc_r2 Logical. Do you want to calculate R-squared after the model is trained? Defaults
to FALSE. See calc_lda_r2.
threads Number of parallel threads, defaults to 1. See Details, below.
return_data Logical. Do you want data returned as part of the model object?
verbose Logical. Do you want to print a progress bar out to the console? Defaults to
TRUE.
... Additional arguments, currently unused
Details
This function calls a collapsed Gibbs sampler for Latent Dirichlet Allocation written using the
excellent Rcpp package. Some implementation notes follow:
Topic-token and topic-document assignments are not initialized based on a uniform-random sampling, as is common. Instead, topic-token probabilities (i.e. beta) are initialized by sampling from
a Dirichlet distribution with eta as its parameter. The same is done for topic-document probabilities
(i.e. theta) using alpha. Then an internal function is called (initialize_topic_counts) to run
a single Gibbs iteration to initialize assignments of tokens to topics and topics to documents.
When you use burn-in iterations (i.e. burnin > -1), the resulting beta and theta matrices are calculated by averaging over every iteration after the specified number of burn-in iterations. If you do not use burn-in iterations, then the matrices are calculated from the last run only. Ideally, you'd burn in every iteration before convergence, then average over the chain after it's converged (and thus every observation is independent).
If you set optimize_alpha to TRUE, then each element of alpha is proportional to the number of times each topic has been sampled that iteration, averaged with the value of alpha from the previous iteration. This lets you start with a symmetric alpha and drift into an asymmetric one. However, (a)
this probably means that convergence will take longer to happen or convergence may not happen at
all. And (b) I make no guarantees that doing this will give you any benefit or that it won’t hurt your
model. Caveat emptor!
The log likelihood calculation is the same as the one found on page 9 of https://arxiv.org/pdf/1510.08628.pdf. The only difference is that the version in tidylda allows eta to be a vector or matrix. (Vector used in this function, matrix used for model updates in refit.tidylda.) At present, the log likelihood function appears to be ok for assessing convergence, i.e. it has the right shape. However, it is, as of this writing, returning positive numbers, rather than the expected negative numbers. Looking into that, but in the meantime caveat emptor once again.
Parallelism is not currently implemented. The threads argument is a placeholder for planned enhancements.
Value
Returns an S3 object of class tidylda. See new_tidylda.
Examples
# load some data
data(nih_sample_dtm)
# fit a model
set.seed(12345)
m <- tidylda(
data = nih_sample_dtm[1:20, ], k = 5,
iterations = 200, burnin = 175
)
str(m)
# predict on held-out documents using gibbs sampling "fold in"
p1 <- predict(m, nih_sample_dtm[21:100, ],
method = "gibbs",
iterations = 200, burnin = 175
)
# predict on held-out documents using the dot product method
p2 <- predict(m, nih_sample_dtm[21:100, ], method = "dot")
# compare the methods
barplot(rbind(p1[1, ], p2[1, ]), beside = TRUE, col = c("red", "blue")) |
rspack-codespan-reporting | rust | Rust | Crate rspack\_codespan\_reporting
===
Diagnostic reporting support for the codespan crate.
Modules
---
* diagnostic: Diagnostic data structures.
* files: Source file support for diagnostic reporting.
* term: Terminal back-end for emitting diagnostics.
Module rspack\_codespan\_reporting::diagnostic
===
Diagnostic data structures.
Structs
---
* Diagnostic: Represents a diagnostic message that can provide information like errors and warnings to the user.
* Label: A label describing an underlined region of code associated with a diagnostic.
Enums
---
* LabelStyle
* Severity: A severity level for diagnostic messages.
Module rspack\_codespan\_reporting::files
===
Source file support for diagnostic reporting.
The main trait defined in this module is the `Files` trait, which provides the minimum amount of functionality required for printing `Diagnostics`
with the `term::emit` function.
Simple implementations of this trait are provided:
* `SimpleFile`: For single-file use-cases
* `SimpleFiles`: For multi-file use-cases
These data structures provide a pretty minimal API, however,
so end-users are encouraged to create their own implementations for their own specific use-cases, such as an implementation that accesses the file system directly (and caches the line start locations), or an implementation using an incremental compilation library like `salsa`.
Structs
---
* Location: A user-facing location in a source file.
* SimpleFile: A file database that contains a single source file.
* SimpleFiles: A file database that can store multiple source files.
Enums
---
* Error: An enum representing an error that happened while looking up a file or a piece of content in that file.
Traits
---
* Files: A minimal interface for accessing source files when rendering diagnostics.
Functions
---
* column_index: The column index at the given byte index in the source file. This is the number of characters to the given byte index.
* line_starts: Return the starting byte index of each line in the source string.
Module rspack\_codespan\_reporting::term
===
Terminal back-end for emitting diagnostics.
Re-exports
---
* `pub use termcolor;`
Structs
---
* Chars: Characters to use when rendering the diagnostic.
* ColorArg: A command line argument that configures the coloring of the output.
* Config: Configures how a diagnostic is rendered.
* Styles: Styles to use when rendering the diagnostic.
Enums
---
* DisplayStyle: The display style to use when rendering diagnostics.
Functions
---
* emit: Emit a diagnostic using the given writer, context, config, and files.
github.com/ipfs/go-ipfs-posinfo | go | Go | README
[¶](#section-readme)
---
### go-ipfs-posinfo
[![](https://img.shields.io/badge/made%20by-Protocol%20Labs-blue.svg?style=flat-square)](http://ipn.io)
[![](https://img.shields.io/badge/project-IPFS-blue.svg?style=flat-square)](http://ipfs.io/)
[![standard-readme compliant](https://img.shields.io/badge/standard--readme-OK-green.svg?style=flat-square)](https://github.com/RichardLitt/standard-readme)
[![GoDoc](https://godoc.org/github.com/ipfs/go-ipfs-posinfo?status.svg)](https://godoc.org/github.com/ipfs/go-ipfs-posinfo)
[![Build Status](https://travis-ci.org/ipfs/go-ipfs-posinfo.svg?branch=master)](https://travis-ci.org/ipfs/go-ipfs-posinfo)
> Posinfo wraps offset information for ipfs filestore nodes
#### ❗ This repo is no longer maintained.
👉 We highly recommend switching to the maintained version at <https://github.com/ipfs/boxo/tree/main/filestore/posinfo>.
🏎️ Good news! There is [tooling and documentation](https://github.com/ipfs/boxo#migrating-to-boxo) to expedite a switch in your repo.
⚠️ If you continue using this repo, please note that security fixes will not be provided (unless someone steps in to maintain it).
📚 Learn more, including how to take the maintainership mantle or ask questions, [here](https://github.com/ipfs/boxo/wiki/Copied-or-Migrated-Repos-FAQ).
#### Table of Contents
* [Install](#readme-install)
* [Usage](#readme-usage)
* [License](#readme-license)
#### Install
```
go get github.com/ipfs/go-ipfs-posinfo
```
#### Usage
See the [GoDoc documentation](https://godoc.org/github.com/ipfs/go-ipfs-posinfo)
#### License
MIT © Protocol Labs, Inc.
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package posinfo wraps offset information used by ipfs filestore nodes
### Index [¶](#pkg-index)
* [type FilestoreNode](#FilestoreNode)deprecated
* [type PosInfo](#PosInfo)deprecated
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [FilestoreNode](https://github.com/ipfs/go-ipfs-posinfo/blob/v0.0.2/posinfo.go#L24)
deprecated
```
type FilestoreNode struct {
[ipld](/github.com/ipfs/go-ipld-format).[Node](/github.com/ipfs/go-ipld-format#Node)
PosInfo *[PosInfo](#PosInfo)
}
```
FilestoreNode is an ipld.Node which carries PosInfo with it, allowing it to be mapped directly to a filesystem object.
Deprecated: use github.com/ipfs/boxo/filestore/posinfo.FilestoreNode
####
type [PosInfo](https://github.com/ipfs/go-ipfs-posinfo/blob/v0.0.2/posinfo.go#L14)
deprecated
```
type PosInfo struct {
Offset [uint64](/builtin#uint64)
FullPath [string](/builtin#string)
Stat [os](/os).[FileInfo](/os#FileInfo) // can be nil
}
```
PosInfo stores information about the file offset, its path and stat.
Deprecated: use github.com/ipfs/boxo/filestore/posinfo.PosInfo |
ncctools | ctan | TeX | ###### Abstract
The 'ncctools' collection consists of a number of packages extracted from NCC style (developed by <NAME> in 1992-1996 under LaTeX-2.09) while re-implementation it for LaTeX 2\({}_{\mathcal{E}}\). Many new packages were also added later.
The collection now contains 25 packages providing the following:
Dynamic counters. A counter declared as dynamic is actually created at its first use and receives at that moment the count style established by the \countstyle command. The special use of the \countstyle command with an optional parameter allows modifying the subordination of an existing counter. For example, while using the book class you can reject the subordination of the section counter to the chapter counter and re-subordinate figures, tables and equations to sections. The package is used in the nccthm package.
Implements the desclist environment. It is considered as an improvement of the description environment. The appearance of item markers is easy customizable on the fly. An optional parameter allows set a marker prototype for calculation of hang indentation skip. The description environment is redefined to use an optional parameter also.
instead. You can also decrease the length of em-dash by the cyremdash option to satisfy the Russian typesetting rules.
The package implements a command, \newfootnote, that adds footnote levels to the standard LaTeX footnote mechanism. Footnotes of every additional level are automatically grouped together on a LaTeX 2\({}_{\mathcal{E}}\) output page and are separated from other levels by a special vertical space and (maybe) a rule. You can customize the typesetting style of additional footnotes, choosing between ordinary footnotes and run-in paragraph footnotes (useful for critical editions). The service command \DeclareNewFootnote simplifies creation of new footnote levels with automatic footnote numbering. Customization of inter-level footnote rules is also possible.
The package introduces \mboxfill command filling a free space with a pattern. All leader types are supported. Width of pattern can be specified by the same manner as in the \makebox command.
Implementation of poor Black Board Bold symbols. Ported from old NCCLEX. It is useless in modern LaTeX but kept just in case.
Additional boxes from NCC-LaTeX. The \jhbox and \jvbox horizontally and vertically align a body with respect to a prototype. The \jparbox vertically aligns paragraph box with respect to a prototype. The \addbox adjusts height and depth of box. The \pbox is a simple version of one-column table. It is independent on \arraystretch value. The \cbox is intended for design of fancy headers in tables.
Implements a smart comma in math mode that works as an ordinary character if a decimal digit goes after it. Otherwise, the math comma works as a punctuation mark.
Implements the \cropbox command preparing a box with crop marks at its corners looking like angles. Angle parameters are customizable.
Implements the \cropmark command producing a crop box around the page text area (header area, footer area and marginal notes are optionally taken into consideration). The \cropmark command is useful as a parameter of the \watermark commands from the watermark package. It accurately interprets the current state of two-column, two-side, and reverse-margin modes.
Absolutely new implementation of functionality of the fancyhdr package. It is more transparent, simple, and non-aggressive (redefining of standard page styles is optional). Using the package with names of standard page styles as options, you can easy decorate your document with header/footer rules. For example, the command
\usepackage[headings]{nccfancyhdr}
sets the headings page style and provides it with the decorative rule at the header. Header width control is improved with two commands, namely \extendedheaders (extended upon marginal notes) and \normalheaders. The \thispagestyle command correctly works with the fancy page style (in fancyhdr, it didn't work because of use of global definitions). A new page style can be easy created with the help of the \newpagestyle command and fancy mark commands.
nccfloats: Wraps LaTeX floats with service commands \fig, \tabl, \figs, \tabls, introduces the \minifig and \minitabl commands preparing figure and table in a minipage with possible use of \caption within, and \sidefig and \sidetabl used for placement of minifloats next to surrounding text on the outer side of page.
nccfoots: The package implements commands for generating footnotes with manual marks. For example, to mark a footnote by a star you can write
\Footnote{$*$}{Footnote text}.
nccmath: Extension of the amsmath package. Its main aim is to combine \(\mathcal{A}\!\mathcal{M}\!\mathcal{S}\)'s type-setting of display equations and NCC-LaTeX's one. amsmath leaves the eqnarray environment unchanged; this package redefines eqnarray to allow the use of amsmath tag control features and display breaks. Inter-column distance in eqnarray is reduced to the distance typical for relation operations. All columns are prepared in the \displaystyle. A new darray environment is a mix of the \(\mathcal{A}\!\mathcal{M}\!\mathcal{S}\)'s aligned environment and LaTeX's array environment. It is typed out in the same way as the aligned environment but has a columns definition parameter as in the array environment. The use of column specifications is restricted to the necessary commands only: l, c, r, @, and * are allowed. The implementation has no conflicts with packages redefining arrays. The fleqn and ceqn environments allow dynamically changing the alignment of display formulas to flushed left or to centered alignment. Some additional commands are introduced also.
nccparskip: Useful for documents with non-zero skips between paragraphs. In this case, the additional vertical space inserted by lists is undesirable. The package provides identical distance between all paragraphs except sectioning markup commands. It redefines list control commands and suppresses \topsep, \partopsep, and \itemsep in lists. As a result, the distance between ordinary paragraphs and paragraphs prepared by lists is the same. The \SetParskip{distance} command controls this distance.
nccpic: Envelope for the graphicx package. It customizes the graphics extensions list for the dvips driver. You need not specify a graphics file extension when using the \includegraphics command. Depending on the dvi-driver specified, a graphics file with an appropriate extension is searched for. So, you only need to create a number of versions of a graphics file in different formats (for example, '.bmp' for dvips or Yap and '.png' for pdftex). After that you can produce the resulting '.ps' and '.pdf' files without any changes in the source file. The recommended storage for graphics files is the 'graphics/' subdirectory of the directory in which the '.tex' file is translated. Some additional commands are introduced also.
* Implements two commands, \dashrule and \dashrulefill, which compose dashed (multi)lines. Two footnote rule generation commands, \newfootnoterule and \newfootnotedashrule, are useful in conjunction with the manyfoot package.
* Extension of LaTeX's section, caption, and toc-entries generation technique. The package contains many improvements in comparison with the base LaTeX's implementation. The most interesting of them are:
* simple declaring of sections of any level (including sections of 0th level and captions for floats);
* user-controlled typeout for display sections (user can select one of the following typeout styles: hangindent, hangindent*, parindent, parindent*, hangparindent, hangparindent*, center, and centerlast);
* new section styles can be easy constructed with the help of two style definition commands;
* customizing of section or caption tag in a manner similar to the \(\mathcal{A}\!\mathcal{M}\!\mathcal{S}\) equation tag;
* simple declaring of toc-entries using prototypes for calculation of hang indentations;
* \numberline command never overlaps the text going after;
* \PnumPrototype command is used for calculation of right margin in table of contents;
* different captions for different float types;
* simple handling of new types of floats (after registration of a new float in the package, you can declare a caption and toc-entry for it; be sure that the \chapter command will automatically produce a vertical skip in a toc for the new float also).
* Implements the \stretchwith command that stretches a text by inserting something between every pair of neighbouring tokens.
* Yet another extension to the \newtheorem command. The following orthogonal properties of theorems are used: _numbering mode_ is standard or _apar_ (a number before header); _theorem type_ defines an appearance of a theorem (what fonts are used for title, comment, and body). The 'theorem' and 'remark' types are predefined;
no redefinition of page marks. Left and right watermarks are allowed. The temporary \thiswatermark command acts on the current page only. In this way it is easy to replace the page header on a particular page with your own header using the \thispageheading command.
github.com/xendit/xendit-go/v3 | go | Go | README
[¶](#section-readme)
---
![Xendit Go SDK](https://github.com/xendit/xendit-go/raw/v3.3.0/docs/header.jpg "Xendit Go SDK")
### Xendit Go SDK
The official Xendit Go SDK provides a simple and convenient way to call Xendit's REST API in applications written in Go.
* Package version: 3.3.0
### Getting Started
#### Installation
Install xendit-go with:
```
go get github.com/xendit/xendit-go/v3
```
Put the package under your project folder and add the following in import:
```
import xendit "github.com/xendit/xendit-go/v3"
```
To use a proxy, set the environment variable `HTTP_PROXY`:
```
os.Setenv("HTTP_PROXY", "http://proxy_name:proxy_port")
```
#### Authorization
The SDK needs to be instantiated using your secret API key obtained from the [Xendit Dashboard](https://dashboard.xendit.co/settings/developers#api-keys).
You can sign up for a free Dashboard account [here](https://dashboard.xendit.co/register).
```
xnd := xendit.NewClient("API-KEY")
```
### Documentation
Find detailed API information and examples for each of our products by clicking the links below,
* [Balance](https://github.com/xendit/xendit-go/blob/v3.3.0/docs/BalanceApi.md)
* [Customer](https://github.com/xendit/xendit-go/blob/v3.3.0/docs/CustomerApi.md)
* [Invoice](https://github.com/xendit/xendit-go/blob/v3.3.0/docs/InvoiceApi.md)
* [PaymentMethod](https://github.com/xendit/xendit-go/blob/v3.3.0/docs/PaymentMethodApi.md)
* [PaymentRequest](https://github.com/xendit/xendit-go/blob/v3.3.0/docs/PaymentRequestApi.md)
* [Payout](https://github.com/xendit/xendit-go/blob/v3.3.0/docs/PayoutApi.md)
* [Refund](https://github.com/xendit/xendit-go/blob/v3.3.0/docs/RefundApi.md)
* [Transaction](https://github.com/xendit/xendit-go/blob/v3.3.0/docs/TransactionApi.md)
All URIs are relative to *<https://api.xendit.co>*. For more information about our API, please refer to *<https://developers.xendit.co/>*.
Further Reading
* [Xendit Docs](https://docs.xendit.co/)
* [Xendit API Reference](https://developers.xendit.co/)
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package xendit provides the binding for Xendit APIs.
### Index [¶](#pkg-index)
* [Variables](#pkg-variables)
* [func CacheExpires(r *http.Response) time.Time](#CacheExpires)
* [type APIClient](#APIClient)
* + [func NewClient(apiKey string) *APIClient](#NewClient)
* + [func (c *APIClient) CallAPI(request *http.Request) (*http.Response, error)](#APIClient.CallAPI)
+ [func (c *APIClient) Decode(v interface{}, b []byte, contentType string) (err error)](#APIClient.Decode)
+ [func (c *APIClient) GetConfig() common.IConfiguration](#APIClient.GetConfig)
+ [func (c *APIClient) PrepareRequest(ctx context.Context, path string, method string, postBody interface{}, ...) (localVarRequest *http.Request, err error)](#APIClient.PrepareRequest)
* [type APIKey](#APIKey)
* [type APIResponse](#APIResponse)
* + [func NewAPIResponse(r *http.Response) *APIResponse](#NewAPIResponse)
+ [func NewAPIResponseWithError(errorMessage string) *APIResponse](#NewAPIResponseWithError)
* [type BasicAuth](#BasicAuth)
* [type Configuration](#Configuration)
* + [func NewConfiguration() *Configuration](#NewConfiguration)
* + [func (c *Configuration) AddDefaultHeader(key string, value string)](#Configuration.AddDefaultHeader)
+ [func (c *Configuration) ServerURL(index int, variables map[string]string) (string, error)](#Configuration.ServerURL)
+ [func (c *Configuration) ServerURLWithContext(ctx context.Context, endpoint string) (string, error)](#Configuration.ServerURLWithContext)
* [type ServerConfiguration](#ServerConfiguration)
* [type ServerConfigurations](#ServerConfigurations)
* + [func (sc ServerConfigurations) URL(index int, variables map[string]string) (string, error)](#ServerConfigurations.URL)
* [type ServerVariable](#ServerVariable)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
```
var (
// ContextServerIndex uses a server configuration from the index.
ContextServerIndex = contextKey("serverIndex")
// ContextOperationServerIndices uses a server configuration from the index mapping.
ContextOperationServerIndices = contextKey("serverOperationIndices")
// ContextServerVariables overrides a server configuration variables.
ContextServerVariables = contextKey("serverVariables")
// ContextOperationServerVariables overrides a server configuration variables using operation specific values.
ContextOperationServerVariables = contextKey("serverOperationVariables")
)
```
### Functions [¶](#pkg-functions)
####
func [CacheExpires](https://github.com/xendit/xendit-go/blob/v3.3.0/client.go#L448) [¶](#CacheExpires)
```
func CacheExpires(r *[http](/net/http).[Response](/net/http#Response)) [time](/time).[Time](/time#Time)
```
CacheExpires helper function to determine remaining time before repeating a request.
### Types [¶](#pkg-types)
####
type [APIClient](https://github.com/xendit/xendit-go/blob/v3.3.0/client.go#L45) [¶](#APIClient)
```
type APIClient struct {
// API Services
BalanceApi [balance](/github.com/xendit/xendit-go/[email protected]/balance_and_transaction).[BalanceApi](/github.com/xendit/xendit-go/[email protected]/balance_and_transaction#BalanceApi)
CustomerApi [customer](/github.com/xendit/xendit-go/[email protected]/customer).[CustomerApi](/github.com/xendit/xendit-go/[email protected]/customer#CustomerApi)
InvoiceApi [invoice](/github.com/xendit/xendit-go/[email protected]/invoice).[InvoiceApi](/github.com/xendit/xendit-go/[email protected]/invoice#InvoiceApi)
PaymentMethodApi [paymentmethod](/github.com/xendit/xendit-go/[email protected]/payment_method).[PaymentMethodApi](/github.com/xendit/xendit-go/[email protected]/payment_method#PaymentMethodApi)
PaymentRequestApi [paymentrequest](/github.com/xendit/xendit-go/[email protected]/payment_request).[PaymentRequestApi](/github.com/xendit/xendit-go/[email protected]/payment_request#PaymentRequestApi)
PayoutApi [payout](/github.com/xendit/xendit-go/[email protected]/payout).[PayoutApi](/github.com/xendit/xendit-go/[email protected]/payout#PayoutApi)
RefundApi [refund](/github.com/xendit/xendit-go/[email protected]/refund).[RefundApi](/github.com/xendit/xendit-go/[email protected]/refund#RefundApi)
TransactionApi [transaction](/github.com/xendit/xendit-go/[email protected]/balance_and_transaction).[TransactionApi](/github.com/xendit/xendit-go/[email protected]/balance_and_transaction#TransactionApi)
// contains filtered or unexported fields
}
```
APIClient manages communication with the XENDIT API. In most cases there should be only one, shared, APIClient.
####
func [NewClient](https://github.com/xendit/xendit-go/blob/v3.3.0/client.go#L67) [¶](#NewClient)
```
func NewClient(apiKey [string](/builtin#string)) *[APIClient](#APIClient)
```
NewClient creates a new Xendit SDK Client. The SDK needs to be instantiated using your secret API key obtained from the <https://dashboard.xendit.co/settings/developers#api-keys>.
You can sign up for a free Dashboard account from <https://dashboard.xendit.co/register>.
####
func (*APIClient) [CallAPI](https://github.com/xendit/xendit-go/blob/v3.3.0/client.go#L113) [¶](#APIClient.CallAPI)
```
func (c *[APIClient](#APIClient)) CallAPI(request *[http](/net/http).[Request](/net/http#Request)) (*[http](/net/http).[Response](/net/http#Response), [error](/builtin#error))
```
CallAPI performs the request.
####
func (*APIClient) [Decode](https://github.com/xendit/xendit-go/blob/v3.3.0/client.go#L297) [¶](#APIClient.Decode)
```
func (c *[APIClient](#APIClient)) Decode(v interface{}, b [][byte](/builtin#byte), contentType [string](/builtin#string)) (err [error](/builtin#error))
```
####
func (*APIClient) [GetConfig](https://github.com/xendit/xendit-go/blob/v3.3.0/client.go#L139) [¶](#APIClient.GetConfig)
```
func (c *[APIClient](#APIClient)) GetConfig() [common](/github.com/xendit/xendit-go/[email protected]/common).[IConfiguration](/github.com/xendit/xendit-go/[email protected]/common#IConfiguration)
```
Allow modification of underlying config for alternate implementations and testing. Caution: modifying the configuration while live can cause data races and potentially unwanted behavior.
####
func (*APIClient) [PrepareRequest](https://github.com/xendit/xendit-go/blob/v3.3.0/client.go#L144) [¶](#APIClient.PrepareRequest)
```
func (c *[APIClient](#APIClient)) PrepareRequest(
ctx [context](/context).[Context](/context#Context),
path [string](/builtin#string), method [string](/builtin#string),
postBody interface{},
headerParams map[[string](/builtin#string)][string](/builtin#string),
queryParams [url](/net/url).[Values](/net/url#Values),
formParams [url](/net/url).[Values](/net/url#Values),
formFiles [][common](/github.com/xendit/xendit-go/[email protected]/common).[FormFile](/github.com/xendit/xendit-go/[email protected]/common#FormFile)) (localVarRequest *[http](/net/http).[Request](/net/http#Request), err [error](/builtin#error))
```
PrepareRequest builds the request.
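For illustration only, a low-level sketch that chains `PrepareRequest`, `CallAPI` and `Decode` could look like the following (imports omitted; the endpoint path, header and empty form values are placeholders, and most callers should use the generated service APIs instead):
```
req, err := client.PrepareRequest(
	context.Background(),
	"/balance",        // illustrative endpoint path
	http.MethodGet,
	nil,               // no request body
	map[string]string{"Accept": "application/json"},
	url.Values{},      // no query parameters
	url.Values{},      // no form parameters
	nil,               // no form files
)
if err != nil {
	// handle error
}

resp, err := client.CallAPI(req)
if err != nil {
	// handle error
}
defer resp.Body.Close()

body, _ := io.ReadAll(resp.Body)
var out map[string]interface{}
err = client.Decode(&out, body, resp.Header.Get("Content-Type"))
```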
####
type [APIKey](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L41) [¶](#APIKey)
```
type APIKey struct {
Key [string](/builtin#string)
Prefix [string](/builtin#string)
}
```
APIKey provides API key based authentication to a request passed via context using ContextAPIKey
####
type [APIResponse](https://github.com/xendit/xendit-go/blob/v3.3.0/response.go#L8) [¶](#APIResponse)
```
type APIResponse struct {
*[http](/net/http).[Response](/net/http#Response) `json:"-"`
Message [string](/builtin#string) `json:"message,omitempty"`
// Operation is the name of the OpenAPI operation.
Operation [string](/builtin#string) `json:"operation,omitempty"`
// RequestURL is the request URL. This value is always available, even if the
// embedded *http.Response is nil.
RequestURL [string](/builtin#string) `json:"url,omitempty"`
// Method is the HTTP method used for the request. This value is always
// available, even if the embedded *http.Response is nil.
Method [string](/builtin#string) `json:"method,omitempty"`
// Payload holds the contents of the response body (which may be nil or empty).
// This is provided here as the raw response.Body() reader will have already
// been drained.
Payload [][byte](/builtin#byte) `json:"-"`
}
```
APIResponse stores the API response returned by the server.
####
func [NewAPIResponse](https://github.com/xendit/xendit-go/blob/v3.3.0/response.go#L26) [¶](#NewAPIResponse)
```
func NewAPIResponse(r *[http](/net/http).[Response](/net/http#Response)) *[APIResponse](#APIResponse)
```
NewAPIResponse returns a new APIResponse object.
####
func [NewAPIResponseWithError](https://github.com/xendit/xendit-go/blob/v3.3.0/response.go#L33) [¶](#NewAPIResponseWithError)
```
func NewAPIResponseWithError(errorMessage [string](/builtin#string)) *[APIResponse](#APIResponse)
```
NewAPIResponseWithError returns a new APIResponse object with the provided error message.
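As a sketch of how these two constructors might be combined, the hypothetical helper below (not part of the SDK) wraps either an `*http.Response` or an error into an `APIResponse`:
```
// wrapResponse is a hypothetical helper, not part of the SDK.
func wrapResponse(r *http.Response, err error) *xendit.APIResponse {
	if err != nil {
		// No usable *http.Response; carry the error message instead.
		return xendit.NewAPIResponseWithError(err.Error())
	}
	return xendit.NewAPIResponse(r)
}
```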
####
type [BasicAuth](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L35) [¶](#BasicAuth)
```
type BasicAuth struct {
UserName [string](/builtin#string) `json:"userName,omitempty"`
Password [string](/builtin#string) `json:"password,omitempty"`
}
```
BasicAuth provides basic http authentication to a request passed via context using ContextBasicAuth
####
type [Configuration](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L64) [¶](#Configuration)
```
type Configuration struct {
Host [string](/builtin#string) `json:"host,omitempty"`
Scheme [string](/builtin#string) `json:"scheme,omitempty"`
DefaultHeader map[[string](/builtin#string)][string](/builtin#string) `json:"defaultHeader,omitempty"`
UserAgent [string](/builtin#string) `json:"userAgent,omitempty"`
Debug [bool](/builtin#bool) `json:"debug,omitempty"`
Servers [ServerConfigurations](#ServerConfigurations)
OperationServers map[[string](/builtin#string)][ServerConfigurations](#ServerConfigurations)
HTTPClient *[http](/net/http).[Client](/net/http#Client)
}
```
Configuration stores the configuration of the API client
```
var Default [Configuration](#Configuration) = *[NewConfiguration](#NewConfiguration)()
```
####
func [NewConfiguration](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L78) [¶](#NewConfiguration)
```
func NewConfiguration() *[Configuration](#Configuration)
```
NewConfiguration returns a new Configuration object
####
func (*Configuration) [AddDefaultHeader](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L96) [¶](#Configuration.AddDefaultHeader)
```
func (c *[Configuration](#Configuration)) AddDefaultHeader(key [string](/builtin#string), value [string](/builtin#string))
```
AddDefaultHeader adds a new HTTP header to the default header in the request
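For example (the header name and value are illustrative):
```
cfg := xendit.NewConfiguration()
cfg.AddDefaultHeader("X-Correlation-Id", "abc-123")
```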
####
func (*Configuration) [ServerURL](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L129) [¶](#Configuration.ServerURL)
```
func (c *[Configuration](#Configuration)) ServerURL(index [int](/builtin#int), variables map[[string](/builtin#string)][string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
ServerURL returns URL based on server settings
####
func (*Configuration) [ServerURLWithContext](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L186) [¶](#Configuration.ServerURLWithContext)
```
func (c *[Configuration](#Configuration)) ServerURLWithContext(ctx [context](/context).[Context](/context#Context), endpoint [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
ServerURLWithContext returns a new server URL given an endpoint
####
type [ServerConfiguration](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L54) [¶](#ServerConfiguration)
```
type ServerConfiguration struct {
URL [string](/builtin#string)
Description [string](/builtin#string)
Variables map[[string](/builtin#string)][ServerVariable](#ServerVariable)
}
```
ServerConfiguration stores the information about a server
####
type [ServerConfigurations](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L61) [¶](#ServerConfigurations)
```
type ServerConfigurations [][ServerConfiguration](#ServerConfiguration)
```
ServerConfigurations stores multiple ServerConfiguration items
####
func (ServerConfigurations) [URL](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L101) [¶](#ServerConfigurations.URL)
```
func (sc [ServerConfigurations](#ServerConfigurations)) URL(index [int](/builtin#int), variables map[[string](/builtin#string)][string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
URL formats the template at the given index using the given variables.
####
type [ServerVariable](https://github.com/xendit/xendit-go/blob/v3.3.0/configuration.go#L47) [¶](#ServerVariable)
```
type ServerVariable struct {
Description [string](/builtin#string)
DefaultValue [string](/builtin#string)
EnumValues [][string](/builtin#string)
}
```
ServerVariable stores the information about a server variable
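A sketch of resolving a templated server URL, assuming the usual openapi-generator behaviour of substituting `{name}` placeholders and checking values against `EnumValues`; the host below is illustrative, not a real endpoint:
```
servers := xendit.ServerConfigurations{
	{
		URL:         "https://{region}.example-api.test",
		Description: "Regional endpoint (illustrative)",
		Variables: map[string]xendit.ServerVariable{
			"region": {
				Description:  "Deployment region",
				DefaultValue: "sg",
				EnumValues:   []string{"sg", "id", "ph"},
			},
		},
	},
}

// Expected to resolve to "https://id.example-api.test" if "id" is an accepted value.
u, err := servers.URL(0, map[string]string{"region": "id"})
```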
cytominer | cran | R | Package ‘cytominer’
October 12, 2022
Encoding UTF-8
Type Package
Title Methods for Image-Based Cell Profiling
Version 0.2.2
Description Typical morphological profiling datasets have millions of cells
and hundreds of features per cell. When working with this data, you must
clean the data, normalize the features to make them comparable across
experiments, transform the features, select features based on their
quality, and aggregate the single-cell data, if needed. 'cytominer' makes
these steps fast and easy. Methods used in practice in the field are
discussed in Caicedo (2017) <doi:10.1038/nmeth.4397>. An overview of the
field is presented in Caicedo (2016) <doi:10.1016/j.copbio.2016.04.003>.
Depends R (>= 3.3.0)
License BSD_3_clause + file LICENSE
LazyData TRUE
Imports caret (>= 6.0.76), doParallel (>= 1.0.10), dplyr (>= 0.8.5),
foreach (>= 1.4.3), futile.logger (>= 1.4.3), magrittr (>=
1.5), Matrix (>= 1.2), purrr (>= 0.3.3), rlang (>= 0.4.5),
tibble (>= 2.1.3), tidyr (>= 1.0.2)
Suggests DBI (>= 0.7), dbplyr (>= 1.4.2), knitr (>= 1.17), lazyeval
(>= 0.2.0), readr (>= 1.1.1), rmarkdown (>= 1.6), RSQLite (>=
2.0), stringr (>= 1.2.0), testthat (>= 1.0.2)
VignetteBuilder knitr
URL https://github.com/cytomining/cytominer
BugReports https://github.com/cytomining/cytominer/issues
RoxygenNote 7.1.0
NeedsCompilation no
Author <NAME> [aut],
<NAME> [aut],
<NAME>Quin [aut],
<NAME> [aut],
<NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-05-09 05:00:03 UTC
R topics documented:
aggregate
correlation_threshold
count_na_rows
covariance
drop_na_columns
drop_na_rows
extract_subpopulations
generalized_log
generate_component_matrix
normalize
replicate_correlation
sparse_random_projection
svd_entropy
transform
variable_importance
variable_select
variance_threshold
whiten
aggregate Aggregate data based on given grouping.
Description
aggregate aggregates data based on the specified aggregation method.
Usage
aggregate(
population,
variables,
strata,
operation = "mean",
univariate = TRUE,
...
)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
strata character vector specifying grouping variables for aggregation.
operation optional character string specifying method for aggregation, e.g. "mean", "median",
"mean+sd". A sequence can comprise only univariate functions.
univariate boolean specifying whether the aggregation function is univariate or multivariate.
... optional arguments passed to aggregation operation
Value
aggregated data of the same class as population.
Examples
population <- tibble::tibble(
Metadata_group = c(
"control", "control", "control", "control",
"experiment", "experiment", "experiment",
"experiment"
),
Metadata_batch = c("a", "a", "b", "b", "a", "a", "b", "b"),
AreaShape_Area = c(10, 12, 15, 16, 8, 8, 7, 7)
)
variables <- c("AreaShape_Area")
strata <- c("Metadata_group", "Metadata_batch")
aggregate(population, variables, strata, operation = "mean")
correlation_threshold Remove redundant variables.
Description
correlation_threshold returns list of variables such that no two variables have a correlation
greater than a specified threshold.
Usage
correlation_threshold(variables, sample, cutoff = 0.9, method = "pearson")
Arguments
variables character vector specifying observation variables.
sample tbl containing sample used to estimate parameters.
cutoff threshold between [0,1] that defines the minimum correlation of a selected feature.
method optional character string specifying method for calculating correlation. This
must be one of the strings "pearson" (default), "kendall", "spearman".
Details
correlation_threshold is a wrapper for caret::findCorrelation.
Value
character vector specifying observation variables to be excluded.
Examples
suppressMessages(suppressWarnings(library(magrittr)))
sample <- tibble::tibble(
x = rnorm(30),
y = rnorm(30) / 1000
)
sample %<>% dplyr::mutate(z = x + rnorm(30) / 10)
variables <- c("x", "y", "z")
head(sample)
cor(sample)
# `x` and `z` are highly correlated; one of them will be removed
correlation_threshold(variables, sample)
count_na_rows Count the number of NAs per variable.
Description
count_na_rows counts the number of NAs per variable.
Usage
count_na_rows(population, variables)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
Value
data frame with frequency of NAs per variable.
Examples
population <- tibble::tibble(
Metadata_group = c(
"control", "control", "control", "control",
"experiment", "experiment", "experiment", "experiment"
),
Metadata_batch = c("a", "a", "b", "b", "a", "a", "b", "b"),
AreaShape_Area = c(10, 12, 15, 16, 8, 8, 7, 7),
AreaShape_length = c(2, 3, NA, NA, 4, 5, 1, 5)
)
variables <- c("AreaShape_Area", "AreaShape_length")
count_na_rows(population, variables)
covariance Compute covariance matrix and vectorize.
Description
covariance computes the covariance matrix and vectorizes it.
Usage
covariance(population, variables)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
Value
data frame of 1 row comprising vectorized covariance matrix.
Examples
population <- tibble::tibble(
x = rnorm(30),
y = rnorm(30),
z = rnorm(30)
)
variables <- c("x", "y")
covariance(population, variables)
drop_na_columns Remove variables with NA values.
Description
drop_na_columns returns the list of variables whose frequency of NAs is greater than a specified
threshold.
Usage
drop_na_columns(population, variables, cutoff = 0.05)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
cutoff threshold between [0,1]. Variables with an NA frequency > cutoff are returned.
Value
character vector specifying observation variables to be excluded.
Examples
population <- tibble::tibble(
Metadata_group = c(
"control", "control", "control", "control",
"experiment", "experiment", "experiment", "experiment"
),
Metadata_batch = c("a", "a", "b", "b", "a", "a", "b", "b"),
AreaShape_Area = c(10, 12, 15, 16, 8, 8, 7, 7),
AreaShape_Length = c(2, 3, NA, NA, 4, 5, 1, 5)
)
variables <- c("AreaShape_Area", "AreaShape_Length")
drop_na_columns(population, variables)
drop_na_rows Drop rows that are NA in all specified variables.
Description
drop_na_rows drops rows that are NA in all specified variables.
Usage
drop_na_rows(population, variables)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
Value
population without rows that have NA in all specified variables.
Examples
population <- tibble::tibble(
Metadata_group = c(
"control", "control", "control", "control",
"experiment", "experiment", "experiment", "experiment"
),
Metadata_batch = c("a", "a", "b", "b", "a", "a", "b", "b"),
AreaShape_Area = c(10, 12, NA, 16, 8, 8, 7, 7),
AreaShape_Length = c(2, 3, NA, NA, 4, 5, 1, 5)
)
variables <- c("AreaShape_Area", "AreaShape_Length")
drop_na_rows(population, variables)
extract_subpopulations
Extract subpopulations.
Description
extract_subpopulations identifies clusters in the reference and population sets and reports the
frequency of points in each cluster for the two sets.
Usage
extract_subpopulations(population, reference, variables, k)
Arguments
population tbl with grouping (metadata) and observation variables.
reference tbl with grouping (metadata) and observation variables. Columns of population
and reference should be identical.
variables character vector specifying observation variables.
k scalar specifying number of clusters.
Value
list containing cluster centers (subpop_centers), two normalized histograms specifying the frequency
of each cluster in population and reference (subpop_profiles), and cluster prediction and distance
to the predicted cluster for all input data (population_clusters and reference_clusters).
Examples
data <- tibble::tibble(
Metadata_group = c(
"control", "control", "control", "control",
"experiment", "experiment", "experiment", "experiment"
),
AreaShape_Area = c(10, 12, NA, 16, 8, 8, 7, 7),
AreaShape_Length = c(2, 3, NA, NA, 4, 5, 1, 5)
)
variables <- c("AreaShape_Area", "AreaShape_Length")
population <- dplyr::filter(data, Metadata_group == "experiment")
reference <- dplyr::filter(data, Metadata_group == "control")
extract_subpopulations(
population = population,
reference = reference,
variables = variables,
k = 3
)
generalized_log Generalized log transform data.
Description
generalized_log transforms specified observation variables using x = log((x + sqrt(x^2 + offset^2)) / 2).
Usage
generalized_log(population, variables, offset = 1)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
offset optional offset parameter for the transformation.
Value
transformed data of the same class as population.
Examples
population <- tibble::tibble(
Metadata_Well = c("A01", "A02", "B01", "B02"),
Intensity_DNA = c(8, 20, 12, 32)
)
variables <- c("Intensity_DNA")
generalized_log(population, variables)
generate_component_matrix
A sparse matrix for sparse random projection.
Description
generate_component_matrix generates the sparse random component matrix for performing sparse
random projection. If density is the density of the sparse matrix and n_components is the size of
the projected space, the elements of the random matrix are drawn from the distribution given in Details below.
Usage
generate_component_matrix(n_features, n_components, density)
Arguments
n_features the dimensionality of the original space.
n_components the dimensionality of the projected space.
density the density of the sparse random matrix.
Details
-sqrt(1 / (density * n_components)) with probability density / 2
0 with probability 1 - density
sqrt(1 / (density * n_components)) with probability density / 2
Value
A sparse random matrix of size (n_features, n_components).
Examples
generate_component_matrix(500, 100, 0.3)
normalize Normalize observation variables.
Description
normalize normalizes observation variables based on the specified normalization method.
Usage
normalize(
population,
variables,
strata,
sample,
operation = "standardize",
...
)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
strata character vector specifying grouping variables for grouping prior to normalization.
sample tbl containing sample that is used by normalization methods to estimate parameters.
sample has same structure as population. Typically, sample corresponds to controls in the experiment.
operation optional character string specifying method for normalization. This must be one
of the strings "standardize" (default), "robustize".
... arguments passed to normalization operation
Value
normalized data of the same class as population.
Examples
suppressMessages(suppressWarnings(library(magrittr)))
population <- tibble::tibble(
Metadata_group = c(
"control", "control", "control", "control",
"experiment", "experiment", "experiment", "experiment"
),
Metadata_batch = c("a", "a", "b", "b", "a", "a", "b", "b"),
AreaShape_Area = c(10, 12, 15, 16, 8, 8, 7, 7)
)
variables <- c("AreaShape_Area")
strata <- c("Metadata_batch")
sample <- population %>% dplyr::filter(Metadata_group == "control")
cytominer::normalize(population, variables, strata, sample, operation = "standardize")
replicate_correlation Measure replicate correlation of variables.
Description
‘replicate_correlation‘ measures replicate correlation of variables.
Usage
replicate_correlation(
sample,
variables,
strata,
replicates,
replicate_by = NULL,
split_by = NULL,
cores = NULL
)
Arguments
sample tbl containing sample used to estimate parameters.
variables character vector specifying observation variables.
strata character vector specifying grouping variables for grouping prior to normalization.
replicates number of replicates.
replicate_by optional character string specifying column containing the replicate id.
split_by optional character string specifying column by which to split the sample into
batches; replicate correlations will be calculated per batch.
cores optional integer specifying number of CPU cores used for parallel computing
using doParallel.
Value
data frame of variable quality measurements
Examples
set.seed(123)
x1 <- rnorm(10)
x2 <- x1 + rnorm(10) / 100
y1 <- rnorm(10)
y2 <- y1 + rnorm(10) / 10
z1 <- rnorm(10)
z2 <- z1 + rnorm(10) / 1
batch <- rep(rep(1:2, each = 5), 2)
treatment <- rep(1:10, 2)
replicate_id <- rep(1:2, each = 10)
sample <-
tibble::tibble(
x = c(x1, x2), y = c(y1, y2), z = c(z1, z2),
Metadata_treatment = treatment,
Metadata_replicate_id = replicate_id,
Metadata_batch = batch
)
head(sample)
# `replicate_correlation` returns the median, min, and max
# replicate correlation (across batches) per variable
replicate_correlation(
sample = sample,
variables = c("x", "y", "z"),
strata = c("Metadata_treatment"),
replicates = 2,
split_by = "Metadata_batch",
replicate_by = "Metadata_replicate_id",
cores = 1
)
sparse_random_projection
Reduce the dimensionality of a population using sparse random projection.
Description
sparse_random_projection reduces the dimensionality of a population by projecting the original
data with a sparse random matrix. Generally more efficient and faster to compute than a Gaussian
random projection matrix, while providing similar embedding quality.
Usage
sparse_random_projection(population, variables, n_components)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
n_components size of the projected feature space.
Value
Dimensionality reduced population.
Examples
population <- tibble::tibble(
Metadata_Well = c("A01", "A02", "B01", "B02"),
AreaShape_Area_DNA = c(10, 12, 7, 7),
AreaShape_Length_DNA = c(2, 3, 1, 5),
Intensity_DNA = c(8, 20, 12, 32),
Texture_DNA = c(5, 2, 43, 13)
)
variables <- c("AreaShape_Area_DNA", "AreaShape_Length_DNA", "Intensity_DNA", "Texture_DNA")
sparse_random_projection(population, variables, 2)
svd_entropy Feature importance based on data entropy.
Description
svd_entropy measures the contribution of each feature in decreasing the data entropy.
Usage
svd_entropy(variables, sample, cores = NULL)
Arguments
variables character vector specifying observation variables.
sample tbl containing sample used to estimate parameters.
cores optional integer specifying number of CPU cores used for parallel computing
using doParallel.
Value
data frame specifying the contribution of each feature in decreasing the data entropy. Higher values
indicate more information.
Examples
sample <- tibble::tibble(
AreaShape_MinorAxisLength = c(10, 12, 15, 16, 8, 8, 7, 7, 13, 18),
AreaShape_MajorAxisLength = c(35, 18, 22, 16, 9, 20, 11, 15, 18, 42),
AreaShape_Area = c(245, 151, 231, 179, 50, 112, 53, 73, 164, 529)
)
variables <- c("AreaShape_MinorAxisLength", "AreaShape_MajorAxisLength", "AreaShape_Area")
svd_entropy(variables, sample, cores = 1)
transform Transform observation variables.
Description
transform transforms observation variables based on the specified transformation method.
Usage
transform(population, variables, operation = "generalized_log", ...)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
operation optional character string specifying method for transform. This must be one of
the strings "generalized_log" (default), "whiten".
... arguments passed to transformation operation.
Value
transformed data of the same class as population.
Examples
population <- tibble::tibble(
Metadata_Well = c("A01", "A02", "B01", "B02"),
Intensity_DNA = c(8, 20, 12, 32)
)
variables <- c("Intensity_DNA")
transform(population, variables, operation = "generalized_log")
variable_importance Measure variable importance.
Description
variable_importance measures importance of variables based on specified methods.
Usage
variable_importance(
sample,
variables,
operation = "replicate_correlation",
...
)
Arguments
sample tbl containing sample used to estimate parameters.
variables character vector specifying observation variables.
operation optional character string specifying method for computing variable importance.
Currently, only "replicate_correlation" (default) is implemented.
... arguments passed to variable importance operation.
Value
data frame containing variable importance measures.
Examples
set.seed(123)
x1 <- rnorm(10)
x2 <- x1 + rnorm(10) / 100
y1 <- rnorm(10)
y2 <- y1 + rnorm(10) / 10
z1 <- rnorm(10)
z2 <- z1 + rnorm(10) / 1
batch <- rep(rep(1:2, each = 5), 2)
treatment <- rep(1:10, 2)
replicate_id <- rep(1:2, each = 10)
sample <-
tibble::tibble(
x = c(x1, x2), y = c(y1, y2), z = c(z1, z2),
Metadata_treatment = treatment,
Metadata_replicate_id = replicate_id,
Metadata_batch = batch
)
head(sample)
# `replicate_correlation` returns the median, min, and max
# replicate correlation (across batches) per variable
variable_importance(
sample = sample,
variables = c("x", "y", "z"),
operation = "replicate_correlation",
strata = c("Metadata_treatment"),
replicates = 2,
split_by = "Metadata_batch",
replicate_by = "Metadata_replicate_id",
cores = 1
)
variable_select Select observation variables.
Description
variable_select selects observation variables based on the specified variable selection method.
Usage
variable_select(
population,
variables,
sample = NULL,
operation = "variance_threshold",
...
)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
sample tbl containing sample that is used by some variable selection methods. sample
has same structure as population.
operation optional character string specifying method for variable selection. This must be
one of the strings "variance_threshold", "correlation_threshold", "drop_na_columns".
... arguments passed to selection operation.
Value
variable-selected data of the same class as population.
Examples
# In this example, we use `correlation_threshold` as the operation for
# variable selection.
suppressMessages(suppressWarnings(library(magrittr)))
population <- tibble::tibble(
x = rnorm(100),
y = rnorm(100) / 1000
)
population %<>% dplyr::mutate(z = x + rnorm(100) / 10)
sample <- population %>% dplyr::slice(1:30)
variables <- c("x", "y", "z")
operation <- "correlation_threshold"
cor(sample)
# `x` and `z` are highly correlated; one of them will be removed
head(population)
futile.logger::flog.threshold(futile.logger::ERROR)
variable_select(population, variables, sample, operation) %>% head()
variance_threshold Remove variables with near-zero variance.
Description
variance_threshold returns list of variables that have near-zero variance.
Usage
variance_threshold(variables, sample)
Arguments
variables character vector specifying observation variables.
sample tbl containing sample used to estimate parameters.
Details
variance_threshold is a reimplementation of caret::nearZeroVar, using the default values for
freqCut and uniqueCut.
Value
character vector specifying observation variables to be excluded.
Examples
sample <- tibble::tibble(
AreaShape_Area = c(10, 12, 15, 16, 8, 8, 7, 7, 13, 18),
AreaShape_Euler = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
)
variables <- c("AreaShape_Area", "AreaShape_Euler")
variance_threshold(variables, sample)
whiten Whiten data.
Description
whiten transforms specified observation variables by estimating a whitening transformation on a
sample and applying it to the population.
Usage
whiten(population, variables, sample, regularization_param = 1)
Arguments
population tbl with grouping (metadata) and observation variables.
variables character vector specifying observation variables.
sample tbl containing sample that is used by the method to estimate whitening parameters.
sample has same structure as population. Typically, sample corresponds to controls in the experiment.
regularization_param
optional parameter used in whitening to offset eigenvalues to avoid division by
zero.
Value
transformed data of the same class as population.
Examples
population <- tibble::tibble(
Metadata_Well = c("A01", "A02", "B01", "B02"),
Intensity_DNA = c(8, 20, 12, 32),
Texture_DNA = c(5, 2, 43, 13)
)
variables <- c("Intensity_DNA", "Texture_DNA")
whiten(population, variables, population, 0.01)
github.com/ovh/cds/sdk/izanami | go | Go | README
[¶](#section-readme)
---
### izanami-go-client
Go client for [izanami](https://github.com/maif/izanami)
##### Usage
```
c, errNew := New("host", "clientID", "clientSecret")
if errNew != nil {
	return errNew
}

// List all features
features, errF := c.Feature().ListAll()
if errF != nil {
	return errF
}

// Create a feature
f := FeatureModel{
	ID:       "my-feature",
	Enabled:  true,
	Strategy: NoStrategy,
}
if err := c.Feature().Create(f); err != nil {
	return err
}

// Get a feature
myFeature, errF := c.Feature().Get(f.ID)
if errF != nil {
	return errF
}

// Update a feature
if err := c.Feature().Update(myFeature); err != nil {
	return err
}

// Check a feature
check, err := c.Feature().CheckWithoutContext(f.ID)
if err != nil {
	return err
}

// Delete a feature
if err := c.Feature().Delete(myFeature.ID); err != nil {
	return err
}
```
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [type ActivationStrategy](#ActivationStrategy)
* [type Client](#Client)
* + [func New(apiURL, clientID, secret string) (*Client, error)](#New)
* + [func (c *Client) Feature() *FeatureClient](#Client.Feature)
+ [func (c *Client) Swagger() *SwaggerClient](#Client.Swagger)
* [type FeatureCheckResponse](#FeatureCheckResponse)
* [type FeatureClient](#FeatureClient)
* + [func (c *FeatureClient) CheckWithContext(id string, context interface{}) (FeatureCheckResponse, error)](#FeatureClient.CheckWithContext)
+ [func (c *FeatureClient) CheckWithoutContext(id string) (FeatureCheckResponse, error)](#FeatureClient.CheckWithoutContext)
+ [func (c *FeatureClient) Create(feat FeatureModel) error](#FeatureClient.Create)
+ [func (c *FeatureClient) Delete(id string) error](#FeatureClient.Delete)
+ [func (c *FeatureClient) Get(id string) (FeatureModel, error)](#FeatureClient.Get)
+ [func (c *FeatureClient) List(page int, pageSize int) (FeaturesResponse, error)](#FeatureClient.List)
+ [func (c *FeatureClient) ListAll() ([]FeatureModel, error)](#FeatureClient.ListAll)
+ [func (c *FeatureClient) Update(feat FeatureModel) error](#FeatureClient.Update)
* [type FeatureModel](#FeatureModel)
* [type FeaturesResponse](#FeaturesResponse)
* [type Metadata](#Metadata)
* [type SwaggerClient](#SwaggerClient)
* + [func (c *SwaggerClient) Get() (string, error)](#SwaggerClient.Get)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [ActivationStrategy](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L29) [¶](#ActivationStrategy)
```
type ActivationStrategy [string](/builtin#string)
```
ActivationStrategy represents the different way to activate a feature
```
const (
NoStrategy [ActivationStrategy](#ActivationStrategy) = "NO_STRATEGY"
ReleaseDate [ActivationStrategy](#ActivationStrategy) = "RELEASE_DATE"
Script [ActivationStrategy](#ActivationStrategy) = "SCRIPT"
GlobalScript [ActivationStrategy](#ActivationStrategy) = "GLOBAL_SCRIPT"
)
```
####
type [Client](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/client.go#L11) [¶](#Client)
```
type Client struct {
HTTPClient *[http](/net/http).[Client](/net/http#Client)
// contains filtered or unexported fields
}
```
Client represents the izanami client
####
func [New](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/client.go#L29) [¶](#New)
```
func New(apiURL, clientID, secret [string](/builtin#string)) (*[Client](#Client), [error](/builtin#error))
```
New creates a new izanami client
####
func (*Client) [Feature](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/client.go#L57) [¶](#Client.Feature)
```
func (c *[Client](#Client)) Feature() *[FeatureClient](#FeatureClient)
```
Feature creates a specific client for feature management
####
func (*Client) [Swagger](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/client.go#L64) [¶](#Client.Swagger)
```
func (c *[Client](#Client)) Swagger() *[SwaggerClient](#SwaggerClient)
```
Swagger creates a specific client for getting swagger.json
####
type [FeatureCheckResponse](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L16) [¶](#FeatureCheckResponse)
```
type FeatureCheckResponse struct {
Active [bool](/builtin#bool) `json:"active"`
}
```
FeatureCheckResponse represents the http response for a feature check
####
type [FeatureClient](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/client.go#L19) [¶](#FeatureClient)
```
type FeatureClient struct {
// contains filtered or unexported fields
}
```
FeatureClient represents a client for feature management
####
func (*FeatureClient) [CheckWithContext](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L126) [¶](#FeatureClient.CheckWithContext)
```
func (c *[FeatureClient](#FeatureClient)) CheckWithContext(id [string](/builtin#string), context interface{}) ([FeatureCheckResponse](#FeatureCheckResponse), [error](/builtin#error))
```
CheckWithContext checks whether a feature is enabled for the given context.
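A brief sketch in the README's in-package style above; the context payload is free-form (`interface{}`), so the map shown here is purely illustrative:
```
// c is a *Client obtained from New(...).
ctx := map[string]interface{}{
	"userId": "user-123",
}
check, err := c.Feature().CheckWithContext("my-feature", ctx)
if err != nil {
	return err
}
if check.Active {
	// the feature is enabled for this context
}
```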
####
func (*FeatureClient) [CheckWithoutContext](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L115) [¶](#FeatureClient.CheckWithoutContext)
```
func (c *[FeatureClient](#FeatureClient)) CheckWithoutContext(id [string](/builtin#string)) ([FeatureCheckResponse](#FeatureCheckResponse), [error](/builtin#error))
```
CheckWithoutContext checks whether a feature is enabled.
####
func (*FeatureClient) [Create](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L79) [¶](#FeatureClient.Create)
```
func (c *[FeatureClient](#FeatureClient)) Create(feat [FeatureModel](#FeatureModel)) [error](/builtin#error)
```
Create a new feature
####
func (*FeatureClient) [Delete](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L110) [¶](#FeatureClient.Delete)
```
func (c *[FeatureClient](#FeatureClient)) Delete(id [string](/builtin#string)) [error](/builtin#error)
```
Delete a feature by its id
####
func (*FeatureClient) [Get](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L88) [¶](#FeatureClient.Get)
```
func (c *[FeatureClient](#FeatureClient)) Get(id [string](/builtin#string)) ([FeatureModel](#FeatureModel), [error](/builtin#error))
```
Get a feature by its id
####
func (*FeatureClient) [List](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L39) [¶](#FeatureClient.List)
```
func (c *[FeatureClient](#FeatureClient)) List(page [int](/builtin#int), pageSize [int](/builtin#int)) ([FeaturesResponse](#FeaturesResponse), [error](/builtin#error))
```
List features on the given page.
####
func (*FeatureClient) [ListAll](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L58) [¶](#FeatureClient.ListAll)
```
func (c *[FeatureClient](#FeatureClient)) ListAll() ([][FeatureModel](#FeatureModel), [error](/builtin#error))
```
ListAll browses all pages and returns all features
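For illustration, a manual paging loop in the spirit of ListAll could be sketched as below (the page size is arbitrary), relying only on List, FeaturesResponse.Results and Metadata.NbPages as documented further down:
```
// c is a *Client obtained from New(...).
var all []FeatureModel
page, pageSize := 1, 50
for {
	resp, err := c.Feature().List(page, pageSize)
	if err != nil {
		return err
	}
	all = append(all, resp.Results...)
	if page >= resp.Metadata.NbPages {
		break
	}
	page++
}
```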
####
func (*FeatureClient) [Update](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L101) [¶](#FeatureClient.Update)
```
func (c *[FeatureClient](#FeatureClient)) Update(feat [FeatureModel](#FeatureModel)) [error](/builtin#error)
```
Update the given feature
####
type [FeatureModel](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L21) [¶](#FeatureModel)
```
type FeatureModel struct {
ID [string](/builtin#string) `json:"id"`
Enabled [bool](/builtin#bool) `json:"enabled"`
Parameters map[[string](/builtin#string)][string](/builtin#string) `json:"parameters"`
Strategy [ActivationStrategy](#ActivationStrategy) `json:"activationStrategy"`
}
```
FeatureModel represents a feature from the izanami point of view
####
type [FeaturesResponse](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/feature.go#L10) [¶](#FeaturesResponse)
```
type FeaturesResponse struct {
Results [][FeatureModel](#FeatureModel) `json:"results"`
Metadata [Metadata](#Metadata) `json:"metadata"`
}
```
FeaturesResponse represents the http response for listAll
####
type [Metadata](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/http.go#L21) [¶](#Metadata)
```
type Metadata struct {
Page [int](/builtin#int) `json:"page"`
PageSize [int](/builtin#int) `json:"pageSize"`
Count [int](/builtin#int) `json:"count"`
NbPages [int](/builtin#int) `json:"nbPages"`
}
```
Metadata represents the metadata part of an http response
####
type [SwaggerClient](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/client.go#L24) [¶](#SwaggerClient)
```
type SwaggerClient struct {
// contains filtered or unexported fields
}
```
SwaggerClient represents a client for swagger endpoints
####
func (*SwaggerClient) [Get](https://github.com/ovh/cds/blob/4133dc6a1e9e/sdk/izanami/swagger.go#L4) [¶](#SwaggerClient.Get)
```
func (c *[SwaggerClient](#SwaggerClient)) Get() ([string](/builtin#string), [error](/builtin#error))
```
Get returns the swagger.json data.
github.com/microcosm-cc/bluemonday | go | Go | README
[¶](#section-readme)
---
### bluemonday [GoDoc](https://godoc.org/github.com/microcosm-cc/bluemonday) [Sourcegraph](https://sourcegraph.com/github.com/microcosm-cc/bluemonday?badge)
bluemonday is an HTML sanitizer implemented in Go. It is fast and highly configurable.
bluemonday takes untrusted user generated content as an input, and will return HTML that has been sanitised against an allowlist of approved HTML elements and attributes so that you can safely include the content in your web page.
If you accept user generated content, and your server uses Go, you **need** bluemonday.
The default policy for user generated content (`bluemonday.UGCPolicy().Sanitize()`) turns this:
```
Hello <STYLE>.XSS{background-image:url("javascript:alert('XSS')");}</STYLE><A CLASS=XSS></A>World
```
Into a harmless:
```
Hello World
```
And it turns this:
```
<a href="javascript:alert('XSS1')" onmouseover="alert('XSS2')">XSS<a>
```
Into this:
```
XSS
```
Whilst still allowing this:
```
<a href="http://www.google.com/">
<img src="https://ssl.gstatic.com/accounts/ui/logo_2x.png"/>
</a>
```
To pass through mostly unaltered (it gained a rel="nofollow" which is a good thing for user generated content):
```
<a href="http://www.google.com/" rel="nofollow">
<img src="https://ssl.gstatic.com/accounts/ui/logo_2x.png"/>
</a>
```
It protects sites from [XSS](http://en.wikipedia.org/wiki/Cross-site_scripting) attacks. There are many [vectors for an XSS attack](https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet) and the best way to mitigate the risk is to sanitize user input against a known safe list of HTML elements and attributes.
You should **always** run bluemonday **after** any other processing.
If you use [blackfriday](https://github.com/russross/blackfriday) or [Pandoc](http://johnmacfarlane.net/pandoc/) then bluemonday should be run after these steps. This ensures that no insecure HTML is introduced later in your process.
bluemonday is heavily inspired by both the [OWASP Java HTML Sanitizer](https://code.google.com/p/owasp-java-html-sanitizer/) and the [HTML Purifier](http://htmlpurifier.org/).
#### Technical Summary
Allowlist based, you need to either build a policy describing the HTML elements and attributes to permit (and the `regexp` patterns of attributes), or use one of the supplied policies representing good defaults.
The policy containing the allowlist is applied using a fast non-validating, forward only, token-based parser implemented in the [Go net/html library](https://godoc.org/golang.org/x/net/html) by the core Go team.
We expect to be supplied with well-formatted HTML (closing elements for every applicable open element, nested correctly) and so we do not focus on repairing badly nested or incomplete HTML. We focus on simply ensuring that whatever elements do exist are described in the policy allowlist and that attributes and links are safe for use on your web page. [GIGO](http://en.wikipedia.org/wiki/Garbage_in,_garbage_out) does apply and if you feed it bad HTML bluemonday is not tasked with figuring out how to make it good again.
##### Supported Go Versions
bluemonday is tested on all versions since Go 1.2 including tip.
We do not support Go 1.0 as we depend on `golang.org/x/net/html` which includes a reference to `io.ErrNoProgress` which did not exist in Go 1.0.
We support Go 1.1 but Travis no longer tests against it.
#### Is it production ready?
*Yes*
We are using bluemonday in production having migrated from the widely used and heavily field tested OWASP Java HTML Sanitizer.
We are passing our extensive test suite (including AntiSamy tests as well as tests for any issues raised). Check for any [unresolved issues](https://github.com/microcosm-cc/bluemonday/issues?page=1&state=open) to see whether anything may be a blocker for you.
We invite pull requests and issues to help us ensure we are offering comprehensive protection against various attacks via user generated content.
#### Usage
Install in your `${GOPATH}` using `go get -u github.com/microcosm-cc/bluemonday`
Then call it:
```
package main
import (
"fmt"
"github.com/microcosm-cc/bluemonday"
)
func main() {
// Do this once for each unique policy, and use the policy for the life of the program
// Policy creation/editing is not safe to use in multiple goroutines
p := bluemonday.UGCPolicy()
// The policy can then be used to sanitize lots of input and it is safe to use the policy in multiple goroutines
html := p.Sanitize(
`<a onblur="alert(secret)" href="http://www.google.com">Google</a>`,
)
// Output:
// <a href="http://www.google.com" rel="nofollow">Google</a>
fmt.Println(html)
}
```
We offer three ways to call Sanitize:
```
p.Sanitize(string) string
p.SanitizeBytes([]byte) []byte
p.SanitizeReader(io.Reader) bytes.Buffer
```
If you are obsessed about performance, `p.SanitizeReader(r).Bytes()` will return a `[]byte` without performing any unnecessary casting of the inputs or outputs. Though the difference is so negligible you should never need to care.
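For example, a minimal sketch of the reader-based variant (imports omitted; the input string is arbitrary and `p` is a previously built policy):
```
r := strings.NewReader(`Hello <STYLE>.XSS{}</STYLE>World`)
buf := p.SanitizeReader(r)
out := buf.Bytes() // []byte result without extra string conversions
```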
You can build your own policies:
```
package main
import (
"fmt"
"github.com/microcosm-cc/bluemonday"
)
func main() {
p := bluemonday.NewPolicy()
// Require URLs to be parseable by net/url.Parse and either:
// mailto: http:// or https://
p.AllowStandardURLs()
// We only allow <p> and <a href="">
p.AllowAttrs("href").OnElements("a")
p.AllowElements("p")
html := p.Sanitize(
`<a onblur="alert(secret)" href="http://www.google.com">Google</a>`,
)
// Output:
// <a href="http://www.google.com">Google</a>
fmt.Println(html)
}
```
We ship two default policies:
1. `bluemonday.StrictPolicy()` which can be thought of as equivalent to stripping all HTML elements and their attributes as it has nothing on its allowlist. An example usage scenario would be blog post titles where HTML tags are not expected at all and if they are then the elements *and* the content of the elements should be stripped. This is a *very* strict policy.
2. `bluemonday.UGCPolicy()` which allows a broad selection of HTML elements and attributes that are safe for user generated content. Note that this policy does *not* allow iframes, object, embed, styles, script, etc. An example usage scenario would be blog post bodies where a variety of formatting is expected along with the potential for TABLEs and IMGs.
#### Policy Building
The essence of building a policy is to determine which HTML elements and attributes are considered safe for your scenario. OWASP provide an [XSS prevention cheat sheet](https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet) to help explain the risks, but essentially:
1. Avoid anything other than the standard HTML elements
2. Avoid `script`, `style`, `iframe`, `object`, `embed`, `base` elements that allow code to be executed by the client or third party content to be included that can execute code
3. Avoid anything other than plain HTML attributes with values matched to a regexp
Basically, you should be able to describe what HTML is fine for your scenario. If you do not have confidence that you can describe your policy please consider using one of the shipped policies such as `bluemonday.UGCPolicy()`.
To create a new policy:
```
p := bluemonday.NewPolicy()
```
To add elements to a policy either add just the elements:
```
p.AllowElements("b", "strong")
```
Or using a regex:
*Note: if an element is added by name as shown above, any matching regex will be ignored*
It is also recommended to ensure multiple patterns don't overlap as order of execution is not guaranteed and can result in some rules being missed.
```
p.AllowElementsMatching(regexp.MustCompile(`^my-element-`))
```
Or add elements as a virtue of adding an attribute:
```
// Note the recommended pattern, see the recommendation on using .Matching() below
p.AllowAttrs("nowrap").OnElements("td", "th")
```
Again, this also supports a regex pattern match alternative:
```
p.AllowAttrs("nowrap").OnElementsMatching(regex.MustCompile(`^my-element-`))
```
Attributes can either be added to all elements:
```
p.AllowAttrs("dir").Matching(regexp.MustCompile("(?i)rtl|ltr")).Globally()
```
Or attributes can be added to specific elements:
```
// Not the recommended pattern, see the recommendation on using .Matching() below
p.AllowAttrs("value").OnElements("li")
```
It is **always** recommended that an attribute be made to match a pattern. XSS in HTML attributes is very easy otherwise:
```
// \p{L} matches unicode letters, \p{N} matches unicode numbers
p.AllowAttrs("title").Matching(regexp.MustCompile(`[\p{L}\p{N}\s\-_',:\[\]!\./\\\(\)&]*`)).Globally()
```
You can stop at any time and call .Sanitize():
```
// string htmlIn passed in from a HTTP POST
htmlOut := p.Sanitize(htmlIn)
```
And you can take any existing policy and extend it:
```
p := bluemonday.UGCPolicy()
p.AllowElements("fieldset", "select", "option")
```
##### Inline CSS
Although it's possible to handle inline CSS using `AllowAttrs` with a `Matching` rule, writing a single monolithic regular expression to safely process all inline CSS which you wish to allow is not a trivial task. Instead of attempting to do so, you can allow the `style` attribute on whichever element(s) you desire and use style policies to control and sanitize inline styles.
It is strongly recommended that you use `Matching` (with a suitable regular expression), `MatchingEnum`, or `MatchingHandler` to ensure each style matches your needs, but default handlers are supplied for the most widely used styles.
Similar to attributes, you can allow specific CSS properties to be set inline:
```
p.AllowAttrs("style").OnElements("span", "p")
// Allow the 'color' property with valid RGB(A) hex values only (on any element allowed a 'style' attribute)
p.AllowStyles("color").Matching(regexp.MustCompile("(?i)^#([0-9a-f]{3,4}|[0-9a-f]{6}|[0-9a-f]{8})$")).Globally()
```
Additionally, you can allow a CSS property to be set only to an allowed value:
```
p.AllowAttrs("style").OnElements("span", "p")
// Allow the 'text-decoration' property to be set to 'underline', 'line-through' or 'none'
// on 'span' elements only
p.AllowStyles("text-decoration").MatchingEnum("underline", "line-through", "none").OnElements("span")
```
Or you can specify elements based on a regex pattern match:
```
p.AllowAttrs("style").OnElementsMatching(regex.MustCompile(`^my-element-`))
// Allow the 'text-decoration' property to be set to 'underline', 'line-through' or 'none'
// on 'span' elements only
p.AllowStyles("text-decoration").MatchingEnum("underline", "line-through", "none").OnElementsMatching(regexp.MustCompile(`^my-element-`))
```
If you need more specific checking, you can create a handler that takes in a string and returns a bool to validate the values for a given property. The string parameter has been converted to lowercase and unicode code points have been converted.
```
myHandler := func(value string) bool{
// Validate your input here
return true
}
p.AllowAttrs("style").OnElements("span", "p")
// Allow the 'color' property with values validated by the handler (on any element allowed a 'style' attribute)
p.AllowStyles("color").MatchingHandler(myHandler).Globally()
```
##### Links
Links are difficult beasts to sanitise safely and also one of the biggest attack vectors for malicious content.
It is possible to do this:
```
p.AllowAttrs("href").Matching(regexp.MustCompile(`(?i)mailto|https?`)).OnElements("a")
```
But that will not protect you as the regular expression is insufficient in this case to have prevented a malformed value doing something unexpected.
We provide some additional global options for safely working with links.
`RequireParseableURLs` will ensure that URLs are parseable by Go's `net/url` package:
```
p.RequireParseableURLs(true)
```
If you have enabled parseable URLs then the following option will `AllowRelativeURLs`. By default this is disabled (bluemonday is an allowlist tool... you need to explicitly tell us to permit things) and when disabled it will prevent all local and scheme relative URLs (i.e. `href="localpage.html"`, `href="../home.html"` and even `href="//www.google.com"` are relative):
```
p.AllowRelativeURLs(true)
```
If you have enabled parseable URLs then you can allow the schemes (commonly called protocol when thinking of `http` and `https`) that are permitted. Bear in mind that allowing relative URLs in the above option will allow for a blank scheme:
```
p.AllowURLSchemes("mailto", "http", "https")
```
Regardless of whether you have enabled parseable URLs, you can force all URLs to have a rel="nofollow" attribute. This will be added if it does not exist, but only when the `href` is valid:
```
// This applies to "a" "area" "link" elements that have a "href" attribute
p.RequireNoFollowOnLinks(true)
```
Similarly, you can force all URLs to have "noreferrer" in their rel attribute.
```
// This applies to "a" "area" "link" elements that have a "href" attribute
p.RequireNoReferrerOnLinks(true)
```
We provide a convenience method that applies all of the above, but you will still need to allow the linkable elements for the URL rules to be applied to:
```
p.AllowStandardURLs()
p.AllowAttrs("cite").OnElements("blockquote", "q")
p.AllowAttrs("href").OnElements("a", "area")
p.AllowAttrs("src").OnElements("img")
```
An additional complexity regarding links is the data URI as defined in [RFC2397](http://tools.ietf.org/html/rfc2397). The data URI allows for images to be served inline using this format:
```
<img src="data:image/webp;base64,UklGRh4AAABXRUJQVlA4TBEAAAAvAAAAAAfQ//73v/+BiOh/AAA=">
```
We have provided a helper to verify the mimetype followed by base64 content of data URIs links:
```
p.AllowDataURIImages()
```
That helper will enable GIF, JPEG, PNG and WEBP images.
It should be noted that there is a potential [security](http://palizine.plynt.com/issues/2010Oct/bypass-xss-filters/) [risk](https://capec.mitre.org/data/definitions/244.html) with the use of data URI links. You should only enable data URI links if you already trust the content.
We also have some features to help deal with user generated content:
```
p.AddTargetBlankToFullyQualifiedLinks(true)
```
This will ensure that anchor `<a href="" />` links that are fully qualified (the href destination includes a host name) will get `target="_blank"` added to them.
Additionally any link that has `target="_blank"` after the policy has been applied will also have the `rel` attribute adjusted to add `noopener`. This means a link may start like `<a href="//host/path"/>` and will end up as `<a href="//host/path" rel="noopener" target="_blank">`. It is important to note that the addition of `noopener` is a security feature and not an issue. There is an unfortunate feature to browsers that a browser window opened as a result of `target="_blank"` can still control the opener (your web page) and this protects against that. The background to this can be found here: <https://dev.to/ben/the-targetblank-vulnerability-by-example>
##### Policy Building Helpers
We also bundle some helpers to simplify policy building:
```
// Permits the "dir", "id", "lang", "title" attributes globally p.AllowStandardAttributes()
// Permits the "img" element and its standard attributes p.AllowImages()
// Permits ordered and unordered lists, and also definition lists p.AllowLists()
// Permits HTML tables and all applicable elements and non-styling attributes p.AllowTables()
```
##### Invalid Instructions
The following are invalid:
```
// This does not say where the attributes are allowed, you need to add
// .Globally() or .OnElements(...)
// This will be ignored without error.
p.AllowAttrs("value")
// This does not say where the attributes are allowed, you need to add
// .Globally() or .OnElements(...)
// This will be ignored without error.
p.AllowAttrs(
"type",
).Matching(
regexp.MustCompile("(?i)^(circle|disc|square|a|A|i|I|1)$"),
)
```
Both examples exhibit the same issue, they declare attributes but do not then specify whether they are allowed globally or only on specific elements (and which elements). Attributes belong to one or more elements, and the policy needs to declare this.
#### Limitations
We are not yet including any tools to help allow and sanitize CSS. Which means that unless you wish to do the heavy lifting in a single regular expression (inadvisable), **you should not allow the "style" attribute anywhere**.
In the same theme, both `<script>` and `<style>` are considered harmful. These elements (and their content) will not be rendered by default, and require you to explicitly set `p.AllowUnsafe(true)`. You should be aware that allowing these elements defeats the purpose of using an HTML sanitizer, as you would be explicitly allowing either JavaScript (and any plainly written XSS) or CSS (which can modify a DOM to insert JS); additionally, limitations in this library mean it is not aware of whether the HTML is validly structured, which can allow these elements to bypass some of the safety mechanisms built into the [WhatWG HTML parser standard](https://html.spec.whatwg.org/multipage/parsing.html#parsing-main-inselect).
It is not the job of bluemonday to fix your bad HTML, it is merely the job of bluemonday to prevent malicious HTML getting through. If you have mismatched HTML elements, or non-conforming nesting of elements, those will remain. But if you have well-structured HTML bluemonday will not break it.
#### TODO
* Investigate whether devs want to blacklist elements and attributes. This would allow devs to take an existing policy (such as the `bluemonday.UGCPolicy()` ) that encapsulates 90% of what they're looking for but does more than they need, and to remove the extra things they do not want to make it 100% what they want
* Investigate whether devs want a validating HTML mode, in which the HTML elements are not just transformed into a balanced tree (every start tag has a closing tag at the correct depth) but also that elements and character data appear only in their allowed context (i.e. that a `table` element isn't a descendent of a `caption`, that `colgroup`, `thead`, `tbody`, `tfoot` and `tr` are permitted, and that character data is not permitted)
#### Development
If you have cloned this repo you will probably need the dependency:
`go get golang.org/x/net/html`
Gophers can use their familiar tools:
`go build`
`go test`
I personally use a Makefile as it spares typing the same args over and over whilst providing consistency for those of us who jump from language to language and enjoy just typing `make` in a project directory and watch magic happen.
`make` will build, vet, test and install the library.
`make clean` will remove the library from a *single* `${GOPATH}/pkg` directory tree
`make test` will run the tests
`make cover` will run the tests and *open a browser window* with the coverage report
`make lint` will run golint (install via `go get github.com/golang/lint/golint`)
#### Long term goals
1. Open the code to adversarial peer review similar to the [Attack Review Ground Rules](https://code.google.com/p/owasp-java-html-sanitizer/wiki/AttackReviewGroundRules)
2. Raise funds and pay for an external security review
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package bluemonday provides a way of describing an allowlist of HTML elements and attributes as a policy, and for that policy to be applied to untrusted strings from users that may contain markup. All elements and attributes not on the allowlist will be stripped.
The default bluemonday.UGCPolicy().Sanitize() turns this:
```
Hello <STYLE>.XSS{background-image:url("javascript:alert('XSS')");}</STYLE><A CLASS=XSS></A>World
```
Into the more harmless:
```
Hello World
```
And it turns this:
```
<a href="javascript:alert('XSS1')" onmouseover="alert('XSS2')">XSS<a>
```
Into this:
```
XSS
```
Whilst still allowing this:
```
<a href="http://www.google.com/">
<img src="https://ssl.gstatic.com/accounts/ui/logo_2x.png"/>
</a>
```
To pass through mostly unaltered (it gained a rel="nofollow"):
```
<a href="http://www.google.com/" rel="nofollow">
<img src="https://ssl.gstatic.com/accounts/ui/logo_2x.png"/>
</a>
```
The primary purpose of bluemonday is to take potentially unsafe user generated content (from things like Markdown, HTML WYSIWYG tools, etc) and make it safe for you to put on your website.
It protects sites against XSS (<http://en.wikipedia.org/wiki/Cross-site_scripting>)
and other malicious content that a user interface may deliver. There are many vectors for an XSS attack (<https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet>)
and the safest thing to do is to sanitize user input against a known safe list of HTML elements and attributes.
Note: You should always run bluemonday after any other processing.
If you use blackfriday (<https://github.com/russross/blackfriday>) or Pandoc (<http://johnmacfarlane.net/pandoc/>) then bluemonday should be run after these steps. This ensures that no insecure HTML is introduced later in your process.
bluemonday is heavily inspired by both the OWASP Java HTML Sanitizer
(<https://code.google.com/p/owasp-java-html-sanitizer/>) and the HTML Purifier
(<http://htmlpurifier.org/>).
We ship two default policies, one is bluemonday.StrictPolicy() and can be thought of as equivalent to stripping all HTML elements and their attributes as it has nothing on its allowlist.
The other is bluemonday.UGCPolicy() and allows a broad selection of HTML elements and attributes that are safe for user generated content. Note that this policy does not allow iframes, object, embed, styles, script, etc.
The essence of building a policy is to determine which HTML elements and attributes are considered safe for your scenario. OWASP provide an XSS prevention cheat sheet ( <https://www.google.com/search?q=xss+prevention+cheat+sheet> )
to help explain the risks, but essentially:
1. Avoid allowing anything other than plain HTML elements
2. Avoid allowing `script`, `style`, `iframe`, `object`, `embed`, `base` elements
3. Avoid allowing anything other than plain HTML elements with simple values that you can match to a regexp
Example [¶](#example-package)
```
package main
import (
"fmt"
"regexp"
"github.com/microcosm-cc/bluemonday"
)
func main() {
// Create a new policy
p := bluemonday.NewPolicy()
// Add elements to a policy without attributes
p.AllowElements("b", "strong")
// Add elements as a virtue of adding an attribute
p.AllowAttrs("nowrap").OnElements("td", "th")
// Attributes can either be added to all elements
p.AllowAttrs("dir").Globally()
// Or attributes can be added to specific elements
p.AllowAttrs("value").OnElements("li")
// It is ALWAYS recommended that an attribute be made to match a pattern
// XSS in HTML attributes is a very easy attack vector
// \p{L} matches unicode letters, \p{N} matches unicode numbers
p.AllowAttrs("title").Matching(regexp.MustCompile(`[\p{L}\p{N}\s\-_',:\[\]!\./\\\(\)&]*`)).Globally()
// You can stop at any time and call .Sanitize()
// Assumes that string htmlIn was passed in from a HTTP POST and contains
// untrusted user generated content
htmlIn := `untrusted user generated content <body onload="alert('XSS')">`
fmt.Println(p.Sanitize(htmlIn))
// And you can take any existing policy and extend it
p = bluemonday.UGCPolicy()
p.AllowElements("fieldset", "select", "option")
// Links are complex beasts and one of the biggest attack vectors for
// malicious content so we have included features specifically to help here.
// This is not recommended:
p = bluemonday.NewPolicy()
p.AllowAttrs("href").Matching(regexp.MustCompile(`(?i)mailto|https?`)).OnElements("a")
// The regexp is insufficient in this case to have prevented a malformed
// value doing something unexpected.
// This will ensure that URLs are not considered invalid by Go's net/url
// package.
p.RequireParseableURLs(true)
// If you have enabled parseable URLs then the following option will allow
// relative URLs. By default this is disabled and will prevent all local and
// schema relative URLs (i.e. `href="//www.google.com"` is schema relative).
p.AllowRelativeURLs(true)
// If you have enabled parseable URLs then you can allow the schemas
// that are permitted. Bear in mind that allowing relative URLs in the above
// option allows for blank schemas.
p.AllowURLSchemes("mailto", "http", "https")
// Regardless of whether you have enabled parseable URLs, you can force all
// URLs to have a rel="nofollow" attribute. This will be added if it does
// not exist.
// This applies to "a" "area" "link" elements that have a "href" attribute
p.RequireNoFollowOnLinks(true)
// We provide a convenience function that applies all of the above, but you
// will still need to allow the linkable elements:
p = bluemonday.NewPolicy()
p.AllowStandardURLs()
p.AllowAttrs("cite").OnElements("blockquote")
p.AllowAttrs("href").OnElements("a", "area")
p.AllowAttrs("src").OnElements("img")
// Policy Building Helpers
// If you've got this far and you're bored already, we also bundle some
// other convenience functions
p = bluemonday.NewPolicy()
p.AllowStandardAttributes()
p.AllowImages()
p.AllowLists()
p.AllowTables()
}
```
```
Output:
```
### Index [¶](#pkg-index)
* [Variables](#pkg-variables)
* [type Policy](#Policy)
* + [func NewPolicy() *Policy](#NewPolicy)
+ [func StrictPolicy() *Policy](#StrictPolicy)
+ [func StripTagsPolicy() *Policy](#StripTagsPolicy)
+ [func UGCPolicy() *Policy](#UGCPolicy)
* + [func (p *Policy) AddSpaceWhenStrippingTag(allow bool) *Policy](#Policy.AddSpaceWhenStrippingTag)
+ [func (p *Policy) AddTargetBlankToFullyQualifiedLinks(require bool) *Policy](#Policy.AddTargetBlankToFullyQualifiedLinks)
+ [func (p *Policy) AllowAttrs(attrNames ...string) *attrPolicyBuilder](#Policy.AllowAttrs)
+ [func (p *Policy) AllowComments()](#Policy.AllowComments)
+ [func (p *Policy) AllowDataAttributes()](#Policy.AllowDataAttributes)
+ [func (p *Policy) AllowDataURIImages()](#Policy.AllowDataURIImages)
+ [func (p *Policy) AllowElements(names ...string) *Policy](#Policy.AllowElements)
+ [func (p *Policy) AllowElementsContent(names ...string) *Policy](#Policy.AllowElementsContent)
+ [func (p *Policy) AllowElementsMatching(regex *regexp.Regexp) *Policy](#Policy.AllowElementsMatching)
+ [func (p *Policy) AllowIFrames(vals ...SandboxValue)](#Policy.AllowIFrames)
+ [func (p *Policy) AllowImages()](#Policy.AllowImages)
+ [func (p *Policy) AllowLists()](#Policy.AllowLists)
+ [func (p *Policy) AllowNoAttrs() *attrPolicyBuilder](#Policy.AllowNoAttrs)
+ [func (p *Policy) AllowRelativeURLs(require bool) *Policy](#Policy.AllowRelativeURLs)
+ [func (p *Policy) AllowStandardAttributes()](#Policy.AllowStandardAttributes)
+ [func (p *Policy) AllowStandardURLs()](#Policy.AllowStandardURLs)
+ [func (p *Policy) AllowStyles(propertyNames ...string) *stylePolicyBuilder](#Policy.AllowStyles)
+ [func (p *Policy) AllowStyling()](#Policy.AllowStyling)
+ [func (p *Policy) AllowTables()](#Policy.AllowTables)
+ [func (p *Policy) AllowURLSchemeWithCustomPolicy(scheme string, urlPolicy func(url *url.URL) (allowUrl bool)) *Policy](#Policy.AllowURLSchemeWithCustomPolicy)
+ [func (p *Policy) AllowURLSchemes(schemes ...string) *Policy](#Policy.AllowURLSchemes)
+ [func (p *Policy) AllowURLSchemesMatching(r *regexp.Regexp) *Policy](#Policy.AllowURLSchemesMatching)
+ [func (p *Policy) AllowUnsafe(allowUnsafe bool) *Policy](#Policy.AllowUnsafe)
+ [func (p *Policy) RequireCrossOriginAnonymous(require bool) *Policy](#Policy.RequireCrossOriginAnonymous)
+ [func (p *Policy) RequireNoFollowOnFullyQualifiedLinks(require bool) *Policy](#Policy.RequireNoFollowOnFullyQualifiedLinks)
+ [func (p *Policy) RequireNoFollowOnLinks(require bool) *Policy](#Policy.RequireNoFollowOnLinks)
+ [func (p *Policy) RequireNoReferrerOnFullyQualifiedLinks(require bool) *Policy](#Policy.RequireNoReferrerOnFullyQualifiedLinks)
+ [func (p *Policy) RequireNoReferrerOnLinks(require bool) *Policy](#Policy.RequireNoReferrerOnLinks)
+ [func (p *Policy) RequireParseableURLs(require bool) *Policy](#Policy.RequireParseableURLs)
+ [func (p *Policy) RequireSandboxOnIFrame(vals ...SandboxValue)](#Policy.RequireSandboxOnIFrame)
+ [func (p *Policy) RewriteSrc(fn urlRewriter) *Policy](#Policy.RewriteSrc)
+ [func (p *Policy) Sanitize(s string) string](#Policy.Sanitize)
+ [func (p *Policy) SanitizeBytes(b []byte) []byte](#Policy.SanitizeBytes)
+ [func (p *Policy) SanitizeReader(r io.Reader) *bytes.Buffer](#Policy.SanitizeReader)
+ [func (p *Policy) SanitizeReaderToWriter(r io.Reader, w io.Writer) error](#Policy.SanitizeReaderToWriter)
+ [func (p *Policy) SkipElementsContent(names ...string) *Policy](#Policy.SkipElementsContent)
* [type Query](#Query)
* [type SandboxValue](#SandboxValue)
#### Examples [¶](#pkg-examples)
* [Package](#example-package)
* [NewPolicy](#example-NewPolicy)
* [Policy.AllowAttrs](#example-Policy.AllowAttrs)
* [Policy.AllowElements](#example-Policy.AllowElements)
* [Policy.AllowStyles](#example-Policy.AllowStyles)
* [Policy.Sanitize](#example-Policy.Sanitize)
* [Policy.SanitizeBytes](#example-Policy.SanitizeBytes)
* [Policy.SanitizeReader](#example-Policy.SanitizeReader)
* [StrictPolicy](#example-StrictPolicy)
* [UGCPolicy](#example-UGCPolicy)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
```
var (
// CellAlign handles the `align` attribute
// <https://developer.mozilla.org/en-US/docs/Web/HTML/Element/td#attr-align>
CellAlign = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(`(?i)^(center|justify|left|right|char)$`)
// CellVerticalAlign handles the `valign` attribute
// <https://developer.mozilla.org/en-US/docs/Web/HTML/Element/td#attr-valign>
CellVerticalAlign = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(`(?i)^(baseline|bottom|middle|top)$`)
// Direction handles the `dir` attribute
// <https://developer.mozilla.org/en-US/docs/Web/HTML/Element/bdo#attr-dir>
Direction = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(`(?i)^(rtl|ltr)$`)
// ImageAlign handles the `align` attribute on the `image` tag
// <http://www.w3.org/MarkUp/Test/Img/imgtest.html>
ImageAlign = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(
`(?i)^(left|right|top|texttop|middle|absmiddle|baseline|bottom|absbottom)$`,
)
// Integer describes whole positive integers (including 0) used in places
// like td.colspan
// <https://developer.mozilla.org/en-US/docs/Web/HTML/Element/td#attr-colspan>
Integer = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(`^[0-9]+$`)
// ISO8601 according to the W3 group is only a subset of the ISO8601
// standard: <http://www.w3.org/TR/NOTE-datetime>
//
// Used in places like time.datetime
// <https://developer.mozilla.org/en-US/docs/Web/HTML/Element/time#attr-datetime>
//
// Matches patterns:
// Year:
// YYYY (eg 1997)
// Year and month:
// YYYY-MM (eg 1997-07)
// Complete date:
// YYYY-MM-DD (eg 1997-07-16)
// Complete date plus hours and minutes:
// YYYY-MM-DDThh:mmTZD (eg 1997-07-16T19:20+01:00)
// Complete date plus hours, minutes and seconds:
// YYYY-MM-DDThh:mm:ssTZD (eg 1997-07-16T19:20:30+01:00)
// Complete date plus hours, minutes, seconds and a decimal fraction of a
// second
// YYYY-MM-DDThh:mm:ss.sTZD (eg 1997-07-16T19:20:30.45+01:00)
ISO8601 = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(
`^[0-9]{4}(-[0-9]{2}(-[0-9]{2}([ T][0-9]{2}(:[0-9]{2}){1,2}(.[0-9]{1,6})` +
`?Z?([\+-][0-9]{2}:[0-9]{2})?)?)?)?$`,
)
// ListType encapsulates the common value as well as the latest spec
// values for lists
// <https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ol#attr-type>
ListType = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(`(?i)^(circle|disc|square|a|A|i|I|1)$`)
// SpaceSeparatedTokens is used in places like `a.rel` and the common attribute
// `class` which both contain space delimited lists of data tokens
// <http://www.w3.org/TR/html-markup/datatypes.html#common.data.tokens-def>
// Regexp: \p{L} matches unicode letters, \p{N} matches unicode numbers
SpaceSeparatedTokens = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(`^([\s\p{L}\p{N}_-]+)$`)
// Number is a double value used on HTML5 meter and progress elements
// <http://www.whatwg.org/specs/web-apps/current-work/multipage/the-button-element.html#the-meter-element>
Number = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(`^[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?$`)
// NumberOrPercent is used predominantly as units of measurement in width
// and height attributes
// <https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img#attr-height>
NumberOrPercent = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(`^[0-9]+[%]?$`)
// Paragraph of text in an attribute such as *.'title', img.alt, etc
// <https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes#attr-title>
// Note that we are not allowing chars that could close tags like '>'
Paragraph = [regexp](/regexp).[MustCompile](/regexp#MustCompile)(`^[\p{L}\p{N}\s\-_',\[\]!\./\\\(\)]*$`)
)
```
A selection of regular expressions that can be used as .Matching() rules on HTML attributes.
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [Policy](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L47) [¶](#Policy)
```
type Policy struct {
// contains filtered or unexported fields
}
```
Policy encapsulates the allowlist of HTML elements and attributes that will be applied to the sanitised HTML.
You should use bluemonday.NewPolicy() to create a blank policy as the unexported fields contain maps that need to be initialized.
####
func [NewPolicy](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L250) [¶](#NewPolicy)
```
func NewPolicy() *[Policy](#Policy)
```
NewPolicy returns a blank policy with nothing allowed or permitted. This is the recommended way to start building a policy and you should now use AllowAttrs() and/or AllowElements() to construct the allowlist of HTML elements and attributes.
Example [¶](#example-NewPolicy)
```
package main
import (
"fmt"
"github.com/microcosm-cc/bluemonday"
)
func main() {
// NewPolicy is a blank policy and we need to explicitly allow anything
// that we wish to allow through
p := bluemonday.NewPolicy()
// We ensure any URLs are parseable and have rel="nofollow" where applicable
p.AllowStandardURLs()
// AllowStandardURLs already ensures that the href will be valid, and so we
// can skip the .Matching()
p.AllowAttrs("href").OnElements("a")
// We allow paragraphs too
p.AllowElements("p")
html := p.Sanitize(
`<p><a onblur="alert(secret)" href="http://www.google.com">Google</a></p>`,
)
fmt.Println(html)
}
```
```
Output:
<p><a href="http://www.google.com" rel="nofollow">Google</a></p>
```
####
func [StrictPolicy](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policies.go#L38) [¶](#StrictPolicy)
```
func StrictPolicy() *[Policy](#Policy)
```
StrictPolicy returns an empty policy, which will effectively strip all HTML elements and their attributes from a document.
Example [¶](#example-StrictPolicy)
```
package main
import (
"fmt"
"github.com/microcosm-cc/bluemonday"
)
func main() {
// StrictPolicy is equivalent to NewPolicy and as nothing else is declared
// we are stripping all elements (and their attributes)
p := bluemonday.StrictPolicy()
html := p.Sanitize(
`Goodbye <a onblur="alert(secret)" href="http://en.wikipedia.org/wiki/Goodbye_Cruel_World_(Pink_Floyd_song)">Cruel</a> World`,
)
fmt.Println(html)
}
```
```
Output:
Goodbye Cruel World
```
####
func [StripTagsPolicy](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policies.go#L43) [¶](#StripTagsPolicy)
```
func StripTagsPolicy() *[Policy](#Policy)
```
StripTagsPolicy is DEPRECATED. Use StrictPolicy instead.
####
func [UGCPolicy](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policies.go#L54) [¶](#UGCPolicy)
```
func UGCPolicy() *[Policy](#Policy)
```
UGCPolicy returns a policy aimed at user generated content that is a result of HTML WYSIWYG tools and Markdown conversions.
This is expected to be a fairly rich document where as much markup as possible should be retained. Markdown permits raw HTML so we are basically providing a policy to sanitise HTML5 documents safely but with the least intrusion on the formatting expectations of the user.
Example [¶](#example-UGCPolicy)
```
package main
import (
"fmt"
"github.com/microcosm-cc/bluemonday"
)
func main() {
// UGCPolicy is a convenience policy for user generated content.
p := bluemonday.UGCPolicy()
html := p.Sanitize(
`<a onblur="alert(secret)" href="http://www.google.com">Google</a>`,
)
fmt.Println(html)
}
```
```
Output:
<a href="http://www.google.com" rel="nofollow">Google</a>
```
####
func (*Policy) [AddSpaceWhenStrippingTag](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L817) [¶](#Policy.AddSpaceWhenStrippingTag)
```
func (p *[Policy](#Policy)) AddSpaceWhenStrippingTag(allow [bool](/builtin#bool)) *[Policy](#Policy)
```
AddSpaceWhenStrippingTag states whether to add a single space " " when removing tags that are not allowed by the policy.
This is useful if you expect to strip tags in dense markup and may lose the value of whitespace.
For example: "<p>Hello</p><p>World</p>" would be sanitized to "HelloWorld" with the default value of false, but you may wish to sanitize this to " Hello World " by setting AddSpaceWhenStrippingTag to true as this would retain the intent of the text.
####
func (*Policy) [AddTargetBlankToFullyQualifiedLinks](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L683) [¶](#Policy.AddTargetBlankToFullyQualifiedLinks)
```
func (p *[Policy](#Policy)) AddTargetBlankToFullyQualifiedLinks(require [bool](/builtin#bool)) *[Policy](#Policy)
```
AddTargetBlankToFullyQualifiedLinks will result in all a, area and link tags that point to a non-local destination (i.e. starts with a protocol and has a host) having a target="_blank" added to them if one does not already exist
Note: This requires p.RequireParseableURLs(true) and will enable it.
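A hedged sketch of combining this with the UGC policy; the link URL is illustrative, and the exact rel values added alongside target="_blank" depend on the rest of the policy:
```
package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	// UGCPolicy already requires parseable URLs, which this option needs.
	p := bluemonday.UGCPolicy()
	p.AddTargetBlankToFullyQualifiedLinks(true)

	// The fully qualified link should gain target="_blank" (plus whatever rel
	// values the policy enforces); a relative link would be left alone.
	fmt.Println(p.Sanitize(`<a href="https://example.com/">external</a>`))
}
```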
####
func (*Policy) [AllowAttrs](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L266) [¶](#Policy.AllowAttrs)
```
func (p *[Policy](#Policy)) AllowAttrs(attrNames ...[string](/builtin#string)) *attrPolicyBuilder
```
AllowAttrs takes a range of HTML attribute names and returns an attribute policy builder that allows you to specify the pattern and scope of the allowed attribute.
The attribute policy is only added to the core policy when either Globally()
or OnElements(...) are called.
Example [¶](#example-Policy.AllowAttrs)
```
package main
import (
"github.com/microcosm-cc/bluemonday"
)
func main() {
p := bluemonday.NewPolicy()
// Allow the 'title' attribute on every HTML element that has been
// allowed
p.AllowAttrs("title").Matching(bluemonday.Paragraph).Globally()
// Allow the 'abbr' attribute on only the 'td' and 'th' elements.
p.AllowAttrs("abbr").Matching(bluemonday.Paragraph).OnElements("td", "th")
// Allow the 'colspan' and 'rowspan' attributes, matching a positive integer
// pattern, on only the 'td' and 'th' elements.
p.AllowAttrs("colspan", "rowspan").Matching(
bluemonday.Integer,
).OnElements("td", "th")
}
```
```
Output:
```
####
func (*Policy) [AllowComments](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L309) [¶](#Policy.AllowComments)
added in v1.0.10
```
func (p *[Policy](#Policy)) AllowComments()
```
AllowComments allows comments.
Please note that only one type of comment will be allowed by this: the standard HTML comment <!-- -->, which includes the use of that to permit conditionals as per [https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/compatibility/ms537512(v=vs.85)?redirectedfrom=MSDN](https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/compatibility/ms537512%28v=vs.85%29?redirectedfrom=MSDN)
What is not permitted are CDATA XML comments, as the x/net/html package we depend on does not handle this fully and we are not choosing to take on that work:
<https://pkg.go.dev/golang.org/x/net/html#Tokenizer.AllowCDATA> . If the x/net/html package changes this then these will be considered, otherwise if you AllowComments but provide a CDATA comment, then as per the documentation in x/net/html this will be treated as a plain HTML comment.
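A small illustrative sketch (the input is made up) showing comments being permitted on top of the UGC policy:
```
package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.UGCPolicy()
	// Additionally keep standard <!-- --> comments; CDATA-style comments are
	// still treated as plain HTML comments (see the note above).
	p.AllowComments()

	fmt.Println(p.Sanitize(`<p>Hello <!-- an ordinary comment --> World</p>`))
}
```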
####
func (*Policy) [AllowDataAttributes](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L293) [¶](#Policy.AllowDataAttributes)
```
func (p *[Policy](#Policy)) AllowDataAttributes()
```
AllowDataAttributes permits all data attributes. We can't specify the name of each attribute exactly as they are customized.
NOTE: These values are not sanitized and applications that evaluate or process them without checking and verification of the input may be at risk if this option is enabled. This is a 'caveat emptor' option and the person enabling this option needs to fully understand the potential impact with regards to whatever application will be consuming the sanitized HTML afterwards, i.e. if you know you put a link in a data attribute and use that to automatically load some new window then you're giving the author of a HTML fragment the means to open a malicious destination automatically.
Use with care!
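An illustrative sketch only; the data attribute name is made up and, as warned above, its value is passed through unsanitized:
```
package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.UGCPolicy()
	// Permit every data-* attribute on the elements the policy already allows.
	p.AllowDataAttributes()

	fmt.Println(p.Sanitize(`<p data-qa="greeting">Hello</p>`))
}
```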
####
func (*Policy) [AllowDataURIImages](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/helpers.go#L206) [¶](#Policy.AllowDataURIImages)
```
func (p *[Policy](#Policy)) AllowDataURIImages()
```
AllowDataURIImages permits the use of inline images defined in RFC2397
<http://tools.ietf.org/html/rfc2397>
<http://en.wikipedia.org/wiki/Data_URI_scheme>
Images must have a mimetype matching:
```
image/gif image/jpeg image/png image/webp
```
NOTE: There is a potential security risk to allowing data URIs and you should only permit them on content you already trust.
<http://palizine.plynt.com/issues/2010Oct/bypass-xss-filters/>
<https://capec.mitre.org/data/definitions/244.html>
####
func (*Policy) [AllowElements](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L558) [¶](#Policy.AllowElements)
```
func (p *[Policy](#Policy)) AllowElements(names ...[string](/builtin#string)) *[Policy](#Policy)
```
AllowElements will append HTML elements to the allowlist without applying an attribute policy to those elements (the elements are permitted sans-attributes)
Example [¶](#example-Policy.AllowElements)
```
package main
import (
"github.com/microcosm-cc/bluemonday"
)
func main() {
p := bluemonday.NewPolicy()
// Allow styling elements without attributes
p.AllowElements("br", "div", "hr", "p", "span")
}
```
```
Output:
```
####
func (*Policy) [AllowElementsContent](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L843) [¶](#Policy.AllowElementsContent)
```
func (p *[Policy](#Policy)) AllowElementsContent(names ...[string](/builtin#string)) *[Policy](#Policy)
```
AllowElementsContent marks the HTML elements whose content should be retained after removing the tag.
####
func (*Policy) [AllowElementsMatching](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L574) [¶](#Policy.AllowElementsMatching)
added in v1.0.3
```
func (p *[Policy](#Policy)) AllowElementsMatching(regex *[regexp](/regexp).[Regexp](/regexp#Regexp)) *[Policy](#Policy)
```
AllowElementsMatching will append HTML elements to the allowlist if they match a regexp.
####
func (*Policy) [AllowIFrames](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/helpers.go#L296) [¶](#Policy.AllowIFrames)
added in v1.0.17
```
func (p *[Policy](#Policy)) AllowIFrames(vals ...[SandboxValue](#SandboxValue))
```
####
func (*Policy) [AllowImages](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/helpers.go#L179) [¶](#Policy.AllowImages)
```
func (p *[Policy](#Policy)) AllowImages()
```
AllowImages enables the img element and some popular attributes. It will also ensure that URL values are parseable. This helper does not enable data URI images, for that you should also use the AllowDataURIImages() helper.
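A brief sketch of combining the two helpers mentioned above; policy construction only, with no claims about specific output:
```
package main

import (
	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.NewPolicy()
	// <img> with its common attributes and parseable URL values...
	p.AllowImages()
	// ...plus RFC 2397 data URIs for the gif/jpeg/png/webp mimetypes.
	p.AllowDataURIImages()
}
```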
####
func (*Policy) [AllowLists](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/helpers.go#L232) [¶](#Policy.AllowLists)
```
func (p *[Policy](#Policy)) AllowLists()
```
AllowLists will enable ordered and unordered lists, as well as definition lists.
####
func (*Policy) [AllowNoAttrs](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L317) [¶](#Policy.AllowNoAttrs)
```
func (p *[Policy](#Policy)) AllowNoAttrs() *attrPolicyBuilder
```
AllowNoAttrs says that attributes on an element are optional.
The attribute policy is only added to the core policy when OnElements(...) is called.
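A sketch of the typical pattern: allow an optional attribute on an element and also allow the element when it carries no attributes at all (the element choice is illustrative):
```
package main

import (
	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.NewPolicy()
	// td may carry a numeric colspan...
	p.AllowAttrs("colspan").Matching(bluemonday.Integer).OnElements("td")
	// ...and is also fine with no attributes at all.
	p.AllowNoAttrs().OnElements("td")
}
```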
####
func (*Policy) [AllowRelativeURLs](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L710) [¶](#Policy.AllowRelativeURLs)
```
func (p *[Policy](#Policy)) AllowRelativeURLs(require [bool](/builtin#bool)) *[Policy](#Policy)
```
AllowRelativeURLs enables RequireParseableURLs and then permits URLs that are parseable, have no schema information, and for which url.IsAbs() returns false. This permits local URLs.
####
func (*Policy) [AllowStandardAttributes](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/helpers.go#L145) [¶](#Policy.AllowStandardAttributes)
```
func (p *[Policy](#Policy)) AllowStandardAttributes()
```
AllowStandardAttributes will enable "id", "title" and the language specific attributes "dir" and "lang" on all elements that are allowed
####
func (*Policy) [AllowStandardURLs](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/helpers.go#L128) [¶](#Policy.AllowStandardURLs)
```
func (p *[Policy](#Policy)) AllowStandardURLs()
```
AllowStandardURLs is a convenience function that will enable rel="nofollow"
on "a", "area" and "link" (if you have allowed those elements) and will ensure that the URL values are parseable and either relative or belong to the
"mailto", "http", or "https" schemes
####
func (*Policy) [AllowStyles](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L431) [¶](#Policy.AllowStyles)
added in v1.0.3
```
func (p *[Policy](#Policy)) AllowStyles(propertyNames ...[string](/builtin#string)) *stylePolicyBuilder
```
AllowStyles takes a range of CSS property names and returns a style policy builder that allows you to specify the pattern and scope of the allowed property.
The style policy is only added to the core policy when either Globally()
or OnElements(...) are called.
Example [¶](#example-Policy.AllowStyles)
```
package main
import (
"fmt"
"regexp"
"github.com/microcosm-cc/bluemonday"
)
func main() {
p := bluemonday.NewPolicy()
// Allow only 'span' and 'p' elements
p.AllowElements("span", "p", "strong")
// Only allow 'style' attributes on 'span' and 'p' elements
p.AllowAttrs("style").OnElements("span", "p")
// Allow the 'text-decoration' property to be set to 'underline', 'line-through' or 'none'
// on 'span' elements only
p.AllowStyles("text-decoration").MatchingEnum("underline", "line-through", "none").OnElements("span")
// Allow the 'color' property with valid RGB(A) hex values only
// on every HTML element that has been allowed
p.AllowStyles("color").Matching(regexp.MustCompile("(?i)^#([0-9a-f]{3,4}|[0-9a-f]{6}|[0-9a-f]{8})$")).Globally()
// Default handler
p.AllowStyles("background-origin").Globally()
// The span has an invalid 'color' which will be stripped along with other disallowed properties
html := p.Sanitize(
`<p style="color:#f00;">
<span style="text-decoration: underline; background-image: url(javascript:alert('XSS')); color: #f00ba; background-origin: invalidValue">
Red underlined <strong style="text-decoration:none;">text</strong>
</span>
</p>`,
)
fmt.Println(html)
}
```
```
Output:
<p style="color: #f00">
<span style="text-decoration: underline">
Red underlined <strong>text</strong>
</span>
</p>
```
####
func (*Policy) [AllowStyling](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/helpers.go#L170) [¶](#Policy.AllowStyling)
```
func (p *[Policy](#Policy)) AllowStyling()
```
AllowStyling presently enables the class attribute globally.
Note: When bluemonday ships a CSS parser and we can safely sanitise that,
this will also allow sanitized styling of elements via the style attribute.
####
func (*Policy) [AllowTables](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/helpers.go#L246) [¶](#Policy.AllowTables)
```
func (p *[Policy](#Policy)) AllowTables()
```
AllowTables will enable a rich set of elements and attributes to describe HTML tables
####
func (*Policy) [AllowURLSchemeWithCustomPolicy](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L739) [¶](#Policy.AllowURLSchemeWithCustomPolicy)
```
func (p *[Policy](#Policy)) AllowURLSchemeWithCustomPolicy(
scheme [string](/builtin#string),
urlPolicy func(url *[url](/net/url).[URL](/net/url#URL)) (allowUrl [bool](/builtin#bool)),
) *[Policy](#Policy)
```
AllowURLSchemeWithCustomPolicy will append URL schemes with a custom URL policy to the allowlist.
Only the URLs with matching schema and urlPolicy(url)
returning true will be allowed.
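A hedged sketch, assuming you want to allow https links only when they point at a particular host (the host name is made up):
```
package main

import (
	"net/url"
	"strings"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.NewPolicy()
	p.AllowAttrs("href").OnElements("a")
	p.RequireParseableURLs(true)

	// Only https URLs on example.com (or a subdomain of it) are allowed.
	p.AllowURLSchemeWithCustomPolicy("https", func(u *url.URL) bool {
		return u.Host == "example.com" || strings.HasSuffix(u.Host, ".example.com")
	})
}
```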
####
func (*Policy) [AllowURLSchemes](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L720) [¶](#Policy.AllowURLSchemes)
```
func (p *[Policy](#Policy)) AllowURLSchemes(schemes ...[string](/builtin#string)) *[Policy](#Policy)
```
AllowURLSchemes will append URL schemes to the allowlist. Example: p.AllowURLSchemes("mailto", "http", "https")
####
func (*Policy) [AllowURLSchemesMatching](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L584) [¶](#Policy.AllowURLSchemesMatching)
added in v1.0.24
```
func (p *[Policy](#Policy)) AllowURLSchemesMatching(r *[regexp](/regexp).[Regexp](/regexp#Regexp)) *[Policy](#Policy)
```
AllowURLSchemesMatching will append URL schemes to the allowlist if they match a regexp.
####
func (*Policy) [AllowUnsafe](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L865) [¶](#Policy.AllowUnsafe)
added in v1.0.16
```
func (p *[Policy](#Policy)) AllowUnsafe(allowUnsafe [bool](/builtin#bool)) *[Policy](#Policy)
```
AllowUnsafe permits fundamentally unsafe elements.
If false (default) then elements such as `style` and `script` will not be permitted even if declared in a policy. These elements when combined with untrusted input cannot be safely handled by bluemonday at this point in time.
If true then `style` and `script` would be permitted by bluemonday if a policy declares them. However this is not recommended under any circumstance and can lead to XSS being rendered thus defeating the purpose of using a HTML sanitizer.
####
func (*Policy) [RequireCrossOriginAnonymous](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L671) [¶](#Policy.RequireCrossOriginAnonymous)
added in v1.0.6
```
func (p *[Policy](#Policy)) RequireCrossOriginAnonymous(require [bool](/builtin#bool)) *[Policy](#Policy)
```
RequireCrossOriginAnonymous will result in all audio, img, link, script, and video tags having a crossorigin="anonymous" added to them if one does not already exist
####
func (*Policy) [RequireNoFollowOnFullyQualifiedLinks](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L634) [¶](#Policy.RequireNoFollowOnFullyQualifiedLinks)
```
func (p *[Policy](#Policy)) RequireNoFollowOnFullyQualifiedLinks(require [bool](/builtin#bool)) *[Policy](#Policy)
```
RequireNoFollowOnFullyQualifiedLinks will result in all a, area, and link tags that point to a non-local destination (i.e. starts with a protocol and has a host) having a rel="nofollow" added to them if one does not already exist
Note: This requires p.RequireParseableURLs(true) and will enable it.
####
func (*Policy) [RequireNoFollowOnLinks](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L620) [¶](#Policy.RequireNoFollowOnLinks)
```
func (p *[Policy](#Policy)) RequireNoFollowOnLinks(require [bool](/builtin#bool)) *[Policy](#Policy)
```
RequireNoFollowOnLinks will result in all a, area, and link tags having a rel="nofollow" added to them if one does not already exist
Note: This requires p.RequireParseableURLs(true) and will enable it.
####
func (*Policy) [RequireNoReferrerOnFullyQualifiedLinks](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L660) [¶](#Policy.RequireNoReferrerOnFullyQualifiedLinks)
added in v1.0.3
```
func (p *[Policy](#Policy)) RequireNoReferrerOnFullyQualifiedLinks(require [bool](/builtin#bool)) *[Policy](#Policy)
```
RequireNoReferrerOnFullyQualifiedLinks will result in all a, area, and link tags that point to a non-local destination (i.e. starts with a protocol and has a host) having a rel="noreferrer" added to them if one does not already exist
Note: This requires p.RequireParseableURLs(true) and will enable it.
####
func (*Policy) [RequireNoReferrerOnLinks](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L646) [¶](#Policy.RequireNoReferrerOnLinks)
added in v1.0.3
```
func (p *[Policy](#Policy)) RequireNoReferrerOnLinks(require [bool](/builtin#bool)) *[Policy](#Policy)
```
RequireNoReferrerOnLinks will result in all a, area, and link tags having a rel="noreferrer" added to them if one does not already exist
Note: This requires p.RequireParseableURLs(true) and will enable it.
####
func (*Policy) [RequireParseableURLs](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L700) [¶](#Policy.RequireParseableURLs)
```
func (p *[Policy](#Policy)) RequireParseableURLs(require [bool](/builtin#bool)) *[Policy](#Policy)
```
RequireParseableURLs will result in all URLs requiring that they be parseable by "net/url" url.Parse().
This applies to:
- a.href
- area.href
- blockquote.cite
- img.src
- link.href
- script.src
####
func (*Policy) [RequireSandboxOnIFrame](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L757) [¶](#Policy.RequireSandboxOnIFrame)
added in v1.0.17
```
func (p *[Policy](#Policy)) RequireSandboxOnIFrame(vals ...[SandboxValue](#SandboxValue))
```
RequireSandboxOnIFrame will result in all iframe tags having a sandbox="" attribute. Any sandbox values not specified here will be filtered from the generated HTML.
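A minimal sketch (not from the upstream docs) of a policy that permits iframes but pins the sandbox attribute to a couple of values; the attribute and scheme choices are illustrative:
```
package main

import (
	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.NewPolicy()
	p.AllowAttrs("src").OnElements("iframe")
	p.AllowURLSchemes("https")

	// Every iframe gets a sandbox attribute; only these values survive.
	p.RequireSandboxOnIFrame(
		bluemonday.SandboxAllowForms,
		bluemonday.SandboxAllowScripts,
	)
}
```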
####
func (*Policy) [RewriteSrc](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L611) [¶](#Policy.RewriteSrc)
added in v1.0.25
```
func (p *[Policy](#Policy)) RewriteSrc(fn urlRewriter) *[Policy](#Policy)
```
RewriteSrc will rewrite the src attribute of a resource downloading tag
(e.g. <img>, <script>, <iframe>) using the provided function.
Typically the use case here is that if the content that we're sanitizing is untrusted then the content that is inlined is also untrusted.
To prevent serving this content on the same domain as the content appears on, it is good practice to proxy the content through an additional domain name, as this will force the web client to consider the inline content as third party to the main content, thus providing browser isolation around the inline content.
An example of this is a web mail provider like fastmail.com: when an email (user generated content) is displayed, the email text is shown on fastmail.com but the inline attachments and content are rendered from fastmailusercontent.com. This proxying of the external content on a domain that is different to the content domain forces the browser domain security model to kick in. Note that this only applies to differences below the suffix (as per the public suffix list).
This is a good practice to adopt as it prevents the content from being able to set cookies on the main domain and thus prevents the content on the main domain from being able to read those cookies.
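A hedged sketch of the proxying idea described above. The proxy host and path are made up, and this assumes the rewriter receives a *url.URL that it mutates in place (the parameter type is the package's unexported urlRewriter):
```
package main

import (
	"net/url"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.UGCPolicy()

	// Rewrite every src so the resource is fetched via a separate proxy domain.
	p.RewriteSrc(func(u *url.URL) {
		proxied := url.URL{
			Scheme:   "https",
			Host:     "usercontent.example-proxy.net",
			Path:     "/proxy",
			RawQuery: "url=" + url.QueryEscape(u.String()),
		}
		*u = proxied
	})
}
```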
####
func (*Policy) [Sanitize](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/sanitize.go#L60) [¶](#Policy.Sanitize)
```
func (p *[Policy](#Policy)) Sanitize(s [string](/builtin#string)) [string](/builtin#string)
```
Sanitize takes a string that contains a HTML fragment or document and applies the given policy allowlist.
It returns a HTML string that has been sanitized by the policy or an empty string if an error has occurred (most likely as a consequence of extremely malformed input)
Example [¶](#example-Policy.Sanitize)
```
package main
import (
"fmt"
"github.com/microcosm-cc/bluemonday"
)
func main() {
// UGCPolicy is a convenience policy for user generated content.
p := bluemonday.UGCPolicy()
// string in, string out
html := p.Sanitize(`<a onblur="alert(secret)" href="http://www.google.com">Google</a>`)
fmt.Println(html)
}
```
```
Output:
<a href="http://www.google.com" rel="nofollow">Google</a>
```
####
func (*Policy) [SanitizeBytes](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/sanitize.go#L74) [¶](#Policy.SanitizeBytes)
```
func (p *[Policy](#Policy)) SanitizeBytes(b [][byte](/builtin#byte)) [][byte](/builtin#byte)
```
SanitizeBytes takes a []byte that contains a HTML fragment or document and applies the given policy allowlist.
It returns a []byte containing the HTML that has been sanitized by the policy or an empty []byte if an error has occurred (most likely as a consequence of extremely malformed input)
Example [¶](#example-Policy.SanitizeBytes)
```
package main
import (
"fmt"
"github.com/microcosm-cc/bluemonday"
)
func main() {
// UGCPolicy is a convenience policy for user generated content.
p := bluemonday.UGCPolicy()
// []byte in, []byte out
b := []byte(`<a onblur="alert(secret)" href="http://www.google.com">Google</a>`)
b = p.SanitizeBytes(b)
fmt.Println(string(b))
}
```
```
Output:
<a href="http://www.google.com" rel="nofollow">Google</a>
```
####
func (*Policy) [SanitizeReader](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/sanitize.go#L87) [¶](#Policy.SanitizeReader)
```
func (p *[Policy](#Policy)) SanitizeReader(r [io](/io).[Reader](/io#Reader)) *[bytes](/bytes).[Buffer](/bytes#Buffer)
```
SanitizeReader takes an io.Reader that contains a HTML fragment or document and applies the given policy allowlist.
It returns a bytes.Buffer containing the HTML that has been sanitized by the policy. Errors during sanitization will merely return an empty result.
Example [¶](#example-Policy.SanitizeReader)
```
package main
import (
"fmt"
"strings"
"github.com/microcosm-cc/bluemonday"
)
func main() {
// UGCPolicy is a convenience policy for user generated content.
p := bluemonday.UGCPolicy()
// io.Reader in, bytes.Buffer out
r := strings.NewReader(`<a onblur="alert(secret)" href="http://www.google.com">Google</a>`)
buf := p.SanitizeReader(r)
fmt.Println(buf.String())
}
```
```
Output:
<a href="http://www.google.com" rel="nofollow">Google</a>
```
####
func (*Policy) [SanitizeReaderToWriter](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/sanitize.go#L94) [¶](#Policy.SanitizeReaderToWriter)
added in v1.0.14
```
func (p *[Policy](#Policy)) SanitizeReaderToWriter(r [io](/io).[Reader](/io#Reader), w [io](/io).[Writer](/io#Writer)) [error](/builtin#error)
```
SanitizeReaderToWriter takes an io.Reader that contains a HTML fragment or document and applies the given policy allowlist and writes to the provided writer returning an error if there is one.
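A small sketch mirroring the other Sanitize* examples, streaming from an io.Reader into an io.Writer:
```
package main

import (
	"bytes"
	"fmt"
	"log"
	"strings"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.UGCPolicy()

	in := strings.NewReader(`<a onblur="alert(secret)" href="http://www.google.com">Google</a>`)
	var out bytes.Buffer

	if err := p.SanitizeReaderToWriter(in, &out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.String())
	// As with the other examples, this should print:
	// <a href="http://www.google.com" rel="nofollow">Google</a>
}
```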
####
func (*Policy) [SkipElementsContent](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L826) [¶](#Policy.SkipElementsContent)
```
func (p *[Policy](#Policy)) SkipElementsContent(names ...[string](/builtin#string)) *[Policy](#Policy)
```
SkipElementsContent adds the HTML elements whose tags and content are to be removed entirely.
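A short illustrative sketch; the element name and input are arbitrary:
```
package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.UGCPolicy()
	// Drop <aside> tags together with everything inside them.
	p.SkipElementsContent("aside")

	fmt.Println(p.Sanitize(`keep this <aside>but drop all of this</aside>`))
}
```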
####
type [Query](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/sanitize.go#L99) [¶](#Query)
added in v1.0.6
```
type Query struct {
Key [string](/builtin#string)
Value [string](/builtin#string)
HasValue [bool](/builtin#bool)
}
```
Query represents a single part of the query string, a query param
####
type [SandboxValue](https://github.com/microcosm-cc/bluemonday/blob/v1.0.26/policy.go#L210) [¶](#SandboxValue)
added in v1.0.17
```
type SandboxValue [int64](/builtin#int64)
```
```
const (
SandboxAllowDownloads [SandboxValue](#SandboxValue) = [iota](/builtin#iota)
SandboxAllowDownloadsWithoutUserActivation
SandboxAllowForms
SandboxAllowModals
SandboxAllowOrientationLock
SandboxAllowPointerLock
SandboxAllowPopups
SandboxAllowPopupsToEscapeSandbox
SandboxAllowPresentation
SandboxAllowSameOrigin
SandboxAllowScripts
SandboxAllowStorageAccessByUserActivation
SandboxAllowTopNavigation
SandboxAllowTopNavigationByUserActivation
)
```
[chromex](#chromex-github-license-clojars-project-travis-sample-project) [GitHub license](https://github.com/binaryage/chromex/blob/v0.8.2/license.txt) [Clojars Project](https://clojars.org/binaryage/chromex) [Travis](https://travis-ci.org/binaryage/chromex) [Sample Project](https://github.com/binaryage/chromex-sample)
===
This library is auto-generated. Current version was **generated on 2019-09-12** from [**Chromium @ 21f178e9cb5c**](https://chromium.googlesource.com/chromium/src.git/+/21f178e9cb5cdf515cd4622da7af43f4275def9d).
Looking for a nightly version? Check out [**nightly branch**](https://github.com/binaryage/chromex/tree/nightly) which gets updated if there are any new API changes.
#### [Chromex provides idiomatic ClojureScript interface](#chromex-provides-idiomatic-clojurescript-interface)
For Chrome Extensions and also for Chrome Apps:
| API family | namespaces | properties | functions | events |
| --- | --- | --- | --- | --- |
| [Public Chrome Extension APIs](https://github.com/binaryage/chromex/blob/v0.8.2/src/exts) | 79 | 53 | 391 | 178 |
| [Public Chrome App APIs](https://github.com/binaryage/chromex/blob/v0.8.2/src/apps) | 68 | 30 | 452 | 152 |
| [Private Chrome Extension APIs](https://github.com/binaryage/chromex/blob/v0.8.2/src/exts_private) | 39 | 0 | 405 | 72 |
| [Private Chrome App APIs](https://github.com/binaryage/chromex/blob/v0.8.2/src/apps_private) | 36 | 0 | 324 | 69 |
Note: Chromex generator uses the same data source as [developer.chrome.com/extensions/api_index](https://developer.chrome.com/extensions/api_index) and
[developer.chrome.com/apps/api_index](https://developer.chrome.com/apps/api_index) docs.
The following documentation mostly talks about Chrome Extension development, but the same patterns generally apply to Chrome App development as well.
This library is data-driven. Given an API namespace, all API methods, properties and events are described in a Clojure map along with their parameters, callbacks, versions and additional metadata ([a simple example - look for `api-table` here](https://github.com/binaryage/chromex/blob/v0.8.2/src/exts/chromex/ext/context_menus.clj)).
Chromex then provides a set of macros which consume this table and generate actual ClojureScript code wrapping native APIs.
These macros can be further parametrized, which allows for greater flexibility. Sane defaults are provided with the following goals:
* API version checking and deprecation warnings at compile-time
* flexible marshalling of Javascript values to ClojureScript and back
* callbacks are converted to core.async channels
* events are emitted into core.async channels
#### [API versions and deprecation warnings](#api-versions-and-deprecation-warnings)
Chrome Extension API is evolving. You might want to target multiple Chrome versions with slightly different APIs. Good news is that our API data map contains full versioning and deprecation information.
By default you target the latest APIs. But you can target older API version instead and we will warn you during compilation in case you were accessing any API not yet available in that particular version.
Additionally we are able to detect calls to deprecated APIs and warn you during compilation.
#### [Flexible marshalling](#flexible-marshalling)
Generated API data map contains information about all parameters and their types. Chromex provides a pluggable system to specify how particular types are marshalled when crossing API boundary.
By default we marshall only a few types where it makes good sense. We don't want to blindly run all parameters through `clj->js` and `js->clj` conversions. That could have unexpected consequences and maybe performance implications. Instead we keep marshalling lean and give you an easy way to provide your own macro which can optionally generate the required marshalling code (during compilation).
There is also a practical reason. This library is auto-generated and quite large - it would be too laborious to keep hairy marshalling conventions up-to-date with the evolving Chrome API index. If you want to provide a richer set of marshalling for particular APIs you care about, you [can do that consistently](https://github.com/binaryage/chromex/blob/v0.8.2/src/lib/chromex/marshalling.clj).
#### [Callbacks as core.async channels](#callbacks-as-coreasync-channels)
Many Chrome API calls are async in nature and require you to specify a callback (for receiving an answer later).
You might want to watch this video explaining [API conventions](https://www.youtube.com/watch?v=bmxr75CV36A) in Chrome.
We automatically turn all API functions with a callback parameter to a ClojureScript function without that callback parameter but returning a new core.async channel instead (`promise-chan`). The channel eventually receives a vector of parameters passed into the callback.
When an error occurs, the channel closes without receiving any result (you receive `nil`). In that case you can immediately call `chromex.error/get-last-error` to obtain relevant error object (which is what was found in `chrome.runtime.lastError` during the callback).
This mechanism is pluggable, so you can optionally implement your own mechanism of consuming callback calls.
#### [Events are emitted into core.async channels](#events-are-emitted-into-coreasync-channels)
Chrome API namespaces usually provide multiple `event` objects which you can subscribe with `.addListener`.
You provide a callback function which will get called with future events as they occur. Later you can call `.removeListener`
to unsubscribe from the event stream.
We think consuming events via core.async channels is more natural for ClojureScript developers.
In Chromex, you can request Chrome events to be emitted into a core.async channel provided by you.
And then implement a single loop to sequentially process events as they appear on the channel.
Again this mechanism is pluggable, so you can optionally implement a different mechanism for consuming event streams.
### [Usage examples](#usage-examples)
We provide an example skeleton Chrome extension [chromex-sample](https://github.com/binaryage/chromex-sample). This project acts as a code example but also as a skeleton with project configuration. We recommend using it as a starting point when starting development of your own extension.
Please refer to [readme in chromex-sample](https://github.com/binaryage/chromex-sample) for further explanation and code examples.
### [Advanced mode compilation](#advanced-mode-compilation)
Chromex does not rely on externs file. Instead it is rigorously [using string names](https://github.com/clojure/clojurescript/wiki/Dependencies#using-string-names)
to access Javascript properties. I would recommend you to do the same in your own extension code. It is not that hard after all. You can use `oget`, `ocall` and `oapply`
macros from the [cljs-oops](https://github.com/binaryage/cljs-oops) library, which is designed to work with string names.
Note: There is a [chrome_extensions.js](https://github.com/google/closure-compiler/blob/master/contrib/externs/chrome_extensions.js) externs file available,
but that's been updated ad-hoc by the community. It is definitely incomplete and may be incorrect. But of course you are free to include the externs file into your own project and rely on it if it works for your code. It depends on how recent/popular APIs are you going to use.
### [Tapping events](#tapping-events)
Let's say for example you want to subscribe to [tab creation events](https://developer.chrome.com/extensions/tabs#event-onCreated) and
[web navigation's "committed" events](https://developer.chrome.com/extensions/webNavigation#event-onCommitted).
```
(ns your.project
(:require [cljs.core.async :refer [chan close! go-loop]]
[chromex.ext.tabs :as tabs]
[chromex.ext.web-navigation :as web-navigation]
[chromex.chrome-event-channel :refer [make-chrome-event-channel]]))
(let [chrome-event-channel (make-chrome-event-channel (chan))]
(tabs/tap-on-created-events chrome-event-channel)
(web-navigation/tap-on-committed-events chrome-event-channel (clj->js {"url" [{"hostSuffix" "google.com"}]}))
; do something with the channel...
(go-loop []
(when-some [[event-id event-params] (<! chrome-event-channel)]
(process-chrome-event event-id event-params)
(recur))
(println "leaving main event loop"))
; alternatively
(close! chrome-event-channel)) ; this will unregister all chrome event listeners on the channel
```
As we wrote in previous sections, by default you consume Chrome events via core.async channels:
1. first, you have to create/provide a channel of your liking
2. then optionally wrap it in `make-chrome-event-channel` call
3. then call one or more tap-some-events calls
4. then you can process events as they appear on the channel
If you don't want to use the channel anymore, you should `close!` it.
Events coming from the channel are pairs `[event-id params]`, where params is a vector of parameters passed into event's callback function. See [chromex-sample](https://github.com/binaryage/chromex-sample/blob/master/src/background/chromex_sample/background/core.cljs)
for example usage. Refer to [Chrome's API docs](https://developer.chrome.com/extensions/api_index) for specific event objects.
Note: instead of calling tap-some-events you could call `tap-all-events`. This is a convenience function which will tap events on all valid non-deprecated event objects in given namespace. For example `tabs/tap-all-events` will subscribe to all existing tabs events in the latest API.
`make-chrome-event-channel` is a convenience wrapper for a raw core.async channel. It is aware of event listeners and is able to unsubscribe them when the channel gets closed. But you are free to remove listeners manually as well:
tap calls return `ChromeEventSubscription`, which gives you an interface to `unsubscribe!` a given tap. This way you can dynamically add/remove subscriptions on the channel.
Tap calls accept not only channel but also more optional arguments. These arguments will be passed into `.addListener` call when registering Chrome event listener. This is needed for scenarios when event objects accept filters or other additional parameters.
`web-navigation/tap-on-committed-events` is [an example of such situation](https://developer.chrome.com/extensions/events#filtered).
Even more complex scenario is registering listeners on some [webRequest API events](https://developer.chrome.com/extensions/webRequest)
(see 'Registering event listeners' section).
#### [Synchronous event listeners](#synchronous-event-listeners)
In some rare cases Chrome event listener has to be synchronous. For example [webRequest's onBeforeRequest event](https://developer.chrome.com/extensions/webRequest#examples)
accepts "blocking" flag which instructs Chrome to wait for listener's answer in a blocking call.
Here is an example of how you would do this in chromex:
```
(ns your.project
(:require ...
[chromex.config :refer-macros [with-custom-event-listener-factory]]
[chromex.chrome-event-channel :refer [make-chrome-event-channel]]
[chromex.ext.web-request :as web-request]))
(defn my-event-listener-factory []
(fn [& args]
; do something useful with args...
#js ["return native answer"])) ; note: this value will be passed back to Chrome as-is, marshalling won't be applied here
...
(with-custom-event-listener-factory my-event-listener-factory
(web-request/tap-on-before-request-events chan (clj->js {"urls" ["<all_urls>"]}) #js ["blocking"]))
...
```
What happened here? We have specified our own event listener factory which is responsible for creating a new event callback function whenever chromex asks for it. The default implementation is [here](https://github.com/binaryage/chromex/blob/v0.8.2/src/lib/chromex/defaults.cljs).
This function is part of our config object, so it can be redefined during runtime.
[`with-custom-event-listener-factory`](https://github.com/binaryage/chromex/blob/master/src/lib/chromex/config.clj)
is just a convenience macro to override this config setting temporarily.
This way we get all the benefits of chromex (marshalling, logging, API deprecation/version checking, etc.) but still have the flexibility to hook in our own custom listener code if needed. Please note that the event listener has to return a native value.
We don't have type information here to do the marshalling automatically. Also note that incoming parameters into event listener get marshalled to ClojureScript (as expected). And obviously this event won't appear on the channel unless you `put!`
it there in your custom listener code.
Also note how we passed extra arguments for the `.addListener` call. This was discussed in the previous section.
Advanced tip: similarly you can replace some other configurable functions in the config object. For example you can change the way how callbacks are turned into core.async channels. Theoretically you could replace it with some other mechanism (e.g. with js promises).
### [Projects using Chromex](#projects-using-chromex)
* [binaryage/chromex-sample](https://github.com/binaryage/chromex-sample) - a demo project of Chromex usage
* [binaryage/dirac](https://github.com/binaryage/dirac) - a Chrome DevTools fork for ClojureScript developers
* [madvas/thai2english-chrome-extension](https://github.com/madvas/thai2english-chrome-extension) - a Chrome extension to translate Thai to English
* [jazzytomato/hnlookup](https://github.com/jazzytomato/hnlookup) - a Chrome popup extension to look up pages on Hacker News
* [Scout, by Room Key](https://www.roomkey.com/scout) - Chrome extension that finds lower hotel rates as you browse major travel search sites
### [Similar libraries](#similar-libraries)
* [suprematic/khroma](https://github.com/suprematic/khroma)
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
README
[¶](#section-readme)
---
### Azure Blob Storage module for Go
> Service Version: 2023-08-03
Azure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data - data that does not adhere to a particular data model or definition, such as text or binary data. For more information, see [Introduction to Azure Blob Storage](https://learn.microsoft.com/azure/storage/blobs/storage-blobs-introduction).
Use the Azure Blob Storage client module `github.com/Azure/azure-sdk-for-go/sdk/storage/azblob` to:
* Authenticate clients with Azure Blob Storage
* Manipulate containers and blobs in an Azure storage account
Key links:
[Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) | [API reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section_documentation) | [REST API documentation](https://learn.microsoft.com/rest/api/storageservices/blob-service-rest-api) | [Product documentation](https://learn.microsoft.com/azure/storage/blobs/storage-blobs-overview) | [Samples](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/main)
#### Getting started
##### Prerequisites
* Go, version 1.18 or higher - [Install Go](https://go.dev/doc/install)
* Azure subscription - [Create a free account](https://azure.microsoft.com/free/)
* Azure storage account - To create a storage account, use tools including the [Azure portal](https://learn.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-portal),
[Azure PowerShell](https://learn.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-powershell), or the [Azure CLI](https://learn.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-cli).
Here's an example using the Azure CLI:
```
az storage account create --name MyStorageAccount --resource-group MyResourceGroup --location westus --sku Standard_LRS
```
##### Install the package
Install the Azure Blob Storage client module for Go with [go get](https://pkg.go.dev/cmd/go#hdr-Add_dependencies_to_current_module_and_install_them):
```
go get github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
```
If you plan to authenticate with Azure Active Directory (recommended), also install the [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) module.
```
go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
```
##### Authenticate the client
To interact with the Azure Blob Storage service, you'll need to create an instance of the `azblob.Client` type. The [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) module makes it easy to add Azure Active Directory support for authenticating Azure SDK clients with their corresponding Azure services.
```
// create a credential for authenticating with Azure Active Directory
cred, err := azidentity.NewDefaultAzureCredential(nil)
// TODO: handle err

// create an azblob.Client for the specified storage account that uses the above credential
client, err := azblob.NewClient("https://MYSTORAGEACCOUNT.blob.core.windows.net/", cred, nil)
// TODO: handle err
```
Learn more about enabling Azure Active Directory for authentication with Azure Storage:
* [Authorize access to blobs using Azure Active Directory](https://learn.microsoft.com/azure/storage/common/storage-auth-aad)
Other options for authentication include connection strings, shared key, shared access signatures (SAS), and anonymous public access. Use the appropriate client constructor function for the authentication mechanism you wish to use. For examples, see:
* [Blob samples](https://github.com/Azure/azure-sdk-for-go/raw/main/sdk/storage/azblob/examples_test.go)
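For a quick side-by-side, here is a minimal sketch of the connection-string and shared-key constructors. The connection string, account name, and account key values are placeholders to replace with your own.
```
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	// authenticate with a connection string (placeholder value)
	connClient, err := azblob.NewClientFromConnectionString("<connection_string>", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(connClient.URL())

	// authenticate with a shared key (placeholder values)
	cred, err := azblob.NewSharedKeyCredential("<account_name>", "<account_key>")
	if err != nil {
		log.Fatal(err)
	}
	keyClient, err := azblob.NewClientWithSharedKeyCredential("https://<account_name>.blob.core.windows.net/", cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(keyClient.URL())
}
```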
#### Key concepts
Blob Storage is designed for:
* Serving images or documents directly to a browser.
* Storing files for distributed access.
* Streaming video and audio.
* Writing to log files.
* Storing data for backup and restore, disaster recovery, and archiving.
* Storing data for analysis by an on-premises or Azure-hosted service.
Blob Storage offers three types of resources:
* The *storage account*
* One or more *containers* in a storage account
* One or more *blobs* in a container
Instances of the `azblob.Client` type provide methods for manipulating containers and blobs within a storage account.
The storage account is specified when the `azblob.Client` is constructed.
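As a rough illustration of that hierarchy, the sketch below parses a blob URL into its account, container, and blob components using `azblob.ParseURL` (listed in the reference further down). The URL is a placeholder, and the `Host`, `ContainerName`, and `BlobName` field names are assumed from the `URLParts` type.
```
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	// a hypothetical blob URL: https://<account>.blob.core.windows.net/<container>/<blob>
	parts, err := azblob.ParseURL("https://MYSTORAGEACCOUNT.blob.core.windows.net/sample-container/dir/sample-blob.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(parts.Host)          // the storage account endpoint
	fmt.Println(parts.ContainerName) // "sample-container"
	fmt.Println(parts.BlobName)      // "dir/sample-blob.txt"
}
```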
##### Specialized clients
The Azure Blob Storage client module for Go also provides specialized clients in various subpackages. Use these clients when you need to interact with a specific kind of blob. Learn more about [block blobs, append blobs, and page blobs](https://learn.microsoft.com/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs).
* [appendblob](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob/appendblob/client.go)
* [blockblob](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob/blockblob/client.go)
* [pageblob](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob/pageblob/client.go)
The [blob](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob/blob/client.go) package contains APIs common to all blob types. This includes APIs for deleting and undeleting a blob, setting metadata, and more.
The [lease](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob/lease) package contains clients for managing leases on blobs and containers. See the [REST API reference](https://learn.microsoft.com/rest/api/storageservices/lease-blob#remarks) for general information on leases.
The [container](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob/container/client.go) package contains APIs specific to containers. This includes APIs for setting access policies or properties, and more.
The [service](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob/service/client.go) package contains APIs specific to the Blob service. This includes APIs for manipulating containers, retrieving account information, and more.
The [sas](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob/sas) package contains utilities to aid in the creation and manipulation of shared access signature (SAS) tokens.
See the package's documentation for more information.
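As a minimal sketch of how to reach these specialized clients from an `azblob.Client` (constructed as shown above, with placeholder container and blob names), you can navigate through the embedded service client:
```
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	client, err := azblob.NewClient(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// navigate from the top-level client to container- and blob-scoped clients
	containerClient := client.ServiceClient().NewContainerClient("sample-container")
	blockBlobClient := containerClient.NewBlockBlobClient("sample-blob")
	fmt.Println(containerClient.URL())
	fmt.Println(blockBlobClient.URL())
}
```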
##### Goroutine safety
We guarantee that all client instance methods are goroutine-safe and independent of each other (see [guideline](https://azure.github.io/azure-sdk/golang_introduction.html#thread-safety)). This ensures that the recommendation to reuse client instances is always safe, even across goroutines.
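For illustration, here is a small sketch of one shared client uploading several blobs from concurrent goroutines; the container name is a placeholder and is assumed to already exist.
```
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"sync"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	// one client instance, shared by all goroutines
	client, err := azblob.NewClient(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			blobName := fmt.Sprintf("concurrent-blob-%d.txt", n)
			// client methods are goroutine-safe, so concurrent calls on the same client are fine
			if _, err := client.UploadBuffer(context.TODO(), "sample-container", blobName, []byte("hello"), nil); err != nil {
				log.Println(err)
			}
		}(i)
	}
	wg.Wait()
}
```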
##### Blob metadata
Blob metadata name-value pairs are valid HTTP headers and should adhere to all restrictions governing HTTP headers. Metadata names must be valid HTTP header names, may contain only ASCII characters, and should be treated as case-insensitive. Base64-encode or URL-encode metadata values containing non-ASCII characters.
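For example, metadata can be supplied when uploading a blob. The sketch below (with placeholder names) passes a `map[string]*string` via `UploadStreamOptions`, mirroring the upload examples later in this document.
```
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"strings"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	client, err := azblob.NewClient(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// metadata names must be valid HTTP header names; values with non-ASCII characters should be encoded
	_, err = client.UploadStream(context.TODO(), "sample-container", "sample-blob.txt", strings.NewReader("hello"),
		&azblob.UploadStreamOptions{
			Metadata: map[string]*string{"Project": to.Ptr("azblob-demo")},
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("uploaded blob with metadata")
}
```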
##### Additional concepts
[Client options](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy#ClientOptions) |
[Accessing the response](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#WithCaptureResponse) |
[Handling failures](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore#ResponseError) |
[Logging](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore/log)
#### Examples
##### Upload a blob
```
const (
account = "https://MYSTORAGEACCOUNT.blob.core.windows.net/"
containerName = "sample-container"
blobName = "sample-blob"
sampleFile = "path/to/sample/file"
)
// authenticate with Azure Active Directory
cred, err := azidentity.NewDefaultAzureCredential(nil)
// TODO: handle error

// create a client for the specified storage account
client, err := azblob.NewClient(account, cred, nil)
// TODO: handle error

// open the file for reading
file, err := os.OpenFile(sampleFile, os.O_RDONLY, 0)
// TODO: handle error
defer file.Close()

// upload the file to the specified container with the specified blob name
_, err = client.UploadFile(context.TODO(), containerName, blobName, file, nil)
// TODO: handle error
```
##### Download a blob
```
// this example accesses a public blob via anonymous access, so no credentials are required
client, err := azblob.NewClientWithNoCredential("https://azurestoragesamples.blob.core.windows.net/", nil)
// TODO: handle error

// create or open a local file where we can download the blob
file, err := os.Create("cloud.jpg")
// TODO: handle error
defer file.Close()

// download the blob
_, err = client.DownloadFile(context.TODO(), "samples", "cloud.jpg", file, nil)
// TODO: handle error
```
##### Enumerate blobs
```
const (
account = "https://MYSTORAGEACCOUNT.blob.core.windows.net/"
containerName = "sample-container"
)
// authenticate with Azure Active Directory
cred, err := azidentity.NewDefaultAzureCredential(nil)
// TODO: handle error

// create a client for the specified storage account
client, err := azblob.NewClient(account, cred, nil)
// TODO: handle error

// blob listings are returned across multiple pages
pager := client.NewListBlobsFlatPager(containerName, nil)

// continue fetching pages until no more remain
for pager.More() {
// advance to the next page
page, err := pager.NextPage(context.TODO())
// TODO: handle error
// print the blob names for this page
for _, blob := range page.Segment.BlobItems {
fmt.Println(*blob.Name)
}
}
```
#### Troubleshooting
All Blob service operations will return an
[*azcore.ResponseError](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore#ResponseError) on failure with a populated `ErrorCode` field. Many of these errors are recoverable.
The [bloberror](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob/bloberror/error_codes.go) package provides the possible Storage error codes along with helper facilities for error handling.
```
const (
connectionString = "<connection_string>"
containerName = "sample-container"
)
// create a client with the provided connection string
client, err := azblob.NewClientFromConnectionString(connectionString, nil)
// TODO: handle error

// try to delete the container, avoiding any potential race conditions with an in-progress or completed deletion
_, err = client.DeleteContainer(context.TODO(), containerName, nil)
if bloberror.HasCode(err, bloberror.ContainerBeingDeleted, bloberror.ContainerNotFound) {
// ignore any errors if the container is being deleted or already has been deleted
} else if err != nil {
// TODO: some other error
}
```
#### Next steps
Get started with our [Blob samples](https://github.com/Azure/azure-sdk-for-go/raw/main/sdk/storage/azblob/examples_test.go). They contain complete examples of the above snippets and more.
#### Contributing
See the [Storage CONTRIBUTING.md](https://github.com/Azure/azure-sdk-for-go/blob/main/CONTRIBUTING.md) for details on building,
testing, and contributing to this library.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit [cla.microsoft.com](https://cla.microsoft.com).
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [<EMAIL>](mailto:<EMAIL>) with any additional questions or comments.
Documentation [¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Example [¶](#example-package)
This example is a quick-starter and demonstrates how to get started using the Azure Blob Storage SDK for Go.
```
package main
import (
"context"
"fmt"
"io"
"log"
"os"
"strings"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
// Your account name and key can be obtained from the Azure Portal.
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
accountKey, ok := os.LookupEnv("AZURE_STORAGE_PRIMARY_ACCOUNT_KEY")
if !ok {
panic("AZURE_STORAGE_PRIMARY_ACCOUNT_KEY could not be found")
}
cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
handleError(err)
// The service URL for blob endpoints is usually in the form: http(s)://<account>.blob.core.windows.net/
client, err := azblob.NewClientWithSharedKeyCredential(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil)
handleError(err)
// === 1. Create a container ===
containerName := "testcontainer"
containerCreateResp, err := client.CreateContainer(context.TODO(), containerName, nil)
handleError(err)
fmt.Println(containerCreateResp)
// === 2. Upload and Download a block blob ===
blobData := "Hello world!"
blobName := "HelloWorld.txt"
uploadResp, err := client.UploadStream(context.TODO(),
containerName,
blobName,
strings.NewReader(blobData),
&azblob.UploadStreamOptions{
Metadata: map[string]*string{"Foo": to.Ptr("Bar")},
Tags: map[string]string{"Year": "2022"},
})
handleError(err)
fmt.Println(uploadResp)
// Download the blob's contents and ensure that the download worked properly
blobDownloadResponse, err := client.DownloadStream(context.TODO(), containerName, blobName, nil)
handleError(err)
// Use the bytes.Buffer object to read the downloaded data.
// RetryReaderOptions has a lot of in-depth tuning abilities, but for the sake of simplicity, we'll omit those here.
reader := blobDownloadResponse.Body
downloadData, err := io.ReadAll(reader)
handleError(err)
if string(downloadData) != blobData {
log.Fatal("Uploaded data should be same as downloaded data")
}
err = reader.Close()
if err != nil {
return
}
// === 3. List blobs ===
	// List methods return a pager object that can be used to iterate over the results of a paging operation.
	// Call NextPage(context.Context) to fetch the next page of results.
	// The items in each page can then be iterated over directly (see resp.Segment.BlobItems below).
pager := client.NewListBlobsFlatPager(containerName, nil)
for pager.More() {
resp, err := pager.NextPage(context.TODO())
handleError(err)
for _, v := range resp.Segment.BlobItems {
fmt.Println(*v.Name)
}
}
// Delete the blob.
_, err = client.DeleteBlob(context.TODO(), containerName, blobName, nil)
handleError(err)
// Delete the container.
_, err = client.DeleteContainer(context.TODO(), containerName, nil)
handleError(err)
}
```
```
Output:
```
Example (Blob_AccessConditions) [¶](#example-package-Blob_AccessConditions)
This example shows how to perform operations on blob conditionally.
```
package main
import (
"context"
"fmt"
"log"
"os"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
accountName, accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME"), os.Getenv("AZURE_STORAGE_ACCOUNT_KEY")
credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
handleError(err)
blockBlob, err := blockblob.NewClientWithSharedKeyCredential(fmt.Sprintf("https://%s.blob.core.windows.net/mycontainer/Data.txt", accountName), credential, nil)
handleError(err)
// This function displays the results of an operation
showResult := func(response *blob.DownloadStreamResponse, err error) {
if err != nil {
log.Fatalf("Failure: %s\n", err.Error())
} else {
err := response.Body.Close()
if err != nil {
log.Fatal(err)
}
// The client must close the response body when finished with it
fmt.Printf("Success: %v\n", response)
}
}
showResultUpload := func(response blockblob.UploadResponse, err error) {
if err != nil {
log.Fatalf("Failure: %s\n", err.Error())
}
fmt.Printf("Success: %v\n", response)
}
// Create the blob
upload, err := blockBlob.Upload(context.TODO(), streaming.NopCloser(strings.NewReader("Text-1")), nil)
showResultUpload(upload, err)
// Download blob content if the blob has been modified since we uploaded it (fails):
downloadResp, err := blockBlob.DownloadStream(
context.TODO(),
&azblob.DownloadStreamOptions{
AccessConditions: &blob.AccessConditions{
ModifiedAccessConditions: &blob.ModifiedAccessConditions{
IfModifiedSince: upload.LastModified,
},
},
},
)
showResult(&downloadResp, err)
// Download blob content if the blob hasn't been modified in the last 24 hours (fails):
downloadResp, err = blockBlob.DownloadStream(
context.TODO(),
&azblob.DownloadStreamOptions{
AccessConditions: &blob.AccessConditions{
ModifiedAccessConditions: &blob.ModifiedAccessConditions{
IfUnmodifiedSince: to.Ptr(time.Now().UTC().Add(time.Hour * -24))},
},
},
)
showResult(&downloadResp, err)
// Upload new content if the blob hasn't changed since the version identified by ETag (succeeds):
showResultUpload(blockBlob.Upload(
context.TODO(),
streaming.NopCloser(strings.NewReader("Text-2")),
&blockblob.UploadOptions{
AccessConditions: &blob.AccessConditions{
ModifiedAccessConditions: &blob.ModifiedAccessConditions{IfMatch: upload.ETag},
},
},
))
// Download content if it has changed since the version identified by ETag (fails):
downloadResp, err = blockBlob.DownloadStream(
context.TODO(),
&azblob.DownloadStreamOptions{
AccessConditions: &blob.AccessConditions{
ModifiedAccessConditions: &blob.ModifiedAccessConditions{IfNoneMatch: upload.ETag}},
})
showResult(&downloadResp, err)
// Upload content if the blob doesn't already exist (fails):
showResultUpload(blockBlob.Upload(
context.TODO(),
streaming.NopCloser(strings.NewReader("Text-3")),
&blockblob.UploadOptions{
AccessConditions: &blob.AccessConditions{
ModifiedAccessConditions: &blob.ModifiedAccessConditions{IfNoneMatch: to.Ptr(azcore.ETagAny)},
},
}))
}
```
```
Output:
```
Example (Blob_Client_Download) [¶](#example-package-Blob_Client_Download)
This example shows how to download a large stream with intelligent retries. Specifically, if the connection fails while reading, continuing to read from this stream initiates a new GetBlob call passing a range that starts from the last byte successfully read before the failure.
```
package main
import (
"context"
"fmt"
"io"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
// From the Azure portal, get your Storage account blob service URL endpoint.
accountName, accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME"), os.Getenv("AZURE_STORAGE_ACCOUNT_KEY")
// Create a blobClient object to a blob in the container (we assume the container & blob already exist).
blobURL := fmt.Sprintf("https://%s.blob.core.windows.net/mycontainer/BigBlob.bin", accountName)
credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
handleError(err)
blobClient, err := blob.NewClientWithSharedKeyCredential(blobURL, credential, nil)
handleError(err)
	// Download returns an intelligent retryable stream around a blob; it returns an io.ReadCloser.
	dr, err := blobClient.DownloadStream(context.TODO(), nil)
	handleError(err)
	rs := dr.Body
	// Used for progress reporting: the total number of bytes being downloaded.
	contentLength := *dr.ContentLength
	// NewResponseProgress wraps the retryable stream with progress reporting; it returns an io.ReadCloser.
stream := streaming.NewResponseProgress(
rs,
func(bytesTransferred int64) {
fmt.Printf("Downloaded %d of %d bytes.\n", bytesTransferred, contentLength)
},
)
defer func(stream io.ReadCloser) {
err := stream.Close()
if err != nil {
log.Fatal(err)
}
}(stream) // The client must close the response body when finished with it
file, err := os.Create("BigFile.bin") // Create the file to hold the downloaded blob contents.
handleError(err)
defer func(file *os.File) {
err := file.Close()
		if err != nil {
			log.Fatal(err)
		}
}(file)
written, err := io.Copy(file, stream) // Write to the file by reading from the blob (with intelligent retries).
handleError(err)
fmt.Printf("Wrote %d bytes.\n", written)
}
```
```
Output:
```
Example (Client_CreateContainer) [¶](#example-package-Client_CreateContainer)
```
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
cred, err := azidentity.NewDefaultAzureCredential(nil)
handleError(err)
client, err := azblob.NewClient(serviceURL, cred, nil)
handleError(err)
resp, err := client.CreateContainer(context.TODO(), "testcontainer", &azblob.CreateContainerOptions{
Metadata: map[string]*string{"hello": to.Ptr("world")},
})
handleError(err)
fmt.Println(resp)
}
```
```
Output:
```
Example (Client_DeleteBlob) [¶](#example-package-Client_DeleteBlob)
```
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
cred, err := azidentity.NewDefaultAzureCredential(nil)
handleError(err)
client, err := azblob.NewClient(serviceURL, cred, nil)
handleError(err)
resp, err := client.DeleteBlob(context.TODO(), "testcontainer", "testblob", nil)
handleError(err)
fmt.Println(resp)
}
```
```
Output:
```
Example (Client_DeleteContainer) [¶](#example-package-Client_DeleteContainer)
```
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
cred, err := azidentity.NewDefaultAzureCredential(nil)
handleError(err)
client, err := azblob.NewClient(serviceURL, cred, nil)
handleError(err)
resp, err := client.DeleteContainer(context.TODO(), "testcontainer", nil)
handleError(err)
fmt.Println(resp)
}
```
```
Output:
```
Example (Client_DownloadFile) [¶](#example-package-Client_DownloadFile)
```
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
// Set up file to download the blob to
destFileName := "test_download_file.txt"
destFile, err := os.Create(destFileName)
handleError(err)
defer func(destFile *os.File) {
err = destFile.Close()
handleError(err)
}(destFile)
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
cred, err := azidentity.NewDefaultAzureCredential(nil)
handleError(err)
client, err := azblob.NewClient(serviceURL, cred, nil)
handleError(err)
// Perform download
_, err = client.DownloadFile(context.TODO(), "testcontainer", "virtual/dir/path/"+destFileName, destFile,
&azblob.DownloadFileOptions{
// If Progress is non-nil, this function is called periodically as bytes are uploaded.
Progress: func(bytesTransferred int64) {
fmt.Println(bytesTransferred)
},
})
// Assert download was successful
handleError(err)
}
```
```
Output:
```
Example (Client_DownloadStream) [¶](#example-package-Client_DownloadStream)
```
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
cred, err := azidentity.NewDefaultAzureCredential(nil)
handleError(err)
client, err := azblob.NewClient(serviceURL, cred, nil)
handleError(err)
// Download the blob
downloadResponse, err := client.DownloadStream(ctx, "testcontainer", "test_download_stream.bin", nil)
handleError(err)

// Assert that the content is correct
actualBlobData, err := io.ReadAll(downloadResponse.Body)
handleError(err)
fmt.Println(len(actualBlobData))
```
```
Output:
```
Example (Client_NewClient) [¶](#example-package-Client_NewClient)
```
package main
import (
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
// this example uses Azure Active Directory (AAD) to authenticate with Azure Blob Storage
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
// https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DefaultAzureCredential
cred, err := azidentity.NewDefaultAzureCredential(nil)
handleError(err)
client, err := azblob.NewClient(serviceURL, cred, nil)
handleError(err)
fmt.Println(client.URL())
}
```
```
Output:
```
Example (Client_NewClientFromConnectionString) [¶](#example-package-Client_NewClientFromConnectionString)
```
package main
import (
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
// this example uses a connection string to authenticate with Azure Blob Storage
connectionString, ok := os.LookupEnv("AZURE_STORAGE_CONNECTION_STRING")
if !ok {
log.Fatal("the environment variable 'AZURE_STORAGE_CONNECTION_STRING' could not be found")
}
serviceClient, err := azblob.NewClientFromConnectionString(connectionString, nil)
handleError(err)
fmt.Println(serviceClient.URL())
}
```
```
Output:
```
Example (Client_NewClientWithSharedKeyCredential) [¶](#example-package-Client_NewClientWithSharedKeyCredential)
```
package main
import (
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
// this example uses a shared key to authenticate with Azure Blob Storage
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
accountKey, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_KEY")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_KEY could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
// shared key authentication requires the storage account name and access key
cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
handleError(err)
serviceClient, err := azblob.NewClientWithSharedKeyCredential(serviceURL, cred, nil)
handleError(err)
fmt.Println(serviceClient.URL())
}
```
```
Output:
```
Example (Client_NewListBlobsPager) [¶](#example-package-Client_NewListBlobsPager)
```
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
cred, err := azidentity.NewDefaultAzureCredential(nil)
handleError(err)
client, err := azblob.NewClient(serviceURL, cred, nil)
handleError(err)
pager := client.NewListBlobsFlatPager("testcontainer", &azblob.ListBlobsFlatOptions{
Include: container.ListBlobsInclude{Deleted: true, Versions: true},
})
for pager.More() {
resp, err := pager.NextPage(ctx)
handleError(err) // if err is not nil, break the loop.
for _, _blob := range resp.Segment.BlobItems {
fmt.Printf("%v", _blob.Name)
}
}
```
```
Output:
```
Example (Client_NewListContainersPager) [¶](#example-package-Client_NewListContainersPager)
```
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
cred, err := azidentity.NewDefaultAzureCredential(nil)
handleError(err)
client, err := azblob.NewClient(serviceURL, cred, nil)
handleError(err)
pager := client.NewListContainersPager(&azblob.ListContainersOptions{
Include: azblob.ListContainersInclude{Metadata: true, Deleted: true},
})
for pager.More() {
resp, err := pager.NextPage(ctx)
handleError(err) // if err is not nil, break the loop.
for _, _container := range resp.ContainerItems {
fmt.Printf("%v", _container)
}
}
```
```
Output:
```
Example (Client_UploadFile) [¶](#example-package-Client_UploadFile)
```
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
// Set up file to upload
fileSize := 8 * 1024 * 1024
fileName := "test_upload_file.txt"
fileData := make([]byte, fileSize)
err := os.WriteFile(fileName, fileData, 0666)
handleError(err)
// Open the file to upload
fileHandler, err := os.Open(fileName)
handleError(err)
// close the file after it is no longer required.
defer func(file *os.File) {
err = file.Close()
handleError(err)
}(fileHandler)
// delete the local file if required.
defer func(name string) {
err = os.Remove(name)
handleError(err)
}(fileName)
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
cred, err := azidentity.NewDefaultAzureCredential(nil)
handleError(err)
client, err := azblob.NewClient(serviceURL, cred, nil)
handleError(err)
// Upload the file to a block blob
_, err = client.UploadFile(context.TODO(), "testcontainer", "virtual/dir/path/"+fileName, fileHandler,
&azblob.UploadFileOptions{
BlockSize: int64(1024),
Concurrency: uint16(3),
// If Progress is non-nil, this function is called periodically as bytes are uploaded.
Progress: func(bytesTransferred int64) {
fmt.Println(bytesTransferred)
},
})
handleError(err)
}
```
```
Output:
```
Example (Client_UploadStream) [¶](#example-package-Client_UploadStream)
```
package main
import (
"bytes"
"context"
"fmt"
"log"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT_NAME")
if !ok {
panic("AZURE_STORAGE_ACCOUNT_NAME could not be found")
}
serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
cred, err := azidentity.NewDefaultAzureCredential(nil)
handleError(err)
client, err := azblob.NewClient(serviceURL, cred, nil)
handleError(err)
// Set up test blob
containerName := "testcontainer"
bufferSize := 8 * 1024 * 1024
blobName := "test_upload_stream.bin"
blobData := make([]byte, bufferSize)
blobContentReader := bytes.NewReader(blobData)
// Perform UploadStream
resp, err := client.UploadStream(context.TODO(), containerName, blobName, blobContentReader,
&azblob.UploadStreamOptions{
Metadata: map[string]*string{"hello": to.Ptr("world")},
})
// Assert that upload was successful
handleError(err)
fmt.Println(resp)
}
```
```
Output:
```
Example (Client_anonymous_NewClientWithNoCredential) [¶](#example-package-Client_anonymous_NewClientWithNoCredential)
```
package main
import (
"fmt"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
// this example uses anonymous access to access a public blob
serviceClient, err := azblob.NewClientWithNoCredential("https://azurestoragesamples.blob.core.windows.net/samples/cloud.jpg", nil)
handleError(err)
fmt.Println(serviceClient.URL())
}
```
```
Output:
```
Example (ProgressUploadDownload) [¶](#example-package-ProgressUploadDownload)
```
package main
import (
"bytes"
"context"
"fmt"
"log"
"os"
"strings"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
)
func handleError(err error) {
if err != nil {
log.Fatal(err.Error())
}
}
func main() {
// Create a credentials object with your Azure Storage Account name and key.
accountName, accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME"), os.Getenv("AZURE_STORAGE_ACCOUNT_KEY")
credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
handleError(err)
// From the Azure portal, get your Storage account blob service URL endpoint.
containerURL := fmt.Sprintf("https://%s.blob.core.windows.net/mycontainer", accountName)
	// Create a containerClient object that wraps the container URL and a request pipeline for making requests.
containerClient, err := container.NewClientWithSharedKeyCredential(containerURL, credential, nil)
handleError(err)
	// Here's how to create a blob with HTTP headers and upload it with progress reporting:
blobClient := containerClient.NewBlockBlobClient("Data.bin")
// requestBody is the stream of data to write
requestBody := streaming.NopCloser(strings.NewReader("Some text to write"))
	// Wrap the request body with NewRequestProgress and pass a callback function for progress reporting.
	requestProgress := streaming.NewRequestProgress(requestBody, func(bytesTransferred int64) {
		fmt.Printf("Wrote %d bytes.", bytesTransferred)
})
_, err = blobClient.Upload(context.TODO(), requestProgress, &blockblob.UploadOptions{
HTTPHeaders: &blob.HTTPHeaders{
BlobContentType: to.Ptr("text/html; charset=utf-8"),
BlobContentDisposition: to.Ptr("attachment"),
},
})
handleError(err)
// Here's how to read the blob's data with progress reporting:
get, err := blobClient.DownloadStream(context.TODO(), nil)
handleError(err)
	// Wrap the response body with NewResponseProgress and pass a callback function for progress reporting.
responseBody := streaming.NewResponseProgress(
get.Body,
func(bytesTransferred int64) {
fmt.Printf("Read %d of %d bytes.", bytesTransferred, *get.ContentLength)
},
)
downloadedData := &bytes.Buffer{}
_, err = downloadedData.ReadFrom(responseBody)
if err != nil {
return
}
err = responseBody.Close()
if err != nil {
return
}
fmt.Printf("Downloaded data: %s\n", downloadedData.String())
}
```
```
Output:
```
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [type AccessConditions](#AccessConditions)
* [type CPKInfo](#CPKInfo)
* [type CPKScopeInfo](#CPKScopeInfo)
* [type Client](#Client)
* + [func NewClient(serviceURL string, cred azcore.TokenCredential, options *ClientOptions) (*Client, error)](#NewClient)
+ [func NewClientFromConnectionString(connectionString string, options *ClientOptions) (*Client, error)](#NewClientFromConnectionString)
+ [func NewClientWithNoCredential(serviceURL string, options *ClientOptions) (*Client, error)](#NewClientWithNoCredential)
+ [func NewClientWithSharedKeyCredential(serviceURL string, cred *SharedKeyCredential, options *ClientOptions) (*Client, error)](#NewClientWithSharedKeyCredential)
* + [func (c *Client) CreateContainer(ctx context.Context, containerName string, o *CreateContainerOptions) (CreateContainerResponse, error)](#Client.CreateContainer)
+ [func (c *Client) DeleteBlob(ctx context.Context, containerName string, blobName string, ...) (DeleteBlobResponse, error)](#Client.DeleteBlob)
+ [func (c *Client) DeleteContainer(ctx context.Context, containerName string, o *DeleteContainerOptions) (DeleteContainerResponse, error)](#Client.DeleteContainer)
+ [func (c *Client) DownloadBuffer(ctx context.Context, containerName string, blobName string, buffer []byte, ...) (int64, error)](#Client.DownloadBuffer)
+ [func (c *Client) DownloadFile(ctx context.Context, containerName string, blobName string, file *os.File, ...) (int64, error)](#Client.DownloadFile)
+ [func (c *Client) DownloadStream(ctx context.Context, containerName string, blobName string, ...) (DownloadStreamResponse, error)](#Client.DownloadStream)
+ [func (c *Client) NewListBlobsFlatPager(containerName string, o *ListBlobsFlatOptions) *runtime.Pager[ListBlobsFlatResponse]](#Client.NewListBlobsFlatPager)
+ [func (c *Client) NewListContainersPager(o *ListContainersOptions) *runtime.Pager[ListContainersResponse]](#Client.NewListContainersPager)
+ [func (c *Client) ServiceClient() *service.Client](#Client.ServiceClient)
+ [func (c *Client) URL() string](#Client.URL)
+ [func (c *Client) UploadBuffer(ctx context.Context, containerName string, blobName string, buffer []byte, ...) (UploadBufferResponse, error)](#Client.UploadBuffer)
+ [func (c *Client) UploadFile(ctx context.Context, containerName string, blobName string, file *os.File, ...) (UploadFileResponse, error)](#Client.UploadFile)
+ [func (c *Client) UploadStream(ctx context.Context, containerName string, blobName string, body io.Reader, ...) (UploadStreamResponse, error)](#Client.UploadStream)
* [type ClientOptions](#ClientOptions)
* [type CreateContainerOptions](#CreateContainerOptions)
* [type CreateContainerResponse](#CreateContainerResponse)
* [type DeleteBlobOptions](#DeleteBlobOptions)
* [type DeleteBlobResponse](#DeleteBlobResponse)
* [type DeleteContainerOptions](#DeleteContainerOptions)
* [type DeleteContainerResponse](#DeleteContainerResponse)
* [type DeleteSnapshotsOptionType](#DeleteSnapshotsOptionType)
* + [func PossibleDeleteSnapshotsOptionTypeValues() []DeleteSnapshotsOptionType](#PossibleDeleteSnapshotsOptionTypeValues)
* [type DownloadBufferOptions](#DownloadBufferOptions)
* [type DownloadFileOptions](#DownloadFileOptions)
* [type DownloadStreamOptions](#DownloadStreamOptions)
* [type DownloadStreamResponse](#DownloadStreamResponse)
* [type HTTPRange](#HTTPRange)
* [type ListBlobsFlatOptions](#ListBlobsFlatOptions)
* [type ListBlobsFlatResponse](#ListBlobsFlatResponse)
* [type ListBlobsFlatSegmentResponse](#ListBlobsFlatSegmentResponse)
* [type ListBlobsInclude](#ListBlobsInclude)
* [type ListContainersInclude](#ListContainersInclude)
* [type ListContainersOptions](#ListContainersOptions)
* [type ListContainersResponse](#ListContainersResponse)
* [type ListContainersSegmentResponse](#ListContainersSegmentResponse)
* [type ObjectReplicationPolicy](#ObjectReplicationPolicy)
* [type PublicAccessType](#PublicAccessType)
* + [func PossiblePublicAccessTypeValues() []PublicAccessType](#PossiblePublicAccessTypeValues)
* [type RetryReaderOptions](#RetryReaderOptions)
* [type SharedKeyCredential](#SharedKeyCredential)
* + [func NewSharedKeyCredential(accountName, accountKey string) (*SharedKeyCredential, error)](#NewSharedKeyCredential)
* [type URLParts](#URLParts)
* + [func ParseURL(u string) (URLParts, error)](#ParseURL)
* [type UploadBufferOptions](#UploadBufferOptions)
* [type UploadBufferResponse](#UploadBufferResponse)
* [type UploadFileOptions](#UploadFileOptions)
* [type UploadFileResponse](#UploadFileResponse)
* [type UploadResponse](#UploadResponse)
* [type UploadStreamOptions](#UploadStreamOptions)
* [type UploadStreamResponse](#UploadStreamResponse)
#### Examples [¶](#pkg-examples)
* [Package](#example-package)
* [Package (Blob_AccessConditions)](#example-package-Blob_AccessConditions)
* [Package (Blob_Client_Download)](#example-package-Blob_Client_Download)
* [Package (Client_CreateContainer)](#example-package-Client_CreateContainer)
* [Package (Client_DeleteBlob)](#example-package-Client_DeleteBlob)
* [Package (Client_DeleteContainer)](#example-package-Client_DeleteContainer)
* [Package (Client_DownloadFile)](#example-package-Client_DownloadFile)
* [Package (Client_DownloadStream)](#example-package-Client_DownloadStream)
* [Package (Client_NewClient)](#example-package-Client_NewClient)
* [Package (Client_NewClientFromConnectionString)](#example-package-Client_NewClientFromConnectionString)
* [Package (Client_NewClientWithSharedKeyCredential)](#example-package-Client_NewClientWithSharedKeyCredential)
* [Package (Client_NewListBlobsPager)](#example-package-Client_NewListBlobsPager)
* [Package (Client_NewListContainersPager)](#example-package-Client_NewListContainersPager)
* [Package (Client_UploadFile)](#example-package-Client_UploadFile)
* [Package (Client_UploadStream)](#example-package-Client_UploadStream)
* [Package (Client_anonymous_NewClientWithNoCredential)](#example-package-Client_anonymous_NewClientWithNoCredential)
* [Package (ProgressUploadDownload)](#example-package-ProgressUploadDownload)
### Constants [¶](#pkg-constants)
```
const (
// EventUpload is used for logging events related to upload operation.
EventUpload = [exported](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/exported).[EventUpload](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/exported#EventUpload)
// EventSubmitBatch is used for logging events related to submit blob batch operation.
EventSubmitBatch = [exported](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/exported).[EventSubmitBatch](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/exported#EventSubmitBatch)
)
```
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [AccessConditions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L60) [¶](#AccessConditions)
added in v0.5.0
```
type AccessConditions = [exported](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/exported).[BlobAccessConditions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/exported#BlobAccessConditions)
```
AccessConditions identifies blob-specific access conditions which you optionally set.
####
type [CPKInfo](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L54) [¶](#CPKInfo)
added in v1.0.0
```
type CPKInfo = [blob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob).[CPKInfo](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob#CPKInfo)
```
CPKInfo contains a group of parameters for client provided encryption key.
####
type [CPKScopeInfo](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L57) [¶](#CPKScopeInfo)
added in v1.0.0
```
type CPKScopeInfo = [container](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/container).[CPKScopeInfo](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/container#CPKScopeInfo)
```
CPKScopeInfo contains a group of parameters for the ContainerClient.Create method.
####
type [Client](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L25) [¶](#Client)
added in v0.5.0
```
type Client struct {
// contains filtered or unexported fields
}
```
Client represents a URL to an Azure Storage blob; the blob may be a block blob, append blob, or page blob.
####
func [NewClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L33) [¶](#NewClient)
added in v0.5.0
```
func NewClient(serviceURL [string](/builtin#string), cred [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[ClientOptions](#ClientOptions)) (*[Client](#Client), [error](/builtin#error))
```
NewClient creates an instance of Client with the specified values.
* serviceURL - the URL of the storage account e.g. https://<account>.blob.core.windows.net/
* cred - an Azure AD credential, typically obtained via the azidentity module
* options - client options; pass nil to accept the default values
####
func [NewClientFromConnectionString](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L85) [¶](#NewClientFromConnectionString)
added in v0.5.0
```
func NewClientFromConnectionString(connectionString [string](/builtin#string), options *[ClientOptions](#ClientOptions)) (*[Client](#Client), [error](/builtin#error))
```
NewClientFromConnectionString creates an instance of Client with the specified values.
* connectionString - a connection string for the desired storage account
* options - client options; pass nil to accept the default values
####
func [NewClientWithNoCredential](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L52) [¶](#NewClientWithNoCredential)
added in v0.5.0
```
func NewClientWithNoCredential(serviceURL [string](/builtin#string), options *[ClientOptions](#ClientOptions)) (*[Client](#Client), [error](/builtin#error))
```
NewClientWithNoCredential creates an instance of Client with the specified values.
This is used to anonymously access a storage account or with a shared access signature (SAS) token.
* serviceURL - the URL of the storage account e.g. https://<account>.blob.core.windows.net/?<sas token>
* options - client options; pass nil to accept the default values
####
func [NewClientWithSharedKeyCredential](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L71) [¶](#NewClientWithSharedKeyCredential)
added in v0.5.0
```
func NewClientWithSharedKeyCredential(serviceURL [string](/builtin#string), cred *[SharedKeyCredential](#SharedKeyCredential), options *[ClientOptions](#ClientOptions)) (*[Client](#Client), [error](/builtin#error))
```
NewClientWithSharedKeyCredential creates an instance of Client with the specified values.
* serviceURL - the URL of the storage account e.g. https://<account>.blob.core.windows.net/
* cred - a SharedKeyCredential created with the matching storage account and access key
* options - client options; pass nil to accept the default values
####
func (*Client) [CreateContainer](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L111) [¶](#Client.CreateContainer)
added in v0.5.0
```
func (c *[Client](#Client)) CreateContainer(ctx [context](/context).[Context](/context#Context), containerName [string](/builtin#string), o *[CreateContainerOptions](#CreateContainerOptions)) ([CreateContainerResponse](#CreateContainerResponse), [error](/builtin#error))
```
CreateContainer is a lifecycle method that creates a new container under the specified account.
If a container with the same name already exists, a ResourceExistsError will be raised.
The returned CreateContainerResponse describes the newly created container.
####
func (*Client) [DeleteBlob](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L125) [¶](#Client.DeleteBlob)
added in v0.5.0
```
func (c *[Client](#Client)) DeleteBlob(ctx [context](/context).[Context](/context#Context), containerName [string](/builtin#string), blobName [string](/builtin#string), o *[DeleteBlobOptions](#DeleteBlobOptions)) ([DeleteBlobResponse](#DeleteBlobResponse), [error](/builtin#error))
```
DeleteBlob marks the specified blob or snapshot for deletion. The blob is later deleted during garbage collection.
Note that deleting a blob also deletes all its snapshots.
For more information, see <https://docs.microsoft.com/rest/api/storageservices/delete-blob>.
####
func (*Client) [DeleteContainer](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L118) [¶](#Client.DeleteContainer)
added in v0.5.0
```
func (c *[Client](#Client)) DeleteContainer(ctx [context](/context).[Context](/context#Context), containerName [string](/builtin#string), o *[DeleteContainerOptions](#DeleteContainerOptions)) ([DeleteContainerResponse](#DeleteContainerResponse), [error](/builtin#error))
```
DeleteContainer is a lifecycle method that marks the specified container for deletion.
The container and any blobs contained within it are later deleted during garbage collection.
If the container is not found, a ResourceNotFoundError will be raised.
####
func (*Client) [DownloadBuffer](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L160) [¶](#Client.DownloadBuffer)
added in v0.5.0
```
func (c *[Client](#Client)) DownloadBuffer(ctx [context](/context).[Context](/context#Context), containerName [string](/builtin#string), blobName [string](/builtin#string), buffer [][byte](/builtin#byte), o *[DownloadBufferOptions](#DownloadBufferOptions)) ([int64](/builtin#int64), [error](/builtin#error))
```
DownloadBuffer downloads an Azure blob to a buffer in parallel.
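A minimal sketch of DownloadBuffer with placeholder container and blob names; the buffer is assumed to be preallocated to at least the size of the blob (or the requested range).
```
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	client, err := azblob.NewClient(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// the buffer is assumed to be at least as large as the blob being downloaded
	buffer := make([]byte, 4*1024*1024)
	n, err := client.DownloadBuffer(context.TODO(), "sample-container", "sample-blob.bin", buffer, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("downloaded %d bytes\n", n)
}
```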
####
func (*Client) [DownloadFile](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L166) [¶](#Client.DownloadFile)
added in v0.5.0
```
func (c *[Client](#Client)) DownloadFile(ctx [context](/context).[Context](/context#Context), containerName [string](/builtin#string), blobName [string](/builtin#string), file *[os](/os).[File](/os#File), o *[DownloadFileOptions](#DownloadFileOptions)) ([int64](/builtin#int64), [error](/builtin#error))
```
DownloadFile downloads an Azure blob to a local file.
The local file is truncated if its size doesn't match the size of the downloaded content.
####
func (*Client) [DownloadStream](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L172) [¶](#Client.DownloadStream)
added in v0.5.0
```
func (c *[Client](#Client)) DownloadStream(ctx [context](/context).[Context](/context#Context), containerName [string](/builtin#string), blobName [string](/builtin#string), o *[DownloadStreamOptions](#DownloadStreamOptions)) ([DownloadStreamResponse](#DownloadStreamResponse), [error](/builtin#error))
```
DownloadStream reads a range of bytes from a blob. The response also includes the blob's properties and metadata.
For more information, see <https://docs.microsoft.com/rest/api/storageservices/get-blob>.
####
func (*Client) [NewListBlobsFlatPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L132) [¶](#Client.NewListBlobsFlatPager)
added in v0.5.0
```
func (c *[Client](#Client)) NewListBlobsFlatPager(containerName [string](/builtin#string), o *[ListBlobsFlatOptions](#ListBlobsFlatOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[ListBlobsFlatResponse](#ListBlobsFlatResponse)]
```
NewListBlobsFlatPager returns a pager for blobs starting from the specified Marker. Use an empty Marker to start enumeration from the beginning. Blob names are returned in lexicographic order.
For more information, see <https://docs.microsoft.com/rest/api/storageservices/list-blobs>.
####
func (*Client) [NewListContainersPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L139) [¶](#Client.NewListContainersPager)
added in v0.5.0
```
func (c *[Client](#Client)) NewListContainersPager(o *[ListContainersOptions](#ListContainersOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[ListContainersResponse](#ListContainersResponse)]
```
NewListContainersPager returns a pager of the containers under the specified account.
Use an empty Marker to start enumeration from the beginning. Container names are returned in lexicographic order.
For more information, see <https://docs.microsoft.com/rest/api/storageservices/list-containers2>.
####
func (*Client) [ServiceClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L104) [¶](#Client.ServiceClient)
added in v0.6.0
```
func (c *[Client](#Client)) ServiceClient() *[service](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service).[Client](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service#Client)
```
ServiceClient returns the embedded service client for this client.
####
func (*Client) [URL](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L99) [¶](#Client.URL)
added in v0.5.0
```
func (c *[Client](#Client)) URL() [string](/builtin#string)
```
URL returns the URL endpoint used by the BlobClient object.
####
func (*Client) [UploadBuffer](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L144) [¶](#Client.UploadBuffer)
added in v0.5.0
```
func (c *[Client](#Client)) UploadBuffer(ctx [context](/context).[Context](/context#Context), containerName [string](/builtin#string), blobName [string](/builtin#string), buffer [][byte](/builtin#byte), o *[UploadBufferOptions](#UploadBufferOptions)) ([UploadBufferResponse](#UploadBufferResponse), [error](/builtin#error))
```
UploadBuffer uploads a buffer in blocks to a block blob.
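A minimal sketch of UploadBuffer with placeholder names, uploading an in-memory byte slice as a block blob:
```
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	client, err := azblob.NewClient(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// the byte slice is uploaded in blocks to a block blob
	data := []byte("some in-memory payload")
	_, err = client.UploadBuffer(context.TODO(), "sample-container", "sample-blob.bin", data, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("upload complete")
}
```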
####
func (*Client) [UploadFile](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L149) [¶](#Client.UploadFile)
added in v0.5.0
```
func (c *[Client](#Client)) UploadFile(ctx [context](/context).[Context](/context#Context), containerName [string](/builtin#string), blobName [string](/builtin#string), file *[os](/os).[File](/os#File), o *[UploadFileOptions](#UploadFileOptions)) ([UploadFileResponse](#UploadFileResponse), [error](/builtin#error))
```
UploadFile uploads a file in blocks to a block blob.
####
func (*Client) [UploadStream](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L155) [¶](#Client.UploadStream)
added in v0.5.0
```
func (c *[Client](#Client)) UploadStream(ctx [context](/context).[Context](/context#Context), containerName [string](/builtin#string), blobName [string](/builtin#string), body [io](/io).[Reader](/io#Reader), o *[UploadStreamOptions](#UploadStreamOptions)) ([UploadStreamResponse](#UploadStreamResponse), [error](/builtin#error))
```
UploadStream streams the data from the provided io.Reader to a block blob with the specified name in the specified container.
A Context deadline or cancellation will cause the operation to fail with an error.
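As a sketch of the deadline behaviour (placeholder names, assuming the container exists), an expired or cancelled context makes UploadStream return an error:
```
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"strings"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	client, err := azblob.NewClient(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// the upload is abandoned with an error if it does not finish within the deadline
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	_, err = client.UploadStream(ctx, "sample-container", "sample-blob.txt", strings.NewReader("hello"), nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("upload complete")
}
```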
####
type [ClientOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/client.go#L22) [¶](#ClientOptions)
```
type ClientOptions [base](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/base).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/base#ClientOptions)
```
ClientOptions contains the optional parameters when creating a Client.
####
type [CreateContainerOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L18) [¶](#CreateContainerOptions)
```
type CreateContainerOptions = [service](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service).[CreateContainerOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service#CreateContainerOptions)
```
CreateContainerOptions contains the optional parameters for the ContainerClient.Create method.
####
type [CreateContainerResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L18) [¶](#CreateContainerResponse)
added in v0.5.0
```
type CreateContainerResponse = [service](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service).[CreateContainerResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service#CreateContainerResponse)
```
CreateContainerResponse contains the response from method container.Client.Create.
####
type [DeleteBlobOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L24) [¶](#DeleteBlobOptions)
```
type DeleteBlobOptions = [blob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob).[DeleteOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob#DeleteOptions)
```
DeleteBlobOptions contains the optional parameters for the Client.Delete method.
####
type [DeleteBlobResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L24) [¶](#DeleteBlobResponse)
added in v0.5.0
```
type DeleteBlobResponse = [blob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob).[DeleteResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob#DeleteResponse)
```
DeleteBlobResponse contains the response from method blob.Client.Delete.
####
type [DeleteContainerOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L21) [¶](#DeleteContainerOptions)
```
type DeleteContainerOptions = [service](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service).[DeleteContainerOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service#DeleteContainerOptions)
```
DeleteContainerOptions contains the optional parameters for the container.Client.Delete method.
####
type [DeleteContainerResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L21) [¶](#DeleteContainerResponse)
added in v0.5.0
```
type DeleteContainerResponse = [service](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service).[DeleteContainerResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service#DeleteContainerResponse)
```
DeleteContainerResponse contains the response from method container.Client.Delete.
####
type [DeleteSnapshotsOptionType](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/constants.go#L27) [¶](#DeleteSnapshotsOptionType)
```
type DeleteSnapshotsOptionType = [generated](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated).[DeleteSnapshotsOptionType](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated#DeleteSnapshotsOptionType)
```
DeleteSnapshotsOptionType defines values for DeleteSnapshotsOptionType.
```
const (
DeleteSnapshotsOptionTypeInclude [DeleteSnapshotsOptionType](#DeleteSnapshotsOptionType) = [generated](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated).[DeleteSnapshotsOptionTypeInclude](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated#DeleteSnapshotsOptionTypeInclude)
DeleteSnapshotsOptionTypeOnly [DeleteSnapshotsOptionType](#DeleteSnapshotsOptionType) = [generated](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated).[DeleteSnapshotsOptionTypeOnly](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated#DeleteSnapshotsOptionTypeOnly)
)
```
####
func [PossibleDeleteSnapshotsOptionTypeValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/constants.go#L35) [¶](#PossibleDeleteSnapshotsOptionTypeValues)
```
func PossibleDeleteSnapshotsOptionTypeValues() [][DeleteSnapshotsOptionType](#DeleteSnapshotsOptionType)
```
PossibleDeleteSnapshotsOptionTypeValues returns the possible values for the DeleteSnapshotsOptionType const type.
####
type [DownloadBufferOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L48) [¶](#DownloadBufferOptions)
added in v0.5.0
```
type DownloadBufferOptions = [blob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob).[DownloadBufferOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob#DownloadBufferOptions)
```
DownloadBufferOptions identifies options used by the DownloadBuffer and DownloadFile functions.
####
type [DownloadFileOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L51) [¶](#DownloadFileOptions)
added in v0.5.0
```
type DownloadFileOptions = [blob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob).[DownloadFileOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob#DownloadFileOptions)
```
DownloadFileOptions identifies options used by the DownloadBuffer and DownloadFile functions.
####
type [DownloadStreamOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L27) [¶](#DownloadStreamOptions)
added in v0.5.0
```
type DownloadStreamOptions = [blob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob).[DownloadStreamOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob#DownloadStreamOptions)
```
DownloadStreamOptions contains the optional parameters for the Client.DownloadStream method.
####
type [DownloadStreamResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L30) [¶](#DownloadStreamResponse)
added in v0.5.0
```
type DownloadStreamResponse = [blob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob).[DownloadStreamResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob#DownloadStreamResponse)
```
DownloadStreamResponse wraps AutoRest generated BlobDownloadResponse and helps to provide info for retry.
####
type [HTTPRange](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/common.go#L36) [¶](#HTTPRange)
added in v0.5.0
```
type HTTPRange = [exported](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/exported).[HTTPRange](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/exported#HTTPRange)
```
HTTPRange defines a range of bytes within an HTTP resource, starting at offset and ending at offset+count. A zero-value HTTPRange indicates the entire resource. An HTTPRange with a non-zero offset and a zero count indicates the range from the offset to the end of the resource.
####
type [ListBlobsFlatOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L30) [¶](#ListBlobsFlatOptions)
added in v0.5.0
```
type ListBlobsFlatOptions = [container](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/container).[ListBlobsFlatOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/container#ListBlobsFlatOptions)
```
ListBlobsFlatOptions contains the optional parameters for the container.Client.ListBlobFlatSegment method.
####
type [ListBlobsFlatResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L33) [¶](#ListBlobsFlatResponse)
added in v0.5.0
```
type ListBlobsFlatResponse = [container](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/container).[ListBlobsFlatResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/container#ListBlobsFlatResponse)
```
ListBlobsFlatResponse contains the response from method container.Client.ListBlobFlatSegment.
####
type [ListBlobsFlatSegmentResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L51) [¶](#ListBlobsFlatSegmentResponse)
```
type ListBlobsFlatSegmentResponse = [generated](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated).[ListBlobsFlatSegmentResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated#ListBlobsFlatSegmentResponse)
```
ListBlobsFlatSegmentResponse - An enumeration of blobs
####
type [ListBlobsInclude](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L33) [¶](#ListBlobsInclude)
added in v0.5.0
```
type ListBlobsInclude = [container](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/container).[ListBlobsInclude](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/container#ListBlobsInclude)
```
ListBlobsInclude indicates what additional information the service should return with each blob.
####
type [ListContainersInclude](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L63) [¶](#ListContainersInclude)
added in v0.5.0
```
type ListContainersInclude = [service](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service).[ListContainersInclude](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service#ListContainersInclude)
```
ListContainersInclude indicates what additional information the service should return with each container.
####
type [ListContainersOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L36) [¶](#ListContainersOptions)
```
type ListContainersOptions = [service](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service).[ListContainersOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service#ListContainersOptions)
```
ListContainersOptions contains the optional parameters for the container.Client.ListContainers operation.
####
type [ListContainersResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L36) [¶](#ListContainersResponse)
added in v0.5.0
```
type ListContainersResponse = [service](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service).[ListContainersResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/service#ListContainersResponse)
```
ListContainersResponse contains the response from method service.Client.ListContainersSegment.
####
type [ListContainersSegmentResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L48) [¶](#ListContainersSegmentResponse)
```
type ListContainersSegmentResponse = [generated](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated).[ListContainersSegmentResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated#ListContainersSegmentResponse)
```
ListContainersSegmentResponse - An enumeration of containers
####
type [ObjectReplicationPolicy](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L66) [¶](#ObjectReplicationPolicy)
```
type ObjectReplicationPolicy = [blob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob).[ObjectReplicationPolicy](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob#ObjectReplicationPolicy)
```
ObjectReplicationPolicy contains deserialized object replication policy attributes.
####
type [PublicAccessType](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/constants.go#L14) [¶](#PublicAccessType)
```
type PublicAccessType = [generated](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated).[PublicAccessType](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated#PublicAccessType)
```
PublicAccessType defines values for AccessType - private (default) or blob or container.
```
const (
PublicAccessTypeBlob [PublicAccessType](#PublicAccessType) = [generated](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated).[PublicAccessTypeBlob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated#PublicAccessTypeBlob)
PublicAccessTypeContainer [PublicAccessType](#PublicAccessType) = [generated](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated).[PublicAccessTypeContainer](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/generated#PublicAccessTypeContainer)
)
```
####
func [PossiblePublicAccessTypeValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/constants.go#L22) [¶](#PossiblePublicAccessTypeValues)
```
func PossiblePublicAccessTypeValues() [][PublicAccessType](#PublicAccessType)
```
PossiblePublicAccessTypeValues returns the possible values for the PublicAccessType const type.
####
type [RetryReaderOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L69) [¶](#RetryReaderOptions)
```
type RetryReaderOptions = [blob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob).[RetryReaderOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blob#RetryReaderOptions)
```
RetryReaderOptions contains properties which can help to decide when to do retry.
####
type [SharedKeyCredential](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/common.go#L15) [¶](#SharedKeyCredential)
```
type SharedKeyCredential = [exported](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/exported).[SharedKeyCredential](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/internal/exported#SharedKeyCredential)
```
SharedKeyCredential contains an account's name and its primary or secondary key.
####
func [NewSharedKeyCredential](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/common.go#L19) [¶](#NewSharedKeyCredential)
```
func NewSharedKeyCredential(accountName, accountKey [string](/builtin#string)) (*[SharedKeyCredential](#SharedKeyCredential), [error](/builtin#error))
```
NewSharedKeyCredential creates an immutable SharedKeyCredential containing the storage account's name and either its primary or secondary key.
####
type [URLParts](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/common.go#L25) [¶](#URLParts)
added in v0.5.0
```
type URLParts = [sas](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/sas).[URLParts](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/sas#URLParts)
```
URLParts object represents the components that make up an Azure Storage Container/Blob URL.
NOTE: Changing any SAS-related field requires computing a new SAS signature.
####
func [ParseURL](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/common.go#L29) [¶](#ParseURL)
added in v0.5.0
```
func ParseURL(u [string](/builtin#string)) ([URLParts](#URLParts), [error](/builtin#error))
```
ParseURL parses a URL initializing URLParts' fields including any SAS-related & snapshot query parameters. Any other query parameters remain in the UnparsedParams field. This method overwrites all fields in the URLParts object.
####
type [UploadBufferOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L39) [¶](#UploadBufferOptions)
added in v0.5.0
```
type UploadBufferOptions = [blockblob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob).[UploadBufferOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob#UploadBufferOptions)
```
UploadBufferOptions provides a set of configurations for the UploadBuffer operation.
####
type [UploadBufferResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L39) [¶](#UploadBufferResponse)
added in v0.5.0
```
type UploadBufferResponse = [blockblob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob).[UploadBufferResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob#UploadBufferResponse)
```
UploadBufferResponse contains the response from method Client.UploadBuffer/Client.UploadFile.
####
type [UploadFileOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L42) [¶](#UploadFileOptions)
added in v0.5.0
```
type UploadFileOptions = [blockblob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob).[UploadFileOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob#UploadFileOptions)
```
UploadFileOptions provides a set of configurations for the UploadFile operation.
####
type [UploadFileResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L42) [¶](#UploadFileResponse)
added in v0.5.0
```
type UploadFileResponse = [blockblob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob).[UploadFileResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob#UploadFileResponse)
```
UploadFileResponse contains the response from method Client.UploadBuffer/Client.UploadFile.
####
type [UploadResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L27) [¶](#UploadResponse)
added in v0.5.0
```
type UploadResponse = [blockblob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob).[CommitBlockListResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob#CommitBlockListResponse)
```
UploadResponse contains the response from method blockblob.Client.CommitBlockList.
####
type [UploadStreamOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/models.go#L45) [¶](#UploadStreamOptions)
added in v0.4.0
```
type UploadStreamOptions = [blockblob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob).[UploadStreamOptions](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob#UploadStreamOptions)
```
UploadStreamOptions provides a set of configurations for the UploadStream operation.
####
type [UploadStreamResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/storage/azblob/v1.2.0/sdk/storage/azblob/responses.go#L45) [¶](#UploadStreamResponse)
added in v0.5.0
```
type UploadStreamResponse = [blockblob](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob).[CommitBlockListResponse](/github.com/Azure/azure-sdk-for-go/sdk/storage/[email protected]/blockblob#CommitBlockListResponse)
```
UploadStreamResponse contains the response from method Client.CommitBlockList.
WDmodel 0.4 documentation
Welcome to WDmodel’s documentation![¶](#module-WDmodel)
===
WDmodel: Bayesian inference of white dwarf properties from spectra and photometry to establish spectrophotometric standards
WDmodel[¶](#wdmodel)
===
**Copyright 2017- <NAME> (<EMAIL>)**
About[¶](#about)
---
`WDmodel` is a DA White Dwarf model atmosphere fitting code. It fits observed spectrophotometry of DA White Dwarfs to infer intrinsic model atmosphere parameters in the presence of dust and correlated spectroscopic flux calibration errors, thereby determining full SEDs for the white dwarf. Its primary scientific purpose is to establish a network of faint (V = 16.5–19 mag) standard stars, suitable for LSST and other wide-field photometric surveys, and tied to HST and the CALSPEC system, defined by the three primary standards, GD71, GD153 and G191B2B.
Click on the badges above for code, licensing and documentation.
Compatibility[¶](#compatibility)
---
The code has been tested on Python 2.7 and 3.6 on both OS X (El Capitan and Sierra) and Linux (Debian-derivatives). Send us email or open an issue if you need help!
Analysis[¶](#analysis)
---
We’re working on a publication with the results from our combined Cycle 22 and Cycle 20 data, while ramping up for Cycle 25! A full data release of Cycle 20 and 22 HST data, and ground-based imaging and spectroscopy will accompany the publication. Look for an updated link here!
You can read the first version of our analysis of four of the Cycle 20 objects
[here](http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1603.03825)
That analysis was intended as a proof-of-concept and used custom IDL routines from <NAME> (U. Arizona) to infer DA intrinsic parameters and custom python code to fit the reddening parameters. This code is intended to
(significantly) improve on that analysis.
Help[¶](#help)
===
This document will help get you up and running with the `WDmodel` package.
For the most part, you can simply execute code in grey boxes to get things up and running, and ignore the text initially. Come back to it when you need help,
or to configure the fitter.
Installing WDmodel[¶](#installing-wdmodel)
---
This document will step you through getting the `WDmodel` package installed on your system.
* [Installation instructions](#install)
+ [Get python](#python)
+ [Get the code](#code)
+ [Install everything](#package)
+ [Get auxiliary pysynphot files](#synphot)
+ [Install the code](#finalize)
* [Some extra notes](#notes)
### Installation Instructions[¶](#installation-instructions)
Here’s a minimal set of instructions to get up and running. We will eventually get this package up on PyPI and conda-forge, and that should make this even easier.
#### 0. Install python:[¶](#install-python)
We recommend using the anaconda python distribution to run this package. If you don’t have it already, follow the instructions [here](https://conda.io/docs/install/quick.html#linux-miniconda-install)
**Make sure you added the conda/bin dir to your path!**
If you elect to use your system python, or some other distribution, we will assume you know what you are doing, and you can skip ahead.
#### 1. Get the code:[¶](#get-the-code)
Clone this repository
```
git clone https://github.com/gnarayan/WDmodel.git
cd WDmodel
```
#### 2. Install everything:[¶](#install-everything)
1. Create a new environment from specification (Preferred! All dependencies resolved!)
```
conda env create -f docs/env/conda_environment_py[27|36]_[osx64|i686].yml
```
*or*
2. Create a new environment from scratch (Let conda figure out dependencies and you sort out potential issues)
```
cp docs/env/condarc.example ~/.condarc
conda create -n WDmodel
source activate WDmodel
conda install --yes --file dependencies_py[27|36].txt
```
##### Setting up an environment vs setting up a known good environment[¶](#setting-up-an-environment-vs-setting-up-a-known-good-environment)
The `env` folder contains files to help get you set up with a consistent environment with all packages specified.
The `requirements_py[27|36].txt` files contain a list of required python packages and known working versions for each. They differ from the
`dependencies_py[27|36].txt` files in the root directory in that those files specify packages and version ranges, rather than exact versions, to allow conda to resolve dependencies and pull updated versions.
Of course, the environment really needs more than just python packages, while
`pip` only manages python packages. The conda environment files,
`conda_environment_py[27|36]_[osx64|i686].yml`, can be used to create conda environments with exact versions of all the packages for python 2.7 or 3.6 on OS X or linux. This is the most reliable way to recreate the entire environment.
#### 3. Get the latest HST CDBS files:[¶](#get-the-latest-hst-cdbs-files)
These are available over FTP from <ftp://archive.stsci.edu/pub/hst/pysynphot/>.
Untar them wherever you like, and set the `PYSYN_CDBS` environment variable.
You need at least `synphot1.tar.gz` and `synphot6.tar.gz`.
```
export PYSYN_CDBS=place_you_untarred_the_files
```
#### 4. Install the package [optional]:[¶](#install-the-package-optional)
```
python setup.py install
```
### Extra[¶](#extra)
The instructions should be enough to get up and running, even without `sudo`
privileges. There are a few edge cases on cluster environments though. These notes may help:
#### Some extra notes on installation[¶](#some-extra-notes-on-installation)
If you followed the installation process detailed above, you shouldn’t need these notes, but they are provided for users who may be running on environments they do not manage themselves.
* [Installing eigen3 without conda](#eigen)
* [Installing OpenMPI and mpi4py without conda](#mpi)
* [Installing on a cluster](#cluster)
##### Installing eigen3 without conda[¶](#installing-eigen3-without-conda)
If eigen3 isn’t on your system, and installing it with conda didn’t work:
For OS X do:
```
brew install eigen
```
or on a linux system with apt:
```
apt-get install libeigen3-dev
```
or compile it from [source](http://eigen.tuxfamily.org/index.php?title=Main_Page)
Note that if you do install it in a custom location, you may have to compile celerite yourself.
```
pip install celerite --global-option=build_ext --global-option=-I/path/to/eigen3
```
##### Installing OpenMPI and mpi4py without conda[¶](#installing-openmpi-and-mpi4py-without-conda)
If no MPI is on your system, and installing it with conda didn’t work:
For OS X do:
```
brew install [mpich|mpich2|open-mpi]
```
on a linux system with apt:
```
apt-get install openmpi-bin
```
and if you had to resort to brew or apt, then finish with:
```
pip install mpi4py
```
##### Notes from installing on the Odyssey cluster at Harvard[¶](#notes-from-installing-on-the-odyssey-cluster-at-harvard)
These may be of use to get the code up and running with MPI on some other cluster. Good luck.
Odyssey uses the lmod system for module management, like many other clusters. You can `module spider openmpi` to find the available openmpi modules.
The advantage to using this is distributing your computation over multiple nodes. The disadvantage is that you have to compile mpi4py yourself against the cluster mpi.
```
module load gcc/6.3.0-fasrc01 openmpi/2.0.2.40dc0399-fasrc01
wget https://bitbucket.org/mpi4py/mpi4py/downloads/mpi4py-2.0.0.tar.gz
tar xvzf mpi4py-2.0.0.tar.gz
cd mpi4py-2.0.0
python setup.py build --mpicc=$(which mpicc)
python setup.py build_exe --mpicc="$(which mpicc) --dynamic"
python setup.py install
```
Using WDmodel[¶](#using-wdmodel)
---
This document will help you get comfortable using the `WDmodel` package.
* [Usage](#usage)
+ [Get data](#data)
+ [Running single threaded](#singlethread)
+ [Running with MPI](#mpipool)
* [Useful options](#argparse)
+ [Quick analysis](#quicklook)
+ [Initializing the fitter](#init)
+ [Configuring the sampler](#samptype)
+ [Resuming the fit](#resume)
### Usage[¶](#usage)
This is the TL;DR version to get up and running.
#### 1. Get the data:[¶](#get-the-data)
Instructions will be available here when the paper is accepted. In the meantime there’s a single test object in the spectroscopy directory. If you want more,
write your own HST proposal! :-P
#### 2. Run a fit single threaded:[¶](#run-a-fit-single-threaded)
```
fit_WDmodel --specfile data/spectroscopy/yourfavorite.flm
```
This option is single threaded and slow, but useful for testing or quick exploratory analysis.
A more reasonable way to run things fast is to use MPI.
#### 3. Run a fit as an MPI process:[¶](#run-a-fit-as-an-mpi-process)
```
mpirun -np 8 fit_WDmodel --mpi --specfile=file.flm [--ignorephot]
```
Note that `--mpi` **MUST** be specified in the options to
`WDmodel` and you must start the process with `mpirun`
### Useful runtime options[¶](#useful-runtime-options)
There are a large number of command line options to the fitter, and most of its aspects can be configured. Some options make sense in concert with others, and here’s a short summary of use cases.
#### Quick looks[¶](#quick-looks)
The spectrum can be trimmed prior to fitting with the `--trimspec`
option. You can also blotch over gaps and cosmic rays if your reduction was sloppy, and you just need a quick fit, but it’s better to do this manually.
If there is no photometry data for the object, the fitter will barf unless `--ignorephot` is specified explicitly, so you know that the parameters are only constrained by the spectroscopy.
The fitter runs a MCMC to explore the posterior distribution of the model parameters given the data. If you are running with the above two options,
chances are you are at the telescope, getting spectra, and doing quick look reductions, and you just want a rough idea of temperature and surface gravity to decide if you should get more signal, and eventually get HST photometry. The MCMC is overkill for this purpose so you can `--skipmcmc`, in which case,
you’ll get results using minuit. They’ll be biased, and the errors will probably be too small, but they give you a ballpark estimate.
If you do want to use the MCMC anyway, you might like it to be faster. You can choose to use only every nth point in computing the log likelihood with
`--everyn` - this is only intended for testing purposes, and should probably not be used for any final analysis. Note that the uncertainties increase as you’d expect with fewer points.
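For example, a quick look at the telescope might combine these options. This is just a sketch: the spectrum path is the placeholder used above, and the `--everyn` stride value is an arbitrary choice for illustration.
```
fit_WDmodel --specfile data/spectroscopy/yourfavorite.flm --ignorephot --skipmcmc
# or keep the MCMC but only use every 5th point in the likelihood (testing only)
fit_WDmodel --specfile data/spectroscopy/yourfavorite.flm --ignorephot --everyn 5
```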
#### Setting the initial state[¶](#setting-the-initial-state)
The fitter really runs minuit to refine the initially supplied guesses for parameters. Every now and then, the guess prior to running minuit is so far off that you get rubbish out of minuit. This can be fixed by explicitly supplying a better initial guess. Of course, if you do that, you might wonder why even bother with minuit, and may wish to skip it entirely. This can be disabled with the `--skipminuit` option. If `--skipminuit` is used, a `dl` guess **MUST**
be specified.
All of the parameters can be supplied via a JSON parameter file with the `--param_file` option, or using individual parameter options. An example parameter file is available in the module directory.
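As a sketch, assuming you have copied the example parameter file to a hypothetical `my_params.json` and edited the initial guesses, the fit could be started with:
```
fit_WDmodel --specfile data/spectroscopy/yourfavorite.flm --param_file my_params.json
```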
#### Configuring the sampler[¶](#configuring-the-sampler)
You can change the sampler type (`--samptype`), number of chain temperatures
(`--ntemps`), number of walkers (`--nwalkers`), burn in steps
(`--nburnin`), production steps (`--nprod`), and proposal scale for the MCMC (`--ascale`). You can also thin the chain (`--thin`) and discard some fraction of samples from the start (`--discard`). The default sampler is the ensemble sampler from the [`emcee`](https://emcee.readthedocs.io/en/stable/user/quickstart.html#module-emcee) package. For a more conservative approach, we recommend the ptsampler with `ntemps=5`, `nwalkers=100`,
`nprod=5000` (or more).
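Putting the conservative settings above together with an MPI run, an invocation might look like the sketch below (the file name and process count are placeholders):
```
mpirun -np 8 fit_WDmodel --mpi --specfile=file.flm --samptype=pt --ntemps=5 --nwalkers=100 --nprod=5000
```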
#### Resuming the fit[¶](#resuming-the-fit)
If the sampling needs to be interrupted, or crashes for whatever reason, the state is saved every 100 steps, and the sampling can be restarted with
`--resume`. Note that you must have run at least the burn in and 100 steps for it to be possible to resume, and the state of the data, parameters, or chain configuration should not be changed externally (if they need to be changed, use
`--redo` and rerun the fit). You can increase the length of the chain, and change the visualization options when you `--resume`, but the state of everything else is restored.
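For instance, an interrupted fit could be picked up and the chain extended with something like the following sketch (the file name and step count are placeholders):
```
fit_WDmodel --specfile=file.flm --resume --nprod 10000
```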
You can get a summary of all available options with `--help`
#### Useful routines[¶](#useful-routines)
There are a few useful routines included in the `WDmodel` package. Using
`WDmodel` itself will do the same thing as `fit_WDmodel`. If you need to look at results from a large number of fits, `print_WDmodel_result_table` and
`print_WDmodel_residual_table` will print out tables of results and residuals. `make_WDmodel_slurm_batch_scripts` provides an example script to generate batch scripts for the SLURM system used on Harvard’s Odyssey cluster.
Adapt this for use with other job queue systems or clusters.
Analyzing WDmodel[¶](#analyzing-wdmodel)
---
This document describes the output produced by the `WDmodel` package.
* [Analysis](#analysis)
+ [The fit](#fit)
+ [Spectral flux calibration errors](#spec-nogp)
+ [Hydrogen Balmer line diagnostics](#balmer)
+ [Posterior distributions](#posterior)
+ [Output files](#output)
### Analysis[¶](#analysis)
There are many different outputs (ascii files, bintables, plots) produced by the `WDmodel` package. We’ll describe the plots first - it is a good idea to look at your data before using numbers from the analysis.
#### 1. The fit:[¶](#the-fit)
All the plots are stored in `<spec basename>_mcmc.pdf` in the output directory that is printed as you run the `WDmodel` fitter (default:
`out/<object name>/<spec basename>/`).
The first plots show you the bottom line - the fit of the model (red) to the data - the observed photometry and spectroscopy (black). Residuals for both are shown in the lower panel. The model parameters inferred from the data are shown in the legend of the spectrum plot. Draws from the posterior are shown in orange. The number of these is customizable with `--ndraws`. Observational uncertainties are shown as shaded grey regions.
If both of these don’t look reasonable, then the inferred parameters are probably meaningless. You should look at why the model is not in good agreement with the data. We’ve found this tends to happen if there’s a significant flux excess at some wavelengths, indicating a companion or perhaps variability.
#### 2. Spectral flux calibration errors:[¶](#spectral-flux-calibration-errors)
The `WDmodel` package uses a Gaussian process to model correlated flux calibration errors in the spectrum. These arise from a variety of sources
(flat-fielding, background subtraction, extraction of the 2D spectrum with a polynomial, telluric feature removal, and flux calibration relative to some other spectrophotometric standard, which in turn is probably only good to a few percent). However, most of the processes that generate these errors would cause smooth and continuous deformations to the observed spectrum, and a single stationary covariance kernel is a useful and reasonable way to model the effects. The choice of kernel is customizable (`--covtype`, with default
`Matern32` which has proven more than sufficient for data from four different spectrographs with very different optical designs).
The residual plot shows the difference between the spectrum and best fit model without the Gaussian process applied. The residuals therefore show our estimate of the flux calibration errors and the Gaussian process model for them.
#### 3. Hydrogen Balmer line diagnostics:[¶](#hydrogen-balmer-line-diagnostics)
These plots illustrate the spectroscopic data and model specifically in the region of the Hydrogen Balmer lines. While the entire spectrum and all the photometry is fit simultaneously, we extract the Balmer lines, normalize their local continua to unity, and illustrate them separately here, offsetting each vertically a small amount for clarity.
With the exception of the SDSS `autofit` package used for their white dwarf analysis (which isn’t public in any case), every white dwarf atmosphere fitting code takes the approach of only fitting the Hydrogen Balmer lines to determine model parameters. This includes our own proof-of-concept analysis of Cycle 20 data. The argument goes that the inferred model parameters aren’t sensitive to reddening if the local continuum is divided out, and the line profiles determine the temperature and surface gravity.
In reality, reddening also changes the shape of the line profiles, and to divide out the local continuum, a model for it had to be fit (typically a straight line across the line from regions defined “outside” the line profile wings). The properties of this local continuum *are* strongly correlated with reddening, and errors in the local continuum affect the inference of the model parameters, particularly at low S/N. This is the regime our program to observe faint white dwarfs operates in - low S/N with higher reddening. Any reasonable analysis of systematic errors should illustrate significant bias resulting from the naive analysis in the presence of correlated errors.
In other words, the approach doesn’t avoid the problem, so much as shove it under a rug with the words “nuisance parameters” on top. This is why we adopted the more complex forward modeling approach in the `WDmodel` package.
Unfortunately, Balmer profile fits are customary in the field, so after we forward model the data, we make a simple polynomial fit to the continuum (using our best understanding of what SDSS’ `autofit` does), and extract out the Balmer lines purely for this visualization. This way the polynomial continuum model has no effect on the inference, and if it goes wrong and the Balmer line profile plots look wonky, it doesn’t actually matter.
If you mail the author about these plots, he will get annoyed and grumble about you, and probably reply with snark. He had no particular desire to even include these plots.
#### 4. Posterior Distributions:[¶](#posterior-distributions)
A corner plot of the posterior distribution. If the model and data are not in good agreement, then this is a good place to look. If you are running with
`--samptype=ensemble` (the default), you might consider `--samptype=pt
--ntemps=5 --nwalkers=100 --nprod=5000 --thin 10` to better sample the posterior, and map out any multi-modality.
#### 5. Output Files:[¶](#output-files)
This table describes the output files produced by the fitter.
| File | Description |
| --- | --- |
| `<spec basename>_inputs.hdf5` | All inputs to fitter and visualization module. Restored on `--resume` |
| `<spec basename>_params.json` | Initial guess parameters. Refined by minuit if not `--skipminuit` |
| `<spec basename>_minuit.pdf` | Plot of initial guess model, if refined by minuit |
| `<spec basename>_mcmc.hdf5` | Full Markov Chain - positions, log posterior, chain attributes |
| `<spec basename>_mcmc.pdf` | Plot of model and data after MCMC |
| `<spec basename>_result.json` | Summary of inferred model parameters, errors, uncertainties after MCMC |
| `<spec basename>_spec_model.dat` | Table of the observed spectrum and inferred model spectrum |
| `<spec basename>_phot_model.dat` | Table of the observed photometry and inferred model photometry |
| `<spec basename>_full_model.hdf5` | Derived normalized SED of the object |
See [Useful routines](index.html#extraroutines) for some useful routines to summarize fit results.
WDmodel[¶](#wdmodel)
---
### WDmodel package[¶](#module-WDmodel)
WDmodel: Bayesian inference of white dwarf properties from spectra and photometry to establish spectrophotometric standards
#### Submodules[¶](#submodules)
##### WDmodel.WDmodel module[¶](#module-WDmodel.WDmodel)
DA White Dwarf Atmosphere Models and SED generator.
Model grid originally from <NAME> using <NAME>’s Tlusty code (v200) and custom Synspec routines, repackaged into HDF5 by <NAME>.
*class* `WDmodel.WDmodel.``WDmodel`(*grid_file=None*, *grid_name=None*, *rvmodel=u'f99'*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel)[¶](#WDmodel.WDmodel.WDmodel)
Bases: [`object`](https://docs.python.org/3/library/functions.html#object)
DA White Dwarf Atmosphere Model and SED generator
Base class defines the routines to generate and work with DA White Dwarf model spectra. Requires a grid file of DA White Dwarf atmospheres. This grid file is included along with the package - `TlustyGrids.hdf5` - and is the default.
| Parameters: | * **grid_file** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)*,* *optional*) – Filename of the HDF5 grid file to read. See
[`WDmodel.io.read_model_grid()`](index.html#WDmodel.io.read_model_grid) for format of the grid file.
Default is `TlustyGrids.hdf5`, included with the [`WDmodel`](index.html#module-WDmodel)
package.
* **grid_name** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)*,* *optional*) – Name of the HDF5 group containing the white dwarf model atmosphere grids in `grid_file`. Default is `default`
* **rvmodel** (`{'ccm89','od94','f99','custom'}`, optional) – Specify parametrization of the reddening law. Default is `'f99'`.
| rvmodel | parametrization |
| --- | --- |
| `'ccm89'` | <NAME> (1989, ApJ, 345, 245) |
| `'od94'` | O’Donnell (1994, ApJ, 422, 158) |
| `'f99'` | Fitzpatrick (1999, PASP, 111, 63) |
| `'custom'` | Custom law from Jay Holberg (email, 20180424) |
|
`_lines`[¶](#WDmodel.WDmodel.WDmodel._lines)
dictionary mapping Hydrogen Balmer series line names to line number,
central wavelength in Angstrom, approximate line width and continuum region width around line. Used to extract Balmer lines from spectra for visualization.
| Type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
`_grid_file`[¶](#WDmodel.WDmodel.WDmodel._grid_file)
Filename of the HDF5 grid file that was read.
| Type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`_grid_name`[¶](#WDmodel.WDmodel.WDmodel._grid_name)
Name of the HDF5 group containing the white dwarf model atmosphere grids
| Type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`_wave`[¶](#WDmodel.WDmodel.WDmodel._wave)
Array of model grid wavelengths in Angstroms, sorted in ascending order
| Type: | array-like |
`_ggrid`[¶](#WDmodel.WDmodel.WDmodel._ggrid)
Array of model grid surface gravity values in dex, sorted in ascending order
| Type: | array-like |
`_tgrid`[¶](#WDmodel.WDmodel.WDmodel._tgrid)
Array of model grid temperature values in Kelvin, sorted in ascending order
| Type: | array-like |
`_nwave`[¶](#WDmodel.WDmodel.WDmodel._nwave)
Size of the model grid wavelength array, `_wave`
| Type: | [int](https://docs.python.org/3/library/functions.html#int) |
`_ngrav`[¶](#WDmodel.WDmodel.WDmodel._ngrav)
Size of the model grid surface gravity array, `_ggrid`
| Type: | [int](https://docs.python.org/3/library/functions.html#int) |
`_ntemp`[¶](#WDmodel.WDmodel.WDmodel._ntemp)
Size of the model grid temperature array, `_tgrid`
| Type: | [int](https://docs.python.org/3/library/functions.html#int) |
`_flux`[¶](#WDmodel.WDmodel.WDmodel._flux)
Array of model grid fluxes, shape `(_nwave, _ntemp, _ngrav)`
| Type: | array-like |
`_lwave`[¶](#WDmodel.WDmodel.WDmodel._lwave)
Array of model grid `log10` wavelengths for interpolation
| Type: | array-like |
`_lflux`[¶](#WDmodel.WDmodel.WDmodel._lflux)
Array of model grid `log10` fluxes for interpolation, shape `(_ntemp, _ngrav, _nwave)`
| Type: | array-like |
`_law`[¶](#WDmodel.WDmodel.WDmodel._law)
| Type: | extinction function corresponding to `rvmodel` |
| Returns: | **out** |
| Return type: | [`WDmodel.WDmodel.WDmodel`](#WDmodel.WDmodel.WDmodel) instance |
| Raises: | [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) – If the supplied rvmodel is unknown |
Notes
Virtually none of the attributes should be used directly since it is trivially possible to break the model by redefining them. Access to them is best through the functions connected to the models.
A custom user-specified grid file can be specified. See
[`WDmodel.io.read_model_grid()`](index.html#WDmodel.io.read_model_grid) for the format of the grid file.
Uses [`scipy.interpolate.RegularGridInterpolator`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RegularGridInterpolator.html#scipy.interpolate.RegularGridInterpolator) to interpolate the models.
The class contains various convenience methods that begin with an underscore (_) that will not be imported by default. These are intended for internal use, and do not have the sanity checking and associated overhead of the public methods.
`_WDmodel__init__rvmodel`(*rvmodel=u'f99'*)[¶](#WDmodel.WDmodel.WDmodel._WDmodel__init__rvmodel)
`_WDmodel__init__tlusty`(*grid_file=None*, *grid_name=None*)[¶](#WDmodel.WDmodel.WDmodel._WDmodel__init__tlusty)
`__call__`(*teff*, *logg*, *wave=None*, *log=False*, *strict=True*)[¶](#WDmodel.WDmodel.WDmodel.__call__)
Returns the model flux given `teff` and `logg` at wavelengths `wave`
Wraps [`WDmodel.WDmodel.WDmodel._get_model()`](#WDmodel.WDmodel.WDmodel._get_model) adding checking of inputs.
| Parameters: | * **teff** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere temperature (in Kelvin)
* **logg** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere surface gravity (in dex)
* **wave** (*array-like**,* *optional*) – Desired wavelengths at which to compute the model atmosphere flux.
If not supplied, the full model wavelength grid is returned.
* **log** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Return the log10 flux, rather than the flux
* **strict** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If strict, `teff` and `logg` out of model grid range raise a
[`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError), otherwise raise a [`RuntimeWarning`](https://docs.python.org/3/library/exceptions.html#RuntimeWarning)
and set `teff`, `logg` to the nearest grid value.
|
| Returns: | * **wave** (*array-like*) – Valid output wavelengths
* **flux** (*array-like*) – Interpolated model flux at `teff`, `logg` and wavelengths `wave`
|
| Raises: | [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) – If `teff` or `logg` are out of range of model grid and `strict` is True or
if there are any invalid wavelengths, or the requested wavelengths do not overlap with the model grid |
Notes
Unlike the corresponding private methods, the public methods implement checking of the inputs and return the wavelengths in addition to the flux. Internally, we only use the private methods, as the inputs only need to be checked once and their state is not altered anywhere after.
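A minimal sketch of using this public interface directly is shown below. It assumes the package and its bundled `TlustyGrids.hdf5` grid are installed, and the `teff`/`logg` values are arbitrary illustrative choices that must lie within the grid when `strict=True`.
```
import numpy as np
from WDmodel import WDmodel as atm

model = atm.WDmodel()                       # default grid, default 'f99' reddening law
wave, flux = model(teff=20000., logg=8.0)   # full model wavelength grid

# evaluate at specific wavelengths (Angstrom); with strict=False an out-of-grid
# teff/logg only raises a RuntimeWarning and is set to the nearest grid value
w = np.linspace(3500., 7000., 3501)
w, f = model(teff=20000., logg=8.0, wave=w, strict=False)
```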
`__init__`(*grid_file=None*, *grid_name=None*, *rvmodel=u'f99'*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel.__init__)[¶](#WDmodel.WDmodel.WDmodel.__init__)
x.__init__(…) initializes x; see help(type(x)) for signature
`_custom_extinction`(*wave*, *av*, *rv=3.1*, *unit=u'aa'*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._custom_extinction)[¶](#WDmodel.WDmodel.WDmodel._custom_extinction)
Return the extinction for `av`, `rv` at wavelengths `wave`
for the custom reddening law defined by J. Holberg
Mimics the interface provided by
[`WDmodel.WDmodel.WDmodel._law`](#WDmodel.WDmodel.WDmodel._law) to calculate the extinction as a function of wavelength (in Angstroms), \(A_{\lambda}\).
| Parameters: | * **wave** (*array-like*) – Array of wavelengths in Angstrom at which to compute extinction,
sorted in ascending order
* **av** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Extinction in the V band, \(A_V\)
* **rv** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Fixed to 3.1 for J. Holberg’s custom reddening law
|
| Returns: | **out** – Extinction at wavelengths `wave` for `av` and `rv` |
| Return type: | array-like |
Notes
`av` should be >= 0.
`_extract_from_indices`(*w*, *f*, *ZE*, *df=None*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._extract_from_indices)[¶](#WDmodel.WDmodel.WDmodel._extract_from_indices)
Extracts slices of multiple arrays for the same set of indices.
Convenience function to extract elements of wavelength `w`, flux `f`
and optionally flux uncertainty `df` using indices `ZE`
| Parameters: | * **w** (*array-like*) – Wavelength array from which to extract indices `ZE`
* **f** (*array-like*) – Flux array from which to extract indices `ZE`
* **ZE** (*array-like*) – indices to extract
* **df** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* *array-like**,* *optional*) – If array-like, extracted elements of this array are also returned
|
| Returns: | * **w** (*array-like*) – elements of input wavelength array at indices `ZE`
* **f** (*array-like*) – elements of input flux array at indices `ZE`
* **[df]** (*array-like*) – elements of input flux uncertainty array at indices `ZE` if optional input `df` is supplied
|
`_extract_spectral_line`(*w*, *f*, *line*, *df=None*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._extract_spectral_line)[¶](#WDmodel.WDmodel.WDmodel._extract_spectral_line)
Extracts slices of multiple arrays corresponding to a hydrogen Balmer line
Convenience function to extract elements of wavelength `w`, flux `f`
and optionally flux uncertainty `df` for a hydrogen Balmer `line`.
Wraps [`WDmodel.WDmodel.WDmodel._get_line_indices()`](#WDmodel.WDmodel.WDmodel._get_line_indices) and
[`WDmodel.WDmodel.WDmodel._extract_from_indices()`](#WDmodel.WDmodel.WDmodel._extract_from_indices), both of which have their own reasons for existence as well.
| Parameters: | * **w** (*array-like*) – Wavelength array from which to extract elements corresponding to hydrogen Balmer `line`
* **f** (*array-like*) – Flux array from which to extract elements corresponding to hydrogen Balmer `line`
* **line** (`{'alpha', 'beta', 'gamma', 'delta', 'zeta', 'eta'}`) – Name of hydrogen Balmer line to extract.
Properties are pre-defined in [`WDmodel.WDmodel.WDmodel._lines`](#WDmodel.WDmodel.WDmodel._lines)
* **df** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* *array-like**,* *optional*) – If array-like, extracted elements of this array are also returned
|
| Returns: | * **w** (*array-like*) – elements of input wavelength array for hydrogen Balmer feature
`line`
* **f** (*array-like*) – elements of input flux array for hydrogen Balmer feature `line`
* **[df]** (*array-like*) – elements of input flux uncertainty array for hydrogen Balmer feature `line` if optional input `df` is supplied
|
Notes
Same as [`WDmodel.WDmodel.WDmodel.extract_spectral_line()`](#WDmodel.WDmodel.WDmodel.extract_spectral_line)
without checking of inputs and therefore corresponding overhead. Used internally.
`_get_full_obs_model`(*teff*, *logg*, *av*, *fwhm*, *wave*, *rv=3.1*, *log=False*, *pixel_scale=1.0*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._get_full_obs_model)[¶](#WDmodel.WDmodel.WDmodel._get_full_obs_model)
Returns the observed model flux given `teff`, `logg`, `av`, `rv`,
`fwhm` (for Gaussian instrumental broadening) at wavelengths, `wave` as well as the full SED.
Convenience function that does the same thing as
[`WDmodel.WDmodel.WDmodel._get_obs_model()`](#WDmodel.WDmodel.WDmodel._get_obs_model), but also returns the full SED without any instrumental broadening applied, appropriate for synthetic photometry.
Uses [`WDmodel.WDmodel.WDmodel._get_model()`](#WDmodel.WDmodel.WDmodel._get_model) to get the unreddened model, and reddens it with
[`WDmodel.WDmodel.WDmodel.reddening()`](#WDmodel.WDmodel.WDmodel.reddening) and convolves it with a Gaussian kernel using
`scipy.ndimage.filters.gaussian_filter1d()`
| Parameters: | * **teff** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere temperature (in Kelvin)
* **logg** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere surface gravity (in dex)
* **av** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Extinction in the V band, \(A_V\)
* **fwhm** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Instrumental FWHM in Angstrom
* **wave** (*array-like*) – Desired wavelengths at which to compute the model atmosphere flux.
* **rv** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – The reddening law parameter, \(R_V\), the ratio of the V band extinction \(A_V\) to the reddening between the B and V bands,
\(E(B-V)\). Default is `3.1`, appropriate for stellar SEDs in the Milky Way.
* **log** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Return the log10 flux, rather than the flux (what’s actually interpolated)
* **pixel_scale** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Jacobian of the transformation between wavelength in Angstrom and pixels. In principle, this should be a vector, but virtually all spectral reduction packages resample the spectrum onto a uniform wavelength scale that is close to the native pixel scale of the spectrograph. Default is `1.`
|
| Returns: | * **flux** (*array-like*) – Interpolated model flux at `teff`, `logg` with reddening parametrized by `av`, `rv` and broadened by a Gaussian kernel defined by `fwhm` at wavelengths `wave`
* **mod** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) with `dtype=[('wave', '<f8'), ('flux', '<f8')]`) – Full model SED at `teff`, `logg` with reddening parametrized by
`av`, `rv`
|
Notes
`fwhm` and `pixel_scale` must be > 0
*classmethod* `_get_indices_in_range`(*wave*, *WA*, *WB*, *W0=None*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._get_indices_in_range)[¶](#WDmodel.WDmodel.WDmodel._get_indices_in_range)
Returns indices of wavelength between blue and red wavelength limits and the central wavelength
| Parameters: | * **wave** (*array-like*) – Wavelengths array from which to extract indices
* **WA** ([*float*](https://docs.python.org/3/library/functions.html#float)) – blue limit of wavelengths to extract
* **WB** ([*float*](https://docs.python.org/3/library/functions.html#float)) – red limit of wavelengths to extract
* **W0** ([*float*](https://docs.python.org/3/library/functions.html#float) *or* [*None*](https://docs.python.org/3/library/constants.html#None)*,* *optional*) – `None` or a central wavelength of range `[WA, WB]` to return.
If `None`, the central wavelength is computed, else the input is simply returned.
|
| Returns: | * **W0** (*float*) – central wavelength of range `[WA, WB]`
* **ZE** (*array-like*) – indices of `wave` in range `[WA, WB]`
|
`_get_line_indices`(*wave*, *line*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._get_line_indices)[¶](#WDmodel.WDmodel.WDmodel._get_line_indices)
Returns the central wavelength and indices of wavelength corresponding to a hydrogen Balmer line
| Parameters: | * **wave** (*array-like*) – Wavelengths array from which to extract indices
* **line** (`{'alpha', 'beta', 'gamma', 'delta', 'zeta', 'eta'}`) – Name of hydrogen Balmer line to extract.
Properties are pre-defined in [`WDmodel.WDmodel.WDmodel._lines`](#WDmodel.WDmodel.WDmodel._lines)
|
| Returns: | * **W0** (*float*) – central wavelength of line
* **ZE** (*array-like*) – indices of wave of line
|
Notes
No checking of input - will throw [`KeyError`](https://docs.python.org/3/library/exceptions.html#KeyError) if line is not accepted value
`_get_model`(*teff*, *logg*, *wave=None*, *log=False*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._get_model)[¶](#WDmodel.WDmodel.WDmodel._get_model)
Returns the model flux given `teff` and `logg` at wavelengths `wave`
Simple 3-D interpolation of model grid. Computes unreddened,
unnormalized, unconvolved, interpolated model flux. Uses
[`scipy.interpolate.RegularGridInterpolator`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RegularGridInterpolator.html#scipy.interpolate.RegularGridInterpolator) to generate the interpolated model. This output has been tested against
[`WDmodel.WDmodel.WDmodel._get_model_nosp()`](#WDmodel.WDmodel.WDmodel._get_model_nosp).
| Parameters: | * **teff** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere temperature (in Kelvin)
* **logg** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere surface gravity (in dex)
* **wave** (*array-like**,* *optional*) – Desired wavelengths at which to compute the model atmosphere flux.
If not supplied, the full model wavelength grid is returned.
* **log** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Return the log10 flux rather than the flux.
|
| Returns: | **flux** – Interpolated model flux at `teff`, `logg` and wavelengths `wave`. |
| Return type: | array-like |
Notes
Inputs `teff`, `logg` and `wave` must be within the bounds of the grid. See [`WDmodel.WDmodel.WDmodel._wave`](#WDmodel.WDmodel.WDmodel._wave),
[`WDmodel.WDmodel.WDmodel._ggrid`](#WDmodel.WDmodel.WDmodel._ggrid),
[`WDmodel.WDmodel.WDmodel._tgrid`](#WDmodel.WDmodel.WDmodel._tgrid), for grid locations and limits.
`_get_model_nosp`(*teff*, *logg*, *wave=None*, *log=False*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._get_model_nosp)[¶](#WDmodel.WDmodel.WDmodel._get_model_nosp)
Returns the model flux given `teff` and `logg` at wavelengths
`wave`
Simple 3-D interpolation of model grid. Computes unreddened,
unnormalized, unconvolved, interpolated model flux. Not used, but serves as a check on the output of the
[`scipy.interpolate.RegularGridInterpolator`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RegularGridInterpolator.html#scipy.interpolate.RegularGridInterpolator) interpolation.
| Parameters: | * **teff** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere temperature (in Kelvin)
* **logg** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere surface gravity (in dex)
* **wave** (*array-like**,* *optional*) – Desired wavelengths at which to compute the model atmosphere flux.
If not supplied, the full model wavelength grid is returned.
* **log** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Return the log10 flux, rather than the flux.
|
| Returns: | **flux** – Interpolated model flux at `teff`, `logg` and wavelengths `wave` |
| Return type: | array-like |
Notes
Inputs `teff`, `logg` and `wave` must be within the bounds of the grid.
See [`WDmodel.WDmodel.WDmodel._wave`](#WDmodel.WDmodel.WDmodel._wave),
[`WDmodel.WDmodel.WDmodel._ggrid`](#WDmodel.WDmodel.WDmodel._ggrid),
[`WDmodel.WDmodel.WDmodel._tgrid`](#WDmodel.WDmodel.WDmodel._tgrid), for grid locations and limits.
This restriction is not imposed here for performance reasons, but is implicitly set by routines that call this method. The user is expected to verify this condition if this method is used outside the context of the [`WDmodel`](index.html#module-WDmodel) package. Caveat emptor.
`_get_obs_model`(*teff*, *logg*, *av*, *fwhm*, *wave*, *rv=3.1*, *log=False*, *pixel_scale=1.0*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._get_obs_model)[¶](#WDmodel.WDmodel.WDmodel._get_obs_model)
Returns the observed model flux given `teff`, `logg`, `av`, `rv`,
`fwhm` (for Gaussian instrumental broadening) and wavelengths `wave`
Uses [`WDmodel.WDmodel.WDmodel._get_model()`](#WDmodel.WDmodel.WDmodel._get_model) to get the unreddened model, and reddens it with
[`WDmodel.WDmodel.WDmodel.reddening()`](#WDmodel.WDmodel.WDmodel.reddening) and convolves it with a Gaussian kernel using
`scipy.ndimage.filters.gaussian_filter1d()`
| Parameters: | * **teff** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere temperature (in Kelvin)
* **logg** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere surface gravity (in dex)
* **av** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Extinction in the V band, \(A_V\)
* **fwhm** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Instrumental FWHM in Angstrom
* **wave** (*array-like*) – Desired wavelengths at which to compute the model atmosphere flux.
* **rv** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – The reddening law parameter, \(R_V\), the ratio of the V band extinction \(A_V\) to the reddening between the B and V bands,
\(E(B-V)\). Default is 3.1, appropriate for stellar SEDs in the Milky Way.
* **log** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Return the log10 flux, rather than the flux (what’s actually interpolated)
* **pixel_scale** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Jacobian of the transformation between wavelength in Angstrom and pixels. In principle, this should be a vector, but virtually all spectral reduction packages resample the spectrum onto a uniform wavelength scale that is close to the native pixel scale of the spectrograph. Default is `1.`
|
| Returns: | **flux** – Interpolated model flux at `teff`, `logg` with reddening parametrized by `av`, `rv` and broadened by a Gaussian kernel defined by `fwhm` at wavelengths `wave` |
| Return type: | array-like |
Notes
`fwhm` and `pixel_scale` must be > 0
`_get_red_model`(*teff*, *logg*, *av*, *wave*, *rv=3.1*, *log=False*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._get_red_model)[¶](#WDmodel.WDmodel.WDmodel._get_red_model)
Returns the reddened model flux given `teff`, `logg`, `av`, `rv` at wavelengths `wave`
Uses [`WDmodel.WDmodel.WDmodel._get_model()`](#WDmodel.WDmodel.WDmodel._get_model) to get the unreddened model, and reddens it with
[`WDmodel.WDmodel.WDmodel.reddening()`](#WDmodel.WDmodel.WDmodel.reddening)
| Parameters: | * **teff** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere temperature (in Kelvin)
* **logg** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere surface gravity (in dex)
* **av** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Extinction in the V band, \(A_V\)
* **wave** (*array-like*) – Desired wavelengths at which to compute the model atmosphere flux.
* **rv** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – The reddening law parameter, \(R_V\), the ratio of the V band extinction \(A_V\) to the reddening between the B and V bands,
\(E(B-V)\). Default is 3.1, appropriate for stellar SEDs in the Milky Way.
* **log** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Return the log10 flux, rather than the flux (what’s actually interpolated)
|
| Returns: | **flux** – Interpolated model flux at `teff`, `logg` with reddening parametrized by `av`, `rv` at wavelengths `wave` |
| Return type: | array-like |
*classmethod* `_wave_test`(*wave*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel._wave_test)[¶](#WDmodel.WDmodel.WDmodel._wave_test)
Raises an error if wavelengths are not valid
| Parameters: | **wave** (*array-like*) – Array of wavelengths to test for validity |
| Raises: | [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) – If wavelength array is empty, has negative values, or is not monotonic |
`extinction`(*wave*, *av*, *rv=3.1*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel.extinction)[¶](#WDmodel.WDmodel.WDmodel.extinction)
Return the extinction for `av`, `rv` at wavelengths `wave`
Uses the extinction function corresponding to the `rvmodel`
parametrization set as
[`WDmodel.WDmodel.WDmodel._law`](#WDmodel.WDmodel.WDmodel._law) to calculate the extinction as a function of wavelength (in Angstroms),
\(A_{\lambda}\).
| Parameters: | * **wave** (*array-like*) – Array of wavelengths in Angstrom at which to compute extinction,
sorted in ascending order
* **av** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Extinction in the V band, \(A_V\)
* **rv** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – The reddening law parameter, \(R_V\), the ratio of the V band extinction \(A_V\) to the reddening between the B and V bands,
\(E(B-V)\). Default is `3.1`, appropriate for stellar SEDs in the Milky Way.
|
| Returns: | **out** – Extinction at wavelengths `wave` for `av` and `rv` |
| Return type: | array-like |
Notes
`av` should be >= 0.
`extract_spectral_line`(*w*, *f*, *line*, *df=None*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel.extract_spectral_line)[¶](#WDmodel.WDmodel.WDmodel.extract_spectral_line)
Extracts slices of multiple arrays corresponding to a hydrogen Balmer line
Convenience function to extract elements of wavelength `w`, flux `f`
and optionally flux uncertainty `df` for a hydrogen Balmer line. Wraps
[`WDmodel.WDmodel.WDmodel._extract_spectral_line()`](#WDmodel.WDmodel.WDmodel._extract_spectral_line) adding checking of inputs.
| Parameters: | * **w** (*array-like*) – Wavelength array from which to extract elements corresponding to hydrogen Balmer `line`
* **f** (*array-like*) – Flux array from which to extract elements corresponding to hydrogen Balmer `line`
* **line** (`{'alpha', 'beta', 'gamma', 'delta', 'zeta', 'eta'}`) – Name of hydrogen Balmer line to extract.
Properties are pre-defined in [`WDmodel.WDmodel.WDmodel._lines`](#WDmodel.WDmodel.WDmodel._lines)
* **df** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* *array-like**,* *optional*) – If array-like, extracted elements of this array are also returned
|
| Returns: | * **w** (*array-like*) – elements of input wavelength array for hydrogen Balmer feature
`line`
* **f** (*array-like*) – elements of input flux array for hydrogen Balmer feature `line`
* **[df]** (*array-like*) – elements of input flux uncertainty array for hydrogen Balmer feature `line` if optional input `df` is supplied
|
| Raises: | [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) – If `line` is not one of the first six of the Balmer series, or if wavelengths are invalid, or if there’s a difference in shape of any of the arrays |
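A hedged usage sketch follows (not part of the package docs). It assumes the default `WDmodel` constructor can locate the packaged model grid, and the flat flux values are placeholders.
```
import numpy as np
from WDmodel.WDmodel import WDmodel

# Assumes the default constructor can find the packaged model grid.
model = WDmodel()

wave = np.arange(3800., 7500., 1.0)        # Angstrom
flux = np.ones_like(wave)                  # toy flat spectrum
flux_err = np.full_like(wave, 0.01)

# Slice out the region around H-beta; df is optional and is returned last.
w_b, f_b, df_b = model.extract_spectral_line(wave, flux, line='beta', df=flux_err)
```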
`get_model`(*teff*, *logg*, *wave=None*, *log=False*, *strict=True*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel.get_model)[¶](#WDmodel.WDmodel.WDmodel.get_model)
Returns the model flux given `teff` and `logg` at wavelengths `wave`
Wraps [`WDmodel.WDmodel.WDmodel._get_model()`](#WDmodel.WDmodel.WDmodel._get_model) adding checking of inputs.
| Parameters: | * **teff** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere temperature (in Kelvin)
* **logg** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere surface gravity (in dex)
* **wave** (*array-like**,* *optional*) – Desired wavelengths at which to compute the model atmosphere flux.
If not supplied, the full model wavelength grid is returned.
* **log** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Return the log10 flux, rather than the flux
* **strict** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If strict, `teff` and `logg` out of model grid range raise a
[`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError), otherwise raise a [`RuntimeWarning`](https://docs.python.org/3/library/exceptions.html#RuntimeWarning)
and set `teff`, `logg` to the nearest grid value.
|
| Returns: | * **wave** (*array-like*) – Valid output wavelengths
* **flux** (*array-like*) – Interpolated model flux at `teff`, `logg` and wavelengths `wave`
|
| Raises: | [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) – If `teff` or `logg` are out of range of model grid and `strict` is True or
if there are any invalid wavelengths, or the requested wavelengths do not overlap with the model grid |
Notes
Unlike the corresponding private methods, the public methods check their inputs and return the wavelengths in addition to the flux. Internally, only the private methods are used, as the inputs only need to be checked once and their state is not altered anywhere afterwards.
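A minimal usage sketch, assuming the packaged grid file is found with the default constructor and that the illustrative `teff`, `logg` values lie within the grid:
```
import numpy as np
from WDmodel.WDmodel import WDmodel

model = WDmodel()   # assumes the packaged model grid is found with defaults

# Full model grid for a hot DA white dwarf (illustrative values).
wave, flux = model.get_model(teff=40000., logg=7.9)

# Same model restricted to a user-supplied wavelength slice.
wave_opt = np.arange(4000., 7000., 1.0)
wave_opt, flux_opt = model.get_model(40000., 7.9, wave=wave_opt, strict=True)
```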
`get_obs_model`(*teff*, *logg*, *av*, *fwhm*, *rv=3.1*, *wave=None*, *log=False*, *strict=True*, *pixel_scale=1.0*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel.get_obs_model)[¶](#WDmodel.WDmodel.WDmodel.get_obs_model)
Returns the observed model flux given `teff`, `logg`, `av`,
`rv`, `fwhm` (for Gaussian instrumental broadening) and wavelengths
`wave`
Uses [`WDmodel.WDmodel.WDmodel.get_red_model()`](#WDmodel.WDmodel.WDmodel.get_red_model) to get the reddened model and convolves it with a Gaussian kernel using
`scipy.ndimage.filters.gaussian_filter1d()`
| Parameters: | * **teff** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere temperature (in Kelvin)
* **logg** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere surface gravity (in dex)
* **av** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Extinction in the V band, \(A_V\)
* **fwhm** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Instrumental FWHM in Angstrom
* **rv** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – The reddening law parameter, \(R_V\), the ratio of the V band extinction \(A_V\) to the reddening between the B and V bands,
\(E(B-V)\). Default is `3.1`, appropriate for stellar SEDs in the Milky Way.
* **wave** (*array-like**,* *optional*) – Desired wavelengths at which to compute the model atmosphere flux.
If not supplied, the full model wavelength grid is returned.
* **log** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Return the log10 flux, rather than the flux (what’s actually interpolated)
* **strict** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If strict, `teff` and `logg` out of model grid range raise a
[`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError), otherwise raise a [`RuntimeWarning`](https://docs.python.org/3/library/exceptions.html#RuntimeWarning) and set
`teff`, `logg` to the nearest grid value.
* **pixel_scale** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Jacobian of the transformation between wavelength in Angstrom and pixels. In principle, this should be a vector, but virtually all spectral reduction packages resample the spectrum onto a uniform wavelength scale that is close to the native pixel scale of the spectrograph. Default is `1.`
|
| Returns: | * **wave** (*array-like*) – Valid output wavelengths
* **flux** (*array-like*) – Interpolated model flux at `teff`, `logg` with reddening parametrized by `av`, `rv` broadened by a Gaussian kernel defined by `fwhm` at wavelengths `wave`
|
Notes
`fwhm` and `pixel_scale` must be `> 0`
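A hedged sketch of the observed-model call described above; all parameter values are illustrative only and must lie within the grid and reddening-law bounds.
```
import numpy as np
from WDmodel.WDmodel import WDmodel

model = WDmodel()   # assumes the packaged model grid is found with defaults

wave = np.arange(3800., 7500., 0.5)
owave, oflux = model.get_obs_model(teff=20000., logg=8.0, av=0.05, fwhm=5.0,
                                   rv=3.1, wave=wave, pixel_scale=1.0)
```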
`get_red_model`(*teff*, *logg*, *av*, *rv=3.1*, *wave=None*, *log=False*, *strict=True*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel.get_red_model)[¶](#WDmodel.WDmodel.WDmodel.get_red_model)
Returns the reddened model flux given `teff`, `logg`, `av`, `rv` at wavelengths `wave`
Uses [`WDmodel.WDmodel.WDmodel.get_model()`](#WDmodel.WDmodel.WDmodel.get_model) to get the unreddened model, and reddens it with [`WDmodel.WDmodel.WDmodel.reddening()`](#WDmodel.WDmodel.WDmodel.reddening)
| Parameters: | * **teff** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere temperature (in Kelvin)
* **logg** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Desired model white dwarf atmosphere surface gravity (in dex)
* **av** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Extinction in the V band, \(A_V\)
* **rv** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – The reddening law parameter, \(R_V\), the ratio of the V band extinction \(A_V\) to the reddening between the B and V bands,
\(E(B-V)\). Default is `3.1`, appropriate for stellar SEDs in the Milky Way.
* **wave** (*array-like**,* *optional*) – Desired wavelengths at which to compute the model atmosphere flux.
If not supplied, the full model wavelength grid is returned.
* **log** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Return the log10 flux, rather than the flux (what’s actually interpolated)
* **strict** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If strict, `teff` and `logg` out of model grid range raise a
[`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError), otherwise raise a [`RuntimeWarning`](https://docs.python.org/3/library/exceptions.html#RuntimeWarning)
and set `teff`, `logg` to the nearest grid value.
|
| Returns: | * **wave** (*array-like*) – Valid output wavelengths
* **flux** (*array-like*) – Interpolated model flux at `teff`, `logg` with reddening parametrized by `av`, `rv` at wavelengths `wave`
|
| Raises: | [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) – If `av < 0` or `rv` not in `[1.7, 5.1]` |
`reddening`(*wave*, *flux*, *av*, *rv=3.1*)[[source]](_modules/WDmodel/WDmodel.html#WDmodel.reddening)[¶](#WDmodel.WDmodel.WDmodel.reddening)
Redden a 1-D spectrum with extinction
Uses the extinction function corresponding to the `rvmodel`
parametrization set in
[`WDmodel.WDmodel.WDmodel._WDmodel__init__rvmodel()`](#WDmodel.WDmodel.WDmodel._WDmodel__init__rvmodel) to calculate the extinction as a function of wavelength (in Angstroms),
\(A_{\lambda}\).
| Parameters: | * **wave** (*array-like*) – Array of wavelengths in Angstrom at which to compute extinction,
sorted in ascending order
* **flux** (*array-like*) – Array of fluxes at `wave` at which to apply extinction
* **av** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Extinction in the V band, \(A_V\)
* **rv** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – The reddening law parameter, \(R_V\), the ratio of the V band extinction \(A_V\) to the reddening between the B and V bands,
\(E(B-V)\). Default is `3.1`, appropriate for stellar SEDs in the Milky Way.
|
| Returns: | **out** – The reddened spectrum |
| Return type: | array-like |
Notes
`av` and `flux` should be >= 0.
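The extinction curve \(A_{\lambda}\) is in magnitudes; applying it to a flux array follows the standard relation \(f_{\mathrm{red}} = f \, 10^{-0.4 A_{\lambda}}\). The toy curve below is made up and only illustrates the arithmetic, not the package internals.
```
import numpy as np

wave = np.arange(3500., 9000., 1.0)            # Angstrom
flux = np.ones_like(wave)                      # toy flat spectrum
a_lambda = 0.1 * (5500.0 / wave)               # made-up extinction curve, in mag

flux_red = flux * 10.0 ** (-0.4 * a_lambda)    # f_red = f * 10**(-0.4 * A_lambda)
```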
##### WDmodel.covariance module[¶](#module-WDmodel.covariance)
Parametrizes the noise of the spectrum fit using a Gaussian process.
*class* `WDmodel.covariance.``WDmodel_CovModel`(*errscale*, *covtype=u'Matern32'*, *coveps=1e-12*)[[source]](_modules/WDmodel/covariance.html#WDmodel_CovModel)[¶](#WDmodel.covariance.WDmodel_CovModel)
Bases: [`object`](https://docs.python.org/3/library/functions.html#object)
Parametrizes the noise of the spectrum fit using a Gaussian process.
This class models the covariance of the spectrum fit using a stationary Gaussian process conditioned on the spectrum flux residuals and spectrum flux uncertainties. The class allows the kernel of the Gaussian process to be set in a single location. A few different stationary kernels are supported. These choices are defined in [`celerite.terms`](https://celerite.readthedocs.io/en/stable/python/kernel/#module-celerite.terms).
| Parameters: | * **errscale** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Characteristic scale of the spectrum flux uncertainties. The kernel amplitude hyperparameters are reported as fractions of this number. If the spectrum flux is rescaled, this must be set appropriately to get the correct uncertainties. The [`WDmodel`](index.html#module-WDmodel) package uses the median of the spectrum flux uncertainty internally.
* **covtype** (`{'Matern32', 'SHO', 'Exp', 'White'}`) – The model to use to parametrize the covariance. Choices are defined in
[`celerite.terms`](https://celerite.readthedocs.io/en/stable/python/kernel/#module-celerite.terms) All choices except `'White'` parametrize the covariance using a stationary kernel with a characteristic amplitude
`fsig` and scale `tau` + a white noise component with amplitude
`fw`. Only the white noise component is used to condition the Gaussian process if `covtype` is `'White'`. If not specified or unknown, `'Matern32'` is used and a [`RuntimeWarning`](https://docs.python.org/3/library/exceptions.html#RuntimeWarning) is raised.
* **coveps** ([*float*](https://docs.python.org/3/library/functions.html#float)) – If `covtype` is `'Matern32'` a
[`celerite.terms.Matern32Term`](https://celerite.readthedocs.io/en/stable/python/kernel/#celerite.terms.Matern32Term) is used to approximate a Matern32 kernel with precision coveps. The default is `1e-12`.
Ignored if any other `covtype` is specified.
|
`_errscale`[¶](#WDmodel.covariance.WDmodel_CovModel._errscale)
The input `errscale`
| Type: | [float](https://docs.python.org/3/library/functions.html#float) |
`_covtype`[¶](#WDmodel.covariance.WDmodel_CovModel._covtype)
The input `covtype`
| Type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
`_coveps`[¶](#WDmodel.covariance.WDmodel_CovModel._coveps)
The input `coveps`
| Type: | [float](https://docs.python.org/3/library/functions.html#float) |
`_ndim`[¶](#WDmodel.covariance.WDmodel_CovModel._ndim)
The dimensionality of kernel used to parametrize the covariance
| Type: | [int](https://docs.python.org/3/library/functions.html#int) |
`_k1`[¶](#WDmodel.covariance.WDmodel_CovModel._k1)
The non-trivial stationary component of the kernel
| Type: | None or a term instance from [`celerite.terms`](https://celerite.readthedocs.io/en/stable/python/kernel/#module-celerite.terms) |
`_k2`[¶](#WDmodel.covariance.WDmodel_CovModel._k2)
The white noise component of the kernel
| Type: | [`celerite.terms.JitterTerm`](https://celerite.readthedocs.io/en/stable/python/kernel/#celerite.terms.JitterTerm) |
`_logQ`[¶](#WDmodel.covariance.WDmodel_CovModel._logQ)
`1/sqrt(2)` - only set if `covtype` is `'SHO'`
| Type: | [float](https://docs.python.org/3/library/functions.html#float), conditional |
| Returns: | |
| Return type: | A [`WDmodel.covariance.WDmodel_CovModel`](#WDmodel.covariance.WDmodel_CovModel) instance |
Notes
Virtually none of the attributes should be used directly since it is trivially possible to break the model by redefining them. Access to them is best through the functions connected to the models.
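A minimal construction sketch, assuming `celerite` is installed; `errscale` is set to the median flux uncertainty, as the class description recommends.
```
import numpy as np
from WDmodel.covariance import WDmodel_CovModel

flux_err = np.array([0.04, 0.05, 0.06, 0.05])          # toy uncertainties
covmodel = WDmodel_CovModel(errscale=np.median(flux_err),
                            covtype='Matern32', coveps=1e-12)
```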
`__init__`(*errscale*, *covtype=u'Matern32'*, *coveps=1e-12*)[[source]](_modules/WDmodel/covariance.html#WDmodel_CovModel.__init__)[¶](#WDmodel.covariance.WDmodel_CovModel.__init__)
x.__init__(…) initializes x; see help(type(x)) for signature
`getgp`(*wave*, *flux_err*, *fsig*, *tau*, *fw*)[[source]](_modules/WDmodel/covariance.html#WDmodel_CovModel.getgp)[¶](#WDmodel.covariance.WDmodel_CovModel.getgp)
Return the [`celerite.GP`](https://celerite.readthedocs.io/en/stable/python/gp/#celerite.GP) instance
Precomputes the covariance matrix of the Gaussian process specified by the functional form of the stationary kernel and the current values of the hyperparameters. Wraps [`celerite.GP`](https://celerite.readthedocs.io/en/stable/python/gp/#celerite.GP).
| Parameters: | * **wave** (*array-like**,* *optional*) – Wavelengths at which to condition the Gaussian process
* **flux_err** (*array-like*) – Flux uncertainty array on which to condition the Gaussian process
* **fsig** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The fractional amplitude of the non-trivial stationary kernel. The true amplitude is scaled by
[`WDmodel.covariance.WDmodel_CovModel._errscale`](#WDmodel.covariance.WDmodel_CovModel._errscale)
* **tau** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The characteristic length scale of the non-trivial stationary kernel.
* **fw** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The fractional amplitude of the white noise component of the kernel. The true amplitude is scaled by
[`WDmodel.covariance.WDmodel_CovModel._errscale`](#WDmodel.covariance.WDmodel_CovModel._errscale)
|
| Returns: | **gp** – The Gaussian process with covariance matrix precomputed at the location of the data |
| Return type: | [`celerite.GP`](https://celerite.readthedocs.io/en/stable/python/gp/#celerite.GP) instance |
Notes
`fsig`, `tau` and `fw` all must be > 0. This constraint is not checked here, but is instead imposed by the samplers/optimizers used in the [`WDmodel.fit`](index.html#module-WDmodel.fit) methods, and by bounds used to construct the `WDmodel.likelihood.WDmodel_Likelihood`
instance using the [`WDmodel.likelihood.setup_likelihood()`](index.html#WDmodel.likelihood.setup_likelihood)
method.
`lnlikelihood`(*wave*, *res*, *flux_err*, *fsig*, *tau*, *fw*)[[source]](_modules/WDmodel/covariance.html#WDmodel_CovModel.lnlikelihood)[¶](#WDmodel.covariance.WDmodel_CovModel.lnlikelihood)
Return the log likelihood of the Gaussian process
Conditions the Gaussian process specified by the functional form of the stationary kernel and the current values of the hyperparameters on the data, and computes the log likelihood. Wraps
[`celerite.GP.log_likelihood()`](https://celerite.readthedocs.io/en/stable/python/gp/#celerite.GP.log_likelihood).
| Parameters: | * **wave** (*array-like**,* *optional*) – Wavelengths at which to condition the Gaussian process
* **res** (*array-like*) – Flux residual array on which to condition the Gaussian process.
The kernel parametrization assumes that the mean model has been subtracted off.
* **flux_err** (*array-like*) – Flux uncertainty array on which to condition the Gaussian process
* **fsig** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The fractional amplitude of the non-trivial stationary kernel. The true amplitude is scaled by
[`WDmodel.covariance.WDmodel_CovModel._errscale`](#WDmodel.covariance.WDmodel_CovModel._errscale)
* **tau** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The characteristic length scale of the non-trivial stationary kernel.
* **fw** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The fractional amplitude of the white noise component of the kernel. The true amplitude is scaled by
[`WDmodel.covariance.WDmodel_CovModel._errscale`](#WDmodel.covariance.WDmodel_CovModel._errscale)
|
| Returns: | **lnlike** – The log likelihood of the Gaussian process conditioned on the data. |
| Return type: | [float](https://docs.python.org/3/library/functions.html#float) |
See also
[`getgp()`](#WDmodel.covariance.WDmodel_CovModel.getgp)
`predict`(*wave*, *res*, *flux_err*, *fsig*, *tau*, *fw*, *mean_only=False*)[[source]](_modules/WDmodel/covariance.html#WDmodel_CovModel.predict)[¶](#WDmodel.covariance.WDmodel_CovModel.predict)
Return the prediction for the Gaussian process
Conditions the Gaussian process, parametrized with the functional form of the stationary kernel and the current values of the hyperparameters, on the data, and returns the prediction at the same locations as the data. Wraps [`celerite.GP.predict()`](https://celerite.readthedocs.io/en/stable/python/gp/#celerite.GP.predict).
| Parameters: | * **wave** (*array-like**,* *optional*) – Wavelengths at which to condition the Gaussian process
* **res** (*array-like*) – Flux residual array on which to condition the Gaussian process.
The kernel parametrization assumes that the mean model has been subtracted off.
* **flux_err** (*array-like*) – Flux uncertainty array on which to condition the Gaussian process
* **fsig** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The fractional amplitude of the non-trivial stationary kernel. The true amplitude is scaled by
[`WDmodel.covariance.WDmodel_CovModel._errscale`](#WDmodel.covariance.WDmodel_CovModel._errscale)
* **tau** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The characteristic length scale of the non-trivial stationary kernel.
* **fw** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The fractional amplitude of the white noise component of the kernel. The true amplitude is scaled by
[`WDmodel.covariance.WDmodel_CovModel._errscale`](#WDmodel.covariance.WDmodel_CovModel._errscale)
* **mean_only** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Return only the predicted mean, not the covariance matrix
|
| Returns: | * **wres** (*array-like*) – The prediction of the Gaussian process conditioned on the data at the same location i.e. the model.
* **cov** (*array-like, optional*) – The computed covariance matrix of the Gaussian process using the parametrized stationary kernel evaluated at the locations of the data.
|
See also
[`getgp()`](#WDmodel.covariance.WDmodel_CovModel.getgp)
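A hedged end-to-end sketch of conditioning the covariance model on synthetic residuals. The hyperparameter values are illustrative, and the assumption here is that `predict` with `mean_only=True` returns just the predicted mean.
```
import numpy as np
from WDmodel.covariance import WDmodel_CovModel

rng = np.random.RandomState(42)
wave = np.arange(3800., 7500., 2.0)
flux_err = np.full_like(wave, 0.05)
res = 0.02 * np.sin(wave / 200.0) + rng.normal(0., 0.05, wave.size)   # toy residuals

covmodel = WDmodel_CovModel(errscale=np.median(flux_err), covtype='Matern32')

fsig, tau, fw = 0.5, 500.0, 0.1        # illustrative hyperparameters (all > 0)
lnlike = covmodel.lnlikelihood(wave, res, flux_err, fsig, tau, fw)
wres = covmodel.predict(wave, res, flux_err, fsig, tau, fw, mean_only=True)
```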
##### WDmodel.fit module[¶](#module-WDmodel.fit)
Core data processing and fitting/sampling routines
`WDmodel.fit.``blotch_spectrum`(*spec*, *linedata*)[[source]](_modules/WDmodel/fit.html#blotch_spectrum)[¶](#WDmodel.fit.blotch_spectrum)
Automagically remove cosmic rays and gaps from spectrum
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **linedata** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The observations of the spectrum corresponding to the hydrogen Balmer lines.
Must have
`dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8'), ('line_mask', 'i4'), ('line_ind','i4')]`
Produced by [`orig_cut_lines()`](#WDmodel.fit.orig_cut_lines)
|
| Returns: | **spec** – The blotched spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]` |
| Return type: | [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
Notes
Some spectra have nasty cosmic rays or gaps in the data. This routine does a reasonable job blotching these by Wiener filtering the spectrum,
marking features that differ significantly from the local variance in the region, and replacing them with the filtered values. The hydrogen Balmer lines are preserved, so if your gap/cosmic ray lands on a line it will not be filtered. Additionally, filtering has edge effects, and these data are preserved as well. If you do blotch the spectrum, it is highly recommended that you use the bluelimit and redlimit options to trim the ends of the spectrum. Note that the spectrum will be rejected if it has flux or flux errors that are not finite or below zero. This is often the case with cosmic rays and gaps, so you will likely have to do some manual removal of these points.
YOU SHOULD PROBABLY PRE-PROCESS YOUR DATA YOURSELF BEFORE FITTING IT AND NOT BE LAZY! THIS ROUTINE ONLY EXISTS TO FIT QUICK LOOK SPECTRA AT THE TELESCOPE, BEFORE FINAL REDUCTIONS!
`WDmodel.fit.``fit_model`(*spec*, *phot*, *model*, *covmodel*, *pbs*, *params*, *objname*, *outdir*, *specfile*, *phot_dispersion=0.0*, *samptype=u'ensemble'*, *ascale=2.0*, *ntemps=1*, *nwalkers=300*, *nburnin=50*, *nprod=1000*, *everyn=1*, *thin=1*, *pool=None*, *resume=False*, *redo=False*)[[source]](_modules/WDmodel/fit.html#fit_model)[¶](#WDmodel.fit.fit_model)
Core routine that models the spectrum using the white dwarf model and a Gaussian process with a stationary kernel to account for any flux miscalibration, sampling the posterior using a MCMC.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **phot** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The photometry of `objname` with `dtype=[('pb', 'str'), ('mag', '<f8'), ('mag_err', '<f8')]`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
* **pbs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – Passband dictionary containing the passbands corresponding to `phot.pb` and generated by [`WDmodel.passband.get_pbmodel()`](index.html#WDmodel.passband.get_pbmodel).
* **params** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – A parameter dict such as that produced by
[`WDmodel.io.read_params()`](index.html#WDmodel.io.read_params)
* **objname** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – object name - used to save output with correct name
* **outdir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – controls where the chain file is written
* **specfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Used in the title, and to set the name of the `outfile`
* **phot_dispersion** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Excess photometric dispersion to add in quadrature with the photometric uncertainties `phot.mag_err`. Use if the errors are grossly underestimated. Default is `0.`
* **samptype** (`{'ensemble', 'pt', 'gibbs'}`) – Which sampler to use. The default is `ensemble`.
* **ascale** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The proposal scale for the sampler. Default is `2.`
* **ntemps** ([*int*](https://docs.python.org/3/library/functions.html#int)) – The number of temperatures to run walkers at. Only used if `samptype`
is in `{'pt','gibbs'}` and set to `1.` for `ensemble`. See a short summary [review](https://en.wikipedia.org/wiki/Parallel_tempering) for details.
Default is `1.`
* **nwalkers** ([*int*](https://docs.python.org/3/library/functions.html#int)) – The number of [Goodman and Weare walkers](http://msp.org/camcos/2010/5-1/p04.xhtml). Default is `300`.
* **nburnin** ([*int*](https://docs.python.org/3/library/functions.html#int)) – The number of steps to discard as burn-in for the Markov-Chain. Default is `50`.
* **nprod** ([*int*](https://docs.python.org/3/library/functions.html#int)) – The number of production steps in the Markov-Chain. Default is `1000`.
* **everyn** ([*int*](https://docs.python.org/3/library/functions.html#int)*,* *optional*) – If the posterior function is evaluated using only every nth observation from the data, this should be specified. Default is `1`.
* **thin** ([*int*](https://docs.python.org/3/library/functions.html#int)) – Only save every `thin` steps to the output Markov Chain. Useful as a brute force way of reducing correlation between samples.
* **pool** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* `emcee.utils.MPIPool`) – If running with MPI, the pool object is used to distribute the computations among the child processes
* **resume** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – If `True`, restores state and resumes the chain for another `nprod` iterations.
* **redo** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – If `True`, and a chain file and state file exist, simply clobbers them.
|
| Returns: | * **free_param_names** (*list*) – names of parameters that were fit for. Names correspond to keys in
`params` and the order of parameters in `samples`.
* **samples** (*array-like*) – The flattened Markov Chain with the parameter positions.
Shape is `(ntemps*nwalkers*nprod, nparam)`
* **samples_lnprob** (*array-like*) – The flattened log of the posterior corresponding to the positions in
`samples`. Shape is `(ntemps*nwalkers*nprod, 1)`
* **everyn** (*int*) – Specifies sampling of the data used to compute the posterior. Provided in case we are using `resume` to continue the chain, and this value must be restored from the state file, rather than being supplied as a user input.
* **shape** (*tuple*) – Specifies the shape of the un-flattened chain.
`(ntemps, nwalkers, nprod, nparam)`
Provided in case we are using `resume` to continue the chain, and this value must be restored from the state file, rather than being supplied as a user input.
|
| Raises: | [`RuntimeError`](https://docs.python.org/3/library/exceptions.html#RuntimeError) – If `resume` is set without the chain having been run in the first place. |
Notes
Uses an Ensemble MCMC (implemented by emcee) to generate samples from the posterior. Does a short burn-in around the initial guess model parameters - either `minuit` or user supplied values/defaults.
Model parameters may be frozen/fixed. Parameters can have bounds limiting their range. Then runs a full production chain. Chain state is saved after every 100 production steps, and may be continued after the first 100 steps if interrupted or found to be too short. Progress is indicated visually with a progress bar that is written to STDOUT.
See also
[`WDmodel.likelihood`](index.html#module-WDmodel.likelihood)
[`WDmodel.covariance`](index.html#module-WDmodel.covariance)
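The relationship between the flattened `samples` array and the `shape` tuple documented above can be illustrated with a toy reshape; the sizes below are made up and the exact ordering used internally by the package is an assumption.
```
import numpy as np

ntemps, nwalkers, nprod, nparam = 1, 4, 10, 3                 # toy sizes
samples = np.zeros((ntemps * nwalkers * nprod, nparam))       # flattened chain
chain = samples.reshape((ntemps, nwalkers, nprod, nparam))    # un-flattened, per the documented `shape`
```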
`WDmodel.fit.``fix_pos`(*pos*, *free_param_names*, *params*)[[source]](_modules/WDmodel/fit.html#fix_pos)[¶](#WDmodel.fit.fix_pos)
Ensures that the initial positions of the [`emcee`](https://emcee.readthedocs.io/en/stable/user/quickstart.html#module-emcee) walkers are not out of bounds
| Parameters: | * **pos** (*array-like*) – starting positions of all the walkers, such as that produced by
`utils.sample_ball`
* **free_param_names** (*iterable*) – names of parameters that are free to float. Names must correspond to keys in `params`.
* **params** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – A parameter dict such as that produced by
[`WDmodel.io.read_params()`](index.html#WDmodel.io.read_params)
|
| Returns: | **pos** – starting positions of all the walkers, fixed to guarantee that they are within `bounds` defined in `params` |
| Return type: | array-like |
Notes
[`emcee.utils.sample_ball()`](https://emcee.readthedocs.io/en/stable/api.html#emcee.utils.sample_ball) creates random walkers that may be initialized out of bounds. These walkers get stuck as there is no step they can take that will make the change in loglikelihood finite. This makes the chain appear strongly correlated since all the samples of one walker are at a fixed location. This resolves the issue by assuming that the parameter `value` was within `bounds` to begin with. This routine does not do any checking of types, values or bounds. This check is done by [`WDmodel.io.get_params_from_argparse()`](index.html#WDmodel.io.get_params_from_argparse) before the fit. If you setup the fit using an external code, you should check these values.
See also
[`emcee.utils.sample_ball()`](https://emcee.readthedocs.io/en/stable/api.html#emcee.utils.sample_ball)
[`WDmodel.io.get_params_from_argparse()`](index.html#WDmodel.io.get_params_from_argparse)
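A toy illustration of the failure mode described above, with a naive clip standing in for the package's own logic (the clip is an assumption, not the actual implementation of `fix_pos`):
```
import numpy as np

rng = np.random.RandomState(0)
value, bounds = 8.0, (7.0, 9.5)                     # e.g. a logg guess and its bounds
pos = rng.normal(value, 1.0, size=(300, 1))         # Gaussian ball; some walkers land out of bounds
pos_fixed = np.clip(pos, bounds[0], bounds[1])      # naive stand-in for fix_pos
```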
`WDmodel.fit.``get_fit_params_from_samples`(*param_names*, *samples*, *samples_lnprob*, *params*, *ntemps=1*, *nwalkers=300*, *nprod=1000*, *discard=5*)[[source]](_modules/WDmodel/fit.html#get_fit_params_from_samples)[¶](#WDmodel.fit.get_fit_params_from_samples)
Get the marginalized parameters from the sample chain
| Parameters: | * **param_names** ([*list*](https://docs.python.org/3/library/stdtypes.html#list)) – names of parameters that were fit for. Names correspond to keys in
`params` and the order of parameters in `samples`.
* **samples** (*array-like*) – The flattened Markov Chain with the parameter positions.
Shape is `(ntemps*nwalkers*nprod, nparam)`
* **samples_lnprob** (*array-like*) – The flattened log of the posterior corresponding to the positions in
`samples`. Shape is `(ntemps*nwalkers*nprod, 1)`
* **params** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – A parameter dict such as that produced by
[`WDmodel.io.read_params()`](index.html#WDmodel.io.read_params)
* **ntemps** ([*int*](https://docs.python.org/3/library/functions.html#int)) – The number of temperatures chains were run at. Default is `1.`
* **nwalkers** ([*int*](https://docs.python.org/3/library/functions.html#int)) – The number of [Goodman and Weare walkers](http://msp.org/camcos/2010/5-1/p04.xhtml) used in the fit. Default is `300`.
* **nprod** ([*int*](https://docs.python.org/3/library/functions.html#int)) – The number of production steps in the Markov-Chain. Default is `1000`.
* **discard** ([*int*](https://docs.python.org/3/library/functions.html#int)) – percentage of nprod steps from the start of the chain to discard in analyzing samples
|
| Returns: | * **mcmc_params** (*dict*) – The output parameter dictionary with updated parameter estimates, errors and a scale. Same format as `params`.
* **out_samples** (*array-like*) – The flattened Markov Chain with the parameter positions with the first
`%discard` tossed.
* **out_samples_lnprob** (*array-like*) – The flattened log of the posterior corresponding to the positions in
`samples` with the first `%discard` samples tossed.
|
See also
[`fit_model()`](#WDmodel.fit.fit_model)
`WDmodel.fit.``hyper_param_guess`(*spec*, *phot*, *model*, *pbs*, *params*)[[source]](_modules/WDmodel/fit.html#hyper_param_guess)[¶](#WDmodel.fit.hyper_param_guess)
Makes a guess for the parameter `mu` after the initial fit by
[`quick_fit_spec_model()`](#WDmodel.fit.quick_fit_spec_model)
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **phot** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The photometry of `objname` with `dtype=[('pb', 'str'), ('mag', '<f8'), ('mag_err', '<f8')]`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
* **pbs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – Passband dictionary containing the passbands corresponding to `phot.pb` and generated by [`WDmodel.passband.get_pbmodel()`](index.html#WDmodel.passband.get_pbmodel).
* **params** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – A parameter dict such as that produced by
[`WDmodel.io.read_params()`](index.html#WDmodel.io.read_params)
|
| Returns: | **out_params** – The output parameter dictionary with an initial guess for `mu` |
| Return type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
Notes
Uses the initial guess of parameters from the spectrum fit by
[`quick_fit_spec_model()`](#WDmodel.fit.quick_fit_spec_model) to construct an initial guess of the SED, and computes `mu` (which looks like a distance modulus, but also includes a normalization for the distance and radius of the DA white dwarf) as the median difference between the observed and synthetic photometry.
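A worked illustration of the `mu` guess described above, using made-up magnitudes:
```
import numpy as np

obs_mag = np.array([15.12, 15.30, 15.55])      # observed magnitudes (made up)
syn_mag = np.array([3.10, 3.31, 3.52])         # synthetic magnitudes of the bare model SED
mu_guess = np.median(obs_mag - syn_mag)        # initial guess for mu, ~12.02 here
```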
`WDmodel.fit.``orig_cut_lines`(*spec*, *model*)[[source]](_modules/WDmodel/fit.html#orig_cut_lines)[¶](#WDmodel.fit.orig_cut_lines)
Cut out the hydrogen Balmer spectral lines defined in
[`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) from the spectrum.
The masking of Balmer lines is basic, and not very effective at high surface gravity or low temperature, or in the presence of non hydrogen lines. It’s used to get a roughly masked set of data suitable for continuum detection, and is effective in the context of our ground-based spectroscopic followup campaign for HST GO 12967 and 13711 programs.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
|
| Returns: | * **linedata** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The observations of the spectrum corresponding to the hydrogen Balmer lines.
Has `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8'), ('line_mask', 'i4'), ('line_ind', 'i4')]`
* **continuumdata** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The continuum data. Has `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
|
Notes
Does a coarse cut to remove hydrogen absorption lines from DA white dwarf spectra. The line centroids and widths are fixed and defined with the model grid. This is insufficient, and particularly at high surface gravity and low temperatures, the lines are blended. This routine is intended to provide a rough starting point for the process of continuum determination.
`WDmodel.fit.``polyfit_continuum`(*continuumdata*, *wave*)[[source]](_modules/WDmodel/fit.html#polyfit_continuum)[¶](#WDmodel.fit.polyfit_continuum)
Fit a polynomial to the DA white dwarf continuum to normalize it - purely for visualization purposes
| Parameters: | * **continuumdata** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The continuum data.
Must have `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
Produced by running the spectrum through
[`WDmodel.fit.orig_cut_lines()`](#WDmodel.fit.orig_cut_lines) and extracting the pre-defined lines in the [`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance.
* **wave** (*array-like*) – The full spectrum wavelength array on which to interpolate the continuum model
|
| Returns: | **cont_model** – The continuum model. Must have `dtype=[('wave', '<f8'), ('flux', '<f8')]` |
| Return type: | [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
Notes
Roughly follows the algorithm described by the SDSS SSPP for a global continuum fit. Fits a red side and blue side at 5500 A separately to get a smooth polynomial representation. The red side uses a degree 5 polynomial and the blue side uses a degree 9 polynomial. Then “splices”
them together - I don’t actually know how SDSS does this, but we simply assert the two bits are the same function - and fits the full continuum to a degree 9 polynomial.
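A hedged numpy sketch of the algorithm as described: fit the blue side (degree 9) and the red side (degree 5), split at 5500 Å, splice them, and refit the whole continuum with a degree-9 polynomial. This mirrors the prose only; the package's own implementation is `polyfit_continuum`.
```
import numpy as np

def sketch_continuum(cwave, cflux, wave, split=5500.0):
    """Toy version of the described blue/red polynomial continuum fit."""
    blue = cwave < split
    red = ~blue
    p_blue = np.polyfit(cwave[blue], cflux[blue], 9)   # degree-9 blue side
    p_red = np.polyfit(cwave[red], cflux[red], 5)      # degree-5 red side
    spliced = np.where(cwave < split,
                       np.polyval(p_blue, cwave),
                       np.polyval(p_red, cwave))
    p_full = np.polyfit(cwave, spliced, 9)             # refit the spliced continuum
    return np.polyval(p_full, wave)
```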
`WDmodel.fit.``pre_process_spectrum`(*spec*, *bluelimit*, *redlimit*, *model*, *params*, *lamshift=0.0*, *vel=0.0*, *rebin=1*, *blotch=False*, *rescale=False*)[[source]](_modules/WDmodel/fit.html#pre_process_spectrum)[¶](#WDmodel.fit.pre_process_spectrum)
Pre-process the spectrum before fitting
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **bluelimit** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* [*float*](https://docs.python.org/3/library/functions.html#float)) – Trim wavelengths bluer than this limit. Uses the bluest wavelength of spectrum if `None`
* **redlimit** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* [*float*](https://docs.python.org/3/library/functions.html#float)) – Trim wavelengths redder than this limit. Uses the reddest wavelength of spectrum if `None`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
* **params** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – A parameter dict such as that produced by
[`WDmodel.io.read_params()`](index.html#WDmodel.io.read_params)
Will be modified to adjust the limits of the spectrum normalization parameter `dl` if `rescale` is set
* **lamshift** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Apply a flat wavelength shift to the spectrum. Useful if the target was not properly centered in the slit, and the shift is not correlated with wavelength. Default is `0`.
* **vel** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Apply a velocity shift to the spectrum. Default is `0`.
* **rebin** ([*int*](https://docs.python.org/3/library/functions.html#int)*,* *optional*) – Integer factor by which to rebin the spectrum.
Default is `1` (no rebinning).
* **blotch** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Attempt to remove cosmic rays and gaps from spectrum. Only to be used for quick look analysis at the telescope.
* **rescale** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Rescale the spectrum to make the median noise `~1`. Has no effect on fitted parameters except spectrum flux normalization parameter `dl`
but makes residual plots, histograms more easily interpretable as they can be compared to an `N(0, 1)` distribution.
|
| Returns: | **spec** – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]` |
| Return type: | [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
See also
[`orig_cut_lines()`](#WDmodel.fit.orig_cut_lines)
[`blotch_spectrum()`](#WDmodel.fit.blotch_spectrum)
[`rebin_spec_by_int_factor()`](#WDmodel.fit.rebin_spec_by_int_factor)
[`polyfit_continuum()`](#WDmodel.fit.polyfit_continuum)
`WDmodel.fit.``quick_fit_spec_model`(*spec*, *model*, *params*)[[source]](_modules/WDmodel/fit.html#quick_fit_spec_model)[¶](#WDmodel.fit.quick_fit_spec_model)
Does a quick fit of the spectrum to get an initial guess of the fit parameters
Uses iminuit to do a rough diagonal fit - i.e. ignores covariance.
For simplicity, it also fixes FWHM and Rv (even when set to be fit).
Therefore, only teff, logg, av, dl are fit for (at most).
This isn’t robust, but it’s good enough for an initial guess.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
* **params** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – A parameter dict such as that produced by
[`WDmodel.io.read_params()`](index.html#WDmodel.io.read_params)
|
| Returns: | **migrad_params** – The output parameter dictionary with updated initial guesses stored in the `value` key. Same format as `params`. |
| Return type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
| Raises: | * [`RuntimeError`](https://docs.python.org/3/library/exceptions.html#RuntimeError) – If all of `teff, logg, av, dl` are set as fixed - there’s nothing to fit.
* [`RuntimeWarning`](https://docs.python.org/3/library/exceptions.html#RuntimeWarning) – If `minuit.Minuit.migrad()` or `minuit.Minuit.hesse()` indicate that the fit is unreliable
|
Notes
None of the starting values for the parameters may be `None` EXCEPT `c`.
This refines the starting guesses, and determines a reasonable value for `c`.
`WDmodel.fit.``rebin_spec_by_int_factor`(*spec*, *f=1*)[[source]](_modules/WDmodel/fit.html#rebin_spec_by_int_factor)[¶](#WDmodel.fit.rebin_spec_by_int_factor)
Rebins a spectrum by an integer factor f
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **f** ([*int*](https://docs.python.org/3/library/functions.html#int)*,* *optional*) – an integer factor to rebin the spectrum by. Default is `1` (no rebinning)
|
| Returns: | **rspec** – The rebinned spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]` |
| Return type: | [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
Notes
If the spectrum is not divisible by f, the edges are trimmed by discarding the remainder measurements from both ends. If the remainder itself is odd, the extra measurement is discarded from the blue side.
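A hedged sketch of the trimming and binning rules described above; how the package combines flux and uncertainty within a bin is not restated here, so the plain mean below is an assumption.
```
import numpy as np

def sketch_rebin(wave, flux, f=2):
    """Rebin by integer factor f, trimming the remainder as described above."""
    r = wave.size % f
    trim_red = r // 2
    trim_blue = r - trim_red            # an odd remainder drops the extra point from the blue end
    stop = wave.size - trim_red
    w = wave[trim_blue:stop].reshape(-1, f).mean(axis=1)
    fl = flux[trim_blue:stop].reshape(-1, f).mean(axis=1)
    return w, fl
```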
##### WDmodel.io module[¶](#module-WDmodel.io)
I/O methods. All the submodules of the WDmodel package use this module for almost all I/O operations.
`WDmodel.io.``_read_ascii`(*filename*, ***kwargs*)[[source]](_modules/WDmodel/io.html#_read_ascii)[¶](#WDmodel.io._read_ascii)
Read ASCII files
Read space separated ASCII file, with column names provided on first line
(leading `#` optional). `kwargs` are passed along to
[`numpy.genfromtxt()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html#numpy.genfromtxt). Forces any string column data to be encoded in ASCII, rather than Unicode.
| Parameters: | * **filename** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Filename of the ASCII file. Column names must be provided on the first line.
* **kwargs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – Extra options, passed directly to [`numpy.genfromtxt()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html#numpy.genfromtxt)
|
| Returns: | **out** – Record array with the data. Field names correspond to column names in the file. |
| Return type: | [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
See also
[`numpy.genfromtxt()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html#numpy.genfromtxt)
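The behaviour described (space-separated columns, names on the first line with an optional leading `#`, ASCII string columns) matches a `numpy.genfromtxt` call along these lines; the snippet below is an illustration with made-up values, not the package source.
```
import io
import numpy as np

text = "# obj    g       dg\nGD71     13.032  0.004\n"   # made-up photometry-style table
out = np.genfromtxt(io.StringIO(text), names=True, dtype=None, encoding='ascii')
print(out['obj'], out['g'], out['dg'])
```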
`WDmodel.io.``copy_params`(*params*)[[source]](_modules/WDmodel/io.html#copy_params)[¶](#WDmodel.io.copy_params)
Returns a deep copy of a dictionary. Necessary to ensure that dictionaries that nest dictionaries are properly updated.
| Parameters: | **params** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict) *or* *Object*) – Any python object for which a deepcopy needs to be created. Typically a parameter dictionary such as that from
[`WDmodel.io.read_params()`](#WDmodel.io.read_params) |
| Returns: | **params** – A deepcopy of the object |
| Return type: | Object |
Notes
Simple wrapper around [`copy.deepcopy()`](https://docs.python.org/3/library/copy.html#copy.deepcopy)
`WDmodel.io.``get_filepath`(*infile*)[[source]](_modules/WDmodel/io.html#get_filepath)[¶](#WDmodel.io.get_filepath)
Returns the full path to a file. If the path is relative, it is converted to absolute. If this file does not exist, it is treated as a file within the [`WDmodel`](index.html#module-WDmodel) package. If that file does not exist, an error is raised.
| Parameters: | **infile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The name of the file to set the full path for |
| Returns: | **pkgfile** – The path to the file |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
| Raises: | [`IOError`](https://docs.python.org/3/library/exceptions.html#IOError) – If the `infile` could not be found at the specified location or inside the
[`WDmodel`](index.html#module-WDmodel) package. |
`WDmodel.io.``get_options`(*args*, *comm*)[[source]](_modules/WDmodel/io.html#get_options)[¶](#WDmodel.io.get_options)
Get command line options for the [`WDmodel`](index.html#module-WDmodel) fitter package
| Parameters: | * **args** (*array-like*) – list of the input command line arguments, typically from
[`sys.argv`](https://docs.python.org/3/library/sys.html#sys.argv)
* **comm** (None or `mpi4py.mpi.MPI` instance) – Used to communicate options to all child processes if running with mpi
|
| Returns: | * **args** (*Namespace*) – Parsed command line options
* **pool** (None or `emcee.utils.MPIPool`) – If running with MPI, the pool object is used to distribute the computations among the child processes
|
| Raises: | [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) – If any input value is invalid |
`WDmodel.io.``get_outfile`(*outdir*, *specfile*, *ext*, *check=False*, *redo=False*, *resume=False*)[[source]](_modules/WDmodel/io.html#get_outfile)[¶](#WDmodel.io.get_outfile)
Formats the output directory, spectrum filename, and an extension into an output filename.
| Parameters: | * **outdir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The output directory name for the output file
* **specfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The spectrum filename
* **ext** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The output file’s extension
* **check** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If `True`, check if the output file exists
* **redo** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If `False` and the output file already exists, an error is raised
* **resume** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If `False` and the output file already exists, an error is raised
|
| Returns: | **outfile** – The output filename |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
| Raises: | [`IOError`](https://docs.python.org/3/library/exceptions.html#IOError) – If `check` is `True`, `redo` and `resume` are `False`, and
`outfile` exists. |
Notes
We set the output file based on the spectrum name, since we can have multiple spectra per object.
If `outdir` is configured by [`set_objname_outdir_for_specfile()`](#WDmodel.io.set_objname_outdir_for_specfile) for
`specfile`, it’ll include the object name.
See also
[`set_objname_outdir_for_specfile()`](#WDmodel.io.set_objname_outdir_for_specfile)
`WDmodel.io.``get_params_from_argparse`(*args*)[[source]](_modules/WDmodel/io.html#get_params_from_argparse)[¶](#WDmodel.io.get_params_from_argparse)
Converts an [`argparse.Namespace`](https://docs.python.org/3/library/argparse.html#argparse.Namespace) into an ordered parameter dictionary.
| Parameters: | **args** ([`argparse.Namespace`](https://docs.python.org/3/library/argparse.html#argparse.Namespace)) – The parsed command-line options from [`WDmodel.io.get_options()`](#WDmodel.io.get_options) |
| Returns: | **params** – The parameter dictionary |
| Return type: | [`collections.OrderedDict`](https://docs.python.org/3/library/collections.html#collections.OrderedDict) |
| Raises: | [`RuntimeError`](https://docs.python.org/3/library/exceptions.html#RuntimeError) – If the format of the [`argparse.Namespace`](https://docs.python.org/3/library/argparse.html#argparse.Namespace) is invalid, or if a parameter is `fixed` but its `value` is `None`, or if a parameter `value` is out of `bounds`. |
Notes
Assumes that the argument parser options were named
* `<param>_value` : Value of the parameter (float or `None`)
* `<param>_fix` : Bool specifying if the parameter is fixed
* `<param>_bounds` : tuple with lower limit and upper limit
where <param> is one of `WDmodel.io._PARAMETER_NAMES`
See also
[`WDmodel.io.read_params()`](#WDmodel.io.read_params)
[`WDmodel.io.get_options()`](#WDmodel.io.get_options)
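An illustrative `argparse` setup for a single parameter following the naming convention in the Notes above; the package's actual command-line flags and defaults may differ, so treat the names and values here as assumptions.
```
import argparse

parser = argparse.ArgumentParser()
# One parameter, 'teff', following the <param>_value / <param>_fix / <param>_bounds convention
parser.add_argument('--teff_value', type=float, default=None)
parser.add_argument('--teff_fix', action='store_true')
parser.add_argument('--teff_bounds', type=float, nargs=2, default=None)

args = parser.parse_args(['--teff_value', '40000'])
print(args.teff_value, args.teff_fix, args.teff_bounds)
```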
`WDmodel.io.``get_phot_for_obj`(*objname*, *filename*)[[source]](_modules/WDmodel/io.html#get_phot_for_obj)[¶](#WDmodel.io.get_phot_for_obj)
Gets the measured photometry for an object from a photometry lookup table.
| Parameters: | * **objname** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Object name to look for photometry for
* **filename** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The photometry lookup table filename
|
| Returns: | **phot** – The photometry of `objname` with `dtype=[('pb', 'str'), ('mag', '<f8'), ('mag_err', '<f8')]` |
| Return type: | [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
| Raises: | * [`RuntimeError`](https://docs.python.org/3/library/exceptions.html#RuntimeError) – If there are no matches in the photometry lookup file or if there are multiple matches for an object in the photometry lookup file
* [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) – If the photometry or the photometry uncertainty values are not finite or if the photometry uncertainties are `<= 0`
|
Notes
The lookup file must be readable by [`read_phot()`](#WDmodel.io.read_phot)
The column containing the object name `objname` is expected to be named `obj`.
If the columns with magnitudes are named `<passband>`, the columns with the corresponding magnitude errors must be named `d<passband>`.
`WDmodel.io.``get_pkgfile`(*infile*)[[source]](_modules/WDmodel/io.html#get_pkgfile)[¶](#WDmodel.io.get_pkgfile)
Returns the full path to a file inside the [`WDmodel`](index.html#module-WDmodel) package
| Parameters: | **infile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The name of the file to set the full package filename for |
| Returns: | **pkgfile** – The path to the file within the package. |
| Return type: | [str](https://docs.python.org/3/library/stdtypes.html#str) |
| Raises: | [`IOError`](https://docs.python.org/3/library/exceptions.html#IOError) – If the `pkgfile` could not be found inside the [`WDmodel`](index.html#module-WDmodel) package. |
Notes
This allows the package to be installed anywhere, and the code to still determine the location to a file included with the package, such as the model grid file.
`WDmodel.io.``get_spectrum_resolution`(*specfile*, *spectable*, *fwhm=None*, *lamshift=None*)[[source]](_modules/WDmodel/io.html#get_spectrum_resolution)[¶](#WDmodel.io.get_spectrum_resolution)
Gets the measured FWHM from a spectrum lookup table.
| Parameters: | * **specfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The spectrum filename
* **spectable** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The spectrum FWHM lookup table filename
* **fwhm** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* [*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – If specified, this overrides the resolution provided in the lookup table. If `None`, looks up the resolution from `spectable`.
* **lamshift** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* [*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – If specified, this overrides the wavelength shift provided in the lookup table. If `None`, looks up the wavelength shift from `spectable`.
|
| Returns: | * **fwhm** (*float*) – The FWHM of the spectrum file. This is typically used as an initial guess to the [`WDmodel.fit`](index.html#module-WDmodel.fit) fitter routines.
* **lamshift** (*float*) – The wavelength shift to apply additively to the spectrum. This is not a fit parameter, and is treated as an input
|
| Raises: | [`RuntimeWarning`](https://docs.python.org/3/library/exceptions.html#RuntimeWarning) – If the `spectable` cannot be read, or the `specfile` name indicates that this is a test, or if there are no or multiple matches for
`specfile` in the `spectable` |
Notes
If the `specfile` is not found, it returns a default resolution of
`5` Angstroms, appropriate for the instruments used in our program.
Note that there's some hackish internal name fixing since T. Matheson's table spectrum names didn't match the spectrum filenames.
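A usage sketch with placeholder filenames; the second call shows overriding the tabulated FWHM:

```
from WDmodel import io

# hypothetical spectrum and FWHM lookup table filenames
fwhm, lamshift = io.get_spectrum_resolution('spectra/GD71.flm', 'spectable.dat')

# override the tabulated resolution with 4.5 Angstroms, but keep the tabulated wavelength shift
fwhm, lamshift = io.get_spectrum_resolution('spectra/GD71.flm', 'spectable.dat', fwhm=4.5)
```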
`WDmodel.io.``make_outdirs`(*dirname*, *redo=False*, *resume=False*)[[source]](_modules/WDmodel/io.html#make_outdirs)[¶](#WDmodel.io.make_outdirs)
Makes output directories
| Parameters: | * **dirname** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The output directory name to create
* **redo** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If `False` the directory will not be created if it already exists, and an error is raised
* **resume** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If `False` the directory will not be created if it already exists, and an error is raised
|
| Returns: | **None** – If the output directory `dirname` is successfully created |
| Return type: | [None](https://docs.python.org/3/library/constants.html#None) |
| Raises: | * [`IOError`](https://docs.python.org/3/library/exceptions.html#IOError) – If the output directory exists
* [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) – If the output directory could not be created
|
Notes
If the options are parsed by [`get_options()`](#WDmodel.io.get_options) then only one of
`redo` or `resume` can be set, as the options are mutually exclusive. If `redo` is set, the fit is redone from scratch, while
`resume` restarts the MCMC sampling from the last saved chain position.
`WDmodel.io.``read_fit_inputs`(*input_file*)[[source]](_modules/WDmodel/io.html#read_fit_inputs)[¶](#WDmodel.io.read_fit_inputs)
Read the fit input HDF5 file produced by [`write_fit_inputs()`](#WDmodel.io.write_fit_inputs) and return [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) instances with the data.
| Parameters: | **input_file** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The HDF5 fit inputs filename |
| Returns: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **cont_model** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The continuum model. Has the same structure as `spec`.
* **linedata** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The observations of the spectrum corresponding to the hydrogen Balmer lines. Has `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8'), ('line_mask', 'i4')]`
* **continuumdata** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – Data used to generate the continuum model. Has the same structure as
`spec`.
* **phot** (None or [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – `None` or the photometry with `dtype=[('pb', 'str'), ('mag', '<f8'), ('mag_err', '<f8')]`
* **fit_config** (*dict*) –
Dictionary with various keys needed to configure the fitter
+ `rvmodel` : `{'ccm89','od94','f99', 'custom'}` - Parametrization of the reddening law.
+ `covtype` : `{'Matern32', 'SHO', 'Exp', 'White'}` - kernel type used to parametrize the covariance
+ `coveps` : float - Matern32 kernel precision
+ `phot_dispersion` : float - Excess dispersion to add in quadrature with photometric uncertainties
+ `scale_factor` : float - Flux scale factor
|
| Raises: | * [`IOError`](https://docs.python.org/3/library/exceptions.html#IOError) – If all the fit inputs could not be restored from the HDF5 `input_file`
* [`RuntimeWarning`](https://docs.python.org/3/library/exceptions.html#RuntimeWarning) – If the `input_file` includes a `phot` group, but the data cannot be loaded.
|
See also
[`write_fit_inputs()`](#WDmodel.io.write_fit_inputs)
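A sketch of restoring the fit inputs; the HDF5 filename is a placeholder, and the unpacking follows the return order documented above:

```
from WDmodel import io

spec, cont_model, linedata, continuumdata, phot, fit_config = \
    io.read_fit_inputs('out/GD71/GD71_inputs.hdf5')  # hypothetical path

print(fit_config['rvmodel'], fit_config['covtype'], fit_config['coveps'])
if phot is not None:
    print(phot.pb, phot.mag, phot.mag_err)
```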
`WDmodel.io.``read_full_model`(*input_file*)[[source]](_modules/WDmodel/io.html#read_full_model)[¶](#WDmodel.io.read_full_model)
Read the full SED model from an output file.
| Parameters: | **input_file** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Input HDF5 SED model filename |
| Returns: | **spec** – Record array with the model SED.
Has `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]` |
| Return type: | [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
| Raises: | * [`KeyError`](https://docs.python.org/3/library/exceptions.html#KeyError) – If any of `wave`, `flux` or `flux_err` is not found in the file
* [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) – If any value is not finite or if `flux` or `flux_err` have any values `<= 0`
|
`WDmodel.io.``read_mcmc`(*input_file*)[[source]](_modules/WDmodel/io.html#read_mcmc)[¶](#WDmodel.io.read_mcmc)
Read the saved HDF5 Markov chain file and return samples, sample log probabilities and chain parameters
| Parameters: | **input_file** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The HDF5 Markov chain filename |
| Returns: | * **samples** (*array-like*) – The model parameter sample chain
* **samples_lnprob** (*array-like*) – The log posterior corresponding to each of the `samples`
* **chain_params** (*dict*) –
The chain parameter dictionary
+ `param_names` : list - list of model parameter names
+ `samptype` : `{'ensemble','pt','gibbs'}` - the sampler to use
+ `ntemps` : int - the number of chain temperatures
+ `nwalkers` : int - the number of Goodman & Weare walkers
+ `nprod` : int - the number of production steps of the chain
+ `ndim` : int - the number of model parameters in the chain
+ `thin` : int - the chain thinning if any
+ `everyn` : int - the sparse spectrum sampling step size, i.e. only every nth observation was used
+ `ascale` : float - the proposal scale for the sampler
|
| Raises: | [`IOError`](https://docs.python.org/3/library/exceptions.html#IOError) – If a key in the `fit_config` output is missing |
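A sketch of loading a saved chain (placeholder filename):

```
from WDmodel import io

samples, samples_lnprob, chain_params = io.read_mcmc('out/GD71/GD71_mcmc.hdf5')  # hypothetical path
print(chain_params['param_names'], chain_params['nwalkers'], chain_params['nprod'])
print(samples.shape, samples_lnprob.shape)
```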
`WDmodel.io.``read_model_grid`(*grid_file=None*, *grid_name=None*)[[source]](_modules/WDmodel/io.html#read_model_grid)[¶](#WDmodel.io.read_model_grid)
Read the Tlusty/Hubeny grid file
| Parameters: | * **grid_file** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* [*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Filename of the Tlusty model grid HDF5 file. If `None` reads the
`TlustyGrids.hdf5` file included with the [`WDmodel`](index.html#module-WDmodel)
package.
* **grid_name** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* [*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Name of the group name in the HDF5 file to read the grid from. If
`None` uses `default`
|
| Returns: | * **grid_file** (*str*) – Filename of the HDF5 grid file
* **grid_name** (*str*) – Name of the group within the HDF5 grid file with the grid arrays
* **wave** (*array-like*) – The wavelength array of the grid with shape `(nwave,)`
* **ggrid** (*array-like*) – The surface gravity array of the grid with shape `(ngrav,)`
* **tgrid** (*array-like*) – The temperature array of the grid with shape `(ntemp,)`
* **flux** (*array-like*) – The DA white dwarf model atmosphere flux array of the grid.
Has shape `(nwave, ngrav, ntemp)`
|
Notes
There are no easy command line options to change this deliberately because changing the grid file essentially changes the entire model,
and should not be done lightly, without careful comparison of the grids to quantify differences.
See also
[`WDmodel.WDmodel`](index.html#module-WDmodel.WDmodel)
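A sketch of loading the default grid bundled with the package:

```
from WDmodel import io

grid_file, grid_name, wave, ggrid, tgrid, flux = io.read_model_grid()
# flux has shape (nwave, ngrav, ntemp)
print(grid_file, grid_name, wave.shape, ggrid.shape, tgrid.shape, flux.shape)
```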
`WDmodel.io.``read_params`(*param_file=None*)[[source]](_modules/WDmodel/io.html#read_params)[¶](#WDmodel.io.read_params)
Read a JSON file that configures the default guesses and bounds for the parameters, as well as if they should be fixed.
| Parameters: | **param_file** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)*,* *optional*) – The name of the input parameter file. If not specified, the default file provided with the package, `WDmodel_param_defaults.json`, is read. |
| Returns: | **params** – The dictionary with the parameter `values`, `bounds`, `scale` and if `fixed`. See notes for more detailed information on dictionary format and `WDmodel_param_defaults.json` for an example file for
`param_file`. |
| Return type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
Notes
params is a dict with the parameter names, as defined in
`WDmodel.io._PARAMETER_NAMES`, as keys
Each key must have a dictionary with keys:
* `value` : value
* `fixed` : a bool specifying if the parameter is fixed (`True`) or allowed to vary (`False`)
* `scale` : a scale parameter used to set the step size in this dimension
* `bounds` : An upper and lower limit on parameter values
The default bounds are set by the grids available for the DA White Dwarf atmospheres, and by reasonable plausible ranges for the other parameters. Don’t muck with them unless you really have good reason to.
This routine does not do any checking of types, values or bounds. This is done by [`WDmodel.io.get_params_from_argparse()`](#WDmodel.io.get_params_from_argparse) before the fit. If you set up the fit using an external code, you should check these values.
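A sketch of reading the default parameter file and adjusting an entry; `av` is used here assuming it is among the defined parameter names (any other parameter is analogous):

```
from WDmodel import io

params = io.read_params()        # reads WDmodel_param_defaults.json by default
params['av']['value'] = 0.05     # change the initial guess (assumes 'av' is a parameter)
params['av']['fixed'] = False    # leave it free in the fit
print(params['av']['bounds'], params['av']['scale'])
```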
`WDmodel.io.``read_pbmap`(*filename*, ***kwargs*)[¶](#WDmodel.io.read_pbmap)
Read passband obsmode mapping table - wraps [`_read_ascii()`](#WDmodel.io._read_ascii)
`WDmodel.io.``read_phot`(*filename*, ***kwargs*)[¶](#WDmodel.io.read_phot)
Read photometry - wraps [`_read_ascii()`](#WDmodel.io._read_ascii)
`WDmodel.io.``read_reddening`(*filename*, ***kwargs*)[¶](#WDmodel.io.read_reddening)
Read J. Holberg’s custom reddening function - wraps [`_read_ascii()`](#WDmodel.io._read_ascii)
`WDmodel.io.``read_spec`(*filename*, ***kwargs*)[[source]](_modules/WDmodel/io.html#read_spec)[¶](#WDmodel.io.read_spec)
Read a spectrum
Wraps [`_read_ascii()`](#WDmodel.io._read_ascii), adding testing of the input arrays to check if the elements are finite, and if the errors and flux are strictly positive.
| Parameters: | * **filename** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Filename of the ASCII file. Must have columns `wave`, `flux`,
`flux_err`
* **kwargs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – Extra options, passed directly to [`numpy.genfromtxt()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html#numpy.genfromtxt)
|
| Returns: | **spec** – Record array with the spectrum data.
Has `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]` |
| Return type: | [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
| Raises: | [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) – If any value is not finite or if `flux` or `flux_err` have any values `<= 0` |
See also
[`numpy.genfromtxt()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html#numpy.genfromtxt)
[`_read_ascii()`](#WDmodel.io._read_ascii)
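A sketch with a placeholder spectrum filename:

```
from WDmodel import io

# the ASCII file must have wave, flux and flux_err columns; the path is hypothetical
spec = io.read_spec('spectra/GD71.flm')
print(spec.wave.min(), spec.wave.max(), spec.flux_err.mean())
```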
`WDmodel.io.``read_spectable`(*filename*, ***kwargs*)[¶](#WDmodel.io.read_spectable)
Read spectrum FWHM table - wraps [`_read_ascii()`](#WDmodel.io._read_ascii)
`WDmodel.io.``set_objname_outdir_for_specfile`(*specfile*, *outdir=None*, *outroot=None*, *redo=False*, *resume=False*, *nocreate=False*)[[source]](_modules/WDmodel/io.html#set_objname_outdir_for_specfile)[¶](#WDmodel.io.set_objname_outdir_for_specfile)
Sets the short human readable object name and output directory
| Parameters: | * **specfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – The spectrum filename
* **outdir** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* [*str*](https://docs.python.org/3/library/stdtypes.html#str)*,* *optional*) – The output directory name to create. If `None` this is set based on `specfile`
* **outroot** ([*None*](https://docs.python.org/3/library/constants.html#None) *or* [*str*](https://docs.python.org/3/library/stdtypes.html#str)*,* *optional*) – The output root directory under which to store the fits. If `None` the default is `'out'`
* **redo** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If `False` the directory will not be created if it already exists, and an error is raised
* **resume** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If `False` the directory will not be created if it already exists, and an error is raised
* **nocreate** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – If `True` then creation of output directories is not even attempted
|
| Returns: | * **objname** (*str*) – The human readable object name based on the spectrum
* **dirname** (*str*) – The output directory name created if successful
|
See also
[`make_outdirs()`](#WDmodel.io.make_outdirs)
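A sketch showing how the object name and output directory are derived from a (placeholder) spectrum filename:

```
from WDmodel import io

objname, dirname = io.set_objname_outdir_for_specfile(
    'spectra/GD71_blue.flm',   # hypothetical spectrum file
    outroot='out', redo=True)
print(objname, dirname)
```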
`WDmodel.io.``write_fit_inputs`(*spec*, *phot*, *cont_model*, *linedata*, *continuumdata*, *rvmodel*, *covtype*, *coveps*, *phot_dispersion*, *scale_factor*, *outfile*)[[source]](_modules/WDmodel/io.html#write_fit_inputs)[¶](#WDmodel.io.write_fit_inputs)
Save all the inputs to the fitter to a file
This file is enough to resume the fit with the same inputs, redo the output, or restore from a failure.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **phot** (None or [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – `None` or the photometry with `dtype=[('pb', 'str'), ('mag', '<f8'), ('mag_err', '<f8')]`
* **cont_model** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The continuum model. Must have the same structure as `spec`.
Produced by [`WDmodel.fit.pre_process_spectrum()`](index.html#WDmodel.fit.pre_process_spectrum).
Used by [`WDmodel.viz`](index.html#module-WDmodel.viz)
* **linedata** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The observations of the spectrum corresponding to the hydrogen Balmer lines. Must have `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8'), ('line_mask', 'i4')]`
Produced by [`WDmodel.fit.pre_process_spectrum()`](index.html#WDmodel.fit.pre_process_spectrum)
Used by [`WDmodel.viz`](index.html#module-WDmodel.viz)
* **continuumdata** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – Data used to generate the continuum model. Must have the same structure as `spec`. Produced by [`WDmodel.fit.pre_process_spectrum()`](index.html#WDmodel.fit.pre_process_spectrum)
* **rvmodel** (`{'ccm89','od94','f99', 'custom'}`) – Parametrization of the reddening law. Used to initialize
[`WDmodel.WDmodel.WDmodel()`](index.html#WDmodel.WDmodel.WDmodel) instance.
* **covtype** (`{'Matern32', 'SHO', 'Exp', 'White'}`) – stationary kernel type used to parametrize the covariance in
[`WDmodel.covariance.WDmodel_CovModel`](index.html#WDmodel.covariance.WDmodel_CovModel)
* **coveps** ([*float*](https://docs.python.org/3/library/functions.html#float)) – If `covtype` is `'Matern32'` a
[`celerite.terms.Matern32Term`](https://celerite.readthedocs.io/en/stable/python/kernel/#celerite.terms.Matern32Term) is used to approximate a Matern32 kernel with precision coveps.
* **phot_dispersion** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Excess photometric dispersion to add in quadrature with the photometric uncertainties `phot.mag_err` in
`WDmodel.likelihood.WDmodel_Likelihood`.
* **scale_factor** ([*float*](https://docs.python.org/3/library/functions.html#float)) – Factor by which the flux must be scaled. Critical to getting the right uncertainties.
* **outfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Output HDF5 filename
|
Notes
The outputs are stored in an HDF5 file with groups
* `spec` - storing the spectrum and `scale_factor`
* `cont_model` - stores the continuum model
* `linedata` - stores the hydrogen Balmer line data
* `continuumdata` - stores the data used to generate `cont_model`
* `fit_config` - stores `covtype`, `coveps` and `rvmodel` as attributes
* `phot` - only created if `phot` is not `None`, stores `phot`, `phot_dispersion`
`WDmodel.io.``write_full_model`(*full_model*, *outfile*)[[source]](_modules/WDmodel/io.html#write_full_model)[¶](#WDmodel.io.write_full_model)
Write the full SED model to an output file.
| Parameters: | * **full_model** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The SED model with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **outfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Output HDF5 SED model filename
|
Notes
The output is written into a group `model` with datasets
* `wave` : array-like - the SED model wavelength
* `flux` : array-like - the SED model flux
* `flux_err` : array-like - the SED model flux uncertainty
`WDmodel.io.``write_params`(*params*, *outfile*)[[source]](_modules/WDmodel/io.html#write_params)[¶](#WDmodel.io.write_params)
Dumps the parameter dictionary params to a JSON file
| Parameters: | * **params** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – A parameter dict such as that produced by
[`WDmodel.io.read_params()`](#WDmodel.io.read_params)
* **outfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Output filename to save the parameter dict as a JSON file.
|
Notes
params is a dict with the parameter names, as defined in
`WDmodel.io._PARAMETER_NAMES`, as keys
Each key must have a dictionary with keys:
* `value` : value
* `fixed` : a bool specifying if the parameter is fixed (`True`) or allowed to vary (`False`)
* `scale` : a scale parameter used to set the step size in this dimension
* `bounds` : An upper and lower limit on parameter values
Any extra keys are simply written as-is. JSON doesn't necessarily preserve ordering; the expected format is imposed when the file is read back by [`WDmodel.io.read_params()`](#WDmodel.io.read_params)
See also
[`WDmodel.io.read_params()`](#WDmodel.io.read_params)
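A sketch of a read-modify-write round trip; the output path is a placeholder:

```
from WDmodel import io

params = io.read_params()                              # default parameter dictionary
params['av']['fixed'] = True                           # e.g. freeze extinction (assumes 'av' is a parameter)
io.write_params(params, 'out/GD71/GD71_params.json')   # hypothetical output path
```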
`WDmodel.io.``write_phot_model`(*phot*, *model_mags*, *outfile*)[[source]](_modules/WDmodel/io.html#write_phot_model)[¶](#WDmodel.io.write_phot_model)
Write the photometry, model photometry and residuals to an output file.
| Parameters: | * **phot** (None or [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – `None` or the photometry with `dtype=[('pb', 'str'), ('mag', '<f8'), ('mag_err', '<f8')]`
* **model_mags** (None or [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The model magnitudes.
Has `dtype=[('pb', 'str'), ('mag', '<f8')]`
* **outfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Output space-separated text filename
|
Notes
The data is saved to a space-separated ASCII text file with 6 decimal places of precision.
The order of the columns is
* `pb` : array-like - the observation’s passband
* `mag` : array-like - the observed magnitude
* `mag_err` : array-like - the observed magnitude uncertainty
* `model_mag` : array-like - the model magnitude
* `res_mag` : array-like - the magnitude residual
`WDmodel.io.``write_spectrum_model`(*spec*, *model_spec*, *outfile*)[[source]](_modules/WDmodel/io.html#write_spectrum_model)[¶](#WDmodel.io.write_spectrum_model)
Write the spectrum and the model spectrum and residuals to an output file.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **model_spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The model spectrum.
Has `dtype=[('wave', '<f8'), ('flux', '<f8'), ('norm_flux', '<f8'), ('flux_err', '<f8')]`
* **outfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Output space-separated text filename
|
Notes
The data is saved to a space-separated ASCII text file with 8 decimal places of precision.
The order of the columns is
* `wave` : array-like - the spectrum wavelength
* `flux` : array-like - the observed flux
* `flux_err` : array-like - the observed flux uncertainty
* `norm_flux` : array-like - the model flux without the Gaussian process covariance model
* `model_flux` : array-like - the model flux
* `model_flux_err` : array-like - the model flux uncertainty
* `res_flux` : array-like - the flux residual
##### WDmodel.likelihood module[¶](#module-WDmodel.likelihood)
Classes defining the likelihood and the posterior probability of the model given the data
*class* `WDmodel.likelihood.``WDmodel_Posterior`(*spec*, *phot*, *model*, *covmodel*, *pbs*, *lnlike*, *pixel_scale=1.0*, *phot_dispersion=0.0*)[[source]](_modules/WDmodel/likelihood.html#WDmodel_Posterior)[¶](#WDmodel.likelihood.WDmodel_Posterior)
Bases: [`object`](https://docs.python.org/3/library/functions.html#object)
Class defining the posterior probability of the model given the data
An instance of this class is used to store the data and model, and evaluate the likelihood and prior to compute the posterior.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **phot** (None or [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The photometry with `dtype=[('pb', 'str'), ('mag', '<f8'), ('mag_err', '<f8')]`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
* **covmodel** ([`WDmodel.covariance.WDmodel_CovModel`](index.html#WDmodel.covariance.WDmodel_CovModel) instance) – The parametrized model for the covariance of the spectrum `spec`
* **pbs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – Passband dictionary containing the passbands corresponding to
`phot.pb` and generated by [`WDmodel.passband.get_pbmodel()`](index.html#WDmodel.passband.get_pbmodel).
* **lnlike** (`WDmodel_Likelihood` instance) – Instance of the likelihood function class, such as that produced by
[`WDmodel.likelihood.setup_likelihood()`](#WDmodel.likelihood.setup_likelihood)
* **pixel_scale** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Jacobian of the transformation between wavelength in Angstrom and pixels. In principle, this should be a vector, but virtually all spectral reduction packages resample the spectrum onto a uniform wavelength scale that is close to the native pixel scale of the spectrograph. Default is `1.`
* **phot_dispersion** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Excess photometric dispersion to add in quadrature with the photometric uncertainties `phot.mag_err`. Use if the errors are grossly underestimated. Default is `0.`
|
`spec`[¶](#WDmodel.likelihood.WDmodel_Posterior.spec)
The spectrum with `dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
| Type: | [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
`wave_scale`[¶](#WDmodel.likelihood.WDmodel_Posterior.wave_scale)
length of the wavelength array `wave` in Angstroms
| Type: | [float](https://docs.python.org/3/library/functions.html#float) |
`phot`[¶](#WDmodel.likelihood.WDmodel_Posterior.phot)
The photometry with `dtype=[('pb', 'str'), ('mag', '<f8'), ('mag_err', '<f8')]`
| Type: | None or [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
`model`[¶](#WDmodel.likelihood.WDmodel_Posterior.model)
The DA White Dwarf SED model generator
| Type: | [`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance |
`covmodel`[¶](#WDmodel.likelihood.WDmodel_Posterior.covmodel)
The parametrized model for the covariance of the spectrum `spec`
| Type: | [`WDmodel.covariance.WDmodel_CovModel`](index.html#WDmodel.covariance.WDmodel_CovModel) instance |
`pbs`[¶](#WDmodel.likelihood.WDmodel_Posterior.pbs)
Passband dictionary containing the passbands corresponding to
`phot.pb` and generated by [`WDmodel.passband.get_pbmodel()`](index.html#WDmodel.passband.get_pbmodel).
| Type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
`_lnlike`[¶](#WDmodel.likelihood.WDmodel_Posterior._lnlike)
Instance of the likelihood function class, such as that produced by
[`WDmodel.likelihood.setup_likelihood()`](#WDmodel.likelihood.setup_likelihood)
| Type: | `WDmodel_Likelihood` instance |
`pixel_scale`[¶](#WDmodel.likelihood.WDmodel_Posterior.pixel_scale)
Jacobian of the transformation between wavelength in Angstrom and pixels. In principle, this should be a vector, but virtually all spectral reduction packages resample the spectrum onto a uniform wavelength scale that is close to the native pixel scale of the spectrograph. Default is `1.`
| Type: | [float](https://docs.python.org/3/library/functions.html#float) |
`phot_dispersion`[¶](#WDmodel.likelihood.WDmodel_Posterior.phot_dispersion)
Excess photometric dispersion to add in quadrature with the photometric uncertainties `phot.mag_err`. Use if the errors are grossly underestimated. Default is `0.`
| Type: | [float](https://docs.python.org/3/library/functions.html#float), optional |
`p0`[¶](#WDmodel.likelihood.WDmodel_Posterior.p0)
initial values of all the model parameters, including fixed parameters
| Type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
| Returns: | **lnpost** – It is this instance that is passed to the samplers/optimizers in the
[`WDmodel.fit`](index.html#module-WDmodel.fit) module. Those methods evaluate the posterior probability of the model parameters given the data. |
| Return type: | [`WDmodel_Posterior`](#WDmodel.likelihood.WDmodel_Posterior) instance |
Notes
Wraps [`celerite.modeling.Model.log_prior()`](https://celerite.readthedocs.io/en/stable/python/modeling/#celerite.modeling.Model.log_prior) which imposes a bounds check and returns `-inf` if out of bounds; this is not an issue for the samplers used in the methods in [`WDmodel.fit`](index.html#module-WDmodel.fit).
`__call__`(*theta*, *prior=False*, *likelihood=False*)[[source]](_modules/WDmodel/likelihood.html#WDmodel_Posterior.__call__)[¶](#WDmodel.likelihood.WDmodel_Posterior.__call__)
Evaluates the log posterior of the model parameters given the data
| Parameters: | * **theta** (*array-like*) – Vector of the non-frozen model parameters. The order of the parameters is defined by
`WDmodel_Likelihood.parameter_names`.
* **prior** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Only return the value of the log prior given the model parameters
* **likelihood** ([*bool*](https://docs.python.org/3/library/functions.html#bool)*,* *optional*) – Only return the value of the log likelihood given the model parameters if the prior is finite
|
| Returns: | **lnpost** – the log posterior of the model parameters given the data |
| Return type: | [float](https://docs.python.org/3/library/functions.html#float) |
`__init__`(*spec*, *phot*, *model*, *covmodel*, *pbs*, *lnlike*, *pixel_scale=1.0*, *phot_dispersion=0.0*)[[source]](_modules/WDmodel/likelihood.html#WDmodel_Posterior.__init__)[¶](#WDmodel.likelihood.WDmodel_Posterior.__init__)
x.__init__(…) initializes x; see help(type(x)) for signature
`_lnprior`()[[source]](_modules/WDmodel/likelihood.html#WDmodel_Posterior._lnprior)[¶](#WDmodel.likelihood.WDmodel_Posterior._lnprior)
Evaluates the log prior of the model parameters.
Implements an lnprior function which imposes weakly informative priors on the model parameters.
| Parameters: | **theta** (*array-like*) – Vector of the non-frozen model parameters. The order of the parameters is defined by
`WDmodel_Likelihood.parameter_names`. |
| Returns: | **lnprior** – the log prior of the model parameters |
| Return type: | [float](https://docs.python.org/3/library/functions.html#float) |
Notes
The prior on `av` is the ‘glos’ prior
The prior on `rv` is a Gaussian with mean 3.1 and standard deviation 0.18. This is adopted from Schlafly et al., 2014 PS1 analysis. Note that they report 3.31, but they aren’t really measuring E(B-V) with PS1. Their sigma should be consistent despite the different filter set.
The prior on `fsig` and `fw` - the fractional amplitudes of the non-trivial stationary and white components of the kernel used to parametrize the covariance - is half-Cauchy, since we don't want them to be less than zero
There is no explicit prior on `tau`, i.e. it effectively has a tophat prior defined by the bounds
The `fwhm` has a lower bound set at the value below which the spectrum isn’t being convolved anymore. We never run into this bound since real spectra have physical instrumental broadening.
This prevents `fwhm` from going to zero when fitting poorly simulated spectra generated by simply resampling the model grid.
The priors on all other parameters are broad Gaussians
Wraps [`celerite.modeling.Model.log_prior()`](https://celerite.readthedocs.io/en/stable/python/modeling/#celerite.modeling.Model.log_prior) which imposes a bounds check and returns `-inf` if out of bounds; this is not an issue for the samplers used in the methods in [`WDmodel.fit`](index.html#module-WDmodel.fit).
`lnlike`(*theta*)[[source]](_modules/WDmodel/likelihood.html#WDmodel_Posterior.lnlike)[¶](#WDmodel.likelihood.WDmodel_Posterior.lnlike)
Evaluates the log likelihood of the model parameters given the data.
Convenience function that, unlike [`WDmodel_Posterior.__call__()`](#WDmodel.likelihood.WDmodel_Posterior.__call__), can return the value of the likelihood even if the prior is not finite. Useful for debugging.
| Parameters: | **theta** (*array-like*) – Vector of the non-frozen model parameters. The order of the parameters is defined by
`WDmodel_Likelihood.parameter_names`. |
| Returns: | **lnlike** – the log likelihood of the model parameters given the data |
| Return type: | [float](https://docs.python.org/3/library/functions.html#float) |
`lnprior`(*theta*)[[source]](_modules/WDmodel/likelihood.html#WDmodel_Posterior.lnprior)[¶](#WDmodel.likelihood.WDmodel_Posterior.lnprior)
Evaluates the log prior of the model parameters.
Convenience function that returns the value of the prior; defined to make the interface consistent with the
[`WDmodel_Posterior.lnlike()`](#WDmodel.likelihood.WDmodel_Posterior.lnlike) method. Just a thin wrapper around
[`WDmodel_Posterior._lnprior()`](#WDmodel.likelihood.WDmodel_Posterior._lnprior), which is what is actually evaluated by [`WDmodel_Posterior.__call__()`](#WDmodel.likelihood.WDmodel_Posterior.__call__).
| Parameters: | **theta** (*array-like*) – Vector of the non-frozen model parameters. The order of the parameters is defined by
`WDmodel_Likelihood.parameter_names`. |
| Returns: | **lnprior** – the log prior of the model parameters |
| Return type: | [float](https://docs.python.org/3/library/functions.html#float) |
`WDmodel.likelihood.``setup_likelihood`(*params*)[[source]](_modules/WDmodel/likelihood.html#setup_likelihood)[¶](#WDmodel.likelihood.setup_likelihood)
Setup the form of the likelihood of the data given the model.
| Parameters: | **params** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – A parameter dictionary used to configure the
`WDmodel_Likelihood` instance. The format of the dict is defined by [`WDmodel.io.read_params()`](index.html#WDmodel.io.read_params). |
| Returns: | **lnlike** – An instance of the likelihood function class. |
| Return type: | `WDmodel.likelihood.WDmodel_Likelihood` |
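A sketch of configuring the likelihood from the default parameter file:

```
from WDmodel import io, likelihood

params = io.read_params()                      # default guesses, bounds, scales and fixed flags
lnlike = likelihood.setup_likelihood(params)   # WDmodel_Likelihood instance
print(lnlike.parameter_names)                  # model parameter names (see WDmodel_Likelihood.parameter_names)
```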
##### WDmodel.main module[¶](#module-WDmodel.main)
The WDmodel package is designed to infer the SED of DA white dwarfs given spectra and photometry. This main module wraps all the other modules and their classes and methods to implement the algorithm.
`WDmodel.main.``main`(*inargs=None*)[[source]](_modules/WDmodel/main.html#main)[¶](#WDmodel.main.main)
Entry point for the [`WDmodel`](index.html#module-WDmodel) fitter package.
| Parameters: | **inargs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)*,* *optional*) – Input arguments to configure the fit. If not specified
[`sys.argv`](https://docs.python.org/3/library/sys.html#sys.argv) is used. inargs must be parseable by
[`WDmodel.io.get_options()`](index.html#WDmodel.io.get_options). |
| Raises: | [`RuntimeError`](https://docs.python.org/3/library/exceptions.html#RuntimeError) – If user attempts to resume the fit without having run it first |
Notes
The package is structured into several modules and classes
| Module | Model Component |
| --- | --- |
| [`WDmodel.io`](index.html#module-WDmodel.io) | I/O methods |
| [`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) | SED generator |
| [`WDmodel.passband`](index.html#module-WDmodel.passband) | Throughput model |
| [`WDmodel.covariance.WDmodel_CovModel`](index.html#WDmodel.covariance.WDmodel_CovModel) | Noise model |
| `WDmodel.likelihood.WDmodel_Likelihood` | Likelihood function |
| [`WDmodel.likelihood.WDmodel_Posterior`](index.html#WDmodel.likelihood.WDmodel_Posterior) | Posterior function |
| [`WDmodel.fit`](index.html#module-WDmodel.fit) | “Fitting” methods |
| [`WDmodel.viz`](index.html#module-WDmodel.viz) | Viz methods |
This method implements our algorithm to infer the DA White Dwarf properties and construct the SED model given the data using the methods and classes listed above. Once the data is read, the model is configured, and the likelihood and posterior functions constructed, the fitter methods evaluate the model parameters given the data, using the samplers in [`emcee`](https://emcee.readthedocs.io/en/stable/user/quickstart.html#module-emcee).
[`WDmodel.mossampler`](index.html#module-WDmodel.mossampler) provides an overloaded
[`emcee.PTSampler`](https://emcee.readthedocs.io/en/stable/api.html#emcee.PTSampler) with a more reliable auto-correlation estimate.
Finally, the result is output along with various plots.
`WDmodel.main.``mpi_excepthook`(*excepttype*, *exceptvalue*, *traceback*)[[source]](_modules/WDmodel/main.html#mpi_excepthook)[¶](#WDmodel.main.mpi_excepthook)
Overload [`sys.excepthook()`](https://docs.python.org/3/library/sys.html#sys.excepthook) when using `mpi4py.MPI` to terminate all MPI processes when an Exception is raised.
##### WDmodel.mossampler module[¶](#module-WDmodel.mossampler)
Overridden PTSampler with random Gibbs selection, more-reliable acor.
Original Author: <NAME> for the [mosfit package](https://github.com/guillochon/MOSFiT)
Modified to update kwargs, docstrings for full compatibility with PTSampler by <NAME>
##### WDmodel.passband module[¶](#module-WDmodel.passband)
Instrumental throughput models and calibration and synthetic photometry routines
`WDmodel.passband.``chop_syn_spec_pb`(*spec*, *model_mag*, *pb*, *model*)[[source]](_modules/WDmodel/passband.html#chop_syn_spec_pb)[¶](#WDmodel.passband.chop_syn_spec_pb)
Trims the pysynphot bandpass `pb` to non-zero throughput, computes the zeropoint of the passband given the SED `spec`, and the model magnitude of `spec` in the passband
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum. Typically a standard which has a known `model_mag`.
This can be a real source such as Vega, BD+174708, or one of the three CALSPEC standards, or an idealized synthetic source such as AB.
Must have `dtype=[('wave', '<f8'), ('flux', '<f8')]`
* **model_mag** ([*float*](https://docs.python.org/3/library/functions.html#float)) – The apparent magnitude of the spectrum through the passband. The difference between the apparent magnitude and the synthetic magnitude is the synthetic zeropoint.
* **pb** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The passband transmission.
Must have `dtype=[('wave', '<f8'), ('throughput', '<f8')]`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
|
| Returns: | * **outpb** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The passband transmission with zero throughput entries trimmed.
Has `dtype=[('wave', '<f8'), ('throughput', '<f8')]`
* **outzp** (*float*) – The synthetic zeropoint of the passband `pb` such that the source with spectrum `spec` will have apparent magnitude `model_mag`
through `pb`. With the synthetic zeropoint computed, the synthetic magnitude of any source can be converted into an apparent magnitude and can be passed to [`WDmodel.passband.synphot()`](#WDmodel.passband.synphot).
|
See also
[`WDmodel.passband.interp_passband()`](#WDmodel.passband.interp_passband)
[`WDmodel.passband.synphot()`](#WDmodel.passband.synphot)
`WDmodel.passband.``get_model_synmags`(*model_spec*, *pbs*, *mu=0.0*)[[source]](_modules/WDmodel/passband.html#get_model_synmags)[¶](#WDmodel.passband.get_model_synmags)
Computes the synthetic magnitudes of spectrum `model_spec` through the passbands `pbs`, and optionally applies a common offset, `mu`
Wrapper around [`WDmodel.passband.synphot()`](#WDmodel.passband.synphot).
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum.
Must have `dtype=[('wave', '<f8'), ('flux', '<f8')]`
* **pbs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – Passband dictionary containing the passbands corresponding to `phot.pb` and generated by [`WDmodel.passband.get_pbmodel()`](#WDmodel.passband.get_pbmodel).
* **mu** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Common achromatic photometric offset to apply to the synthetic magnitudes in all the passbands. Would be equal to the distance modulus if `model_spec` were normalized to return the true absolute magnitude of the source.
|
| Returns: | **model_mags** – The model magnitudes.
Has `dtype=[('pb', 'str'), ('mag', '<f8')]` |
| Return type: | None or [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray) |
`WDmodel.passband.``get_pbmodel`(*pbnames*, *model*, *pbfile=None*, *mag_type=None*, *mag_zero=0.0*)[[source]](_modules/WDmodel/passband.html#get_pbmodel)[¶](#WDmodel.passband.get_pbmodel)
Converts passband names `pbnames` into passband models based on the mapping of name to `pysynphot` `obsmode` strings in `pbfile`.
| Parameters: | * **pbnames** (*array-like*) – List of passband names to get throughput models for. Each name is resolved by first looking in `pbfile` (if provided). If an entry is found, that entry is treated as an `obsmode` for pysynphot. If the entry cannot be treated as an `obsmode`, we attempt to treat it as an ASCII file. If neither is possible, an error is raised.
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator All the passbands are interpolated onto the wavelengths of the SED model.
* **pbfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)*,* *optional*) – Filename containing mapping between `pbnames` and `pysynphot`
`obsmode` string, as well as the standard that has 0 magnitude in the system (either `Vega` or `AB`). The `obsmode` may also be the full path to a file that is readable by `pysynphot`
* **mag_type** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)*,* *optional*) – One of `vegamag` or `abmag`
Used to specify the standard that has `mag_zero` magnitude in the passband.
If `magsys` is specified in `pbfile`, that overrides this option.
Must be the same for all passbands listed in `pbnames` that do not have `magsys` specified in `pbfile`
If `pbnames` require multiple `mag_type`, concatenate the output.
* **mag_zero** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Magnitude of the standard in the passband. If `magzero` is specified in `pbfile`, that overrides this option.
Must be the same for all passbands listed in `pbnames` that do not have `magzero` specified in `pbfile`
If `pbnames` require multiple `mag_zero`, concatenate the output.
|
| Returns: | **out** – Output passband model dictionary. Has passband name `pb` from `pbnames` as key. |
| Return type: | [dict](https://docs.python.org/3/library/stdtypes.html#dict) |
| Raises: | [`RuntimeError`](https://docs.python.org/3/library/exceptions.html#RuntimeError) – If a bandpass cannot be loaded |
Notes
Each item of `out` is a tuple with
* `pb` : ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray))
The passband transmission with zero throughput entries trimmed.
Has `dtype=[('wave', '<f8'), ('throughput', '<f8')]`
* `transmission` : (array-like)
The non-zero passband transmission interpolated onto overlapping model wavelengths
* `ind` : (array-like)
Indices of model wavelength that overlap with this passband
* `zp` : (float)
mag_type zeropoint of this passband
* `avgwave` : (float)
Passband average/reference wavelength
`pbfile` must be readable by [`WDmodel.io.read_pbmap()`](index.html#WDmodel.io.read_pbmap) and must return a [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)
with `dtype=[('pb', 'str'), ('obsmode', 'str')]`
If there is no entry in `pbfile` for a passband, then we attempt to use the passband name `pb` as `obsmode` string as is.
Trims the bandpass to entries with non-zero transmission and determines the `VEGAMAG/ABMAG` zeropoint for the passband - i.e. `zp` that gives `mag_Vega/AB=mag_zero` in all passbands.
See also
[`WDmodel.io.read_pbmap()`](index.html#WDmodel.io.read_pbmap)
[`WDmodel.passband.chop_syn_spec_pb()`](#WDmodel.passband.chop_syn_spec_pb)
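A sketch of building the passband model dictionary; the passband names here are hypothetical and must be resolvable through `pbfile` or as `pysynphot` `obsmode` strings, and the SED model generator is assumed to be constructible with its default arguments:

```
from WDmodel import passband
from WDmodel.WDmodel import WDmodel

model = WDmodel()                        # DA white dwarf SED model generator (assumes default grid)
pbnames = ['F475W', 'F625W', 'F775W']    # hypothetical passband names
pbs = passband.get_pbmodel(pbnames, model)

# each entry is the documented (pb, transmission, ind, zp, avgwave) tuple
pb, transmission, ind, zp, avgwave = pbs['F475W']
```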
`WDmodel.passband.``interp_passband`(*wave*, *pb*, *model*)[[source]](_modules/WDmodel/passband.html#interp_passband)[¶](#WDmodel.passband.interp_passband)
Finds the indices of the wavelength array `wave` that overlap with the passband `pb` and interpolates the passband onto those wavelengths.
| Parameters: | * **wave** (*array-like*) – The wavelength array. Must satisfy
[`WDmodel.WDmodel.WDmodel._wave_test()`](index.html#WDmodel.WDmodel.WDmodel._wave_test)
* **pb** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The passband transmission.
Must have `dtype=[('wave', '<f8'), ('throughput', '<f8')]`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
|
| Returns: | * **transmission** (*array-like*) – The transmission of the passband interpolated on to overlapping elements of `wave`
* **ind** (*array-like*) – Indices of wavelength `wave` that overlap with the passband `pb`.
Produced by [`WDmodel.WDmodel.WDmodel._get_indices_in_range()`](index.html#WDmodel.WDmodel.WDmodel._get_indices_in_range)
Satisfies `transmission.shape == wave[ind].shape`
|
Notes
The passband `pb` is interpolated on to the wavelength array
`wave`. `wave` is typically the wavelengths of a spectrum, which has much better sampling than passband transmission curves. Only the wavelengths `wave` that overlap the passband are taken, and the passband transmission is then linearly interpolated on to these wavelengths. This prescription has been checked against
`pysynphot` to return synthetic magnitudes that agree to within
`1E-6`, while [`WDmodel.passband.synphot()`](#WDmodel.passband.synphot) is significantly faster than [`pysynphot.observation.Observation.effstim()`](https://pysynphot.readthedocs.io/en/latest/ref_api.html#pysynphot.observation.Observation.effstim).
`WDmodel.passband.``synflux`(*spec*, *ind*, *pb*)[[source]](_modules/WDmodel/passband.html#synflux)[¶](#WDmodel.passband.synflux)
Compute the synthetic flux of spectrum `spec` through passband `pb`
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum.
Must have `dtype=[('wave', '<f8'), ('flux', '<f8')]`
* **ind** (*array-like*) – Indices of spectrum `spec` that overlap with the passband `pb`.
Can be produced by [`WDmodel.passband.interp_passband()`](#WDmodel.passband.interp_passband)
* **pb** (*array-like*) – The passband transmission.
Must satisfy `pb.shape == spec[ind].flux.shape`
|
| Returns: | **flux** – The normalized flux of the spectrum through the passband |
| Return type: | [float](https://docs.python.org/3/library/functions.html#float) |
Notes
The passband is assumed to be dimensionless photon transmission efficiency.
Routine is intended to be a much faster implementation of
[`pysynphot.observation.Observation.effstim()`](https://pysynphot.readthedocs.io/en/latest/ref_api.html#pysynphot.observation.Observation.effstim), since it is called over and over by the samplers as a function of model parameters.
Uses [`numpy.trapz()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html#numpy.trapz) for integration.
See also
[`WDmodel.passband.interp_passband()`](#WDmodel.passband.interp_passband)
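As a worked sketch of the photon-weighted integral implied by the description above (an assumed form that may differ in detail from the package's internal weighting):

```
import numpy as np

def synflux_sketch(wave, flux, throughput):
    """Photon-weighted mean flux through a dimensionless throughput curve (assumed form)."""
    # trapezoidal integrals over wavelength, as in numpy.trapz
    num = np.trapz(wave * throughput * flux, wave)
    den = np.trapz(wave * throughput, wave)
    return num / den
```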
`WDmodel.passband.``synphot`(*spec*, *ind*, *pb*, *zp=0.0*)[[source]](_modules/WDmodel/passband.html#synphot)[¶](#WDmodel.passband.synphot)
Compute the synthetic magnitude of spectrum `spec` through passband `pb`
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum.
Must have `dtype=[('wave', '<f8'), ('flux', '<f8')]`
* **ind** (*array-like*) – Indices of spectrum `spec` that overlap with the passband `pb`.
Can be produced by [`WDmodel.passband.interp_passband()`](#WDmodel.passband.interp_passband)
* **pb** (*array-like*) – The passband transmission.
Must satisfy `pb.shape == spec[ind].flux.shape`
* **zp** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – The zeropoint to apply to the synthetic flux
|
| Returns: | **mag** – The synthetic magnitude of the spectrum through the passband |
| Return type: | [float](https://docs.python.org/3/library/functions.html#float) |
See also
[`WDmodel.passband.synflux()`](#WDmodel.passband.synflux)
[`WDmodel.passband.interp_passband()`](#WDmodel.passband.interp_passband)
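A sketch of the assumed relation between the synthetic flux and the synthetic magnitude, with `zp` calibrating the magnitude system (see [`chop_syn_spec_pb()`](#WDmodel.passband.chop_syn_spec_pb)):

```
import numpy as np

def synphot_sketch(synflux_value, zp=0.0):
    """Assumed flux-to-magnitude conversion: mag = -2.5 log10(flux) + zp."""
    return -2.5 * np.log10(synflux_value) + zp
```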
##### WDmodel.viz module[¶](#module-WDmodel.viz)
Routines to visualize the DA White Dwarf model atmosphere fit
`WDmodel.viz.``plot_mcmc_line_fit`(*spec*, *linedata*, *model*, *cont_model*, *draws*, *balmer=None*)[[source]](_modules/WDmodel/viz.html#plot_mcmc_line_fit)[¶](#WDmodel.viz.plot_mcmc_line_fit)
Plot a comparison of the normalized hydrogen Balmer lines of the spectrum and model
Note that we fit the full spectrum, not just the lines. The lines are extracted using a coarse continuum fit in
[`WDmodel.fit.pre_process_spectrum()`](index.html#WDmodel.fit.pre_process_spectrum). This fit is purely cosmetic and in no way contributes to the likelihood. It’s particularly useful to detect small velocity offsets or wavelength calibration errors.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum. Must have
`dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **linedata** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The observations of the spectrum corresponding to the hydrogen Balmer lines. Must have
`dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8'), ('line_mask', 'i4'), ('line_ind', 'i4')]`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
* **cont_model** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The continuum model. Must have the same structure as `spec`
Produced by [`WDmodel.fit.pre_process_spectrum()`](index.html#WDmodel.fit.pre_process_spectrum)
* **draws** (*array-like*) – produced by [`plot_mcmc_spectrum_fit()`](#WDmodel.viz.plot_mcmc_spectrum_fit) - see notes for content.
* **balmer** (*array-like**,* *optional*) – list of Balmer lines to plot - elements must be in range `[1, 6]`
These correspond to the lines defined in
[`WDmodel.WDmodel.WDmodel._lines`](index.html#WDmodel.WDmodel.WDmodel._lines). Default is `range(1, 7)`
|
| Returns: | * **fig** ([`matplotlib.figure.Figure`](https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure) instance) – The output figure containing the line profile plot
* **fig2** ([`matplotlib.figure.Figure`](https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure) instance) – The output figure containing histograms of the line residuals
|
See also
[`WDmodel.viz.plot_mcmc_spectrum_fit()`](#WDmodel.viz.plot_mcmc_spectrum_fit)
`WDmodel.viz.``plot_mcmc_model`(*spec*, *phot*, *linedata*, *scale_factor*, *phot_dispersion*, *objname*, *outdir*, *specfile*, *model*, *covmodel*, *cont_model*, *pbs*, *params*, *param_names*, *samples*, *samples_lnprob*, *covtype=u'Matern32'*, *balmer=None*, *ndraws=21*, *everyn=1*, *savefig=False*)[[source]](_modules/WDmodel/viz.html#plot_mcmc_model)[¶](#WDmodel.viz.plot_mcmc_model)
Make all the plots to visualize the full fit of the DA White Dwarf data
Wraps [`plot_mcmc_spectrum_fit()`](#WDmodel.viz.plot_mcmc_spectrum_fit),
[`plot_mcmc_photometry_res()`](#WDmodel.viz.plot_mcmc_photometry_res),
[`plot_mcmc_spectrum_nogp_fit()`](#WDmodel.viz.plot_mcmc_spectrum_nogp_fit), [`plot_mcmc_line_fit()`](#WDmodel.viz.plot_mcmc_line_fit) and
[`corner.corner()`](https://corner.readthedocs.io/en/stable/api.html#corner.corner) and saves all the plots to a combined PDF, and optionally individual PDFs.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum. Must have
`dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **phot** (None or [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The photometry. Must have
`dtype=[('pb', 'str'), ('mag', '<f8'), ('mag_err', '<f8')]`
* **linedata** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The observations of the spectrum corresponding to the hydrogen Balmer lines. Must have
`dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8'), ('line_mask', 'i4'), ('line_ind', 'i4')]`
* **scale_factor** ([*float*](https://docs.python.org/3/library/functions.html#float)) – factor by which the flux was scaled; used for the y-axis label
* **phot_dispersion** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Excess photometric dispersion to add in quadrature with the photometric uncertainties `phot.mag_err`. Use if the errors are grossly underestimated. Default is `0.`
* **objname** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – object name - used to title plots
* **outdir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – controls where the plot is written out if `savefig=True`
* **specfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Used in the title, and to set the name of the `outfile` if `savefig=True`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
* **covmodel** ([`WDmodel.covariance.WDmodel_CovModel`](index.html#WDmodel.covariance.WDmodel_CovModel) instance) – The parametrized model for the covariance of the spectrum `spec`
* **cont_model** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The continuum model. Must have the same structure as `spec`
Produced by [`WDmodel.fit.pre_process_spectrum()`](index.html#WDmodel.fit.pre_process_spectrum)
* **pbs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – Passband dictionary containing the passbands corresponding to
`phot.pb` and generated by [`WDmodel.passband.get_pbmodel()`](index.html#WDmodel.passband.get_pbmodel).
* **params** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – dictionary of parameters with keywords `value`, `fixed`, `scale`,
`bounds` for each. Same format as returned from
[`WDmodel.io.read_params()`](index.html#WDmodel.io.read_params)
* **param_names** (*array-like*) – Ordered list of free parameter names
* **samples** (*array-like*) – Samples from the flattened Markov Chain with shape `(N, len(param_names))`
* **samples_lnprob** (*array-like*) – Log Posterior corresponding to `samples` from the flattened Markov Chain with shape `(N,)`
* **covtype** (`{'Matern32', 'SHO', 'Exp', 'White'}`) – stationary kernel type used to parametrize the covariance in
[`WDmodel.covariance.WDmodel_CovModel`](index.html#WDmodel.covariance.WDmodel_CovModel)
* **balmer** (*array-like**,* *optional*) – list of Balmer lines to plot - elements must be in range `[1, 6]`
These correspond to the lines defined in
[`WDmodel.WDmodel.WDmodel._lines`](index.html#WDmodel.WDmodel.WDmodel._lines). Default is `range(1, 7)`
* **ndraws** ([*int*](https://docs.python.org/3/library/functions.html#int)*,* *optional*) – Number of draws to make from the Markov Chain to overplot. Higher numbers provide a better sense of the uncertainty in the model at the cost of speed and a larger, slower to render output plot.
* **everyn** ([*int*](https://docs.python.org/3/library/functions.html#int)*,* *optional*) – If the posterior function was evaluated using only every nth observation from the data, this should be specified to visually indicate the observations used.
* **savefig** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – if True, save the individual figures
|
| Returns: | * **model_spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The model spectrum. Has
`dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8'), ('norm_flux', '<f8')]`
and same shape as input `spec`. The `norm_flux` attribute has the model flux without the Gaussian process prediction applied.
* **SED_model** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The SED model spectrum. Has
`dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **model_mags** (None or [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – If there is observed photometry, this contains the model magnitudes.
Has `dtype=[('pb', 'str'), ('mag', '<f8')]`
|
`WDmodel.viz.``plot_mcmc_photometry_res`(*objname*, *phot*, *phot_dispersion*, *model*, *pbs*, *draws*)[[source]](_modules/WDmodel/viz.html#plot_mcmc_photometry_res)[¶](#WDmodel.viz.plot_mcmc_photometry_res)
Plot the observed DA white dwarf photometry as well as the “best-fit” model magnitudes
| Parameters: | * **objname** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – object name - used to title plots
* **phot** (None or [`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The photometry. Must have
`dtype=[('pb', 'str'), ('mag', '<f8'), ('mag_err', '<f8')]`
* **phot_dispersion** ([*float*](https://docs.python.org/3/library/functions.html#float)*,* *optional*) – Excess photometric dispersion to add in quadrature with the photometric uncertainties `phot.mag_err`. Use if the errors are grossly underestimated. Default is `0.`
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
* **pbs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – Passband dictionary containing the passbands corresponding to
`phot.pb` and generated by [`WDmodel.passband.get_pbmodel()`](index.html#WDmodel.passband.get_pbmodel).
* **draws** (*array-like*) – produced by [`plot_mcmc_spectrum_fit()`](#WDmodel.viz.plot_mcmc_spectrum_fit) - see notes for content.
|
| Returns: | * **fig** ([`matplotlib.figure.Figure`](https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure) instance) – The output figure
* **mag_draws** (*array-like*) – The magnitudes corresponding to the parameters `draws` from the Markov Chain used in `fig`
|
Notes
Each element of `mag_draws` contains
* `wres` - the difference between the observed and synthetic magnitudes
* `model_mags` - the model magnitudes corresponding to the current model parameters
* `mu` - the flux normalization parameter that must be added to the `model_mags`
See also
[`WDmodel.viz.plot_mcmc_spectrum_fit()`](#WDmodel.viz.plot_mcmc_spectrum_fit)
`WDmodel.viz.``plot_mcmc_spectrum_fit`(*spec*, *objname*, *specfile*, *scale_factor*, *model*, *covmodel*, *result*, *param_names*, *samples*, *ndraws=21*, *everyn=1*)[[source]](_modules/WDmodel/viz.html#plot_mcmc_spectrum_fit)[¶](#WDmodel.viz.plot_mcmc_spectrum_fit)
Plot the spectrum of the DA White Dwarf and the “best fit” model
The full fit parametrizes the covariance model using a stationary Gaussian process as defined by [`WDmodel.covariance.WDmodel_CovModel`](index.html#WDmodel.covariance.WDmodel_CovModel). The posterior function constructed in
[`WDmodel.likelihood.WDmodel_Posterior`](index.html#WDmodel.likelihood.WDmodel_Posterior) is evaluated by the sampler in the [`WDmodel.fit.fit_model()`](index.html#WDmodel.fit.fit_model) method. The median value is reported as the best-fit value for each of the fit parameters in
`WDmodel.likelihood.WDmodel_Likelihood.parameter_names`.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum. Must have
`dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **objname** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – object name - used to title plots
* **outdir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – controls where the plot is written out if `save=True`
* **specfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Used in the title, and to set the name of the `outfile` if `save=True`
* **scale_factor** ([*float*](https://docs.python.org/3/library/functions.html#float)) – factor by which the flux was scaled for y-axis label
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
* **covmodel** ([`WDmodel.covariance.WDmodel_CovModel`](index.html#WDmodel.covariance.WDmodel_CovModel) instance) – The parametrized model for the covariance of the spectrum `spec`
* **result** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – dictionary of parameters with keywords `value`, `fixed`, `scale`,
`bounds` for each. Same format as returned from
[`WDmodel.io.read_params()`](index.html#WDmodel.io.read_params)
* **param_names** (*array-like*) – Ordered list of free parameter names
* **samples** (*array-like*) – Samples from the flattened Markov Chain with shape `(N, len(param_names))`
* **ndraws** ([*int*](https://docs.python.org/3/library/functions.html#int)*,* *optional*) – Number of draws to make from the Markov Chain to overplot. Higher numbers provide a better sense of the uncertainty in the model at the cost of speed and a larger, slower to render output plot.
* **everyn** ([*int*](https://docs.python.org/3/library/functions.html#int)*,* *optional*) – If the posterior function was evaluated using only every nth observation from the data, this should be specified to visually indicate the observations used.
|
| Returns: | * **fig** ([`matplotlib.figure.Figure`](https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure) instance) – The output figure
* **draws** (*array-like*) – The actual draws from the Markov Chain used in `fig`
|
Notes
It’s faster to draw samples from the posterior in one location, and pass along the same samples to all the methods in [`WDmodel.viz`](#module-WDmodel.viz).
Consequently, most require `draws` as an input. This makes all the plots connected, and none of them will be produced if an error is thrown here, but this is the correct behavior, as all of them visualize one aspect of the same fit.
Each element of `draws` contains
* `smoothedmod` - the model spectrum
* `wres` - the prediction from the Gaussian process
* `wres_err` - the diagonal of the covariance matrix for the prediction from the Gaussian process
* `full_mod` - the full model SED, in order to compute the synthetic photometry
* `out_draw` - the dictionary of model parameters from this draw. Same format as `result`.
`WDmodel.viz.``plot_mcmc_spectrum_nogp_fit`(*spec*, *objname*, *specfile*, *scale_factor*, *cont_model*, *draws*, *covtype=u'Matern32'*, *everyn=1*)[[source]](_modules/WDmodel/viz.html#plot_mcmc_spectrum_nogp_fit)[¶](#WDmodel.viz.plot_mcmc_spectrum_nogp_fit)
Plot the spectrum of the DA White Dwarf and the “best fit” model without the Gaussian process
Unlike [`plot_mcmc_spectrum_fit()`](#WDmodel.viz.plot_mcmc_spectrum_fit) this version does not apply the prediction from the Gaussian process to the spectrum model to match the observed spectrum. This visualization is useful to indicate if the Gaussian process - i.e. the kernel choice `covtype` used to parametrize the covariance - is appropriate.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum. Must have
`dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **objname** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – object name - used to title plots
* **outdir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – controls where the plot is written out if `save=True`
* **specfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Used in the title, and to set the name of the outfile if `save=True`
* **scale_factor** ([*float*](https://docs.python.org/3/library/functions.html#float)) – factor by which the flux was scaled for y-axis label
* **cont_model** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The continuum model. Must have the same structure as `spec`.
Produced by [`WDmodel.fit.pre_process_spectrum()`](index.html#WDmodel.fit.pre_process_spectrum)
* **draws** (*array-like*) – produced by [`plot_mcmc_spectrum_fit()`](#WDmodel.viz.plot_mcmc_spectrum_fit) - see notes for content.
* **covtype** (`{'Matern32', 'SHO', 'Exp', 'White'}`) – stationary kernel type used to parametrize the covariance in
[`WDmodel.covariance.WDmodel_CovModel`](index.html#WDmodel.covariance.WDmodel_CovModel)
* **everyn** ([*int*](https://docs.python.org/3/library/functions.html#int)*,* *optional*) – If the posterior function was evaluated using only every nth observation from the data, this should be specified to visually indicate the observations used.
|
| Returns: | **fig** – The output figure |
| Return type: | [`matplotlib.figure.Figure`](https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure) instance |
See also
[`WDmodel.viz.plot_mcmc_spectrum_fit()`](#WDmodel.viz.plot_mcmc_spectrum_fit)
`WDmodel.viz.``plot_minuit_spectrum_fit`(*spec*, *objname*, *outdir*, *specfile*, *scale_factor*, *model*, *result*, *save=True*)[[source]](_modules/WDmodel/viz.html#plot_minuit_spectrum_fit)[¶](#WDmodel.viz.plot_minuit_spectrum_fit)
Plot the MLE fit of the spectrum with the model, assuming uncorrelated noise.
| Parameters: | * **spec** ([`numpy.recarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html#numpy.recarray)) – The spectrum. Must have
`dtype=[('wave', '<f8'), ('flux', '<f8'), ('flux_err', '<f8')]`
* **objname** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – object name - used to title plots
* **outdir** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – controls where the plot is written out if `save=True`
* **specfile** ([*str*](https://docs.python.org/3/library/stdtypes.html#str)) – Used in the title, and to set the name of the `outfile` if `save=True`
* **scale_factor** ([*float*](https://docs.python.org/3/library/functions.html#float)) – factor by which the flux was scaled for y-axis label
* **model** ([`WDmodel.WDmodel.WDmodel`](index.html#WDmodel.WDmodel.WDmodel) instance) – The DA White Dwarf SED model generator
* **result** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict)) – dictionary of parameters with keywords `value`, `fixed`, `scale`,
`bounds` for each. Same format as returned from
[`WDmodel.io.read_params()`](index.html#WDmodel.io.read_params)
* **save** ([*bool*](https://docs.python.org/3/library/functions.html#bool)) – if True, save the file
|
| Returns: | **fig** |
| Return type: | [`matplotlib.figure.Figure`](https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure) instance |
Notes
The MLE fit uses [`iminuit.Minuit.migrad()`](https://iminuit.readthedocs.io/en/stable/api.html#iminuit.Minuit.migrad) to fit the spectrum with the model. This fit doesn’t try to account for the covariance in the data, and is not expected to be great - just fast, and capable of setting a reasonable initial guess. If it is apparent from the plot that this fit is very far off, refine the initial guess supplied to the fitter.
Package ‘rechonest’
October 14, 2022
Type Package
Title R Interface to Echo Nest API
Version 1.2
Date 2016-03-16
Author <NAME>[aut,cre]
Maintainer <NAME> <<EMAIL>>
Description The 'Echo Nest' <http://the.echonest.com> is the industry's leading
music intelligence company, providing developers with the deepest understanding of
music content and music fans. This package can be used to access an artist's data,
including songs, blogs, news, reviews, etc. A song's data, including audio summary,
style, danceability, tempo, etc., can also be accessed.
URL https://github.com/mukul13/rechonest
License MIT + file LICENSE
LazyData TRUE
Imports httr,RCurl,jsonlite
RoxygenNote 5.0.1
NeedsCompilation no
Repository CRAN
Date/Publication 2016-03-18 00:00:16
R topics documented:
basic_playlist
extract_artist_names
get_artist_biographies
get_artist_blogs
get_artist_data
get_artist_familiarity
get_artist_hotttnesss
get_artist_images
get_artist_news
get_artist_reviews
get_artist_songs
get_artist_terms
get_artist_videos
get_genre_info
get_top_genre_artists
get_top_hottt
get_top_terms
get_twitter_handle
list_genres
list_terms
search_artist
search_genre
search_songs
similar_artists
similar_genres
standard_static_playlist
suggest_artist_names
basic_playlist To return basic playlist
Description
To return basic playlist
Usage
basic_playlist(api_key, type = NA, artist_id = NA, artist = NA,
song_id = NA, genre = NA, track_id = NA, results = 15, partner = NA,
tracks = F, limited_interactivity = NA)
Arguments
api_key Echo Nest API key
type the type of the playlist to be generated
artist_id artist id
artist artist name
song_id song ID
genre genre name
track_id track ID
results the number of results desired
partner partner catalog
tracks tracks info
limited_interactivity
interactivity limitation
Value
data frame giving basic playlist
Examples
## Not run:
data=basic_playlist(api_key,type="artist-radio",artist=c("coldplay","adele"))
## End(Not run)
extract_artist_names To extract artist names from text.
Description
To extract artist names from text.
Usage
extract_artist_names(api_key, text, min_hotttnesss = NA,
max_hotttnesss = NA, min_familiarity = NA, max_familiarity = NA,
sort = NA, results = NA)
Arguments
api_key Echo Nest API key
text text that contains artist names
min_hotttnesss the minimum hotttnesss for returned artists
max_hotttnesss the maximum hotttnesss for returned artists
min_familiarity
the minimum familiarity for returned artists
max_familiarity
the maximum familiarity for returned artists
sort specifies the sort order of the results
results the number of results desired
Value
data frame giving artist’s names
Examples
## Not run:
data=extract_artist_names(api_key,text="I like adele and Maroon 5")
## End(Not run)
get_artist_biographies
To get a list of artist biographies
Description
To get a list of artist biographies
Usage
get_artist_biographies(api_key, name = NA, id = NA, start = NA,
results = 15, license = "unknown")
Arguments
api_key Echo Nest API key
name artist name
id Echo Nest ID
start the desired index of the first result returned
results the number of results desired
license the desired licenses of the returned images
Value
data frame giving artist’s biographies
Examples
## Not run:
data=get_artist_biographies(api_key,name="coldplay")
## End(Not run)
get_artist_blogs To get blogs about artist
Description
To get blogs about artist
Usage
get_artist_blogs(api_key, name = NA, start = NA, id = NA, results = 15,
high_relevance = F)
Arguments
api_key Echo Nest API key
name artist’s name
start the desired index of the first result returned
id artist’s id
results maximum size
high_relevance if true, only items that are highly relevant for this artist will be returned
Value
data frame giving blogs about artist
Examples
## Not run:
data=get_artist_blogs(api_key,name="coldplay",results=35)
## End(Not run)
get_artist_data To get artist’s data
Description
To get artist’s data
Usage
get_artist_data(api_key, name = NA, id = NA, hotttnesss = T, terms = F,
blogs = F, news = F, familiarity = F, audio = F, images = F,
songs = F, reviews = F, discovery = F, partner = NA,
biographies = F, doc_counts = F, artist_location = F,
years_active = F, urls = F)
Arguments
api_key Echo Nest API key
name artist’s name
id artist’s id
hotttnesss artist’s hotttnesss
terms artist’s terms
blogs blogs about artist
news news articles about artist
familiarity artist’s familiarity
audio artist’s audio details
images artist’s images details
songs artist’s songs details
reviews reviews about artist
discovery artist’s discovery details
partner partner catalog
biographies artist’s biographies
doc_counts artist’s doc_counts
artist_location
artist location
years_active years active
urls urls of artist websites
Value
data frame giving artist’s data
Examples
## Not run:
data=get_artist_data(api_key,name="coldplay",terms=T,blogs=T)
## End(Not run)
get_artist_familiarity
To get artist’s familiarity
Description
To get artist’s familiarity
Usage
get_artist_familiarity(api_key, name = NA, id = NA)
Arguments
api_key Echo Nest API key
name artist’s name
id artist’s id
Value
data frame giving artist’s familiarity
Examples
## Not run:
data=get_artist_familiarity(api_key,name="coldplay")
## End(Not run)
get_artist_hotttnesss To get artist’s hotttnesss
Description
To get artist’s hotttnesss
Usage
get_artist_hotttnesss(api_key, name = NA, id = NA)
Arguments
api_key Echo Nest API key
name artist’s name
id artist’s id
Value
data frame giving artist’s hotttnesss
Examples
## Not run:
data=get_artist_hotttnesss(api_key,name="coldplay")
## End(Not run)
get_artist_images To get artist’s images
Description
To get artist’s images
Usage
get_artist_images(api_key, name = NA, id = NA, start = NA, results = 15,
license = "unknown")
Arguments
api_key Echo Nest API key
name artist name
id Echo Nest ID
start the desired index of the first result returned
results the number of results desired
license the desired licenses of the returned images
Value
data frame giving artist’s images
Examples
## Not run:
data=get_artist_images(api_key,name="coldplay")
## End(Not run)
get_artist_news To get news about artist
Description
To get news about artist
Usage
get_artist_news(api_key, name = NA, id = NA, start = NA, results = 15,
high_relevance = F)
Arguments
api_key Echo Nest API key
name artist’s name
id artist’s id
start the desired index of the first result returned
results maximum size
high_relevance if true, only items that are highly relevant for this artist will be returned
Value
data frame giving news about artist
Examples
## Not run:
data=get_artist_news(api_key,name="coldplay",results=35)
## End(Not run)
get_artist_reviews To get reviews about artist
Description
To get reviews about artist
Usage
get_artist_reviews(api_key, name = NA, id = NA, start = NA,
results = 15)
Arguments
api_key Echo Nest API key
name artist’s name
id artist’s id
start the desired index of the first result returned
results maximum size
Value
data frame giving reviews about artist
Examples
## Not run:
data=get_artist_reviews(api_key,name="coldplay",results=35)
## End(Not run)
get_artist_songs To get artist’s songs
Description
To get artist’s songs
Usage
get_artist_songs(api_key, name = NA, id = NA, start = NA, results = 15)
Arguments
api_key Echo Nest API key
name artist’s name
id artist’s id
start the desired index of the first result returned
results maximum size
Value
data frame giving artist’s songs
Examples
## Not run:
data=get_artist_songs(api_key,name="coldplay")
## End(Not run)
get_artist_terms To get artist’s terms
Description
To get artist’s terms
Usage
get_artist_terms(api_key, name = NA, id = NA)
Arguments
api_key Echo Nest API key
name artist’s name
id artist’s id
Value
data frame giving artist’s terms
Examples
## Not run:
data=get_artist_terms(api_key,name="coldplay")
## End(Not run)
get_artist_videos To get a list of video documents found on the web related to an artist
Description
To get a list of video documents found on the web related to an artist
Usage
get_artist_videos(api_key, name = NA, id = NA, start = NA, results = 15)
Arguments
api_key Echo Nest API key
name artist name
id Echo Nest ID
start the desired index of the first result returned
results the number of results desired
Value
data frame giving artist’s videos
Examples
## Not run:
data=get_artist_videos(api_key,name="coldplay")
## End(Not run)
get_genre_info To get basic information about a genre
Description
To get basic information about a genre
Usage
get_genre_info(api_key, genre, description = T, urls = T)
Arguments
api_key Echo Nest API key
genre the genre name
description genre’s description
urls genre’s urls
Value
data frame giving basic info about a genre
Examples
## Not run:
data=get_genre_info(api_key,genre="post rock")
## End(Not run)
get_top_genre_artists To Return the top artists for the given genre
Description
To Return the top artists for the given genre
Usage
get_top_genre_artists(api_key, genre)
Arguments
api_key Echo Nest API key
genre the genre name
Value
data frame giving top artists of the given genre
Examples
## Not run:
data=get_top_genre_artists(api_key,genre="pop")
## End(Not run)
get_top_hottt To return a list of the top hottt artists
Description
To return a list of the top hottt artists
Usage
get_top_hottt(api_key, genre = NA, start = NA, results = 15)
Arguments
api_key Echo Nest API key
genre the set of genres of interest
start the desired index of the first result returned
results the number of results desired
Value
data frame giving top hottt artists
Examples
## Not run:
data=get_top_hottt(api_key)
## End(Not run)
get_top_terms To return a list of the overall top terms
Description
To return a list of the overall top terms
Usage
get_top_terms(api_key, results = NA)
Arguments
api_key Echo Nest API key
results the number of results desired
Value
data frame giving top terms
Examples
## Not run:
data=get_top_terms(api_key)
## End(Not run)
get_twitter_handle To get the twitter handle for an artist
Description
To get the twitter handle for an artist
Usage
get_twitter_handle(api_key, name = NA, id = NA)
Arguments
api_key Echo Nest API key
name artist name
id Echo Nest ID
Value
data frame giving twitter handle
Examples
## Not run:
data=get_twitter_handle(api_key,name="coldplay")
## End(Not run)
list_genres To get genre’s list
Description
To get genre’s list
Usage
list_genres(api_key)
Arguments
api_key Echo Nest API key
Value
data frame giving genre’s list
Examples
## Not run:
data=list_genres(api_key)
## End(Not run)
list_terms To get a list of the best typed descriptive terms
Description
To get a list of the best typed descriptive terms
Usage
list_terms(api_key, type = "style")
Arguments
api_key Echo Nest API key
type term type
Value
data frame giving best typed descriptive terms
Examples
## Not run:
data=list_terms(api_key)
## End(Not run)
search_artist To search artist by using name
Description
To search artist by using name
Usage
search_artist(api_key, name = NA, style = NA, hotttnesss = T,
description = NA, start = NA, results = 15, sort = NA, partner = NA,
artist_location = NA, genre = NA, mood = NA, rank_type = "relevance",
fuzzy_match = F, max_familiarity = NA, min_familiarity = NA,
max_hotttnesss = NA, min_hotttnesss = NA, artist_start_year_before = NA,
artist_start_year_after = NA, artist_end_year_before = NA,
artist_end_year_after = NA)
Arguments
api_key Echo Nest API key
name artist’s name
style artist’s style
hotttnesss artist’s hotttnesss (Default is true)
description artist’s description
start the desired index of the first result returned
results maximum size
sort to sort ascending or descending
partner partner catalog
artist_location
artist location
genre genre name
mood mood like happy or sad
rank_type For search by description, style or mood indicates whether results should be
ranked by query relevance or by artist familiarity
fuzzy_match if true, a fuzzy search is performed
max_familiarity
maximum familiarity
min_familiarity
minimum familiarity
max_hotttnesss maximum hotttnesss
min_hotttnesss minimum hotttnesss
artist_start_year_before
Matches artists that have an earliest start year before the given value
artist_start_year_after
Matches artists that have an earliest start year after the given value
artist_end_year_before
Matches artists that have a latest end year before the given value
artist_end_year_after
Matches artists that have a latest end year after the given value
Value
data frame giving artist’s data
Examples
## Not run:
data=search_artist(api_key,"coldplay",sort="hotttnesss-desc",results=50)
## End(Not run)
search_genre To search for genres by name
Description
To search for genres by name
Usage
search_genre(api_key, genre = NA, description = T, urls = T,
results = 15)
Arguments
api_key Echo Nest API key
genre the genre name
description genre’s description
urls genre’s urls
results the number of results desired
Value
data frame giving searched genres
Examples
## Not run:
data=search_genre(api_key,genre="rock")
## End(Not run)
search_songs To search song
Description
To search song
Usage
search_songs(api_key, artist = NA, artist_id = NA, title = NA,
hotttnesss = T, style = NA, artist_location = T, combined = NA,
sort = NA, audio_summary = F, partner = NA, min_name = NA,
discovery = T, max_name = NA, min_val = NA, max_val = NA,
start = NA, results = 15, mode = NA, key = NA, currency = T,
description = NA, rank_type = "relevance", mood = NA, familiarity = T,
song_type = NA, artist_start_year_before = NA,
artist_start_year_after = NA, artist_end_year_before = NA,
artist_end_year_after = NA)
Arguments
api_key Echo Nest API key
artist artist’s name
artist_id artist’s id
title song’s title
hotttnesss song’s hotttnesss
style artist’s style
artist_location
artist location
combined query both artist and title fields
sort to sort ascending or descending
audio_summary song’s audio summary
partner partner catalog
min_name features’ minimum value settings
discovery artist’s discovery measure
max_name features’ maximum value settings
min_val features’ minimum value settings
max_val features’ maximum value settings
start the desired index of the first result returned
results maximum size
mode the mode of songs
key the key of songs in the playlist
currency song currency
description song’s description
rank_type For search by description, style or mood indicates whether results should be
ranked by query relevance or by artist familiarity
mood a mood like happy or sad
familiarity song’s familiarity
song_type controls the type of songs returned
artist_start_year_before
Matches artists that have an earliest start year before the given value
artist_start_year_after
Matches artists that have an earliest start year after the given value
artist_end_year_before
Matches artists that have a latest end year before the given value
artist_end_year_after
Matches artists that have a latest end year after the given value
Value
data frame giving artist’s familiarity
Examples
## Not run:
data=search_songs(api_key,style="pop",results=31)
## End(Not run)
similar_artists To search similar artists by using names or IDs
Description
To search similar artists by using names or IDs
Usage
similar_artists(api_key, name = NA, id = NA, seed_catalog = NA,
hotttnesss = T, start = 0, results = 15, max_familiarity = NA,
min_familiarity = NA, max_hotttnesss = NA, min_hotttnesss = NA,
artist_start_year_before = NA, artist_start_year_after = NA,
artist_end_year_before = NA, artist_end_year_after = NA)
Arguments
api_key Echo Nest API key
name artists’ names (maximum up to 5 names)
id Echo Nest IDs (maximum up to 5 IDs)
seed_catalog seed catalog
hotttnesss artist’s hotttnesss
start the desired index of the first result returned
results maximum size
max_familiarity
maximum familiarity
min_familiarity
minimum familiarity
max_hotttnesss maximum hotttnesss
min_hotttnesss minimum hotttnesss
artist_start_year_before
Matches artists that have an earliest start year before the given value
artist_start_year_after
Matches artists that have an earliest start year after the given value
artist_end_year_before
Matches artists that have a latest end year before the given value
artist_end_year_after
Matches artists that have a latest end year after the given value
Value
data frame giving similar artists’ data
Examples
## Not run:
data=similar_artists(api_key,name=c("coldplay","adele","maroon 5"),results=35 )
## End(Not run)
similar_genres To return similar genres to a given genre
Description
To return similar genres to a given genre
Usage
similar_genres(api_key, genre = NA, description = T, urls = T,
start = NA, results = 15)
Arguments
api_key Echo Nest API key
genre the genre name
description genre’s description
urls genre’s urls
start the desired index of the first result returned
results the number of results desired
Value
data frame giving similar genres
Examples
## Not run:
data=similar_genres(api_key,genre="rock")
## End(Not run)
standard_static_playlist
To return standard static playlist
Description
To return standard static playlist
Usage
standard_static_playlist(api_key, type = NA, artist_id = NA, artist = NA,
song_id = NA, genre = NA, track_id = NA, results = 15, partner = NA,
tracks = F, limited_interactivity = NA, song_selection = NA,
variety = NA, distribution = NA, adventurousness = NA,
seed_catalog = NA, sort = NA, song_type = NA)
Arguments
api_key Echo Nest API key
type the type of the playlist to be generated
artist_id artist id
artist artist name
song_id song ID
genre genre name
track_id track ID
results the number of results desired
partner partner catalog
tracks tracks info
limited_interactivity
interactivity limitation
song_selection to determine how songs are selected from each artist in artist-type playlists
variety the maximum variety of artists to be represented in the playlist
distribution controls the distribution of artists in the playlist
adventurousness
controls the trade-off between known music and unknown music
seed_catalog ID of seed catalog for the playlist
sort sorting parameter
song_type controls the type of songs returned
Value
data frame giving standard static playlist
Examples
## Not run:
data= standard_static_playlist(api_key,type="artist-radio",artist=c("coldplay","adele"))
## End(Not run)
suggest_artist_names To suggest artists based upon partial names
Description
To suggest artists based upon partial names
Usage
suggest_artist_names(api_key, name, results = NA)
Arguments
api_key Echo Nest API key
name a partial artist name
results the number of results desired (maximum 15)
Value
data frame giving artist’s names
Examples
## Not run:
data=suggest_artist_names(api_key,"cold")
## End(Not run)
README
[¶](#section-readme)
---
### monstache
a go daemon that syncs mongodb to elasticsearch in realtime
[![Monstache CI](https://github.com/rwynn/monstache/workflows/Monstache%20CI/badge.svg?branch=rel6)](https://github.com/rwynn/monstache/actions?query=branch%3Arel6)
[![Go Report Card](https://goreportcard.com/badge/github.com/rwynn/monstache)](https://goreportcard.com/report/github.com/rwynn/monstache)
##### Version 6
This version of monstache is designed for MongoDB 3.6+ and Elasticsearch 7.0+. It uses the official MongoDB golang driver and the community supported Elasticsearch driver from olivere.
Some of the monstache settings related to MongoDB have been removed in this version as they are now supported in the
[connection string](https://github.com/mongodb/mongo-go-driver/raw/v1.0.0/x/network/connstring/connstring.go)
##### Changes from previous versions
Monstache now defaults to use change streams instead of tailing the oplog for changes. Without any configuration monstache watches the entire MongoDB deployment. You can specify specific namespaces to watch by setting the option
`change-stream-namespaces` to an array of strings.
The interface for golang plugins has changed due to the switch to the new driver. Previously the API exposed a `Session` field typed as a `*mgo.Session`. Now that has been replaced with a `MongoClient` field which has the type
`*mongo.Client`.
See the MongoDB go driver docs for details on how to use this client.
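As a rough illustration of the new plugin surface, a v6 mapping plugin might use the `MongoClient` field to enrich documents before they are indexed. Only the `MongoClient` field and its `*mongo.Client` type come from the notes above; the `monstachemap` import path, the `Map` entry point, the other struct fields, and the example database, collection, and field names are assumptions made for this sketch.
```
// Minimal sketch of a monstache v6 golang mapping plugin (assumptions noted above).
package main
import (
	"context"
	"time"
	"github.com/rwynn/monstache/v6/monstachemap"
	"go.mongodb.org/mongo-driver/bson"
)
// Map is called once per change event before the document is indexed into Elasticsearch.
func Map(input *monstachemap.MapperPluginInput) (*monstachemap.MapperPluginOutput, error) {
	doc := input.Document
	// The v6 plugin API exposes the official driver via input.MongoClient
	// (replacing the old *mgo.Session), so lookups use mongo.Client methods.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if authorID, ok := doc["authorId"]; ok {
		var author bson.M
		coll := input.MongoClient.Database("blog").Collection("authors")
		if err := coll.FindOne(ctx, bson.M{"_id": authorID}).Decode(&author); err == nil {
			doc["authorName"] = author["name"] // denormalize before indexing
		}
	}
	return &monstachemap.MapperPluginOutput{Document: doc}, nil
}
```
A plugin like this would be built with `go build -buildmode=plugin` and referenced from the monstache configuration; consult the monstache documentation for the exact option names.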
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
package main provides the monstache binary
Package ‘powerbydesign’
October 14, 2022
Type Package
Title Power Estimates for ANOVA Designs
Date 2021-02-25
Version 1.0.5
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Description Functions for bootstrapping the power of ANOVA designs
based on estimated means and standard deviations of the conditions.
Please refer to the documentation of the boot.power.anova() function
for further details.
License GPL (>= 3)
LazyData TRUE
Suggests testthat
Imports lme4, gdata, MASS, reshape2, stringr, plyr, ggplot2
RoxygenNote 7.1.1
NeedsCompilation no
Repository CRAN
Date/Publication 2021-02-25 13:20:02 UTC
R topics documented:
boot.power.anova
design.anova
plot.power_by_samplesize
boot.power.anova Bootstrap the Power of an ANOVA Design
Description
This function bootstraps the power of each effect in an ANOVA design for a given range of sample
sizes. Power is computed by randomly drawing samples from a multivariate normal distribution
specified according to the values supplied by the design.anova object. Power is defined as the
proportion of bootstrap iterations the p-values of each effect lie below the supplied alpha level. Note
that this function runs many ANOVAs which might be slow for large sample size ranges or bootstrap
iterations (see Details below). Further note that this function does not check for assumptions such
as sphericity.
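Stated as a formula (a restatement of the definition above): with B bootstrap iterations and
p_b the p-value of a given effect in iteration b,
power(effect) = (1/B) * sum_{b=1}^{B} 1[p_b < alpha],
i.e. the fraction of iterations in which that effect is significant at the chosen alpha level.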
Usage
boot.power.anova(design, n_from, n_to, num_iterations_bootstrap, alpha = 0.05)
Arguments
design object of type design.anova
n_from numeric, lower boundary of sample size range (inclusive); refers to N per
between condition
n_to numeric, upper boundary of sample size range (inclusive); refers to N per
between condition
num_iterations_bootstrap
numeric, number of bootstrap iterations for each sample size
alpha numeric, alpha level
Details
Note that this function requires the computation of many ANOVAs and therefore becomes slow
with increasing sample size ranges and bootstrap iterations. It is therefore suggested to first use a
very low number of bootstrap iterations, such as 10, in order to determine a sensible sample size
range for the power of interest. Once done, use this small sample size range and dramatically
increase the bootstrap iterations, such as 3000, in order to determine more reliable power estimates.
Because the power-by-samplesize function is monotonically increasing, a zigzag of power values
with increasing sample sizes indicates that the selected bootstrap iterations are too low.
Value
list containing power-by-samplesize data.frames for each effect
See Also
design.anova, plot.power_by_samplesize
Examples
## Not run:
design <- design.anova(
between = list(age = c("young","old"),
sex = c("male","female")),
within = list(condition = c("cond1","cond2","cond3")),
default_within_correlation = 0.7
)
power_by_samplesize <- boot.power.anova(
design,
n_from = 40,
n_to = 60,
num_iterations_bootstrap = 1000
)
plot(power_by_samplesize,
crit_power = 0.9,
plot_dir = "power_plots")
## End(Not run)
design.anova Define an ANOVA Design
Description
Constructs an "design.anova" object required by the boot.power.anova function.
Usage
design.anova(
between = list(),
within = list(),
default_within_correlation = 0,
save_input_as = NULL,
silent_load = FALSE
)
Arguments
between list, between-subjects factors including its levels
within list, within-subjects factors including its levels
default_within_correlation
numeric, default within-subjects correlation the correlation matrix is populated
with (for designs including within-subjects factors)
save_input_as character, file name prefix of the files the input values entered by the user are
saved to. File names are constructed as paste0(save_input_as,"_cor_matrix.csv")
and paste0(save_input_as,"_means_and_sds.csv")
silent_load boolean, FALSE (default): always show input dialogs (even if data was
successfully loaded from a file); TRUE: show input dialogs only if file did not yet exist
and break with error if data from file does not match the design
Details
Based on the supplied within-subjects factors and between-subjects factors, this function constructs
all conditions of the ANOVA design and opens two dialog windows querying for the expected
correlation matrix and cell means (+ standard deviations) for all conditions.
The first dialog window queries for the correlation matrix of the conditions. If you have a pure
between-subjects design, you may instantly close this window. Otherwise, enter the expected
correlations between all conditions that include within-subjects manipulations. Using the
"default_within_correlation" parameter, a default value can be set. You should fill in only the lower
triangle of the correlation matrix and only the values not containing NAs.
The second dialog window queries for the means and standard deviations expected for each
condition.
Use the "save_input_as" parameter in order to define a file name prefix of the files where the function
saves your input values. This will populate the dialog windows with the saved values on the next
execution of this function. If the parameter is NULL, input values will not be saved. NOTE: You
must delete the respective files if you want to change the design.
Value
object of type design.anova
See Also
boot.power.anova
Examples
## Not run:
design <- design.anova(
between = list(age = c("young","old"),
sex = c("male","female")),
within = list(condition = c("cond1","cond2","cond3")),
default_within_correlation = 0.7,
save_input_as = "myexp1",
silent_load = T
)
## End(Not run)
plot.power_by_samplesize
Plot Power Results
Description
Plots the power-by-samplesize data.frames returned by the power functions.
Usage
## S3 method for class 'power_by_samplesize'
plot(x, crit_power = NULL, plot_dir = NULL, ...)
Arguments
x list, results of a power function
crit_power numeric, critical power value one is looking for (adds a line and the critical
sample size to the plot)
plot_dir character, name of an existing directory where the power plots should be saved
to (leaving it NULL will plot to the default device)
... additional parameters, not used at the moment
See Also
boot.power.anova
README
[¶](#section-readme)
---
### go-mastodon
[![Build Status](https://github.com/mattn/go-mastodon/workflows/test/badge.svg?branch=master)](https://github.com/mattn/go-mastodon/actions?query=workflow%3Atest)
[![Codecov](https://codecov.io/gh/mattn/go-mastodon/branch/master/graph/badge.svg)](https://codecov.io/gh/mattn/go-mastodon)
[![Go Reference](https://pkg.go.dev/badge/github.com/mattn/go-mastodon.svg)](https://pkg.go.dev/github.com/mattn/go-mastodon)
[![Go Report Card](https://goreportcard.com/badge/github.com/mattn/go-mastodon)](https://goreportcard.com/report/github.com/mattn/go-mastodon)
#### Usage
##### Application
```
package main
import (
"context"
"fmt"
"log"
"github.com/mattn/go-mastodon"
)
func main() {
app, err := mastodon.RegisterApp(context.Background(), &mastodon.AppConfig{
Server: "https://mstdn.jp",
ClientName: "client-name",
Scopes: "read write follow",
Website: "https://github.com/mattn/go-mastodon",
})
if err != nil {
log.Fatal(err)
}
fmt.Printf("client-id : %s\n", app.ClientID)
fmt.Printf("client-secret: %s\n", app.ClientSecret)
}
```
##### Client
```
package main
import (
"context"
"fmt"
"log"
"github.com/mattn/go-mastodon"
)
func main() {
c := mastodon.NewClient(&mastodon.Config{
Server: "https://mstdn.jp",
ClientID: "client-id",
ClientSecret: "client-secret",
})
err := c.Authenticate(context.Background(), "your-email", "your-password")
if err != nil {
log.Fatal(err)
}
timeline, err := c.GetTimelineHome(context.Background(), nil)
if err != nil {
log.Fatal(err)
}
for i := len(timeline) - 1; i >= 0; i-- {
fmt.Println(timeline[i])
}
}
```
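##### Pagination (sketch)
Most of the `GET` endpoints listed below accept an optional `*mastodon.Pagination`. The sketch below shows how paging through the home timeline might look; the `AccessToken` field, the placeholder server value, and the `MaxID`-based termination check are assumptions drawn from the `Pagination` type and example referenced in the package documentation further down, not from this README.
```
package main
import (
	"context"
	"fmt"
	"log"
	"github.com/mattn/go-mastodon"
)
func main() {
	c := mastodon.NewClient(&mastodon.Config{
		Server:      "https://mstdn.jp",
		AccessToken: "access-token",
	})
	var pg mastodon.Pagination
	for {
		// Passing the same *Pagination back in lets the client fill it
		// from the Link headers of the previous response.
		statuses, err := c.GetTimelineHome(context.Background(), &pg)
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range statuses {
			fmt.Println(s.ID)
		}
		if pg.MaxID == "" {
			break // no older page to fetch
		}
		// Keep only MaxID so the next call requests the older page.
		pg.SinceID = ""
		pg.MinID = ""
	}
}
```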
#### Status of implementations
* GET /api/v1/accounts/:id
* GET /api/v1/accounts/verify_credentials
* PATCH /api/v1/accounts/update_credentials
* GET /api/v1/accounts/:id/followers
* GET /api/v1/accounts/:id/following
* GET /api/v1/accounts/:id/statuses
* POST /api/v1/accounts/:id/follow
* POST /api/v1/accounts/:id/unfollow
* GET /api/v1/accounts/:id/block
* GET /api/v1/accounts/:id/unblock
* GET /api/v1/accounts/:id/mute
* GET /api/v1/accounts/:id/unmute
* GET /api/v1/accounts/:id/lists
* GET /api/v1/accounts/relationships
* GET /api/v1/accounts/search
* GET /api/v1/apps/verify_credentials
* GET /api/v1/bookmarks
* POST /api/v1/apps
* GET /api/v1/blocks
* GET /api/v1/conversations
* DELETE /api/v1/conversations/:id
* POST /api/v1/conversations/:id/read
* GET /api/v1/favourites
* GET /api/v1/filters
* POST /api/v1/filters
* GET /api/v1/filters/:id
* PUT /api/v1/filters/:id
* DELETE /api/v1/filters/:id
* GET /api/v1/follow_requests
* POST /api/v1/follow_requests/:id/authorize
* POST /api/v1/follow_requests/:id/reject
* POST /api/v1/follows
* GET /api/v1/instance
* GET /api/v1/instance/activity
* GET /api/v1/instance/peers
* GET /api/v1/lists
* GET /api/v1/lists/:id/accounts
* GET /api/v1/lists/:id
* POST /api/v1/lists
* PUT /api/v1/lists/:id
* DELETE /api/v1/lists/:id
* POST /api/v1/lists/:id/accounts
* DELETE /api/v1/lists/:id/accounts
* POST /api/v1/media
* GET /api/v1/mutes
* GET /api/v1/notifications
* GET /api/v1/notifications/:id
* POST /api/v1/notifications/:id/dismiss
* POST /api/v1/notifications/clear
* POST /api/v1/push/subscription
* GET /api/v1/push/subscription
* PUT /api/v1/push/subscription
* DELETE /api/v1/push/subscription
* GET /api/v1/reports
* POST /api/v1/reports
* GET /api/v2/search
* GET /api/v1/statuses/:id
* GET /api/v1/statuses/:id/context
* GET /api/v1/statuses/:id/card
* GET /api/v1/statuses/:id/reblogged_by
* GET /api/v1/statuses/:id/favourited_by
* POST /api/v1/statuses
* DELETE /api/v1/statuses/:id
* POST /api/v1/statuses/:id/reblog
* POST /api/v1/statuses/:id/unreblog
* POST /api/v1/statuses/:id/favourite
* POST /api/v1/statuses/:id/unfavourite
* POST /api/v1/statuses/:id/bookmark
* POST /api/v1/statuses/:id/unbookmark
* GET /api/v1/timelines/home
* GET /api/v1/timelines/public
* GET /api/v1/timelines/tag/:hashtag
* GET /api/v1/timelines/list/:id
* GET /api/v1/streaming/user
* GET /api/v1/streaming/public
* GET /api/v1/streaming/hashtag?tag=:hashtag
* GET /api/v1/streaming/hashtag/local?tag=:hashtag
* GET /api/v1/streaming/list?list=:list_id
* GET /api/v1/streaming/direct
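The streaming endpoints above are exposed as methods returning a `chan Event` (see `StreamingUser`, `StreamingPublic`, etc. in the documentation below). As a hedged sketch, events might be consumed with a type switch over the event structs listed in the package index; the placeholder server and token values and the specific field accesses shown (`Status.URL`, `Notification.Type`) are assumptions for illustration.
```
package main
import (
	"context"
	"fmt"
	"log"
	"github.com/mattn/go-mastodon"
)
func main() {
	c := mastodon.NewClient(&mastodon.Config{
		Server:      "https://mstdn.jp",
		AccessToken: "access-token",
	})
	// StreamingUser returns a channel of events for the authenticated user.
	events, err := c.StreamingUser(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for e := range events {
		// Concrete event types are listed in the package index below.
		switch ev := e.(type) {
		case *mastodon.UpdateEvent:
			fmt.Println("status:", ev.Status.URL)
		case *mastodon.NotificationEvent:
			fmt.Println("notification:", ev.Notification.Type)
		case *mastodon.DeleteEvent:
			fmt.Println("deleted status:", ev.ID)
		case *mastodon.ErrorEvent:
			log.Println("stream error:", ev.Error())
		}
	}
}
```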
#### Installation
```
go install github.com/mattn/go-mastodon@latest
```
#### License
MIT
#### Author
<NAME> (a.k.a. mattn)
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package mastodon provides functions and structs for accessing the mastodon API.
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [func Base64Encode(file *os.File) (string, error)](#Base64Encode)
* [func Base64EncodeFileName(filename string) (string, error)](#Base64EncodeFileName)
* [func String(v string) *string](#String)
* [type Account](#Account)
* [type AccountSource](#AccountSource)
* [type AppConfig](#AppConfig)
* [type Application](#Application)
* + [func RegisterApp(ctx context.Context, appConfig *AppConfig) (*Application, error)](#RegisterApp)
* [type ApplicationVerification](#ApplicationVerification)
* [type Attachment](#Attachment)
* [type AttachmentMeta](#AttachmentMeta)
* [type AttachmentSize](#AttachmentSize)
* [type Card](#Card)
* [type Client](#Client)
* + [func NewClient(config *Config) *Client](#NewClient)
* + [func (c *Client) AccountBlock(ctx context.Context, id ID) (*Relationship, error)](#Client.AccountBlock)
+ [func (c *Client) AccountFollow(ctx context.Context, id ID) (*Relationship, error)](#Client.AccountFollow)
+ [func (c *Client) AccountMute(ctx context.Context, id ID) (*Relationship, error)](#Client.AccountMute)
+ [func (c *Client) AccountUnblock(ctx context.Context, id ID) (*Relationship, error)](#Client.AccountUnblock)
+ [func (c *Client) AccountUnfollow(ctx context.Context, id ID) (*Relationship, error)](#Client.AccountUnfollow)
+ [func (c *Client) AccountUnmute(ctx context.Context, id ID) (*Relationship, error)](#Client.AccountUnmute)
+ [func (c *Client) AccountUpdate(ctx context.Context, profile *Profile) (*Account, error)](#Client.AccountUpdate)
+ [func (c *Client) AccountsSearch(ctx context.Context, q string, limit int64) ([]*Account, error)](#Client.AccountsSearch)
+ [func (c *Client) AddPushSubscription(ctx context.Context, endpoint string, public ecdsa.PublicKey, shared []byte, ...) (*PushSubscription, error)](#Client.AddPushSubscription)
+ [func (c *Client) AddToList(ctx context.Context, list ID, accounts ...ID) error](#Client.AddToList)
+ [func (c *Client) Authenticate(ctx context.Context, username, password string) error](#Client.Authenticate)
+ [func (c *Client) AuthenticateApp(ctx context.Context) error](#Client.AuthenticateApp)
+ [func (c *Client) AuthenticateToken(ctx context.Context, authCode, redirectURI string) error](#Client.AuthenticateToken)
+ [func (c *Client) Bookmark(ctx context.Context, id ID) (*Status, error)](#Client.Bookmark)
+ [func (c *Client) ClearNotifications(ctx context.Context) error](#Client.ClearNotifications)
+ [func (c *Client) CreateFilter(ctx context.Context, filter *Filter) (*Filter, error)](#Client.CreateFilter)
+ [func (c *Client) CreateList(ctx context.Context, title string) (*List, error)](#Client.CreateList)
+ [func (c *Client) DeleteConversation(ctx context.Context, id ID) error](#Client.DeleteConversation)
+ [func (c *Client) DeleteFilter(ctx context.Context, id ID) error](#Client.DeleteFilter)
+ [func (c *Client) DeleteList(ctx context.Context, id ID) error](#Client.DeleteList)
+ [func (c *Client) DeleteStatus(ctx context.Context, id ID) error](#Client.DeleteStatus)
+ [func (c *Client) DismissNotification(ctx context.Context, id ID) error](#Client.DismissNotification)
+ [func (c *Client) Favourite(ctx context.Context, id ID) (*Status, error)](#Client.Favourite)
+ [func (c *Client) FollowRemoteUser(ctx context.Context, uri string) (*Account, error)](#Client.FollowRemoteUser)
+ [func (c *Client) FollowRequestAuthorize(ctx context.Context, id ID) error](#Client.FollowRequestAuthorize)
+ [func (c *Client) FollowRequestReject(ctx context.Context, id ID) error](#Client.FollowRequestReject)
+ [func (c *Client) GetAccount(ctx context.Context, id ID) (*Account, error)](#Client.GetAccount)
+ [func (c *Client) GetAccountCurrentUser(ctx context.Context) (*Account, error)](#Client.GetAccountCurrentUser)
+ [func (c *Client) GetAccountFollowers(ctx context.Context, id ID, pg *Pagination) ([]*Account, error)](#Client.GetAccountFollowers)
+ [func (c *Client) GetAccountFollowing(ctx context.Context, id ID, pg *Pagination) ([]*Account, error)](#Client.GetAccountFollowing)
+ [func (c *Client) GetAccountLists(ctx context.Context, id ID) ([]*List, error)](#Client.GetAccountLists)
+ [func (c *Client) GetAccountPinnedStatuses(ctx context.Context, id ID) ([]*Status, error)](#Client.GetAccountPinnedStatuses)
+ [func (c *Client) GetAccountRelationships(ctx context.Context, ids []string) ([]*Relationship, error)](#Client.GetAccountRelationships)
+ [func (c *Client) GetAccountStatuses(ctx context.Context, id ID, pg *Pagination) ([]*Status, error)](#Client.GetAccountStatuses)
+ [func (c *Client) GetBlocks(ctx context.Context, pg *Pagination) ([]*Account, error)](#Client.GetBlocks)
+ [func (c *Client) GetBookmarks(ctx context.Context, pg *Pagination) ([]*Status, error)](#Client.GetBookmarks)
+ [func (c *Client) GetConversations(ctx context.Context, pg *Pagination) ([]*Conversation, error)](#Client.GetConversations)
+ [func (c *Client) GetFavouritedBy(ctx context.Context, id ID, pg *Pagination) ([]*Account, error)](#Client.GetFavouritedBy)
+ [func (c *Client) GetFavourites(ctx context.Context, pg *Pagination) ([]*Status, error)](#Client.GetFavourites)
+ [func (c *Client) GetFilter(ctx context.Context, id ID) (*Filter, error)](#Client.GetFilter)
+ [func (c *Client) GetFilters(ctx context.Context) ([]*Filter, error)](#Client.GetFilters)
+ [func (c *Client) GetFollowRequests(ctx context.Context, pg *Pagination) ([]*Account, error)](#Client.GetFollowRequests)
+ [func (c *Client) GetInstance(ctx context.Context) (*Instance, error)](#Client.GetInstance)
+ [func (c *Client) GetInstanceActivity(ctx context.Context) ([]*WeeklyActivity, error)](#Client.GetInstanceActivity)
+ [func (c *Client) GetInstancePeers(ctx context.Context) ([]string, error)](#Client.GetInstancePeers)
+ [func (c *Client) GetList(ctx context.Context, id ID) (*List, error)](#Client.GetList)
+ [func (c *Client) GetListAccounts(ctx context.Context, id ID) ([]*Account, error)](#Client.GetListAccounts)
+ [func (c *Client) GetLists(ctx context.Context) ([]*List, error)](#Client.GetLists)
+ [func (c *Client) GetMutes(ctx context.Context, pg *Pagination) ([]*Account, error)](#Client.GetMutes)
+ [func (c *Client) GetNotification(ctx context.Context, id ID) (*Notification, error)](#Client.GetNotification)
+ [func (c *Client) GetNotifications(ctx context.Context, pg *Pagination) ([]*Notification, error)](#Client.GetNotifications)
+ [func (c *Client) GetPoll(ctx context.Context, id ID) (*Poll, error)](#Client.GetPoll)
+ [func (c *Client) GetPushSubscription(ctx context.Context) (*PushSubscription, error)](#Client.GetPushSubscription)
+ [func (c *Client) GetRebloggedBy(ctx context.Context, id ID, pg *Pagination) ([]*Account, error)](#Client.GetRebloggedBy)
+ [func (c *Client) GetReports(ctx context.Context) ([]*Report, error)](#Client.GetReports)
+ [func (c *Client) GetStatus(ctx context.Context, id ID) (*Status, error)](#Client.GetStatus)
+ [func (c *Client) GetStatusCard(ctx context.Context, id ID) (*Card, error)](#Client.GetStatusCard)
+ [func (c *Client) GetStatusContext(ctx context.Context, id ID) (*Context, error)](#Client.GetStatusContext)
+ [func (c *Client) GetTimelineDirect(ctx context.Context, pg *Pagination) ([]*Status, error)](#Client.GetTimelineDirect)
+ [func (c *Client) GetTimelineHashtag(ctx context.Context, tag string, isLocal bool, pg *Pagination) ([]*Status, error)](#Client.GetTimelineHashtag)
+ [func (c *Client) GetTimelineHome(ctx context.Context, pg *Pagination) ([]*Status, error)](#Client.GetTimelineHome)
+ [func (c *Client) GetTimelineList(ctx context.Context, id ID, pg *Pagination) ([]*Status, error)](#Client.GetTimelineList)
+ [func (c *Client) GetTimelineMedia(ctx context.Context, isLocal bool, pg *Pagination) ([]*Status, error)](#Client.GetTimelineMedia)
+ [func (c *Client) GetTimelinePublic(ctx context.Context, isLocal bool, pg *Pagination) ([]*Status, error)](#Client.GetTimelinePublic)
+ [func (c *Client) MarkConversationAsRead(ctx context.Context, id ID) error](#Client.MarkConversationAsRead)
+ [func (c *Client) NewWSClient() *WSClient](#Client.NewWSClient)
+ [func (c *Client) PollVote(ctx context.Context, id ID, choices ...int) (*Poll, error)](#Client.PollVote)
+ [func (c *Client) PostStatus(ctx context.Context, toot *Toot) (*Status, error)](#Client.PostStatus)
+ [func (c *Client) Reblog(ctx context.Context, id ID) (*Status, error)](#Client.Reblog)
+ [func (c *Client) RemoveFromList(ctx context.Context, list ID, accounts ...ID) error](#Client.RemoveFromList)
+ [func (c *Client) RemovePushSubscription(ctx context.Context) error](#Client.RemovePushSubscription)
+ [func (c *Client) RenameList(ctx context.Context, id ID, title string) (*List, error)](#Client.RenameList)
+ [func (c *Client) Report(ctx context.Context, accountID ID, ids []ID, comment string) (*Report, error)](#Client.Report)
+ [func (c *Client) Search(ctx context.Context, q string, resolve bool) (*Results, error)](#Client.Search)
+ [func (c *Client) StreamingDirect(ctx context.Context) (chan Event, error)](#Client.StreamingDirect)
+ [func (c *Client) StreamingHashtag(ctx context.Context, tag string, isLocal bool) (chan Event, error)](#Client.StreamingHashtag)
+ [func (c *Client) StreamingList(ctx context.Context, id ID) (chan Event, error)](#Client.StreamingList)
+ [func (c *Client) StreamingPublic(ctx context.Context, isLocal bool) (chan Event, error)](#Client.StreamingPublic)
+ [func (c *Client) StreamingUser(ctx context.Context) (chan Event, error)](#Client.StreamingUser)
+ [func (c *Client) Unbookmark(ctx context.Context, id ID) (*Status, error)](#Client.Unbookmark)
+ [func (c *Client) Unfavourite(ctx context.Context, id ID) (*Status, error)](#Client.Unfavourite)
+ [func (c *Client) Unreblog(ctx context.Context, id ID) (*Status, error)](#Client.Unreblog)
+ [func (c *Client) UpdateFilter(ctx context.Context, id ID, filter *Filter) (*Filter, error)](#Client.UpdateFilter)
+ [func (c *Client) UpdatePushSubscription(ctx context.Context, alerts *PushAlerts) (*PushSubscription, error)](#Client.UpdatePushSubscription)
+ [func (c *Client) UploadMedia(ctx context.Context, file string) (*Attachment, error)](#Client.UploadMedia)
+ [func (c *Client) UploadMediaFromBytes(ctx context.Context, b []byte) (*Attachment, error)](#Client.UploadMediaFromBytes)
+ [func (c *Client) UploadMediaFromMedia(ctx context.Context, media *Media) (*Attachment, error)](#Client.UploadMediaFromMedia)
+ [func (c *Client) UploadMediaFromReader(ctx context.Context, reader io.Reader) (*Attachment, error)](#Client.UploadMediaFromReader)
+ [func (c *Client) VerifyAppCredentials(ctx context.Context) (*ApplicationVerification, error)](#Client.VerifyAppCredentials)
* [type Config](#Config)
* [type Context](#Context)
* [type Conversation](#Conversation)
* [type DeleteEvent](#DeleteEvent)
* [type Emoji](#Emoji)
* [type ErrorEvent](#ErrorEvent)
* + [func (e *ErrorEvent) Error() string](#ErrorEvent.Error)
* [type Event](#Event)
* [type Field](#Field)
* [type Filter](#Filter)
* [type History](#History)
* [type ID](#ID)
* + [func (id *ID) UnmarshalJSON(data []byte) error](#ID.UnmarshalJSON)
* [type Instance](#Instance)
* [type InstanceStats](#InstanceStats)
* [type List](#List)
* [type Media](#Media)
* [type Mention](#Mention)
* [type Notification](#Notification)
* [type NotificationEvent](#NotificationEvent)
* [type Pagination](#Pagination)
* [type Poll](#Poll)
* [type PollOption](#PollOption)
* [type Profile](#Profile)
* [type PushAlerts](#PushAlerts)
* [type PushSubscription](#PushSubscription)
* [type Relationship](#Relationship)
* [type Report](#Report)
* [type Results](#Results)
* [type Sbool](#Sbool)
* + [func (s *Sbool) UnmarshalJSON(data []byte) error](#Sbool.UnmarshalJSON)
* [type Status](#Status)
* [type Stream](#Stream)
* [type Tag](#Tag)
* [type Toot](#Toot)
* [type TootPoll](#TootPoll)
* [type Unixtime](#Unixtime)
* + [func (t *Unixtime) UnmarshalJSON(data []byte) error](#Unixtime.UnmarshalJSON)
* [type UpdateEvent](#UpdateEvent)
* [type WSClient](#WSClient)
* + [func (c *WSClient) StreamingWSHashtag(ctx context.Context, tag string, isLocal bool) (chan Event, error)](#WSClient.StreamingWSHashtag)
+ [func (c *WSClient) StreamingWSList(ctx context.Context, id ID) (chan Event, error)](#WSClient.StreamingWSList)
+ [func (c *WSClient) StreamingWSPublic(ctx context.Context, isLocal bool) (chan Event, error)](#WSClient.StreamingWSPublic)
+ [func (c *WSClient) StreamingWSUser(ctx context.Context) (chan Event, error)](#WSClient.StreamingWSUser)
* [type WeeklyActivity](#WeeklyActivity)
#### Examples [¶](#pkg-examples)
* [Client](#example-Client)
* [Pagination](#example-Pagination)
* [RegisterApp](#example-RegisterApp)
### Constants [¶](#pkg-constants)
```
const (
VisibilityPublic = "public"
VisibilityUnlisted = "unlisted"
VisibilityFollowersOnly = "private"
VisibilityDirectMessage = "direct"
)
```
Convenience constants for Toot.Visibility
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [Base64Encode](https://github.com/mattn/go-mastodon/blob/v0.0.6/helper.go#L24) [¶](#Base64Encode)
```
func Base64Encode(file *[os](/os).[File](/os#File)) ([string](/builtin#string), [error](/builtin#error))
```
Base64Encode returns the base64 data URI format string of the file.
####
func [Base64EncodeFileName](https://github.com/mattn/go-mastodon/blob/v0.0.6/helper.go#L13) [¶](#Base64EncodeFileName)
```
func Base64EncodeFileName(filename [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```
Base64EncodeFileName returns the base64 data URI format string of the file with the file name.
####
func [String](https://github.com/mattn/go-mastodon/blob/v0.0.6/helper.go#L41) [¶](#String)
```
func String(v [string](/builtin#string)) *[string](/builtin#string)
```
String is a helper function to get the pointer value of a string.
### Types [¶](#pkg-types)
####
type [Account](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L13) [¶](#Account)
```
type Account struct {
ID [ID](#ID) `json:"id"`
Username [string](/builtin#string) `json:"username"`
Acct [string](/builtin#string) `json:"acct"`
DisplayName [string](/builtin#string) `json:"display_name"`
Locked [bool](/builtin#bool) `json:"locked"`
CreatedAt [time](/time).[Time](/time#Time) `json:"created_at"`
FollowersCount [int64](/builtin#int64) `json:"followers_count"`
FollowingCount [int64](/builtin#int64) `json:"following_count"`
StatusesCount [int64](/builtin#int64) `json:"statuses_count"`
Note [string](/builtin#string) `json:"note"`
URL [string](/builtin#string) `json:"url"`
Avatar [string](/builtin#string) `json:"avatar"`
AvatarStatic [string](/builtin#string) `json:"avatar_static"`
Header [string](/builtin#string) `json:"header"`
HeaderStatic [string](/builtin#string) `json:"header_static"`
Emojis [][Emoji](#Emoji) `json:"emojis"`
Moved *[Account](#Account) `json:"moved"`
Fields [][Field](#Field) `json:"fields"`
Bot [bool](/builtin#bool) `json:"bot"`
Discoverable [bool](/builtin#bool) `json:"discoverable"`
Source *[AccountSource](#AccountSource) `json:"source"`
}
```
Account holds information for a mastodon account.
####
type [AccountSource](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L45) [¶](#AccountSource)
added in v0.0.4
```
type AccountSource struct {
Privacy *[string](/builtin#string) `json:"privacy"`
Sensitive *[bool](/builtin#bool) `json:"sensitive"`
Language *[string](/builtin#string) `json:"language"`
Note *[string](/builtin#string) `json:"note"`
Fields *[][Field](#Field) `json:"fields"`
}
```
AccountSource is a Mastodon account profile field.
####
type [AppConfig](https://github.com/mattn/go-mastodon/blob/v0.0.6/apps.go#L13) [¶](#AppConfig)
```
type AppConfig struct {
[http](/net/http).[Client](/net/http#Client)
Server [string](/builtin#string)
ClientName [string](/builtin#string)
// Where the user should be redirected after authorization (for no redirect, use urn:ietf:wg:oauth:2.0:oob)
RedirectURIs [string](/builtin#string)
// This can be a space-separated list of items listed on the /settings/applications/new page of any Mastodon
// instance. "read", "write", and "follow" are top-level scopes that include all the permissions of the more
// specific scopes like "read:favourites", "write:statuses", and "write:follows".
Scopes [string](/builtin#string)
// Optional.
Website [string](/builtin#string)
}
```
AppConfig is a setting for registering applications.
####
type [Application](https://github.com/mattn/go-mastodon/blob/v0.0.6/apps.go#L31) [¶](#Application)
```
type Application struct {
ID [ID](#ID) `json:"id"`
RedirectURI [string](/builtin#string) `json:"redirect_uri"`
ClientID [string](/builtin#string) `json:"client_id"`
ClientSecret [string](/builtin#string) `json:"client_secret"`
// AuthURI is not part of the Mastodon API; it is generated by go-mastodon.
AuthURI [string](/builtin#string) `json:"auth_uri,omitempty"`
}
```
Application is a mastodon application.
####
func [RegisterApp](https://github.com/mattn/go-mastodon/blob/v0.0.6/apps.go#L42) [¶](#RegisterApp)
```
func RegisterApp(ctx [context](/context).[Context](/context#Context), appConfig *[AppConfig](#AppConfig)) (*[Application](#Application), [error](/builtin#error))
```
RegisterApp registers a new application and returns it.
Example [¶](#example-RegisterApp)
```
app, err := mastodon.RegisterApp(context.Background(), &mastodon.AppConfig{
Server: "https://mstdn.jp",
ClientName: "client-name",
Scopes: "read write follow",
Website: "https://github.com/mattn/go-mastodon",
})
if err != nil {
log.Fatal(err)
}
fmt.Printf("client-id : %s\n", app.ClientID)
fmt.Printf("client-secret: %s\n", app.ClientSecret)
```
```
Output:
```
####
type [ApplicationVerification](https://github.com/mattn/go-mastodon/blob/v0.0.6/apps.go#L99) [¶](#ApplicationVerification)
added in v0.0.6
```
type ApplicationVerification struct {
Name [string](/builtin#string) `json:"name"`
Website [string](/builtin#string) `json:"website"`
VapidKey [string](/builtin#string) `json:"vapid_key"`
}
```
ApplicationVerification holds the verification details of a mastodon application.
####
type [Attachment](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L269) [¶](#Attachment)
```
type Attachment struct {
ID [ID](#ID) `json:"id"`
Type [string](/builtin#string) `json:"type"`
URL [string](/builtin#string) `json:"url"`
RemoteURL [string](/builtin#string) `json:"remote_url"`
PreviewURL [string](/builtin#string) `json:"preview_url"`
TextURL [string](/builtin#string) `json:"text_url"`
Description [string](/builtin#string) `json:"description"`
Meta [AttachmentMeta](#AttachmentMeta) `json:"meta"`
}
```
Attachment holds information for an attachment.
####
type [AttachmentMeta](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L281) [¶](#AttachmentMeta)
added in v0.0.4
```
type AttachmentMeta struct {
Original [AttachmentSize](#AttachmentSize) `json:"original"`
Small [AttachmentSize](#AttachmentSize) `json:"small"`
}
```
AttachmentMeta holds information for attachment metadata.
####
type [AttachmentSize](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L287) [¶](#AttachmentSize)
added in v0.0.4
```
type AttachmentSize struct {
Width [int64](/builtin#int64) `json:"width"`
Height [int64](/builtin#int64) `json:"height"`
Size [string](/builtin#string) `json:"size"`
Aspect [float64](/builtin#float64) `json:"aspect"`
}
```
AttachmentSize holds information for attachment size.
####
type [Card](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L55) [¶](#Card)
```
type Card struct {
URL [string](/builtin#string) `json:"url"`
Title [string](/builtin#string) `json:"title"`
Description [string](/builtin#string) `json:"description"`
Image [string](/builtin#string) `json:"image"`
Type [string](/builtin#string) `json:"type"`
AuthorName [string](/builtin#string) `json:"author_name"`
AuthorURL [string](/builtin#string) `json:"author_url"`
ProviderName [string](/builtin#string) `json:"provider_name"`
ProviderURL [string](/builtin#string) `json:"provider_url"`
HTML [string](/builtin#string) `json:"html"`
Width [int64](/builtin#int64) `json:"width"`
Height [int64](/builtin#int64) `json:"height"`
}
```
Card holds information for a mastodon card.
####
type [Client](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L28) [¶](#Client)
```
type Client struct {
[http](/net/http).[Client](/net/http#Client)
Config *[Config](#Config)
UserAgent [string](/builtin#string)
}
```
Client is an API client for mastodon.
Example [¶](#example-Client)
```
c := mastodon.NewClient(&mastodon.Config{
Server: "https://mstdn.jp",
ClientID: "client-id",
ClientSecret: "client-secret",
})
err := c.Authenticate(context.Background(), "your-email", "your-password")
if err != nil {
log.Fatal(err)
}
timeline, err := c.GetTimelineHome(context.Background(), nil)
if err != nil {
log.Fatal(err)
}
for i := len(timeline) - 1; i >= 0; i-- {
fmt.Println(timeline[i])
}
```
```
Output:
```
####
func [NewClient](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L132) [¶](#NewClient)
```
func NewClient(config *[Config](#Config)) *[Client](#Client)
```
NewClient returns a new mastodon API client.
####
func (*Client) [AccountBlock](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L219) [¶](#Client.AccountBlock)
```
func (c *[Client](#Client)) AccountBlock(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Relationship](#Relationship), [error](/builtin#error))
```
AccountBlock blocks the account.
####
func (*Client) [AccountFollow](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L199) [¶](#Client.AccountFollow)
```
func (c *[Client](#Client)) AccountFollow(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Relationship](#Relationship), [error](/builtin#error))
```
AccountFollow follows the account.
####
func (*Client) [AccountMute](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L239) [¶](#Client.AccountMute)
```
func (c *[Client](#Client)) AccountMute(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Relationship](#Relationship), [error](/builtin#error))
```
AccountMute mutes the account.
####
func (*Client) [AccountUnblock](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L229) [¶](#Client.AccountUnblock)
```
func (c *[Client](#Client)) AccountUnblock(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Relationship](#Relationship), [error](/builtin#error))
```
AccountUnblock unblocks the account.
####
func (*Client) [AccountUnfollow](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L209) [¶](#Client.AccountUnfollow)
```
func (c *[Client](#Client)) AccountUnfollow(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Relationship](#Relationship), [error](/builtin#error))
```
AccountUnfollow unfollows the account.
####
func (*Client) [AccountUnmute](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L249) [¶](#Client.AccountUnmute)
```
func (c *[Client](#Client)) AccountUnmute(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Relationship](#Relationship), [error](/builtin#error))
```
AccountUnmute unmutes the account.
####
func (*Client) [AccountUpdate](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L89) [¶](#Client.AccountUpdate)
```
func (c *[Client](#Client)) AccountUpdate(ctx [context](/context).[Context](/context#Context), profile *[Profile](#Profile)) (*[Account](#Account), [error](/builtin#error))
```
AccountUpdate updates the information of the current user.
####
func (*Client) [AccountsSearch](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L274) [¶](#Client.AccountsSearch)
```
func (c *[Client](#Client)) AccountsSearch(ctx [context](/context).[Context](/context#Context), q [string](/builtin#string), limit [int64](/builtin#int64)) ([]*[Account](#Account), [error](/builtin#error))
```
AccountsSearch searches accounts by query.
####
func (*Client) [AddPushSubscription](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L69) [¶](#Client.AddPushSubscription)
added in v0.0.5
```
func (c *[Client](#Client)) AddPushSubscription(ctx [context](/context).[Context](/context#Context), endpoint [string](/builtin#string), public [ecdsa](/crypto/ecdsa).[PublicKey](/crypto/ecdsa#PublicKey), shared [][byte](/builtin#byte), alerts [PushAlerts](#PushAlerts)) (*[PushSubscription](#PushSubscription), [error](/builtin#error))
```
AddPushSubscription adds a new push subscription.
####
func (*Client) [AddToList](https://github.com/mattn/go-mastodon/blob/v0.0.6/lists.go#L90) [¶](#Client.AddToList)
added in v0.0.4
```
func (c *[Client](#Client)) AddToList(ctx [context](/context).[Context](/context#Context), list [ID](#ID), accounts ...[ID](#ID)) [error](/builtin#error)
```
AddToList adds accounts to a list.
Only accounts already followed by the user can be added to a list.
####
func (*Client) [Authenticate](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L140) [¶](#Client.Authenticate)
```
func (c *[Client](#Client)) Authenticate(ctx [context](/context).[Context](/context#Context), username, password [string](/builtin#string)) [error](/builtin#error)
```
Authenticate gets an access token for the API.
####
func (*Client) [AuthenticateApp](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L154) [¶](#Client.AuthenticateApp)
added in v0.0.6
```
func (c *[Client](#Client)) AuthenticateApp(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error)
```
AuthenticateApp logs in using client credentials.
####
func (*Client) [AuthenticateToken](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L168) [¶](#Client.AuthenticateToken)
added in v0.0.3
```
func (c *[Client](#Client)) AuthenticateToken(ctx [context](/context).[Context](/context#Context), authCode, redirectURI [string](/builtin#string)) [error](/builtin#error)
```
AuthenticateToken logs in using a grant token returned by Application.AuthURI.
redirectURI should be the same as Application.RedirectURI.
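For illustration, a minimal sketch of the authorization-code flow; the server URL, client name, and the way the grant code is read back from the user are placeholders, not part of the package.
```
app, err := mastodon.RegisterApp(context.Background(), &mastodon.AppConfig{
    Server:       "https://mastodon.example", // placeholder instance
    ClientName:   "example-app",
    Scopes:       "read write follow",
    RedirectURIs: "urn:ietf:wg:oauth:2.0:oob",
})
if err != nil {
    log.Fatal(err)
}
// Ask the user to open app.AuthURI in a browser and paste the grant code back.
fmt.Println("Open:", app.AuthURI)
var code string
fmt.Scanln(&code)

c := mastodon.NewClient(&mastodon.Config{
    Server:       "https://mastodon.example",
    ClientID:     app.ClientID,
    ClientSecret: app.ClientSecret,
})
if err := c.AuthenticateToken(context.Background(), code, "urn:ietf:wg:oauth:2.0:oob"); err != nil {
    log.Fatal(err)
}
```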
####
func (*Client) [Bookmark](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L254) [¶](#Client.Bookmark)
added in v0.0.5
```
func (c *[Client](#Client)) Bookmark(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Status](#Status), [error](/builtin#error))
```
Bookmark bookmarks the toot of id and returns the status of the bookmarked toot.
####
func (*Client) [ClearNotifications](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L64) [¶](#Client.ClearNotifications)
```
func (c *[Client](#Client)) ClearNotifications(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error)
```
ClearNotifications clears notifications.
####
func (*Client) [CreateFilter](https://github.com/mattn/go-mastodon/blob/v0.0.6/filters.go#L43) [¶](#Client.CreateFilter)
added in v0.0.5
```
func (c *[Client](#Client)) CreateFilter(ctx [context](/context).[Context](/context#Context), filter *[Filter](#Filter)) (*[Filter](#Filter), [error](/builtin#error))
```
CreateFilter creates a new filter.
####
func (*Client) [CreateList](https://github.com/mattn/go-mastodon/blob/v0.0.6/lists.go#L57) [¶](#Client.CreateList)
added in v0.0.4
```
func (c *[Client](#Client)) CreateList(ctx [context](/context).[Context](/context#Context), title [string](/builtin#string)) (*[List](#List), [error](/builtin#error))
```
CreateList creates a new list with a given title.
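As an illustrative sketch (the list title and account ID are placeholders, and c is assumed to be an authenticated client), creating a list and adding an already-followed account to it might look like:
```
list, err := c.CreateList(context.Background(), "friends")
if err != nil {
    log.Fatal(err)
}
// Placeholder ID of an account the current user already follows.
accountID := mastodon.ID("1")
if err := c.AddToList(context.Background(), list.ID, accountID); err != nil {
    log.Fatal(err)
}
```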
####
func (*Client) [DeleteConversation](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L467) [¶](#Client.DeleteConversation)
added in v0.0.5
```
func (c *[Client](#Client)) DeleteConversation(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) [error](/builtin#error)
```
DeleteConversation deletes the conversation specified by id.
####
func (*Client) [DeleteFilter](https://github.com/mattn/go-mastodon/blob/v0.0.6/filters.go#L122) [¶](#Client.DeleteFilter)
added in v0.0.5
```
func (c *[Client](#Client)) DeleteFilter(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) [error](/builtin#error)
```
DeleteFilter removes a filter.
####
func (*Client) [DeleteList](https://github.com/mattn/go-mastodon/blob/v0.0.6/lists.go#L83) [¶](#Client.DeleteList)
added in v0.0.4
```
func (c *[Client](#Client)) DeleteList(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) [error](/builtin#error)
```
DeleteList removes a list.
####
func (*Client) [DeleteStatus](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L387) [¶](#Client.DeleteStatus)
```
func (c *[Client](#Client)) DeleteStatus(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) [error](/builtin#error)
```
DeleteStatus deletes the toot.
####
func (*Client) [DismissNotification](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L59) [¶](#Client.DismissNotification)
added in v0.0.5
```
func (c *[Client](#Client)) DismissNotification(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) [error](/builtin#error)
```
DismissNotification deletes a single notification.
####
func (*Client) [Favourite](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L234) [¶](#Client.Favourite)
```
func (c *[Client](#Client)) Favourite(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Status](#Status), [error](/builtin#error))
```
Favourite favourites the toot of id and returns the status of the favourited toot.
####
func (*Client) [FollowRemoteUser](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L288) [¶](#Client.FollowRemoteUser)
```
func (c *[Client](#Client)) FollowRemoteUser(ctx [context](/context).[Context](/context#Context), uri [string](/builtin#string)) (*[Account](#Account), [error](/builtin#error))
```
FollowRemoteUser sends a follow request.
####
func (*Client) [FollowRequestAuthorize](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L311) [¶](#Client.FollowRequestAuthorize)
```
func (c *[Client](#Client)) FollowRequestAuthorize(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) [error](/builtin#error)
```
FollowRequestAuthorize authorizes the follow request of user with id.
####
func (*Client) [FollowRequestReject](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L316) [¶](#Client.FollowRequestReject)
```
func (c *[Client](#Client)) FollowRequestReject(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) [error](/builtin#error)
```
FollowRequestReject rejects the follow request of user with id.
####
func (*Client) [GetAccount](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L54) [¶](#Client.GetAccount)
```
func (c *[Client](#Client)) GetAccount(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Account](#Account), [error](/builtin#error))
```
GetAccount returns the Account specified by id.
####
func (*Client) [GetAccountCurrentUser](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L64) [¶](#Client.GetAccountCurrentUser)
```
func (c *[Client](#Client)) GetAccountCurrentUser(ctx [context](/context).[Context](/context#Context)) (*[Account](#Account), [error](/builtin#error))
```
GetAccountCurrentUser returns the Account of the current user.
####
func (*Client) [GetAccountFollowers](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L155) [¶](#Client.GetAccountFollowers)
```
func (c *[Client](#Client)) GetAccountFollowers(ctx [context](/context).[Context](/context#Context), id [ID](#ID), pg *[Pagination](#Pagination)) ([]*[Account](#Account), [error](/builtin#error))
```
GetAccountFollowers returns followers list.
####
func (*Client) [GetAccountFollowing](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L165) [¶](#Client.GetAccountFollowing)
```
func (c *[Client](#Client)) GetAccountFollowing(ctx [context](/context).[Context](/context#Context), id [ID](#ID), pg *[Pagination](#Pagination)) ([]*[Account](#Account), [error](/builtin#error))
```
GetAccountFollowing returns following list.
####
func (*Client) [GetAccountLists](https://github.com/mattn/go-mastodon/blob/v0.0.6/lists.go#L27) [¶](#Client.GetAccountLists)
added in v0.0.4
```
func (c *[Client](#Client)) GetAccountLists(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) ([]*[List](#List), [error](/builtin#error))
```
GetAccountLists returns the lists containing a given account.
####
func (*Client) [GetAccountPinnedStatuses](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L143) [¶](#Client.GetAccountPinnedStatuses)
added in v0.0.5
```
func (c *[Client](#Client)) GetAccountPinnedStatuses(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) ([]*[Status](#Status), [error](/builtin#error))
```
GetAccountPinnedStatuses returns statuses pinned by the specified account.
####
func (*Client) [GetAccountRelationships](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L259) [¶](#Client.GetAccountRelationships)
```
func (c *[Client](#Client)) GetAccountRelationships(ctx [context](/context).[Context](/context#Context), ids [][string](/builtin#string)) ([]*[Relationship](#Relationship), [error](/builtin#error))
```
GetAccountRelationships returns relationships for the given account ids.
####
func (*Client) [GetAccountStatuses](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L133) [¶](#Client.GetAccountStatuses)
```
func (c *[Client](#Client)) GetAccountStatuses(ctx [context](/context).[Context](/context#Context), id [ID](#ID), pg *[Pagination](#Pagination)) ([]*[Status](#Status), [error](/builtin#error))
```
GetAccountStatuses returns statuses by the specified account.
####
func (*Client) [GetBlocks](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L175) [¶](#Client.GetBlocks)
```
func (c *[Client](#Client)) GetBlocks(ctx [context](/context).[Context](/context#Context), pg *[Pagination](#Pagination)) ([]*[Account](#Account), [error](/builtin#error))
```
GetBlocks returns the block list.
####
func (*Client) [GetBookmarks](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L154) [¶](#Client.GetBookmarks)
added in v0.0.5
```
func (c *[Client](#Client)) GetBookmarks(ctx [context](/context).[Context](/context#Context), pg *[Pagination](#Pagination)) ([]*[Status](#Status), [error](/builtin#error))
```
GetBookmarks returns the bookmark list of the current user.
####
func (*Client) [GetConversations](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L455) [¶](#Client.GetConversations)
added in v0.0.5
```
func (c *[Client](#Client)) GetConversations(ctx [context](/context).[Context](/context#Context), pg *[Pagination](#Pagination)) ([]*[Conversation](#Conversation), [error](/builtin#error))
```
GetConversations returns direct conversations.
####
func (*Client) [GetFavouritedBy](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L204) [¶](#Client.GetFavouritedBy)
```
func (c *[Client](#Client)) GetFavouritedBy(ctx [context](/context).[Context](/context#Context), id [ID](#ID), pg *[Pagination](#Pagination)) ([]*[Account](#Account), [error](/builtin#error))
```
GetFavouritedBy returns the list of accounts that favourited the toot of id.
####
func (*Client) [GetFavourites](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L144) [¶](#Client.GetFavourites)
```
func (c *[Client](#Client)) GetFavourites(ctx [context](/context).[Context](/context#Context), pg *[Pagination](#Pagination)) ([]*[Status](#Status), [error](/builtin#error))
```
GetFavourites returns the favorite list of the current user.
####
func (*Client) [GetFilter](https://github.com/mattn/go-mastodon/blob/v0.0.6/filters.go#L33) [¶](#Client.GetFilter)
added in v0.0.5
```
func (c *[Client](#Client)) GetFilter(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Filter](#Filter), [error](/builtin#error))
```
GetFilter retrieves a filter by ID.
####
func (*Client) [GetFilters](https://github.com/mattn/go-mastodon/blob/v0.0.6/filters.go#L23) [¶](#Client.GetFilters)
added in v0.0.5
```
func (c *[Client](#Client)) GetFilters(ctx [context](/context).[Context](/context#Context)) ([]*[Filter](#Filter), [error](/builtin#error))
```
GetFilters returns all the filters on the current account.
####
func (*Client) [GetFollowRequests](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L301) [¶](#Client.GetFollowRequests)
```
func (c *[Client](#Client)) GetFollowRequests(ctx [context](/context).[Context](/context#Context), pg *[Pagination](#Pagination)) ([]*[Account](#Account), [error](/builtin#error))
```
GetFollowRequests returns follow requests.
####
func (*Client) [GetInstance](https://github.com/mattn/go-mastodon/blob/v0.0.6/instance.go#L30) [¶](#Client.GetInstance)
```
func (c *[Client](#Client)) GetInstance(ctx [context](/context).[Context](/context#Context)) (*[Instance](#Instance), [error](/builtin#error))
```
GetInstance returns the Instance.
####
func (*Client) [GetInstanceActivity](https://github.com/mattn/go-mastodon/blob/v0.0.6/instance.go#L48) [¶](#Client.GetInstanceActivity)
added in v0.0.3
```
func (c *[Client](#Client)) GetInstanceActivity(ctx [context](/context).[Context](/context#Context)) ([]*[WeeklyActivity](#WeeklyActivity), [error](/builtin#error))
```
GetInstanceActivity returns instance activity.
####
func (*Client) [GetInstancePeers](https://github.com/mattn/go-mastodon/blob/v0.0.6/instance.go#L58) [¶](#Client.GetInstancePeers)
added in v0.0.3
```
func (c *[Client](#Client)) GetInstancePeers(ctx [context](/context).[Context](/context#Context)) ([][string](/builtin#string), [error](/builtin#error))
```
GetInstancePeers returns instance peers.
####
func (*Client) [GetList](https://github.com/mattn/go-mastodon/blob/v0.0.6/lists.go#L47) [¶](#Client.GetList)
added in v0.0.4
```
func (c *[Client](#Client)) GetList(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[List](#List), [error](/builtin#error))
```
GetList retrieves a list by ID.
####
func (*Client) [GetListAccounts](https://github.com/mattn/go-mastodon/blob/v0.0.6/lists.go#L37) [¶](#Client.GetListAccounts)
added in v0.0.4
```
func (c *[Client](#Client)) GetListAccounts(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) ([]*[Account](#Account), [error](/builtin#error))
```
GetListAccounts returns the accounts in a given list.
####
func (*Client) [GetLists](https://github.com/mattn/go-mastodon/blob/v0.0.6/lists.go#L17) [¶](#Client.GetLists)
added in v0.0.4
```
func (c *[Client](#Client)) GetLists(ctx [context](/context).[Context](/context#Context)) ([]*[List](#List), [error](/builtin#error))
```
GetLists returns all the lists on the current account.
####
func (*Client) [GetMutes](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L321) [¶](#Client.GetMutes)
```
func (c *[Client](#Client)) GetMutes(ctx [context](/context).[Context](/context#Context), pg *[Pagination](#Pagination)) ([]*[Account](#Account), [error](/builtin#error))
```
GetMutes returns the list of users muted by the current user.
####
func (*Client) [GetNotification](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L49) [¶](#Client.GetNotification)
```
func (c *[Client](#Client)) GetNotification(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Notification](#Notification), [error](/builtin#error))
```
GetNotification returns the notification specified by id.
####
func (*Client) [GetNotifications](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L39) [¶](#Client.GetNotifications)
```
func (c *[Client](#Client)) GetNotifications(ctx [context](/context).[Context](/context#Context), pg *[Pagination](#Pagination)) ([]*[Notification](#Notification), [error](/builtin#error))
```
GetNotifications returns notifications.
####
func (*Client) [GetPoll](https://github.com/mattn/go-mastodon/blob/v0.0.6/polls.go#L32) [¶](#Client.GetPoll)
added in v0.0.5
```
func (c *[Client](#Client)) GetPoll(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Poll](#Poll), [error](/builtin#error))
```
GetPoll returns the poll specified by id.
####
func (*Client) [GetPushSubscription](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L124) [¶](#Client.GetPushSubscription)
added in v0.0.5
```
func (c *[Client](#Client)) GetPushSubscription(ctx [context](/context).[Context](/context#Context)) (*[PushSubscription](#PushSubscription), [error](/builtin#error))
```
GetPushSubscription retrieves information about the active push subscription.
####
func (*Client) [GetRebloggedBy](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L194) [¶](#Client.GetRebloggedBy)
```
func (c *[Client](#Client)) GetRebloggedBy(ctx [context](/context).[Context](/context#Context), id [ID](#ID), pg *[Pagination](#Pagination)) ([]*[Account](#Account), [error](/builtin#error))
```
GetRebloggedBy returns the list of accounts that reblogged the toot of id.
####
func (*Client) [GetReports](https://github.com/mattn/go-mastodon/blob/v0.0.6/report.go#L16) [¶](#Client.GetReports)
```
func (c *[Client](#Client)) GetReports(ctx [context](/context).[Context](/context#Context)) ([]*[Report](#Report), [error](/builtin#error))
```
GetReports returns the reports of the current user.
####
func (*Client) [GetStatus](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L164) [¶](#Client.GetStatus)
```
func (c *[Client](#Client)) GetStatus(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Status](#Status), [error](/builtin#error))
```
GetStatus returns the status specified by id.
####
func (*Client) [GetStatusCard](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L184) [¶](#Client.GetStatusCard)
```
func (c *[Client](#Client)) GetStatusCard(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Card](#Card), [error](/builtin#error))
```
GetStatusCard returns the card of the status specified by id.
####
func (*Client) [GetStatusContext](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L174) [¶](#Client.GetStatusContext)
```
func (c *[Client](#Client)) GetStatusContext(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Context](#Context), [error](/builtin#error))
```
GetStatusContext returns the context of the status specified by id.
####
func (*Client) [GetTimelineDirect](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L435) [¶](#Client.GetTimelineDirect)
added in v0.0.5
```
func (c *[Client](#Client)) GetTimelineDirect(ctx [context](/context).[Context](/context#Context), pg *[Pagination](#Pagination)) ([]*[Status](#Status), [error](/builtin#error))
```
GetTimelineDirect returns statuses from the direct timeline.
####
func (*Client) [GetTimelineHashtag](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L299) [¶](#Client.GetTimelineHashtag)
```
func (c *[Client](#Client)) GetTimelineHashtag(ctx [context](/context).[Context](/context#Context), tag [string](/builtin#string), isLocal [bool](/builtin#bool), pg *[Pagination](#Pagination)) ([]*[Status](#Status), [error](/builtin#error))
```
GetTimelineHashtag returns statuses from the tagged timeline.
####
func (*Client) [GetTimelineHome](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L274) [¶](#Client.GetTimelineHome)
```
func (c *[Client](#Client)) GetTimelineHome(ctx [context](/context).[Context](/context#Context), pg *[Pagination](#Pagination)) ([]*[Status](#Status), [error](/builtin#error))
```
GetTimelineHome returns statuses from the home timeline.
####
func (*Client) [GetTimelineList](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L314) [¶](#Client.GetTimelineList)
added in v0.0.4
```
func (c *[Client](#Client)) GetTimelineList(ctx [context](/context).[Context](/context#Context), id [ID](#ID), pg *[Pagination](#Pagination)) ([]*[Status](#Status), [error](/builtin#error))
```
GetTimelineList returns statuses from a list timeline.
####
func (*Client) [GetTimelineMedia](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L325) [¶](#Client.GetTimelineMedia)
```
func (c *[Client](#Client)) GetTimelineMedia(ctx [context](/context).[Context](/context#Context), isLocal [bool](/builtin#bool), pg *[Pagination](#Pagination)) ([]*[Status](#Status), [error](/builtin#error))
```
GetTimelineMedia returns statuses from the media timeline.
NOTE: This is an experimental feature of pawoo.net.
####
func (*Client) [GetTimelinePublic](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L284) [¶](#Client.GetTimelinePublic)
```
func (c *[Client](#Client)) GetTimelinePublic(ctx [context](/context).[Context](/context#Context), isLocal [bool](/builtin#bool), pg *[Pagination](#Pagination)) ([]*[Status](#Status), [error](/builtin#error))
```
GetTimelinePublic returns statuses from the public timeline.
####
func (*Client) [MarkConversationAsRead](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L472) [¶](#Client.MarkConversationAsRead)
added in v0.0.5
```
func (c *[Client](#Client)) MarkConversationAsRead(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) [error](/builtin#error)
```
MarkConversationAsRead marks the conversation as read.
####
func (*Client) [NewWSClient](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming_ws.go#L21) [¶](#Client.NewWSClient)
```
func (c *[Client](#Client)) NewWSClient() *[WSClient](#WSClient)
```
NewWSClient returns a WebSocket client.
####
func (*Client) [PollVote](https://github.com/mattn/go-mastodon/blob/v0.0.6/polls.go#L42) [¶](#Client.PollVote)
added in v0.0.5
```
func (c *[Client](#Client)) PollVote(ctx [context](/context).[Context](/context#Context), id [ID](#ID), choices ...[int](/builtin#int)) (*[Poll](#Poll), [error](/builtin#error))
```
PollVote votes on the poll specified by id; choices are the indices into Poll.Options to vote for.
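A minimal sketch, assuming c is an authenticated client; the poll ID is a placeholder, and index 0 selects the first option.
```
pollID := mastodon.ID("1234") // placeholder poll ID
poll, err := c.PollVote(context.Background(), pollID, 0)
if err != nil {
    log.Fatal(err)
}
fmt.Println("votes so far:", poll.VotesCount)
```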
####
func (*Client) [PostStatus](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L341) [¶](#Client.PostStatus)
```
func (c *[Client](#Client)) PostStatus(ctx [context](/context).[Context](/context#Context), toot *[Toot](#Toot)) (*[Status](#Status), [error](/builtin#error))
```
PostStatus posts the toot.
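A minimal sketch, assuming c is an already-authenticated client; the status text is a placeholder.
```
status, err := c.PostStatus(context.Background(), &mastodon.Toot{
    Status:     "Hello from go-mastodon",
    Visibility: mastodon.VisibilityUnlisted,
})
if err != nil {
    log.Fatal(err)
}
fmt.Println(status.URL)
```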
####
func (*Client) [Reblog](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L214) [¶](#Client.Reblog)
```
func (c *[Client](#Client)) Reblog(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Status](#Status), [error](/builtin#error))
```
Reblog reblogs the toot of id and returns the status of the reblog.
####
func (*Client) [RemoveFromList](https://github.com/mattn/go-mastodon/blob/v0.0.6/lists.go#L100) [¶](#Client.RemoveFromList)
added in v0.0.4
```
func (c *[Client](#Client)) RemoveFromList(ctx [context](/context).[Context](/context#Context), list [ID](#ID), accounts ...[ID](#ID)) [error](/builtin#error)
```
RemoveFromList removes accounts from a list.
####
func (*Client) [RemovePushSubscription](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L119) [¶](#Client.RemovePushSubscription)
added in v0.0.5
```
func (c *[Client](#Client)) RemovePushSubscription(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error)
```
RemovePushSubscription deletes the active push subscription.
####
func (*Client) [RenameList](https://github.com/mattn/go-mastodon/blob/v0.0.6/lists.go#L70) [¶](#Client.RenameList)
added in v0.0.4
```
func (c *[Client](#Client)) RenameList(ctx [context](/context).[Context](/context#Context), id [ID](#ID), title [string](/builtin#string)) (*[List](#List), [error](/builtin#error))
```
RenameList assigns a new title to a list.
####
func (*Client) [Report](https://github.com/mattn/go-mastodon/blob/v0.0.6/report.go#L26) [¶](#Client.Report)
```
func (c *[Client](#Client)) Report(ctx [context](/context).[Context](/context#Context), accountID [ID](#ID), ids [][ID](#ID), comment [string](/builtin#string)) (*[Report](#Report), [error](/builtin#error))
```
Report files a report against the given account, optionally citing specific statuses and including a comment.
####
func (*Client) [Search](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L392) [¶](#Client.Search)
```
func (c *[Client](#Client)) Search(ctx [context](/context).[Context](/context#Context), q [string](/builtin#string), resolve [bool](/builtin#bool)) (*[Results](#Results), [error](/builtin#error))
```
Search searches content with the query.
####
func (*Client) [StreamingDirect](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L190) [¶](#Client.StreamingDirect)
added in v0.0.5
```
func (c *[Client](#Client)) StreamingDirect(ctx [context](/context).[Context](/context#Context)) (chan [Event](#Event), [error](/builtin#error))
```
StreamingDirect returns a channel to read events on direct messages.
####
func (*Client) [StreamingHashtag](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L169) [¶](#Client.StreamingHashtag)
```
func (c *[Client](#Client)) StreamingHashtag(ctx [context](/context).[Context](/context#Context), tag [string](/builtin#string), isLocal [bool](/builtin#bool)) (chan [Event](#Event), [error](/builtin#error))
```
StreamingHashtag returns a channel to read events on the tagged timeline.
####
func (*Client) [StreamingList](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L182) [¶](#Client.StreamingList)
added in v0.0.4
```
func (c *[Client](#Client)) StreamingList(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (chan [Event](#Event), [error](/builtin#error))
```
StreamingList returns a channel to read events on a list.
####
func (*Client) [StreamingPublic](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L159) [¶](#Client.StreamingPublic)
```
func (c *[Client](#Client)) StreamingPublic(ctx [context](/context).[Context](/context#Context), isLocal [bool](/builtin#bool)) (chan [Event](#Event), [error](/builtin#error))
```
StreamingPublic returns a channel to read events on the public timeline.
####
func (*Client) [StreamingUser](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L154) [¶](#Client.StreamingUser)
```
func (c *[Client](#Client)) StreamingUser(ctx [context](/context).[Context](/context#Context)) (chan [Event](#Event), [error](/builtin#error))
```
StreamingUser returns a channel to read events on the home timeline.
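The returned channel carries the package's Event values (UpdateEvent, NotificationEvent, DeleteEvent, ErrorEvent). A minimal consumer sketch, assuming c is an authenticated client:
```
events, err := c.StreamingUser(context.Background())
if err != nil {
    log.Fatal(err)
}
for e := range events {
    switch ev := e.(type) {
    case *mastodon.UpdateEvent:
        fmt.Println("status:", ev.Status.Content)
    case *mastodon.NotificationEvent:
        fmt.Println("notification:", ev.Notification.Type)
    case *mastodon.DeleteEvent:
        fmt.Println("deleted:", ev.ID)
    case *mastodon.ErrorEvent:
        log.Println("stream error:", ev.Error())
    }
}
```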
####
func (*Client) [Unbookmark](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L264) [¶](#Client.Unbookmark)
added in v0.0.5
```
func (c *[Client](#Client)) Unbookmark(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Status](#Status), [error](/builtin#error))
```
Unbookmark removes the bookmark from the toot of id and returns the status of the unbookmarked toot.
####
func (*Client) [Unfavourite](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L244) [¶](#Client.Unfavourite)
```
func (c *[Client](#Client)) Unfavourite(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Status](#Status), [error](/builtin#error))
```
Unfavourite unfavourites the toot of id and returns the status of the unfavourited toot.
####
func (*Client) [Unreblog](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L224) [¶](#Client.Unreblog)
```
func (c *[Client](#Client)) Unreblog(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (*[Status](#Status), [error](/builtin#error))
```
Unreblog unreblogs the toot of id and returns the status of the original toot.
####
func (*Client) [UpdateFilter](https://github.com/mattn/go-mastodon/blob/v0.0.6/filters.go#L78) [¶](#Client.UpdateFilter)
added in v0.0.5
```
func (c *[Client](#Client)) UpdateFilter(ctx [context](/context).[Context](/context#Context), id [ID](#ID), filter *[Filter](#Filter)) (*[Filter](#Filter), [error](/builtin#error))
```
UpdateFilter updates a filter.
####
func (*Client) [UpdatePushSubscription](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L96) [¶](#Client.UpdatePushSubscription)
added in v0.0.5
```
func (c *[Client](#Client)) UpdatePushSubscription(ctx [context](/context).[Context](/context#Context), alerts *[PushAlerts](#PushAlerts)) (*[PushSubscription](#PushSubscription), [error](/builtin#error))
```
UpdatePushSubscription updates which types of notifications are sent for the active push subscription.
####
func (*Client) [UploadMedia](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L405) [¶](#Client.UploadMedia)
```
func (c *[Client](#Client)) UploadMedia(ctx [context](/context).[Context](/context#Context), file [string](/builtin#string)) (*[Attachment](#Attachment), [error](/builtin#error))
```
UploadMedia uploads a media attachment from a file.
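A minimal sketch of uploading a local file and attaching it to a new toot; the file name and status text are placeholders, and c is assumed to be an authenticated client.
```
attachment, err := c.UploadMedia(context.Background(), "photo.png")
if err != nil {
    log.Fatal(err)
}
status, err := c.PostStatus(context.Background(), &mastodon.Toot{
    Status:   "Picture attached",
    MediaIDs: []mastodon.ID{attachment.ID},
})
if err != nil {
    log.Fatal(err)
}
fmt.Println(status.URL)
```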
####
func (*Client) [UploadMediaFromBytes](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L416) [¶](#Client.UploadMediaFromBytes)
added in v0.0.6
```
func (c *[Client](#Client)) UploadMediaFromBytes(ctx [context](/context).[Context](/context#Context), b [][byte](/builtin#byte)) (*[Attachment](#Attachment), [error](/builtin#error))
```
UploadMediaFromBytes uploads a media attachment from a byte slice.
####
func (*Client) [UploadMediaFromMedia](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L426) [¶](#Client.UploadMediaFromMedia)
added in v0.0.5
```
func (c *[Client](#Client)) UploadMediaFromMedia(ctx [context](/context).[Context](/context#Context), media *[Media](#Media)) (*[Attachment](#Attachment), [error](/builtin#error))
```
UploadMediaFromMedia uploads a media attachment from a Media struct.
####
func (*Client) [UploadMediaFromReader](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L421) [¶](#Client.UploadMediaFromReader)
added in v0.0.4
```
func (c *[Client](#Client)) UploadMediaFromReader(ctx [context](/context).[Context](/context#Context), reader [io](/io).[Reader](/io#Reader)) (*[Attachment](#Attachment), [error](/builtin#error))
```
UploadMediaFromReader uploads a media attachment from an io.Reader.
####
func (*Client) [VerifyAppCredentials](https://github.com/mattn/go-mastodon/blob/v0.0.6/apps.go#L106) [¶](#Client.VerifyAppCredentials)
added in v0.0.6
```
func (c *[Client](#Client)) VerifyAppCredentials(ctx [context](/context).[Context](/context#Context)) (*[ApplicationVerification](#ApplicationVerification), [error](/builtin#error))
```
VerifyAppCredentials verifies the application's credentials and returns its verification details.
####
type [Config](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L20) [¶](#Config)
```
type Config struct {
Server [string](/builtin#string)
ClientID [string](/builtin#string)
ClientSecret [string](/builtin#string)
AccessToken [string](/builtin#string)
}
```
Config is a setting for accessing mastodon APIs.
####
type [Context](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L49) [¶](#Context)
```
type Context struct {
Ancestors []*[Status](#Status) `json:"ancestors"`
Descendants []*[Status](#Status) `json:"descendants"`
}
```
Context holds information for a mastodon context.
####
type [Conversation](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L71) [¶](#Conversation)
added in v0.0.5
```
type Conversation struct {
ID [ID](#ID) `json:"id"`
Accounts []*[Account](#Account) `json:"accounts"`
Unread [bool](/builtin#bool) `json:"unread"`
LastStatus *[Status](#Status) `json:"last_status"`
}
```
Conversation holds information for a mastodon conversation.
####
type [DeleteEvent](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L31) [¶](#DeleteEvent)
```
type DeleteEvent struct{ ID [ID](#ID) }
```
DeleteEvent is a struct for passing a deletion event to the app.
####
type [Emoji](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L295) [¶](#Emoji)
added in v0.0.3
```
type Emoji struct {
ShortCode [string](/builtin#string) `json:"shortcode"`
StaticURL [string](/builtin#string) `json:"static_url"`
URL [string](/builtin#string) `json:"url"`
VisibleInPicker [bool](/builtin#bool) `json:"visible_in_picker"`
}
```
Emoji holds information for a custom emoji.
####
type [ErrorEvent](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L36) [¶](#ErrorEvent)
```
type ErrorEvent struct {
// contains filtered or unexported fields
}
```
ErrorEvent is a struct for passing errors to the app.
####
func (*ErrorEvent) [Error](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L39) [¶](#ErrorEvent.Error)
```
func (e *[ErrorEvent](#ErrorEvent)) Error() [string](/builtin#string)
```
####
type [Event](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L42) [¶](#Event)
```
type Event interface {
// contains filtered or unexported methods
}
```
Event is an interface for passing events to the app.
####
type [Field](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L38) [¶](#Field)
added in v0.0.3
```
type Field struct {
Name [string](/builtin#string) `json:"name"`
Value [string](/builtin#string) `json:"value"`
VerifiedAt [time](/time).[Time](/time#Time) `json:"verified_at"`
}
```
Field is a Mastodon account profile field.
####
type [Filter](https://github.com/mattn/go-mastodon/blob/v0.0.6/filters.go#L13) [¶](#Filter)
added in v0.0.5
```
type Filter struct {
ID [ID](#ID) `json:"id"`
Phrase [string](/builtin#string) `json:"phrase"`
Context [][string](/builtin#string) `json:"context"`
WholeWord [bool](/builtin#bool) `json:"whole_word"`
ExpiresAt [time](/time).[Time](/time#Time) `json:"expires_at"`
Irreversible [bool](/builtin#bool) `json:"irreversible"`
}
```
Filter is metadata for a filter of users.
####
type [History](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L262) [¶](#History)
added in v0.0.3
```
type History struct {
Day [string](/builtin#string) `json:"day"`
Uses [string](/builtin#string) `json:"uses"`
Accounts [string](/builtin#string) `json:"accounts"`
}
```
History holds usage history information for a tag.
####
type [ID](https://github.com/mattn/go-mastodon/blob/v0.0.6/compat.go#L9) [¶](#ID)
added in v0.0.2
```
type ID [string](/builtin#string)
```
####
func (*ID) [UnmarshalJSON](https://github.com/mattn/go-mastodon/blob/v0.0.6/compat.go#L11) [¶](#ID.UnmarshalJSON)
added in v0.0.2
```
func (id *[ID](#ID)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
####
type [Instance](https://github.com/mattn/go-mastodon/blob/v0.0.6/instance.go#L9) [¶](#Instance)
```
type Instance struct {
URI [string](/builtin#string) `json:"uri"`
Title [string](/builtin#string) `json:"title"`
Description [string](/builtin#string) `json:"description"`
EMail [string](/builtin#string) `json:"email"`
Version [string](/builtin#string) `json:"version,omitempty"`
Thumbnail [string](/builtin#string) `json:"thumbnail,omitempty"`
URLs map[[string](/builtin#string)][string](/builtin#string) `json:"urls,omitempty"`
Stats *[InstanceStats](#InstanceStats) `json:"stats,omitempty"`
Languages [][string](/builtin#string) `json:"languages"`
ContactAccount *[Account](#Account) `json:"contact_account"`
}
```
Instance holds information for a mastodon instance.
####
type [InstanceStats](https://github.com/mattn/go-mastodon/blob/v0.0.6/instance.go#L23) [¶](#InstanceStats)
added in v0.0.3
```
type InstanceStats struct {
UserCount [int64](/builtin#int64) `json:"user_count"`
StatusCount [int64](/builtin#int64) `json:"status_count"`
DomainCount [int64](/builtin#int64) `json:"domain_count"`
}
```
InstanceStats holds information for mastodon instance stats.
####
type [List](https://github.com/mattn/go-mastodon/blob/v0.0.6/lists.go#L11) [¶](#List)
added in v0.0.4
```
type List struct {
ID [ID](#ID) `json:"id"`
Title [string](/builtin#string) `json:"title"`
}
```
List is metadata for a list of users.
####
type [Media](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L79) [¶](#Media)
added in v0.0.5
```
type Media struct {
File [io](/io).[Reader](/io#Reader)
Thumbnail [io](/io).[Reader](/io#Reader)
Description [string](/builtin#string)
Focus [string](/builtin#string)
}
```
Media is a struct to hold media.
####
type [Mention](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L247) [¶](#Mention)
```
type Mention struct {
URL [string](/builtin#string) `json:"url"`
Username [string](/builtin#string) `json:"username"`
Acct [string](/builtin#string) `json:"acct"`
ID [ID](#ID) `json:"id"`
}
```
Mention holds information for a mention.
####
type [Notification](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L16) [¶](#Notification)
```
type Notification struct {
ID [ID](#ID) `json:"id"`
Type [string](/builtin#string) `json:"type"`
CreatedAt [time](/time).[Time](/time#Time) `json:"created_at"`
Account [Account](#Account) `json:"account"`
Status *[Status](#Status) `json:"status"`
}
```
Notification holds information for a mastodon notification.
####
type [NotificationEvent](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L24) [¶](#NotificationEvent)
```
type NotificationEvent struct {
Notification *[Notification](#Notification) `json:"notification"`
}
```
NotificationEvent is a struct for passing a notification event to the app.
####
type [Pagination](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L310) [¶](#Pagination)
```
type Pagination struct {
MaxID [ID](#ID)
SinceID [ID](#ID)
MinID [ID](#ID)
Limit [int64](/builtin#int64)
}
```
Pagination is a struct for specifying the range of results to get.
Example [¶](#example-Pagination)
```
c := mastodon.NewClient(&mastodon.Config{
Server: "https://mstdn.jp",
ClientID: "client-id",
ClientSecret: "client-secret",
})
var followers []*mastodon.Account
var pg mastodon.Pagination
for {
fs, err := c.GetAccountFollowers(context.Background(), "1", &pg)
if err != nil {
log.Fatal(err)
}
followers = append(followers, fs...)
if pg.MaxID == "" {
break
}
time.Sleep(10 * time.Second)
}
for _, f := range followers {
fmt.Println(f.Acct)
}
```
```
Output:
```
####
type [Poll](https://github.com/mattn/go-mastodon/blob/v0.0.6/polls.go#L12) [¶](#Poll)
added in v0.0.5
```
type Poll struct {
ID [ID](#ID) `json:"id"`
ExpiresAt [time](/time).[Time](/time#Time) `json:"expires_at"`
Expired [bool](/builtin#bool) `json:"expired"`
Multiple [bool](/builtin#bool) `json:"multiple"`
VotesCount [int64](/builtin#int64) `json:"votes_count"`
VotersCount [int64](/builtin#int64) `json:"voters_count"`
Options [][PollOption](#PollOption) `json:"options"`
Voted [bool](/builtin#bool) `json:"voted"`
OwnVotes [][int](/builtin#int) `json:"own_votes"`
Emojis [][Emoji](#Emoji) `json:"emojis"`
}
```
Poll holds information for mastodon polls.
####
type [PollOption](https://github.com/mattn/go-mastodon/blob/v0.0.6/polls.go#L26) [¶](#PollOption)
added in v0.0.5
```
type PollOption struct {
Title [string](/builtin#string) `json:"title"`
VotesCount [int64](/builtin#int64) `json:"votes_count"`
}
```
PollOption holds information for a mastodon poll option.
####
type [Profile](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L74) [¶](#Profile)
```
type Profile struct {
// If a field is nil, it is not updated.
// If it points to an empty value, the field is set to empty.
DisplayName *[string](/builtin#string)
Note *[string](/builtin#string)
Locked *[bool](/builtin#bool)
Fields *[][Field](#Field)
Source *[AccountSource](#AccountSource)
// Set the base64 encoded character string of the image.
Avatar [string](/builtin#string)
Header [string](/builtin#string)
}
```
Profile is a struct for updating profiles.
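Because the pointer fields distinguish "leave unchanged" (nil) from "set to empty", a sketch of updating only the display name via Client.AccountUpdate might look like this (the new name is a placeholder, and c is assumed to be an authenticated client):
```
account, err := c.AccountUpdate(context.Background(), &mastodon.Profile{
    DisplayName: mastodon.String("New display name"),
    // Note, Locked, Fields, etc. stay nil and are therefore not modified.
})
if err != nil {
    log.Fatal(err)
}
fmt.Println(account.DisplayName)
```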
####
type [PushAlerts](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L31) [¶](#PushAlerts)
added in v0.0.5
```
type PushAlerts struct {
Follow *[Sbool](#Sbool) `json:"follow"`
Favourite *[Sbool](#Sbool) `json:"favourite"`
Reblog *[Sbool](#Sbool) `json:"reblog"`
Mention *[Sbool](#Sbool) `json:"mention"`
}
```
####
type [PushSubscription](https://github.com/mattn/go-mastodon/blob/v0.0.6/notification.go#L24) [¶](#PushSubscription)
added in v0.0.5
```
type PushSubscription struct {
ID [ID](#ID) `json:"id"`
Endpoint [string](/builtin#string) `json:"endpoint"`
ServerKey [string](/builtin#string) `json:"server_key"`
Alerts *[PushAlerts](#PushAlerts) `json:"alerts"`
}
```
####
type [Relationship](https://github.com/mattn/go-mastodon/blob/v0.0.6/accounts.go#L185) [¶](#Relationship)
```
type Relationship struct {
ID [ID](#ID) `json:"id"`
Following [bool](/builtin#bool) `json:"following"`
FollowedBy [bool](/builtin#bool) `json:"followed_by"`
Blocking [bool](/builtin#bool) `json:"blocking"`
Muting [bool](/builtin#bool) `json:"muting"`
MutingNotifications [bool](/builtin#bool) `json:"muting_notifications"`
Requested [bool](/builtin#bool) `json:"requested"`
DomainBlocking [bool](/builtin#bool) `json:"domain_blocking"`
ShowingReblogs [bool](/builtin#bool) `json:"showing_reblogs"`
Endorsed [bool](/builtin#bool) `json:"endorsed"`
}
```
Relationship holds information for relationship to the account.
####
type [Report](https://github.com/mattn/go-mastodon/blob/v0.0.6/report.go#L10) [¶](#Report)
```
type Report struct {
ID [int64](/builtin#int64) `json:"id"`
ActionTaken [bool](/builtin#bool) `json:"action_taken"`
}
```
Report holds information for a mastodon report.
####
type [Results](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L303) [¶](#Results)
```
type Results struct {
Accounts []*[Account](#Account) `json:"accounts"`
Statuses []*[Status](#Status) `json:"statuses"`
Hashtags []*[Tag](#Tag) `json:"hashtags"`
}
```
Results holds information for a search result.
####
type [Sbool](https://github.com/mattn/go-mastodon/blob/v0.0.6/compat.go#L28) [¶](#Sbool)
added in v0.0.5
```
type Sbool [bool](/builtin#bool)
```
####
func (*Sbool) [UnmarshalJSON](https://github.com/mattn/go-mastodon/blob/v0.0.6/compat.go#L30) [¶](#Sbool.UnmarshalJSON)
added in v0.0.5
```
func (s *[Sbool](#Sbool)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
####
type [Status](https://github.com/mattn/go-mastodon/blob/v0.0.6/status.go#L17) [¶](#Status)
```
type Status struct {
ID [ID](#ID) `json:"id"`
URI [string](/builtin#string) `json:"uri"`
URL [string](/builtin#string) `json:"url"`
Account [Account](#Account) `json:"account"`
InReplyToID interface{} `json:"in_reply_to_id"`
InReplyToAccountID interface{} `json:"in_reply_to_account_id"`
Reblog *[Status](#Status) `json:"reblog"`
Content [string](/builtin#string) `json:"content"`
CreatedAt [time](/time).[Time](/time#Time) `json:"created_at"`
Emojis [][Emoji](#Emoji) `json:"emojis"`
RepliesCount [int64](/builtin#int64) `json:"replies_count"`
ReblogsCount [int64](/builtin#int64) `json:"reblogs_count"`
FavouritesCount [int64](/builtin#int64) `json:"favourites_count"`
Reblogged interface{} `json:"reblogged"`
Favourited interface{} `json:"favourited"`
Bookmarked interface{} `json:"bookmarked"`
Muted interface{} `json:"muted"`
Sensitive [bool](/builtin#bool) `json:"sensitive"`
SpoilerText [string](/builtin#string) `json:"spoiler_text"`
Visibility [string](/builtin#string) `json:"visibility"`
MediaAttachments [][Attachment](#Attachment) `json:"media_attachments"`
Mentions [][Mention](#Mention) `json:"mentions"`
Tags [][Tag](#Tag) `json:"tags"`
Card *[Card](#Card) `json:"card"`
Poll *[Poll](#Poll) `json:"poll"`
Application [Application](#Application) `json:"application"`
Language [string](/builtin#string) `json:"language"`
Pinned interface{} `json:"pinned"`
}
```
Status is a struct to hold a status.
####
type [Stream](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming_ws.go#L24) [¶](#Stream)
```
type Stream struct {
Event [string](/builtin#string) `json:"event"`
Payload interface{} `json:"payload"`
}
```
Stream is a struct for the data that flows over a streaming connection.
####
type [Tag](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L255) [¶](#Tag)
```
type Tag struct {
Name [string](/builtin#string) `json:"name"`
URL [string](/builtin#string) `json:"url"`
History [][History](#History) `json:"history"`
}
```
Tag holds information for a tag.
####
type [Toot](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L226) [¶](#Toot)
```
type Toot struct {
Status [string](/builtin#string) `json:"status"`
InReplyToID [ID](#ID) `json:"in_reply_to_id"`
MediaIDs [][ID](#ID) `json:"media_ids"`
Sensitive [bool](/builtin#bool) `json:"sensitive"`
SpoilerText [string](/builtin#string) `json:"spoiler_text"`
Visibility [string](/builtin#string) `json:"visibility"`
Language [string](/builtin#string) `json:"language"`
ScheduledAt *[time](/time).[Time](/time#Time) `json:"scheduled_at,omitempty"`
Poll *[TootPoll](#TootPoll) `json:"poll"`
}
```
Toot is a struct for posting a status.
####
type [TootPoll](https://github.com/mattn/go-mastodon/blob/v0.0.6/mastodon.go#L239) [¶](#TootPoll)
added in v0.0.5
```
type TootPoll struct {
Options [][string](/builtin#string) `json:"options"`
ExpiresInSeconds [int64](/builtin#int64) `json:"expires_in"`
Multiple [bool](/builtin#bool) `json:"multiple"`
HideTotals [bool](/builtin#bool) `json:"hide_totals"`
}
```
TootPoll holds information for creating a poll in Toot.
####
type [Unixtime](https://github.com/mattn/go-mastodon/blob/v0.0.6/unixtime.go#L8) [¶](#Unixtime)
added in v0.0.3
```
type Unixtime [time](/time).[Time](/time#Time)
```
####
func (*Unixtime) [UnmarshalJSON](https://github.com/mattn/go-mastodon/blob/v0.0.6/unixtime.go#L10) [¶](#Unixtime.UnmarshalJSON)
added in v0.0.3
```
func (t *[Unixtime](#Unixtime)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
####
type [UpdateEvent](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming.go#L17) [¶](#UpdateEvent)
```
type UpdateEvent struct {
Status *[Status](#Status) `json:"status"`
}
```
UpdateEvent is a struct for passing a status event to the app.
####
type [WSClient](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming_ws.go#L15) [¶](#WSClient)
```
type WSClient struct {
[websocket](/github.com/gorilla/websocket).[Dialer](/github.com/gorilla/websocket#Dialer)
// contains filtered or unexported fields
}
```
WSClient is a WebSocket client.
####
func (*WSClient) [StreamingWSHashtag](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming_ws.go#L45) [¶](#WSClient.StreamingWSHashtag)
```
func (c *[WSClient](#WSClient)) StreamingWSHashtag(ctx [context](/context).[Context](/context#Context), tag [string](/builtin#string), isLocal [bool](/builtin#bool)) (chan [Event](#Event), [error](/builtin#error))
```
StreamingWSHashtag returns a channel to read events on the tagged timeline using WebSocket.
####
func (*WSClient) [StreamingWSList](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming_ws.go#L55) [¶](#WSClient.StreamingWSList)
added in v0.0.4
```
func (c *[WSClient](#WSClient)) StreamingWSList(ctx [context](/context).[Context](/context#Context), id [ID](#ID)) (chan [Event](#Event), [error](/builtin#error))
```
StreamingWSList returns a channel to read events on a list using WebSocket.
####
func (*WSClient) [StreamingWSPublic](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming_ws.go#L35) [¶](#WSClient.StreamingWSPublic)
```
func (c *[WSClient](#WSClient)) StreamingWSPublic(ctx [context](/context).[Context](/context#Context), isLocal [bool](/builtin#bool)) (chan [Event](#Event), [error](/builtin#error))
```
StreamingWSPublic returns a channel to read events on the public timeline using WebSocket.
####
func (*WSClient) [StreamingWSUser](https://github.com/mattn/go-mastodon/blob/v0.0.6/streaming_ws.go#L30) [¶](#WSClient.StreamingWSUser)
```
func (c *[WSClient](#WSClient)) StreamingWSUser(ctx [context](/context).[Context](/context#Context)) (chan [Event](#Event), [error](/builtin#error))
```
StreamingWSUser returns a channel to read events on the home timeline using WebSocket.
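A minimal sketch of streaming the home timeline over WebSocket, assuming c is an authenticated *mastodon.Client; the events on the channel are the same Event types used by the HTTP streaming methods.
```
ws := c.NewWSClient()
events, err := ws.StreamingWSUser(context.Background())
if err != nil {
    log.Fatal(err)
}
for e := range events {
    if ev, ok := e.(*mastodon.UpdateEvent); ok {
        fmt.Println(ev.Status.Content)
    }
}
```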
####
type [WeeklyActivity](https://github.com/mattn/go-mastodon/blob/v0.0.6/instance.go#L40) [¶](#WeeklyActivity)
added in v0.0.3
```
type WeeklyActivity struct {
Week [Unixtime](#Unixtime) `json:"week"`
Statuses [int64](/builtin#int64) `json:"statuses,string"`
Logins [int64](/builtin#int64) `json:"logins,string"`
Registrations [int64](/builtin#int64) `json:"registrations,string"`
}
```
WeeklyActivity holds information for mastodon weekly activity.
WhiteNoise 6.6.0 documentation
WhiteNoise[#](#whitenoise)
===
**Radically simplified static file serving for Python web apps**
With a couple of lines of config WhiteNoise allows your web app to serve its own static files, making it a self-contained unit that can be deployed anywhere without relying on nginx, Amazon S3 or any other external service. (Especially useful on Heroku, OpenShift and other PaaS providers.)
It’s designed to work nicely with a CDN for high-traffic sites so you don’t have to sacrifice performance to benefit from simplicity.
WhiteNoise works with any WSGI-compatible app but has some special auto-configuration features for Django.
WhiteNoise takes care of best-practices for you, for instance:
* Serving compressed content (gzip and Brotli formats, handling Accept-Encoding and Vary headers correctly)
* Setting far-future cache headers on content which won’t change
Worried that serving static files with Python is horribly inefficient?
Still think you should be using Amazon S3? Have a look at the [Infrequently Asked Questions](#infrequently-asked-questions) below.
Requirements[#](#requirements)
---
WhiteNoise works with any WSGI-compatible application.
Python 3.8 to 3.12 supported.
Django 3.2 to 5.0 supported.
Installation[#](#installation)
---
Install with:
```
pip install whitenoise
```
QuickStart for Django apps[#](#quickstart-for-django-apps)
---
Edit your `settings.py` file and add WhiteNoise to the `MIDDLEWARE`
list, above all other middleware apart from Django’s [SecurityMiddleware](https://docs.djangoproject.com/en/stable/ref/middleware/#module-django.middleware.security):
```
MIDDLEWARE = [
# ...
"django.middleware.security.SecurityMiddleware",
"whitenoise.middleware.WhiteNoiseMiddleware",
# ...
]
```
That’s it, you’re ready to go.
Want forever-cacheable files and compression support? Just add this to your
`settings.py`:
```
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
```
For more details, including on setting up CloudFront and other CDNs see the [Using WhiteNoise with Django](index.html#document-django)
guide.
QuickStart for other WSGI apps[#](#quickstart-for-other-wsgi-apps)
---
To enable WhiteNoise you need to wrap your existing WSGI application in a WhiteNoise instance and tell it where to find your static files. For example:
```
from whitenoise import WhiteNoise
from my_project import MyWSGIApp
application = MyWSGIApp()
application = WhiteNoise(application, root="/path/to/static/files")
application.add_files("/path/to/more/static/files", prefix="more-files/")
```
And that’s it, you’re ready to go. For more details see the [full documentation](index.html#document-base).
Using WhiteNoise with Flask[#](#using-whitenoise-with-flask)
---
WhiteNoise was not specifically written with Flask in mind, but as Flask uses the standard WSGI protocol it is easy to integrate with WhiteNoise (see the
[Using WhiteNoise with Flask](index.html#document-flask) guide).
Endorsements[#](#endorsements)
---
WhiteNoise owes its initial popularity to the nice things that some of Django and pip’s core developers said about it:
> [@jezdez](https://twitter.com/jezdez/status/440901769821179904): *[WhiteNoise]
> is really awesome and should be the standard for Django + Heroku*
> [@dstufft](https://twitter.com/dstufft/status/440948000782032897): *WhiteNoise
> looks pretty excellent.*
> [@idangazit](https://twitter.com/idangazit/status/456720556331528192) *Received
> a positive brainsmack from @_EvansD’s WhiteNoise. Vastly smarter than S3 for
> static assets. What was I thinking before?*
It’s now being used by thousands of projects, including some high-profile sites such as [mozilla.org](https://www.mozilla.org/).
Issues & Contributing[#](#issues-contributing)
---
Raise an issue on the [GitHub project](https://github.com/evansd/whitenoise) or feel free to nudge [@_EvansD](https://twitter.com/_evansd) on Twitter.
Infrequently Asked Questions[#](#infrequently-asked-questions)
---
### Isn’t serving static files from Python horribly inefficient?[#](#isn-t-serving-static-files-from-python-horribly-inefficient)
The short answer to this is that if you care about performance and efficiency then you should be using WhiteNoise behind a CDN like CloudFront. If you’re doing *that* then, because of the caching headers WhiteNoise sends, the vast majority of static requests will be served directly by the CDN without touching your application, so it really doesn’t make much difference how efficient WhiteNoise is.
That said, WhiteNoise is pretty efficient. Because it only has to serve a fixed set of files it does all the work of finding files and determining the correct headers upfront on initialization. Requests can then be served with little more than a dictionary lookup to find the appropriate response. Also, when used with gunicorn (and most other WSGI servers) the actual business of pushing the file down the network interface is handled by the kernel’s very efficient
`sendfile` syscall, not by Python.
### Shouldn’t I be pushing my static files to S3 using something like Django-Storages?[#](#shouldn-t-i-be-pushing-my-static-files-to-s3-using-something-like-django-storages)
No, you shouldn’t. The main problem with this approach is that Amazon S3 cannot currently selectively serve compressed content to your users. Compression
(using either the venerable gzip or the more modern brotli algorithms) can make dramatic reductions in the bandwidth required for your CSS and JavaScript. But in order to do this correctly the server needs to examine the
`Accept-Encoding` header of the request to determine which compression formats are supported, and return an appropriate `Vary` header so that intermediate caches know to do the same. This is exactly what WhiteNoise does,
but Amazon S3 currently provides no means of doing this.
The second problem with a push-based approach to handling static files is that it adds complexity and fragility to your deployment process: extra libraries specific to your storage backend, extra configuration and authentication keys,
and extra tasks that must be run at specific points in the deployment in order for everything to work. With the CDN-as-caching-proxy approach that WhiteNoise takes there are just two bits of configuration: your application needs the URL of the CDN, and the CDN needs the URL of your application. Everything else is just standard HTTP semantics. This makes your deployments simpler, your life easier, and you happier.
### What’s the point in WhiteNoise when I can do the same thing in a few lines of Apache/nginx config?[#](#what-s-the-point-in-whitenoise-when-i-can-do-the-same-thing-in-a-few-lines-of-apache-nginx-config)
There are two answers here. One is that WhiteNoise is designed to work in situations where Apache, nginx and the like aren’t easily available. But more importantly, it’s easy to underestimate what’s involved in serving static files correctly. Does your few lines of nginx config distinguish between files which might change and files which will never change and set the cache headers appropriately? Did you add the right CORS headers so that your fonts load correctly when served via a CDN? Did you turn on the special nginx setting which allows it to send gzipped content in response to an `HTTP/1.0` request,
which for some reason CloudFront still uses? Did you install the extension which allows you to serve pre-compressed brotli-encoded content to modern browsers?
None of this is rocket science, but it’s fiddly and annoying and WhiteNoise takes care of all of it for you.
License[#](#license)
---
MIT Licensed
### Using WhiteNoise with Django[#](#using-whitenoise-with-django)
Note
To use WhiteNoise with a non-Django application see the
[generic WSGI documentation](index.html#document-base).
This guide walks you through setting up a Django project with WhiteNoise.
In most cases it shouldn’t take more than a couple of lines of configuration.
I mention Heroku in a few places as that was the initial use case which prompted me to create WhiteNoise, but there’s nothing Heroku-specific about WhiteNoise and the instructions below should apply whatever your hosting platform.
#### 1. Make sure *staticfiles* is configured correctly[#](#make-sure-staticfiles-is-configured-correctly)
If you’re familiar with Django you’ll know what to do. If you’re just getting started with a new Django project then you’ll need to add the following to the bottom of your
`settings.py` file:
```
STATIC_ROOT = BASE_DIR / "staticfiles"
```
As part of deploying your application you’ll need to run `./manage.py collectstatic` to put all your static files into `STATIC_ROOT`. (If you’re running on Heroku then this is done automatically for you.)
Make sure you’re using the [static](https://docs.djangoproject.com/en/stable/ref/templates/builtins/#std:templatetag-static) template tag to refer to your static files,
rather than writing the URL directly. For example:
```
{% load static %}
<img src="{% static "images/hi.jpg" %}" alt="Hi!">

<!-- DON'T WRITE THIS -->
<img src="/static/images/hi.jpg" alt="Hi!">
```
For further details see the Django [staticfiles](https://docs.djangoproject.com/en/stable/howto/static-files/) guide.
#### 2. Enable WhiteNoise[#](#enable-whitenoise)
Edit your `settings.py` file and add WhiteNoise to the `MIDDLEWARE` list.
The WhiteNoise middleware should be placed directly after the Django [SecurityMiddleware](https://docs.djangoproject.com/en/stable/ref/middleware/#module-django.middleware.security)
(if you are using it) and before all other middleware:
```
MIDDLEWARE = [
# ...
"django.middleware.security.SecurityMiddleware",
"whitenoise.middleware.WhiteNoiseMiddleware",
# ...
]
```
That’s it – WhiteNoise will now serve your static files (you can confirm it’s working using the [steps below](#check-its-working)). However, to get the best performance you should proceed to step 3 below and enable compression and caching.
Note
You might find other third-party middleware that suggests it should be given highest priority at the top of the middleware list. Unless you understand exactly what is happening you should ignore this advice and always place `WhiteNoiseMiddleware` above other middleware. If you plan to have other middleware run before WhiteNoise you should be aware of the
[request_finished bug](https://code.djangoproject.com/ticket/29069) in Django.
#### 3. Add compression and caching support[#](#add-compression-and-caching-support)
WhiteNoise comes with a storage backend which compresses your files and hashes them to unique names, so they can safely be cached forever. To use it, set it as your staticfiles storage backend in your settings file.
On Django 4.2+:
```
STORAGES = {
# ...
"staticfiles": {
"BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage",
},
}
```
On older Django versions:
```
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
```
This combines automatic compression with the caching behaviour provided by Django’s [ManifestStaticFilesStorage](https://docs.djangoproject.com/en/stable/ref/contrib/staticfiles/#manifeststaticfilesstorage) backend. If you want to apply compression but don’t want the caching behaviour then you can use the alternative backend:
```
"whitenoise.storage.CompressedStaticFilesStorage"
```
Note
If you are having problems after switching to the WhiteNoise storage backend please see the [troubleshooting guide](#storage-troubleshoot).
If you need to compress files outside of the static files storage system you can use the supplied [command line utility](index.html#cli-utility).
##### Brotli compression[#](#brotli-compression)
As well as the common gzip compression format, WhiteNoise supports the newer,
more efficient [brotli](https://en.wikipedia.org/wiki/Brotli) format. This helps reduce bandwidth and increase loading speed. To enable brotli compression you will need the [Brotli Python package](https://pypi.org/project/Brotli/)
installed by running `pip install whitenoise[brotli]`.
Brotli is supported by [all major browsers](https://caniuse.com/#feat=brotli)
(except IE11). WhiteNoise will only serve brotli data to browsers which request it so there are no compatibility issues with enabling brotli support.
Also note that browsers will only request brotli data over an HTTPS connection.
#### 4. Use a Content-Delivery Network[#](#use-a-content-delivery-network)
The above steps will get you decent performance on moderate traffic sites, however for higher traffic sites, or sites where performance is a concern you should look at using a CDN.
Because WhiteNoise sends appropriate cache headers with your static content, the CDN will be able to cache your files and serve them without needing to contact your application again.
Below are instructions for setting up WhiteNoise with Amazon CloudFront, a popular choice of CDN. The process for other CDNs should look very similar though.
##### Instructions for Amazon CloudFront[#](#instructions-for-amazon-cloudfront)
Go to CloudFront section of the AWS Web Console, and click “Create Distribution”. Put your application’s domain (without the http prefix) in the
“Origin Domain Name” field and leave the rest of the settings as they are.
It might take a few minutes for your distribution to become active. Once it’s ready, copy the distribution domain name into your `settings.py` file so it looks something like this:
```
STATIC_HOST = "https://d4663kmspf1sqa.cloudfront.net" if not DEBUG else ""
STATIC_URL = STATIC_HOST + "/static/"
```
Or, even better, you can avoid hardcoding your CDN into your settings by doing something like this:
```
STATIC_HOST = os.environ.get("DJANGO_STATIC_HOST", "")
STATIC_URL = STATIC_HOST + "/static/"
```
This way you can configure your CDN just by setting an environment variable.
For apps on Heroku, you’d run this command
```
heroku config:set DJANGO_STATIC_HOST=https://d4663kmspf1sqa.cloudfront.net
```
##### Using compression algorithms other than gzip[#](#using-compression-algorithms-other-than-gzip)
By default, CloudFront will discard any `Accept-Encoding` header browsers include in requests, unless the value of the header is gzip. If it is gzip, CloudFront will fetch the uncompressed file from the origin, compress it, and return it to the requesting browser.
To get CloudFront to not do the compression itself as well as serve files compressed using other algorithms, such as Brotli, you must configure your distribution to
[cache based on the Accept-Encoding header](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html#compressed-content-custom-origin). You can do this in the `Behaviours`
tab of your distribution.
Note
By default your entire site will be accessible via the CloudFront URL. It’s possible that this can cause SEO problems if these URLs start showing up in search results. You can restrict CloudFront to only proxy your static files by following [these directions](#restricting-cloudfront).
#### 5. Using WhiteNoise in development[#](#using-whitenoise-in-development)
In development Django’s `runserver` automatically takes over static file handling. In most cases this is fine, however this means that some of the improvements that WhiteNoise makes to static file handling won’t be available in development and it opens up the possibility for differences in behaviour between development and production environments. For this reason it’s a good idea to use WhiteNoise in development as well.
You can disable Django’s static file handling and allow WhiteNoise to take over simply by passing the `--nostatic` option to the `runserver` command, but you need to remember to add this option every time you call `runserver`. An easier way is to edit your `settings.py` file and add
`whitenoise.runserver_nostatic` to the top of your `INSTALLED_APPS` list:
```
INSTALLED_APPS = [
# ...
"whitenoise.runserver_nostatic",
"django.contrib.staticfiles",
# ...
]
```
Note
In older versions of WhiteNoise (below v4.0) it was not possible to use
`runserver_nostatic` with [Channels](https://channels.readthedocs.io/) as Channels provides its own implementation of runserver. Newer versions of WhiteNoise do not have this problem and will work with Channels or any other third-party app that provides its own implementation of runserver.
#### 6. Index Files[#](#index-files)
When the [`WHITENOISE_INDEX_FILE`](#WHITENOISE_INDEX_FILE) option is enabled:
* Visiting `/example/` will serve the file at `/example/index.html`
* Visiting `/example` will redirect (302) to `/example/`
* Visiting `/example/index.html` will redirect (302) to `/example/`
If you want to use something other than `index.html` as the index file, you can also set this option to an alternative filename.
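For example, a minimal sketch of enabling this in `settings.py` (the alternative filename below is purely illustrative):
```
WHITENOISE_INDEX_FILE = True
# or, to use a different index file name (hypothetical example):
# WHITENOISE_INDEX_FILE = "home.html"
```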
#### Available Settings[#](#available-settings)
The WhiteNoiseMiddleware class takes all the same configuration options as the WhiteNoise base class, but rather than accepting keyword arguments to its constructor it uses Django settings. The setting names are just the keyword arguments upper-cased with a ‘WHITENOISE_’ prefix.
WHITENOISE_ROOT[#](#WHITENOISE_ROOT)
Default:
`None`
Absolute path to a directory of files which will be served at the root of your application (ignored if not set).
Don’t use this for the bulk of your static files because you won’t benefit from cache versioning, but it can be convenient for files like
`robots.txt` or `favicon.ico` which you want to serve at a specific URL.
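As a sketch, assuming you keep such files in a project directory named `root_files` (the directory name is an assumption for illustration):
```
# Serves e.g. root_files/robots.txt at /robots.txt
WHITENOISE_ROOT = BASE_DIR / "root_files"
```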
WHITENOISE_AUTOREFRESH[#](#WHITENOISE_AUTOREFRESH)
Default:
`settings.DEBUG`
Recheck the filesystem to see if any files have changed before responding.
This is designed to be used in development where it can be convenient to pick up changes to static files without restarting the server. For both performance and security reasons, this setting should not be used in production.
WHITENOISE_USE_FINDERS[#](#WHITENOISE_USE_FINDERS)
Default:
`settings.DEBUG`
Instead of only picking up files collected into `STATIC_ROOT`, find and serve files in their original directories using Django’s “finders” API.
This is useful in development where it matches the behaviour of the old
`runserver` command. It’s also possible to use this setting in production, avoiding the need to run the `collectstatic` command during the build, so long as you do not wish to use any of the caching and compression features provided by the storage backends.
WHITENOISE_MAX_AGE[#](#WHITENOISE_MAX_AGE)
Default:
`60 if not settings.DEBUG else 0`
Time (in seconds) for which browsers and proxies should cache **non-versioned** files.
Versioned files (i.e. files which have been given a unique name like `base.a4ef2389.css` by including a hash of their contents in the name) are detected automatically and set to be cached forever.
The default is chosen to be short enough not to cause problems with stale versions but long enough that, if you’re running WhiteNoise behind a CDN, the CDN will still take the majority of the strain during times of heavy load.
Set to `None` to disable setting any `Cache-Control` header on non-versioned files.
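For instance, to cache non-versioned files for ten minutes instead of the default (the value is only illustrative):
```
WHITENOISE_MAX_AGE = 600  # ten minutes; versioned files are still cached forever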
WHITENOISE_INDEX_FILE[#](#WHITENOISE_INDEX_FILE)
Default:
`False`
If `True` enable [index file serving](#index-files-django). If set to a non-empty string, enable index files and use that string as the index file name.
WHITENOISE_MIMETYPES[#](#WHITENOISE_MIMETYPES)
Default:
`None`
A dictionary mapping file extensions (lowercase) to the mimetype for that extension. For example:
```
{'.foo': 'application/x-foo'}
```
Note that WhiteNoise ships with its own default set of mimetypes and does not use the system-supplied ones (e.g. `/etc/mime.types`). This ensures that it behaves consistently regardless of the environment in which it’s run. View the defaults in the [media_types.py](https://github.com/evansd/whitenoise/blob/6.6.0/src/whitenoise/media_types.py) file.
In addition to file extensions, mimetypes can be specified by supplying the entire filename, for example:
```
{'some-special-file': 'application/x-custom-type'}
```
WHITENOISE_CHARSET[#](#WHITENOISE_CHARSET)
Default:
`'utf-8'`
Charset to add as part of the `Content-Type` header for all files whose mimetype allows a charset.
WHITENOISE_ALLOW_ALL_ORIGINS[#](#WHITENOISE_ALLOW_ALL_ORIGINS)
Default:
`True`
Toggles whether to send an `Access-Control-Allow-Origin: *` header for all static files.
This allows cross-origin requests for static files which means your static files will continue to work as expected even if they are served via a CDN and therefore on a different domain. Without this your static files will *mostly* work, but you may have problems with fonts loading in Firefox, or accessing images in canvas elements, or other mysterious things.
The W3C [explicitly state](https://www.w3.org/TR/cors/#security) that this behaviour is safe for publicly accessible files.
WHITENOISE_SKIP_COMPRESS_EXTENSIONS[#](#WHITENOISE_SKIP_COMPRESS_EXTENSIONS)
Default:
`('jpg', 'jpeg', 'png', 'gif', 'webp', 'zip', 'gz', 'tgz', 'bz2', 'tbz', 'xz', 'br', 'swf', 'flv', 'woff', 'woff2')`
File extensions to skip when compressing.
Because the compression process will only create compressed files where this results in an actual size saving, it would be safe to leave this list empty and attempt to compress all files. However, for files which we’re confident won’t benefit from compression, it speeds up the process if we just skip over them.
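If you do want to adjust the list, a sketch might extend the defaults rather than replace them; the extra `mp4` entry here is just an example of an already-compressed format:
```
WHITENOISE_SKIP_COMPRESS_EXTENSIONS = (
    "jpg", "jpeg", "png", "gif", "webp", "zip", "gz", "tgz", "bz2",
    "tbz", "xz", "br", "swf", "flv", "woff", "woff2",
    "mp4",  # example addition: video is already heavily compressed
)
```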
WHITENOISE_ADD_HEADERS_FUNCTION[#](#WHITENOISE_ADD_HEADERS_FUNCTION)
Default:
`None`
Reference to a function which is passed the headers object for each static file,
allowing it to modify them.
For example:
```
def force_download_pdfs(headers, path, url):
if path.endswith('.pdf'):
headers['Content-Disposition'] = 'attachment'
WHITENOISE_ADD_HEADERS_FUNCTION = force_download_pdfs
```
The function is passed:
* `headers` – A [wsgiref.headers](https://docs.python.org/3/library/wsgiref.html#module-wsgiref.headers) instance (which you can treat just as a dict) containing the headers for the current file
* `path` – The absolute path to the local file
* `url` – The host-relative URL of the file e.g. `/static/styles/app.css`
The function should not return anything; changes should be made by modifying the headers dictionary directly.
WHITENOISE_IMMUTABLE_FILE_TEST[#](#WHITENOISE_IMMUTABLE_FILE_TEST)
Default:
See [immutable_file_test](https://github.com/evansd/whitenoise/blob/6.6.0/src/whitenoise/middleware.py#L134) in source
Reference to function, or string.
If a reference to a function, this is passed the path and URL for each static file and should return whether that file is immutable, i.e.
guaranteed not to change, and so can be safely cached forever. The default is designed to work with Django’s ManifestStaticFilesStorage backend, and any derivatives of that, so you should only need to change this if you are using a different system for versioning your static files.
If a string, this is treated as a regular expression and each file’s URL is matched against it.
Example:
```
def immutable_file_test(path, url):
# Match filename with 12 hex digits before the extension
# e.g. app.db8f2edc0c8a.js
return re.match(r'^.+\.[0-9a-f]{12}\..+$', url)
WHITENOISE_IMMUTABLE_FILE_TEST = immutable_file_test
```
The function is passed:
* `path` – The absolute path to the local file
* `url` – The host-relative URL of the file e.g. `/static/styles/app.css`
WHITENOISE_STATIC_PREFIX[#](#WHITENOISE_STATIC_PREFIX)
Default:
Path component of `settings.STATIC_URL` (with
`settings.FORCE_SCRIPT_NAME` removed if set)
The URL prefix under which static files will be served.
Usually this can be determined automatically by using the path component of
`STATIC_URL`. So if `STATIC_URL` is `https://example.com/static/`
then `WHITENOISE_STATIC_PREFIX` will be `/static/`.
If your application is not running at the root of the domain and
`FORCE_SCRIPT_NAME` is set then this value will be removed from the
`STATIC_URL` path first to give the correct prefix.
If your deployment is more complicated than this (for instance, if you are using a CDN which is doing path rewriting) then you may need to configure this value directly.
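For example, if a CDN exposes your files under `/assets/` but forwards requests to your application under `/static/` (a hypothetical setup where automatic detection from `STATIC_URL` would give the wrong answer), you could set the prefix explicitly:
```
WHITENOISE_STATIC_PREFIX = "/static/"
```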
WHITENOISE_KEEP_ONLY_HASHED_FILES[#](#WHITENOISE_KEEP_ONLY_HASHED_FILES)
Default:
`False`
Stores only files with hashed names in `STATIC_ROOT`.
By default, Django’s hashed static files system creates two copies of each file in `STATIC_ROOT`: one using the original name, e.g. `app.js`, and one using the hashed name, e.g. `app.db8f2edc0c8a.js`. If WhiteNoise’s compression backend is being used this will create another two copies of each of these files (using Gzip and Brotli compression) resulting in six output files for each input file.
In some deployment scenarios it can be important to reduce the size of the build artifact as much as possible. This setting removes the “un-hashed”
version of the file (which should not be referenced in any case), which should reduce the space required for static files by half.
Note, this setting is only effective if the WhiteNoise storage backend is being used.
WHITENOISE_MANIFEST_STRICT[#](#WHITENOISE_MANIFEST_STRICT)
Default:
`True`
Set to `False` to prevent Django throwing an error if you reference a static file which doesn’t exist in the manifest. Note, if the static file does not exist, it will still throw an error.
This works by setting the [manifest_strict](https://docs.djangoproject.com/en/stable/ref/contrib/staticfiles/#django.contrib.staticfiles.storage.ManifestStaticFilesStorage.manifest_strict) option on the underlying Django storage instance, as described in the Django documentation:
> If a file isn’t found in the `staticfiles.json` manifest at runtime, a
> `ValueError` is raised. This behavior can be disabled by subclassing
> `ManifestStaticFilesStorage` and setting the `manifest_strict` attribute to
> `False` – nonexistent paths will remain unchanged.
Note, this setting is only effective if the WhiteNoise storage backend is being used.
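If you do want missing manifest entries to be ignored, the setting itself is a one-liner in `settings.py`:
```
WHITENOISE_MANIFEST_STRICT = False
```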
#### Additional Notes[#](#additional-notes)
##### Django Compressor[#](#django-compressor)
For performance and security reasons WhiteNoise does not check for new files after startup (unless using Django DEBUG mode). As such, all static files must be generated in advance. If you’re using Django Compressor, this can be performed using its [offline compression](https://django-compressor.readthedocs.io/en/stable/usage.html#offline-compression) feature.
---
##### Serving Media Files[#](#serving-media-files)
WhiteNoise is not suitable for serving user-uploaded “media” files. For one thing, as described above, it only checks for static files at startup and so files added after the app starts won’t be seen. More importantly though,
serving user-uploaded files from the same domain as your main application is a security risk (this [blog post](https://security.googleblog.com/2012/08/content-hosting-for-modern-web.html) from Google security describes the problem well). And in addition to that, using local disk to store and serve your user media makes it harder to scale your application across multiple machines.
For all these reasons, it’s much better to store files on a separate dedicated storage service and serve them to users from there. The [django-storages](https://django-storages.readthedocs.io/)
library provides many options e.g. Amazon S3, Azure Storage, and Rackspace CloudFiles.
---
##### How do I know it’s working?[#](#how-do-i-know-it-s-working)
You can confirm that WhiteNoise is installed and configured correctly by running your application locally with `DEBUG` disabled and checking that your static files still load.
First you need to run `collectstatic` to get your files in the right place:
```
python manage.py collectstatic
```
Then make sure `DEBUG` is set to `False` in your `settings.py` and start the server:
```
python manage.py runserver
```
You should find that your static files are served, just as they would be in production.
---
##### Troubleshooting the WhiteNoise Storage backend[#](#troubleshooting-the-whitenoise-storage-backend)
If you’re having problems with the WhiteNoise storage backend, the chances are they’re due to the underlying Django storage engine. This is because WhiteNoise only adds a thin wrapper around Django’s storage to add compression support,
and because the compression code is very simple it generally doesn’t cause problems.
The most common issue is that there are CSS files which reference other files
(usually images or fonts) which don’t exist at that specified path. When Django attempts to rewrite these references it looks for the corresponding file and throws an error if it can’t find it.
To test whether the problems are due to WhiteNoise or not, try swapping the WhiteNoise storage backend for the Django one:
```
STATICFILES_STORAGE = "django.contrib.staticfiles.storage.ManifestStaticFilesStorage"
```
If the problems persist then your issue is with Django itself (try the [docs](https://docs.djangoproject.com/en/stable/ref/contrib/staticfiles/) or the [mailing list](https://groups.google.com/d/forum/django-users)). If the problem only occurs with WhiteNoise then raise a ticket on the [issue tracker](https://github.com/evansd/whitenoise/issues).
---
##### Restricting CloudFront to static files[#](#restricting-cloudfront-to-static-files)
The instructions for setting up CloudFront given above will result in the entire site being accessible via the CloudFront URL. It’s possible that this can cause SEO problems if these URLs start showing up in search results. You can restrict CloudFront to only proxy your static files by following these directions:
> 1. Go to your newly created distribution and click “*Distribution Settings*”, then
> the “*Behaviors*” tab, then “*Create Behavior*”. Put `static/*` into the path pattern and
> click “*Create*” to save.
> 2. Now select the `Default (*)` behaviour and click “*Edit*”. Set “*Restrict Viewer Access*”
> to “*Yes*” and then click “*Yes, Edit*” to save.
> 3. Check that the `static/*` pattern is first on the list, and the default one is second.
> This will ensure that requests for static files are passed through but all others are blocked.
##### Using other storage backends[#](#using-other-storage-backends)
WhiteNoise will only work with storage backends that store their files on the local filesystem in `STATIC_ROOT`. It will not work with backends that store files remotely, for instance on Amazon S3.
##### WhiteNoise makes my tests run slow![#](#whitenoise-makes-my-tests-run-slow)
WhiteNoise is designed to do as much work as possible upfront when the application starts so that it can serve files as efficiently as possible while the application is running. This makes sense for long-running production processes, but you might find that the added startup time is a problem during test runs when application instances are frequently being created and destroyed.
The simplest way to fix this is to make sure that during testing the
`WHITENOISE_AUTOREFRESH` setting is set to `True`. (By default it is
`True` when `DEBUG` is enabled and `False` otherwise.) This stops WhiteNoise from scanning your static files on start up but other than that its behaviour should be exactly the same.
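One way to arrange this, as a sketch, is a dedicated test settings module (the module layout here is an assumption, not something WhiteNoise requires):
```
# myproject/settings/test.py -- hypothetical test settings module
from .base import *  # noqa: F401,F403  (import the regular settings)

# Skip the upfront static file scan during test runs
WHITENOISE_AUTOREFRESH = True
```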
It is also worth making sure you don’t have unnecessary files in your
`STATIC_ROOT` directory. In particular, be careful not to include a
`node_modules` directory which can contain a very large number of files and significantly slow down your application startup. If you need to include specific files from `node_modules` then you can create symlinks from within your static directory to just the files you need.
##### Why do I get “ValueError: Missing staticfiles manifest entry for …”?[#](#why-do-i-get-valueerror-missing-staticfiles-manifest-entry-for)
If you are seeing this error it means you are referencing a static file in your templates (using something like `{% static "foo" %}`) which doesn’t exist, or at least isn’t where Django expects it to be. If you don’t understand why Django can’t find the file you can use
```
python manage.py findstatic --verbosity 2 foo
```
which will show you all the paths which Django searches for the file “foo”.
If, for some reason, you want Django to silently ignore such errors you can set
`WHITENOISE_MANIFEST_STRICT` to `False`.
##### Using WhiteNoise with Webpack / Browserify / $LATEST_JS_THING[#](#using-whitenoise-with-webpack-browserify-latest-js-thing)
A simple technique for integrating any frontend build system with Django is to use a directory layout like this:
```
./static_src
↓
$ ./node_modules/.bin/webpack
↓
./static_build
↓
$ ./manage.py collectstatic
↓
./static_root
```
Here `static_src` contains all the source files (JS, CSS, etc) for your project. Your build tool (which can be Webpack, Browserify or whatever you choose) then processes these files and writes the output into `static_build`.
The path to the `static_build` directory is added to `settings.py`:
```
STATICFILES_DIRS = [BASE_DIR / "static_build"]
```
This means that Django can find the processed files, but doesn’t need to know anything about the tool which produced them.
The final `manage.py collectstatic` step writes “hash-versioned” and compressed copies of the static files into `static_root` ready for production.
Note, both the `static_build` and `static_root` directories should be excluded from version control (e.g. through `.gitignore`) and only the
`static_src` directory should be checked in.
##### Deploying an application which is not at the root of the domain[#](#deploying-an-application-which-is-not-at-the-root-of-the-domain)
Sometimes Django apps are deployed at a particular prefix (or “subdirectory”)
on a domain e.g. <https://example.com/my-app/> rather than just <https://example.com>.
In this case you would normally use Django’s [FORCE_SCRIPT_NAME](https://docs.djangoproject.com/en/1.11/ref/settings/#force-script-name)
setting to tell the application where it is located. You would also need to ensure that `STATIC_URL` uses the correct prefix as well. For example:
```
FORCE_SCRIPT_NAME = "/my-app"
STATIC_URL = FORCE_SCRIPT_NAME + "/static/"
```
If you have set these two values then WhiteNoise will automatically configure itself correctly. If you are doing something more complex you may need to set
[`WHITENOISE_STATIC_PREFIX`](#WHITENOISE_STATIC_PREFIX) explicitly yourself.
### Using WhiteNoise with any WSGI application[#](#using-whitenoise-with-any-wsgi-application)
Note
These instructions apply to any WSGI application. However, for Django applications you would be better off using the [WhiteNoiseMiddleware](index.html#document-django) class which makes integration easier.
To enable WhiteNoise you need to wrap your existing WSGI application in a WhiteNoise instance and tell it where to find your static files. For example:
```
from whitenoise import WhiteNoise
from my_project import MyWSGIApp
application = MyWSGIApp()
application = WhiteNoise(application, root="/path/to/static/files")
application.add_files("/path/to/more/static/files", prefix="more-files/")
```
On initialization, WhiteNoise walks over all the files in the directories that have been added (descending into sub-directories) and builds a list of available static files.
Any requests which match a static file get served by WhiteNoise, all others are passed through to the original WSGI application.
See the sections on [compression](#compression) and [caching](#caching)
for further details.
#### WhiteNoise API[#](#whitenoise-api)
*class* WhiteNoise(*application*, *root=None*, *prefix=None*, *\*\*kwargs*)[#](#WhiteNoise)
Parameters:
* **application** (*callable*) – Original WSGI application
* **root** (*str*) – If set, passed to `add_files` method
* **prefix** (*str*) – If set, passed to `add_files` method
* **\*\*kwargs** – Sets [configuration attributes](#configuration) for this instance
WhiteNoise.add_files(*root*, *prefix=None*)[#](#WhiteNoise.add_files)
Parameters:
* **root** (*str*) – Absolute path to a directory of static files to be served
* **prefix** (*str*) – If set, the URL prefix under which the files will be served. Trailing slashes are automatically added.
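Putting the API together, here is a minimal sketch of configuring an instance via keyword arguments; the paths and values are placeholders and `MyWSGIApp` is the hypothetical application from the example above:
```
from whitenoise import WhiteNoise

from my_project import MyWSGIApp  # hypothetical application

application = WhiteNoise(
    MyWSGIApp(),
    root="/path/to/static/files",   # passed to add_files()
    max_age=300,                    # cache non-versioned files for 5 minutes
    index_file=True,                # serve index.html for directory URLs
)
application.add_files("/path/to/more/static/files", prefix="more-files/")
```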
#### Compression Support[#](#compression-support)
When WhiteNoise builds its list of available files it checks for corresponding files with a `.gz` and a `.br` suffix (e.g., `scripts/app.js`,
`scripts/app.js.gz` and `scripts/app.js.br`). If it finds them, it will assume that they are (respectively) gzip and [brotli](https://en.wikipedia.org/wiki/Brotli)
compressed versions of the original file and it will serve them in preference to the uncompressed version where clients indicate that they accept that compression format (see the note on Amazon S3 for why this behaviour is important).
WhiteNoise comes with a command line utility which will generate compressed versions of your files for you. Note that in order for brotli compression to work the [Brotli Python package](https://pypi.org/project/Brotli/) must be installed.
Usage is simple:
```
$ python -m whitenoise.compress --help
usage: compress.py [-h] [-q] [--no-gzip] [--no-brotli]
root [extensions [extensions ...]]
Search for all files inside <root> *not* matching <extensions> and produce compressed versions with '.gz' and '.br' suffixes (as long as this results in a smaller file)
positional arguments:
root Path root from which to search for files
extensions File extensions to exclude from compression (default: jpg,
jpeg, png, gif, webp, zip, gz, tgz, bz2, tbz, xz, br, swf, flv,
woff, woff2)
optional arguments:
-h, --help show this help message and exit
-q, --quiet Don't produce log output
--no-gzip Don't produce gzip '.gz' files
--no-brotli Don't produce brotli '.br' files
```
You can either run this during development and commit your compressed files to your repository, or you can run this as part of your build and deploy processes.
(Note that this is handled automatically in Django if you’re using the custom storage backend.)
#### Caching Headers[#](#caching-headers)
By default, WhiteNoise sets a max-age header on all responses it sends. You can configure this by passing a `max_age` keyword argument.
WhiteNoise sets both `Last-Modified` and `ETag` headers for all files and will return Not Modified responses where appropriate. The ETag header uses the same format as nginx which is based on the size and last-modified time of the file.
If you want to use a different scheme for generating ETags you can set them via your own function by using the [`add_headers_function`](#add_headers_function) option.
Most modern static asset build systems create uniquely named versions of each file. This results in files which are immutable (i.e., they can never change their contents) and can therefore be cached indefinitely. In order to take advantage of this, WhiteNoise needs to know which files are immutable. This can be done using the [`immutable_file_test`](#immutable_file_test) option which accepts a reference to a function.
The exact details of how you implement this method will depend on your particular asset build system but see the [`option documentation`](#immutable_file_test) for a simple example.
Once you have implemented this, any files which are flagged as immutable will have “cache forever” headers set.
#### Index Files[#](#index-files)
When the [`index_file`](#index_file) option is enabled:
* Visiting `/example/` will serve the file at `/example/index.html`
* Visiting `/example` will redirect (302) to `/example/`
* Visiting `/example/index.html` will redirect (302) to `/example/`
If you want to use something other than `index.html` as the index file, you can also set this option to an alternative filename.
#### Using a Content Distribution Network[#](#using-a-content-distribution-network)
See the instructions for [using a CDN with Django](index.html#cdn) . The same principles apply here although obviously the exact method for generating the URLs for your static files will depend on the libraries you’re using.
#### Redirecting to HTTPS[#](#redirecting-to-https)
WhiteNoise does not handle redirection itself, but works well alongside
[wsgi-sslify](https://github.com/jacobian/wsgi-sslify), which performs HTTP to HTTPS redirection as well as optionally setting an HSTS header. Simply wrap the WhiteNoise WSGI application with
`sslify()` - see the [wsgi-sslify](https://github.com/jacobian/wsgi-sslify) documentation for more details.
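A minimal sketch of that wrapping is shown below; the `wsgi_sslify` import path is assumed from the wsgi-sslify README, so double-check it against that project’s documentation:
```
from wsgi_sslify import sslify  # assumed import path; see the wsgi-sslify docs
from whitenoise import WhiteNoise

from my_project import MyWSGIApp  # hypothetical application

application = sslify(WhiteNoise(MyWSGIApp(), root="/path/to/static/files"))
```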
#### Configuration attributes[#](#configuration-attributes)
These can be set by passing keyword arguments to the constructor, or by sub-classing WhiteNoise and setting the attributes directly.
autorefresh[#](#autorefresh)
Default:
`False`
Recheck the filesystem to see if any files have changed before responding.
This is designed to be used in development where it can be convenient to pick up changes to static files without restarting the server. For both performance and security reasons, this setting should not be used in production.
max_age[#](#max_age)
Default:
`60`
Time (in seconds) for which browsers and proxies should cache files.
The default is chosen to be short enough not to cause problems with stale versions but long enough that, if you’re running WhiteNoise behind a CDN, the CDN will still take the majority of the strain during times of heavy load.
Set to `None` to disable setting any `Cache-Control` header on non-versioned files.
index_file[#](#index_file)
Default:
`False`
If `True` enable [index file serving](#index-files). If set to a non-empty string, enable index files and use that string as the index file name.
mimetypes[#](#mimetypes)
Default:
`None`
A dictionary mapping file extensions (lowercase) to the mimetype for that extension. For example:
```
{'.foo': 'application/x-foo'}
```
Note that WhiteNoise ships with its own default set of mimetypes and does not use the system-supplied ones (e.g. `/etc/mime.types`). This ensures that it behaves consistently regardless of the environment in which it’s run. View the defaults in the [media_types.py](https://github.com/evansd/whitenoise/blob/6.6.0/src/whitenoise/media_types.py) file.
In addition to file extensions, mimetypes can be specified by supplying the entire filename, for example:
```
{'some-special-file': 'application/x-custom-type'}
```
charset[#](#charset)
Default:
`utf-8`
Charset to add as part of the `Content-Type` header for all files whose mimetype allows a charset.
allow_all_origins[#](#allow_all_origins)
Default:
`True`
Toggles whether to send an `Access-Control-Allow-Origin: *` header for all static files.
This allows cross-origin requests for static files which means your static files will continue to work as expected even if they are served via a CDN and therefore on a different domain. Without this your static files will *mostly* work, but you may have problems with fonts loading in Firefox, or accessing images in canvas elements, or other mysterious things.
The W3C [explicitly state](https://www.w3.org/TR/cors/#security) that this behaviour is safe for publicly accessible files.
add_headers_function[#](#add_headers_function)
Default:
`None`
Reference to a function which is passed the headers object for each static file,
allowing it to modify them.
For example:
```
def force_download_pdfs(headers, path, url):
if path.endswith('.pdf'):
headers['Content-Disposition'] = 'attachment'
application = WhiteNoise(application,
add_headers_function=force_download_pdfs)
```
The function is passed:
* `headers` – A [wsgiref.headers](https://docs.python.org/3/library/wsgiref.html#module-wsgiref.headers) instance (which you can treat just as a dict) containing the headers for the current file
* `path` – The absolute path to the local file
* `url` – The host-relative URL of the file e.g. `/static/styles/app.css`
The function should not return anything; changes should be made by modifying the headers dictionary directly.
immutable_file_test[#](#immutable_file_test)
Default:
`return False`
Reference to function, or string.
If a reference to a function, this is passed the path and URL for each static file and should return whether that file is immutable, i.e.
guaranteed not to change, and so can be safely cached forever.
If a string, this is treated as a regular expression and each file’s URL is matched against it.
Example:
```
def immutable_file_test(path, url):
# Match filename with 12 hex digits before the extension
# e.g. app.db8f2edc0c8a.js
return re.match(r'^.+\.[0-9a-f]{12}\..+$', url)
```
The function is passed:
* `path` – The absolute path to the local file
* `url` – The host-relative URL of the file e.g. `/static/styles/app.css`
### Using WhiteNoise with Flask[#](#using-whitenoise-with-flask)
This guide walks you through setting up a Flask project with WhiteNoise.
In most cases it shouldn’t take more than a couple of lines of configuration.
#### 1. Make sure you know where your *static* folder is located[#](#make-sure-where-your-static-is-located)
If you’re familiar with Flask you’ll know what to do. If you’re just getting started with a new Flask project then the default is the `static` folder in the root path of the application.
Check the `static_folder` argument in [Flask Application Object documentation](http://flask.pocoo.org/docs/api/#application-object) for further information.
#### 2. Enable WhiteNoise[#](#enable-whitenoise)
In the file where you create your app you instantiate the Flask Application Object
(the `flask.Flask()` object). All you have to do is wrap it in a
`WhiteNoise()` object.
If you use the Flask quick start approach it will look something like this:
```
from flask import Flask
from whitenoise import WhiteNoise
app = Flask(__name__)
app.wsgi_app = WhiteNoise(app.wsgi_app, root="static/")
```
If you opt for the [pattern of creating your app with a function](http://flask.pocoo.org/snippets/20/), then it would look like this:
```
from flask import Flask
from sqlalchemy import create_engine
from whitenoise import WhiteNoise

from myapp import config
from myapp.views import frontend
def create_app(database_uri, debug=False):
app = Flask(__name__)
app.debug = debug
# set up your database
app.engine = create_engine(database_uri)
# register your blueprints
app.register_blueprint(frontend)
# add whitenoise
app.wsgi_app = WhiteNoise(app.wsgi_app, root="static/")
# other setup tasks
return app
```
That’s it – WhiteNoise will now serve your static files.
#### 3. Custom *static* folder[#](#custom-static-folder)
If it turns out that you are not using the default Flask *static* folder,
fear not. You can instantiate WhiteNoise and add your *static* folders later:
```
from flask import Flask
from whitenoise import WhiteNoise
app = Flask(__name__)
app.wsgi_app = WhiteNoise(app.wsgi_app)
my_static_folders = (
"static/folder/one/",
"static/folder/two/",
"static/folder/three/",
)
for static in my_static_folders:
app.wsgi_app.add_files(static)
```
See the `WhiteNoise.add_files` documentation for further customization.
#### 4. Prefix[#](#prefix)
By default, WhiteNoise will serve up static files from the URL root –
i.e., `http://localhost:5000/style.css`.
To change that, set a [prefix](https://whitenoise.readthedocs.io/en/stable/base.html#whitenoise-api) string:
```
app.wsgi_app = WhiteNoise(app.wsgi_app, root="static/", prefix="assets/")
```
Now, *style.css* will be available at `http://localhost:5000/assets/style.css`.
### Changelog[#](#changelog)
#### 6.6.0 (2023-10-11)[#](#id1)
* Support Django 5.0.
* Drop Python 3.7 support.
#### 6.5.0 (2023-06-16)[#](#id2)
* Support Python 3.12.
* Changed documentation site URL from `https://whitenoise.evans.io/` to `https://whitenoise.readthedocs.io/`.
#### 6.4.0 (2023-02-25)[#](#id3)
* Support Django 4.2.
* Remove further support for byte strings from the `root` and `prefix` arguments to `WhiteNoise`, and Django’s `STATIC_ROOT` setting.
Like in the previous release, this seems to be a remnant of Python 2 support.
Again, this change may be backwards incompatible for a small number of projects, but it’s unlikely.
Django does not support `STATIC_ROOT` being a byte string.
#### 6.3.0 (2023-01-03)[#](#id4)
* Add some video file extensions to be ignored during compression.
Since such files are already heavily compressed, further compression rarely helps.
Thanks to <NAME> in [PR #431](https://github.com/evansd/whitenoise/pull/431).
* Remove the behaviour of decoding byte strings passed for settings that take strings.
This seemed to be left around from supporting Python 2.
This change may be backwards incompatible for a small number of projects.
* Document “hidden” feature of setting `max_age` to `None` to disable the `Cache-Control` header.
* Drop support for working as old-style Django middleware, as support was [removed in Django 2.0](https://docs.djangoproject.com/en/dev/releases/2.0/#features-removed-in-2-0).
#### 6.2.0 (2022-06-05)[#](#id5)
* Support Python 3.11.
* Support Django 4.1.
#### 6.1.0 (2022-05-10)[#](#id6)
* Drop support for Django 2.2, 3.0, and 3.1.
#### 6.0.0 (2022-02-10)[#](#id7)
* Drop support for Python 3.5 and 3.6.
* Add support for Python 3.9 and 3.10.
* Drop support for Django 1.11, 2.0, and 2.1.
* Add support for Django 4.0.
* Import new MIME types from Nginx, changes:
+ `.avif` files are now served with the `image/avif` MIME type.
+ Open Document files with extensions `.odg`, `.odp`, `.ods`, and `.odt` are now served with their respective `application/vnd.oasis.opendocument.*` MIME types.
* The `whitenoise.__version__` attribute has been removed.
Use `importlib.metadata.version()` to check the version of Whitenoise if you need to.
* Requests using the `Range` header can no longer read beyond the end of the requested range.
Thanks to <NAME> in [PR #322](https://github.com/evansd/whitenoise/pull/322).
* Treat empty and `"*"` values for `Accept-Encoding` as if the client doesn’t support any encoding.
Thanks to <NAME> in [PR #323](https://github.com/evansd/whitenoise/pull/323).
#### 5.3.0 (2021-07-16)[#](#id8)
* Gracefully handle unparsable `If-Modified-Since` headers (thanks [@danielegozzi](https://github.com/danielegozzi)).
* Test against Django 3.2 (thanks [@jhnbkr](https://github.com/jhnbkr)).
* Add mimetype for Markdown (`.md`) files (thanks [@bz2](https://github.com/bz2)).
* Various documentation improvements (thanks [@PeterJCLaw](https://github.com/PeterJCLaw) and [@AliRn76](https://github.com/AliRn76)).
#### 5.2.0 (2020-08-04)[#](#id9)
* Add support for [relative STATIC_URLs](https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-STATIC_URL) in settings, as allowed in Django 3.1.
* Add mimetype for `.mjs` (JavaScript module) files and use recommended `text/javascript` mimetype for `.js` files (thanks [@hanswilw](https://github.com/hanswilw)).
* Various documentation improvements (thanks [@lukeburden](https://github.com/lukeburden)).
#### 5.1.0 (2020-05-20)[#](#id10)
* Add a [`manifest_strict`](index.html#WHITENOISE_MANIFEST_STRICT) setting to prevent Django throwing errors when missing files are referenced (thanks [@MegacoderKim](https://github.com/MegacoderKim)).
#### 5.0.1 (2019-12-12)[#](#id11)
* Fix packaging to indicate only Python 3.5+ compatibility (thanks [@mdalp](https://github.com/mdalp)).
#### 5.0 (2019-12-10)[#](#id12)
Note
This is a major version bump, but only because it removes Python 2 compatibility. If you were already running under Python 3 then there should be no breaking changes.
WhiteNoise is now tested on Python 3.5–3.8 and Django 2.0–3.0.
Other changes include:
* Fix incompatibility with Django 3.0 which caused problems with Safari (details [here](https://github.com/evansd/whitenoise/issues/240)).
Thanks [@paltman](https://github.com/paltman) and [@giilby](https://github.com/giilby) for diagnosing.
* Lots of improvements to the test suite (including switching to py.test).
Thanks [@NDevox](https://github.com/ndevox) and [@Djailla](https://github.com/djailla).
#### 4.1.4 (2019-09-24)[#](#id13)
* Make tests more deterministic and easier to run outside of `tox`.
* Fix Fedora packaging [issue](https://github.com/evansd/whitenoise/issues/225).
* Use [Black](https://github.com/psf/black) to format all code.
#### 4.1.3 (2019-07-13)[#](#id14)
* Fix handling of zero-valued mtimes which can occur when running on some filesystems (thanks [@twosigmajab](https://github.com/twosigmajab) for reporting).
* Fix potential path traversal attack while running in autorefresh mode on Windows (thanks [@phith0n](https://github.com/phith0n) for reporting).
This is a good time to reiterate that autorefresh mode is never intended for production use.
#### 4.1.2 (2019-11-19)[#](#id15)
* Add correct MIME type for WebAssembly, which is required for files to be executed (thanks [@mdboom](https://github.com/mdboom) ).
* Stop accessing the `FILE_CHARSET` Django setting which was almost entirely unused and is now deprecated (thanks [@timgraham](https://github.com/timgraham)).
#### 4.1.1 (2018-11-12)[#](#id16)
* Fix [bug](https://github.com/evansd/whitenoise/issues/202) in ETag handling (thanks [@edmorley](https://github.com/edmorley)).
* Documentation fixes (thanks [@jamesbeith](https://github.com/jamesbeith) and [@mathieusteele](https://github.com/mathieusteele)).
#### 4.1 (2018-09-12)[#](#id17)
* Silenced spurious warning about missing directories when in development (i.e. “autorefresh”) mode.
* Support supplying paths as [Pathlib](https://docs.python.org/3.4/library/pathlib.html) instances, rather than just strings (thanks [@browniebroke](https://github.com/browniebroke)).
* Add a new [CompressedStaticFilesStorage](index.html#compression-and-caching) backend to support applying compression without applying Django’s hash-versioning process.
* Documentation improvements.
#### 4.0 (2018-08-10)[#](#id18)
Note
**Breaking changes**
The latest version of WhiteNoise removes some options which were deprecated in the previous major release:
* The WSGI integration option for Django
(which involved editing `wsgi.py`) has been removed. Instead, you should add WhiteNoise to your middleware list in `settings.py` and remove any reference to WhiteNoise from
`wsgi.py`.
See the [documentation](index.html#django-middleware) for more details.
(The [pure WSGI](index.html#document-base) integration is still available for non-Django apps.)
* The `whitenoise.django.GzipManifestStaticFilesStorage` alias has now been removed. Instead you should use the correct import path:
`whitenoise.storage.CompressedManifestStaticFilesStorage`.
If you are not using either of these integration options you should have no issues upgrading to the latest version.
Removed Python 3.3 Support
Removed support for Python 3.3 since its end of life was in September 2017.
Index file support
WhiteNoise now supports serving [index files](index.html#index-files-django) for directories (e.g. serving `/example/index.html` at `/example/`). It also creates redirects so that visiting the index file directly, or visiting the URL without a trailing slash will redirect to the correct URL.
Range header support (“byte serving”)
WhiteNoise now respects the HTTP Range header which allows a client to request only part of a file. The main use for this is in serving video files to iOS devices as Safari refuses to play videos unless the server supports the Range header.
ETag support
WhiteNoise now adds ETag headers to files using the same algorithm used by nginx. This gives slightly better caching behaviour than relying purely on Last Modified dates (although not as good as creating immutable files using something like `ManifestStaticFilesStorage`, which is still the best option if you can use it).
If you need to generate your own ETags headers for any reason you can define a custom [`add_headers_function`](index.html#WHITENOISE_ADD_HEADERS_FUNCTION).
Remove requirement to run collectstatic
By setting [`WHITENOISE_USE_FINDERS`](index.html#WHITENOISE_USE_FINDERS) to `True` files will be served directly from their original locations (usually in `STATICFILES_DIRS` or app
`static` subdirectories) without needing to be collected into `STATIC_ROOT`
by the collectstatic command. This was always the default behaviour when in `DEBUG` mode but previously it wasn’t possible to enable this behaviour in production. For small apps which aren’t using the caching and compression features of the more advanced storage backends this simplifies the deployment process by removing the need to run collectstatic as part of the build step – in fact, it’s now possible not to have any build step at all.
Customisable immutable files test
WhiteNoise ships with code which detects when you are using Django’s ManifestStaticFilesStorage backend and sends optimal caching headers for files which are guaranteed not to change. If you are using a different system for generating cacheable files then you might need to supply your own function for detecting such files. Previously this required subclassing WhiteNoise, but now you can use the [`WHITENOISE_IMMUTABLE_FILE_TEST`](index.html#WHITENOISE_IMMUTABLE_FILE_TEST) setting.
Fix runserver_nostatic to work with Channels
The old implementation of [runserver_nostatic](index.html#runserver-nostatic) (which disables Django’s default static file handling in development) did not work with [Channels](https://channels.readthedocs.io/), which needs its own runserver implementation. The runserver_nostatic command has now been rewritten so that it should work with Channels and with any other app which provides its own runserver.
Reduced storage requirements for static files
The new [`WHITENOISE_KEEP_ONLY_HASHED_FILES`](index.html#WHITENOISE_KEEP_ONLY_HASHED_FILES) setting reduces the number of files in STATIC_ROOT by half by storing files only under their hashed names
(e.g. `app.db8f2edc0c8a.js`), rather than also keeping a copy with the original name (e.g. `app.js`).
Improved start up performance
When in production mode (i.e. when [`autorefresh`](index.html#WHITENOISE_AUTOREFRESH)
is disabled), WhiteNoise scans all static files when the application starts in order to be able to serve them as efficiently and securely as possible. For most applications this makes no noticeable difference to start up time, however for applications with very large numbers of static files this process can take some time. In WhiteNoise 4.0 the file scanning code has been rewritten to do the minimum possible amount of filesystem access which should make the start up process considerably faster.
Windows Testing
WhiteNoise has always aimed to support Windows as well as *NIX platforms but we are now able to run the test suite against Windows as part of the CI process which should ensure that we can maintain Windows compatibility in future.
Modification times for compressed files
The compressed storage backend (which generates Gzip and Brotli compressed files) now ensures that compressed files have the same modification time as the originals. This only makes a difference if you are using the compression backend with something other than WhiteNoise to actually serve the files, which very few users do.
Replaced brotlipy with official Brotli Python Package
Since the official [Brotli project](https://github.com/google/brotli) offers a [Brotli Python package](https://pypi.org/project/Brotli/), brotlipy has been replaced with Brotli.
Furthermore, a `brotli` key has been added to `extras_require`, which allows installing WhiteNoise and Brotli together like this:
```
pip install whitenoise[brotli]
```
#### 3.3.1 (2017-09-23)[#](#id19)
* Fix issue with the immutable file test when running behind a CDN which rewrites paths (thanks @lskillen).
#### 3.3.0 (2017-01-26)[#](#id20)
* Support the new [immutable](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#Revalidation_and_reloading) Cache-Control header.
This gives better caching behaviour for immutable resources than simply setting a large max age.
#### 3.2.3 (2017-01-04)[#](#id21)
* Gracefully handle invalid byte sequences in URLs.
* Gracefully handle filenames which are too long for the filesystem.
* Send correct Content-Type for Adobe’s `crossdomain.xml` files.
#### 3.2.2 (2016-09-26)[#](#id22)
* Convert any config values supplied as byte strings to text to avoid runtime encoding errors when encountering non-ASCII filenames.
#### 3.2.1 (2016-08-09)[#](#id23)
* Handle non-ASCII URLs correctly when using the `wsgi.py` integration.
* Fix exception triggered when a static files “finder” returned a directory rather than a file.
#### 3.2 (2016-05-27)[#](#id24)
* Add support for the new-style middleware classes introduced in Django 1.10.
The same WhiteNoiseMiddleware class can now be used in either the old
`MIDDLEWARE_CLASSES` list or the new `MIDDLEWARE` list.
* Fixed a bug where incorrect Content-Type headers were being sent on 304 Not Modified responses (thanks [@oppianmatt](https://github.com/oppianmatt)).
* Return Vary and Cache-Control headers on 304 responses, as specified by the
[RFC](https://tools.ietf.org/html/rfc7232#section-4.1).
#### 3.1 (2016-05-15)[#](#id25)
* Add new [`WHITENOISE_STATIC_PREFIX`](index.html#WHITENOISE_STATIC_PREFIX) setting to give flexibility in supporting non-standard deployment configurations e.g. serving the application somewhere other than the domain root.
* Fix bytes/unicode bug when running with Django 1.10 on Python 2.7.
#### 3.0 (2016-03-23)[#](#id26)
Note
The latest version of WhiteNoise contains some small **breaking changes**.
Most users will be able to upgrade without any problems, but some less-used APIs have been modified:
* The setting `WHITENOISE_GZIP_EXCLUDE_EXTENSIONS` has been renamed to
`WHITENOISE_SKIP_COMPRESS_EXTENSIONS`.
* The CLI [compression utility](index.html#cli-utility) has moved from `python -m whitenoise.gzip`
to `python -m whitenoise.compress`.
* The now redundant `gzipstatic` management command has been removed.
* WhiteNoise no longer uses the system mimetypes files, so if you are serving particularly obscure filetypes you may need to add their mimetypes explicitly using the new [`mimetypes`](index.html#WHITENOISE_MIMETYPES) setting.
* Older versions of Django (1.4-1.7) and Python (2.6) are no longer supported.
If you need support for these platforms you can continue to use [WhiteNoise 2.x](https://whitenoise.readthedocs.io/en/legacy-2.x/).
* The `whitenoise.django.GzipManifestStaticFilesStorage` storage backend has been moved to
`whitenoise.storage.CompressedManifestStaticFilesStorage`. The old import path **will continue to work** for now, but users are encouraged to update their code to use the new path (see the sketch below).
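A minimal sketch of the updated configuration, assuming Django's `STATICFILES_STORAGE` setting as used in this era:
```
# settings.py -- sketch: point at the new import path.
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
```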
Simpler, cleaner Django middleware integration
WhiteNoise can now integrate with Django by adding a single line to
`MIDDLEWARE_CLASSES` without any need to edit `wsgi.py`. This also means that WhiteNoise plays nicely with other middleware classes such as
*SecurityMiddleware*, and that it is fully compatible with the new [Channels](https://channels.readthedocs.io/)
system. See the [updated documentation](index.html#django-middleware) for details.
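For illustration, a sketch of the single-line change for the Django versions current at this release (which still used `MIDDLEWARE_CLASSES`); the ordering shown, directly after `SecurityMiddleware`, is the commonly recommended placement:
```
# settings.py -- illustrative sketch
MIDDLEWARE_CLASSES = [
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
# ... the rest of your middleware ...
]
```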
Brotli compression support
[Brotli](https://en.wikipedia.org/wiki/Brotli) is the modern, more efficient alternative to gzip for HTTP compression. To benefit from smaller files and faster page loads, just install the [brotlipy](https://brotlipy.readthedocs.io/) library, update your `requirements.txt` and WhiteNoise will take care of the rest. See the [documentation](index.html#brotli-compression)
for details.
Simpler customisation
It’s now possible to add custom headers to WhiteNoise without needing to create a subclass, using the new [`add_headers_function`](index.html#WHITENOISE_ADD_HEADERS_FUNCTION) setting.
Use WhiteNoise in development with Django
There’s now an option to force Django to use WhiteNoise in development, rather than its own static file handling. This results in more consistent behaviour between development and production environments and fewer opportunities for bugs and surprises. See the [documentation](index.html#runserver-nostatic) for details.
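A sketch of the documented opt-in, assuming a standard `INSTALLED_APPS` list (`whitenoise.runserver_nostatic` must appear before `django.contrib.staticfiles`):
```
# settings.py -- illustrative sketch
INSTALLED_APPS = [
'whitenoise.runserver_nostatic',
'django.contrib.staticfiles',
# ... your other apps ...
]
```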
Improved mimetype handling
WhiteNoise now ships with its own mimetype definitions (based on those shipped with nginx) instead of relying on the system ones, which can vary between environments. There is a new [`mimetypes`](index.html#WHITENOISE_MIMETYPES)
configuration option which makes it easy to add additional type definitions if needed.
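For example, a hedged sketch with a hypothetical extension/type pair:
```
# settings.py -- illustrative sketch: add or override type definitions.
WHITENOISE_MIMETYPES = {
'.cast': 'application/x-asciicast', # hypothetical mapping
}
```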
Thanks
A big thank-you to [<NAME>](https://github.com/edmorley) and [<NAME>](https://github.com/timgraham) for their contributions to this release.
#### 2.0.6 (2015-11-15)[#](#id28)
* Rebuild with latest version of wheel to get extras_require support.
#### 2.0.5 (2015-11-15)[#](#id29)
* Add missing argparse dependency for Python 2.6 (thanks @movermeyer).
#### 2.0.4 (2015-09-20)[#](#id30)
* Report path on MissingFileError (thanks @ezheidtmann).
#### 2.0.3 (2015-08-18)[#](#id31)
* Add `__version__` attribute.
#### 2.0.2 (2015-07-03)[#](#id32)
* More helpful error message when `STATIC_URL` is set to the root of a domain (thanks @dominicrodger).
#### 2.0.1 (2015-06-28)[#](#id33)
* Add support for Python 2.6.
* Add a more helpful error message when attempting to import DjangoWhiteNoise before `DJANGO_SETTINGS_MODULE` is defined.
#### 2.0 (2015-06-20)[#](#id34)
* Add an `autorefresh` mode which picks up changes to static files made after application startup (for use in development).
* Add a `use_finders` mode for DjangoWhiteNoise which finds files in their original directories without needing them collected in `STATIC_ROOT` (for use in development).
Note, this is only useful if you don’t want to use Django’s default runserver behaviour.
* Remove the `follow_symlinks` argument from `add_files` and now always follow symlinks.
* Support extra mimetypes which Python doesn’t know about by default (including .woff2 format)
* Some internal refactoring. Note, if you subclass WhiteNoise to add custom behaviour you may need to make some small changes to your code.
#### 1.0.6 (2014-12-12)[#](#id35)
* Fix unhelpful exception inside make_helpful_exception on Python 3 (thanks @abbottc).
#### 1.0.5 (2014-11-25)[#](#id36)
* Fix error when attempting to gzip empty files (thanks @ryanrhee).
#### 1.0.4 (2014-11-14)[#](#id37)
* Don’t attempt to gzip `.woff` files as they’re already compressed.
* Base decision to gzip on compression ratio achieved, so we don’t incur gzip overhead just to save a few bytes.
* More helpful error message from `collectstatic` if CSS files reference missing assets.
#### 1.0.3 (2014-06-08)[#](#id38)
* Fix bug in Last Modified date handling (thanks to <NAME> for spotting).
#### 1.0.2 (2014-04-29)[#](#id39)
* Set the default max_age parameter in base class to be what the docs claimed it was.
#### 1.0.1 (2014-04-18)[#](#id40)
* Fix path-to-URL conversion for Windows.
* Remove cruft from packaging manifest.
#### 1.0 (2014-04-14)[#](#id41)
* First stable release. |
@automattic/eslint-changed | npm | JavaScript | ESLint Changed
===
Run [ESLint](https://www.npmjs.com/package/eslint) on files and only report new warnings and errors.
Installation
---
Install via your favorite JS package manager. Note the peer dependency on eslint.
For example,
```
npm install @automattic/eslint-changed eslint
```
Usage
---
To identify the changes, `eslint-changed` needs the ESLint output for both the old and new versions of the file, as well as the diff between them.
If you use git, it can determine this automatically. Otherwise, you can supply the necessary information manually.
Options used in both modes are:
* `--debug`: Enable debug output.
* `--ext <list>`: Comma-separated list of JavaScript file extensions. Ignored if files are listed. (default: ".js")
* `--format <name>`: ESLint format to use for output. (default: "stylish")
* `--in-diff-only`: Only include messages on lines changed in the diff. This may miss things like deleting a `var` that leads to a new `no-undef` elsewhere.
### Manual diff
The following options are used with manual mode:
* `--diff <file>`: A file containing the unified diff of the changes.
* `--diff-base <dir>`: Base directory the diff is relative to. Defaults to the current directory.
* `--eslint-orig <file>`: A file containing the JSON output of eslint on the unchanged files.
* `--eslint-new <file>`: A file containing the JSON output of eslint on the changed files.
### With git
In git mode, `eslint-changed` needs to be able to run `git`. If this is not available by that name in the shell path,
set environment variable `GIT` as appropriate.
The following options are used with git mode:
* `--git`: Signify that you're using git mode.
* `--git-staged`: Compare the staged version to the HEAD version (this is the default).
* `--git-unstaged`: Compare the working copy version to the staged (or HEAD) version.
* `--git-base <ref>`: Compare the HEAD version to the HEAD of a different base (e.g. branch).
Examples
---
This will compare the staged changes with HEAD.
```
npx @automattic/eslint-changed --git
```
This will compare HEAD with origin/trunk.
```
npx @automattic/eslint-changed --git --git-base origin/trunk
```
This does much the same as the previous example, but manually. If you're using something other than git, you might do something like this.
```
# Produce a diff.
git diff origin/trunk...HEAD > /tmp/diff
# Check out the merge-base of origin/trunk and HEAD.
git checkout origin/trunk...HEAD
# Run ESLint.
npx eslint --format=json . > /tmp/eslint.orig.json
# Go back to HEAD.
git checkout -
# Run ESLint again.
npx eslint --format=json . > /tmp/eslint.new.json
# Run eslint-changed.
npx @automattic/eslint-changed --diff /tmp/diff --eslint-orig /tmp/eslint.orig.json --eslint-new /tmp/eslint.new.json
```
Note that, to be exactly the same as the above, you'd want to extract the list of files from the diff instead of linting everything. But this will work.
Inspiration
---
We had been using [phpcs-changed](https://packagist.org/packages/sirbrillig/phpcs-changed) for a while, and wanted the same thing for ESLint.
Readme
---
### Keywords
none |
google-storage1 | rust | Rust | Crate google_storage1
===
This documentation was generated from *storage* crate version *5.0.3+20230119*, where *20230119* is the exact revision of the *storage:v1* schema built by the mako code generator *v5.0.3*.
Everything else about the *storage* *v1* API can be found at the official documentation site.
The original source code is on github.
Features
---
Handle the following *Resources* with ease from the central hub …
* bucket access controls
* *delete*, *get*, *insert*, *list*, *patch* and *update*
* buckets
* *delete*, *get*, *get iam policy*, *insert*, *list*, *lock retention policy*, *patch*, *set iam policy*, *test iam permissions* and *update*
* channels
* *stop*
* default object access controls
* *delete*, *get*, *insert*, *list*, *patch* and *update*
* notifications
* *delete*, *get*, *insert* and *list*
* object access controls
* *delete*, *get*, *insert*, *list*, *patch* and *update*
* objects
* *compose*, *copy*, *delete*, *get*, *get iam policy*, *insert*, *list*, *patch*, *rewrite*, *set iam policy*, *test iam permissions*, *update* and *watch all*
* projects
* *hmac keys create*, *hmac keys delete*, *hmac keys get*, *hmac keys list*, *hmac keys update* and *service account get*
Upload supported by …
* *insert objects*
Download supported by …
* *get objects*
Subscription supported by …
* *list objects*
* *watch all objects*
Not what you are looking for ? Find all other Google APIs in their Rust documentation index.
Structure of this Library
---
The API is structured into the following primary items:
* **Hub**
+ a central object to maintain state and allow accessing all *Activities*
+ creates *Method Builders* which in turn
allow access to individual *Call Builders*
* **Resources**
+ primary types that you can apply *Activities* to
+ a collection of properties and *Parts*
+ **Parts**
- a collection of properties
- never directly used in *Activities*
* **Activities**
+ operations to apply to *Resources*
All *structures* are marked with applicable traits to further categorize them and ease browsing.
Generally speaking, you can invoke *Activities* like this:
```
let r = hub.resource().activity(...).doit().await
```
Or specifically …
```
let r = hub.objects().compose(...).doit().await
let r = hub.objects().copy(...).doit().await
let r = hub.objects().delete(...).doit().await
let r = hub.objects().get(...).doit().await
let r = hub.objects().get_iam_policy(...).doit().await
let r = hub.objects().insert(...).doit().await
let r = hub.objects().list(...).doit().await
let r = hub.objects().patch(...).doit().await
let r = hub.objects().rewrite(...).doit().await
let r = hub.objects().set_iam_policy(...).doit().await
let r = hub.objects().test_iam_permissions(...).doit().await
let r = hub.objects().update(...).doit().await
let r = hub.objects().watch_all(...).doit().await
```
The `resource()` and `activity(...)` calls create builders. The second one dealing with `Activities`
supports various methods to configure the impending operation (not shown here). It is made such that all required arguments have to be
specified right away (i.e. `(...)`), whereas all optional ones can be built up as desired.
The `doit()` method performs the actual communication with the server and returns the respective result.
Usage
---
### Setting up your Project
To use this library, you would put the following lines into your `Cargo.toml` file:
```
[dependencies]
google-storage1 = "*"
serde = "^1.0"
serde_json = "^1.0"
```
### A complete example
```
extern crate hyper;
extern crate hyper_rustls;
extern crate google_storage1 as storage1;
use storage1::api::Object;
use storage1::{Result, Error};
use std::default::Default;
use storage1::{Storage, oauth2, hyper, hyper_rustls, chrono, FieldMask};
// Get an ApplicationSecret instance by some means. It contains the `client_id` and
// `client_secret`, among other things.
let secret: oauth2::ApplicationSecret = Default::default();
// Instantiate the authenticator. It will choose a suitable authentication flow for you,
// unless you replace `None` with the desired Flow.
// Provide your own `AuthenticatorDelegate` to adjust the way it operates and get feedback about
// what's going on. You probably want to bring in your own `TokenStorage` to persist tokens and
// retrieve them from storage.
let auth = oauth2::InstalledFlowAuthenticator::builder(
secret,
oauth2::InstalledFlowReturnMethod::HTTPRedirect,
).build().await.unwrap();
let mut hub = Storage::new(hyper::Client::builder().build(hyper_rustls::HttpsConnectorBuilder::new().with_native_roots().https_or_http().enable_http1().build()), auth);
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Object::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.objects().rewrite(req, "sourceBucket", "sourceObject", "destinationBucket", "destinationObject")
.user_project("ipsum")
.source_generation(-93)
.rewrite_token("ut")
.projection("gubergren")
.max_bytes_rewritten_per_call(-16)
.if_source_metageneration_not_match(-57)
.if_source_metageneration_match(-50)
.if_source_generation_not_match(-50)
.if_source_generation_match(-7)
.if_metageneration_not_match(-62)
.if_metageneration_match(-17)
.if_generation_not_match(-99)
.if_generation_match(-56)
.destination_predefined_acl("eos")
.destination_kms_key_name("labore")
.doit().await;
match result {
Err(e) => match e {
// The Error enum provides details about what exactly happened.
// You can also just use its `Debug`, `Display` or `Error` traits
Error::HttpError(_)
|Error::Io(_)
|Error::MissingAPIKey
|Error::MissingToken(_)
|Error::Cancelled
|Error::UploadSizeLimitExceeded(_, _)
|Error::Failure(_)
|Error::BadRequest(_)
|Error::FieldClash(_)
|Error::JsonDecodeError(_, _) => println!("{}", e),
},
Ok(res) => println!("Success: {:?}", res),
}
```
### Handling Errors
All errors produced by the system are provided either as Result enumeration as return value of the doit() methods, or handed as possibly intermediate results to either the
Hub Delegate, or the Authenticator Delegate.
When delegates handle errors or intermediate values, they may have a chance to instruct the system to retry. This
makes the system potentially resilient to all kinds of errors.
### Uploads and Downloads
If a method supports downloads, the response body, which is part of the Result, should be read by you to obtain the media.
If such a method also supports a Response Result, it will return that by default.
You can see it as meta-data for the actual media. To trigger a media download, you will have to set up the builder by making this call: `.param("alt", "media")`.
Methods supporting uploads can do so using up to 2 different protocols:
*simple* and *resumable*. The distinctiveness of each is represented by customized
`doit(...)` methods, which are then named `upload(...)` and `upload_resumable(...)` respectively.
### Customization and Callbacks
You may alter the way a `doit()` method is called by providing a delegate to the
Method Builder before making the final `doit()` call.
Respective methods will be called to provide progress information, as well as determine whether the system should
retry on failure.
The delegate trait is default-implemented, allowing you to customize it with minimal effort.
### Optional Parts in Server-Requests
All structures provided by this library are made to be encodable and
decodable via *json*. Optionals are used to indicate that partial requests are responses
are valid.
Most optionals are are considered Parts which are identifiable by name, which will be sent to
the server to indicate either the set parts of the request or the desired parts in the response.
### Builder Arguments
Using method builders, you are able to prepare an action call by repeatedly calling its methods.
These will always take a single argument, for which the following statements are true.
* PODs are handed by copy
* strings are passed as `&str`
* request values are moved
Arguments will always be copied or cloned into the builder, to make them independent of their original life times.
Re-exports
---
* `pub extern crate google_apis_common as client;`
* `pub use api::Storage;`
* `pub use hyper;`
* `pub use hyper_rustls;`
* `pub use client::chrono;`
* `pub use client::oauth2;`
Modules
---
* api
Structs
---
* FieldMask: A `FieldMask` as defined in `https://github.com/protocolbuffers/protobuf/blob/ec1a70913e5793a7d0a7b5fbf7e0e4f75409dd41/src/google/protobuf/field_mask.proto#L180`
Enums
---
* Error
Traits
---
* Delegate: A trait specifying functionality to help controlling any request performed by the API.
The trait has a conservative default implementation.
Type Aliases
---
* Result: A universal result type used as return for all calls.
Struct google_storage1::api::Storage
===
```
pub struct Storage<S> {
pub client: Client<S, Body>,
pub auth: Box<dyn GetToken>,
/* private fields */
}
```
Central instance to access all Storage related resource activities
Examples
---
Instantiate a new hub
```
extern crate hyper;
extern crate hyper_rustls;
extern crate google_storage1 as storage1;
use storage1::api::Object;
use storage1::{Result, Error};
use std::default::Default;
use storage1::{Storage, oauth2, hyper, hyper_rustls, chrono, FieldMask};
// Get an ApplicationSecret instance by some means. It contains the `client_id` and
// `client_secret`, among other things.
let secret: oauth2::ApplicationSecret = Default::default();
// Instantiate the authenticator. It will choose a suitable authentication flow for you,
// unless you replace `None` with the desired Flow.
// Provide your own `AuthenticatorDelegate` to adjust the way it operates and get feedback about
// what's going on. You probably want to bring in your own `TokenStorage` to persist tokens and
// retrieve them from storage.
let auth = oauth2::InstalledFlowAuthenticator::builder(
secret,
oauth2::InstalledFlowReturnMethod::HTTPRedirect,
).build().await.unwrap();
let mut hub = Storage::new(hyper::Client::builder().build(hyper_rustls::HttpsConnectorBuilder::new().with_native_roots().https_or_http().enable_http1().build()), auth);
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Object::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.objects().rewrite(req, "sourceBucket", "sourceObject", "destinationBucket", "destinationObject")
.user_project("Stet")
.source_generation(-13)
.rewrite_token("et")
.projection("sed")
.max_bytes_rewritten_per_call(-24)
.if_source_metageneration_not_match(-68)
.if_source_metageneration_match(-76)
.if_source_generation_not_match(-31)
.if_source_generation_match(-93)
.if_metageneration_not_match(-20)
.if_metageneration_match(-34)
.if_generation_not_match(-22)
.if_generation_match(-28)
.destination_predefined_acl("amet.")
.destination_kms_key_name("consetetur")
.doit().await;
match result {
Err(e) => match e {
// The Error enum provides details about what exactly happened.
// You can also just use its `Debug`, `Display` or `Error` traits
Error::HttpError(_)
|Error::Io(_)
|Error::MissingAPIKey
|Error::MissingToken(_)
|Error::Cancelled
|Error::UploadSizeLimitExceeded(_, _)
|Error::Failure(_)
|Error::BadRequest(_)
|Error::FieldClash(_)
|Error::JsonDecodeError(_, _) => println!("{}", e),
},
Ok(res) => println!("Success: {:?}", res),
}
```
Fields
---
`client: Client<S, Body>`
`auth: Box<dyn GetToken>`
Implementations
---
### impl<'a, S> Storage<S>
#### pub fn new<A: 'static + GetToken>(client: Client<S, Body>, auth: A) -> Storage<S>
#### pub fn bucket_access_controls(&'a self) -> BucketAccessControlMethods<'a, S>
#### pub fn buckets(&'a self) -> BucketMethods<'a, S>
#### pub fn channels(&'a self) -> ChannelMethods<'a, S>
#### pub fn default_object_access_controls(&'a self) -> DefaultObjectAccessControlMethods<'a, S>
#### pub fn notifications(&'a self) -> NotificationMethods<'a, S>
#### pub fn object_access_controls(&'a self) -> ObjectAccessControlMethods<'a, S>
#### pub fn objects(&'a self) -> ObjectMethods<'a, S>
#### pub fn projects(&'a self) -> ProjectMethods<'a, S>
#### pub fn user_agent(&mut self, agent_name: String) -> String
Set the user-agent header field to use in all requests to the server.
It defaults to `google-api-rust-client/5.0.3`.
Returns the previously set user-agent.
#### pub fn base_url(&mut self, new_base_url: String) -> String
Set the base url to use in all requests to the server.
It defaults to `https://storage.googleapis.com/storage/v1/`.
Returns the previously set base url.
#### pub fn root_url(&mut self, new_root_url: String) -> String
Set the root url to use in all requests to the server.
It defaults to `https://storage.googleapis.com/`.
Returns the previously set root url.
Trait Implementations
---
### impl<S: Clone> Clone for Storage<S>
#### fn clone(&self) -> Storage<S>
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Auto Trait Implementations
---
### impl<S> !RefUnwindSafe for Storage<S>
### impl<S> Send for Storage<S> where S: Send
### impl<S> Sync for Storage<S> where S: Sync
### impl<S> Unpin for Storage<S> where S: Unpin
### impl<S> !UnwindSafe for Storage<S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketAccessControl
===
```
pub struct BucketAccessControl {
pub bucket: Option<String>,
pub domain: Option<String>,
pub email: Option<String>,
pub entity: Option<String>,
pub entity_id: Option<String>,
pub etag: Option<String>,
pub id: Option<String>,
pub kind: Option<String>,
pub project_team: Option<BucketAccessControlProjectTeam>,
pub role: Option<String>,
pub self_link: Option<String>,
}
```
An access-control entry.
Activities
---
This type is used in activities, which are methods you may call on this type or where this type is involved in.
The list links the activity name, along with information about where it is used (one of *request* and *response*).
* delete bucket access controls (none)
* get bucket access controls (response)
* insert bucket access controls (request|response)
* list bucket access controls (none)
* patch bucket access controls (request|response)
* update bucket access controls (request|response)
Fields
---
`bucket: Option<String>`The name of the bucket.
`domain: Option<String>`The domain associated with the entity, if any.
`email: Option<String>`The email address associated with the entity, if any.
`entity: Option<String>`The entity holding the permission, in one of the following forms:
* user-userId
* user-email
* group-groupId
* group-email
* domain-domain
* project-team-projectId
* allUsers
* allAuthenticatedUsers Examples:
* The user <EMAIL> would be <EMAIL>.
* The group <EMAIL> would be <EMAIL>.
* To refer to all members of the Google Apps for Business domain example.com, the entity would be domain-example.com.
`entity_id: Option<String>`The ID for the entity, if any.
`etag: Option<String>`HTTP 1.1 Entity tag for the access-control entry.
`id: Option<String>`The ID of the access-control entry.
`kind: Option<String>`The kind of item this is. For bucket access control entries, this is always storage#bucketAccessControl.
`project_team: Option<BucketAccessControlProjectTeam>`The project team associated with the entity, if any.
`role: Option<String>`The access permission for the entity.
`self_link: Option<String>`The link to this access-control entry.
Trait Implementations
---
### impl Clone for BucketAccessControl
#### fn clone(&self) -> BucketAccessControl
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> BucketAccessControl
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Resource for BucketAccessControl
### impl ResponseResult for BucketAccessControl
Auto Trait Implementations
---
### impl RefUnwindSafe for BucketAccessControl
### impl Send for BucketAccessControl
### impl Sync for BucketAccessControl
### impl Unpin for BucketAccessControl
### impl UnwindSafe for BucketAccessControl
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct google_storage1::api::BucketAccessControlDeleteCall
===
```
pub struct BucketAccessControlDeleteCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Permanently deletes the ACL entry for the specified entity on the specified bucket.
A builder for the *delete* method supported by a *bucketAccessControl* resource.
It is not used directly, but through a `BucketAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.bucket_access_controls().delete("bucket", "entity")
.user_project("et")
.doit().await;
```
Implementations
---
### impl<'a, S> BucketAccessControlDeleteCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<Response<Body>>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> BucketAccessControlDeleteCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(self, new_value: &str) -> BucketAccessControlDeleteCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketAccessControlDeleteCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketAccessControlDeleteCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketAccessControlDeleteCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketAccessControlDeleteCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> BucketAccessControlDeleteCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketAccessControlDeleteCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketAccessControlDeleteCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketAccessControlDeleteCall<'a, S>
### impl<'a, S> Send for BucketAccessControlDeleteCall<'a, S> where S: Sync
### impl<'a, S> !Sync for BucketAccessControlDeleteCall<'a, S>
### impl<'a, S> Unpin for BucketAccessControlDeleteCall<'a, S>
### impl<'a, S> !UnwindSafe for BucketAccessControlDeleteCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketAccessControlGetCall
===
```
pub struct BucketAccessControlGetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Returns the ACL entry for the specified entity on the specified bucket.
A builder for the *get* method supported by a *bucketAccessControl* resource.
It is not used directly, but through a `BucketAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.bucket_access_controls().get("bucket", "entity")
.user_project("Stet")
.doit().await;
```
Implementations
---
### impl<'a, S> BucketAccessControlGetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, BucketAccessControl)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> BucketAccessControlGetCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(self, new_value: &str) -> BucketAccessControlGetCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketAccessControlGetCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketAccessControlGetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketAccessControlGetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketAccessControlGetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketAccessControlGetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketAccessControlGetCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
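A hedged sketch of the scope setters in use; the read-only Cloud Storage scope URL shown here is an assumption, since any `AsRef<str>` value is accepted:
```
// Hedged sketch: requesting a narrower scope than the default Scope::CloudPlatform.
let result = hub.bucket_access_controls().get("bucket", "entity")
.add_scope("https://www.googleapis.com/auth/devstorage.read_only")
.doit().await;
// Or drop all scopes and authorize via an API key instead:
// .clear_scopes().param("key", "YOUR_API_KEY")
```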
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketAccessControlGetCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketAccessControlGetCall<'a, S>
### impl<'a, S> Send for BucketAccessControlGetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketAccessControlGetCall<'a, S>
### impl<'a, S> Unpin for BucketAccessControlGetCall<'a, S>
### impl<'a, S> !UnwindSafe for BucketAccessControlGetCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketAccessControlInsertCall
===
```
pub struct BucketAccessControlInsertCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Creates a new ACL entry on the specified bucket.
A builder for the *insert* method supported by a *bucketAccessControl* resource.
It is not used directly, but through a `BucketAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::BucketAccessControl;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = BucketAccessControl::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.bucket_access_controls().insert(req, "bucket")
.user_project("duo")
.doit().await;
```
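The example above leaves the request at its default values; in practice you would fill it first. A hedged sketch, assuming `BucketAccessControl` exposes `entity` and `role` as `Option<String>` fields (the field names are assumptions based on the resource's JSON schema):
```
// Hedged sketch: populating the request before calling insert.
use storage1::api::BucketAccessControl;
let req = BucketAccessControl {
entity: Some("allAuthenticatedUsers".to_string()), // assumed field
role: Some("READER".to_string()), // assumed field
..Default::default()
};
let result = hub.bucket_access_controls().insert(req, "bucket")
.doit().await;
```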
Implementations
---
### impl<'a, S> BucketAccessControlInsertCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, BucketAccessControl)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: BucketAccessControl) -> BucketAccessControlInsertCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> BucketAccessControlInsertCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> BucketAccessControlInsertCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketAccessControlInsertCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request. It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketAccessControlInsertCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketAccessControlInsertCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> BucketAccessControlInsertCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketAccessControlInsertCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketAccessControlInsertCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketAccessControlInsertCall<'a, S>
### impl<'a, S> Send for BucketAccessControlInsertCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketAccessControlInsertCall<'a, S>
### impl<'a, S> Unpin for BucketAccessControlInsertCall<'a, S>
### impl<'a, S> !UnwindSafe for BucketAccessControlInsertCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketAccessControlListCall
===
```
pub struct BucketAccessControlListCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Retrieves ACL entries on the specified bucket.
A builder for the *list* method supported by a *bucketAccessControl* resource.
It is not used directly, but through a `BucketAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.bucket_access_controls().list("bucket")
.user_project("vero")
.doit().await;
```
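A hedged sketch of walking the returned collection, assuming the `BucketAccessControls` response type keeps its entries in an `items: Option<Vec<BucketAccessControl>>` field:
```
// Hedged sketch: iterating ACL entries returned by the *list* call.
// The `items` field name is an assumption about the response type.
if let Ok((_response, acls)) = hub.bucket_access_controls().list("bucket").doit().await {
for acl in acls.items.unwrap_or_default() {
println!("{:?}", acl);
}
}
```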
Implementations
---
### impl<'a, S> BucketAccessControlListCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, BucketAccessControls)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> BucketAccessControlListCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketAccessControlListCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketAccessControlListCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request. It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketAccessControlListCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketAccessControlListCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketAccessControlListCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketAccessControlListCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketAccessControlListCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketAccessControlListCall<'a, S>
### impl<'a, S> Send for BucketAccessControlListCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketAccessControlListCall<'a, S>
### impl<'a, S> Unpin for BucketAccessControlListCall<'a, S>
### impl<'a, S> !UnwindSafe for BucketAccessControlListCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketAccessControlPatchCall
===
```
pub struct BucketAccessControlPatchCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Patches an ACL entry on the specified bucket.
A builder for the *patch* method supported by a *bucketAccessControl* resource.
It is not used directly, but through a `BucketAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::BucketAccessControl;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = BucketAccessControl::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.bucket_access_controls().patch(req, "bucket", "entity")
.user_project("vero")
.doit().await;
```
Implementations
---
### impl<'a, S> BucketAccessControlPatchCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, BucketAccessControl)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: BucketAccessControl) -> BucketAccessControlPatchCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> BucketAccessControlPatchCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(self, new_value: &str) -> BucketAccessControlPatchCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> BucketAccessControlPatchCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketAccessControlPatchCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request. It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketAccessControlPatchCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketAccessControlPatchCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketAccessControlPatchCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketAccessControlPatchCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketAccessControlPatchCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketAccessControlPatchCall<'a, S>
### impl<'a, S> Send for BucketAccessControlPatchCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketAccessControlPatchCall<'a, S>
### impl<'a, S> Unpin for BucketAccessControlPatchCall<'a, S>
### impl<'a, S> !UnwindSafe for BucketAccessControlPatchCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketAccessControlUpdateCall
===
```
pub struct BucketAccessControlUpdateCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Updates an ACL entry on the specified bucket.
A builder for the *update* method supported by a *bucketAccessControl* resource.
It is not used directly, but through a `BucketAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::BucketAccessControl;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = BucketAccessControl::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.bucket_access_controls().update(req, "bucket", "entity")
.user_project("diam")
.doit().await;
```
Implementations
---
### impl<'a, S> BucketAccessControlUpdateCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, BucketAccessControl)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: BucketAccessControl) -> BucketAccessControlUpdateCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> BucketAccessControlUpdateCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(self, new_value: &str) -> BucketAccessControlUpdateCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> BucketAccessControlUpdateCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketAccessControlUpdateCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request. It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketAccessControlUpdateCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketAccessControlUpdateCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> BucketAccessControlUpdateCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketAccessControlUpdateCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketAccessControlUpdateCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketAccessControlUpdateCall<'a, S>
### impl<'a, S> Send for BucketAccessControlUpdateCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketAccessControlUpdateCall<'a, S>
### impl<'a, S> Unpin for BucketAccessControlUpdateCall<'a, S>
### impl<'a, S> !UnwindSafe for BucketAccessControlUpdateCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::Bucket
===
```
pub struct Bucket {
pub acl: Option<Vec<BucketAccessControl>>,
pub autoclass: Option<BucketAutoclass>,
pub billing: Option<BucketBilling>,
pub cors: Option<Vec<BucketCors>>,
pub custom_placement_config: Option<BucketCustomPlacementConfig>,
pub default_event_based_hold: Option<bool>,
pub default_object_acl: Option<Vec<ObjectAccessControl>>,
pub encryption: Option<BucketEncryption>,
pub etag: Option<String>,
pub iam_configuration: Option<BucketIamConfiguration>,
pub id: Option<String>,
pub kind: Option<String>,
pub labels: Option<HashMap<String, String>>,
pub lifecycle: Option<BucketLifecycle>,
pub location: Option<String>,
pub location_type: Option<String>,
pub logging: Option<BucketLogging>,
pub metageneration: Option<i64>,
pub name: Option<String>,
pub owner: Option<BucketOwner>,
pub project_number: Option<u64>,
pub retention_policy: Option<BucketRetentionPolicy>,
pub rpo: Option<String>,
pub satisfies_pzs: Option<bool>,
pub self_link: Option<String>,
pub storage_class: Option<String>,
pub time_created: Option<DateTime<Utc>>,
pub updated: Option<DateTime<Utc>>,
pub versioning: Option<BucketVersioning>,
pub website: Option<BucketWebsite>,
}
```
A bucket.
Activities
---
This type is used in activities, which are methods you may call on this type or in which this type is involved.
The list links the activity name with information about where it is used (one of *request* and *response*).
* delete buckets (none)
* get buckets (response)
* get iam policy buckets (none)
* insert buckets (request|response)
* list buckets (none)
* lock retention policy buckets (response)
* patch buckets (request|response)
* set iam policy buckets (none)
* test iam permissions buckets (none)
* update buckets (request|response)
Fields
---
`acl: Option<Vec<BucketAccessControl>>`Access controls on the bucket.
`autoclass: Option<BucketAutoclass>`The bucket’s Autoclass configuration.
`billing: Option<BucketBilling>`The bucket’s billing configuration.
`cors: Option<Vec<BucketCors>>`The bucket’s Cross-Origin Resource Sharing (CORS) configuration.
`custom_placement_config: Option<BucketCustomPlacementConfig>`The bucket’s custom placement configuration for Custom Dual Regions.
`default_event_based_hold: Option<bool>`The default value for event-based hold on newly created objects in this bucket. Event-based hold is a way to retain objects indefinitely until an event occurs, signified by the hold’s release. After being released, such objects will be subject to bucket-level retention (if any). One sample use case of this flag is for banks to hold loan documents for at least 3 years after loan is paid in full. Here, bucket-level retention is 3 years and the event is loan being paid in full. In this example, these objects will be held intact for any number of years until the event has occurred (event-based hold on the object is released) and then 3 more years after that. That means retention duration of the objects begins from the moment event-based hold transitioned from true to false. Objects under event-based hold cannot be deleted, overwritten or archived until the hold is removed.
`default_object_acl: Option<Vec<ObjectAccessControl>>`Default access controls to apply to new objects when no ACL is provided.
`encryption: Option<BucketEncryption>`Encryption configuration for a bucket.
`etag: Option<String>`HTTP 1.1 Entity tag for the bucket.
`iam_configuration: Option<BucketIamConfiguration>`The bucket’s IAM configuration.
`id: Option<String>`The ID of the bucket. For buckets, the id and name properties are the same.
`kind: Option<String>`The kind of item this is. For buckets, this is always storage#bucket.
`labels: Option<HashMap<String, String>>`User-provided labels, in key/value pairs.
`lifecycle: Option<BucketLifecycle>`The bucket’s lifecycle configuration. See lifecycle management for more information.
`location: Option<String>`The location of the bucket. Object data for objects in the bucket resides in physical storage within this region. Defaults to US. See the developer’s guide for the authoritative list.
`location_type: Option<String>`The type of the bucket location.
`logging: Option<BucketLogging>`The bucket’s logging configuration, which defines the destination bucket and optional name prefix for the current bucket’s logs.
`metageneration: Option<i64>`The metadata generation of this bucket.
`name: Option<String>`The name of the bucket.
`owner: Option<BucketOwner>`The owner of the bucket. This is always the project team’s owner group.
`project_number: Option<u64>`The project number of the project the bucket belongs to.
`retention_policy: Option<BucketRetentionPolicy>`The bucket’s retention policy. The retention policy enforces a minimum retention time for all objects contained in the bucket, based on their creation time. Any attempt to overwrite or delete objects younger than the retention period will result in a PERMISSION_DENIED error. An unlocked retention policy can be modified or removed from the bucket via a storage.buckets.update operation. A locked retention policy cannot be removed or shortened in duration for the lifetime of the bucket. Attempting to remove or decrease period of a locked retention policy will result in a PERMISSION_DENIED error.
`rpo: Option<String>`The Recovery Point Objective (RPO) of this bucket. Set to ASYNC_TURBO to turn on Turbo Replication on a bucket.
`satisfies_pzs: Option<bool>`Reserved for future use.
`self_link: Option<String>`The URI of this bucket.
`storage_class: Option<String>`The bucket’s default storage class, used whenever no storageClass is specified for a newly-created object. This defines how objects in the bucket are stored and determines the SLA and the cost of storage. Values include MULTI_REGIONAL, REGIONAL, STANDARD, NEARLINE, COLDLINE, ARCHIVE, and DURABLE_REDUCED_AVAILABILITY. If this value is not specified when the bucket is created, it will default to STANDARD. For more information, see storage classes.
`time_created: Option<DateTime<Utc>>`The creation time of the bucket in RFC 3339 format.
`updated: Option<DateTime<Utc>>`The modification time of the bucket in RFC 3339 format.
`versioning: Option<BucketVersioning>`The bucket’s versioning configuration.
`website: Option<BucketWebsite>`The bucket’s website configuration, controlling how the service behaves when accessing bucket contents as a web site. See the Static Website Examples for more information.
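Since every field is an `Option` and the type implements `Default` (see the trait implementations below), a request body only needs the fields you actually care about. A minimal sketch using only fields listed above:
```
// Minimal sketch: construct a Bucket with a handful of fields; everything
// else falls back to Default (None).
use std::collections::HashMap;
use storage1::api::Bucket;
let bucket = Bucket {
name: Some("my-example-bucket".to_string()),
location: Some("US".to_string()),
storage_class: Some("STANDARD".to_string()),
labels: Some(HashMap::from([("env".to_string(), "test".to_string())])),
..Default::default()
};
```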
Trait Implementations
---
### impl Clone for Bucket
#### fn clone(&self) -> Bucket
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Bucket
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Resource for Bucket
### impl ResponseResult for Bucket
Auto Trait Implementations
---
### impl RefUnwindSafe for Bucket
### impl Send for Bucket
### impl Sync for Bucket
### impl Unpin for Bucket
### impl UnwindSafe for Bucket
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct google_storage1::api::BucketDeleteCall
===
```
pub struct BucketDeleteCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Permanently deletes an empty bucket.
A builder for the *delete* method supported by a *bucket* resource.
It is not used directly, but through a `BucketMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.buckets().delete("bucket")
.user_project("ipsum")
.if_metageneration_not_match(-23)
.if_metageneration_match(-59)
.doit().await;
```
Implementations
---
### impl<'a, S> BucketDeleteCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<Response<Body>>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> BucketDeleteCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketDeleteCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn if_metageneration_not_match(self, new_value: i64) -> BucketDeleteCall<'a, S>
If set, only deletes the bucket if its metageneration does not match this value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> BucketDeleteCall<'a, S>
If set, only deletes the bucket if its metageneration matches this value.
Sets the *if metageneration match* query property to the given value.
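A hedged sketch of combining this conditional with a prior `buckets().get(...)`, so the bucket is only deleted if its metadata has not changed since it was read (error handling is elided):
```
// Hedged sketch: conditional delete guarded by the metageneration read earlier.
let (_resp, bucket) = hub.buckets().get("bucket").doit().await?;
if let Some(metageneration) = bucket.metageneration {
hub.buckets().delete("bucket")
.if_metageneration_match(metageneration)
.doit().await?;
}
```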
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketDeleteCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request. It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketDeleteCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketDeleteCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketDeleteCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketDeleteCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketDeleteCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketDeleteCall<'a, S>
### impl<'a, S> Send for BucketDeleteCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketDeleteCall<'a, S>
### impl<'a, S> Unpin for BucketDeleteCall<'a, S>
### impl<'a, S> !UnwindSafe for BucketDeleteCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketGetCall
===
```
pub struct BucketGetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Returns metadata for the specified bucket.
A builder for the *get* method supported by a *bucket* resource.
It is not used directly, but through a `BucketMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.buckets().get("bucket")
.user_project("voluptua.")
.projection("et")
.if_metageneration_not_match(-31)
.if_metageneration_match(-96)
.doit().await;
```
Implementations
---
### impl<'a, S> BucketGetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Bucket)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> BucketGetCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketGetCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn projection(self, new_value: &str) -> BucketGetCall<'a, SSet of properties to return. Defaults to noAcl.
Sets the *projection* query property to the given value.
#### pub fn if_metageneration_not_match(self, new_value: i64) -> BucketGetCall<'a, SMakes the return of the bucket metadata conditional on whether the bucket’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> BucketGetCall<'a, SMakes the return of the bucket metadata conditional on whether the bucket’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketGetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request. It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketGetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketGetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketGetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketGetCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketGetCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketGetCall<'a, S>
### impl<'a, S> Send for BucketGetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketGetCall<'a, S>
### impl<'a, S> Unpin for BucketGetCall<'a, S>
### impl<'a, S> !UnwindSafe for BucketGetCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketGetIamPolicyCall
===
```
pub struct BucketGetIamPolicyCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Returns an IAM policy for the specified bucket.
A builder for the *getIamPolicy* method supported by a *bucket* resource.
It is not used directly, but through a `BucketMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.buckets().get_iam_policy("bucket")
.user_project("sed")
.options_requested_policy_version(-9)
.doit().await;
```
Implementations
---
### impl<'a, S> BucketGetIamPolicyCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Policy)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> BucketGetIamPolicyCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketGetIamPolicyCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn options_requested_policy_version(self, new_value: i32) -> BucketGetIamPolicyCall<'a, S>
The IAM policy format version to be returned. If the optionsRequestedPolicyVersion is for an older version that doesn’t support part of the requested IAM policy, the request fails.
Sets the *options requested policy version* query property to the given value.
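A hedged sketch requesting policy format version 3 explicitly, so that a policy using features of newer versions is returned rather than making the request fail:
```
// Hedged sketch: ask for IAM policy format version 3.
let result = hub.buckets().get_iam_policy("bucket")
.options_requested_policy_version(3)
.doit().await;
```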
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketGetIamPolicyCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request. It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketGetIamPolicyCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketGetIamPolicyCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketGetIamPolicyCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketGetIamPolicyCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
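A hedged sketch of the scope handling described above follows; the scope URL is the standard Cloud Storage full-control scope, passed directly as a string, and which scope a given method actually accepts is decided by the API.
```
// Hedged sketch: pinning the request to an explicitly chosen scope
// instead of the default `Scope::CloudPlatform`.
let result = hub.buckets().get_iam_policy("my-bucket")
    .add_scope("https://www.googleapis.com/auth/devstorage.full_control")
    .doit().await;
```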
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketGetIamPolicyCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketGetIamPolicyCall<'a, S>
### impl<'a, S> Send for BucketGetIamPolicyCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketGetIamPolicyCall<'a, S>
### impl<'a, S> Unpin for BucketGetIamPolicyCall<'a, S>
### impl<'a, S> !UnwindSafe for BucketGetIamPolicyCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketInsertCall
===
```
pub struct BucketInsertCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Creates a new bucket.
A builder for the *insert* method supported by a *bucket* resource.
It is not used directly, but through a `BucketMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Bucket;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Bucket::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.buckets().insert(req, "project")
.user_project("gubergren")
.projection("et")
.predefined_default_object_acl("accusam")
.predefined_acl("voluptua.")
.doit().await;
```
Implementations
---
### impl<'a, S> BucketInsertCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Bucket)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Bucket) -> BucketInsertCall<'a, SSets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn project(self, new_value: &str) -> BucketInsertCall<'a, SA valid API project identifier.
Sets the *project* query property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketInsertCall<'a, SThe project to be billed for this request.
Sets the *user project* query property to the given value.
#### pub fn projection(self, new_value: &str) -> BucketInsertCall<'a, SSet of properties to return. Defaults to noAcl, unless the bucket resource specifies acl or defaultObjectAcl properties, when it defaults to full.
Sets the *projection* query property to the given value.
#### pub fn predefined_default_object_acl(
self,
new_value: &str
) -> BucketInsertCall<'a, SApply a predefined set of default object access controls to this bucket.
Sets the *predefined default object acl* query property to the given value.
#### pub fn predefined_acl(self, new_value: &str) -> BucketInsertCall<'a, SApply a predefined set of access controls to this bucket.
Sets the *predefined acl* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> BucketInsertCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
```
It should be used to handle progress information, and to implement a certain level of resilience.
```
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketInsertCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketInsertCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketInsertCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketInsertCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketInsertCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketInsertCall<'a, S### impl<'a, S> Send for BucketInsertCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketInsertCall<'a, S### impl<'a, S> Unpin for BucketInsertCall<'a, S### impl<'a, S> !UnwindSafe for BucketInsertCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketListCall
===
```
pub struct BucketListCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Retrieves a list of buckets for a given project.
A builder for the *list* method supported by a *bucket* resource.
It is not used directly, but through a `BucketMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.buckets().list("project")
.user_project("dolore")
.projection("dolore")
.prefix("voluptua.")
.page_token("amet.")
.max_results(84)
.doit().await;
```
Implementations
---
### impl<'a, S> BucketListCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Buckets)>
Perform the operation you have built so far.
#### pub fn project(self, new_value: &str) -> BucketListCall<'a, SA valid API project identifier.
Sets the *project* query property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketListCall<'a, SThe project to be billed for this request.
Sets the *user project* query property to the given value.
#### pub fn projection(self, new_value: &str) -> BucketListCall<'a, SSet of properties to return. Defaults to noAcl.
Sets the *projection* query property to the given value.
#### pub fn prefix(self, new_value: &str) -> BucketListCall<'a, SFilter results to buckets whose names begin with this prefix.
Sets the *prefix* query property to the given value.
#### pub fn page_token(self, new_value: &str) -> BucketListCall<'a, SA previously-returned page token representing part of the larger set of results to view.
Sets the *page token* query property to the given value.
#### pub fn max_results(self, new_value: u32) -> BucketListCall<'a, SMaximum number of buckets to return in a single response. The service will use this parameter or 1,000 items, whichever is smaller.
Sets the *max results* query property to the given value.
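Because `page_token` and `max_results` exist for paging, a hedged sketch of walking all result pages follows; the `items` and `next_page_token` field names on the returned `Buckets` value, as well as the enclosing async error handling, are assumptions.
```
// Hedged pagination sketch; field names on `Buckets` are assumed.
let mut page_token: Option<String> = None;
loop {
    let mut call = hub.buckets().list("my-project").max_results(100);
    if let Some(token) = page_token.as_deref() {
        call = call.page_token(token);
    }
    let (_response, buckets) = call.doit().await?;
    for bucket in buckets.items.unwrap_or_default() {
        println!("{:?}", bucket.name);
    }
    match buckets.next_page_token {
        Some(token) => page_token = Some(token),
        None => break,
    }
}
```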
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketListCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
```
It should be used to handle progress information, and to implement a certain level of resilience.
```
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketListCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketListCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketListCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketListCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketListCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketListCall<'a, S### impl<'a, S> Send for BucketListCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketListCall<'a, S### impl<'a, S> Unpin for BucketListCall<'a, S### impl<'a, S> !UnwindSafe for BucketListCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketLockRetentionPolicyCall
===
```
pub struct BucketLockRetentionPolicyCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Locks retention policy on a bucket.
A builder for the *lockRetentionPolicy* method supported by a *bucket* resource.
It is not used directly, but through a `BucketMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.buckets().lock_retention_policy("bucket", -6)
.user_project("invidunt")
.doit().await;
```
Implementations
---
### impl<'a, S> BucketLockRetentionPolicyCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Bucket)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> BucketLockRetentionPolicyCall<'a, SName of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn if_metageneration_match(
self,
new_value: i64
) -> BucketLockRetentionPolicyCall<'a, SMakes the operation conditional on whether bucket’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> BucketLockRetentionPolicyCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> BucketLockRetentionPolicyCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
```
It should be used to handle progress information, and to implement a certain level of resilience.
```
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketLockRetentionPolicyCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketLockRetentionPolicyCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> BucketLockRetentionPolicyCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketLockRetentionPolicyCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketLockRetentionPolicyCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketLockRetentionPolicyCall<'a, S### impl<'a, S> Send for BucketLockRetentionPolicyCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketLockRetentionPolicyCall<'a, S### impl<'a, S> Unpin for BucketLockRetentionPolicyCall<'a, S### impl<'a, S> !UnwindSafe for BucketLockRetentionPolicyCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketPatchCall
===
```
pub struct BucketPatchCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Patches a bucket. Changes to the bucket will be readable immediately after writing, but configuration changes may take time to propagate.
A builder for the *patch* method supported by a *bucket* resource.
It is not used directly, but through a `BucketMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Bucket;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Bucket::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.buckets().patch(req, "bucket")
.user_project("est")
.projection("At")
.predefined_default_object_acl("sed")
.predefined_acl("sit")
.if_metageneration_not_match(-35)
.if_metageneration_match(-39)
.doit().await;
```
Implementations
---
### impl<'a, S> BucketPatchCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Bucket)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Bucket) -> BucketPatchCall<'a, SSets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> BucketPatchCall<'a, SName of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketPatchCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn projection(self, new_value: &str) -> BucketPatchCall<'a, SSet of properties to return. Defaults to full.
Sets the *projection* query property to the given value.
#### pub fn predefined_default_object_acl(
self,
new_value: &str
) -> BucketPatchCall<'a, SApply a predefined set of default object access controls to this bucket.
Sets the *predefined default object acl* query property to the given value.
#### pub fn predefined_acl(self, new_value: &str) -> BucketPatchCall<'a, SApply a predefined set of access controls to this bucket.
Sets the *predefined acl* query property to the given value.
#### pub fn if_metageneration_not_match(
self,
new_value: i64
) -> BucketPatchCall<'a, SMakes the return of the bucket metadata conditional on whether the bucket’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> BucketPatchCall<'a, SMakes the return of the bucket metadata conditional on whether the bucket’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> BucketPatchCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
```
It should be used to handle progress information, and to implement a certain level of resilience.
```
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketPatchCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketPatchCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketPatchCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketPatchCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketPatchCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketPatchCall<'a, S### impl<'a, S> Send for BucketPatchCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketPatchCall<'a, S### impl<'a, S> Unpin for BucketPatchCall<'a, S### impl<'a, S> !UnwindSafe for BucketPatchCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketSetIamPolicyCall
===
```
pub struct BucketSetIamPolicyCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Updates an IAM policy for the specified bucket.
A builder for the *setIamPolicy* method supported by a *bucket* resource.
It is not used directly, but through a `BucketMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Policy;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Policy::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.buckets().set_iam_policy(req, "bucket")
.user_project("ipsum")
.doit().await;
```
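In practice this call is usually the second half of a read-modify-write cycle with `get_iam_policy`; the sketch below assumes an enclosing async function for error propagation and that modifications are made to the `bindings` field of the fetched `Policy` (field name assumed).
```
// Hedged sketch: read-modify-write of a bucket IAM policy.
let (_response, mut policy) = hub.buckets()
    .get_iam_policy("my-bucket")
    .doit().await?;
// ... adjust `policy.bindings` here as required (field name assumed) ...
let (_response, updated_policy) = hub.buckets()
    .set_iam_policy(policy, "my-bucket")
    .doit().await?;
```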
Implementations
---
### impl<'a, S> BucketSetIamPolicyCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Policy)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Policy) -> BucketSetIamPolicyCall<'a, SSets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> BucketSetIamPolicyCall<'a, SName of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketSetIamPolicyCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> BucketSetIamPolicyCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
```
It should be used to handle progress information, and to implement a certain level of resilience.
```
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketSetIamPolicyCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketSetIamPolicyCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketSetIamPolicyCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketSetIamPolicyCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketSetIamPolicyCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketSetIamPolicyCall<'a, S### impl<'a, S> Send for BucketSetIamPolicyCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketSetIamPolicyCall<'a, S### impl<'a, S> Unpin for BucketSetIamPolicyCall<'a, S### impl<'a, S> !UnwindSafe for BucketSetIamPolicyCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketTestIamPermissionCall
===
```
pub struct BucketTestIamPermissionCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Tests a set of permissions on the given bucket to see which, if any, are held by the caller.
A builder for the *testIamPermissions* method supported by a *bucket* resource.
It is not used directly, but through a `BucketMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.buckets().test_iam_permissions("bucket", &vec!["sanctus".into()])
.user_project("Lorem")
.doit().await;
```
Implementations
---
### impl<'a, S> BucketTestIamPermissionCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, TestIamPermissionsResponse)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> BucketTestIamPermissionCall<'a, SName of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn add_permissions(
self,
new_value: &str
) -> BucketTestIamPermissionCall<'a, SPermissions to test.
Append the given value to the *permissions* query property.
Each appended value will retain its original ordering and be ‘/’-separated in the URL’s parameters.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
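For example, further permissions can be appended one at a time, as sketched below; the permission strings are standard Cloud Storage IAM permission names used purely for illustration.
```
// Hedged sketch: testing several permissions in a single call.
// The permission names are illustrative only.
let result = hub.buckets()
    .test_iam_permissions("my-bucket", &vec!["storage.buckets.get".into()])
    .add_permissions("storage.buckets.getIamPolicy")
    .add_permissions("storage.objects.list")
    .user_project("my-billing-project")
    .doit().await;
```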
#### pub fn user_project(self, new_value: &str) -> BucketTestIamPermissionCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> BucketTestIamPermissionCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
```
It should be used to handle progress information, and to implement a certain level of resilience.
```
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketTestIamPermissionCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketTestIamPermissionCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketTestIamPermissionCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketTestIamPermissionCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketTestIamPermissionCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketTestIamPermissionCall<'a, S### impl<'a, S> Send for BucketTestIamPermissionCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketTestIamPermissionCall<'a, S### impl<'a, S> Unpin for BucketTestIamPermissionCall<'a, S### impl<'a, S> !UnwindSafe for BucketTestIamPermissionCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::BucketUpdateCall
===
```
pub struct BucketUpdateCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Updates a bucket. Changes to the bucket will be readable immediately after writing, but configuration changes may take time to propagate.
A builder for the *update* method supported by a *bucket* resource.
It is not used directly, but through a `BucketMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Bucket;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Bucket::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.buckets().update(req, "bucket")
.user_project("sed")
.projection("diam")
.predefined_default_object_acl("dolores")
.predefined_acl("dolores")
.if_metageneration_not_match(-68)
.if_metageneration_match(-93)
.doit().await;
```
Implementations
---
### impl<'a, S> BucketUpdateCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Bucket)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Bucket) -> BucketUpdateCall<'a, SSets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> BucketUpdateCall<'a, SName of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> BucketUpdateCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn projection(self, new_value: &str) -> BucketUpdateCall<'a, SSet of properties to return. Defaults to full.
Sets the *projection* query property to the given value.
#### pub fn predefined_default_object_acl(
self,
new_value: &str
) -> BucketUpdateCall<'a, SApply a predefined set of default object access controls to this bucket.
Sets the *predefined default object acl* query property to the given value.
#### pub fn predefined_acl(self, new_value: &str) -> BucketUpdateCall<'a, SApply a predefined set of access controls to this bucket.
Sets the *predefined acl* query property to the given value.
#### pub fn if_metageneration_not_match(
self,
new_value: i64
) -> BucketUpdateCall<'a, SMakes the return of the bucket metadata conditional on whether the bucket’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> BucketUpdateCall<'a, SMakes the return of the bucket metadata conditional on whether the bucket’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> BucketUpdateCall<'a, SThe delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
```
It should be used to handle progress information, and to implement a certain level of resilience.
```
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> BucketUpdateCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> BucketUpdateCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> BucketUpdateCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> BucketUpdateCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for BucketUpdateCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for BucketUpdateCall<'a, S### impl<'a, S> Send for BucketUpdateCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for BucketUpdateCall<'a, S### impl<'a, S> Unpin for BucketUpdateCall<'a, S### impl<'a, S> !UnwindSafe for BucketUpdateCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::Channel
===
```
pub struct Channel {
pub address: Option<String>,
pub expiration: Option<i64>,
pub id: Option<String>,
pub kind: Option<String>,
pub params: Option<HashMap<String, String>>,
pub payload: Option<bool>,
pub resource_id: Option<String>,
pub resource_uri: Option<String>,
pub token: Option<String>,
pub type_: Option<String>,
}
```
A notification channel used to watch for resource changes.
Activities
---
This type is used in activities, which are methods you may call on this type or in which this type is otherwise involved.
The list below links each activity name with information about where the type is used (one of *request* and *response*).
* stop channels (request)
* watch all objects (request|response)
Fields
---
`address: Option<String>`The address where notifications are delivered for this channel.
`expiration: Option<i64>`Date and time of notification channel expiration, expressed as a Unix timestamp, in milliseconds. Optional.
`id: Option<String>`A UUID or similar unique string that identifies this channel.
`kind: Option<String>`Identifies this as a notification channel used to watch for changes to a resource, which is “api#channel”.
`params: Option<HashMap<String, String>>`Additional parameters controlling delivery channel behavior. Optional.
`payload: Option<bool>`A Boolean value to indicate whether payload is wanted. Optional.
`resource_id: Option<String>`An opaque ID that identifies the resource being watched on this channel. Stable across different API versions.
`resource_uri: Option<String>`A version-specific identifier for the watched resource.
`token: Option<String>`An arbitrary string delivered to the target address with each notification delivered over this channel. Optional.
`type_: Option<String>`The type of delivery mechanism used for this channel.
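Since every field is optional and the type implements `Default`, a channel for a watch request can be assembled with struct-update syntax; the values below are illustrative only.
```
use storage1::api::Channel;
// Hedged sketch: a minimal channel for a watch request; all values are illustrative.
let channel = Channel {
    id: Some("my-channel-id".to_string()),
    type_: Some("WEB_HOOK".to_string()),
    address: Some("https://example.com/notifications".to_string()),
    token: Some("opaque-verification-token".to_string()),
    ..Default::default()
};
```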
Trait Implementations
---
### impl Clone for Channel
#### fn clone(&self) -> Channel
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> Channel
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Resource for Channel
### impl ResponseResult for Channel
Auto Trait Implementations
---
### impl RefUnwindSafe for Channel
### impl Send for Channel
### impl Sync for Channel
### impl Unpin for Channel
### impl UnwindSafe for Channel
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct google_storage1::api::ChannelStopCall
===
```
pub struct ChannelStopCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Stop watching resources through this channel
A builder for the *stop* method supported by a *channel* resource.
It is not used directly, but through a `ChannelMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Channel;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Channel::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.channels().stop(req)
.doit().await;
```
Implementations
---
### impl<'a, S> ChannelStopCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<Response<Body>>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Channel) -> ChannelStopCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> ChannelStopCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ChannelStopCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ChannelStopCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ChannelStopCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ChannelStopCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
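As a loose illustration of how the setters above compose, here is a sketch (not from the crate docs) that adds an extra query parameter and an explicit scope before executing; `hub` and `req` are assumed to be set up as in the example further up, and all values are placeholders:
```
let result = hub.channels().stop(req)
// extra query parameter that has no dedicated setter
.param("fields", "kind")
// narrow the call to a specific OAuth scope instead of the default
.add_scope("https://www.googleapis.com/auth/devstorage.full_control")
.doit().await;
```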
Trait Implementations
---
### impl<'a, S> CallBuilder for ChannelStopCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ChannelStopCall<'a, S>
### impl<'a, S> Send for ChannelStopCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ChannelStopCall<'a, S>
### impl<'a, S> Unpin for ChannelStopCall<'a, S>
### impl<'a, S> !UnwindSafe for ChannelStopCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::DefaultObjectAccessControlDeleteCall
===
```
pub struct DefaultObjectAccessControlDeleteCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Permanently deletes the default object ACL entry for the specified entity on the specified bucket.
A builder for the *delete* method supported by a *defaultObjectAccessControl* resource.
It is not used directly, but through a `DefaultObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.default_object_access_controls().delete("bucket", "entity")
.user_project("elitr")
.doit().await;
```
Implementations
---
### impl<'a, S> DefaultObjectAccessControlDeleteCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<Response<Body>>
Perform the operation you have built so far.
#### pub fn bucket(
self,
new_value: &str
) -> DefaultObjectAccessControlDeleteCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(
self,
new_value: &str
) -> DefaultObjectAccessControlDeleteCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> DefaultObjectAccessControlDeleteCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> DefaultObjectAccessControlDeleteCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(
self,
name: T,
value: T
) -> DefaultObjectAccessControlDeleteCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(
self,
scope: St
) -> DefaultObjectAccessControlDeleteCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> DefaultObjectAccessControlDeleteCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> DefaultObjectAccessControlDeleteCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for DefaultObjectAccessControlDeleteCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for DefaultObjectAccessControlDeleteCall<'a, S>
### impl<'a, S> Send for DefaultObjectAccessControlDeleteCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for DefaultObjectAccessControlDeleteCall<'a, S>
### impl<'a, S> Unpin for DefaultObjectAccessControlDeleteCall<'a, S>
### impl<'a, S> !UnwindSafe for DefaultObjectAccessControlDeleteCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::DefaultObjectAccessControlGetCall
===
```
pub struct DefaultObjectAccessControlGetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Returns the default object ACL entry for the specified entity on the specified bucket.
A builder for the *get* method supported by a *defaultObjectAccessControl* resource.
It is not used directly, but through a `DefaultObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.default_object_access_controls().get("bucket", "entity")
.user_project("nonumy")
.doit().await;
```
Implementations
---
### impl<'a, S> DefaultObjectAccessControlGetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ObjectAccessControl)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> DefaultObjectAccessControlGetCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(self, new_value: &str) -> DefaultObjectAccessControlGetCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> DefaultObjectAccessControlGetCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> DefaultObjectAccessControlGetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(
self,
name: T,
value: T
) -> DefaultObjectAccessControlGetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(
self,
scope: St
) -> DefaultObjectAccessControlGetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> DefaultObjectAccessControlGetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> DefaultObjectAccessControlGetCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for DefaultObjectAccessControlGetCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for DefaultObjectAccessControlGetCall<'a, S>
### impl<'a, S> Send for DefaultObjectAccessControlGetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for DefaultObjectAccessControlGetCall<'a, S>
### impl<'a, S> Unpin for DefaultObjectAccessControlGetCall<'a, S>
### impl<'a, S> !UnwindSafe for DefaultObjectAccessControlGetCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::DefaultObjectAccessControlInsertCall
===
```
pub struct DefaultObjectAccessControlInsertCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Creates a new default object ACL entry on the specified bucket.
A builder for the *insert* method supported by a *defaultObjectAccessControl* resource.
It is not used directly, but through a `DefaultObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::ObjectAccessControl;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = ObjectAccessControl::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.default_object_access_controls().insert(req, "bucket")
.user_project("sadipscing")
.doit().await;
```
Implementations
---
### impl<'a, S> DefaultObjectAccessControlInsertCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ObjectAccessControl)>
Perform the operation you have built so far.
#### pub fn request(
self,
new_value: ObjectAccessControl
) -> DefaultObjectAccessControlInsertCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(
self,
new_value: &str
) -> DefaultObjectAccessControlInsertCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> DefaultObjectAccessControlInsertCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> DefaultObjectAccessControlInsertCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(
self,
name: T,
value: T
) -> DefaultObjectAccessControlInsertCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(
self,
scope: St
) -> DefaultObjectAccessControlInsertCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> DefaultObjectAccessControlInsertCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> DefaultObjectAccessControlInsertCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for DefaultObjectAccessControlInsertCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for DefaultObjectAccessControlInsertCall<'a, S>
### impl<'a, S> Send for DefaultObjectAccessControlInsertCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for DefaultObjectAccessControlInsertCall<'a, S>
### impl<'a, S> Unpin for DefaultObjectAccessControlInsertCall<'a, S>
### impl<'a, S> !UnwindSafe for DefaultObjectAccessControlInsertCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::DefaultObjectAccessControlListCall
===
```
pub struct DefaultObjectAccessControlListCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Retrieves default object ACL entries on the specified bucket.
A builder for the *list* method supported by a *defaultObjectAccessControl* resource.
It is not used directly, but through a `DefaultObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.default_object_access_controls().list("bucket")
.user_project("dolores")
.if_metageneration_not_match(-95)
.if_metageneration_match(-31)
.doit().await;
```
Implementations
---
### impl<'a, S> DefaultObjectAccessControlListCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ObjectAccessControls)>
Perform the operation you have built so far.
#### pub fn bucket(
self,
new_value: &str
) -> DefaultObjectAccessControlListCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> DefaultObjectAccessControlListCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn if_metageneration_not_match(
self,
new_value: i64
) -> DefaultObjectAccessControlListCall<'a, S>
If present, only return default ACL listing if the bucket’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(
self,
new_value: i64
) -> DefaultObjectAccessControlListCall<'a, S>
If present, only return default ACL listing if the bucket’s current metageneration matches this value.
Sets the *if metageneration match* query property to the given value.
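As a small usage sketch for the two preconditions above (placeholder bucket name and metageneration value; typically only one of the two is set on a given request, and `hub` is assumed to be configured as in the example further up):
```
// Only return the default object ACLs if the bucket metadata is still at
// metageneration 3; otherwise the precondition fails.
let result = hub.default_object_access_controls().list("my-bucket")
.if_metageneration_match(3)
.doit().await;
```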
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> DefaultObjectAccessControlListCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(
self,
name: T,
value: T
) -> DefaultObjectAccessControlListCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(
self,
scope: St
) -> DefaultObjectAccessControlListCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> DefaultObjectAccessControlListCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> DefaultObjectAccessControlListCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for DefaultObjectAccessControlListCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for DefaultObjectAccessControlListCall<'a, S>
### impl<'a, S> Send for DefaultObjectAccessControlListCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for DefaultObjectAccessControlListCall<'a, S>
### impl<'a, S> Unpin for DefaultObjectAccessControlListCall<'a, S>
### impl<'a, S> !UnwindSafe for DefaultObjectAccessControlListCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::DefaultObjectAccessControlPatchCall
===
```
pub struct DefaultObjectAccessControlPatchCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Patches a default object ACL entry on the specified bucket.
A builder for the *patch* method supported by a *defaultObjectAccessControl* resource.
It is not used directly, but through a `DefaultObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::ObjectAccessControl;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = ObjectAccessControl::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.default_object_access_controls().patch(req, "bucket", "entity")
.user_project("est")
.doit().await;
```
Implementations
---
### impl<'a, S> DefaultObjectAccessControlPatchCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ObjectAccessControl)>
Perform the operation you have built so far.
#### pub fn request(
self,
new_value: ObjectAccessControl
) -> DefaultObjectAccessControlPatchCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(
self,
new_value: &str
) -> DefaultObjectAccessControlPatchCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(
self,
new_value: &str
) -> DefaultObjectAccessControlPatchCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> DefaultObjectAccessControlPatchCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> DefaultObjectAccessControlPatchCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(
self,
name: T,
value: T
) -> DefaultObjectAccessControlPatchCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(
self,
scope: St
) -> DefaultObjectAccessControlPatchCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> DefaultObjectAccessControlPatchCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> DefaultObjectAccessControlPatchCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for DefaultObjectAccessControlPatchCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for DefaultObjectAccessControlPatchCall<'a, S>
### impl<'a, S> Send for DefaultObjectAccessControlPatchCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for DefaultObjectAccessControlPatchCall<'a, S>
### impl<'a, S> Unpin for DefaultObjectAccessControlPatchCall<'a, S>
### impl<'a, S> !UnwindSafe for DefaultObjectAccessControlPatchCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::DefaultObjectAccessControlUpdateCall
===
```
pub struct DefaultObjectAccessControlUpdateCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Updates a default object ACL entry on the specified bucket.
A builder for the *update* method supported by a *defaultObjectAccessControl* resource.
It is not used directly, but through a `DefaultObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::ObjectAccessControl;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = ObjectAccessControl::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.default_object_access_controls().update(req, "bucket", "entity")
.user_project("consetetur")
.doit().await;
```
Implementations
---
### impl<'a, S> DefaultObjectAccessControlUpdateCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ObjectAccessControl)>
Perform the operation you have built so far.
#### pub fn request(
self,
new_value: ObjectAccessControl
) -> DefaultObjectAccessControlUpdateCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(
self,
new_value: &str
) -> DefaultObjectAccessControlUpdateCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(
self,
new_value: &str
) -> DefaultObjectAccessControlUpdateCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> DefaultObjectAccessControlUpdateCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> DefaultObjectAccessControlUpdateCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(
self,
name: T,
value: T
) -> DefaultObjectAccessControlUpdateCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(
self,
scope: St
) -> DefaultObjectAccessControlUpdateCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> DefaultObjectAccessControlUpdateCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> DefaultObjectAccessControlUpdateCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for DefaultObjectAccessControlUpdateCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for DefaultObjectAccessControlUpdateCall<'a, S>
### impl<'a, S> Send for DefaultObjectAccessControlUpdateCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for DefaultObjectAccessControlUpdateCall<'a, S>
### impl<'a, S> Unpin for DefaultObjectAccessControlUpdateCall<'a, S>
### impl<'a, S> !UnwindSafe for DefaultObjectAccessControlUpdateCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::Notification
===
```
pub struct Notification {
pub custom_attributes: Option<HashMap<String, String>>,
pub etag: Option<String>,
pub event_types: Option<Vec<String>>,
pub id: Option<String>,
pub kind: Option<String>,
pub object_name_prefix: Option<String>,
pub payload_format: Option<String>,
pub self_link: Option<String>,
pub topic: Option<String>,
}
```
A subscription to receive Google PubSub notifications.
Activities
---
This type is used in activities, which are methods you may call on this type or where this type is involved in.
The list links the activity name, along with information about where it is used (one of *request* and *response*).
* delete notifications (none)
* get notifications (response)
* insert notifications (request|response)
* list notifications (none)
Fields
---
`custom_attributes: Option<HashMap<String, String>>`An optional list of additional attributes to attach to each Cloud PubSub message published for this notification subscription.
`etag: Option<String>`HTTP 1.1 Entity tag for this subscription notification.
`event_types: Option<Vec<String>>`If present, only send notifications about listed event types. If empty, notifications are sent for all event types.
`id: Option<String>`The ID of the notification.
`kind: Option<String>`The kind of item this is. For notifications, this is always storage#notification.
`object_name_prefix: Option<String>`If present, only apply this notification configuration to object names that begin with this prefix.
`payload_format: Option<String>`The desired content of the Payload.
`self_link: Option<String>`The canonical URL of this notification.
`topic: Option<String>`The Cloud PubSub topic to which this subscription publishes. Formatted as: ‘//pubsub.googleapis.com/projects/{project-identifier}/topics/{my-topic}’
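For orientation, a minimal sketch (not taken from the crate docs) of filling a `Notification` for an *insert* call, using only the fields documented above; the project, topic, and event type are placeholder values:
```
use storage1::api::Notification;
// Placeholder topic path in the documented format; "JSON_API_V1" and
// "OBJECT_FINALIZE" are values accepted by the JSON API.
let req = Notification {
topic: Some("//pubsub.googleapis.com/projects/my-project/topics/my-topic".to_string()),
payload_format: Some("JSON_API_V1".to_string()),
event_types: Some(vec!["OBJECT_FINALIZE".to_string()]),
..Default::default()
};
```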
Trait Implementations
---
### impl Clone for Notification
#### fn clone(&self) -> Notification
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Notification
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for Notification
#### fn default() -> Notification
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for Notification
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl Serialize for Notification
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Resource for Notification
### impl ResponseResult for Notification
Auto Trait Implementations
---
### impl RefUnwindSafe for Notification
### impl Send for Notification
### impl Sync for Notification
### impl Unpin for Notification
### impl UnwindSafe for Notification
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where
T: for<'de> Deserialize<'de>,
Struct google_storage1::api::NotificationDeleteCall
===
```
pub struct NotificationDeleteCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Permanently deletes a notification subscription.
A builder for the *delete* method supported by a *notification* resource.
It is not used directly, but through a `NotificationMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.notifications().delete("bucket", "notification")
.user_project("est")
.doit().await;
```
Implementations
---
### impl<'a, S> NotificationDeleteCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<Response<Body>>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> NotificationDeleteCall<'a, S>
The parent bucket of the notification.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn notification(self, new_value: &str) -> NotificationDeleteCall<'a, S>
ID of the notification to delete.
Sets the *notification* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> NotificationDeleteCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> NotificationDeleteCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> NotificationDeleteCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> NotificationDeleteCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> NotificationDeleteCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> NotificationDeleteCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
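A minimal sketch of that key-only flow, assuming the same `hub` setup as above; the key value is a placeholder:
```
// Sketch: drop all scopes and authenticate the request with an API key instead.
let result = hub.notifications().delete("bucket", "notification")
    .clear_scopes()
    .param("key", "YOUR_API_KEY")
    .doit().await;
```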
Trait Implementations
---
### impl<'a, S> CallBuilder for NotificationDeleteCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for NotificationDeleteCall<'a, S>
### impl<'a, S> Send for NotificationDeleteCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for NotificationDeleteCall<'a, S>
### impl<'a, S> Unpin for NotificationDeleteCall<'a, S>
### impl<'a, S> !UnwindSafe for NotificationDeleteCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
Struct google_storage1::api::NotificationGetCall
===
```
pub struct NotificationGetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
View a notification configuration.
A builder for the *get* method supported by a *notification* resource.
It is not used directly, but through a `NotificationMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.notifications().get("bucket", "notification")
.user_project("duo")
.doit().await;
```
Implementations
---
### impl<'a, S> NotificationGetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Notification)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> NotificationGetCall<'a, S>
The parent bucket of the notification.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn notification(self, new_value: &str) -> NotificationGetCall<'a, S>
Notification ID
Sets the *notification* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> NotificationGetCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> NotificationGetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> NotificationGetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> NotificationGetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> NotificationGetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> NotificationGetCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for NotificationGetCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for NotificationGetCall<'a, S>
### impl<'a, S> Send for NotificationGetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for NotificationGetCall<'a, S>
### impl<'a, S> Unpin for NotificationGetCall<'a, S>
### impl<'a, S> !UnwindSafe for NotificationGetCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
Struct google_storage1::api::NotificationInsertCall
===
```
pub struct NotificationInsertCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Creates a notification subscription for a given bucket.
A builder for the *insert* method supported by a *notification* resource.
It is not used directly, but through a `NotificationMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Notification;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Notification::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.notifications().insert(req, "bucket")
.user_project("est")
.doit().await;
```
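In practice the request is rarely left at its default; a hedged sketch of filling it in follows (the `topic` and `payload_format` field names are assumptions about the `Notification` struct, not taken from this page):
```
// Sketch, continuing the example above: populate the request before calling insert().
// The field names are assumed to be Option<String> members of Notification.
let mut req = Notification::default();
req.topic = Some("//pubsub.googleapis.com/projects/my-project/topics/my-topic".to_string());
req.payload_format = Some("JSON_API_V1".to_string());

let result = hub.notifications().insert(req, "bucket")
    .doit().await;
```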
Implementations
---
### impl<'a, S> NotificationInsertCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Notification)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Notification) -> NotificationInsertCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> NotificationInsertCall<'a, S>
The parent bucket of the notification.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> NotificationInsertCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> NotificationInsertCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> NotificationInsertCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> NotificationInsertCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> NotificationInsertCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> NotificationInsertCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for NotificationInsertCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for NotificationInsertCall<'a, S>
### impl<'a, S> Send for NotificationInsertCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for NotificationInsertCall<'a, S>
### impl<'a, S> Unpin for NotificationInsertCall<'a, S>
### impl<'a, S> !UnwindSafe for NotificationInsertCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
Struct google_storage1::api::NotificationListCall
===
```
pub struct NotificationListCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Retrieves a list of notification subscriptions for a given bucket.
A builder for the *list* method supported by a *notification* resource.
It is not used directly, but through a `NotificationMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.notifications().list("bucket")
.user_project("sed")
.doit().await;
```
Implementations
---
### impl<'a, S> NotificationListCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Notifications)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> NotificationListCall<'a, S>
Name of a Google Cloud Storage bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> NotificationListCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> NotificationListCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> NotificationListCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> NotificationListCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> NotificationListCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> NotificationListCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for NotificationListCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for NotificationListCall<'a, S>
### impl<'a, S> Send for NotificationListCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for NotificationListCall<'a, S>
### impl<'a, S> Unpin for NotificationListCall<'a, S>
### impl<'a, S> !UnwindSafe for NotificationListCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
Struct google_storage1::api::ObjectAccessControl
===
```
pub struct ObjectAccessControl {
pub bucket: Option<String>,
pub domain: Option<String>,
pub email: Option<String>,
pub entity: Option<String>,
pub entity_id: Option<String>,
pub etag: Option<String>,
pub generation: Option<i64>,
pub id: Option<String>,
pub kind: Option<String>,
pub object: Option<String>,
pub project_team: Option<ObjectAccessControlProjectTeam>,
pub role: Option<String>,
pub self_link: Option<String>,
}
```
An access-control entry.
Activities
---
This type is used in activities, which are methods you may call on this type or in which this type is involved.
The list links the activity name, along with information about where it is used (one of *request* and *response*).
* get default object access controls (response)
* insert default object access controls (request|response)
* patch default object access controls (request|response)
* update default object access controls (request|response)
* delete object access controls (none)
* get object access controls (response)
* insert object access controls (request|response)
* list object access controls (none)
* patch object access controls (request|response)
* update object access controls (request|response)
Fields
---
`bucket: Option<String>`The name of the bucket.
`domain: Option<String>`The domain associated with the entity, if any.
`email: Option<String>`The email address associated with the entity, if any.
`entity: Option<String>`The entity holding the permission, in one of the following forms:
* user-userId
* user-email
* group-groupId
* group-email
* domain-domain
* project-team-projectId
* allUsers
* allAuthenticatedUsers
Examples:
* The user <EMAIL> would be <EMAIL>.
* The group <EMAIL> would be <EMAIL>.
* To refer to all members of the Google Apps for Business domain example.com, the entity would be domain-example.com.
`entity_id: Option<String>`The ID for the entity, if any.
`etag: Option<String>`HTTP 1.1 Entity tag for the access-control entry.
`generation: Option<i64>`The content generation of the object, if applied to an object.
`id: Option<String>`The ID of the access-control entry.
`kind: Option<String>`The kind of item this is. For object access control entries, this is always storage#objectAccessControl.
`object: Option<String>`The name of the object, if applied to an object.
`project_team: Option<ObjectAccessControlProjectTeam>`The project team associated with the entity, if any.
`role: Option<String>`The access permission for the entity.
`self_link: Option<String>`The link to this access-control entry.
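For orientation, a hedged sketch of building an entry by hand using the fields documented above; the entity and role values are placeholders, and in practice most of the remaining fields are filled in by the server:
```
use storage1::api::ObjectAccessControl;

// Sketch: set only the fields a caller typically provides; everything else stays None.
let acl = ObjectAccessControl {
    entity: Some("user-liz@example.com".to_string()),
    role: Some("READER".to_string()),
    ..Default::default()
};
```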
Trait Implementations
---
### impl Clone for ObjectAccessControl
#### fn clone(&self) -> ObjectAccessControl
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ObjectAccessControl
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ObjectAccessControl
#### fn default() -> ObjectAccessControl
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for ObjectAccessControl
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl Serialize for ObjectAccessControl
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Resource for ObjectAccessControl
### impl ResponseResult for ObjectAccessControl
Auto Trait Implementations
---
### impl RefUnwindSafe for ObjectAccessControl
### impl Send for ObjectAccessControl
### impl Sync for ObjectAccessControl
### impl Unpin for ObjectAccessControl
### impl UnwindSafe for ObjectAccessControl
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
### impl<T> DeserializeOwned for Twhere
T: for<'de> Deserialize<'de>,
Struct google_storage1::api::ObjectAccessControlDeleteCall
===
```
pub struct ObjectAccessControlDeleteCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Permanently deletes the ACL entry for the specified entity on the specified object.
A builder for the *delete* method supported by an *objectAccessControl* resource.
It is not used directly, but through an `ObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.object_access_controls().delete("bucket", "object", "entity")
.user_project("Stet")
.generation(-19)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectAccessControlDeleteCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<Response<Body>>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> ObjectAccessControlDeleteCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectAccessControlDeleteCall<'a, S>
Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(self, new_value: &str) -> ObjectAccessControlDeleteCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> ObjectAccessControlDeleteCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectAccessControlDeleteCall<'a, S>
If present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectAccessControlDeleteCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectAccessControlDeleteCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectAccessControlDeleteCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> ObjectAccessControlDeleteCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectAccessControlDeleteCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectAccessControlDeleteCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectAccessControlDeleteCall<'a, S>
### impl<'a, S> Send for ObjectAccessControlDeleteCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectAccessControlDeleteCall<'a, S>
### impl<'a, S> Unpin for ObjectAccessControlDeleteCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectAccessControlDeleteCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
Struct google_storage1::api::ObjectAccessControlGetCall
===
```
pub struct ObjectAccessControlGetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Returns the ACL entry for the specified entity on the specified object.
A builder for the *get* method supported by an *objectAccessControl* resource.
It is not used directly, but through an `ObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.object_access_controls().get("bucket", "object", "entity")
.user_project("et")
.generation(-77)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectAccessControlGetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ObjectAccessControl)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> ObjectAccessControlGetCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectAccessControlGetCall<'a, S>
Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(self, new_value: &str) -> ObjectAccessControlGetCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectAccessControlGetCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectAccessControlGetCall<'a, S>
If present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectAccessControlGetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectAccessControlGetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectAccessControlGetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectAccessControlGetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectAccessControlGetCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectAccessControlGetCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectAccessControlGetCall<'a, S>
### impl<'a, S> Send for ObjectAccessControlGetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectAccessControlGetCall<'a, S>
### impl<'a, S> Unpin for ObjectAccessControlGetCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectAccessControlGetCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
Struct google_storage1::api::ObjectAccessControlInsertCall
===
```
pub struct ObjectAccessControlInsertCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Creates a new ACL entry on the specified object.
A builder for the *insert* method supported by an *objectAccessControl* resource.
It is not used directly, but through an `ObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::ObjectAccessControl;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = ObjectAccessControl::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.object_access_controls().insert(req, "bucket", "object")
.user_project("Lorem")
.generation(-23)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectAccessControlInsertCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ObjectAccessControl)>
Perform the operation you have built so far.
#### pub fn request(
self,
new_value: ObjectAccessControl
) -> ObjectAccessControlInsertCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> ObjectAccessControlInsertCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectAccessControlInsertCall<'a, S>
Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> ObjectAccessControlInsertCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectAccessControlInsertCall<'a, S>
If present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectAccessControlInsertCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectAccessControlInsertCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectAccessControlInsertCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> ObjectAccessControlInsertCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectAccessControlInsertCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectAccessControlInsertCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectAccessControlInsertCall<'a, S>
### impl<'a, S> Send for ObjectAccessControlInsertCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectAccessControlInsertCall<'a, S>
### impl<'a, S> Unpin for ObjectAccessControlInsertCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectAccessControlInsertCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
Struct google_storage1::api::ObjectAccessControlListCall
===
```
pub struct ObjectAccessControlListCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Retrieves ACL entries on the specified object.
A builder for the *list* method supported by an *objectAccessControl* resource.
It is not used directly, but through an `ObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.object_access_controls().list("bucket", "object")
.user_project("dolores")
.generation(-81)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectAccessControlListCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ObjectAccessControls)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> ObjectAccessControlListCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectAccessControlListCall<'a, S>
Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectAccessControlListCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectAccessControlListCall<'a, S>
If present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectAccessControlListCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectAccessControlListCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectAccessControlListCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectAccessControlListCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectAccessControlListCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectAccessControlListCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectAccessControlListCall<'a, S>
### impl<'a, S> Send for ObjectAccessControlListCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectAccessControlListCall<'a, S>
### impl<'a, S> Unpin for ObjectAccessControlListCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectAccessControlListCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
Struct google_storage1::api::ObjectAccessControlPatchCall
===
```
pub struct ObjectAccessControlPatchCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Patches an ACL entry on the specified object.
A builder for the *patch* method supported by an *objectAccessControl* resource.
It is not used directly, but through an `ObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::ObjectAccessControl;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = ObjectAccessControl::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.object_access_controls().patch(req, "bucket", "object", "entity")
.user_project("Lorem")
.generation(-22)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectAccessControlPatchCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ObjectAccessControl)>
Perform the operation you have built so far.
#### pub fn request(
self,
new_value: ObjectAccessControl
) -> ObjectAccessControlPatchCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> ObjectAccessControlPatchCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectAccessControlPatchCall<'a, S>
Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(self, new_value: &str) -> ObjectAccessControlPatchCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> ObjectAccessControlPatchCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectAccessControlPatchCall<'a, S>
If present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectAccessControlPatchCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectAccessControlPatchCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
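For example, a sketch of combining two of the standard parameters listed above to request a trimmed-down, compact response (the field selector is illustrative):

```
// `req` is an ObjectAccessControl request body as in the example above.
let result = hub.object_access_controls().patch(req, "bucket", "object", "entity")
    .param("fields", "entity,role")    // partial response: only these fields
    .param("prettyPrint", "false")     // no indentation or line breaks
    .doit().await;
```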
#### pub fn add_scope<St>(self, scope: St) -> ObjectAccessControlPatchCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, though a read-write scope would work as well.
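A sketch of explicitly requesting a narrower scope than the default (the URL is the standard Cloud Storage full-control scope; depending on the crate version there may also be an equivalent `Scope` enum variant):

```
// ACL changes need full control of the objects, but not the whole cloud-platform scope.
// `req` is an ObjectAccessControl request body as in the example above.
let result = hub.object_access_controls().patch(req, "bucket", "object", "entity")
    .add_scope("https://www.googleapis.com/auth/devstorage.full_control")
    .doit().await;
```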
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectAccessControlPatchCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectAccessControlPatchCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectAccessControlPatchCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectAccessControlPatchCall<'a, S>
### impl<'a, S> Send for ObjectAccessControlPatchCall<'a, S> where
S: Sync,
### impl<'a, S> !Sync for ObjectAccessControlPatchCall<'a, S>
### impl<'a, S> Unpin for ObjectAccessControlPatchCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectAccessControlPatchCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ObjectAccessControlUpdateCall
===
```
pub struct ObjectAccessControlUpdateCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Updates an ACL entry on the specified object.
A builder for the *update* method supported by an *objectAccessControl* resource.
It is not used directly, but through an `ObjectAccessControlMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::ObjectAccessControl;
// As the method needs a request, you would usually fill the respective structure
// with the desired information. Some of the parts shown here might not be applicable!
// Values shown here are possibly random and not representative!
// This sketch assumes `hub` is an already-constructed, authenticated storage1::Storage client.
let mut req = ObjectAccessControl::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative!
let result = hub.object_access_controls().update(req, "bucket", "object", "entity")
.user_project("sit")
.generation(-81)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectAccessControlUpdateCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ObjectAccessControl)>
Performs the operation you have built so far.
#### pub fn request(self, new_value: ObjectAccessControl) -> ObjectAccessControlUpdateCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> ObjectAccessControlUpdateCall<'a, S>
Name of a bucket.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectAccessControlUpdateCall<'a, S>
Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn entity(self, new_value: &str) -> ObjectAccessControlUpdateCall<'a, S>
The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.
Sets the *entity* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectAccessControlUpdateCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectAccessControlUpdateCall<'a, S>
If present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> ObjectAccessControlUpdateCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectAccessControlUpdateCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectAccessControlUpdateCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, though a read-write scope would work as well.
#### pub fn add_scopes<I, St>(
self,
scopes: I
) -> ObjectAccessControlUpdateCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectAccessControlUpdateCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectAccessControlUpdateCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectAccessControlUpdateCall<'a, S>
### impl<'a, S> Send for ObjectAccessControlUpdateCall<'a, S> where
S: Sync,
### impl<'a, S> !Sync for ObjectAccessControlUpdateCall<'a, S>
### impl<'a, S> Unpin for ObjectAccessControlUpdateCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectAccessControlUpdateCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::Object
===
```
pub struct Object {
pub acl: Option<Vec<ObjectAccessControl>>,
pub bucket: Option<String>,
pub cache_control: Option<String>,
pub component_count: Option<i32>,
pub content_disposition: Option<String>,
pub content_encoding: Option<String>,
pub content_language: Option<String>,
pub content_type: Option<String>,
pub crc32c: Option<String>,
pub custom_time: Option<DateTime<Utc>>,
pub customer_encryption: Option<ObjectCustomerEncryption>,
pub etag: Option<String>,
pub event_based_hold: Option<bool>,
pub generation: Option<i64>,
pub id: Option<String>,
pub kind: Option<String>,
pub kms_key_name: Option<String>,
pub md5_hash: Option<String>,
pub media_link: Option<String>,
pub metadata: Option<HashMap<String, String>>,
pub metageneration: Option<i64>,
pub name: Option<String>,
pub owner: Option<ObjectOwner>,
pub retention_expiration_time: Option<DateTime<Utc>>,
pub self_link: Option<String>,
pub size: Option<u64>,
pub storage_class: Option<String>,
pub temporary_hold: Option<bool>,
pub time_created: Option<DateTime<Utc>>,
pub time_deleted: Option<DateTime<Utc>>,
pub time_storage_class_updated: Option<DateTime<Utc>>,
pub updated: Option<DateTime<Utc>>,
}
```
An object.
Activities
---
This type is used in activities, which are methods you may call on this type or in which this type is involved.
The list links each activity name with information about where this type is used (as *request*, as *response*, both, or neither).
* compose objects (response)
* copy objects (request|response)
* delete objects (none)
* get objects (response)
* get iam policy objects (none)
* insert objects (request|response)
* list objects (none)
* patch objects (request|response)
* rewrite objects (request)
* set iam policy objects (none)
* test iam permissions objects (none)
* update objects (request|response)
* watch all objects (none)
Fields
---
`acl: Option<Vec<ObjectAccessControl>>`Access controls on the object.
`bucket: Option<String>`The name of the bucket containing this object.
`cache_control: Option<String>`Cache-Control directive for the object data. If omitted, and the object is accessible to all anonymous users, the default will be public, max-age=3600.
`component_count: Option<i32>`Number of underlying components that make up this object. Components are accumulated by compose operations.
`content_disposition: Option<String>`Content-Disposition of the object data.
`content_encoding: Option<String>`Content-Encoding of the object data.
`content_language: Option<String>`Content-Language of the object data.
`content_type: Option<String>`Content-Type of the object data. If an object is stored without a Content-Type, it is served as application/octet-stream.
`crc32c: Option<String>`CRC32c checksum, as described in RFC 4960, Appendix B; encoded using base64 in big-endian byte order. For more information about using the CRC32c checksum, see Hashes and ETags: Best Practices.
`custom_time: Option<DateTime<Utc>>`A timestamp in RFC 3339 format specified by the user for an object.
`customer_encryption: Option<ObjectCustomerEncryption>`Metadata of customer-supplied encryption key, if the object is encrypted by such a key.
`etag: Option<String>`HTTP 1.1 Entity tag for the object.
`event_based_hold: Option<bool>`Whether an object is under event-based hold. Event-based hold is a way to retain objects until an event occurs, which is signified by the hold’s release (i.e. this value is set to false). After being released (set to false), such objects will be subject to bucket-level retention (if any). One sample use case of this flag is for banks to hold loan documents for at least 3 years after loan is paid in full. Here, bucket-level retention is 3 years and the event is the loan being paid in full. In this example, these objects will be held intact for any number of years until the event has occurred (event-based hold on the object is released) and then 3 more years after that. That means retention duration of the objects begins from the moment event-based hold transitioned from true to false.
`generation: Option<i64>`The content generation of this object. Used for object versioning.
`id: Option<String>`The ID of the object, including the bucket name, object name, and generation number.
`kind: Option<String>`The kind of item this is. For objects, this is always storage#object.
`kms_key_name: Option<String>`Not currently supported. Specifying the parameter causes the request to fail with status code 400 - Bad Request.
`md5_hash: Option<String>`MD5 hash of the data; encoded using base64. For more information about using the MD5 hash, see Hashes and ETags: Best Practices.
`media_link: Option<String>`Media download link.
`metadata: Option<HashMap<String, String>>`User-provided metadata, in key/value pairs.
`metageneration: Option<i64>`The version of the metadata for this object at this generation. Used for preconditions and for detecting changes in metadata. A metageneration number is only meaningful in the context of a particular generation of a particular object.
`name: Option<String>`The name of the object. Required if not specified by URL parameter.
`owner: Option<ObjectOwner>`The owner of the object. This will always be the uploader of the object.
`retention_expiration_time: Option<DateTime<Utc>>`A server-determined value that specifies the earliest time that the object’s retention period expires. This value is in RFC 3339 format. Note 1: This field is not provided for objects with an active event-based hold, since retention expiration is unknown until the hold is removed. Note 2: This value can be provided even when temporary hold is set (so that the user can reason about policy without having to first unset the temporary hold).
`self_link: Option<String>`The link to this object.
`size: Option<u64>`Content-Length of the data in bytes.
`storage_class: Option<String>`Storage class of the object.
`temporary_hold: Option<bool>`Whether an object is under temporary hold. While this flag is set to true, the object is protected against deletion and overwrites. A common use case of this flag is regulatory investigations where objects need to be retained while the investigation is ongoing. Note that unlike event-based hold, temporary hold does not impact retention expiration time of an object.
`time_created: Option<DateTime<Utc>>`The creation time of the object in RFC 3339 format.
`time_deleted: Option<DateTime<Utc>>`The deletion time of the object in RFC 3339 format. Will be returned if and only if this version of the object has been deleted.
`time_storage_class_updated: Option<DateTime<Utc>>`The time at which the object’s storage class was last changed. When the object is initially created, it will be set to timeCreated.
`updated: Option<DateTime<Utc>>`The modification time of the object metadata in RFC 3339 format. Set initially to object creation time and then updated whenever any metadata of the object changes. This includes changes made by a requester, such as modifying custom metadata, as well as changes made by Cloud Storage on behalf of a requester, such as changing the storage class based on an Object Lifecycle Configuration.
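Since every field is optional, an `Object` used as a request body is normally built from `Default` with only the fields of interest filled in; a minimal sketch with illustrative values:

```
use std::collections::HashMap;
use storage1::api::Object;

// Only the metadata we want to change; every other field stays `None`.
let mut custom_metadata = HashMap::new();
custom_metadata.insert("department".to_string(), "finance".to_string());

let req = Object {
    content_type: Some("application/pdf".to_string()),
    cache_control: Some("public, max-age=3600".to_string()),
    metadata: Some(custom_metadata),
    ..Default::default()
};
```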
Trait Implementations
---
### impl Clone for Object
#### fn clone(&self) -> Object
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Object
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for Object
#### fn default() -> Object
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for Object
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
### impl Serialize for Object
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl Resource for Object
### impl ResponseResult for Object
Auto Trait Implementations
---
### impl RefUnwindSafe for Object
### impl Send for Object
### impl Sync for Object
### impl Unpin for Object
### impl UnwindSafe for Object
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where
T: for<'de> Deserialize<'de>,
Struct google_storage1::api::ObjectComposeCall
===
```
pub struct ObjectComposeCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Concatenates a list of existing objects into a new object in the same bucket.
A builder for the *compose* method supported by an *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::ComposeRequest;
// As the method needs a request, you would usually fill the respective structure
// with the desired information. Some of the parts shown here might not be applicable!
// Values shown here are possibly random and not representative!
// This sketch assumes `hub` is an already-constructed, authenticated storage1::Storage client.
let mut req = ComposeRequest::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative!
let result = hub.objects().compose(req, "destinationBucket", "destinationObject")
.user_project("et")
.kms_key_name("gubergren")
.if_metageneration_match(-21)
.if_generation_match(-60)
.destination_predefined_acl("consetetur")
.doit().await;
```
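A sketch of a more realistic request, assuming the generated source-object type follows the crate's usual naming (`ComposeRequestSourceObjects`); all names and values are illustrative:

```
use storage1::api::{ComposeRequest, ComposeRequestSourceObjects, Object};

// Concatenate two existing log parts into one destination object.
let req = ComposeRequest {
    destination: Some(Object {
        content_type: Some("text/plain".to_string()),
        ..Default::default()
    }),
    source_objects: Some(vec![
        ComposeRequestSourceObjects { name: Some("part-1.log".to_string()), ..Default::default() },
        ComposeRequestSourceObjects { name: Some("part-2.log".to_string()), ..Default::default() },
    ]),
    ..Default::default()
};
let result = hub.objects().compose(req, "destinationBucket", "merged.log").doit().await;
```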
Implementations
---
### impl<'a, S> ObjectComposeCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Object)>
Performs the operation you have built so far.
#### pub fn request(self, new_value: ComposeRequest) -> ObjectComposeCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn destination_bucket(self, new_value: &str) -> ObjectComposeCall<'a, S>
Name of the bucket containing the source objects. The destination object is stored in this bucket.
Sets the *destination bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn destination_object(self, new_value: &str) -> ObjectComposeCall<'a, S>
Name of the new object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *destination object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectComposeCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn kms_key_name(self, new_value: &str) -> ObjectComposeCall<'a, S>
Resource name of the Cloud KMS key, of the form projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the object. Overrides the object metadata’s kms_key_name value, if any.
Sets the *kms key name* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> ObjectComposeCall<'a, S>
Makes the operation conditional on whether the object’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn if_generation_match(self, new_value: i64) -> ObjectComposeCall<'a, S>
Makes the operation conditional on whether the object’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.
Sets the *if generation match* query property to the given value.
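For example, a sketch of using the `0` sentinel described above as a simple create-if-absent guard for the composite object (`req` is a `ComposeRequest` as in the example above):

```
// Only create the composite object if no live destination object exists yet.
let result = hub.objects().compose(req, "destinationBucket", "destinationObject")
    .if_generation_match(0)
    .doit().await;
```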
#### pub fn destination_predefined_acl(self, new_value: &str) -> ObjectComposeCall<'a, S>
Apply a predefined set of access controls to the destination object.
Sets the *destination predefined acl* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> ObjectComposeCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectComposeCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectComposeCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, though a read-write scope would work as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectComposeCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectComposeCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectComposeCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectComposeCall<'a, S>
### impl<'a, S> Send for ObjectComposeCall<'a, S> where
S: Sync,
### impl<'a, S> !Sync for ObjectComposeCall<'a, S>
### impl<'a, S> Unpin for ObjectComposeCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectComposeCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ObjectCopyCall
===
```
pub struct ObjectCopyCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Copies a source object to a destination object. Optionally overrides metadata.
A builder for the *copy* method supported by an *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Object;
// As the method needs a request, you would usually fill the respective structure
// with the desired information. Some of the parts shown here might not be applicable!
// Values shown here are possibly random and not representative!
// This sketch assumes `hub` is an already-constructed, authenticated storage1::Storage client.
let mut req = Object::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative!
let result = hub.objects().copy(req, "sourceBucket", "sourceObject", "destinationBucket", "destinationObject")
.user_project("dolores")
.source_generation(-46)
.projection("gubergren")
.if_source_metageneration_not_match(-4)
.if_source_metageneration_match(-32)
.if_source_generation_not_match(-61)
.if_source_generation_match(-2)
.if_metageneration_not_match(-50)
.if_metageneration_match(-56)
.if_generation_not_match(-73)
.if_generation_match(-62)
.destination_predefined_acl("sadipscing")
.destination_kms_key_name("At")
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectCopyCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Object)>
Performs the operation you have built so far.
#### pub fn request(self, new_value: Object) -> ObjectCopyCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn source_bucket(self, new_value: &str) -> ObjectCopyCall<'a, S>
Name of the bucket in which to find the source object.
Sets the *source bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn source_object(self, new_value: &str) -> ObjectCopyCall<'a, S>
Name of the source object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *source object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn destination_bucket(self, new_value: &str) -> ObjectCopyCall<'a, S>
Name of the bucket in which to store the new object. Overrides the provided object metadata’s bucket value, if any. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *destination bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn destination_object(self, new_value: &str) -> ObjectCopyCall<'a, S>
Name of the new object. Required when the object metadata is not otherwise provided. Overrides the object metadata’s name value, if any.
Sets the *destination object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectCopyCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn source_generation(self, new_value: i64) -> ObjectCopyCall<'a, S>
If present, selects a specific revision of the source object (as opposed to the latest version, the default).
Sets the *source generation* query property to the given value.
#### pub fn projection(self, new_value: &str) -> ObjectCopyCall<'a, S>
Set of properties to return. Defaults to noAcl, unless the object resource specifies the acl property, when it defaults to full.
Sets the *projection* query property to the given value.
#### pub fn if_source_metageneration_not_match(self, new_value: i64) -> ObjectCopyCall<'a, S>
Makes the operation conditional on whether the source object’s current metageneration does not match the given value.
Sets the *if source metageneration not match* query property to the given value.
#### pub fn if_source_metageneration_match(self, new_value: i64) -> ObjectCopyCall<'a, S>
Makes the operation conditional on whether the source object’s current metageneration matches the given value.
Sets the *if source metageneration match* query property to the given value.
#### pub fn if_source_generation_not_match(self, new_value: i64) -> ObjectCopyCall<'a, S>
Makes the operation conditional on whether the source object’s current generation does not match the given value.
Sets the *if source generation not match* query property to the given value.
#### pub fn if_source_generation_match(self, new_value: i64) -> ObjectCopyCall<'a, S>
Makes the operation conditional on whether the source object’s current generation matches the given value.
Sets the *if source generation match* query property to the given value.
#### pub fn if_metageneration_not_match(self, new_value: i64) -> ObjectCopyCall<'a, S>
Makes the operation conditional on whether the destination object’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> ObjectCopyCall<'a, S>
Makes the operation conditional on whether the destination object’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn if_generation_not_match(self, new_value: i64) -> ObjectCopyCall<'a, S>
Makes the operation conditional on whether the destination object’s current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.
Sets the *if generation not match* query property to the given value.
#### pub fn if_generation_match(self, new_value: i64) -> ObjectCopyCall<'a, S>
Makes the operation conditional on whether the destination object’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.
Sets the *if generation match* query property to the given value.
#### pub fn destination_predefined_acl(self, new_value: &str) -> ObjectCopyCall<'a, S>
Apply a predefined set of access controls to the destination object.
Sets the *destination predefined acl* query property to the given value.
#### pub fn destination_kms_key_name(self, new_value: &str) -> ObjectCopyCall<'a, S>
Resource name of the Cloud KMS key, of the form projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the object. Overrides the object metadata’s kms_key_name value, if any.
Sets the *destination kms key name* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> ObjectCopyCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectCopyCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectCopyCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, though a read-write scope would work as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectCopyCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectCopyCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectCopyCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectCopyCall<'a, S>
### impl<'a, S> Send for ObjectCopyCall<'a, S> where
S: Sync,
### impl<'a, S> !Sync for ObjectCopyCall<'a, S>
### impl<'a, S> Unpin for ObjectCopyCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectCopyCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ObjectDeleteCall
===
```
pub struct ObjectDeleteCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Deletes an object and its metadata. Deletions are permanent if versioning is not enabled for the bucket, or if the generation parameter is used.
A builder for the *delete* method supported by an *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative!
// This sketch assumes `hub` is an already-constructed, authenticated storage1::Storage client.
let result = hub.objects().delete("bucket", "object")
.user_project("sit")
.if_metageneration_not_match(-83)
.if_metageneration_match(-22)
.if_generation_not_match(-66)
.if_generation_match(-4)
.generation(-6)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectDeleteCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<Response<Body>>
Performs the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> ObjectDeleteCall<'a, S>
Name of the bucket in which the object resides.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectDeleteCall<'a, S>
Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectDeleteCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn if_metageneration_not_match(self, new_value: i64) -> ObjectDeleteCall<'a, S>
Makes the operation conditional on whether the object’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> ObjectDeleteCall<'a, S>
Makes the operation conditional on whether the object’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn if_generation_not_match(self, new_value: i64) -> ObjectDeleteCall<'a, S>
Makes the operation conditional on whether the object’s current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.
Sets the *if generation not match* query property to the given value.
#### pub fn if_generation_match(self, new_value: i64) -> ObjectDeleteCall<'a, S>
Makes the operation conditional on whether the object’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.
Sets the *if generation match* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectDeleteCall<'a, S>
If present, permanently deletes a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
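As a sketch, deleting a single archived revision while leaving the live object untouched (the generation number is illustrative and would normally come from an earlier `get` or list call):

```
// Permanently delete only this specific generation of the object.
let result = hub.objects().delete("bucket", "object")
    .generation(1579287380533984)
    .doit().await;
```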
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> ObjectDeleteCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectDeleteCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectDeleteCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, though a read-write scope would work as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectDeleteCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectDeleteCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectDeleteCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectDeleteCall<'a, S>
### impl<'a, S> Send for ObjectDeleteCall<'a, S> where
S: Sync,
### impl<'a, S> !Sync for ObjectDeleteCall<'a, S>
### impl<'a, S> Unpin for ObjectDeleteCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectDeleteCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ObjectGetCall
===
```
pub struct ObjectGetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Retrieves an object or its metadata.
This method supports **media download**. To enable it, adjust the builder like this:
`.param("alt", "media")`.
Please note that due to missing multi-part support on the server side, you will only receive the media,
but not the `Object` structure that you would usually get. The latter will be a default value.
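A sketch of such a media download; it assumes `hub` is a configured client and that the payload is small enough to buffer in memory with `hyper::body::to_bytes`:

```
// Ask for the object's content instead of its metadata.
let (response, _metadata) = hub.objects().get("bucket", "object")
    .param("alt", "media")     // switch the call into media-download mode
    .doit().await?;
// `_metadata` is only a default Object here; the real payload is the response body.
let bytes = hyper::body::to_bytes(response.into_body()).await?;
println!("downloaded {} bytes", bytes.len());
```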
A builder for the *get* method supported by an *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative!
// This sketch assumes `hub` is an already-constructed, authenticated storage1::Storage client.
let result = hub.objects().get("bucket", "object")
.user_project("no")
.projection("nonumy")
.if_metageneration_not_match(-43)
.if_metageneration_match(-13)
.if_generation_not_match(-101)
.if_generation_match(-58)
.generation(-91)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectGetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Object)>
Performs the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> ObjectGetCall<'a, S>
Name of the bucket in which the object resides.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectGetCall<'a, S>
Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectGetCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn projection(self, new_value: &str) -> ObjectGetCall<'a, S>
Set of properties to return. Defaults to noAcl.
Sets the *projection* query property to the given value.
#### pub fn if_metageneration_not_match(self, new_value: i64) -> ObjectGetCall<'a, S>
Makes the operation conditional on whether the object’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> ObjectGetCall<'a, S>
Makes the operation conditional on whether the object’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn if_generation_not_match(self, new_value: i64) -> ObjectGetCall<'a, S>
Makes the operation conditional on whether the object’s current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.
Sets the *if generation not match* query property to the given value.
#### pub fn if_generation_match(self, new_value: i64) -> ObjectGetCall<'a, S>
Makes the operation conditional on whether the object’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.
Sets the *if generation match* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectGetCall<'a, S>
If present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> ObjectGetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectGetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
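As a minimal sketch of the `param()` method described above (values and bucket/object names are illustrative only, and `hub` is assumed to be a configured hub as in the crate-level setup example):

```
// Pass an extra query parameter that has no dedicated setter, here the
// standard `fields` selector for a partial response.
let result = hub.objects().get("my-bucket", "my-object")
    .param("fields", "name,size,updated") // partial-response selector
    .doit().await;
```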
#### pub fn add_scope<St>(self, scope: St) -> ObjectGetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
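A hedged sketch of requesting a narrower scope than the default `Scope::CloudPlatform`; the URL shown is the standard Cloud Storage read-only scope, and your credentials must actually cover it:

```
// Request only read access for this call.
let result = hub.objects().get("my-bucket", "my-object")
    .add_scope("https://www.googleapis.com/auth/devstorage.read_only")
    .doit().await;
```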
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectGetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectGetCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectGetCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectGetCall<'a, S### impl<'a, S> Send for ObjectGetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectGetCall<'a, S### impl<'a, S> Unpin for ObjectGetCall<'a, S### impl<'a, S> !UnwindSafe for ObjectGetCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::ObjectGetIamPolicyCall
===
```
pub struct ObjectGetIamPolicyCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Returns an IAM policy for the specified object.
A builder for the *getIamPolicy* method supported by a *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.objects().get_iam_policy("bucket", "object")
.user_project("dolore")
.generation(-25)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectGetIamPolicyCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Policy)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> ObjectGetIamPolicyCall<'a, SName of the bucket in which the object resides.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectGetIamPolicyCall<'a, SName of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectGetIamPolicyCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectGetIamPolicyCall<'a, SIf present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectGetIamPolicyCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectGetIamPolicyCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectGetIamPolicyCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectGetIamPolicyCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectGetIamPolicyCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectGetIamPolicyCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectGetIamPolicyCall<'a, S### impl<'a, S> Send for ObjectGetIamPolicyCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectGetIamPolicyCall<'a, S### impl<'a, S> Unpin for ObjectGetIamPolicyCall<'a, S### impl<'a, S> !UnwindSafe for ObjectGetIamPolicyCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::ObjectInsertCall
===
```
pub struct ObjectInsertCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Stores a new object and metadata.
A builder for the *insert* method supported by a *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Object;
use std::fs;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Object::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `upload_resumable(...)`.
// Values shown here are possibly random and not representative !
let result = hub.objects().insert(req, "bucket")
.user_project("dolore")
.projection("amet")
.predefined_acl("ut")
.name("At")
.kms_key_name("sit")
.if_metageneration_not_match(-76)
.if_metageneration_match(-20)
.if_generation_not_match(-45)
.if_generation_match(-87)
.content_encoding("rebum.")
.upload_resumable(fs::File::open("file.ext").unwrap(), "application/octet-stream".parse().unwrap()).await;
```
Implementations
---
### impl<'a, S> ObjectInsertCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn upload_resumable<RS>(
self,
resumeable_stream: RS,
mime_type: Mime
) -> Result<(Response<Body>, Object)>where
RS: ReadSeek,
Upload media in a resumable fashion.
Even if the upload fails or is interrupted, it can be resumed for a certain amount of time as the server maintains state temporarily.
The delegate will be asked for an `upload_url()`, and if not provided, will be asked to store an upload URL that was provided by the server, using `store_upload_url(...)`. The upload will be done in chunks, the delegate may specify the `chunk_size()` and may cancel the operation before each chunk is uploaded, using
`cancel_chunk_upload(...)`.
* *multipart*: yes
* *max size*: 0kb
* *valid mime types*: ‘*/*’
#### pub async fn upload<RS>(
self,
stream: RS,
mime_type: Mime
) -> Result<(Response<Body>, Object)>where
RS: ReadSeek,
Upload media all at once.
If the upload fails for whichever reason, all progress is lost.
* *multipart*: yes
* *max size*: 0kb
* *valid mime types*: ‘*/*’
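As a sketch only (not a library recommendation): choose the one-shot `upload` for small payloads and `upload_resumable` for anything large enough that restarting from scratch after a failure would hurt. The 8 MiB cut-off and file path are arbitrary illustrations, and `hub` is assumed to be a configured hub.

```
use storage1::api::Object;
use std::fs;

let req = Object::default(); // fill in object metadata as needed
let file = fs::File::open("file.ext").unwrap();
let size = file.metadata().unwrap().len();

// Only one branch runs, so moving `req` and `file` into either arm is fine.
let result = if size > 8 * 1024 * 1024 {
    hub.objects().insert(req, "my-bucket")
        .upload_resumable(file, "application/octet-stream".parse().unwrap()).await
} else {
    hub.objects().insert(req, "my-bucket")
        .upload(file, "application/octet-stream".parse().unwrap()).await
};
```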
#### pub fn request(self, new_value: Object) -> ObjectInsertCall<'a, SSets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> ObjectInsertCall<'a, SName of the bucket in which to store the new object. Overrides the provided object metadata’s bucket value, if any.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectInsertCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn projection(self, new_value: &str) -> ObjectInsertCall<'a, SSet of properties to return. Defaults to noAcl, unless the object resource specifies the acl property, when it defaults to full.
Sets the *projection* query property to the given value.
#### pub fn predefined_acl(self, new_value: &str) -> ObjectInsertCall<'a, SApply a predefined set of access controls to this object.
Sets the *predefined acl* query property to the given value.
#### pub fn name(self, new_value: &str) -> ObjectInsertCall<'a, SName of the object. Required when the object metadata is not otherwise provided. Overrides the object metadata’s name value, if any. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *name* query property to the given value.
#### pub fn kms_key_name(self, new_value: &str) -> ObjectInsertCall<'a, SResource name of the Cloud KMS key, of the form projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the object. Overrides the object metadata’s kms_key_name value, if any.
Sets the *kms key name* query property to the given value.
#### pub fn if_metageneration_not_match(
self,
new_value: i64
) -> ObjectInsertCall<'a, SMakes the operation conditional on whether the object’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> ObjectInsertCall<'a, SMakes the operation conditional on whether the object’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn if_generation_not_match(self, new_value: i64) -> ObjectInsertCall<'a, SMakes the operation conditional on whether the object’s current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.
Sets the *if generation not match* query property to the given value.
#### pub fn if_generation_match(self, new_value: i64) -> ObjectInsertCall<'a, SMakes the operation conditional on whether the object’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.
Sets the *if generation match* query property to the given value.
#### pub fn content_encoding(self, new_value: &str) -> ObjectInsertCall<'a, SIf set, sets the contentEncoding property of the final object to this value. Setting this parameter is equivalent to setting the contentEncoding metadata property. This can be useful when uploading an object with uploadType=media to indicate the encoding of the content being uploaded.
Sets the *content encoding* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectInsertCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectInsertCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectInsertCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectInsertCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectInsertCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectInsertCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectInsertCall<'a, S### impl<'a, S> Send for ObjectInsertCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectInsertCall<'a, S### impl<'a, S> Unpin for ObjectInsertCall<'a, S### impl<'a, S> !UnwindSafe for ObjectInsertCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::ObjectListCall
===
```
pub struct ObjectListCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Retrieves a list of objects matching the criteria.
A builder for the *list* method supported by a *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.objects().list("bucket")
.versions(true)
.user_project("sadipscing")
.start_offset("tempor")
.projection("sea")
.prefix("et")
.page_token("Lorem")
.max_results(68)
.include_trailing_delimiter(true)
.end_offset("rebum.")
.delimiter("At")
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectListCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Objects)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> ObjectListCall<'a, SName of the bucket in which to look for objects.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn versions(self, new_value: bool) -> ObjectListCall<'a, SIf true, lists all versions of an object as distinct results. The default is false. For more information, see Object Versioning.
Sets the *versions* query property to the given value.
#### pub fn user_project(self, new_value: &str) -> ObjectListCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn start_offset(self, new_value: &str) -> ObjectListCall<'a, SFilter results to objects whose names are lexicographically equal to or after startOffset. If endOffset is also set, the objects listed will have names between startOffset (inclusive) and endOffset (exclusive).
Sets the *start offset* query property to the given value.
#### pub fn projection(self, new_value: &str) -> ObjectListCall<'a, SSet of properties to return. Defaults to noAcl.
Sets the *projection* query property to the given value.
#### pub fn prefix(self, new_value: &str) -> ObjectListCall<'a, SFilter results to objects whose names begin with this prefix.
Sets the *prefix* query property to the given value.
#### pub fn page_token(self, new_value: &str) -> ObjectListCall<'a, SA previously-returned page token representing part of the larger set of results to view.
Sets the *page token* query property to the given value.
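A paging sketch built on this setter: feed each response's next page token back into the next call until none is returned. The `items` and `next_page_token` field names on `Objects` are assumptions about the generated response type; `hub` is a configured hub and the loop runs inside an async context.

```
let mut page_token: Option<String> = None;
loop {
    let mut call = hub.objects().list("my-bucket").max_results(1000);
    if let Some(ref token) = page_token {
        call = call.page_token(token);
    }
    let (_resp, objects) = call.doit().await.expect("list call failed");
    for object in objects.items.unwrap_or_default() {
        println!("{:?}", object.name);
    }
    page_token = objects.next_page_token;
    if page_token.is_none() {
        break; // no further pages
    }
}
```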
#### pub fn max_results(self, new_value: u32) -> ObjectListCall<'a, SMaximum number of items plus prefixes to return in a single page of responses. As duplicate prefixes are omitted, fewer total results may be returned than requested. The service will use this parameter or 1,000 items, whichever is smaller.
Sets the *max results* query property to the given value.
#### pub fn include_trailing_delimiter(
self,
new_value: bool
) -> ObjectListCall<'a, SIf true, objects that end in exactly one instance of delimiter will have their metadata included in items in addition to prefixes.
Sets the *include trailing delimiter* query property to the given value.
#### pub fn end_offset(self, new_value: &str) -> ObjectListCall<'a, SFilter results to objects whose names are lexicographically before endOffset. If startOffset is also set, the objects listed will have names between startOffset (inclusive) and endOffset (exclusive).
Sets the *end offset* query property to the given value.
#### pub fn delimiter(self, new_value: &str) -> ObjectListCall<'a, SReturns results in a directory-like mode. items will contain only objects whose names, aside from the prefix, do not contain delimiter. Objects whose names, aside from the prefix, contain delimiter will have their name, truncated after the delimiter, returned in prefixes. Duplicate prefixes are omitted.
Sets the *delimiter* query property to the given value.
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> ObjectListCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectListCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectListCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectListCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectListCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectListCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectListCall<'a, S### impl<'a, S> Send for ObjectListCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectListCall<'a, S### impl<'a, S> Unpin for ObjectListCall<'a, S### impl<'a, S> !UnwindSafe for ObjectListCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::ObjectPatchCall
===
```
pub struct ObjectPatchCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Patches an object’s metadata.
A builder for the *patch* method supported by a *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Object;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Object::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.objects().patch(req, "bucket", "object")
.user_project("Stet")
.projection("aliquyam")
.predefined_acl("ut")
.if_metageneration_not_match(-3)
.if_metageneration_match(-26)
.if_generation_not_match(-16)
.if_generation_match(-19)
.generation(-96)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectPatchCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Object)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Object) -> ObjectPatchCall<'a, SSets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> ObjectPatchCall<'a, SName of the bucket in which the object resides.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectPatchCall<'a, SName of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectPatchCall<'a, SThe project to be billed for this request, for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn projection(self, new_value: &str) -> ObjectPatchCall<'a, SSet of properties to return. Defaults to full.
Sets the *projection* query property to the given value.
#### pub fn predefined_acl(self, new_value: &str) -> ObjectPatchCall<'a, SApply a predefined set of access controls to this object.
Sets the *predefined acl* query property to the given value.
#### pub fn if_metageneration_not_match(
self,
new_value: i64
) -> ObjectPatchCall<'a, SMakes the operation conditional on whether the object’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> ObjectPatchCall<'a, SMakes the operation conditional on whether the object’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn if_generation_not_match(self, new_value: i64) -> ObjectPatchCall<'a, SMakes the operation conditional on whether the object’s current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.
Sets the *if generation not match* query property to the given value.
#### pub fn if_generation_match(self, new_value: i64) -> ObjectPatchCall<'a, SMakes the operation conditional on whether the object’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.
Sets the *if generation match* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectPatchCall<'a, SIf present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
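Combining the precondition setters above with a minimal request body, here is a hedged sketch of a guarded metadata patch. The `content_type` field name is an assumption about the generated `Object` struct, the metageneration value is hypothetical, and `hub` is a configured hub.

```
use storage1::api::Object;

// Patch only the metadata fields you want to change; unspecified fields
// are left untouched by PATCH semantics.
let mut patch = Object::default();
patch.content_type = Some("text/plain; charset=utf-8".to_string());

let result = hub.objects().patch(patch, "my-bucket", "my-object")
    .if_metageneration_match(3) // hypothetical current metageneration, guards against lost updates
    .doit().await;
```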
#### pub fn delegate(self, new_value: &'a mut dyn Delegate) -> ObjectPatchCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectPatchCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectPatchCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectPatchCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectPatchCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectPatchCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectPatchCall<'a, S### impl<'a, S> Send for ObjectPatchCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectPatchCall<'a, S### impl<'a, S> Unpin for ObjectPatchCall<'a, S### impl<'a, S> !UnwindSafe for ObjectPatchCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::ObjectRewriteCall
===
```
pub struct ObjectRewriteCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Rewrites a source object to a destination object. Optionally overrides metadata.
A builder for the *rewrite* method supported by a *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Object;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Object::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.objects().rewrite(req, "sourceBucket", "sourceObject", "destinationBucket", "destinationObject")
.user_project("dolor")
.source_generation(-82)
.rewrite_token("magna")
.projection("diam")
.max_bytes_rewritten_per_call(-91)
.if_source_metageneration_not_match(-18)
.if_source_metageneration_match(-8)
.if_source_generation_not_match(-23)
.if_source_generation_match(-39)
.if_metageneration_not_match(-43)
.if_metageneration_match(-7)
.if_generation_not_match(-9)
.if_generation_match(-99)
.destination_predefined_acl("diam")
.destination_kms_key_name("At")
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectRewriteCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, RewriteResponse)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Object) -> ObjectRewriteCall<'a, SSets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn source_bucket(self, new_value: &str) -> ObjectRewriteCall<'a, SName of the bucket in which to find the source object.
Sets the *source bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn source_object(self, new_value: &str) -> ObjectRewriteCall<'a, SName of the source object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *source object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn destination_bucket(self, new_value: &str) -> ObjectRewriteCall<'a, SName of the bucket in which to store the new object. Overrides the provided object metadata’s bucket value, if any.
Sets the *destination bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn destination_object(self, new_value: &str) -> ObjectRewriteCall<'a, SName of the new object. Required when the object metadata is not otherwise provided. Overrides the object metadata’s name value, if any. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *destination object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectRewriteCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn source_generation(self, new_value: i64) -> ObjectRewriteCall<'a, SIf present, selects a specific revision of the source object (as opposed to the latest version, the default).
Sets the *source generation* query property to the given value.
#### pub fn rewrite_token(self, new_value: &str) -> ObjectRewriteCall<'a, SInclude this field (from the previous rewrite response) on each rewrite request after the first one, until the rewrite response ‘done’ flag is true. Calls that provide a rewriteToken can omit all other request fields, but if included those fields must match the values provided in the first rewrite request.
Sets the *rewrite token* query property to the given value.
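A sketch of the rewrite loop this setter implies: keep reissuing the call with the previous response's token until completion is reported. The `done` and `rewrite_token` field names on `RewriteResponse` are assumptions about the generated type; bucket/object names are placeholders and `hub` is a configured hub.

```
use storage1::api::Object;

let mut rewrite_token: Option<String> = None;
loop {
    let mut call = hub.objects().rewrite(
        Object::default(),
        "src-bucket", "src-object",
        "dst-bucket", "dst-object",
    );
    if let Some(ref token) = rewrite_token {
        call = call.rewrite_token(token);
    }
    let (_resp, progress) = call.doit().await.expect("rewrite call failed");
    if progress.done.unwrap_or(false) {
        break; // all bytes rewritten
    }
    rewrite_token = progress.rewrite_token;
}
```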
#### pub fn projection(self, new_value: &str) -> ObjectRewriteCall<'a, SSet of properties to return. Defaults to noAcl, unless the object resource specifies the acl property, when it defaults to full.
Sets the *projection* query property to the given value.
#### pub fn max_bytes_rewritten_per_call(
self,
new_value: i64
) -> ObjectRewriteCall<'a, SThe maximum number of bytes that will be rewritten per rewrite request. Most callers shouldn’t need to specify this parameter - it is primarily in place to support testing. If specified the value must be an integral multiple of 1 MiB (1048576). Also, this only applies to requests where the source and destination span locations and/or storage classes. Finally, this value must not change across rewrite calls else you’ll get an error that the rewriteToken is invalid.
Sets the *max bytes rewritten per call* query property to the given value.
#### pub fn if_source_metageneration_not_match(
self,
new_value: i64
) -> ObjectRewriteCall<'a, SMakes the operation conditional on whether the source object’s current metageneration does not match the given value.
Sets the *if source metageneration not match* query property to the given value.
#### pub fn if_source_metageneration_match(
self,
new_value: i64
) -> ObjectRewriteCall<'a, SMakes the operation conditional on whether the source object’s current metageneration matches the given value.
Sets the *if source metageneration match* query property to the given value.
#### pub fn if_source_generation_not_match(
self,
new_value: i64
) -> ObjectRewriteCall<'a, SMakes the operation conditional on whether the source object’s current generation does not match the given value.
Sets the *if source generation not match* query property to the given value.
#### pub fn if_source_generation_match(
self,
new_value: i64
) -> ObjectRewriteCall<'a, SMakes the operation conditional on whether the source object’s current generation matches the given value.
Sets the *if source generation match* query property to the given value.
#### pub fn if_metageneration_not_match(
self,
new_value: i64
) -> ObjectRewriteCall<'a, SMakes the operation conditional on whether the destination object’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> ObjectRewriteCall<'a, SMakes the operation conditional on whether the destination object’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn if_generation_not_match(self, new_value: i64) -> ObjectRewriteCall<'a, SMakes the operation conditional on whether the object’s current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.
Sets the *if generation not match* query property to the given value.
#### pub fn if_generation_match(self, new_value: i64) -> ObjectRewriteCall<'a, SMakes the operation conditional on whether the object’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.
Sets the *if generation match* query property to the given value.
#### pub fn destination_predefined_acl(
self,
new_value: &str
) -> ObjectRewriteCall<'a, SApply a predefined set of access controls to the destination object.
Sets the *destination predefined acl* query property to the given value.
#### pub fn destination_kms_key_name(
self,
new_value: &str
) -> ObjectRewriteCall<'a, SResource name of the Cloud KMS key, of the form projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the object. Overrides the object metadata’s kms_key_name value, if any.
Sets the *destination kms key name* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectRewriteCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectRewriteCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectRewriteCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectRewriteCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectRewriteCall<'a, SRemoves all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectRewriteCall<'a, SAuto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectRewriteCall<'a, S### impl<'a, S> Send for ObjectRewriteCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectRewriteCall<'a, S### impl<'a, S> Unpin for ObjectRewriteCall<'a, S### impl<'a, S> !UnwindSafe for ObjectRewriteCall<'a, SBlanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct google_storage1::api::ObjectSetIamPolicyCall
===
```
pub struct ObjectSetIamPolicyCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Updates an IAM policy for the specified object.
A builder for the *setIamPolicy* method supported by a *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Policy;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Policy::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.objects().set_iam_policy(req, "bucket", "object")
.user_project("ipsum")
.generation(-73)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectSetIamPolicyCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Policy)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Policy) -> ObjectSetIamPolicyCall<'a, SSets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> ObjectSetIamPolicyCall<'a, SName of the bucket in which the object resides.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectSetIamPolicyCall<'a, SName of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectSetIamPolicyCall<'a, SThe project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectSetIamPolicyCall<'a, SIf present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
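A hedged read-modify-write sketch for object IAM: fetch the current policy with `get_iam_policy`, adjust it, then write it back with `set_iam_policy`. Assumes `hub` is a configured hub, placeholder bucket/object names, and an async context; the exact shape of `Policy`'s bindings is not shown.

```
let (_resp, mut policy) = hub.objects()
    .get_iam_policy("my-bucket", "my-object")
    .doit().await.expect("get_iam_policy failed");

// ... mutate `policy` (e.g. its bindings) here as needed ...

let result = hub.objects()
    .set_iam_policy(policy, "my-bucket", "my-object")
    .doit().await;
```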
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectSetIamPolicyCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectSetIamPolicyCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectSetIamPolicyCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, whereas a read-write scope will also work.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectSetIamPolicyCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectSetIamPolicyCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
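The builder methods above compose left to right into a single `doit()` call. Below is a hedged, hypothetical sketch of driving the setIamPolicy builder end to end; it assumes a `hub` configured as in the crate-level example and a `policy` value (for instance returned by a prior getIamPolicy call), and the bucket, object and project names are placeholders.
```
// Hypothetical sketch, not part of the generated example above.
// `hub` is an already-configured Storage hub; `policy` is an api::Policy holding the
// desired bindings (e.g. fetched via get_iam_policy and then modified).
let result = hub.objects().set_iam_policy(policy, "my-bucket", "path/to/object")
    // bill this Requester Pays request to a project of your choice (placeholder id)
    .user_project("my-project")
    // request the full-control scope explicitly instead of the default Scope::CloudPlatform;
    // add_scope accepts any AsRef<str>, so the raw OAuth scope URL works as well
    .add_scope("https://www.googleapis.com/auth/devstorage.full_control")
    .doit().await;
```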
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectSetIamPolicyCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectSetIamPolicyCall<'a, S>
### impl<'a, S> Send for ObjectSetIamPolicyCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectSetIamPolicyCall<'a, S>
### impl<'a, S> Unpin for ObjectSetIamPolicyCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectSetIamPolicyCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ObjectTestIamPermissionCall
===
```
pub struct ObjectTestIamPermissionCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Tests a set of permissions on the given object to see which, if any, are held by the caller.
A builder for the *testIamPermissions* method supported by an *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.objects().test_iam_permissions("bucket", "object", &vec!["no".into()])
.user_project("justo")
.generation(-45)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectTestIamPermissionCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, TestIamPermissionsResponse)>
Perform the operation you have built so far.
#### pub fn bucket(self, new_value: &str) -> ObjectTestIamPermissionCall<'a, S>
Name of the bucket in which the object resides.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectTestIamPermissionCall<'a, S>
Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn add_permissions(
self,
new_value: &str
) -> ObjectTestIamPermissionCall<'a, S>
Permissions to test.
Append the given value to the *permissions* query property.
Each appended value will retain its original ordering and be ‘/’-separated in the URL’s parameters.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
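As a hedged illustration of the appending behaviour described above (assuming a configured `hub` as in the crate-level example; the permission names are standard Cloud Storage IAM permissions used purely as placeholders):
```
// Hypothetical sketch: each add_permissions() call appends one more entry to the
// repeated `permissions` query parameter of the same request.
let result = hub.objects().test_iam_permissions(
        "my-bucket",
        "path/to/object",
        &vec!["storage.objects.get".into()])
    .add_permissions("storage.objects.delete")
    .doit().await;
```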
#### pub fn user_project(self, new_value: &str) -> ObjectTestIamPermissionCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectTestIamPermissionCall<'a, S>
If present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectTestIamPermissionCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectTestIamPermissionCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectTestIamPermissionCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, whereas a read-write scope will also work.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectTestIamPermissionCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectTestIamPermissionCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectTestIamPermissionCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectTestIamPermissionCall<'a, S>
### impl<'a, S> Send for ObjectTestIamPermissionCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectTestIamPermissionCall<'a, S>
### impl<'a, S> Unpin for ObjectTestIamPermissionCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectTestIamPermissionCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ObjectUpdateCall
===
```
pub struct ObjectUpdateCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Updates an object’s metadata.
A builder for the *update* method supported by an *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Object;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Object::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.objects().update(req, "bucket", "object")
.user_project("ipsum")
.projection("Stet")
.predefined_acl("gubergren")
.if_metageneration_not_match(-5)
.if_metageneration_match(-61)
.if_generation_not_match(-98)
.if_generation_match(-13)
.generation(-47)
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectUpdateCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Object)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Object) -> ObjectUpdateCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> ObjectUpdateCall<'a, S>
Name of the bucket in which the object resides.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn object(self, new_value: &str) -> ObjectUpdateCall<'a, S>
Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.
Sets the *object* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ObjectUpdateCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn projection(self, new_value: &str) -> ObjectUpdateCall<'a, S>
Set of properties to return. Defaults to full.
Sets the *projection* query property to the given value.
#### pub fn predefined_acl(self, new_value: &str) -> ObjectUpdateCall<'a, S>
Apply a predefined set of access controls to this object.
Sets the *predefined acl* query property to the given value.
#### pub fn if_metageneration_not_match(
self,
new_value: i64
) -> ObjectUpdateCall<'a, S>
Makes the operation conditional on whether the object’s current metageneration does not match the given value.
Sets the *if metageneration not match* query property to the given value.
#### pub fn if_metageneration_match(self, new_value: i64) -> ObjectUpdateCall<'a, S>
Makes the operation conditional on whether the object’s current metageneration matches the given value.
Sets the *if metageneration match* query property to the given value.
#### pub fn if_generation_not_match(self, new_value: i64) -> ObjectUpdateCall<'a, S>
Makes the operation conditional on whether the object’s current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.
Sets the *if generation not match* query property to the given value.
#### pub fn if_generation_match(self, new_value: i64) -> ObjectUpdateCall<'a, S>
Makes the operation conditional on whether the object’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.
Sets the *if generation match* query property to the given value.
#### pub fn generation(self, new_value: i64) -> ObjectUpdateCall<'a, S>
If present, selects a specific revision of this object (as opposed to the latest version, the default).
Sets the *generation* query property to the given value.
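The generation and metageneration preconditions above are what make a metadata update safe against concurrent writers. Below is a minimal, hypothetical sketch; it assumes a configured `hub` as in the crate-level example and a `current_metageneration` value read from a prior `objects().get(...)` of the same object, and the bucket and object names are placeholders.
```
use storage1::api::Object;
// Hypothetical sketch: guard the update so it only applies if the metadata has not
// changed since we last read it.
let mut req = Object::default();
// ... set the metadata fields you want to change on `req` ...
let result = hub.objects().update(req, "my-bucket", "path/to/object")
    .if_metageneration_match(current_metageneration)
    .doit().await;
```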
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectUpdateCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectUpdateCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectUpdateCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, whereas a read-write scope will also work.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectUpdateCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectUpdateCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectUpdateCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectUpdateCall<'a, S>
### impl<'a, S> Send for ObjectUpdateCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectUpdateCall<'a, S>
### impl<'a, S> Unpin for ObjectUpdateCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectUpdateCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ObjectWatchAllCall
===
```
pub struct ObjectWatchAllCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Watch for changes on all objects in a bucket.
A builder for the *watchAll* method supported by an *object* resource.
It is not used directly, but through an `ObjectMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::Channel;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = Channel::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.objects().watch_all(req, "bucket")
.versions(false)
.user_project("sed")
.start_offset("nonumy")
.projection("sea")
.prefix("ipsum")
.page_token("kasd")
.max_results(80)
.include_trailing_delimiter(false)
.end_offset("erat")
.delimiter("clita")
.doit().await;
```
Implementations
---
### impl<'a, S> ObjectWatchAllCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, Channel)>
Perform the operation you have built so far.
#### pub fn request(self, new_value: Channel) -> ObjectWatchAllCall<'a, S>
Sets the *request* property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn bucket(self, new_value: &str) -> ObjectWatchAllCall<'a, S>
Name of the bucket in which to look for objects.
Sets the *bucket* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn versions(self, new_value: bool) -> ObjectWatchAllCall<'a, S>
If true, lists all versions of an object as distinct results. The default is false. For more information, see Object Versioning.
Sets the *versions* query property to the given value.
#### pub fn user_project(self, new_value: &str) -> ObjectWatchAllCall<'a, S>
The project to be billed for this request. Required for Requester Pays buckets.
Sets the *user project* query property to the given value.
#### pub fn start_offset(self, new_value: &str) -> ObjectWatchAllCall<'a, S>
Filter results to objects whose names are lexicographically equal to or after startOffset. If endOffset is also set, the objects listed will have names between startOffset (inclusive) and endOffset (exclusive).
Sets the *start offset* query property to the given value.
#### pub fn projection(self, new_value: &str) -> ObjectWatchAllCall<'a, S>
Set of properties to return. Defaults to noAcl.
Sets the *projection* query property to the given value.
#### pub fn prefix(self, new_value: &str) -> ObjectWatchAllCall<'a, S>
Filter results to objects whose names begin with this prefix.
Sets the *prefix* query property to the given value.
#### pub fn page_token(self, new_value: &str) -> ObjectWatchAllCall<'a, S>
A previously-returned page token representing part of the larger set of results to view.
Sets the *page token* query property to the given value.
#### pub fn max_results(self, new_value: u32) -> ObjectWatchAllCall<'a, S>
Maximum number of items plus prefixes to return in a single page of responses. As duplicate prefixes are omitted, fewer total results may be returned than requested. The service will use this parameter or 1,000 items, whichever is smaller.
Sets the *max results* query property to the given value.
#### pub fn include_trailing_delimiter(
self,
new_value: bool
) -> ObjectWatchAllCall<'a, S>
If true, objects that end in exactly one instance of delimiter will have their metadata included in items in addition to prefixes.
Sets the *include trailing delimiter* query property to the given value.
#### pub fn end_offset(self, new_value: &str) -> ObjectWatchAllCall<'a, S>
Filter results to objects whose names are lexicographically before endOffset. If startOffset is also set, the objects listed will have names between startOffset (inclusive) and endOffset (exclusive).
Sets the *end offset* query property to the given value.
#### pub fn delimiter(self, new_value: &str) -> ObjectWatchAllCall<'a, S>
Returns results in a directory-like mode. items will contain only objects whose names, aside from the prefix, do not contain delimiter. Objects whose names, aside from the prefix, contain delimiter will have their name, truncated after the delimiter, returned in prefixes. Duplicate prefixes are omitted.
Sets the *delimiter* query property to the given value.
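Combining *prefix* and *delimiter* gives the directory-like behaviour described above. A hedged sketch follows; it assumes a configured `hub` as in the crate-level example and a `Channel` describing your notification endpoint, with the bucket name and prefix as placeholders.
```
use storage1::api::Channel;
// Hypothetical sketch: watch only the "photos/2024/" folder, treating "/" as a
// directory separator so deeper levels surface as prefixes instead of items.
let mut req = Channel::default();
// ... fill in the channel id, type and address of your notification endpoint ...
let result = hub.objects().watch_all(req, "my-bucket")
    .prefix("photos/2024/")
    .delimiter("/")
    .doit().await;
```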
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ObjectWatchAllCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ObjectWatchAllCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ObjectWatchAllCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, whereas a read-write scope will also work.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ObjectWatchAllCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ObjectWatchAllCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ObjectWatchAllCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ObjectWatchAllCall<'a, S>
### impl<'a, S> Send for ObjectWatchAllCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ObjectWatchAllCall<'a, S>
### impl<'a, S> Unpin for ObjectWatchAllCall<'a, S>
### impl<'a, S> !UnwindSafe for ObjectWatchAllCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ProjectHmacKeyCreateCall
===
```
pub struct ProjectHmacKeyCreateCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Creates a new HMAC key for the specified service account.
A builder for the *hmacKeys.create* method supported by a *project* resource.
It is not used directly, but through a `ProjectMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.projects().hmac_keys_create("projectId", "serviceAccountEmail")
.user_project("nonumy")
.doit().await;
```
Implementations
---
### impl<'a, S> ProjectHmacKeyCreateCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, HmacKey)>
Perform the operation you have built so far.
#### pub fn project_id(self, new_value: &str) -> ProjectHmacKeyCreateCall<'a, S>
Project ID owning the service account.
Sets the *project id* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn service_account_email(
self,
new_value: &str
) -> ProjectHmacKeyCreateCall<'a, S>
Email address of the service account.
Sets the *service account email* query property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ProjectHmacKeyCreateCall<'a, S>
The project to be billed for this request.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ProjectHmacKeyCreateCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ProjectHmacKeyCreateCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ProjectHmacKeyCreateCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, whereas a read-write scope will also work.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ProjectHmacKeyCreateCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ProjectHmacKeyCreateCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ProjectHmacKeyCreateCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ProjectHmacKeyCreateCall<'a, S>
### impl<'a, S> Send for ProjectHmacKeyCreateCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ProjectHmacKeyCreateCall<'a, S>
### impl<'a, S> Unpin for ProjectHmacKeyCreateCall<'a, S>
### impl<'a, S> !UnwindSafe for ProjectHmacKeyCreateCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ProjectHmacKeyDeleteCall
===
```
pub struct ProjectHmacKeyDeleteCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Deletes an HMAC key.
A builder for the *hmacKeys.delete* method supported by a *project* resource.
It is not used directly, but through a `ProjectMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.projects().hmac_keys_delete("projectId", "accessId")
.user_project("dolores")
.doit().await;
```
Implementations
---
### impl<'a, S> ProjectHmacKeyDeleteCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<Response<Body>>
Perform the operation you have built so far.
#### pub fn project_id(self, new_value: &str) -> ProjectHmacKeyDeleteCall<'a, S>
Project ID owning the requested key.
Sets the *project id* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn access_id(self, new_value: &str) -> ProjectHmacKeyDeleteCall<'a, S>
Name of the HMAC key to be deleted.
Sets the *access id* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ProjectHmacKeyDeleteCall<'a, S>
The project to be billed for this request.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ProjectHmacKeyDeleteCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ProjectHmacKeyDeleteCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ProjectHmacKeyDeleteCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, whereas a read-write scope will also work.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ProjectHmacKeyDeleteCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ProjectHmacKeyDeleteCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ProjectHmacKeyDeleteCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ProjectHmacKeyDeleteCall<'a, S>
### impl<'a, S> Send for ProjectHmacKeyDeleteCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ProjectHmacKeyDeleteCall<'a, S>
### impl<'a, S> Unpin for ProjectHmacKeyDeleteCall<'a, S>
### impl<'a, S> !UnwindSafe for ProjectHmacKeyDeleteCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ProjectHmacKeyGetCall
===
```
pub struct ProjectHmacKeyGetCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Retrieves an HMAC key’s metadata
A builder for the *hmacKeys.get* method supported by a *project* resource.
It is not used directly, but through a `ProjectMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.projects().hmac_keys_get("projectId", "accessId")
.user_project("eos")
.doit().await;
```
Implementations
---
### impl<'a, S> ProjectHmacKeyGetCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, HmacKeyMetadata)>
Perform the operation you have built so far.
#### pub fn project_id(self, new_value: &str) -> ProjectHmacKeyGetCall<'a, S>
Project ID owning the service account of the requested key.
Sets the *project id* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn access_id(self, new_value: &str) -> ProjectHmacKeyGetCall<'a, S>
Name of the HMAC key.
Sets the *access id* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ProjectHmacKeyGetCall<'a, S>
The project to be billed for this request.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ProjectHmacKeyGetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ProjectHmacKeyGetCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ProjectHmacKeyGetCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, whereas a read-write scope will also work.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ProjectHmacKeyGetCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ProjectHmacKeyGetCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ProjectHmacKeyGetCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ProjectHmacKeyGetCall<'a, S>
### impl<'a, S> Send for ProjectHmacKeyGetCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ProjectHmacKeyGetCall<'a, S>
### impl<'a, S> Unpin for ProjectHmacKeyGetCall<'a, S>
### impl<'a, S> !UnwindSafe for ProjectHmacKeyGetCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ProjectHmacKeyListCall
===
```
pub struct ProjectHmacKeyListCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Retrieves a list of HMAC keys matching the criteria.
A builder for the *hmacKeys.list* method supported by a *project* resource.
It is not used directly, but through a `ProjectMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.projects().hmac_keys_list("projectId")
.user_project("elitr")
.show_deleted_keys(true)
.service_account_email("et")
.page_token("clita")
.max_results(100)
.doit().await;
```
Implementations
---
### impl<'a, S> ProjectHmacKeyListCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, HmacKeysMetadata)>
Perform the operation you have built so far.
#### pub fn project_id(self, new_value: &str) -> ProjectHmacKeyListCall<'a, S>
Name of the project in which to look for HMAC keys.
Sets the *project id* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ProjectHmacKeyListCall<'a, S>
The project to be billed for this request.
Sets the *user project* query property to the given value.
#### pub fn show_deleted_keys(self, new_value: bool) -> ProjectHmacKeyListCall<'a, S>
Whether or not to show keys in the DELETED state.
Sets the *show deleted keys* query property to the given value.
#### pub fn service_account_email(
self,
new_value: &str
) -> ProjectHmacKeyListCall<'a, S>
If present, only keys for the given service account are returned.
Sets the *service account email* query property to the given value.
#### pub fn page_token(self, new_value: &str) -> ProjectHmacKeyListCall<'a, S>
A previously-returned page token representing part of the larger set of results to view.
Sets the *page token* query property to the given value.
#### pub fn max_results(self, new_value: u32) -> ProjectHmacKeyListCall<'a, S>
Maximum number of items to return in a single page of responses. The service uses this parameter or 250 items, whichever is smaller. The max number of items per page will also be limited by the number of distinct service accounts in the response. If the number of service accounts in a single response is too high, the page will be truncated and a next page token will be returned.
Sets the *max results* query property to the given value.
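Because a page can be truncated as described above, callers normally loop on the page token. A hedged sketch follows; it assumes a configured `hub` as in the crate-level example, that the snippet runs inside an async function returning a `Result` (hence the `?`), and that the `HmacKeysMetadata` response exposes the token as a `next_page_token` field, which is an assumption based on the usual shape of the generated list responses.
```
// Hypothetical sketch of fetching two pages of HMAC keys for a placeholder project.
let (_resp, first_page) = hub.projects().hmac_keys_list("my-project")
    .max_results(250)
    .doit().await?;
// `next_page_token` is assumed here; check the generated HmacKeysMetadata struct.
if let Some(token) = first_page.next_page_token {
    let (_resp, _second_page) = hub.projects().hmac_keys_list("my-project")
        .max_results(250)
        .page_token(&token)
        .doit().await?;
}
```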
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ProjectHmacKeyListCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ProjectHmacKeyListCall<'a, S>where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ProjectHmacKeyListCall<'a, S>where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, whereas a read-write scope will also work.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ProjectHmacKeyListCall<'a, S>where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ProjectHmacKeyListCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ProjectHmacKeyListCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ProjectHmacKeyListCall<'a, S>
### impl<'a, S> Send for ProjectHmacKeyListCall<'a, S>where
S: Sync,
### impl<'a, S> !Sync for ProjectHmacKeyListCall<'a, S>
### impl<'a, S> Unpin for ProjectHmacKeyListCall<'a, S>
### impl<'a, S> !UnwindSafe for ProjectHmacKeyListCall<'a, S>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for Twhere
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for Twhere
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ProjectHmacKeyUpdateCall
===
```
pub struct ProjectHmacKeyUpdateCall<'a, S>where
S: 'a,{ /* private fields */ }
```
Updates the state of an HMAC key. See the HMAC Key resource descriptor for valid states.
A builder for the *hmacKeys.update* method supported by a *project* resource.
It is not used directly, but through a `ProjectMethods` instance.
Example
---
Instantiate a resource method builder
```
use storage1::api::HmacKeyMetadata;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = HmacKeyMetadata::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.projects().hmac_keys_update(req, "projectId", "accessId")
.user_project("erat")
.doit().await;
```
Implementations
---
### impl<'a, S> ProjectHmacKeyUpdateCall<'a, S>where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, HmacKeyMetadata)>
Perform the operation you have built so far.
#### pub fn request(
self,
new_value: HmacKeyMetadata
) -> ProjectHmacKeyUpdateCall<'a, SSets the *request* property to the given value.
Even though the property as already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn project_id(self, new_value: &str) -> ProjectHmacKeyUpdateCall<'a, SProject ID owning the service account of the updated key.
Sets the *project id* path property to the given value.
Even though the property as already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn access_id(self, new_value: &str) -> ProjectHmacKeyUpdateCall<'a, SName of the HMAC key being updated.
Sets the *access id* path property to the given value.
Even though the property as already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(self, new_value: &str) -> ProjectHmacKeyUpdateCall<'a, S>
The project to be billed for this request.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ProjectHmacKeyUpdateCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ProjectHmacKeyUpdateCall<'a, S> where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ProjectHmacKeyUpdateCall<'a, S> where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ProjectHmacKeyUpdateCall<'a, S> where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ProjectHmacKeyUpdateCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
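For instance, to build the same call as above with a narrower scope than the default `Scope::CloudPlatform`, one could sketch the following (the scope URL shown is the standard Cloud Storage read/write scope and is only illustrative; the method may in fact require a broader scope, and `hub` and `req` are assumed from the earlier example):
```
let result = hub.projects().hmac_keys_update(req, "projectId", "accessId")
    .add_scope("https://www.googleapis.com/auth/devstorage.read_write")
    .doit().await;
```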
Trait Implementations
---
### impl<'a, S> CallBuilder for ProjectHmacKeyUpdateCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ProjectHmacKeyUpdateCall<'a, S>
### impl<'a, S> Send for ProjectHmacKeyUpdateCall<'a, S> where
S: Sync,
### impl<'a, S> !Sync for ProjectHmacKeyUpdateCall<'a, S>
### impl<'a, S> Unpin for ProjectHmacKeyUpdateCall<'a, S>
### impl<'a, S> !UnwindSafe for ProjectHmacKeyUpdateCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct google_storage1::api::ProjectServiceAccountGetCall
===
```
pub struct ProjectServiceAccountGetCall<'a, S>
where
    S: 'a,
{ /* private fields */ }
```
Get the email address of this project’s Google Cloud Storage service account.
A builder for the *serviceAccount.get* method supported by a *project* resource.
It is not used directly, but through a `ProjectMethods` instance.
Example
---
Instantiate a resource method builder
```
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.projects().service_account_get("projectId")
.user_project("nonumy")
.doit().await;
```
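As with the other call builders, `doit()` yields the raw HTTP response paired with the decoded resource. A small sketch of reading the result; the `email_address` field name on `ServiceAccount` is an assumption based on the underlying JSON resource:
```
if let Ok((_response, account)) = result {
    println!("service account e-mail: {:?}", account.email_address); // field name assumed
}
```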
Implementations
---
### impl<'a, S> ProjectServiceAccountGetCall<'a, S> where
S: Service<Uri> + Clone + Send + Sync + 'static,
S::Response: Connection + AsyncRead + AsyncWrite + Send + Unpin + 'static,
S::Future: Send + Unpin + 'static,
S::Error: Into<Box<dyn StdError + Send + Sync>>,
#### pub async fn doit(self) -> Result<(Response<Body>, ServiceAccount)>
Perform the operation you have built so far.
#### pub fn project_id(self, new_value: &str) -> ProjectServiceAccountGetCall<'a, S>
Project ID
Sets the *project id* path property to the given value.
Even though the property has already been set when instantiating this call,
we provide this method for API completeness.
#### pub fn user_project(
self,
new_value: &str
) -> ProjectServiceAccountGetCall<'a, S>
The project to be billed for this request.
Sets the *user project* query property to the given value.
#### pub fn delegate(
self,
new_value: &'a mut dyn Delegate
) -> ProjectServiceAccountGetCall<'a, S>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the *delegate* property to the given value.
#### pub fn param<T>(self, name: T, value: T) -> ProjectServiceAccountGetCall<'a, S> where
T: AsRef<str>,
Set any additional parameter of the query string used in the request.
It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
##### Additional Parameters
* *alt* (query-string) - Data format for the response.
* *fields* (query-string) - Selector specifying which fields to include in a partial response.
* *key* (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
* *oauth_token* (query-string) - OAuth 2.0 token for the current user.
* *prettyPrint* (query-boolean) - Returns response with indentations and line breaks.
* *quotaUser* (query-string) - An opaque string that represents a user for quota purposes. Must not exceed 40 characters.
* *uploadType* (query-string) - Upload protocol for media (e.g. “media”, “multipart”, “resumable”).
* *userIp* (query-string) - Deprecated. Please use quotaUser instead.
#### pub fn add_scope<St>(self, scope: St) -> ProjectServiceAccountGetCall<'a, S> where
St: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead of the default `Scope` variant
`Scope::CloudPlatform`.
The `scope` will be added to a set of scopes. This is important as one can maintain access tokens for more than one scope.
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a *read-only* scope will be sufficient, a read-write scope will do as well.
#### pub fn add_scopes<I, St>(self, scopes: I) -> ProjectServiceAccountGetCall<'a, S> where
I: IntoIterator<Item = St>,
St: AsRef<str>,
Identifies the authorization scope(s) for the method you are building.
See `Self::add_scope()` for details.
#### pub fn clear_scopes(self) -> ProjectServiceAccountGetCall<'a, S>
Removes all scopes, and no default scope will be used either.
In this case, you have to specify your API-key using the `key` parameter (see `Self::param()`
for details).
Trait Implementations
---
### impl<'a, S> CallBuilder for ProjectServiceAccountGetCall<'a, S>
Auto Trait Implementations
---
### impl<'a, S> !RefUnwindSafe for ProjectServiceAccountGetCall<'a, S>
### impl<'a, S> Send for ProjectServiceAccountGetCall<'a, S> where
S: Sync,
### impl<'a, S> !Sync for ProjectServiceAccountGetCall<'a, S>
### impl<'a, S> Unpin for ProjectServiceAccountGetCall<'a, S>
### impl<'a, S> !UnwindSafe for ProjectServiceAccountGetCall<'a, S>
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Type Alias google_storage1::Result
===
```
pub type Result<T> = Result<T, Error>;
```
A universal result type used as return for all calls.
Trait google_storage1::Delegate
===
```
pub trait Delegate: Send {
// Provided methods
fn begin(&mut self, _info: MethodInfo) { ... }
fn http_error(&mut self, _err: &Error) -> Retry { ... }
fn api_key(&mut self) -> Option<String> { ... }
fn token(
&mut self,
e: Box<dyn Error + Send + Sync, Global>
) -> Result<Option<String>, Box<dyn Error + Send + Sync, Global>> { ... }
fn upload_url(&mut self) -> Option<String> { ... }
fn store_upload_url(&mut self, url: Option<&str>) { ... }
fn response_json_decode_error(
&mut self,
json_encoded_value: &str,
json_decode_error: &Error
) { ... }
fn http_failure(&mut self, _: &Response<Body>, _err: Option<Value>) -> Retry { ... }
fn pre_request(&mut self) { ... }
fn chunk_size(&mut self) -> u64 { ... }
fn cancel_chunk_upload(&mut self, chunk: &ContentRange) -> bool { ... }
fn finished(&mut self, is_success: bool) { ... }
}
```
A trait specifying functionality to help control any request performed by the API.
The trait has a conservative default implementation.
It contains methods to deal with all common issues, as well as with the ones related to uploading media.
Provided Methods
---
#### fn begin(&mut self, _info: MethodInfo)
Called at the beginning of any API request. The delegate should store the method information if it is interested in knowing more context when further calls to it are made.
The matching `finished()` call will always be made, no matter whether or not the API request was successful. That way, the delegate may easily maintain a clean state between various API calls.
#### fn http_error(&mut self, _err: &Error) -> Retry
Called whenever there is an HttpError, usually if there are network problems.
If you choose to retry after a duration, the duration should be chosen using the exponential backoff algorithm.
Return retry information.
#### fn api_key(&mut self) -> Option<String>
Called whenever there is the need for your application's API key after the official authenticator implementation didn't provide one, for some reason.
If this method returns None as well, the underlying operation will fail.
#### fn token(
&mut self,
e: Box<dyn Error + Send + Sync, Global>
) -> Result<Option<String>, Box<dyn Error + Send + Sync, Global>>
Called whenever the Authenticator didn't yield a token. The delegate may attempt to provide one, or just take it as general information about the impending failure.
The given Error provides information about why the token couldn't be acquired in the first place.
#### fn upload_url(&mut self) -> Option<String>
Called during resumable uploads to provide a URL for the impending upload.
It was saved after a previous call to `store_upload_url(...)`, and if not None,
will be used instead of asking the server for a new upload URL.
This is useful in case a previous resumable upload was aborted/canceled, but should now be resumed.
The returned URL will be used exactly once - if it fails again and the delegate allows to retry, we will ask the server for a new upload URL.
#### fn store_upload_url(&mut self, url: Option<&str>)
Called after we have retrieved a new upload URL for a resumable upload to store it in case we fail or cancel. That way, we can attempt to resume the upload later,
see `upload_url()`.
It will also be called with None after a successful upload, which allows the delegate to forget the URL. That way, we will not attempt to resume an upload that has already finished.
#### fn response_json_decode_error(
&mut self,
json_encoded_value: &str,
json_decode_error: &Error
)
Called whenever a server response could not be decoded from json.
It’s for informational purposes only, the caller will return with an error accordingly.
##### Arguments
* `json_encoded_value` - The json-encoded value which failed to decode.
* `json_decode_error` - The decoder error
#### fn http_failure(&mut self, _: &Response<Body>, _err: Option<Value>) -> Retry
Called whenever the http request returns with a non-success status code.
This can involve authentication issues, or anything else that very much depends on the used API method.
The delegate should check the status, header and decoded json error to decide whether to retry or not. In the latter case, the underlying call will fail.
If you choose to retry after a duration, the duration should be chosen using the exponential backoff algorithm.
#### fn pre_request(&mut self)
Called prior to sending the main request of the given method. It can be used to time the call or to print progress information.
It’s also useful as you can be sure that a request will definitely be made.
#### fn chunk_size(&mut self) -> u64
Return the size of each chunk of a resumable upload.
Must be a power of two, with 1<<18 being the smallest allowed chunk size.
Will be called once before starting any resumable upload.
#### fn cancel_chunk_upload(&mut self, chunk: &ContentRange) -> bool
Called before the given chunk is uploaded to the server.
If true is returned, the upload will be interrupted.
However, it may be resumable if you stored the upload URL in a previous call to `store_upload_url()`
#### fn finished(&mut self, is_success: bool)
Called before the API request method returns, in every case. It can be used to clean up internal state between calls to the API.
This call always has a matching call to `begin(...)`.
##### Arguments
* `is_success` - a true value indicates the operation was successful. If false, you should discard all values stored during `store_upload_url`.
Implementors
---
### impl Delegate for DefaultDelegate
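Because every method has a conservative default, a custom delegate only needs to override the hooks it cares about. A minimal sketch relying solely on the provided methods listed above (the log messages and chunk size are illustrative):
```
use google_storage1::Delegate;

/// Logs the request lifecycle and asks for 1 MiB upload chunks.
struct LoggingDelegate;

impl Delegate for LoggingDelegate {
    fn pre_request(&mut self) {
        eprintln!("about to send the main request");
    }

    fn chunk_size(&mut self) -> u64 {
        1 << 20 // a power of two, above the 1 << 18 minimum stated above
    }

    fn finished(&mut self, is_success: bool) {
        eprintln!("request finished, success = {is_success}");
    }
}
```
It can then be attached to any call builder through its `delegate()` setter, e.g. `let mut d = LoggingDelegate;` followed by `.delegate(&mut d)` on the builder, as shown in the builder pages earlier in this document.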
Struct google_storage1::FieldMask
===
```
pub struct FieldMask(_);
```
A `FieldMask` as defined in `https://github.com/protocolbuffers/protobuf/blob/ec1a70913e5793a7d0a7b5fbf7e0e4f75409dd41/src/google/protobuf/field_mask.proto#L180`
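Given the `FromStr` (with `Err = Infallible`), `PartialEq`, and `Display` implementations listed below, a mask can be built directly from its comma-separated path form. A small sketch; the exact string rendering produced by `Display` is not specified on this page, so treat the printed form as an assumption:
```
use google_storage1::FieldMask;

let mask: FieldMask = "etag,items.size".parse().unwrap(); // parsing cannot fail (Err = Infallible)
assert_eq!(mask, "etag,items.size".parse::<FieldMask>().unwrap());
println!("{}", mask);
```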
Trait Implementations
---
### impl Clone for FieldMask
#### fn clone(&self) -> FieldMask
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for FieldMask
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Default for FieldMask
#### fn default() -> FieldMask
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for FieldMask
#### fn deserialize<D>(deserializer: D) -> Result<FieldMask, <D as Deserializer<'de>>::Error> where D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.
### impl Display for FieldMask
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl FromStr for FieldMask
#### type Err = Infallible
The associated error which can be returned from parsing.
#### fn from_str(s: &str) -> Result<FieldMask, <FieldMask as FromStr>::Err>
Parses a string `s` to return a value of this type.
### impl PartialEq<FieldMask> for FieldMask
#### fn eq(&self, other: &FieldMask) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for FieldMask
#### fn serialize<S>(&self, s: S) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error> where S: Serializer
Serialize this value into the given Serde serializer.
### impl StructuralEq for FieldMask
### impl StructuralPartialEq for FieldMask
Auto Trait Implementations
---
### impl RefUnwindSafe for FieldMask
### impl Send for FieldMask
### impl Sync for FieldMask
### impl Unpin for FieldMask
### impl UnwindSafe for FieldMask
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<Q, K> Equivalent<K> for Q where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T> ToString for T where T: Display + ?Sized
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Enum google_storage1::Error
===
```
pub enum Error {
HttpError(Error),
UploadSizeLimitExceeded(u64, u64),
BadRequest(Value),
MissingAPIKey,
MissingToken(Box<dyn Error + Send + Sync, Global>),
Cancelled,
FieldClash(&'static str),
JsonDecodeError(String, Error),
Failure(Response<Body>),
Io(Error),
}
```
Variants
---
### HttpError(Error)
The http connection failed
### UploadSizeLimitExceeded(u64, u64)
An attempt was made to upload a resource with size stored in field `.0`
even though the maximum upload size is what is stored in field `.1`.
### BadRequest(Value)
Represents information about a request that was not understood by the server.
Details are included.
### MissingAPIKey
We needed an API key for authentication, but didn’t obtain one.
Neither through the authenticator, nor through the Delegate.
### MissingToken(Box<dyn Error + Send + Sync, Global>)
We required a Token, but didn’t get one from the Authenticator
### Cancelled
The delegate instructed to cancel the operation
### FieldClash(&'static str)
An additional, free form field clashed with one of the built-in optional ones
### JsonDecodeError(String, Error)
Shows that we failed to decode the server response.
This can happen if the protocol changes in conjunction with strict json decoding.
### Failure(Response<Body>)
Indicates an HTTP response with a non-success status code
### Io(Error)
An IO error occurred while reading a stream into memory
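Since every failure mode of the crate surfaces through this enum, callers typically match on it once. A sketch that covers each variant listed above (it assumes the list is exhaustive and that the wrapped error types are printable with `Display`/`Debug`):
```
use google_storage1::Error;

fn describe(err: &Error) -> String {
    match err {
        Error::HttpError(e) => format!("transport failure: {e}"),
        Error::UploadSizeLimitExceeded(size, limit) => {
            format!("upload of {size} bytes exceeds the {limit} byte limit")
        }
        Error::BadRequest(details) => format!("server rejected the request: {details}"),
        Error::MissingAPIKey => "no API key was available".to_string(),
        Error::MissingToken(e) => format!("no token from the authenticator: {e}"),
        Error::Cancelled => "operation cancelled by the delegate".to_string(),
        Error::FieldClash(field) => format!("free-form field `{field}` clashes with a built-in one"),
        Error::JsonDecodeError(body, e) => format!("could not decode `{body}`: {e}"),
        Error::Failure(response) => format!("HTTP failure with status {}", response.status()),
        Error::Io(e) => format!("I/O error while reading the response: {e}"),
    }
}
```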
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Display for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Error for Error
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, request: &mut Request<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`)
Provides type based access to context intended for error reports.
#### fn from(err: Error) -> Error
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToString for T where T: Display + ?Sized
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. |
oews2021 | cran | R | Package ‘oews2021’
October 14, 2022
Type Package
Title May 2021 Occupational Employment and Wage Statistics
Version 1.0.0
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Contains data from the May 2021 Occupational Employment and
Wage Statistics data release from the U.S. Bureau of Labor Statistics. The
dataset covers employment and wages across occupations, industries, states,
and at the national level. Metropolitan data is not included.
License MIT + file LICENSE
Depends R (>= 3.5.0)
Encoding UTF-8
LazyData true
LazyDataCompression bzip2
RoxygenNote 7.2.1
NeedsCompilation no
Repository CRAN
Date/Publication 2022-09-21 16:20:02 UTC
R topics documented:
oews2021
oews2021 Occupational Employment and Wage Statistics, May 2021
Description
U.S. Bureau of Labor Statistics data on wages and employment across occupations at the national,
state, and industry levels. This dataset does not include data at the metropolitan level. NA values are
used when data is unavailable or when the percentage of establishments reporting is less than 0.5%.
Usage
data(oews2021)
Format
A data frame with 211,275 rows and 26 variables
Details
• AREA. U.S. (99), state FIPS code, Metropolitan Statistical Area (MSA) or New England City
and Town Area (NECTA) code, or OEWS-specific nonmetropolitan area code
• AREA_TITLE. Area name
• AREA_TYPE. Area type: 1= U.S.; 2= State; 3= U.S. Territory; 4= Metropolitan Statistical
Area (MSA) or New England City and Town Area (NECTA); 6= Nonmetropolitan Area
• PRIM_STATE. The primary state for the given area. "US" is used for the national estimates.
• NAICS. North American Industry Classification System (NAICS) code for the given industry.
• NAICS_TITLE. North American Industry Classification System (NAICS) title for the given
industry.
• I_GROUP. Industry level. Indicates cross-industry or NAICS sector, 3-digit, 4-digit, 5-digit,
or 6-digit industry. For industries that OEWS no longer publishes at the 4-digit NAICS level,
the “4-digit” designation indicates the most detailed industry breakdown available: either a
standard NAICS 3-digit industry or an OEWS-specific combination of 4-digit industries. In-
dustries that OEWS has aggregated to the 3-digit NAICS level (for example, NAICS 327000)
will appear twice, once with the “3-digit” and once with the “4-digit” designation.
• OWN_CODE. Ownership type: 1= Federal Government; 2= State Government; 3= Local
Government; 123= Federal, State, and Local Government; 235=Private, State, and Local Gov-
ernment; 35 = Private and Local Government; 5= Private; 57=Private, Local Government
Gambling Establishments (Sector 71), and Local Government Casino Hotels (Sector 72); 58=
Private plus State and Local Government Hospitals; 59= Private and Postal Service; 1235=
Federal, State, and Local Government and Private Sector.
• OCC_CODE. The 6-digit Standard Occupational Classification (SOC) code or OEWS-specific
code for the occupation.
• OCC_TITLE. SOC title or OEWS-specific title for the occupation.
• O_GROUP. SOC occupation level. For most occupations, this field indicates the standard SOC
major, minor, broad, and detailed levels, in addition to all-occupations totals. For occupations
that OEWS no longer publishes at the SOC detailed level, the “detailed” designation indicates
the most detailed data available: either a standard SOC broad occupation or an OEWS-specific
combination of detailed occupations. Occupations that OEWS has aggregated to the SOC
broad occupation level will appear in the file twice, once with the “broad” and once with the
“detailed” designation.
• TOT_EMP. Estimated total employment rounded to the nearest 10 (excludes self-employed).
• EMP_PRSE. Percent relative standard error (PRSE) for the employment estimate. PRSE is a
measure of sampling error, expressed as a percentage of the corresponding estimate. Sampling
error occurs when values for a population are estimated from a sample survey of the popula-
tion, rather than calculated from data for all members of the population. Estimates with lower
PRSEs are typically more precise in the presence of sampling error.
• H_MEAN. Mean hourly wage.
• A_MEAN. Mean annual wage.
• MEAN_PRSE. Percent relative standard error (PRSE) for the mean wage estimate. PRSE is
a measure of sampling error, expressed as a percentage of the corresponding estimate. Sam-
pling error occurs when values for a population are estimated from a sample survey of the
population, rather than calculated from data for all members of the population. Estimates with
lower PRSEs are typically more precise in the presence of sampling error.
• H_PCT10. Hourly 10th percentile wage.
• H_PCT25. Hourly 25th percentile wage.
• H_MEDIAN. Hourly median wage (or the 50th percentile).
• H_PCT75. Hourly 75th percentile wage.
• H_PCT90. Hourly 90th percentile wage.
• A_PCT10. Annual 10th percentile wage.
• A_PCT25. Annual 25th percentile wage.
• A_MEDIAN. Annual median wage (or the 50th percentile).
• A_PCT75. Annual 75th percentile wage.
• A_PCT90. Annual 90th percentile wage.
Source
Bureau of Labor Statistics, U.S. Department of Labor, Occupational Employment and Wage Statis-
tics
References
Bureau of Labor Statistics, U.S. Department of Labor, Occupational Employment and Wage Statis-
tics, 2022-09-19. https://www.bls.gov/oes/tables.htm |
sbo | cran | R | Package ‘sbo’
October 14, 2022
Type Package
Title Text Prediction via Stupid Back-Off N-Gram Models
Version 0.5.0
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Utilities for training and evaluating text predictors based on Stupid Back-Off N-
gram models (Brants et al., 2007, <https://www.aclweb.org/anthology/D07-1090/>).
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 7.1.1.9000
Depends R (>= 3.5.0)
LinkingTo Rcpp, testthat
Imports Rcpp, rlang, tidyr, dplyr, utils, stats, graphics
Suggests ggplot2, knitr, rmarkdown, cli, testthat, covr
SystemRequirements C++11
URL https://vgherard.github.io/sbo/, https://github.com/vgherard/sbo
BugReports https://github.com/vgherard/sbo/issues
VignetteBuilder knitr
NeedsCompilation yes
Repository CRAN
Date/Publication 2020-12-05 19:50:02 UTC
R topics documented:
as_sbo_dictionary
babble
eval_sbo_predictor
kgram_freqs
plot.word_coverage
predict.sbo_kgram_freqs
predict.sbo_predictor
preprocess
prune
sbo_dictionary
sbo_predictions
tokenize_sentences
twitter_dict
twitter_freqs
twitter_predtable
twitter_test
twitter_train
word_coverage
as_sbo_dictionary Coerce to dictionary
Description
Coerce objects to sbo_dictionary class.
Usage
as_sbo_dictionary(x, ...)
## S3 method for class 'character'
as_sbo_dictionary(x, .preprocess = identity, EOS = "", ...)
Arguments
x object to be coerced.
... further arguments passed to or from other methods.
.preprocess a function for corpus preprocessing.
EOS a length one character vector listing all (single character) end-of-sentence to-
kens.
Details
This function is an S3 generic for coercing existing objects to sbo_dictionary class objects. Cur-
rently, only a method for character vectors is implemented, and this will be described below.
Character vector input: Calling as_sbo_dictionary(x) simply decorates the character vector x
with the class sbo_dictionary attribute, and with customizable .preprocess and EOS attributes.
Value
A sbo_dictionary object.
Author(s)
<NAME>
Examples
dict <- as_sbo_dictionary(c("a","b","c"), .preprocess = tolower, EOS = ".")
babble Babble!
Description
Generate random text based on Stupid Back-off language model.
Usage
babble(model, input = NA, n_max = 100L, L = attr(model, "L"))
Arguments
model a sbo_predictor object.
input a length one character vector. Starting point for babbling! If NA, as by default, a
random word is sampled from the model’s dictionary.
n_max a length one integer. Maximum number of words to generate.
L a length one integer. Number of next-word suggestions from which to sample
(see details).
Details
This function generates random text from a Stupid Back-off language model. babble randomly
samples one of the top L next word predictions. Text generation stops when an End-Of-Sentence
token is encountered, or when the number of generated words exceeds n_max.
Value
A character vector of length one.
Author(s)
<NAME>
Examples
# Babble!
p <- sbo_predictor(twitter_predtable)
set.seed(840) # Set seed for reproducibility
babble(p)
eval_sbo_predictor Evaluate Stupid Back-off next-word predictions
Description
Evaluate next-word predictions based on Stupid Back-off N-gram model on a test corpus.
Usage
eval_sbo_predictor(model, test, L = attr(model, "L"))
Arguments
model a sbo_predictor object.
test a character vector. Perform a single prediction on each entry of this vector (see
details).
L Maximum number of predictions for each input sentence (maximum allowed is
attr(model, "L"))
Details
This function allows to obtain information on the quality of Stupid Back-off model predictions,
such as next-word prediction accuracy, or the word-rank distribution of correct prediction, by direct
test against a test set corpus. For a reasonable estimate of prediction accuracy, the different entries
of the test vector should be uncorrelated documents (e.g. separate tweets, as in the twitter_test
example dataset).
More in detail, eval_sbo_predictor performs the following operations:
1. Sample a single sentence from each entry of the character vector test.
2. Sample a single $N$-gram from each sentence obtained in the previous step.
3. Predict next words from the $(N-1)$-gram prefix.
4. Return all predictions, together with the true word completions.
Value
A tibble, containing the input $(N-1)$-grams, the true completions, the predicted completions and
a column indicating whether one of the predictions were correct or not.
Author(s)
<NAME>
Examples
# Evaluating next-word predictions from a Stupid Back-off N-gram model
if (suppressMessages(require(dplyr) && require(ggplot2))) {
p <- sbo_predictor(twitter_predtable)
set.seed(840) # Set seed for reproducibility
test <- sample(twitter_test, 500)
eval <- eval_sbo_predictor(p, test)
## Compute three-word accuracies
eval %>% summarise(accuracy = sum(correct)/n()) # Overall accuracy
eval %>% # Accuracy for in-sentence predictions
filter(true != "<EOS>") %>%
summarise(accuracy = sum(correct) / n())
## Make histogram of word-rank distribution for correct predictions
dict <- attr(twitter_predtable, "dict")
eval %>%
filter(correct, true != "<EOS>") %>%
transmute(rank = match(true, table = dict)) %>%
ggplot(aes(x = rank)) + geom_histogram(binwidth = 30)
}
kgram_freqs k-gram frequency tables
Description
Get k-gram frequency tables from a training corpus.
Usage
kgram_freqs(corpus, N, dict, .preprocess = identity, EOS = "")
sbo_kgram_freqs(corpus, N, dict, .preprocess = identity, EOS = "")
kgram_freqs_fast(corpus, N, dict, erase = "", lower_case = FALSE, EOS = "")
sbo_kgram_freqs_fast(corpus, N, dict, erase = "", lower_case = FALSE, EOS = "")
Arguments
corpus a character vector. The training corpus from which to extract k-gram frequen-
cies.
N a length one integer. The maximum order of k-grams for which frequencies are
to be extracted.
dict either a sbo_dictionary object, a character vector, or a formula (see details).
The language model dictionary.
.preprocess a function to apply before k-gram tokenization.
EOS a length one character vector listing all (single character) end-of-sentence to-
kens.
erase a length one character vector. Regular expression matching parts of text to be
erased from input. The default removes anything not alphanumeric, white space,
apostrophes or punctuation characters (i.e. ".?!:;").
lower_case a length one logical vector. If TRUE, puts everything to lower case.
Details
These functions extract all k-gram frequency tables from a text corpus up to a specified k-gram order
N. These are the building blocks to train any N-gram model. The functions sbo_kgram_freqs()
and sbo_kgram_freqs_fast() are aliases for kgram_freqs() and kgram_freqs_fast(), respec-
tively.
The optimized version kgram_freqs_fast(erase = x, lower_case = y) is equivalent to kgram_freqs(.preprocess
= preprocess(erase = x, lower_case = y)), but more efficient (both from the speed and memory
point of view).
Both kgram_freqs() and kgram_freqs_fast() employ a fixed (user specified) dictionary: any
out-of-vocabulary word gets effectively replaced by an "unknown word" token. This is specified
through the argument dict, which accepts three types of arguments: a sbo_dictionary object,
a character vector (containing the words of the dictionary) or a formula. In this last case, valid
formulas can be either max_size ~ V or target ~ f, where V and f represent a dictionary size and
a corpus word coverage fraction (of corpus), respectively. This usage of the dict argument allows
to build the model dictionary ’on the fly’.
The return value is a "sbo_kgram_freqs" object, i.e. a list of N tibbles, storing frequency counts for
each k-gram observed in the training corpus, for k = 1, 2, ..., N. In these tables, words are represented
by integer numbers corresponding to their position in the reference dictionary. The special codes
0, length(dictionary)+1 and length(dictionary)+2 correspond to the "Begin-Of-Sentence",
"End-Of-Sentence" and "Unknown word" tokens, respectively.
Furthermore, the returned objected has the following attributes:
• N: The highest order of N-grams.
• dict: The reference dictionary, sorted by word frequency.
• .preprocess: The function used for text preprocessing.
• EOS: A length one character vector listing all (single character) end-of-sentence tokens em-
ployed in k-gram tokenization.
The .preprocess argument of kgram_freqs allows the user to apply a custom transformation to
the training corpus, before kgram tokenization takes place.
The algorithm for k-gram tokenization considers anything separated by (any number of) white
spaces (i.e. " ") as a single word. Sentences are split according to end-of-sentence (single char-
acter) tokens, as specified by the EOS argument. Additionally, text belonging to different entries of
the preprocessed input vector is understood to belong to different sentences.
Nota Bene: It is useful to keep in mind that the function passed through the .preprocess argument
also captures its enclosing environment, which is by default the environment in which the former
was defined. If, for instance, .preprocess was defined in the global environment, and the latter
binds heavy objects, the resulting sbo_kgram_freqs will contain bindings to the same objects. If
sbo_kgram_freqs is stored out of memory and recalled in another R session, these objects will also
be reloaded in memory. For this reason, for non interactive use, it is advisable to avoid using pre-
processing functions defined in the global environment (for instance, base::identity is preferred
to function(x) x).
Value
A sbo_kgram_freqs object, containing the k-gram frequency tables for k = 1, 2, ..., N.
Author(s)
<NAME>
Examples
# Obtain k-gram frequency table from corpus
## Get k-gram frequencies, for k <= N = 3.
## The dictionary is built on the fly, using the most frequent 1000 words.
freqs <- kgram_freqs(corpus = twitter_train, N = 3, dict = max_size ~ 1000,
.preprocess = preprocess, EOS = ".?!:;")
freqs
## Using a predefined dictionary
freqs <- kgram_freqs_fast(twitter_train, N = 3, dict = twitter_dict,
erase = "[^.?!:;'\\w\\s]", lower_case = TRUE,
EOS = ".?!:;")
freqs
## 2-grams, no preprocessing, use a dictionary covering 50% of corpus
freqs <- kgram_freqs(corpus = twitter_train, N = 2, dict = target ~ 0.5,
EOS = ".?!:;")
freqs
# Obtain k-gram frequency table from corpus
freqs <- kgram_freqs_fast(twitter_train, N = 3, dict = twitter_dict)
## Print result
freqs
plot.word_coverage Plot method for word_coverage objects
Description
Plot cumulative corpus coverage fraction of a dictionary.
Usage
## S3 method for class 'word_coverage'
plot(
x,
include_EOS = FALSE,
show_limit = TRUE,
type = "l",
xlim = c(0, length(x)),
ylim = c(0, 1),
xticks = seq(from = 0, to = length(x), by = length(x)/5),
yticks = seq(from = 0, to = 1, by = 0.25),
xlab = "Rank",
ylab = "Covered fraction",
title = "Cumulative corpus coverage fraction of dictionary",
subtitle = "_default_",
...
)
Arguments
x a word_coverage object.
include_EOS length one logical. Should End-Of-Sentence tokens be considered in the com-
putation of coverage fraction?
show_limit length one logical. If TRUE, plots an horizontal line corresponding to the total
coverage fraction.
type what type of plot should be drawn, as detailed in ?plot.
xlim length two numeric. Extremes of the x-range.
ylim length two numeric. Extremes of the y-range.
xticks numeric vector. position of the x-axis ticks.
yticks numeric vector. position of the y-axis ticks.
xlab length one character. The x-axis label.
ylab length one character. The y-axis label.
title length one character. Plot title.
subtitle length one character. Plot subtitle; if "default", prints dictionary length and total
covered fraction.
... further arguments passed to or from other methods.
Details
This function generates nice plots of cumulative corpus coverage fractions. The x coordinate in the
resulting plot is the word rank in the underlying dictionary; the y coordinate at x is the cumulative
coverage fraction for rank <= x.
Author(s)
<NAME>
Examples
c <- word_coverage(twitter_dict, twitter_test)
plot(c)
predict.sbo_kgram_freqs
Predict method for k-gram frequency tables
Description
Predictive text based on Stupid Back-off N-gram model.
Usage
## S3 method for class 'sbo_kgram_freqs'
predict(object, input, lambda = 0.4, ...)
Arguments
object a sbo_kgram_freqs object.
input a length one character vector, containing the input for next-word prediction.
lambda a numeric vector of length one. The back-off penalization in Stupid Back-off
algorithm.
... further arguments passed to or from other methods.
Value
A tibble containing the next-word probabilities for all words in the dictionary.
Author(s)
<NAME>
Examples
predict(twitter_freqs, "i love")
predict.sbo_predictor Predict method for Stupid Back-off text predictor
Description
Predictive text based on Stupid Back-off N-gram model.
Usage
## S3 method for class 'sbo_predictor'
predict(object, input, ...)
Arguments
object a sbo_predictor object.
input a character vector, containing the input for next-word prediction.
... further arguments passed to or from other methods.
Details
This method returns the top L next-word predictions from a text predictor trained with Stupid Back-
Off.
Trying to predict from a sbo_predtable results in an error. Instead, one should load a sbo_predictor
object and use this one to predict(), as shown in the example below.
Value
A character vector if length(input) == 1, otherwise a character matrix.
Author(s)
<NAME>
Examples
p <- sbo_predictor(twitter_predtable)
x <- predict(p, "i love")
x
x <- predict(p, "you love")
x
#N.B. the top predictions here are x[1], followed by x[2] and x[3].
predict(p, c("i love", "you love")) # Behaviour with length()>1 input.
preprocess Preprocess text corpus
Description
A simple text preprocessing utility.
Usage
preprocess(input, erase = "[^.?!:;'\\w\\s]", lower_case = TRUE)
Arguments
input a character vector.
erase a length one character vector. Regular expression matching parts of text to be
erased from input. The default removes anything not alphanumeric, white space,
apostrophes or punctuation characters (i.e. ".?!:;").
lower_case a length one logical vector. If TRUE, puts everything to lower case.
Value
a character vector containing the processed output.
Author(s)
<NAME>
Examples
preprocess("Hi @ there! I'm using `sbo`.")
prune Prune k-gram objects
Description
Prune M-gram frequency tables or Stupid Back-Off prediction tables for an M-gram model to a
smaller order N.
Usage
prune(object, N, ...)
## S3 method for class 'sbo_kgram_freqs'
prune(object, N, ...)
## S3 method for class 'sbo_predtable'
prune(object, N, ...)
Arguments
object A kgram_freqs or a sbo_predtable class object.
N a length one positive integer. N-gram order of the new object.
... further arguments passed to or from other methods.
Details
This generic function provides a helper to prune M-gram frequency tables or M-gram models,
represented by sbo_kgram_freqs and sbo_predtable objects respectively, to objects of a smaller
N-gram order, N < M. For k-gram frequency objects, frequency tables for k > N are simply dropped.
For sbo_predtable’s, the predictions coming from the nested N-gram model are instead retained.
In both cases, all other attributes besides k-gram order (such as the corpus preprocessing
function, or the lambda penalty in Stupid Back-Off training) are left unchanged.
Value
an object of the same class of the input object.
Author(s)
<NAME>
Examples
# Drop k-gram frequencies for k > 2
freqs <- twitter_freqs
summary(freqs)
freqs <- prune(freqs, N = 2)
summary(freqs)
# Extract a 2-gram model from a larger 3-gram model
pt <- twitter_predtable
summary(pt)
pt <- prune(pt, N = 2)
summary(pt)
sbo_dictionary Dictionaries
Description
Build dictionary from training corpus.
Usage
sbo_dictionary(
corpus,
max_size = Inf,
target = 1,
.preprocess = identity,
EOS = ""
)
dictionary(
corpus,
max_size = Inf,
target = 1,
.preprocess = identity,
EOS = ""
)
Arguments
corpus a character vector. The training corpus from which to extract the dictionary.
max_size a length one numeric. If less than Inf, only the most frequent max_size words
are retained in the dictionary.
target a length one numeric between 0 and 1. If less than one, retains only as many
words as needed to cover a fraction target of the training corpus.
.preprocess a function for corpus preprocessing. Takes a character vector as input and returns
a character vector.
EOS a length one character vector listing all (single character) end-of-sentence to-
kens.
Details
The function dictionary() is an alias for sbo_dictionary().
This function builds a dictionary using the most frequent words in a training corpus. Two pruning
criteria can be applied:
1. Dictionary size, as implemented by the max_size argument.
2. Target coverage fraction, as implemented by the target argument.
If both of these criteria imply non-trivial cuts, the more restrictive criterion applies.
The .preprocess argument allows the user to apply a custom transformation to the training corpus,
before word tokenization. The EOS argument allows to specify a set of characters to be identified as
End-Of-Sentence tokens (and thus not part of words).
The returned object is a sbo_dictionary object, which is a character vector containing words
sorted by decreasing corpus frequency. Furthermore, the object stores as attributes the original
values of .preprocess and EOS (i.e. the function used in corpus preprocessing and the End-Of-
Sentence characters for sentence tokenization).
Value
A sbo_dictionary object.
Author(s)
<NAME>
Examples
# Extract dictionary from `twitter_train` corpus (all words)
dict <- sbo_dictionary(twitter_train)
# Extract dictionary from `twitter_train` corpus (top 1000 words)
dict <- sbo_dictionary(twitter_train, max_size = 1000)
# Extract dictionary from `twitter_train` corpus (coverage target = 50%)
dict <- sbo_dictionary(twitter_train, target = 0.5)
sbo_predictions Stupid Back-off text predictions
Description
Train a text predictor via Stupid Back-off
Usage
sbo_predictor(object, ...)
predictor(object, ...)
## S3 method for class 'character'
sbo_predictor(
object,
N,
dict,
.preprocess = identity,
EOS = "",
lambda = 0.4,
L = 3L,
filtered = "<UNK>",
...
)
## S3 method for class 'sbo_kgram_freqs'
sbo_predictor(object, lambda = 0.4, L = 3L, filtered = "<UNK>", ...)
## S3 method for class 'sbo_predtable'
sbo_predictor(object, ...)
sbo_predtable(object, lambda = 0.4, L = 3L, filtered = "<UNK>", ...)
predtable(object, lambda = 0.4, L = 3L, filtered = "<UNK>", ...)
## S3 method for class 'character'
sbo_predtable(
object,
lambda = 0.4,
L = 3L,
filtered = "<UNK>",
N,
dict,
.preprocess = identity,
EOS = "",
...
)
## S3 method for class 'sbo_kgram_freqs'
sbo_predtable(object, lambda = 0.4, L = 3L, filtered = "<UNK>", ...)
Arguments
object either a character vector or an object inheriting from classes sbo_kgram_freqs
or sbo_predtable. Defines the method to use for training.
... further arguments passed to or from other methods.
N a length one integer. Order ’N’ of the N-gram model.
dict a sbo_dictionary, a character vector or a formula. For more details see kgram_freqs.
.preprocess a function for corpus preprocessing. For more details see kgram_freqs.
EOS a length one character vector. String listing End-Of-Sentence characters. For
more details see kgram_freqs.
lambda a length one numeric. Penalization in the Stupid Back-off algorithm.
L a length one integer. Maximum number of next-word predictions for a given
input (top scoring predictions are retained).
filtered a character vector. Words to exclude from next-word predictions. The strings
’<UNK>’ and ’<EOS>’ are reserved keywords referring to the Unknown-Word
and End-Of-Sentence tokens, respectively.
Details
These functions are generics used to train a text predictor with Stupid Back-Off. The functions
predictor() and predtable() are aliases for sbo_predictor() and sbo_predtable(), respec-
tively.
The sbo_predictor data structure carries all information required for prediction in a compact and
efficient (upon retrieval) way, by directly storing the top L next-word predictions for each k-gram
prefix observed in the training corpus.
The sbo_predictor objects are for interactive use. If the training process is computationally heavy,
one can store a "raw" version of the text predictor in a sbo_predtable class object, which can be
safely saved out of memory (with e.g. save()). The resulting object can be restored in another R
session, and the corresponding sbo_predictor object can be loaded rapidly using again the generic
constructor sbo_predictor() (see example below).
The returned objects are a sbo_predictor and a sbo_predtable objects. The latter contains Stupid
Back-Off prediction tables, storing next-word prediction for each k-gram prefix observed in the text,
whereas the former is an external pointer to an equivalent (but processed) C++ structure.
Both objects have the following attributes:
• N: The order of the underlying N-gram model, "N".
• dict: The model dictionary.
• lambda: The penalization used in the Stupid Back-Off algorithm.
• L: The maximum number of next-word predictions for a given text input.
• .preprocess: The function used for text preprocessing.
• EOS: A length one character vector listing all (single character) end-of-sentence tokens.
Value
A sbo_predictor object for sbo_predictor(), a sbo_predtable object for sbo_predtable().
Author(s)
<NAME>
See Also
predict.sbo_predictor
Examples
# Train a text predictor directly from corpus
p <- sbo_predictor(twitter_train, N = 3, dict = max_size ~ 1000,
.preprocess = preprocess, EOS = ".?!:;")
# Train a text predictor from previously computed 'kgram_freqs' object
p <- sbo_predictor(twitter_freqs)
# Load a text predictor from a Stupid Back-Off prediction table
p <- sbo_predictor(twitter_predtable)
# Predict from Stupid Back-Off text predictor
p <- sbo_predictor(twitter_predtable)
predict(p, "i love")
# Build Stupid Back-Off prediction tables directly from corpus
t <- sbo_predtable(twitter_train, N = 3, dict = max_size ~ 1000,
.preprocess = preprocess, EOS = ".?!:;")
# Build Stupid Back-Off prediction tables from kgram_freqs object
t <- sbo_predtable(twitter_freqs)
## Not run:
# Save and reload a 'sbo_predtable' object with base::save()
save(t)
load("t.rda")
## End(Not run)
tokenize_sentences Sentence tokenizer
Description
Get sentence tokens from text
Usage
tokenize_sentences(input, EOS = ".?!:;")
Arguments
input a character vector.
EOS a length one character vector listing all (single character) end-of-sentence to-
kens.
Value
a character vector, each entry of which corresponds to a single sentence.
Author(s)
<NAME>
Examples
tokenize_sentences("Hi there! I'm using `sbo`.")
twitter_dict Top 1000 dictionary from Twitter training set
Description
Top 1000 dictionary from Twitter training set
Usage
twitter_dict
Format
A character vector. Contains the 1000 most frequent words from the example training set twitter_train,
sorted by word rank.
See Also
twitter_train
Examples
head(twitter_dict, 10)
twitter_freqs k-gram frequencies from Twitter training set
Description
k-gram frequencies from Twitter training set
Usage
twitter_freqs
Format
A sbo_kgram_freqs object. Contains k-gram frequencies from the example training set twitter_train.
See Also
twitter_train
twitter_predtable Next-word prediction tables from 3-gram model trained on Twitter
training set
Description
Next-word prediction tables from 3-gram model trained on Twitter training set
Usage
twitter_predtable
Format
A sbo_predtable object. Contains prediction tables of a 3-gram Stupid Back-off model trained on
the example training set twitter_train.
See Also
twitter_train
twitter_test Twitter test set
Description
Twitter test set
Usage
twitter_test
Format
A collection of 10’000 Twitter posts in English.
Source
https://www.kaggle.com/crmercado/tweets-blogs-news-swiftkey-dataset-4million
See Also
twitter_train
Examples
head(twitter_test)
twitter_train Twitter training set
Description
Twitter training set
Usage
twitter_train
Format
A collection of 50’000 Twitter posts in English.
Source
https://www.kaggle.com/crmercado/tweets-blogs-news-swiftkey-dataset-4million
See Also
twitter_test, twitter_dict, twitter_freqs, twitter_predtable
Examples
head(twitter_train)
word_coverage Word coverage fraction
Description
Compute total and cumulative corpus coverage fraction of a dictionary.
Usage
word_coverage(object, corpus, ...)
## S3 method for class 'sbo_dictionary'
word_coverage(object, corpus, ...)
## S3 method for class 'character'
word_coverage(object, corpus, .preprocess = identity, EOS = "", ...)
## S3 method for class 'sbo_kgram_freqs'
word_coverage(object, corpus, ...)
## S3 method for class 'sbo_predictions'
word_coverage(object, corpus, ...)
Arguments
object either a character vector, or an object inheriting from one of the classes sbo_dictionary,
sbo_kgram_freqs, sbo_predtable or sbo_predictor. The object storing the
dictionary for which corpus coverage is to be computed.
corpus a character vector.
... further arguments passed to or from other methods.
.preprocess preprocessing function for training corpus. See kgram_freqs and sbo_dictionary
for further details.
EOS a length one character vector. String containing End-Of-Sentence characters,
see kgram_freqs and sbo_dictionary for further details.
Details
This function computes the corpus coverage fraction of a dictionary, that is the fraction of words
appearing in corpus which are contained in the original dictionary.
This function is a generic, accepting as object argument any object storing a dictionary, along
with a preprocessing function and a list of End-Of-Sentence characters. This includes all sbo
main classes: sbo_dictionary, sbo_kgram_freqs, sbo_predtable and sbo_predictor. When
object is a character vector, the preprocessing function and the End-Of-Sentence characters must
be specified explicitly.
The coverage fraction is computed cumulatively, and the dependence of coverage with respect to
maximal rank can be explored through plot() (see examples below)
Value
a word_coverage object.
Author(s)
<NAME>
See Also
predict.sbo_predictor
Examples
c <- word_coverage(twitter_dict, twitter_train)
print(c)
summary(c)
# Plot coverage fraction, including the End-Of-Sentence in word counts.
plot(c, include_EOS = TRUE) |
ex_force | hex | Erlang | ExForce
===
Simple wrapper for Salesforce REST API.
Installation
---
The package can be installed by adding `ex_force` to your list of dependencies in `mix.exs`:
```
def deps do
[
{:ex_force, "~> 0.3"}
]
end
```
Check out [Choosing a Tesla Adapter](https://github.com/chulkilee/ex_force/wiki/Choosing-a-Tesla-Adapter).
Usage
---
```
{:ok, %{instance_url: instance_url} = oauth_response} =
ExForce.OAuth.get_token(
"https://login.salesforce.com",
grant_type: "password",
client_id: "client_id",
client_secret: "client_secret",
username: "username",
password: "password" <> "security_token"
)
{:ok, version_maps} = ExForce.versions(instance_url)
latest_version = version_maps |> Enum.map(&Map.fetch!(&1, "version")) |> List.last()
client = ExForce.build_client(oauth_response, api_version: latest_version)
names =
ExForce.query_stream(client, "SELECT Name FROM Account")
|> Stream.map(&Map.fetch!(&1.data, "Name"))
|> Stream.take(50)
|> Enum.to_list()
```
Note that streams emit [`ExForce.SObject`](ExForce.SObject.html) or an error tuple.
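A consumer can pattern match on both shapes. The sketch below is illustrative only and assumes the error tuple has the common `{:error, reason}` shape:
```
ExForce.query_stream(client, "SELECT Name FROM Account")
|> Enum.each(fn
  # Successful rows arrive as ExForce.SObject structs with the fields in `data`.
  %ExForce.SObject{data: data} -> IO.puts(Map.get(data, "Name"))
  # Errors are assumed to arrive as {:error, reason} tuples.
  {:error, reason} -> IO.warn("query failed: " <> inspect(reason))
end)
```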
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[client()](#t:client/0)
[field_name()](#t:field_name/0)
[query_id()](#t:query_id/0)
[sobject()](#t:sobject/0)
[sobject_id()](#t:sobject_id/0)
[sobject_name()](#t:sobject_name/0)
[soql()](#t:soql/0)
[Functions](#functions)
---
[basic_info(client, name)](#basic_info/2)
Retrieves basic metadata for the specified SObject.
[build_client(instance_url)](#build_client/1)
See [`ExForce.Client.build_client/1`](ExForce.Client.html#build_client/1).
[build_client(instance_url, opts)](#build_client/2)
See [`ExForce.Client.build_client/2`](ExForce.Client.html#build_client/2).
[create_sobject(client, name, attrs)](#create_sobject/3)
Creates a SObject.
[delete_sobject(client, id, name)](#delete_sobject/3)
Deletes a SObject.
[describe_global(client)](#describe_global/1)
Lists the available objects.
[describe_sobject(client, name)](#describe_sobject/2)
Retrieves extended metadata for the specified SObject.
[get_sobject(client, id, name, fields)](#get_sobject/4)
Retrieves a SObject by ID.
[get_sobject_by_external_id(client, field_value, field_name, sobject_name)](#get_sobject_by_external_id/4)
Retrieves a SObject based on the value of a specified external ID field.
[get_sobject_by_relationship(client, id, sobject_name, field_name, fields)](#get_sobject_by_relationship/5)
Retrieves a SObject by relationship field.
[query(client, soql)](#query/2)
Executes the SOQL query and returns its result.
[query_all(client, soql)](#query_all/2)
Executes the SOQL query and returns its result, including deleted or archived objects.
[query_all_stream(client, soql)](#query_all_stream/2)
[query_retrieve(client, query_id_or_url)](#query_retrieve/2)
Retrieves additional query results for the specified query ID.
[query_stream(client, soql)](#query_stream/2)
[resources(client, version)](#resources/2)
Lists available resources for the specific API version.
[stream_query_result(client, qr)](#stream_query_result/2)
Returns `Enumerable.t` from the `QueryResult`.
[update_sobject(client, id, name, attrs)](#update_sobject/4)
Updates a SObject.
[update_sobjects(client, records, all_or_none \\ false)](#update_sobjects/3)
Updates multiple SObjects using the Composite API.
[versions(instance_url)](#versions/1)
Lists available REST API versions at an instance.
[Link to this section](#types)
Types
===
[Link to this section](#functions)
Functions
===
ExForce.Client behaviour
===
HTTP Client for Salesforce REST API
Adapter
---
Defaults to [`ExForce.Client.Tesla`](ExForce.Client.Tesla.html). To use your own adapter, set it via Mix configuration.
```
config :ex_force, client: ClientMock
```
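For reference, a replacement adapter is simply a module implementing the callbacks listed under Callbacks below. The outline is a hedged sketch, not taken from the ExForce docs; the return values are placeholders a test would define:
```
defmodule MyApp.ClientMock do
  @behaviour ExForce.Client

  @impl true
  def build_client(context), do: build_client(context, [])

  @impl true
  def build_client(_context, _opts), do: :mock_client

  @impl true
  def build_oauth_client(instance_url), do: build_oauth_client(instance_url, [])

  @impl true
  def build_oauth_client(_instance_url, _opts), do: :mock_oauth_client

  @impl true
  def request(_client, request) do
    # Return whatever canned response shape the calling test expects,
    # instead of hitting Salesforce.
    {:ok, request}
  end
end
```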
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[context()](#t:context/0)
[instance_url()](#t:instance_url/0)
[opts()](#t:opts/0)
[t()](#t:t/0)
[Callbacks](#callbacks)
---
[build_client(context)](#c:build_client/1)
[build_client(context, opts)](#c:build_client/2)
[build_oauth_client(instance_url)](#c:build_oauth_client/1)
[build_oauth_client(instance_url, opts)](#c:build_oauth_client/2)
[request(t, t)](#c:request/2)
[Functions](#functions)
---
[build_client(context)](#build_client/1)
[build_client(context, opts)](#build_client/2)
[build_oauth_client(instance_url)](#build_oauth_client/1)
[build_oauth_client(instance_url, opts)](#build_oauth_client/2)
[request(client, request)](#request/2)
[Link to this section](#types)
Types
===
[Link to this section](#callbacks)
Callbacks
===
[Link to this section](#functions)
Functions
===
ExForce.Client.Tesla
===
HTTP Client for Salesforce REST API using [`Tesla`](https://hexdocs.pm/tesla/1.4.3/Tesla.html).
Adapter
---
To use a different [`Tesla`](https://hexdocs.pm/tesla/1.4.3/Tesla.html) adapter, set it via Mix configuration.
```
config :tesla, ExForce.Client.Tesla, adapter: Tesla.Adapter.Hackney
```
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[build_client(context, opts \\ [])](#build_client/2)
Returns a [`Tesla`](https://hexdocs.pm/tesla/1.4.3/Tesla.html) client for [`ExForce`](ExForce.html) functions
[build_oauth_client(instance_url, opts \\ [])](#build_oauth_client/2)
Returns a [`Tesla`](https://hexdocs.pm/tesla/1.4.3/Tesla.html) client for [`ExForce.OAuth`](ExForce.OAuth.html) functions
[request(client, request)](#request/2)
Sends a request to Salesforce
[Link to this section](#functions)
Functions
===
ExForce.OAuth
===
Handles OAuth2
Grant Types
---
* `authorization_code`: [Understanding the Web Server OAuth Authentication Flow](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_web_server_oauth_flow.htm)
* `password`: [Understanding the Username-Password OAuth Authentication Flow](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_username_password_oauth_flow.htm)
* `token`: [Understanding the User-Agent OAuth Authentication Flow](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_user_agent_oauth_flow.htm)
* `refresh_token`: [Understanding the OAuth Refresh Token Process](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_understanding_refresh_token_oauth.htm)
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[code()](#t:code/0)
[password()](#t:password/0)
[redirect_uri()](#t:redirect_uri/0)
[username()](#t:username/0)
[Functions](#functions)
---
[authorize_url(endpoint, enum)](#authorize_url/2)
Returns the authorize url based on the configuration.
[build_client(instance_url)](#build_client/1)
See [`ExForce.Client.build_oauth_client/1`](ExForce.Client.html#build_oauth_client/1).
[build_client(instance_url, opts)](#build_client/2)
See [`ExForce.Client.build_oauth_client/2`](ExForce.Client.html#build_oauth_client/2).
[get_token(url, payload)](#get_token/2)
Fetches an [`ExForce.OAuthResponse`](ExForce.OAuthResponse.html) struct by making a request to the token endpoint.
[Link to this section](#types)
Types
===
[Link to this section](#functions)
Functions
===
ExForce.OAuthResponse
===
Represents the result of a successful OAuth response.
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[access_token()](#t:access_token/0)
[refresh_token()](#t:refresh_token/0)
[t()](#t:t/0)
[Link to this section](#types)
Types
===
ExForce.QueryResult
===
Represents the result of a query.
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Link to this section](#types)
Types
===
ExForce.Request
===
Represents an API request.
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Link to this section](#types)
Types
===
ExForce.Response
===
Represents an API response.
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Link to this section](#types)
Types
===
ExForce.SObject
===
Represents a SObject in API responses.
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[build(raw)](#build/1)
Transforms a [`Map`](https://hexdocs.pm/elixir/Map.html) into [`ExForce.SObject`](ExForce.SObject.html#content) recursively.
[Link to this section](#types)
Types
===
[Link to this section](#functions)
Functions
===
API Reference
===
Modules
---
[ExForce](ExForce.html)
Simple wrapper for Salesforce REST API.
[ExForce.Client](ExForce.Client.html)
HTTP Client for Salesforce REST API
[ExForce.Client.Tesla](ExForce.Client.Tesla.html)
HTTP Client for Salesforce REST API using [`Tesla`](https://hexdocs.pm/tesla/1.4.3/Tesla.html).
[ExForce.OAuth](ExForce.OAuth.html)
Handles OAuth2
[ExForce.OAuthResponse](ExForce.OAuthResponse.html)
Represents the result of a successful OAuth response.
[ExForce.QueryResult](ExForce.QueryResult.html)
Represents the result of a query.
[ExForce.Request](ExForce.Request.html)
Represents an API request.
[ExForce.Response](ExForce.Response.html)
Represents an API response.
[ExForce.SObject](ExForce.SObject.html)
Represents a SObject in API responses. |
kitsune_p2p_bootstrap | rust | Rust | Constant kitsune_p2p_bootstrap::PRUNE_EXPIRED_FREQ
===
```
pub const PRUNE_EXPIRED_FREQ: Duration;
```
how often should we prune the expired entries?
Function kitsune_p2p_bootstrap::run
===
```
pub async fn run(
addr: impl Into<SocketAddr> + 'static,
proxy_list: Vec<String>
) -> Result<(BootstrapDriver, SocketAddr, BootstrapShutdown), String>
```
Run a bootstrap with the default prune frequency `PRUNE_EXPIRED_FREQ`.
Function kitsune_p2p_bootstrap::run_with_prune_freq
===
```
pub async fn run_with_prune_freq(
addr: impl Into<SocketAddr> + 'static,
proxy_list: Vec<String>,
prune_frequency: Duration
) -> Result<(BootstrapDriver, SocketAddr, BootstrapShutdown), String>
```
Run a bootstrap server with a set prune frequency.
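A hedged usage sketch (assumes a tokio runtime; only the signatures and type aliases shown on this page are relied on):
```
use std::net::SocketAddr;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), String> {
    let addr: SocketAddr = ([127, 0, 0, 1], 0).into();
    // Prune expired entries every 60 seconds; empty proxy list.
    let (driver, bound_addr, shutdown) =
        kitsune_p2p_bootstrap::run_with_prune_freq(addr, Vec::new(), Duration::from_secs(60)).await?;
    println!("bootstrap server listening on {bound_addr}");
    // The driver future runs the server; spawn it and keep the shutdown handle.
    let handle = tokio::spawn(driver);
    // ... later, stop the server and wait for the driver to finish ...
    shutdown();
    let _ = handle.await;
    Ok(())
}
```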
Type Alias kitsune_p2p_bootstrap::BootstrapDriver
===
```
pub type BootstrapDriver = BoxFuture<'static, ()>;
```
Aliased Type
---
```
struct BootstrapDriver { /* private fields */ }
```
Trait Implementations
---
### impl<P> Deref for Pin<P> where P: Deref (stable since 1.33.0)
#### type Target = <P as Deref>::Target
The resulting type after dereferencing.
#### fn deref(&self) -> &<P as Deref>::Target
Dereferences the value.
Type Alias kitsune_p2p_bootstrap::BootstrapShutdown
===
```
pub type BootstrapShutdown = Box<dyn FnOnce() + Send + Sync + 'static>;
```
Aliased Type
---
```
struct BootstrapShutdown(/* private fields */);
```
Trait Implementations
---
### impl<T, A> Deref for Box<T, A> where A: Allocator, T: ?Sized (stable since 1.0.0)
#### type Target = T
The resulting type after dereferencing.
#### fn deref(&self) -> &T
Dereferences the value. |
MetaLonDA | cran | R | Package ‘MetaLonDA’
October 12, 2022
Type Package
Title Metagenomics Longitudinal Differential Abundance Method
Version 1.1.8
Date 2019-12-17
Author <NAME>, <NAME>, <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
URL https://github.com/aametwally/MetaLonDA
BugReports https://github.com/aametwally/MetaLonDA/issues
Description Identify time intervals of differentially abundant metagenomic features in longitudinal
studies (Metwally AA, et al., Microbiome, 2018 <doi:10.1186/s40168-018-0402-y>).
License MIT + file LICENSE
Depends R(>= 3.5.0)
Imports gss, plyr, zoo, pracma, ggplot2, parallel, doParallel,
metagenomeSeq, DESeq2, edgeR
Suggests knitr, rmarkdown
biocViews
Repository CRAN
NeedsCompilation no
Encoding UTF-8
LazyData true
RoxygenNote 7.0.2
VignetteBuilder knitr
Date/Publication 2019-12-18 02:10:26 UTC
R topics documented:
areaPermutation
curveFitting
findSigInterval
intervalArea
metalonda
metalondaAll
metalonda_test_data
normalize
permutation
visualizeArea
visualizeARHistogram
visualizeFeature
visualizeFeatureSpline
visualizeTimeIntervals
visualizeVolcanoPlot
areaPermutation Calculate Area Ratio (AR) of each feature’s time interval for all permutations
Description
Fits longitudinal samples from the same group using negative binomial or LOWESS for all permutations
Usage
areaPermutation(perm)
Arguments
perm list of all the permuted models
Value
returns a list of the area ratios for all permutations
References
<NAME> (<EMAIL>)
Examples
data(metalonda_test_data)
n.sample = 5 # sample size;
n.timepoints = 10 # time point;
n.perm = 3
n.group= 2 # number of group;
Group = factor(c(rep(0,n.sample*n.timepoints), rep(1,n.sample*n.timepoints)))
Time = rep(rep(1:n.timepoints, times = n.sample), 2)
ID = factor(rep(1:(2*n.sample), each = n.timepoints))
points = seq(1, 10, length.out = 10)
aggregate.df = data.frame(Count = metalonda_test_data[1,], Time = Time, Group = Group, ID = ID)
perm = permutation(aggregate.df, n.perm = 3, method = "nbinomial", points)
areaPermutation(perm)
curveFitting Fit longitudinal data
Description
Fits longitudinal samples from the same group using negative binomial smoothing splines or LOWESS
Usage
curveFitting(df, method = "nbinomial", points)
Arguments
df dataframe has the Count, Group, ID, Time
method fitting method (nbinomial, lowess)
points points at which the prediction should happen
Value
returns the fitted model
References
<NAME> (<EMAIL>)
Examples
data(metalonda_test_data)
n.sample = 5
n.timepoints = 10
n.group = 2
Group = factor(c(rep(0, n.sample*n.timepoints), rep(1, n.sample*n.timepoints)))
Time = rep(rep(1:n.timepoints, times = n.sample), 2)
ID = factor(rep(1:(2*n.sample), each = n.timepoints))
points = seq(1, 10, length.out = 10)
aggregate.df = data.frame(Count = metalonda_test_data[1,], Time = Time, Group = Group, ID = ID)
cf = curveFitting(df = aggregate.df, method = "nbinomial", points)
findSigInterval Find significant time intervals
Description
Identify significant time intervals
Usage
findSigInterval(adjusted.pvalue, threshold = 0.05, sign)
Arguments
adjusted.pvalue
vector of the adjusted p-value
threshold p-value cut off
sign vector holding the area sign of each time interval
Value
returns a list of the start and end points of all significant time intervals
References
<NAME> (<EMAIL>)
Examples
p = c(0.04, 0.01, 0.02, 0.04, 0.06, 0.2, 0.06, 0.04)
sign = c(1, 1, 1, 1, -1, -1, 1, 1)
findSigInterval(p, threshold = 0.05, sign)
intervalArea Calculate Area Ratio (AR) of each feature’s time interval
Description
Calculate Area Ratio (AR) of each feature’s time interval
Usage
intervalArea(curve.fit.df)
Arguments
curve.fit.df gss data object of the fitted spline
Value
returns the area ratio for all time intervals
References
<NAME> (<EMAIL>)
Examples
data(metalonda_test_data)
n.sample = 5
n.timepoints = 10
n.group= 2
Group = factor(c(rep(0,n.sample*n.timepoints), rep(1,n.sample*n.timepoints)))
Time = rep(rep(1:n.timepoints, times = n.sample), 2)
ID = factor(rep(1:(2*n.sample), each = n.timepoints))
points = seq(1, 10, length.out = 10)
aggregate.df = data.frame(Count = metalonda_test_data[1,], Time = Time, Group = Group, ID = ID)
model = curveFitting(df = aggregate.df, method= "nbinomial", points)
intervalArea(model)
metalonda Metagenomic Longitudinal Differential Abundance Analysis for one
feature
Description
Find significant time intervals of the one feature
Usage
metalonda(
Count,
Time,
Group,
ID,
n.perm = 500,
fit.method = "nbinomial",
points,
text = 0,
parall = FALSE,
pvalue.threshold = 0.05,
adjust.method = "BH",
time.unit = "days",
ylabel = "Normalized Count",
col = c("blue", "firebrick"),
prefix = "Test"
)
Arguments
Count matrix has the number of reads that mapped to each feature in each sample.
Time vector of the time label of each sample.
Group vector of the group label of each sample.
ID vector of the subject ID label of each sample.
n.perm number of permutations.
fit.method fitting method (nbinomial, lowess).
points points at which the prediction should happen.
text Feature’s name.
parall boolean to indicate whether to use multicore.
pvalue.threshold
p-value threshold cutoff for identifying significant time intervals.
adjust.method multiple testing correction method.
time.unit time unit used in the Time vector (hours, days, weeks, months, etc.)
ylabel text to be shown on the y-axis of all generated figures (default: "Normalized
Count")
col two color to be used for the two groups (eg., c("red", "blue")).
prefix prefix to be used to create directory for the analysis results
Value
returns a list of the significant time intervals for the tested feature.
References
<NAME> (<EMAIL>)
Examples
data(metalonda_test_data)
n.sample = 5
n.timepoints = 10
n.group = 2
Group = factor(c(rep(0, n.sample*n.timepoints), rep(1,n.sample*n.timepoints)))
Time = rep(rep(1:n.timepoints, times = n.sample), 2)
ID = factor(rep(1:(2*n.sample), each = n.timepoints))
points = seq(1, 10, length.out = 10)
## Not run:
output.nbinomial = metalonda(Count = metalonda_test_data[1,], Time = Time, Group = Group,
ID = ID, fit.method = "nbinomial", n.perm = 10, points = points,
text = rownames(metalonda_test_data)[1], parall = FALSE, pvalue.threshold = 0.05,
adjust.method = "BH", time.unit = "hours", ylabel = "Normalized Count", col = c("black", "green"))
## End(Not run)
metalondaAll Metagenomic Longitudinal Differential Abundance Analysis for all
Features
Description
Identify significant features and their significant time interval
Usage
metalondaAll(
Count,
Time,
Group,
ID,
n.perm = 500,
fit.method = "nbinomial",
num.intervals = 100,
parall = FALSE,
pvalue.threshold = 0.05,
adjust.method = "BH",
time.unit = "days",
norm.method = "none",
prefix = "Output",
ylabel = "Normalized Count",
col = c("blue", "firebrick")
)
Arguments
Count Count matrix of all features
Time Time label of all samples
Group Group label of all samples
ID individual ID label for samples
n.perm number of permutations
fit.method The fitting method (nbinomial, lowess)
num.intervals The number of time intervals at which metalonda tests differential abundance
parall boolean to indicate whether to use multicore
pvalue.threshold
p-value threshold cutoff
adjust.method Multiple testing correction methods
time.unit time unit used in the Time vector (hours, days, weeks, months, etc.)
norm.method normalization method to be used to normalize count matrix (css, tmm, ra, log10,
median_ratio)
prefix prefix for the output figure
ylabel text to be shown on the y-axis of all generated figures (default: "Normalized
Count")
col two color to be used for the two groups (eg., c("red", "blue")).
Value
Returns a list of the significant features along with their significant time intervals
References
<NAME> (<EMAIL>)
Examples
## Not run:
data(metalonda_test_data)
n.sample = 5
n.timepoints = 10
n.group = 2
Group = factor(c(rep(0, n.sample*n.timepoints), rep(1,n.sample*n.timepoints)))
Time = rep(rep(1:n.timepoints, times = n.sample), 2)
ID = factor(rep(1:(2*n.sample), each = n.timepoints))
points = seq(1, 10, length.out = 10)
output.nbinomial = metalondaAll(Count = metalonda_test_data, Time = Time, Group = Group,
ID = ID, n.perm = 10, fit.method = "nbinomial", num.intervals = 100,
parall = FALSE, pvalue.threshold = 0.05, adjust.method = "BH",
time.unit = "hours", norm.method = "none", prefix = "Test", time.unit = "hours",
ylabel = "Normalized Count", col = c("black", "green"))
## End(Not run)
metalonda_test_data Simulated data of OTU abundance for 2 phenotypes, each with 5 subjects
at 10 time-points
Description
The dataset is used for testing the MetaLonDA
Usage
metalonda_test_data
Format
A data frame with 2 OTUs patterns
normalize Normalize count matrix
Description
Normalize count matrix
Usage
normalize(count, method = "css")
Arguments
count count matrix
method normalization method
References
<NAME> (<EMAIL>)
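A minimal illustration (not from the original manual), assuming normalize() returns the normalized count matrix; besides "css", the method options listed for metalondaAll include "tmm", "ra", "log10" and "median_ratio":
data(metalonda_test_data)
normalized_count = normalize(metalonda_test_data, method = "css")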
permutation Permute group labels
Description
Permutes the group label of the samples in order to construct the AR empirical distribution
Usage
permutation(
perm.dat,
n.perm = 500,
method = "nbinomial",
points,
lev,
parall = FALSE
)
Arguments
perm.dat dataframe has the Count, Group, ID, Time
n.perm number of permutations
method The fitting method (negative binomial, LOWESS)
points The points at which the prediction should happen
lev the two level’s name
parall boolean to indicate whether to use multicore.
Value
returns the fitted model for all the permutations
References
<NAME> (<EMAIL>)
Examples
data(metalonda_test_data)
n.sample = 5
n.timepoints = 10
n.perm = 3
n.group = 2
Group = factor(c(rep(0, n.sample*n.timepoints), rep(1, n.sample*n.timepoints)))
Time = rep(rep(1:n.timepoints, times = n.sample), 2)
ID = factor(rep(1:(2*n.sample), each = n.timepoints))
points = seq(1, 10, length.out = 10)
aggregate.df = data.frame(Count = metalonda_test_data[1,], Time = Time, Group = Group, ID = ID)
prm = permutation(aggregate.df, n.perm = 3, method = "nbinomial", points)
visualizeArea Visualize significant time interval
Description
Visualize significant time interval
Usage
visualizeArea(
aggregate.df,
model.ss,
method,
start,
end,
text,
group.levels,
unit = "days",
ylabel = "Normalized Count",
col = c("blue", "firebrick"),
prefix = "Test"
)
Arguments
aggregate.df Dataframe has the Count, Group, ID, Time
model.ss The fitted model
method Fitting method (negative binomial or LOWESS)
start Vector of the start points of the time intervals
end Vector of the end points of the time intervals
text Feature name
group.levels Level’s name
unit time unit used in the Time vector (hours, days, weeks, months, etc.)
ylabel text to be shown on the y-axis of all generated figures (default: "Normalized
Count")
col two color to be used for the two groups (eg., c("red", "blue")).
prefix prefix to be used to create directory for the analysis results
References
<NAME> (<EMAIL>)
visualizeARHistogram Visualize Area Ratio (AR) empirical distribution
Description
Visualize Area Ratio (AR) empirical distribution for each time interval
Usage
visualizeARHistogram(permuted, text, method, prefix = "Test")
Arguments
permuted Permutation of the permuted data
text Feature name
method fitting method
prefix prefix to be used to create directory for the analysis results
References
<NAME> (<EMAIL>)
visualizeFeature Visualize Longitudinal Feature
Description
Visualize Longitudinal Feature
Usage
visualizeFeature(
df,
text,
group.levels,
unit = "days",
ylabel = "Normalized Count",
col = c("blue", "firebrick"),
prefix = "Test"
)
Arguments
df dataframe has the Count, Group, ID, Time
text feature name
group.levels The two level’s name
unit time interval unit
ylabel text to be shown on the y-axis of all generated figures (default: "Normalized
Count")
col two color to be used for the two groups (eg., c("red", "blue")).
prefix prefix to be used to create directory for the analysis results
References
<NAME> (<EMAIL>)
Examples
data(metalonda_test_data)
pfx = tempfile()
dir.create(file.path(pfx))
n.sample = 5
n.timepoints = 10
n.group = 2
Group = factor(c(rep(0, n.sample*n.timepoints), rep(1, n.sample*n.timepoints)))
Time = rep(rep(1:n.timepoints, times = n.sample), 2)
ID = factor(rep(1:(2*n.sample), each = n.timepoints))
points = seq(1, 10, length.out = 10)
aggregate.df = data.frame(Count = metalonda_test_data[1,], Time = Time, Group = Group, ID = ID)
visualizeFeature(df = aggregate.df, text = rownames(metalonda_test_data)[1],
group.levels = Group, prefix = pfx)
visualizeFeatureSpline
Visualize the feature trajectory with the fitted Splines
Description
Plot the longitudinal features along with the fitted splines
Usage
visualizeFeatureSpline(
df,
model,
method,
text,
group.levels,
unit = "days",
ylabel = "Normalized Count",
col = c("blue", "firebrick"),
prefix = "Test"
)
Arguments
df dataframe has the Count , Group, ID, Time
model the fitted model
method The fitting method (negative binomial, LOWESS)
text feature name
group.levels The two level’s name
unit time unit used in the Time vector (hours, days, weeks, months, etc.)
ylabel text to be shown on the y-axis of all generated figures (default: "Normalized
Count")
col two color to be used for the two groups (eg., c("red", "blue")).
prefix prefix to be used to create directory for the analysis results
References
<NAME> (<EMAIL>)
visualizeTimeIntervals
Visualize all significant time intervals for all tested features
Description
Visualize all significant time intervals for all tested features
Usage
visualizeTimeIntervals(
interval.details,
prefix = "Test",
unit = "days",
col = c("blue", "firebrick")
)
Arguments
interval.details
Dataframe with information about significant intervals (feature name, start, end,
dominant, p-value)
prefix prefix for the output figure
unit time unit used in the Time vector (hours, days, weeks, months, etc.)
col two color to be used for the two groups (eg., c("red", "blue")).
References
<NAME> (<EMAIL>)
visualizeVolcanoPlot Visualize log2 fold-change and significance of each interval as volcano
plot
Description
Visualize log2 fold-change and significance of each interval as volcano plot
Usage
visualizeVolcanoPlot(df, text, prefix = "Test")
Arguments
df Dataframe has a detailed summary about feature’s significant intervals
text Feature name
prefix prefix to be used to create directory for the analysis results
References
<NAME> (<EMAIL>) |
dmai | cran | R | Package ‘dmai’
October 13, 2022
Type Package
Title Divisia Monetary Aggregates Index
Version 0.4.0
Author <NAME> [aut, cre],
<NAME> [aut, ctb]
Maintainer <NAME> <<EMAIL>>
Description Functions to calculate Divisia monetary aggregates index as given in Barnett,
W. A. (1980) (<DOI:10.1016/0304-4076(80)90070-6>).
Depends R (>= 3.1)
Imports dplyr, magrittr, ggplot2, stringr, tibble, tidyr
License GPL-2
URL https://github.com/myaseen208/dmai,
https://myaseen208.github.io/dmai/
Encoding UTF-8
LazyData true
RoxygenNote 6.1.1
Note Department of Mathematics and Statistics, University of
Agriculture Faisalabad, Faisalabad-Pakistan.
Suggests testthat
NeedsCompilation no
Repository CRAN
Date/Publication 2019-05-18 17:20:03 UTC
R topics documented:
dmai
dmaiIntro
dmai Divisia Monetary Aggregates Index
Description
Calculates Divisia monetary aggregates index as given in Barnett, W. A. (1980).
Usage
## Default S3 method:
dmai(.data, method = c("Barnett", "Hancock"),
logbase = NULL)
Arguments
.data data.frame
method Method to calculate Divisia monetary aggregates index, Barnett or Hancock
logbase base of the logarithm to be used in the Barnett Divisia monetary aggregates index method;
default is NULL, which is treated as 10
Value
Divisia Monetary Aggregates Index
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
<NAME>. (1980). Economic Monetary Aggregates: An Application of Aggregation and Index
Number Theory. Journal of Econometrics. 14(1):11-48. (https://www.sciencedirect.com/science/article/pii/03044076809007
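As background (not stated in this manual): the Barnett (1980) index is typically computed through the Törnqvist–Theil discrete approximation of the Divisia index, which under the usual definitions reads
\ln M_t - \ln M_{t-1} = \sum_{i=1}^{n} \bar{s}_{it}\,(\ln x_{it} - \ln x_{i,t-1}), \qquad \bar{s}_{it} = \tfrac{1}{2}(s_{it} + s_{i,t-1}),
where s_{it} = \pi_{it} x_{it} / \sum_j \pi_{jt} x_{jt} is the expenditure share of component i and \pi_{it} = (R_t - r_{it})/(1 + R_t) its user cost, with R_t a benchmark rate and r_{it} the component's own rate (the x and r columns in the example data below play the roles of quantities and rates).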
Examples
Data <-
tibble::tibble(
Date = paste(c("Jun", "Dec"), rep(seq(from = 2000, to = 2017, by = 1), each = 2), sep = "-")
, x1 = runif(n = 36, min = 162324, max = 2880189)
, x2 = runif(n = 36, min = 2116, max = 14542)
, x3 = runif(n = 36, min = 92989, max = 3019556)
, x4 = runif(n = 36, min = 205155, max = 4088784)
, x5 = runif(n = 36, min = 6082, max = 186686)
, x6 = runif(n = 36, min = 11501, max = 50677)
, x7 = runif(n = 36, min = 61888, max = 901419)
, x8 = runif(n = 36, min = 13394, max = 347020)
, x9 = runif(n = 36, min = 25722, max = 701887)
, x10 = runif(n = 36, min = 6414, max = 37859)
, x11 = runif(n = 36, min = 11688, max = 113865)
, x12 = runif(n = 36, min = 2311, max = 23130)
, x13 = runif(n = 36, min = 23955, max = 161318)
, r1 = runif(n = 36, min = 0.00, max = 0.00)
, r2 = runif(n = 36, min = 0.00, max = 0.00)
, r3 = runif(n = 36, min = 0.00, max = 0.00)
, r4 = runif(n = 36, min = 0.93, max = 7.43)
, r5 = runif(n = 36, min = 1.12, max = 7.00)
, r6 = runif(n = 36, min = 0.99, max = 7.93)
, r7 = runif(n = 36, min = 1.51, max = 7.42)
, r8 = runif(n = 36, min = 2.20, max = 9.15)
, r9 = runif(n = 36, min = 2.64, max = 9.37)
, r10 = runif(n = 36, min = 2.80, max = 11.34)
, r11 = runif(n = 36, min = 3.01, max = 12.41)
, r12 = runif(n = 36, min = 2.78, max = 13.68)
, r13 = runif(n = 36, min = 3.23, max = 14.96)
)
Data$Date <- as.Date(paste("01", Data$Date, sep = "-"), format = "%d-%b-%Y")
Data
# Divisia monetary aggregates index using Barnett method
DMAIBarnett <- dmai(.data = Data, method = "Barnett", logbase = NULL)
DMAIBarnett
DMAIBarnett1 <- dmai(.data = Data, method = "Barnett", logbase = 10)
DMAIBarnett1
DMAIBarnett2 <- dmai(.data = Data, method = "Barnett", logbase = 2)
DMAIBarnett2
DMAIBarnett3 <- dmai(.data = Data, method = "Barnett", logbase = exp(1))
DMAIBarnett3
# Divisia monetary aggregates index using Hancock method
DMAIHancock <- dmai(.data = Data, method = "Hancock")
DMAIHancock
library(ggplot2)
ggplot(data = DMAIBarnett, mapping = aes(x = Date, y = DMAI)) +
geom_point() +
geom_line() +
geom_text(aes(label = round(DMAI, 2)), vjust = "inward", hjust = "inward") +
scale_x_date(
date_breaks = "6 months"
, date_labels = "%b-%Y"
, limits = c(min(DMAIBarnett$Date), max = max(DMAIBarnett$Date))) +
theme_bw() +
theme(axis.text.x = element_text(angle = 90))
ggplot(data = DMAIHancock, mapping = aes(x = Date, y = DMAI)) +
geom_point() +
geom_line() +
geom_text(aes(label = round(DMAI, 2)), vjust = "inward", hjust = "inward") +
scale_x_date(
date_breaks = "6 months"
, date_labels = "%b-%Y"
, limits = c(min(DMAIHancock$Date), max = max(DMAIHancock$Date))) +
theme_bw() +
theme(axis.text.x = element_text(angle = 90))
dmaiIntro Divisia Monetary Aggregates Index
Description
The dmai package provides functionalities to calculate Divisia monetary aggregates index as given
in Barnett, <NAME>. (1980).
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
<NAME>. (1980). Economic Monetary Aggregates: An Application of Aggregation and Index
Number Theory. Journal of Econometrics. 14(1):11-48. (https://www.sciencedirect.com/science/article/pii/03044076809007 |
github.com/goccy/go-reflect | go | Go | README
[¶](#section-readme)
---
### go-reflect
![Go](https://github.com/goccy/go-reflect/workflows/Go/badge.svg)
[![GoDoc](https://godoc.org/github.com/goccy/go-reflect?status.svg)](https://pkg.go.dev/github.com/goccy/go-reflect?tab=doc)
[![codecov](https://codecov.io/gh/goccy/go-reflect/branch/master/graph/badge.svg)](https://codecov.io/gh/goccy/go-reflect)
[![Go Report Card](https://goreportcard.com/badge/github.com/goccy/go-reflect)](https://goreportcard.com/report/github.com/goccy/go-reflect)
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [func Copy(dst, src Value) int](#Copy)
* [func DeepEqual(x, y interface{}) bool](#DeepEqual)
* [func Swapper(slice interface{}) func(i, j int)](#Swapper)
* [func ToReflectType(t Type) reflect.Type](#ToReflectType)
* [func ToReflectValue(v Value) reflect.Value](#ToReflectValue)
* [func TypeID(v interface{}) uintptr](#TypeID)
* [type ChanDir](#ChanDir)
* [type Kind](#Kind)
* [type MapIter](#MapIter)
* [type Method](#Method)
* [type SelectCase](#SelectCase)
* [type SelectDir](#SelectDir)
* [type SliceHeader](#SliceHeader)
* [type StringHeader](#StringHeader)
* [type StructField](#StructField)
* [type StructTag](#StructTag)
* [type Type](#Type)
* + [func ArrayOf(count int, elem Type) Type](#ArrayOf)
+ [func ChanOf(dir ChanDir, t Type) Type](#ChanOf)
+ [func FuncOf(in, out []Type, variadic bool) Type](#FuncOf)
+ [func MapOf(key, elem Type) Type](#MapOf)
+ [func PtrTo(t Type) Type](#PtrTo)
+ [func SliceOf(t Type) Type](#SliceOf)
+ [func StructOf(fields []StructField) Type](#StructOf)
+ [func ToType(t reflect.Type) Type](#ToType)
+ [func TypeAndPtrOf(v interface{}) (Type, unsafe.Pointer)](#TypeAndPtrOf)
+ [func TypeOf(v interface{}) Type](#TypeOf)
* [type Value](#Value)
* + [func Append(s Value, x ...Value) Value](#Append)
+ [func AppendSlice(s, t Value) Value](#AppendSlice)
+ [func Indirect(v Value) Value](#Indirect)
+ [func MakeChan(typ Type, buffer int) Value](#MakeChan)
+ [func MakeFunc(typ Type, fn func(args []Value) (results []Value)) Value](#MakeFunc)
+ [func MakeMap(typ Type) Value](#MakeMap)
+ [func MakeMapWithSize(typ Type, n int) Value](#MakeMapWithSize)
+ [func MakeSlice(typ Type, len, cap int) Value](#MakeSlice)
+ [func New(typ Type) Value](#New)
+ [func NewAt(typ Type, p unsafe.Pointer) Value](#NewAt)
+ [func Select(cases []SelectCase) (int, Value, bool)](#Select)
+ [func ToValue(v reflect.Value) Value](#ToValue)
+ [func ValueNoEscapeOf(v interface{}) Value](#ValueNoEscapeOf)
+ [func ValueOf(v interface{}) Value](#ValueOf)
+ [func Zero(typ Type) Value](#Zero)
* + [func (v Value) Addr() Value](#Value.Addr)
+ [func (v Value) Bool() bool](#Value.Bool)
+ [func (v Value) Bytes() []byte](#Value.Bytes)
+ [func (v Value) Call(in []Value) []Value](#Value.Call)
+ [func (v Value) CallSlice(in []Value) []Value](#Value.CallSlice)
+ [func (v Value) CanAddr() bool](#Value.CanAddr)
+ [func (v Value) CanInterface() bool](#Value.CanInterface)
+ [func (v Value) CanSet() bool](#Value.CanSet)
+ [func (v Value) Cap() int](#Value.Cap)
+ [func (v Value) Close()](#Value.Close)
+ [func (v Value) Complex() complex128](#Value.Complex)
+ [func (v Value) Convert(t Type) Value](#Value.Convert)
+ [func (v Value) Elem() Value](#Value.Elem)
+ [func (v Value) Field(i int) Value](#Value.Field)
+ [func (v Value) FieldByIndex(index []int) Value](#Value.FieldByIndex)
+ [func (v Value) FieldByName(name string) Value](#Value.FieldByName)
+ [func (v Value) FieldByNameFunc(match func(string) bool) Value](#Value.FieldByNameFunc)
+ [func (v Value) Float() float64](#Value.Float)
+ [func (v Value) Index(i int) Value](#Value.Index)
+ [func (v Value) Int() int64](#Value.Int)
+ [func (v Value) Interface() interface{}](#Value.Interface)
+ [func (v Value) InterfaceData() [2]uintptr](#Value.InterfaceData)
+ [func (v Value) IsNil() bool](#Value.IsNil)
+ [func (v Value) IsValid() bool](#Value.IsValid)
+ [func (v Value) IsZero() bool](#Value.IsZero)
+ [func (v Value) Kind() Kind](#Value.Kind)
+ [func (v Value) Len() int](#Value.Len)
+ [func (v Value) MapIndex(key Value) Value](#Value.MapIndex)
+ [func (v Value) MapKeys() []Value](#Value.MapKeys)
+ [func (v Value) MapRange() *MapIter](#Value.MapRange)
+ [func (v Value) Method(i int) Value](#Value.Method)
+ [func (v Value) MethodByName(name string) Value](#Value.MethodByName)
+ [func (v Value) NumField() int](#Value.NumField)
+ [func (v Value) NumMethod() int](#Value.NumMethod)
+ [func (v Value) OverflowComplex(x complex128) bool](#Value.OverflowComplex)
+ [func (v Value) OverflowFloat(x float64) bool](#Value.OverflowFloat)
+ [func (v Value) OverflowInt(x int64) bool](#Value.OverflowInt)
+ [func (v Value) OverflowUint(x uint64) bool](#Value.OverflowUint)
+ [func (v Value) Pointer() uintptr](#Value.Pointer)
+ [func (v Value) Recv() (Value, bool)](#Value.Recv)
+ [func (v Value) Send(x Value)](#Value.Send)
+ [func (v Value) Set(x Value)](#Value.Set)
+ [func (v Value) SetBool(x bool)](#Value.SetBool)
+ [func (v Value) SetBytes(x []byte)](#Value.SetBytes)
+ [func (v Value) SetCap(n int)](#Value.SetCap)
+ [func (v Value) SetComplex(x complex128)](#Value.SetComplex)
+ [func (v Value) SetFloat(x float64)](#Value.SetFloat)
+ [func (v Value) SetInt(x int64)](#Value.SetInt)
+ [func (v Value) SetLen(n int)](#Value.SetLen)
+ [func (v Value) SetMapIndex(key, elem Value)](#Value.SetMapIndex)
+ [func (v Value) SetPointer(x unsafe.Pointer)](#Value.SetPointer)
+ [func (v Value) SetString(x string)](#Value.SetString)
+ [func (v Value) SetUint(x uint64)](#Value.SetUint)
+ [func (v Value) Slice(i, j int) Value](#Value.Slice)
+ [func (v Value) Slice3(i, j, k int) Value](#Value.Slice3)
+ [func (v Value) String() string](#Value.String)
+ [func (v Value) TryRecv() (Value, bool)](#Value.TryRecv)
+ [func (v Value) TrySend(x Value) bool](#Value.TrySend)
+ [func (v Value) Type() Type](#Value.Type)
+ [func (v Value) Uint() uint64](#Value.Uint)
+ [func (v Value) UnsafeAddr() uintptr](#Value.UnsafeAddr)
* [type ValueError](#ValueError)
#### Examples [¶](#pkg-examples)
* [Kind](#example-Kind)
* [MakeFunc](#example-MakeFunc)
* [StructOf](#example-StructOf)
* [StructTag](#example-StructTag)
* [TypeOf](#example-TypeOf)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [Copy](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L334) [¶](#Copy)
```
func Copy(dst, src [Value](#Value)) [int](/builtin#int)
```
Copy copies the contents of src into dst until either dst has been filled or src has been exhausted.
It returns the number of elements copied.
Dst and src each must have kind Slice or Array, and dst and src must have the same element type.
As a special case, src can have kind String if the element type of dst is kind Uint8.
####
func [DeepEqual](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L389) [¶](#DeepEqual)
```
func DeepEqual(x, y interface{}) [bool](/builtin#bool)
```
DeepEqual reports whether x and y are “deeply equal,” defined as follows.
Two values of identical type are deeply equal if one of the following cases applies.
Values of distinct types are never deeply equal.
Array values are deeply equal when their corresponding elements are deeply equal.
Struct values are deeply equal if their corresponding fields,
both exported and unexported, are deeply equal.
Func values are deeply equal if both are nil; otherwise they are not deeply equal.
Interface values are deeply equal if they hold deeply equal concrete values.
Map values are deeply equal when all of the following are true:
they are both nil or both non-nil, they have the same length,
and either they are the same map object or their corresponding keys
(matched using Go equality) map to deeply equal values.
Pointer values are deeply equal if they are equal using Go's == operator or if they point to deeply equal values.
Slice values are deeply equal when all of the following are true:
they are both nil or both non-nil, they have the same length,
and either they point to the same initial entry of the same underlying array
(that is, &x[0] == &y[0]) or their corresponding elements (up to length) are deeply equal.
Note that a non-nil empty slice and a nil slice (for example, []byte{} and []byte(nil))
are not deeply equal.
Other values - numbers, bools, strings, and channels - are deeply equal if they are equal using Go's == operator.
In general DeepEqual is a recursive relaxation of Go's == operator.
However, this idea is impossible to implement without some inconsistency.
Specifically, it is possible for a value to be unequal to itself,
either because it is of func type (uncomparable in general)
or because it is a floating-point NaN value (not equal to itself in floating-point comparison),
or because it is an array, struct, or interface containing such a value.
On the other hand, pointer values are always equal to themselves,
even if they point at or contain such problematic values,
because they compare equal using Go's == operator, and that is a sufficient condition to be deeply equal, regardless of content.
DeepEqual has been defined so that the same short-cut applies to slices and maps: if x and y are the same slice or the same map,
they are deeply equal regardless of content.
As DeepEqual traverses the data values it may find a cycle. The second and subsequent times that DeepEqual compares two pointer values that have been compared before, it treats the values as equal rather than examining the values to which they point.
This ensures that DeepEqual terminates.
####
func [Swapper](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L393) [¶](#Swapper)
```
func Swapper(slice interface{}) func(i, j [int](/builtin#int))
```
####
func [ToReflectType](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L308) [¶](#ToReflectType)
```
func ToReflectType(t [Type](#Type)) [reflect](/reflect).[Type](/reflect#Type)
```
ToReflectType convert Type to reflect.Type
####
func [ToReflectValue](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L313) [¶](#ToReflectValue)
```
func ToReflectValue(v [Value](#Value)) [reflect](/reflect).[Value](/reflect#Value)
```
ToReflectValue convert Value to reflect.Value
####
func [TypeID](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L269) [¶](#TypeID)
```
func TypeID(v interface{}) [uintptr](/builtin#uintptr)
```
TypeID returns a unique type identifier for v.
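A hedged sketch of a typical use: keying a per-type cache by the uintptr identifier (the encoder type and cache below are illustrative, not part of this package):
```
package main
import (
	"fmt"
	"sync"
	"github.com/goccy/go-reflect"
)
type encoder func(v interface{}) ([]byte, error)
var (
	mu    sync.RWMutex
	cache = map[uintptr]encoder{} // keyed by reflect.TypeID
)
func cachedEncoder(v interface{}) (encoder, bool) {
	id := reflect.TypeID(v) // stable identifier of v's dynamic type
	mu.RLock()
	enc, ok := cache[id]
	mu.RUnlock()
	return enc, ok
}
func main() {
	_, ok := cachedEncoder("hello")
	fmt.Println(ok) // false until an encoder for string is stored
}
```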
### Types [¶](#pkg-types)
####
type [ChanDir](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L89) [¶](#ChanDir)
```
type ChanDir = [reflect](/reflect).[ChanDir](/reflect#ChanDir)
```
ChanDir represents a channel type's direction.
```
const (
RecvDir [ChanDir](#ChanDir) = 1 << [iota](/builtin#iota) // <-chan
SendDir // chan<-
BothDir = [RecvDir](#RecvDir) | [SendDir](#SendDir) // chan
)
```
####
type [Kind](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L39) [¶](#Kind)
```
type Kind = [reflect](/reflect).[Kind](/reflect#Kind)
```
A Kind represents the specific kind of type that a Type represents.
The zero Kind is not a valid kind.
Example [¶](#example-Kind)
```
for _, v := range []interface{}{"hi", 42, func() {}} {
switch v := reflect.ValueOf(v); v.Kind() {
case reflect.String:
fmt.Println(v.String())
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
fmt.Println(v.Int())
default:
fmt.Printf("unhandled kind %s", v.Kind())
}
}
```
```
Output:
hi 42
unhandled kind func
```
```
const (
Invalid [Kind](#Kind) = [iota](/builtin#iota)
Bool
Int
Int8
Int16
Int32
Int64
Uint
Uint8
Uint16
Uint32
Uint64
Uintptr
Float32
Float64
Complex64
Complex128
Array
Chan
Func
Interface
Map
Ptr
Slice
String
Struct
UnsafePointer
)
```
####
type [MapIter](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L99) [¶](#MapIter)
```
type MapIter = [reflect](/reflect).[MapIter](/reflect#MapIter)
```
A MapIter is an iterator for ranging over a map.
See Value.MapRange.
####
type [Method](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L168) [¶](#Method)
```
type Method struct {
// Name is the method name.
// PkgPath is the package path that qualifies a lower case (unexported)
// method name. It is empty for upper case (exported) method names.
// The combination of PkgPath and Name uniquely identifies a method
// in a method set.
// See <https://golang.org/ref/spec#Uniqueness_of_identifiers>
Name [string](/builtin#string)
PkgPath [string](/builtin#string)
Type [Type](#Type) // method type
Func [Value](#Value) // func with receiver as first argument
Index [int](/builtin#int) // index for Type.Method
}
```
Method represents a single method.
####
type [SelectCase](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L139) [¶](#SelectCase)
```
type SelectCase struct {
Dir [SelectDir](#SelectDir) // direction of case
Chan [Value](#Value) // channel to use (for send or receive)
Send [Value](#Value) // value to send (for send)
}
```
A SelectCase describes a single case in a select operation.
The kind of case depends on Dir, the communication direction.
If Dir is SelectDefault, the case represents a default case.
Chan and Send must be zero Values.
If Dir is SelectSend, the case represents a send operation.
Normally Chan's underlying value must be a channel, and Send's underlying value must be assignable to the channel's element type. As a special case, if Chan is a zero Value,
then the case is ignored, and the field Send will also be ignored and may be either zero or non-zero.
If Dir is SelectRecv, the case represents a receive operation.
Normally Chan's underlying value must be a channel and Send must be a zero Value.
If Chan is a zero Value, then the case is ignored, but Send must still be a zero Value.
When a receive operation is selected, the received Value is returned by Select.
####
type [SelectDir](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L145) [¶](#SelectDir)
```
type SelectDir = [reflect](/reflect).[SelectDir](/reflect#SelectDir)
```
```
const (
SelectSend SelectDir // case Chan <- Send
SelectRecv // case <-Chan:
SelectDefault // default
)
```
####
type [SliceHeader](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L112) [¶](#SliceHeader)
added in v0.1.1
```
type SliceHeader = [reflect](/reflect).[SliceHeader](/reflect#SliceHeader)
```
SliceHeader is the runtime representation of a slice.
It cannot be used safely or portably and its representation may change in a later release.
Moreover, the Data field is not sufficient to guarantee the data it references will not be garbage collected, so programs must keep a separate, correctly typed pointer to the underlying data.
####
type [StringHeader](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L120) [¶](#StringHeader)
added in v0.1.1
```
type StringHeader = [reflect](/reflect).[StringHeader](/reflect#StringHeader)
```
StringHeader is the runtime representation of a string.
It cannot be used safely or portably and its representation may change in a later release.
Moreover, the Data field is not sufficient to guarantee the data it references will not be garbage collected, so programs must keep a separate, correctly typed pointer to the underlying data.
####
type [StructField](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L184) [¶](#StructField)
```
type StructField struct {
// Name is the field name.
Name [string](/builtin#string)
// PkgPath is the package path that qualifies a lower case (unexported)
// field name. It is empty for upper case (exported) field names.
// See <https://golang.org/ref/spec#Uniqueness_of_identifiers>
PkgPath [string](/builtin#string)
Type [Type](#Type) // field type
Tag [StructTag](#StructTag) // field tag string
Offset [uintptr](/builtin#uintptr) // offset within struct, in bytes
Index [][int](/builtin#int) // index sequence for Type.FieldByIndex
Anonymous [bool](/builtin#bool) // is an embedded field
}
```
A StructField describes a single field in a struct.
####
type [StructTag](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L86) [¶](#StructTag)
```
type StructTag = [reflect](/reflect).[StructTag](/reflect#StructTag)
```
A StructTag is the tag string in a struct field.
By convention, tag strings are a concatenation of optionally space-separated key:"value" pairs.
Each key is a non-empty string consisting of non-control characters other than space (U+0020 ' '), quote (U+0022 '"'),
and colon (U+003A ':'). Each value is quoted using U+0022 '"'
characters and Go string literal syntax.
Example [¶](#example-StructTag)
```
type S struct {
F string `species:"gopher" color:"blue"`
}
s := S{}
st := reflect.TypeOf(s)
field := st.Field(0)
fmt.Println(field.Tag.Get("color"), field.Tag.Get("species"))
```
```
Output:
blue gopher
```
####
type [Type](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L19) [¶](#Type)
```
type Type = *rtype
```
Type is the representation of a Go type.
Not all methods apply to all kinds of types. Restrictions,
if any, are noted in the documentation for each method.
Use the Kind method to find out the kind of type before calling kind-specific methods. Calling a method inappropriate to the kind of type causes a run-time panic.
Type values are comparable, such as with the == operator,
so they can be used as map keys.
Two Type values are equal if they represent identical types.
####
func [ArrayOf](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L204) [¶](#ArrayOf)
```
func ArrayOf(count [int](/builtin#int), elem [Type](#Type)) [Type](#Type)
```
ArrayOf returns the array type with the given count and element type.
For example, if t represents int, ArrayOf(5, t) represents [5]int.
If the resulting type would be larger than the available address space,
ArrayOf panics.
####
func [ChanOf](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L213) [¶](#ChanOf)
```
func ChanOf(dir [ChanDir](#ChanDir), t [Type](#Type)) [Type](#Type)
```
ChanOf returns the channel type with the given direction and element type.
For example, if t represents int, ChanOf(RecvDir, t) represents <-chan int.
The gc runtime imposes a limit of 64 kB on channel element types.
If t's size is equal to or exceeds this limit, ChanOf panics.
####
func [FuncOf](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L224) [¶](#FuncOf)
```
func FuncOf(in, out [][Type](#Type), variadic [bool](/builtin#bool)) [Type](#Type)
```
FuncOf returns the function type with the given argument and result types.
For example if k represents int and e represents string,
FuncOf([]Type{k}, []Type{e}, false) represents func(int) string.
The variadic argument controls whether the function is variadic. FuncOf panics if the in[len(in)-1] does not represent a slice and variadic is true.
####
func [MapOf](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L234) [¶](#MapOf)
```
func MapOf(key, elem [Type](#Type)) [Type](#Type)
```
MapOf returns the map type with the given key and element types.
For example, if k represents int and e represents string,
MapOf(k, e) represents map[int]string.
If the key type is not a valid map key type (that is, if it does not implement Go's == operator), MapOf panics.
####
func [PtrTo](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L240) [¶](#PtrTo)
```
func PtrTo(t [Type](#Type)) [Type](#Type)
```
PtrTo returns the pointer type with element t.
For example, if t represents type Foo, PtrTo(t) represents *Foo.
####
func [SliceOf](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L246) [¶](#SliceOf)
```
func SliceOf(t [Type](#Type)) [Type](#Type)
```
SliceOf returns the slice type with element type t.
For example, if t represents int, SliceOf(t) represents []int.
####
func [StructOf](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L257) [¶](#StructOf)
```
func StructOf(fields [][StructField](#StructField)) [Type](#Type)
```
StructOf returns the struct type containing fields.
The Offset and Index fields are ignored and computed as they would be by the compiler.
StructOf currently does not generate wrapper methods for embedded fields and panics if passed unexported StructFields.
These limitations may be lifted in a future version.
Example [¶](#example-StructOf)
```
typ := reflect.StructOf([]reflect.StructField{
{
Name: "Height",
Type: reflect.TypeOf(float64(0)),
Tag: `json:"height"`,
},
{
Name: "Age",
Type: reflect.TypeOf(int(0)),
Tag: `json:"age"`,
},
})
v := reflect.New(typ).Elem()
v.Field(0).SetFloat(0.4)
v.Field(1).SetInt(2)
s := v.Addr().Interface()
w := new(bytes.Buffer)
if err := json.NewEncoder(w).Encode(s); err != nil {
panic(err)
}
fmt.Printf("value: %+v\n", s)
fmt.Printf("json: %s", w.Bytes())
r := bytes.NewReader([]byte(`{"height":1.5,"age":10}`))
if err := json.NewDecoder(r).Decode(s); err != nil {
panic(err)
}
fmt.Printf("value: %+v\n", s)
```
```
Output:
value: &{Height:0.4 Age:2}
json: {"height":0.4,"age":2}
value: &{Height:1.5 Age:10}
```
####
func [ToType](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L318) [¶](#ToType)
```
func ToType(t [reflect](/reflect).[Type](/reflect#Type)) [Type](#Type)
```
ToType convert reflect.Type to Type
####
func [TypeAndPtrOf](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L290) [¶](#TypeAndPtrOf)
added in v1.0.0
```
func TypeAndPtrOf(v interface{}) ([Type](#Type), [unsafe](/unsafe).[Pointer](/unsafe#Pointer))
```
TypeAndPtrOf returns raw Type and ptr value in favor of performance.
####
func [TypeOf](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L263) [¶](#TypeOf)
```
func TypeOf(v interface{}) [Type](#Type)
```
TypeOf returns the reflection Type that represents the dynamic type of i.
If i is a nil interface value, TypeOf returns nil.
Example [¶](#example-TypeOf)
```
// As interface types are only used for static typing, a
// common idiom to find the reflection Type for an interface
// type Foo is to use a *Foo value.
writerType := reflect.TypeOf((*io.Writer)(nil)).Elem()
fileType := reflect.TypeOf((*os.File)(nil))
fmt.Println(fileType.Implements(writerType))
```
```
Output:
true
```
####
type [Value](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L161) [¶](#Value)
```
type Value struct {
// contains filtered or unexported fields
}
```
Value is the reflection interface to a Go value.
Not all methods apply to all kinds of values.
Restrictions, if any, are noted in the documentation for each method.
Use the Kind method to find out the kind of value before calling kind-specific methods.
Calling a method inappropriate to the kind of type causes a run time panic.
The zero Value represents no value.
Its IsValid method returns false, its Kind method returns Invalid,
its String method returns "<invalid Value>", and all other methods panic.
Most functions and methods never return an invalid value.
If one does, its documentation states the conditions explicitly.
A Value can be used concurrently by multiple goroutines provided that the underlying Go value can be used concurrently for the equivalent direct operations.
To compare two Values, compare the results of the Interface method.
Using == on two Values does not compare the underlying values they represent.
####
func [Append](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L399) [¶](#Append)
```
func Append(s [Value](#Value), x ...[Value](#Value)) [Value](#Value)
```
Append appends the values x to a slice s and returns the resulting slice.
As in Go, each x's value must be assignable to the slice's element type.
####
func [AppendSlice](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L405) [¶](#AppendSlice)
```
func AppendSlice(s, t [Value](#Value)) [Value](#Value)
```
AppendSlice appends a slice t to a slice s and returns the resulting slice.
The slices s and t must have the same element type.
####
func [Indirect](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L412) [¶](#Indirect)
```
func Indirect(v [Value](#Value)) [Value](#Value)
```
Indirect returns the value that v points to.
If v is a nil pointer, Indirect returns a zero Value.
If v is not a pointer, Indirect returns v.
####
func [MakeChan](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L417) [¶](#MakeChan)
```
func MakeChan(typ [Type](#Type), buffer [int](/builtin#int)) [Value](#Value)
```
MakeChan creates a new channel with the specified type and buffer size.
####
func [MakeFunc](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L443) [¶](#MakeFunc)
```
func MakeFunc(typ [Type](#Type), fn func(args [][Value](#Value)) (results [][Value](#Value))) [Value](#Value)
```
MakeFunc returns a new function of the given Type that wraps the function fn. When called, that new function does the following:
* converts its arguments to a slice of Values.
* runs results := fn(args).
* returns the results as a slice of Values, one per formal result.
The implementation fn can assume that the argument Value slice has the number and type of arguments given by typ.
If typ describes a variadic function, the final Value is itself a slice representing the variadic arguments, as in the body of a variadic function. The result Value slice returned by fn must have the number and type of results given by typ.
The Value.Call method allows the caller to invoke a typed function in terms of Values; in contrast, MakeFunc allows the caller to implement a typed function in terms of Values.
The Examples section of the documentation includes an illustration of how to use MakeFunc to build a swap function for different types.
Example [¶](#example-MakeFunc)
```
// swap is the implementation passed to MakeFunc.
// It must work in terms of reflect.Values so that it is possible
// to write code without knowing beforehand what the types
// will be.
swap := func(in []reflect.Value) []reflect.Value {
return []reflect.Value{in[1], in[0]}
}
// makeSwap expects fptr to be a pointer to a nil function.
// It sets that pointer to a new function created with MakeFunc.
// When the function is invoked, reflect turns the arguments
// into Values, calls swap, and then turns swap's result slice
// into the values returned by the new function.
makeSwap := func(fptr interface{}) {
// fptr is a pointer to a function.
// Obtain the function value itself (likely nil) as a reflect.Value
// so that we can query its type and then set the value.
fn := reflect.ValueOf(fptr).Elem()
// Make a function of the right type.
v := reflect.MakeFunc(fn.Type(), swap)
// Assign it to the value fn represents.
fn.Set(v)
}
// Make and call a swap function for ints.
var intSwap func(int, int) (int, int)
makeSwap(&intSwap)
fmt.Println(intSwap(0, 1))
// Make and call a swap function for float64s.
var floatSwap func(float64, float64) (float64, float64)
makeSwap(&floatSwap)
fmt.Println(floatSwap(2.72, 3.14))
```
```
Output:
1 0 3.14 2.72
```
####
func [MakeMap](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L448) [¶](#MakeMap)
```
func MakeMap(typ [Type](#Type)) [Value](#Value)
```
MakeMap creates a new map with the specified type.
####
func [MakeMapWithSize](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L454) [¶](#MakeMapWithSize)
```
func MakeMapWithSize(typ [Type](#Type), n [int](/builtin#int)) [Value](#Value)
```
MakeMapWithSize creates a new map with the specified type and initial space for approximately n elements.
####
func [MakeSlice](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L460) [¶](#MakeSlice)
```
func MakeSlice(typ [Type](#Type), len, cap [int](/builtin#int)) [Value](#Value)
```
MakeSlice creates a new zero-initialized slice value for the specified slice type, length, and capacity.
####
func [New](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L466) [¶](#New)
```
func New(typ [Type](#Type)) [Value](#Value)
```
New returns a Value representing a pointer to a new zero value for the specified type. That is, the returned Value's Type is PtrTo(typ).
####
func [NewAt](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L472) [¶](#NewAt)
```
func NewAt(typ [Type](#Type), p [unsafe](/unsafe).[Pointer](/unsafe#Pointer)) [Value](#Value)
```
NewAt returns a Value representing a pointer to a value of the specified type, using p as that pointer.
####
func [Select](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L483) [¶](#Select)
```
func Select(cases [][SelectCase](#SelectCase)) ([int](/builtin#int), [Value](#Value), [bool](/builtin#bool))
```
Select executes a select operation described by the list of cases.
Like the Go select statement, it blocks until at least one of the cases can proceed, makes a uniform pseudo-random choice,
and then executes that case. It returns the index of the chosen case and, if that case was a receive operation, the value received and a boolean indicating whether the value corresponds to a send on the channel
(as opposed to a zero value received because the channel is closed).
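The constants needed to populate a SelectCase (SelectRecv, SelectDefault, ...) are not shown in this listing, so the hedged sketch below is written against the standard library's reflect package, whose Select API is the one mirrored here:
```
package main
import (
"fmt"
"reflect"
)
func main() {
ch := make(chan string, 1)
ch <- "hello"
cases := []reflect.SelectCase{
{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(ch)}, // case 0: <-ch
{Dir: reflect.SelectDefault},                         // case 1: default
}
chosen, recv, ok := reflect.Select(cases)
fmt.Println(chosen, recv.Interface(), ok) // 0 hello true
}
```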
####
func [ToValue](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L323) [¶](#ToValue)
```
func ToValue(v [reflect](/reflect).[Value](/reflect#Value)) [Value](#Value)
```
ToValue converts a reflect.Value to a Value.
####
func [ValueNoEscapeOf](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L303) [¶](#ValueNoEscapeOf)
added in v0.1.1
```
func ValueNoEscapeOf(v interface{}) [Value](#Value)
```
ValueNoEscapeOf is a variant of ValueOf that does not let its argument escape to the heap.
####
func [ValueOf](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L297) [¶](#ValueOf)
```
func ValueOf(v interface{}) [Value](#Value)
```
ValueOf returns a new Value initialized to the concrete value stored in the interface v. ValueOf(nil) returns the zero Value.
####
func [Zero](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L492) [¶](#Zero)
```
func Zero(typ [Type](#Type)) [Value](#Value)
```
Zero returns a Value representing the zero value for the specified type.
The result is different from the zero value of the Value struct,
which represents no value at all.
For example, Zero(TypeOf(42)) returns a Value with Kind Int and value 0.
The returned value is neither addressable nor settable.
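A minimal sketch of the distinction between Zero and the zero Value struct (assumptions as above):
```
package main
import (
"fmt"
reflect "github.com/goccy/go-reflect"
)
func main() {
z := reflect.Zero(reflect.TypeOf(42))
fmt.Println(z.Kind(), z.Int(), z.CanSet()) // int 0 false
var invalid reflect.Value      // the zero Value struct represents no value at all
fmt.Println(invalid.IsValid()) // false
}
```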
####
func (Value) [Addr](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L731) [¶](#Value.Addr)
```
func (v [Value](#Value)) Addr() [Value](#Value)
```
Addr returns a pointer value representing the address of v.
It panics if CanAddr() returns false.
Addr is typically used to obtain a pointer to a struct field or slice element in order to call a method that requires a pointer receiver.
####
func (Value) [Bool](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L737) [¶](#Value.Bool)
```
func (v [Value](#Value)) Bool() [bool](/builtin#bool)
```
Bool returns v's underlying value.
It panics if v's kind is not Bool.
####
func (Value) [Bytes](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L743) [¶](#Value.Bytes)
```
func (v [Value](#Value)) Bytes() [][byte](/builtin#byte)
```
Bytes returns v's underlying value.
It panics if v's underlying value is not a slice of bytes.
####
func (Value) [Call](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L755) [¶](#Value.Call)
```
func (v [Value](#Value)) Call(in [][Value](#Value)) [][Value](#Value)
```
Call calls the function v with the input arguments in.
For example, if len(in) == 3, v.Call(in) represents the Go call v(in[0], in[1], in[2]).
Call panics if v's Kind is not Func.
It returns the output results as Values.
As in Go, each input argument must be assignable to the type of the function's corresponding input parameter.
If v is a variadic function, Call creates the variadic slice parameter itself, copying in the corresponding values.
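An illustrative, hedged sketch of invoking an ordinary function through a Value (same import alias assumption as above):
```
package main
import (
"fmt"
"strings"
reflect "github.com/goccy/go-reflect"
)
func main() {
fn := reflect.ValueOf(strings.Repeat) // func(string, int) string
in := []reflect.Value{reflect.ValueOf("ab"), reflect.ValueOf(3)}
out := fn.Call(in)           // equivalent to strings.Repeat("ab", 3)
fmt.Println(out[0].String()) // ababab
}
```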
####
func (Value) [CallSlice](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L766) [¶](#Value.CallSlice)
```
func (v [Value](#Value)) CallSlice(in [][Value](#Value)) [][Value](#Value)
```
CallSlice calls the variadic function v with the input arguments in,
assigning the slice in[len(in)-1] to v's final variadic argument.
For example, if len(in) == 3, v.CallSlice(in) represents the Go call v(in[0], in[1], in[2]...).
CallSlice panics if v's Kind is not Func or if v is not variadic.
It returns the output results as Values.
As in Go, each input argument must be assignable to the type of the function's corresponding input parameter.
####
func (Value) [CanAddr](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L775) [¶](#Value.CanAddr)
```
func (v [Value](#Value)) CanAddr() [bool](/builtin#bool)
```
CanAddr reports whether the value's address can be obtained with Addr.
Such values are called addressable. A value is addressable if it is an element of a slice, an element of an addressable array,
a field of an addressable struct, or the result of dereferencing a pointer.
If CanAddr returns false, calling Addr will panic.
####
func (Value) [CanInterface](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L780) [¶](#Value.CanInterface)
```
func (v [Value](#Value)) CanInterface() [bool](/builtin#bool)
```
CanInterface reports whether Interface can be used without panicking.
####
func (Value) [CanSet](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L789) [¶](#Value.CanSet)
```
func (v [Value](#Value)) CanSet() [bool](/builtin#bool)
```
CanSet reports whether the value of v can be changed.
A Value can be changed only if it is addressable and was not obtained by the use of unexported struct fields.
If CanSet returns false, calling Set or any type-specific setter (e.g., SetBool, SetInt) will panic.
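A short sketch of the addressability rule behind CanAddr/CanSet (assumptions as above):
```
package main
import (
"fmt"
reflect "github.com/goccy/go-reflect"
)
func main() {
x := 1
v := reflect.ValueOf(x)
fmt.Println(v.CanAddr(), v.CanSet()) // false false: v holds a copy of x
// Dereferencing a pointer yields an addressable, settable Value.
e := reflect.ValueOf(&x).Elem()
fmt.Println(e.CanAddr(), e.CanSet()) // true true
e.SetInt(9)
fmt.Println(x) // 9
}
```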
####
func (Value) [Cap](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L795) [¶](#Value.Cap)
```
func (v [Value](#Value)) Cap() [int](/builtin#int)
```
Cap returns v's capacity.
It panics if v's Kind is not Array, Chan, or Slice.
####
func (Value) [Close](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L801) [¶](#Value.Close)
```
func (v [Value](#Value)) Close()
```
Close closes the channel v.
It panics if v's Kind is not Chan.
####
func (Value) [Complex](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L807) [¶](#Value.Complex)
```
func (v [Value](#Value)) Complex() [complex128](/builtin#complex128)
```
Complex returns v's underlying value, as a complex128.
It panics if v's Kind is not Complex64 or Complex128.
####
func (Value) [Convert](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L814) [¶](#Value.Convert)
```
func (v [Value](#Value)) Convert(t [Type](#Type)) [Value](#Value)
```
Convert returns the value v converted to type t.
If the usual Go conversion rules do not allow conversion of the value v to type t, Convert panics.
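A minimal, hedged sketch (assumptions as above):
```
package main
import (
"fmt"
reflect "github.com/goccy/go-reflect"
)
func main() {
v := reflect.ValueOf(3)
f := v.Convert(reflect.TypeOf(float64(0))) // int -> float64 is a legal Go conversion
fmt.Println(f.Kind(), f.Float())           // float64 3
}
```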
####
func (Value) [Elem](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L822) [¶](#Value.Elem)
```
func (v [Value](#Value)) Elem() [Value](#Value)
```
Elem returns the value that the interface v contains or that the pointer v points to.
It panics if v's Kind is not Interface or Ptr.
It returns the zero Value if v is nil.
####
func (Value) [Field](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L828) [¶](#Value.Field)
```
func (v [Value](#Value)) Field(i [int](/builtin#int)) [Value](#Value)
```
Field returns the i'th field of the struct v.
It panics if v's Kind is not Struct or i is out of range.
####
func (Value) [FieldByIndex](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L834) [¶](#Value.FieldByIndex)
```
func (v [Value](#Value)) FieldByIndex(index [][int](/builtin#int)) [Value](#Value)
```
FieldByIndex returns the nested field corresponding to index.
It panics if v's Kind is not struct.
####
func (Value) [FieldByName](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L841) [¶](#Value.FieldByName)
```
func (v [Value](#Value)) FieldByName(name [string](/builtin#string)) [Value](#Value)
```
FieldByName returns the struct field with the given name.
It returns the zero Value if no field was found.
It panics if v's Kind is not struct.
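A hedged sketch of reading and writing struct fields by name; the `user` type is made up for illustration:
```
package main
import (
"fmt"
reflect "github.com/goccy/go-reflect"
)
type user struct {
Name string
Age  int
}
func main() {
u := user{Name: "gopher", Age: 12}
// Reading works on a plain struct value.
fmt.Println(reflect.ValueOf(u).FieldByName("Name").String()) // gopher
// Writing requires an addressable struct, reached through a pointer.
reflect.ValueOf(&u).Elem().FieldByName("Age").SetInt(13)
fmt.Println(u.Age) // 13
}
```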
####
func (Value) [FieldByNameFunc](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L849) [¶](#Value.FieldByNameFunc)
```
func (v [Value](#Value)) FieldByNameFunc(match func([string](/builtin#string)) [bool](/builtin#bool)) [Value](#Value)
```
FieldByNameFunc returns the struct field with a name that satisfies the match function.
It panics if v's Kind is not struct.
It returns the zero Value if no field was found.
####
func (Value) [Float](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L855) [¶](#Value.Float)
```
func (v [Value](#Value)) Float() [float64](/builtin#float64)
```
Float returns v's underlying value, as a float64.
It panics if v's Kind is not Float32 or Float64.
####
func (Value) [Index](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L861) [¶](#Value.Index)
```
func (v [Value](#Value)) Index(i [int](/builtin#int)) [Value](#Value)
```
Index returns v's i'th element.
It panics if v's Kind is not Array, Slice, or String or i is out of range.
####
func (Value) [Int](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L867) [¶](#Value.Int)
```
func (v [Value](#Value)) Int() [int64](/builtin#int64)
```
Int returns v's underlying value, as an int64.
It panics if v's Kind is not Int, Int8, Int16, Int32, or Int64.
####
func (Value) [Interface](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L876) [¶](#Value.Interface)
```
func (v [Value](#Value)) Interface() interface{}
```
Interface returns v's current value as an interface{}.
It is equivalent to:
```
var i interface{} = (v's underlying value)
```
It panics if the Value was obtained by accessing unexported struct fields.
####
func (Value) [InterfaceData](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L882) [¶](#Value.InterfaceData)
```
func (v [Value](#Value)) InterfaceData() [2][uintptr](/builtin#uintptr)
```
InterfaceData returns the interface v's value as a uintptr pair.
It panics if v's Kind is not Interface.
####
func (Value) [IsNil](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L893) [¶](#Value.IsNil)
```
func (v [Value](#Value)) IsNil() [bool](/builtin#bool)
```
IsNil reports whether its argument v is nil. The argument must be a chan, func, interface, map, pointer, or slice value; if it is not, IsNil panics. Note that IsNil is not always equivalent to a regular comparison with nil in Go. For example, if v was created by calling ValueOf with an uninitialized interface variable i,
i==nil will be true but v.IsNil will panic as v will be the zero Value.
####
func (Value) [IsValid](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L902) [¶](#Value.IsValid)
```
func (v [Value](#Value)) IsValid() [bool](/builtin#bool)
```
IsValid reports whether v represents a value.
It returns false if v is the zero Value.
If IsValid returns false, all other methods except String panic.
Most functions and methods never return an invalid Value.
If one does, its documentation states the conditions explicitly.
####
func (Value) [IsZero](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect113.go#L7) [¶](#Value.IsZero)
```
func (v [Value](#Value)) IsZero() [bool](/builtin#bool)
```
IsZero reports whether v is the zero value for its type.
It panics if the argument is invalid.
####
func (Value) [Kind](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L908) [¶](#Value.Kind)
```
func (v [Value](#Value)) Kind() [Kind](#Kind)
```
Kind returns v's Kind.
If v is the zero Value (IsValid returns false), Kind returns Invalid.
####
func (Value) [Len](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L914) [¶](#Value.Len)
```
func (v [Value](#Value)) Len() [int](/builtin#int)
```
Len returns v's length.
It panics if v's Kind is not Array, Chan, Map, Slice, or String.
####
func (Value) [MapIndex](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L922) [¶](#Value.MapIndex)
```
func (v [Value](#Value)) MapIndex(key [Value](#Value)) [Value](#Value)
```
MapIndex returns the value associated with key in the map v.
It panics if v's Kind is not Map.
It returns the zero Value if key is not found in the map or if v represents a nil map.
As in Go, the key's value must be assignable to the map's key type.
####
func (Value) [MapKeys](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L930) [¶](#Value.MapKeys)
```
func (v [Value](#Value)) MapKeys() [][Value](#Value)
```
MapKeys returns a slice containing all the keys present in the map,
in unspecified order.
It panics if v's Kind is not Map.
It returns an empty slice if v represents a nil map.
####
func (Value) [MapRange](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L950) [¶](#Value.MapRange)
```
func (v [Value](#Value)) MapRange() *[MapIter](#MapIter)
```
MapRange returns a range iterator for a map.
It panics if v's Kind is not Map.
Call Next to advance the iterator, and Key/Value to access each entry.
Next returns false when the iterator is exhausted.
MapRange follows the same iteration semantics as a range statement.
Example:
```
iter := reflect.ValueOf(m).MapRange()
for iter.Next() {
k := iter.Key()
v := iter.Value()
...
}
```
####
func (Value) [Method](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L958) [¶](#Value.Method)
```
func (v [Value](#Value)) Method(i [int](/builtin#int)) [Value](#Value)
```
Method returns a function value corresponding to v's i'th method.
The arguments to a Call on the returned function should not include a receiver; the returned function will always use v as the receiver.
Method panics if i is out of range or if v is a nil interface value.
####
func (Value) [MethodByName](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L967) [¶](#Value.MethodByName)
```
func (v [Value](#Value)) MethodByName(name [string](/builtin#string)) [Value](#Value)
```
MethodByName returns a function value corresponding to the method of v with the given name.
The arguments to a Call on the returned function should not include a receiver; the returned function will always use v as the receiver.
It returns the zero Value if no method was found.
####
func (Value) [NumField](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L973) [¶](#Value.NumField)
```
func (v [Value](#Value)) NumField() [int](/builtin#int)
```
NumField returns the number of fields in the struct v.
It panics if v's Kind is not Struct.
####
func (Value) [NumMethod](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L978) [¶](#Value.NumMethod)
```
func (v [Value](#Value)) NumMethod() [int](/builtin#int)
```
NumMethod returns the number of exported methods in the value's method set.
####
func (Value) [OverflowComplex](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L984) [¶](#Value.OverflowComplex)
```
func (v [Value](#Value)) OverflowComplex(x [complex128](/builtin#complex128)) [bool](/builtin#bool)
```
OverflowComplex reports whether the complex128 x cannot be represented by v's type.
It panics if v's Kind is not Complex64 or Complex128.
####
func (Value) [OverflowFloat](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L990) [¶](#Value.OverflowFloat)
```
func (v [Value](#Value)) OverflowFloat(x [float64](/builtin#float64)) [bool](/builtin#bool)
```
OverflowFloat reports whether the float64 x cannot be represented by v's type.
It panics if v's Kind is not Float32 or Float64.
####
func (Value) [OverflowInt](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L996) [¶](#Value.OverflowInt)
```
func (v [Value](#Value)) OverflowInt(x [int64](/builtin#int64)) [bool](/builtin#bool)
```
OverflowInt reports whether the int64 x cannot be represented by v's type.
It panics if v's Kind is not Int, Int8, Int16, Int32, or Int64.
####
func (Value) [OverflowUint](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1002) [¶](#Value.OverflowUint)
```
func (v [Value](#Value)) OverflowUint(x [uint64](/builtin#uint64)) [bool](/builtin#bool)
```
OverflowUint reports whether the uint64 x cannot be represented by v's type.
It panics if v's Kind is not Uint, Uintptr, Uint8, Uint16, Uint32, or Uint64.
####
func (Value) [Pointer](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1025) [¶](#Value.Pointer)
```
func (v [Value](#Value)) Pointer() [uintptr](/builtin#uintptr)
```
Pointer returns v's value as a uintptr.
It returns uintptr instead of unsafe.Pointer so that code using reflect cannot obtain unsafe.Pointers without importing the unsafe package explicitly.
It panics if v's Kind is not Chan, Func, Map, Ptr, Slice, or UnsafePointer.
If v's Kind is Func, the returned pointer is an underlying code pointer, but not necessarily enough to identify a single function uniquely. The only guarantee is that the result is zero if and only if v is a nil func Value.
If v's Kind is Slice, the returned pointer is to the first element of the slice. If the slice is nil the returned value is 0. If the slice is empty but non-nil the return value is non-zero.
####
func (Value) [Recv](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1034) [¶](#Value.Recv)
```
func (v [Value](#Value)) Recv() ([Value](#Value), [bool](/builtin#bool))
```
Recv receives and returns a value from the channel v.
It panics if v's Kind is not Chan.
The receive blocks until a value is ready.
The boolean value ok is true if the value x corresponds to a send on the channel, false if it is a zero value received because the channel is closed.
####
func (Value) [Send](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1041) [¶](#Value.Send)
```
func (v [Value](#Value)) Send(x [Value](#Value))
```
Send sends x on the channel v.
It panics if v's kind is not Chan or if x's type is not the same type as v's element type.
As in Go, x's value must be assignable to the channel's element type.
####
func (Value) [Set](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1048) [¶](#Value.Set)
```
func (v [Value](#Value)) Set(x [Value](#Value))
```
Set assigns x to the value v.
It panics if CanSet returns false.
As in Go, x's value must be assignable to v's type.
####
func (Value) [SetBool](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1054) [¶](#Value.SetBool)
```
func (v [Value](#Value)) SetBool(x [bool](/builtin#bool))
```
SetBool sets v's underlying value.
It panics if v's Kind is not Bool or if CanSet() is false.
####
func (Value) [SetBytes](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1060) [¶](#Value.SetBytes)
```
func (v [Value](#Value)) SetBytes(x [][byte](/builtin#byte))
```
SetBytes sets v's underlying value.
It panics if v's underlying value is not a slice of bytes.
####
func (Value) [SetCap](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1067) [¶](#Value.SetCap)
```
func (v [Value](#Value)) SetCap(n [int](/builtin#int))
```
SetCap sets v's capacity to n.
It panics if v's Kind is not Slice or if n is smaller than the length or greater than the capacity of the slice.
####
func (Value) [SetComplex](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1073) [¶](#Value.SetComplex)
```
func (v [Value](#Value)) SetComplex(x [complex128](/builtin#complex128))
```
SetComplex sets v's underlying value to x.
It panics if v's Kind is not Complex64 or Complex128, or if CanSet() is false.
####
func (Value) [SetFloat](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1079) [¶](#Value.SetFloat)
```
func (v [Value](#Value)) SetFloat(x [float64](/builtin#float64))
```
SetFloat sets v's underlying value to x.
It panics if v's Kind is not Float32 or Float64, or if CanSet() is false.
####
func (Value) [SetInt](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1085) [¶](#Value.SetInt)
```
func (v [Value](#Value)) SetInt(x [int64](/builtin#int64))
```
SetInt sets v's underlying value to x.
It panics if v's Kind is not Int, Int8, Int16, Int32, or Int64, or if CanSet() is false.
####
func (Value) [SetLen](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1092) [¶](#Value.SetLen)
```
func (v [Value](#Value)) SetLen(n [int](/builtin#int))
```
SetLen sets v's length to n.
It panics if v's Kind is not Slice or if n is negative or greater than the capacity of the slice.
####
func (Value) [SetMapIndex](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1102) [¶](#Value.SetMapIndex)
```
func (v [Value](#Value)) SetMapIndex(key, elem [Value](#Value))
```
SetMapIndex sets the element associated with key in the map v to elem.
It panics if v's Kind is not Map.
If elem is the zero Value, SetMapIndex deletes the key from the map.
Otherwise if v holds a nil map, SetMapIndex will panic.
As in Go, key's value must be assignable to the map's key type,
and elem's value must be assignable to the map's elem type.
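A brief sketch showing both the insert and the delete behaviour (assumptions as above):
```
package main
import (
"fmt"
reflect "github.com/goccy/go-reflect"
)
func main() {
m := map[string]int{"a": 1, "b": 2}
v := reflect.ValueOf(m)
// Insert or overwrite an entry.
v.SetMapIndex(reflect.ValueOf("c"), reflect.ValueOf(3))
// Passing the zero Value as elem deletes the key.
v.SetMapIndex(reflect.ValueOf("a"), reflect.Value{})
fmt.Println(m) // map[b:2 c:3]
}
```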
####
func (Value) [SetPointer](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1108) [¶](#Value.SetPointer)
```
func (v [Value](#Value)) SetPointer(x [unsafe](/unsafe).[Pointer](/unsafe#Pointer))
```
SetPointer sets the unsafe.Pointer value v to x.
It panics if v's Kind is not UnsafePointer.
####
func (Value) [SetString](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1114) [¶](#Value.SetString)
```
func (v [Value](#Value)) SetString(x [string](/builtin#string))
```
SetString sets v's underlying value to x.
It panics if v's Kind is not String or if CanSet() is false.
####
func (Value) [SetUint](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1120) [¶](#Value.SetUint)
```
func (v [Value](#Value)) SetUint(x [uint64](/builtin#uint64))
```
SetUint sets v's underlying value to x.
It panics if v's Kind is not Uint, Uintptr, Uint8, Uint16, Uint32, or Uint64, or if CanSet() is false.
####
func (Value) [Slice](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1127) [¶](#Value.Slice)
```
func (v [Value](#Value)) Slice(i, j [int](/builtin#int)) [Value](#Value)
```
Slice returns v[i:j].
It panics if v's Kind is not Array, Slice or String, or if v is an unaddressable array,
or if the indexes are out of bounds.
####
func (Value) [Slice3](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1134) [¶](#Value.Slice3)
```
func (v [Value](#Value)) Slice3(i, j, k [int](/builtin#int)) [Value](#Value)
```
Slice3 is the 3-index form of the slice operation: it returns v[i:j:k].
It panics if v's Kind is not Array or Slice, or if v is an unaddressable array,
or if the indexes are out of bounds.
####
func (Value) [String](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1144) [¶](#Value.String)
```
func (v [Value](#Value)) String() [string](/builtin#string)
```
String returns the string v's underlying value, as a string.
String is a special case because of Go's String method convention.
Unlike the other getters, it does not panic if v's Kind is not String.
Instead, it returns a string of the form "<T value>" where T is v's type.
The fmt package treats Values specially. It does not call their String method implicitly but instead prints the concrete values they hold.
####
func (Value) [TryRecv](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1153) [¶](#Value.TryRecv)
```
func (v [Value](#Value)) TryRecv() ([Value](#Value), [bool](/builtin#bool))
```
TryRecv attempts to receive a value from the channel v but will not block.
It panics if v's Kind is not Chan.
If the receive delivers a value, x is the transferred value and ok is true.
If the receive cannot finish without blocking, x is the zero Value and ok is false.
If the channel is closed, x is the zero value for the channel's element type and ok is false.
####
func (Value) [TrySend](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1161) [¶](#Value.TrySend)
```
func (v [Value](#Value)) TrySend(x [Value](#Value)) [bool](/builtin#bool)
```
TrySend attempts to send x on the channel v but will not block.
It panics if v's Kind is not Chan.
It reports whether the value was sent.
As in Go, x's value must be assignable to the channel's element type.
####
func (Value) [Type](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1166) [¶](#Value.Type)
```
func (v [Value](#Value)) Type() [Type](#Type)
```
Type returns v's type.
####
func (Value) [Uint](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1172) [¶](#Value.Uint)
```
func (v [Value](#Value)) Uint() [uint64](/builtin#uint64)
```
Uint returns v's underlying value, as a uint64.
It panics if v's Kind is not Uint, Uintptr, Uint8, Uint16, Uint32, or Uint64.
####
func (Value) [UnsafeAddr](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L1184) [¶](#Value.UnsafeAddr)
```
func (v [Value](#Value)) UnsafeAddr() [uintptr](/builtin#uintptr)
```
UnsafeAddr returns a pointer to v's data.
It is for advanced clients that also import the "unsafe" package.
It panics if v is not addressable.
####
type [ValueError](https://github.com/goccy/go-reflect/blob/v1.2.0/reflect.go#L104) [¶](#ValueError)
```
type ValueError = [reflect](/reflect).[ValueError](/reflect#ValueError)
```
A ValueError occurs when a Value method is invoked on a Value that does not support it. Such cases are documented in the description of each method. |
twangContinuous | cran | R | Package ‘twangContinuous’
October 14, 2022
Type Package
Date 2021-02-15
Title Toolkit for Weighting and Analysis of Nonequivalent Groups -
Continuous Exposures
Version 1.0.0
Description Provides functions for propensity score
estimation and weighting for continuous exposures as described in Zhu, Y.,
<NAME>., & <NAME>. (2015). A boosting algorithm for
estimating generalized propensity scores with continuous treatments.
Journal of Causal Inference, 3(1), 25-40. <doi:10.1515/jci-2014-0022>.
License GPL (>= 2)
Encoding UTF-8
LazyData true
VignetteBuilder knitr
Imports Rcpp (>= 0.12.19), lattice (>= 0.20-35), gbm (>= 2.1.3),
survey, xtable
Suggests knitr, rmarkdown
RoxygenNote 7.1.1
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0001-6305-6579>),
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.5.0)
Repository CRAN
Date/Publication 2021-02-26 09:50:02 UTC
R topics documented:
bal.table
dat
get.weights
plot.ps.cont
ps.cont
summary.ps.cont
bal.table Compute the balance table.
Description
‘bal.table‘ is a generic function for extracting balance tables from ‘ps.cont‘ objects, one for an
unweighted analysis and one for the weighted analysis.
Usage
bal.table(x, digits = 3, ...)
Arguments
x A ‘ps.cont‘ object
digits Number of digits to round to. Default: 3
... Additional arguments.
Value
Returns a data frame containing the balance information. * ‘unw‘ The unweighted correlation
between the exposure and each covariate. * ‘wcor‘ The weighted correlation between the exposure
and each covariate.
See Also
[ps.cont]
Examples
## Not run: bal.table(test.mod)
dat A synthetic data set that was derived from a large scale observational
study on youth in substance use treatment.
Description
A subset of measures from the Global Appraisal of Individual Needs biopsychosocial assessment
instrument (GAIN) (Dennis, Titus et al. 2003) from sites that administered two different types
of substance use disorder treatments (treatment “A” and treatment “B”). The Center for Substance
Abuse Treatment (CSAT) funded the sites that administered these two SUD treatments. This dataset
consists of 4,000 adolescents, 2,000 in each treatment group. The dataset includes substance use
and mental health variables.
Usage
data("dat")
Format
A data frame with 4000 observations on the following 29 variables.
treat a factor with levels A B
tss_0 a numeric vector
tss_3 a numeric vector
tss_6 a numeric vector
sfs8p_0 a numeric vector
sfs8p_3 a numeric vector
sfs8p_6 a numeric vector
eps7p_0 a numeric vector
eps7p_3 a numeric vector
eps7p_6 a numeric vector
ias5p_0 a numeric vector
dss9_0 a numeric vector
mhtrt_0 a numeric vector
sati_0 a numeric vector
sp_sm_0 a numeric vector
sp_sm_3 a numeric vector
sp_sm_6 a numeric vector
gvs a numeric vector
ers21_0 a numeric vector
nproc a numeric vector
ada_0 a numeric vector
ada_3 a numeric vector
ada_6 a numeric vector
recov_0 a numeric vector
recov_3 a numeric vector
recov_6 a numeric vector
subsgrps_n a numeric vector
sncnt a numeric vector
engage a numeric vector
Details
tss_0 Traumatic Stress Scale - Baseline
tss_3 Traumatic Stress Scale - 3 months
tss_6 Traumatic Stress Scale - 6 months
sfs8p_0 Substance Frequency Scale - Baseline
sfs8p_3 Substance Frequency Scale - 3 months
sfs8p_6 Substance Frequency Scale - 6 months
eps7p_0 Emotional Problems Scale - Baseline
eps7p_3 Emotional Problems Scale - 3 months
eps7p_6 Emotional Problems Scale - 6 months
ias5p_0 Illegal Activities Scale - baseline
dss9_0 depressive symptom scale - baseline
mhtrt_0 mental health treatment in the past 90 days - baseline
sati_0 substance abuse treatment index - baseline
sp_sm_0 substance problem scale (past month) - baseline
sp_sm_3 substance problem scale (past month) - 3 months
sp_sm_6 substance problem scale (past month) - 6 months
gvs General Victimization Scale
ers21_0 Environmental Risk Scale - baseline
ada_0 adjusted days abstinent (any in past 90) - baseline
ada_3 adjusted days abstinent (any in past 90) - 3 months
ada_6 adjusted days abstinent (any in past 90) - 6 months
recov_0 in recovery - baseline
recov_3 in recovery - 3 months
recov_6 in recovery - 6 months
subsgrps_n primarily opioid using youth vs alcohol/marijuana using youth vs other
Source
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2002).
Five outpatient treatment models for adolescent marijuana use: a description of the Cannabis Youth
Treatment Interventions. Addiction, 97, 70-83.
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2002).
Five outpatient treatment models for adolescent marijuana use: a description of the Cannabis Youth
Treatment Interventions. Addiction, 97, 70-83.
Examples
data(dat)
## maybe str(dat) ; plot(dat) ...
get.weights Extract propensity score weights
Description
Extracts propensity score weights from a ps.cont object.
Usage
get.weights(ps1, stop.method = "wcor", withSampW = TRUE)
Arguments
ps1 a ps.cont object
stop.method indicates which set of weights to retrieve from the ps.cont object
withSampW Returns weights with sample weights multiplied in, if they were provided in the
original ps.cont call.
Value
a vector of weights
Author(s)
<NAME>
See Also
ps.cont
plot.ps.cont Plot the ‘ps.cont‘ object.
Description
This function produces a collection of diagnostic plots for ‘ps.cont‘ objects.
Usage
## S3 method for class 'ps.cont'
plot(x, plots = "optimize", subset = NULL, ...)
Arguments
x ‘ps.cont‘ object
plots An indicator of which type of plot is desired. The options are * ‘"optimize"‘ A
plot of the balance criteria as a function of the GBM iteration. * ‘"es"‘ Plots
of the standardized effect size of the pre-treatment variables before and after
weighting
subset Used to restrict which of the ‘stop.method‘s will be used in the figure.
... Additional arguments.
Value
Returns diagnostic plots for ‘ps.cont‘ objects.
See Also
[ps.cont]
Examples
## Not run: plot(test.mod)
ps.cont Gradient boosted propensity score estimation for continuous exposures
Description
‘ps.cont‘ calculates propensity scores using gradient boosted regression and provides diagnostics of
the resulting propensity scores.
Usage
ps.cont(
formula,
data,
n.trees = 10000,
interaction.depth = 3,
shrinkage = 0.01,
bag.fraction = 1,
sampw = NULL,
print.level = 2,
verbose = FALSE,
stop.method = "wcor",
treat.as.cont = FALSE,
...
)
Arguments
formula An object of class [formula]: a symbolic description of the propensity score
model to be fit with the treatment variable on the left side of the formula and the
potential confounding variables on the right side.
data A dataset that includes the treatment as well as the potential confounding variables.
n.trees Number of gbm iterations passed on to [gbm]. Default: 10000.
interaction.depth
A positive integer denoting the tree depth used in gradient boosting. Default: 3.
shrinkage A numeric value between 0 and 1 denoting the learning rate. See [gbm] for more
details. Default: 0.01.
bag.fraction A numeric value between 0 and 1 denoting the fraction of the observations
randomly selected in each iteration of the gradient boosting algorithm to propose
the next tree. See [gbm] for more details. Default: 1.0.
sampw Optional sampling weights.
print.level The amount of detail to print to the screen. Default: 2.
verbose If ‘TRUE‘, lots of information will be printed to monitor the progress of the
fitting. Default: ‘FALSE‘.
stop.method A method or methods of measuring and summarizing balance across pretreatment
variables. Current options are ‘wcor‘, the weighted Pearson correlation,
summarized by using the mean across the pretreatment variables. Default: ‘wcor‘.
treat.as.cont Used as a check on whether the exposure has greater than five levels. If it does
not and treat.as.cont=FALSE, an error will be produced. Default: FALSE
... Additional arguments that are passed to ps function.
Value
Returns an object of class ‘ps.cont‘, a list containing
* ‘gbm.obj‘ The returned [gbm] object.
* ‘treat‘ The treatment variable.
* ‘desc‘ A list containing balance tables for each method selected in ‘stop.methods‘. Includes a
component for the unweighted analysis named “unw”. Each ‘desc‘ component includes a list with
the following components
- ‘ess‘ The effective sample size.
- ‘n‘ The number of subjects.
- ‘max.wcor‘ The largest weighted correlation across the covariates.
- ‘mean.wcor‘ The average weighted correlation across the covariates.
- ‘rms.wcor‘ The root mean square of the absolute weighted correlations across the covariates.
- ‘bal.tab‘ a (potentially large) table summarizing the quality of the weights for balancing the
distribution of the pretreatment covariates. This table is best extracted using the [bal.table]
method. See the help for [bal.table] for details.
- ‘n.trees‘ The estimated optimal number of [gbm] iterations to optimize the loss function.
* ‘ps.den‘ Denominator values for the propensity score weights.
* ‘ps.num‘ Numerator values for the propensity score weights.
* ‘w‘ The propensity score weights. If sampling weights are given then these are incorporated into
these weights.
* ‘datestamp‘ Records the date of the analysis.
* ‘parameters‘ Saves the ‘ps.cont‘ call.
* ‘alerts‘ Text containing any warnings accumulated during the estimation.
* ‘iters‘ A sequence of iterations used in the GBM fits used by ‘plot‘ function.
* ‘balance‘ The balance measures for the pretreatment covariates used in plotting.
* ‘sampw‘ The sampling weights as specified in the ‘sampw‘ argument.
* ‘preds‘ Predicted values based on the propensity score model.
* ‘covariates‘ Data frame containing the covariates used in the propensity score model.
* ‘n.trees‘ Maximum number of trees considered in GBM fit.
* ‘data‘ Data as specified in the ‘data‘ argument.
References
<NAME>., <NAME>., & <NAME>. (2015). A boosting algorithm for estimating generalized
propensity scores with continuous treatments. *Journal of Causal Inference*, 3(1), 25-40.
<doi:10.1515/jci-2014-0022>
See Also
[gbm], [plot.ps.cont], [bal.table], [summary.ps.cont]
Examples
## Not run: test.mod <- ps.cont(tss_0 ~ sfs8p_0 + sati_0 + sp_sm_0
+ recov_0 + subsgrps_n + treat, data=dat)
## End(Not run)
summary.ps.cont Displays a useful description of a ‘ps.cont‘ object.
Description
Computes a short summary table describing the size of the dataset and the quality of the propensity
score weights about a stored ‘ps.cont‘ object.
Usage
## S3 method for class 'ps.cont'
summary(object, ...)
Arguments
object A ‘ps.cont‘ object
... Additional arguments.
Value
* ‘n‘ The number of subjects.
* ‘ess‘ The effective sample size.
* ‘max.wcor‘ The largest weighted correlation across the covariates.
* ‘mean.wcor‘ The average weighted correlation across the covariates.
* ‘rms.wcor‘ The root mean square of the absolute weighted correlations across the covariates.
* ‘iter‘ The estimated optimal number of [gbm] iterations to optimize the loss function.
See Also
[ps.cont]
Examples
## Not run: summary(test.mod) |
github.com/timandy/routine | go | Go | README
[¶](#section-readme)
---
### routine
[![Build Status](https://github.com/timandy/routine/actions/workflows/build.yml/badge.svg)](https://github.com/timandy/routine/actions)
[![Codecov](https://codecov.io/gh/timandy/routine/branch/main/graph/badge.svg)](https://app.codecov.io/gh/timandy/routine)
[![Go Report Card](https://goreportcard.com/badge/github.com/timandy/routine)](https://goreportcard.com/report/github.com/timandy/routine)
[![Documentation](https://pkg.go.dev/badge/github.com/timandy/routine.svg)](https://pkg.go.dev/github.com/timandy/routine)
[![Release](https://img.shields.io/github/release/timandy/routine.svg)](https://github.com/timandy/routine/releases)
[![License](https://img.shields.io/github/license/timandy/routine.svg)](https://github.com/timandy/routine/raw/main/LICENSE)
> [中文版](https://github.com/timandy/routine/blob/v1.1.2/README_zh.md)
`routine` encapsulates and provides some easy-to-use, contention-free, high-performance `goroutine` context access interfaces, which can help you access coroutine context information more gracefully.
### Introduce
From the very beginning of its design, the `Golang` language has spared no effort to shield the concept of coroutine context from developers, including the acquisition of coroutine `goid`, the state of coroutine within the process, and the storage of coroutine context.
If you have used other languages such as `C++`, `Java` and so on, then you must be familiar with `ThreadLocal`, but after starting to use `Golang`, you will be deeply confused and distressed by the lack of convenient functions like `ThreadLocal`.
Of course, you can choose to use `Context`, which carries all the context information, appears in the first input parameter of all functions, and then shuttles around your system.
And the core goal of `routine` is to open up another way: Introduce `goroutine local storage` to the `Golang` world.
### Usage & Demo
This chapter briefly introduces how to install and use the `routine` library.
#### Install
```
go get github.com/timandy/routine
```
#### Use `goid`
The following code simply demonstrates the use of `routine.Goid()`:
```
package main
import (
"fmt"
"time"
"github.com/timandy/routine"
)
func main() {
goid := routine.Goid()
fmt.Printf("cur goid: %v\n", goid)
go func() {
goid := routine.Goid()
fmt.Printf("sub goid: %v\n", goid)
}()
// Wait for the sub-coroutine to finish executing.
time.Sleep(time.Second)
}
```
In this example, the `main` function starts a new coroutine, so `Goid()` returns the main coroutine `1` and the child coroutine `6`:
```
cur goid: 1
sub goid: 6
```
#### Use `ThreadLocal`
The following code briefly demonstrates `ThreadLocal`'s creation, setting, getting, spreading across coroutines, etc.:
```
package main
import (
"fmt"
"time"
"github.com/timandy/routine"
)
var threadLocal = routine.NewThreadLocal()
var inheritableThreadLocal = routine.NewInheritableThreadLocal()
func main() {
threadLocal.Set("hello world")
inheritableThreadLocal.Set("Hello world2")
fmt.Println("threadLocal:", threadLocal.Get())
fmt.Println("inheritableThreadLocal:", inheritableThreadLocal.Get())
// The child coroutine cannot read the previously assigned "hello world".
go func() {
fmt.Println("threadLocal in goroutine:", threadLocal.Get())
fmt.Println("inheritableThreadLocal in goroutine:", inheritableThreadLocal.Get())
}()
// However, a new sub-coroutine can be started via the Go/GoWait/GoWaitResult function, and all inheritable variables of the current coroutine can be passed automatically.
routine.Go(func() {
fmt.Println("threadLocal in goroutine by Go:", threadLocal.Get())
fmt.Println("inheritableThreadLocal in goroutine by Go:", inheritableThreadLocal.Get())
})
// You can also create a task via the WrapTask/WrapWaitTask/WrapWaitResultTask function, and all inheritable variables of the current coroutine can be automatically captured.
task := routine.WrapTask(func() {
fmt.Println("threadLocal in task by WrapTask:", threadLocal.Get())
fmt.Println("inheritableThreadLocal in task by WrapTask:", inheritableThreadLocal.Get())
})
go task.Run()
// Wait for the sub-coroutine to finish executing.
time.Sleep(time.Second)
}
```
The execution result is:
```
threadLocal: hello world
inheritableThreadLocal: Hello world2
threadLocal in goroutine: <nil>
inheritableThreadLocal in goroutine: <nil>
threadLocal in goroutine by Go: <nil>
inheritableThreadLocal in goroutine by Go: Hello world2
threadLocal in task by WrapTask: <nil>
inheritableThreadLocal in task by WrapTask: Hello world2
### API
This chapter introduces in detail all the interfaces encapsulated by the `routine` library, as well as their core functions and implementation methods.
#### `Goid() int64`
Get the `goid` of the current `goroutine`.
It can be obtained directly through assembly code under `386`, `amd64`, `armv6`, `armv7`, `arm64`, `loong64`, `mips`, `mipsle`, `mips64`, `mips64le`, `ppc64`, `ppc64le`, `riscv64`, `s390x`, `wasm` architectures. This operation has extremely high performance and the time-consuming is usually only one-fifth of `rand.Int()`.
#### `NewThreadLocal() ThreadLocal`
Create a new `ThreadLocal` instance with the initial value stored with `nil`.
#### `NewThreadLocalWithInitial(supplier Supplier) ThreadLocal`
Create a new `ThreadLocal` instance with the initial value stored as the return value of the method `supplier()`.
#### `NewInheritableThreadLocal() ThreadLocal`
Create a new `ThreadLocal` instance with the initial value stored with `nil`.
When a new coroutine is started via `Go()`, `GoWait()` or `GoWaitResult()`, the value of the current coroutine is copied to the new coroutine.
When a new task is created via `WrapTask()`, `WrapWaitTask()` or `WrapWaitResultTask()`, the value of the current coroutine is captured to the new task.
#### `NewInheritableThreadLocalWithInitial(supplier Supplier) ThreadLocal`
Create a new `ThreadLocal` instance with the initial value stored as the return value of the method `supplier()`.
When a new coroutine is started via `Go()`, `GoWait()` or `GoWaitResult()`, the value of the current coroutine is copied to the new coroutine.
When a new task is created via `WrapTask()`, `WrapWaitTask()` or `WrapWaitResultTask()`, the value of the current coroutine is captured to the new task.
#### `WrapTask(fun Runnable) FutureTask`
Create a new task and capture the `inheritableThreadLocals` from the current goroutine.
This function returns a `FutureTask` instance, but the returned task will not run automatically.
You can run it in a sub-goroutine or goroutine-pool by the `FutureTask.Run()` method, and wait by the `FutureTask.Get()` or `FutureTask.GetWithTimeout()` method.
If a `panic` occurs while the returned task is running, it will be caught and the error stack printed; the `panic` will be triggered again when calling the `FutureTask.Get()` or `FutureTask.GetWithTimeout()` method.
#### `WrapWaitTask(fun CancelRunnable) FutureTask`
Create a new task and capture the `inheritableThreadLocals` from the current goroutine.
This function returns a `FutureTask` instance, but the returned task will not run automatically.
You can run it in a sub-goroutine or goroutine-pool by the `FutureTask.Run()` method, and wait by the `FutureTask.Get()` or `FutureTask.GetWithTimeout()` method.
If a `panic` occurs while the returned task is running, it will be caught; the `panic` will be triggered again when calling the `FutureTask.Get()` or `FutureTask.GetWithTimeout()` method.
#### `WrapWaitResultTask(fun CancelCallable) FutureTask`
Create a new task and capture the `inheritableThreadLocals` from the current goroutine.
This function returns a `FutureTask` instance, but the returned task will not run automatically.
You can run it in a sub-goroutine or goroutine-pool by the `FutureTask.Run()` method, and wait and get the result by the `FutureTask.Get()` or `FutureTask.GetWithTimeout()` method.
If a `panic` occurs while the returned task is running, it will be caught; the `panic` will be triggered again when calling the `FutureTask.Get()` or `FutureTask.GetWithTimeout()` method.
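A minimal sketch (not from the original README) of wrapping a task, running it manually and collecting its result:
```
package main
import (
"fmt"
"github.com/timandy/routine"
)
func main() {
task := routine.WrapWaitResultTask(func(token routine.CancelToken) any {
return "done"
})
// The wrapped task does not start by itself; run it wherever you like,
// e.g. in a plain goroutine or a goroutine pool.
go task.Run()
fmt.Println(task.Get()) // done
}
```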
#### `Go(fun Runnable)`
Start a new coroutine and automatically copy all contextual `inheritableThreadLocals` data of the current coroutine to the new coroutine.
Any `panic` while the child coroutine is executing will be caught and the stack automatically printed.
#### `GoWait(fun CancelRunnable) FutureTask`
Start a new coroutine and automatically copy all contextual `inheritableThreadLocals` data of the current coroutine to the new coroutine.
You can wait for the sub-coroutine to finish executing through the returned `FutureTask`'s `Get()` or `GetWithTimeout()` method.
Any `panic` while the child coroutine is executing will be caught and thrown again when `FutureTask.Get()` or `FutureTask.GetWithTimeout()` is called.
#### `GoWaitResult(fun CancelCallable) FutureTask`
Start a new coroutine and automatically copy all contextual `inheritableThreadLocals` data of the current coroutine to the new coroutine.
You can wait for the sub-coroutine to finish executing and get its return value through the returned `FutureTask`'s `Get()` or `GetWithTimeout()` method.
Any `panic` while the child coroutine is executing will be caught and thrown again when `FutureTask.Get()` or `FutureTask.GetWithTimeout()` is called.
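A minimal sketch (not from the original README) of waiting for a sub-coroutine's return value:
```
package main
import (
"fmt"
"github.com/timandy/routine"
)
func main() {
task := routine.GoWaitResult(func(token routine.CancelToken) any {
return 21 * 2
})
// Get blocks until the sub-coroutine finishes; a panic in the
// sub-coroutine would be re-raised here.
fmt.Println(task.Get()) // 42
}
```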
[More API Documentation](https://pkg.go.dev/github.com/timandy/routine#section-documentation)
### Garbage Collection
`routine` allocates a `thread` structure for each coroutine, which stores context variable information related to the coroutine.
A pointer to this structure is stored on the `g.labels` field of the coroutine structure.
When the coroutine finishes executing and exits, `g.labels` will be set to `nil`, no longer referencing the `thread` structure.
The `thread` structure will be collected at the next `GC`.
If the data stored in `thread` is not additionally referenced, these data will be collected together.
### Support Grid
| | **`darwin`** | **`linux`** | **`windows`** | **`freebsd`** | **`js`** | |
| --- | --- | --- | --- | --- | --- | --- |
| **`386`** | | ✅ | ✅ | ✅ | | **`386`** |
| **`amd64`** | ✅ | ✅ | ✅ | ✅ | | **`amd64`** |
| **`armv6`** | | ✅ | | | | **`armv6`** |
| **`armv7`** | | ✅ | | | | **`armv7`** |
| **`arm64`** | ✅ | ✅ | | | | **`arm64`** |
| **`loong64`** | | ✅ | | | | **`loong64`** |
| **`mips`** | | ✅ | | | | **`mips`** |
| **`mipsle`** | | ✅ | | | | **`mipsle`** |
| **`mips64`** | | ✅ | | | | **`mips64`** |
| **`mips64le`** | | ✅ | | | | **`mips64le`** |
| **`ppc64`** | | ✅ | | | | **`ppc64`** |
| **`ppc64le`** | | ✅ | | | | **`ppc64le`** |
| **`riscv64`** | | ✅ | | | | **`riscv64`** |
| **`s390x`** | | ✅ | | | | **`s390x`** |
| **`wasm`** | | | | | ✅ | **`wasm`** |
| | **`darwin`** | **`linux`** | **`windows`** | **`freebsd`** | **`js`** | |
✅: Supported
### Thanks
Thanks to all [contributors](https://github.com/timandy/routine/graphs/contributors) for their contributions!
### *License*
`routine` is released under the [Apache License 2.0](https://github.com/timandy/routine/blob/v1.1.2/LICENSE).
```
Copyright 2021-2023 TimAndy
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
```
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [func Go(fun Runnable)](#Go)
* [func Goid() int64](#Goid)
* [type Callable](#Callable)
* [type CancelCallable](#CancelCallable)
* [type CancelRunnable](#CancelRunnable)
* [type CancelToken](#CancelToken)
* [type Cloneable](#Cloneable)
* [type FutureCallable](#FutureCallable)
* [type FutureTask](#FutureTask)
* + [func GoWait(fun CancelRunnable) FutureTask](#GoWait)
+ [func GoWaitResult(fun CancelCallable) FutureTask](#GoWaitResult)
+ [func NewFutureTask(callable FutureCallable) FutureTask](#NewFutureTask)
+ [func WrapTask(fun Runnable) FutureTask](#WrapTask)
+ [func WrapWaitResultTask(fun CancelCallable) FutureTask](#WrapWaitResultTask)
+ [func WrapWaitTask(fun CancelRunnable) FutureTask](#WrapWaitTask)
* [type Runnable](#Runnable)
* [type RuntimeError](#RuntimeError)
* + [func NewRuntimeError(cause any) RuntimeError](#NewRuntimeError)
+ [func NewRuntimeErrorWithMessage(message string) RuntimeError](#NewRuntimeErrorWithMessage)
+ [func NewRuntimeErrorWithMessageCause(message string, cause any) RuntimeError](#NewRuntimeErrorWithMessageCause)
* [type Supplier](#Supplier)
* [type ThreadLocal](#ThreadLocal)
* + [func NewInheritableThreadLocal() ThreadLocal](#NewInheritableThreadLocal)
+ [func NewInheritableThreadLocalWithInitial(supplier Supplier) ThreadLocal](#NewInheritableThreadLocalWithInitial)
+ [func NewThreadLocal() ThreadLocal](#NewThreadLocal)
+ [func NewThreadLocalWithInitial(supplier Supplier) ThreadLocal](#NewThreadLocalWithInitial)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [Go](https://github.com/timandy/routine/blob/v1.1.2/api_routine.go#L47) [¶](#Go)
```
func Go(fun [Runnable](#Runnable))
```
Go starts a new goroutine, and copies inheritableThreadLocals from the current goroutine.
This function automatically invokes the func and prints the error stack if a panic occurs in the goroutine.
####
func [Goid](https://github.com/timandy/routine/blob/v1.1.2/api_goid.go#L4) [¶](#Goid)
```
func Goid() [int64](/builtin#int64)
```
Goid return the current goroutine's unique id.
### Types [¶](#pkg-types)
####
type [Callable](https://github.com/timandy/routine/blob/v1.1.2/api_routine.go#L7) [¶](#Callable)
added in v1.0.7
```
type Callable func() [any](/builtin#any)
```
Callable provides a function that returns a value of type any.
####
type [CancelCallable](https://github.com/timandy/routine/blob/v1.1.2/api_routine.go#L13) [¶](#CancelCallable)
added in v1.0.9
```
type CancelCallable func(token [CancelToken](#CancelToken)) [any](/builtin#any)
```
CancelCallable provides a cancellable function that returns a value of type any.
####
type [CancelRunnable](https://github.com/timandy/routine/blob/v1.1.2/api_routine.go#L10) [¶](#CancelRunnable)
added in v1.0.9
```
type CancelRunnable func(token [CancelToken](#CancelToken))
```
CancelRunnable provides a cancellable function without return values.
####
type [CancelToken](https://github.com/timandy/routine/blob/v1.1.2/api_future_task.go#L9) [¶](#CancelToken)
added in v1.0.9
```
type CancelToken interface {
// IsCanceled returns true if task was canceled.
IsCanceled() [bool](/builtin#bool)
// Cancel notifies the waiting coroutine that the task has canceled and returns stack information.
Cancel()
}
```
CancelToken propagates notification that operations should be canceled.
####
type [Cloneable](https://github.com/timandy/routine/blob/v1.1.2/api_cloneable.go#L4) [¶](#Cloneable)
added in v1.0.3
```
type Cloneable interface {
// Clone create and returns a copy of this object.
Clone() [any](/builtin#any)
}
```
Cloneable interface to support copy itself.
####
type [FutureCallable](https://github.com/timandy/routine/blob/v1.1.2/api_future_task.go#L6) [¶](#FutureCallable)
added in v1.1.2
```
type FutureCallable func(task [FutureTask](#FutureTask)) [any](/builtin#any)
```
FutureCallable provides a future function that returns a value of type any.
####
type [FutureTask](https://github.com/timandy/routine/blob/v1.1.2/api_future_task.go#L18) [¶](#FutureTask)
added in v1.1.2
```
type FutureTask interface {
// IsDone returns true if completed in any fashion: normally, exceptionally or via cancellation.
IsDone() [bool](/builtin#bool)
// IsCanceled returns true if task was canceled.
IsCanceled() [bool](/builtin#bool)
// IsFailed returns true if completed exceptionally.
IsFailed() [bool](/builtin#bool)
// Complete notifies the waiting coroutine that the task has completed normally and returns the execution result.
Complete(result [any](/builtin#any))
// Cancel notifies the waiting coroutine that the task has canceled and returns stack information.
Cancel()
// Fail notifies the waiting coroutine that the task has terminated due to panic and returns stack information.
Fail(error [any](/builtin#any))
// Get return the execution result of the sub-coroutine, if there is no result, return nil.
// If task is canceled, a panic with cancellation will be raised.
// If panic is raised during the execution of the sub-coroutine, it will be raised again at this time.
Get() [any](/builtin#any)
// GetWithTimeout return the execution result of the sub-coroutine, if there is no result, return nil.
// If task is canceled, a panic with cancellation will be raised.
// If panic is raised during the execution of the sub-coroutine, it will be raised again at this time.
// If the deadline is reached, a panic with timeout error will be raised.
GetWithTimeout(timeout [time](/time).[Duration](/time#Duration)) [any](/builtin#any)
// Run execute the task, the method can be called repeatedly, but the task will only execute once.
Run()
}
```
FutureTask provides a way to wait for the sub-coroutine to finish executing, get the return value of the sub-coroutine, and catch the sub-coroutine panic.
####
func [GoWait](https://github.com/timandy/routine/blob/v1.1.2/api_routine.go#L55) [¶](#GoWait)
added in v1.0.2
```
func GoWait(fun [CancelRunnable](#CancelRunnable)) [FutureTask](#FutureTask)
```
GoWait starts a new goroutine, and copies inheritableThreadLocals from the current goroutine.
This function automatically invokes the func and returns a FutureTask instance, so the caller can wait via the FutureTask.Get or FutureTask.GetWithTimeout method.
If a panic occurs in the goroutine, it will be triggered again when calling the FutureTask.Get or FutureTask.GetWithTimeout method.
####
func [GoWaitResult](https://github.com/timandy/routine/blob/v1.1.2/api_routine.go#L64) [¶](#GoWaitResult)
added in v1.0.2
```
func GoWaitResult(fun [CancelCallable](#CancelCallable)) [FutureTask](#FutureTask)
```
GoWaitResult starts a new goroutine, and copies inheritableThreadLocals from the current goroutine.
This function automatically invokes the func and returns a FutureTask instance, so the caller can wait and get the result via the FutureTask.Get or FutureTask.GetWithTimeout method.
If a panic occurs in the goroutine, it will be triggered again when calling the FutureTask.Get or FutureTask.GetWithTimeout method.
####
func [NewFutureTask](https://github.com/timandy/routine/blob/v1.1.2/api_future_task.go#L53) [¶](#NewFutureTask)
added in v1.1.2
```
func NewFutureTask(callable [FutureCallable](#FutureCallable)) [FutureTask](#FutureTask)
```
NewFutureTask creates a new FutureTask instance.
####
func [WrapTask](https://github.com/timandy/routine/blob/v1.1.2/api_routine.go#L19) [¶](#WrapTask)
added in v1.1.2
```
func WrapTask(fun [Runnable](#Runnable)) [FutureTask](#FutureTask)
```
WrapTask creates a new task and captures the inheritableThreadLocals from the current goroutine.
This function returns a FutureTask instance, but the returned task will not run automatically.
You can run it in a sub-goroutine or goroutine-pool by the FutureTask.Run method, and wait by the FutureTask.Get or FutureTask.GetWithTimeout method.
If a panic occurs while the returned task is running, it will be caught and the error stack printed; the panic will be triggered again when calling the FutureTask.Get or FutureTask.GetWithTimeout method.
####
func [WrapWaitResultTask](https://github.com/timandy/routine/blob/v1.1.2/api_routine.go#L39) [¶](#WrapWaitResultTask)
added in v1.1.2
```
func WrapWaitResultTask(fun [CancelCallable](#CancelCallable)) [FutureTask](#FutureTask)
```
WrapWaitResultTask creates a new task and captures the inheritableThreadLocals from the current goroutine.
This function returns a FutureTask instance, but the returned task will not run automatically.
You can run it in a sub-goroutine or goroutine-pool by the FutureTask.Run method, and wait and get the result by the FutureTask.Get or FutureTask.GetWithTimeout method.
If a panic occurs while the returned task is running, it will be caught; the panic will be triggered again when calling the FutureTask.Get or FutureTask.GetWithTimeout method.
####
func [WrapWaitTask](https://github.com/timandy/routine/blob/v1.1.2/api_routine.go#L29) [¶](#WrapWaitTask)
added in v1.1.2
```
func WrapWaitTask(fun [CancelRunnable](#CancelRunnable)) [FutureTask](#FutureTask)
```
WrapWaitTask creates a new task and captures the inheritableThreadLocals from the current goroutine.
This function returns a FutureTask instance, but the returned task will not run automatically.
You can run it in a sub-goroutine or goroutine-pool by the FutureTask.Run method, and wait by the FutureTask.Get or FutureTask.GetWithTimeout method.
If a panic occurs while the returned task is running, it will be caught; the panic will be triggered again when calling the FutureTask.Get or FutureTask.GetWithTimeout method.
####
type [Runnable](https://github.com/timandy/routine/blob/v1.1.2/api_routine.go#L4) [¶](#Runnable)
added in v1.0.7
```
type Runnable func()
```
Runnable provides a function without return values.
####
type [RuntimeError](https://github.com/timandy/routine/blob/v1.1.2/api_error.go#L4) [¶](#RuntimeError)
added in v1.0.8
```
type RuntimeError interface {
// Goid returns the goid of the coroutine that created the current error.
Goid() [int64](/builtin#int64)
// Gopc returns the pc of go statement that created the current error coroutine.
Gopc() [uintptr](/builtin#uintptr)
// Message returns the detail message string of this error.
Message() [string](/builtin#string)
// StackTrace returns an array of stack trace elements, each representing one stack frame.
StackTrace() [][uintptr](/builtin#uintptr)
// Cause returns the cause of this error or nil if the cause is nonexistent or unknown.
Cause() [RuntimeError](#RuntimeError)
// Error returns a short description of this error.
Error() [string](/builtin#string)
}
```
RuntimeError is a runtime error carrying stack info.
####
func [NewRuntimeError](https://github.com/timandy/routine/blob/v1.1.2/api_error.go#L25) [¶](#NewRuntimeError)
added in v1.0.8
```
func NewRuntimeError(cause [any](/builtin#any)) [RuntimeError](#RuntimeError)
```
NewRuntimeError creates a new RuntimeError instance.
####
func [NewRuntimeErrorWithMessage](https://github.com/timandy/routine/blob/v1.1.2/api_error.go#L31) [¶](#NewRuntimeErrorWithMessage)
added in v1.0.8
```
func NewRuntimeErrorWithMessage(message [string](/builtin#string)) [RuntimeError](#RuntimeError)
```
NewRuntimeErrorWithMessage creates a new RuntimeError instance.
####
func [NewRuntimeErrorWithMessageCause](https://github.com/timandy/routine/blob/v1.1.2/api_error.go#L37) [¶](#NewRuntimeErrorWithMessageCause)
added in v1.0.8
```
func NewRuntimeErrorWithMessageCause(message [string](/builtin#string), cause [any](/builtin#any)) [RuntimeError](#RuntimeError)
```
NewRuntimeErrorWithMessageCause creates a new RuntimeError instance.
####
type [Supplier](https://github.com/timandy/routine/blob/v1.1.2/api_thread_local.go#L16) [¶](#Supplier)
added in v1.0.2
```
type Supplier func() [any](/builtin#any)
```
Supplier provides a function that returns a value of type any.
####
type [ThreadLocal](https://github.com/timandy/routine/blob/v1.1.2/api_thread_local.go#L4) [¶](#ThreadLocal)
added in v1.0.2
```
type ThreadLocal interface {
// Get returns the value in the current goroutine's local threadLocals or inheritableThreadLocals, if it was set before.
Get() [any](/builtin#any)
// Set copies the value into the current goroutine's local threadLocals or inheritableThreadLocals.
Set(value [any](/builtin#any))
// Remove deletes the value from the current goroutine's local threadLocals or inheritableThreadLocals.
Remove()
}
```
ThreadLocal provides goroutine-local variables.
####
func [NewInheritableThreadLocal](https://github.com/timandy/routine/blob/v1.1.2/api_thread_local.go#L34) [¶](#NewInheritableThreadLocal)
added in v1.0.2
```
func NewInheritableThreadLocal() [ThreadLocal](#ThreadLocal)
```
NewInheritableThreadLocal creates and returns a new ThreadLocal instance.
The initial value stored is nil.
The value can be inherited by sub-goroutines which are started by the Go, GoWait, GoWaitResult methods.
The value can be captured into a FutureTask created by the WrapTask, WrapWaitTask, WrapWaitResultTask methods.
####
func [NewInheritableThreadLocalWithInitial](https://github.com/timandy/routine/blob/v1.1.2/api_thread_local.go#L42) [¶](#NewInheritableThreadLocalWithInitial)
added in v1.0.2
```
func NewInheritableThreadLocalWithInitial(supplier [Supplier](#Supplier)) [ThreadLocal](#ThreadLocal)
```
NewInheritableThreadLocalWithInitial creates and returns a new ThreadLocal instance.
The initial value is the return value of the supplier method.
The value can be inherited by sub-goroutines which are started by the Go, GoWait, GoWaitResult methods.
The value can be captured into a FutureTask created by the WrapTask, WrapWaitTask, WrapWaitResultTask methods.
####
func [NewThreadLocal](https://github.com/timandy/routine/blob/v1.1.2/api_thread_local.go#L20) [¶](#NewThreadLocal)
added in v1.0.2
```
func NewThreadLocal() [ThreadLocal](#ThreadLocal)
```
NewThreadLocal creates and returns a new ThreadLocal instance.
The initial value stored is nil.
####
func [NewThreadLocalWithInitial](https://github.com/timandy/routine/blob/v1.1.2/api_thread_local.go#L26) [¶](#NewThreadLocalWithInitial)
added in v1.0.2
```
func NewThreadLocalWithInitial(supplier [Supplier](#Supplier)) [ThreadLocal](#ThreadLocal)
```
NewThreadLocalWithInitial creates and returns a new ThreadLocal instance.
The initial value is the return value of the supplier method.
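A sketch showing an inheritable thread-local captured into a wrapped task; the variable names and import path are illustrative assumptions:
```
package main

import (
	"fmt"

	"github.com/timandy/routine"
)

// Supplier returning the initial value when nothing has been set yet.
var traceID = routine.NewInheritableThreadLocalWithInitial(func() any {
	return "unknown"
})

func main() {
	traceID.Set("req-42")

	// WrapTask captures the inheritableThreadLocals at creation time,
	// so the value set above is visible when the task runs later.
	task := routine.WrapTask(func() {
		fmt.Println("in task:", traceID.Get()) // req-42
	})
	go task.Run()
	task.Get()

	traceID.Remove()
	fmt.Println("after Remove:", traceID.Get()) // falls back to the initial value
}
```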
@sakitam-gis/ol5 | npm | JavaScript | OpenLayers
===
[OpenLayers](https://openlayers.org/) is a high-performance, feature-packed library for creating interactive maps on the web. It can display map tiles, vector data and markers loaded from any source on any web page. OpenLayers has been developed to further the use of geographic information of all kinds. It is completely free, Open Source JavaScript, released under the 2-clause BSD License (also known as the FreeBSD license).
Getting Started
---
Install the [`ol` package](https://www.npmjs.com/package/ol):
```
npm install ol
```
Import just what you need for your application:
```
import Map from 'ol/Map';
import View from 'ol/View';
import TileLayer from 'ol/layer/Tile';
import XYZ from 'ol/source/XYZ';
new Map({
target: 'map',
layers: [
new TileLayer({
source: new XYZ({
url: 'https://{a-c}.tile.openstreetmap.org/{z}/{x}/{y}.png'
})
})
],
view: new View({
center: [0, 0],
zoom: 2
})
});
```
See the following examples for more detail on bundling OpenLayers with your application:
* Using [Rollup](https://github.com/openlayers/ol-rollup)
* Using [Webpack](https://github.com/openlayers/ol-webpack)
* Using [Parcel](https://github.com/openlayers/ol-parcel)
* Using [Browserify](https://github.com/openlayers/ol-browserify)
Supported Browsers
---
OpenLayers runs on all modern browsers that support [HTML5](https://html.spec.whatwg.org/multipage/) and [ECMAScript 5](http://www.ecma-international.org/ecma-262/5.1/). This includes Chrome, Firefox, Safari and Edge. For older browsers and platforms like Internet Explorer (down to version 9) and Android 4.x, [polyfills](http://polyfill.io) for `requestAnimationFrame` and `Element.prototype.classList` are required, and using the KML format requires a polyfill for `URL`.
Documentation
---
Check out the [hosted examples](https://openlayers.org/en/latest/examples/), the [workshop](https://openlayers.org/workshop/) or the [API documentation](https://openlayers.org/en/latest/apidoc/).
Bugs
---
Please use the [GitHub issue tracker](https://github.com/openlayers/openlayers/issues) for all bugs and feature requests. Before creating a new issue, do a quick search to see if the problem has been reported already.
Contributing
---
Please see our guide on [contributing](https://github.com/openlayers/openlayers/blob/HEAD/CONTRIBUTING.md) if you're interested in getting involved.
Community
---
* Need help? Find it on [Stack Overflow using the tag 'openlayers'](http://stackoverflow.com/questions/tagged/openlayers)
* Follow [@openlayers](https://twitter.com/openlayers) on Twitter
Readme
---
### Keywords
* map
* mapping
* ol |
virtual_fields_filler | hex | Erlang | Virtual Fields Filler
===
Fill the virtual fields for your Ecto structs and nested structs recursively.
In your Schema, add the `fill_virtual_fields/1` function:
```
defmodule MyApp.User do
@behaviour VirtualFieldsFiller
use Ecto.Schema
alias __MODULE__
schema "users" do
field(:first_name, :string)
field(:last_name, :string)
field(:full_name, :string, virtual: true)
timestamps(type: :utc_datetime)
end
def fill_virtual_fields(%User{} = user) do
first_name = Map.fetch!(user, :first_name)
last_name = Map.fetch!(user, :last_name)
Map.put(user, :full_name, "#{first_name} #{last_name}")
end end
```
Then, after fetching a user (or list of users) from your DB, call [`VirtualFieldsFiller.fill_virtual_fields/1`](VirtualFieldsFiller.html#fill_virtual_fields/1):
```
import VirtualFieldsFiller
User
|> Repo.get!(id)
|> fill_virtual_fields()
```
If you use [QueryBuilder](https://github.com/mathieuprog/query_builder), you may organize your code as follows:
```
# in your controller:
Blog.get_article_by_id(article_id, preload: [:category, comments: :user])
# in the Blog context
def get_article_by_id(id, opts \\ []) do
QueryBuilder.where(Article, id: id)
|> QueryBuilder.from_list(opts)
|> Repo.one!()
|> fill_virtual_fields()
end
```
As `QueryBuilder` encourages passing the association names that need to be preloaded to the context function, you may call `fill_virtual_fields/1` in the context function, right after the call to the `Repo` function, so that any nested struct gets its virtual fields filled.
Installation
---
Add `virtual_fields_filler` for Elixir as a dependency in your `mix.exs` file:
```
def deps do
[
{:virtual_fields_filler, "~> 0.3.0"}
]
end
```
HexDocs
---
HexDocs documentation can be found at <https://hexdocs.pm/virtual_fields_filler>.
VirtualFieldsFiller behaviour
===
Summary
===
[Functions](#functions)
---
[fill_virtual_fields(entity)](#fill_virtual_fields/1)
[Callbacks](#callbacks)
---
[fill_virtual_fields(struct)](#c:fill_virtual_fields/1)
metanit_com_cpp_tutorial | free_programming_book | Ruby | # C++ Programming Language Guide
Chapter 1. Introduction to C++
The C++ programming language
First program on Windows. The g++ compiler
First program on Windows. The Clang compiler
First program on Linux. The g++ compiler
First program on macOS. The Clang compiler
Configuring compilation options
Localization and Cyrillic in the console
Chapter 2. Fundamentals of the C++ language
Program structure
Variables
Data types
Constants
Arithmetic operations
Static typing and type conversions
Bitwise operations
Assignment operations
Conditional expressions
The if-else construct and the ternary operator
The switch construct
Loops
References
Arrays
Multidimensional arrays
Character arrays
Introduction to strings
Chapter 3. Pointers
What pointers are
Operations with pointers
Pointer arithmetic
Constants and pointers
Pointers and arrays
Chapter 4. Functions
Defining and declaring functions
Scope of objects
Function parameters
Passing arguments by value and by reference
Constant parameters
The return statement and returning a result
Pointers as function parameters
Arrays as function parameters
Parameters of the main function
Function overloading
Recursive functions
Function pointers
Function pointers as parameters
Function type
Splitting a program into files
Chapter 5. Dynamic memory and smart pointers
Dynamic objects
Dynamic arrays
unique_ptr<T>
shared_ptr<T>
Chapter 6. Object-oriented programming
Defining classes
Constructors and object initialization
Access control. Encapsulation
Declaring and defining class member functions
The copy constructor
The this keyword
Friend functions and classes
Static class members
The destructor
Structures
Enumerations
Inheritance
Hiding base class functionality
Multiple inheritance
Type conversion
Dynamic conversion
Specifics of dynamic binding
Pure virtual functions and abstract classes
Operator overloading
Type conversion operators
The subscript operator
Redefining the assignment operator
Namespaces
Nested classes
Chapter 7. Exceptions
Exception handling
Nested try-catch
Creating custom exception types
The exception type
Exception types
Chapter 8. Templates
Function templates
Class templates
Class template specialization
Inheritance and class templates
Chapter 9. Containers
Container types
Vector
Iterators
Operations with vectors
Array
List
Forward_list
Deque
The std::stack stack
The std::queue queue
The std::priority_queue priority queue
Sets
The std::map dictionary
Span
Chapter 10. Strings
Defining strings
Strings with Unicode support
Type conversion and strings
Comparing strings
Searching for a substring
Modifying a string
Operations with characters
A word-count program
The std::string_view type
Chapter 11. Move semantics
rvalue
The move constructor
The move assignment operator
The role of noexcept in moving
Chapter 12. Function objects and lambda expressions
Function objects
Lambda expressions
Capturing outer values in lambda expressions
The std::function<T> template
Chapter 13. Algorithms and views
Minimum and maximum elements
Searching for elements
Copying elements
Sorting
Views. Filtering
Data projection
Skipping elements. drop_view and drop_while_view
Taking a range of elements. take_view and take_while_view
Chaining views
Chapter 14. Template constraints
The requires clause
Concepts
The requires expression
Type constraints for auto
Chapter 15. Streams and the input/output system
Basic types for working with streams
File streams. Opening and closing
Redefining the input and output operators
Chapter 16. The C++ standard library
Mathematical constants and operations
std::optional<T>
Chapter 17. C++ idioms
The copy-and-swap idiom
The move-and-swap idiom
Chapter 18. Development environments
First program in Visual Studio
First program in Qt Creator
# Introduction to C++
The C++ programming language is a high-level, compiled, general-purpose programming language with static typing that is suitable for building a wide variety of applications. Today C++ is one of the most popular and widespread languages.
Its roots go back to the C language, which was developed in 1969–1973 at Bell Labs by the programmer Dennis Ritchie. In the early 1980s the Danish programmer Bjarne Stroustrup, who at the time worked at Bell Labs, developed C++ as an extension of C. In fact, at first C++ simply supplemented C with some object-oriented programming capabilities, which is why Stroustrup initially called it "C with classes".
Later the new language began to gain popularity. New features were added that made it not just an add-on to C but a completely new programming language. As a result, "C with classes" was renamed C++, and since then the two languages have evolved independently of each other.
The current language standard can be found at https://eel.is/c++draft/
C++ is a powerful language that inherited rich memory-management capabilities from C. For that reason C++ is often used in systems programming, in particular for creating operating systems, drivers, various utilities, antivirus software and so on. Incidentally, Windows is largely written in C++. But the language is not limited to systems programming: C++ can be used in programs of any kind where speed and performance matter. It is often used for graphical applications and various desktop programs, and it is especially common for games with rich visualization. In addition, the mobile area, where C++ has also found its place, has been gaining momentum recently. Even in web development C++ can be used to build web applications or auxiliary services that support them. In short, C++ is a general-purpose language in which practically any kind of program can be created.
C++ is a compiled language, which means that a compiler translates C++ source code into an executable file containing a set of machine instructions. But different platforms have their own peculiarities, so compiled programs cannot simply be moved from one platform to another and run there. However, at the source-code level C++ programs are largely portable, provided no OS-specific functions are used. And the availability of compilers, libraries and development tools for almost every common platform allows the same C++ source code to be compiled into applications for those platforms.
Unlike C, the C++ language allows writing applications in an object-oriented style, representing a program as a set of interacting classes and objects, which simplifies the creation of large applications.
In 1979–80 Bjarne Stroustrup developed an extension to the C language — "C with classes". In 1983 the language was renamed C++.
In 1985 the first commercial version of C++ was released, along with the first edition of the book "The C++ Programming Language", which provided the first description of the language in the absence of an official standard.
In 1989 a new version of the language, C++ 2.0, was released with a number of new features. After that the language evolved relatively slowly until 2011. However, in 1998 the first attempt to standardize the language was made by ISO (International Organization for Standardization). The first standard was called ISO/IEC 14882:1998, or C++98 for short. Later, in 2003, a new version of the standard, C++03, was published.
In 2011 the new C++11 standard was published, containing many additions and enriching C++ with a large number of new features. Since then several more standards have been released. At the time of writing, the latest standard, C++20, was published in December 2020; the C++23 standard is expected in 2023.
To write C++ programs you need at least two components: a text editor for writing the source code and a compiler that takes a source file and compiles it into an executable. Any editor you like will do; I would recommend the cross-platform editor Visual Studio Code, which supports plugins for different languages, including C++.
If the text editor is a relatively easy choice, picking a compiler can be a real problem, since there are many different compilers that differ in various respects, in particular in how they implement the standards. A basic list of C++ compilers can be found on Wikipedia, and the page https://en.cppreference.com/w/cpp/compiler_support shows compiler support for the latest standards. In general, it is often recommended to get acquainted with at least the three main compilers:
g++ from the GNU project (part of the GCC compiler collection)
Clang (available as part of the LLVM project)
the C++ compiler from Microsoft (used in Visual Studio)
For example, according to the developer survey conducted by JetBrains in 2022, the usage shares of the various compilers among developers were distributed as follows:
Integrated development environments (IDEs) such as Visual Studio, NetBeans, Eclipse, Qt Creator and others can also be used to create programs; they simplify application development.
# First Program on Windows. The g++ Compiler
Let's look at creating a first, very simple C++ program with the g++ compiler, which today is one of the most popular C++ compilers, is available for different platforms and is distributed as part of the GCC compiler collection. More information about g++ can be found on the official project site https://gcc.gnu.org/.
The GCC compiler collection is distributed in various builds. For Windows, one of the most popular is the development toolchain from the non-commercial MSYS2 project. Note that MSYS2 requires a 64-bit version of Windows 7 or higher (that is, Vista, XP and earlier versions are not suitable).
So, download the MSYS2 installer from the official MSYS2 site:
The first installation step asks for the installation directory. By default it is C:\msys64:
Leave the default installation directory (you can change it if you wish). The next step configures the Start menu shortcut, and then the installation itself is performed. Once installation completes, a final window is shown; click Finish.
After the installation finishes, the console application MSYS2.exe starts. If for some reason it does not start, find the file usrt_64.exe in the installation folder C:/msys64:
Now we need to install the GCC compiler set itself. To do this, enter the following command in that application:
> pacman -S mingw-w64-ucrt-x86_64-gcc
MSYS2 uses the pacman package manager to manage packages, and this command tells pacman to install the package
```
mingw-w64-ucrt-x86_64-gcc
```
which provides the GCC compiler collection (the name of the package to install is given after the `-S` option).
After the installation completes we can start programming in C++. If we open the installation directory and go to the folder C:\msys64\ucrt64\bin, we will find all the necessary compiler files there:
In particular, the file g++.exe is the compiler for the C++ language.
Next, to make it easier to launch the compiler, we can add its path to the environment variables. To do this, type "Edit environment variables for your account" in the Windows search box:
The Environment Variables window opens:
Add the compiler path `C:\msys64\ucrt64\bin`:
To make sure that the GCC compiler set has been installed successfully, enter the following command:
> gcc --version
This should display the version of the compilers.
So, the compiler is installed, and now we can write our first program. For that we need any text editor to write the source code; you can take the popular Visual Studio Code editor or even the plain built-in Notepad.
Now create a folder for source files on drive C. In that folder create a new text file and rename it to hello.cpp. Essentially, C++ source files are ordinary text files that, as a rule, have the .cpp extension.
In my case the file hello.cpp is located in the folder C:\cpp.
Now let's put the simplest code in the hello.cpp file, which will print a string to the console:
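A minimal sketch of hello.cpp matching the description that follows (the original listing itself is not reproduced in this extraction):
```
#include <iostream>                      // include console input/output support

int main()                               // execution of the application starts here
{
    std::cout << "Hello METANIT.COM!";   // print the string to the console
    return 0;                            // report successful completion to the OS
}
```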
To print a string to the console we need to include the required functionality. That is the purpose of the line at the beginning of the file:
This line is a preprocessor directive that includes the iostream library, which is needed to print the string to the console.
Next comes the definition of the main function. The main function must be present in every C++ program; it is where execution of the application starts.
The main function consists of four elements:
The return type. In this case it is int, which indicates that the function must return an integer.
The function name. In this case the function is called main.
The parameter list. After the function name comes the parameter list in parentheses. Here the parentheses are empty, that is, main takes no parameters.
The function body. After the parameter list, the body of the function goes in curly braces. This is where the actions performed by the main function are defined.
In the function body the string is printed to the console. The standard output stream std::cout is used to access the console. Using the << operator, the character string to be printed, "Hello METANIT.COM!", is passed to this stream (in this case effectively to the console).
At the end we exit the function with the return statement. Since the function must return an integer, the number 0 follows return. Zero serves as an indicator that the program finished successfully.
Every statement in C++ ends with a semicolon.
Each line is accompanied by a comment. Everything written after a double slash // is a comment. Comments are ignored during compilation and are not part of the program code; they only describe it. A comment helps to understand what the program does.
Now let's compile this file. Open the Windows command prompt and first use the cd command to go to the folder with the source file:
> cd C:\cpp
To compile the source code, pass the file hello.cpp to the g++ compiler as an argument:
> g++ hello.cpp -o hello
The optional parameter `-o hello` specifies that the compiled file will be named hello.exe. If this parameter is omitted, the file gets the default name a.exe.
After this command runs, an executable file is produced, which on Windows is named hello.exe by default. We can then run this file, and the console prints the string "Hello METANIT.COM!", exactly as written in the code.
If the PowerShell shell is used instead of the command prompt, run the file as "./hello".
Note that compilation and execution can be combined in the following command:
> g++ hello.cpp -o hello.exe & hello.exe
# Configuring Compilation Options
By default, no warnings are shown during compilation. Yet compiler warnings can point to certain problems in the code even when it compiles successfully. The simplest example: a variable is defined in the program but never used. During compilation the compiler can point out this problem, helping the developer spot it and react right away.
To compile with warnings enabled, the `-Wall` flag is used: > g++ -Wall source.cpp
There are different versions of the language standard, and each of them may add extra functionality that we might want to use in a program. The -std= flag selects a specific standard by appending `c++11`, `c++14`, `c++17` or `c++20`. For example, to compile against the `c++17` standard: > g++ -std=c++17 source.cpp
Similarly, to compile against the `C++20` standard the following command is used: > g++ -std=c++20 source.cpp
To guarantee that the program strictly conforms to a particular standard, the `-pedantic` flag can be specified: > g++ -std=c++11 -Wall -pedantic app.cpp g++ -std=c++14 -Wall -pedantic app.cpp g++ -std=c++17 -Wall -pedantic app.cpp g++ -std=c++20 -Wall -pedantic app.cpp
In this case the compiler generates warnings if the code does not follow the rules of the standard.
To run the application automatically after compilation, the following command can be used:
> g++ source.cpp & ./a.out
Various options can be combined in a single command:
> g++ -std=c++20 -Wall -pedantic app.cpp -o app & app
The main options of the Clang compiler in many cases mirror those of gcc. For example, compiling with Clang against a specific standard with warnings enabled:
> clang++ -std=c++20 -Wall -pedantic app.cpp -o app.exe & app.exe
# Localization and Cyrillic in the Console
If a program prints Cyrillic text to the console, we may run into a situation where unreadable characters are displayed instead of the Cyrillic ones. This is especially relevant for Windows. In that case we need to explicitly set the current locale (culture) for character output. In C++ this can be done with the built-in setlocale() function.
So, let's change the code used in the previous topics as follows:
Compiling and running on Windows may look like this:
> c:\cpp>g++ -std=c++20 -Wall -pedantic app.cpp -o app & app Р?С?РёР?РчС' Р?РёС?! c:\cpp
Instead of the expected text I get unintelligible characters. Now let's change the code, applying the setlocale() function:
Now, to print data to the console, the std::wcout object is used instead of `std::cout`; it is designed for working with Unicode characters. It is assumed here that the source file itself is encoded in UTF-8. In addition, the string literal is prefixed with the L character.
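A sketch of the corrected program as described: setlocale() selects the user's locale, and the string is wide (L prefix) and printed via std::wcout. The exact message is taken from the output shown below:
```
#include <clocale>    // setlocale
#include <iostream>

int main()
{
    setlocale(LC_ALL, "");                        // use the current user locale
    std::wcout << L"Привет мир!" << std::endl;    // wide string printed via wcout
    return 0;
}
```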
Let's compile and run the application again:
> c:\cpp>g++ -std=c++20 -Wall -pedantic app.cpp -o app & app Привет мир! c:\cpp
On some platforms, for example Ubuntu, this problem may not arise at all; in that case the setlocale call simply has no effect.
# Fundamentals of the C++ Language
A program in C++ consists of a set of statements. Each statement performs a certain action. A statement in C++ ends with a semicolon (;). This sign tells the compiler that the statement is complete. For example:
This line prints the string "Hello world!" to the console; it is a statement and therefore ends with a semicolon.
A set of statements can form a code block. A code block is enclosed in curly braces, and the statements are placed between the opening and closing braces:
This code block contains two statements, which print a certain string to the console.
Every C++ program must have at least one function — the main() function. Execution of the application begins with this function. Its name, main, is fixed and is always the same for all C++ programs.
A function is also a code block, so its body is enclosed in curly braces, between which a set of statements is defined.
In particular, the following main function was used when creating the first program:
The definition of the main function starts with the return type. The main function must in any case return a number, so its definition starts with the keyword int.
Then comes the function name, that is, main. After the name comes the parameter list in parentheses. In this case the main function takes no parameters, so empty parentheses follow the name. However, there are other variants of defining the main function that do use parameters. In particular, the following definition of main with parameters can often be seen:
After the parameter list comes the code block, which contains, in the form of statements, the actual actions performed by the main function.
At the end of the function comes the return statement:
This statement ends execution of the function, transferring control back to where the function was called. In the case of `main`, control is passed to the operating system. The number 0 after `return` tells the operating system that the function finished successfully, without errors. Note also that the `return 0;` statement can be omitted in the main function:
In the example above a string is printed to the console, but to use console output, the iostream library must be included at the beginning of the source file with the include directive.
The include directive is a preprocessor directive. Each preprocessor directive occupies a single line. Unlike regular C++ statements, which end with a semicolon ;, a preprocessor directive ends at the line break. In addition, a directive must start with the hash sign #. The "include" directive itself specifies which files and libraries are to be included in the program code at that point.
Source code can contain comments. Comments help to understand the meaning of the program and what its individual parts do. Comments are ignored during compilation and have no effect on how the application works or on its size.
C++ has two kinds of comments: single-line and multi-line. A single-line comment is placed on one line after a double slash //:
A multi-line comment is enclosed between the symbols /* comment text */. It can span several lines. For example:
Creating an executable file from C++ source code generally consists of three stages:
The preprocessor processes all preprocessor directives (for example, the `#include` directive)
The compiler processes each source file and produces an object file containing machine code. For example, the code may be spread across several source files, and a separate object file is created for each of them
The linker combines all the object files into a single program. This process is called linking
For example, suppose our source code is located in three .cpp files
# Variables
Variables are used in C++ to store data in a program. A variable is essentially a named region of memory. A variable has a type, a name and a value. The type defines what kind of data the variable can store.
Before it is used, any variable must be defined. The syntax for defining a variable looks like this:
A variable name is a sequence of alphanumeric characters and the underscore _. A variable name must begin with an alphabetic character or an underscore.
In addition, C++ keywords, for example for or if, cannot be used as variable names. There are not that many such words, and as you learn C++ you will get a feel for which words are keywords.
Note also that the following kinds of names are not recommended:
The reason is that such names are more likely to conflict with names (for example, variable names) generated by the compiler or defined in the standard C++ modules being included. That is why some recommend not starting a name with an underscore at all.
In general a variable is defined in the following form:
For example, the simplest variable definition:
Here a variable age of type int is defined. Since a variable definition is a statement, it ends with a semicolon.
Keep in mind as well that C++ is a case-sensitive language, which means the case of characters matters a great deal. So the following code defines two different variables:
Therefore the variable Age is not the same as the variable age.
Also, more than one variable with the same name cannot be declared, for example:
Such a definition causes an error at compile time.
After a variable is defined, it can be assigned some value. Assigning a variable its initial value is called initialization. C++ has three forms of initialization:
Assignment notation
Functional notation
Braced initialization
Let's look at each of these forms of initialization
The essence of assignment notation is that the variable is given some value using the assignment operator (the "equals" sign, =):
Here the number 20 is assigned as the value of the variable. Constant values of any type, such as numbers, characters or strings — for example 20, 123.456 (a fractional number), "A" or "hello" — are called literals. So in this case the variable is assigned the integer literal 20.
For example, let's define a variable in a program and print its value to the console:
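A minimal sketch of the described program; the value 28 matches the output shown below:
```
#include <iostream>

int main()
{
    int age;        // define a variable of type int
    age = 28;       // assignment notation: give it a value

    std::cout << "Age = " << age << std::endl;   // chain << to print a label and the value
    return 0;
}
```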
With a chain of << operators several values can be printed to the console.
After compiling and running the compiled program, the number 28 is printed to the console.
> Age = 28
A variable can be given some initial value right when it is defined:
With braced initialization, the value is specified in curly braces after the variable name:
With functional notation, the value is specified in parentheses after the variable name:
In all three cases the value assigned to the variable can be a complex computed expression. For example:
Several variables can be initialized at once:
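A sketch illustrating the three initialization forms and initialization of several variables at once (the variable names are illustrative):
```
#include <iostream>

int main()
{
    int a = 10;       // assignment notation
    int b(15);        // functional notation
    int c{20};        // braced initialization

    int x = 1, y = 2, z = x + y;   // several variables initialized at once

    std::cout << a << " " << b << " " << c << " " << z << std::endl;
    return 0;
}
```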
In most cases all three forms of initialization are equivalent. However, braced initialization is a bit safer when a narrowing conversion is involved. In general, a variable is expected to be given a value that matches its type. If that is not the case, the compiler tries to convert the assigned value to the variable's type. A narrowing conversion changes a value of one type to a type with a more limited range of values, so the conversion can lead to a loss of information. Consider the following example:
Here the variables age1 and age2, which have the type int, that is, an integer, are assigned fractional values — 23.5 and 24.5 respectively. Yet most compilers, at least at the time of writing, compile and run this code without complaint. We get the following output:
> Age1 = 23 Age2 = 24
Now consider an example with initialization in curly braces:
Here the variable age1, which also represents an integer, is likewise assigned a fractional value — 22.5. But now many compilers report an error during compilation. For example, the output of the g++ compiler:
> hello.cpp: In function 'int main()': hello.cpp:5:15: error: narrowing conversion of '2.25e+1' from 'double' to 'int' [-Wnarrowing] 5 | int age1 {22.5};
Note that some compilers may still compile this code, but they will display a warning anyway.
With braced initialization the value can be omitted:
In this case the variable is initialized to zero, which is effectively equivalent to the code:
If a variable is not initialized, default initialization takes place, and the variable gets some default value that depends on where the variable is defined.
If a variable of a built-in type (for example, int) is defined inside a function, it gets an indeterminate value. If a variable of a built-in type is defined outside of any function, it gets the default value that corresponds to its type; for numeric types this is the number 0. For example:
The variable x is defined outside of any function, so it gets the default value — the number 0.
Things are much trickier with the variable y, which is defined inside the main function: its value is indeterminate and depends largely on the compiler used. In particular, the output of a program compiled with the g++ compiler may look like this:
> X = 0 Y = 0
In Visual Studio, on the other hand, the missing value of the variable y causes a compilation error.
In any case, it is better to explicitly assign a variable a specific value before using it rather than rely on the default value.
A key feature of variables is that we can change their values:
Console output of the program:
> Age1 = 22 Age2 = 23 Age3 = 38
# Data Types
Each variable has a certain type, and that type defines what values the variable can have, what operations can be performed on it and how many bytes of memory it occupies. The C++ language defines the following basic data types: the boolean type bool, integer types, floating-point types and character types. Let's look at these groups separately.
The boolean type bool can store one of two values: true and false. For example, let's define a couple of variables of this type and print their values to the console:
When printed, `bool` values are converted to 1 (for true) and 0 (for false). As a rule, this type is used mainly in conditional expressions, which will be covered later. The default value for variables of this type is `false`.
Integer numbers in C++ are represented by the following types:
char: represents a single character in the ASCII encoding. Occupies 1 byte (8 bits) in memory. Can store any value in the range from -128 to 127, or from 0 to 255
Although this type covers the same range of values as the `signed char` type described above, they are not equivalent. The `char` type is intended for storing the numeric code of a character, and in practice it may behave as either a signed or an unsigned byte depending on the specific compiler.
short: represents an integer in the range from –32768 to 32767. Occupies 2 bytes (16 bits) in memory.
This type also has the aliases short int, signed short int, signed short.
unsigned short: represents an integer in the range from 0 to 65535. Occupies 2 bytes (16 bits) in memory.
This type also has the synonym unsigned short int.
int: represents an integer. Depending on the processor architecture it can occupy 2 bytes (16 bits) or 4 bytes (32 bits). The range of limit values accordingly can also vary, from –32768 to 32767 (with 2 bytes) or from −2 147 483 648 to 2 147 483 647 (with 4 bytes). In any case the size must be greater than or equal to the size of short and less than or equal to the size of long
This type has the aliases signed int and signed.
unsigned int: represents a non-negative integer. Depending on the processor architecture it can occupy 2 bytes (16 bits) or 4 bytes (32 bits), and because of that the range of limit values can vary: from 0 to 65 535 (for 2 bytes) or from 0 to 4 294 967 295 (for 4 bytes).
Has the alias unsigned
long: depending on the architecture it can occupy 4 or 8 bytes and represents an integer in the range from −2 147 483 648 to 2 147 483 647 (with 4 bytes) or from −9 223 372 036 854 775 808 to +9 223 372 036 854 775 807 (with 8 bytes).
unsigned long: represents an integer in the range from 0 to 4 294 967 295. Occupies 4 bytes (32 bits) in memory.
Has the synonym unsigned long int.
long long: represents an integer in the range from −9 223 372 036 854 775 808 to +9 223 372 036 854 775 807. Occupies 8 bytes (64 bits) in memory.
unsigned long long: represents an integer in the range from 0 to 18 446 744 073 709 551 615. As a rule, occupies 8 bytes (64 bits) in memory.
Has the alias unsigned long long int.
To represent numbers in C++, integer literals with or without a sign are used, such as -10 or 10. For example, let's define a number of variables of integer types and print their values to the console:
Note, however, that all integer literals are by default of type `int`. So above, variables of different types were assigned various numbers — 64, -64, 88, -88, 1024 and so on.
All of these integer literals are of type `int`. However, we can also use integer literals of other types. Unsigned integer literals (which represent the `unsigned` types) have the suffix u or U. Literals of the types `long` and `long long` have the suffixes L/l and LL/ll respectively:
Nevertheless, using the suffixes is optional, since as a rule the compiler can successfully convert an integer literal (which technically has type int) to the required type without loss of information.
If a number is large, we may make a mistake when typing it. To make numbers easier to read, starting with the C++14 standard the language allows separating the digit groups of a number with a single quote '
By default all standard integer literals represent numbers in the familiar decimal system. However, C++ also allows using numbers in other numeral systems.
To indicate that a number is hexadecimal, the prefix 0x or 0X is placed before it. For example:
To indicate that a number is octal, a zero 0 is placed before it. For example:
Binary literals are prefixed with 0b or 0B:
All of these literal forms also support the `U/L/LL` suffixes:
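A sketch of integer literals with suffixes and in different numeral systems (all of the non-decimal literals below denote the decimal value 255):
```
#include <iostream>

int main()
{
    unsigned int u = 64u;        // unsigned literal
    long l = 64L;                // long literal
    long long ll = 64LL;         // long long literal

    int dec = 255;               // decimal
    int hex = 0xFF;              // hexadecimal
    int oct = 0377;              // octal
    int bin = 0b11111111;        // binary
    unsigned long big = 0xFFUL;  // base prefix combined with suffixes

    std::cout << dec << " " << hex << " " << oct << " " << bin << " "
              << big << " " << u << " " << l << " " << ll << std::endl;
    return 0;
}
```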
Floating-point numbers are used in C++ to store fractional numbers. A floating-point number consists of two parts: a mantissa and an exponent. Both can be positive or negative. The magnitude of the number is the mantissa multiplied by ten to the power of the exponent.
For example, the number 365 can be written as a floating-point number as follows:
> 3.650000E02
A period is used as the separator between the integer and fractional parts. The mantissa here has seven decimal digits, `3.650000`, and the exponent has two digits, `02`. The letter E denotes the exponent; it is followed by the power of ten by which the mantissa 3.650000 is multiplied to obtain the required value. That is, to get back to the usual decimal representation, the following operation is performed: > 3.650000 × 10² = 365
Another example — let's take a small number:
> -3.650000E-03
In this case we are dealing with the number `–3.65 × 10⁻³`, which equals `–0.00365`. Here we can see that the decimal point "floats" depending on the value of the exponent, which is exactly why such numbers are called floating-point numbers. Although this notation allows a very large range of numbers to be expressed, not all of these numbers can be represented with full precision; floating-point numbers are in general approximate representations of an exact number. For example, the number 1254311179 would look like `1.254311E09`. Converting back to decimal notation gives `1254311000`, which is not the same as `1254311179`, since we have lost the three least significant digits.
C++ has three types for representing floating-point numbers:
float: represents a single-precision floating-point real number in the range +/- 3.4E-38 to 3.4E+38. Occupies 4 bytes (32 bits) in memory
double: represents a double-precision floating-point real number in the range +/- 1.7E-308 to 1.7E+308. Occupies 8 bytes (64 bits) in memory
long double: represents an extended-precision floating-point real number of at least 8 bytes (64 bits). The range of permissible values may differ depending on the amount of memory occupied.
In its internal binary representation, every floating-point number consists of one sign bit, followed by a fixed number of bits for the exponent and a set of bits for storing the mantissa. In float numbers, 1 bit is used for the sign, 8 bits for the exponent and 23 for the mantissa, giving 32 bits in total. The mantissa provides a precision of 7 decimal digits.
In double numbers: 1 sign bit, 11 exponent bits and 52 mantissa bits, 64 bits in total. The 52-bit mantissa provides a precision of up to 16 decimal digits.
For the long double type, the layout depends on the specific compiler and its implementation of this data type. Most compilers provide a precision of up to 18–19 decimal digits (a 64-bit mantissa), while in others (for example, Microsoft Visual C++) `long double` is identical to the `double` type.
In C++, floating-point literals are written as fractional numbers that use a period as the separator between the integer and fractional parts:
Even when an integer value is assigned to a variable, a period is used to show that we are assigning a floating-point number:
So here the number `1.` is a floating-point literal and is essentially equivalent to `1.0`. By default, all such numbers with a period are treated as numbers of type double. To indicate that a number has a different type, the suffix f/F is used for `float` and l/L for `long double`:
As an alternative, exponential notation can also be used:
When the data types were listed above, the size each occupies in memory was given. But the language standard only sets the minimum values that must hold. For example, for the types int and short the minimum size is 16 bits, for the type long it is 32 bits, and for the type long double it is 64 bits. The size of long must be no less than the size of int, the size of int no less than the size of short, and the size of `long double` no less than that of `double`. Compiler developers can choose the exact sizes of the types themselves, based on the hardware capabilities of the computer. For example, the g++ compiler on Windows uses 16 bytes for `long double`, while the Visual Studio compiler, which also runs on Windows, and clang++ on Windows use 8 bytes for `long double`. That is, even on the same platform different compilers can take different approaches to the sizes of some data types. But in general the sizes given above in the type descriptions are used.
However, there are situations when you need to know the exact size of a certain type. For this C++ has the sizeof() operator, which returns the size in bytes of the memory occupied by a variable:
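A minimal sketch of the sizeof example; the output below (16 bytes) corresponds to long double under g++ on Windows:
```
#include <iostream>

int main()
{
    long double number = 2;
    // sizeof returns the number of bytes the variable occupies in memory
    std::cout << "sizeof(number) = " << sizeof(number) << std::endl;
    return 0;
}
```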
Console output when compiled with g++:
> sizeof(number) = 16
C++ has the following character data types:
char: represents a single character in the ASCII encoding. Occupies 1 byte (8 bits) in memory. Can store any value in the range from -128 to 127, or from 0 to 255
wchar_t: represents a wide character. Occupies 2 bytes (16 bits) in memory on Windows and 4 bytes (32 bits) on Linux. Can store any value in the range from 0 to 65 535 (with 2 bytes), or from 0 to 4 294 967 295 (with 4 bytes)
char8_t: represents a single character in a Unicode encoding. Occupies 1 byte in memory. Can store any value in the range from 0 to 255
char16_t: represents a single character in a Unicode encoding. Occupies 2 bytes (16 bits) in memory. Can store any value in the range from 0 to 65 535
char32_t: represents a single character in a Unicode encoding. Occupies 4 bytes (32 bits) in memory. Can store any value in the range from 0 to 4 294 967 295
A variable of type char stores the numeric code of a single character and occupies one byte. The C++ standard does not define the character encoding to be used for char characters, so compiler vendors can choose any encoding, but usually it is ASCII.
As its value, a variable of type `char` can take a single character in single quotes, or the numeric code of a character:
In this case the variables a1 and a2 will have the same value, since 65 is the numeric code of the character "A" in the ASCII table. When printed to the console with `cout`, the character is displayed by default.
In addition, C++ supports special escape sequences, which are preceded by a backslash and are interpreted in a special way. For example, "\n" represents a line break and "\t" a tab.
However, ASCII is generally suitable only for the character sets of languages that use the Latin alphabet. But if you need to work with characters of several languages at once, or with languages other than English, 256 character codes may not be enough. In that case Unicode is used.
Unicode is a standard that defines a set of characters and their code points, as well as several different encodings for those code points. The most commonly used encodings are UTF-8, UTF-16 and UTF-32. The difference between them lies in how the code point of a character is represented; the numeric code value of any character remains the same in each of the encodings. The main differences:
UTF-8 represents a character as a variable-length sequence of one to four bytes. The ASCII character set appears in UTF-8 as single-byte codes that have the same code values as in ASCII. UTF-8 is by far the most popular Unicode encoding today.
UTF-16 represents characters as one or two 16-bit values.
UTF-32 represents all characters as 32-bit values
C++ has four types for storing Unicode characters: wchar_t, char8_t, char16_t and char32_t (`char16_t` and `char32_t` were added in C++11, and `char8_t` in C++20). The wchar_t type is the basic type intended for character sets whose size exceeds one byte. Hence its name: `wchar_t` stands for wide char, because such a character is "wider" than an ordinary single-byte character. Values of `wchar_t` are defined in the same way as `char` characters, except that they are preceded by the letter "L":
The code of a character can also be passed:
A value enclosed in single quotes can be the hexadecimal code of a character. The backslash indicates the beginning of an escape sequence, and the x after the backslash means that the code is hexadecimal.
Keep in mind that to print `wchar_t` characters to the console you should use the std::wcout stream rather than `std::cout`:
The `std::wcout` stream can work both with char and with wchar_t, whereas for a wchar_t variable the std::cout stream prints its numeric code instead of the character. The problem with the `wchar_t` type is that its size and encoding depend heavily on the implementation. The encoding usually matches the preferred encoding of the target platform. On Windows, `wchar_t` is usually 16 bits wide and encoded with UTF-16, while most other platforms make it 32 bits and use UTF-32 as the encoding. On the one hand this matches the specific platform better; on the other hand it makes it harder to write code that is portable across platforms. For this reason it is generally often recommended to use the types char8_t, char16_t and char32_t. Values of these types are intended to store characters in the UTF-8, UTF-16 or UTF-32 encodings respectively, and their sizes are the same on all common platforms.
The prefixes u8, u and U are used to define characters of the types char8_t, char16_t and char32_t respectively:
Note that there are as yet no built-in facilities such as `std::cout`/`std::wcout` for printing char8_t/char16_t/char32_t values to the console.
Sometimes it is hard to determine the type of an expression. In such cases the compiler can be allowed to deduce the type of the object itself, which is what the auto specifier is for. A variable defined with the auto specifier must necessarily be initialized with some value:
Based on the assigned value, the compiler deduces the variable's type. Uninitialized variables with the auto specifier are not allowed:
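A sketch of type deduction with auto; the deduced types are noted in the comments:
```
#include <iostream>

int main()
{
    auto number = 5;       // deduced as int
    auto sum = 1234.56;    // deduced as double
    auto name = "Tom";     // deduced as const char*

    // auto z;             // error: a variable declared with auto must be initialized

    std::cout << number << " " << sum << " " << name << std::endl;
    return 0;
}
```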
# Constants
A distinctive feature of variables is that we can change their value repeatedly while the program runs:
But besides variables, C++ lets us define constants. Their value is set once and cannot be changed afterwards. A constant is defined in almost the same way as a variable, except that the definition of a constant starts with the keyword const. For example:
And later in the program we can access the value of the constant:
But if we try to assign a value to the constant after it has been defined, the compiler cannot compile the program and reports an error:
That is, such code will not work. And since the value of a constant cannot be changed, a constant must always be initialized if we want it to have some value.
If a constant is not initialized, the compiler likewise reports an error and cannot compile the program, as in the following case:
Both ordinary literals and dynamically computed values, for example the values of variables or other constants, can be passed as the value of a constant:
Constants are usually defined for values that must remain constant throughout the entire program and cannot be changed. For example, if a program performs mathematical operations using the number PI, it would be best to define this value as a constant, since it is essentially unchangeable anyway:
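A minimal sketch of a PI constant used in a calculation; the identifier and the radius value are illustrative:
```
#include <iostream>

int main()
{
    const double pi = 3.14159;          // value fixed for the entire program
    double radius = 10;
    double area = pi * radius * radius;

    // pi = 3.14;                       // error: assignment of read-only variable

    std::cout << "Area = " << area << std::endl;
    return 0;
}
```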
By default the C++ language has no built-in facilities for reading input from and writing output to the console; these facilities are provided by the iostream library. It defines two types: istream and ostream. istream represents an input stream and ostream an output stream.
In general, the term "stream" here refers to a sequence of characters that is written to or read from an input/output device. In this case the input/output device is the console.
The cout object, which has the type ostream, is used to write, or print, characters to the console. The cin object is used to read from the console.
To use these objects, the iostream library must be included at the beginning of the source file:
Since the << operator returns its left operand, cout, a chain of operators can be used to pass several values to the console. For example, let's define the simplest console output program:
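A minimal sketch producing the output shown below:
```
#include <iostream>

int main()
{
    int age = 33;
    double weight = 81.23;

    // << returns cout, so the calls can be chained
    std::cout << "Name: " << "Tom" << "\tAge: " << age
              << "\tWeight: " << weight << std::endl;
    return 0;
}
```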
Консольный вывод программы:
> Name: Tom Age: 33 Weight: 81.23
Оператору << передаются различные значения - строки, значения переменных, которые выводятся на консоль.
Строки могут содержать управляющие последовательности, которые интерпретируются определенным образом. Например, последовательность "\n" интерпретируется как перевод на новую строку. Из других управляющих последовательностей также нередко употребляется "\t", которая интерпретируется как табуляция.
Также цепочку операторов << можно завершать значением std::endl, которое вызывает перевод на новую строку и сброс буфера. При выводе в поток данные вначале помещаются в буфер. И сброс буфера гарантирует, что все переданные для вывода на консоль данные немедленно будут выведены на консоль.
The input operator >> is used to read data from the console. It takes two operands: the left operand is an istream object (here cin) that is read from, and the right operand is the object into which the data is read.
For example, let's read some data from the console:
Here, after the prompts, the program waits for values to be entered for the variables age and weight.
Note that because the input operator stores data into the integer variable age in the first case, it expects a number. For the variable weight it expects a floating-point number, with a dot separating the integer and fractional parts. So we cannot enter arbitrary values such as strings; in that case the program may produce an incorrect result.
The input operator >> returns its left operand, the cin object, so data can be read into several variables in a chain:
After entering one value, type a space and then enter the next value.
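A sketch of such chained input (the prompt text and variable names are assumptions):

```
#include <iostream>

int main()
{
    unsigned age {};
    double weight {};

    std::cout << "Enter age and weight: ";
    std::cin >> age >> weight;    // values are separated by whitespace

    std::cout << "Age: " << age << "\tWeight: " << weight << std::endl;
}
```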
In the previous topics the objects `std::cout` and `std::cin` were used for writing and reading, always with the std:: prefix. This prefix indicates that the objects cout, cin, and endl are defined in the namespace std, and the double colon :: is the scope operator, which specifies the namespace in which an object is defined. Without the prefix these objects cannot be used by default.
Such notation may look somewhat verbose, though. In that case the using operator can be applied; it brings objects from various namespaces into the program.
A using declaration has the following format:
For example, suppose we have the following program:
It uses three objects from the std namespace: `cout`, `cin`, and `endl`. Let's rewrite the program with using:
A separate using declaration is written for each object from the std namespace, and the program works exactly as before.
The using keyword can also define aliases for types. This is handy when working with types that have long names; short aliases keep the code compact. For example:
In this case the alias `ullong` is defined for the type `unsigned long long`. Note that this defines an alias, NOT a new type. Note also that C++ still supports the older C-style way of defining aliases with the `typedef` keyword:
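A small sketch showing both forms (the identifiers are illustrative):

```
#include <iostream>

using std::cout;                      // bring individual names into scope
using std::endl;

using ullong = unsigned long long;    // modern alias
typedef unsigned long long ullong2;   // C-style equivalent

int main()
{
    ullong big {18446744073709551615ULL};
    cout << big << endl;
}
```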
Arithmetic operations are performed on numbers. The values taking part in an operation are called operands. In C++ arithmetic operations can be binary (performed on two operands) or unary (performed on one operand). The binary operations are the following:
+
The addition operation returns the sum of two numbers:
In this example the results of the operations are used to initialize variables, but the assignment operation could also be used to set the variables' values:
-
The subtraction operation returns the difference of two numbers:
*
The multiplication operation returns the product of two numbers:
/
The division operation returns the quotient of two numbers:
Be careful with division: if both operands are integers, the fractional part (if any) is discarded, even when the result is assigned to a `float` or `double` variable:
For the result to be a floating-point number, at least one of the operands must also be a floating-point number:
%
The operation that returns the remainder of integer division:
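A compact sketch of the division pitfall and the remainder operation (the values are illustrative):

```
#include <iostream>

int main()
{
    double k = 10 / 4;      // both operands are int, so the result is 2, not 2.5
    double d = 10.0 / 4;    // one operand is double, so the result is 2.5
    int rem = 10 % 4;       // remainder of integer division: 2

    std::cout << k << " " << d << " " << rem << std::endl;   // 2 2.5 2
}
```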
Be careful when adding or subtracting floating-point numbers that differ greatly in magnitude. For example, let's add `1.23E-4` (`0.000123`) and `3.65E+6` (3650000). We expect the sum to be `3650000.000123`, but after conversion to a floating-point value with seven digits of precision it becomes the following:
> 3.650000E+06 + 1.230000E-04 = 3.650000E+06
Or the corresponding C++ code:
That is, the first number did not change at all, because only 7 digits are kept for precision.
Note also that the IEEE standard implemented by C++ compilers defines special floating-point values whose mantissa consists entirely of zero bits while the exponent consists entirely of ones; depending on the sign they represent `+infinity` (plus infinity, +∞) and `-infinity` (minus infinity, -∞).
Dividing a positive number by zero yields `+infinity`, and dividing a negative number by zero yields `-infinity`. Another special floating-point value defined by this standard is NaN (not a number). It represents the result of an operation that is not mathematically defined, for example dividing zero by zero or infinity by infinity. Any operation in which one or both operands are `NaN` also yields `NaN`.
To demonstrate this, consider the following program:
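The original listing is not shown here; a sketch consistent with the explanation below might be:

```
#include <iostream>

int main()
{
    double a {1.5};
    double b {};      // 0
    double c {};      // 0
    double d {-1.5};

    double result {b / c};                                   // nan
    std::cout << "1.5/0 = " << a / b << std::endl;           // inf
    std::cout << "-1.5/0 = " << d / c << std::endl;          // -inf
    std::cout << "0/0 = " << result << std::endl;            // nan
    std::cout << "nan + 1.5 = " << result + a << std::endl;  // nan
}
```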
In the expression `a / b` the number 1.5 is divided by 0, so the result is plus infinity. Similarly, in the expression `d / c` the number -1.5 is divided by 0, so the result is minus infinity. In the expression `b / c` zero is divided by zero, so the result is NaN. Accordingly, the last expression, `result + a`, is effectively `NaN + 1.5`, so its result is also `NaN`.
Console output of the program:
> 1.5/0 = inf -1.5/0 = -inf 0/0 = nan nan + 1.5 = nan
There are also two unary arithmetic operations that act on a single number: ++ (increment) and -- (decrement). Each has two forms, prefix and postfix:
Prefix increment.
Increases the variable's value by one; the resulting value is the value of the expression ++x
Postfix increment.
Increases the variable's value by one, but the value of the expression x++ is the value the variable had before the increment
Prefix decrement.
Decreases the variable's value by one; the resulting value is the value of the expression --x
Postfix decrement.
Decreases the variable's value by one, but the value of the expression x-- is the value the variable had before the decrement
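A minimal sketch of the difference between the prefix and postfix forms:

```
#include <iostream>

int main()
{
    int x {5};
    int a = ++x;     // x becomes 6, a == 6
    std::cout << a << " " << x << std::endl;   // 6 6

    int y {5};
    int b = y++;     // b == 5 (the old value), y becomes 6
    std::cout << b << " " << y << std::endl;   // 5 6
}
```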
Operators can be left-associative, evaluated left to right, or right-associative, evaluated right to left. The vast majority of operators are left-associative (the binary arithmetic operations, for example), so most expressions are evaluated left to right. The right-associative operators are all the unary operators, the various assignment operators, and the conditional operator.
In addition, some operations have higher precedence than others and are therefore performed first. The operations in order of decreasing precedence:

| Operations (highest precedence first) |
| --- |
| ++ (increment), -- (decrement) |
| * (multiplication), / (division), % (remainder) |
| + (addition), - (subtraction) |
Operator precedence has to be taken into account when evaluating a set of arithmetic expressions:
Although operations are performed left to right, the increment `++b` runs first, because it has the highest precedence: it increases the variable b and returns the new value as its result. Then the multiplication `5 * ++b` is performed, and only last the addition `a + 5 * ++b`.
Keep in mind that if the increment and decrement operations are applied to the same variable several times within a single statement, the result may be undefined and depends heavily on the particular compiler. For example:
Both g++ and clang++ compile such code, and the value of result is 16, as one would in principle expect, but clang++ also emits a warning.
Parentheses override the order of evaluation. For example:
Even though addition has lower precedence, the addition is performed first here, because it is enclosed in parentheses.
# Static typing and type conversions
C++ is a statically typed programming language: once a variable is defined with a certain data type, that type cannot be changed later. Accordingly, the variable can only hold values of the type it represents. Nevertheless, it is often necessary to assign a variable values of other types, and this is where type conversions come in.
A number of conversions can be performed by the compiler implicitly, that is, automatically. For example:
Here the variable age has the type `unsigned int` and notionally stores an age. It is initialized with the number 25, and integer literals without a suffix have the type `int` (`signed int`) by default. But the compiler knows how to convert the value 25 to `unsigned int`, so there is no problem in this case.
Now look at another example:
Here the variable age is assigned the number -25, a negative value, whereas its type, `unsigned int`, only allows non-negative numbers.
In this case we run into a compilation error. For example, the g++ output:
> error: narrowing conversion of '-25' from 'int' to 'unsigned int' [-Wnarrowing]
Let's look at how some basic conversions are performed:
A bool variable is assigned a value of another type. The variable receives false if the value equals 0, and true in all other cases.
A numeric or character variable is assigned a bool value. The variable receives 1 if the value is true and 0 if it is false.
An integer variable is assigned a fractional number. The fractional part after the decimal point is discarded.
A floating-point variable is assigned an integer. If the integer contains more significant bits than the floating-point type can represent, part of the information is lost.
A variable of an unsigned type is assigned a value outside its range. The result is the remainder of division modulo the number of representable values. For example, unsigned char can store values from 0 to 255; if a value outside this range is assigned, the compiler stores the remainder modulo 256 (since unsigned char can hold 256 distinct values). So when -5 is assigned, a variable of type unsigned char receives the value 251.
A variable of a signed type is assigned a value outside its range. The result is not deterministic: the program may produce a sensible result or may behave incorrectly.
In arithmetic operations both operands must have the same type. If the operands have different types, the compiler automatically takes the operand whose type has the smaller range of values and converts it to the type of the other operand, which has the larger range. For conversion purposes, the types can be ranked as follows, from higher to lower priority:
long double
double
float
unsigned int
int
So if an operation involves a value of type `float` and a value of type `long double`, the compiler automatically converts the `float` operand to the type `long double` (which is higher in the list above). Operands of the types
```
char, signed char, unsigned char, short
```
and `unsigned short` are always promoted at least to the type `int` in operations.
For example, a programmer earned $100.20 for an 8-hour working day; let's compute the hourly pay:
Here the variable hours, which has the type `int` and stores the number of hours, is converted to the "higher-priority" type `double`.
On the one hand this can be quite convenient. On the other hand, such conversions can lead to unwanted results. For example:
Here, in the operation `n - x`, the number n is converted to the higher-priority type `unsigned int`. Formally this operation returns `5 - 8 = -3`.
But in our case both operands, and therefore the result, have the type `unsigned int`, so the final result is `4294967293`.
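The listing referred to above is not included; a minimal sketch of the effect (the names n and x are taken from the explanation):

```
#include <iostream>

int main()
{
    int n {5};
    unsigned int x {8};

    // n is converted to unsigned int, so the result wraps around instead of being -3
    std::cout << n - x << std::endl;   // 4294967293 with a 32-bit unsigned int
}
```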
Conversions in which no information is lost are safe. As a rule these are conversions from a type with fewer bits to a type with more bits, in particular the following conversion chains:
unsigned char -> unsigned short -> unsigned int -> unsigned long
float -> double -> long double
Examples of safe conversions:
There are also unsafe conversions, in which data precision can potentially be lost. As a rule these are conversions from a type with more bits to a type with fewer bits.
In this case the variables a and b are assigned values that fall outside the range of valid values for their types.
In such examples much depends on the compiler. Sometimes the compiler issues a warning at compile time but still compiles the program; in other cases no warning is issued at all. That is exactly the danger: the program compiles successfully, yet there is a risk of losing data precision. A variable's value is just a set of bits in memory that is interpreted according to a certain type, and the same bits may be interpreted differently for different types. That is why it is important to respect the value ranges of a type when assigning a value to a variable.
When initializing variables, brace initialization is recommended to avoid unsafe conversions in which precision may be lost:
In this case the compiler generates an error and the program does not compile.
Explicit type conversions are performed with the static_cast operator
This operator converts the value in parentheses, `value`, to the type given in angle brackets, `type`.
The word `static` in the operator's name reflects the fact that the cast is checked statically, that is, at compile time. Using the `static_cast` operator tells the compiler that we are sure a conversion is needed at this point, so even with brace initialization
the compiler does not generate an error. For example, a programmer earned $100.20 for an 8-hour working day; let's compute the hourly pay, but as an unsigned int value:
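A sketch of that calculation (the variable names follow the explanation below):

```
#include <iostream>

int main()
{
    double sum {100.2};    // pay for the whole day
    int hours {8};

    unsigned int perHour { static_cast<unsigned int>(sum / hours) };
    std::cout << perHour << std::endl;   // 12: the fractional part is discarded
}
```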
Here the expression
```
static_cast<unsigned int>(sum/hours)
```
evaluates the expression `sum/hours` (which has the type double) and then converts it
to the type `unsigned int`
Note that back in the days of the dinosaurs C++ used the conversion operation inherited from the C language:
That is, the target type was written in parentheses before the value being converted. For example, let's use this operation in the code shown earlier:
The result is the same. In modern C++, however, this operation has been almost completely superseded by the static_cast operator.
Bitwise operations act on the individual bits of numbers and are performed only on integers. But first, a brief look at what the bits of a number are.
At the machine level all data is represented as a set of bits. Each bit can have one of two values: 1 (a signal) or 0 (no signal), so all data is effectively a sequence of zeros and ones. 8 bits make up 1 byte. This system is called binary.
For example, the number 13 in binary is `1101`₂. Here is how we get it:
> // converting the decimal number 13 to binary
> 13 / 2 = 6 // remainder 1 (13 - 6*2 = 1)
> 6 / 2 = 3 // remainder 0 (6 - 3*2 = 0)
> 3 / 2 = 1 // remainder 1 (3 - 1*2 = 1)
> 1 / 2 = 0 // remainder 1 (1 - 0*2 = 1)
The general algorithm is to divide the number, and then each quotient, by 2 and collect the remainders until we reach 0. The remainders, written out in reverse order, form the binary representation of the number. Step by step in this case:
Divide 13 by 2. The quotient is 6, the remainder is 1 (since 13 - 6*2 = 1)
Divide the previous quotient, 6, by 2. The quotient is 3, the remainder is 0
Divide the previous quotient, 3, by 2. The quotient is 1, the remainder is 1
Divide the previous quotient, 1, by 2. The quotient is 0, the remainder is 1
The last quotient is 0, so we stop and write out the remainders starting from the last one: `1101`
To convert back from binary to decimal, multiply the value of each bit (1 or 0) by 2 raised to the power equal to the bit's position (bits are numbered from zero):
> // converting the binary number 1101 to decimal
> 1 (bit 3), 1 (bit 2), 0 (bit 1), 1 (bit 0)
> 1 * 2^3 + 1 * 2^2 + 0 * 2^1 + 1 * 2^0 = 1 * 8 + 1 * 4 + 0 * 2 + 1 * 1 = 8 + 4 + 0 + 1 = 13
Signed numbers in C++ are stored in two's complement representation, in which the most significant bit is the sign bit. If it is 0, the number is positive and its binary representation is the same as that of an unsigned number. For example, 0000 0001 is 1 in decimal.
If the most significant bit is 1, we are dealing with a negative number. For example, 1111 1111 represents -1 in decimal, and accordingly 1111 0011 represents -13.
To obtain a negative number from a positive one, invert its bits and add one:
For example, let's obtain the number -3. First take the binary representation of 3:
> 3₁₀ = 0000 0011₂
Invert the bits
> ~0000 0011 = 1111 1100
And add 1
> 1111 1100 + 1 = 1111 1101
Thus the number `1111 1101` is the binary representation of -3.
Let's see how the addition of a positive and a negative number proceeds. For example, add 12 and -8:
> 12₁₀ = 00001100₂
> + (-8₁₀) = 11111000₂ (8 is 00001000; after inversion 11110111; after +1, 11111000)
> = 4₁₀ = 00000100₂
We can see that the binary result is `00000100`₂, which is `4`₁₀ in decimal.
For clarity it is convenient to use binary number literals in C++, which are prefixed with 0b:
In this case the variable number has the value `0b0000'1100`, which corresponds to the number 12 in decimal.
Every integer is stored in memory using a certain number of bits, and the shift operations move a number's bit representation left or right by several positions. Shift operations apply only to integer operands. There are two operations:
<<
Shifts the bit representation of the number given by the first operand to the left by the number of positions given by the second operand.
>>
Shifts the bit representation of the number to the right by the given number of positions.
The number 2 in binary is 10. Shifting it left by two positions gives 1000, which is the number 8 in decimal.
The number 16 in binary is 10000. Shifting it right by three positions (the last three bits are discarded) gives 10, which represents the number 2 in decimal.
Notice that shifting left by one position is effectively multiplication by 2, while shifting right by one position is division by 2. In general, shifting left by `n` is equivalent to multiplying by `2^n`, and shifting right by `n` positions is equivalent to dividing by `2^n`, which can be used instead of multiplying or dividing by powers of two:
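A minimal sketch:

```
#include <iostream>

int main()
{
    int x {2};
    std::cout << (x << 2) << std::endl;   // 2 * 2^2 = 8

    int y {16};
    std::cout << (y >> 3) << std::endl;   // 16 / 2^3 = 2
}
```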
The bitwise logical operations likewise act on the corresponding bits of integer operands:
&: bitwise conjunction (AND, bitwise multiplication). Produces 1 if the corresponding bits of both numbers are 1
|: bitwise disjunction (OR, bitwise addition). Produces 1 if at least one of the corresponding bits of the two numbers is 1
^: bitwise exclusive OR. Produces 1 if exactly one of the corresponding bits of the two numbers is 1
~: bitwise negation, or inversion. Inverts every bit of the operand: a bit equal to 1 becomes 0, and a bit equal to 0 becomes 1.
For example, the expression `5 | 2` equals 7. The number 5 in binary is 101, and the number 2 is 10, or 010. Add the corresponding bits of the two numbers: if at least one bit is 1, the resulting bit is 1. So we get:
The result is the number 111, which is 7 in decimal.
Take another expression, `6 & 2`. The number 6 in binary is 110, and the number 2 is 10, or 010. Multiply the corresponding bits of the two numbers: the resulting bit is 1 only if both bits are 1, otherwise it is 0. So we get:
We get the number 010, which equals 2 in decimal.
Bitwise operations are often underestimated; it is not always clear what they are for. Nevertheless they can help solve a number of problems. Above all, they let us manipulate data at the level of individual bits. One example: we have three numbers, each in the range from 1 to 3:
We know these values will never exceed 3, and we need to pack the data as tightly as possible. We can store all three numbers inside a single number, and bitwise operations will help us do that.
Let's walk through this code. First the numbers to be stored, value1, value2, and value3, are defined. The variable result holds the packed value and is 0 by default; for clarity it is written in binary form:
Store the first number in result:
Here we are dealing with the bitwise OR operation: if one of the corresponding bits is 1, the resulting bit is also 1. In effect
So the first number has been stored in result. The numbers are stored in order: first the first number, then the second, then the third. Therefore result is shifted two positions to the left (our numbers occupy no more than two bits):
That is, we effectively perform the following computation:
Then the OR operation is repeated to store the second number:
which is equivalent to
Then we shift left by two positions again and store the third number. As a result we get the binary number `0b0011'1001`, which is 57 in decimal.
But that does not matter, because what matters to us are the individual bits of the number. Note that we have stored three numbers inside one number, and there is still free room left in result.
In fact it does not really matter how many bits need to be stored; in this example only two bits are stored per value.
To recover the data, we go in the reverse order:
We get the numbers in the order opposite to the one in which they were stored. Since we know that each stored number occupies only two bits, we essentially need only the last two bits. For this we apply the bit mask `0b0000'0011` and the bitwise AND operation, which produces 1 only when both corresponding bits are 1.
That is, the operation
is equivalent to
So the last number equals 0b0000'0001, or 1 in decimal
Note that if we know the exact layout of the data, we can easily build a bit mask to extract the number we need:
Here we extract the first number, which, as we know, occupies bits 4 and 5 (counting from zero) of the combined number. To do this we apply the bit mask 0b0011'0000 and then shift the number right by 4 positions.
Likewise, if we know the exact layout in which the data is stored, we could write each value directly into the right place inside result:
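The original listings are not included above; a hedged sketch of the packing and unpacking described in this walkthrough (the concrete values are assumptions chosen to match the 0b0011'1001 result):

```
#include <iostream>

int main()
{
    unsigned char value1 {3}, value2 {2}, value3 {1};   // each fits in two bits

    // pack: value1 ends up in bits 4-5, value2 in bits 2-3, value3 in bits 0-1
    unsigned char result {0b0000'0000};
    result |= value1;      // 0b0000'0011
    result <<= 2;          // 0b0000'1100
    result |= value2;      // 0b0000'1110
    result <<= 2;          // 0b0011'1000
    result |= value3;      // 0b0011'1001  (57)

    // unpack in the reverse order using a two-bit mask
    unsigned char third  = result & 0b0000'0011;          // 1
    unsigned char second = (result >> 2) & 0b0000'0011;   // 2
    unsigned char first  = (result >> 4) & 0b0000'0011;   // 3

    std::cout << static_cast<int>(first) << " "
              << static_cast<int>(second) << " "
              << static_cast<int>(third) << std::endl;    // 3 2 1
}
```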
# Assignment operations
Assignment operations assign a value. They act on two operands, and the left operand may only be a modifiable named expression, such as a variable.
The basic assignment operation = assigns the value of the right operand to the left operand:
So in this case the variable x (the left operand) receives the value 2 (the right operand).
Note that the type of the right operand's value does not always match the type of the left operand. In that case the compiler tries to convert the value of the right operand to the type of the left operand.
Assignment operations are right-associative, that is, they are evaluated right to left. This makes multiple assignment possible:
Here the expression `c = 34` is evaluated first: the value of the right operand, `34`, is assigned to the left operand c.
Then the expression `b = c` is evaluated: the value of the right operand c (34) is assigned to the left operand b.
Finally the expression `a = b` is evaluated: the value of the right operand b (34) is assigned to the left operand a.
Note also that assignment operations have the lowest precedence of all operations and are therefore performed last:
In accordance with operator precedence, the expression `3 + 5` is evaluated first, and only then is its value assigned to the variable x.
All the other assignment operations combine simple assignment with another operation and have the general form op=:
+=: addition assignment. Assigns the left operand the sum of the left and right operands: A += B is equivalent to A = A + B
-=: subtraction assignment. Assigns the left operand the difference of the left and right operands: A -= B is equivalent to A = A - B
*=: multiplication assignment. Assigns the left operand the product of the left and right operands: A *= B is equivalent to A = A * B
/=: division assignment. Assigns the left operand the quotient of the left and right operands: A /= B is equivalent to A = A / B
%=: remainder assignment. Assigns the left operand the remainder of the integer division of the left operand by the right one: A %= B is equivalent to A = A % B
<<=: left-shift assignment. Assigns the left operand the result of shifting its bit representation left by the number of positions given by the right operand: A <<= B is equivalent to A = A << B
>>=: right-shift assignment. Assigns the left operand the result of shifting its bit representation right by the number of positions given by the right operand: A >>= B is equivalent to A = A >> B
&=: bitwise AND assignment. Assigns the left operand the result of the bitwise AND of its bit representation with that of the right operand: A &= B is equivalent to A = A & B
|=: bitwise OR assignment. Assigns the left operand the result of the bitwise OR of its bit representation with that of the right operand: A |= B is equivalent to A = A | B
^=: bitwise XOR assignment. Assigns the left operand the result of the bitwise exclusive OR of its bit representation with that of the right operand: A ^= B is equivalent to A = A ^ B
Examples of these operations:
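A minimal sketch (the values are illustrative):

```
#include <iostream>

int main()
{
    int a {10};
    a += 5;        // 15
    a -= 3;        // 12
    a *= 2;        // 24
    a /= 5;        // 4
    a %= 3;        // 1
    a <<= 3;       // 8
    a |= 0b0101;   // 13
    std::cout << a << std::endl;
}
```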
# Conditional expressions
Conditional expressions represent some condition and return a value of type bool: true if the condition holds and false if it does not. Conditional expressions include comparison operations and logical operations.
The C++ language has the following comparison operations:
==
The "equal to" operation. Returns true if both operands are equal and false if they are not:
<
The "less than" operation. Returns true if the first operand is less than the second:
<=
The "less than or equal to" operation. Returns true if the first operand is less than or equal to the second:
>
The "greater than" operation. Returns true if the first operand is greater than the second:
>=
The "greater than or equal to" operation. Returns true if the first operand is greater than or equal to the second:
!=
The "not equal" operation. Returns true if the first operand is not equal to the second, and false if they are equal:
Note that when a value of type bool is printed to the console, 1 is printed for true and 0 for false by default:
To print true/false instead, place the std::boolalpha manipulator before the value being printed:
Comparison operations are usually used in conditional constructs such as if...else, which are covered later.
Logical operations usually combine several comparison operations. The logical operations are the following:
! (negation)
A unary operation that returns true if the operand is false, and false if the operand is true.
&& (conjunction, logical AND)
Returns true only if both operands are true:
|| (disjunction, logical OR)
Returns true if at least one of the operands is true:
^ (XOR, eXclusive OR)
Returns true if the operands have different values, and false if they are equal.
Logical operations are convenient for combining comparison operations or other logical operations:
Keep in mind that comparison operations have higher precedence than logical operations. So in the expression `a == 5 && b > 8` the comparison subexpressions `a == 5` and `b > 8` are evaluated first, and only then the logical AND itself. The first condition, `a == 5 && b > 8`, is true only if both comparisons are true at once, while the condition `a == 5 || b > 8` is true if at least one of the comparisons returns true. Accordingly we get the following console output:
> (a ==5 && b > 8) - false (a ==5 || b > 8) - true (a ==5 ^ b > 8) - true
Note that the logical operators `&&` and `||` perform short-circuit evaluation: if the first operand is sufficient to determine the result of the whole operation, the second operand is not evaluated. Take the following operation:
The first operand, the expression `a == 6`, returns `false`. In this case there is no point in evaluating the second operand of `&&`, because the result of the operation will be `false` in any case. The same goes for the `||` operation:
The first operand, the expression `a == 5`, returns `true`. In this case there is no point in evaluating the second operand of `||`, because the result of the operation will be `true` in any case.
The if-else conditional construct directs the flow of the program along one of the possible paths depending on a condition. It checks whether the condition is true and, if so, executes a block of statements. In its simplest, shortened form the if construct looks like this:
The condition is a conditional expression that returns true or false. If it returns true, the statements that follow, making up the if block, are executed. If it returns false, they are not. The block of statements is enclosed in curly braces. For example:
Here the condition of the if construct is the expression `a == 8`, that is, we check whether the value of the variable `a` equals 8.
This condition holds and returns `true`, so the single statement in the if block executes and prints the string "a == 8" to the console. The console output is:
> a == 8 End of program
The `if` construct may be followed by the rest of the program's statements, which execute regardless of the condition. In the example above, the statement after the if block prints the string `"End of program"` to the console.
Now consider the opposite situation:
Here the condition is different: `a == 7`, that is, whether the value of `a` equals 7. But `a` is NOT equal to 7, so the condition does not hold and returns `false`. Accordingly the statements in the `if` block are NOT executed, and the console output is:
> End of program
Note that if the block consists of a single statement, the curly braces may be omitted:
The statement can even be placed on the same line as the condition:
We can also use the full form of the if construct, which includes the else clause:
After the else clause we can define a set of statements that execute when the condition of the if clause returns false. That is, if the `condition` is true, the statements after if are executed; if it is false, the statements after else are executed.
In this case the condition `n > 22` is false, i.e. returns false, so the `else` block executes, and the string `"n <= 22"` is printed to the console.
Often, however, there are not just two alternatives to handle but many more. In the example above, for instance, there are three cases: the variable n may be greater than 22, less than 22, or equal to 22. Alternative conditions can be tested with else if branches:
In this case the program has three possible paths.
More than one alternative condition can be introduced with `else if`:
If only one statement needs to be executed in an if, else, or else-if block, the curly braces can be omitted:
Note that if integers are passed instead of values of type `bool`, they are converted to `bool`: zero values become `false` and non-zero values become `true`, for example:
```
#include <iostream>

int main()
{
    int a {8};    // a = true
    if (a) std::cout << "a = true" << std::endl;
    else std::cout << "a = false" << std::endl;

    int b {};     // b = false
    if (b) std::cout << "b = true" << std::endl;
    else std::cout << "b = false" << std::endl;
}
```
Here the variable `a` equals `8`, so the condition `if(a)` evaluates to `true`. The variable `b`, however, is 0 by default, so the condition in `if(b)` returns `false`. `if..else` constructs can be nested. For example:
Here, if `a == 5`, execution moves on to the nested construct `if(b==8)`, which checks the condition `b==8` and executes either its if block or its else block. Sometimes a variable is needed inside an `if` construct for intermediate calculations. We could define it in the block of code; however, starting with the
C++17 standard the C++ language supports a special form of the if construct:
This form also takes a condition, but the condition may be preceded by the definition and initialization of a variable. For example:
In this case the initializer expression `int c {a - b}` defines and initializes the variable `c`, followed by the condition `a > b`.
If it is true, one statement is executed,
otherwise the other statement is executed
But in both cases we can use the variable `c`. Outside the `if-else` construct the variable c is not accessible.
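A hedged sketch of this form, reusing the variables from the explanation above:

```
#include <iostream>

int main()
{
    int a {10}, b {4};

    if (int c {a - b}; a > b)
        std::cout << "a > b, c = " << c << std::endl;
    else
        std::cout << "a <= b, c = " << c << std::endl;

    // c is not visible here
}
```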
Another example. Suppose we need to find out whether one number is divisible by another without a remainder, and if there is a remainder, print it to the console:
In this case the `if` construct defines the variable `rem`, which holds the remainder of dividing a by b. The ternary operator is somewhat similar to the `if-else` construct. It takes three operands in the following form:
The first operand is a condition. If the condition is true, the second operand, placed after the ? symbol, is selected/executed. If the condition is false, the third operand, placed after the colon, is selected/executed.
For example, take the following `if-else` construct:
Here, if a is greater than b, then `c=a-b`, otherwise `c=a+b`. Let's rewrite it with the ternary operator:
Here the first operand of the ternary operator is the condition `a > b`. If it is true, the second operand, the result of the expression `a - b`, is returned. If the condition is false, the third operand, `a + b`, is returned. The returned operand is assigned to the variable c.
The ternary operator does not have to return a value; it can simply perform some actions. For example:
Here the first operand is the same condition. If it is true, the second operand executes, `std::cout << a-b`; if not, the third one, `std::cout << a+b`.
Several ternary operators can be combined within one. For example:
Here the condition is the expression `a < b`. If it is true, the second operand, the string "a is less than b", is returned. But if the condition is false, the third operand is returned,
which is itself another ternary operator
```
(a == b ? "a is equal to b" : "a is greater than b")
```
. Here the first operand, the condition `a == b`, is evaluated in turn. If it is true, the string "a is equal to b" is returned; otherwise, the string "a is greater than b".
# The switch-case construct
The switch-case construct compares some expression against a set of values. It has the following form:
The expression being compared is placed in parentheses after the switch keyword. Its value is compared in turn with the values after the case labels, and if a match is found, the corresponding case block is executed.
Note that the expression being compared in `switch` must be of an integer or character type, or an enumeration (covered later).
At the end of the switch construct there may be a default block. It is optional and runs when the value after switch does not match any of the case labels. For example:
To prevent the following case/default blocks from executing as well, a break statement is placed at the end of each block. So in this case the statement that executes is
After the break statement executes, control leaves the switch..case construct and the remaining case labels are ignored, so the following line is printed to the console
> x = 2
It is important to stress the role of the break statement. If it is omitted from a `case` block, execution falls through to the next case block. For example, let's remove all the break statements from the previous example:
In this case the block `case 2:` still executes, since the variable x = 2. But because this case block is not terminated by break, after it finishes the statements after `case 3:` execute as well, even though the variable x is still equal to 2.
As a result we get the following console output:
> x = 2 x = 3 x is undefined
One set of statements can be defined for several `case` labels:
Here, if `x=1` or `x=2`, the same statement executes
```
std::cout << "x = 1 or 2" << "\n"
```
. Likewise a common statement is defined for the cases `x=3` and `x=4`. Defining variables inside `case` blocks is perhaps not very common, but it can cause difficulties: if a variable is defined in a case block, all the statements of that block must be wrapped in curly braces (this is not required for the `default` block):
Sometimes a variable is needed inside a `switch` construct for intermediate calculations. For this purpose, starting with the
C++17 standard, the C++ language supports a special form of the `switch` construct:
This form also takes an expression whose value is compared with the constants after the `case` labels, but the expression may now be preceded by
the definition and initialization of a variable. For example:
In this case the `switch` construct defines the variable `k`, which is accessible only within this `switch`. The expression used is the value of the variable `op`, which holds the operation sign, and depending on this value we perform a particular operation with the variables n and k.
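A hedged sketch of switch with an initializer (the variable names follow the explanation above):

```
#include <iostream>

int main()
{
    int n {10};
    char op {'+'};

    switch (int k {5}; op)     // k exists only inside the switch
    {
        case '+':
            std::cout << n + k << std::endl;
            break;
        case '-':
            std::cout << n - k << std::endl;
            break;
        default:
            std::cout << "unknown operation" << std::endl;
    }
}
```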
Loops allow a set of statements to be executed many times, as long as a certain condition holds. The C++ language has the following kinds of loops:
for
while
do...while
The `while` loop executes some code while its condition is true, that is, returns true. Its formal definition is:
After the while keyword comes a conditional expression in parentheses that returns true or false, followed by a set of statements in curly braces that makes up the loop body. As long as the condition returns true, the statements in the loop body are executed.
For example, let's print the squares of the numbers from 1 to 9:
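The listing is not included above; a sketch matching the description:

```
#include <iostream>

int main()
{
    unsigned i {1};
    while (i < 10)
    {
        std::cout << i << " * " << i << " = " << i * i << std::endl;
        ++i;
    }
}
```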
Here, while the condition `i < 10` is true, the while loop runs, printing the square of the number and incrementing the variable i. At some point i grows to 10, the condition `i < 10` returns false, and the loop ends.
Console output of the program:
> 1 * 1 = 1 2 * 2 = 4 3 * 3 = 9 4 * 4 = 16 5 * 5 = 25 6 * 6 = 36 7 * 7 = 49 8 * 8 = 64 9 * 9 = 81
Each separate pass of the loop is called an iteration; the example above performed 9 iterations.
If the loop contains a single statement, the curly braces can be omitted:
The for loop has the following formal definition:
The initializer runs once when the loop starts and sets up the initial state; as a rule it initializes counters, special variables used to control the loop.
The condition determines whether the loop keeps running. Usually it is a comparison operation; if it returns a non-zero value (that is, the condition is true), the loop body runs and then the `iteration` expression executes.
The iteration expression runs after every completion of the loop body and changes the loop parameters. Usually the loop counters are incremented here.
For example, let's rewrite the program that prints the squares of numbers using a for loop:
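A sketch matching the explanation that follows:

```
#include <iostream>

int main()
{
    for (int i {1}; i < 10; i++)
    {
        std::cout << i << " * " << i << " = " << i * i << std::endl;
    }
}
```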
The first part of the loop declaration, `int i {1}`, creates and initializes the counter i. It is effectively the same as declaring and initializing a variable. The counter does not have to be of type
int; it can be another numeric type, for example float. Before the loop runs, its value is 1.
The second part is the condition under which the loop runs. In this case the loop runs until the variable i becomes 10.
The third part increments the counter by one. It does not have to increment by one: it can decrement, i--, or change the value by another amount, e.g. i += 2.
As a result, the loop body executes 9 times, until the variable i becomes 10, increasing by 1 each time. In effect we get the same result as with the while loop:
> 1 * 1 = 1 2 * 2 = 4 3 * 3 = 9 4 * 4 = 16 5 * 5 = 25 6 * 6 = 36 7 * 7 = 49 8 * 8 = 64 9 * 9 = 81
Not all three expressions have to be present in the loop definition; one or even all of them can be omitted:
Formally the loop definition stayed the same, only now the first and third expressions are missing: `for (; i < 10;)`. The counter variable is defined and initialized outside the loop, and it is incremented inside the loop body.
The loop does not even have to have a body. For example, let's compute the sum of the numbers from 1 to 5 with a loop:
Here the variable `sum`, 0 by default, accumulates the sum of the numbers. The loop defines the counter i and runs until i becomes 6. Note the third part of the loop definition, `sum += i++`: it adds the value of i to sum and then increments i. This way we computed the sum of the numbers while doing without a loop body altogether.
The initializer expression may define more than one variable. For example, let's define two variables and print their product:
Here the loop runs until either the variable i becomes 6 or the variable j becomes 10. Console output:
> 1*5=5 2*6=12 3*7=21 4*8=32 5*9=45
There is also a special form of the for loop designed specifically for working with sequences of values. This form has the following formal definition:
Here the expression `{2, 3, 4, 5}` is exactly such a sequence of values, int numbers. The loop walks over the whole sequence, placing each value in the variable `n`, whose value is printed to the console.
Another example: a string is essentially a sequence of characters, which can also be traversed with this kind of loop:
Here each character of the string is placed in the variable c, whose value is then printed to the console.
Later we will look at various kinds of sequences that can be traversed with this kind of loop.
In the do loop the loop body executes first, and only then is the condition in the while clause checked. As long as this condition is true, that is, non-zero, the loop repeats. The formal definition of the loop:
Here the loop body runs 6 times, until i becomes zero.
It is important to note that the do loop guarantees at least one execution of its body, even if the condition in the while clause is not true. That is, we can write:
Although our variable i is less than 0, the loop still executes once.
Let's look at one more example. Many console programs implement a loop of this kind that runs until the user enters a certain character. Suppose the user enters numbers and the program must compute their arithmetic mean; the user may enter any quantity of numbers and, to finish entering them, types the character "y" or "Y":
In this case each number is read into the variable number and then added to the variable total, while the counter of entered numbers, count, is incremented. After each entry we also wait for one more input: if the user enters "y" or "Y", we end the loop and print the arithmetic mean of the numbers. Example run of the program:
> Enter a number: 6 Finish? (y/n): n Enter a number: 5 Finish? (y/n): y The average value is 5.5
Loops can be nested. For example, let's print the multiplication table using a nested `for` loop:
> 1 2 3 4 5 6 7 8 9 2 4 6 8 10 12 14 16 18 3 6 9 12 15 18 21 24 27 4 8 12 16 20 24 28 32 36 5 10 15 20 25 30 35 40 45 6 12 18 24 30 36 42 48 54 7 14 21 28 35 42 49 56 63 8 16 24 32 40 48 56 64 72 9 18 27 36 45 54 63 72 81
Sometimes the loop has to be exited before it finishes. In that case the break statement can be used. For example, suppose we need to sum the numbers from 1 to 9 until the sum exceeds 20:
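A sketch of that loop (the names follow the explanation below):

```
#include <iostream>

int main()
{
    int result {};
    for (int i {1}; i < 10; i++)
    {
        result += i;
        std::cout << result << " ";
        if (result > 20) break;   // stop once the sum exceeds 20
    }
    std::cout << std::endl;
}
```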
Here, when the value of the variable result exceeds 20 (that is, when i equals 6), we leave the loop with the break statement:
> 1 3 6 10 15 21
Unlike the `break` statement, the continue statement jumps to the next iteration. For example, suppose we need to sum only the odd numbers in some range:
To find out whether a number is even, we take the remainder of integer division by 2; if it equals 0, we move on to the next iteration with the continue statement. If the number is odd, we add it to the other odd numbers.
Sometimes a loop needs to run indefinitely, for example when we have to keep monitoring changes of some values or when we do not know in advance how many iterations the loop will need. Infinite loops are widely used in various areas, for example in graphics programs, games, and network programs. Any kind of loop can be made infinite; in every case its condition is always true:
Even then, however, depending on the situation, there may still be some condition under which the loop should finish, and the break statement can be used to exit it. For example, let the user enter numbers indefinitely while the program prints the square of each number, but if the user enters 0, we exit the loop:
# References
A reference is a way to manipulate some object; in effect, a reference is an alternative name for the object. A reference is defined with the ampersand &:
In this case the reference refNumber is defined, and it refers to the object number. A reference is declared with the same type as the object it refers to, in this case int.
A reference cannot simply be declared on its own:
It must refer to some object.
Nor can a literal value, such as a number, be assigned to a reference:
Once the reference has been bound, we can manipulate the object it refers to through it:
Changes made through the reference inevitably affect the object the reference refers to.
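A minimal sketch (the identifiers follow the text above):

```
#include <iostream>

int main()
{
    int number {5};
    int &refNumber {number};   // refNumber is another name for number

    refNumber = 10;            // changes number as well
    std::cout << number << std::endl;   // 10

    // int &bad;               // error: a reference must be initialized
    // int &lit {25};          // error: cannot bind a non-const reference to a literal
}
```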
We can define references not only to variables but also to constants; such a reference must itself be constant:
A non-constant reference cannot be initialized with a constant object:
A constant reference may also refer to an ordinary variable; we simply cannot change the value through such a reference:
In this case, although we cannot change the value through the constant reference directly, we can still change the object itself, which naturally changes what the constant reference observes.
In most cases references are used with functions, when values have to be passed by reference, which will be covered in later articles. But there are other scenarios as well. For example, in the "for-each" style `for` loop that traverses a sequence, we cannot change the values of the elements being traversed. For example:
There are two loops here. In the first loop, while traversing the array, each element is placed in the variable n and its value is changed to the square of the number. However, this changes only the variable n, not the elements of the array numbers. The array elements keep their values, as the second loop, which prints them to the console, shows:
> 1 2 3 4 5
Now let's use references:
In the first loop the variable n is now a reference to an array element. Using a reference also makes the loop more efficient, since the element's value is no longer copied into the variable n, and through the reference the value of the corresponding element can be changed:
> 1 4 9 16 25
Sometimes, on the contrary, the elements of a collection should not, or even must not, be changed. In that case the reference can be made constant:
Although we cannot change the element's value here, references still make traversal of the array more efficient, since the array elements are not copied into the variable n.
An array is a collection of values of the same type. The formal definition of an array looks like this:
After the element type comes the array name and then, in square brackets, its size. For example, let's define an array of 4 numbers:
The number of elements can also be specified with a constant:
Some compilers (for example, g++) also allow the size to be set with a variable.
This array holds four numbers, but all of them have indeterminate values. To set the values of the elements, curly braces (an initializer) are used, listing the values of the elements inside:
In this case the braces are empty, so all the elements of the array receive zero values.
We can also specify concrete values for all the elements of the array:
In this case a memory area of four cells of 4 bytes each (the size of type int) is allocated, each cell holding one element of the array:
If the initializer contains fewer values than the array has elements, the values go to the first elements and the rest receive zero values:
If the initializer contains more values than the array has elements, a compilation error occurs:
Here the array has size 4 but is given 6 values.
If the array size is not specified explicitly, it is deduced from the number of values provided:
In this case the array has 6 elements.
Assigning one array to another is not allowed:
After defining an array we can access its individual elements by index. Indexes start from zero, so index 0 is used to access the first element. Accessing an element by index lets us read its value or change it. For example, let's get the second element (index 1):
Changing the value of the second element:
For example, let's read and change some element values:
When accessing elements by index, keep in mind that a non-existent index must not be used. If the array has 4 elements, indexes 0 through 3 can be used to access its elements; using any other index leads to an error:
If the element values must not be changed, the array can be defined as constant with the const keyword
The length of an array is not always known in advance, yet it may be needed. There are several ways to obtain it. The first way, inherited from the C language, uses the sizeof operator:
Essentially the size of an array equals the combined size of its elements. All elements have the same type and occupy the same amount of memory, so the expression `sizeof(numbers)` gives the size of the whole array in bytes, while `sizeof(numbers[0])` gives the size of one element in bytes. Dividing the two values gives the number of elements in the array.
The second way is to use the built-in library function std::size():
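A sketch of both approaches (std::size() requires C++17; it is declared in <iterator>):

```
#include <iostream>
#include <iterator>

int main()
{
    int numbers[] {11, 12, 13, 14};

    std::cout << sizeof(numbers) / sizeof(numbers[0]) << std::endl;   // 4
    std::cout << std::size(numbers) << std::endl;                     // 4
}
```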
Using loops, we can walk over the whole array and access its elements by index:
To traverse an array in a loop we need to know its length; here the length is given by the constant n. In the for loop we go over all the elements until the counter i equals the array length. As a result, all the elements of the array are printed to the console:
> 11 12 13 14
Another example: let's compute the sum of the array elements:
Here the array length is computed dynamically, with the `std::size()` function. Let's also use the other form of the `for` loop, designed specifically for traversing sequences, including arrays:
While the array is traversed, each element is placed in the variable number, whose value is printed to the console inside the loop.
If the type of the objects in the array is not known, the auto specifier can be used to deduce it:
Just as values are entered for individual variables, values can be entered for individual array elements. For example, let the user enter the values of a numeric array:
Here the loop first reads six numbers, one for each element of the array, and then the array is printed.
# Multidimensional arrays
Every array has a dimensionality. The number of dimensions corresponds to the number of pairs of square brackets. For example:
In this case the array numbers has one dimension (one pair of square brackets), that is, it is one-dimensional. It does not matter how many elements the array contains; in any case it can be pictured as a row or a column of values.
Besides one-dimensional arrays, C++ also has multidimensional ones. The elements of such arrays are themselves arrays, whose elements may in turn also be arrays. Two-dimensional and three-dimensional arrays are the most common. For example, let's define a two-dimensional array of numbers:
Here the array numbers has two dimensions (two pairs of square brackets): the first dimension is 3 and the second is 2. Such an array consists of three elements, each of which is an array of two elements. A two-dimensional array can also be pictured as a table, where the first dimension is the number of rows and the second the number of columns.
Arrays with more dimensions can be defined in the same way, for example a three-dimensional array:
As in the general case, a multidimensional array can be initialized with some values, for example with zeros:
All elements can also be initialized with individual values. The array numbers consists of three elements (rows), each of which is an array of two elements (columns), so it can be initialized, for example, as follows:
The nested curly braces delimit the elements of each sub-array. Such an array can also be pictured as a table:

| 1 | 2 |
| --- | --- |
| 4 | 5 |
| 7 | 8 |

It is also possible to initialize only some of the elements:
In that case the values are assigned to the first elements of the sub-arrays, and the remaining elements are initialized with zeros.
With one-dimensional arrays we saw that the compiler can deduce the array length from the number of elements. When initializing multidimensional arrays the length can also be omitted, but only for the first dimension (the first pair of square brackets):
To access the elements of a multidimensional array, an index is needed for each dimension. For a two-dimensional array we have to specify indexes for both of its dimensions:
Here the array nums can be broken down by indexes as follows:

|  | column 0 | column 1 |
| --- | --- | --- |
| nums[0] | nums[0][0] = 1 | nums[0][1] = 2 |
| nums[1] | nums[1][0] = 3 | nums[1][1] = 4 |
| nums[2] | nums[2][0] = 5 | nums[2][1] = 6 |

Accordingly, the expression `nums[1][0]` accesses the first element of the second sub-array (the first column of the second row)
Let's traverse the two-dimensional array:
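A sketch of such a traversal with nested index loops (the array values follow the table above):

```
#include <iostream>

int main()
{
    int nums[3][2] { {1, 2}, {3, 4}, {5, 6} };

    for (unsigned i {}; i < 3; i++)
    {
        for (unsigned j {}; j < 2; j++)
        {
            std::cout << nums[i][j] << "\t";
        }
        std::cout << std::endl;
    }
}
```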
The other form of the for loop, designed specifically for traversing sequences, can also be used to walk over the elements of a multidimensional array:
References are used to traverse the arrays nested inside the array. That is, in the outer loop
```
for(auto &subnumbers : numbers)
```
&subnumbers is a reference to a sub-array within the array. In the inner loop
```
for(int number : subnumbers)
```
we take each
sub-array subnumbers, place its individual elements in the variable number, and print its value to the console.
Character arrays have their own peculiarities. When initializing a character array we can give it either a set of characters or a string:
At first sight both arrays hold the same set of characters, except that in the first case it is simply a set of individual characters and in the second a string. But in the first case the array hello1 has five elements, while in the second case the array hello2 has not 5 but 6 elements, because when a character array is initialized with a string the null character '\0' is added to it automatically.
The way a character array is defined affects how it behaves. For instance, when printing to the console, `cout` prints the whole string up to the "\0" character.
The array must end with the character '\0'; otherwise characters from the adjacent memory cells, which contain garbage, are printed until either a null character is encountered or an invalid memory access occurs. For example, let's compare the output of three character arrays:
Example console output:
> hello╨J:╕╗☻ hello hello
In the first case the console output is not deterministic, because the character array is not terminated with a null character.
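A sketch of the comparison described above (the names hello1 and hello2 come from the text; hello3, a third array with an explicit terminator, is a hypothetical addition):

```cpp
#include <iostream>

int main()
{
    char hello1[] { 'h', 'e', 'l', 'l', 'o' };       // 5 elements, no '\0'
    char hello2[] { "hello" };                       // 6 elements, '\0' appended automatically
    char hello3[] { 'h', 'e', 'l', 'l', 'o', '\0' }; // explicit terminator

    std::cout << hello1 << "\t"     // may print garbage after "hello": no terminator
              << hello2 << "\t"
              << hello3 << std::endl;
}
```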
Above we saw that a string can be represented as an array of characters. How, then, can we represent an array of strings? A two-dimensional character array can be used for that:
Here the langs array contains 8 elements (8 strings). The maximum number of characters (notionally, columns) in each string is set by the `max_length` constant, but the strings themselves do not have to reach that length. For example, the string "C++" has only four characters (3 plus the automatically appended null character). All remaining elements get zero values by default.
Since every element of the langs array is a string, we can take the required string by index and print it to the console.
Because the nested arrays are strings, i.e. arrays of characters, each such array can be printed to the console as a string:
Here, while iterating over the langs array, each of its elements, an array of characters, is placed into the lang variable, and then we can print that array to the console as a string.
The getline() function of the cin stream reads a sequence of characters, including spaces. By default, input ends when the newline character '\n' is read (for example, when the Enter key is pressed). The `getline()` function has two versions. The first version takes two parameters: the first parameter is the character array in which the input is stored, and the second is the maximum number of characters to store in the array. This count includes the string terminator, the null byte '\0', which is automatically appended to the end of the input:
In this case no more than 100 characters are read into the text array. Sample input:
> Enter some text: Hello METANIT.COM You entered: Hello METANIT.COM
The other form of `getline()` also takes a third parameter: the character that signals the end of input. For example:
Here the exclamation mark ! serves as the end-of-input character, so when reading it is not included in the string:
> Enter some text: Hello World! You entered: Hello World
# Introduction to strings
Date: 2023-02-26
We can work with strings in C++ in the so-called C style, as arrays of characters terminated by the null byte '\0'. However, if that character is not found, or is removed while the string is being manipulated, further operations on such a string can have non-deterministic results. For that reason C-style strings are considered unsafe, and for storing strings in C++ it is recommended to use the std::string type from the `<string>` header.
An object of type string holds a sequence of characters of type char, which may be empty. For example, defining an empty string:
A string object can also be initialized with another string object:
As with an array, indices can be used to access individual characters of a string, both to read and to modify them:
Since a string object is a sequence of characters, that sequence can be traversed with a for loop. For example, let's count how many times the letter "l" occurs in a string:
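A minimal sketch of such a count (the string contents and variable names are assumptions):

```cpp
#include <iostream>
#include <string>

int main()
{
    std::string message {"hello world"};
    unsigned count {};
    for (char c : message)
    {
        if (c == 'l') ++count;          // count occurrences of the letter 'l'
    }
    std::cout << "l occurs " << count << " times" << std::endl;
}
```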
To read a string entered in the console, the std::cin object can be used, just as for other values:
Console output:
> Input your name: Tom Your name: Tom
However, with this way of reading, if the string contains substrings separated by spaces, `std::cin` will take only the first substring: > Input your name: <NAME> Your name: Tom
To read the whole line, the getline() method is used:
# Pointers
Pointers are objects whose values are the addresses of other objects (variables, constants, pointers) or functions. Like references, pointers are used for indirect access to an object; unlike references, however, pointers offer more capabilities.
To define a pointer, specify the type of the object the pointer will point to and an asterisk *:
First comes the data type the pointer points to and the asterisk *, then the pointer's name.
For example, let's define a pointer to an object of type int:
Such a pointer can only store the address of an int variable, but for now it does not point to any object and holds a garbage value. We can even try to print it to the console:
For example, in my case the console printed "0x8", some address in hexadecimal form (hexadecimal notation is normally used for memory addresses). A pointer can also be initialized with a value:
Since no specific value is given, the pointer receives the value 0. This value is a special address that points to nothing. The pointer can also be explicitly initialized to null, for example with the special nullptr constant:
That said, nothing forces you to initialize pointers. In general, though, it is recommended to initialize them, either with a specific value or with null as above. A null value, for instance, later makes it possible to tell that the pointer does not point to any object.
Note that the position of the asterisk does not matter for the definition: it can be placed next to the data type or next to the variable name; both definitions are equivalent:
Also note that the size of a pointer's value (the stored address) does not depend on the pointer's type; it depends on the platform. On 32-bit platforms addresses are 4 bytes, on 64-bit platforms 8 bytes. For example:
Here two pointers to different types, int and double, are defined. Variables of these types have different sizes, 4 and 8 bytes respectively, yet the sizes of the pointer values are the same. In my case, on a 64-bit platform, both pointers are 8 bytes.
The & operation obtains the address of some object, for example the address of a variable. That address can then be assigned to a pointer:
The expression `&number` returns the address of the `number` variable, so the `pnumber` variable will store the address of `number`.
Importantly, the number variable has type int, and the pointer that points to its address is also a pointer to int: the types must match. Alternatively, the auto keyword can be used:
If we try to print the variable's address to the console, we will see that it is a hexadecimal value:
The program's console output in my case:
> number addr: 0x1543bffc74
The address may differ from case to case and may change between runs of the program. In my case, for example, the machine address of the number variable is `0x1543bffc74`.
That is, in the computer's memory there is an address 0x1543bffc74 at which the number variable resides. Since number has type int, on most architectures it occupies the next 4 bytes (on particular architectures the size of int may differ). Thus a variable of type int consecutively occupies the memory cells with addresses 0x1543bffc74, 0x1543bffc75, 0x1543bffc76, 0x1543bffc77.
The pointer pnumber refers to the address where the number variable resides, i.e. to 0x1543bffc74.
So the pnumber pointer stores the address of the number variable; but where is the pointer pnumber itself stored? To find out, we can apply the & operation to the pnumber variable as well:
The program's console output in my case:
> number addr: 0xe1f99ff7cc pnumber addr: 0xe1f99ff7c0
Here we can see that the number variable resides at address `0xe1f99ff7cc`, while the pointer storing that address resides at address `0xe1f99ff7c0`. The output shows that the two variables are stored right next to each other in memory.
Since the pointer stores an address, we can obtain the value stored at that address, i.e. the value of the number variable. For that the * operation is used, the indirection or dereference operator. The result of this operation is always the object the pointer points to. Let's apply this operation and get the value of the number variable:
Sample console output:
> Address = 0x44305ffd4c Value = 25
The value obtained by dereferencing can be assigned to another variable:
And, again using the pointer, we can change the value at the address stored in the pointer:
Since the variable number resides at the address the pointer points to, its value changes accordingly.
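Putting these steps together, a sketch of the whole sequence (the names number and pnumber follow the text; the values are illustrative):

```cpp
#include <iostream>

int main()
{
    int number {25};
    int *pnumber {&number};                  // the pointer stores the address of number

    std::cout << "Address = " << pnumber
              << "\tValue = " << *pnumber << std::endl;  // dereference: 25

    int copy {*pnumber};                     // copy the value found at that address
    *pnumber = 36;                           // change the value at that address
    std::cout << "number = " << number       // number is now 36
              << "\tcopy = " << copy << std::endl;
}
```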
# Operations with pointers
Pointers support a number of operations: assignment, taking the pointer's address, obtaining the value through the pointer, some arithmetic operations, and comparison operations.
A pointer can be assigned the address of an object of the same type or the value of another pointer. The & operation is used to obtain an object's address:
The pointer and the variable must have the same type, in this case int.
The dereference operation has the form `*pointer_name`. It yields the object located at the address stored in the pointer.
Through the expression `*pa` we can read the value at the address stored in the pointer `pa`, and through an expression of the form `*pa = value` we can store a new value at that address. Since the pointer `pa` points to the variable `a`, changing the value at the address the pointer points to also changes the value of the variable `a`.
Assigning one pointer to another:
When one pointer is assigned to another, the first pointer effectively starts pointing to the same address as the second pointer:
> pa: address=0x56347ffc5c value=10 pb: address=0x56347ffc58 value=2 pa: address=0x56347ffc58 value=2 b value=125
A null pointer is a pointer that does not point to any object. If we do not want a pointer to point to any particular address, we can give it a conventional null value. To define a null pointer, it can be initialized with zero or with the nullptr constant:
Since a reference is not an object, a pointer to a reference cannot be defined, but a reference to a pointer can. Through such a reference one can change the value the pointer points to, or change the address stored in the pointer itself:
A pointer stores the address of a variable, and through that address we can get the variable's value. In addition, the pointer, like any variable, itself has an address at which it resides in memory. That address can also be obtained with the & operation:
The comparison operations >, >=, <, <=, ==, != can be applied to pointers. Comparisons are applied only to pointers of the same type; the address values are compared:
Console output in my case:
> pa (0xa9da5ffdac) is greater than pb (0xa9da5ffda8)
Sometimes a pointer of one type has to be assigned the value of a pointer of another type. In that case a type cast has to be performed with the `(pointer_type *)` operation:
To convert a pointer to another type, the target type is placed in parentheses before the pointer. And although we cannot simply create an object such as a variable of type void, for a pointer this works fine: a pointer of type void can be created.
Also note that a pointer to char (`char *pc {&c}`) is interpreted as a string when printed to the console:
So if we do want to print the address stored in a char pointer, that pointer has to be converted to another type, for example to void* or int*.
# Pointer arithmetic
Pointers can take part in arithmetic operations (addition, subtraction, increment, decrement). However, these operations work somewhat differently than with numbers, and much depends on the pointer's type.
An integer can be added to a pointer, and an integer can be subtracted from a pointer. One pointer can also be subtracted from another.
Let's start with the increment and decrement operations, taking a pointer to an int object:
The increment operation ++ increases a value by one. For a pointer, increasing by one means increasing the stored address by the size of the pointed-to type. Here the pointer is to int, and int objects are 4 bytes on most architectures, so incrementing an int pointer increases the address by 4. In my case the console output looks like this:
> address=0x81315ffd84 value=10 address=0x81315ffd88 value=828374408 address=0x81315ffd84 value=10
Here we can see that after the increment the pointer's value grew by 4: from `0x81315ffd84` to `0x81315ffd88`. After the decrement, i.e. decreasing by one, the pointer got its previous address back. In effect, increasing by one means moving to the next object in memory, the one right after the object the pointer points to, and decreasing by one means moving back to the previous object in memory.
After changing the address we can read the value located at the new address, but that value may be indeterminate, as shown above.
For an int pointer, incrementing or decrementing by one changes the address by 4. Likewise, for a short pointer these operations would change the address by 2, and for a char pointer by 1.
In my case the console output looks like this:
> Pointer pd: address:0x2731bffd58 Pointer pd: address:0x2731bffd60 Pointer pn: address:0x2731bffd56 Pointer pn: address:0x2731bffd58
As the output shows, incrementing the double pointer by one increased the address it stores by 8 (a double object is 8 bytes), while incrementing the short pointer by one increased its address by 2 (a short is 2 bytes).
The pointer changes in the same way when a number other than one is added or subtracted.
Adding 2 to a double pointer means moving two double objects forward, which changes the address by 2 * 8 = 16 bytes.
Subtracting 3 from a short pointer means moving three short objects back, which changes the address by 3 * 2 = 6 bytes.
In my case I get the following console output:
> Pointer pd: address:0xb88d5ffbe8 Pointer pd: address:0xb88d5ffbf8 Pointer pn: address:0xb88d5ffbe6 Pointer pn: address:0xb88d5ffbe0
Unlike addition, subtraction can be applied not only to a pointer and an integer, but also to two pointers of the same type:
According to the standard, the difference of two pointers has type `std::ptrdiff_t`, which in practice is an alias for `int`, `long` or `long long`; which of these types is used depends on the platform. On 64-bit Windows, for example, it is `long long`. That is why the variable `ab`, which stores the difference of the addresses, is defined with the `auto` keyword. Console output in my case: > pa: 0x6258fffab4 pb: 0x6258fffab0 ab: 1
The result of subtracting two pointers is the "distance" between them. In the example above the address in the first pointer is 4 greater than the address in the second (0x6258fffab0 + 4 = 0x6258fffab4). Since one int object is 4 bytes, the distance between the pointers is (0x6258fffab4 - 0x6258fffab0) / 4 = 1.
When working with pointers you have to distinguish operations on the pointer itself from operations on the value at the address the pointer points to.
That is, in this case the dereference `*pa` yields the value the pointer pa points to, the number 10, and the addition is performed on it. So this is an ordinary addition of two numbers, since the expression `*pa` is a number.
There are subtleties, though, in particular with the increment and decrement operations. The *, ++ and -- operations have the same precedence and, when placed next to each other, are applied right to left.
For example, let's apply a postfix increment:
In the expression `b {*pa++}` the pointer is incremented first (i.e. 4 is added to the address, since it is an int pointer). Then, because the increment is postfix, dereferencing yields the value the pointer referred to before the increment, the number 10, and that 10 is assigned to the variable b. In my case the result is: > pa: address=0xdfe63ffb20 value=10 b: value=10 pa: address=0xdfe63ffb24 value=10
Parentheses change the order of operations. Here the dereference is performed first and the value is obtained, then that value is increased by 1, so the address in the pointer now holds the number 11. Then, since the increment is postfix, the variable b receives the value as it was before the increment, i.e. 10 again. So unlike the previous case all operations are performed on the value at the address stored in the pointer, not on the pointer itself, and the result changes accordingly:
> pa: address=0xc0523ffe70 value=10 b: value=10 pa: address=0xc0523ffe70 value=11
It is similar with the prefix increment:
In this case we first dereference the pa pointer to get the value at its address and add one to it, so the value at the address stored in the pointer is now 11. Then the result of the operation is assigned to the variable b:
> pa: address=0x45cafffb40 value=10 b: value=11 pa: address=0x45cafffb40 value=11
Now the address in the pointer is changed first, then we take the value at that address and assign it to the variable b. The value obtained in this case may be indeterminate:
> pa: address=0x7fc59ffa50 value=10 b: value=0 pa: address=0x7fc59ffa54 value=0
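A sketch contrasting the forms discussed above (variable names follow the text; addresses and the exact output will of course differ on your machine):

```cpp
#include <iostream>

int main()
{
    int a {10};
    int *pa {&a};

    int b1 {*pa++};     // pointer advances first, the old address is dereferenced: b1 == 10
    pa = &a;            // reset the pointer for the next experiments

    int b2 {(*pa)++};   // value at the address is incremented: b2 == 10, a becomes 11
    int b3 {++*pa};     // value at the address is incremented again: b3 == a == 12

    std::cout << b1 << " " << b2 << " " << b3 << " " << a << std::endl;  // 10 10 12 12
}
```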
# Constants and pointers
Pointers can point to variables as well as to constants. To define a pointer to a constant, the pointer must also be declared with the const keyword:
Here the pointer `pa` points to the constant `a`, so even if we want to change the value at the address stored in the pointer, we cannot do it, for example like this:
In that case we simply get a compilation error.
It is also possible for a pointer to a constant to actually point to a variable:
In this case we can still change the variable directly, but we still cannot change its value through the pointer.
Through a pointer to a constant we cannot change the value of the variable or constant it points to, but we can assign the pointer the address of any other variable or constant:
Pointers to constants should be distinguished from constant pointers. A constant pointer cannot change the address it stores, but can change the value at that address.
Combining both previous cases gives a constant pointer to a constant, which allows changing neither the stored address nor the value at that address:
Pointers and arrays are closely related in C++. In most expressions the compiler converts an array name into a pointer to its first element. Pointers can be used to manipulate array elements just as indices can.
The array's name is essentially the address of its first element. Accordingly, by dereferencing it we can get the value at that address:
In my case I get the following console output:
> nums[0] address: 0x1f1ebffe60 nums[0] value: 1
By adding a number to the address of the first element we can get a particular element of the array. For example, the address of the second element is given by the expression `nums+1`, and its value by `*(nums+1)`.
The same rules apply to addition and subtraction here as in pointer operations. Adding one means adding to the address a value equal to the size of the array's element type. In this case the array has type int, whose size is usually 4 bytes, so adding one to the address increases it by 4; adding 2 increases the address by 4 * 2 = 8, and so on.
For example, let's walk through all the elements in a loop:
As a result the program prints the following:
> nums[0]: address=0xd95adffc30 value=1 nums[1]: address=0xd95adffc34 value=2 nums[2]: address=0xd95adffc38 value=3 nums[3]: address=0xd95adffc3c value=4 nums[4]: address=0xd95adffc40 value=5
However, the array's name is not an ordinary pointer, and we cannot change the address it holds, for example like this:
The array's name always holds the address of its very first element, and separate pointers are often used to move through the elements of an array:
Here the pointer `ptr` initially points to the first element of the array. By increasing the pointer by 2 we skip 2 elements of the array and move to the element `nums[2]`.
A pointer can also be assigned the address of a specific array element right away:
Pointers make it easy to iterate over an array:
Since a pointer stores an address, we can keep the loop going until the address in the pointer reaches the end of the array.
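A sketch of such a traversal, with the end address computed from the array size (the names nums, ptr and end are assumptions based on the surrounding text):

```cpp
#include <iostream>

int main()
{
    int nums[] {1, 2, 3, 4, 5};
    int *end {nums + 5};                    // address just past the last element

    for (int *ptr {nums}; ptr != end; ++ptr)
        std::cout << *ptr << "\t";          // value at the current address
    std::cout << std::endl;
}
```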
A multidimensional array can be traversed in a similar way:
Since here we are dealing with a two-dimensional array, the address of the first element is given by the expression `a[0]`, and that is what the pointer points to. On each iteration the pointer is incremented until its value equals the address stored in the end pointer.
We could also do without a pointer to the end of the array by checking a counter instead:
In both cases, though, the program would print the following result:
> 1 2 3 4 5 6 7 8 9 10 11 12
Since an array of characters can be interpreted as a string, a pointer to values of type char can also be interpreted as a string:
When the pointer's value is printed to the console, the string itself is printed.
The dereference operation can also be used to get individual characters; for example, let's print the first character:
If the address held by the pointer needs to be printed to the console, the pointer has to be converted to void*:
Otherwise, working with a pointer to a character array is the same as working with pointers to arrays of other types.
Also, since a char pointer can be interpreted as a string, in theory we could write the following:
However, keep in mind that string literals in C++ are treated as constants. So the previous pointer definition is at best accepted with a warning at compilation (in modern C++ it is ill-formed), and an attempt to change the string's characters through such a pointer leads to undefined behavior. Therefore, when defining a pointer to a string literal, it should be defined as a pointer to a constant:
Arrays of pointers can also be defined. In a sense an array of pointers resembles an array that contains other arrays, but an array of pointers has its advantages.
For example, take an ordinary two-dimensional character array, an array that stores strings:
To define a two-dimensional array we have to specify at least the size of the nested arrays, large enough to hold each string. In this case each nested array is 20 characters. But why allocate a whole 20 bytes for the first string, "C++", which contains only 4 characters (including the terminating null byte)? This is a limitation of such arrays. Arrays of pointers make it possible to get around it:
In this case the elements of the langs array are pointers: 3 pointers, each taking 4 or 8 bytes depending on the architecture (the size of an address). Each of these pointers points to the address in memory where the corresponding string, "C++", "Python" or "JavaScript", is located. Each of those strings, however, occupies exactly as much space as it actually needs; the string "C++" takes 4 bytes. On the one hand we incur extra overhead: additional memory is allocated to store the addresses in the pointers. On the other hand, when the strings in the array differ greatly in length, we can get an overall saving in memory usage.
# Functions
A function defines actions that the program performs. Functions allow a set of instructions to be separated out and given a name, which can then be used to call it repeatedly in different parts of the program. In essence, a function is a named block of code.
The formal definition of a function looks like this:
The first line is the function header. It starts with the function's return type; if the function returns no value, the void type is used.
Then comes the function name, an arbitrary identifier. The same rules apply to naming functions as to naming variables.
After the function name, the parameters are listed in parentheses. A function may have no parameters, in which case the parentheses are empty.
After the header, the function body with the executable statements goes in curly braces.
To return a result the function uses the return operator. If the function's return type is anything other than void, it must return some value with the return operator.
For example, here is the definition of the main function, which must be present in every C++ program and with which execution begins:
The function's return type is int, so the function must use the return operator and return a value compatible with int. The returned value is placed after the return operator.
Note that C++ allows the `return` operator to be omitted in the main function:
But if a function has type void, it does not have to return anything. For example, we could define the following function, which simply prints some text to the console:
When a C++ program starts, the main function runs. No other functions defined in the program are executed automatically; to execute a function, it has to be called. A function call has the form:
After the function name come parentheses listing the arguments, the values for the function's parameters.
For example, let's define and run the simplest of functions:
Here the hello function is defined and then called twice in the main function. This is the advantage of functions: we can put some common actions into a separate function and then call it repeatedly in different places of the program. As a result the program prints the string "hello" twice.
> hello hello
When using functions, keep in mind that the compiler must know about a function before it is called. So the call must come after the function's definition, as above. In some languages this does not matter, but in C++ it matters a great deal. If we call a function first and define it afterwards, we get a compilation error, as in the following case:
In that case the function additionally has to be declared before it is called. A function declaration is also called a prototype. The formal declaration looks like this:
It is effectively the function header. For the hello function the declaration would look like this:
Let's use the function declaration:
In this case, even though the function's definition comes after the call, the function has already been declared before the call, so the compiler knows about the hello function and the program works without any problems.
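A sketch of the declaration-before-use pattern described here (the function hello comes from the text):

```cpp
#include <iostream>

void hello();           // declaration (prototype): the compiler now knows the function

int main()
{
    hello();            // the call can precede the definition
    hello();
}

void hello()            // the definition comes later
{
    std::cout << "hello" << std::endl;
}
```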
# Scope of objects
Date: 2023-02-15
Every variable has a lifetime and a scope. The lifetime starts when the variable is defined and lasts until it is destroyed. The scope is the part of the program within which the object can be used. As a rule, a scope is limited to a block of code enclosed in curly braces. Depending on their scope, objects can be global, local or automatic.
Global variables are defined in the program file outside of any function or any other block of code and can be used by any function. Global variables exist for the whole lifetime of the program and are destroyed only when the program terminates.
If global variables are not initialized, they receive zero values.
For example, let's define and use a global variable:
Here the variable n is global and is accessible from any function, and any function can change its value.
Objects created inside a block of code (a function, or a construct such as a loop) are called local. Such objects are accessible only within the block of code in which they are defined.
Local objects that exist only while the block in which they are defined is executing are automatic.
When the block is entered, memory is allocated for such variables; when the block finishes, the allocated memory is released and the objects are destroyed.
Here the local variable n is defined in the print function, and the automatic variable m is defined in the main function. Outside their functions these variables are not accessible. For example, we cannot use the variable n in main, because its scope is limited to the print function; likewise we cannot use the variable m in print, because that variable is limited to main.
In the same way, a block of code can be used to define nested scopes:
Every scope has access to all objects defined in the enclosing scope, the outer context. The global scope is the outer scope for a function, so a function can use global variables. A function, in turn, is the outer context for a nested block of code, so the block can use the variable n1 defined in the function outside the block. Variables defined inside the block, however, cannot be used outside it.
Local objects defined in one context can hide objects with the same name defined in the outer context:
Here three variables named n are defined. The n defined at the level of the main function (`int n = 10;`) hides the global variable n, and the n defined at the block level hides the one defined at the level of main.
Sometimes, however, the global variable is the one we need. In that case the :: operator can be placed before the variable name to refer specifically to the global variable.
Besides automatic variables there is a special kind of local object: static objects. They are defined at the function level with the static keyword. While automatic variables are created and initialized on every entry into the function, static variables are initialized only once, and subsequent calls of the function reuse the previous value of the static variable. So the difference between local automatic and local static variables is their lifetime: automatic variables live until the end of the block, static variables until the end of the program.
For example, suppose we have a function with an ordinary automatic variable:
The print function is called three times, and on each call the program allocates memory anew for the variable n defined in the function; when print finishes, the memory for n is released. Accordingly, its value is the same on every call:
> n=1 n=1 n=1
Now let's make the variable n static:
The static keyword has been added to the variable, so when the print function finishes, the variable is not destroyed and its memory is not released; on the contrary, it is kept. Accordingly, the program's output changes:
> n=1 n=2 n=3
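A sketch of the static variant of print described above (replacing `static unsigned` with plain `unsigned` reproduces the earlier n=1 n=1 n=1 output):

```cpp
#include <iostream>

void print()
{
    static unsigned n {};   // initialized once, keeps its value between calls
    ++n;
    std::cout << "n=" << n << std::endl;
}

int main()
{
    print();    // n=1
    print();    // n=2
    print();    // n=3
}
```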
# Function parameters
Parameters are the way to pass various values into a function. Parameters are listed in parentheses after the function name, separated by commas:
For each parameter its type and name are specified.
For example, let's define and call a function that prints a person's name and age to the console:
When the program runs we get the following console output:
> Name: Tom Age: 38
The `print()` function takes two parameters and prints their values to the console. The first parameter is called `name` and has type `std::string`; the second is called `age` and has type `unsigned int`. When calling the function, values must be supplied for these parameters.
The values passed to a function's parameters when it is called are called arguments. In this case a string literal and an integer literal are passed as the arguments. Arguments are passed by position: the first argument (the string "Tom") goes to the first parameter (name), the second argument (the number 38) to the second parameter (age), and so on.
The arguments must match the parameters in type, or allow implicit conversion to the parameter's type. In the example above, the name parameter is passed a string literal, which is automatically converted to `std::string`.
In the same way, not only literals but also, for example, the values of variables and constants can be passed to parameters:
Here, when the function is called, the name parameter receives the value of the userName constant, and the age parameter the value of the userAge variable.
When a function prototype is used, the prototype must list the parameter types in parentheses after the function name:
Optionally, parameter names may also be given in the prototype:
A function can take default arguments, i.e. values the function uses if no value is explicitly passed for a parameter when it is called:
To set a default value, the parameter is assigned that value: `unsigned age = 18`. If no value is passed for the second parameter, it uses the default. Console output of the program: > Name: Sam Age: 18 Name: Tom Age: 22
When declaring a prototype for such a function, the prototype can also contain the default value for the parameter. In that case the default value need not be defined in the function itself; it is taken from the prototype:
When defining optional parameters, keep in mind that they must come after the required ones. For example, let's give the name parameter a default value as well:
Since the name parameter is defined as optional here, the age parameter that follows it must also have a default value.
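A sketch of a print function with a default age (the names follow the text; the default value 18 comes from the sample output above):

```cpp
#include <iostream>
#include <string>

void print(std::string name, unsigned age = 18)   // age is optional
{
    std::cout << "Name: " << name << "\tAge: " << age << std::endl;
}

int main()
{
    print("Sam");        // Name: Sam  Age: 18
    print("Tom", 22);    // Name: Tom  Age: 22
}
```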
Note that starting with the C++20 standard, the auto keyword can be specified for parameters instead of a concrete type. The parameter types are then deduced automatically at compile time from the arguments passed to the function:
In this case the sum function computes the sum of two values. Since numbers of different types can be passed, the `auto` keyword can be used for the parameters instead of a type so as not to depend on any particular type.
# Passing arguments by value and by reference
Arguments that are variables or constants can be passed into a function by value or by reference.
When arguments are passed by value, the function receives copies of the values of the variables and constants. For example:
The square function takes a number of type `int` and squares it. In the main function, the value of the variable n, which is passed to square as the argument, is printed to the console before and after square runs.
When we run it, we see that the change to the parameter m in the square function takes effect only within that function. The value of the variable n passed to the function does not change at all:
> Before square: n = 4 In square: m = 16 After square: n = 4
Why does this happen? When the function is compiled, separate areas of memory are allocated for its parameters. When the function is called, the values of the arguments are computed and placed into those memory areas. So the function manipulates copies of the objects' values, not the objects themselves.
When parameters are passed by reference, a reference to the object is passed, through which we can manipulate the object itself, not just its value. Let's rewrite the previous example using pass by reference:
Now the parameter m is passed by reference. A reference parameter is bound directly to the object, so the object itself can be changed through the reference. That is, when the function is called, the parameter `m` in `square` refers to the same object as the variable `n`.
If we compile and run the program, the result is different:
> Before square: n = 4 In square: m = 16 After square: n = 16
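A sketch of the two variants of the square function (only the parameter declaration differs; here they are given distinct names for the sake of a single compilable example):

```cpp
#include <iostream>

void squareVal(int m)  { m *= m; }     // works on a copy: the caller's variable is unchanged
void squareRef(int &m) { m *= m; }     // works on the caller's object itself

int main()
{
    int n {4};
    squareVal(n);
    std::cout << "after squareVal: n = " << n << std::endl;  // 4
    squareRef(n);
    std::cout << "after squareRef: n = " << n << std::endl;  // 16
}
```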
Passing by reference makes it possible to return several values from a function at once. Passing parameters by reference is also more efficient for very large objects, because in that case no values are copied: the function works with the object itself, not with its value.
Passing arguments by reference should be distinguished from passing references as arguments:
If the function takes its arguments by value, changes to the parameters inside the function still have no effect on the outside objects, even if references to objects are passed when the function is called.
> Before square: n = 4 In square: m = 16 After square: n = 4
Passing parameters by value is better suited for passing small objects into a function; their values are copied into memory areas that the function then uses.
Passing parameters by reference is better suited for passing large objects into a function; in that case the whole contents of the object do not have to be copied into memory, which improves the program's performance.
Passing parameters by value and by reference differ in one more important respect. C++ can automatically convert values of one type to another, including conversions that lose precision (for example, from double to int). But with parameters passed by reference, implicit automatic type conversions are ruled out. Consider an example:
Two almost identical functions are defined here; the only difference is that printVal takes its parameter by value, while printRef takes it by reference. Both functions are called with a number of type `double`, while the parameter of both functions has type `int`. When passing by value, the double is successfully converted to int (albeit with loss of precision); when passing by reference, we get a compilation error. This is another reason why passing values by reference is often recommended: it rules out unforeseen and sometimes undesirable type conversions.
# Constant parameters
Parameters can be constant: the values of such parameters cannot be changed. Constant parameters are preceded by the const keyword. For example:
Here in the square function the parameter `n` is defined as constant. Inside square we cannot change its value; it effectively behaves like a constant. Note that the `const` keyword for a constant parameter is used in the function definition.
In the function declaration, its prototype, specifying const for parameters passed by value is optional.
For parameters passed by reference, const must be specified in the prototype.
Why are constant parameters needed? Sometimes it must be guaranteed that a parameter keeps its initial value from the beginning to the end of the function. If the parameter is not constant, there is a chance its value will be inadvertently changed inside the function. There is even a recommendation that if you do not plan to change a parameter's value explicitly, it is better to define it as constant from the start.
A constant parameter can receive either a constant or a variable as its argument.
This situation should be distinguished from passing constants as arguments for non-constant parameters:
Even though constants are passed to the function when it is called, since the parameters themselves are not constant, the function can change their values.
Parameters passed by reference can also be constant:
The value of a constant reference cannot be changed either.
If a function takes its arguments by reference, then to be able to pass a constant into the function the parameters must themselves be references to constants (a constant cannot be bound to a non-constant reference):
And if large objects that must not change need to be passed into a function, defining the parameters as constant references fits that task best.
# The return operator and returning a result
Date: 2023-03-04
The return operator is used to return a result from a function. This operator has two forms:
The first form of return is used to return a result from the function. If the function's return type is anything other than void, it must return some value with the return operator, and the returned value must match the function's return type or allow implicit conversion to it.
The only function that returns a value but in which the return operator may be omitted is the main function.
For example, suppose we want to write a program that computes the sum of numbers. Let's define a function that returns the sum of numbers:
Here the sum function, which computes the sum, takes two values of type `int` and returns a value of type `int`, so its prototype looks like this:
In this case the sum function must use the return operator followed by the value to be returned:
Here the value of the variable `res` is returned, although it could just as well be a more complex expression yielding an int, for example:
Since the sum function returns a value, its result can be assigned to some variable or constant:
Or the result of sum can be used directly as a number, for example when printing to the console:
Let's look at one more example:
Here a `calculate` function is defined that also takes two numbers and a character, the operation sign. In the `switch` construct, depending on the operation sign, the result of the corresponding operation is returned with the return operator.
The other form of return takes no value after it and can be used in functions that return nothing, i.e. whose return type is void. This form is useful when the function needs to be exited before it finishes.
For example, a function takes a user's name and age and prints them to the console:
Here in the print function the passed age is checked, and if it is an invalid value, the return operator is used to leave the function.
The C++ compiler can deduce the return type automatically if the auto keyword is used in place of the return type:
Here the result type of the sum function is deduced automatically. Since the sum `a + b` is returned and its result has type `int`, the compiler deduces that the function returns int. Note that the sum function is defined before it is called in main. In this case there is little point in using auto instead of `int`; auto is usually applied when the name of the return type is rather long and complex, so it shortens the code.
# Pointers as function parameters
Function parameters in C++ can be pointers. Pointers are passed into a function by value, i.e. the function receives a copy of the pointer. At the same time the copy holds the same address as the original pointer, so by using pointers as parameters we can reach the argument's value and change it.
For example, suppose we have the simplest function that increases a number by one:
Here the variable n is passed as the argument for the parameter x. It is passed by value, so any change to the parameter x in the increment function has no effect on the value of the variable n, which we can see by running the program:
> increment function: 11 main function: 10
Now let's change the increment function to take a pointer as its parameter:
To change the parameter's value, dereferencing followed by an increment is applied: `(*x)++`. This changes the value located at the address stored in the pointer x. Since the function now takes a pointer as its parameter, the address of the variable has to be passed when calling it: `increment(&n);`.
As a result, the change to the parameter x also affects the variable n, because both hold the address of the same memory location:
> increment function: 11 main function: 11
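A sketch of the pointer-based increment function described above (names follow the text):

```cpp
#include <iostream>

void increment(int *x)
{
    (*x)++;                                          // change the value at the passed address
    std::cout << "increment function: " << *x << std::endl;
}

int main()
{
    int n {10};
    increment(&n);                                   // pass the address of n
    std::cout << "main function: " << n << std::endl;  // 11: n itself has changed
}
```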
At the same time, since the argument is passed into the function by value, i.e. the function gets a copy of the address, if the address in the pointer is changed inside the function, this does not affect the outside pointer passed as the argument:
The pointer ptr, which stores the address of the variable n, is passed into the increment function. When called, increment receives a copy of this pointer through the parameter x. Inside the function, x is redirected to the address of the variable z, but this does not affect the ptr pointer, since it is a separate copy. As a result, after the address is reassigned, the pointers x and ptr hold different addresses.
The result of the program:
> increment function: 6 main function: 10
Parameters that are pointers can be constant.
The value behind a constant parameter cannot be changed; effectively such parameters are pointers to constants. Constant parameters are therefore useful when the address of a constant needs to be passed into a function; in that case the parameter must be constant:
Note that the constness of the parameter does not mean that the address stored in the pointer cannot be changed, for example like this:
To guarantee that not only the value behind the pointer but also the pointer's own value (the address stored in it) will not change, the pointer has to be defined as a constant pointer:
Parameters passed by reference and pointer parameters are similar in that both let you change the values of the variables passed into them. The only distinctive feature of a pointer is that it can have the value `nullptr`, whereas a reference must always refer to something.
So if a parameter needs to be able to have no value at all, a pointer can be used. The only thing is that in that case the pointer has to be checked against `nullptr` before it is used.
# Arrays as function parameters
If a function takes an array as a parameter, what is actually passed into the function is a pointer to the array's first element. That is, as with pointers, we get an address through which values can be changed. So the following function declarations are essentially equivalent:
Let's pass an array into a function:
In this case the print function prints the first element of the array to the console.
Now let's define the parameter as a pointer:
Here an array is again passed into the function, but the parameter is a pointer to the array's first element.
Since a parameter defined as an array is treated as a pointer to the first element, we cannot correctly obtain the array's length, for example like this:
Nor can we use the range-based for loop to iterate over such an array:
To properly detect the end of the array, iterate over its elements and access them, a special marker signalling the end of the array is needed. Different approaches can be used for this.
The first approach is for one of the array's elements itself to signal its end. In particular, a character array can represent a string, a sequence of characters terminated by the null character '\0'; the null character effectively marks the end of the character array:
With this approach, however, we must be sure that the array actually contains such an end marker. If in this case the string contained no null byte, the consequences would be unpleasant. So usually a different approach is used: the array's size is passed into the function:
A third approach is to pass a pointer to the end of the array. The end-of-array pointer can be computed manually, or the built-in library functions std::begin() and std::end() can be used:
Note that end returns a pointer not to the last element but to the address just past the last element of the array.
Let's apply these functions:
Since passing an array actually passes a pointer to its first element, that pointer can be used to change the array's elements. If there is no need to change the array, it is better to define the array parameter as constant:
In this case the print function simply prints the values from the array, so its parameters are marked as constant.
The twice function changes the array's elements, doubling them, so its array parameter is non-constant. After twice runs, the numbers array is modified.
The program's console output:
> 1 2 3 4 5 2 4 6 8 10
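A sketch of the two functions described above, with the size passed alongside the array (names follow the text):

```cpp
#include <iostream>

void print(const int numbers[], unsigned n)   // the array is read-only here
{
    for (unsigned i {}; i < n; ++i)
        std::cout << numbers[i] << "\t";
    std::cout << std::endl;
}

void twice(int numbers[], unsigned n)         // the elements are modified in place
{
    for (unsigned i {}; i < n; ++i)
        numbers[i] *= 2;
}

int main()
{
    int numbers[] {1, 2, 3, 4, 5};
    print(numbers, 5);      // 1 2 3 4 5
    twice(numbers, 5);
    print(numbers, 5);      // 2 4 6 8 10
}
```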
Another scenario for passing an array into a function is passing it by reference. The prototype of a function that takes an array by reference looks like this:
Note the parentheses in the `(&)` notation: they are what indicate that the array is passed by reference. Usage example:
Constant references to arrays can be passed in the same way.
On the one hand, passing an array by reference may seem pointless, since passing an array by value already just passes the array's address. On the other hand, passing an array by reference has some advantages. First, no value (the array's address) is copied; we work with the original array directly. Second, passing an array by reference makes it possible to constrain the array's size, so at compile time the compiler already knows how many elements the array has.
Here the print function takes a reference strictly to an array of 5 elements. And since we know the exact size of the array, there is no need to additionally pass the array's size into the function.
If we try to pass an array with a different number of elements into the function, we get a compilation error, as in the following case:
A multidimensional array is also passed as a pointer to its first element. At the same time, since the elements of a multidimensional array are themselves arrays, the pointer to the first element of a multidimensional array is in fact a pointer to an array.
When a parameter is defined as a pointer to an array, the size of the second dimension (and of all subsequent dimensions) must be specified, because that size is part of the element's type. Example declaration:
Here it is assumed that the array passed in is two-dimensional and all of its subarrays have 3 elements. Note the parentheses around the parameter name; they are what make the parameter a pointer to an array. This situation should be distinguished from the following one:
In this case the parameter is defined as an array of pointers, not as a pointer to an array.
Let's look at using a pointer to an array as a parameter:
In the main function a two-dimensional array is defined; it consists of three subarrays, each with three elements.
Along with the array, the number of rows, in effect the number of subarrays, is passed into the print function. In print itself we get the number of elements in each subarray and traverse all elements with two loops. The expression `rows[0]` accesses the first subarray of the two-dimensional array, and `rows[0][0]` accesses the first element of the first subarray. By manipulating the indices in this way the whole two-dimensional array can be traversed.
As a result we get the following console output:
> 1 2 3 4 5 6 7 8 9
We could also use array notation when declaring and defining the print function, which may look simpler than the pointer notation. But in that case the second dimension again has to be specified explicitly:
# Parameters of the main function
In the previous topics the main function was defined without parameters. However, it can also be defined with parameters:
The first parameter, argc, has type `int` and holds the number of command-line arguments.
The second parameter, argv[], is an array of pointers and holds all the command-line arguments passed to the program, as strings. Thanks to these parameters we can pass data to the program when launching it from the console.
For example, let's define the following program:
In this case we simply print all the command-line arguments to the console. Let's compile the program and run it without passing any arguments:
> c:\cpp>g++ hello.cpp -o hello & hello hello
In my case the program's code is in the file "hello.cpp" and is compiled into a file named hello. After the program starts, even if we pass it no arguments, the `argv[]` array contains at least one element: the name of the program file. So in my case the array holds the single string "hello", and the first parameter, `argc`, equals 1.
Let's pass the program some arguments:
> c:\cpp>g++ hello.cpp -o hello & hello Tom 38 hello Tom 38
Here two values, "Tom" and "38", are passed to the program at startup. The arguments are separated by spaces. Note that even if a number is passed (as with the second argument), the program still receives it as a string. Accordingly, the argv array now contains three elements.
When a pointer is returned from a function, it must contain either the value nullptr or an address that is still valid. For that reason you should not return from a function the address of an automatic local variable, because it is destroyed once the function finishes. Consider the following incorrect example of a function that returns the larger of two numbers:
Function parameters are like variables: when the function is called, memory is allocated for them on the stack. In this case the address of the memory area of the corresponding parameter is returned (`return &a` or `return &b`). But once the address has been returned the function finishes, the corresponding memory areas are cleaned up and the parameters are destroyed, so the returned address is no longer valid. And although the compiler may even compile this function, limiting itself to warnings, such a function will not work correctly.
That does not mean, however, that a pointer can never be returned from a function. For example, take the following situation:
Here is a similar function that computes the larger of two numbers, except that the addresses of the variables are passed instead of the numbers themselves, so the returned address remains valid.
The result does not have to be assigned to a variable or constant; the function's result can be used directly:
A function can also return a reference. Here, however, we can run into the same problems as when returning pointers: do not return a reference to a local object created inside the function. Since all objects created in a function are destroyed when it finishes and their memory is cleaned up, the returned reference would refer to a non-existent object, as in the following case:
Here the function returns a reference to the larger value, one of the passed parameters. But since the memory allocated for parameters passed by value is cleaned up after the function runs, the returned reference ends up referring to a non-existent object.
To get out of this situation, we can pass the values by reference:
Note that if the parameters are constant references, then to return one of those references the function must return a constant reference.
C++ allows functions to be defined with the same name but different sets of parameters. This capability is called function overloading. At compile time the compiler selects the appropriate version of the function based on the parameters.
For several different versions of a function with the same name to be defined, the versions must differ in at least one of the following respects:
they have a different number of parameters
the corresponding parameters have different types
The different versions of a function may also differ in their return type. However, when choosing which version of the function to use, the compiler looks only at the number of parameters and their types.
Here two versions of a sum function are defined, which adds two numbers. In one case it adds two int numbers, in the other, double numbers. When the functions are called, the compiler determines from the arguments which version to use. For example, in the first call int numbers are passed:
Accordingly, this version is selected for that call:
In the second call, floating-point numbers are passed to the function:
So the version that takes double numbers is selected:
Overloaded versions of a function can likewise differ in the number of parameters:
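A sketch of overloads that differ by the number of parameters (the concrete numbers are illustrative, not taken from the original listing):

```
#include <iostream>

int sum(int a, int b) { return a + b; }
int sum(int a, int b, int c) { return a + b + c; }

int main()
{
    std::cout << sum(1, 2) << std::endl;      // two-parameter version
    std::cout << sum(1, 2, 3) << std::endl;   // three-parameter version
}
```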
When overloading functions with reference parameters, keep in mind that parameters of types `data_type` and `data_type&` are not distinguished for overloading purposes. For example, the two following prototypes
are not considered different versions of the print function.
When overloading, a constant parameter differs from a non-constant one only for references and pointers. In all other cases a constant parameter is identical to a non-constant parameter. For example, the following two prototypes will NOT differ for overloading purposes:
In the second prototype the compiler ignores the const qualifier.
An example of overloading a function with constant parameters:
Here the square function takes a pointer to a number and squares it. In the first case the parameter is a pointer to a constant, in the second it is an ordinary pointer.
In the first call the version taking a pointer to a constant is selected, since the value n1 being passed is a constant. So the number n1 passed to this function will not change.
In the second call the version with an ordinary pointer is selected; for the sake of the example it also changes the value passed in. So the number n2 passed to this function will change its value. The console output of the program:
> square(n1): 4 n1: 2 square(n2): 9 n2: 9
Passing a constant reference works in exactly the same way.
# Recursive functions
Date: 2023-03-05
Recursive functions are functions that call themselves. Such functions are quite often used to traverse various structures. For example, if we need to find a particular file in a folder, we first look through all the files in that folder and then look through all of its subfolders.
For example, let's define the computation of the factorial as a recursive function:
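A sketch of such a recursive factorial function (the exact types used in the original listing may differ):

```
#include <iostream>

unsigned long long factorial(unsigned n)
{
    if (n > 1)
        return n * factorial(n - 1);   // recursive descent
    return 1;                          // base case
}

int main()
{
    std::cout << factorial(5) << std::endl;   // 120
}
```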
The factorial function states the condition that if the number n is greater than 1, it is multiplied by the result of the same function called with n-1 as the argument; that is, a recursive descent takes place. And so on, until the parameter value equals 1, at which point the function returns 1.
A recursive function must have a base case that uses the return statement and toward which the other calls of the function converge. For the factorial the base case is the situation where n = 1; in that case the statement `return 1;` executes. For example, the call `factorial(5)` produces the following chain of calls:

```
5 * factorial(4)
5 * 4 * factorial(3)
5 * 4 * 3 * factorial(2)
5 * 4 * 3 * 2 * factorial(1)
5 * 4 * 3 * 2 * 1
```
Another common illustrative example of a recursive function is one that computes Fibonacci numbers. The n-th member of the Fibonacci sequence is defined by the formula f(n) = f(n-1) + f(n-2), with f(0) = 0 and f(1) = 1. The values f(0) = 0 and f(1) = 1 thus provide the base cases for this function:
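A sketch of such a Fibonacci function together with a loop that prints the first 10 numbers shown in the output below (the details of the original listing are assumed):

```
#include <iostream>

unsigned fibonacci(unsigned n)
{
    if (n == 0) return 0;   // base case f(0) = 0
    if (n == 1) return 1;   // base case f(1) = 1
    return fibonacci(n - 1) + fibonacci(n - 2);
}

int main()
{
    for (unsigned i{}; i < 10; i++)
    {
        std::cout << fibonacci(i) << (i < 9 ? ", " : "\n");
    }
}
```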
The result of the program is that 10 numbers of the Fibonacci sequence are printed to the console:
> 0, 1, 1, 2, 3, 5, 8, 13, 21, 34
In the examples above the function called itself directly. But a recursive call can also be indirect. For example, the function fun1() calls another function fun2(), which in turn calls fun1(). In that case fun1() and fun2() are called mutually recursive functions.
It is worth noting that recursive functions can often be expressed as loops. For example, to compute the factorial we can use a loop instead of recursion:
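A loop-based equivalent might look roughly like this (a sketch):

```
unsigned long long factorial(unsigned n)
{
    unsigned long long result{1};
    for (unsigned i{2}; i <= n; i++)
    {
        result *= i;
    }
    return result;
}
```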
Loop-based constructs are often more efficient than recursion. For example, if a function calls itself thousands of times, a large amount of stack memory is required to store copies of the argument values and the return address for each call. The program can quickly exhaust the stack memory allotted to it, since the stack is usually of a fixed, limited size, which may lead to the program terminating abnormally. In such cases it is usually better to use an alternative approach, such as a loop. Nevertheless, despite the overhead, using recursion can often considerably simplify writing a program.
Let's look at an application of recursion using quicksort as an example. Consider the following code:
To sort the array, a `sort` function is defined here that takes three parameters: the array being sorted, and the start and end indices of the part of the array being sorted:
On the first call of the function it is assumed that the start index, start, is 0, while the end index, end, is the index of the last element of the array.
First we check the size of the part being sorted:
If the start index is greater than or equal to the end index, the subrange contains at most one element and there is nothing to sort. On every execution the `sort()` function eventually splits the current sequence into two smaller subsequences, for each of which sort() is again called recursively. So in the end we arrive at subsequences that contain only a single element.
Next we set the index of the current element:
Then, in a loop, we compare this element with the rest of the elements that come after index start:
If the i-th element is less than the chosen starting element, we swap the i-th element with the element that follows the current position. As a result, all the elements that are less than the chosen element end up before all the elements that are greater than or equal to it.
When the loop finishes, the variable current holds the index of the last element found that is less than the chosen starting element at index start. We then swap the elements at indices current and start:
Thus the element against which the other elements were compared now has index current and is placed after the elements that are smaller than it.
Finally we sort the two subsequences on either side of the current element at index current by calling sort() for each subset. The indices of the elements smaller than the chosen one run from the beginning up to `current-1`, and the indices of those that are larger run from `current+1` to the end.
For example, on the first run the initial data has the following values:
As a result the loop performs 7 iterations in sequence, during which the elements change as follows:
> after iteration 1: 3 0 6 -2 -6 11 3 after iteration 2: 3 0 6 -2 -6 11 3 after iteration 3: 3 0 -2 6 -6 11 3 after iteration 4: 3 0 -2 -6 6 11 3 after iteration 5: 3 0 -2 -6 6 11 3 after iteration 6: 3 0 -2 -6 6 11 3 after iteration 7: 3 0 -2 -6 6 11 3
And the variable `current` will equal 3, meaning there were three elements smaller than the chosen element at index start. Then the elements at indices current and start are swapped, which gives: > -6 0 -2 3 6 11 3
Then we split into two subsequences: to the left of `current` > -6 0 -2
and to the right of `current` > 6 11 3
And for each of these subsequences the `sort()` function is run separately.
# Function pointers
A function pointer stores the address of a function. In essence, a function pointer holds the address of the first byte in memory at which the function's executable code resides.
The most common function pointer is the function's name; using the name we can call the function and obtain its result.
But we can also define a function pointer as a separate variable using the following syntax:
`type` is the type of the value returned by the function, `pointer_name` is an arbitrarily chosen identifier that follows the usual variable naming rules, and `parameters` is the comma-separated list of parameter types (if any).
A pointer can point only to a function whose return type and parameter types match the definition of the function pointer.
For example, let's define a function pointer:
In this case a pointer named message is defined. It can point to functions without parameters that return void (that is, return nothing).
Let's use the function pointer:
A function pointer can be assigned a function that matches the pointer's return type and parameter specification:
That is, the pointer message now holds the address of the hello function, and by going through the pointer we can call this function:
Alternatively, we can call through the function pointer like this:
Later we can assign the pointer the address of another function that also matches the pointer's definition. In the end the program produces the following output:
> Hello, World Good Bye, World
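Putting the pieces described above together, a sketch of the full program might look like this (the name goodbye for the second function is an assumption; only its output is given in the text):

```
#include <iostream>

void hello() { std::cout << "Hello, World" << std::endl; }
void goodbye() { std::cout << "Good Bye, World" << std::endl; }

int main()
{
    void (*message)();   // pointer to a function with no parameters returning void

    message = hello;     // the pointer now holds the address of hello
    message();           // Hello, World

    message = goodbye;   // reassign the pointer to another matching function
    (*message)();        // Good Bye, World - alternative call syntax
}
```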
When defining the pointer, pay attention to the parentheses around its name. The definition used above
is NOT equivalent to the following definition:
In the second case what is defined is not a function pointer but a prototype of a function message that returns a void* pointer.
A pointer can be initialized right away when it is defined:
It can also be initialized with the value nullptr:
If the pointer is initialized with some function when it is defined, the whole type specification can be omitted and the keyword auto used instead:
You can emphasize that the variable is a pointer by putting an asterisk after `auto`:
There is no real difference between the versions with and without the asterisk.
Note that when assigning a function we can also apply the address-of operator:
But in practice this symbol, like the asterisk with auto, changes nothing.
Let's look at one more function pointer:
Here a pointer operation is defined that can point to a function with two int parameters that returns an int value. Accordingly, we can assign the pointer the addresses of the add and subtract functions and call them, passing values for the parameters through the pointer.
Besides individual function pointers, we can define arrays of function pointers. The following formal syntax is used for this:
Here actions is an array of pointers to functions, each of which must take two int parameters and return a double value.
Let's look at an example of using an array of function pointers:
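A sketch consistent with the output shown below; here the functions return int, and the labels array and the concrete values 10 and 5 are inferred from that output:

```
#include <iostream>

int add(int a, int b) { return a + b; }
int subtract(int a, int b) { return a - b; }
int multiply(int a, int b) { return a * b; }

int main()
{
    // array of three pointers to functions taking (int, int) and returning int
    int (*operations[3])(int, int) { add, subtract, multiply };
    const char* labels[3] { "x + y", "x - y", "x * y" };

    int x{10}, y{5};
    for (unsigned i{}; i < 3; i++)
    {
        std::cout << labels[i] << " = " << operations[i](x, y) << std::endl;
    }
}
```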
Here the operations array contains three functions, add, subtract and multiply, which are called one after another in the main function by iterating over the array in a loop.
The console output of the program:
> x + y = 15 x - y = 5 x * y = 50
# Function pointers as parameters
A function pointer is effectively a type of its own, so a function can have a parameter whose type is a function pointer. This way we can pass one function into another through such a parameter; in other words, a function can be an argument of another function.
Let's look at an example:
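A sketch consistent with the description and output that follow (the values 10 and 6 are inferred from the output):

```
#include <iostream>

int add(int a, int b) { return a + b; }
int subtract(int a, int b) { return a - b; }

// higher-order function: the first parameter is a pointer to the function to apply
int operation(int (*op)(int, int), int a, int b)
{
    return op(a, b);
}

int main()
{
    int a{10}, b{6};
    std::cout << "result: " << operation(add, a, b) << std::endl;        // result: 16
    std::cout << "result: " << operation(subtract, a, b) << std::endl;   // result: 4
}
```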
In this case the first parameter of the operation function, `int (*op)(int, int)`, is a pointer to a function that returns an int value and takes two int parameters. The result of the function is a call of the function the pointer points to. Two functions match the pointer's definition, add and subtract, so their addresses can be passed to a call of the operation function:

```
operation(add, a, b);
```
The result of the program:
> result: 16 result: 4
A function that is passed to another function as an argument is called a callback. A function that takes another function as an argument is a higher-order function. Thus, in the example above the operation function is a higher-order function, while add and subtract are callback functions.
Let's look at another example: we'll define a function that can take a condition as a parameter and find all the elements of an array that satisfy that condition:
As its first parameter the action function takes a function that defines the condition the array elements must satisfy. This condition is represented by the pointer

```
bool (*condition)(int)
```

That is, it is a function that takes an integer and, depending on whether that integer matches the condition, returns a `bool` value (`true` if the number from the array satisfies the condition, `false` if it does not).
At the point where the action function is defined, the exact condition may not be known yet. In the current program the conditions are represented by two functions. The `isEven()` function returns true if a number is even and false if it is odd.
And the `isPositive()` function returns true if a number is positive and false if it is negative.
The second parameter of the action function is the array of int numbers to which the condition is applied, and the third parameter is the size of the array. If a number satisfies the condition, it is printed to the console.
When calling the `action()` function, the desired condition can be passed into it:
As a result the program prints the numbers from the nums array that satisfy the condition that was passed:
> Even numbers: -4 -2 0 2 4 Positive numbers: 1 2 3 4 5
# Function type
The syntax used to define a function pointer can look rather unreadable, for example:
Here operation is a pointer to a function that takes two int parameters and returns an int value.
Even such a definition can be hard to take in, and things get worse when the parameters or the return value have longer type names. Using the auto keyword simplifies the definition:
In that case, however, the pointer must be initialized with a specific function. Besides, sometimes the pointer type has to be stated explicitly, for example for a function parameter or a variable. Since a function pointer is a distinct type, an alias can be defined for it with the using keyword:
In this case an alias BinaryOp is defined that represents a pointer to a function of type `int (*)(int, int)`, that is, a function that takes two int numbers and returns an int value.
Using this alias, we can then define function pointers that match the pointer type:
Aliases can be used in the same way to define parameters:
In this case the third parameter of the do_operation function is a pointer of type `int (*)(int, int)`.
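A combined sketch of both uses of the alias, for a variable and for a parameter (the parameter order of do_operation follows the description above; the rest is assumed):

```
#include <iostream>

int add(int a, int b) { return a + b; }
int subtract(int a, int b) { return a - b; }

// alias for the function pointer type int (*)(int, int)
using BinaryOp = int (*)(int, int);

// the alias used for the third parameter
int do_operation(int a, int b, BinaryOp op)
{
    return op(a, b);
}

int main()
{
    BinaryOp operation{ add };                                 // alias used for a variable
    std::cout << operation(3, 4) << std::endl;                 // 7
    std::cout << do_operation(10, 6, subtract) << std::endl;   // 4
}
```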
A function can return a pointer to another function. This can be useful when there is a limited set of candidate functions to execute and one of them has to be chosen, with the set of options and the choice among them handled in an intermediate function.
Here a function message is defined that, depending on the number passed to it, returns one of two functions, goodmorning or goodevening. Let's look at the declaration of the message function:
First comes the type returned by the function that message itself returns, that is, void (the goodmorning and goodevening functions have type void). Then, in parentheses, comes the function name with its parameter list; the message function takes one parameter of type `unsigned int`:

```
(*message(unsigned hour))
```

After that, in a separate pair of parentheses, comes the parameter specification of the function that is returned from message. Since goodmorning and goodevening take no parameters, empty parentheses are given.
A function's name is effectively a pointer to it, so in the message function we can return the desired function by putting its name after the return statement.
To receive the function pointer we define a variable action:
This variable is a pointer to a function that takes no parameters and has void as its return type, so it matches the goodmorning and goodevening functions.
Then we call the message function and store the returned function pointer in the action variable:
Next, using the action pointer, we call the function we received:
Since the number 16 is passed to the message function, it returns a pointer to the goodevening function, so calling it prints the message "Good Evening!" to the console.
Let's look at a more complex example in which one arithmetic operation or another is performed on two numbers depending on the user's choice:
In this program we assume that the user chooses one of three functions to execute: add, subtract or multiply. The chosen function then performs its operation on the two numbers x and y.
The choice itself happens in the `select()` function. It receives a notional function number and returns a function pointer, essentially the chosen function.
All the selectable functions have a prototype of the form:
And the prototype of the select function must correspond to that prototype:
That is, first comes the type, the return type of the function pointer, here int. Then comes the definition of the select function itself: its name together with its parameter list enclosed in parentheses, `(*select(int))`. Then comes the parameter specification of the function being pointed to. Since the add, subtract and multiply functions take two int values, the parameter specification looks like `(int, int)`.
A switch-case construct is used inside select to pick the required function:
In the `main()` function we call select, pass a particular number to it and receive a function pointer as the result:
After that we can call the function through the pointer. Since the function behind the pointer must take two int values, we can pass them to the called function and get its result:
The example above is rather simplistic: whatever number is passed to the select function, it always returns a pointer to some function. We can, however, restrict the choice:
In this case all the functions available for selection are stored in the actions array, which is an array of function pointers. If the number 1, 2 or 3 is passed, we return the corresponding function from this array. If the function number passed is any other value, we return `nullptr`:
Since nullptr can be returned, the pointer has to be checked against nullptr before the received function is executed:
# Splitting a program into files
If a program contains a lot of code, it is better to spread separate parts of the code across separate files. For example, some functions can live in one source file and other functions in another.
For example, let's define a file sum.cpp with the following code:
This is a function that computes the sum of numbers.
Let's add one more file, sum.h, which will contain the declaration of the sum function:
We'll also define the main file, which we'll call app.cpp:
The main function calls the sum function to compute the sum of two numbers. But before a function is used it must be defined, or at least its header must be known. In earlier topics the function declaration was added directly to the single main file of the program. When functions are defined in separate files, however, it is better to place the function declarations in dedicated header files and then include those files. That is why the sum.h file, which contains the declaration (the header) of the function, is included at the top with the include directive.
The sum.h file is called a header file, since it contains the declaration, the header, of the function. In this case it is assumed that all the files are located in the same directory:
We could have skipped including (and even creating) sum.h and put the function declaration directly into app.cpp. But when the function changes, its declaration may have to change as well, and if the sum function is used in several source files, the declaration would have to be changed in each of them. Here it is enough to change the declaration in a single file, sum.h.
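A sketch of how the three files might look (the exact original listings are not shown, so the function body and values are assumptions); they would then be compiled together, for example with `g++ app.cpp sum.cpp -o app`:

```
// sum.h - the declaration (header) of the function
int sum(int a, int b);

// sum.cpp - the definition of the function
int sum(int a, int b)
{
    return a + b;
}

// app.cpp - the main file of the program
#include <iostream>
#include "sum.h"

int main()
{
    std::cout << sum(3, 4) << std::endl;   // 7
}
```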
The same works for compilation with Clang:
> clang++ app.cpp sum.cpp -o app.exe
The output is a single file app.
When working in Visual Studio, header files are usually placed in the "Headers" folder:
During compilation all the files are automatically compiled into one.
Besides functions, external files can contain various objects: variables and constants. The extern keyword is used to make external objects available in a code file.
For example, suppose we have a file objects.cpp that defines the following:
Here variables of the types std::string and unsigned int are defined. To use the `std::string` type, the `<string>` header must be included.
Suppose the main file of the program is called app.cpp and uses these variables:
To use variables defined in an external file, they are declared with the extern keyword.
In the main function, a loop prints the message string to the console `times` times.
It is assumed here that the files app.cpp and objects.cpp are located in the same folder. The same works for compilation with Clang:
> clang++ app.cpp objects.cpp -o app.exe
The output is a single file app, and the console output of the program is the following:
> Hello Hello Hello
Including constants has one peculiarity: the extern keyword must also be specified when the constant is defined. Let's change the objects.cpp file as follows:
We'll also change the app.cpp file:
The result is the same as in the previous case.
In the example above we had to declare two external objects in the main file of the program. But what if there are a great many such variables and constants? To avoid cluttering the main file, we can, just as with external functions, move the declarations of the external objects into a separate header file. So, in the same folder that holds `app.cpp` and `objects.cpp`, let's define a new file, objects.h:
Now let's include this header file in app.cpp:
# Dynamic memory and smart pointers
C++ lets you use different kinds of objects that differ in how they use memory. Global objects are created when the program starts and released when it finishes. Local automatic objects are created inside a block of code and destroyed when that block finishes executing. Local static objects are created before their first use and released when the program ends.
Global objects and static local objects are placed in static memory, while local automatic objects are placed on the stack. Objects in static memory and on the stack are created and destroyed by the compiler. Static memory is released when the program ends, and objects on the stack exist as long as the block in which they are defined is executing. When the block finishes, the stack memory reserved for the block's variables is freed. Note that the memory allocated for the stack has a limited, fixed size.
In addition to these kinds, C++ allows dynamic objects. Their lifetime does not depend on where they were created: dynamic objects exist until they are explicitly destroyed. Dynamic objects are placed in dynamic memory (the free store), an area of memory not occupied by the operating system or by other currently loaded programs.
Using dynamic objects has several advantages. First, memory is used more efficiently: exactly as much space is allocated as is needed, and it is freed immediately after use. Second, we can use a much larger amount of memory that would otherwise be unavailable. But this also imposes an obligation: we have to make sure that every dynamic object gets destroyed.
The new and delete operators are used to manage dynamic objects.
The new operator allocates space in dynamic memory for an object and returns a pointer to that object.
The delete operator takes a pointer to a dynamic object and removes it from memory.
Creating a dynamic object:
The new operator creates a new int object in dynamic memory and returns a pointer to it, so the `ptr` pointer holds the address of the allocated memory. The value of such an object is indeterminate.
The object can also be initialized when the memory is allocated:
Here the object in memory pointed to by ptr receives the default value, the number 0.
Curly braces can be used for initialization:
The object can also be initialized with a specific value, for example:
Here the value of the object in dynamic memory is 5.
Later, a different value can be assigned to the dynamic object through the pointer:
Dynamic objects exist until they are explicitly destroyed, so once you are done using a dynamic object you should free its memory with the delete operator:
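A minimal sketch of the full lifecycle of a dynamic object (the concrete values are illustrative):

```
#include <iostream>

int main()
{
    int* ptr{ new int(5) };           // allocate and initialize a dynamic object
    std::cout << *ptr << std::endl;   // 5

    *ptr = 10;                        // change the value through the pointer
    std::cout << *ptr << std::endl;   // 10

    delete ptr;                       // free the memory
    ptr = nullptr;                    // avoid leaving a dangling pointer
}
```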
This is especially important when a dynamic object is created in one part of the code and used in another. For example:
In the usePtr function we receive a pointer to a dynamic object from the createPtr function. After usePtr finishes, however, this object is not removed from memory automatically (as happens with local automatic objects), so it has to be destroyed explicitly with the delete operator.
If the `delete` operator is not called explicitly, the allocated dynamic memory is released only when the program ends. Note that even after the memory has been freed the pointer still holds the old address, although the memory behind it is now notionally free and ready to be reused for future dynamic objects. Such a pointer is called a dangling pointer. We can even try to access memory through it, but using the object through a pointer after it has been deleted, or applying the `delete` operator to the pointer a second time, can lead to unpredictable results:
To avoid such dangling pointers, it is recommended to set the pointer to null after freeing the memory:
If the program tries to access an object through a null pointer, it simply terminates, and applying the `delete` operator to a null pointer has no effect.
It is also quite common for several pointers to point to the same dynamic object. If the delete operator is applied to one of the pointers, the object's memory is released and we can no longer use that object through the second pointer. If the delete operator is then applied to the second pointer as well, the dynamic memory may become corrupted.
At the same time, the fact that pointers become invalid after delete has been applied to them does not mean they can never be used again. We can use them if we assign them the address of another object:
Here, after the object pointed to by p1 is deleted, this pointer is given the address of another object in dynamic memory, so p1 can be used again. The address stored in p2, however, is still invalid.
Besides individual dynamic objects, C++ lets us use dynamic arrays. Memory for a dynamic array is also allocated with the new operator, followed by square brackets with the number of objects the array will contain:
In this case the new operator again returns a pointer to an int object, the first element of the created array.
Here an array of four int elements is defined, but each of them has an indeterminate value. We can also initialize the array with values:
When initializing an array with specific values, keep in mind that if there are more values in the curly braces than the length of the array, the new operator fails and cannot create the array. If, on the contrary, fewer values are passed, the elements for which no values were supplied are initialized with the default value.
Note that the C++20 standard added deduction of the array size, so under C++20 the array length can be omitted:
After a dynamic array has been created, we can work with it through the pointer we received, reading and changing its elements:
Elements of a dynamic array can be accessed both with array syntax (`numbers[0]`) and with the dereference operation (`*numbers`).
Accordingly, such an array can be traversed in various ways:
Note that for the size of a dynamic array we can use an ordinary variable rather than a constant, unlike with standard arrays.
To destroy a dynamic array and free its memory, a special form of the delete operator is used:
So that the pointer does not keep the old address after the memory has been freed, it is again recommended to set it to null:
We can also create multidimensional dynamic arrays. Let's look at two-dimensional arrays as an example. What is a two-dimensional array, essentially? It is an array of arrays. Accordingly, to create a dynamic two-dimensional array we need to create an overall dynamic array of pointers and then create its elements, the nested dynamic arrays. In general it looks like this:
First we allocate memory for the array of pointers (conceptually the table):
Then, in a loop, we allocate memory for each individual array (conceptually the rows of the table):
Memory is released in the reverse order: first we free the memory of each nested array, then the memory of the whole array of pointers.
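A minimal sketch of the allocation and release pattern just described (the sizes are illustrative):

```
int main()
{
    unsigned rows{3}, columns{2};

    // allocate the array of pointers (conceptually the table)
    int** numbers{ new int*[rows] };
    // allocate each nested array (conceptually the rows of the table)
    for (unsigned i{}; i < rows; i++)
    {
        numbers[i] = new int[columns];
    }

    numbers[0][0] = 2;   // access an element via two indices

    // free in reverse order: first the nested arrays, then the array of pointers
    for (unsigned i{}; i < rows; i++)
    {
        delete[] numbers[i];
    }
    delete[] numbers;
    numbers = nullptr;
}
```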
An example with input and output of the data of a two-dimensional dynamic array:
Sample run of the program:
> Enter data for 1 row 1 column: 2 2 column: 3 Enter data for 2 row 1 column: 4 2 column: 5 Enter data for 3 row 1 column: 6 2 column: 7 2 3 4 5 6 7
The type `int**`, a pointer to a pointer, should be distinguished from the pointer-to-array situation. For example:
Here the declaration `int (*a)[2]` is a pointer to an array of two int elements. In effect we can work with this object as with a two-dimensional array (a table), except that the number of columns is fixed at 2, and the memory for such an array is allocated in a single step:
That is, in this case we are dealing with a table of n rows and 2 columns. Using two indices (row and column) we can access a particular element to set or get its value. The console output of this program:
> 1 2 3 4 5 6
# Smart pointers. unique_ptr<T>
Smart pointers are objects that imitate standard pointers: they also hold an address (usually the address of allocated dynamic memory) and can likewise be used to access objects at that address. Their key difference from standard pointers is that we do not have to worry about releasing memory with the `delete` or `delete[]` operators: all the allocated memory used through smart pointers is released automatically when it is no longer needed. This means we will not run into memory leaks, mismatches between allocations and deallocations, or dangling pointers. Smart pointers thus make memory management simpler and safer. The smart pointer types are defined in the memory module of the C++ standard library and are available in the std namespace.
The unique_ptr<T> pointer is a pointer to type T that is "unique" in the sense that there can be only one `unique_ptr` object holding a given address. That is, there cannot be two or more `unique_ptr<T>` objects pointing to the same memory address at the same time. If we try to define two simultaneously existing pointers that point to the same address, the compiler will not compile the code.
And when a unique_ptr is destroyed, the value it points to is destroyed as well. This kind of pointer is therefore useful when you need a pointer to an object that will have no other pointers to it and that should be destroyed together with the pointer.
By default a `unique_ptr<T>` is initialized with the value nullptr:
To allocate memory and create in it the object the pointer will point to, the std::make_unique<T> function is used. The value the pointer will point to is passed to it as a parameter:
In this case dynamic memory is allocated to store the number 125, and the pointer ptr points to that memory.
Note that before the C++14 standard a different form of creating the pointer was used:
To get an ordinary pointer out of a `std::unique_ptr`, the get() function is used:
Once a smart pointer has been defined, we can get and change the value it points to in the same way as with ordinary pointers:
Console output:
> Address: 0x2775dfa8030 Initial value: 125 New value: 254
Note that starting with the C++20 standard we can obtain the address from a smart pointer directly, without the get function:
`unique_ptr` can also work with arrays. For example, let's define a pointer that refers to an array:
If the memory the pointer refers to needs to be released, the reset() function can be used:
```
#include <iostream>
#include <memory>

int main()
{
    auto ptr { std::make_unique<int>(123) };
    // free the memory and destroy the object holding 123
    ptr.reset();
    if (!ptr)   // if ptr is nullptr
    {
        std::cout << "Memory is free" << std::endl;
    }
    else
    {
        std::cout << *ptr << std::endl;
    }
}
```
After the `reset()` function executes, the pointer holds the value `nullptr`. A new object can also be passed to the `reset()` function; memory will be allocated for it and the pointer will point to it.
In that case, after the call to `reset()` the original memory is released and a new block of memory is allocated for the number 254. Accordingly, both the address and the value the pointer points to change. Console output: > Old address: 0x2b13dd73270 Old value: 123 New address: 0x2b13dd68010 New value: 254
# shared_ptr<T>
The shared_ptr<T> type is used to create pointers to objects that may be pointed to by several pointers at once. shared_ptr<T> allows many `shared_ptr<T>` objects to hold the same address. These pointers use a reference counting mechanism: every time a `shared_ptr<T>` object holding a particular address is created, the counter of `shared_ptr<T>` objects holding that address is incremented. When a `shared_ptr<T>` object is destroyed or assigned a different address, the reference counter is decremented. When no `shared_ptr<T>` objects referring to a particular address remain, the reference counter drops to zero. When a pointer is defined without explicit initialization, it is initialized with `nullptr` by default.
To initialize it with a specific value, the std::make_shared<T> function can be used. After that, the dynamic object can be read and modified through the `shared_ptr` just as through an ordinary pointer:
A `shared_ptr` can also be initialized with another `shared_ptr`; in that case they store the same address:
Sample console output:
> ptr1 address: 0x15fbbf81020 ptr1 value: 22 ptr2 address: 0x15fbbf81020 ptr2 value: 22
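A sketch matching the description and the sample output above (the value 22 is taken from the output; the printed addresses will of course differ):

```
#include <iostream>
#include <memory>

int main()
{
    auto ptr1 { std::make_shared<int>(22) };   // allocate an int and point to it
    std::shared_ptr<int> ptr2 { ptr1 };        // both pointers now hold the same address

    std::cout << "ptr1 address: " << ptr1.get() << "\tptr1 value: " << *ptr1 << std::endl;
    std::cout << "ptr2 address: " << ptr2.get() << "\tptr2 value: " << *ptr2 << std::endl;
}
```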
Starting with the C++20 standard, a `shared_ptr` can also point to an array:
Besides built-in types such as int, double and so on, we can define our own types, or classes. A class is a compound type that can make use of other types.
A class is meant to describe a certain kind of object. In essence, a class is a blueprint of an object, while an object is a concrete embodiment of the class, its realization. Another analogy: we all have some notion of a person, who has a name, an age and other characteristics. That notion is a kind of template, and this template can be called a class. Concrete embodiments of the template differ: some people have one name, others have another. A really existing person would then be an object, or instance, of this class.
The keyword class followed by the class name is used to define a class:
After the class name, the members of the class are placed inside curly braces. Note that a semicolon follows the closing curly brace.
For example, let's define the simplest possible class:
In this case the class is named Person; class names usually start with a capital letter. Suppose this class represents a person. The class is empty, it contains no members, yet it already represents a new type. Once the class is defined, we can define variables or constants of this type:
This class does not do much, though. A class can define variables and constants to store an object's state, and functions to define the object's behavior. So let's add some state and behavior to the Person class:
Now the Person class has two variables, name and age, which store a person's name and age respectively. Class variables are also called class fields. The class also defines a print function that writes the values of the class variables to the console. Note as well the public: access specifier, which states that the variables and functions that follow it are accessible from outside, from external code.
Then, in the main function, a single Person object is created. Through the dot operator we can access its variables and functions.
For example, we can set the values of the class fields:
And we can also call the object's functions:
In the same way we can read the values of an object's variables:
Class fields, like ordinary variables, can also be initialized with some initial values:
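A sketch of the Person class with public fields, default field values and a print function, roughly as described above (the concrete values are assumptions):

```
#include <iostream>
#include <string>

class Person
{
public:
    std::string name{"Undefined"};   // fields can be given initial values
    unsigned age{18};

    void print()
    {
        std::cout << "Name: " << name << "\tAge: " << age << std::endl;
    }
};

int main()
{
    Person person;                           // create an object of the class
    person.name = "Tom";                     // set the fields through the dot operator
    person.age = 38;
    person.print();                          // Name: Tom    Age: 38
    std::cout << person.name << std::endl;   // read a field
}
```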
Pointers can be defined to class objects just as to objects of other types. Through such a pointer we can then access the class members, its variables and methods. But while the dot is used when accessing members through an ordinary variable, the arrow (->) is used to access class members through a pointer:
Changes made through the ptr pointer in this case lead to changes in the person object.
# Constructors and object initialization
Constructors are special functions that have the same name as the class, return no value, and allow a class object to be initialized at the time it is created, thereby guaranteeing that the class fields hold particular values. Every time a new object of a class is created, a constructor of the class is called.
In the previous topic the following class was developed:
Here, when the Person object named person is created,
the default constructor is called. If we do not define a constructor in the class explicitly, as in the case above, the compiler automatically generates a default constructor. Such a constructor takes no parameters and essentially does nothing.
Now let's define our own constructor. In the example above we set the values of the Person class fields by hand. Suppose, however, that we want these fields to already hold certain default values when the object is created. For this purpose we define a constructor:
Now the Person class defines a constructor:
Essentially a constructor is a function that can take parameters and that must be named after the class. In this case the constructor takes two parameters, passes their values to the name and age fields, and then prints a message that the object has been created.
If we define our own constructor, the compiler no longer creates the default constructor, and when creating an object we must call the constructor we defined.
A constructor call receives values for its parameters and yields a class object:
After this call the person object's name field holds the value "Tom" and its age field holds the value 38. Later we can still access these fields and reassign their values.
Alternatively, the object can be created using brace initialization:
The result of a constructor call can also be assigned to an object:
This form is essentially equivalent to the previous one.
The console output of the program defined above:
> Person has been created Name: Tom Age: 38
Constructors make it easy to create several objects that must hold different values:
Here we create three different Person objects (notionally three different people), so in this case the console output is the following:
> Person has been created Person has been created Person has been created Name: Tom Age: 38 Name: Bob Age: 42 Name: Sam Age: 25
In the same way we can define several constructors and then use them:
Three constructors are defined in the Person class, and in the main function all of these constructors are used to create objects:
> Name: Tom Age: 38 Name: Bob Age: 18 Name: Undefined Age: 18
Although the example above works perfectly well, we can notice that all three constructors do essentially the same thing: they set the values of the name and age variables. In C++ we can shorten their definitions by calling one constructor from another and thereby reduce the amount of code:
The expression

```
Person(string p_name): Person(p_name, 18)
```

represents a call of another constructor, which is given the value of the p_name parameter and the number 18. That is, the second constructor delegates the initialization of the variables to the first constructor; at the same time, the second constructor can additionally perform some actions of its own.
Thus, the following object creation
uses the third constructor, which in turn calls the second constructor, and that one calls the first constructor.
This technique is also called constructor delegation, since we delegate the initialization to another constructor.
Like other functions, constructors can have default parameter values:
In the constructor body we can assign values to the class variables. Constants, however, require special treatment. For example, let's first define the following class:
This class will not compile because the constant name is not initialized. Although its value is set in the constructor, constants must already be initialized by the time the statements in the constructor body start executing. For this we have to use initializer lists:
Initializer lists are lists of initializers for each of the variables and constants, placed after a colon that follows the constructor's parameter list:
Here the expression `name{p_name}` initializes the constant with the value of the p_name parameter. The value is placed in curly braces here, but round parentheses can be used as well:
Initializer lists can be used in the same way to assign values to ordinary variables:
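A sketch that uses an initializer list both for the constant name and for the variable age (the surrounding class is an assumption):

```
#include <iostream>
#include <string>

class Person
{
    const std::string name;   // a constant must be initialized before the constructor body runs
    unsigned age;
public:
    // the initializer list follows the colon, in declaration order: name first, then age
    Person(std::string p_name, unsigned p_age) : name{p_name}, age{p_age}
    {
    }
    void print() { std::cout << "Name: " << name << "\tAge: " << age << std::endl; }
};

int main()
{
    Person tom{"Tom", 38};
    tom.print();   // Name: Tom   Age: 38
}
```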
When using initializer lists it is important to keep in mind that values must be passed in the order in which the constants and variables are declared in the class. In this case the constant name is declared in the class first and the variable age second, so values are passed to them in the same order. Therefore, when adding new fields or changing the order of existing ones, you have to make sure that everything is still initialized in the proper order.
# Access control. Encapsulation
A class can define all kinds of state and functions, but it is not always desirable for some class members to be directly accessible from outside. Access specifiers are used to control access to the different members of a class.
The public specifier makes class members, fields and functions, open and accessible from any part of the program. For example, take the following Person class:
In this case the fields name and age and the print function are open and freely accessible, so we can access them from external code. This has its drawbacks, however: we can access the class fields and assign them any values, even ones that are not quite valid in terms of the program's logic:
Among other things, invalid values could be assigned. For example, the age field could be given an unrealistically large age, or we might not want an empty string to be assignable as the name. Naturally this is not a good situation.
Using another specifier, private, we can hide the implementation of class members, that is, make them closed, encapsulated inside the class. Let's rewrite the Person class using the private specifier:
All the members that are defined after the private specifier and come before the public specifier are private. Now we can no longer access the name and age variables outside the Person class; we can access them only inside it. The print function and the constructor are still public, so we can use them anywhere in the program.
Note that in this case we can still pass incorrect values through the constructor. One could validate the input there and apply various strategies, for example not create the object or give it default data, but for simplicity I will omit such checks.
If no access specifier is given for some members, the `private` specifier is applied by default. So the previous Person class is equivalent to the following:
Although the example above avoids setting invalid values, sometimes access to such fields is still needed. For example, a person has become a year older and the age has to change, or we want to obtain the name on its own. In that case we can define special functions through which we control access to the class's state:
To make the values of the name and age variables readable from outside, the additional functions getAge and getName are defined. The name variable can be set directly only through the constructor, while the age variable can be set through the constructor or through the setAge function. The setAge function assigns a value to the age variable only if it satisfies certain conditions.
The class's state is thus hidden from the outside; it can be accessed only through the additionally defined functions, which make up the class's interface.
# Declaring and defining class functions
In C++ the declaration and the definition of functions can be separated, and this also applies to functions created inside classes. The following form is used for this:

```
class_name::function_name(parameters) { function_body }
```
For example, take the following Person class:
Let's split the class up by moving the implementation of its methods outside:
Now the functions of the Person class (in this case the constructor and the print function) have only declarations inside the class itself; the implementations of the functions are placed outside the Person class.
Such separation makes it easier to survey and understand the class interface, especially when the functions contain a lot of code.
With this organization of the code a constructor can also be delegated:
# Copy constructor
By default, when compiling a class the compiler generates a special constructor, the copy constructor, which allows an object to be created from another object (it essentially copies the object). The default copy constructor copies the values of the object's fields into the new object. Let's look at the simplest example:
In this case the line:
represents a call of the copy constructor. Although nowhere in the code do we define a constructor that would take another Person object, such a constructor is generated automatically by the compiler. As a result the tomas object has all the same values as the tom object.
The copy constructor is a wonderful thing when one object has to be created from another, but it has its downsides. For example, if a field is a pointer, the address is copied, so the fields of both objects end up pointing to the same address in memory. Consequently, if we change the value for one object, it also changes for the other object. In such a case we can define our own copy constructor.
A copy constructor must take an object of the same class as its parameter. Moreover, the parameter is better taken by reference, because when passing by value the compiler would create a copy of the object, and creating that copy would call the copy constructor again, leading to infinite recursion. So, let's define our own copy constructor:
Here the copy constructor takes a constant reference to a Person object and assigns the values of its fields to the corresponding fields of the current object. For the sake of the example, so that the data differs slightly, I added one to the age field. As a result the custom constructor is used instead of the default copy constructor.
The copy constructor may not always be needed, and it can be removed using delete:
Class objects can also be constants:
When working with constant objects we can read the data of their fields but cannot change them. So if in the example above we uncomment the line
we will get an error at compile time, because the tom object is a constant.
The constness of an object imposes certain restrictions on calling its functions. For example, let's add a `print()` function to the Person class above that prints the object's data:
Strangely enough, this example will not compile because of the print function, even though nothing in it modifies the object's fields. The reason is that in any member function the class fields could in principle be modified, and the compiler cannot determine whether a function changes values or not, so it equally refuses to compile both the functions that change the object's state and those that do not.
Only constant functions can be called on a constant object. To define such a function, the const keyword is placed after the parameter list:
```
#include <iostream>

class Person
{
private:
    std::string name;
    unsigned age;
public:
    Person(std::string p_name, unsigned p_age)
    {
        name = p_name;
        age = p_age;
    }
    // constant function
    void print() const
    {
        std::cout << "Name: " << name << "\tAge: " << age << std::endl;
    }
};

int main()
{
    const Person tom{"Tom", 38};
    tom.print();    // Name: Tom   Age: 38
    Person bob{"Bob", 42};
    bob.print();    // Name: Bob   Age: 42
}
```
In this case the print function is defined as constant, so it can be called both for a constant and for a non-constant object. In any case, a constant function must NOT modify the class fields.
Another restriction you may run into concerns calling other functions of the same class from a constant function: a constant function may call only constant functions of the class:
```
#include <iostream>

class Person
{
private:
    std::string name;
    unsigned age;
public:
    Person(std::string p_name, unsigned p_age)
    {
        name = p_name;
        age = p_age;
    }
    std::string getName() const { return name; }
    unsigned getAge() const { return age; }
    void print() const
    {
        // only constant functions may be called from a constant function
        std::cout << "Name: " << getName() << "\tAge: " << getAge() << std::endl;
    }
};

int main()
{
    const Person tom{"Tom", 38};
    tom.print();    // Name: Tom   Age: 38
    Person bob{"Bob", 42};
    bob.print();    // Name: Bob   Age: 42
}
```
Here the additional functions getName and getAge are defined, which return the name and the age respectively. Both of these functions are constant, so they can be called in the constant print function.
One more restriction related to constant functions is that if we want to return a pointer or a reference from a constant function, the pointer must point to a constant and the reference must be constant. What does this look like in practice? Let's try returning a reference and a pointer:
```
#include <iostream>

class Person
{
private:
    std::string name;
    unsigned age;
public:
    Person(std::string p_name, unsigned p_age)
    {
        name = p_name;
        age = p_age;
    }
    // return a constant reference
    const std::string& getName() const { return name; }
    // return a pointer to a constant
    const unsigned* getAge() const { return &age; }
    void print() const
    {
        std::cout << "Name: " << name << "\tAge: " << age << std::endl;
    }
};

int main()
{
    const Person tom{"Tom", 38};
    std::string tom_name = tom.getName();
    const unsigned* tom_age = tom.getAge();
    std::cout << "Name: " << tom_name << "\tAge: " << *tom_age << std::endl;
}
```
Here the constant function `getName` returns a constant reference, while the `getAge` function returns a pointer to a constant.
Sometimes it is necessary that some data of a constant object can nevertheless be changed. In that case the keyword mutable can be used for the variable that needs to change, and the value of such a variable can be modified even if the object is constant.
```
#include <iostream>

class Person
{
public:
    std::string name;
    mutable unsigned age;   // the age variable can be changed
    Person(std::string p_name, unsigned p_age)
    {
        name = p_name;
        age = p_age;
    }
    void print() const
    {
        std::cout << "Name: " << name << "\tAge: " << age << std::endl;
    }
};

int main()
{
    const Person tom{"Tom", 38};
    tom.age = 22;
    tom.print();    // Name: Tom   Age: 22
}
```
# The this keyword
The keyword this is a pointer to the current object of the class. Through this we can therefore access any of the class's members from inside the class.
In this case a Point class is defined that represents a point on a plane, and the class defines the variables x and y to store the point's coordinates.
The this pointer is used to access the variables. Note that after this an arrow (->) is used rather than a dot.
In most cases the this keyword is hardly needed to access class members. But it can be necessary when function parameters, or variables defined inside a function, have the same names as class variables. For example, the this pointer is used in a constructor precisely to tell the parameters and the class variables apart.
Another practical use of this is that it lets us return the current object of the class:
Here the move method uses the this pointer to return a reference to the object of the current class, performing a notional move of the point. Thanks to this we can chain calls of the move method on the same object:
It is also important to note here that what is returned is not simply a Point object but a reference to that object. In this case the line defined above is effectively equivalent to the following code:
But if the move method returned a plain object rather than a reference:
then the call

```
p1.move(10, 5).move(10)
```

would effectively be equivalent to the following code:
where the second call of the move method would be made on a temporary copy and would not affect the p1 variable in any way.
Alternatively, the this pointer itself can be returned:
In this case, since the `move()` function returns the this pointer, we can call the `move` function on the function's result using the `->` operator:
Here the Integer class represents a notional integer stored in the value variable. It defines the functions add() (addition), subtract() (subtraction) and multiply() (multiplication), which take another Integer object and perform the corresponding operation between the current object and the argument. Each of these functions returns the current object, which is what allows these functions to be called in a chain:
# Friend functions and classes
Friend functions are functions that are not members of a class yet have access to its private members, the variables and functions declared with the private specifier.
The friend keyword is used to declare friend functions. For example, let's define the following program:
Here an Auto class is defined that represents a car. The class has the private variables name (the car's name) and price (its price). The class also declares two friend functions: drive (a function for driving the car) and setPrice (a function for setting the price). Both functions take a reference to an Auto object as a parameter.
When we declare friend functions, we are in effect telling the compiler that these are friends of the class and have access to all of its members, including the private ones.
It does not matter whether friend functions are declared under the public or the private specifier; for them this makes no difference.
These functions are defined outside the class. And since the functions are friends, inside them we can access all of the private variables of the Auto object through the reference that was passed in.
The console output of the program:
> Tesla : 5000 Tesla is driven Tesla : 8000
Friend functions can be defined in another class. For example, let's define a Person class that uses an Auto object:
The Person class, which represents a person, is defined first. But since the Person class uses the Auto class, a declaration of the Auto class precedes the Person class.
Two functions of the Person class take a reference to an Auto object:
Figuratively speaking, with these functions a person drives the car and sets a price for it.
The Auto class declares friend functions with the same signature:
Since these functions will be defined in the Person class, their names are prefixed with "Person::".
And since these functions are meant to work with an Auto object, all the members of the Auto object must be known by the time the functions are defined, which is why the function definitions are placed not inside the Person class itself but after the Auto class. And because these functions are declared as friends in the Auto class, we can access the private members of the Auto class inside them.
The console output of the program:
> Tom drives Tesla Tesla : 8000
В случае выше класс Person использует только две функции из класса Auto. Но допустим впоследствии возникла необходимость добавить в класс Auto еще ряд дружественных функций, которые будут определены в классе Person. Либо мы можем предполагать, что класс Person будет активно использовать объекты Auto. И в этом случае целесообразно определять не отдельные дружественные функции, а определить дружественным весь класс Person:
Единственное, что в данном случае изменилось по сравнению с предыдущим примером - это то, что в классе Auto определение дружественных функций было заменено определением дружественного класса:
То есть тем самым мы опять же говорим, что класс Person - это друг класса Auto, поэтому объекты Person могут обращаться к приватным переменным класса Auto. После этого в классе Person можно обращаться к закрытым членам класса Auto из любых функций.
# Static class members

Besides the variables and methods that belong to a specific object, C++ lets you define variables and methods that belong to the class itself, in other words static class members. Static variables and methods relate to the class as a whole. The static keyword is used to define them.

Static variables are usually used to store values that are specific to the class, shared by all objects of the class. In other words, static fields store the state of the class as a whole. A static variable is defined only once and exists even if no objects of the class have been created.

A typical example of a static variable is a counter of some kind. Suppose we need to keep track of the number of objects that have been created. That number relates to the class but does not depend on any particular object. Let's look at an example:
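The example code is not shown here; a minimal sketch of how such a counter might be implemented, assuming the names count and print_count used in the following description (the exact output message is an assumption based on the console output quoted later):

```
#include <iostream>
#include <string>

class Person
{
public:
    Person(std::string name) : name{name}
    {
        ++count;    // one more object has been created
    }
    void print_count() const
    {
        std::cout << "Created " << count << " objects" << std::endl;
    }
private:
    // count belongs to the class as a whole; inline allows initializing it right here
    static inline unsigned count{0};
    std::string name;
};

int main()
{
    Person tom{"Tom"};
    Person bob{"Bob"};
    Person sam{"Sam"};
    tom.print_count();   // Created 3 objects
    bob.print_count();   // Created 3 objects
    sam.print_count();   // Created 3 objects
}
```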
Here the Person class defines a static variable `count` with the static keyword:

Note that the inline keyword follows `static`. This keyword is, in principle, optional for static variables and is needed in this particular case so that the count variable can be initialized in place, here with zero.

Each time a new object is created, the constructor increments the counter by one:

In the member functions of Person we can access this static variable. For example, the `print_count` function prints its value to the console:

For testing, the main function creates three Person objects and then calls the print_count function on each of them:

But since the count variable is static, relates to the class as a whole and does not depend on a specific object, the number 3 will be printed in all three cases.
Static functions also belong to the class as a whole and do not depend on any particular object of the class. Static member functions are usually used to work with static variables. For example, in the example above the `print_count()` function prints the value of the static variable count and does not depend on a specific object in any way; it neither uses nor modifies object variables or functions. So this function can, and even should, be made static:

To define a static function, the static keyword is likewise placed in front of it:

Such functions can also be called through the name of an object:

As demonstrated above, the name of any object can be used to access static members. However, C++ also supports another syntax:

The class name is followed by the `::` operator and the name of the static class member. For example:

Here a public static variable `maxAge` has been added that represents the maximum age. Since this value does not depend on a particular object and applies to the Person class of objects as a whole (it holds for all people), we define it as static. The constructor uses this variable to validate the age that is passed in: if it exceeds the maximum, the age gets the default value of 1.

Then in the main function we can access the static function print_count and the variable maxAge through the class name:

Console output of the program:

> Created 3 objects
> Max age: 120
> Max age: 110

Static constants can be defined as well. In the example above it is unlikely that the value of maxAge will ever change, so we can define it as a constant. Its value can then no longer be changed, but in every other respect it works just like a static variable:
# Destructor

A destructor releases the resources used by an object and destroys the object's non-static member variables. The destructor is called automatically when an object is destroyed. An object is destroyed in the following cases:

when execution leaves the scope in which the objects are defined

when a container (for example, an array) that holds the objects is destroyed

when an object is destroyed whose member variables represent other objects

dynamically created objects are destroyed when the delete operator is applied to a pointer to the object

Essentially a destructor is a function that is named after the class (just like the constructor) and is preceded by a tilde (~):

A destructor has no return value and takes no parameters. Each class can have only one destructor.

Usually a destructor is not needed that often and is mainly used to release associated resources. For example, if an object of the class uses some file, the destructor can contain the code that closes the file. Or if the class allocates memory with the `new` operator, the destructor can release that memory.

First let's look at the simplest destructor definition:
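The listing is not included here; a minimal sketch of such a class, with the messages chosen to match the console output shown below:

```
#include <iostream>
#include <string>

class Person
{
public:
    Person(std::string name) : name{name}
    {
        std::cout << "Person " << name << " created" << std::endl;
    }
    ~Person()    // destructor: runs automatically when the object is destroyed
    {
        std::cout << "Person " << name << " deleted" << std::endl;
    }
private:
    std::string name;
};

int main()
{
    {   // nested block: tom and bob live only until the end of this block
        Person tom{"Tom"};
        Person bob{"Bob"};
    }   // here the destructors for bob and tom are called
    Person sam{"Sam"};
}       // sam is destroyed when main finishes
```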
The Person class defines a destructor that simply reports the destruction of the object:

The main function creates three Person objects, two of them inside a nested code block:

This code block sets the boundaries of the scope in which those objects exist. When the block finishes executing, both variables are destroyed and the destructor is called for both objects.

After that the third object, sam, is created

Since it is defined in the scope of the main function, it is destroyed when that function finishes. As a result we get the following console output:

> Person Tom created
> Person Bob created
> Person Bob deleted
> Person Tom deleted
> Person Sam created
> Person Sam deleted

A slightly more practical example. Suppose we keep a counter of Person objects in a static variable. If the constructor increments the counter each time a new object is created, then the destructor can decrement it:

Console output of the program:

> Person Tom created. Count: 1
> Person Bob created. Count: 2
> Person Bob deleted. Count: 1
> Person Tom deleted. Count: 0
> Person Sam created. Count: 1
> Person Sam deleted. Count: 0

Note that running the destructor does not by itself remove the object: the actual removal of the object takes place in a separate destruction phase that follows the execution of the destructor.

It is also worth noting that for any class that does not define its own destructor, the compiler generates one. For example, if the Person class had no explicitly defined destructor, the following destructor would be generated for it automatically:
# Structures

Besides classes, you can use structures to create your own data types; C++ inherited them from the C language. A structure in C++ is a user-defined type that represents some specific entity, just like a class. Structures in C++ are often also referred to as classes, and in reality the differences between them are not that big. A structure can also define variables, functions, constructors and destructors. Usually, however, structures serve to store publicly accessible data in the form of public variables, while classes are used for the remaining scenarios.

The struct keyword is used to define a structure, and the general form of the definition looks like this:

The structure name is an arbitrary identifier that follows the same rules as variable names.

After the structure name, the members of the structure - variables and functions - are placed in curly braces.

For example, let's define the simplest structure:
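The definition itself is missing here; a minimal sketch of such a structure and of how its members can be accessed (the field names follow the description below, the values are illustrative):

```
#include <iostream>
#include <string>

// the simplest structure: only public data members
struct person
{
    std::string name;
    unsigned age;
};

int main()
{
    person tom;            // an object (variable) of the structure type
    tom.name = "Tom";      // the dot operator accesses the members
    tom.age = 38;
    std::cout << tom.name << ": " << tom.age << std::endl;   // Tom: 38
}
```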
Here a structure `person` is defined that has two variables: `name` (of type string) and `age` (of type unsigned).

Once the structure is defined, we can use it. To begin with, we can define an object of the structure - essentially an ordinary variable that represents the type created above. After creating a structure variable we can access its elements - read their values or, conversely, assign new values to them. The dot operator is used to access the elements of a structure:

In essence a structure is similar to a class; structures too can be used to define entities for use in a program. At the same time, all members of a structure for which no access specifier (public, private) is given are public by default, whereas in a class all members without an explicit access specifier are private.

In addition, we can initialize a structure by assigning values to its variables with the initialization syntax:

Initializing a structure is similar to initializing an array: the values for the structure's elements are given in curly braces in order. Since in this person structure the first member defined is of type unsigned, that is a number, the number comes first in the curly braces, and so on for every element of the structure in order.

Moreover, any class can be expressed as a structure and vice versa. Take, for example, the following class:

This class defines the entity of a person and contains a number of private and public variables and functions. Instead of a class we could have used a structure to define the same entity:

And in terms of the final result of the program we would not see any difference.

When should structures be used? As a rule, structures are used to describe data that consists only of a set of public attributes - public variables, like the person structure defined at the beginning of this article. Such entities are sometimes also called aggregate classes.
# Enumerations

Enumerations (enum) are another way to define your own types. Their distinctive feature is that they contain a set of numeric constants. An enumeration has the following form:

The enum class keywords are followed by the name of the enumeration, and then the constants of the enumeration are listed in curly braces, separated by commas.

Let's define the simplest enumeration:
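The enumeration definition is not shown here; a minimal sketch matching the description that follows:

```
#include <iostream>

// each constant is backed by an integer: Monday = 0, Tuesday = 1, ... Sunday = 6
enum class Day
{
    Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday
};

int main()
{
    Day today{Day::Thursday};
    // cast to an integer type to print the underlying value
    std::cout << "Today: " << static_cast<int>(today) << std::endl;   // Today: 3
}
```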
In this case the enumeration is called Day and represents the days of the week. All the days of the week are enclosed in the curly braces. In fact they are numeric constants.

Each constant is mapped to some numeric value. By default the first constant gets the value 0, and each subsequent one is greater by one. So in the example above `Monday` has the value 0, `Tuesday` 1, and so on; the last constant, `Sunday`, is therefore 6.

Once the enumeration has been created, we can define a variable of its type and assign one of the constants to it:

In this case the variable `today` is defined and equals `Day::Thursday`, that is, the fourth constant of the Day enumeration.

To print the value of the variable to the console, you can cast it to an integer type:

So in this case `Today: 3` is printed, because the constant `Thursday` has the value 3.
We can also control which values are assigned in an enumeration. For instance, we can set a starting value for one constant; the following constants are then incremented by one:

In this case `Tuesday` equals 2 and `Sunday` equals 7.

Each constant can be given its own value, or this approach can be combined with automatic assignment:

In this case `Saturday` equals 7 and `Sunday` equals 1.

You can even give two constants the same value:

Here the constants `Monday` and `Mon` have the same value.

Constants can also be assigned the value of constants that already exist:

Keep in mind that enumeration constants must be integral constants. We can, however, choose a different integral type, for example `char`:

If we want to print the values of these constants to the console as characters, we have to cast them to the `char` type:

Enumerations are convenient when you need to store a limited set of states and perform certain actions depending on the current state. For example:
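The listing is missing here; a minimal sketch of what such a program might look like; the constant names and the calculate signature are illustrative assumptions based on the description below:

```
#include <iostream>

enum class Operation
{
    add, subtract, multiply
};

// performs the action selected by the third parameter on the first two parameters
double calculate(double a, double b, Operation op)
{
    switch (op)
    {
        case Operation::add:      return a + b;
        case Operation::subtract: return a - b;
        case Operation::multiply: return a * b;
    }
    return 0;
}

int main()
{
    std::cout << calculate(10, 5, Operation::add) << std::endl;        // 15
    std::cout << calculate(10, 5, Operation::subtract) << std::endl;   // 5
    std::cout << calculate(10, 5, Operation::multiply) << std::endl;   // 50
}
```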
In this case all the arithmetic operations are stored in the Operation enumeration. In the calculate function, depending on the value of the third parameter - the operation to apply - certain actions are performed on the first two parameters.

When referring to enumeration constants, you normally have to specify the enumeration name, for example `Day::Monday`. But starting with the C++20 standard we can bring the enumeration constants into the current scope with the using operator.

After that only the constant names need to be used:

We can also bring in just a single constant:

In this case only the constant `Day::Monday` is brought in; the enumeration name is still required to refer to the other constants. Since this feature was added only in the C++20 standard, the corresponding flag, `-std=c++20`, has to be added when compiling with g++ or clang++. It is also worth noting that C++ used to have another form of enumerations, which came from the C language and is defined without the `class` keyword:

Such enumerations are also called `unscoped` (that is, not confined to any scope). Naturally, they can still be found in older programs. However, because they can potentially lead to more errors, this form is used less and less nowadays.
Inheritance is one of the key aspects of object-oriented programming; it lets the functionality of one class (the base class) be inherited by another class, the derived class.

Why is inheritance needed? Consider a small situation: suppose we have classes that represent a person and a company employee:
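The two class definitions are not shown here; a minimal sketch of what they might look like before inheritance is applied (all members public, as the following paragraph notes):

```
#include <iostream>
#include <string>

class Person
{
public:
    std::string name;
    unsigned age;
    void print() const
    {
        std::cout << "Name: " << name << "\tAge: " << age << std::endl;
    }
};

// Employee duplicates everything Person already has and adds a company
class Employee
{
public:
    std::string name;
    unsigned age;
    std::string company;
    void print() const
    {
        std::cout << "Name: " << name << "\tAge: " << age << std::endl;
    }
};
```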
In this case the Employee class in effect duplicates the functionality of the Person class: the name and age members and the print function. For demonstration purposes all the variables here are defined as public. So on the one hand we are faced with duplicated functionality in two classes. On the other hand we also have an "is-a" relationship: we can say that a company employee IS a person, since an employee has essentially the same attributes as a person (a name and an age) and adds some of his own (a company). So in this case it is better to use the inheritance mechanism. Let's derive the Employee class from the Person class:

To establish the inheritance relationship, a colon is placed after the class name, followed by an access specifier and the name of the class whose functionality we want to inherit. In this relationship the Person class is called the base class (also the superclass or parent class), and Employee is the derived class (also the subclass or child class).

The access specifier determines which members of the base class the derived class will have access to. Here the public specifier is used, which lets the derived class use all the public members of the base class. If no access specifier is given, the inheritance is private by default, so the name and age variables and the print function of Person will not be accessible through Employee objects from outside the class.

Once the inheritance is established, we can remove from the Employee class the variables that are already defined in the Person class. Let's use both classes:

Thus, through an Employee variable we can access all the public members of the Person class.
Now let's make all the variables private and add constructors to initialize them. Keep in mind that constructors are not inherited, and if the base class contains only constructors with parameters, the derived class must call one of the base class constructors in its own constructor:
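The listing is not included here; a minimal sketch of the two classes, using the parameter names n and a mentioned in the next paragraph and producing the console output quoted below:

```
#include <iostream>
#include <string>

class Person
{
public:
    Person(std::string n, unsigned a)
    {
        name = n;
        age = a;
    }
    void print() const
    {
        std::cout << "Name: " << name << "\tAge: " << age << std::endl;
    }
private:
    std::string name;
    unsigned age;
};

class Employee : public Person
{
public:
    // the base class constructor is called after the colon
    Employee(std::string n, unsigned a, std::string c) : Person{n, a}
    {
        company = c;
    }
private:
    std::string company;
};

int main()
{
    Person tom{"Tom", 38};
    Employee bob{"Bob", 42, "Microsoft"};
    tom.print();   // Name: Tom   Age: 38
    bob.print();   // Name: Bob   Age: 42
}
```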
After the parameter list of the derived class constructor, a colon is followed by a call to the base class constructor, to which the values of the parameters n and a are passed.

Not calling the base class constructor here would be an error.

Console output of the program:

> Name: Tom Age: 38
> Name: Bob Age: 42

Thus, in the line

the constructor of the base class Person is called first and receives the values "Bob" and 42, which sets the name and the age. Then the Employee constructor itself runs and sets the company.

We could also have defined the Employee constructor as follows, using a member initializer list:

In the examples above the Employee constructor differs from the Person constructor by a single parameter, company; all the other Employee parameters are passed on to Person. However, if the two classes matched exactly in their parameters, we could avoid defining a separate constructor for Employee and instead pull in the constructor of the base class:

Here the Employee class pulls in the base class constructor with the using keyword:

Thus the Employee class effectively has the same constructor as Person, with the same two parameters. We can also call this constructor to create an Employee object:
When defining a copy constructor in a derived class, you should call the copy constructor of the base class in it. For example, let's add copy constructors to the Person and Employee classes:

In the copy constructor of the derived Employee class we call the copy constructor of the base Person class:

The employee object is passed to the Person copy constructor, where the name and age variables are set. The Employee constructor itself only sets the company variable.

Destroying an object of a derived class can involve both the destructor of the derived class itself and the destructor of the base class. For example, let's define destructors in both classes:

In both classes the destructor simply prints a message. The main function creates a single Employee object, but when the program finishes the destructors of both the derived and the base class will be called:

> Person created
> Employee created
> Name: Tom Age: 38
> Employee deleted
> Person deleted

The console output shows that when an Employee object is created, the constructor of the base class Person is called first, followed by the Employee constructor itself. When the Employee object is destroyed, the process goes in the reverse order: first the destructor of the derived class is called, then the destructor of the base class. Accordingly, if the base class destructor releases memory, that will in any case be done when an object of the derived class is destroyed.

Sometimes inheriting from a class is undesirable, and with the final specifier we can forbid inheritance:

After that we can no longer derive other classes from the User class. For example, if we try to write something like the code below, we will get an error:
If variables or functions in a base class are private, that is declared with the private specifier, the derived class inherits them but cannot access them. For instance, let's try to define in the derived class a function that prints the values of the base class's private variables:

The base class Person defines the private variables name and age. In the printEmployee function of the derived class Employee we try to access them in order to print their values to the console. In this case we run into an error, because name and age are private variables of the base class Person, and the derived class Employee has no access to them.

Sometimes, however, there is a need for base class variables and functions that are accessible in derived classes but not accessible from the outside - essentially the same as private, but with access allowed for derived classes. The protected specifier is used precisely to define this level of access for class members.

For example, let's define the name variable with the protected specifier:
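The code is not shown here; a minimal sketch of the idea (the Employee class, its company field and the printEmployee function are illustrative assumptions consistent with the surrounding description):

```
#include <iostream>
#include <string>

class Person
{
public:
    Person(std::string name, unsigned age) : name{name}, age{age} {}
protected:
    std::string name;    // accessible in derived classes, but not from outside
private:
    unsigned age;
};

class Employee : public Person
{
public:
    Employee(std::string name, unsigned age, std::string company)
        : Person{name, age}, company{company} {}
    void printEmployee() const
    {
        // name is protected in Person, so the derived class may use it
        std::cout << name << " works in " << company << std::endl;
    }
private:
    std::string company;
};

int main()
{
    Employee bob{"Bob", 42, "Microsoft"};
    bob.printEmployee();     // Bob works in Microsoft
    // bob.name = "Tom";     // error: name is not accessible from outside
}
```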
Thus we can use the name variable in the derived class, for example in the printEmployee method, but from outside the base and derived classes we still cannot access it.

As we have seen, the access specifiers `public`, `private` and `protected` play a big role in determining which variables and functions of the base class derived classes can access. Access is also affected, however, by the base class access specifier used when the inheritance is declared. In the example above we use the `public` specifier, and here too we can use one of three options: `public`, `protected` or `private`.

If the base class specifier is not given explicitly, `private` is applied by default (when inheriting structures, if no access specifier is given, `public` applies by default). So in the base class, when defining variables and functions, we can use three specifiers to control access: `public`, `protected` or `private`.

And the same three specifiers can be used when declaring inheritance from a base class. These specifiers combine with one another and give 9 possible combinations.

If base class members are defined with the private specifier, they are in principle inaccessible in the derived class, regardless of the base class access specifier.

If the base class specifier is public, the access level of the inherited members remains unchanged: inherited public members stay public, and inherited `protected` members keep that specifier in the derived class. If the base class specifier is protected, all inherited members with the `protected` and `public` specifiers are inherited as `protected` in the derived class. The point of this is that if the derived class has subclasses of its own, those subclasses can also access such members of the base class. If the base class specifier is private, all inherited members with the `protected` and `public` specifiers are inherited as `private` in the derived class. They are accessible in any function of the derived class, but outside the derived class (including in its own subclasses) they are not accessible.
Let's look at an example. Suppose the base class specifier is private

Since the base class specifier for Person is `private`, the Employee class inherits the `name` variable and the print function as private members.
And if we create a new class derived from Employee, for example a Manager class:

then the now-private name variable and print function from Employee will be inaccessible in the Manager class.

What if, in the example above, we still want to be able to call the print function on the Employee class? We can restore the access level with the using keyword:

In the Employee class we set the access level of the print function of the base class Person to public:

After that the print function again has its original access specifier, public, and is accessible outside the Employee class:

In the same way we can also make the `name` variable public, even though it is defined as `protected` in the base class Person:

However, if variables or functions are defined as private in the base class, they cannot be made public this way.
# Hiding base class functionality

C++ allows a derived class to define variables and functions with the same names as the variables and functions of the base class. In that case the variables and functions of the derived class hide the base class variables and functions with the same names.

A derived class can define a function with the same name as a function in the base class, with the same or a different parameter list. For the compiler such a function exists independently of the base class, and defining it in the derived class like this does not override the function from the base class.
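The listing is missing here; a minimal sketch of such hiding, with names matching the description and the console output that follows:

```
#include <iostream>
#include <string>

class Person
{
public:
    Person(std::string name, unsigned age) : name{name}, age{age} {}
    void print() const
    {
        std::cout << "Name: " << name << "\tAge: " << age << std::endl;
    }
private:
    std::string name;
    unsigned age;
};

class Employee : public Person
{
public:
    Employee(std::string name, unsigned age, std::string company)
        : Person{name, age}, company{company} {}
    // this print() hides Person::print(); it does not override it
    void print() const
    {
        std::cout << "Works in " << company << std::endl;
    }
private:
    std::string company;
};

int main()
{
    Employee tom{"Tom", 38, "Google"};
    tom.print();            // Works in Google
    tom.Person::print();    // Name: Tom   Age: 38  (base class version)
}
```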
Here the Person class, which represents a person, defines a `print()` function that prints the values of the name and age variables. The Employee class, which represents a company employee and derives from Person, also defines a `print()` function, which prints the value of the company variable.

As a result an Employee object will use the implementation of the print function from the Employee class, not from Person:

The print function in Employee hides the print function of the Person class. Sometimes, however, you may need to call the implementation of the function defined specifically in the base class. In that case the :: operator can be used:

Here the call

refers to the print function of the base class Person. As a result we get different console output:

> Name: Tom Age: 38
> Works in Google

A derived class can have variables with the same names as the base class. Although such situations can lead to confusion and are probably not the best way to name variables, we can nevertheless do this. For example:

Here the Integer class represents an integer whose value is stored in the value variable. This class is inherited by the Decimal class, which represents a fractional number. The integer part is stored in the value field of the Integer class, and a separate `value` variable is defined for storing the fractional part. The value variable in Integer has the `protected` specifier, so in theory we could access it in the Decimal class. But since Decimal defines a value variable of its own, it hides the value variable of the base class.

To nevertheless reach the value variable of the base class, the :: operator has to be used,

for example:

Keep in mind that only variables with the `public` and `protected` specifiers can be accessed this way. We cannot access private variables of the base class like this.
# Multiple inheritance

A derived class can have several direct base classes. This kind of inheritance is called multiple inheritance, as opposed to single inheritance, where there is a single base class. Since it makes the inheritance hierarchy somewhat more complicated, it is used much more rarely than single inheritance.
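The example code is not included here; a minimal sketch matching the description below (the message strings and the object name are assumptions):

```
#include <iostream>

class Camera
{
public:
    void makePhoto() const { std::cout << "Taking a photo" << std::endl; }
};

class Phone
{
public:
    void makeCall() const { std::cout << "Making a call" << std::endl; }
};

// an access specifier is given for each base class
class Smartphone : public Camera, public Phone
{
};

int main()
{
    Smartphone nokia;
    nokia.makePhoto();   // Taking a photo
    nokia.makeCall();    // Making a call
}
```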
Here the Camera class represents a camera and provides the makePhoto function for taking pictures. The Phone class represents a phone and provides the makeCall function for making calls. Both classes are inherited by the Smartphone class, which represents a smartphone and can both take photos and make calls.

Note that when the inheritance is declared, an access specifier is given for each base class:

As a result, through a Smartphone object we can call functions of both base classes:

With multiple inheritance the base class constructors must also be called if they have parameters. For example, suppose we have a Book class for a book, a File class for a computer file, and an Ebook class for an e-book that derives from both:

Both base classes have constructors with one parameter, and we call these constructors in the Ebook constructor:

Note the order in which the constructors are called. In the Ebook class definition Book is listed as the first base class, so the Book constructor is called first and only then the File constructor.

A destructor is also defined for each class. Let's look at the order in which the constructors and destructors are called. For this, in the main function we create a single Ebook object and call all the base class functions on it:

As a result we get the following console output:
> Book created
> File created
> Ebook created
> Title: About C++ 320 pages 5.6Mb
> Ebook deleted
> File deleted
> Book deleted
We can see that the constructor of the Book class, which is listed first among the base classes, is called first. The destructors are called in the reverse order, so the Book destructor runs last.

In the example above all classes had functions with different names. Now let's see what happens in the following case:

Here the base classes Book and File have a function with the same name, `print()`. This gives us an ambiguity, and such code simply will not compile.

To solve the problem we can specify from which class exactly we want to call the print function:

Alternatively, we can perform a cast to the required type and then call the function:

Another form of ambiguity in inheritance can arise when inheriting from several classes that directly or indirectly derive from the same class. For example:

Here the class hierarchy is rooted in the person class Person, from which the employee class Employee and the student class Student are derived. But we may also have a working student, and for that we define the StudentEmployee class, which inherits from Student and Employee. Situations like this are of course best avoided, but they can still occur. And if we run the program, we will see that for a single StudentEmployee object the constructor and destructor of the Person class are each called twice:
> Person created
> Person created
> Person deleted
> Person deleted
Moreover, we can see that the call `bob.print()` does not compile.

To solve this problem C++ uses virtual base classes: when the inheritance is declared, the virtual keyword is placed before the name of the base class. Let's apply virtual base classes:

Now, when the Student and Employee classes are defined, the base class Person is specified as virtual:

As a result, we can call the print function for a StudentEmployee object:

And the console output will be as follows:
> Person created
> Person Bob
> Person deleted
Thus we can see that the constructor and destructor of the Person class are now called only once.
When a function is called, the program must determine which implementation of the function the call corresponds to, that is, bind the call to the function itself. C++ has two kinds of binding: static and dynamic.

When function calls are fixed before the program runs, at compile time, this is called static binding (or early binding). In this case a function call through a pointer is resolved solely by the type of the pointer, not by the object it points to. For example:
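The listing is not shown here; a minimal sketch of the non-virtual version described below (it differs from the virtual example further down only in that print is not declared virtual):

```
#include <iostream>
#include <string>

class Person
{
public:
    Person(std::string name) : name{name} {}
    void print() const    // NOT virtual: calls are bound statically
    {
        std::cout << "Name: " << name << std::endl;
    }
private:
    std::string name;
};

class Employee : public Person
{
public:
    Employee(std::string name, std::string company) : Person{name}, company{company} {}
    void print() const    // hides Person::print()
    {
        Person::print();
        std::cout << "Works in " << company << std::endl;
    }
private:
    std::string company;
};

int main()
{
    Person tom{"Tom"};
    Person* person{&tom};
    person->print();      // Name: Tom

    Employee bob{"Bob", "Microsoft"};
    person = &bob;
    person->print();      // still only Name: Bob - the pointer type decides
}
```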
In this case the Employee class derives from the Person class, but both classes define a `print()` function that prints information about the object. In the main function we create two objects, assign them in turn to a pointer of type Person, and call the print function through that pointer. Even when this pointer is assigned the address of an Employee object, the implementation of the function from the Person class is still called:

That is, the choice of function implementation is determined not by the type of the object but by the type of the pointer. Console output of the program:
> Name: Tom
> Name: Bob
The other kind of binding is dynamic binding (also called late binding), which allows the decision about which type's function to call to be made at run time. For this, C++ uses virtual functions. To define a virtual function, it is declared in the base class with the virtual keyword; this keyword can be applied to a function if it is defined inside a class. A derived class can then override its behavior.

So, let's make the print function in the base class Person virtual:
```
#include <iostream>

class Person
{
public:
    Person(std::string name): name{name}
    { }
    virtual void print() const    // virtual function
    {
        std::cout << "Name: " << name << std::endl;
    }
private:
    std::string name;
};

class Employee: public Person
{
public:
    Employee(std::string name, std::string company): Person{name}, company{company}
    { }
    void print() const
    {
        Person::print();
        std::cout << "Works in " << company << std::endl;
    }
private:
    std::string company;
};

int main()
{
    Person tom {"Tom"};
    Person* person {&tom};
    person->print();    // Name: Tom

    Employee bob {"Bob", "Microsoft"};
    person = &bob;
    person->print();    // Name: Bob
                        // Works in Microsoft
}
```
Thus the base class Person defines a virtual print function, and the derived class Employee overrides it. Unlike in the first example, where the function was not virtual, the output is now:
> Name: Tom
> Name: Bob
> Works in Microsoft
This is the difference between overriding virtual functions and hiding them.

A class that defines or inherits a virtual function is also called a polymorphic class. So in this case Person and Employee are polymorphic classes.

It is worth noting that a call to a virtual function through an object name is always resolved statically.

Dynamic binding is only possible through a pointer or a reference.

There are a number of restrictions when defining virtual functions. For a function to take part in dynamic binding, in the derived class it must have exactly the same set of parameters and the same return type as in the base class. For example, if the virtual function is defined as constant in the base class, it must be constant in the derived class as well. If the function has a different set of parameters or a mismatch in constness, we are dealing with function hiding rather than overriding, and static binding will be applied.

Also, static functions cannot be virtual.

To state explicitly that we want to override a function rather than hide it, the word override is placed after the function's parameter list in the derived class
```
#include <iostream>

class Person
{
public:
    Person(std::string name): name{name}
    { }
    virtual void print() const    // virtual function
    {
        std::cout << "Name: " << name << std::endl;
    }
private:
    std::string name;
};

class Employee: public Person
{
public:
    Employee(std::string name, std::string company): Person{name}, company{company}
    { }
    void print() const override    // explicitly state that the function is overridden
    {
        Person::print();
        std::cout << "Works in " << company << std::endl;
    }
private:
    std::string company;
};

int main()
{
    Person tom {"Tom"};
    Person* person {&tom};
    person->print();    // Name: Tom

    Employee bob {"Bob", "Microsoft"};
    person = &bob;
    person->print();    // Name: Bob
                        // Works in Microsoft
}
```
That is, here the expression

indicates that we explicitly want to override the print function. A question may arise: in the previous example we did not specify `override` for the virtual function, yet overriding still worked, so why is `override` needed at all? The point is that `override` explicitly tells the compiler that this is an overriding function. If it does not match the virtual function in the base class in parameter list, return type or constness, or if the base class has no function with that name at all, the compiler will generate an error at compile time, and from that error we will see that something is wrong with our overriding function. If `override` is not specified, the compiler will assume that we mean function hiding, will generate no errors, and compilation will succeed. For this reason, when overriding a virtual function in a derived class it is better to specify the word override.

Note also that a virtual function can be overridden anywhere in the inheritance hierarchy, including in indirect derived classes.

It is also worth mentioning that virtual functions have a cost: objects of classes with virtual functions require a little more memory and a little more execution time. When an object of a polymorphic class (one that has virtual functions) is created, a special pointer is stored in the object. This pointer is used to call any virtual function of the object. It points to a table of function pointers created for the class. This table, called the virtual table or vtable, contains one entry for each virtual function in the class.

When a function is called through a pointer to a base class object, the following sequence of events takes place:

The vtable pointer in the object is used to find the address of the vtable for the class.

Then the pointer to the virtual function being called is looked up in that table.

The function itself is called through the function pointer found in the vtable. As a result, a virtual function call is somewhat slower than a direct call of a non-virtual function, so every declaration and call of a virtual function carries some overhead.

With the final specifier we can forbid derived classes from defining functions that have the same name, return type and parameter list as a virtual function in the base class. For example:

You can also override a base class function but forbid it from being overridden further in derived classes:
# Type conversion

An object of a derived class is at the same time an object of the base class. Conversions from a derived type to the base type are therefore performed automatically.

Here the Person class is the base class and Employee is the derived class, so the compiler can automatically convert an Employee object to the Person type. This can be done with the copy constructor:

Or through the assignment operation:

But the conversion can also be performed explicitly, for example with `static_cast`:

A pointer to a derived class object can be converted automatically into a pointer to an object of the base type:

In this case the pointer to a Person object receives the address of an Employee object.

In the same way you can create a pointer of the derived class and convert it automatically into a pointer to the base type.

The same applies to references:

In some cases the conversion in the opposite direction, from base to derived, is also possible. But first, it is not performed automatically; conversion functions, in particular `static_cast`, have to be used. Second, whether it works or not depends on the type of the object. To be able to cast an object through a base class pointer, for example Person, to a derived class pointer, for example Employee, the base class pointer must point to an object of the Employee class (or of a class derived from Employee). If that is not the case, the result of the cast is undefined. For example:

Here the person pointer, although it is a pointer to the Person type, in reality points to an Employee object. So this pointer can be cast to the Employee* type with `static_cast`.

But consider a different situation:

Here the person pointer points to a Person object. With static_cast, however, we can still successfully cast it to a pointer to Employee. In theory, through such a pointer we could call the getCompany function defined in the Employee class. But the Person class does not have it, so an attempt to call it will make the program fail. Therefore, if there is no certainty that the object really is of a particular derived class, it is better not to perform such conversions from the base type to the derived one.

Smart pointers to a base class can also point to an object of a derived class.
# Dynamic conversion

Dynamic type conversion, unlike static conversion, is performed while the program is running. The dynamic_cast<>() operator is used for this. Just as with `static_cast`, the target type is specified in angle brackets and the object being converted is passed in parentheses:

This operator, however, can only be applied to pointers and references to polymorphic class types, that is, classes that contain at least one virtual function. The reason is that only pointers to polymorphic class types carry the information that `dynamic_cast` needs to check whether the conversion is valid. Naturally, the types between which the conversion is performed must be pointers or references to classes in the same class hierarchy.

There are two kinds of dynamic conversion. The first is the conversion from a pointer to a base class to a pointer to a derived class, the so-called downcast (base classes in a hierarchy are placed at the top and derived classes at the bottom, so the conversion goes from top to bottom). The second kind is a conversion between base types within one hierarchy (with multiple inheritance), the crosscast.

Consider the following program:
```
#include <iostream>

class Book    // book class
{
public:
    Book(std::string title, unsigned pages): title{title}, pages{pages}
    {}
    std::string getTitle() const { return title; }
    unsigned getPages() const { return pages; }
    virtual void print() const
    {
        std::cout << title << ". Pages: " << pages << std::endl;
    }
private:
    std::string title;   // book title
    unsigned pages;      // number of pages
};

class File    // electronic file class
{
public:
    File(unsigned size): size{size}
    {}
    unsigned getSize() const { return size; }
    virtual void print() const
    {
        std::cout << "Size: " << size << std::endl;
    }
private:
    unsigned size;       // size in megabytes
};

class Ebook : public Book, public File    // e-book class
{
public:
    Ebook(std::string title, unsigned pages, unsigned size)
        : Book{title, pages}, File{size}
    {}
    void print() const override
    {
        std::cout << getTitle() << "\tPages: " << getPages()
                  << "\tSize: " << getSize() << "Mb" << std::endl;
    }
};

int main()
{
    Ebook cppbook{"About C++", 350, 6};
    Book* book = &cppbook;    // points to an Ebook object

    // dynamic conversion from Book to Ebook
    Ebook* ebook{dynamic_cast<Ebook*>(book)};
    ebook->print();           // About C++   Pages: 350   Size: 6Mb
}
```
Here we have the Book class, which represents a book, with the title and pages variables for storing the book's title and page count. There is also the File class, which represents an electronic file and defines the size field for storing its size. The e-book class Ebook inherits from both classes.

For the dynamic conversion to be possible, the base classes define a virtual print function.

In the main function we create a single object of type Ebook and pass its address to the book pointer of the base type Book*. Since this pointer does in fact hold the address of an Ebook object, we can cast it to a pointer to Ebook:

After that we can access the functionality of the Ebook class through this pointer.

Note that in this case the dynamic conversion is not really needed, because we could just as well have called the print function on the book pointer and, thanks to the function being virtual, obtained the same result. The conversion is needed when we have to access members of the derived class that are not defined in the base class. For example, the Book class has no getSize() function, so calling it might require a conversion.

The example above showed the so-called downcast. Now let's look at a crosscast:

The conversion from a pointer to Book to a pointer to File is a crosscast, and in this case it is possible because the book pointer holds the address of an Ebook object, which also derives from File.

But such conversions are not always successful. In that case the `dynamic_cast` operator returns a nullptr pointer, and after getting the result we can check for that value:
In this case the book pointer holds the address of a Book object and therefore cannot be converted to a pointer to File, so the call

```
dynamic_cast<File*>(book)
```

returns nullptr. After that we can test the result and perform certain actions depending on the outcome of the check.
Note that if the pointer being converted is a pointer to a constant, the pointer type to which the cast is performed must also be a pointer to a constant:

If you need to convert a pointer to a constant into an ordinary pointer (not to a constant), you first have to cast it to a pointer of the same type as the original using const_cast<T>():

The `dynamic_cast` operator can also be applied to references (from a reference to the base type to a reference to the derived type):

In this case the book reference actually refers to an Ebook object, so it can be converted to an Ebook& reference. But what if the book reference does not refer to an Ebook object:

In that case the conversion fails with an error and the program terminates. Whereas with pointers we could check the result against nullptr, with references we cannot do that. However, to avoid an invalid reference conversion we can, once again, convert to the corresponding pointer type instead
For dynamic conversion of `std::shared_ptr` smart pointers the std::dynamic_pointer_cast<T>() function is used:
In this case the book pointer, which has the type `std::shared_ptr<Book>`, actually points to an Ebook object, so it can be cast to the pointer type `std::shared_ptr<Ebook>`. If the conversion is not possible, the function returns `nullptr`:

In this case the book pointer points to a Book object, so when converting to a pointer to an Ebook object the function returns `nullptr`.
# Specifics of dynamic binding

If you need dynamic binding when passing arguments to a function, the parameter must be a reference or a pointer to an object of the base type:
```
#include <iostream>

class Person
{
public:
    Person(std::string name): name{name} { }
    virtual void print() const    // virtual function
    {
        std::cout << name << std::endl;
    }
    std::string getName() const { return name; }
private:
    std::string name;
};

class Employee: public Person
{
public:
    Employee(std::string name, std::string company): Person{name}, company{company}
    { }
    void print() const override    // overridden function
    {
        std::cout << getName() << " (" << company << ")" << std::endl;
    }
private:
    std::string company;
};

void printPerson(const Person& person)
{
    person.print();
}

int main()
{
    Person tom {"Tom"};
    Employee bob {"Bob", "Microsoft"};
    printPerson(tom);   // Tom
    printPerson(bob);   // Bob (Microsoft)
}
```
In this case the `printPerson` function takes as a parameter a constant reference to an object of type Person, which in reality may also be an Employee object. So when the print function is called, the program decides dynamically which implementation of the function to invoke.

Objects of base and derived classes can be stored in a single collection, for example an array. For example:
```
#include <iostream>

class Person
{
public:
    Person(std::string name): name{name} { }
    virtual void print() const    // virtual function
    {
        std::cout << name << std::endl;
    }
    std::string getName() const { return name; }
private:
    std::string name;
};

class Employee: public Person
{
public:
    Employee(std::string name, std::string company): Person{name}, company{company}
    { }
    void print() const override    // overridden function
    {
        std::cout << getName() << " (" << company << ")" << std::endl;
    }
private:
    std::string company;
};

void printPerson(const Person& person)
{
    person.print();
}

int main()
{
    Person tom {"Tom"};
    Employee bob {"Bob", "Microsoft"};
    Employee sam {"Sam", "Google"};
    Person people[]{tom, bob, sam};
    for (const auto& person : people)
    {
        person.print();
    }
}
```
Here the people array stores Person objects, which may also be Employee objects. However, with this arrangement every Employee object placed in the array is converted into a Person object. As a result, when the array is iterated, the print function from the Person class is called:
> Tom
> Bob
> Sam
If we want dynamic binding for the elements of the array, the objects have to be stored as pointers. For example, let's use pointers:

Here the array stores the addresses of all the objects, and accordingly we get completely different output:
> Tom
> Bob (Microsoft)
> Sam (Google)
A destructor defines the logic for destroying the class. When an object of a derived class is destroyed, we expect the destructor of the derived class to run and then the destructor of the base class, which lets the necessary logic (for example, releasing allocated memory) run for both classes. In some situations, however, this may not work. For example:
```
#include <iostream>
#include <memory>

class Person
{
public:
    Person(std::string name): name{name} { }
    ~Person()
    {
        std::cout << "Person " << name << " deleted" << std::endl;
    }
    virtual void print() const    // virtual function
    {
        std::cout << name << std::endl;
    }
    std::string getName() const { return name; }
private:
    std::string name;
};

class Employee: public Person
{
public:
    Employee(std::string name, std::string company): Person{name}, company{company}
    { }
    ~Employee()
    {
        // getName() is used here because name is private in Person
        std::cout << "Employee " << getName() << " deleted" << std::endl;
    }
    void print() const override    // overridden function
    {
        std::cout << getName() << " (" << company << ")" << std::endl;
    }
private:
    std::string company;
};

void printPerson(const Person& person)
{
    person.print();
}

int main()
{
    std::unique_ptr<Person> sam { std::make_unique<Employee>("Sam", "Google") };
    sam->print();
}
```
Here the sam variable is a `std::unique_ptr` smart pointer to a Person object, which automatically allocates memory for a single Employee object. Since an Employee object is at the same time a Person object, there is no problem with that.

Destructors are defined for both classes; they simply print a line to the console. So we expect that after the main function finishes, the object behind the sam pointer will be destroyed and the destructors of the Employee and Person classes will run (after all, our object is an Employee). But here is what the console actually shows us:

> Sam (Google)
> Person Sam deleted

The console shows that only the destructor of the Person class is called for the object behind the sam pointer, even though our object is an Employee. This can have unpleasant consequences, especially if memory is allocated in the Employee constructor and released in the Employee destructor. To make the Employee destructor run as well, we have to declare the destructor of the base class as virtual. So let's change the code of the Person destructor by adding the word virtual in front of it:
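The changed destructor is not shown here; a minimal sketch of the idea (name is made protected here purely so the derived destructor can print it; in the example above a getter is used instead):

```
#include <iostream>
#include <memory>

class Person
{
public:
    Person(std::string name) : name{name} { }
    virtual ~Person()    // virtual: the Employee destructor will now run too
    {
        std::cout << "Person " << name << " deleted" << std::endl;
    }
protected:
    std::string name;
};

class Employee : public Person
{
public:
    Employee(std::string name, std::string company) : Person{name}, company{company} { }
    ~Employee()
    {
        std::cout << "Employee " << name << " deleted" << std::endl;
    }
private:
    std::string company;
};

int main()
{
    std::unique_ptr<Person> sam{std::make_unique<Employee>("Sam", "Google")};
}   // Employee Sam deleted
    // Person Sam deleted
```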
The rest of the code stays the same. And now we get different console output:

> Sam (Google)
> Employee Sam deleted
> Person Sam deleted

Thus, the destructors of both classes are now called.
It is worth noting that virtual functions let us get around restrictions on access to functions. For example, let's make the print function in the Employee class private:
```
#include <iostream>

class Person
{
public:
    Person(std::string name): name{name}
    { }
    virtual void print() const    // virtual function
    {
        std::cout << "Name: " << name << std::endl;
    }
private:
    std::string name;
};

class Employee: public Person
{
public:
    Employee(std::string name, std::string company): Person{name}, company{company}
    { }
private:
    void print() const override    // overridden function
    {
        Person::print();
        std::cout << "Works in " << company << std::endl;
    }
    std::string company;
};

int main()
{
    Employee bob {"Bob", "Microsoft"};
    Person* person {&bob};
    //bob.print();     // not allowed - the function is private
    person->print();   // but this works
}
```
Since the print function in Employee is now private, we cannot call this function directly on an Employee object from outside the class:

But we can call this implementation through a pointer to the Person type:
Sometimes you need to define a class that is not meant for creating concrete objects - for example, a shape class. In reality there are concrete shapes: a square, a rectangle, a triangle, a circle and so on, but an abstract shape does not exist by itself. At the same time it may be necessary to define a common class for all shapes that contains functionality shared by all of them. Abstract classes are used to describe such entities.

Abstract classes are classes that contain, or inherit without overriding, at least one pure virtual function. An abstract class defines an interface to be overridden by derived classes.

What are pure virtual functions? They are functions that have no definition. The purpose of such functions is simply to declare functionality without an implementation; the implementation is provided by the derived classes. To declare a virtual function as pure, its declaration is terminated with "= 0". For example, let's define an abstract class that represents a geometric shape:
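The class definition is missing here; a minimal sketch of such an abstract class (the function names square and perimeter are taken from the console output shown further down):

```
// abstract class: it declares pure virtual functions with no implementation
class Shape
{
public:
    virtual double square() const = 0;      // pure virtual: area
    virtual double perimeter() const = 0;   // pure virtual: perimeter
};
```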
The Shape class is abstract because it contains at least one pure virtual function - in this case even two of them, for computing the area and the perimeter of a shape. Neither function has any implementation. Here both functions are constant, but that is not required. What matters is that any class derived from Shape will have to provide its own implementation for these functions.

Also, we cannot create an object of an abstract class:

To put the abstract class to use, let's define the following program:
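The program listing is not included here; a minimal sketch of what it might look like; the dimensions of the shapes are assumptions chosen so that the output matches the console output quoted below:

```
#include <iostream>

class Shape
{
public:
    virtual double square() const = 0;      // area, no implementation
    virtual double perimeter() const = 0;   // perimeter, no implementation
};

class Rectangle : public Shape
{
public:
    Rectangle(double w, double h) : width{w}, height{h} {}
    double square() const override { return width * height; }
    double perimeter() const override { return 2 * (width + height); }
private:
    double width;
    double height;
};

class Circle : public Shape
{
public:
    Circle(double r) : radius{r} {}
    double square() const override { return 3.14 * radius * radius; }
    double perimeter() const override { return 2 * 3.14 * radius; }
private:
    double radius;
};

int main()
{
    // Shape shape;   // error: cannot create an object of an abstract class
    Rectangle rect{50, 30};
    Circle circle{30};
    std::cout << "Rectangle square: " << rect.square() << std::endl;        // 1500
    std::cout << "Rectangle perimeter: " << rect.perimeter() << std::endl;  // 160
    std::cout << "Circle square: " << circle.square() << std::endl;         // 2826
    std::cout << "Circle perimeter: " << circle.perimeter() << std::endl;   // 188.4
}
```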
Here two classes derived from the abstract Shape class are defined: Rectangle and Circle. When derived classes are created, each of them must either provide a concrete implementation for the pure virtual functions or repeat the declaration of the pure virtual function. In the second case the derived classes are also abstract.

In this case both Circle and Rectangle are concrete classes and implement all the virtual functions.

Console output of the program:

> Rectangle square: 1500
> Rectangle perimeter: 160
> Circle square: 2826
> Circle perimeter: 188.4

It is worth noting that an abstract class can also define ordinary functions and variables and may have several constructors, but objects of the abstract class still cannot be created. For example:

In this case the Shape class also has two variables, a constructor that sets their values, and a non-virtual function that prints them. The derived classes must also call this constructor. However, we cannot create an object of the abstract class using its constructor.
Operator overloading lets you define the built-in operators, such as +, -, * and so on, for objects of your classes. To define an operator for objects of your class, you define a function whose name consists of the word operator and the symbol of the operator being overloaded. The operator function can be defined either as a class member or outside the class.

Only operators that are already defined in C++ can be overloaded; new operators cannot be created. Nor can you change the number of operands, their associativity or their precedence.

If the operator function is defined as a separate function and is not a class member, the number of parameters of the function matches the number of the operator's operands. For example, a function that represents a unary operator has one parameter, and a function that represents a binary operator has two parameters. If the operator takes two operands, the first operand is passed to the first parameter of the function and the second operand to the second parameter. At least one of the parameters must be of a class type.

The formal definition of operators as member functions of a class:

The formal definition of operators as functions that are not class members:

Here `ClassType` is the type for which the operator is defined, `Type` is the type of the other operand, which may or may not coincide with the first, `ReturnType` is the type of the returned result, which may also coincide with one of the operand types or differ from them, and `Op` is the operation itself.

Let's look at an example with a Counter class that stores a number:
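The listing is missing here; a minimal sketch of such a Counter class with a member operator+ as described below (the getValue helper is an assumption added so the result can be printed):

```
#include <iostream>

class Counter
{
public:
    Counter(int value) : value{value} {}
    // binary +: the current object is the left operand, counter is the right one
    Counter operator+(const Counter& counter) const
    {
        return Counter{value + counter.value};
    }
    int getValue() const { return value; }
private:
    int value;
};

int main()
{
    Counter c1{20};
    Counter c2{10};
    Counter c3 = c1 + c2;
    std::cout << c3.getValue() << std::endl;   // 30
}
```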
Here the Counter class defines an addition operator whose purpose is to add two Counter objects:

The current object represents the left operand of the operation. The object passed to the function through the counter parameter represents the right operand. Here the function parameter is defined as a constant reference, but that is not required. The operator function is also defined as constant, but that is not required either.

The result of the addition operator is a new Counter object in which value equals the sum of the value fields of both operands.

Once the operator is defined, two Counter objects can be added:

An operator function can be defined outside the class in a similar way:

If a binary operator is defined as an external function, as it is here, it takes two parameters. The first parameter represents the left operand of the operation and the second parameter the right operand.

Compared with the previous code, however, a couple of other changes were made here. First, an external function naturally cannot access the private fields of the class, so accessing them would require separate functions that return the values of the fields. For simplicity I just made the value variable public. Another solution here could be to declare the operator function as a friend. Second, external operator functions cannot be constant. So defining operators inside the class has certain advantages.

Note that you do not necessarily have to return an object of the class; depending on the situation it can be any object. We can also define additional overloaded operator functions:

Here a second version of the addition operator is defined; it adds a Counter object and a number and also returns a number. So the left operand of the operation must be of type Counter and the right operand of type int.
Which operators should be defined where? The assignment operator, the subscript operator ([]), the call operator (()), and the member-access-through-pointer operator (->) should be defined as member functions of the class. Operators that change the state of the object or are closely tied to it (increment, decrement) are usually also defined as member functions. The memory allocation and deallocation operators (
```
new new[] delete delete[]
```
) are usually defined as functions that are not class members (although a class can also provide its own static member versions of them).
All other operators can be defined either as standalone functions rather than class members.
The comparison operators (==, !=, <, >) normally return a value of type bool. For example, let's overload these operators for the Counter type:
If it is a matter of a simple field-by-field comparison, then for the `==` and `!=` operators it is easier to use the special `default` keyword (that is, to declare the operator as `= default`):
For example, in the case of the == operator:
By default all fields of the class for which operator == is defined are compared. If the values of all fields are equal, the operator returns `true`.
The assignment operator usually returns a reference to its left operand:
Unary operators usually return a new object created from the existing one. For example, take the unary minus operator:
Here the unary minus operation returns a new Counter object whose value is effectively the value of the current object multiplied by -1.
Redefining the increment and decrement operators can be particularly tricky, because we have to define both the prefix and the postfix form of these operators. Let's define such operators for the Counter type:
The prefix operators should return a reference to the current object, which can be obtained through the this pointer:
Inside the function you can implement whatever increment logic you need. In this case value is increased by 1.
The postfix operators should return the value of the object before the increment, that is, the previous state of the object. So the postfix form returns a copy of the object taken before the increment:
To distinguish the postfix form from the prefix one, the postfix versions take an extra parameter of type int, which is not used (although in principle it can be).
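As the listing is not shown here, below is a minimal sketch of how both increment forms could be added to a Counter class with a value field (the names are assumptions):

```
class Counter
{
public:
    Counter(int v) : value{v} {}
    // prefix ++: modify the object and return a reference to it
    Counter& operator++()
    {
        ++value;
        return *this;
    }
    // postfix ++: the unused int parameter distinguishes this overload;
    // return a copy of the object taken before the increment
    Counter operator++(int)
    {
        Counter previous{*this};
        ++value;
        return previous;
    }
private:
    int value;
};
```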
The << operator takes two arguments: a reference to the stream object (the left operand) and the actual value to output (the right operand). It then returns the stream reference, which can be passed to the next call of the << operator in a chain.
The standard output stream `cout` has type std::ostream. So the first parameter (the left operand) is the ostream object, and the second (the right operand) is the Counter object being printed.
Since we cannot change the standard definition of std::ostream, we define the operator function as a non-member function.
In this case we print the value of the value variable. To obtain value from outside the Counter class I added a `getValue()` function.
The return value should always be a reference to the same stream object that the left operand of the operator refers to.
After the operator function is defined, Counter objects can be printed to the console:
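A minimal sketch of such a non-member operator<< for Counter; the getValue accessor and the output format are assumptions:

```
#include <iostream>

class Counter
{
public:
    Counter(int v) : value{v} {}
    int getValue() const { return value; }
private:
    int value;
};

// non-member output operator: left operand is the stream, right operand is the Counter
std::ostream& operator<<(std::ostream& os, const Counter& counter)
{
    os << "Counter: " << counter.getValue();
    return os;   // return the same stream so calls can be chained
}

int main()
{
    Counter counter{25};
    std::cout << counter << std::endl;   // Counter: 25
}
```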
Sometimes it is better to express some operators through others rather than write separate operators with duplicated logic. For example:
Here the compound addition operator += is implemented first:
In the addition operator function we create a copy of the current object and apply += to that copy and the argument:
In this case the essence of the addition is adding the value of the other object to the value field. However, the operator's logic may be more complex, and to avoid repeating it we can express some operators through others in this way.
# Type conversion operators
C++ lets you define a conversion operator function that converts from the current class type to another type. The target type can be a fundamental type or a class type. In general, a conversion operator has the following form:
`OtherType` here is the type the object is converted to. Note that no return type is written for the operator function,
because the target type is already implied by the function name, so the function must return an OtherType object. Unlike most operators, conversion operators can only be defined as member functions of a class; they cannot be defined as ordinary functions. They are also the only operators in which the operator keyword is not preceded by a return type (the return type comes after the `operator` keyword instead). Let's look at a simple example:
Here the Counter class defines a conversion operator to int:
In this case we simply return the value variable.
After that we can, for example, assign a value of type Counter to a variable or parameter of type int, and the value will be converted to int automatically:
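A minimal sketch of a Counter class with an operator int; the class layout is an assumption based on the description:

```
#include <iostream>

class Counter
{
public:
    Counter(int v) : value{v} {}
    // conversion operator to int: no return type is written before "operator"
    operator int() const { return value; }
private:
    int value;
};

int main()
{
    Counter counter{22};
    int n = counter;                          // implicit conversion
    int m = static_cast<int>(counter);        // explicit conversion
    std::cout << n << " " << m << std::endl;  // 22 22
}
```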
Thanks to the conversion operator, such a type conversion is performed implicitly. But it can also be performed explicitly, for example with the `static_cast` function.
One more example is a conversion to `bool`:
```
#include <iostream>

class Counter
{
public:
    Counter(int n)
    {
        value = n;
    }
    void print() const
    {
        std::cout << "value: " << value << std::endl;
    }
    // conversion operator to bool
    operator bool() const { return value != 0; }
private:
    int value;
};

// test the operators
void testCounter(const Counter& counter)
{
    counter.print();
    if (counter)
        std::cout << "Counter is non-zero." << std::endl;
    if (!counter)
        std::cout << "Counter is zero." << std::endl;
}

int main()
{
    Counter counter1{22};
    testCounter(counter1);
    Counter counter2{0};
    testCounter(counter2);
}
```
In this case, in the conversion operator, we return false if the value of the Counter object equals 0 and true otherwise:
```
operator bool() const { return value != 0; }
```
This lets us use a Counter object in conditional expressions, just like a bool value:
Note that although we did not explicitly define the ! operator (logical negation) for the Counter type, the expression `!counter` still works,
because the `counter` object is implicitly converted to `bool`.
Implicit conversions are not always desirable. In that case they can be disabled by declaring the operator function with the explicit keyword:
In a similar way you can convert between class types:
Here the Ebook class represents an electronic book and PrintBook a printed one. With conversion operators an object of one type can be converted to the other and back, that is, roughly speaking, the book can be digitized or printed. To avoid circular references between the two classes, the implementation of the conversion operators is separated from their declaration here.
# The subscript operator
The subscript operator [] lets you treat an object as an array or a container of other objects and select individual elements from it. The operator [] function must take as its argument a conventional index by which the required element can be found in the container object. Let's look at a simple example:
Here the Person class defines three private variables, name, age, and company, which are not accessible from outside. We can effectively treat a Person object as a container over these variables, and to access them we define the subscript operator:
The operator takes a numeric index as its operand, here of type `unsigned` (some non-negative integer), and depending on that index
returns the value of a particular variable. In this case a std::string value is returned, so the age field is converted to a string with the `std::to_string` function.
Which variable is returned for which index is a matter of convention. In this case it is logical (but not required) to follow the order in which the variables are declared in the class. Then in main, using this operator, we can access a particular variable by its
index:
Console output:
> Tom 38 Microsoft
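A minimal sketch of such a Person class with a numeric subscript operator, consistent with the output above; the exact constructor and index mapping are assumptions:

```
#include <iostream>
#include <string>

class Person
{
public:
    Person(std::string name, unsigned age, std::string company)
        : name{name}, age{age}, company{company} {}
    // conventional mapping: 0 - name, 1 - age, 2 - company
    std::string operator[](unsigned index) const
    {
        switch (index)
        {
            case 0: return name;
            case 1: return std::to_string(age);
            case 2: return company;
            default: return "";
        }
    }
private:
    std::string name;
    unsigned age;
    std::string company;
};

int main()
{
    Person tom{"Tom", 38, "Microsoft"};
    std::cout << tom[0] << "\t" << tom[1] << "\t" << tom[2] << std::endl;
}
```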
Moreover, the indices do not have to be numbers; they can be of any type, for example std::string:
The subscript operator can also be used to access elements of a collection stored inside the object. For example, a company has a set of employees:
For simplicity the Company class stores its employees as an array of strings. Using the subscript function we can access one of the elements of the employees array. To allow the returned objects to be modified, the operator returns not just a `std::string` object but a std::string& reference.
The main difficulty here, however, is organizing the logic for returning an element. In the first case (the Person class) values of different types have to be reduced to a single type. In the second case we may face the situation where an invalid index is passed in. There are different strategies for handling this, but in any case such situations must be kept in mind when defining the operator.
# Redefining the assignment operator
By default the compiler generates an assignment operator for a type, which lets us assign values of that type to variables/parameters/constants of the same type. The default assignment operator simply copies all member variables of the class one after another (in the order in which they are declared in the class definition). For example:
Here the Counter class is defined with a value variable. The default assignment operator copies the members of the object on the right of the assignment into the object of the same type on the left. When we assign the Counter object c1 to the object c2, c2 receives a copy of the value variable from c1:
Note that subsequently changing the value variable in one Counter object does not affect the other object in any way.
That is how the default assignment operator works. But it can be redefined. Keep in mind, though, that the assignment operator can only be defined as a member function of the class.
The assignment operator should return a reference to the object, and its parameter should be a reference to const. Let's implement such an operator for the Counter type:
Let's look at the implemented operator in detail:
The operator function is non-const, because it changes the state of the object. The parameter is a constant reference to the Counter object being assigned, since it does not need to be modified.
As the result we return a reference to the current Counter object. A question may arise: why not the counter object passed into the function as the parameter? The point is that the `=` operator is right-associative and should return its left operand. For example, take the following situation:
Here the chain of assignments is performed as follows:
That is, counter2 first receives the value of counter1. Then counter3 receives the result of the previous operation, and that result is the counter2 object.
Another, at first glance unnecessary, detail is checking whether the parameter is the current object itself:
If the parameter is the current object, there is no point in assigning the value variable. This check is meant to prevent situations like:
which avoids unnecessary assignments and improves performance, especially when memory is being released and allocated, and it can even prevent errors related to writing into memory that has already been freed.
Sometimes implementing the assignment operator becomes a necessity. For example, consider the following program:
Here the Counter class stores a pointer to an int. The constructor allocates memory for one number, which is passed as a parameter, and the destructor frees that memory. It seems no memory problems should arise. But look at how this class is used in main:
A counter2 object is created in the nested block and assigned to the counter1 object. This copies the pointer address, so counter1 and counter2 end up pointing to the same memory address. But when the nested block ends, the counter2 object goes out of scope and its destructor is called, which frees the memory. That also frees the memory used by counter1, because it is the same memory. Yet counter1 is still in use, since it was created outside the nested block. As a result we get unpredictable behavior.
To get out of this situation we can again define the assignment operator:
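A minimal sketch of such a deep-copying Counter, assuming the pointer field described above; a copy constructor is added here as well for completeness, although the text only discusses the assignment operator:

```
#include <iostream>

class Counter
{
public:
    Counter(int v) : value{new int{v}} {}
    ~Counter() { delete value; }
    Counter(const Counter& counter) : value{new int{*counter.value}} {}
    // deep-copying assignment: copy the pointed-to number instead of sharing the pointer
    Counter& operator=(const Counter& counter)
    {
        if (&counter != this)
        {
            *value = *counter.value;
        }
        return *this;
    }
    void print() const { std::cout << *value << std::endl; }
private:
    int* value;
};
```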
In the previous situation there is an alternative to implementing the assignment operator: deleting the default assignment operator. The delete keyword is used for this:
In that case, if the program uses an assignment operation such as
the compiler will produce an error.
# Namespaces
A namespace allows functionality to be grouped into separate containers. A namespace is a block of code that contains a set of components (functions, classes, and so on) and has a name that is attached to every component of that namespace. The full name of each component is the namespace name, followed by the :: operator (the scope resolution operator) and the component name. An example is `cout`,
which is used to print a string to the console and is defined in the std namespace. Accordingly, to refer to it, the
expression `std::cout` is used.
If no namespace is specified, the global namespace is used by default. All names in the global namespace are exactly as you declare them, with no namespace name attached. For example:
Here the functions print and main and the constant message are defined without using any namespace, so they effectively live in the global namespace. In principle you can also refer to them with the :: operator without a namespace name, although that is redundant:
Note that the `main` function must be defined in the global namespace.
Now let's see how to define and use our own namespaces.
A namespace is defined with the namespace keyword followed by the name of the namespace:
After the namespace name comes a code block that holds the namespace's components: functions, classes, and so on.
For example, let's define a namespace called hello:
Here the hello namespace defines a print function and a message constant. To access these components outside the hello namespace, its name has to be used:
Inside the namespace its components can be accessed without the namespace name:
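A minimal sketch of such a hello namespace; the message text is an assumption:

```
#include <iostream>
#include <string>

namespace hello
{
    const std::string message{"hello world"};

    void print()
    {
        std::cout << message << std::endl;   // no prefix needed inside the namespace
    }
}

int main()
{
    hello::print();                              // hello world
    std::cout << hello::message << std::endl;   // hello world
}
```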
One namespace can contain other namespaces:
Here the console namespace defines a nested messages namespace that contains several constants. To access the components of the nested namespace, its name must also be used:
Outside the console namespace, the whole chain of namespaces must be specified to reach such constants:
The using directive lets you refer to any component of a namespace without using its name:
Here we bring all components of the console namespace into the global namespace, after which the namespace name is no longer needed to access them:
However, such an import can lead to undesirable consequences if components with the same names are defined in the global namespace (for example, a message variable). In that case we can import only individual components:
Here only the print function is imported:
So no namespace name is needed to call it, while it is still needed for any other component.
If a namespace name is long, an alias can be defined for it:
In this case the alias `mes` is set for the `console::messages` namespace.
# Nested classes
A nested class is a class whose definition is located inside another class. Nested classes are usually used to describe entities that can exist only within an object of the outer class, especially when the outer class works with a set of objects of the nested class.
Let's look at a small example. Suppose we need to define data describing a user's account (login, password):
Here the Person class represents a user, and the account data is placed in a separate class, Account. The Account class is declared private, so we can refer to it only inside the Person class.
Access specifiers can also be used inside nested classes. In this case the email and password fields and the constructor are public, so that they can be used in the Person class outside of Account. Besides, since the Account class itself is private, these fields are still inaccessible from outside the Person class anyway.
To store the account data of a specific Person object, an account variable is defined:
In this case it is initialized with default data: empty strings for the email and the password.
In the Person constructor we receive the email and password and create an Account object from them:
Since the email and password fields are public, we can access them in Person's member functions; for example, let's print their values in the print function:
In main we create one Person object and pass data through its constructor, including the data for the Account object:
Console output of the program:
> Name: Tom Email: <EMAIL> Password: qwerty
In the example above, Account objects cannot be created or used outside the Person class, because the Account class is private. However, we can also make it public and then refer to it outside the Person class:
Now the nested class is public. We can even create objects of this class and access its variables and functions, qualifying them with the outer class name:
Functions of a nested class can directly refer to static members of the outer class, as well as to any other types defined in the outer class. Other members of the outer class can be accessed from the nested class in the standard ways: through an object of the class, a pointer, or a reference to an object. Moreover, functions of the nested class can access even private variables and constants defined in the outer class.
# Exceptions
Various errors can occur while a program runs. For example, a network connection may drop while a file is being transferred, or incorrect, invalid data may be entered and crash the program. Such errors are also called exceptions. An exception is a temporary object of any type that is used to signal an error. The purpose of the exception object is to carry information from the point where the error occurred to the code that must handle it. If an exception is not handled, the program terminates when the exception occurs.
For example, the following program divides numbers:
This program compiles successfully, but an error occurs when it runs because the code divides by zero, after which the program terminates abnormally.
On the one hand, we could add a check to the divide function and perform the division only if the parameter b is not 0. However, we still have to return some result, some number, from divide. That is, we cannot simply write:
In this case we need to notify the system about the error. The throw operator is used for this.
The throw operator raises an exception, and information about the error can be passed through it. For example, the divide function could look like this:
That is, if the parameter b equals 0, we throw an exception.
But this exception still has to be handled in the code that calls the divide function. The try...catch construct is used for handling exceptions. It has the following form:
The block of code after the try keyword contains the code that may potentially throw an exception.
After the catch keyword, a parameter in parentheses carries information about the exception, and the block that follows actually handles it.
So let's change the whole code as follows:
The code that may throw an exception, the call to divide, is placed inside the try block.
The catch block handles the exception. The ellipsis in parentheses after catch (`catch(...)`) allows any exception to be handled.
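A minimal sketch of what the full program could look like at this point; the concrete values of x and y are assumptions chosen to trigger the division by zero:

```
#include <iostream>

double divide(int a, int b)
{
    if (b == 0)
        throw "Division by zero!";   // raise an exception
    return static_cast<double>(a) / b;
}

int main()
{
    int x{500};
    int y{};
    try
    {
        double z {divide(x, y)};                 // may throw
        std::cout << "z = " << z << std::endl;   // skipped if an exception is thrown
    }
    catch (...)                                   // handle any exception
    {
        std::cout << "Error!" << std::endl;
    }
    std::cout << "The End..." << std::endl;
}
```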
As a result, when the program's execution reaches the line
> double z {divide(x, y)};
an exception will be thrown at this line, so the remaining statements in the try block will not run, and control passes to the catch block, which simply prints an error message to the console. After the catch block completes, the program does not terminate abnormally but continues running, executing the statements after the catch block:
> Error! The End...
In this case, however, we only know that some error occurred, but not which one. Through the parameter of the `catch` block we can get
the message that was passed to the `throw` operator:
Using the parameter
```
const char* error_message
```
we obtain the message passed to the throw operator and print it to the console.
Why do we get the error message as type `const char*` here? Because the `throw` operator is followed by a string literal, which has exactly
the type `const char*`. In that case the console output looks like this:

> Division by zero! The End...
Thus we can learn the essence of the raised exception. In the same way we can pass exception information through any type, for example std::string:
Then in the catch block we can receive this information as a `std::string` object:
If an exception is not handled, the std::terminate() function is called (from the `<exception>` header of the C++ standard library),
which in turn by default calls another function, std::abort() (from `<cstdlib>`), which actually terminates the program.
There are very many functions in the C++ standard library and in third-party libraries, and the question may arise which of them should be called inside a try-catch construct so as not to run into an unhandled exception and abnormal termination. The function's documentation (if any) can help here first of all. Another signal is the noexcept keyword, which, when used in a function header, indicates that the function will never throw exceptions. For example:
Here we state that the print() function never throws an exception. So when you come across a function with this keyword, you can expect it not to throw, and accordingly there is no need to wrap its call in a try-catch construct.
When handling an exception, remember that when an object is passed to the `throw` operator, the catch block receives a copy of that object, and that copy exists only within the catch block. For values of primitive types such as `int`, copying the value may not affect the program's performance, but for class objects the overhead can be higher.
Therefore in that case objects are usually passed by reference, for example:
We can raise and handle several different exceptional situations. Suppose that in a division the divisor must be less than the dividend:
In the `divide` function, depending on the value of b, the `throw` operator is given either a number:
or a string literal:
To test the divide function, another function, test, is defined, where the call to `divide()` is placed inside a `try..catch` construct.
Since the raised exception can carry the error as one of two types, int (if b equals 0) or `const char*` (if b is greater than a), two different catch blocks are defined to handle each exception type:
In main we call the test function, passing various numbers into it. On the first call:
the number b is not 0 and is less than a, so no exception is raised, the try block runs to the end, and the function finishes.
On the second call
the number b equals 0, so an exception is thrown with the number 0 as the exception object. So when the exception occurs, the program picks the catch block that handles exceptions of type int:
On the third call
the number b is greater than a, so the exception object is a string literal, that is `const char*`. So when the exception occurs, the program picks
the catch block that handles exceptions of type const char*:
Thus in this case we get the following console output:
> 5 Error code: 0 The first number is greater than the second one
There may be a situation where an exception is raised inside a `try-catch` construct, and there even is a `catch` block, but it handles
other exception types:
Here there is no catch block for handling an exception of type int. So when the exception is raised:
the program will not find a suitable catch block and will terminate abnormally.
Note that if objects are created in the try block, their destructors are called when an exception occurs. For example:
The Person class defines a destructor that prints a message to the console. In the print function we simply throw an exception.
In main, inside the try block, we create one Person object and call its print function, which naturally leads to an exception being thrown and control passing to the catch block. And if we look at the console output
> Person Tom created Person Tom deleted Print Error
we can see that before the exception handling in the catch block begins, the destructor of the Person object is called.
# Nested try-catch
One `try-catch` construct may contain another. If an exception occurs in the inner try-catch construct, the program first looks in the inner construct
for a catch block that handles the exception's type. If no such catch block is found in the inner try-catch, the program starts looking for a matching catch block in the outer
try-catch construct. Let's look at an example.
Here the `divide()` function is called inside the inner try-catch construct. The throw operator raises an exception whose object is a string literal of type `const char*`.
The inner try-catch construct has a catch block that handles exceptions of type `const char*`. After that catch block executes, the program continues its
normal flow, and the catch block in the outer try-catch construct is NOT executed. The resulting console output is:

> Inner execption: Division by zero Inner try-catch finished External try-catch finished
Now consider another situation, where the inner try-catch construct has no suitable catch block:
This is essentially the same example, except that now the catch block in the inner construct handles exceptions of type `unsigned`. So when the exception is raised,
the inner construct cannot find a catch block suitable for handling a `const char*` exception. Execution therefore passes to the
catch block of the outer `try-catch` construct, which handles `const char*` exceptions. The console output is accordingly different:

> External execption: Division by zero External try-catch finished
# Creating your own exception types
For specific tasks we can create our own exception types, which lets us carry more structured and complete information about the error than primitive types do. For example, consider the following program:
Here a Person class is defined whose constructor receives the user's name and age. We need the age to be within some reasonable range, say from 1 to 110, so the constructor checks the passed age value. If it is out of bounds, the `throw` operator raises an exception
of the AgeException class, which is defined just above.
The AgeException class is created specifically to encapsulate an exception related to a person's age. It simply stores an error message and defines a getMessage method for accessing it.
In a `try-catch` construct we create a couple of Person objects for testing. When an invalid age is passed:
an AgeException will be raised, and control passes to the `catch` block that handles this exception type:
To avoid unnecessary copying of the exception object, it is passed to the catch block by reference.
Accordingly, the console output of the program will be the following:
> Name: Tom Age: 38 Invalid age
When an exception occurs, the catch handlers are checked in the order in which they appear in the code, and the first catch block whose parameter matches the exception type is chosen to handle it. For exceptions of fundamental types (rather than class types), the exception type must match the parameter type in the catch block exactly. For exceptions that are class objects, implicit conversions can be applied during the match. In that case a catch handler is chosen if:
The parameter in catch has the same type as the exception (`const` is ignored)
The type of the catch parameter is a base class of the exception type, or a reference to a base class (`const` is ignored)
The exception and the catch parameter are pointers, and the exception object can be implicitly converted to the parameter type (`const` is ignored)
Since exceptions of derived classes are implicitly convertible to the base class type, we can catch all exceptions of both the base and derived types with a single catch handler.
```
#include <iostream>
#include <string>

class AgeException
{
public:
    AgeException(std::string message) : message{message}
    {
    }
    virtual std::string getMessage() const   // virtual function
    {
        return message;
    }
private:
    std::string message;
};

class MaxAgeException : public AgeException
{
public:
    MaxAgeException(std::string message, unsigned maxAge) : AgeException{message}, maxAge{maxAge}
    {
    }
    std::string getMessage() const override   // override the virtual function
    {
        return AgeException::getMessage() + " Max age should be " + std::to_string(maxAge);
    }
private:
    unsigned maxAge;
};

class Person
{
public:
    Person(std::string name, unsigned age)
    {
        if (!age)        // if the age is 0
        {
            throw AgeException{"Invalid age"};
        }
        if (age > 110)   // if the age is greater than 110
        {
            throw MaxAgeException{"Invalid age.", 110};
        }
        this->name = name;
        this->age = age;
    }
    void print() const
    {
        std::cout << "Name: " << name << "\tAge: " << age << std::endl;
    }
private:
    std::string name;
    unsigned age;
};

int main()
{
    try
    {
        Person bob{"Bob", 1500};   // invalid data
        bob.print();
    }
    catch (const AgeException& ex)
    {
        std::cout << ex.getMessage() << std::endl;
    }
}
```
Here, for situations where the maximum age is exceeded, the MaxAgeException class is defined; it inherits from AgeException, takes the maximum allowed age, and overrides the getMessage function.
Even though the Person constructor throws these two exception types separately,
we can handle both of them by handling only the exception of the base type, AgeException:
Since the getMessage function is virtual and is overridden in MaxAgeException, and the catch parameter is passed by reference, the correct implementation is selected when this function is called. In this case the console output will be:
> Invalid age. Max age should be 110
Sometimes you may need to handle exceptions of base and derived classes separately, especially when you have to call functions of the derived classes that the base classes lack. Since exception objects can match catch parameters of base classes, the handling of derived classes must be placed before the handling of base classes. For example, let's take the Person, AgeException, and MaxAgeException classes defined above and handle the exception types separately:
# The exception type
All exceptions in C++ are described by the exception type, which is defined in the <exception> header file. We can use this class when handling exceptions as well; its interface looks as follows
Let's use this type to handle an exception:
First of all, an object of type std::exception is passed to the throw operator.
If we want to catch exceptions of type exception, we need to declare a variable of this type in the catch clause:
That is, here err is a constant reference to an exception object. If we are not going to use this variable inside the catch block, we can specify just the exception type:
The what() function returns the error message as a C-style string. For std::exception itself it is of little use, since it merely prints the class name, but in derived classes it can be used to report an error message.
Based on the `std::exception` class we can create our own exception types. For example:
Here the Person class represents a user. Its constructor receives a name and an age, but the passed number may exceed any reasonable age or be zero. In that case we throw an exception of type `person_error`:
The person_error class inherits from std::exception, receives the error message through its constructor, and stores it in the message variable:
To return the message we need to override the virtual `what()` function. The problem is that this function returns a `const char*` string,
while the class stores the message as a std::string. To obtain a `const char*` value from the `std::string`, we call the string's `c_str()` function.
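A minimal sketch of such a person_error class; the class name follows the text, while the exact constructor signature is an assumption:

```
#include <exception>
#include <string>

// custom exception type derived from std::exception
class person_error : public std::exception
{
public:
    person_error(std::string message) : message{message} {}
    const char* what() const noexcept override
    {
        return message.c_str();   // convert the stored std::string to const char*
    }
private:
    std::string message;
};
```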
For testing, a testPerson function is defined in which a Person object is created inside a try block. The try..catch construct uses two catch blocks to handle exceptions: the first block handles exceptions of the derived type, the person_error class, and the last block handles the base type, exception:
In this case the program produces the following output:
> Name: Tom Age: 38 Person error: Invalid age
# Exception types
Besides the exception type, C++ has several derived exception types that can be used in various situations. The main ones are:
runtime_error: a general type for exceptions that occur at run time
range_error: an exception thrown when a computed result is outside the range of representable values
overflow_error: an exception thrown when a result overflows, that is, exceeds the largest representable value (arithmetic overflow)
underflow_error: an exception thrown when a computed result falls below the lowest representable value (arithmetic underflow)
logic_error: an exception thrown when there are logic errors in the program code
domain_error: an exception thrown when a function's result is not defined for the value passed to it
invalid_argument: an exception thrown when an invalid argument is passed to a function
length_error: an exception thrown on an attempt to create an object larger than is allowed for the given type
out_of_range: an exception thrown on an attempt to access elements outside the valid range
Many standard exception types fall into two groups depending on the base class, logic_error or runtime_error. Most of these types are defined in the `<stdexcept>` header file.
Types based on logic_error are intended for errors that could have been detected before the program runs and that result from faulty program logic: for example, calling a function with one or more invalid arguments, or calling a member function on an object whose state does not meet the requirements of that particular function. Instead of handling such exceptions, you can explicitly check the validity of the arguments or the object's state before calling the function.
The other group, derived from runtime_error, is intended for errors that can be detected only at run time. For example, exceptions derived from system_error usually encapsulate errors coming from calls into the underlying operating system, such as file input or output failures. File access, like any interaction with hardware, can always fail in ways that cannot be predicted (disk failures, unplugged cables, network failures, and so on).
A try...catch construct can use several catch blocks to handle different exception types. When an exception occurs, the block whose type matches the raised exception is chosen to handle it.
When several catch blocks are used, the catch blocks that handle more specific exceptions are placed first, and only then come the catch blocks with more general exception types:
Here a Person class is defined whose constructor receives a name string and an age number. The passed data may be invalid, though. For example, let the allowed age range be 1-110, and let the name be no longer than 11 characters. The constructor checks the passed values and throws different exceptions:
To test this, a testPerson function is defined in which a Person object is created inside a try block. The try..catch construct uses three catch blocks for the different exception types. The last block handles the most general exception type, exception. The second block handles exceptions of type range_error, derived from runtime_error. And the first block handles exceptions of type length_error, which is derived from logic_error.
Using the what() function, the catch blocks report information about the error. In this case the program produces the following output:
> Name: Tom Age: 38 Length_error: Name must be less than 10 chars Range_error: Age must be between 1 and 110
# Class templates
A class template lets you define members inside a class whose types are not known at the time the code is written. But before moving on to defining a class template, let's look at a problem that templates help solve.
Suppose we need to describe a user class that stores a name and an id (an identifier that distinguishes one user from another). The name is relatively simple: it is a string. But which data type should be chosen to store the id? We could store the id as a number, as a string, or as data of some other type, and each type can have its advantages in different situations. Typically numbers and strings are used for ids, and at first glance we could simply define two classes for the two types:
```
#include <iostream>
#include <string>

// Person class where id is an integer
class UintPerson
{
public:
    UintPerson(unsigned id, std::string name) : id{id}, name{name}
    { }
    void print() const
    {
        std::cout << "Id: " << id << "\tName: " << name << std::endl;
    }
private:
    unsigned id;
    std::string name;
};

// Person class where id is a string
class StringPerson
{
public:
    StringPerson(std::string id, std::string name) : id{id}, name{name}
    { }
    void print() const
    {
        std::cout << "Id: " << id << "\tName: " << name << std::endl;
    }
private:
    std::string id;
    std::string name;
};

int main()
{
    UintPerson tom{123456, "Tom"};
    tom.print();    // Id: 123456   Name: Tom
    StringPerson bob{"tvi4xhcfhr", "Bob"};
    bob.print();    // Id: tvi4xhcfhr   Name: Bob
}
```
Here the UintPerson class represents a user whose id is an integer of type unsigned, and the StringPerson type is a user class whose id is a string. In main we can create objects of these types and use them successfully. Although this example works, we essentially end up with two identical classes that differ only in the type of the id variable. And what if some other type is needed for the id? To simplify such code, C++ offers class templates.
Class templates reduce code duplication. A class template is defined with the following syntax:
To use templates, the class is preceded by the template keyword followed by angle brackets. The template parameters are listed inside the angle brackets; multiple parameters are separated by commas.
The class template itself, like an ordinary class, always starts with the class keyword (or struct, for a structure) followed by the name of the class template and the body of the definition in curly braces. As with an ordinary class, the entire class template ends with a semicolon. The contents of a class template are effectively the same as the definition of a regular class, except that inside the template we can use the template parameters listed in the angle brackets instead of concrete types. In every other respect a class template is like an ordinary class: it can be inherited from, define functions, variables, and constructors, override virtual functions, and so on.
A parameter in the angle brackets is an arbitrary identifier preceded by the keyword typename or class:
Here one parameter named `T` is defined. Whether to write `class` or `typename` before it does not really matter.
Let's rewrite the UintPerson and StringPerson example using templates:
```
#include <iostream>
#include <string>

template <typename T>
class Person
{
public:
    Person(T id, std::string name) : id{id}, name{name}
    { }
    void print() const
    {
        std::cout << "Id: " << id << "\tName: " << name << std::endl;
    }
private:
    T id;
    std::string name;
};

int main()
{
    Person tom{123456, "Tom"};          // T is a number
    tom.print();                        // Id: 123456   Name: Tom
    Person bob{"tvi4xhcfhr", "Bob"};    // T is a string
    bob.print();                        // Id: tvi4xhcfhr   Name: Bob
}
```
In this case the class template uses one parameter, T. That is, it will be some type, but which one exactly is unknown while the code is being written.
This parameter T represents the type of the id variable:
When objects of the Person class template are created, the compiler deduces the type of id from the first constructor argument. For example, in the first case:
the number 123456 is passed for the id field. Since it is a numeric literal of type int, id will also be of type `int`.
In the second case
the string "tvi4xhcfhr" is passed for the id variable; it is a literal of type `const char*`, so id will be of that type. In this case the compiler creates two class definitions, one for each set of types, for int and for `const char*`, and uses those class definitions
to create the objects, each of which uses the corresponding data type for id.
In the example above the type of id was deduced automatically. But we can also specify the type explicitly in angle brackets after the class name:
Several parameters can also be used at once. For example, suppose we need to define a class for a bank transfer:
The Transaction class uses two type parameters, T and V. The parameter T sets the type for the accounts involved in the transfer; both numeric and string values, and values of other types, can be used as account numbers. The parameter V sets the type for the operation code, which again can be any type.
When the template is used in this case, two types must be specified:
The types are matched to the parameters by position: the string type will be used for the parameter T, and the int type for the parameter V.
In the case of the transaction2 variable, the types T and V are deduced from the constructor arguments.
The syntax for defining functions outside a class template differs slightly from defining them inside the template. In particular, definitions of functions outside the class template must themselves be written as templates, even if they do not use the template parameters.
When a constructor is defined outside the class template, its name must be qualified with the name of the class template:
In this case all functions, including the constructors, the destructor, and the assignment operator function, are defined as functions of the `Person<T>` class template. And although in this case the copy constructor and the print function do not use the parameter T at all, they are still defined as templates.
The same applies to the destructor.
Like function parameters, template parameters can have default values, that is, a default type to be used. For example:
Here the type int is used as the default type for the template parameter. The template parameter defines the type of the id variable, which can be set via the setId function.
We can specify the type in angle brackets explicitly:
In this case `std::string` is used as the template argument, so id will be a string.
In the second case the type is not specified explicitly, so the default type, int, is used:
So here id will be a number.
# Templates
Templates let you define constructs (functions, classes) that use certain types even though, at the time the code is written, it is not known exactly what those types will be. In other words, templates let you define generic constructs that do not depend on a particular type.
Function templates let you define functions that do not depend on specific types.
First let's look at an example of where this can be useful. Say we need to define a function that adds two values for the types `int`, `double`, and `std::string`. The first thing that comes to mind is function overloading: define a separate version for each type:
This example works fine and performs the calculations as it should. However, we run into the fact that the add function is effectively repeated: its versions do essentially the same thing, and only the types of the parameters and of the return value differ.
Now let's apply function templates. A function template is a pattern from which a concrete function specific to a certain type can be created:
The definition of a function template begins with the template keyword, after which the typename keyword and the list of template parameters follow in angle brackets:
In this case one parameter, `T`, is specified after `typename`. A template parameter is an arbitrary identifier, for which,
as a rule, capital letters such as T are used, although that is not required. So in this case the parameter T stands for some type that becomes
known at compile time. It may be int, or double, or string, or any other type. But since the addition operation is used inside the function, it is important that whatever type is substituted for T supports an addition operation that returns an object of the same type.
If the type used does not support the addition operation, we get an error at compile time.
When the add function is called, objects of type int, double, or any other type can be passed to it. At the call the compiler deduces the concrete type bound to the template parameter T from the argument types, creates an instance of the add function that works with that concrete type, and that instance is what gets called. If a later call requires the same instance, the compiler reuses the existing one.
References, pointers, and arrays of the template parameter type can also be used. For example, let's pass the parameters in the program above by reference and make them constant:
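Since the listing is not shown in this excerpt, here is a minimal sketch of such an add function template with constant reference parameters; the function name follows the text:

```
#include <iostream>
#include <string>

// function template: T is deduced from the arguments at each call
template <typename T>
T add(const T& a, const T& b)
{
    return a + b;
}

int main()
{
    std::cout << add(3, 4) << std::endl;            // T = int -> 7
    std::cout << add(2.5, 3.1) << std::endl;        // T = double -> 5.6
    std::string hello{"hello "}, world{"world"};
    std::cout << add(hello, world) << std::endl;    // T = std::string -> hello world
}
```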
Another example is a value-swapping function:
The swap function takes two parameters of any type and swaps their values.
Using pointers, illustrated with a function that finds the larger of two values:
In the examples above the compiler determined from the arguments which type the function uses. However, we can also specify explicitly which type we want to use:
Here we explicitly state that we want to use the `double` type:
In the second case integer literals are passed to the function, yet the implementation of the function for the `double` type is still used:
On the one hand, parameterization reduces the need for function overloading, since we can abstract away from concrete types. On the other hand, there may still be situations where different versions of a function are needed, and then we can combine function overloading with parameterization. For example, we may need to find the maximum of two values, or of a whole set of elements:
Here two parameterized functions for finding the maximum value are defined. One of them simply compares two values accessed through pointers, and the other walks over all the elements of an array and picks the largest of them.
Several parameters can be used. For example, we need to define a function that transfers some amount from one account to another and passes an operation code. Different types can be used for the account identifiers and the operation code: numbers, strings, and so on. For this we define the following function:
In this case, with the call
```
transfer("id1234", "id5678", 2804, 5000);
```
a character array is substituted for the parameter T and the int type for the parameter K.
Sometimes it may not be known what type of value will be returned, or we may want to let the compiler deduce the exact return type. In that case the decltype(auto) placeholder can be used instead of the return type. For example, let's define a function that computes the arithmetic mean with a deduced result type:
C++ also allows templates with non-type parameters; that is, just as in a function, we can declare ordinary parameters of concrete types, for example:
Here a print function template is defined that prints the value value, of type T, N times. The parameter N has a fixed type, `unsigned int`.
When the function is called, a value can be passed for the parameter N. For example, let's print the number 3 four times:
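A minimal sketch of such a print template with a non-type parameter N; the order of the template parameters (T first, then N) is an assumption:

```
#include <iostream>

// N is a non-type template parameter with a fixed type
template <typename T, unsigned int N>
void print(const T& value)
{
    for (unsigned int i{}; i < N; ++i)
    {
        std::cout << value << std::endl;
    }
}

int main()
{
    print<int, 4>(3);   // prints the number 3 four times
}
```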
Such parameters are not of much use by themselves, since we could just as well declare ordinary function parameters. In some scenarios, however, they turn out to be useful. For example, let's compute the length of any array:
Another example: in one of the listings above, the array size was passed into the function that computed the average of the numbers in an array. With non-type parameters we can get rid of the need to pass the array size into the function:
Here a parameter N of type `size_t` represents the array size. When the function is called, this parameter gets
a concrete value, the size of the array passed to the function. Note that starting with the C++20 standard you can declare parameters whose types are deduced automatically from the passed arguments, and the result type can be deduced in the same way. The auto keyword is used for this. Likewise, the expressions `auto*`, `auto&`, and `const auto&` can be used for the types of parameters and function results:
# Template specialization
When a class template is defined, the compiler itself generates from this template the classes that use particular types. However, we can define such classes ourselves for a specific set of template arguments. Such class definitions are called template specializations. A template specialization can be full or partial.
In a full specialization, values are given for all template parameters. Then, for the specified set of arguments (types), the compiler uses the specialization instead of creating a class from the template. For example:
First the template itself has to be defined. In this case it is the Person class template, which takes one parameter. This parameter is used inside the template to define the type of the id variable.
The template specialization follows the class template:
In this case the specialization is full, because a value, here the type `unsigned`, is given for all template parameters (in fact, for the template's single parameter).
In this case the `template` keyword is followed by empty angle brackets. That is, this specialization will be used only when the template argument
is the `unsigned` type.
A class template specialization is not required to have the same members as the template itself: a specialization may change, add, or omit members without restriction. So in this case id is of type unsigned and is generated in the constructor from an additionally added static variable. This static variable is incremented with each new object created, so each new Person<unsigned> object gets an id one greater than the previous one. There is no setId function in the specialization, because we do not need it.
In main we can use this specialization to create Person objects:
Since the `unsigned` type is specified as the template argument for these objects, our template specialization will be used.
For all other template arguments the compiler creates the class definition itself. For example:
Here the `std::string` type is passed as the template argument. Accordingly, the id variable will be a string, and it is set with
the setId function, which takes a string.
In a partial specialization, values are given for only some of the template parameters. For example:
Here, for the sake of the example, the template has two parameters, T and K:
The parameter T sets the type for the id variable, and the parameter K the type for the phone number stored in the phone variable (we can pass the phone number as a string or as a number, a sequence of digits).
After the template definition comes a partial specialization of the template for the unsigned type:
That is, only the value of the parameter T is fixed, as the type `unsigned`; the value of the parameter K still remains unknown. In this case, after the
template keyword we list the unspecified parameters (here the parameter K) whose values are unknown. The angle brackets after the class name (
```
class Person<unsigned, K>
```
) indicate how the template parameters are specialized.
This list must have the same number of parameters as the original, unspecialized template. The first parameter of this specialization is unsigned.
The other parameter is given as the corresponding parameter name from the original template. Thus, if `unsigned` is specified as the first template argument, the compiler uses the partial specialization to create the class:
If the first template argument is a type other than unsigned, the compiler generates the class definition entirely on its own:
# Inheritance and class templates
When inheriting from a class based on a template, we have to supply values for the template parameters of the base class. In this case we can also define the derived class as a template and use its parameters when specifying the base class:
In this case the base class template Person is defined first; it uses the template parameter T to set the type of the id variable. Then the Employee class template is defined, which inherits from the Person class:
Thus the value determined by the Employee template will be used as the template argument for the base class.
To access the functionality of the base class, the expression `Person<T>` has to be used (that is, the template argument for Person must be specified):
Later in the program we can instantiate Employee objects with a specific type, and that type will be applied to the functionality of the base class:
Another inheritance option is to set the types used by the base class explicitly at the point of inheritance:
In this case the Employee class is an ordinary class that inherits from the `Person<unsigned>` type. That is, for the base class's functionality
the parameter T will now be the unsigned type.
# Containers
To manage collections of objects, the C++ standard library defines containers. A container represents a collection of objects of a certain type and provides access to those elements. There are two kinds of containers in C++: associative and sequential.
A sequential container stores its elements in sequence, one next to the other; how the elements are organized differs depending on the particular container.
The sequential container types are:
array: a fixed-size collection.
Elements cannot be added to or removed from the container.
vector: a variable-size collection.
It supports adding and removing elements anywhere in the container.
deque: a double-ended queue.
list: a doubly linked list.
forward_list: a singly linked list.
Thus the C++ standard library comes with a number of containers that represent particular data structures. They all have both common and specific capabilities. Except for the array class, all of them support adding and removing elements. The main difference between them lies in how they support adding and removing elements, as well as access to the elements in the container. Depending on the situation and requirements, one type of container or another can be used. For example, if you need to access an arbitrary element of the container, array or vector is used (with list or forward_list you may have to walk through the list to find the needed element). If you need to add or remove elements in the middle of the container, list or forward_list can be used, which is harder to do with a vector. In practice, however, vector is used most often as the more flexible data type, and the other container types are used much more rarely.
Besides the sequential containers there are so-called container adaptors. Technically they are not containers; they encapsulate one of the containers described above (for example, a vector) and let you work with it in a particular way. They are the following types:
std::stack<>: represents the "stack" data structure
std::queue<>: represents the "queue" data structure
std::priority_queue<>: also represents a queue, but its elements have priorities
Associative containers are containers in which every element is associated with a key, and that key is used to access the element in the container.
In C++ the associative containers are represented by sets (set) and maps/dictionaries (map).
# Vector
A vector is a container that holds a collection of objects of a single type. To work with vectors you need to include the header:
Let's define the simplest vector:
The type whose objects the vector will store is given in the angle brackets. That is, the vector numbers stores objects of type int. Such a vector, however, is empty: it contains no elements.
But we can initialize the vector in one of the following ways:
It is important to understand the difference between parentheses and curly braces here:
A vector can only store elements of the single type specified in the angle brackets. Values of other types cannot be stored in the vector, as in the following case:
There are several ways to access the elements of a vector:
[index]: returns the element at the given index (just like arrays); indexing starts at zero
at(index): the function returns the element at the given index
Let's iterate over the vector and get some of its elements:
Keep in mind that indexing does not add elements. For example, if a vector contains 5 elements, we cannot access a sixth element:
The result of such an access is undefined. Some compilers may report an error, others will keep running, but even then the access is erroneous, and in any case it will not add a sixth element to the vector.
To avoid such situations you can use the at() function: it also returns the element at an index, but when an invalid index is accessed it throws an out_of_range exception:
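A short sketch of the points above (the variable names are illustrative): brace and parenthesis initialization, access via [] and at(), and the out_of_range exception:

```
#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> numbers {1, 2, 3, 4, 5};   // list initialization with braces
    std::vector<int> fives(3, 5);               // parentheses: three elements equal to 5

    std::cout << numbers[0] << "\n";            // 1 - access by index
    std::cout << numbers.at(2) << "\n";         // 3 - access via at()

    // numbers[5];                              // undefined behaviour - index out of bounds
    try
    {
        numbers.at(5);                          // at() throws instead
    }
    catch (const std::out_of_range& ex)
    {
        std::cout << "Invalid index: " << ex.what() << "\n";
    }
}
```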
# Iterators
Iterators provide access to the elements of a container and implement the well-known object-oriented "Iterator" design pattern. Iterators make it very convenient to traverse elements. In C++ iterators expose a common interface across the different container types, which allows a uniform approach to accessing elements of different kinds of containers.
Note that only containers have iterators; the container adaptors - the types std::stack, std::queue and std::priority_queue - do not have iterators.
An iterator is described by the type iterator. The concrete iterator type differs for each container. For example, the iterator of a `list<int>` container has the type list<int>::iterator, the iterator of a `vector<int>` container has the type vector<int>::iterator, and so on. The common functionality used to access elements, however, is the same.
To obtain iterators, C++ containers provide the functions begin() and end(). The begin() function returns an iterator that points to the first element of the container (if the container has elements). The end() function returns an iterator that points to the position just past the last element, that is, essentially to the end of the container. If the container is empty, the iterators returned by begin and end are equal. If the begin iterator is not equal to the end iterator, there is at least one element between them.
Both functions return an iterator for the specific container type:
In this case a vector is created: a container of type vector that holds values of type int. The container is initialized with the set {1, 2, 3, 4}. The `begin()` method gives us an iterator for this container, and that iterator points to the container's first element.
The following operations can be performed with iterators:
*iter: get the element the iterator points to
++iter: move the iterator forward to the next element
--iter: move the iterator back to the previous element. Iterators of the forward_list container do not support the decrement operation.
iter1 == iter2: two iterators are equal if they point to the same element
iter1 != iter2: two iterators are not equal if they point to different elements
iter + n: returns an iterator shifted n positions forward from iter
iter - n: returns an iterator shifted n positions back from iter
iter += n: moves the iterator n positions forward
iter -= n: moves the iterator n positions back
iter1 - iter2: returns the number of positions between the iterators iter1 and iter2
>, >=, <, <=: comparison operations. One iterator is greater than another if it points to an element closer to the end
Note that the iterators of not every container support all of these operations.
Iterators of the types std::forward_list, std::unordered_set and std::unordered_map do not support the --, -= and - operations (since std::forward_list is a singly linked list in which each element stores a pointer only to the next element).
Iterators of the std::list type support increment and decrement, but not the +=, -=, + and - operations. The same restrictions apply to the iterators of the std::map and std::set containers.
The +=, -=, +, -, <, <=, >, >= and <=> operations are supported only by random-access iterators (the iterators of the std::vector, array and deque containers).
Since an iterator is essentially a pointer to a particular element, through that pointer we can obtain the current element and change its value:
Right after it is obtained, the iterator points to the container's first element, so the expression `*iter` returns the first element of the vector.
By adding or subtracting a number you can move the iterator forward or back by a given number of elements:
Once again, keep in mind that not all operations are supported by the iterators of every container.
For example, let's use iterators to traverse the elements of a vector:
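A minimal sketch of these operations on a vector iterator (illustrative values, assuming the random-access operations that vector iterators support):

```
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> numbers {1, 2, 3, 4};

    auto iter = numbers.begin();     // std::vector<int>::iterator pointing at the first element
    std::cout << *iter << "\n";      // 1
    *iter = 10;                      // change the element through the iterator
    iter += 2;                       // random access: jump two positions forward
    std::cout << *iter << "\n";      // 3

    // classic iterator loop over all elements
    for (auto it = numbers.begin(); it != numbers.end(); ++it)
    {
        std::cout << *it << " ";     // 10 2 3 4
    }
    std::cout << "\n";
}
```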
When working with containers, keep in mind that adding or removing elements may invalidate all current iterators for that container, as well as references and pointers to its elements. So, in general, after inserting into or erasing from a container you should stop using the iterators previously obtained for it.
If the container is a constant, only a constant iterator (type const_iterator) can be used to access its elements. Such an iterator allows elements to be read but not modified:
```
std::vector<int>::const_iterator
```
A constant iterator can also be obtained with the cbegin() and cend() functions. Even if the container itself is not a constant, when a constant iterator is used to traverse it you again cannot change the values of its elements:
Note that for the std::set type (set) only constant iterators are available, and for std::map (dictionary) the key part of an element cannot be modified through an iterator.
Reverse iterators let you traverse the container's elements in the opposite direction. To get a reverse iterator the functions rbegin() and rend() are used, and the iterator itself has the type reverse_iterator:
```
std::vector<int>::reverse_iterator
```
The console output of the program: > 5 4 3 2 1
If you need to protect the container's values from modification, you can use a constant reverse iterator, represented by the type const_reverse_iterator, which can be obtained with the crbegin() and crend() functions:
C++ also supports iterators for plain arrays. For this the standard library defines the functions std::begin() (returns an iterator to the beginning of the array) and std::end() (returns an iterator to the end of the array):
Like containers, an array can be traversed with iterators:
Of course, an array can also be traversed in other ways - by index or with ordinary pointers. But array iterators are useful when manipulating containers. For example, the insert() function that a number of containers provide lets you add part of another container to a container, and iterators can be used to select the part being added. In this way, iterators make it possible to add to a container - for example, to a vector - some part of another container:
Here the insert call adds to the vector numbers, starting at the position pointed to by the iterator `numbers.end()` (that is, at the very end of the vector), a range of elements of the array data.
The start of that range is given by the expression `std::begin(data) + 1` (that is, from the 2nd element), and the end by the expression `std::end(data) - 1` (that is, up to and including the next-to-last element). Console output: > 1 2 3 4 5 6 7
# Vector operations
To add elements to a vector the push_back() function is used; the element to add is passed to it:
Unlike arrays, where we are constrained by a fixed size, vectors are dynamic structures, so we can add new data to a vector dynamically.
The emplace_back() function performs a similar task: it adds an element to the end of the container:
A number of functions allow inserting elements at a specific position:
emplace(pos, value): inserts the element value at the position pointed to by the iterator pos
insert(pos, value): inserts the element value at the position pointed to by the iterator pos, similarly to emplace
insert(pos, n, value): inserts n copies of value starting at the position pointed to by the iterator pos
insert(pos, begin, end): inserts, starting at the position pointed to by the iterator pos, the elements of another container from the range between the iterators begin and end
insert(pos, values): inserts a list of values starting at the position pointed to by the iterator pos
The emplace function:
The insert function:
If you need to remove all elements of a vector, you can use the clear function:
The pop_back() function removes the last element of the vector:
If an element needs to be removed from the middle or from the beginning of the container, the member function erase() is used, which has the following forms:
Also, starting with the C++20 standard, the free function std::erase() was added to the language. It is not part of the vector type. Its first parameter is the vector, and its second the element to remove:
In this case we remove from the vector numbers3 all occurrences of the number 1.
The size() function tells you the size of the vector, and the empty() function checks whether the vector is empty:
The resize() function changes the size of the vector. It has two forms:
resize(n): keeps the first n elements of the vector; if the vector has fewer than n elements, the missing elements are added with the default value
resize(n, value): likewise keeps the first n elements of the vector; if the vector has fewer than n elements, the missing elements are added with the value value
The assign() function replaces all elements of the vector with a given set:
In this case the vector's elements are replaced by a set of four "C++" strings.
You can also pass a set of values directly, which will replace the vector's contents:
One more function, swap(), exchanges the contents of two containers:
Vectors can be compared: they support all the comparison operations <, >, <=, >=, ==, !=. Containers are compared by comparing the pairs of elements at the same positions. Vectors are equal if they contain the same elements at the same positions; otherwise they are not equal:
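A compact sketch exercising the operations listed above (values and names are illustrative):

```
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> numbers {1, 2, 3};
    numbers.push_back(4);                          // 1 2 3 4
    numbers.emplace_back(5);                       // 1 2 3 4 5
    numbers.insert(numbers.begin(), 2, 0);         // 0 0 1 2 3 4 5
    numbers.pop_back();                            // 0 0 1 2 3 4
    numbers.erase(numbers.begin());                // 0 1 2 3 4
    std::erase(numbers, 0);                        // C++20: remove all zeros -> 1 2 3 4

    std::cout << "size: " << numbers.size() << "\n";
    numbers.resize(6, -1);                         // 1 2 3 4 -1 -1
    numbers.assign(3, 7);                          // 7 7 7

    std::vector<int> other {9, 9};
    numbers.swap(other);                           // numbers: 9 9, other: 7 7 7
    std::cout << std::boolalpha << (numbers == other) << "\n";   // false

    for (int n : numbers) std::cout << n << " ";   // 9 9
    std::cout << "\n";
}
```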
# Array
The array container from the header of the same name, `<array>`, is an analogue of a plain array. It also has a fixed size.
To create an array object, the element type and the size are passed in angle brackets after the type name:
In this case an array object of 5 numbers of type int is defined. By default all elements of the container have indeterminate values.
To initialize the container with specific values you can use an initializer: pass values for the container's elements in curly braces:
In this case the empty initializer initializes all elements of the numbers container with zeros. You can also specify concrete values for the elements:
The fixed size puts a constraint on initialization: the number of elements passed to the container must not exceed its size. You can pass fewer values; they will be assigned to the first elements of the container, and the remaining elements will receive default values (for integer types, for example, that is the number 0):
However, if during initialization we pass more elements than the size of the container, we get an error.
Note that starting with the C++17 standard you can omit the type and the number of elements in the initialization; the compiler deduces them automatically from the initializer list:
In this case, however, the initializer list in curly braces must contain at least one value.
To access the elements of an array container you can use the same syntax as with plain arrays: put the index of the element being accessed in square brackets:
An array container can be traversed with the standard loops:
New elements cannot be added to an array container, nor can existing ones be removed. The main functions of the array type that we can use:
size(): returns the size of the container
at(index): returns the element at the index index
fill(n): assigns the value n to all elements of the container
Applying the methods:
Although array objects are similar to plain arrays, the array type is more flexible. For example, we cannot directly assign the values of one plain array to another, whereas an array object can be assigned the data of another array object:
We can also compare two array containers:
Two containers are compared element by element. So in the example above the containers numbers1 and numbers2 are obviously equal. Comparison of plain arrays, by contrast, has been deprecated since the C++20 standard.
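A minimal sketch of std::array initialization, access, fill(), assignment and comparison (illustrative values):

```
#include <array>
#include <iostream>

int main()
{
    std::array<int, 5> numbers {};               // 0 0 0 0 0
    std::array<int, 5> primes {2, 3, 5};         // 2 3 5 0 0 - the rest default to 0
    std::array evens {2, 4, 6, 8, 10};           // C++17: type and size deduced

    primes[3] = 7;                               // index access
    primes.at(4) = 11;                           // checked access
    numbers.fill(1);                             // 1 1 1 1 1

    std::array<int, 5> copy = primes;            // assignment copies all elements
    std::cout << std::boolalpha << (copy == primes) << "\n";   // true

    for (int n : primes) std::cout << n << " ";  // 2 3 5 7 11
    std::cout << "\nsize: " << primes.size() << ", evens[0]: " << evens[0] << "\n";
}
```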
# List
The list container is a doubly linked list, that is, a list in which each element has pointers to the previous and the next element. Thanks to this we can move through the list both forwards and backwards. To use a list you need to include the header file list.
Unlike other containers, the list type does not define the index operator or the at() function, which performs a similar task.
Nevertheless, the list container provides the front() and back() functions, which return the first and the last element respectively.
To access elements located in the middle (after the first and before the last element), you have to traverse the elements with loops or iterators:
To get the size of the list you can use the size() function:
The resize() function changes the size of the list. It has two forms:
A number of functions are used to add elements to a list container (a short sketch follows the lists below):
push_back(val): adds the value val to the end of the list
push_front(val): adds the value val to the beginning of the list
emplace_back(val): adds the value val to the end of the list
emplace_front(val): adds the value val to the beginning of the list
The following functions can be used to remove elements from a list container:
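The original list of removal functions is not reproduced here; as a hedged sketch, typical list operations, including the commonly used removal functions pop_front(), pop_back(), remove() and resize(), might be applied like this:

```
#include <iostream>
#include <list>

int main()
{
    std::list<int> numbers {2, 3, 4};
    numbers.push_front(1);               // 1 2 3 4
    numbers.push_back(5);                // 1 2 3 4 5
    numbers.emplace_back(6);             // 1 2 3 4 5 6

    std::cout << numbers.front() << " " << numbers.back() << "\n";   // 1 6
    std::cout << "size: " << numbers.size() << "\n";                 // 6

    numbers.pop_front();                 // remove the first element
    numbers.pop_back();                  // remove the last element
    numbers.remove(4);                   // remove all elements equal to 4
    numbers.resize(2);                   // keep only the first two elements

    for (int n : numbers) std::cout << n << " ";   // 2 3
    std::cout << "\n";
}
```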
# Forward_list
The forward_list container is a singly linked list, that is, a list in which each element stores a pointer to the next element. To use this kind of list you need to include the header file forward_list.
In a forward_list only the first element can be obtained directly; the front() function is used for this. A loop can also be used to traverse the elements:
Iterators can also be used to traverse the list and access its elements.
In addition, the forward_list class provides a couple of extra functions for obtaining iterators: before_begin() and cbefore_begin(). Both return an iterator (the second returns a constant iterator, `const_iterator`) to a non-existent element located immediately before the beginning of the list. The value at this iterator cannot be accessed.
By default the forward_list class does not define any function that returns the size of the container. The class only has the max_size() function, which returns the maximum size of the container.
To change the size of the container you can use the resize() function, which has two forms:
Using the function:
The following functions are used to add elements to a forward_list:
push_front(val): adds the object val to the beginning of the list
emplace_front(val): adds the object val to the beginning of the list
emplace_after(p, val): inserts the object val after the element pointed to by the iterator p. Returns an iterator to the inserted element. If p is an iterator to the position past the end of the list, the result is undefined.
insert_after(p, begin, end): inserts, after the element pointed to by the iterator p, a set of objects from another container whose beginning and end are given by the iterators begin and end. Returns an iterator to the last inserted element.
insert_after(p, il): inserts the initializer list il after the element pointed to by the iterator p. Returns an iterator to the last inserted element.
To remove an element from a forward_list container, the following functions can be used:
clear(): removes all elements
erase_after(p): removes the element following the element pointed to by the iterator p. Returns an iterator to the element after the removed one
erase_after(begin, end): removes the range of elements whose beginning and end are pointed to by the iterators begin and end respectively. Returns an iterator to the element after the last removed one
Using the functions:
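A small sketch of these calls; note that the single-value insert_after(p, val) overload used here is standard but was not among the overloads listed above:

```
#include <forward_list>
#include <iostream>

int main()
{
    std::forward_list<int> numbers {3, 4, 5};
    numbers.push_front(2);                              // 2 3 4 5
    numbers.emplace_front(1);                           // 1 2 3 4 5

    // before_begin() points just before the first element,
    // which lets us insert at the very beginning via the *_after functions
    auto pos = numbers.before_begin();
    numbers.insert_after(pos, 0);                       // 0 1 2 3 4 5

    numbers.erase_after(numbers.begin());               // remove the element after the first: 0 2 3 4 5
    numbers.resize(3);                                  // 0 2 3

    std::cout << numbers.front() << "\n";               // 0
    for (int n : numbers) std::cout << n << " ";        // 0 2 3
    std::cout << "\n";
}
```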
# Deque
A deque is a double-ended queue. To use this container you need to include the header file deque.
Ways to create a double-ended queue:
To access the elements of the deque you can use the [] operation and a number of functions:
[index]: returns the element at the given index
at(index): returns the element at the given index
Note that if we use the indexing operation with an invalid index that is out of the container's bounds, the result is undefined:
In this case using the at() function is preferable, because when an invalid index is accessed it throws an out_of_range exception:
The container's elements can also be traversed in a loop or with iterators:
To find out the size of the deque you can use the size() function, and the empty() function tells you whether the deque contains any elements; it returns true if the deque is empty:
The resize() function changes the size of the deque. It has two forms:
The swap() function exchanges the contents of two deques:
A number of functions can be used to add elements to a deque:
push_back(val): adds the value val to the end of the deque
push_front(val): adds the value val to the beginning of the deque
emplace_back(val): adds the value val to the end of the deque
emplace_front(val): adds the value val to the beginning of the deque
When adding to a deque container keep in mind that an insertion may invalidate all iterators, pointers and references to the container's elements.
The following functions are used to remove elements from a deque container:
When removing, keep in mind that removing elements from any position (except removing the first or the last element) invalidates all iterators, pointers and references to the deque's elements.
Thus deque, like vector and array, supports random access to the container's elements, but unlike vector it also supports insertion at the beginning of the container. In addition, in its internal implementation, when a deque changes size it does not allocate a new array in memory to fit the new set of elements, but manipulates pointers instead.
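A short sketch of the deque operations described above (illustrative values):

```
#include <deque>
#include <iostream>

int main()
{
    std::deque<int> numbers {3, 4};
    numbers.push_front(2);                       // 2 3 4
    numbers.push_back(5);                        // 2 3 4 5
    numbers.emplace_front(1);                    // 1 2 3 4 5

    std::cout << numbers[0] << " " << numbers.at(4) << "\n";   // 1 5
    std::cout << "size: " << numbers.size()
              << " empty: " << std::boolalpha << numbers.empty() << "\n";

    numbers.pop_front();                         // remove the first element
    numbers.pop_back();                          // remove the last element

    for (int n : numbers) std::cout << n << " "; // 2 3 4
    std::cout << "\n";
}
```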
# The stack std::stack
The std::stack<T> class represents a stack: a data structure that works on the LIFO principle (last-in, first-out), meaning the last element added is always the first one retrieved. A stack can be compared to a pile of objects, for example a pile of plates: plates are added on top, each new plate placed over the previous one, and when you need a plate you first take the one at the very top (the one put there last).
To work with a stack you need to include the header file `<stack>`. Defining an empty stack:
We can only access the topmost element of the stack; the top() function is used for this:
In this case, after adding three elements, the stack looks like this:
> -------
> | Sam |
> -------
> | Bob |
> -------
> | Tom |
> -------
The last element added sits at the top of the stack, and the `top()` function returns that element.
The pop() function is used to remove elements. Removal happens in the order opposite to insertion:
In this case, until the stack becomes empty, we print the top (most recently added) element to the console with the top function and then remove it with the pop function. The console output of the program:
> Sam Bob Tom
A stack can be initialized with another stack or with a deque (double-ended queue):
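A sketch that puts these pieces together (names are illustrative):

```
#include <deque>
#include <iostream>
#include <stack>
#include <string>

int main()
{
    std::stack<std::string> people;
    people.push("Tom");
    people.push("Bob");
    people.push("Sam");                       // Sam is now on top

    std::cout << people.top() << "\n";        // Sam

    // pop elements in reverse order of insertion
    while (!people.empty())
    {
        std::cout << people.top() << " ";     // Sam Bob Tom
        people.pop();
    }
    std::cout << "\n";

    // a stack can be initialized from a deque
    std::deque<int> numbers {1, 2, 3};
    std::stack<int> st {numbers};             // 3 is on top
    std::cout << st.top() << "\n";
}
```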
# The queue std::queue
The std::queue<T> class represents a queue: a container that works on the FIFO principle (first-in, first-out), meaning the first element added is always the first one retrieved. In other words, it is a container analogous to an ordinary queue of the kind we often encounter in everyday life.
To work with a queue you need to include the header file `<queue>`. Defining an empty queue:
The size() function returns the number of elements in the queue, and the empty() function checks whether the queue has elements (if it returns `true`, the queue is empty):
We can access only the very first element of the queue, using the front() function, and the very last one, using the back() function:
The pop() function removes an element from the front of the queue:
A queue can be initialized with another queue or with a deque (double-ended queue):
The swap() function exchanges the elements of two queues:
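A minimal sketch of the queue operations above (illustrative names):

```
#include <iostream>
#include <queue>
#include <string>

int main()
{
    std::queue<std::string> people;
    people.push("Tom");
    people.push("Bob");
    people.push("Sam");

    std::cout << people.front() << " - " << people.back() << "\n";   // Tom - Sam
    std::cout << "size: " << people.size() << "\n";                  // 3

    // elements leave the queue in the order they were added
    while (!people.empty())
    {
        std::cout << people.front() << " ";    // Tom Bob Sam
        people.pop();
    }
    std::cout << "\n";
}
```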
# The priority queue std::priority_queue
The std::priority_queue<T> class represents a priority queue: a container that, like the ordinary queue, works on the FIFO principle. This class is defined in the header file `<queue>` (the same one as the queue class), but in terms of functionality it is more like the stack class.
Defining an empty priority queue:
We can access only the first element of the queue, using the top() function:
Note that the string "Sam" is added first and the string "Bob" last, yet the first (conditionally higher-priority) element we get is the string "Tom". This is exactly where priorities come into play. When elements are added to a priority queue, a comparator function is applied; it compares the elements being added and arranges them in the queue in a particular order.
By default the comparison uses a function that puts first the elements that are conditionally "greater". For example, the string "Tom" is conditionally greater than "Sam" or "Bob", because the letter "T" comes after "S" and "B" in the alphabet. Accordingly, the queue will look like this:
> Tom(1) - Sam (2) - Bob(3)
In this case the elements will be arranged as follows:
> 22 - 13 - 4
The pop() function removes elements; it extracts the element from the front of the queue:
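A sketch of the priority ordering described above (illustrative values):

```
#include <iostream>
#include <queue>
#include <string>

int main()
{
    std::priority_queue<std::string> people;
    people.push("Sam");
    people.push("Tom");
    people.push("Bob");

    // elements come out in order of priority ("greater" strings first)
    while (!people.empty())
    {
        std::cout << people.top() << " ";   // Tom Sam Bob
        people.pop();
    }
    std::cout << "\n";

    std::priority_queue<int> numbers;
    numbers.push(13);
    numbers.push(4);
    numbers.push(22);
    std::cout << numbers.top() << "\n";     // 22 - the largest value has the highest priority
}
```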
# Sets
A set is a container that can store only unique values. Sets are typically used to create collections that must not contain duplicates.
Sets are represented by the type std::set<>, defined in the header file `<set>`. Creating an empty set:
Since the `std::set` type is a template, the type of elements the set will store is specified in angle brackets after the type name. In this case the set will store numbers of type int.
A set can also be initialized with another set or with an initializer list:
The size() function returns the number of elements in the set. In addition, the empty() function checks whether the set is empty (it returns `true` if the set is empty):
for loops can be used to traverse a set, for example a `for-each` style loop:
The insert() function is used to add elements:
And since a set can store only unique values, we cannot add the same value more than once. So several calls to `numbers.insert(2)` will not affect the contents of the set in any way: the number 2 will be added only once.
Another point to keep in mind is that a set orders its elements. By default the elements are arranged in ascending order. So when the contents of the set are printed we get the following console output:
> 1 2 3 4 5 6
The erase() function is used to remove an element from a set; the element to remove is passed to it:
Removing an element that is not present (as in the call `numbers.erase(1)`) has no effect. Console output: > 4 5
The `count()` function lets you check whether a particular value is in the set. If the value is present in the set, the function returns 1; if not, 0:
Starting with the C++20 standard you can also use the contains() function to check for the presence of an element; it returns true if the element is present and `false` if it is absent:
Above we looked at the `std::set<>` object, which is an ordered set and orders all of its elements according to a certain criterion (ascending order by default).
But the C++ standard library (header file `<unordered_set>`) also has the unordered set type std::unordered_set<>. It supports practically the same functions, only it does not order its elements. For example, if we take an ordinary set:
Then by default the elements of the set are arranged in ascending order:
> 1 2 3 4 5 6
Now let's use the unordered set `unordered_set`
In this case the elements of the set are stored in an unspecified, hash-dependent order rather than sorted:
> 6 1 4 5 2 3
Likewise, the position at which the `insert()` function places a new element in an unordered_set is unspecified.
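A short sketch contrasting set and unordered_set (illustrative values):

```
#include <iostream>
#include <set>
#include <unordered_set>

int main()
{
    std::set<int> numbers {6, 1, 4, 5, 2, 3};
    numbers.insert(2);                              // duplicate - has no effect
    numbers.erase(6);                               // remove the value 6

    for (int n : numbers) std::cout << n << " ";    // 1 2 3 4 5 - always sorted
    std::cout << "\n";

    std::cout << numbers.count(3) << "\n";          // 1 - the value is present
    std::cout << std::boolalpha << numbers.contains(7) << "\n";   // false (C++20)

    std::unordered_set<int> hashed {6, 1, 4, 5, 2, 3};
    for (int n : hashed) std::cout << n << " ";     // unspecified, hash-dependent order
    std::cout << "\n";
}
```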
# The dictionary std::map
A map, std::map, is a container in which every value is associated with a particular key, and the element can be retrieved by that key. Keys must be unique. An example of such a container is a dictionary, in which every word is matched with its translation or definition; that is why such structures are also called dictionaries.
The C++ standard library provides two dictionary types: std::map<Key, Value> and std::unordered_map<Key, Value>. These are templates parameterized by two types: the first type, Key, sets the type of the keys, and the second, Value, sets the type of the values.
The std::map type is defined in the header file `<map>`. Defining an empty dictionary:
Here a dictionary products is defined which will notionally store product prices. The type `std::string` is used for the keys and numbers of type `unsigned` for the values (notionally the key is a product's name and the value is its price).
To access the elements of a dictionary - to get or change their values - the indexing operator [] is used, just as with an array or a vector. Only instead of integer indices you can use keys of any type, in the following form:
Here a dictionary products is defined whose keys are strings and whose values are numbers of type unsigned. So to set an element, a string key is put in the square brackets and a numeric value is assigned:
We will treat the key as the product's name and the value as the product's price. That is, in this case the element with the key "bread" is assigned the value 30. It does not matter that an empty dictionary was created earlier and contains no element with the key "bread": if the element does not exist, it is created. If an element with this key already exists, its value is changed.
To get the element with a particular key we use the same syntax. For example, since an element's value is a number, we can obtain that number by accessing the element by key:
In the program above we simply print the values of the elements to the console:
> bread 30 milk 80 apple 60
A "for-each" style for loop can be used to iterate over the elements:
Let's look at the loop definition. Each element of the dictionary is in fact an object of type std::pair<const Key, Value>, which stores both the key and the value. In our case that is an object of type
```
std::pair<const std::string, unsigned int>
```
Using the `first` and `second` fields of this object we could obtain the element's key and value respectively:
But starting with the C++17 standard a different syntax can be used, which decomposes the object right away into its parts, the key and the value:
In this case `product` receives the key and `price` the value of a dictionary element. As a result, running the program produces the following console output: > apple 60 bread 30 milk 80
Note that the elements are stored in the dictionary, and accordingly printed to the console, in ascending order of their keys. Since the keys are strings, they are sorted alphabetically.
The fact that the dictionary's elements are of type std::pair makes it possible to initialize a dictionary with std::pair objects:
The definition can even be shortened:
As shown above, to add an element to a dictionary it is enough to assign a value to some key. To remove elements the erase() function is used; the key of the element to remove is passed to it:
The `size()` function returns the number of elements in the dictionary. The map class also has the empty() function, which returns `true` if the dictionary is empty.
To check whether the dictionary contains an element with a particular key, the functions count() (returns 1 if the element is present and 0 if it is absent) and contains() (returns true if the element is present and false if it is absent) are used. Both functions take the element's key:
The `std::map` type defines a dictionary that orders all its elements, by default in ascending order of the keys. If ordering is not needed, you can use the std::unordered_map type, which generally provides the same functionality but does not order its elements and is defined in the header file `<unordered_map>`.
Console output:
> apple 60 milk 80 bread 30
Note that the key part of each std::map element is constant, so the keys cannot be modified while iterating over the dictionary:
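A sketch of the std::map usage described above (product names and prices are illustrative):

```
#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<std::string, unsigned> products
    {
        {"bread", 30}, {"milk", 80}, {"apple", 60}
    };

    products["potato"] = 40;                 // add a new element
    products["bread"] = 35;                  // change the value of an existing key
    products.erase("milk");                  // remove by key

    std::cout << std::boolalpha << products.contains("apple") << "\n";   // true (C++20)
    std::cout << products.count("milk") << "\n";                         // 0

    // C++17 structured bindings: keys come out in ascending order
    for (const auto& [product, price] : products)
    {
        std::cout << product << " " << price << "\n";
    }
}
```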
# std::span
The C++20 standard provides the type std::span<T>, which can refer to any sequence of values of type T: a `std::vector<T>`, a `std::array<T>`, a plain array, and a number of other sequences. Let's look at its advantage.
Suppose, for example, we need to define a function that computes the maximum value in a data set. But there can be many kinds of sequences. Say we want to be able to find the maximum element both in a vector and in an array. For this we could define two versions of the same function, one taking a vector and one taking an array:
To process the array we also have to pass the array's size so that we can loop over it and find the maximum element. But the `std::span` type lets us shorten the code: we can define just one version:
And here it does not matter whether we pass a vector or an array to the `max()` function; it works for both sequences. Moreover, for an array we do not need to pass the size: the compiler determines it automatically.
span also has some of the functions that other sequences have:
size(): the size of the span
empty(): returns true if the span is empty
data(): a pointer to the elements
back(): the last element
For example, let's double the elements of a span:
An `std::span` object can also be created explicitly by passing it the desired sequence:
In any case, however, since `std::span<T>` implies that we may change the values of its elements, the sequence must not be constant. For example, the following code will not work:
To use constant sequences you have to use the form `std::span<const T>`. In this case, however, we cannot change the values of the sequence's elements:
# Strings
As already covered in the article Introduction to strings, C++ defines the special type std::string for working with strings, declared in the `<string>` header. Let's take a closer look at the main aspects of working with this type.
An object of type string contains a sequence of characters of type char, which may be empty. For example, defining an empty string:
There are a number of other ways to initialize a string. For instance, a string can be initialized with a repeated set of characters:
And a string object can be initialized with another string object:
You can also initialize a string with only part of another string:
Using the standard input and output streams `cin` and `cout` you can read data into a string and print it to the console:
Console output:
> Input your name: Tom Your name: Tom
To read a whole line regardless of whitespace, you can use the getline() method:
The length() and size() methods tell you the size of the string, that is, how many characters it consists of (the null byte is not counted):
If a string is empty, it contains 0 characters. In this case we can use the empty() method: it returns true if the string is empty:
The addition operation is used to concatenate strings:
Note that in the addition operation both operands must NOT be string literals at the same time. For example, in the following case we get an error:
If you do need to join two string literals, you can simply omit the addition operation:
Alternatively, you can implicitly convert a string literal into a `string` object. To do this, the namespace `std::string_literals` is imported and the suffix s is added to the string literals:
In this case strings with the s suffix, for example `"hello "s`, are string objects.
Strings can contain various special characters that have a special purpose; for example, the character "\t" represents a tab and "\n" moves the text to a new line. But since the backslash \ is used to indicate such sequences, to print a single backslash inside a string we have to precede it with another backslash: "\\". Likewise, if we want to print a double quote inside a string, it too has to be preceded by a backslash: "\"". For example:
In this case the console output will look like this:
> Name: "Tom" Age: 38
Raw literals make it easier to define such strings. These literals are prefixed with R, and the text itself is enclosed in double quotes and parentheses:
Here all the tab indents, line breaks and quotes inside the string are interpreted exactly as they are written in the string. As a result, the output is the same as before.
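A short sketch of the string basics described above (names and values are illustrative):

```
#include <iostream>
#include <string>

int main()
{
    using namespace std::string_literals;

    std::string empty;                       // empty string
    std::string stars(5, '*');               // "*****" - repeated character
    std::string copy {stars};                // initialized from another string
    std::string part {"Hello world", 5};     // "Hello" - part of a string

    std::string name {"Tom"};
    std::string message = "Hello, " + name;        // concatenation with a string object
    std::string greeting = "Hello, "s + "world";   // the s suffix turns a literal into std::string

    std::cout << message << " (" << message.length() << " chars)\n";
    std::cout << greeting << "\n";

    // raw literal: backslashes and quotes are taken verbatim
    std::cout << R"(Name: "Tom" Age: 38)" << "\n";
}
```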
# Strings with Unicode support
To support strings with characters in Unicode encodings, the `<string>` header in C++ also provides a number of additional string types:
std::wstring holds a string of characters of type wchar_t
std::u8string holds a string of characters of type char8_t (added in C++20)
std::u16string holds a string of characters of type char16_t
std::u32string holds a string of characters of type char32_t
For example, let's define a variable of type `std::wstring`
It can also be initialized with a concrete string. When assigning to a `wstring` variable, the string literal is prefixed with L.
To define strings of the other types, corresponding prefixes are used as well: u8 for u8string, u for u16string and U for u32string. For example, let's define variables of these types
To print a string of type `std::wstring` the std::wcout stream is used
For printing strings of the remaining types C++ does not yet define stream types of its own.
Otherwise, working with strings of these types is similar to working with the `string` type; they can use the same functions as `string`. Keep in mind, however, that since `wstring` uses the character type `wchar_t`, the encoding of such strings depends on the compiler. On Windows UTF-16 is normally used, so the string consists of 2-byte UTF-16 characters, but on most other systems `wchar_t` is a 4-byte UTF-32 character. Against this background the types `u8string`, `u16string` and `u32string` look preferable, but at the moment C++ has rather limited support for working with Unicode characters.
# Type conversion and strings
It is often necessary to combine a string with data of other types, for example numbers. However, a string can only be concatenated with another string, so data of other types first has to be converted to a string. The function std::to_string() is used for the conversion; the value to convert is passed to it:
The opposite task often arises as well: converting a string to another type. There is a set of functions that convert a string to a number of a particular type:
`stoi()`: converts to int
`stol()`: to long
`stoll()`: to long long
`stoul()`: to unsigned long
`stoull()`: to unsigned long long
`stof()`: to float
`stod()`: to double
`stold()`: to long double
They are all defined in the `<string>` header, work in the same way, and take the string to convert as a parameter:
If necessary, a string value can be converted to a pointer to characters. The c_str() method is used to convert to a pointer to a constant string:
To get a pointer you can also use the data() method, which returns a pointer to a non-constant value if the string object is not a constant.
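A minimal sketch of these conversions (illustrative values):

```
#include <iostream>
#include <string>

int main()
{
    int age {38};
    std::string message = "Age: " + std::to_string(age);   // number -> string
    std::cout << message << "\n";

    std::string input {"123"};
    int n = std::stoi(input);               // string -> int
    double d = std::stod("3.14");           // string -> double
    std::cout << n + 1 << " " << d << "\n"; // 124 3.14

    std::string name {"Tom"};
    const char* ptr = name.c_str();         // pointer to a null-terminated character array
    std::cout << ptr << "\n";
}
```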
# String comparison
The comparison operations can be applied to strings in C++:
> >= < <= == != <=>
These operations compare two string objects, or a string object with a string literal. In all of these operations the operands are compared character by character until a pair of corresponding characters that differ is found, or until the end of one or both operands is reached. When a pair of characters differs, the numeric codes of the characters are compared to determine which string is conditionally "less" or "greater". If no differing pair of characters is found but the strings have different lengths, the shorter string is "less" than the longer one. Two strings are equal if they contain the same number of characters and all corresponding character codes are equal. This way of comparing is also called lexicographical comparison, or comparison in lexicographical order. Note that since the numeric codes of the characters are compared, the result of a comparison also depends on the case of the characters.
For example, the == operator returns true if all the characters of both strings are equal.
```
std::string s1 {"hello"};
bool result {s1 == "Hello"};   // false - the strings differ in case
result = s1 == "hello";        // true
```
Since the strings "hello" and "Hello" differ in the case of the first letter, the numeric code of that character differs as well, so these strings are not equal.
Another example is the > (greater than) operation:
```
std::string s1 {"helm"};
std::string s2 {"hello"};
bool result {s1 > s2};   // true
```
In this case the condition s1 > s2 holds, that is, s1 is greater than s2: the first three characters ("hel") are equal, and the fourth character of the first string ("m") comes after the fourth character of the second string ("l") in the alphabet, so "m" is greater than "l" (even though "hello" has more characters than "helm").
Let's look at a small program. Suppose we have an array of names and we need to sort them in alphabetical order:
```
#include <iostream>
#include <string>

int main()
{
    std::string people[] {"Tom", "Alice", "Sam", "Bob", "Kate"};
    // sort in ascending order
    bool sorted {};
    do
    {
        sorted = true;              // stays true if the strings are already sorted
        std::string temp;           // variable used for swapping values
        for (unsigned i {1}; i < std::size(people); i++)
        {
            // if the previous string is greater than the next one
            if (people[i - 1] > people[i])
            {
                // swap the values
                temp = people[i];
                people[i] = people[i - 1];
                people[i - 1] = temp;
                sorted = false;
            }
        }
    } while (!sorted);

    // print the strings to the console
    for (const auto person : people)
    {
        std::cout << person << std::endl;
    }
}
```
Here the array of strings is sorted with bubble sort - not the fastest, but an illustrative algorithm that compares each string with the following one. If the previous string is "greater" than the next one, the values are swapped through the intermediate variable temp. To optimize the sort, the variable `sorted` is added: every time a swap is performed this variable is reset to false, which means the outer `do-while` loop has to run again.
Console output:
> Alice Bob Kate Sam Tom
To compare strings you can also call the string's compare() function. The other string, against which the current one is compared, is passed to it. The `compare` function returns 0 if the two strings are equal. If the current string is greater, a number greater than 0 is returned; if the current string is less, a number less than 0 is returned. For example:
```
std::string tom {"Tom"};
std::string person {"Tom"};
int result = tom.compare(person);
std::cout << result << std::endl;   // 0 - the strings are equal
```
Here the two strings are equal, so the number 0 is returned.
```
std::string tom {"Tom"};
std::string bob {"Bob"};
std::string sam {"Sam"};
int result = sam.compare(bob);      // 1 - the first string (sam) is greater than the second (bob)
std::cout << result << std::endl;   // 1
result = sam.compare(tom);          // -1 - the first string (sam) is less than the second (tom)
std::cout << result << std::endl;   // -1
```
Here the string "Sam" is greater than the string "Bob", so the result of the first comparison is 1. In the second comparison the first string "Sam" is less than the second string "Tom", so the function returns -1.
The `compare()` function has a number of overloads. Let's note one of them, which takes three parameters:
```
int compare(size_t _Off, size_t _Nx, const std::string &_Right) const
```
The first parameter is the index of the first character in the current string at which the substring starts. The second parameter is the number of characters of the substring. The third parameter is the string that the substring is compared with. That is, the string given by the third parameter is compared with the substring that starts at the index given by the first parameter and contains the number of characters given by the second parameter.
Where can we apply this? For example, suppose we need to find the index at which one string occurs in another:
```
#include <iostream>
#include <string>

int main()
{
    std::string text {"Hello world!"};
    std::string word {"world"};
    for (unsigned i {}; i < text.length() - word.length() + 1; i++)
    {
        if (text.compare(i, word.length(), word) == 0)
        {
            std::cout << "text contains " << word << " at index " << i << std::endl;
        }
    }
}
```
In this case we try to find the index of the string word ("world") in the string text ("Hello world!"). To do this we loop over the characters of text until we reach the character at index
```
text.length() - word.length() + 1
```
(since we compare word.length() characters, we subtract word.length(); and since the strings may be equal, we add 1).
In the loop we perform the comparison
```
if (text.compare(i, word.length(), word) == 0)
```
that is, in the string text we compare the substring that starts at index i and contains `word.length()` characters with the string word. If for some number i such an equality is found, we print the number i to the console. In this case the console shows the following: > text contains world at index 6
One more form of the `compare()` function lets you compare two substrings:
```
#include <iostream>
#include <string>

int main()
{
    std::string text {"Hello world!"};
    std::string word {"world"};
    unsigned size {2};   // the number of characters from the second string to compare
    for (unsigned i {}; i < text.length() - size + 1; i++)
    {
        if (text.compare(i, size, word, 1, size) == 0)
        {
            std::cout << "text contains substring at index " << i << std::endl;
        }
    }
}
```
Here the algorithm is essentially the same as in the previous example, except that now we compare a substring of size size from the string word, starting at its index 1 (that is, the substring "or"), with substrings of the string text. In this case the console output is as follows
> text contains substring at index 7
The substr() function extracts a substring. It takes two parameters: the first is the index at which the substring starts, the second is the number of characters to extract. The result of the function is the extracted string:
It may happen that the requested number of characters is larger than the number of characters available in the string. In that case all the remaining characters are extracted into the substring:
Another situation is when the starting index of the substring is invalid, that is, equal to or greater than the number of characters:
In that case we get a `std::out_of_range` exception and the program terminates abnormally. If you need to extract all characters starting from a certain one, you can use another form of the `substr()` function that takes only the starting index:
Sometimes you need to check whether a string starts with a particular substring. In principle the previously discussed compare() and substr() functions can be used for this. For example:
However, starting with the C++20 standard you can also use the starts_with() function. If the current string starts with the other string, the function returns `true`:
Similarly, the compare() and substr() functions can be used to check whether text ends with a particular substring. For example:
But the C++20 standard added the ends_with() function specifically for this purpose. If the current string ends with the other string, the function returns `true`:
The find() function returns the index of the first occurrence of a substring or of an individual character in a string, as a value of type `size_t`:
If the string or character is not found (as in the last case of the example above), the special constant std::string::npos is returned, which is a very large number (as the example shows, the number 18446744073709551615). When searching, we can compare the result of `find()` against this constant:
The `find` function has several additional overloads. With the second parameter we can specify the index at which the search should start:
Здесь в цикле пробегаемся по тексту, в котором надо найти строку, пока счетчик i не будет равен
```
text.length() - word.length()
```
.
С помощью функции `find()` получаем индекс первого вхождения слова в тексте, начиная с индекса i. Если таких вхождений не найдено, то выходим из цикла.
Если же найден индекс, то счетчик i получает индекс, следующий за индексом найденного вхождения.
В итоге, поскольку искомое слово "friend" встречается в тексте два раза, то программа выведет
> The word is found 2 times.
В качестве альтернативы мы могли бы использовать цикл `while` :
Еще одна версия позволяет искать в тексте не всю строку, а только ее часть. Для этого в качестве третьего параметра передается количество символов из искомой строки, которые программа будет искать в тексте:
Стоит отметить, что в этом случае искомая строка должна представлять строковый литерал или строку в С-стиле (например, символьный массив с концевым нулевым байтом).
Функция rfind() работает аналогично функции `find()` , принимает те же самые параметры, только ищет подстроку в обратном порядке - с конца строки:
Пара функций - find_first_of() и find_last_of() позволяют найти соответственно первый и последний индекс любого из набора символов:
В данном случае ищем в строке "Phone number: +23415678901" первую и последнюю позицию любого из символов из строки "0123456789". То есть таким образом мы найдем начальный и конечный индекс номера телефона.
Если нам, наоборот, надо найти позиции символов, которые НЕ представляют любой символ из набора, то мы можем использовать функции find_first_not_of() (первая позиция) и find_last_not_of() (последняя позиция):
Мы можем комбинировать функции. Например, найдем количество слов в строке:
Вкратце рассмотрим данный код. В качестве текста, где будем подсчитывать слова, определям переменную text. И также определяем строку разделителей, такие как знаки пунктуации, пробелы, символ перевода строки, которые не являются словами:
Например, если в строке одни только символы из набора separators, тогда функция `find_first_not_of()` возвратит значение `std::string::npos` ,
что будет означать, что в тексте больше нет непунктационных знаков.
И если start указывает на действительный индекс начала слова, то увеличиваем счетчик слово. Далее находим индекс первого символа из separators, который идет сразу после слова. То есть фактически это индекс после последнего символа слова, который помещаем в переменную end:
После того, как мы нашли начальный индексы слова и его конец, переустанавливаем start на начальный индекс следующего слова и повторяем действия цикла:
В конце выводим количество найденных слов.
# Modifying a string
If you need to append another string to the end of a string, the append() method is used; the string to append is passed to it:
The insert() function is used to insert one string into another. It has several different overloads. The simplest one takes the insertion index and the string to insert:
In this case the string str is inserted into the string text starting at index 7. As a result the variable text will equal "insert a string into a text".
A string literal can also be inserted:
Part of a string can be inserted:
Here we insert into text 3 characters of the variable langs starting at its index 5, that is, the substring " C,".
Among the other overloads of `insert()` it is also worth noting the one that inserts a particular character a given number of times:
In this case we insert the character * into the string text 5 times, starting at index 8.
The replace() function is used to replace part of a string. It too has many overloads, so let's look at the most common ones.
The simplest overload takes three parameters:
The first parameter is the index from which the substring should be replaced. The second parameter is the number of characters to replace. The third parameter is the string to replace them with. Example:
Here, in the string text, we replace 4 characters starting at index 6 with the string "C++". Thus from the string "Lang: Java" we get the string "Lang: C++".
In the previous example the characters were replaced with a string literal, but they can also be replaced with a string object:
Quite often the task is to replace a particular substring whose index may not be known. In that case we can search the string to find the index of the substring and its size. For example, let's take the text "Hello, Tom!" and replace the substring "Tom" with "Bob":
Here we find the position of the first character of the substring "Tom" in the text and store it in the variable start. The character following the last character of the substring "Tom" is found by searching for a separator character from the string separators with the `find_first_of()` function. Then we use the found index positions in `replace()`.
However, the text may contain many occurrences of a particular substring (in our case the string "Tom"), and the task may be to replace all of these occurrences. For this we can use loops:
Here we first find the index of the first occurrence of the substring to be replaced and store it in the variable start. In the loop we replace all occurrences of the substring one after another. After each replacement we find the index of the next occurrence, store it in the variable start and repeat the loop. When there are no more occurrences of the substring in the text, start holds the value `std::string::npos`, which ends the loop. Among the other overloads of `replace()` we can single out the one that replaces a substring with a particular character repeated a given number of times:
Here we replace in the string text 6 characters starting at index 9 with 5 '*' characters.
If you need not just to replace characters but to remove them from the text, the `replace()` function can be used as well; in that case the characters to be removed are effectively replaced with an empty string:
However, C++ also provides a dedicated function for removing characters: erase(). As parameters it takes the starting index of the removal and the number of characters to remove:
Similarly, all occurrences of a particular substring can be removed:
The `erase()` function has a number of additional overloads. For instance, you can keep a certain number of characters at the beginning of the string and remove the rest:
If no parameters are passed to the function, it removes all characters, leaving an empty string:
Note that the C++20 standard added the function std::erase(), which removes all occurrences of a particular character from a string:
In this case we remove the character T from the string text.
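A compact sketch of insert(), append(), replace(), erase() and std::erase() (the strings are illustrative):

```
#include <iostream>
#include <string>

int main()
{
    std::string text {"insert into a text"};
    std::string str {"a string "};
    text.insert(7, str);                      // "insert a string into a text"
    text.append("!");                         // append to the end

    std::string lang {"Lang: Java"};
    lang.replace(6, 4, "C++");                // "Lang: C++"

    // replace every occurrence of "Tom" with "Bob"
    std::string hello {"Tom, Tom!"};
    for (size_t start {hello.find("Tom")}; start != std::string::npos;
         start = hello.find("Tom", start))
    {
        hello.replace(start, 3, "Bob");       // "Bob, Bob!"
    }

    std::string word {"Hello world"};
    word.erase(5, 6);                         // remove " world" -> "Hello"
    std::erase(word, 'l');                    // C++20: remove all 'l' -> "Heo"

    std::cout << text << "\n" << lang << "\n" << hello << "\n" << word << "\n";
}
```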
The C++ standard library also provides a number of built-in functions for working with characters. They mostly deal with character classification:
`isupper(c)`: checks whether c is an uppercase letter, by default from 'A' to 'Z'
`islower(c)`: checks whether c is a lowercase letter, by default from 'a' to 'z'
`isalpha(c)`: checks whether c is an alphabetic character
`isdigit(c)`: checks whether c is a digit from '0' to '9'
`isxdigit(c)`: checks whether c is a hexadecimal digit, from '0' to '9', from 'a' to 'f' or from 'A' to 'F'
`isalnum(c)`: checks whether c is an alphanumeric character; equivalent to `isalpha(c) || isdigit(c)`
`isspace(c)`: checks whether c is a space (' '), newline ('\n'), carriage return ('\r'), form feed ('\f'), horizontal ('\t') or vertical ('\v') tab
`isblank(c)`: checks whether c is a space (' ') or a tab character ('\t')
`ispunct(c)`: checks whether c is a punctuation character (one of the following: _ { } [ ] # ( ) < > % : ; ? * + - / ^ & | ~ ! " =)
`isprint(c)`: checks whether c is a printable character, which includes uppercase and lowercase letters, digits, punctuation characters and spaces
`iscntrl(c)`: checks whether c is a non-printing control character
`isgraph(c)`: checks whether c has a graphical representation
`tolower(c)`: converts the character c to lowercase
`toupper(c)`: converts the character c to uppercase
For example, let's check which case a character is in:
```
#include <cctype>
#include <iostream>

int main()
{
    unsigned char letter {'a'};
    if (std::isupper(letter))
        std::cout << "Uppercase letter" << std::endl;
    else if (std::islower(letter))
        std::cout << "Lowercase letter" << std::endl;
    else    // not an alphabetic character
        std::cout << "Undefined" << std::endl;
}
```
These functions are used very often when processing strings. Let's look at a simple task: we need to extract the phone number from some text (for example, from the string "Phone number: +1(234)456-78-90"):
Here we go through all the characters of the text and, if a character is a digit, we add it to the string phone.
Another task: we need to compare two strings regardless of case. On the one hand we could use the plain comparison operation ==, which can also compare strings. But if we try to compare two strings in which at least one character differs in case, they will not be equal:
The result of this program:
> strings are not equal
To do a case-insensitive comparison we could convert the characters to upper or lower case and compare those:
In this case we first compare the lengths of the strings, because if the lengths are not equal, the strings themselves cannot be equal.
Then, in a loop, we go through all the characters of both strings, convert them to lowercase and compare them. If at least one pair of corresponding characters is not equal, we reset the equality flag is_equal to false and leave the loop, since in that case the strings are already unequal and there is no point in comparing further characters.
The result of this program:
> HELLO and hello are equal
# Программа подсчета слов
В качестве практики работы со строками напишем небольшую программу для подсчета слов.
Вначале определяется переменная text, которая будет содержать введенную с консоли строку:
Определяем строку разделителей, такие как знаки пунктуации, пробелы, символ перевода строки, которые не являются словами:
Стоит отметить, что символ перевода строки здесь определен больше для демонстрации, поскольку при вводе выше через `std::getline` ввода строки будет завершен, когда мы нажмем на Enter.
Соответственно введенная строка никогда не будет содержать символ \n. Однако если мы изменим символ, до которого идет ввод строки через `std::getline` , или в качестве источника ввода будет
использовать, например, текст из файла, тогда символ перевода строки тоже будет играть определенную роль.
Поскольку количество слов может быть неопределенным, то для хранения слов определяем вектор:
Например, если в строке одни только символы из набора separators, тогда функция `find_first_not_of()` возвратит значение `std::string::npos` ,
что будет означать, что в тексте больше нет непунктационных знаков.
И если start указывает на действительный индекс начала слова, то находим индекс после последнего символа слова, который помещаем в переменную end:
После того, как мы нашли начальный индексы слова и его конец, то с помощью функци `substr()` получаем подстроку между
этими индексами, добавляем слово в вектор, переустанавливаем start на начальный индекс следующего слова и повторяем действия цикла:
В конце выводим количество найденных слов и сами слова.
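A compact sketch of the whole program under these assumptions (the separator set and variable names follow the description above; the original listing is not reproduced here):

```
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::cout << "Enter some text: ";
    std::string text;
    std::getline(std::cin, text);                       // read a line of text

    const std::string separators {" ,;:.!?'*()\n"};     // characters that delimit words
    std::vector<std::string> words;

    // index of the first character of the current word
    size_t start {text.find_first_not_of(separators)};
    while (start != std::string::npos)
    {
        // index just past the last character of the word
        size_t end {text.find_first_of(separators, start + 1)};
        if (end == std::string::npos) end = text.length();
        words.push_back(text.substr(start, end - start));
        start = text.find_first_not_of(separators, end + 1);
    }

    std::cout << "Text contains " << words.size() << " words:" << std::endl;
    for (const auto& word : words) std::cout << word << " ";
    std::cout << std::endl;
}
```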
Sample run of the program:
> Enter some text: When in Rome, do as the Romans do. Text contains 8 words: When in Rome do as the Romans do
# The std::string_view type
If a parameter has the type `std::string`, we can pass it either a `std::string` object or a string literal. For example:
Here the string is passed by reference, which avoids unnecessary copying of characters, and the `const` modifier protects the string from modification. However, even though the string
is passed by reference, when a string literal is passed to the parameter, converting the literal to `std::string` still copies the characters and performs extra memory allocations, which hurts performance. The std::string_view type is designed to solve this problem. It is defined in the `<string_view>` module and behaves like `const std::string` with one exception: `string_view` does not copy the characters of the string object regardless of what that object is - a `std::string`, a string literal or a character array.
Therefore it is better to use std::string_view as the parameter type rather than the constant reference `const std::string&`.
It does not matter that the parameter may be passed by value rather than by reference - the characters are still not copied. Internally std::string_view
only stores the length of the string and a pointer to the character sequence. Note, however, that working with `string_view` assumes the characters of the string will not be modified, since internally `string_view` represents a constant view regardless of whether the const modifier is applied to the parameter. Let's modify the program defined above by applying the `std::string_view` type:
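A minimal sketch of such a parameter (the print function name is an assumption):

```
#include <iostream>
#include <string>
#include <string_view>

// string_view only stores a pointer and a length, so no characters are copied
void print(std::string_view message)
{
    std::cout << message << std::endl;
}

int main()
{
    print("Hello world!");              // string literal
    std::string text {"Hello C++"};
    print(text);                        // std::string object
}
```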
Apart from that, the `std::string_view` type implements most of the same functions as `std::string`. For example, let's count the words in a string
represented by a `string_view`:
Essentially this is the same word-count program as in the previous article, except that the text is passed as a `std::string_view`. To extract the words
we call member functions of that type which are equivalent to the corresponding functions of `std::string`: `find_first_not_of()`, `find_first_of()`, `length()`. Note that there are also dedicated types for Unicode strings: `std::wstring_view`, `std::u8string_view`, `std::u16string_view` and `std::u32string_view`. Working with them is analogous to working with `std::string_view`.
# Move semantics
In C++ the values we use can be divided into two groups: lvalues and rvalues. An lvalue is a named value, for example variables, parameters, constants. An lvalue is associated with an address in memory that holds a value on a permanent basis, and we can assign a value to an lvalue. An rvalue is something that can only be assigned from, for example literals or the results of expressions. For example:
Here n is an lvalue and the number 5 is an rvalue. The names come from the fact that n is located to the left of the assignment operator (left value), while the assigned value, the number 5, is to its right (right value). Another example:
Here n and k are lvalues, while 5 and the expression `n + 7` are rvalues.
An rvalue reference can refer to the result of an expression even if that result is a temporary value. Binding to an rvalue reference extends the lifetime of such a temporary: its memory will not be released while the rvalue reference is in scope.
To declare an rvalue reference, two ampersands are placed after the type name:
In this case the result of the expression `n + 3` is stored in memory (on the stack), and the tempRef reference refers to this temporary value. When the main function finishes, the scope
of the tempRef variable ends and the temporary value it refers to is destroyed. Note that tempRef, although it holds a reference to an rvalue, is itself an
lvalue.
Note that we cannot bind an rvalue reference to an lvalue, for example:
Here n is an lvalue, and an rvalue reference can only be given an rvalue. Nevertheless, in some situations it may be necessary to convert an lvalue to an rvalue. For this the built-in std::move() function from the C++ standard library is used:
Here the value of the variable n is converted from the type int to the type int&& - an rvalue reference to int. In this case such a conversion has little practical value, but later, using the move constructor as an example, we will see the real benefit of this function.
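A sketch of that conversion (purely illustrative):

```
#include <iostream>
#include <utility>      // std::move

int main()
{
    int n {5};                      // n is an lvalue
    // int&& r1 = n;                // error: an rvalue reference cannot bind to an lvalue
    int&& r2 = std::move(n);        // std::move casts the lvalue n to an rvalue reference
    std::cout << r2 << std::endl;   // 5
}
```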
Also note that converting a constant value yields a constant rvalue reference:
An rvalue reference can be used as a function parameter.
To indicate that a parameter is an rvalue reference, two ampersands && are placed after the type. That is, here the print function takes an rvalue reference to a std::string value. When calling this function we can pass it an rvalue:
But we cannot pass an lvalue, so the following lines will NOT compile:
However, we could again apply the `std::move()` function and convert the variable to an rvalue:
When returning the value of a local variable or parameter, the compiler treats the value as an rvalue. And if the returned value is a variable, the compiler can also perform the NRVO optimization (named return value optimization):
Here the defaultMessage function returns an rvalue, so we can pass the result of this function into the print function. The NRVO optimization means that the compiler constructs the result object directly in the memory intended for the function's return value. After NRVO is applied, no memory is allocated for a separate automatic variable named message; only one std::string object is created while this program runs.
The same happens when the result is stored in an external variable:
Here, too, only one std::string object is created.
This was introductory information about rvalues and how to work with them. In the following articles we will see where this is applied in practice.
# Move constructor
A move constructor is an alternative to the copy constructor in situations where we need another copy of an object but copying the data is undesirable - instead of copying the data, it is simply moved from one object into the other.
Let's look at an example.
Here a Message class for a simple message is defined. The message text is stored in the character pointer text. To make the whole process of creating/copying/deleting data visible, the class also defines a static counter variable, which is incremented when each new object is created. The current counter value is assigned to the id variable, which represents the message number:
In the Message constructor we allocate memory for the message text passed via the parameter - a character array, copy the data into the allocated memory block and set the message number. For copying data, Message defines a copy constructor.
A Messenger class is also defined; it takes a message in its constructor and stores it in the message variable:
Using the sendMessage function, the messenger notionally sends the message.
In the main function we create a Messenger object, passing it a single message object, and then call the sendMessage function.
Note that the Messenger constructor here receives a Message object that is not bound to any variable. Let's look at the console output:
> Create message 1 Create message 2 Copy message 1 to 2 Delete message 1 Send message 2: Hello Word Delete message 2
We can see that two Message objects are created while the program runs, and the copy constructor is involved. Let's go through it step by step.
Executing the line
leads to the Message constructor being executed, in which the string "Hello Word" is copied into the text variable and the message number is set. This temporary Message object gets the number 1. Accordingly, the console shows
> Create message 1
Next, the created Message object is passed to the Messenger constructor:
Note the expression `message{mes}`. It takes the temporary Message object passed to the constructor and, using the copy constructor,
stores its copy in the message variable. The Message copy constructor in turn delegates to the regular constructor:
A Message object number 2 is created. The regular constructor allocates memory for the string. We end up with two copies, each holding a pointer to a different memory block. That is, we have two independent copies, and the console shows:
> Create message 2 Copy message 1 to 2 Delete message 1
Now the Messenger object holds the second message. The first, temporary message is deleted.
Then the sendMessage() function is called
Information about the message held by the messenger is printed to the console, and this message is deleted when the main function finishes.
> Send message 2: Hello Word Delete message 2
From the point of view of creating copies and allocating/managing/freeing memory there seems to be no problem. But we can see that the memory allocated for the first message was ultimately never used - in effect we allocated it for nothing. Wouldn't it be better if, instead of allocating a new memory block for the second message, we could simply hand it the memory already allocated for the first message? The first message is deleted anyway. Move constructors are used to solve exactly this problem.
A move constructor takes one parameter, which must be an rvalue reference to an object of the current type:
Here the `moved` parameter represents the object being moved from.
Let's change the code above by adding a move constructor to the Message class:
Compared with the previous code there are two changes. First of all, a move constructor has been added to the Message class:
Here the moved parameter represents the object being moved from. We do not call the regular constructor, as we do in the copy constructor, because we do not need to allocate memory. Instead we simply copy into the text variable the pointer value (the address of the allocated memory block) from the moved-from object moved:
In this way we avoid an unnecessary extra memory allocation. And so that the text pointer of the moved-from object moved no longer points to this memory block, and accordingly the destructor of moved does not release it, we assign the pointer the value `nullptr`.
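A self-contained sketch modelled on the Message class described above (the constructor bodies are simplified and only the parts relevant to moving are kept; the original listing is not reproduced here):

```
#include <cstring>
#include <iostream>
#include <utility>

class Message
{
public:
    Message(const char* data)
    {
        size = std::strlen(data) + 1;
        text = new char[size];          // allocate memory and copy the characters
        std::memcpy(text, data, size);
        id = ++counter;
        std::cout << "Create message " << id << std::endl;
    }
    // move constructor: takes over the already allocated buffer instead of copying it
    Message(Message&& moved)
    {
        id = ++counter;
        std::cout << "Create message " << id << std::endl;
        text = moved.text;              // take the pointer to the allocated memory
        size = moved.size;
        moved.text = nullptr;           // the moved-from object no longer owns the buffer
        std::cout << "Move message " << moved.id << " to " << id << std::endl;
    }
    ~Message()
    {
        std::cout << "Delete message " << id << std::endl;
        delete[] text;                  // deleting nullptr is a no-op for a moved-from object
    }
private:
    char* text {};
    unsigned size {};
    unsigned id {};
    static inline unsigned counter {};
};

int main()
{
    Message original {"Hello Word"};
    Message moved {std::move(original)};   // invokes the move constructor
}
```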
Another important point: in the Messenger constructor, when copying the object, we use the built-in std::move() function from the C++ standard library:
The std::move() function converts the passed value to an rvalue reference. Despite its name, this function does not move anything.
The expression
effectively leads to a call of the move constructor, into which the mes parameter is passed, and the result of the move constructor
is assigned to the message variable. Accordingly, we now get different console output:
> Create message 1 Create Message 2 Move message 1 to 2 Delete message 1 Send message 2: Hello Word Delete message 2
If the move constructor were not defined, the expression
would call the copy constructor.
So two objects are still created, but now we avoid the unnecessary memory allocation and copying of the value. As a rule, the objects that are moved are those that are no longer needed, as in the example above.
Note that the compiler itself generates a default move constructor, which moves the values of all non-static member variables. However, if we define a destructor, a copy constructor, or a copy or move assignment operator, the compiler does not generate the default move constructor.
Another illustrative example of using a move constructor is adding an object to a vector. The std::vector type is a dynamic list and defines the push_back() function for adding objects. This function comes in two versions:
The first version takes a constant reference and is intended primarily for passing lvalues. The second version takes an rvalue.
For example, let's take the Message class defined above and add one object to a vector:
Here a Message object is added to the vector as an rvalue. Internally, the added object is stored inside the vector, and when it is stored, the move constructor is called to move the data out of the rvalue. Console output of the program:
> Create Message 1 Create Message 2 Move Message 1 to 2 Delete Message 1 Delete Message 2
So again we avoid the overhead of needless data copying and move the data instead.
If we pass an lvalue to the `push_back()` function, the other overload is called, which takes a constant reference, and as a result the copy
constructor is invoked and a copy is created:
Console output of the program:
> Create message 1 Create message 2 Copy message 1 to 2 Delete message 2 Delete message 1
# Move assignment operator
The move assignment operator solves the same problems as the move constructor. Such an operator has the following form:
The object being moved from is passed as an rvalue reference parameter, and in the operator's body we perform the required actions.
Let's define and use a move assignment operator:
```
#include <iostream>
#include <iterator>     // std::size

// message class
class Message
{
public:
    // regular constructor
    Message(const char* data, unsigned count)
    {
        size = count;
        text = new char[size];              // allocate memory
        for (unsigned i {}; i < size; i++)  // copy the data
        {
            text[i] = data[i];
        }
        id = ++counter;
        std::cout << "Create Message " << id << std::endl;
    }
    // regular (copy) assignment operator
    Message& operator=(const Message& copy)
    {
        std::cout << "Copy assign message " << copy.id << " to " << id << std::endl;
        if (&copy != this)                  // guard against self-assignment
        {
            delete[] text;                  // free the current object's memory
            // copy the data from the source object into the current one
            size = copy.size;
            text = new char[size];          // allocate memory
            for (unsigned i {}; i < size; i++)  // copy the data
            {
                text[i] = copy.text[i];
            }
        }
        return *this;                       // return the current object
    }
    // move assignment operator
    Message& operator=(Message&& moved)
    {
        std::cout << "Move assign message " << moved.id << " to " << id << std::endl;
        if (&moved != this)                 // guard against self-assignment
        {
            delete[] text;                  // free the current object's memory
            text = moved.text;              // take the pointer from the moved-from object
            size = moved.size;
            moved.text = nullptr;           // reset the pointer in the moved-from object
            moved.size = 0;
        }
        return *this;                       // return the current object
    }
    // destructor
    ~Message()
    {
        std::cout << "Delete Message " << id << std::endl;
        delete[] text;                      // free the memory
    }
    char* getText() const { return text; }
    unsigned getSize() const { return size; }
    unsigned getId() const { return id; }
private:
    char* text {};                          // message text
    unsigned size {};                       // message size
    unsigned id {};                         // message number
    static inline unsigned counter {};      // static counter used to number the objects
};

int main()
{
    char text1[] {"Hello Word"};
    Message hello {text1, std::size(text1)};
    char text2[] {"Hi World!"};
    hello = Message{text2, std::size(text2)};   // assign an object
    std::cout << "Message " << hello.getId() << ": " << hello.getText() << std::endl;
}
```
In the assignment operator we receive the moved-from Message object, delete the previously allocated memory and copy the pointer value from the moved-from object:
In the main function we assign a Message object to the hello variable:
Note that, as with the move constructor, the assigned value is an rvalue - a temporary object in memory (
```
Message{text2, std::size(text2)};
```
),
which is no longer needed once the operation (the assignment) completes. This is exactly the ideal case for
applying the move assignment operator. Console output of this program:
> Create message 1 Create message 2 Move assign message 2 to 1 Delete message 2 Message 1: Hi World! Delete message 1
As we can see, the hello variable represents the Message object number 1. Note that if a class defines several assignment operators (the regular one and the move assignment operator), then by default the move assignment operator is applied to rvalues. When assigning an lvalue, the regular (non-moving) assignment operator is applied:
Note that we can apply the std::move() function to convert an lvalue to an rvalue:
Here the hi variable is converted to an rvalue, so the move assignment operator is invoked for the assignment.
Note that the compiler itself generates a default move assignment operator, which moves the values of all non-static member variables. However, if we define a destructor, a copy constructor, a move constructor or an assignment operator, the compiler does not generate the default move assignment operator.
Since the smart pointer std::unique_ptr uniquely points to a particular memory address, there cannot be two or more std::unique_ptr pointers pointing to the same memory block. That is why unique_ptr has no copy constructor and no copy assignment operator. Accordingly, attempting to use them results in compilation errors:
However, unique_ptr does have a move constructor and a move assignment operator, which can be used when the data needs to be moved from one pointer to another.
Note that after we move the value out of a pointer, we can no longer access the value through that pointer.
# The role of noexcept in moving
When defining move constructors and move assignment operators, it is recommended to declare them with the noexcept specifier if the constructor or assignment operator cannot throw an exception. First, let's see why this is needed.
The std::vector type is a dynamic list and defines the push_back() function for adding objects. This function comes in two versions:
So if we pass an rvalue to the function, the second version is invoked, which uses the move constructor to store the data inside the vector. But let's see what happens if we try to add several objects to the vector:
```
#include <iostream>
#include <vector>

// message class
class Message
{
public:
    // regular constructor
    Message(const char* data, unsigned count)
    {
        size = count;
        text = new char[size];              // allocate memory
        for (unsigned i {}; i < size; i++)  // copy the data
        {
            text[i] = data[i];
        }
        id = ++counter;
        std::cout << "Create Message " << id << std::endl;
    }
    // copy constructor: delegates to the regular constructor
    Message(const Message& copy) : Message{copy.getText(), copy.size}
    {
        std::cout << "Copy Message " << copy.id << " to " << id << std::endl;
    }
    // move constructor
    Message(Message&& moved)
    {
        id = ++counter;
        std::cout << "Create Message " << id << std::endl;
        text = moved.text;      // move the message text
        size = moved.size;      // copy the message size
        moved.text = nullptr;
        std::cout << "Move Message " << moved.id << " to " << id << std::endl;
    }
    // destructor
    ~Message()
    {
        std::cout << "Delete Message " << id << std::endl;
        delete[] text;          // free the memory
    }
    char* getText() const { return text; }
    unsigned getSize() const { return size; }
    unsigned getId() const { return id; }
private:
    char* text {};                          // message text
    unsigned size {};                       // message size
    unsigned id {};                         // message number
    static inline unsigned counter {};      // static counter used to number the objects
};

int main()
{
    std::vector<Message> messages {};
    messages.push_back(Message{"Hello world", 12});
    messages.push_back(Message{"Bye world", 10});
}
```
In the Message constructor we allocate dynamic memory for the message characters and set the message size and number, and in the destructor we free the memory. For copying data, Message defines a copy constructor.
Message also defines a move constructor:
Here we do not call the regular constructor, as in the copy constructor, because we do not need to allocate memory. Instead we simply copy into the text variable the pointer value (the address of the allocated memory block) from the moved-from object moved: `text = moved.text`. And so that the text pointer of the moved-from object no longer points to this memory block, and the destructor of
moved does not release it, we assign the pointer the value `nullptr`.
In the main function, two Message objects, which are rvalues, are added to the vector:
Let's look at the console output:
> Create Message 1 Create Message 2 Move Message 1 to 2 Delete Message 1 Create Message 3 Create Message 4 Move Message 3 to 4 Create Message 5 Copy Message 2 to 5 Delete Message 2 Delete Message 3 Delete Message 5 Delete Message 4
So we add 2 Message objects to the vector, yet 5 Message objects end up being created. Let's go through it step by step. First we add one Message object:
As a result, one Message object is created, which is an rvalue. Its data is moved, via the move constructor, into another Message object stored inside the vector.
> Create Message 1 Create Message 2 Move Message 1 to 2 Delete Message 1
A vector is a dynamic list to which we can add values, but internally, when new elements are added, it allocates a new block of dynamic memory large enough to hold both the existing elements and the new ones. This forces the vector to copy the data from the old memory block into the new one. So when the line
is executed, memory for two Message objects is allocated. Again an rvalue object is created and its data is moved into a Message object inside the vector. But in addition, the first added object is copied from the previously allocated memory block into the new one, and the copy constructor is used for this copy. That is what we see in the console output.
> Create Message 3 Create Message 4 Move Message 3 to 4 Create Message 5 Copy Message 2 to 5 Delete Message 2 Delete Message 3
Why is the copy constructor used here rather than the move constructor, which would be preferable in this case? The reason is that the vector cannot be sure that the move constructor will not throw an exception, and in that case it falls back to the copy constructor.
But in our case there is nothing in the move constructor that could throw an exception. So let's declare the constructor with the noexcept keyword:
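A sketch of the changed constructor from the class above (only the signature differs):

```
// declaring the move constructor noexcept lets std::vector use it during reallocation
Message(Message&& moved) noexcept
{
    id = ++counter;
    std::cout << "Create Message " << id << std::endl;
    text = moved.text;      // move the message text
    size = moved.size;      // copy the message size
    moved.text = nullptr;
    std::cout << "Move Message " << moved.id << " to " << id << std::endl;
}
```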
If we now recompile and run the program, we will see that the move constructor is used instead of the copy constructor:
> Create Message 1 Create Message 2 Move Message 1 to 2 Delete Message 1 Create Message 3 Create Message 4 Move Message 3 to 4 Create Message 5 Move Message 2 to 5 Delete Message 2 Delete Message 3 Delete Message 5 Delete Message 4
This copying mechanism is used not only by the vector type, so move constructors and move assignment operators should be declared with the noexcept keyword.
# Function objects and lambda expressions
A function object, or functor, is an object that can be called like a function. The () operator is used for this. Let's look at a simple example:
Here a Print class is defined. It defines the () operator function, which takes one parameter - a string - and returns nothing. Inside the operator function we print the passed string to the console.
How many parameters the operator function takes, what types those parameters have and what result it returns - all of this we decide ourselves based on the task at hand. Note, however, that this operator can only be defined as a member function of a class.
Then we can define an object of this class and call it like a function, passing it an argument:
```
#include <iostream>

class Sum
{
public:
    int operator()(int x, int y) const { return x + y; }
};

int main()
{
    Sum sum;                    // define a function object
    int result {sum(2, 3)};     // call the function object
    std::cout << result << std::endl;       // 5
    std::cout << sum(5, 3) << std::endl;    // 8
    std::cout << sum(12, 13) << std::endl;  // 25
}
```
Here a Sum class is defined, in which the operator function takes two numbers and returns their sum. Since the operator function returns an int value, we can assign this result to an int variable and generally use it as an int.
Such function objects can also hold some state:
Here the Id function object returns the value of the private id variable after incrementing it.
Lambda expressions provide a more compact syntax for defining function objects. The formal syntax of a lambda expression:
A lambda expression starts with square brackets. Then, as in an ordinary function, the parameters - their types and names - are listed in parentheses. Starting with the C++14 standard, default values can also be specified for parameters. After the parameter list, as in an ordinary function, the actions performed by the lambda expression are placed in curly braces.
For example, the simplest lambda expression:
Here the lambda expression has no parameters, so empty parentheses are given. Inside the lambda expression the string "Hello" is simply printed to the console.
Every time the compiler encounters a lambda expression, it generates a new class type that represents a function object. For the example above, the generated class might look roughly like this:
Such a class has an arbitrary but unique generated name. The actions of the lambda expression are defined as the () operator, and the word auto is used instead of the return type: the compiler deduces the return type itself, which may be `void` or some specific type.
For lambdas without parameters you can omit the empty parameter list (). That is, a lambda expression of the form []() {...} can be further shortened to [] {...}:
We can call a lambda expression immediately at the point of its definition by putting parentheses with argument values right after the body of the expression:
Since the lambda expressions here have no parameters, empty parentheses are placed right after the lambda definition to call it.
This causes the actions of the lambda expression to be executed, and the string "Hello" is printed to the console.
A lambda expression can be assigned to a variable:
Here the hello variable holds a lambda expression as its value. So that the compiler deduces the variable's type automatically, it is declared with the auto keyword.
Then, through the variable name, we can call the lambda expression like an ordinary function:
Now let's define a lambda expression with parameters:
Here the lambda expression takes one parameter of type `const std::string&`, that is, a string, which is printed to the console. This lambda expression is assigned to the print variable.
Calling print like an ordinary function, we need to pass it a string:
We can also call the lambda expression right at its definition, passing it a string.
A lambda expression can return an arbitrary value. In this case, as in an ordinary function, the return operator is used:
In this case the lambda expression returns the sum of its parameters as an int value. Accordingly, we can use the result of the expression as a value of type int.
Note that by default the compiler itself determines the type of the value returned from the lambda. However, we can also specify the return type explicitly:
To set the return type, an arrow followed by the return type is placed after the parameter list. So in this case a value of type double is returned:
That is, the sum of the numbers, which is an int by default, will be converted to a double value.
A lambda expression can be passed as the value of a function parameter that is a function pointer:
Here a do_operation function is defined that takes two numbers and a function pointer - an operation on these numbers. In place of the function pointer we can pass a lambda expression that matches this pointer. This is possible because the compiler can automatically generate for the lambda expression a conversion operator to the equivalent function pointer type. Note that the conversion operator is not generated if the lambda expression accesses external variables; we will look at such a situation later.
It is also possible to define the lambda expression directly at the point of use, which can be handy if the lambda is not going to be used anywhere else:
A generic lambda is a lambda expression in which the type of at least one parameter is specified as auto, auto& or const auto&. This avoids tying the parameters to a specific type. For example:
In this case a lambda expression is defined that takes two parameters and returns their sum. It is assigned to the add variable:
That is, at the time of writing we do not know what types the parameters will have. The compiler deduces the concrete types when the lambda expression is called, based on the values passed to it:
So in this case two int numbers are passed, and the result is their sum as an int value.
Another example: let's define a generic lambda expression that prints an arbitrary value to the console:
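A possible sketch of such a lambda (the print name is an assumption):

```
#include <iostream>
#include <string>

int main()
{
    // a generic lambda: the parameter type is deduced at each call site
    auto print = [](const auto& value) { std::cout << value << std::endl; };

    print(42);                      // int
    print(3.14);                    // double
    print(std::string{"Hello"});    // std::string
}
```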
# Capturing outer values in lambda expressions
A lambda expression can "capture" variables defined outside the expression in the enclosing scope. The square brackets that the expression starts with are used for this.
If all outer variables from the scope where the lambda expression is defined should be captured by value, the equals sign = is put in the square brackets. In this case the values of the outer variables cannot be changed inside the lambda expression:
Thanks to the `[=]` expression, the lambda can obtain the outer variable n and use its value.
For such a lambda expression the compiler will generate a class along the lines of:
A couple of points are worth noting here. First, the value of the outer variable is passed through a parameter that is a constant reference and stored in a private member variable. Second, since all the actions of the lambda expression are performed in the () operator, which is declared const, we cannot change the value of the private variable. That is why outer variables captured by value can be read but not modified.
Note that although we cannot change the outer variable itself, we can capture by value a pointer to the outer variable and change the variable's value through this pointer inside the lambda expression.
If outer variables should be captured by reference, the ampersand & is put in the square brackets. In this case the lambda expression can change the values of these variables:
Thanks to the `[&]` expression, the increment lambda can obtain the outer variable n by reference and change its value. In this case we increment the variable n by one.
And from the console output we can see that n inside the lambda expression and the outer variable n are effectively the same value:
> n inside lambda: 11 n outside lambda: 11
For such a lambda expression the compiler will generate a class along the lines of:
Although the () operator, as in the previous case, is declared const, the outer variable is stored as a reference, so we can change its value through this reference.
In the previous case we were able to obtain the outer variable and change its value. But sometimes it is necessary to change the copy of the variable used by the lambda expression rather than the outer variable itself. In this case we can put the mutable keyword after the parameter list:
```
#include <iostream>

int main()
{
    int n {10};
    auto increment = [=]() mutable
    {
        n++;    // increments the lambda's own copy of n
        std::cout << "n inside lambda: " << n << std::endl;
    };
    increment();
    std::cout << "n outside lambda: " << n << std::endl;
}
```
Here the outer variable n is captured by value and we cannot change it, but we can change the copy of this value used inside the lambda, which the console output shows us:
> n inside lambda: 11 n outside lambda: 10
By default the `[=]` / `[&]` expressions capture all variables from the surrounding scope. It is also possible to capture only specific variables. To capture an outer variable by reference, the
expression `[&variable_name]` is used:
If an outer variable should be captured by value, we simply put its name in the square brackets:
If several variables need to be captured, they are listed separated by commas. If a variable is captured by reference, an ampersand is placed before its name:
Everything can be captured by value and only some variables by reference:
Or, conversely, everything by reference and only some variables by value:
To access class members - variables and functions, whether private or public - the [this] expression is used:
In the print function of the Message class we print to the console the message text stored in the text variable. For the output, the external printer function is used, which adds some decorative formatting and executes the function passed to it as a parameter. In this case that function is a lambda expression which accesses the class's text variable.
Together with the this pointer you can also capture other variables from the surrounding scope. To do this, `this` can be combined with & or = and with
captures of individual variables, for example `[=, this]`, `[this, &n]` and `[x, this, &n]`.
# The std::function<> template
When using function objects and lambda expressions, keep in mind that they are not equivalent to function pointers. To simplify using function pointers, function objects and lambda expressions together, the `<functional>` module defines the std::function<> template. `std::function` represents anything that can be called like a function: it can be a function pointer, a function object or a lambda expression. Let's see how we can use it:
To use the `std::function` type, we first need to include the `<functional>` header.
In the main function we define a `std::function` variable:
The operation variable represents some function that takes two int parameters and also returns an int value.
We can assign a function object to this variable:
An ordinary function:
Or a lambda expression:
To invoke any of these, we call operation like a function, passing some values for its parameters:
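A sketch pulling those options together (the sum and Multiply names are illustrative):

```
#include <functional>
#include <iostream>

int sum(int x, int y) { return x + y; }     // an ordinary function

class Multiply
{
public:
    int operator()(int x, int y) const { return x * y; }   // a function object
};

int main()
{
    std::function<int(int, int)> operation;     // takes two ints, returns an int

    operation = sum;                            // ordinary function
    std::cout << operation(3, 4) << std::endl;  // 7

    operation = Multiply{};                     // function object
    std::cout << operation(3, 4) << std::endl;  // 12

    operation = [](int x, int y) { return x - y; };    // lambda expression
    std::cout << operation(3, 4) << std::endl;  // -1
}
```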
Another example:
Here the action variable represents some function that takes a string and returns nothing -
```
std::function<void(std::string)>
```
# Algorithms and views
Algorithms are special functions defined in the <algorithm> module that operate on containers of elements. Let's go through the most common ones.
The std::min_element and std::max_element functions return the minimum and maximum elements, respectively, of some range. The collection of elements can be a container or an array. The range of elements is given by the begin and end iterators of the container/array.
Here we find the minimum and maximum elements of the numbers vector. In both cases the range is the whole container - from the iterator `begin(numbers)` to the iterator end(numbers). The result of each function is also an iterator, so to obtain the value (the maximum/minimum)
we apply the dereference operation:
```
*std::min_element(...)
```
. Console output:
> Min: 1 Max: 8
Since the search range does not have to be the whole container and can be just a part of it bounded by iterators, we can find the maximum/minimum value on some specific range, for example from the 2nd to the next-to-last element:
To obtain the min/max values you can also use the std::minmax_element() function, which likewise uses iterators to define the search range, but returns the result as an object of type
```
std::pair<iterator, iterator>
```
:
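A sketch of all three functions (the contents of the numbers vector are an assumption consistent with the output above):

```
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> numbers {3, 5, 1, 8, 4, 2, 7, 6};

    // iterators to the smallest and largest elements of the whole vector
    auto min_iter = std::min_element(begin(numbers), end(numbers));
    auto max_iter = std::max_element(begin(numbers), end(numbers));
    std::cout << "Min: " << *min_iter << "\tMax: " << *max_iter << std::endl;

    // both at once as a pair of iterators
    auto [min_it, max_it] = std::minmax_element(begin(numbers), end(numbers));
    std::cout << "Min: " << *min_it << "\tMax: " << *max_it << std::endl;
}
```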
# Searching for elements
The std::find() function searches a given range of elements for a particular value. The == comparison operation is used to compare values. Let's consider one version of the function:
The function is given the iterators to the start and end of the range and the value to find. The result of the function is an iterator to the found value. If the value is not found, the returned iterator points to the end of the range. For example, let's try to find some numbers in a vector of numbers.
In this case the search is moved into a separate function - findValue. In it we look for a particular number in the vector of numbers. Iterators to the start and end of the vector are passed into the `std::find()` function as the start and end of the search range:
If the number is not found, the returned iterator equals the iterator to the end of the vector:
If the number is found, then by subtracting the iterator to the start of the vector from the returned iterator we can get the index of the number in the vector:
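A sketch of such a findValue function (the vector contents and the searched values are assumptions):

```
#include <algorithm>
#include <iostream>
#include <vector>

// looks for value in numbers and reports its index
void findValue(const std::vector<int>& numbers, int value)
{
    auto iter = std::find(begin(numbers), end(numbers), value);
    if (iter == end(numbers))
        std::cout << value << " not found" << std::endl;
    else
        std::cout << value << " found at index " << iter - begin(numbers) << std::endl;
}

int main()
{
    std::vector<int> numbers {3, 5, 1, 8, 4};
    findValue(numbers, 8);  // 8 found at index 3
    findValue(numbers, 7);  // 7 not found
}
```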
A number of additional functions return an iterator to a value depending on some condition. The std::find_if() function returns an iterator to the first element of the range that satisfies some condition. The std::find_if_not() function, on the contrary, returns an iterator to the first element of the range that does NOT satisfy the condition. Let's look at the `std::find_if()` function as an example:
The `std::find_if()` function also receives iterators to the start and end of the search range, and its third parameter is the condition that the values must satisfy:
The condition is a function that takes a value of an arbitrary type and returns a `bool`: `true` if the value matches the condition and `false` if it does not. In fact the condition can be described by the function pointer `bool(*condition)(T)`, where T is an arbitrary type. For the test, three condition functions are defined here: `is_even()` (checks whether a number is even), `is_positive()` (whether a number is positive) and `is_greater10()` (whether a number is greater than 10). The `std::find_if()` function returns an iterator to the first found value that satisfies the condition. If no such value is found, the iterator points to the end of the range. `std::find_if_not()` works analogously.
# Copying elements
The std::copy_if() function is used to copy values that match some condition from one range into another. This function has several versions; let's consider one of them:
The first two parameters - start_source_iterator and end_source_iterator - are the iterators to the start and end of the range of values to copy from. The third parameter - start_dest_iterator - is the iterator to the start of the range into which the copied values should be inserted.
The last parameter - condition - is the condition. The condition is a function that takes a value of an arbitrary type and returns a bool: `true` if the value matches the condition and `false` if it does not.
The result of the function is an iterator pointing just past the last element copied from the source range.
Let's look at the function in action:
In this case we copy from the numbers vector into two other vectors - even_numbers and pos_numbers. Note that a length is specified for both destination vectors:
In theory all elements of the numbers vector could match the condition, so both vectors are given a length equal to the length of the numbers vector.
The conditions are defined by separate functions - is_even (determines whether a number is even) and is_positive (whether a number is positive).
First we copy from the numbers vector into the even_numbers vector all the numbers that are even:
Here the range of values to search is defined by the iterators to the start and end of the numbers vector. The copied elements are inserted starting from the beginning of the even_numbers vector (begin(even_numbers)), and the is_even function serves as the condition.
In effect, executing the function is equivalent to the following:
Thus all the even numbers from numbers are copied into even_numbers. But there is a problem: not all elements of the source vector numbers are necessarily even, while the initial size of even_numbers equals the size of numbers. To keep in even_numbers only the elements that were actually copied, we truncate the even_numbers vector to the position just past the last copied element, which the end_even_iter iterator points to.
In the same way we obtain the positive numbers, only with the is_positive function as the condition:
Console output of the program:
> -4 -2 0 2 4 1 2 3 4 5
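A compact sketch that pulls the above steps together (the contents of numbers are an assumption consistent with the output shown):

```
#include <algorithm>
#include <iostream>
#include <vector>

bool is_even(int n) { return n % 2 == 0; }      // is the number even?
bool is_positive(int n) { return n > 0; }       // is the number positive?

void print(const std::vector<int>& numbers)
{
    for (int n : numbers) std::cout << n << " ";
    std::cout << std::endl;
}

int main()
{
    std::vector<int> numbers {-4, -3, -2, -1, 0, 1, 2, 3, 4, 5};

    // destination vectors are sized for the worst case: every element matches
    std::vector<int> even_numbers(numbers.size());
    std::vector<int> pos_numbers(numbers.size());

    // copy the even numbers, then shrink the destination to what was actually copied
    auto end_even_iter = std::copy_if(begin(numbers), end(numbers),
                                      begin(even_numbers), is_even);
    even_numbers.erase(end_even_iter, end(even_numbers));

    // same for the positive numbers
    auto end_pos_iter = std::copy_if(begin(numbers), end(numbers),
                                     begin(pos_numbers), is_positive);
    pos_numbers.erase(end_pos_iter, end(pos_numbers));

    print(even_numbers);    // -4 -2 0 2 4
    print(pos_numbers);     // 1 2 3 4 5
}
```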
The remove-erase idiom is meant to solve the problem of removing elements from a container, since this can be a non-trivial, error-prone task. The idiom consists of applying the remove() or remove_if() algorithm followed by a call to the container's erase() function.
When the remove() and remove_if() algorithms are applied, the elements that should be kept are moved to the front of the container, and the `remove()`/`remove_if()` functions return an iterator to the first element to be removed. This iterator is then passed to the `erase()` function, which actually removes the elements.
Implementation of the idiom:
Here, as an example, we remove all negative numbers from a vector. To do this we first call the `std::remove_if()` function:
As the first and second parameters it takes iterators to the start and end of the range from which numbers should be removed; here the range is defined by iterators to the start and end of the vector. The third parameter is the condition. The condition must be a function that takes a value and returns a bool: `true` if the value matches the condition and `false` if it does not. In this case we pass the function
is_negative, which determines whether a number is negative - that is, we remove the negative numbers. The print function is used to output the vector to the console. After `remove_if` is executed, the console shows
> 0 1 2 3 4 5 1 2 3 4 5
As a result, `remove_if()` simply moves all the elements that should be kept (0 and the positive numbers) to the front of the range. Some of these numbers remain at the end of the vector,
but that does not matter, because that part of the vector will be erased. The function itself returns the iterator iter, which points to the first element to be removed.
Next we erase all elements starting from this iterator:
Now the console output will be:
> 0 1 2 3 4 5
Since safely removing elements from containers is quite a common task, starting with the C++20 standard the std::erase() and std::erase_if() functions were added to the language.
The std::erase() function removes a particular value from a container (it does not apply to std::set and std::map):
The std::erase_if() function removes from a container the values that match a condition:
So let's rewrite the previous example using the `std::erase_if` function:
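A minimal sketch of that rewrite (the vector contents are an assumption consistent with the earlier output):

```
#include <iostream>
#include <vector>

bool is_negative(int n) { return n < 0; }   // is the number negative?

int main()
{
    std::vector<int> numbers {-4, -3, -2, -1, 0, 1, 2, 3, 4, 5};

    std::erase_if(numbers, is_negative);    // removes every negative number (C++20)

    for (int n : numbers) std::cout << n << " ";
    std::cout << std::endl;                 // 0 1 2 3 4 5
}
```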
# Sorting
The std::sort() function from the `<algorithm>` header is intended for sorting a range of elements.
The first two parameters of the function are the begin and end iterators of the range to sort. The optional third parameter is a comparison function, or comparator.
If no comparator is given, the elements are sorted in ascending order (for example, strings are sorted in lexicographical order).
For example, let's sort a vector of strings with the default ordering:
In this case we sort the whole people vector, so the iterators to the start and end of the vector are passed to the std::sort function. And in this case we get the following result:
> <NAME> Sam Tom
That is, the strings are sorted in lexicographical order (by the alphabetical order of their initial letters). Now let's apply a comparator, for example sorting the vector by string length:
Here the compare function is defined as the comparator. A comparator must take two values and return a bool: if the first value should come before the second, it returns `true`. In the case above the compare function takes two strings and returns `true` if the length of the first string is less than that of the second. Accordingly, the console
output is now:
> <NAME> Sam Kate Alice
That is, the strings are sorted by length in ascending order.
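A sketch of sorting with such a comparator (the names in the vector are assumptions based on the output shown):

```
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// comparator: returns true if the first string should come before the second
bool compare(const std::string& a, const std::string& b)
{
    return a.length() < b.length();
}

int main()
{
    std::vector<std::string> people {"Tom", "Alice", "Sam", "Kate"};

    std::sort(begin(people), end(people), compare);   // sort by string length

    for (const auto& person : people) std::cout << person << " ";
    std::cout << std::endl;     // e.g. Tom Sam Kate Alice (equal-length strings may swap)
}
```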
Starting with the C++20 standard, a simpler approach can also be used to sort container elements - the std::ranges::sort() function, which takes the container to sort as a parameter:
By default the data is sorted in ascending order (lexicographically in the case of strings).
> <NAME> Kate Sam Tom
A comparator function that defines how values are compared can also be passed as the second parameter:
Using the compare comparator function, we order the elements by increasing length:
> <NAME> Sam Kate Alice
The `std::ranges::sort` function supports projecting the data for the comparator function. For example:
Here the sorted vector holds Person objects, each storing a name in the string field name. We could define another comparator function for comparing Person data, but in this case we can also reuse the previously defined comparator for comparing strings. To do that, a projection function from Person to std::string - the personToString function, which returns the name of the Person object - is passed to `std::ranges::sort()` as the
third parameter. It is this data that is passed to the
comparator function when two objects are compared.
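A sketch of sorting with a projection (a simplified Person with a public name field is used here for brevity; the article's class keeps the field private behind a getter):

```
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Person
{
    std::string name;
};

// comparator over strings
bool compare(const std::string& a, const std::string& b) { return a.length() < b.length(); }

// projection from Person to std::string
std::string personToString(const Person& p) { return p.name; }

int main()
{
    std::vector<Person> people {{"Tom"}, {"Alice"}, {"Sam"}, {"Kate"}};

    // the projection is applied to each element before it is passed to the comparator
    std::ranges::sort(people, compare, personToString);

    for (const auto& p : people) std::cout << p.name << " ";
    std::cout << std::endl;     // e.g. Tom Sam Kate Alice
}
```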
# Views. Filtering
A view is a lightweight range that refers to elements it does not own. A view is usually based on another range and provides a particular way of working with that range, for example transformation or filtering. That is, there is a container that actually stores the elements, and there is a view that refers to those elements. Views are more efficient in terms of performance than ordinary iterator-based ranges: because a view does not own the elements it refers to, it does not need to make a copy of them, so views can be created without any loss of performance.
Range adapters are used to create views. There are two ways to define a view.
Using the view constructor std::ranges::xxx_view, where `xxx` is the name of the view.
The first parameter is the container for which the view is created; the second parameter defines the function used to build the view.
Applying the | operation to a container and the function std::views::xxx, where `xxx` is the name of the view.
Here range again represents a container, and the `std::views::xxx()` function takes as a parameter the function used to build the view.
Let's look at the most common operation - filtering.
The std::ranges::filter_view view is used for filtering. As the second parameter, the view constructor takes a condition: a function that takes an object and returns a bool - `true` if the object matches the condition and `false` if it does not.
For example, let's filter the elements of a vector:
```
#include <iostream>
#include <ranges>
#include <string>
#include <vector>

class Person
{
public:
    Person(std::string name, unsigned age) : name{name}, age{age} {}
    std::string getName() const { return name; }
    unsigned getAge() const { return age; }
    void print() const { std::cout << name << "\t" << age << std::endl; }
private:
    std::string name;
    unsigned age;
};

bool ageMoreThan33(const Person& person) { return person.getAge() > 33; }

int main()
{
    std::vector<Person> people {Person{"Tom", 38}, Person{"Kate", 31}, Person{"Bob", 42},
                                Person{"Alice", 34}, Person{"Sam", 25}};
    auto filter_view = std::ranges::filter_view{people, ageMoreThan33};
    for (const auto& person : filter_view)
    {
        person.print();
    }
}
```
Here the vector stores Person objects, each with a name and an age. In this case we take from the vector all the Person objects whose age is greater than 33:
The condition is factored out into the separate function `ageMoreThan33()`, which returns true if the age is greater than 33. The condition could also be defined as a lambda expression.
As a result of calling the std::ranges::filter_view constructor we get a view containing the Person objects that match the condition. The resulting view can be traversed with a for loop to obtain each of its elements (in this case Person objects). Console output of the program:
> Tom 38 Bob 42 Alice 34
In the same way, a lambda expression could be passed instead of an external function:
The second way to create a filtering view is to use the std::views::filter() function, to which the filtering function is passed:
```
#include <iostream>
#include <ranges>
#include <string>
#include <vector>

class Person
{
public:
    Person(std::string name, unsigned age) : name{name}, age{age} {}
    std::string getName() const { return name; }
    unsigned getAge() const { return age; }
    void print() const { std::cout << name << "\t" << age << std::endl; }
private:
    std::string name;
    unsigned age;
};

int main()
{
    std::vector<Person> people {Person{"Tom", 38}, Person{"Kate", 31}, Person{"Bob", 42},
                                Person{"Alice", 34}, Person{"Sam", 25}};
    auto ageMoreThan33 = [](const Person& p) { return p.getAge() > 33; };
    auto filter_view = people | std::views::filter(ageMoreThan33);
    for (const auto& person : filter_view)
    {
        person.print();
    }
}
```
To create the filtering view we use the | operation, whose operands are the data container and the std::views::filter function:
The result is the same as when using the view constructor.
# Data projection
The transform view allows data to be converted from one type to another. To create the view you can use the std::ranges::transform_view constructor, which takes the container whose data should be transformed and a transformation function:
```
#include <iostream>
#include <ranges>
#include <string>
#include <vector>

class Person
{
public:
    Person(std::string name, unsigned age) : name{name}, age{age} {}
    std::string getName() const { return name; }
    unsigned getAge() const { return age; }
    void print() const { std::cout << name << "\t" << age << std::endl; }
private:
    std::string name;
    unsigned age;
};

int main()
{
    std::vector<Person> people {Person{"Tom", 38}, Person{"Kate", 31}, Person{"Bob", 42},
                                Person{"Alice", 34}, Person{"Sam", 25}};
    auto personToString = [](const Person& p) { return p.getName(); };
    auto view = std::ranges::transform_view{people, personToString};
    for (const auto& person : view)
    {
        std::cout << person << std::endl;
    }
}
```
The transformation function receives an object of the type whose elements are stored in the container (in our case Person objects). In this case the transformation function is defined as a lambda expression that converts a Person object into a string by simply returning the user's name:
We pass the container and the transformation function to the view constructor:
As a result we get a view that contains only strings - the users' names. Console output:
> Tom Kate Bob Alice Sam
Alternatively, you can use the std::views::transform() function, which takes the transformation function:
```
#include <iostream>
#include <ranges>
#include <string>
#include <vector>

class Person
{
public:
    Person(std::string name, unsigned age) : name{name}, age{age} {}
    std::string getName() const { return name; }
    unsigned getAge() const { return age; }
    void print() const { std::cout << name << "\t" << age << std::endl; }
private:
    std::string name;
    unsigned age;
};

int main()
{
    std::vector<Person> people {Person{"Tom", 38}, Person{"Kate", 31}, Person{"Bob", 42},
                                Person{"Alice", 34}, Person{"Sam", 25}};
    auto personToString = [](const Person& p) { return p.getName(); };
    auto view = people | std::views::transform(personToString);
    for (const auto& person : view)
    {
        std::cout << person << std::endl;
    }
}
```
To create the view we use the | operation, applied to the container and the function
```
std::views::transform
```
# Chaining views
Access to the elements of a view is lazy: the view is actually evaluated only when its elements are accessed. This allows views to be combined and composed without loss of performance.
For example, suppose we need to take 3 elements starting from the third element. We can split the task into two steps:
Skip 2 elements:
Take 3 elements:
The | operation lets us combine these operations and create a single view:
Console output of the program:
> <NAME>
In the same way you can layer a chain of more operations and use other kinds of views. For example, let's filter and transform the data:
```
#include <iostream>
#include <ranges>
#include <string>
#include <vector>

class Person
{
public:
    Person(std::string name, unsigned age) : name{name}, age{age} {}
    std::string getName() const { return name; }
    unsigned getAge() const { return age; }
    void print() const { std::cout << name << "\t" << age << std::endl; }
private:
    std::string name;
    unsigned age;
};

int main()
{
    std::vector<Person> people {Person{"Tom", 38}, Person{"Kate", 31}, Person{"Bob", 42},
                                Person{"Alice", 34}, Person{"Sam", 25}};
    // filter: Person objects with age > 33
    auto ageMoreThan33 = [](const Person& p) { return p.getAge() > 33; };
    // transformation function from Person to string
    auto personToString = [](const Person& p) { return p.getName(); };
    auto view = people | std::views::filter(ageMoreThan33) | std::views::transform(personToString);
    for (const auto& person : view)
    {
        std::cout << person << std::endl;
    }
}
```
In this case we first select all the Person objects whose age field is greater than 33, and then the selected objects are transformed into strings - for each Person object the value of the name field is returned:
Console output of the program:
> <NAME>
# Template constraints
Constraints on templates (both function and class templates) allow you to restrict the set of types that can be used as template parameters. Adding constraints to template parameters solves the following problems:
The template header immediately shows which template arguments are allowed and which are not.
The template is instantiated only if the template arguments satisfy all the constraints.
Any violation of a template constraint produces an error message that is much closer to the real cause of the problem, namely the attempt to use the template with invalid arguments.
Starting with the C++20 standard, the requires operator was added to the language, which allows constraints to be placed on template parameters.
Constraints are conditional expressions that return a bool value: if the type parameter satisfies the condition, true is returned. Each constraint prescribes one or more requirements for one or more template parameters.
For example, suppose we want to define a function that can add numbers:
Here a sum function template is defined that takes values of type T and returns their sum, also as a value of type T. A constraint is placed on the parameter T after the word `requires`.
The constraint is defined using the built-in `std::is_same` structure from the C++ standard library. This structure is in turn parameterized by two
types, and its value member returns `true` if both types are the same. That is, the expression
```
std::is_same<T, int>::value
```
returns `true`
if T is int. Similarly we add another constraint on the type -
```
std::is_same<T, double>::value
```
. And using the || operation we combine the two constraints.
That is, T can be either int or double.
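A sketch of such a constrained sum function (built from the constraint described above):

```
#include <iostream>
#include <type_traits>

// T must be either int or double
template <typename T>
requires std::is_same<T, int>::value || std::is_same<T, double>::value
T sum(T a, T b)
{
    return a + b;
}

int main()
{
    std::cout << sum(3, 4) << std::endl;        // 7   (int)
    std::cout << sum(2.5, 4.1) << std::endl;    // 6.6 (double)
    // std::cout << sum(3L, 4L) << std::endl;   // error: long does not satisfy the constraint
}
```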
Далее мы можем передавать в функцию sum() значения типов, которые удовлетворяют этим ограничениям:
Значения других же типов мы передать не можем. Так, в примере выше закомментрирована строка, где в функцию sum() передаются значения типа `long` :
Если мы ее раскомментируем, то компилятор не скомпилирует программу и отобразит нам ошибку, которая сообщит, что в функцию sum переданы значения некорректных типов.
# Concepts
Starting with the C++20 standard, C++ gained a feature called concepts. Concepts let you place constraints on template parameters (of both function templates and class templates).
A concept is essentially a template for a named set of constraints, where each constraint imposes one or more requirements on one or more template parameters. In the general case it has the following form
The concept's parameter list contains one or more template parameters. At compile time the compiler evaluates concepts to determine whether a set of arguments satisfies the given constraints.
`Constraints` are conditional expressions that evaluate to a bool value - if the type parameter satisfies the condition, true is returned.
The simplest example:
Here a concept named size is defined. Its meaning is that the type passed through the parameter `T` must satisfy the condition `sizeof(T) <= sizeof(int)`. That is, the physical size of objects of type T must not exceed the size of values of type `int`.
To determine whether a particular type satisfies a concept, you use a concept expression, which consists of the concept name followed by the arguments for the template parameters in angle brackets.
For example, let's check the size concept defined above in action:
In this case the expressions `size<unsigned int>` and `size<char>` satisfy the concept's constraint, since the types `unsigned int` and `char` used in them are no larger than the size of type int. So these expressions return true.
The expression `size<double>`, on the other hand, does not satisfy the concept's constraint, since the size of type double is greater than the size of type int, so this expression returns false.
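A minimal sketch of the size concept and the checks described above (the use of std::boolalpha for printing is an assumption):
```
#include <iostream>

// the size of T must not exceed the size of int
template <typename T>
concept size = sizeof(T) <= sizeof(int);

int main()
{
    std::cout << std::boolalpha;
    std::cout << size<unsigned int> << std::endl;  // true
    std::cout << size<char> << std::endl;          // true
    std::cout << size<double> << std::endl;        // false
}
```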
Some concepts can build on other concepts. For example:
First, two simple concepts are defined. `small_size` requires that the size of the type be smaller than the size of int, and the `big_size` concept requires that the size of the type be larger than the size of long. Constraints can be combined with the standard logical operations && and ||.
In this case the size concept requires that the type T satisfy either the `small_size<T>` condition or `big_size<T>`:
Now the main question: why are all these concepts needed? We can use concepts as constraints on templates:
In this case the size concept is defined, according to which the size of type T must be equal to or less than the size of type int.
When the sum function template is defined, the size concept is used as its constraint:
That is, the values passed into this function must be of a type whose size is not larger than the size of type int. So we can call this function and pass it values of type `int`:
But we cannot pass values of type `double`, since those values occupy 8 bytes in memory - more than values of type int:
That is why the last line in the code is commented out. If we uncommented it, we would get a compilation error.
C++ also allows a shorter syntax for applying a concept:
In that case the concept is specified in the angle brackets instead of the word `typename` before the name of the type parameter.
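A sketch combining both forms of applying the concept - the requires clause and the shorter syntax (the second function name, multiply, is hypothetical):
```
#include <iostream>

template <typename T>
concept size = sizeof(T) <= sizeof(int);

// full form: the constraint is set with a requires clause
template <typename T>
requires size<T>
T sum(T a, T b) { return a + b; }

// shorter form: the concept replaces the word typename
template <size T>
T multiply(T a, T b) { return a * b; }

int main()
{
    std::cout << sum(10, 7) << std::endl;      // 17
    std::cout << multiply(3, 4) << std::endl;  // 12
    // sum(1.5, 2.5);  // error: double is larger than int
}
```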
These concepts do not have much practical value and are only meant to give a general idea of how concepts are defined and how they work. Besides that, the C++ standard library contains a fairly large set of built-in concepts that can be used in a wide variety of situations. All these concepts are defined in the <concepts> module.
For example, the built-in concept std::same_as<K, T> checks whether T and K are the same type. Suppose we need to define a function template that adds int and double numbers:
Here the constraint
specifies that type T must be either int or double.
But in this case we could also use another concept - std::convertible_to<K, T>. It checks whether a value of type K can be converted to a value of type T. For example, an int value can be implicitly converted to double. So we could rewrite the previous example as follows:
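A hedged sketch of both variants - std::same_as for exact type matching and std::convertible_to for anything convertible to double (the function names and test values are assumptions):
```
#include <iostream>
#include <concepts>

// T must be exactly int or double
template <typename T>
requires std::same_as<T, int> || std::same_as<T, double>
T sum(T a, T b) { return a + b; }

// U and V only need to be convertible to double
template <typename U, typename V>
requires std::convertible_to<U, double> && std::convertible_to<V, double>
double add(U a, V b) { return a + b; }

int main()
{
    std::cout << sum(10, 7) << std::endl;    // 17
    std::cout << add(10, 7.5) << std::endl;  // 17.5
}
```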
# The requires expression
The requires expression is intended to make constraints more specific and detailed. It has the following forms:
After the word `requires` there may be an optional parameter list in parentheses, which is largely similar to a function parameter list.
After the parameter list, the requirements, which may use the parameters, are listed in curly braces. Each requirement ends with a semicolon.
Requirements can be simple or compound. The parameters of the requires expression are never bound to actual arguments, and the expressions in the curly braces are never evaluated at run time. All the compiler does with these expressions is check whether they form valid C++ code.
A simple requirement is an arbitrary C++ expression. If this expression is valid for the given types, the type satisfies the requirement.
A requires expression may contain many requirements, and the type must satisfy all of them. For example:
Here the operation concept is defined, and its constraints are specified with a `requires` expression:
The `requires` expression defines one parameter of type T that we check against the requirements. In this case three requirements are defined: `item + item`, `item - item` and `item * item`. That is, we take some type T and check whether such expressions are valid for an object of that type.
For these expressions to be valid, the addition, subtraction and multiplication operations must be defined for type T. And type T must satisfy all of these requirements.
In the main function we check various types against the operation concept. For example:
For the `int` type all the listed operations are defined - addition, subtraction and multiplication. Accordingly this expression returns `true`.
The same goes for the `char` type in the expression `operation<char>`. The `std::string` type, however, supports only the addition operation (string concatenation), so the following expression returns false:
The same goes for our empty Counter class, for which no operations are defined at all:
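A minimal sketch of the operation concept and the checks discussed above (the empty Counter class is recreated here from the description):
```
#include <iostream>
#include <string>

// T must support addition, subtraction and multiplication
template <typename T>
concept operation = requires(T item)
{
    item + item;
    item - item;
    item * item;
};

class Counter {};   // empty class with no operators defined

int main()
{
    std::cout << std::boolalpha;
    std::cout << operation<int> << std::endl;          // true
    std::cout << operation<char> << std::endl;         // true
    std::cout << operation<std::string> << std::endl;  // false: no - and *
    std::cout << operation<Counter> << std::endl;      // false: no operators at all
}
```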
The expressions in a `requires` block are not limited to arithmetic or other operations. They can be any expressions: function calls, constructors, type conversions, access to class members, and so on. However, local variables cannot be defined inside the curly braces.
All variables used in the expressions must be either global variables or variables listed in the parameter list. For example, let's define an additional parameter of type int:
Here the `is_collection` concept is defined. It checks whether type T is a collection whose elements can be accessed by index. The `requires` expression now takes two parameters. The second parameter is of type int.
A single requirement is used here - `collection[n]`. That is, type T must support access to its elements by an integer index. The `int` type is not a collection, so false is returned for it:
Strings, arrays and vectors, on the other hand, support indexing, so true is returned for them:
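A sketch of the is_collection concept under the assumptions described above (the checked types are examples):
```
#include <iostream>
#include <string>
#include <vector>

// T must support indexing with an int
template <typename T>
concept is_collection = requires(T collection, int n)
{
    collection[n];
};

int main()
{
    std::cout << std::boolalpha;
    std::cout << is_collection<int> << std::endl;                  // false
    std::cout << is_collection<std::string> << std::endl;          // true
    std::cout << is_collection<int[5]> << std::endl;               // true
    std::cout << is_collection<std::vector<double>> << std::endl;  // true
}
```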
Let's use a `requires` expression to create a concept that constrains a template:
In this case the constraint
specifies that T can be any type that supports the addition operation. These can be numbers as well as std::string strings.
A `requires` expression can also be used directly after the requires clause:
In this case the first word `requires` is the clause that imposes the constraint on the template, while the second word `requires` starts the expression that defines the requirements.
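A small sketch of this "requires requires" form (the add function is hypothetical):
```
#include <iostream>
#include <string>

// the first requires is the clause, the second one starts the requires expression
template <typename T>
requires requires(T a, T b) { a + b; }
T add(T a, T b) { return a + b; }

int main()
{
    std::cout << add(10, 7) << std::endl;                                  // 17
    std::cout << add(std::string{"he"}, std::string{"llo"}) << std::endl;  // hello
}
```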
A compound requirement is similar to a simple requirement, but in addition it can forbid the expression from throwing exceptions and/or constrain the type it evaluates to. A compound requirement can take the following forms
The requirement `{ expr } noexcept` is satisfied if all the functions called in the expression `expr` are declared noexcept. In the requirement `{ expr } -> type_constraint;` a type constraint must follow the arrow `->`. For example:
Here the `is_pointer` concept uses a compound requirement consisting of three requirements:
The requirement `ptr[n]` checks whether we can access values using the indexing operation. Then several requirements with a type constraint are applied:
In this case, on the one hand, the Pointer type must support the subtraction operation, where an integer is subtracted; on the other hand, the result of this operation must again be of the same Pointer type. The built-in std::same_as concept from the `concepts` header is used to set the type constraint.
Pointers to values of any type, for example, will satisfy a concept with such constraints:
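The full listing of is_pointer is not shown in this section; the sketch below reconstructs it from the description, so the third requirement (pointer difference) is an assumption:
```
#include <concepts>
#include <iostream>

template <typename Pointer>
concept is_pointer = requires(Pointer ptr, int n)
{
    ptr[n];                                // elements can be accessed by index
    { ptr - n } -> std::same_as<Pointer>;  // subtracting an int yields the same Pointer type
    { ptr - ptr };                         // hypothetical third requirement: pointer difference
};

int main()
{
    std::cout << std::boolalpha;
    std::cout << is_pointer<int*> << std::endl;     // true
    std::cout << is_pointer<double*> << std::endl;  // true
    std::cout << is_pointer<int> << std::endl;      // false
}
```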
For example, let's define a compound requirement as a constraint on a template:
Here the requirement states that type T must support the addition operation, and its result must be of a type that can be converted to T (or be T itself).
# Type constraints for auto
Type constraints can also be used to restrict the types for which the placeholders auto, auto* and auto& stand. A type constraint can be specified anywhere auto is used: when defining local variables, for a function's return type, in lambda expressions, and so on. Whenever the concrete type used in place of `auto` does not satisfy the type constraint, the compiler reports an error.
Consider an example:
Here the `Numeric` concept is defined, which requires that type T be either an integer (`std::integral<T>`) or a floating-point number (`std::floating_point<T>`). In the definition of the sum function we put this concept next to the `auto` type placeholder, restricting the set of usable types to numeric types only. We do this for the function's result type:
For the types of the parameters
For a constant inside the function
Then we can call this sum function, passing it both integer and floating-point numbers:
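A minimal sketch of the Numeric concept applied to auto in the three places listed above (the concrete sum implementation is an assumption):
```
#include <concepts>
#include <iostream>

// T must be an integral or a floating-point type
template <typename T>
concept Numeric = std::integral<T> || std::floating_point<T>;

// the concept constrains auto for the result, the parameters and a local constant
Numeric auto sum(Numeric auto a, Numeric auto b)
{
    Numeric auto result = a + b;
    return result;
}

int main()
{
    std::cout << sum(10, 7) << std::endl;     // 17
    std::cout << sum(1.5, 2.3) << std::endl;  // 3.8
}
```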
# Streams and the input/output system
All the tools for working with the input/output system and streams in C++ are defined in the standard library. The iostream header defines the following basic types for working with streams:
istream and wistream: read data from a stream
ostream and wostream: write data to a stream
iostream and wiostream: read data from and write data to a stream
Each type has a counterpart whose name starts with the letter w and which is intended to support data of type wchar_t.
These types serve as base classes for other classes that manage input/output streams.
An object of type ostream receives values of various types, converts them into a sequence of characters and passes them through a buffer to a certain output destination (the console, a file, network interfaces, and so on).
An istream stream receives sequences of characters through a buffer from a certain source (the console, a file, the network, and so on) and converts those sequences into values of various types. That is, when we enter data (say, from the keyboard in the console), the data first accumulates in the buffer and only then is passed to the istream object.
By default the standard library defines objects of these classes - cout, cin and cerr - that work with the console.
For example, by default the C++ standard library provides the cout object, which has the type ostream and lets you print data to the console:
Since the << operator returns its left operand - cout - we can pass several values to the console with a chain of operators:
To read data from a stream, the input operator >> is used; it takes two operands. The left operand is the istream stream to read from, and the right operand is the object into which the data is read.
The cin object, which has the type istream, is used for reading from the console.
However, this approach is not very suitable for reading strings from the console, especially when the string being read contains whitespace characters. In this case it is better to use the built-in getline() function, which takes as parameters an istream stream and a string variable into which the data should be read:
Example program run:
> Input name: <NAME> Your name: <NAME>
By default the end of input is marked by a line break, for example via the Enter key. But you can also set your own end-of-input marker with an additional parameter of the `getline()` function. To do this, pass the character that should mark the end of input:
In this case the input ends when the user types the * character. So we can enter multi-line text, and as soon as an asterisk is entered, the input ends. Example program run:
> Input text: Hello World Good bye world* Your text: Hello World Good bye world
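A minimal sketch of reading with a custom end-of-input character (the prompts are assumptions):
```
#include <iostream>
#include <string>

int main()
{
    std::string text;
    std::cout << "Input text: ";
    // read until the user types '*' instead of the default '\n'
    std::getline(std::cin, text, '*');
    std::cout << "Your text: " << text << std::endl;
}
```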
To print an error message to the console, the cerr object, which is of type ostream, is used:
For working with streams of wchar_t data, the standard library defines the objects wcout (type wostream), wcerr (type wostream) and wcin (type wistream), which are the counterparts of cout, cerr and cin and work in the same way.
# File streams. Opening and closing
To work with files the standard library provides the fstream header, which defines the basic types for reading and writing files. In particular, these are:
ifstream: for reading from a file
ofstream: for writing to a file
fstream: combines writing and reading
For working with data of type wchar_t these streams have counterparts:
wifstream
wofstream
wfstream
When working with a file, it first has to be opened with the open() function. This function has two versions:
`open(path)` `open(path, mode)`
To open a file, pass the path to the file as a string to the function. You can also specify the open mode. The list of available file open modes:
ios::in: the file is opened for input (reading). Can be set only for an ifstream or fstream object
ios::out: the file is opened for output (writing). Existing data is discarded. Can be set only for an ofstream or fstream object
ios::app: the file is opened for appending. Existing data is not discarded.
ios::ate: after the file is opened, the position is moved to the end of the file
ios::trunc: the file is truncated on opening. Can be set only if the `out` mode is also set
ios::binary: the file is opened in binary mode
If no mode is specified on opening, then by default the `ios::out` mode is used for ofstream objects and the `ios::in` mode for ifstream objects. For fstream objects the `ios::out` and `ios::in` modes are combined.
However, it is not strictly necessary to use the open function to open a file. As an alternative, you can use the constructor of the stream objects and pass the file path and the open mode to it:
When a constructor that receives a file path is called, that file is opened automatically:
In this case it is assumed that the file "hello.txt" is located in the same folder as the program file.
In general, using constructors to open a stream is preferable, since defining a variable that represents a file stream already implies that this stream will be opened for reading or writing. Using the constructor also protects us from the situation where we forget to open a stream but start using it.
While working, we can check whether a file is open with the is_open() function. If the file is open, it returns true:
After finishing work with a file, it should be closed with the close() function. It is also worth noting that when a stream object goes out of scope, it is destroyed and its close function is called automatically.
Streams for working with text files are objects for which the ios::binary open mode is not set.
To write to a file, the << operator is applied to an ofstream or fstream object (just as when printing to the console):
Here it is assumed that the file "hello.txt" is located in the same folder as the program file. This approach rewrites the file from scratch. If you need to append text to the end of a file, the file has to be opened with the ios::app mode:
If you need to read a whole line, or even all the lines from a file, it is better to use the built-in getline() function, which takes the stream to read from and the variable into which the text should be read:
The >> operator can also be used to read data from a file with ifstream and fstream objects (just as when reading from the console):
Here a vector of Point structures is written to a file.
Note that when the values of the variables are written to the file, they are separated by a space. As a result a file in the following format will be created
> 0 0 4 5 -5 7
Using the >> operator, you can sequentially read the data into the x and y variables and initialize a structure with them.
Note, however, that this is a limited approach, because when reading the file the in stream uses the space to separate one value from another and thus reads these values into the x and y variables. If we need to write and then read back a string that contains spaces, together with some other data, this approach will of course not work.
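A sketch of writing and reading back such Point values (the Point structure, the file name points.txt and the output format are assumptions based on the description):
```
#include <fstream>
#include <iostream>
#include <vector>

struct Point { int x{}; int y{}; };

int main()
{
    std::vector<Point> points{ {0, 0}, {4, 5}, {-5, 7} };

    // write the points to the file, separating the values with spaces
    std::ofstream out{"points.txt"};
    for (const Point& p : points)
        out << p.x << " " << p.y << " ";
    out.close();

    // read the values back and rebuild the vector
    std::vector<Point> loaded;
    std::ifstream in{"points.txt"};
    int x, y;
    while (in >> x >> y)
        loaded.push_back(Point{x, y});

    for (const Point& p : loaded)
        std::cout << p.x << ":" << p.y << std::endl;
}
```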
# Overloading the input and output operators
The input >> and output << operators work perfectly for primitive data types such as int or double. To use them with objects of classes, however, these operators have to be overloaded.
The standard output stream cout has the type std::ostream. Therefore the first parameter (the left operand) of the << operation is a reference to a non-constant ostream object. This object must not be a constant, because writing to a stream changes its state. And the parameter is a reference precisely because an object of the ostream class cannot be copied.
The second operator parameter is defined as a reference to a constant object of the class that needs to be written to the stream.
For compatibility with other operators, the overloaded operator should return the std::ostream parameter.
It should also be noted that the input and output operators should not be members of the class; they are defined outside the class as ordinary functions.
In this case the output operator is defined for objects of the Person structure. The operator essentially just prints the person's name and age separated by a space. Console output of the program:
> Tom 38 Bob 42
The first parameter of the >> operator is a reference to the istream object from which the reading is done. The second parameter is a reference to the non-constant object into which the data should be read. As its result the operator returns the reference to the istream input stream from the first parameter.
The input operator sequentially reads data from the stream into the name and age variables and then uses them to set the person's name and age.
It is assumed here that the name consists of a single word. If you need to read a compound name consisting of several words, or a first and last name, then naturally more complex logic has to be defined.
Example program run:
> Input name and age: Bob 42 Name: Bob Age: 42
But what if we enter a string instead of a number for the age? In that case the age variable receives an undefined value. There are various ways to handle such situations. As an example, we can set default values in case of invalid input:
The `if(in)` expression checks whether the input succeeded. If it completed successfully, we set the entered values. If the input failed, the Person object keeps the values it had before the input.
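A sketch of the Person input and output operators with the validity check described above (the Person fields and prompts are assumptions):
```
#include <iostream>
#include <string>

struct Person
{
    std::string name;
    unsigned age{};
};

// output: name and age separated by a space
std::ostream& operator<<(std::ostream& os, const Person& person)
{
    return os << person.name << " " << person.age;
}

// input: read name and age, apply them only if the input succeeded
std::istream& operator>>(std::istream& in, Person& person)
{
    std::string name;
    unsigned age{};
    in >> name >> age;
    if (in)   // input succeeded
    {
        person.name = name;
        person.age = age;
    }
    return in;
}

int main()
{
    Person person;
    std::cout << "Input name and age: ";
    std::cin >> person;
    std::cout << person << std::endl;
}
```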
Having defined the input and output operators, we can also use them to read and write a file:
Here the input and output operators are defined for the Person class. The output operator is used to write the data to the users.txt file, and the input operator to read the data back from the file. At the end the data that has been read is printed to the console:
Result of the program:
> All users: Tom 23 Bob 25 Alice 22 Kate 31
# The C++ standard library
Starting with the C++20 standard, the standard library provides the numbers module, which contains a number of built-in mathematical constants. Some of the most common ones:
`std::numbers::e`: the number 2.71828 (the base of the natural logarithm)
`std::numbers::pi`: the number π - `3.14159...`
`std::numbers::sqrt2`: the square root of 2 - `1.41421...`
`std::numbers::phi`: the golden ratio (Phidias' number) φ - `1.618...`
All these numbers have the type double.
The <cmath> header of the C++ standard library defines a set of mathematical functions that can be used in programs. Here are the most common ones:
`abs(arg)`: computes the absolute value of arg (unlike most of these functions, it is also overloaded for integer types)
`ceil(arg)`: computes the nearest integer greater than or equal to arg and returns it as a floating-point number. For example, `std::ceil(2.5)` returns `3.0`, and `std::ceil(-2.5)` returns `-2.0` (rounding up, toward positive infinity)
`floor(arg)`: computes the nearest integer less than or equal to arg and returns it as a floating-point number. For example, `std::floor(2.5)` returns 2.0, and `std::floor(-2.5)` returns -3.0 (rounding down, toward negative infinity)
`exp(arg)`: computes e^arg
`log(arg)`: computes the natural logarithm (base e) of arg
`log10(arg)`: computes the base-10 logarithm of arg
`pow(arg1, arg2)`: computes arg1 raised to the power arg2, that is arg1^arg2. The numbers arg1 and arg2 can be integer or floating-point. For example, `std::pow(2, 3)` equals 8.0 and `std::pow(4, 0.5)` equals 2.0
`sqrt(arg)`: computes the square root of arg
`round(arg)`, `lround(arg)` and `llround(arg)`: round a number to the nearest integer. They differ in the type of the result: `round()` returns a floating-point number, `lround(arg)` a `long`, and `llround(arg)` a `long long`. Half values are rounded away from zero: `std::lround(0.5)` returns `1L`, while `std::round(-1.5f)` returns -2.0f
`sin(arg)`: computes the sine of an angle, where arg is a value in radians
`cos(arg)`: computes the cosine of an angle
`tan(arg)`: computes the tangent of an angle
`isinf(arg)`: returns `true` if the argument is ±infinity
`isnan(arg)`: returns `true` if the argument is NaN
An example of using some of these functions:
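The original listing is not included here; a sketch that would produce the output shown below might look like this:
```
#include <cmath>
#include <iostream>

int main()
{
    std::cout << "abs(-3) = " << std::abs(-3) << std::endl;
    std::cout << "pow(-3, 2) = " << std::pow(-3, 2) << std::endl;
    std::cout << "round(-3.4) = " << std::round(-3.4) << std::endl;
    std::cout << "ceil(3.2) = " << std::ceil(3.2) << std::endl;
    std::cout << "floor(3.2) = " << std::floor(3.2) << std::endl;
    std::cout << "ceil(-3.2) = " << std::ceil(-3.2) << std::endl;
    std::cout << "floor(-3.2) = " << std::floor(-3.2) << std::endl;
}
```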
Console output:
> abs(-3) = 3 pow(-3, 2) = 9 round(-3.4) = -3 ceil(3.2) = 4 floor(3.2) = 3 ceil(-3.2) = -3 floor(-3.2) = -4
Checking the result of arithmetic operations for NaN and infinity:
Console output:
> 1.5/-1.5 is Infinity? 0 1.5/-1.5 is Nan? 0 1.5/0 is Infinity? 1 0/0 is NaN? 1
For more convenient string formatting, starting with the C++20 standard the `format` module, and in particular the std::format() function, was added to the C++ standard library. As its first argument the function takes a format string. This string contains any number of `{}` placeholders. The second and subsequent parameters are the arguments that are inserted into these placeholders - inside the curly braces - one argument for each pair of curly braces.
Consider a small example:
Here the format string contains three `{}` placeholders: `"{} + {} = {}"`. The second, third and fourth parameters are the values that are inserted into the placeholders in order: the first value goes into the first pair of curly braces, the second value into the second pair, and so on. As a result we get the following console output:
> 10 + 7 = 17
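A minimal sketch of this first std::format example (the variable names a and b are assumptions):
```
#include <format>
#include <iostream>

int main()
{
    int a{10}, b{7};
    std::cout << std::format("{} + {} = {}", a, b, a + b) << std::endl;  // 10 + 7 = 17
}
```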
Note that, since this function was added to the standard relatively recently, support for it may vary between compilers. Visual Studio supports the function fully, while in GCC (g++) support was added only starting with version 13.0. When compiling with Clang, you may need to add the `-fexperimental-library` flag:
> clang++ -std=c++20 -fexperimental-library hello.cpp -o hello
Each placeholder can contain various settings in the following form:
The individual formatting parameters are shown in square brackets. These parameters apply to various types. Let's look at some of them.
The precision specifier lets you set the number of decimal digits displayed for a floating-point number and the number of characters displayed for a string. It has the following format:
The number of digits to display is specified after the colon and the dot:
In this case the formatting is applied to the number sum, which equals `100.2567`. The `:.5` specifier is passed in the format string, so only the first 5 digits of the number are displayed on the console:
> sum = 100.26
Note that the last displayed digit is increased by 1 if the first digit discarded by the formatting is greater than or equal to 5. That is why in this case the console prints `100.26` instead of `100.25`.
By default the value of the precision specifier indicates the total number of significant digits (5 in our example), counting the digits both before and after the decimal point. But you can also specify the number of digits after the decimal point instead. To do this, the letter f is appended to the specifier:
Here we again print five digits, but this time after the decimal point. However, since the fractional part of sum has only 4 digits, 0 is used as the fifth digit:
> sum = 100.25670
Similarly, this specifier can be used to set the number of characters of a string to display. For example, let's display only five characters of a string:
The `width` parameter lets you set the minimum field width. To reach this minimum width, extra characters - fill characters - are inserted into the formatted value when necessary. Which characters are inserted depends on the type of the value and on the other formatting parameters. To set the fill character explicitly, the `fill` parameter is used, which comes before the `width` parameter.
For example, if the width parameter is preceded by the character "0" (zero), extra zeros are inserted before the number. If the fill character is not specified explicitly, the so-called default fill character is inserted.
In this case three numbers are printed, each with a minimum width of 7 characters. For the first and third numbers "0" is used as the fill character, while for the second number the default fill character (a space) is used. Console output:
> a = 0000002 b = 5 c = -000008
If a sign (+ or -) is specified for a number, the fill characters are inserted after the sign.
The `align` parameter determines how the formatted value is aligned: to the left (<), to the right (>) or centered (^). The default alignment depends on the type of the value.
Console output:
> 1| -0.2| 3| 4| 1 |-0.2 |3 |4 | 1 | -0.2 | 3 | 4 |
The `type` parameter sets the formatting type (not the data type). The specific types depend on the data type of the value.
The following types are used for floating-point numbers:
f: fixed-point formatting
g: general formatting
e: scientific (exponential) notation
a: hexadecimal notation
For integers, the following types are added:
b: print the number in binary format
x: print the number in hexadecimal format
Using the types:
By default the arguments are inserted into the format string in the order they are passed. But we can override the output by specifying the argument number in the curly braces before the colon (numbering starts at zero):
In this case, although the first argument passed to `std::format` is the number `a`, the first value printed by the format string is the third argument - `{2:}`. The output of the other arguments can be specified in the same way.
Explicitly specifying argument indexes can be useful when the same argument needs to be printed several times. For example, let's print the same number in decimal, binary and hexadecimal form:
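A sketch of reusing one argument via explicit indexes (the value 255 is an assumption):
```
#include <format>
#include <iostream>

int main()
{
    unsigned number{255};
    // the same argument is used three times: decimal, binary and hexadecimal
    std::cout << std::format("{0} = {0:b} = {0:x}", number) << std::endl;
    // 255 = 11111111 = ff
}
```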
# std::optional<T>
Starting with the C++17 standard, the std::optional<T> type (the optional module) was added to the C++ standard library. It helps handle situations when a value is not found or not set, and lets you define a default value for such situations. Let's look at a concrete situation where it is useful.
For example, we need to define a function that finds the index of a character in a string. On the one hand, everything is simple: we walk through the string character by character, find the required character and return its index. On the other hand, what if the character is not found? For that case we could provide some default return value. The most commonly used option is the number -1, since this number is not a valid index, and when we receive it from the function we know that the character was not found.
The optional type offers an alternative approach: if the index is found, it is returned. If the value is not found, the constant std::nullopt is returned, which indicates that the `optional` value is not set.
As an example, let's define the following program:
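The original listing is not reproduced here; a sketch matching the description might look as follows (the sample text "A cup of tea" is an assumption chosen so that 'p' sits at index 4 and 'b' is absent):
```
#include <iostream>
#include <optional>
#include <string>

// returns the index of the character c in text, or std::nullopt if it is absent
std::optional<unsigned> find_index(const std::string& text, char c)
{
    for (unsigned i{}; i < text.length(); i++)
    {
        if (text[i] == c) return i;
    }
    return std::nullopt;
}

int main()
{
    std::string text{"A cup of tea"};
    auto p_index{find_index(text, 'p')};   // contains 4
    auto b_index{find_index(text, 'b')};   // contains std::nullopt
    std::cout << "Index of p: " << *p_index << std::endl;
    // dereferencing an optional without a value is undefined behavior,
    // which is why an arbitrary "garbage" number is printed here
    std::cout << "Index of b: " << *b_index << std::endl;
}
```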
Here the `find_index()` function is defined; it takes text as a constant reference to a string, plus the character to search for, and returns a std::optional<unsigned> value.
An `optional` object is parameterized with the type of the values it should hold. Since a character index is an unsigned integer, here we parameterize `optional` with the `unsigned` type. In the function itself, if the character is not found or the string is empty, we return the `std::nullopt` value.
Thus, if the index is found, the `optional` holds the found index; if it is not found, it holds `std::nullopt`. In the main function we search for two characters: "p", which is present in the source text, and "b", which is absent. To get the value out of an `optional` you can use the * operation, for example `*p_index`. As a result, in this case we get the following output:
> Index of p: 4 Index of b: 4251975392
So we see that, since the character "b" was not found, the optional holds no value, and dereferencing it yields an arbitrary large garbage number.
Let's go further and change the program as follows:
Here the `print_index()` function has been added; it prints the index of the found character. We can check the `optional` value: if the optional equals `std::nullopt`, the condition returns `false`. This way we can check whether a value is present. Console output of the program:
> Index of p: 4 Index of b not found
The `optional` type also provides a number of functions. Some of them: has_value(): returns `true` if the optional contains a value.
So in the example above we could replace the check
with the following line
value(): returns the value stored in the optional. So in the example above we could get the value as follows
The only thing to remember is that before calling this function you should check that a value is present.
value_or(default): if the optional contains a value, returns that value. If the optional has no value, returns the default argument passed to the function.
Let's look at the use of the last function, which can be handy when you need to define a default value for a parameter:
Here the `pow` function takes the number to be raised to a power and the exponent as a std::optional parameter.
By default, if no value is passed for this parameter, it equals `std::nullopt`.
In the function itself we check this value, and if it is NOT set, we use the number 2 as the exponent (that is, the number is squared):
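A sketch of such a pow function built on value_or (the optional's element type, unsigned, and the default exponent 2 follow the description; the loop-based implementation is an assumption):
```
#include <iostream>
#include <optional>

// if no exponent is passed, the number is squared
double pow(double number, std::optional<unsigned> degree = std::nullopt)
{
    unsigned n{degree.value_or(2)};   // 2 is the default exponent
    double result{1.0};
    for (unsigned i{}; i < n; i++) result *= number;
    return result;
}

int main()
{
    std::cout << pow(5, 3) << std::endl;  // 125
    std::cout << pow(5) << std::endl;     // 25
}
```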
# The copy-and-swap idiom
When you need to change the state of one or more objects, and an error can occur at any stage of the modification, the copy-and-swap idiom can be used to write error-resilient code. The essence of the idiom is the following sequence of actions:
Create a copy of the object(s)
Modify the copy. The original objects remain untouched
If all the modifications succeed, replace the original object with the modified copy. If an error occurs at some stage while modifying the copy, the original object is not replaced.
Usually this idiom is applied in functions, and a particular, though common, case of its use is the assignment operator. In the general case it looks like this:
In the assignment operator function, a temporary copy of the assigned object is created first. If the copy is created successfully, the current object (this) and the copy exchange their contents via some `swap()` function.
The swap function can be implemented as an external function or as a member function of the class (in the example above it is assumed to be implemented inside the class). The swap function is defined as non-throwing (with the noexcept keyword). Therefore the only point where an exception can occur is the copy operation (the copy constructor) of the object. If the copying fails, control never reaches the swap function.
The exception safety lies in the fact that there is no point in the assignment operator where throwing an exception could lead to a memory leak. The implementation above is also safe with respect to assigning an object to itself (a = a), but it carries the overhead of creating the temporary copy even in that case. This overhead can be eliminated with an additional check:
Let's look at an implementation of this principle. But first let's see what problem this idiom can solve. Suppose we have the following class:
Here the Array class template receives a size in the constructor and uses it to allocate dynamic memory for an array. The dynamic memory is released in the destructor.
In the assignment operator function we need to assign the values of the parameter object to the current object. To do that, we release the previously allocated memory and then allocate memory for the new array again. At first glance everything seems fine, since the memory has been freed. But look at the memory allocation:
Note that the new[] operator throws a `std::bad_alloc` exception if for some reason the memory could not be allocated.
For example, when memory has to be allocated for a very large array that does not fit in the available memory. If the `new[]` operator cannot allocate new memory, the data pointer becomes a so-called dangling pointer - a pointer to memory that has been freed. So even if we handle the bad_alloc exception, the Array object is unusable, and when its destructor runs we run into a failure.
Next, in a loop, values are assigned to the elements of the data array:
In this case a value of type T is assigned to an element of type T. But T can be any type, and that type must support the assignment operator. That assignment operator may in turn implement some logic that can throw exceptions.
Let's change the code by applying the copy-and-swap idiom:
First of all, a copy constructor has been added; to avoid repeating the memory allocation logic, it calls the ordinary constructor and then copies the values into the current object.
This way we obtain a copy of the current object.
A swap function is implemented for exchanging the values:
For simplicity, the exchange uses the standard std::swap function from the C++ standard library, which swaps the values of its two parameters using their copy operations. In effect, we separately swap the pointers to the dynamic array data and the array sizes.
In the assignment operator function we use the copy constructor and the swap function:
Then in the main function we can create an Array object and assign it to another object:
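The Array listing itself is not included in this section; the sketch below reconstructs the copy-and-swap version from the description (member names and the test values in main are assumptions):
```
#include <iostream>
#include <utility>

template <typename T>
class Array
{
public:
    explicit Array(size_t size) : size{size}, data{new T[size]{}} {}

    // the copy constructor reuses the allocation logic and copies the elements
    Array(const Array& other) : Array(other.size)
    {
        for (size_t i{}; i < size; i++) data[i] = other.data[i];
    }

    ~Array() { delete[] data; }

    // copy-and-swap: first create a copy, then swap it into the current object
    Array& operator=(const Array& other)
    {
        Array temp{other};   // may throw; the current object is still intact
        swap(temp);          // never throws
        return *this;
    }

    void swap(Array& other) noexcept
    {
        std::swap(data, other.data);
        std::swap(size, other.size);
    }

    T& operator[](size_t index) { return data[index]; }
    size_t getSize() const { return size; }

private:
    size_t size;
    T* data;
};

int main()
{
    Array<int> numbers{5};
    for (size_t i{}; i < numbers.getSize(); i++) numbers[i] = static_cast<int>(i);

    Array<int> copy{1};
    copy = numbers;   // copy-and-swap assignment
    for (size_t i{}; i < copy.getSize(); i++) std::cout << copy[i] << " ";
    std::cout << std::endl;   // 0 1 2 3 4
}
```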
Console output:
> 0 1 2 3 4
Although this approach is most often used in assignment operators, it can also be applied in other situations where an exception-safe modification of an object is needed. And the principle is always the same: first copy the object that needs to be changed, then perform the changes on the copy, and if everything succeeds, swap the contents of the target object and the copy.
# RAII
Class objects may use various resources throughout their lifetime - dynamically allocated memory, files, network connections and so on. For such cases C++ uses the so-called RAII principle/idiom (resource acquisition is initialization). RAII means that a resource is acquired when the object is initialized, and released in the object's destructor.
Here the IntArray class is defined, which notionally represents an array of int numbers and manages a certain resource. In this case the resource is the dynamic memory allocated to store the array of int numbers. The dynamic memory is acquired in the object's constructor and released in its destructor.
Note that dynamic memory is just one particular kind of resource (in reality it could be files, network connections and so on) and is used here primarily for demonstration, since instead of manually allocating and freeing memory in such a situation we could use smart pointers.
It is important that the resource (here, the dynamic memory) is released exactly once. For that purpose the copy constructor and the assignment operator are deleted in the class, which prevents the situation where two objects hold a pointer to the same area of dynamic memory and then both try to free it in their destructors.
The indexing operator `[]` is defined for accessing the elements of the dynamic array, and the get function for obtaining the pointer itself.
Worth noting is the release function, which lets you hand ownership of the pointer over to outside code, including another object. In this case the pointer is reset to nullptr, and the responsibility for freeing the memory falls on the external code that receives the pointer:
In the main function we create a single IntArray object:
As a result, memory is allocated in the constructor, and when the main function finishes, the destructor is called on the IntArray object and frees the memory. Console output of the program:
> 0 1 2 3 4 Freeing memory...
Let's look at using the `release()` function:
Here the pointer to the dynamic array is handed over to the `data` variable via the release function. After that the main function is responsible for freeing the memory, which it does at the end of the function.
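A sketch of such an IntArray class reconstructed from the description (member names and the exact messages are assumptions):
```
#include <iostream>

// RAII: memory is acquired in the constructor and released in the destructor
class IntArray
{
public:
    explicit IntArray(size_t size) : size{size}, data{new int[size]{}} {}
    ~IntArray()
    {
        std::cout << "Freeing memory..." << std::endl;
        delete[] data;
    }

    // only one owner of the resource: copying is forbidden
    IntArray(const IntArray&) = delete;
    IntArray& operator=(const IntArray&) = delete;

    int& operator[](size_t index) { return data[index]; }
    int* get() const { return data; }

    // hand the pointer over to external code; the caller must free it
    int* release()
    {
        int* ptr{data};
        data = nullptr;
        return ptr;
    }

    size_t getSize() const { return size; }

private:
    size_t size;
    int* data;
};

int main()
{
    IntArray numbers{5};
    for (size_t i{}; i < numbers.getSize(); i++) numbers[i] = static_cast<int>(i);
    for (size_t i{}; i < numbers.getSize(); i++) std::cout << numbers[i] << " ";
    std::cout << std::endl;

    int* data{numbers.release()};   // now main owns the memory
    delete[] data;                  // and has to free it itself
}
```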
# The move-and-swap idiom
The move-and-swap idiom is used in move assignment operators. It avoids duplicating the code of the destructor and the move constructor. The essence of the idiom is the following sequence of actions:
Create a temporary from the moved-from object using the move constructor
Replace the current object with that temporary. If an error occurs at some stage, the current object is not replaced.
The general form of `move-and-swap` looks like this:
Let's look at a simple implementation using the following example:
```
#include <iostream>
#include <string>
#include <utility>

class Message
{
public:
    Message(std::string data) : text{new std::string(data)}  // allocate memory
    {
        id = ++counter;
        std::cout << "Create message " << id << std::endl;
    }
    // copy constructor
    Message(const Message& copy) : Message{copy.getText()}
    {
        std::cout << "Copy message " << copy.id << " to " << id << std::endl;
    }
    // move constructor
    Message(Message&& moved) noexcept : text{moved.text}
    {
        id = ++counter;
        std::cout << "Move message " << moved.id << " to " << id << std::endl;
        moved.text = nullptr;
    }
    ~Message()
    {
        std::cout << "Delete message " << id << std::endl;
        delete text;  // free the memory
    }
    // copy assignment
    Message& operator=(const Message& copy)
    {
        std::cout << "Copy assign message " << copy.id << " to " << id << std::endl;
        if (&copy != this)  // avoid self-assignment
        {
            *text = copy.getText();
        }
        return *this;
    }
    // move assignment
    Message& operator=(Message&& moved) noexcept
    {
        std::cout << "Move assign message " << moved.id << " to " << id << std::endl;
        Message temp{std::move(moved)};  // call the move constructor
        swap(temp);                      // exchange the contents
        return *this;                    // return the current object
    }
    // swap function
    void swap(Message& other) noexcept
    {
        std::swap(text, other.text);  // swap the two pointers
    }
    std::string& getText() const { return *text; }
    unsigned getId() const { return id; }
private:
    std::string* text;                 // message text
    unsigned id{};                     // message number
    static inline unsigned counter{};  // static counter used to generate object numbers
};

int main()
{
    Message mes{""};
    mes = Message{"hello"};  // move assignment
    std::cout << "Message " << mes.getId() << ": " << mes.getText() << std::endl;
}
```
In the Message constructor we allocate dynamic memory for an std::string object and set the message number:
and in the destructor we free the memory
```
~Message()
{
    std::cout << "Delete message " << id << std::endl;
    delete text;  // free the memory
}
```
The move constructor moves the pointer to the string from the moved-from object into the current object:
In the move assignment operator we apply the move-and-swap idiom:
In the assignment operator we first create a temporary Message object into which the data of the moved-from object is moved. To make sure the move constructor is called, we use the built-in `std::move()` function, which casts the moved object to an rvalue.
Then, using the swap function, we exchange the contents of the current and the temporary object. In this function we essentially call the built-in std::swap() function, which swaps the pointers of the two objects.
In the main function we use the assignment operator:
Console output of the program:
> Create message 1 Create message 2 Move assign message 2 to 1 Move message 2 to 3 Delete message 3 Delete message 2 Message 1: hello Delete message 1
Here we see that the mes variable represents the Message object with number 1. The expression `Message{"hello"}` creates the Message object with number 2.
When the assignment
is executed, the move assignment operator starts running. It calls the move constructor, which creates the Message object with number 3 and moves the data of the second object into it.
Then the pointers of objects 1 and 3 are swapped. After that, the destructor is called for the no longer needed objects 2 and 3.
# Development environments
To create a C++ program we need at least two things: a text editor for typing the code and a compiler for turning that code into an application. Moreover, to compile we have to launch a console or terminal. However, there is a more convenient way - using one of the various development environments, or IDEs. They usually contain a built-in text editor and a compiler, let you compile and run a program with a single mouse click, and provide many other auxiliary features.
For programming on Windows, the most popular development environment for C++ is Visual Studio. It can be downloaded at https://visualstudio.microsoft.com/ru/vs/community/.
After downloading and launching the Visual Studio installer, check the Desktop development with C++ workload in it:
Having selected all the required items, click OK to start the installation. After Visual Studio is installed, let's create a first project. Open Visual Studio. On the start screen, among the project templates for C++, choose the Console App type, which represents a console application template:
On the next screen, enter HelloApp as the project name; you can also specify the project location. Then click Create to create the project.
After that, Visual Studio creates a standard C++ console application project.
On the right, the Solution Explorer window shows the project structure. Strictly speaking, the Solution Explorer window contains a solution. In this case it is called HelloApp. A solution can contain several projects. By default we have one project with the same name - HelloApp. The project contains a number of nodes:
External Dependencies: shows the files that are used in the source code files but are not part of the project
Header Files: intended for storing header files with the .h extension
Resource Files: intended for storing resource files, for example images
Source Files: stores the source code files
By default the Source Files folder contains one source code file - HelloApp.cpp (the `project name` plus the `.cpp` file extension - as a rule, C++ source files have the `.cpp` extension).
HelloApp.cpp contains C++ code, and it is this code that we see on the left in the Visual Studio text editor. By default HelloApp.cpp contains the following code:
It uses the code that was covered in the introductory topics.
Now let's run the program. In Visual Studio, press Ctrl+F5 or choose the menu item Debug -> Start Without Debugging:
Visual Studio then passes the source code to the compiler, which compiles it into an executable exe file, and that file is then launched. We see our message in the console that opens:
After that, the compiled exe file can be found on the hard drive, in the solution folder under the \x64\Debug directory, and we can run it independently of Visual Studio:
In this case the HelloApp.exe file is the compiled executable. In addition, an auxiliary file, HelloApp.pdb, is automatically generated in the same folder; it contains debugging information.
There are several standards of the C++ language, each of which adds some extra features, and Visual Studio lets you choose the standard to be used when compiling the application. To do this, open the project properties:
In the properties window go to Configuration Properties -> C/C++ -> Language. In the window that opens, the C++ Language Standard option lets you set the language standard you want to use:
# A first program in Qt Creator
One of the popular development environments for C++ is Qt Creator. Qt Creator is cross-platform, runs on Windows, Linux and macOS, and lets you develop a wide range of applications - desktop and mobile applications as well as applications for embedded platforms. Let's see how to create a simple C++ program in Qt Creator.
Download the installer. To do this, go to the page https://www.qt.io/download-qt-installer
The site automatically detects the current operating system and offers an online installer for it. Click the Download button to download it:
First the installer offers to sign in with the login and password of a Qt account. If you do not have a Qt account, you need to register. To do this, click the sign-up link and enter an e-mail address as the login and a password in the input fields:
Click Next. After that a link is sent to the specified e-mail address; follow it to complete the registration.
Then click Next again in the Qt installer.
Then check a couple of checkboxes and click Next:
After that we move on to the installation itself:
Next you will be asked whether to send usage reports:
Then you need to specify the installation directory (the default one can be kept) as well as the installation type:
You can choose Custom installation as the installation type; in that case, on the next step you will need to select the components to install:
In this case I chose the latest Qt version available at the moment - Qt 6.2.3 - except for two packages (MSVC 2019). When installing on Windows, the most important item to check is the MinGW compiler - currently MinGW 11.2.0 64-bit. The other components can be installed as needed. When installing, keep free disk space in mind, since some components take up quite a lot of space.
Depending on the current operating system, the set of components may differ. For example, the set of components for Qt 6.2.3 on macOS:
Then you need to accept the license agreement and configure the Start menu shortcut. After that click Install:
After the installation finishes, launch Qt Creator. On the start screen select the Projects tab and click the New button:
In the new project window choose Plain C++ Application as the project template:
Next you need to set the project name and the directory where it will be located:
On the following steps keep all the default values. On the last step click Finish to create the project:
A project with some default content opens:
The project has a single file - main.cpp, and its code is opened in the central part, the text editor:
Run it by clicking the green arrow in the lower left corner of Qt Creator. The Application Output pane then opens at the bottom of Qt Creator with the results of the compiled program.
Create a program that converts meters to kilometers. For example, the user enters 2345 meters, and the program responds by displaying 2 kilometers and 345 meters.
Console output:
> Enter a number of metres: 2345 2345 meters = 2 kilometers and 345 meters.
Write a program that asks the user for the radius of a circle and uses the entered radius to compute the area of the circle.
Console output:
> Enter the radius: 10 The area of the circle: 314.15
Write a currency exchange program: it asks for the current dollar exchange rate (for example, against the ruble) and the number of units (rubles) to convert, and prints the converted amount in dollars to the console.
Console output:
> Enter exchange rate: 73.86 Enter sum: 100000 100000 rubles = 1353.91$
The body mass index (BMI) is a person's weight in kilograms divided by the square of their height in meters (weight/(height * height)). Write a program that asks the user for their weight (in kilograms) and height (in centimeters), computes the body mass index and prints it to the console.
Console output:
> Enter your weight: 56 Enter your height: 168 Your BMI: 19.8413
Write a program that reads three characters from the console and packs them into a single number. Print the resulting number to the console. Then unpack the number - extract the packed characters from the number back into separate variables.
Example program run:
> Enter a first character: e Enter a second character: u Enter a third character: g 6649191 gue
One way of defining a color is the RGB format, where R, G and B are the red, green and blue color components respectively. Each component can have a value from 0 to 255. For example, the number 0xffffff in hexadecimal represents a color where all three components are equal to FF in hexadecimal, or 255 in decimal.
Write a program that reads the values of the three color components from the console and stores them in a numeric variable color.
Example program run:
> Red: 15 Green: 14 Blue: 255 986879
One way of defining a color is the RGB format, where R, G and B are the red, green and blue color components respectively. Each component can have a value from 0 to 255. For example, the hexadecimal number `0x04F1A2` represents a color where the red component is `04`, the green one is `F1` and the blue one is `A2`.
Suppose the following variable is given
Write a program that extracts all three color components from this variable into separate variables.
Console output:
> Red: 4 Green: 241 Blue: 162 red: 4 green: f1 blue: a2
The following variables are given:
Write a program that swaps the values of the variables a and b without using a third variable. Use one of the bitwise operations for this.
Console output:
> a = 11 b = 8
Write a program that asks for two integers and then uses an `if-else` construct to print a message saying whether the two numbers are equal.
Example program run:
> Enter a first number: 3 Enter a second number: 3 numbers are equal
Write a program that asks for two integers and determines whether the first number is divisible by the second without a remainder. Provide for the case when 0 is entered as the second number (you cannot divide by zero); in that case the program computes nothing and simply finishes.
Example program run:
> c:\cpp>g++ hello.cpp -o hello & hello Enter a first number: 10 Enter a second number: 5 a and b devisible c:\cpp>g++ hello.cpp -o hello & hello Enter a first number: 2 Enter a second number: 0 Panic! b = 0! Bad data! c:\cpp>g++ hello.cpp -o hello & hello Enter a first number: 4 Enter a second number: 3 a and not devisible
Note that in C++ any number equal to 0 is converted to false in conditional expressions. We can use this fact to shorten the code a little:
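A hedged sketch of that shortened check (variable names and messages are assumptions):
```
#include <iostream>

int main()
{
    int a{}, b{};
    std::cout << "Enter a first number: ";
    std::cin >> a;
    std::cout << "Enter a second number: ";
    std::cin >> b;
    // b != 0 can be shortened to just b: any non-zero value converts to true
    if (b)
        std::cout << (a % b == 0 ? "a and b divisible" : "a and b not divisible") << std::endl;
    else
        std::cout << "Panic! b = 0! Bad data!" << std::endl;
}
```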
Write a program in which the user enters a number from 1 to 100. Use a nested if statement to first make sure the number is within this range. Then, if this condition holds, determine whether the entered number is greater than, less than or equal to 50, and print the result to the console.
Console output:
> c:\cpp>g++ hello.cpp -o hello & hello Enter a number between 1 and 100: 34.5 number is less than 50 c:\cpp>g++ hello.cpp -o hello & hello Enter a number between 1 and 100: 50 number = 50 c:\cpp>g++ hello.cpp -o hello & hello Enter a number between 1 and 100: 51 number is greater than 50 c:\cpp>g++ hello.cpp -o hello & hello Enter a number between 1 and 100: 190 The number is outside the range [1, 100]
Write a program in which two numbers are entered, and the program checks whether the first number is greater than, less than or equal to the second. Use the ternary operator for the check.
Console output:
> c:\cpp>g++ hello.cpp -o hello & hello Enter a first number: 20 Enter a second number: 10 a > b c:\cpp>g++ hello.cpp -o hello & hello Enter a first number: 3 Enter a second number: 10 a < b c:\cpp>g++ hello.cpp -o hello & hello Enter a first number: 4 Enter a second number: 4 a=b
Write a program that prints the squares of the odd integers from 1 up to a limit entered by the user.
Example program run:
> Enter a limit: 7 1: 1 3: 9 5: 25 7: 49
Write a program in which, in a do-while loop, the user enters characters one at a time and the program counts the number of characters entered. When the user enters a period, the input ends and the program shows the number of characters entered (not counting the final period).
Example program run:
> world....... Characters count: 5
Write a program in which, in a while loop, the user enters an arbitrary number of numbers and the program computes their sum. After each input, ask the user whether they are done entering numbers. If the user enters "y" or "Y", the input ends, after which the program should print the sum of all the entered numbers and their arithmetic mean.
Example program run:
> Enter a number: 1 Finish (y/n)? n Enter a number: 2 Finish (y/n)? n Enter a number: 3 Finish (y/n)? y sum: 6 average: 2
Write a program that defines a one-dimensional array of int numbers. The user enters the values for all the array elements from the console. After all the numbers have been entered, the program should print the array elements in reverse order.
Example program run:
> Enter numbers 1 2 3 4 5 7 7 5 4 3 2 1
Write a program in which the user enters a string from the console into a character array using the `std::cin.getline()` function.
Use a loop to count the number of characters entered by the user. Then, with a second loop, print all the characters of the entered string in reverse order.
Example program run:
> Enter a string: hello metanit.com Characters count: 17 moc.tinatem olleh
Write a program that defines and initializes an array with the first 20 odd numbers. Print the numbers from the array to the console, five per line. Use pointer notation for the output (the array name used as a pointer). In the same way, using a pointer, print the array elements in reverse order.
Write a program that defines and initializes an array with the first 20 odd numbers. Print the numbers from the array to the console, five per line. For the output define a pointer that points to the array. Using pointer increment (++), print the elements in forward order. Then, in a separate loop, print the array elements in reverse order using pointer decrement.
# Function parameters and return values
Write a function that reads a string or character array from the console and returns a string in which the characters are arranged in reverse order.
Example program run:
> Enter a string: Hello METANIT.COM! String in reverse order: !MOC.TINATEM olleH
Write a function that raises a number to a given power. The function should take the number itself and the exponent as parameters and return the result of raising the number to the power. The power can be either positive or negative.
Console output of the program:
> pow(5,2) = 25 pow(5,-2) = 0.04
Write an `add()` function that adds two values and returns their sum. Provide overloaded versions for adding values of types `int`, `double` and `string`.
Example program run:
> 3 + 4 = 7 3.1 + 4.2 = 7.3 aaa + bbb = aaabbb he + llo = hello
Write a program in which the user enters an array size, and the program dynamically allocates an array of that size to store int values. Using a pointer, initialize all the array elements so that the value of the element at index i equals i * i (that is, the square of i). Compute the sum of the elements using array syntax (accessing elements by index in square brackets) and print the result to the console.
Console output:
> Enter array size: 4 Sum = 14
# The unique_ptr smart pointer
Rewrite the previous program using the std::unique_ptr smart pointer instead of manually allocating and freeing the dynamic array.
Example program run:
> Enter array size: 4 Sum = 14
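The task statement itself is not shown here, but judging by the heading and the sample output it repeats the previous exercise with a smart pointer; a possible sketch:
```
#include <iostream>
#include <memory>
int main()
{
    std::cout << "Enter array size: ";
    unsigned n;
    std::cin >> n;
    // unique_ptr releases the array automatically when it goes out of scope
    std::unique_ptr<int[]> numbers{new int[n]};
    int sum{};
    for (unsigned i{}; i < n; i++)
    {
        numbers[i] = i * i;
        sum += numbers[i];
    }
    std::cout << "Sum = " << sum << std::endl;
}
```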
Create a class Integer with a single private variable of type `int`. Define a class constructor that prints a message when an object is created.
Define functions for getting and setting the variable and for printing its value. In the main function, create an Integer object and call its member functions, getting, setting, and printing the value of the variable of each object.
Take the Integer class from the previous task and separate the declarations of the class functions from their definitions.
# Friend functions
Given the following Integer class, which represents a number:
Add a function to it that compares the current object with another Integer object passed as an argument. The function should return a number greater than zero if the current object is greater than the argument, a number less than zero if the current object is less than the argument, and 0 if the objects are equal.
In the main function, create several Integer objects and compare them.
Make the compare function defined inside the Integer class in the previous task a friend function.
Define a base class Animal that represents an animal and contains two private variables: a string for the animal's name and an integer for the animal's weight. Also define a public print function that prints a message with the name and weight of the Animal object to the console.
Also create two derived classes Cat and Dog that inherit from the Animal class. In the main function, create several Cat and Dog objects and use the print function to output information about these objects to the console.
Define an abstract class Animal that represents an animal and has two variables: name (the animal's name) and weight (the animal's weight). Also define in the Animal class a pure virtual function `makeSound`, which represents the sounds the animal makes, and a print function that prints the animal's name and weight to the console. Define classes Cat and Dog that inherit from the Animal class and override the `makeSound` function.
Console output:
> Name: Murzik Weight: 15kg Miau, miau. I am a cool kote Name: Pushok Weight: 20kg Gov gov, I am a dog
Given the class:
Define for the Integer class an operator that allows an Integer object to be multiplied by a number. Provide versions for multiplication by both an integer and a floating-point number.
Create a class Rational that represents a fraction. Define in it variables representing the numerator and the denominator. Also define addition and multiplication operators for this class, as well as a console output operator.
Define a vector (a vector object) for storing int numbers. The user first enters a number N from the console, the size of the vector. Then, in a loop, the user enters N numbers from the console, which are added to the vector. After input, use a loop to print all numbers from the vector on one line in reverse order.
Example program run:
> Enter vector count: 5 Enter 5 numbers 2 3 4 5 7 7 5 4 3 2
Example program run:
> Enter vector size: 4 Sum = 14
Write a function that computes the arithmetic mean of the numbers in a std::vector<double> vector and in a `double[]` array.
To avoid writing separate functions for the array and the vector, use the std::span<double> type.
Console output:
> Average of num_array: 5.5 Average of num_vector: 5.5
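A possible sketch (requires C++20 for std::span; the data and names are illustrative):
```
#include <iostream>
#include <span>
#include <vector>

double average(std::span<const double> values)
{
    double sum{};
    for (double v : values) sum += v;
    return values.empty() ? 0.0 : sum / values.size();
}

int main()
{
    double num_array[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    std::vector<double> num_vector{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    std::cout << "Average of num_array: " << average(num_array) << std::endl;
    std::cout << "Average of num_vector: " << average(num_vector) << std::endl;
}
```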
# String input and output
Write a program that reads the names and grades of an arbitrary number of students. Compute and print the average grade, and print the names and grades of all the students.
Example program run:
> Enter a student name: Tom Enter Tom's grade: 5 Finish (y/n): n Enter a student name: Bob Enter Bob's grade: 4 Finish (y/n): n Enter a student name: Sam Enter Sam's grade: 3 Finish (y/n): y Average grade: 4 Name Grade Tom 5 Bob 4 Sam 3
Write a program that reads text. Find and store each unique word and the number of its occurrences in the text. Print the words and their occurrence counts.
Example program run:
> Enter some text: Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text Words Lorem: 2 Ipsum: 2 is: 1 simply: 1 dummy: 2 text: 2 of: 1 the: 2 printing: 1 and: 1 typesetting: 1 industry: 2 has: 1 been: 1 s: 1 standard: 1
Write a program that reads some text of arbitrary length from the keyboard. It then asks for a word that should be replaced in the text by as many asterisks as there are characters in the word. Only whole words are replaced, and every occurrence of the word must be replaced regardless of its case. Then print the resulting string to the console.
Example program run:
> Enter some text: A Friend in need is a friend indeed. Enter the word to be replaced: friend A ****** in need is a ****** indeed.
Write a program that asks for two words and determines whether one is an anagram of the other. An anagram of a word is formed by rearranging its letters, for example "listen" and "silent".
Example program run:
> c:\cpp>g++ hello.cpp -o hello & hello Enter the first word: listen Enter the second word: silent listen and silent are anagrams c:\cpp>g++ hello.cpp -o hello & hello Enter the first word: listen Enter the second word: lent listen and lent are not anagrams
Write a program that reads text from the console. The program then reads a character from the console and finds all words in the text that begin with that character (in any case). Finally, a list of all these words is printed to the console.
Example program run:
> Enter a text: A friend in need is a friend indeed. Enter a char: i Words beginning with 'i' are: in is indeed
Define a lambda expression that returns the number of strings in a vector
```
std::vector<std::string>
```
that begin with a given letter.
Console output
> Names: <NAME> <NAME> Alice 2 names begin(s) with T
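One possible sketch using std::count_if (the data below is illustrative, not the original solution):
```
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> names{"Tom", "Bob", "Tim", "Alice"};
    // lambda: how many strings in the vector start with the given letter
    auto countByLetter = [](const std::vector<std::string>& strings, char letter)
    {
        return std::count_if(strings.begin(), strings.end(),
            [letter](const std::string& s) { return !s.empty() && s[0] == letter; });
    };
    std::cout << countByLetter(names, 'T') << " names begin(s) with T" << std::endl;
}
```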
|
az_saas | hex | Erlang | AzSaas
===
Elixir wrapper for Azure Marketplace SaaS fulfillment APIs version 2.
Usage
---
For the production API you need an access-token.
See: [Register a SaaS application](https://docs.microsoft.com/en-us/azure/marketplace/partner-center-portal/pc-saas-registration)
```
iex> AzSaas.list_subscriptions("myRealTokenHere")
{:ok, %HTTPoison.Response{...}}
```
API versions
---
production API version (default): `"2018-08-31"`
mock API version: `"2018-09-15"`
To set the `api-version` query param:
Using mock API example 1
---
```
iex> AzSaas.list_subscriptions("noRealTokenForMockAPIRequired", [], [params: %{"api-version" => "2018-09-15"}])
```
Using mock API example 2
---
```
iex> Application.put_env(:az_saas, :api_version, "2018-09-15")
iex> AzSaas.list_subscriptions("noRealTokenForMockAPIRequired")
```
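For instance, a landing-page token could be resolved and the subscription then activated against the mock API like this (the token and IDs are placeholders, and the responses shown are only a sketch):
```
iex> AzSaas.resolve_subscription("noRealTokenForMockAPIRequired", "marketplacePurchaseToken")
{:ok, %HTTPoison.Response{...}}
iex> AzSaas.activate_subscription("noRealTokenForMockAPIRequired", "subscriptionId", "planId", 1)
{:ok, %HTTPoison.Response{...}}
```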
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[activate_subscription(access_token, subscription_id, plan_id, quantity, headers \\ [], options \\ [])](#activate_subscription/6)
Activate a subscription
[change_plan(access_token, subscription_id, plan_id, headers \\ [], options \\ [])](#change_plan/5)
Update the plan on the subscription.
[change_quantity(access_token, subscription_id, quantity, headers \\ [], options \\ [])](#change_quantity/5)
Update the quantity on the subscription.
[delete_subscription(access_token, subscription_id, headers \\ [], options \\ [])](#delete_subscription/4)
Unsubscribe and delete the specified subscription.
[get_operation_status(access_token, subscription_id, operation_id, headers \\ [], options \\ [])](#get_operation_status/5)
Enables the publisher to track the status of the specified triggered async operation (such as Subscribe, Unsubscribe, ChangePlan, or ChangeQuantity).
[get_subscription(access_token, subscription_id, headers \\ [], options \\ [])](#get_subscription/4)
Gets the specified SaaS subscription. Use this call to get license information and plan information.
[list_available_plans(access_token, subscription_id, headers \\ [], options \\ [])](#list_available_plans/4)
Use this call to find out if there are any private or public offers for the current publisher.
[list_outstanding_operations(access_token, subscription_id, headers \\ [], options \\ [])](#list_outstanding_operations/4)
Lists the outstanding operations for the current publisher.
[list_subscriptions(access_token, headers \\ [], options \\ [])](#list_subscriptions/3)
Lists all the SaaS subscriptions for a publisher.
[resolve_subscription(access_token, token, headers \\ [], options \\ [])](#resolve_subscription/4)
The resolve endpoint enables the publisher to resolve a marketplace token to a persistent resource ID.
See: [Azure Marketplace docs](https://docs.microsoft.com/en-us/azure/marketplace/partner-center-portal/pc-saas-fulfillment-api-v2#resolve-a-subscription)
[update_operation_status(access_token, subscription_id, operation_id, plan_id, quantity, status, headers \\ [], options \\ [])](#update_operation_status/8)
Update the status of an operation to indicate success or failure with the provided values.
[Link to this section](#functions)
Functions
=== |
progress | cran | R | Package ‘progress’
October 14, 2022
Title Terminal Progress Bars
Version 1.2.2
Author <NAME> [aut, cre], <NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Description Configurable Progress bars, they may include percentage,
elapsed time, and/or the estimated completion time. They work in
terminals, in 'Emacs' 'ESS', 'RStudio', 'Windows' 'Rgui' and the
'macOS' 'R.app'. The package also provides a 'C++' 'API', that works
with or without 'Rcpp'.
License MIT + file LICENSE
LazyData true
URL https://github.com/r-lib/progress#readme
BugReports https://github.com/r-lib/progress/issues
Imports hms, prettyunits, R6, crayon
Suggests Rcpp, testthat, withr
RoxygenNote 6.1.0
Encoding UTF-8
NeedsCompilation no
Repository CRAN
Date/Publication 2019-05-16 21:30:03 UTC
R topics documented:
progress_bar
progress_bar Progress bar in the terminal
Description
Progress bars are configurable, may include percentage, elapsed time, and/or the estimated completion time. They work in the command line, in Emacs and in R Studio. The progress package was
heavily influenced by https://github.com/tj/node-progress
Creating the progress bar
A progress bar is an R6 object, that can be created with progress_bar$new(). It has the following
arguments:
format The format of the progress bar. A number of tokens can be used here, see them below. It
defaults to "[:bar] :percent", which means that the progress bar is within brackets on the
left, and the percentage is printed on the right.
total Total number of ticks to complete. If it is unknown, use NA here. Defaults to 100.
width Width of the progress bar. Default is the current terminal width (see options() and width)
minus two.
stream This argument is deprecated, and message() is used to print the progress bar.
complete Completion character, defaults to =.
incomplete Incomplete character, defaults to -.
current Current character, defaults to >.
callback Callback function to call when the progress bar finishes. The progress bar object itself is
passed to it as the single parameter.
clear Whether to clear the progress bar on completion. Defaults to TRUE.
show_after Amount of time in seconds, after which the progress bar is shown on the screen. For
very short processes, it is probably not worth showing it at all. Defaults to two tenths of a second.
force Whether to force showing the progress bar, even if the given (or default) stream does not
seem to support it.
message_class Extra classes to add to the message conditions signalled by the progress bar.
Using the progress bar
Three functions can update a progress bar. progress_bar$tick() increases the number of ticks by
one (or another specified value). progress_bar$update() sets a given ratio and progress_bar$terminate()
removes the progress bar. progress_bar$finished can be used to see if the progress bar has finished.
Note that the progress bar is not shown immediately, but only after show_after seconds. (Set this
to zero, and call ‘tick(0)‘ to force showing the progress bar.)
progress_bar$message() prints a message above the progress bar. It fails if the progress bar has
already finished.
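For example (a small sketch, not part of the original manual page):
pb <- progress_bar$new(total = 3, show_after = 0)
pb$tick(0) # force the bar to show
pb$message("working...")
pb$tick()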
Tokens
They can be used in the format argument when creating the progress bar.
:bar The progress bar itself.
:current Current tick number.
:total Total ticks.
:elapsed Elapsed time in seconds.
:elapsedfull Elapsed time in hh:mm:ss format.
:eta Estimated completion time in seconds.
:percent Completion percentage.
:rate Download rate, bytes per second. See example below.
:tick_rate Similar to :rate, but we don’t assume that the units are bytes, we just print the raw
number of ticks per second.
:bytes Shows :current, formatted as bytes. Useful for downloads or file reads if you don’t know
the size of the file in advance. See example below.
:spin Shows a spinner that updates even when progress is advanced by zero.
Custom tokens are also supported, and you need to pass their values to progress_bar$tick() or
progress_bar$update(), in a named list. See example below.
Options
The ‘progress_enabled‘ option can be set to ‘FALSE‘ to turn off the progress bar. This works for
the C++ progress bar as well.
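For example, a session-wide opt-out looks like this (sketch):
options(progress_enabled = FALSE)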
Examples
## We don't run the examples on CRAN, because they take >10s
## altogether. Unfortunately it is hard to create a set of
## meaningful progress bar examples that also run quickly.
## Not run:
## Basic
pb <- progress_bar$new(total = 100)
for (i in 1:100) {
pb$tick()
Sys.sleep(1 / 100)
}
## ETA
pb <- progress_bar$new(
format = " downloading [:bar] :percent eta: :eta",
total = 100, clear = FALSE, width= 60)
for (i in 1:100) {
pb$tick()
Sys.sleep(1 / 100)
}
## Elapsed time
pb <- progress_bar$new(
format = " downloading [:bar] :percent in :elapsed",
total = 100, clear = FALSE, width= 60)
for (i in 1:100) {
pb$tick()
Sys.sleep(1 / 100)
}
## Spinner
pb <- progress_bar$new(
format = "(:spin) [:bar] :percent",
total = 30, clear = FALSE, width = 60)
for (i in 1:30) {
pb$tick()
Sys.sleep(3 / 100)
}
## Custom tokens
pb <- progress_bar$new(
format = " downloading :what [:bar] :percent eta: :eta",
clear = FALSE, total = 200, width = 60)
f <- function() {
for (i in 1:100) {
pb$tick(tokens = list(what = "foo "))
Sys.sleep(2 / 100)
}
for (i in 1:100) {
pb$tick(tokens = list(what = "foobar"))
Sys.sleep(2 / 100)
}
}
f()
## Download (or other) rates
pb <- progress_bar$new(
format = " downloading foobar at :rate, got :bytes in :elapsed",
clear = FALSE, total = NA, width = 60)
f <- function() {
for (i in 1:100) {
pb$tick(sample(1:100 * 1000, 1))
Sys.sleep(2/100)
}
pb$tick(1e7)
invisible()
}
f()
pb <- progress_bar$new(
format = " got :current rows at :tick_rate/sec",
clear = FALSE, total = NA, width = 60)
f <- function() {
for (i in 1:100) {
pb$tick(sample(1:100, 1))
Sys.sleep(2/100)
}
pb$terminate()
invisible()
}
f()
## End(Not run) |
learnxinyminutes_com_docs_powershell | free_programming_book | Perl | # Learn X in Y minutes
Date: 2000-01-01
Categories:
Tags:
Get the code: LearnPowershell.ps1
PowerShell is the Windows scripting language and configuration management framework from Microsoft built on the .NET Framework. Windows 7 and up ship with PowerShell. Nearly all examples below can be a part of a shell script or executed directly in the shell.
A key difference with Bash is that it is mostly objects that you manipulate rather than plain text. After years of evolving, it resembles Python a bit.
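For example, a pipeline hands you whole objects whose typed properties you can inspect, rather than lines of text to parse (a small illustration, not part of the original lesson):
```
# Each result is a Process object; Name and CPU are properties, not substrings
Get-Process | Sort-Object CPU -Descending | Select-Object -First 3 Name, CPU
```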
Powershell as a Language:
```
# Single line comments start with a number symbol.
<#
Multi-line comments
like so
#>
#####################################################
## 1. Primitive Datatypes and Operators
####################################################
# Numbers
3 # => 3
# Math
1 + 1 # => 2
8 - 1 # => 7
10 * 2 # => 20
35 / 5 # => 7.0
# Powershell uses banker's rounding,
# meaning [int]1.5 would round to 2 but so would [int]2.5
# Division always returns a float.
# You must cast result to [int] to round.
[int]5 / [int]3 # => 1.66666666666667
[int]-5 / [int]3 # => -1.66666666666667
5.0 / 3.0 # => 1.66666666666667
-5.0 / 3.0 # => -1.66666666666667
[int]$result = 5 / 3
$result # => 2
# Modulo operation
7 % 3 # => 1
# Exponentiation requires longform or the built-in [Math] class.
[Math]::Pow(2,3) # => 8
# Enforce order of operations with parentheses.
1 + 3 * 2 # => 7
(1 + 3) * 2 # => 8
# Boolean values are primitives (Note: the $)
$True # => True
$False # => False
# negate with !
!$True # => False
!$False # => True
# Boolean Operators
# Note "-and" and "-or" usage
$True -and $False # => False
$False -or $True # => True
# True and False are actually 1 and 0 but only support limited arithmetic.
# However, casting the bool to int resolves this.
$True + $True # => 2
$True * 8 # => '[System.Boolean] * [System.Int32]' is undefined
[int]$True * 8 # => 8
$False - 5 # => -5
# Comparison operators look at the numerical value of True and False.
0 -eq $False # => True
1 -eq $True # => True
2 -eq $True # => False
-5 -ne $False # => True
# Using boolean logical operators on ints casts to booleans for evaluation.
# but their non-cast value is returned
# Don't mix up with bool(ints) and bitwise -band/-bor
[bool](0) # => False
[bool](4) # => True
[bool](-6) # => True
0 -band 2 # => 0
-5 -bor 0 # => -5
# Equality is -eq (equals)
1 -eq 1 # => True
2 -eq 1 # => False
# Inequality is -ne (notequals)
1 -ne 1 # => False
2 -ne 1 # => True
# More comparisons
1 -lt 10 # => True
1 -gt 10 # => False
2 -le 2 # => True
2 -ge 2 # => True
# Seeing whether a value is in a range
1 -lt 2 -and 2 -lt 3 # => True
2 -lt 3 -and 3 -lt 2 # => False
# (-is vs. -eq) -is checks if two objects are the same type.
# -eq checks if the objects have the same values.
# Note: we called '[Math]' from .NET previously without the preceding
# namespaces. We can do the same with [Collections.ArrayList] if preferred.
[System.Collections.ArrayList]$a = @() # Point a at a new list
$a = (1,2,3,4)
$b = $a # => Point b at what a is pointing to
$b -is $a.GetType() # => True, a and b equal same type
$b -eq $a # => True, a and b values are equal
[System.Collections.Hashtable]$b = @{} # => Point a at a new hash table
$b = @{'one' = 1
'two' = 2}
$b -is $a.GetType() # => False, a and b types not equal
# Strings are created with " or ' but " is required for string interpolation
"This is a string."
'This is also a string.'
# Strings can be added too! But try not to do this.
"Hello " + "world!" # => "Hello world!"
# A string can be treated like a list of characters
"Hello world!"[0] # => 'H'
# You can find the length of a string
("This is a string").Length # => 16
# You can also format using f-strings or formatted string literals.
$name = "Steve"
$age = 22
"He said his name is $name."
# => "He said his name is Steve"
"{0} said he is {1} years old." -f $name, $age
# => "Steve said he is 22 years old"
"$name's name is $($name.Length) characters long."
# => "Steve's name is 5 characters long."
# Escape Characters in Powershell
# Many languages use the '\', but Windows uses this character for
# file paths. Powershell thus uses '`' to escape characters
# Take caution when working with files, as '`' is a
# valid character in NTFS filenames.
"Showing`nEscape Chars" # => new line between Showing and Escape
"Making`tTables`tWith`tTabs" # => Format things with tabs
# Negate pound sign to prevent comment
# Note that the function of '#' is removed, but '#' is still present
`#Get-Process # => Fail: not a recognized cmdlet
# $null is not an object
$null # => None
# $null, 0, and empty strings and arrays all evaluate to False.
# All other values are True
function Test-Value ($value) {
if ($value) {
Write-Output 'True'
}
else {
Write-Output 'False'
}
}
Test-Value ($null) # => False
Test-Value (0) # => False
Test-Value ("") # => False
Test-Value [] # => True
# *[] calls .NET class; creates '[]' string when passed to function
Test-Value ({}) # => True
Test-Value @() # => False
####################################################
## 2. Variables and Collections
####################################################
# Powershell uses the "Write-Output" function to print
Write-Output "I'm Posh. Nice to meet you!" # => I'm Posh. Nice to meet you!
# Simple way to get input data from console
$userInput = Read-Host "Enter some data: " # Returns the data as a string
# There are no declarations, only assignments.
# Convention is to use camelCase or PascalCase, whatever your team uses.
$someVariable = 5
$someVariable # => 5
# Accessing a previously unassigned variable does not throw exception.
# The value is $null by default
# Ternary Operators exist in Powershell 7 and up
0 ? 'yes' : 'no' # => no
# The default array object in Powershell is a fixed length array.
$defaultArray = "thing","thing2","thing3"
# you can add objects with '+=', but cannot remove objects.
$defaultArray.Add("thing4") # => Exception "Collection was of a fixed size."
# To have a more workable array, you'll want the .NET [ArrayList] class
# It is also worth noting that ArrayLists are significantly faster
# ArrayLists store sequences
[System.Collections.ArrayList]$array = @()
# You can start with a prefilled ArrayList
[System.Collections.ArrayList]$otherArray = @(4, 5, 6)
# Add to the end of a list with 'Add' (Note: produces output, append to $null)
$array.Add(1) > $null # $array is now [1]
$array.Add(2) > $null # $array is now [1, 2]
$array.Add(4) > $null # $array is now [1, 2, 4]
$array.Add(3) > $null # $array is now [1, 2, 4, 3]
# Remove from end with index of count of objects-1; array index starts at 0
$array.RemoveAt($array.Count-1) # => 3 and array is now [1, 2, 4]
# Let's put it back
$array.Add(3) > $null # array is now [1, 2, 4, 3] again.
# Access a list like you would any array
$array[0] # => 1
# Look at the last element
$array[-1] # => 3
# Looking out of bounds returns nothing
$array[4] # blank line returned
# You can look at ranges with the range operator (..).
# Both the start index and the end index are included.
$array[1..3] # Return elements at indices 1 through 3 => [2, 4, 3]
$array[2..-1] # Indices 2, 1, 0 and -1 (the last element) => [4, 2, 1, 3]
$array[0..3] # Return array from beginning until index 3 => [1, 2, 4, 3]
$array[0..2] # Return elements at indices 0 through 2 => [1, 2, 4]
$array.Reverse() # mutates array to reverse order => [3, 4, 2, 1]
# Use any combination of these to make advanced slices
# Remove arbitrary elements from an array with "Remove"
$array.Remove($array[2]) # $array is now [1, 2, 3]
# Insert an element at a specific index
$array.Insert(1, 2) # $array is now [1, 2, 3] again
# Get the index of the first item found matching the argument
$array.IndexOf(2) # => 1
$array.IndexOf(6) # Returns -1 as "outside array"
# You can add arrays
# Note: values for $array and for $otherArray are not modified.
$array + $otherArray # => [1, 2, 3, 4, 5, 6]
# Concatenate arrays with "AddRange()"
$array.AddRange($otherArray) # Now $array is [1, 2, 3, 4, 5, 6]
# Check for existence in a array with "in"
1 -in $array # => True
# Examine length with "Count" (Note: "Length" on arrayList = each items length)
$array.Count # => 6
# Tuples are like arrays but are immutable.
# To use Tuples in powershell, you must use the .NET tuple class.
$tuple = [System.Tuple]::Create(1, 2, 3)
$tuple.Item(0) # => 1
$tuple.Item(0) = 3 # Raises a TypeError
# You can do some of the array methods on tuples, but they are limited.
$tuple.Length # => 3
$tuple + (4, 5, 6) # => Exception
$tuple[0..2] # => $null
2 -in $tuple # => False
# Hashtables store mappings from keys to values, similar to Dictionaries.
$emptyHash = @{}
# Here is a prefilled dictionary
$filledHash = @{"one"= 1
"two"= 2
"three"= 3}
# Look up values with []
$filledHash["one"] # => 1
# Get all keys as an iterable with ".Keys".
# items maintain the order at which they are inserted into the dictionary.
$filledHash.Keys # => ["one", "two", "three"]
# Get all values as an iterable with ".Values".
$filledHash.Values # => [1, 2, 3]
# Check for existence of keys or values in a hash with "-in"
"one" -in $filledHash.Keys # => True
1 -in $filledHash.Values # => False
# Looking up a non-existing key returns $null
$filledHash["four"] # $null
# Adding to a dictionary
$filledHash.Add("five",5) # $filledHash["five"] is set to 5
$filledHash.Add("five",6) # exception "Item with key "five" has already been added"
$filledHash["four"] = 4 # $filledHash["four"] is set to 4, running again does nothing
# Remove keys from a dictionary with del
$filledHash.Remove("one") # Removes the key "one" from filled dict
####################################################
## 3. Control Flow and Iterables
####################################################
# Let's just make a variable
$someVar = 5
# Here is an if statement.
# This prints "$someVar is smaller than 10"
if ($someVar -gt 10) {
Write-Output "$someVar is bigger than 10."
}
elseif ($someVar -lt 10) { # This elseif clause is optional.
Write-Output "$someVar is smaller than 10."
}
else { # This is optional too.
Write-Output "$someVar is indeed 10."
}
<#
Foreach loops iterate over arrays
prints:
dog is a mammal
cat is a mammal
mouse is a mammal
#>
foreach ($animal in ("dog", "cat", "mouse")) {
# You can use -f to interpolate formatted strings
"{0} is a mammal" -f $animal
}
<#
For loops iterate over arrays and you can specify indices
prints:
0 a
1 b
2 c
3 d
4 e
5 f
6 g
7 h
#>
$letters = ('a','b','c','d','e','f','g','h')
for($i=0; $i -le $letters.Count-1; $i++){
Write-Host $i, $letters[$i]
}
<#
While loops go until a condition is no longer met.
prints:
0
1
2
3
#>
$x = 0
while ($x -lt 4) {
Write-Output $x
$x += 1 # Shorthand for x = x + 1
}
# Switch statements are more powerful compared to most languages
$val = "20"
switch($val) {
{ $_ -eq 42 } { "The answer equals 42"; break }
'20' { "Exactly 20"; break }
{ $_ -like 's*' } { "Case insensitive"; break }
{ $_ -clike 's*'} { "clike, ceq, cne for case sensitive"; break }
{ $_ -notmatch '^.*$'} { "Regex matching. cnotmatch, cnotlike, ..."; break }
default { "Others" }
}
# Handle exceptions with a try/catch block
try {
# Use "throw" to raise an error
throw "This is an error"
}
catch {
Write-Output $Error.ExceptionMessage
}
finally {
Write-Output "We can clean up resources here"
}
# Writing to a file
$contents = @{"aa"= 12
"bb"= 21}
$contents | Export-CSV "$env:HOMEDRIVE\file.csv" # writes to a file
$contents = "test string here"
$contents | Out-File "$env:HOMEDRIVE\file.txt" # writes to another file
# Read file contents and convert to json
Get-Content "$env:HOMEDRIVE\file.csv" | ConvertTo-Json
####################################################
## 4. Functions
####################################################
# Use "function" to create new functions
# Keep the Verb-Noun naming convention for functions
function Add-Numbers {
$args[0] + $args[1]
}
Add-Numbers 1 2 # => 3
# Calling functions with parameters
function Add-ParamNumbers {
param( [int]$firstNumber, [int]$secondNumber )
$firstNumber + $secondNumber
}
Add-ParamNumbers -FirstNumber 1 -SecondNumber 2 # => 3
# Functions with named parameters, parameter attributes, parsable documentation
<#
.SYNOPSIS
Setup a new website
.DESCRIPTION
Creates everything your new website needs for much win
.PARAMETER siteName
The name for the new website
.EXAMPLE
New-Website -Name FancySite -Po 5000
New-Website SiteWithDefaultPort
New-Website siteName 2000 # ERROR! Port argument could not be validated
('name1','name2') | New-Website -Verbose
#>
function New-Website() {
[CmdletBinding()]
param (
[Parameter(ValueFromPipeline=$true, Mandatory=$true)]
[Alias('name')]
[string]$siteName,
[ValidateSet(3000,5000,8000)]
[int]$port = 3000
)
BEGIN { Write-Output 'Creating new website(s)' }
PROCESS { Write-Output "name: $siteName, port: $port" }
END { Write-Output 'Website(s) created' }
}
####################################################
## 5. Modules
####################################################
# You can import modules and install modules
# The Install-Module is similar to pip or npm, pulls from Powershell Gallery
Install-Module dbaTools
Import-Module dbaTools
$query = "SELECT * FROM dbo.sometable"
$queryParams = @{
SqlInstance = 'testInstance'
Database = 'testDatabase'
Query = $query
}
Invoke-DbaQuery @queryParams
# You can get specific functions from a module
Import-Module -Function Invoke-DbaQuery
# Powershell modules are just ordinary Posh files. You
# can write your own, and import them. The name of the
# module is the same as the name of the file.
# You can find out which functions and attributes
# are defined in a module.
Get-Command -module dbaTools
Get-Help dbaTools -Full
####################################################
## 6. Classes
####################################################
# We use the "class" statement to create a class
class Instrument {
[string]$Type
[string]$Family
}
$instrument = [Instrument]::new()
$instrument.Type = "String Instrument"
$instrument.Family = "Plucked String"
$instrument
<# Output:
Type Family
---- ------
String Instrument Plucked String
#>
#####################################################
## 6.1 Inheritance
####################################################
# Inheritance allows new child classes to be defined that inherit
# methods and variables from their parent class.
class Guitar : Instrument
{
[string]$Brand
[string]$SubType
[string]$ModelType
[string]$ModelNumber
}
$myGuitar = [Guitar]::new()
$myGuitar.Brand = "Taylor"
$myGuitar.SubType = "Acoustic"
$myGuitar.ModelType = "Presentation"
$myGuitar.ModelNumber = "PS14ce Blackwood"
$myGuitar.GetType()
<#
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True False Guitar Instrument
#>
#####################################################
## 7. Advanced
####################################################
# The powershell pipeline allows things like High-Order Functions.
# Group-Object is a handy cmdlet that does incredible things.
# It works much like a GROUP BY in SQL.
<#
The following will get all the running processes,
group them by Name,
and tell us how many instances of each process we have running.
Tip: Chrome and svcHost are usually big numbers in this regard.
#>
Get-Process | Foreach-Object ProcessName | Group-Object
# Useful pipeline examples are iteration and filtering.
1..10 | ForEach-Object { "Loop number $PSITEM" }
1..10 | Where-Object { $PSITEM -gt 5 } | ConvertTo-Json
# A notable pitfall of the pipeline is its performance when
# compared with other options.
# Additionally, raw bytes are not passed through the pipeline,
# so passing an image causes some issues.
# See more on that in the link at the bottom.
<#
Asynchronous functions exist in the form of jobs.
Typically a procedural language,
Powershell can operate non-blocking functions when invoked as Jobs.
#>
# This function is known to be non-optimized, and therefore slow.
$installedApps = Get-CimInstance -ClassName Win32_Product
# If we had a script, it would hang at this func for a period of time.
$scriptBlock = {Get-CimInstance -ClassName Win32_Product}
Start-Job -ScriptBlock $scriptBlock
# This will start a background job that runs the command.
# You can then obtain the status of jobs and their returned results.
$allJobs = Get-Job
$jobResponse = Get-Job | Receive-Job
# Math is built in to powershell and has many functions.
$r=2
$pi=[math]::pi
$r2=[math]::pow( $r, 2 )
$area = $pi*$r2
$area
# To see all possibilities, check the members.
[System.Math] | Get-Member -Static -MemberType All
<#
This is a silly one:
You may one day be asked to create a func that could take $start and $end
and reverse anything in an array within the given range
based on an arbitrary array without mutating the original array.
Let's see one way to do that and introduce another data structure.
#>
$targetArray = 'a','b','c','d','e','f','g','h','i','j','k','l','m'
function Format-Range ($start, $end, $array) {
[System.Collections.ArrayList]$firstSectionArray = @()
[System.Collections.ArrayList]$secondSectionArray = @()
[System.Collections.Stack]$stack = @()
for ($index = 0; $index -lt $array.Count; $index++) {
if ($index -lt $start) {
$firstSectionArray.Add($array[$index]) > $null
}
elseif ($index -ge $start -and $index -le $end) {
$stack.Push($array[$index])
}
else {
$secondSectionArray.Add($array[$index]) > $null
}
}
$finalArray = $firstSectionArray + $stack.ToArray() + $secondSectionArray
return $finalArray
}
Format-Range 2 6 $targetArray
# => 'a','b','g','f','e','d','c','h','i','j','k','l','m'
# The previous method works, but uses extra memory by allocating new arrays.
# It's also kind of lengthy.
# Let's see how we can do this without allocating a new array.
# This is slightly faster as well.
function Format-Range ($start, $end) {
while ($start -lt $end)
{
$temp = $targetArray[$start]
$targetArray[$start] = $targetArray[$end]
$targetArray[$end] = $temp
$start++
$end--
}
return $targetArray
}
Format-Range 2 6 # => 'a','b','g','f','e','d','c','h','i','j','k','l','m'
```
Powershell as a Tool:
Getting Help:
```
# Find commands
Get-Command about_* # alias: gcm
Get-Command -Verb Add
Get-Alias ps
Get-Alias -Definition Get-Process
Get-Help ps | less # alias: help
ps | Get-Member # alias: gm
Show-Command Get-WinEvent # Display GUI to fill in the parameters
Update-Help # Run as admin
```
If you are uncertain about your environment:
```
Get-ExecutionPolicy -List
Set-ExecutionPolicy AllSigned
# Execution policies include:
# - Restricted: Scripts won't run.
# - RemoteSigned: Downloaded scripts run only if signed by a trusted publisher.
# - AllSigned: Scripts need to be signed by a trusted publisher.
# - Unrestricted: Run all scripts.
help about_Execution_Policies # for more info
# Current PowerShell version:
$PSVersionTable
```
```
# Calling external commands, executables,
# and functions with the call operator.
# Exe paths with arguments passed or containing spaces can create issues.
C:\Program Files\dotnet\dotnet.exe
# The term 'C:\Program' is not recognized as a name of a cmdlet,
# function, script file, or executable program.
# Check the spelling of the name, or if a path was included,
# verify that the path is correct and try again
"C:\Program Files\dotnet\dotnet.exe"
C:\Program Files\dotnet\dotnet.exe # returns string rather than execute
&"C:\Program Files\dotnet\dotnet.exe --help" # fail
&"C:\Program Files\dotnet\dotnet.exe" --help # success
# Alternatively, you can use dot-sourcing here
."C:\Program Files\dotnet\dotnet.exe" --help # success
# the call operator (&) is similar to Invoke-Expression,
# but IEX runs in current scope.
# One usage of '&' would be to invoke a scriptblock inside of your script.
# Notice the variables are scoped
$i = 2
$scriptBlock = { $i=5; Write-Output $i }
& $scriptBlock # => 5
$i # => 2
invoke-expression ' $i=5; Write-Output $i ' # => 5
$i # => 5
# Alternatively, to preserve changes to public variables
# you can use "Dot-Sourcing". This will run in the current scope.
$x=1
&{$x=2};$x # => 1
.{$x=2};$x # => 2
# Remoting into computers is easy.
Enter-PSSession -ComputerName RemoteComputer
# Once remoted in, you can run commands as if you're local.
RemoteComputer\PS> Get-Process powershell
<#
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
1096 44 156324 179068 29.92 11772 1 powershell
545 25 49512 49852 25348 0 powershell
#>
RemoteComputer\PS> Exit-PSSession
<#
Powershell is an incredible tool for Windows management and Automation.
Let's take the following scenario:
You have 10 servers.
You need to check whether a service is running on all of them.
You can RDP and log in, or PSSession to all of them, but why?
Check out the following
#>
$serverList = @(
'server1',
'server2',
'server3',
'server4',
'server5',
'server6',
'server7',
'server8',
'server9',
'server10'
)
[scriptblock]$script = {
Get-Service -DisplayName 'Task Scheduler'
}
foreach ($server in $serverList) {
$cmdSplat = @{
ComputerName = $server
JobName = 'checkService'
ScriptBlock = $script
AsJob = $true
ErrorAction = 'SilentlyContinue'
}
Invoke-Command @cmdSplat | Out-Null
}
<#
Here we've invoked jobs across many servers.
We can now Receive-Job and see if they're all running.
Now scale this up 100x as many servers :)
#>
```
Interesting Projects
`cd` that reads your mind
Got a suggestion? A correction, perhaps? Open an Issue on the Github Repo, or make a pull request yourself!
Originally contributed by <NAME>, and updated by 7 contributor(s).
|
hfs | npm | JavaScript | [HFS: HTTP File Server (version 3)](#hfs-http-file-server-version-3)
===
[Introduction](#introduction)
---
HFS is the best way via web to access or share files from your disk.
* You be the server, share files **fresh from your disk**, with **unlimited** space and bandwidth.
* It's all very **fast**. Try download zipping 100GB, it starts immediately!
* **Easy to use**. HFS tries to detect problems and suggest solutions.
* Share **even a single file** with our *virtual file system*, even with a different name, all without touching the real file. Present things the way you want!
* **Watch** all activities in real-time.
* **Control bandwidth**, decide how much to give.
* **No intermediaries**, give a huge file to your friend without waiting for it to be uploaded on a server first.
This is a full rewrite of [the Delphi version](https://github.com/rejetto/hfs2).
[How does it work](#how-does-it-work)
---
* run HFS on your computer, administration page automatically shows up
* select what files and folders you want to be accessible
* access those files from a phone or another computer just using a browser
* possibly create accounts and limit access to files
[Features](#features)
---
* https
* unicode
* virtual file system
* mobile friendly
* search
* accounts
* resumable downloads & uploads
* download folders as zip archive
* remote delete
* simple website serving
* plug-ins
* real-time monitoring of connections
* [show some files](https://github.com/rejetto/hfs/discussions/270)
* speed throttler
* admin web interface
* multi-language front-end
* virtual hosting (plug-in)
* anti-brute-force (plug-in)
* [reverse-proxy support](https://github.com/rejetto/hfs/wiki/Reverse-proxy)
[Installation](#installation)
---
NB: minimum Windows version required is 8.1 , Windows Server 2012 R2 (because of Node.js 16)
1. go to <https://github.com/rejetto/hfs/releases>
2. click on `Assets`
3. **download** the right version for your system, unzip and launch `hfs` file.
* If you cannot find your system in the list, see next section [Other systems](#other-systems).
4. the browser should automatically open on `localhost` address, so you can configure the rest in the Admin-panel.
* if a browser cannot be opened on the computer where you are installing HFS,
you should enter this command in HFS console: `create-admin <PASSWORD>`
* if you are running as a service and cannot access the console, your best option is to stop it, launch it
at command line (not as a service), and follow previous instruction.
+ if you can never access the console, even with the previous instructions,
you can [edit the config file and add your admin account](https://github.com/rejetto/hfs/blob/HEAD/config.md#accounts)
If you access *Admin-panel* via localhost, by default HFS **won't** require you to login.
If you don't like this behavior, disable it in the Admin-panel or enter this console command `config localhost_admin false`.
### [Other systems](#other-systems)
If your system is not Windows/Linux/Mac or you just don't want to run the binaries, you can try this alternative version:
1. [install node.js](https://nodejs.org)
2. execute at command line `npx hfs@latest`
The `@latest` part is optional, and ensures that you are always up to date.
If this procedure fails, it may be that you are missing one of [these requirements](https://github.com/nodejs/node-gyp#installation).
Configuration and other files will be stored in `%HOME%/.vfs`
### [Service](#service)
If you want to run HFS at boot (as a service), we suggest the following methods
#### [On Linux](#on-linux)
1. [install node.js](https://nodejs.org)
2. create a file `/etc/systemd/system/hfs.service` with this content
```
[Unit]
Description=HFS After=network.target
[Service]
Type=simple Restart=always ExecStart=/usr/bin/npx -y hfs@latest
[Install]
WantedBy=multi-user.target
```
3. run `sudo systemctl daemon-reload && sudo systemctl enable hfs && sudo systemctl start hfs && sudo systemctl status hfs`
NB: update will be attempted at each restart
#### [On Windows](#on-windows)
1. [install node.js](https://nodejs.org)
2. run `npm -g i hfs`
3. run `npx qckwinsvc2 install name="HFS" description="HFS" path="%APPDATA%\npm\node_modules\hfs\src\index.js" args="--cwd %HOMEPATH%\.hfs" now`
To update
* run `npx qckwinsvc2 uninstall name="HFS"`
* run `npm -g update hfs`
* run `npx qckwinsvc2 install name="HFS" description="HFS" path="%APPDATA%\npm\node_modules\hfs\src\index.js" args="--cwd %HOMEPATH%\.hfs" now`
[Console commands](#console-commands)
---
If you have access to HFS' console, you can enter commands. Start with `help` to have a full list.
[Configuration](#configuration)
---
Configuration can be done in several ways
* accessing the Admin-panel with your browser
+ it will automatically open when you start HFS. Bookmark it. If your port is 8000, the address will be <http://localhost:8000/~/admin>
* passing via command line at start in the form `--NAME VALUE`
* using envs in the form `HFS_NAME` (eg: `HFS_PORT`)
* directly editing the `config.yaml` file. As soon as you save it is reloaded and changes are applied
* after HFS has started you can enter console command in the form `config NAME VALUE`
`NAME` stands for the property name that you want to change. See the [complete list](https://github.com/rejetto/hfs/blob/HEAD/config.md).
You can specify a different `config.yaml` at command line with `--config PATH` or similarly with an env `HFS_CONFIG`.
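For example, taking the `port` property as a sample entry from that list, the same setting can be expressed with any of the mechanisms above (a sketch, adjust the value to your needs):

```
hfs --port 8000      # command line parameter at start
HFS_PORT=8000 hfs    # environment variable
config port 8000     # console command while HFS is running
```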
### [Where is it stored](#where-is-it-stored)
Configuration is stored in the file `config.yaml`, exception made for custom HTML which is stored in `custom.html`.
These files are kept in the Current Working Directory (cwd), which is by default the same folder of `hfs.exe`
if you are using this kind of distribution on Windows, or `USER_FOLDER/.hfs` on other systems.
You can decide a different cwd passing `--cwd SOME_FOLDER` parameter at command line.
You can decide also a different file for config by passing `--config SOME_FILE`, or inside an *env* called `HFS_CONFIG`.
Any relative path provided is relative to the *cwd*.
[Check details about config file format](https://github.com/rejetto/hfs/blob/HEAD/config.md).
[Internationalization](#internationalization)
---
It is possible to show the Front-end in other languages.
Translation for some languages is already provided. If you find an error, consider reporting it or [editing the source file](https://github.com/rejetto/hfs/tree/main/src/langs).
In the Languages section of the Admin-panel you can install additional language files.
If your language is missing, please consider [translating yourself](https://github.com/rejetto/hfs/wiki/Translation).
[Why you should upgrade from HFS 2.x to 3](#why-you-should-upgrade-from-hfs-2x-to-3)
---
As you can see from the list of features, we already have some goods that you cannot find in HFS 2.
Other than that, you can also consider:
* it's more robust: it was designed to be an always-running server, while HFS 1-2 was designed for occasional usage (transfer and quit)
* passwords are never really stored, just a non-reversible hash is
* faster search (up to 12x)
* more flexible permissions
But you may still want to stay with HFS 2.x (so far) for the following reasons
* smaller
* more tested
* classic window interface (can be easier for some people)
[Security](#security)
---
While this project focuses on ease of use, we care about security.
* HTTPS support
* Passwords are not saved, and user password is safe even logging in without https thanks to [SRP](https://en.wikipedia.org/wiki/Secure_Remote_Password_protocol)
* Automated tests ran on every release, including libraries audit
* No default admin password
Some actions you can take for improved security:
* use https, better if using a proper certificate, even free with [Letsencrypt](https://letsencrypt.org/).
* have a domain (ddns is ok too), start vhosting plugin, configure your domain, enable "Block requests that are not using any of the domains above"
* install rejetto/antidos plugin
* start antibrute plugin (but it's started by default)
* disable "unprotected admin on localhost"
[Hidden features](#hidden-features)
---
* Appending `#LOGIN` to address will bring up the login dialog
* Appending ?lang=CODE to address will force a specific language
* right/ctrl/command click on toggle-all checkbox will invert each checkbox state
[Contribute](#contribute)
---
There are several ways to contribute
* [Report bugs](https://github.com/rejetto/hfs/issues/new?labels=bug&template=bug_report.md)
It's very important to report bugs, and if you are not so sure about it, don't worry, we'll discuss it.
If you find important security problems, please [contact us privately](mailto:<EMAIL>) so that we can publish a fix before the problem is disclosed, for the safety of other users.
* [Translate to your language](https://github.com/rejetto/hfs/wiki/Translation).
* [Suggest ideas](https://github.com/rejetto/hfs/discussions)
While the project should not become too complex, yours may be an idea for a plugin.
* Submit your code
If you'd like to make a change yourself in the code, please first open an "issue" or "discussion" about it,
so we'll try to cooperate and understand what's the best path for it.
* [Make a plugin](https://github.com/rejetto/hfs/blob/HEAD/dev-plugins.md)
A plugin can change the look (a theme), and/or introduce a new functionality.
[More](#more)
---
* [Build yourself](https://github.com/rejetto/hfs/blob/HEAD/dev.md)
* [License](https://github.com/rejetto/hfs/blob/master/LICENSE.txt)
* [To-do list](https://github.com/rejetto/hfs/blob/HEAD/todo.md)
Readme
---
### Keywords
* file server
* http server |
@types/marker-animate-unobtrusive | npm | JavaScript | [Installation](#installation)
===
> `npm install --save @types/marker-animate-unobtrusive`
[Summary](#summary)
===
This package contains type definitions for marker-animate-unobtrusive (<https://github.com/terikon/marker-animate-unobtrusive>).
[Details](#details)
===
Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/marker-animate-unobtrusive>.
[index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/marker-animate-unobtrusive/index.d.ts)
---
```
/// <reference types="google.maps" />

declare namespace jQuery.easing {
type IEasingType =
| "swing"
| "easeInQuad"
| "easeOutQuad"
| "easeInOutQuad"
| "easeInCubic"
| "easeOutCubic"
| "easeInOutCubic"
| "easeInQuart"
| "easeOutQuart"
| "easeInOutQuart"
| "easeInQuint"
| "easeOutQuint"
| "easeInOutQuint"
| "easeInSine"
| "easeOutSine"
| "easeInOutSine"
| "easeInExpo"
| "easeOutExpo"
| "easeInOutExpo"
| "easeInCirc"
| "easeOutCirc"
| "easeInOutCirc"
| "easeInElastic"
| "easeOutElastic"
| "easeInOutElastic"
| "easeInBack"
| "easeOutBack"
| "easeInOutBack"
| "easeInBounce"
| "easeOutBounce"
| "easeInOutBounce";
}
interface SlidingMarkerOptions extends google.maps.MarkerOptions {
easing?: jQuery.easing.IEasingType | undefined;
duration?: number | undefined;
animateFunctionAdapter?:
| ((
marker: google.maps.Marker,
destPoint: google.maps.LatLng,
easing: "linear" | jQuery.easing.IEasingType,
duration: number,
) => void)
| undefined;
}
declare class SlidingMarker extends google.maps.Marker {
static initializeGlobally(): void;
constructor(opts?: SlidingMarkerOptions);
setDuration(duration: number): void;
getDuration(): number;
setEasing(easing: jQuery.easing.IEasingType): void;
getEasing(): jQuery.easing.IEasingType;
getAnimationPosition(): google.maps.LatLng;
setPositionNotAnimated(position: google.maps.LatLng | google.maps.LatLngLiteral): void;
}
declare class MarkerWithGhost extends SlidingMarker {
setGhostPosition(ghostPosition: google.maps.LatLng | google.maps.LatLngLiteral): void;
getGhostPosition(): google.maps.LatLng;
getGhostAnimationPosition(): google.maps.LatLng;
}
declare module "SlidingMarker" {
export = SlidingMarker;
}
declare module "MarkerWithGhost" {
export = MarkerWithGhost;
}
```
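With these typings, creating an animated marker might look roughly like this (the map instance and coordinates are placeholder assumptions, not part of the typings):

```
import SlidingMarker = require("SlidingMarker");

declare const myMap: google.maps.Map; // an existing map instance (assumed)

const marker = new SlidingMarker({
    position: { lat: 48.85, lng: 2.35 },
    map: myMap,
    duration: 1000,        // animation length in milliseconds
    easing: "easeOutExpo", // one of jQuery.easing.IEasingType
});

// later position changes animate instead of jumping
marker.setPosition(new google.maps.LatLng(48.86, 2.36));
```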
### [Additional Details](#additional-details)
* Last updated: Wed, 18 Oct 2023 05:47:08 GMT
* Dependencies: [@types/google.maps](https://npmjs.com/package/@types/google.maps)
[Credits](#credits)
===
These definitions were written by [<NAME>](https://github.com/viskin).
Readme
---
### Keywords
none |
astroml | readthedoc | Unknown | AstroML Documentation
Release 0.1
<NAME>
Aug 12, 2022
CONTENTS
1 Installation
2 Pages
3 Documentation
Python Module Index
Index
AstroML is a compilation of machine learning tutorials and examples, all in the context of astronomy research and astrophysical object classification. The examples include working with lightcurves as well as single and multi-band images.
NOTE: This is a private project led by an astronomy PhD student, and is not related to the pypi astroML program!
CHAPTER
ONE
INSTALLATION
The source code and accompanying data can be installed in your local machine by cloning the development version:

git clone https://github.com/Professor-G/AstroML.git
python setup.py install
pip install -r requirements.txt
CHAPTER
TWO
PAGES
2.1 Introduction to Machine Learning
My colleague Etienne used to joke that the USPS mail system had been using machine learning since the mid 1900's, whereas astronomers have only recently begun to appreciate its application as a result of the big data era we are transitioning into. Machine learning is a powerful tool to not only automate procedures (such as mail sorting), but also predict signal behavior, classify unlabeled data, and more recently, create new data altogether. Its utility in astronomy research is tremendous; in this set of tutorials the applications of the most popular algorithms will be reviewed, with Python code to accompany the examples.
sitioning into. Machine learning is a powerful to not only automate procedures (such as mail sorting), but also predict signal behavior, classify unlabeled data, and more recently, create new data altogether. Its utility in astronomy research is tremendous, in this set of tutorials the applications of the most popular algorithms will be reviewed, with Python code to accompany the examples.
In particular, this project will focus on working with lightcurves, as well as single and multi-band images.
2.2 Data Preprocessing 2.2.1 Lightcurves In most machine tutorials you will run into two common variables: data_x and data_y.
data_x corresponds to the 2-dimensional features matrix, with each row being the one-dimensional feature vector for that particular sample – data_y is then the corresponding 1-dimensional labels vector.
In this tutorial we will be working with 150 lightcurves, 30 lightcurves per class: Variable (VAR), Long Period Variable
(LPV), Constant (CON), Cataclysmic Variable (CV), and Microlensing (ML). These five classes have been simulated,
30 times each, therefore our training set contains 5 classes, with each containing 30 samples.
These 150 simulated lightcurves can be downloaded here. Each text file corresponds to one lightcurve and contains three columns: time, mag, magerr. We will load these lightcurves to construct our training data matrix, which will look like this:
First, we will load the lightcurve filenames by specifying the path in which they’re located in your local machine:
import os
import numpy as np

path = '/Users/daniel/lightcurves/'
filenames = os.listdir(path)
Fig. 1: Two-dimensional training data matrix, where the first column contains the label of each sample, with the corresponding row representing the feature vector for that particular sample.
If you print the filenames you will note that each filename is titled according to its class, so let's first construct our data_y vector by saving the name of the filename, but we don't want the numbers, only the name before the '_' character:
data_y = []
for name in filenames:
    label = name.split('_')[0]
    data_y.append(label)
Now that we have our labels vector, we must construct the individual features vector for each lightcurve, after which we will combine all these as rows for our features matrix. For simplicity let’s only calculate three features: the mean,
the standard deviation, and the standard deviation over the mean.
We will load each file and record these three features in separate lists:
mean = []
std = []
std_over_mean = []
for name in filenames:
    lightcurve = np.loadtxt(path+name)
    mag = lightcurve[:,1]
    mean.append(np.mean(mag))
    std.append(np.std(mag))
    std_over_mean.append(np.std(mag)/np.mean(mag))
Finally, to construct the data_x features matrix, we need to combine these three vectors as columns, which the numpy.c_[] function can do for us:
data_x = np.c_[mean, std, std_over_mean]
We can see that the shape of this matrix is (150, 3): 150 samples (rows), and 3 features (columns). With both the data_y and data_x variables, we are ready to begin training our machine learning classifiers, but to avoid this procedure in the future, let’s combine these two matrices to construct our entire training data matrix:
training_matrix = np.c_[data_y, data_x]
This matrix is the one shown in Figure 1, which we can now save for easier access in the future (note that since our matrix contains labels that are strings, we must specify the format when saving the file):
np.savetxt('training_data', training_matrix, fmt='%s')
In the future we can load this matrix, and re-construct data_x and data_y. Note that when loading this file, we need to specify that the dtype is string, since our labels are not numerical. Therefore when designating our data_x matrix,
we must set the astype to float:
training_data = np.loadtxt('training_data', dtype=str)
data_y = training_data[:,0]
data_x = training_data[:,1:].astype('float')
Part of data processing may include normalizing the data_x matrix, which is especially important when working with artificial neural networks, since at each epoch, the weights must be updated, and if the feature ranges are huge, the gradient will explode!
For example, we can check the range of our three features using numpy.ptp()
print('mean range: {}'.format(np.ptp(mean)))
print('std range: {}'.format(np.ptp(std)))
print('std over mean range: {}'.format(np.ptp(std_over_mean)))
The largest range is within our mean feature, and while a range of 6 may not cause issues, it is always recommended to normalize features when using neural networks. For more information see the other pages!
To normalize your data, you can use the sklearn scalers. There are multiple scalers for different normalization types,
but they all function the same way – first we create the scaler itself, for now let’s try the StandardScaler(), which scales to unit variance:
from sklearn.preprocessing import StandardScaler scaler = StandardScaler()
We have the standard scaler ready to go, but it’s important to remember that if we scale our data during training, we must also scale new, unseen data, in the same manner. This is why the scaler must be fitted and saved. Let’s create our scaler by passing along our data_x matrix:
scaler.fit(data_x)
Now that we have fit our scalar, we need to use it to transform our original data_x, which we can call scaled_data_x:
scaled_data_x = scaler.transform(data_x)
These preprocessing scalers scale column by column, therefore each feature is scaled independently of one another.
More importantly, in order to use this scaler in the future, since we must also scale new, unseen data, we must pass along a vector or matrix containing the same number of features, in the same order!!!
If you have a lightcurve you want to predict in the future, but for some reason you are missing one feature, then the scaler will not work, as it needs the same number of features as it was fitted with. In practice you would replace these missing values with 0, for example, but just remember that if you’re scaling your training data, you must also scale any new data going forward. Therefore you must always reconstruct the scaler by fitting the original data_x again, so that you can use it to .transform() new data.
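For example, a new, unseen lightcurve would be turned into a feature vector with the same three columns, in the same order, and then scaled with the already-fitted scaler (a small sketch following the text above; the filename is made up):
new_lightcurve = np.loadtxt(path+'new_lightcurve.txt')
new_mag = new_lightcurve[:,1]
new_vector = np.c_[np.mean(new_mag), np.std(new_mag), np.std(new_mag)/np.mean(new_mag)]
scaled_new_vector = scaler.transform(new_vector)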
2.2.2 Images
2.3 Principal Component Analysis
2.4 t-Stochastic Neighbor Embedding
2.5 Ensemble Algorithms
2.6 Convolutional Neural Networks
2.7 Data Augmentation
2.8 Hyperparameter Optimization
CHAPTER
THREE
DOCUMENTATION
Here is the documentation for all the modules:
3.1 API Reference
This page contains auto-generated API reference documentation [1].
3.1.1 AstroML
@author: dgodinez

Submodules

AstroML.lightcurve_features
Created on Thu Jan 12 14:30:12 2017
@author: danielgodinez

Module Contents

Functions
| Function | Description |
| --- | --- |
| shannon_entropy(time, mag, magerr) | Shannon entropy (Shannon et al. 1949) is used as a metric to quantify the amount of information carried by a signal. |
| con(time, mag, magerr) | Con is defined as the number of clusters containing three or more consecutive observations brighter than the reference magnitude plus 3 standard deviations. |
| kurtosis(time, mag, magerr) | Returns the calculated kurtosis of the lightcurve. |
| skewness(time, mag, magerr) | Skewness measures the asymmetry of a lightcurve, with a positive skewness indicating a skew to the right. |
| vonNeumannRatio(time, mag, magerr) | The von Neumann ratio was defined in 1941 by John von Neumann and serves as the mean square successive difference divided by the sample variance. |
| stetsonJ(time, mag, magerr) | The variability index J was first suggested by <NAME>. Stetson and serves as a measure of the correlation between the data points. |
| stetsonK(time, mag, magerr) | The variability index K was first suggested by <NAME>. Stetson and serves as a measure of the kurtosis of the magnitude distribution. |
| stetsonL(time, mag, magerr) | The variability index L was first suggested by <NAME>. Stetson and serves as a means of distinguishing between different types of variation. |
| median_buffer_range(time, mag, magerr) | Returns the ratio of points that are between plus or minus 10% of the amplitude value over the mean. |
| std_over_mean(time, mag, magerr) | A measure of the ratio of standard deviation and mean. |
| amplitude(time, mag, magerr) | The difference between the maximum and the lowest magnitude measurement, divided by 2. |
| median_distance(time, mag, magerr) | Calculates the median euclidean distance between each photometric measurement. |
| above1(time, mag, magerr) | Ratio of data points that are above 1 standard deviation from the median magnitude. |
| above3(time, mag, magerr) | Ratio of data points that are above 3 standard deviations from the median magnitude. |
| above5(time, mag, magerr) | Ratio of data points that are above 5 standard deviations from the median magnitude. |
| below1(time, mag, magerr) | Ratio of data points that are below 1 standard deviation from the median magnitude. |
| below3(time, mag, magerr) | Ratio of data points that are below 3 standard deviations from the median magnitude. |
| below5(time, mag, magerr) | Ratio of data points that are below 5 standard deviations from the median magnitude. |
| medianAbsDev(time, mag, magerr) | Median absolute deviation of the magnitudes. |
| root_mean_squared(time, mag, magerr) | A measure of the root mean square deviation. |
| meanMag(time, mag, magerr) | Calculates mean magnitude, weighted by the errors. |
| integrate(time, mag, magerr) | Integrate magnitude using the trapezoidal rule. |
| auto_corr(time, mag, magerr) | Similarity between observations as a function of a time lag between them. |
| peak_detection(time, mag, magerr) | Function to detect number of peaks. |
| MaxSlope(time, mag, magerr) | Examining successive (time-sorted) magnitudes, the maximal first difference (value of delta magnitude over delta time). |
| LinearTrend(time, mag, magerr) | Slope of a linear fit to the light-curve. |
| PairSlopeTrend(time, mag, magerr) | Considering the last 30 (time-sorted) measurements of source magnitude, the fraction of increasing first differences minus the fraction of decreasing first differences. |
| FluxPercentileRatioMid20(time, mag, magerr) | Ratio of flux percentiles (60th - 40th) over (95th - 5th). |
| FluxPercentileRatioMid35(time, mag, magerr) | Ratio of flux percentiles (67.5th - 32.5th) over (95th - 5th). |
| FluxPercentileRatioMid50(time, mag, magerr) | Ratio of flux percentiles (75th - 25th) over (95th - 5th). |
| FluxPercentileRatioMid65(time, mag, magerr) | Ratio of flux percentiles (82.5th - 17.5th) over (95th - 5th). |
| FluxPercentileRatioMid80(time, mag, magerr) | Ratio of flux percentiles (90th - 10th) over (95th - 5th). |
| PercentAmplitude(time, mag, magerr) | The largest absolute departure from the median flux, divided by the median flux. |
| PercentDifferenceFluxPercentile(time, mag, magerr) | Ratio of F5,95 over the median flux. |
| half_mag_amplitude_ratio(time, mag, magerr) | The ratio of the squared sum of residuals of magnitudes that are either brighter than or fainter than the mean magnitude. |
| cusum(time, mag, magerr) | Range of cumulative sum. |
| shapiro_wilk(time, mag, magerr) | Normality test (Shapiro-Wilk). |
| AndersonDarling(time, mag, magerr) | The Anderson-Darling test is a statistical test of whether a given sample of data is drawn from a given probability distribution. |
| Gskew(time, mag, magerr) | Median-based measure of the skew. |
| abs_energy(time, mag, magerr) | Returns the absolute energy of the time series, defined to be the sum over the squared values. |
| abs_sum_changes(time, mag, magerr) | Returns sum over the abs value of consecutive changes in mag. |
| benford_correlation(time, mag, magerr) | Useful for anomaly detection applications. Returns the correlation of the first digit distribution with the Newcomb-Benford's Law distribution. |
| c3(time, mag, magerr[, lag]) | A measure of non-linearity. |
| complexity(time, mag, magerr) | An estimate of the time series complexity. |
| count_above(time, mag, magerr) | Number of values higher than the median. |
| count_below(time, mag, magerr) | Number of values below the median. |
| first_loc_max(time, mag, magerr) | Returns location of maximum mag relative to the length of the mag array. |
| first_loc_min(time, mag, magerr) | Returns location of minimum mag relative to the length of the mag array. |
| check_for_duplicate(time, mag, magerr) | Checks if any val in mag repeats. |
| check_for_max_duplicate(time, mag, magerr) | Checks if the maximum value in mag repeats. |
| check_for_min_duplicate(time, mag, magerr) | Checks if the minimum value in mag repeats. |
| check_max_last_loc(time, mag, magerr) | Returns position of last maximum mag relative to the length of the mag array. |
| check_min_last_loc(time, mag, magerr) | Returns position of last minimum mag relative to the length of the mag array. |
| longest_strike_above(time, mag, magerr) | Returns the length of the longest consecutive subsequence in mag that is bigger than the median. |
| longest_strike_below(time, mag, magerr) | Returns the length of the longest consecutive subsequence in mag that is smaller than the median. |
| mean_change(time, mag, magerr) | Returns mean over the differences between subsequent observations. |
| mean_abs_change(time, mag, magerr) | Returns mean over the abs differences between subsequent observations. |
| mean_n_abs_max(time, mag, magerr[, number_of_maxima]) | Calculates the arithmetic mean of the n absolute maximum values of the time series. |
| mean_second_derivative(time, mag, magerr) | Returns the mean value of a central approximation of the second derivative. |
| number_of_crossings(time, mag, magerr) | Calculates the number of crossings of x on the median, m. |
| number_of_peaks(time, mag, magerr[, n]) | Calculates the number of peaks of at least support n in the time series x. |
| ratio_recurring_points(time, mag, magerr) | Returns the ratio of unique values that are present in the time series more than once. |
| sample_entropy(time, mag, magerr) | Returns sample entropy: http://en.wikipedia.org/wiki/Sample_Entropy |
| sum_values(time, mag, magerr) | Sums over all mag values. |
| time_reversal_asymmetry(time, mag, magerr[, lag]) | Derives a feature introduced by Fulcher. |
| variance(time, mag, magerr) | Returns the variance. |
| variance_larger_than_standard_deviation(time, mag, magerr) | Denotes if the variance of x is greater than its standard deviation. |
| variation_coefficient(time, mag, magerr) | Returns the variation coefficient (standard error / mean, giving the relative value of variation around the mean) of x. |
| large_standard_deviation(time, mag, magerr[, r]) | Does the time series have a "large" standard deviation? |
| symmetry_looking(time, mag, magerr[, r]) | Check to see if the distribution of the mag "looks symmetric". |
| index_mass_quantile(time, mag, magerr[, r]) | Calculates the relative index i of time series x where r% of the mass of x lies left of i. |
| number_cwt_peaks(time, mag, magerr[, n]) | Number of different peaks in the magnitude array. |
| permutation_entropy(time, mag, magerr[, tau, dimension]) | Calculate the permutation entropy. |
| quantile(time, mag, magerr[, r]) | Calculates the r quantile of the mag. |
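All of the functions above share the (time, mag, magerr) signature, so computing features for a lightcurve is a one-liner each. A minimal sketch, assuming the AstroML package documented here is importable and that the inputs are NumPy arrays (the toy lightcurve values are made up):
```
import numpy as np
from AstroML import lightcurve_features

# Toy lightcurve: time, magnitude and photometric error arrays of equal length.
time = np.linspace(0, 100, 500)
mag = 17.0 + 0.05 * np.sin(0.3 * time) + np.random.normal(0, 0.01, time.size)
magerr = np.full(time.size, 0.01)

print(lightcurve_features.skewness(time, mag, magerr))
print(lightcurve_features.vonNeumannRatio(time, mag, magerr))
print(lightcurve_features.stetsonJ(time, mag, magerr))
```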
AstroML.lightcurve_features.shannon_entropy(time, mag, magerr)
Shannon entropy (Shannon et al. 1949) is used as a metric to quantify the amount of information carried by a
signal. The procedure employed here follows that outlined by (D. Mislis et al. 2015). The probability of each
point is given by a Cumulative Distribution Function (CDF). Following the same procedure as (D. Mislis et al.
2015), this function employs both the normal and inversed gaussian CDF, with the total shannon entropy given
by a combination of the two. See: (SIDRA: a blind algorithm for signal detection in photometric surveys, D.
Mislis et al., 2015)
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.con(time, mag, magerr)
Con is defined as the number of clusters containing three or more consecutive observations with magnitudes
brighter than the reference magnitude plus 3 standard deviations. For a microlensing event Con = 1, assuming
a flat lightcurve prior to the event. The magnitude measurements are split into bins such that the reference
magnitude is defined as the mean of the measurements in the largest bin.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.kurtosis(time, mag, magerr)
Kurtosis function returns the calculated kurtosis of the lightcurve. It’s a measure of the peakedness (or flatness)
of the lightcurve relative to a normal distribution. See: www.xycoon.com/peakedness_small_sample_test_1.htm
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.skewness(time, mag, magerr)
Skewness measures the asymmetry of a lightcurve, with a positive skewness indicating a skew to the right, and a
negative skewness indicating a skew to the left.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Return type
rtype: float AstroML.lightcurve_features.vonNeumannRatio(time, mag, magerr)
The von Neumann ratio was defined in 1941 by <NAME> and serves as the mean square successive
difference divided by the sample variance. When this ratio is small, it is an indication of a strong positive
correlation between the successive photometric data points. See: (<NAME>, The Annals of Mathematical
Statistics 12, 367 (1941))
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.stetsonJ(time, mag, magerr)
The variability index J was first suggested by <NAME> and serves as a measure of the correlation between
the data points, tending to 0 for variable stars and getting large as the difference between the successive data
points increases. See: (<NAME>, Publications of the Astronomical Society of the Pacific 108, 851 (1996)).
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.stetsonK(time, mag, magerr)
The variability index K was first suggested by <NAME> and serves as a measure of the kurtosis of the
magnitude distribution. See: (<NAME>, Publications of the Astronomical Society of the Pacific 108, 851
(1996)).
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float
AstroML.lightcurve_features.stetsonL(time, mag, magerr)
The variability index L was first suggested by <NAME> and serves as a means of distinguishing between
different types of variation. When individual random errors dominate over the actual variation of the signal, K
approaches 0.798 (Gaussian limit). Thus, when the nature of the errors is Gaussian, stetsonL = stetsonJ, except
it will be amplified by a small factor for smoothly varying signals, or suppressed by a large factor when data
is infrequent or corrupt. See: (<NAME>, Publications of the Astronomical Society of the Pacific 108, 851
(1996)).
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.median_buffer_range(time, mag, magerr)
This function returns the ratio of points that are between plus or minus 10% of the amplitude value over the
mean.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.std_over_mean(time, mag, magerr)
A measure of the ratio of standard deviation and mean.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.amplitude(time, mag, magerr)
This amplitude metric is defined as the difference between the maximum magnitude measurement and the low-
est magnitude measurement, divided by 2. We account for outliers by removing the upper and lower 2% of
magnitudes.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.median_distance(time, mag, magerr)
This function calculates the median euclidean distance between each photometric measurement, a helpful metric
for detecting overlapped lightcurves.
Parameters
• time (time of observations.) –
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.above1(time, mag, magerr)
This function measures the ratio of data points that are above 1 standard deviation from the median magnitude.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.above3(time, mag, magerr)
This function measures the ratio of data points that are above 3 standard deviations from the median magnitude.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.above5(time, mag, magerr)
This function measures the ratio of data points that are above 5 standard deviations from the median magnitude.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.below1(time, mag, magerr)
This function measures the ratio of data points that are below 1 standard deviation from the median magnitude.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.below3(time, mag, magerr)
This function measures the ratio of data points that are below 3 standard deviations from the median magnitude.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.below5(time, mag, magerr)
This function measures the ratio of data points that are below 5 standard deviations from the median magnitude.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.medianAbsDev(time, mag, magerr)
A measure of the average distance between each magnitude value and the mean magnitude. See: https://en.wikipedia.org/wiki/Median_absolute_deviation
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.root_mean_squared(time, mag, magerr)
A measure of the root mean square deviation.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.meanMag(time, mag, magerr)
Calculates mean magnitude, weighted by the errors.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.integrate(time, mag, magerr)
Integrate magnitude using the trapezoidal rule. See: http://en.wikipedia.org/wiki/Trapezoidal_rule
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.auto_corr(time, mag, magerr)
Similarity between observations as a function of a time lag between them.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.peak_detection(time, mag, magerr)
Function to detect number of peaks.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.MaxSlope(time, mag, magerr)
Examining successive (time-sorted) magnitudes, the maximal first difference (value of delta magnitude over delta
time)
Parameters
• time (time of observations. Must be an array.) –
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• magerr (photometric error for the intensity. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.LinearTrend(time, mag, magerr)
Slope of a linear fit to the light-curve.
AstroML.lightcurve_features.PairSlopeTrend(time, mag, magerr)
Considering the last 30 (time-sorted) measurements of source magnitude, the fraction of increasing first differ-
ences minus the fraction of decreasing first differences. Percentage of all pairs of consecutive flux measurements
that have positive slope
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.FluxPercentileRatioMid20(time, mag, magerr)
In order to characterize the sorted magnitudes distribution we use percentiles. If F5,95 is the difference between
95% and 5% magnitude values, we calculate the following: Ratio of flux percentiles (60th - 40th) over (95th -
5th)
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.FluxPercentileRatioMid35(time, mag, magerr)
In order to characterize the sorted magnitudes distribution we use percentiles. If F5,95 is the difference between
95% and 5% magnitude values, we calculate the following: Ratio of flux percentiles (67.5th - 32.5th) over (95th
- 5th)
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.FluxPercentileRatioMid50(time, mag, magerr)
In order to characterize the sorted magnitudes distribution we use percentiles. If F5,95 is the difference between
95% and 5% magnitude values, we calculate the following: Ratio of flux percentiles (75th - 25th) over (95th -
5th)
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float
AstroML.lightcurve_features.FluxPercentileRatioMid65(time, mag, magerr)
In order to characterize the sorted magnitudes distribution we use percentiles. If F5,95 is the difference between
95% and 5% magnitude values, we calculate the following: Ratio of flux percentiles (82.5th - 17.5th) over (95th
- 5th)
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.FluxPercentileRatioMid80(time, mag, magerr)
In order to characterize the sorted magnitudes distribution we use percentiles. If F5,95 is the difference between
95% and 5% magnitude values, we calculate the following: Ratio of flux percentiles (90th - 10th) over (95th -
5th)
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.PercentAmplitude(time, mag, magerr)
The largest absolute departure from the median flux, divided by the median flux Largest percentage difference
between either the max or min magnitude and the median.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.PercentDifferenceFluxPercentile(time, mag, magerr)
Ratio of F5,95 over the median flux. Difference between the 2nd & 98th flux percentiles.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.half_mag_amplitude_ratio(time, mag, magerr)
The ratio of the squared sum of residuals of magnitudes that are either brighter than or fainter than the mean
magnitude. For EB-like variability, having sharp flux gradients around its eclipses, A is larger than 1
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.cusum(time, mag, magerr)
Range of cumulative sum
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.shapiro_wilk(time, mag, magerr)
Normalization-test. The Shapiro-Wilk test tests the null hypothesis that the data was drawn from a normal dis-
tribution.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.AndersonDarling(time, mag, magerr)
The Anderson-Darling test is a statistical test of whether a given sample of data is drawn from a given probability
distribution. When applied to testing if a normal distribution adequately describes a set of data, it is one of the
most powerful statistical tools for detecting most departures from normality.
From Kim et al. 2009: “To test normality, we use the Anderson–Darling test (Anderson & Darling 1952; Stephens
1974) which tests the null hypothesis that a data set comes from the normal distribution.” (Doi:10.1111/j.1365-
2966.2009.14967.x.)
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.Gskew(time, mag, magerr)
Median-based measure of the skew: Gskew = mq3 + mq97 - 2m, where mq3 is the median of magnitudes less than or equal to the 3% quantile, mq97 is the median of magnitudes greater than or equal to the 97% quantile, and 2m is twice the median magnitude.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float
AstroML.lightcurve_features.abs_energy(time, mag, magerr)
Returns the absolute energy of the time series, defined to be the sum over the squared values of the time-series.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.abs_sum_changes(time, mag, magerr)
Returns sum over the abs value of consecutive changes in mag.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.benford_correlation(time, mag, magerr)
Useful for anomaly detection applications. Returns the correlation from first digit distribution when compared
to the Newcomb-Benford’s Law distribution
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.c3(time, mag, magerr, lag=1)
A measure of non-linearity. See: Measure of non-linearity in time series: [1] <NAME>. and <NAME>.
(1997). Discrimination power of measures for nonlinearity in a time series PHYSICAL REVIEW E, VOLUME
55, NUMBER 5
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• lag (the lag that should be used in the calculation of the feature.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.complexity(time, mag, magerr)
This function is an estimate of the time series complexity. A higher value represents more complexity
(more peaks, valleys, etc.). See: Batista, Gustavo EAPA, et al (2014). CID: an efficient complexity-invariant
distance for time series. Data Mining and Knowledge Discovery 28.3 (2014): 634-669.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.count_above(time, mag, magerr)
Number of values higher than the median
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.count_below(time, mag, magerr)
Number of values below the median
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.first_loc_max(time, mag, magerr)
Returns location of maximum mag relative to the length of the mag array.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.first_loc_min(time, mag, magerr)
Returns location of minimum mag relative to the length of the mag array.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.check_for_duplicate(time, mag, magerr)
Checks if any val in mag repeats. 1 if True, 0 if False
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.check_for_max_duplicate(time, mag, magerr)
Checks if the maximum value in mag repeats. 1 if True, 0 if False
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.check_for_min_duplicate(time, mag, magerr)
Checks if the minimum value in mag repeats. 1 if True, 0 if False.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.check_max_last_loc(time, mag, magerr)
Returns position of last maximum mag relative to the length of mag array.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.check_min_last_loc(time, mag, magerr)
Returns position of last minimum mag relative to the length of mag array.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.longest_strike_above(time, mag, magerr)
Returns the length of the longest consecutive subsequence in mag that is bigger than the median.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int
AstroML.lightcurve_features.longest_strike_below(time, mag, magerr)
Returns the length of the longest consecutive subsequence in mag that is smaller than the median.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.mean_change(time, mag, magerr)
Returns mean over the differences between subsequent observations.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.mean_abs_change(time, mag, magerr)
Returns mean over the abs differences between subsequent observations.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.mean_n_abs_max(time, mag, magerr, number_of_maxima=1)
Calculates the arithmetic mean of the n absolute maximum values of the time series, n = 1.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• number_of_maxima (the number of maxima to be considered) –
Returns
rtype
Return type
float AstroML.lightcurve_features.mean_second_derivative(time, mag, magerr)
Returns the mean value of a central approximation of the second derivative.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float
AstroML.lightcurve_features.number_of_crossings(time, mag, magerr)
Calculates the number of crossings of x on the median, m. A crossing is defined as two sequential values where
the first value is lower than m and the next is greater, or vice-versa. If you set m to zero, you will get the number
of zero crossings.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.number_of_peaks(time, mag, magerr, n=7)
Calculates the number of peaks of at least support n in the time series x. A peak of support n is defined as a
subsequence of x where a value occurs, which is bigger than its n neighbors to the left and to the right. n = 7
Hence in the sequence:
>>> x = [3, 0, 0, 4, 0, 0, 13]
4 is a peak of support 1 and 2 because in the subsequences
>>> [0, 4, 0]
>>> [0, 0, 4, 0, 0]
4 is still the highest value. Here, 4 is not a peak of support 3 because 13 is the 3rd neighbour to the right of 4 and
it is bigger than 4.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• n (the support of the peak) –
Returns
rtype
Return type
int AstroML.lightcurve_features.ratio_recurring_points(time, mag, magerr)
Returns the ratio of unique values, that are present in the time series more than once, normalized to the number
of data points.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.sample_entropy(time, mag, magerr)
Returns sample entropy: http://en.wikipedia.org/wiki/Sample_Entropy
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.sum_values(time, mag, magerr)
Sums over all mag values.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.time_reversal_asymmetry(time, mag, magerr, lag=1)
Derives a feature introduced by Fulcher. See: (<NAME>., <NAME>. (2014). Highly comparative feature-
based time-series classification. Knowledge and Data Engineering, IEEE Transactions on 26, 3026–3037.)
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• lag (the lag that should be used in the calculation.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.variance(time, mag, magerr)
Returns the variance.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.variance_larger_than_standard_deviation(time, mag, magerr)
This feature denotes if the variance of x is greater than its standard deviation. Is equal to variance of x being
larger than 1.
1 is True, 0 is False.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
int
AstroML.lightcurve_features.variation_coefficient(time, mag, magerr)
Returns the variation coefficient (standard error / mean, giving the relative value of variation around the mean) of x.
Parameters
mag (the time-varying intensity of the lightcurve. Must be an array.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.large_standard_deviation(time, mag, magerr, r=0.3)
Does time series have “large” standard deviation?
Boolean variable denoting if the standard dev of x is higher than ‘r’ times the range = difference between max
and min of x.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• r (the percentage of the range to compare with.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.symmetry_looking(time, mag, magerr, r=0.5)
Check to see if the distribution of the mag “looks symmetric”. This is the case if:
|mean(X) - median(X)| < r * (max(X) - min(X))
where r is the percentage of the range to compare with.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• r (the percentage of the range to compare with.) –
Returns
rtype
Return type
int AstroML.lightcurve_features.index_mass_quantile(time, mag, magerr, r=0.5)
Calculates the relative index i of time series x where r% of the mass of x lies left of i. For example for r = 50%
this feature will return the mass center of the time series.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• r (the percentage of the range to compare with.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.number_cwt_peaks(time, mag, magerr, n=30)
Number of different peaks in the magnitude array.
To estimate the number of peaks, x is smoothed by a ricker wavelet for widths ranging from 1 to n. This feature
calculator returns the number of peaks that occur at enough width scales and with sufficiently high Signal-to-
Noise-Ratio (SNR)
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• n (param) –
Returns
rtype
Return type
int AstroML.lightcurve_features.permutation_entropy(time, mag, magerr, tau=1, dimension=3)
Calculate the permutation entropy.
Ref: https://www.aptech.com/blog/permutation-entropy/
Bandt, Christoph and <NAME>. “Permutation entropy: a natural complexity measure for time series.”
Physical review letters 88 17 (2002): 174102
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• tau (the embedded time delay that determines the time separation
between the mag values.) –
• dimension (the embedding dimension.) –
Returns
rtype
Return type
float AstroML.lightcurve_features.quantile(time, mag, magerr, r=0.75)
Calculates the r quantile of the mag. This is the value of mag greater than r% of the ordered values.
Parameters
• mag (the time-varying intensity of the lightcurve. Must be an array.) –
• r (the percentage of the range to compare with.) –
Returns
rtype
Return type
float
gamma-astro-data-formats | readthedoc | Python | Data formats for gamma-ray astronomy 0.3 documentation
[Data formats for gamma-ray astronomy](index.html#document-index)
---
Data formats for gamma-ray astronomy 0.3[¶](#data-formats-for-gamma-ray-astronomy-version)
===
The *Data formats for gamma-ray astronomy* is a community-driven initiative for the definition of a common and open high-level data format for gamma-ray instruments.
* Repository: <https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>
* Docs: <https://gamma-astro-data-formats.readthedocs.io/>
* Mailing list: <https://lists.nasa.gov/mailman/listinfo/open-gamma-ray-astro>

References[¶](#references)
---
The following paper describes the context of this initiative and its evolution:
* <NAME>.; <NAME>.; <NAME>. Evolution of Data Formats in Very-High-Energy Gamma-Ray Astronomy. Universe 2021, 7, 374. <https://doi.org/10.3390/universe7100374>.
Content[¶](#content)
---
### About[¶](#about)
In gamma-ray astronomy, a variety of data formats and proprietary software have been traditionally used, often developed for one specific mission or experiment.
Especially for ground-based imaging atmospheric Cherenkov telescopes (IACTs),
data and software are mostly private to the collaborations operating the telescopes. The next-generation IACT instrument, the Cherenkov Telescope Array
(CTA), will be the first ground-based gamma-ray telescope array operated as an open observatory with public observer access. This implies fundamentally different requirements for the data formats and software tools. Open community access is a novelty in this domain, putting a challenge on the implementation of services that make very high energy (VHE) gamma-ray astronomy as accessible as any other waveband.
To work towards open and interoperable data formats for gamma-ray astronomy, we have started this open data format specification. This was started at the
[PyGamma15 workshop](http://gammapy.github.io/PyGamma15/) in November 2015, followed by a [meeting in April 2016 in Meudon](https://github.com/open-gamma-ray-astro/2016-04_IACT_DL3_Meeting) and another [meeting in March 2017 in Heidelberg](https://github.com/open-gamma-ray-astro/2017-03_IACT_DL3_Meeting). Version 0.1 of the spec was released in April 2016, version 0.2 in August 2018,
version 0.3 in November 2022.
You can find more information about this effort in [Deil et al. (2017)](https://adsabs.harvard.edu/abs/2017AIPC.1792g0006D).
The scope of this effort is to cover all **high-level** data from gamma-ray telescopes, starting at the level of event lists and IRFs, also covering reduced IRFs, maps, spectra, light curves, up to source models and catalogs. Both pointing (IACT) and slewing (Fermi-LAT, HAWC) telescope data is in scope. To a large degree, the formats should also be useful for other astroparticle data,
e.g. from cosmic rays or neutrinos.
All of the data formats described here at the moment can be, and in practice mostly are, stored in FITS. Some experimentation with HDF5 and ECSV is ongoing.
The data format specifications don’t explicitly mention the serialisation format,
but rather the key name and data type for each metadata or data field.
The formats described here are partially in use by current instruments
(Fermi-LAT, HESS, VERITAS, MAGIC, FACT) as well as in the next-generation Cherenkov Telescope Array CTA. Prototyping and support for many of the formats here is happening in the CTA science tool prototypes [Gammapy](http://gammapy.org/) and [ctools](http://cta.irap.omp.eu/ctools/).
The data formats are also used e.g. in [gamma-cat](https://gamma-cat.readthedocs.io/), a collection and source catalog for very-high-energy gamma-ray astronomy.
However, we would like to stress that this is an unofficial effort; the formats described here are work in progress and have not been officially adopted by any of the mentioned instruments. The expectation is that CTA will partially adopt these formats and partially develop new ones in the time frame 2018-2020, and that the other IACTs will then use the official format chosen by CTA.
Development of this data format specification happens at
<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>, information how to contribute is given in `CONTRIBUTING.md` there. Discussion on major and controversial points should happen on the mailing list
(<https://lists.nasa.gov/mailman/listinfo/open-gamma-ray-astro>). So far, no official decision process exists, so some discussions are stuck because of major differences in opinion. Probably, moving forward, decisions will be made not in this open forum, but rather by the major instruments via their choices,
especially CTA.
### General[¶](#general)
This section contains general information and basic definitions.
Its purpose is twofold:
1. Users and developers can learn or look up the nitty-gritty details how coordinates, times, … are defined and basic information about file and storage formats (e.g. how axis-information for multi-dimensional arrays can be stored in FITS files).
2. Data format specifications can refer to the definitions in this section,
e.g. we don’t have to repeat that the azimuth angle is measured east of north in each format specification where azimuth is used.
#### Time[¶](#time)
##### Introduction[¶](#introduction)
This page describes how times should be stored.
This is a solved problem, we follow the [FITS standard](https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf).
However, the FITS standard is very complex (see also [FITS time paper](http://adsabs.harvard.edu/abs/2015A%26A...574A..36R)),
and allows for different ways to store times in FITS, some of which are hard to understand and implement.
To keep things simple, we here agree on a way to store times that is fully compliant, but a subset of the ways allowed by the FITS standard
(see [Formats](#time-formats) below).
This has the advantage of simplicity and uniformity for writers and readers
(see [Tools](#time-tools) below).
One major point allowing for this simplicity is that we only need single 64-bit float precision for time in high-level (DL3 and up) gamma-ray astronomy, as explained in the [Precision](#time-precision) section below.
At the time of writing this general page on time, the following definitions reference it:
* [EVENTS](index.html#iact-events) has the `TIME` column and several times given via header keywords
* [GTI](index.html#iact-gti) has the `START` and `STOP` columns
* [POINTING](index.html#iact-pnt) has the `TIME` column
* [Observation index table](index.html#obs-index) has the `TSTART` and `TSTOP` columns
Other useful resources concerning time:
* <https://heasarc.gsfc.nasa.gov/docs/fcg/common_dict.html>
* [Time in Fermi data analysis](http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Data/Time_in_ScienceTools.html)
##### Formats[¶](#formats)
In tables, times should be given as 64-bit float values. For all the tables and columns mentioned in the introduction, the times are given as seconds since a reference time point.
The reference time point is specified by the following FITS header keywords:
* `MJDREFI` type: int, unit: days
+ Integer part of instrument specific MJD time reference
* `MJDREFF` type: float, unit: days
+ Float part of instrument specific MJD time reference
* `TIMEUNIT` type: string
+ Time unit (e.g. ‘s’)
* `TIMESYS` type: string
+ Time system, also referred as time scale (e.g. ‘UT1’, ‘UTC’, ‘TT’, ‘TAI’, see Table 30 of the [FITS standard](https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf))
* `TIMEREF` type: string
+ Time reference frame, used for example for barycentric corrections
(options: ‘TOPOCENTER’, ‘GEOCENTER’, ‘BARYCENTER’, ‘HELIOCENTER’, and more see Table 31 of the [FITS standard](https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf))
See the [FITS standard](https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf) and the [FITS time paper](http://adsabs.harvard.edu/abs/2015A%26A...574A..36R) for further information.
For light curves (not specified yet), the use of `TIMEUNIT='d'` (days) is also common.
Some existing files don’t give `TIMEUNIT` and / or `TIMEREF`. In those cases, science tools should assume defaults of `TIMEUNIT='s'` and `TIMEREF='LOCAL'`. No defaults exist for
`MJDREFI`, `MJDREFF` and `TIMESYS`, if those are missing science tools should raise an error and exit.
New files should always be written with all five header keys.
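As an illustration (not part of the specification itself), here is one way to write the five recommended keywords with astropy.io.fits and to convert a `TIME` column value to an absolute time; the reference MJD values are just example numbers:
```
from astropy.io import fits
from astropy.time import Time
import astropy.units as u

header = fits.Header()
header['MJDREFI'] = 51910         # example integer part of the reference MJD
header['MJDREFF'] = 7.4287037e-4  # example fractional part of the reference MJD
header['TIMEUNIT'] = 's'
header['TIMESYS'] = 'TT'
header['TIMEREF'] = 'LOCAL'

# Convert a TIME column value (seconds since the reference) to an absolute time
mjdref = Time(header['MJDREFI'], header['MJDREFF'], format='mjd', scale='tt')
event_time = mjdref + 123.456 * u.s
```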
In addition to that main way of specifying times as a floating point number wrt. a reference time point,
the following header keys with date and time values as strings can be added.
This is for convenience and humans reading the information. Usually science tools will not access this redundant and optional information. The time system used should be the one given by `TIMESYS`.
All values for the keywords mentioned here shall be ISO8601 date and time strings,
`yyyy-mm-ddTHH:MM:SS.fff`.
The precision is not fixed, so seconds or fractional seconds could be left out; a short sketch after the following list shows one way to generate such strings.
* `DATE-OBS` type: string
+ Observation date and time, typically the start of observations if not defined otherwise in the comment
* `DATE-BEG` type: string
+ Observation start date and time
* `DATE-AVG` type: string
+ Average observation date and time
* `DATE-END` type: string
+ Observation end date and time
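For instance, the optional `DATE-` keywords can be generated from an astropy `Time` object; this is only a sketch, with arbitrary example dates:
```
from astropy.io import fits
from astropy.time import Time

header = fits.Header()
header['DATE-BEG'] = Time('2011-01-01T00:00:00', scale='tt').isot
header['DATE-END'] = Time('2011-01-01T00:28:00', scale='tt').isot
```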
##### Tools[¶](#tools)
The [SOFA Time Scale and Calendar Tools](http://www.iausofa.org/2015_0209_C/sofa/sofa_ts_c.pdf) document provides a detailed description of times in the high-precision IAU SOFA library, which is the gold standard for times in astronomy.
The SOFA time routines are available via the [Astropy time](http://docs.astropy.org/en/latest/time/index.html) Python package,
which makes it easy to convert between different **time scales**
(`utc`, `tt` and `mjd` in this example).
```
>>> from astropy.time import Time
>>> time = Time('2011-01-01 00:00:00', scale='utc', format='iso')
>>> time
<Time object: scale='utc' format='iso' value=2011-01-01 00:00:00.000>
>>> time.tt
<Time object: scale='tt' format='iso' value=2011-01-01 00:01:06.184>
>>> time.mjd
55562.0
```
as well as different **time formats** (`iso`, `isot` and `fits` in this example)
```
>>> time.iso
'2011-01-01 00:00:00.000'
>>> time.isot
'2011-01-01T00:00:00.000'
>>> time.fits
'2011-01-01T00:00:00.000(UTC)'
```
If you don’t want to install SOFA or Astropy (or to double-check),
you can use the [xTime](https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/xTime/xTime.pl) time conversion utility provided by HEASARC as a web tool.
* Gammapy uses [Astropy time](http://docs.astropy.org/en/latest/time/index.html), with custom utility functions for FITS I/O to write in the formats recommended here.
* Gammalib has custom code to handle times that is described in [Times in Gammalib](http://cta.irap.omp.eu/gammalib-devel/user_manual/modules/obs.html#times-in-gammalib).
##### Precision[¶](#precision)
Depending on the use case and required precision, times are stored as strings or as one or several integer or floating point numbers. Tools usually use one 64-bit or two 64-bit floating point numbers for time calculations.
For high-level gamma-ray astronomy, the situation can be summarised like this
(see sub-section computation below for details):
* **Do use single 64-bit floats for times.**
The resulting precision will be about 0.1 micro-seconds or better,
which is sufficient for any high-level analysis (including milli-second pulsars).
* **Do not use 32-bit floats for times.**
If you do, times will be incorrect at the 1 to 100 second level.
* More than 64-bit floats or a combination of two floating point numbers is not necessary
For data acquisition and low-level analysis (event triggering, traces, …),
IACTs require nanosecond precision or better. There, the simple advice to use 64-bit floats representing seconds wrt. a single reference time doesn’t work!
One either needs to have several reference times (e.g. per-observation) or two integer or float values. This is not covered by this spec.
The time precision obtained with a single 32-bit or 64-bit float can be computed with this function:
```
def time_precision(time_range, float_precision):
"""Compute time precision (seconds) in float computations.
For a given `time_range` and `float_precision`, the `time_precision`
is computed as the smallest time difference corresponding to the
float precision.
time_range -- (IN) Time range of application (years)
float_precision -- (IN) {32, 64} Floating point precision
time_precision -- (OUT) Time precision (seconds)
"""
import numpy as np
YEAR_TO_SEC = 315576000
dtype = {32: np.float32, 64: np.float64}[float_precision]
t1 = dtype(YEAR_TO_SEC * time_range)
t2 = np.nextafter(t1, np.finfo(dtype).max)
print('Time range: {} years, float precision: {} bit => time precision: {:.3g} seconds.'
''.format(time_range, float_precision, t2-t1))
```
```
>>> time_precision(10, 32)
Time range: 10 years, float precision: 32 bit => time precision: 256 seconds.
>>> time_precision(10, 64)
Time range: 10 years, float precision: 64 bit => time precision: 4.77e-07 seconds.
```
#### Coordinates[¶](#coordinates)
This section describes the sky coordinates in use by science tools. It is referenced from the description of data formats to explain the exact meaning of the coordinates stored.
We don’t have a separate section for world coordinate systems (WCS), pixel coordinates, projections, that is covered here as well (see [FITS WCS](http://fits.gsfc.nasa.gov/fits_wcs.html) and
[WCSLIB](http://www.atnf.csiro.au/people/mcalabre/WCS/) for references).
We only discuss 2-dimensional sky and image coordinates here, other coordinates like e.g. time or an energy axis aren’t covered here.
Some conventions are adopted from [astropy.coordinates](http://astropy.readthedocs.io/en/latest/coordinates/index.html),
which is a Python wrapper of the [IAU SOFA](http://www.iausofa.org/) C time and coordinate library,
which is the authoritative implementation of the IAU Standards of Fundamental Astronomy.
In some cases code examples are given using astropy.coordinates to obtain a reference value that can be used to check a given software package (in case it’s not based on astropy.coordinates).
##### RA / DEC[¶](#ra-dec)
The most common way to give sky coordinates is as right ascension (RA) and declination (DEC) in the [equatorial coordinate system](https://en.wikipedia.org/wiki/Equatorial_coordinate_system).
Actually there are several equatorial coordinate systems in use, the most common ones being FK4, FK5 and ICRS.
If you’re interested to learn more about these and other astronomical coordinate systems, look into the [Explanatory Supplement to the Astronomical Almanac](https://ui.adsabs.harvard.edu/abs/2014AAS...22324720U).
But in practice it’s pretty simple: when someone gives or talks about RA / DEC coordinates, they mean either ICRS or FK5 J2000 coordinates. The difference between those two is at the sub-arcsecond level for the whole sky, i.e.
irrelevant for gamma-ray astronomy.
We recommend you by default assume RA / DEC is in the ICRS frame, which is the default in [astropy.coordinates.SkyCoord](http://astropy.readthedocs.io/en/latest/api/astropy.coordinates.SkyCoord.html) and also the current standard celestial reference system adopted by the IAU (see [Wikipedia - ICRS](https://en.wikipedia.org/wiki/International_Celestial_Reference_System)).
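To convince yourself that the ICRS vs. FK5 J2000 difference really is negligible for gamma-ray purposes, you can check the separation with astropy.coordinates; a quick sketch with an arbitrary position:
```
from astropy.coordinates import SkyCoord
import astropy.units as u

icrs = SkyCoord(83.633083 * u.deg, 22.014500 * u.deg, frame='icrs')
fk5 = icrs.transform_to('fk5')  # FK5 with the default J2000 equinox
print(icrs.separation(fk5).to(u.arcsec))  # sub-arcsecond difference
```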
##### Galactic[¶](#galactic)
The [Galactic coordinate system](https://en.wikipedia.org/wiki/Galactic_coordinate_system) is often used by Galactic astronomers.
Unfortunately there are slightly different variants in use (usually with differences at the arcsecond level),
and there are no standard names for these slightly different Galactic coordinate frames.
See [here](https://github.com/astropy/astropy/issues/3344) for an open discussion which Galactic coordinates to support and what to call them in Astropy.
We recommend you use ICRS RA / DEC for precision coordinate computations. If you do use Galactic coordinates,
we recommend you compute them like Astropy does (which I think is the frame most commonly used in the literature and in existing astronomy software).
Both ICRS and Galactic coordinates don’t need the specification of an [epoch](https://en.wikipedia.org/wiki/Epoch_(astronomy))
or [equinox](https://en.wikipedia.org/wiki/Equinox_(celestial_coordinates)).
To check your software, you can use the `(l, b) = (0, 0)` position:
```
>>> from astropy.coordinates import SkyCoord
>>> SkyCoord(0, 0, unit='deg', frame='galactic')
<SkyCoord (Galactic): (l, b) in deg (0.0, 0.0)>
>>> SkyCoord(0, 0, unit='deg', frame='galactic').icrs
<SkyCoord (ICRS): (ra, dec) in deg (266.40498829, -28.93617776)>
```
##### Alt / Az[¶](#alt-az)
The [horizontal coordinate system](https://en.wikipedia.org/wiki/Horizontal_coordinate_system) is the one connected to an observer at a given location on earth and point in time.
* Azimuth is oriented east of north (i.e. north is at 0 deg, east at 90 deg,
south at 180 deg and west at 270 deg). This is the convention used by
[astropy.coordinates.AltAz](http://astropy.readthedocs.io/en/latest/api/astropy.coordinates.AltAz.html) and quoted as the most common convention in astronomy on Wikipedia (see [horizontal coordinate system](https://en.wikipedia.org/wiki/Horizontal_coordinate_system)).
* The zenith angle is defined as the angular separation from the [zenith](https://en.wikipedia.org/wiki/Zenith),
which is the direction defined by the line connecting the Earth’s center and the observer.
Altitude and elevation are the same thing, and are defined as 90 degree minus the zenith angle.
The reason to define altitude like this instead of the angle above the horizon is that usually Earth models aren’t perfect spheres, but ellipsoids, so the zenith angle as defined here isn’t perfectly perpendicular with the horizon plane.
* Unless explicitly specified, Alt / Az should be assumed to not include any refraction corrections,
i.e. be valid assuming no refraction. Usually this can be achieved in coordinate codes by setting the atmospheric pressure to zero, i.e. turning the atmosphere off.
Here’s some Astropy coordinates code that shows how to convert back and forth between ICRS and AltAz coordinates (the default pressure is set to zero in Astropy, i.e. this is without refraction corrections):
```
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import Angle, SkyCoord, EarthLocation, AltAz

# Take any ICRS sky coordinate
icrs = SkyCoord.from_name('crab')
print('RA = {pos.ra.deg:10.5f}, DEC = {pos.dec.deg:10.5f}'.format(pos=icrs))
# RA = 83.63308, DEC = 22.01450

# Convert to AltAz for some random observation time and location
# This assumes pressure is zero, i.e. no refraction
time = Time('2010-04-26', scale='tt')
location = EarthLocation(lon=42 * u.deg, lat=42 * u.deg, height=42 * u.meter)
altaz_frame = AltAz(obstime=time, location=location)
altaz = icrs.transform_to(altaz_frame)
print('AZ = {pos.az.deg:10.5f}, ALT = {pos.alt.deg:10.5f}'.format(pos=altaz))
# AZ = 351.88232, ALT = -25.56281

# Convert back to ICRS to make sure round-tripping is OK
icrs2 = altaz.transform_to('icrs')
print('RA = {pos.ra.deg:10.5f}, DEC = {pos.dec.deg:10.5f}'.format(pos=icrs2))
# RA = 83.63308, DEC = 22.01450
```
##### Field of view[¶](#field-of-view)
FOV coordinates are currently used in two places in this spec:
1. Some [background models](index.html#bkg) are in the FOV coordinate system and FOV coordinates can also be used for other IRFs.
2. FOV coordinates appear as optional columns in the [events](index.html#iact-events) table.
While it is possible to compute FOV coordinates from the `RA`, `DEC`, `TIME` columns and the observatory [Earth location](#coords-location), some IACTs choose to add FOV coordinate columns to their event lists for convenience.
In gamma-ray astronomy, field of view (FOV) coordinates are sometimes used.
The basic idea is to have a coordinate system that is centered on the array pointing position. We define FOV coordinates here to be spherical coordinates:
there is no projection or WCS, only a spherical rotation.
Two ways to give the spherical coordinate are defined:
1. `(LON, LAT)` with the pointing position on the equator at `(LON, LAT) = (0, 0)`
* `LON`: Longitude (range -180 deg to + 180 deg)
* `LAT`: Latitude (range -90 deg to + 90 deg)
2. `(THETA, PHI)` with the pointing position at the pole `THETA=0`
* `THETA`: Offset angle (range 0 deg to +180 deg)
* `PHI`: Position angle (range 0 deg to 360 deg)
Two orientations of the FOV coordinate system are defined:
1. Aligned with the `ALTAZ` system
2. Aligned with the `RADEC` system
This yields the following possible coordinates:
| Field | Description |
| --- | --- |
| FOV_ALTAZ_LON | Longitude in ALTAZ FOV system |
| FOV_ALTAZ_LAT | Latitude in ALTAZ FOV system |
| FOV_ALTAZ_THETA | Offset angle in ALTAZ FOV system |
| FOV_ALTAZ_PHI | Position angle in ALTAZ FOV system |
| FOV_RADEC_LON | Longitude in RADEC FOV system |
| FOV_RADEC_LAT | Latitude in RADEC FOV system |
| FOV_RADEC_THETA | Offset angle in RADEC FOV system |
| FOV_RADEC_PHI | Position angle in RADEC FOV system |
* The FOV offset angle (separation to pointing position) `THETA` doesn’t depend on the orientation. So in this spec, often simply `THETA` is used,
and that is equal to `FOV_ALTAZ_THETA` and `FOV_RADEC_THETA`.
* The other FOV coordinates depend on the alignment and orientation of a second coordinate system (`OTHER`, either `ALTAZ` or `RADEC`).
+ FOV PHI is counterclockwise from OTHER north,
i.e. `PHI=0 deg` pointing to OTHER LAT,
and `PHI=270 deg` pointing to OTHER LON
+ FOV LON should increase with decreasing OTHER LON
+ FOV LAT should increase with increasing OTHER LAT
In the [events](index.html#iact-events) table, the column names `DETX` and `DETY`
are sometimes used. This originates from the [OGIP event list](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/events/ogip_94_003/ogip_94_003.html) standard,
which uses these names for “detector coordinates”. Given that IACTs don’t have a detector chip (or at least the FOV coordinates used in high-level analysis are different from the IACT camera coordinates), the definition is ambiguous:
both `(DETX, DETY) = (FOV_ALTAZ_LON, FOV_ALTAZ_LAT)`
and `(DETX, DETY) = (FOV_RADEC_LON, FOV_RADEC_LAT)`
have been used.
To resolve this ambiguity, we propose a header key `FOVALIGN={ALTAZ,RADEC}`,
specifying which definition of field-of-view coordinates is used. If the key is not present, `FOVALIGN=ALTAZ` should be assumed as default.
Given that there is no consensus yet, one suggestion is to avoid putting FOV coordinates in EVENTS, or, if they are added, to clearly state how they are defined.
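The offset angle `THETA` can be computed directly from the event and pointing positions. Here is a minimal sketch using astropy (which the other examples in this spec already use); the coordinate values are made up, and the `PHI`, `LON`, `LAT` values additionally depend on the orientation conventions listed above, so they are not computed here:

```
from astropy.coordinates import SkyCoord
import astropy.units as u

# Pointing position and one reconstructed event position (ICRS); values are made up
pointing = SkyCoord(83.63 * u.deg, 22.01 * u.deg, frame='icrs')
event = SkyCoord(84.0 * u.deg, 22.5 * u.deg, frame='icrs')

# FOV offset angle THETA = angular separation from the pointing position.
# This is the same for the ALTAZ- and RADEC-aligned FOV systems.
theta = pointing.separation(event)
print(theta.deg)
```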
##### Earth location[¶](#earth-location)
When working with Alt-Az coordinates or very high-precision times,
an observatory Earth location is needed. However, note that high-level analysis for most use cases does not need this information.
The FITS standard mentions `OBSGEO-X`, `OBSGEO-Y`, `OBSGEO-Z`
header keys, and we might want to consider using those in the future.
For now, as of 2018, however, the existing IACT FITS data uses the following header keys, so their use is encouraged:
* `GEOLON` type: float, unit: deg
+ Geographic longitude of array centre
* `GEOLAT` type: float, unit: deg
+ Geographic latitude of array centre
* `ALTITUDE` type: float, unit: m
+ Altitude of array center above sea level
While it is possible in principle to change this for each FITS file,
in practice the observatory or telescope array centre position is something that is chosen once and then used consistently in the event reconstruction and analysis. As an example, H.E.S.S. uses the following location and FITS header keys:
```
GEOLAT = -23.2717777777778 / latitude of observatory (deg)
GEOLON = 16.5002222222222 / longitude of observatory (deg)
ALTITUDE= 1835. / altitude of observatory (m)
```
#### FITS Multidimensional datasets[¶](#fits-multidimensional-datasets)
As described e.g. [here](http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/general/ogip_94_006/ogip_94_006.html)
or [here](http://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_003/cal_gen_92_003.html#tth_sEc4)
or in the [FITS standard](https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf),
there are several ways to serialise multi-dimensional arrays and corresponding axis information in FITS files.
Here we describe the schemes in use in gamma-ray astronomy and give examples.
##### IMAGE HDU[¶](#image-hdu)
* Data array is stored in an IMAGE HDU.
* Axis information is either stored in the IMAGE HDU header or in extra BINTABLE HDUs, sometimes a mix.
* Advantage: IMAGE HDUs can be opened up in image viewers like ds9.
* Disadvantage: axis information is not self contained, an extra HDU is needed.
###### Example[¶](#example)
E.g. the Fermi-LAT counts cubes or diffuse model spectral cubes are stored in an IMAGE HDU,
with the information about the two celestial axes in WCS header keywords,
and the information about the energy axis in ENERGIES (for spectral cube) or EBOUNDS (for counts cube) BINTABLE HDU extensions.
```
$ ftlist gll_iem_v02.fit H
Name Type Dimensions
--- --- ---
HDU 1 Primary Array Image Real4(720x360x30)
HDU 2 ENERGIES BinTable 1 cols x 30 rows
```
Let’s have a look at the header of the primary IMAGE HDU.
As you can see, there’s three axes.
The first two are Galactic longitude and latitude and the pixel to sky coordinate mapping is specified by header keywords according to the [FITS WCS](http://fits.gsfc.nasa.gov/fits_wcs.html)
standard.
I think the energy axis isn’t a valid FITS WCS axis specification.
ds9 uses the C????3 keys to infer a WCS mapping of pixels to energies, but it is incorrect.
Software that’s supposed to work with this axis needs to know to look at the ENERGIES table instead.
```
$ ftlist gll_iem_v02.fit K
SIMPLE  =                    T / Written by IDL: Tue Jul 7 15:25:03 2009
BITPIX  =                  -32 /
NAXIS   =                    3 / number of data axes
NAXIS1  =                  720 / length of data axis 1
NAXIS2  =                  360 / length of data axis 2
NAXIS3  =                   30 / length of data axis 3
EXTEND  =                    T / FITS dataset may contain extensions
COMMENT   FITS (Flexible Image Transport System) format is defined in 'Astronomy
COMMENT   and Astrophysics', volume 376, page 359; bibcode: 2001A&A...376..359H
FLUX    =        8.29632317174 /
CRVAL1  =                   0. / Value of longitude in pixel CRPIX1
CDELT1  =                  0.5 / Step size in longitude
CRPIX1  =                360.5 / Pixel that has value CRVAL1
CTYPE1  = 'GLON-CAR'           / The type of parameter 1 (Galactic longitude in
CUNIT1  = 'deg     '           / The unit of parameter 1
CRVAL2  =                   0. / Value of latitude in pixel CRPIX2
CDELT2  =                  0.5 / Step size in latitude
CRPIX2  =                180.5 / Pixel that has value CRVAL2
CTYPE2  = 'GLAT-CAR'           / The type of parameter 2 (Galactic latitude in C
CUNIT2  = 'deg     '           / The unit of parameter 2
CRVAL3  =                  50. / Energy of pixel CRPIX3
CDELT3  =    0.113828620540137 / log10 of step size in energy (if it is logarith
CRPIX3  =                   1. / Pixel that has value CRVAL3
CTYPE3  = 'photon energy'      / Axis 3 is the spectra
CUNIT3  = 'MeV     '           / The unit of axis 3
CHECKSUM= '3fdO3caL3caL3caL'   / HDU checksum updated 2009-07-07T22:31:18
DATASUM = '2184619035'         / data unit checksum updated 2009-07-07T22:31:18
HISTORY From Ring/Hybrid fit with GALPROP 54_87Xexph7S extrapolation
HISTORY Integrated flux (m^-2 s^-1) over all sky and energies: 8.30
HISTORY Written by rings_gll.pro
DATE    = '2009-07-07'         /
FILENAME= '$TEMPDIR/diffuse/gll_iem_v02.fit' /File name with version number
TELESCOP= 'GLAST   '           /
INSTRUME= 'LAT     '           /
ORIGIN  = 'LISOC   '           /LAT team product delivered from the LISOC
OBSERVER= 'MICHELSON'          /Instrument PI
END
```
##### BINTABLE HDU[¶](#bintable-hdu)
* Data array and axis information is stored in a BINTABLE HDU with one row.
* This is called the “multidimensional array” convention in appendix B of
[1995A%26AS..113..159C](http://adsabs.harvard.edu/abs/1995A%26AS..113..159C).
* The OGIP Calibration Memo CAL/GEN/92-003 has a section
[use of multi-dimensional datasets](http://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_003/cal_gen_92_003.html#tth_sEc4)
that describes this format in greater detail.
* Advantage: everything is contained in one HDU. (as many axes and data arrays as you like)
* Disadvantage: format is a bit unintuitive / header is quite complex / can’t be opened directly in ds9.
###### Example[¶](#id1)
Let’s look at an example file in this format, the [`aeff_P6_v1_diff_back.fits`](_downloads/3fef13e73603f6afc9af33219c4366ba/aeff_P6_v1_diff_back.fits) which represents the Fermi-LAT effective area (an old version) as a function of energy and offset.
It follows the [OGIP effective area](https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_019/cal_gen_92_019.html) format.
The data array and axis information are stored in one BINTABLE HDU called
“EFFECTIVE AREA”, with 5 columns and one row:
```
$ ftlist aeff_P6_v1_diff_back.fits H
Name Type Dimensions
--- --- ---
HDU 1   Primary Array    Null Array
HDU 2   EFFECTIVE AREA   BinTable     5 cols x 1 rows
```
These five columns contain arrays of different lengths that represent:
* First axis is energy (ENERG_LO and ENERG_HI columns) with 60 bins.
* Second axis is cosine of theta (CTHETA_LO and CTHETA_HI columns) with 32 bins.
* First and only data array is effective area (EFFAREA) at the given energy and cosine theta values.
```
$ ftlist aeff_P6_v1_diff_back.fits C HDU 2
Col Name Format[Units](Range) Comment
1 ENERG_LO 60E [MeV]
2 ENERG_HI 60E [MeV]
3 CTHETA_LO 32E
4 CTHETA_HI 32E
5 EFFAREA 1920E [m2]
```
The part that’s most difficult to understand / remember is how the relevant information is encoded in the BINTABLE FITS header.
But note the `HDUDOC = 'CAL/GEN/92-019'` key. If you Google CAL/GEN/92-019 you will find that it points to the [OGIP effective area](https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_019/cal_gen_92_019.html) format document,
which explains in detail what all the other keys mean.
There’s some software (e.g. `fv`) that understands this way of encoding n-dimensional arrays and axis information in FITS BINTABLEs.
```
$ ftlist aeff_P6_v1_diff_back.fits[1] K
XTENSION= 'BINTABLE'           / binary table extension
BITPIX  =                    8 / 8-bit bytes
NAXIS   =                    2 / 2-dimensional binary table
NAXIS1  =                 8416 / width of table in bytes
NAXIS2  =                    1 / number of rows in table
PCOUNT  =                    0 / size of special data area
GCOUNT  =                    1 / one data group (required keyword)
TFIELDS =                    5 / number of fields in each row
TTYPE1  = 'ENERG_LO'           /
TFORM1 = '60E '
TTYPE2 = 'ENERG_HI' /
TFORM2 = '60E '
TTYPE3 = 'CTHETA_LO' /
TFORM3 = '32E ' /
TTYPE4 = 'CTHETA_HI' /
TFORM4 = '32E ' /
TTYPE5 = 'EFFAREA ' /
TFORM5 = '1920E '
ORIGIN  = 'LISOC   '           / name of organization making this file
DATE    = '2008-05-06T08:56:19.9999' / file creation date (YYYY-MM-DDThh:mm:ss U
EXTNAME = 'EFFECTIVE AREA'     / name of this binary table extension
TUNIT1  = 'MeV     '           /
TUNIT2 = 'MeV ' /
TUNIT3 = ' '
TUNIT4 = ' '
TUNIT5 = 'm2 ' /
TDIM5 = '(60, 32)'
TELESCOP= 'GLAST ' /
INSTRUME= 'LAT ' /
DETNAM = 'BACK '
HDUCLASS= 'OGIP ' /
HDUDOC = 'CAL/GEN/92-019' /
HDUCLAS1= 'RESPONSE' /
HDUCLAS2= 'EFF_AREA' /
HDUVERS = '1.0.0 ' /
EARVERSN= '1992a ' /
1CTYP5  = 'ENERGY  '           / Always use log(ENERGY) for interpolation
2CTYP5  = 'COSTHETA'           / Off-axis angle cosine
CREF5   = '(ENERG_LO:ENERG_HI,CTHETA_LO:CTHETA_HI)' /
CSYSNAME= 'XMA_POL ' /
CCLS0001= 'BCF ' /
CDTP0001= 'DATA ' /
CCNM0001= 'EFF_AREA' /
CBD10001= 'VERSION(P6_v1_diff)'
CBD20001= 'CLASS(P6_v1_diff_back)'
CBD30001= 'ENERG(18-560000)MeV'
CBD40001= 'CTHETA(0.2-1)'
CBD50001= 'PHI(0-360)deg'
CBD60001= 'NONE '
CBD70001= 'NONE '
CBD80001= 'NONE '
CBD90001= 'NONE '
CVSD0001= '2007-01-17' / Dataset validity start date (UTC)
CVST0001= '00:00:00' /
CDES0001= 'GLAST LAT effective area' /
EXTVER  =                    1 / auto assigned by template parser
CHECKSUM= 'IpAMIo5LIoALIo5L'   / HDU checksum updated 2008-05-06T08:56:20
DATASUM = '340004495'          / data unit checksum updated 2008-05-06T08:56:20
END
```
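To illustrate how such a one-row “multidimensional array” BINTABLE can be read, here is a minimal sketch using `astropy.io.fits` on the example file above. Note that `astropy.io.fits` applies the `TDIM5` keyword when reading, so the exact shape of the returned `EFFAREA` array should be checked rather than assumed:

```
from astropy.io import fits

# Open the example file discussed above and pick the one-row table
hdu = fits.open('aeff_P6_v1_diff_back.fits')['EFFECTIVE AREA']
row = hdu.data[0]  # the table has a single row

energ_lo = row['ENERG_LO']    # 60 lower bin edges (MeV)
energ_hi = row['ENERG_HI']    # 60 upper bin edges (MeV)
ctheta_lo = row['CTHETA_LO']  # 32 lower bin edges
ctheta_hi = row['CTHETA_HI']  # 32 upper bin edges
effarea = row['EFFAREA']      # 1920 values; TDIM5 = '(60, 32)'

print(energ_lo.shape, ctheta_lo.shape, effarea.shape)
```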
#### HDU classes[¶](#hdu-classes)
Following NASA’s recommendation
(see [HFWG Recommendation R8](http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/general/ogip_94_006/ogip_94_006.html)),
a hierarchical classification is applied to each HDU within DL3 FITS files,
using the `HDUCLASS` and `HDUCLASn` keywords.
Some useful links from other projects:
* <http://cxc.harvard.edu/contrib/arots/fits/content.txt>
* <https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/ofwg_recomm/r8.html>
* <https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/ofwg_recomm/hduclas.html>
* <http://www.starlink.rl.ac.uk/docs/sun167.htx/sun167se3.html>
* <https://confluence.slac.stanford.edu/display/ST/LAT+Photons>

The different HDUs defined in the current specifications are listed here:
* `EVENTS` : Table containing the event lists.
See [EVENTS](index.html#iact-events).
* `GTI` : Table containing the Good Time Intervals (‘GTIs’) for the event list. See [GTI](index.html#iact-gti).
* `POINTING` : Table containing the pointing direction of the telescopes for a number of time stamps. See [POINTING](index.html#iact-pnt).
* `RESPONSE` : Table containing any of the different instrument response function components defined in the specs. See [IRFs](index.html#iact-irf).
The current HDU class scheme used is the following:
* `HDUCLASS` : General identifier of the data format. Recommended value: “GADF” (for “gamma-astro-data-formats”)
* `HDUDOC` : Link to the DL3 specifications documentation
* `HDUVERS` : Version of the DL3 specification format
* `HDUCLAS1` : General type of HDU, currently: `EVENTS`, `GTI` or `RESPONSE`
* `HDUCLAS2` : In case of `RESPONSE` type, refers to the IRF components stored within the HDU: `EFF_AREA`, `BKG`, `EDISP` or `RPSF`
* `HDUCLAS3` : In case of `RESPONSE` type, refers to the way the IRF component was produced (`POINT-LIKE` or `FULL-ENCLOSURE`)
* `HDUCLAS4` : In case of `RESPONSE` type, refers to the name of the specific format
| HDUCLAS1 | HDUCLAS2 | HDUCLAS3 | HDUCLAS4 |
| --- | --- | --- | --- |
| EVENTS | | | |
| GTI | | | |
| POINTING | | | |
| RESPONSE | EFF_AREA | POINT-LIKE | AEFF_2D |
| | | FULL-ENCLOSURE | AEFF_2D |
| | EDISP | POINT-LIKE | EDISP_2D |
| | | FULL-ENCLOSURE | EDISP_2D |
| | RPSF | FULL-ENCLOSURE | PSF_TABLE |
| | | | PSF_3GAUSS |
| | | | PSF_KING |
| | | | PSF_GTPSF |
| | BKG | POINT-LIKE | BKG_2D |
| | | | BKG_3D |
| | | FULL-ENCLOSURE | BKG_2D |
| | | | BKG_3D |
| | RAD_MAX | POINT-LIKE | RAD_MAX_2D |
| INDEX | OBS | | |
| | HDU | | |
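As a simple illustration, the `HDUCLAS*` hierarchy of a DL3 file can be listed like this (a sketch; the file name is a placeholder):

```
from astropy.io import fits

# Print the HDU classification keywords of every extension in a DL3 file.
# 'dl3_events.fits' is just a placeholder file name.
with fits.open('dl3_events.fits') as hdus:
    for hdu in hdus[1:]:
        keys = ['HDUCLASS', 'HDUCLAS1', 'HDUCLAS2', 'HDUCLAS3', 'HDUCLAS4']
        values = [hdu.header.get(key, '') for key in keys]
        print(hdu.name, values)
```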
#### Notes[¶](#notes)
Here we collect miscellaneous notes that are helpful when reading or working with the specs.
##### FITS BINTABLE TFORM data type codes[¶](#fits-bintable-tform-data-type-codes)
The valid FITS BINTABLE TFORM data type codes are given in
[table 18](http://www.aanda.org/articles/aa/full_html/2010/16/aa15362-10/T18.html) of the FITS standard paper.
Information on how to use them correctly via CFITSIO is given
[here](https://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c_user/node20.html).
For [astropy.io.fits](http://docs.astropy.org/en/stable/io/fits/index.html), there’s these dicts to translate FITS BINTABLE TFORM codes to Numpy dtype codes:
```
>>> from astropy.io.fits.column import FITS2NUMPY, NUMPY2FITS
>>> FITS2NUMPY
{'J': 'i4', 'I': 'i2', 'L': 'i1', 'E': 'f4', 'M': 'c16', 'B': 'u1', 'K': 'i8', 'C': 'c8', 'D': 'f8', 'A': 'a'}
>>> NUMPY2FITS
{'i1': 'L', 'c16': 'M', 'i4': 'J', 'f2': 'E', 'i2': 'I', 'b1': 'L', 'i8': 'K', 'u8': 'K', 'u1': 'B', 'u4': 'J', 'u2': 'I', 'c8': 'C', 'f8': 'D', 'f4': 'E', 'a': 'A'}
```
But normally you should never have to handle these dtypes manually from Python.
`astropy.io.fits` or `astropy.table.Table` will read and write the TFORM FITS header keys correctly for you.
##### References[¶](#references)
Existing FITS specs and recommendations:
* <http://fits.gsfc.nasa.gov/fits_home.html>
* <http://fits.gsfc.nasa.gov/registry/grouping.html>

Existing HEASARC specs and recommendations:
* <https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/ofwg_recomm.html>
* <http://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/caldb_doc.html>

#### Glossary[¶](#glossary)
##### FITS[¶](#fits)
Flexible Image Transport System
<http://fits.gsfc.nasa.gov/>

##### HEASARC[¶](#heasarc)
High Energy Astrophysics Science Archive Research Centre.
<http://heasarc.gsfc.nasa.gov/>

##### OGIP FITS Standards[¶](#ogip-fits-standards)
The FITS Working Group in the Office of Guest Investigators Program has established conventions for FITS files for high-energy astrophysics projects.
<http://hesperia.gsfc.nasa.gov/rhessidatacenter/software/ogip/ogip.html>

##### CALDB[¶](#caldb)
The HEASARC’s calibration database (CALDB) system stores and indexes datasets associated with the calibration of high energy astronomical instrumentation.
<http://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/caldb_intro.html>

##### IACT[¶](#iact)
IACT = imaging atmospheric Cherenkov telescope (see [wikipedia article](https://en.wikipedia.org/wiki/IACT)).
##### Observation = Run[¶](#observation-run)
For IACTs observations are usually conducted by pointing the array (or a sub-array) for a period of time (typically half an hour for current IACTs)
at a fixed location in celestial coordinates (i.e. the telescopes slew in horizontal Alt/Az coordinates to keep the pointing position RA/DEC in the center of the field of view).
For current IACTs the term “run” is more common than “observation”, but for CTA probably the term “observation” will be used. So it’s recommended to use observation in these format specs.
##### Off Observation[¶](#off-observation)
The term “off observation” or “off run” refers to observations where most of the field of view contains no gamma-ray emission (apart from a possible diffuse extragalactic isotropic component, which is supposed to be very weak at TeV energies).
[AGN](https://en.wikipedia.org/wiki/Active_galactic_nucleus) observations are sometimes also considered “off observations”, because the fraction of the field of view containing their gamma-ray emission is often very small, and most of the field of view is empty.
For further info on background modeling see
[Berge (2007)](http://adsabs.harvard.edu/abs/2007A%26A...466.1219B).
### Events[¶](#events)
This document describes the format to store DL3 event data for gamma-ray instruments.
The main table is `EVENTS`, which at the moment contains not only event parameters, but also key information about the observation needed to analyse the data such as the pointing position or the livetime.
The `GTI` table gives the “good time intervals”. The `EVENTS` table should match the `GTI` table, i.e. contain the relevant events within these time intervals.
No requirement is stated yet whether event times must be sorted chronologically;
this is under discussion and might be added in a future version of the spec;
sorting events by time now when producing EVENTS tables is highly recommended.
The `POINTING` table defines the pointing position at given times, allowing the pointing position to be interpolated at any time. It isn't in use yet:
science tools access the pointing position from the EVENTS header and only support fixed-pointing observations.
We note that it is likely that the EVENTS, GTI, and POINTING tables will change significantly in the future. Discussion on observing modes is ongoing; a new HDU might be introduced that stores the “observation” (sometimes called “tech”)
information, such as the observation mode and pointing information. Other major decisions, like whether there will be one set of IRFs per OBS_ID (like we have now) or per GTI, are being discussed in CTA. Another discussion point is how to handle trigger dead times. Currently science tools have to access
`DEADC` or `LIVETIME` from the event header, and combine that with `GTI`
if they want to analyse parts of observations. One option could be to absorb the dead-time correction into the effective areas; another could be to add dead-time correction factors to GTI tables.
#### EVENTS[¶](#events)
The `EVENTS` extension is a binary FITS table that contains an event list.
Each row of the table provides information that characterises one event.
The mandatory and optional columns of the table are listed below. In addition, a list of header keywords providing metadata is specified.
Here too, there are mandatory and optional keywords.
The recommended extension name of the binary table is `EVENTS`.
##### Mandatory columns[¶](#mandatory-columns)
We follow the [OGIP event list](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/events/ogip_94_003/ogip_94_003.html) standard.
* `EVENT_ID` type: int64
+ Event identification number at the DL3 level.
See notes on [EVENT_ID](#iact-events-event-id) below.
* `TIME` type: float64, unit: s
+ Event time (see [Time](index.html#time))
* `RA` type: float, unit: deg
+ Reconstructed event Right Ascension (see [RA / DEC](index.html#coords-radec)).
* `DEC` type: float, unit: deg
+ Reconstructed event Declination (see [RA / DEC](index.html#coords-radec)).
* `ENERGY` type: float, unit: TeV
+ Reconstructed event energy.
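As an illustration (not a validated writer), here is a minimal sketch that creates an `EVENTS` table with these mandatory columns using `astropy.table.Table`; the values are made up, and the mandatory header keywords described below still need to be added before the file is compliant:

```
import numpy as np
from astropy.table import Table

# Made-up values for three events
events = Table(
    {
        'EVENT_ID': np.array([1, 2, 3], dtype=np.int64),
        'TIME': np.array([100.0, 101.5, 103.2]),   # s, relative to the reference time
        'RA': np.array([83.63, 83.60, 83.70]),     # deg
        'DEC': np.array([22.01, 22.05, 21.98]),    # deg
        'ENERGY': np.array([1.2, 0.8, 4.5]),       # TeV
    }
)
for name, unit in [('TIME', 's'), ('RA', 'deg'), ('DEC', 'deg'), ('ENERGY', 'TeV')]:
    events[name].unit = unit

events.meta['EXTNAME'] = 'EVENTS'
events.write('events.fits', overwrite=True)
```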
##### Optional columns[¶](#optional-columns)
Note
None of the following columns is required to be part of an `EVENTS`
extension. Any software **using** these columns should first check whether the columns exist, and warn in case of their absence. Any software **ignoring**
these columns should make sure that their presence does not deteriorate the functioning of the software.
* `EVENT_TYPE` type: bit field (in FITS `tform=32X`)
+ Event quality partition.
* `MULTIP` type: int
+ Telescope multiplicity. Number of telescopes that have seen the event.
* `GLON` type: float, unit: deg
+ Reconstructed event Galactic longitude (see [Galactic](index.html#coords-galactic)).
* `GLAT` type: float, unit: deg
+ Reconstructed event Galactic latitude (see [Galactic](index.html#coords-galactic)).
* `ALT` type: float, unit: deg
+ Reconstructed altitude (see [Alt / Az](index.html#coords-altaz))
* `AZ` type: float, unit: deg
+ Reconstructed azimuth (see [Alt / Az](index.html#coords-altaz))
* `DETX` type: float, unit: deg
+ Reconstructed field of view X (see [Field of view](index.html#coords-fov)).
* `DETY` type: float, unit: deg
+ Reconstructed field of view Y (see [Field of view](index.html#coords-fov)).
* `THETA` type: float, unit: deg
+ Reconstructed field of view offset angle (see [Field of view](index.html#coords-fov)).
* `PHI` type: float, unit: deg
+ Reconstructed field of view position angle (see [Field of view](index.html#coords-fov)).
* `GAMMANESS` type: float
+ Classification score of a signal / background separation. SHOULD be between 0 and 1, with higher values indicating larger confidence that the event was produced by a gamma ray.
* `DIR_ERR` type: float, unit: deg
+ Direction error of reconstruction
* `ENERGY_ERR` type: float, unit: TeV
+ Error on reconstructed event energy
* `COREX` type: float, unit: m
+ Reconstructed core position X of shower
* `COREY` type: float, unit: m
+ Reconstructed core position Y of shower
* `CORE_ERR` type: float, unit: m
+ Error on reconstructed core position of shower
* `XMAX` type: float, unit: radiation lengths
+ First interaction depth
* `XMAX_ERR` type: float, unit: radiation lengths
+ Error on first interaction depth
* `HIL_MSW` type: float
+ Hillas mean scaled width
* `HIL_MSW_ERR` type: float
+ Hillas mean scaled width error
* `HIL_MSL` type: float
+ Hillas mean scaled length
* `HIL_MSL_ERR` type: float
+ Hillas mean scaled length error
##### Mandatory header keywords[¶](#mandatory-header-keywords)
The standard FITS reference time header keywords should be used (see [Formats](index.html#time-formats)).
An observatory Earth location should be given as well (see [Earth location](index.html#coords-location)).
* `HDUCLASS` type: string
+ Signal conformance with HEASARC/OGIP conventions (option: ‘OGIP’). See [HDU classes](index.html#hduclass).
* `HDUDOC` type: string
+ Reference to documentation where data format is documented. See [HDU classes](index.html#hduclass).
* `HDUVERS` type: string
+ Version of the format (e.g. ‘1.0.0’). See [HDU classes](index.html#hduclass).
* `HDUCLAS1` type: string
+ Primary extension class (option: ‘EVENTS’). See [HDU classes](index.html#hduclass).
* `OBS_ID` type: int
+ Unique observation identifier (Run number)
* `TSTART` type: float, unit: s
+ Start time of observation (relative to reference time, see [Time](index.html#time))
* `TSTOP` type: float, unit: s
+ End time of observation (relative to reference time, see [Time](index.html#time))
* `ONTIME` type: float, unit: s
+ Total *good time* (sum of length of all Good Time Intervals).
If a Good Time Interval (GTI) table is provided, `ONTIME` should be
calculated as the sum of those intervals. Corrections for instrumental
*dead time* effects are **NOT** included.
* `LIVETIME` type: float, unit: s
+ Total time (in seconds) on source, corrected for the *total* instrumental
dead time effect.
* `DEADC` type: float
+ Dead time correction, defined by `LIVETIME/ONTIME`.
It lies in the interval [0,1]. Defined to be 0 if `ONTIME=0`.
* `OBS_MODE` type: string
+ Observation mode. See notes on [OBS_MODE](#iact-events-obs-mode) below.
* `RA_PNT` type: float, unit: deg
+ Pointing Right Ascension (see [RA / DEC](index.html#coords-radec)). Not mandatory if `OBS_MODE=DRIFT`, but average values could optionally be provided.
* `DEC_PNT` type: float, unit: deg
+ Pointing declination (see [RA / DEC](index.html#coords-radec)). Not mandatory if `OBS_MODE=DRIFT`, but average values could optionally be provided.
* `ALT_PNT` type: float, unit: deg
+ Pointing Altitude (see [Alt / Az](index.html#coords-altaz)). Only mandatory if `OBS_MODE=DRIFT`
* `AZ_PNT` type: float, unit: deg
+ Pointing azimuth (see [Alt / Az](index.html#coords-altaz)). Only mandatory if `OBS_MODE=DRIFT`
* `EQUINOX` type: float
+ Equinox in years for the celestial coordinate system in which positions
given in either the header or data are expressed (options: 2000.0).
See also [HFWG Recommendation R3](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/ofwg_recomm/r3.html) for the OGIP standard.
* `RADECSYS` type: string
+ Stellar reference frame used for the celestial coordinate system in
which positions given in either the header or data are expressed.
(options: ‘ICRS’, ‘FK5’).
See also [HFWG Recommendation R3](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/ofwg_recomm/r3.html) for the OGIP standard.
* `ORIGIN` type: string
+ Organisation that created the FITS file.
This can be the same as `TELESCOP` (e.g. “HESS”), but it could
also be different if an organisation has multiple telescopes (e.g. “NASA” or “ESO”).
* `TELESCOP` type: string
+ Telescope (e.g. ‘CTA’, ‘HESS’, ‘VERITAS’, ‘MAGIC’, ‘FACT’)
* `INSTRUME` type: string
+ Instrument used to acquire the data contained in the file.
Each organisation and telescope has to define this.
E.g. for CTA it could be ‘North’ and ‘South’, or sub-array configurations;
this has not been defined yet.
* `CREATOR` type: string
+ Software that created the file. When appropriate, the value of the
`CREATOR` keyword should also reference the specific version of the
program that created the FITS file. It is intended that this keyword
should refer to the program that originally defined the FITS file
structure and wrote the contents. If a FITS file is subsequently
copied largely intact into a new FITS file by another program, then the value
of the `CREATOR` keyword should still refer to the original program.
`HISTORY` keywords should be used instead to document any further
processing that is performed on the file after it is created.
For more reading on the OGIP standard, see
[here](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/ofwg_recomm/r7.html).
##### Optional header keywords[¶](#optional-header-keywords)
* `OBSERVER` type: string
+ Name of observer. This could be for example the PI of a proposal.
* `CREATED` type: string
+ Time when file was created in ISO standard date representation
“ccyy-mm-ddThh:mm:ss” (UTC)
* `OBJECT` type: string
+ Observed object (e.g. ‘Crab’)
* `RA_OBJ` type: float, unit: deg
+ Right ascension of `OBJECT`
* `DEC_OBJ` type: float, unit: deg
+ Declination of `OBJECT`
* `EV_CLASS` type: str
+ Event class. See notes on [EV_CLASS and EVENT_TYPE](#iact-events-class-type) below.
* `TELAPSE` type: float, unit: s
+ Time interval between start and stop time (`TELAPSE=TSTOP-TSTART`).
Any gaps due to bad weather, or high background counts and/or other
anomalies, are included.
Warning
Keywords below seem to be pretty low-level and eventually instrument specific. It needs to be discussed whether a recommendation on these keywords should be made, or whether the definition should be left to the respective consortia.
* `HDUCLAS2` type: string
+ Secondary extension class (option: ‘ACCEPTED’). See [HDU classes](index.html#hduclass).
* `TELLIST` type: string
+ Telescope IDs in observation (e.g. ‘1,2,3,4’)
* `N_TELS` type: int
+ Number of observing telescopes
* `TASSIGN` type: string
+ Place of time reference (‘Namibia’)
* `DST_VER` type: string
+ Version of DST/Data production
* `ANA_VER` type: string
+ Reconstruction software version
* `CAL_VER` type: string
+ Calibration software version
* `CONV_DEP` type: float
+ Convergence depth (0 for parallel pointing)
* `CONV_RA` type: float, unit: deg
+ Convergence Right Ascension
* `CONV_DEC` type: float, unit: deg
+ Convergence Declination
* `TRGRATE` type: float, unit: Hz
+ Mean system trigger rate
* `ZTRGRATE` type: float, unit: Hz
+ Zenith equivalent mean system trigger rate
* `MUONEFF` type: float
+ Mean muon efficiency
* `BROKPIX` type: float
+ Fraction of broken pixels (0.15 means 15% broken pixels)
* `AIRTEMP` type: float, unit: deg C
+ Mean air temperature at ground during the observation.
* `PRESSURE` type: float, unit: hPa
+ Mean air pressure at ground during the observation.
* `RELHUM` type: float
+ Relative humidity
* `NSBLEVEL` type: float, unit: a.u.
+ Measure for night sky background level
##### Notes[¶](#notes)
This paragraph contains some explanatory notes on some of the columns and header keys mentioned above.
###### EVENT_ID[¶](#event-id)
Most analyses with high-level science tools don’t need `EVENT_ID` information.
But being able to uniquely identify every event is important, e.g. when comparing the high-level reconstructed event parameters (`RA`, `DEC`,
`ENERGY`) for different calibrations, reconstructions or gamma-hadron separations.
Assigning a unique `EVENT_ID` during data taking can be difficult or impossible. E.g. in H.E.S.S. we have two numbers `BUNCH_ID_HESS` and
`EVENT_ID_HESS` that only together uniquely identify an event within a given run (i.e. `OBS_ID`). Probably the scheme to uniquely identify events at the DL0 level for CTA will be even more complicated, because of the much larger number of telescopes and events.
So given that data taking and event identification is different for every IACT at low data levels and is already fixed for existing IACTs, we propose here to have an `EVENT_ID` that is simpler and works the same for all IACTs at the DL3 level.
As an example: for H.E.S.S. we achieve this by using an INT64 for `EVENT_ID`
and to store `EVENT_ID = (BUNCH_ID_HESS << 32) | (EVENT_ID_HESS)`, i.e. use the upper bits to contain the low-level bunch ID and the lower bits to contains the low-level event ID. This encoding is unique and reversible, i.e. it’s easy to go back to `BUNCH_ID_HESS` and `EVENT_ID_HESS` for a given `EVENT_ID`,
and to do low-level checks (e.g. look at the shower images for a given event that behaves strangely in reconstructed high-level parameters).
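A small sketch of this bit-packing scheme (the helper function names are illustrative, not part of the spec):

```
def pack_event_id(bunch_id, low_level_event_id):
    """Combine the two low-level H.E.S.S. identifiers into one int64 EVENT_ID."""
    return (bunch_id << 32) | low_level_event_id

def unpack_event_id(event_id):
    """Recover (BUNCH_ID_HESS, EVENT_ID_HESS) from a DL3 EVENT_ID."""
    return event_id >> 32, event_id & 0xFFFFFFFF

event_id = pack_event_id(7, 123456)
assert unpack_event_id(event_id) == (7, 123456)
```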
###### EV_CLASS and EVENT_TYPE[¶](#ev-class-and-event-type)
Currently in this format specification, event class `EV_CLASS` is a header key
(i.e. the same for all events in a given event list) and `EVENT_TYPE` is a bitfield column (i.e. can have a different value for each event). Both are optional at this time, only used as provenance information, not used by science tools to make any decisions about how to analyse the data.
The reason for this is simply that we have not agreed yet on a scheme what event class and event type means, and how it should be used by science tools for analysis. Developing this will be one of the major topics for the next version of the spec. It is likely that a proper definition of event classes and types will not be compatible with what is currently defined here, so not filling
`EV_CLASS` and `EVENT_TYPE` when creating DL3 data is not a bad idea.
To briefly summarise the discussions on this important point over the past years:
they were mostly based on looking at what Fermi-LAT is doing and on some prototyping in H.E.S.S. to export DL3 data to FITS.
The scheme in Fermi-LAT for event classes and event types is nicely summarised
[here](https://github.com/gammapy/PyGamma15/blob/gh-pages/talks/fermi2/fermi_advanced_v0.pdf)
or [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Data/LAT_DP.html).
There, event classes and types are key parts of the data model, used for EVENT-to-IRF association, and even end users need to learn about them and pass event class and type information when using the science tools.
One option could be to mostly adopt what Fermi-LAT does for IACTs. However, a major difference is that Fermi-LAT is a more stable detector, with a CALDB of a very limited number of IRFs that can be used for all data, whereas for IACTs with changing telescope configurations, degrading mirrors, changing atmosphere, zenith angle, … most likely we will have to produce per-observation IRFs, and then it’s easier to bundle the IRFs with the EVENTS in one file,
and the use of event class and type to link EVENTS and IRFs is no longer needed.
An alternative scheme is to use the term “event class” to describe a given analysis configuration. This is e.g. how we currently use `EV_CLASS` in H.E.S.S., we fill values like “standard” or “hard” or “loose” to describe a given full configuration that is the result of a calibration, event reconstruction and gamma-hadron separation pipeline. This is similar to what Fermi-LAT does, except less sophisticated, the event classes are completely independent, there is no nesting, and separate events and IRF files are produced for each class/configuration. To analyse data from a given class, the user chooses which set of files to download (e.g. “loose” for pulsars or “hard” for a bright source where a good PSF is needed), and then the science tools don’t need to do anything with `EV_CLASS`, it is just provenance information. Some people are experimenting with the use of `EVENT_TYPE` in a similar way as Fermi-LAT,
e.g. to have event quality partitioning based on number of telescopes that saw a given event, or other criteria. Again, it is left to users to split events by
`EVENT_TYPE` and produce IRFs for each event type and pass those for a joint fit to the science tools, as there is no agreement or implementation yet in the science tools to support `EVENT_TYPE` directly.
So to conclude and summarise again: `EV_CLASS` and `EVENT_TYPE` as mentioned here in this spec are optional and very preliminary. Defining event class and type for IACTs needs more prototyping by the science tools and current IACTs and CTA and discussion, and then a proposal for a specification.
###### OBS_MODE[¶](#obs-mode)
The keyword `OBS_MODE` specifies the mode of the observation.
This is used in cases where the instrument or detector can be configured to operate in different modes which significantly affect the resulting data, or to accommodate different types of instrument. More details can be found
[here](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/general/ogip_94_001/ogip_94_001.html).
Most IACT data will be `POINTING`, indicating a constant pointing in equatorial coordinates.
In addition to the OGIP-defined values (`POINTING`, `RASTER`, `SLEW` and `SCAN`), we define the option `DRIFT` to accommodate ground-based wide-field instruments, in which local zenith/azimuth coordinates remain constant. In this case, the header keywords
`RA_PNT` and `DEC_PNT` are no longer mandatory, and instead `ALT_PNT` and `AZ_PNT`
are required.
It is likely that `OBS_MODE` in the future will be a key piece of information in the DL3 data model, defining the observation mode (e.g. pointed, divergent,
slewing, …) and being required to analyse the data correctly.
#### GTI[¶](#gti)
The `GTI` extension is a binary FITS table that contains the Good Time Intervals (‘GTIs’) for the event list. A general description of GTIs can be found in the [OGIP GTI](http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/rates/ogip_93_003/ogip_93_003.html#tth_sEc6.3) standard.
This HDU contains two mandatory columns named `START` and `STOP`. At least one row containing the start and end time of the observation must be present. The values are in units of seconds with respect to the reference time defined in the header (keywords MJDREFI and MJDREFF). This extension allows for a detailed handling of good time intervals (i.e. excluding periods with cloud cover or lightning during one observation).
High-level science tools could modify the GTIs according to user parameters.
See e.g. [gtmktime](https://www.slac.stanford.edu/exp/glast/wb/prod/pages/sciTools_gtmktime/gtmktime_help.htm) for an application example from the Fermi Science Tools.
##### Mandatory columns[¶](#mandatory-columns)
* `START` type: float64, unit: s
+ Start time of good time interval (see [Time](index.html#time))
* `STOP` type: float64, unit: s
+ End time of good time interval (see [Time](index.html#time))
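For illustration, here is a minimal sketch that computes the total good time from a `GTI` table and selects the events falling inside the intervals; reading both tables from the same file with `astropy.table.Table` is an assumption about the file layout:

```
import numpy as np
from astropy.table import Table

# 'events.fits' is a placeholder for a DL3 file containing EVENTS and GTI HDUs
gti = Table.read('events.fits', hdu='GTI')
events = Table.read('events.fits', hdu='EVENTS')

# ONTIME = sum of the lengths of all good time intervals
ontime = np.sum(gti['STOP'] - gti['START'])

# Keep only events whose TIME falls inside any good time interval
time = np.asarray(events['TIME'])[:, np.newaxis]
start = np.asarray(gti['START'])
stop = np.asarray(gti['STOP'])
inside = np.any((time >= start) & (time < stop), axis=1)
selected = events[inside]
print(ontime, len(selected))
```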
##### Mandatory header keywords[¶](#mandatory-header-keywords)
The standard FITS reference time header keywords should be used (see [Formats](index.html#time-formats)).
Warning
This is a first draft proposal of a pointing table.
It is not being used yet by data producers or science tools.
Please note that the format is likely subject to change.
#### POINTING[¶](#pointing)
The `POINTING` extension is a binary FITS table that contains for a number of time stamps the pointing direction of the telescopes. A *pointing* is here defined as the centre of the field of view (or centre of the camera coordinates). In reality, all telescopes may point to different positions
(for example for divergent pointing mode). The main purpose of the `POINTING`
extension is to provide time-dependent information on how to transform between celestial and terrestrial coordinates.
See also [HFWG Recommendation R3](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/ofwg_recomm/r3.html) for the OGIP standard.
##### Mandatory columns[¶](#mandatory-columns)
* `TIME` type: float64, unit: s
+ Pointing time (see [Time](index.html#time))
* `RA_PNT` type: float, unit: deg
+ Pointing Right Ascension (see [RA / DEC](index.html#coords-radec)).
* `DEC_PNT` type: float, unit: deg
+ Pointing declination (see [RA / DEC](index.html#coords-radec)).
##### Optional columns[¶](#optional-columns)
* `ALT_PNT` type: float, unit: deg
+ Pointing altitude (see [Alt / Az](index.html#coords-altaz)).
* `AZ_PNT` type: float, unit: deg
+ Pointing azimuth (see [Alt / Az](index.html#coords-altaz)).
##### Mandatory header keywords[¶](#mandatory-header-keywords)
The standard FITS reference time header keywords should be used (see [Formats](index.html#time-formats)).
An observatory Earth location should be given as well (see [Earth location](index.html#coords-location)).
### IRFs[¶](#irfs)
The instrument response functions (IRFs) currently in use for gamma-ray instruments and, in particular, imaging atmospheric Cherenkov telescopes (IACTs) are stored in FITS binary tables using the “multidimensional array” convention (binary tables with a single row and array columns) described at [BINTABLE HDU](index.html#fits-arrays-bintable-hdu).
This format has been used for calibration data and IRF of X-ray instruments,
as well as for the IRFs that are distributed with the Fermi-LAT science tools.
Two different approaches are used to store the IRF of gamma-ray instruments:
* Full-enclosure IRF: all IRF components are stored as a function of the offset with respect to the source position.
* Point-like IRF: IRF components are calculated after applying a cut in direction offset. This format has been used by the current generation of IACTs to perform spectral analysis and light curves.
At the moment (November 2015), this format is used by H.E.S.S., MAGIC and VERITAS and supported by Gammapy and Gammalib and is being proposed for DL3 IRF (i.e. the format distributed to end users and used by the science tools for CTA).
#### IRF components[¶](#irf-components)
The IRF is made up of several components, described here:
##### Effective Area[¶](#effective-area)
Effective area combines the detection efficiency of an instrument with the observable area.
It can be interpreted as the area a perfect detector directly measuring gamma rays would have.
\[A_{\mathrm{eff}}(E, ...) = p(E, ...) \cdot A,\]
with \(A\) the total observable area and \(p(E, ...)\) the detection probability for a gamma ray of a given energy and possibly other parameters,
like the position in the field of view.
Effective area has to be calculated from simulated events in a discretized manner.
\(p\) is estimated in bins of the dependent variables by dividing the number of detected and reconstructed events by the number of all simulated events.
The effective area in bin \(i\) is then
\[A_{\mathrm{eff}, i} = \frac{N_{\mathrm{detected}, i}}{N_{\mathrm{simulated}, i}} \cdot A,\]
Usually, \(A\) is the area in which the events were simulated,
for CORSIKA simulations this will be \(A = \pi R_{\mathrm{max}}^2\),
with \(R_{\mathrm{max}}\) being the maximum scatter radius.
Calculation of effective area can be done at several analysis steps,
e.g. after trigger, after image cleaning or, most importantly for the DL3 event lists, after applying all event selection criteria,
including gamma-hadron separation and, for the case of point-like IRFs, the \(\theta^2\) cut.
As there is in general no reconstructed energy for the undetected events,
effective area can only be expressed in bins of the true simulated energy.
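A minimal sketch of this binned estimate (all values and variable names are placeholders, not part of the spec):

```
import numpy as np

rng = np.random.default_rng(0)

# Placeholder simulation: true energies (TeV) of all simulated events and of
# the subset that survived all event selection cuts.
e_true_all = rng.uniform(0.1, 100, size=100_000)
detected = rng.random(size=e_true_all.size) < 0.2
e_true_detected = e_true_all[detected]

r_max = 500.0                         # maximum scatter radius in metres (placeholder)
area_simulated = np.pi * r_max ** 2   # A = pi * R_max^2

energy_bins = np.logspace(-1, 2, 31)  # 30 bins in true energy
n_sim, _ = np.histogram(e_true_all, bins=energy_bins)
n_det, _ = np.histogram(e_true_detected, bins=energy_bins)

# A_eff,i = N_detected,i / N_simulated,i * A
aeff = np.where(n_sim > 0, n_det / np.maximum(n_sim, 1) * area_simulated, 0.0)
```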
The proposed effective area format follows mostly the [OGIP effective area](https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_019/cal_gen_92_019.html) format document.
For the moment, the format for the effective area works to a satisfactory level.
Nevertheless, for instance, the energy threshold variation across the FoV is not taken into account. However, since the threshold definitions are currently not unified, including this variation would still be somewhat arbitrary and dependent on the analysis chain. In addition, this feature is not supported in current open-source tools. We therefore keep the option to add an individual extension listing the energy threshold varying across the FoV; this will likely be included in future releases.
##### Energy Dispersion[¶](#energy-dispersion)
The energy dispersion information is stored in a FITS file with one required extension (HDU). The stored quantity is \(\frac{dP}{d\mu}\), a PDF for the **energy migration**
\[\mu = \frac{E_{\mathrm{reco}}}{E_{\mathrm{true}}}\]
as a function of true energy and offset. It should be normalized to unity, i.e.,
\[\int_0^\infty \frac{dP}{d\mu}\,d\mu = 1\,.\]
The migration range covered in the file must be large enough to make this possible
(Suggestion: \(0.2 < \mu < 5\))
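A simple numerical check of this normalisation for a binned migration PDF could look like this (a sketch with placeholder values):

```
import numpy as np

# Placeholder binning: 50 migration bins covering the suggested range 0.2 < mu < 5
edges = np.linspace(0.2, 5.0, 51)
migra_lo, migra_hi = edges[:-1], edges[1:]
migra_center = 0.5 * (migra_lo + migra_hi)

# Placeholder dP/dmu: a Gaussian around mu = 1, normalised numerically
pdf = np.exp(-0.5 * ((migra_center - 1.0) / 0.1) ** 2)
pdf /= np.sum(pdf * (migra_hi - migra_lo))

# Check that the PDF integrates to one over the stored migration range
integral = np.sum(pdf * (migra_hi - migra_lo))
print(integral)   # should be close to 1
```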
###### Transformation[¶](#transformation)
For some analyses, for example when extracting an
[RMF](index.html#ogip-rmf), it is necessary to calculate the detector response
\(R(I,J)\), i.e. the probability to find an energy from within a given true energy bin *I* of width \(\Delta E_{\mathrm{true}}\) within a certain reconstructed energy bin *J* of width \(\Delta E_{\mathrm{reco}}\). In order to do so, the following integration has to be performed (for a fixed offset).
\[R(I,J) = \frac{ \int_{\Delta E_{\mathrm{true}}} R(I,E_{\mathrm{true}})\ d E_{\mathrm{true}}}{\Delta E_{\mathrm{true}}},\]
where
\[R(I,E_{\mathrm{true}}) = \int_{\mu(\Delta E_{\mathrm{reco}})} \mathrm{PDF}(E_{\mathrm{true}}, \mu)\ d \mu\]
is the probability to find a given true energy \(E_{\mathrm{true}}\) in the reconstructed energy band *J*.
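A sketch of this binned integration for a single true-energy bin and reconstructed-energy bin is shown below; the inner integral over \(\mu\) is approximated by an overlap-weighted sum over migration bins, and the average over the true-energy bin by sampling a few energies (all names and the sampling choice are illustrative):

```
import numpy as np

def response_element(e_true_lo, e_true_hi, e_reco_lo, e_reco_hi,
                     migra_lo, migra_hi, pdf):
    """Probability R(I, J) to reconstruct an event from true-energy bin I in reco bin J.

    pdf holds the dP/dmu values per migration bin (for one offset). The average
    over the true-energy bin is approximated by sampling a few true energies.
    """
    result = []
    for e_true in np.linspace(e_true_lo, e_true_hi, 5):
        # Migration interval that maps this true energy into the reco bin J
        mu_min, mu_max = e_reco_lo / e_true, e_reco_hi / e_true
        # Overlap of each migration bin with [mu_min, mu_max], weighted by dP/dmu
        overlap = np.clip(np.minimum(migra_hi, mu_max) - np.maximum(migra_lo, mu_min), 0, None)
        result.append(np.sum(pdf * overlap))
    return np.mean(result)
```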
##### Point spread function[¶](#point-spread-function)
###### Introduction[¶](#introduction)
The point spread function (PSF) ([Wikipedia - PSF](https://en.wikipedia.org/wiki/Point_spread_function)) represents the spatial probability distribution of reconstructed event positions for a point source.
So far we’re only considering radially symmetric PSFs here.
###### Probability distributions[¶](#probability-distributions)
* \(dP/d\Omega(r)\), where \(dP\) is the probability to find an event in a solid angle \(d\Omega\) at an offset \(r\) from the point source.
This is the canonical form we use and the values we store in files.
* Often, when comparing observed event distributions with a PSF model,
the \(dP/dr^2\) distribution in equal-width bins in \(r^2\) is used. The relation is \(d\Omega = \pi dr^2\), i.e. \(dP/d\Omega=(1/\pi)(dP/dr^2)\).
* Sometimes, the distribution \(dP/dr(r)\) is used.
The relation is \(dP/dr = 2 \pi r dP/d\Omega\).
TODO: explain “encircled energy” = “encircled counts” = “cumulative” representation of PSF and define containment fraction and containment radius.
###### Normalisation[¶](#normalisation)
PSFs must be normalised to integrate to total probability 1, i.e.
\[\int dP/d\Omega(r) d\Omega = 1\,.\]
This implies that the PSF producer is responsible for choosing the Theta range and normalising. I.e. it’s OK to choose a theta range that contains only 95% of the PSF, and then the integral will be 0.95.
We recommend everyone store PSFs so that truncation is completely negligible,
i.e. the containment should be 99% or better for all of parameter space.
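A minimal numerical check of this normalisation for a radial PSF given as \(dP/d\Omega\) values on a `RAD` grid, using the small-angle relation \(d\Omega = 2\pi r\, dr\) (placeholder values):

```
import numpy as np

# Placeholder RAD binning (deg) and dP/dOmega values (sr^-1)
edges = np.linspace(0.0, 1.0, 101)
rad_lo, rad_hi = edges[:-1], edges[1:]
rad_mid = np.deg2rad(0.5 * (rad_lo + rad_hi))   # bin centres in radians
drad = np.deg2rad(rad_hi - rad_lo)              # bin widths in radians

sigma = np.deg2rad(0.1)                         # placeholder Gaussian PSF width
dp_domega = np.exp(-0.5 * (rad_mid / sigma) ** 2) / (2 * np.pi * sigma ** 2)

# Integral of dP/dOmega over solid angle, dOmega = 2 pi r dr (small angles)
containment = np.sum(dp_domega * 2 * np.pi * rad_mid * drad)
print(containment)   # the fraction contained within the RAD range; should be close to 1
```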
###### Comments[¶](#comments)
* Usually the PSF is derived from Monte Carlo simulations, but in principle it can be estimated from bright point sources (AGN) as well.
* Tools should assume the PSF is well-sampled and noise-free.
I.e. if limited event statistics in the PSF computation is an issue,
it is up to the PSF producer to denoise it to an acceptable level.
##### Background[¶](#background)
One method of background modeling for IACTs is to construct spatial and / or spectral model templates of the irreducible cosmic ray background for a given reconstruction and gamma-hadron separation from [Off Observation](index.html#glossary-obs-off). These templates can then be used as an ingredient to model the background in observations that contain gamma-ray emission of interest, or to compute the sensitivity for that set of cuts.
Note
Generating background models requires the construction of several intermediate products (counts and livetime histograms, both filled by cutting out exclusion regions around sources like AGN) to arrive at the models containing an absolute rate described here. At this time we don’t specify a format for those intermediate formats.
Note
Background models are sometimes considered an instrument response function
(IRF) and sometimes not (e.g. when the background is estimated from different parts of the field of view for the same observation).
Here we have the background format specifications listed under IRFs,
simply because the storage format is very similar to the other IRFs
(e.g. effective area) and we didn’t want to introduce a new top-level section besides IRFs.
#### IRF axes[¶](#irf-axes)
Most IRFs are dependent on parameters, and the 1-dimensional parameter arrays are stored in columns. The following names are recommended:
* For energy grids, see [here](http://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_003/cal_gen_92_003.html#tth_sEc7)
for basic recommendations. Column names should be `ENERGY` or `ENERG_LO`, `ENERG_HI`
because that is used (consistently I think) for OGIP and Fermi-LAT.
For separate HDUs, the extension names should be `ENERGIES` or `EBOUNDS` (used by Fermi-LAT consistently).
* Sky coordinates should be called `RA`, `DEC`, `GLON`, `GLAT`, `ALT`, `AZ` (see [Coordinates](index.html#coords))
* Field of view coordinates `DETX`, `DETY` or `THETA`, `PHI` for offset and position angle in the field of view (see [Field of view](index.html#coords-fov)).
* Offset wrt. the source position should be called `RAD` (this is what the OGIP PSF formats use).
In the specific case of point-like IRFs:
* The energy-dependent radius of the selected region of interest should be `RAD_MAX`
The IRF format specifications mention a recommended axis format and axis units.
But tools should not depend on this and instead:
* Use the axis order specified by the `CREF` header keyword (see [BINTABLE HDU](index.html#fits-arrays-bintable-hdu))
* Use the axis unit specified by the `CUNIT` header keywords (see [BINTABLE HDU](index.html#fits-arrays-bintable-hdu))
#### Full-enclosure IRFs[¶](#full-enclosure-irfs)
Full-enclosure IRF format has been used for calibration data and IRF of X-ray instruments, as well as for the IRFs that are distributed with the Fermi-LAT science tools.
Any full-enclosure IRF component should contain the header keyword:
* `HDUCLAS3 = FULL-ENCLOSURE`
From here on, the specific format of each IRF component:
##### Effective area format[¶](#effective-area-format)
Here we specify the format to store the effective area of a full-enclosure IRF.
Effective area is always stored as a function of true energy. (see [Effective Area](index.html#iact-aeff))
###### AEFF_2D[¶](#aeff-2d)
####### Effective Area vs true energy[¶](#effective-area-vs-true-energy)
The effective area as a function of the true energy and offset angle is saved as a [BINTABLE HDU](index.html#fits-arrays-bintable-hdu) with required columns listed below.
Columns:
* `ENERG_LO`, `ENERG_HI` – ndim: 1, unit: TeV
+ True energy axis
* `THETA_LO`, `THETA_HI` – ndim: 1, unit: deg
+ Field of view offset axis
* `EFFAREA` – ndim: 2, unit: m^2
+ Effective area value as a function of true energy
Recommended axis order: `ENERGY`, `THETA`
Header keywords:
If the IRFs are only known to be “valid” or “safe” to use within a given energy range, that range can be given via the following two keywords. The keywords are optional, not all telescopes use the concept of a safe range; e.g. in CTA at this time this hasn’t been defined. Note that a proper scheme to declare IRF validity range (e.g. masks or weights, or safe cuts that depend on other parameters such as FOV offset) is not available yet.
* `LO_THRES` type: float, unit: TeV
+ Low energy threshold
* `HI_THRES` type: float, unit: TeV
+ High energy threshold
If the effective area corresponds to a given observation with an `OBS_ID`,
that `OBS_ID` should be given as a header keyword. Note that this is not always the case, e.g. sometimes IRFs are simulated and produced for instruments that haven’t even been built yet, and then used to simulate different kinds of observations.
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU:
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘RESPONSE’
* `HDUCLAS2` = ‘EFF_AREA’
* `HDUCLAS3` = ‘FULL-ENCLOSURE’
* `HDUCLAS4` = ‘AEFF_2D’
The recommended `EXTNAME` keyword is “EFFECTIVE AREA”.
Example data file: [`here`](_downloads/62350ce9147b233aad75b348ae0214d5/aeff_2d_full_example.fits).
##### Energy dispersion format[¶](#energy-dispersion-format)
The format to store full-enclosure energy dispersion (see [Energy Dispersion](index.html#iact-edisp)) is the following:
###### EDISP_2D[¶](#edisp-2d)
The energy dispersion information is saved as a
[BINTABLE HDU](index.html#fits-arrays-bintable-hdu) with the following required columns.
Columns:
* `ENERG_LO`, `ENERG_HI` – ndim: 1, unit: TeV
+ True energy axis
* `MIGRA_LO`, `MIGRA_HI` – ndim: 1, unit: dimensionless
+ Energy migration axis (defined above)
* `THETA_LO`, `THETA_HI` – ndim: 1, unit: deg
+ Field of view offset axis (see [Field of view](index.html#coords-fov))
* `MATRIX` – ndim: 3, unit: dimensionless
+ Energy dispersion \(dP/d\mu\), see [Energy Dispersion](index.html#iact-edisp).
Recommended axis order: `ENERGY`, `MIGRA`, `THETA`
Header keywords:
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU:
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘RESPONSE’
* `HDUCLAS2` = ‘EDISP’
* `HDUCLAS3` = ‘FULL-ENCLOSURE’
* `HDUCLAS4` = ‘EDISP_2D’
Example data file: TODO
##### PSF format[¶](#psf-format)
The PSF of IACTs does not always have a Gaussian shape. Its shape depends strongly on the analysis and on each specific instrument. For this reason, several parameterizations are allowed to store the PSF:
###### PSF_TABLE[¶](#psf-table)
This is a PSF FITS format we agree on for IACTs.
This file contains the offset- and energy-dependent table distribution of the PSF.
This format is almost identical to the [OGIP radial PSF](http://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_020/cal_gen_92_020.html) format. The differences are that we don’t have the dependency on azimuthal field of view position, the units are different and the recommended axis order is different
(to have uniformity across axis order in the IACT DL3 IRFs).
Columns:
* `ENERG_LO`, `ENERG_HI` – ndim: 1, unit: TeV
+ True energy axis
* `THETA_LO`, `THETA_HI` – ndim: 1, unit: deg
+ Field of view offset axis (see [Field of view](index.html#coords-fov))
* `RAD_LO`, `RAD_HI` – ndim: 1, unit: deg
+ Offset angle from source position
* `RPSF` – ndim: 3, unit: sr^-1
+ Point spread function value \(dP/d\Omega\), see [Probability distributions](index.html#psf-pdf).
Recommended axis order: `ENERGY`, `THETA`, `RAD`.
Header keywords:
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU:
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘RESPONSE’
* `HDUCLAS2` = ‘PSF’
* `HDUCLAS3` = ‘FULL-ENCLOSURE’
* `HDUCLAS4` = ‘PSF_TABLE’
Example data file: TODO
###### PSF_3GAUSS[¶](#psf-3gauss)
Multi-Gauss mixture models are a common way to model distributions
(for source intensity profiles, PSFs, anything really), see e.g. [2013PASP..125..719H](http://adsabs.harvard.edu/abs/2013PASP..125..719H).
For H.E.S.S., radial PSFs have been modeled as 1, 2 or 3 two-dimensional Gaussians \(dP/d\Omega\).
Note
A two-dimensional Gaussian distribution \(dP/d\Omega = dP/(dx\, dy)\) is equivalent to an exponential distribution in \(dP/dx\), where \(x=r^2\),
and a Rayleigh distribution in \(dP/dr\).
In this format, the triple-Gauss distribution is parameterised as follows:
\[dP/d\Omega(r, S, \sigma_1, A_2, \sigma_2, A_3, \sigma_3) =
\frac{S}{\pi}
\left[
\exp\left(-\frac{r^2}{2\sigma_1^2}\right) +
A_2 \exp\left(-\frac{r^2}{2\sigma_2^2}\right) +
A_3 \exp\left(-\frac{r^2}{2\sigma_3^2}\right)
\right],\]
where \(S\) is `SCALE`, \(\sigma_i\) is `SIGMA_i` and
\(A_i\) is `AMPL_i` (see columns listed below).
TODO: give analytical formula for the integral, so that it’s easy to check if the PSF is normalised for a given set of parameters.
TODO: give test case value and Python function for easy checking?
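In the meantime, here is a minimal sketch of such a check (not part of the specification): under the small-angle approximation \(d\Omega \approx 2\pi r\,dr\), the parameterisation above integrates to \(2 S (\sigma_1^2 + A_2\sigma_2^2 + A_3\sigma_3^2)\) with the \(\sigma_i\) expressed in radians. The function name and parameter values below are illustrative only.
```
import numpy as np

def psf_3gauss_total(scale, sigma_1, ampl_2, sigma_2, ampl_3, sigma_3):
    """Total probability of the triple-Gauss PSF (small-angle approximation).

    Integrating dP/dOmega over the sky with dOmega ~ 2*pi*r*dr gives
    2 * SCALE * (sigma_1**2 + AMPL_2 * sigma_2**2 + AMPL_3 * sigma_3**2),
    with the sigmas converted from deg to rad so that the result is
    dimensionless (SCALE is in sr^-1).
    """
    s1, s2, s3 = np.deg2rad([sigma_1, sigma_2, sigma_3])
    return 2 * scale * (s1**2 + ampl_2 * s2**2 + ampl_3 * s3**2)

# Illustrative check: a single Gaussian (AMPL_2 = AMPL_3 = 0) with
# SCALE = 1 / (2 * sigma_1_rad**2) should integrate to 1.
sigma_1 = 0.05  # deg
scale = 1.0 / (2 * np.deg2rad(sigma_1) ** 2)  # sr^-1
print(psf_3gauss_total(scale, sigma_1, 0.0, 0.1, 0.0, 0.2))  # ~1.0
```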
Note
By setting the amplitudes of the 3rd (and 2nd) Gaussians to 0 one can implement double (or single) Gaussian models as well.
Columns:
* `ENERG_LO`, `ENERG_HI` – ndim: 1, unit: TeV
+ True energy axis
* `THETA_LO`, `THETA_HI` – ndim: 1, unit: deg
+ Field of view offset axis (see [Field of view](index.html#coords-fov))
* `SCALE` – ndim: 2, unit: sr^(-1)
+ Absolute scale of the 1st Gaussian
* `SIGMA_1`, `SIGMA_2`, `SIGMA_3` – ndim: 2, unit: deg
+ Model parameter (see formula above)
* `AMPL_2`, `AMPL_3` – ndim: 2, unit: none
+ Model parameter (see formula above)
Recommended axis order: `ENERGY`, `THETA`
Header keywords:
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU:
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘RESPONSE’
* `HDUCLAS2` = ‘PSF’
* `HDUCLAS3` = ‘FULL-ENCLOSURE’
* `HDUCLAS4` = ‘PSF_3GAUSS’
Example data file: TODO
###### PSF_KING[¶](#psf-king)
The King function parametrisation has long been in use in astronomy as an analytical PSF model for many instruments, for example by the Fermi-LAT (see [2013ApJ…765…54A](http://adsabs.harvard.edu/abs/2013ApJ...765...54A)).
The distribution has two parameters `GAMMA` \(\gamma\) and `SIGMA` \(\sigma\)
and is given by the following formula:
\[dP/d\Omega(r,\sigma,\gamma) =
\frac{1}{2\pi\sigma^2}
\left(1-\frac{1}{\gamma}\right)
\left(1+\frac{r^2}{2\gamma\sigma^2}\right)^{- \gamma}\]
This formula integrates to 1 (see [Introduction](index.html#psf-intro)).
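As an illustrative cross-check (not part of the specification), the sketch below evaluates \(dP/d\Omega\) for the King profile and integrates it numerically under the small-angle approximation \(d\Omega \approx 2\pi r\,dr\); the parameter values are arbitrary.
```
import numpy as np

def psf_king(r_deg, sigma_deg, gamma):
    """King profile dP/dOmega in sr^-1 (r and sigma given in degrees)."""
    r = np.deg2rad(r_deg)
    sigma = np.deg2rad(sigma_deg)
    return (1.0 / (2 * np.pi * sigma**2) * (1 - 1 / gamma)
            * (1 + r**2 / (2 * gamma * sigma**2)) ** -gamma)

# Numerical normalisation check with arbitrary parameter values
sigma, gamma = 0.06, 2.5              # deg, dimensionless
r = np.linspace(0.0, 5.0, 200_001)    # deg; 5 deg is effectively infinity here
dr = np.deg2rad(r[1] - r[0])          # rad
integrand = psf_king(r, sigma, gamma) * 2 * np.pi * np.deg2rad(r)
print(np.sum(integrand) * dr)         # ~1.0
```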
Columns:
* `ENERG_LO`, `ENERG_HI` – ndim: 1, unit: TeV
+ True energy axis
* `THETA_LO`, `THETA_HI` – ndim: 1, unit: deg
+ Field of view offset axis (see [Field of view](index.html#coords-fov))
* `GAMMA` – ndim: 2, unit: none
+ Model parameter (see formula above)
* `SIGMA` – ndim: 2, unit: deg
+ Model parameter (see formula above)
Recommended axis order: `ENERGY`, `THETA`
Header keywords:
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU:
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘RESPONSE’
* `HDUCLAS2` = ‘PSF’
* `HDUCLAS3` = ‘FULL-ENCLOSURE’
* `HDUCLAS4` = ‘PSF_KING’
Example data file: TODO
###### PSF_GTPSF[¶](#psf-gtpsf)
The FITS file has the following BinTable HDUs / columns:
* PSF HDU
+ Energy – 1D (MeV)
+ Exposure – 1D (cm^2 s)
+ Psf – 2D (sr^-1), shape = (len(Energy) x len(Theta))
Point spread function value \(dP/d\Omega\), see [Probability distributions](index.html#psf-pdf).
* THETA HDU
+ Theta – 1D (deg)
Header keywords:
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU:
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘RESPONSE’
* `HDUCLAS2` = ‘PSF’
* `HDUCLAS3` = ‘FULL-ENCLOSURE’
* `HDUCLAS4` = ‘GTPSF’
Example data file: [`psf-fermi.fits`](_downloads/91a7e43b3bdde1d6d0209509556c1f79/psf-fermi.fits)
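A minimal sketch for reading such a file with `astropy` (the HDU and column names follow the layout above; capitalisation may vary between producers):
```
from astropy.io import fits

filename = "psf-fermi.fits"  # the example file linked above

with fits.open(filename) as hdulist:
    psf_hdu = hdulist["PSF"]
    theta_hdu = hdulist["THETA"]
    energy = psf_hdu.data["Energy"]      # MeV, shape (n_energy,)
    exposure = psf_hdu.data["Exposure"]  # cm^2 s, shape (n_energy,)
    psf = psf_hdu.data["Psf"]            # sr^-1, shape (n_energy, n_theta)
    theta = theta_hdu.data["Theta"]      # deg, shape (n_theta,)

print(f"{energy.size} energies x {theta.size} offsets, Psf shape {psf.shape}")
```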
##### Background format[¶](#background-format)
Here we specify two formats for the background template models (see [Background](index.html#bkg)) of a full-enclosure IRF:
* `BKG_2D` models depend on `ENERGY` and `THETA`, i.e. are radially symmetric.
* `BKG_3D` models depend on `ENERGY` and field of view coordinates `DETX` and `DETY`.
###### BKG_2D[¶](#bkg-2d)
The `BKG_2D` format contains a 2-dimensional array of post-select background rate, stored in the [BINTABLE HDU](index.html#fits-arrays-bintable-hdu) format.
####### Required columns:[¶](#required-columns)
* `ENERG_LO`, `ENERG_HI` – ndim: 1, unit: TeV
+ Reconstructed energy axis
* `THETA_LO`, `THETA_HI` – ndim: 1, unit: deg
+ Field of view offset axis (see [Field of view](index.html#coords-fov)).
* `BKG` – ndim: 2, unit: s^-1 MeV^-1 sr^-1
+ Absolute post-select background rate
(expected background per time, energy and solid angle).
Recommended axis order: `ENERGY`, `THETA`
####### Header keywords:[¶](#header-keywords)
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU:
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘RESPONSE’
* `HDUCLAS2` = ‘BKG’
* `HDUCLAS3` = ‘FULL-ENCLOSURE’
* `HDUCLAS4` = ‘BKG_2D’
Example data file: [`here`](_downloads/512625bc9308236946e411ad20069824/bkg_2d_full_example.fits).
###### BKG_3D[¶](#bkg-3d)
The `BKG_3D` format contains a 3-dimensional array of post-select background rate, stored in the [BINTABLE HDU](index.html#fits-arrays-bintable-hdu) format.
####### Required columns:[¶](#id3)
* `ENERG_LO`, `ENERG_HI` – ndim: 1, unit: TeV
+ Reconstructed energy axis
* `DETX_LO`, `DETX_HI`, `DETY_LO`, `DETY_HI` – ndim: 1, unit: deg
+ Field of view coordinates binning (see [Field of view](index.html#coords-fov))
* `BKG` – ndim: 3, unit: s^-1 MeV^-1 sr^-1
+ Absolute post-select background rate
(expected background per time, energy and solid angle).
Recommended axis order: `ENERGY`, `DETX`, `DETY`
####### Header keywords:[¶](#id4)
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU:
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘RESPONSE’
* `HDUCLAS2` = ‘BKG’
* `HDUCLAS3` = ‘FULL-ENCLOSURE’
* `HDUCLAS4` = ‘BKG_3D’
Further header keywords:
* `FOVALIGN` = ‘ALTAZ’ / ‘RADEC’
+ Alignment of the field-of-view coordinate system (see [Field of view](index.html#coords-fov))
Example data file: [`here`](_downloads/124d4df0ccb2d8f8a4719986b7e6d83a/bkg_3d_full_example.fits).
###### Notes[¶](#notes)
The background rate is not a “flux” or “surface brightness”: it is already a count rate and, unlike gamma-ray flux and surface brightness models, it does not need to be multiplied with the effective area to obtain predicted counts.
The rate is given per observation time, without any dead-time correction; do not use the livetime when computing or using the background rate.
#### Point-like IRFs[¶](#point-like-irfs)
Point-like IRFs have traditionally been used within the IACT community. Each IRF component is calculated from the events surviving an energy-dependent directional cut around the assumed source position.
The format of each point-like IRF component is analogous to those already described in the full-enclosure IRF specifications (see [Full-enclosure IRFs](index.html#full-enclosure-irfs)), with certain differences listed in this section.
Any point-like IRF component should contain the header keyword:
* `HDUCLAS3 = POINT-LIKE`
##### RAD_MAX[¶](#rad-max)
In addition to the IRFs, the actual directional cut applied to the data needs to be stored. The cut may be constant or may vary along several axes, and the two cases are stored differently.
If the angular cut is constant in energy and FoV offset, an additional header keyword may be added to the IRF HDU:
Header keyword:
* `RAD_MAX` type: float, unit: deg
+ Radius of the directional cut applied to calculate the IRF, in degrees.
If this keyword is present, science tools should assume that the directional cut of this point-like IRF is constant over all axes.
If the angular cut varies along any axis (reconstructed energy or FoV), point-like IRFs require an additional binary table that stores the values of `RAD_MAX` as a function of reconstructed energy and FoV offset, following the
[BINTABLE HDU](index.html#fits-arrays-bintable-hdu) format. Note that any DL3 file with a point-like IRF (with `HDUCLAS3 = POINT-LIKE`) that has no `RAD_MAX` keyword in the HDU metadata must contain this additional HDU.
###### RAD_MAX_2D[¶](#rad-max-2d)
The `RAD_MAX_2D` format contains a 2-dimensional array of directional cut values, stored in the
[BINTABLE HDU](index.html#fits-arrays-bintable-hdu) format.
Required columns:
* `ENERG_LO`, `ENERG_HI` – ndim: 1, unit: TeV
+ Reconstructed energy axis
* `THETA_LO`, `THETA_HI` – ndim: 1, unit: deg
+ Field of view offset axis
* `RAD_MAX` – ndim: 2, unit: deg
+ Radius of the directional cut applied to calculate the IRF, in degrees.
Recommended axis order: `ENERGY`, `THETA`, `RAD_MAX`
Header keywords:
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU:
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘RESPONSE’
* `HDUCLAS2` = ‘RAD_MAX’
* `HDUCLAS3` = ‘POINT-LIKE’
* `HDUCLAS4` = ‘RAD_MAX_2D’
Example data file: [`here`](_downloads/0386060d6bac27a7e6581e6b55dcafa7/rad_max_2d_point-like_example.fits).
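A minimal sketch (assuming `astropy`; the HDU name and the in-memory array orientation are assumptions to be adjusted to the actual file) of how a tool might look up the directional cut for a given reconstructed energy and offset from a `RAD_MAX_2D` HDU:
```
import numpy as np
from astropy.io import fits

def rad_max_lookup(filename, energy_tev, offset_deg, hdu="RAD_MAX"):
    """Return the RAD_MAX cut (deg) of the bin containing (energy, offset)."""
    with fits.open(filename) as hdulist:
        row = hdulist[hdu].data[0]   # GADF IRFs: single-row table with array columns
        e_hi = row["ENERG_HI"]       # TeV
        th_hi = row["THETA_HI"]      # deg
        rad_max = row["RAD_MAX"]     # deg; assumed shape (n_theta, n_energy)

    i_e = np.searchsorted(e_hi, energy_tev)
    i_th = np.searchsorted(th_hi, offset_deg)
    return rad_max[i_th, i_e]

# Usage (file name from the example above; values are illustrative):
# print(rad_max_lookup("rad_max_2d_point-like_example.fits", 1.0, 0.5))
```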
### Data storage[¶](#data-storage)
At the moment there is no agreed way to organise IACT data and to connect `EVENTS` with IRFs or other information, such as the time or pointing information needed for analysis.
Here we document one scheme that is used extensively in H.E.S.S., and partly also by other IACTs. We expect that it will be superseded in the future by a different scheme developed by CTA.
The basic idea is that current IACT data consists of “runs” or “observations”
with a given `OBS_ID`, and that for each observation there is one `EVENTS`
and several IRF FITS HDUs that contain everything needed to analyse that data.
A second idea is that with H.E.S.S. we export all data to FITS, so there are many thousands of observations; users usually need to make a run selection, e.g. by sky position or observation time, and they want to do so efficiently, without globbing thousands of files and opening their FITS headers to find out what data is present.
There are two index tables:
#### Observation index table[¶](#observation-index-table)
The observation index table is stored in a FITS file as a BINTABLE HDU:
* Suggested filename: `obs-index.fits.gz`
* Suggested HDU name: `OBS_INDEX`
It contains one row per observation (a.k.a. run) and lists parameters that are commonly used for observation selection, grouping and analysis.
##### Required columns[¶](#required-columns)
* `OBS_ID` type: int
+ Unique observation identifier (Run number)
* `OBS_MODE` type: string
+ Observation mode. See notes on [OBS_MODE](index.html#iact-events-obs-mode).
* `RA_PNT` type: float, unit: deg
+ Pointing Right Ascension (see [RA / DEC](index.html#coords-radec)). Not mandatory if `OBS_MODE=DRIFT`, but average values could optionally be provided.
* `DEC_PNT` type: float, unit: deg
+ Pointing declination (see [RA / DEC](index.html#coords-radec)). Not mandatory if `OBS_MODE=DRIFT`, but average values could optionally be provided.
* `ALT_PNT` type: float, unit: deg
+ Pointing Altitude (see [Alt / Az](index.html#coords-altaz)). Only mandatory if `OBS_MODE=DRIFT`
* `AZ_PNT` type: float, unit: deg
+ Pointing azimuth (see [Alt / Az](index.html#coords-altaz)). Only mandatory if `OBS_MODE=DRIFT`
* `TSTART` type: float, unit: s
+ Start time of observation relative to the reference time (see [Time](index.html#time))
* `TSTOP` type: float, unit: s
+ End time of observation relative to the reference time (see [Time](index.html#time))
##### Optional columns[¶](#optional-columns)
The following columns are optional. They are sometimes used for observation selection or data quality checks or analysis, but aren’t needed for most users.
* `ZEN_PNT` type: float, unit: deg
+ Observation pointing zenith angle at observation mid-time `TMID` (see [Alt / Az](index.html#coords-altaz))
* `ONTIME` type: float, unit: s
+ Total observation time (including dead time).
+ Equals `TSTOP` - `TSTART`
* `LIVETIME` type: float, unit: s
+ Total livetime (observation minus dead time)
* `DEADC` type: float
+ Dead time correction.
+ Defined such that `LIVETIME` = `DEADC` * `ONTIME`, i.e. `DEADC` is the fraction of time the telescope was actually able to take data.
* `DATE-OBS` type: string
+ Observation start date (see [Time](index.html#time))
* `TIME-OBS` type: string
+ Observation start time (see [Time](index.html#time))
* `DATE-END` type: string
+ Observation end date (see [Time](index.html#time))
* `TIME-END` type: string
+ Observation end time (see [Time](index.html#time))
* `N_TELS` type: int
+ Number of participating telescopes
* `TELLIST` type: string
+ Telescope IDs (e.g. ‘1,2,3,4’)
* `QUALITY` type: int
+ Observation data quality. The following levels of data quality are defined:
- -1 = unknown quality
- 0 = best quality, suitable for spectral analysis.
- 1 = medium quality, suitable for detection, but not spectra (typically if the atmosphere was hazy).
- 2 = bad quality, usually not to be used for analysis.
* `OBJECT` type: string
+ Primary target of the observation
+ Recommendations:
- Use a name that can be resolved by [SESAME](http://cds.u-strasbg.fr/cgi-bin/Sesame)
- For survey observations, use “survey”.
- For [Off Observation](index.html#glossary-obs-off), use “off observation”
* `GLON_PNT` type: float, unit: deg
+ Observation pointing Galactic longitude (see [Galactic](index.html#coords-galactic)).
* `GLAT_PNT` type: float, unit: deg
+ Observation pointing Galactic latitude (see [Galactic](index.html#coords-galactic)).
* `RA_OBJ` type: float, unit: deg
+ Right ascension of `OBJECT`
* `DEC_OBJ` type: float, unit: deg
+ Declination of `OBJECT`
* `TMID` type: float, unit: s
+ Mid time of observation relative to the reference time
* `TMID_STR` type: string
+ Mid time of observation in UTC string format: “YYYY-MM-DD HH:MM:SS”
* `EVENT_COUNT` type: int
+ Number of events in run
* `EVENT_RA_MEDIAN` type: float, unit: deg
+ Median right ascension of events
* `EVENT_DEC_MEDIAN` type: float, unit: deg
+ Median declination of events
* `EVENT_ENERGY_MEDIAN` type: float, unit: TeV
+ Median energy of events
* `EVENT_TIME_MIN` type: double, unit: s
+ First event time
* `EVENT_TIME_MAX` type: double, unit: s
+ Last event time
* `BKG_SCALE` type: float
+ Observation-dependent background scaling factor. See notes below.
* `TRGRATE` type: float, unit: Hz
+ Mean system trigger rate
* `ZTRGRATE` type: float, unit: Hz
+ Zenith equivalent mean system trigger rate
+ Some HESS chains export this at the moment and this quantity can be useful for data selection. Comparing values from different chains or other telescopes would require a more precise definition.
* `MUONEFF` type: float
+ Mean muon efficiency
+ Currently the definition from each analysis chain is used, since creating a unified specification is tricky.
* `BROKPIX` type: float
+ Fraction of broken pixels (0.15 means 15% broken pixels)
* `AIRTEMP` type: float, unit: deg C
+ Mean air temperature at ground during the observation.
* `PRESSURE` type: float, unit: hPa
+ Mean air pressure at ground during the observation.
* `NSBLEVEL` type: float, unit: a.u.
+ Measure for NSB level
+ Some HESS chains export this at the moment and this quantity can be useful for data selection. Comparing values from different chains or other telescopes would require a more precise definition.
* `RELHUM` type: float
+ Relative humidity
+ [Definition](https://en.wikipedia.org/wiki/Relative_humidity)
##### Mandatory Header keywords[¶](#mandatory-header-keywords)
The standard FITS reference time header keywords should be used (see [Formats](index.html#time-formats)).
An observatory Earth location should be given as well (see [Earth location](index.html#coords-location)).
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU:
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘INDEX’
* `HDUCLAS2` = ‘OBS’
##### Notes[¶](#notes)
* Observation runs where the telescopes don’t point to a fixed RA / DEC position
(e.g. drift scan runs) aren’t supported at the moment by this format.
* Purpose / definition of `BKG_SCALE`:
For a 3D likelihood analysis a good estimate of the background is important.
The run-by-run variation of the background rate is ~20%, mainly due to changing atmospheric conditions. This parameter allows one to specify (from separate studies) a scaling factor to the [Background](index.html#bkg) model. This factor can come e.g. from the analysis of off runs. The background normalisation usually depends on e.g. the number of events in a run, the zenith angle and other parameters.
This parameter makes it possible to give the user a better prediction of the background normalisation. For CTA this might be derived from atmospheric monitoring and additional diagnostic input. For HESS we try to find a trend between the off-run background normalisations and other parameters such as the number of events per unit livetime. The background scale should be around 1.0 if the background model is good. It should also be set to 1.0 if no such dependency analysis has been performed. If the background model normalisation is off by a few orders of magnitude for some reason, this can also be incorporated here.
#### HDU index table[¶](#hdu-index-table)
The HDU index table is stored in a FITS file as a BINTABLE HDU:
* Suggested filename: `hdu-index.fits.gz`
* Suggested HDU name: `HDU_INDEX`
The HDU index table can be used to locate HDUs. For example, for a given `OBS_ID` and (`HDU_TYPE` and / or `HDU_CLASS`), the HDU can be located via the information in the `FILE_DIR`, `FILE_NAME` and `HDU_NAME` columns. The path listed in `FILE_DIR` is by default relative to the location of the index file (see `BASE_DIR` below).
##### BASE_DIR[¶](#base-dir)
By default, file locations should be relative to the location of this HDU index file, i.e. the total file path is `PATH_INDEX_TABLE / FILE_DIR / FILE_NAME`.
Tools are expected to support relative file paths in POSIX notation like `FILE_DIR = "../../data/"` as well as absolute file paths like `FILE_DIR = "/data/cta"`.
To allow for some additional flexibility, an optional header keyword
`BASE_DIR` can be used. If it is given, the file path is `BASE_DIR / FILE_DIR
/ FILE_NAME`, i.e. the location of the HDU index table becomes irrelevant.
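A minimal sketch of this path-resolution rule (file names are illustrative):
```
from pathlib import Path

def resolve_path(index_file, file_dir, file_name, base_dir=None):
    """Build the full path to an HDU following the BASE_DIR rule above.

    If the optional BASE_DIR header keyword is present it takes precedence;
    otherwise paths are relative to the directory containing the index file.
    """
    root = Path(base_dir) if base_dir else Path(index_file).parent
    return root / file_dir / file_name

# Relative FILE_DIR, no BASE_DIR:
print(resolve_path("/data/hess/hdu-index.fits.gz", "../../data/", "events_023523.fits"))
# An absolute FILE_DIR overrides the index-file location:
print(resolve_path("/data/hess/hdu-index.fits.gz", "/data/cta", "events_023523.fits"))
```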
##### Columns[¶](#columns)
| Column Name | Description | Data type | Required? |
| --- | --- | --- | --- |
| `OBS_ID` | Observation ID (a.k.a. run number) | int | yes |
| `HDU_TYPE` | HDU type (see below) | string | yes |
| `HDU_CLASS` | HDU class (see below) | string | yes |
| `FILE_DIR` | Directory of file (rel. to this file) | string | yes |
| `FILE_NAME` | Name of file | string | yes |
| `HDU_NAME` | Name of HDU in file | string | yes |
| `SIZE` | File size (bytes) | int | no |
| `MTIME` | Modification time | double | no |
| `MD5` | Checksum | string | no |
##### Mandatory Header keywords[¶](#mandatory-header-keywords)
As explained in [HDU classes](index.html#hduclass), the following header keyword should be used to declare the type of HDU (this HDU itself, the HDU index table):
* `HDUDOC` = ‘<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats>’
* `HDUVERS` = ‘0.3’
* `HDUCLASS` = ‘GADF’
* `HDUCLAS1` = ‘INDEX’
* `HDUCLAS2` = ‘HDU’
##### HDU_TYPE and HDU_CLASS[¶](#hdu-type-and-hdu-class)
The `HDU_TYPE` and `HDU_CLASS` can be used to select the HDU of interest.
The difference is that `HDU_TYPE` denotes a general kind of HDU (e.g. a PSF), whereas `HDU_CLASS` denotes the specific format of that HDU (e.g. a particular PSF format). Declaring `HDU_CLASS` here means that tools loading these files don't have to guess the format on load.
Valid `HDU_TYPE` values:
* `events` - Event list
* `gti` - Good time interval
* `aeff` - Effective area
* `psf` - Point spread function
* `edisp` - Energy dispersion
* `bkg` - Background
* `rad_max` - Directional cut for point-like IRFs
Valid `HDU_CLASS` values:
* `events` - see format spec: [EVENTS](index.html#iact-events)
* `gti` - see format spec: [GTI](index.html#iact-gti)
* `aeff_2d` - see format spec: [AEFF_2D](index.html#aeff-2d)
* `edisp_2d` - see format spec: [EDISP_2D](index.html#edisp-2d)
* `psf_table` - see format spec: [PSF_TABLE](index.html#psf-table)
* `psf_3gauss` - see format spec: [PSF_3GAUSS](index.html#psf-3gauss)
* `psf_king` - see format spec: [PSF_KING](index.html#psf-king)
* `bkg_2d` - see format spec: [BKG_2D](index.html#bkg-2d)
* `bkg_3d` - see format spec: [BKG_3D](index.html#bkg-3d)
* `rad_max_2d` - see format spec: [RAD_MAX_2D](index.html#rad-max-2d)
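For example, a tool could locate the effective-area HDU of a given observation by filtering the index table on `OBS_ID` and `HDU_CLASS` (a minimal sketch assuming `astropy`; the file name and values are illustrative):
```
from astropy.table import Table

hdu_index = Table.read("hdu-index.fits.gz", hdu="HDU_INDEX")

obs_id, hdu_class = 23523, "aeff_2d"   # illustrative values
mask = (hdu_index["OBS_ID"] == obs_id) & (hdu_index["HDU_CLASS"] == hdu_class)
row = hdu_index[mask][0]               # assume exactly one match per OBS_ID / class

print(row["FILE_DIR"], row["FILE_NAME"], row["HDU_NAME"])
```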
##### Relation to HDUCLAS[¶](#relation-to-hduclas)
At [HDU classes](index.html#hduclass) and throughout this spec, `HDUCLAS` header keys are defined as a declarative HDU classification scheme. This appears similar to the HDU index table described here, but is in fact different and incompatible!
Here in the index table, we have `HDU_CLASS` and `HDU_TYPE`. In
[HDU classes](index.html#hduclass), there is `HDUCLASS` which is always “GADF” and then there is a hierarchical `HDUCLAS1`, `HDUCLAS2`, `HDUCLAS3` and `HDUCLAS4` that corresponds to the information in `HDU_CLASS` and `HDU_TYPE` here. Also the values are different: here we use lower-case, e.g. `HDU_CLASS=”aeff”`, whereas
in [HDU classes](index.html#hduclass) we use upper-case, e.g. `HDUCLAS2=”EFF_AREA”`.
One reason for these inconsistencies is that the spec for this HDU index table was written first (in 2015), and then [HDU classes](index.html#hduclass) was introduced later (in 2017). Another reason is that here, we tried to be simple (flat, not hierarchical classification), and for [HDU classes](index.html#hduclass) we tried to follow an existing standard.
Given that these index tables have been in use in the past years, and that we expect them to be replaced soon by a completely different way to locate and link IACT data HDUs, we decided to keep this format as-is, instead of trying to align it with [HDU classes](index.html#hduclass) and create some form of hierarchical index table.
The observation index table provides metadata about each observation run, e.g. the pointing in the sky, duration, number of events, etc. The HDU index table provides a list of all available HDUs and of the files in which they are located.
Science tools can use these index files to build the filenames of the required files according to user parameters.
Note that the HDU index table would be superfluous if IRFs were always bundled in the same file with `EVENTS` and if the observation index table contained the location of that file. For HESS this wasn't done initially, because the background IRFs were large in size and re-used for many runs. The level of indirection offered by the HDU index table makes it possible to support both IRFs bundled with `EVENTS` (“per-run IRFs” as used in HESS) and a global lookup database of IRFs located separately from `EVENTS` (sometimes called a CALDB), as used for the CTA first data challenge.
### Sky Maps[¶](#sky-maps)
This page describes data format conventions for FITS binned data and model representations pixelized with the [HEALPix algorithm](http://healpix.jpl.nasa.gov/).
#### HEALPix Formats[¶](#healpix-formats)
This section describes a proposal for HEALPix format conventions which is based on formats currently used within the Fermi Science Tools
(STs) and pointlike. This format is intended for representing maps and cubes of both integral and differential quantities including:
* Photon count maps and cubes (e.g. as generated with *gtbin*).
* Exposure cubes (e.g. as generated with *gtexpcube2*).
* Source maps – product of exposure with instrument response in spatial dimension (e.g. as generated with *gtsrcmaps*).
* Model maps and cubes (the Fermi IEM and other diffuse emission components).
The format defines a [SKYMAP HDU](#hpx-skymap-table) for storing a sequence of image slices (*bands*) and a [BANDS HDU](index.html#bands-hdu) to store the geometry and coordinate mapping for each band. A band can represent any selection on non-spatial coordinates such as energy, time, or FoV angle. The most common use-case is a sequence of bands representing energy bins (for counts maps) or energy nodes (for source or model maps).
There are four primary HEALPix map formats which use different table structures for mapping table entries to HEALPix pixel and band:
* [IMPLICIT Format](#hpx-implicit): The row to pixel mapping is determined implicitly from the row number. The row number corresponds to the HEALPix pixel number.
* [EXPLICIT Format](#hpx-explicit): The row to pixel mapping is determined explicitly from the `PIX` column. This can be used to define maps or cubes that only encompass a partial region of the sky.
* [LOCAL Format](#hpx-local): The row to pixel mapping is determined explicitly from the `PIX` column but with a local indexing scheme. This can be used to define maps or cubes that only encompass a partial region of the sky.
* [SPARSE Format](#hpx-sparse): The row to pixel mapping is determined explicitly from the `PIX` column but with a variable number of pixels in each band. This format can be used to represent maps that have a different spatial geometry in each band and also supports band-dependent pixel size.
Note that there are variations of these primary formats which use different conventions for column, HDU, and header keyword names. The
`HPX_CONV` keyword defines a specific mapping between the names used here and in these other formats:
* `FGST_CCUBE`
* `FGST_LTCUBE`
* `FGST_BEXPCUBE`
* `FGST_SRCMAP`
* `FGST_TEMPLATE`
* `FGST_SRCMAP_SPARSE`
* `GALPROP`
* `GALPROP2`
##### Non-Standard HEALPix Conventions[¶](#non-standard-healpix-conventions)
##### Sample Files[¶](#sample-files)
* All-sky Counts Cube (IMPLICIT Format): [`FITS`](_downloads/86db2cf765920927cdbe32b4d8d64523/hpx_ccube_implicit.fits)
* Partial-sky Counts Cube (EXPLICIT Format): [`FITS`](_downloads/5c6132b6b2de3baa5b1cffc8f4c56223/hpx_ccube_explicit.fits)
* Partial-sky Counts Map (EXPLICIT Format): [`FITS`](_downloads/2b1a396df9209bcadbafc873cb88b447/hpx_cmap_explicit.fits)
* Partial-sky Counts Cube (SPARSE Format w/ fixed NSIDE): [`FITS`](_downloads/f46dff67c83cb2381fd07a6fb607c84e/hpx_ccube_sparse0.fits)
* Partial-sky Counts Cube (SPARSE Format w/ variable NSIDE): [`FITS`](_downloads/fbee89036d4de8ce050835d6be1d1479/hpx_ccube_sparse1.fits)
##### SKYMAP HDU[¶](#skymap-hdu)
The SKYMAP table contains the map data and row-to-pixel mapping formatted according to one of the indexing schemes specified by the
`INDXSCHM` header keyword: [IMPLICIT Format](#hpx-implicit), [EXPLICIT Format](#hpx-explicit), [LOCAL Format](#hpx-local),
or [SPARSE Format](#hpx-sparse). By convention, if a file contains a single map it is recommended to name the extension `SKYMAP`. For maps with non-spatial dimensions an accompanying BANDS table must also be defined (see [BANDS HDU](index.html#bands-hdu)).
###### Header Keywords[¶](#header-keywords)
This section lists the keywords that should be written to the SKYMAP BINTABLE header. These keywords define the pixel size and ordering scheme that was used to construct the HEALPix map.
* `PIXTYPE`, type: string
+ Should be set to `HEALPIX`.
* `INDXSCHM`, type: string
+ Indexing scheme. Can be one of `IMPLICIT`, `EXPLICIT`,
`LOCAL`, or `SPARSE`. If this keyword is not provided then the
`IMPLICIT` indexing scheme will be assumed.
* `ORDERING`, type: string
+ HEALPix ordering scheme. Can be `NESTED` or `RING`.
* `COORDSYS`, type: string
+ Map coordinate system. Can be `CEL` (celestial coordinates)
or `GAL` (galactic coordinates).
* `ORDER`, type: int
+ HEALPix order. `ORDER` = log2(`NSIDE`) if `NSIDE` is a
power of 2 and -1 otherwise. If the `BANDS` table is defined
this keyword is superseded by the `NSIDE` column.
* `NSIDE`, type: int
+ Number of HEALPix pixels per side. If the `BANDS` table is defined
this keyword is superseded by the `NSIDE` column.
* `FIRSTPIX`, type: int
+ Index of first pixel in the map.
* `LASTPIX`, type: int
+ Index of last pixel in the map.
* `HPX_REG`, type: string, **optional**
+ Region string for the geometric selection that was used to
construct a partial-sky map. See [HEALPix Region String](#hpx-region-string) for
additional details.
* `HPX_CONV`, type: string, **optional**
+ Convention for HEALPix format. See [Non-Standard HEALPix Conventions](#hpx-conventions) for
additional details.
* `BANDSHDU`, type: string, **optional**
+ Name of HDU containing the BANDS table. If undefined the
extension name should be `EBOUNDS` or `ENERGIES`. See
[BANDS HDU](index.html#bands-hdu) for additional details.
##### HEALPix Region String[¶](#healpix-region-string)
For partial-sky maps that correspond to a particular geometric selection one can optionally specify the selection that was used in constructing the map with the `HPX_REG` header keyword. The following region strings are supported:
* `DISK({LON},{LAT},{RADIUS})`: A circular selection centered on the coordinates (`{LON}`, `{LAT}`) with radius `{RADIUS}` with all quantities given in degrees. A pixel is included in the selection if its center is within `{RADIUS}` deg of coordinates (`{LON}`,
`{LAT}`).
* `DISK_INC({LON},{LAT},{RADIUS})`: A circular selection centered on the coordinates (`{LON}`, `{LAT}`) with radius `{RADIUS}`
with all quantities given in degrees. A pixel is included in the selection if any part of it is encompassed by the circle.
* `HPX_PIXEL({ORDERING},{ORDER},{PIX})`: A selection encompassing all pixels contained in the HEALPix pixel of the given pixelization where ordering is `{ORDERING}` (i.e. `NESTED` or `RING`),
order is `{ORDER}`, and pixel index is `{PIX}`.
##### IMPLICIT Format[¶](#implicit-format)
The IMPLICIT format defines a one-to-one mapping between table row and HEALPix pixel index. Each energy plane is represented by a separate column (`CHANNEL0`, `CHANNEL1`, etc.). This format can only be used for all-sky maps.
###### HEADER[¶](#header)
* `INDXSCHM` : `IMPLICIT`
###### SKYMAP Columns[¶](#skymap-columns)
* `CHANNEL{BAND_IDX}` – ndim: 1, type: float or int
+ Dimension: nrows
+ Amplitude in image plane `{BAND_IDX}`. The HEALPix pixel
index is determined from the table row.
##### EXPLICIT Format[¶](#explicit-format)
The EXPLICIT format uses an additional `PIX` column to explicitly define the pixel number corresponding to each table row. Pixel values for each band are represented by a separate column (`CHANNEL0`,
`CHANNEL1`, etc.). This format can be used for both all-sky and partial-sky maps but requires the same pixel size for all bands.
###### HEADER[¶](#id1)
* `INDXSCHM` : `EXPLICIT`
###### SKYMAP Columns[¶](#id2)
* `PIX` – ndim: 1, unit: None, type: int
+ Dimension: nrows
+ HEALPix pixel index. This index is common to all bands.
* `CHANNEL{BAND_IDX}` – ndim: 1, type: float or int
+ Dimension: nrows
+ Amplitude in HEALPix pixel `PIX` and band
`{BAND_IDX}`.
##### LOCAL Format[¶](#local-format)
The LOCAL format is identical to the [EXPLICIT Format](#hpx-explicit) with the exception that the `PIX` column contains a local rather than global HEALPix index. The local HEALPix index is a mapping of global indices in a partial-sky geometry to 0, …, N-1 where N is the total number of pixels in the geometry. For an all-sky geometry the local index is equal to the global index. This format can be used for both all-sky and partial-sky maps as well as maps with a different pixel-size in each band.
###### HEADER[¶](#id3)
* `INDXSCHM` : `LOCAL`
###### SKYMAP Columns[¶](#id4)
* `PIX` – ndim: 1, unit: None, type: int
+ Dimension: nrows
+ Local HEALPix pixel index. The mapping to global HEALPix index
is derived by finding the index of the ith pixel in the geometry
where pixels are ordered by their global index values.
* `CHANNEL{BAND_IDX}` – ndim: 1, type: float or int
+ Dimension: nrows
+ Amplitude in HEALPix pixel `PIX` and band
`{BAND_IDX}`.
##### SPARSE Format[¶](#sparse-format)
The SPARSE format allows for an arbitrary set of pixels to be defined in each band. The SKYMAP table contains three columns with the pixel index, band index, and amplitude. Pixel values for each band are contiguous and arranged in order of band index. This format supports a different HEALPix pixel size in each band, defined by the
`NSIDE` column in the `BANDS` table.
Pixels that are undefined but contained within the geometric selection are assumed to be zero while pixels outside the geometric selection are undefined. For counts data this allows for maps that only contain pixels with at least one count.
###### HEADER[¶](#id5)
* `INDXSCHM` : `SPARSE`
###### SKYMAP Columns[¶](#id6)
* `PIX` – ndim: 1, unit: None, type: int
+ Dimension: nrows
+ HEALPix pixel index. Pixels are ordered by band number. The
row to band mapping is defined by the `CHANNEL` column. The
column type may be either 32- or 64-bit depending on the maximum
index of the map geometry. A 32-bit column type is sufficient
for maps with NSIDE as large as 8192.
* `CHANNEL` – ndim: 1, unit: None, type: int
+ Dimension: nrows
+ Band index. This column is optional for maps with a single
band. For efficiency it is recommended to represent this column
with a 16-bit integer.
* `VALUE` – ndim: 1, unit: None, type: float or int
+ Dimension: nrows
+ Amplitude in pixel indexed by `PIX` and `CHANNEL`.
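A minimal sketch (assuming `numpy` and `astropy`) of expanding one band of a fixed-NSIDE SPARSE-format SKYMAP into a dense all-sky array, with undefined pixels inside the selection set to zero as described above:
```
import numpy as np
from astropy.io import fits

filename = "hpx_ccube_sparse0.fits"  # sample file above (fixed NSIDE)
band = 0                             # band (CHANNEL) to expand

with fits.open(filename) as hdulist:
    skymap = hdulist["SKYMAP"]
    nside = skymap.header["NSIDE"]
    pix = skymap.data["PIX"]
    channel = skymap.data["CHANNEL"]  # optional for single-band maps
    value = skymap.data["VALUE"]

dense = np.zeros(12 * nside**2)       # npix = 12 * NSIDE^2
sel = channel == band
dense[pix[sel]] = value[sel]
print(f"band {band}: {sel.sum()} filled pixels of {dense.size}")
```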
#### WCS Formats[¶](#wcs-formats)
This page describes data format conventions for images and cubes pixelized with World Coordinate Systems (WCS). WCS is a system for describing the transformation between pixel coordinates and world (sky) coordinates. The conventions described here are extensions to the
[FITS WCS](http://fits.gsfc.nasa.gov/fits_wcs.html) standard which allow for serializing more complex data structures such as sparse maps or maps with non-regular geometry
(e.g. a different pixel size in each image plane).
The format splits the specification of an image into two HDUs: an
[Image HDU](#wcs-image-hdu) and a [BANDS HDU](index.html#bands-hdu). The image HDU contains the image data while the BANDS HDU is an optional extension that stores information about non-spatial dimensions.
There are two supported WCS formats:
* [IMAGE Format](#wcs-image): This is the standard FITS image format whereby data is stored in a FITS ImageHDU. With the exception of handling of non-spatial dimensions images generated with this format follow standard FITS format conventions.
* [SPARSE Format](#wcs-sparse): This is a sparse data format for FITS images which uses a BinTableHDU to store pixel values.
By default multi-dimensional maps are assumed to have the same projection in each image plane with the WCS projection specified in the header keywords.
Both formats support the specification of non-regular multi-dimensional geometries through the inclusion of `CRPIX`,
`CDELT`, and `CRVAL` columns in the [BANDS HDU](index.html#bands-hdu). A non-regular geometry is one in which each image plane may have a different pixelization (e.g. different pixel size or image extent).
The `CRPIX`, `CDELT`, and `CRVAL` columns override the pixelization of the base WCS projection (defined in the IMAGE HDU header) in each image plane.
##### Sample Files[¶](#sample-files)
* Counts Cube: [`FITS`](_downloads/ee64cf774eaff8e22a2e57928bcebcb1/wcs_ccube.fits)
* Counts Cube (Multi-resolution/non-regular): [`FITS`](_downloads/cf22c93f964bfde50306a98e69a04850/wcs_ccube_irregular.fits)
* Counts Cube (SPARSE Format): [`FITS`](_downloads/02df8a94e0ffc8b2180d015f0aad22c7/wcs_ccube_sparse.fits)
##### Image HDU[¶](#image-hdu)
The IMAGE HDU contains the map data and can be formatted according to either the [IMAGE Format](#wcs-image) or [SPARSE Format](#wcs-sparse) scheme.
###### WCS FITS Header Keywords[¶](#wcs-fits-header-keywords)
The keywords listed here are those required by the FITS WCS specification.
* `CRPIX{IDX}`, type: float
+ Pixel coordinate of reference point for axis `{IDX}`.
* `CDELT{IDX}`, type: float
+ Coordinate increment at reference point for axis `{IDX}`.
* `CTYPE{IDX}`, type: string
+ Coordinate system and projection code for axis `{IDX}`.
* `CRVAL{IDX}`, type: float
+ Coordinate value at reference point for axis `{IDX}`.
###### Extra Header Keywords[¶](#extra-header-keywords)
These are extra keywords that are not part of the FITS WCS specification but are required for some features of the GADF format.
* `WCSSHAPE`, type: string, **optional**
+ Comma-separated list with the number of pixels in each dimension
in row-major order, e.g. `(4,5,3)` would correspond to a 4x5
image with three image planes. For non-regular geometries the
image dimension should be the maximum across all image planes.
This keyword is optional for maps in the [IMAGE Format](#wcs-image)
format.
* `BANDSHDU`, type: string, **optional**
+ Name of HDU containing the BANDS table. If undefined the
extension name should be `EBOUNDS` or `ENERGIES`. See
[BANDS HDU](index.html#bands-hdu) for additional details.
##### IMAGE Format[¶](#image-format)
The IMAGE format uses an ImageHDU to store map values. Dimensions of the image are directly inferred from the data member of the HDU. This is the standard format for WCS-based maps.
##### SPARSE Format[¶](#sparse-format)
The SPARSE WCS format uses the same conventions as the [Sparse HEALPix Format](index.html#hpx-sparse). This format uses a BinTableHDU to store map values with one row per pixel.
###### Columns[¶](#columns)
* `PIX` – ndim: 1, unit: None, type: int
+ Dimension: nrows
+ Flattened pixel index in a given image plane (band). Indices
are flattened assuming a column-major ordering for the image.
The row to band mapping is defined by the `CHANNEL` column.
* `CHANNEL` – ndim: 1, unit: None, type: int
+ Dimension: nrows
+ Band index. This column is optional for maps with a single
band. For efficiency it is recommended to represent this column
with a 16-bit integer.
* `VALUE` – ndim: 1, unit: None, type: float or int
+ Dimension: nrows
+ Amplitude in pixel indexed by `PIX` and `CHANNEL`.
This page describes data formats for data structures representing images in celestial coordinates. A sky map may have one or more non-spatial dimensions (e.g. energy or time) such that the data consists of a sequence of image planes. There are two primary pixelization formats which are described in more detail in the following sub-pages:
* [HEALPix Formats](index.html#healpix-skymap)
* [WCS Formats](index.html#wcs-skymap)
Both pixelization formats store the data in a single IMAGE HDU which can be either an ImageHDU or BinTableHDU depending on the image format. A [BANDS HDU](#bands-hdu) is required for maps with non-spatial dimensions.
#### BANDS HDU[¶](#bands-hdu)
For maps with non-spatial dimensions, the BANDS table defines the geometry in each band and the band to coordinate mapping for non-spatial dimensions (e.g. energy). The BANDS table is optional for maps with a single band.
The extension name of the BANDS table associated to an image HDU is given by the `BANDSHDU` header keyword of the image HDU. If
`BANDSHDU` is undefined the BANDS table should be read from the
`EBOUNDS` or `ENERGIES` HDU. The BANDS table extension names
`EBOUNDS` and `ENERGIES` are reserved for maps with a third energy dimension and are supported for backward compatibility with existing file format conventions of the Fermi STs. Although each map will have its own IMAGE HDU, a single BANDS table can be associated to multiple maps (if they share the same geometry).
The BANDS table contains 1 row per band with columns that define the mapping of the band to the non-spatial dimensions of the map. For integral quantities (e.g. counts) this should be the lower and upper edge values of the bin. For differential quantities this should be the coordinates at which the map value was evaluated. Some examples of quantities that can be used to define bands are as follows:
* Energy (Integral): `E_MIN`, `E_MAX`
* Energy (Differential): `ENERGY`
* Event Type: `EVENT_TYPE`
* Time: `TIME_MIN`, `TIME_MAX`
* FoV Angle: `THETA_MIN`, `THETA_MAX`
The mapping of column to non-spatial dimension is defined with the
`AXCOLS{IDX}` header keyword. For maps with multiple non-spatial dimensions the mapping of channel number to band indices follows a column-major ordering. For instance for a map with a first energy dimension with 3 bins indexed by *i* and a second time dimension with 2 bins indexed by *j* the band index mapping to channel number *k*
would be as follows:
```
(i, j) = (0,0) : (k) = (0,)
(i, j) = (1,0) : (k) = (1,)
(i, j) = (2,0) : (k) = (2,)
(i, j) = (0,1) : (k) = (3,)
(i, j) = (1,1) : (k) = (4,)
(i, j) = (2,1) : (k) = (5,)
```
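Equivalently, for `n_i` bins in the first non-spatial dimension, the channel number is `k = i + n_i * j`; a one-line sketch:
```
def channel_index(i, j, n_i):
    """Column-major mapping of band indices (i, j) to channel number k."""
    return i + n_i * j

# Reproduces the table above for n_i = 3, n_j = 2
print([channel_index(i, j, 3) for j in range(2) for i in range(3)])  # [0, 1, 2, 3, 4, 5]
```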
##### Header Keywords[¶](#header-keywords)
* `AXCOLS{IDX}`, type: string, **optional**
+ Comma-separated list of columns in the BANDS table corresponding
to non-spatial dimension with one-based index `{IDX}`. If
there are two elements the columns should be interpreted as the
lower and upper edges of each bin. If there is a single element
the column should be interpreted as the bin center. For
`EBOUNDS` or `ENERGIES` HDUs this keyword is optional.
For time-based axes the additional keywords described in [Formats](index.html#time-formats) are required.
##### Columns[¶](#columns)
* `CHANNEL`, ndim: 1
+ Dimension: nbands
+ Unique index of the band. If this column is not defined then
the band index should be inferred from the row number indexing
from zero. For maps with multiple non-spatial dimensions the
index mapping to channel number follows a column-major ordering.
* `E_MIN`, ndim: 1, unit: keV, **optional**
+ Dimension: nbands
+ Lower energy bound for integral quantities.
* `E_MAX`, ndim: 1, unit: keV, **optional**
+ Dimension: nbands
+ Upper energy bound for integral quantities.
* `ENERGY`, ndim: 1, unit: keV, **optional**
+ Dimension: nbands
+ Energy value for differential quantities.
* `EVENT_TYPE`, ndim: 1, **optional**
+ Dimension: nbands
+ Integer key for a sequence of independent data subselections (e.g. FRONT/BACK-converting LAT events).
##### WCS Columns[¶](#wcs-columns)
This section lists BANDS table columns specific to the [WCS map format](index.html#wcs-skymap). These are optional columns to specify the pixelization for non-regular geometries.
* `NPIX`, ndim: 2, type: int, **optional**
+ Number of pixels in longitude and latitude in each image plane.
* `CRPIX`, ndim: 2, type: float, **optional**
+ Reference pixel coordinate [deg] in longitude and latitude in each image plane.
* `CDELT`, ndim: 2, type: float, **optional**
+ Pixel size [deg] in longitude and latitude in each image plane.
##### HPX Columns[¶](#hpx-columns)
This section lists BANDS table columns specific to the [HEALPix map format](index.html#healpix-skymap).
* `NSIDE` – ndim: 1,
+ Dimension: nbands
+ NSIDE of the HEALPix pixelization in this band. If not defined
then `NSIDE` should be inferred from the FITS header. Only
required for formats that support energy-dependent pixelization
(`SPARSE`).
### Spectra[¶](#spectra)
Here we describe data formats for high-level spectral analysis results.
Science tools are encouraged to use these formats for easy interoperability with other codes (e.g. to check results, combine results in one plot, …).
#### SED[¶](#sed)
The SED format is a flexible specification for representing one-dimensional spectra (distributions of amplitude vs. energy). The SED is structured as a table with one row per energy bin/point and columns for the energy, measured normalization, and normalization errors. The format supports both integral and differential representations of the normalization as described in
[Normalization Representation](#norm-representations).
The list of supported columns is given in the [Columns](#sed-columns) section.
All columns are optional by default, and an SED may contain any combination of the allowed columns. [SED Types](#sed-types) are a specification for defining groups of columns that are required to be present in the file. The [Likelihood SED](index.html#likelihood-sed) format is an example of an SED type that contains both measured flux points and the profile likelihoods versus normalization in each energy bin.
The SED format does not enforce a specific set of units but units should be defined in the column metadata for all quantities with physical dimensions. Recommended units are provided in the
[Columns](#sed-columns) section to indicate the dimensionality of each column. Column UCDs are intended for defining the type of physical quantity associated to each column (e.g. energy, photon flux, etc.).
The convention for including UCDs in the column metadata is still under discussion and the UCDs defined in the current format are optional.
The intended serialization format is a FITS BINTABLE with one row per energy bin. However any serialization format that supports tabular data and column metadata could also be supported (e.g. ECSV or HDF5).
Because the SED occupies a single HDU, multiple SEDs can be written to a single FITS file, with an identifier (e.g. source name or observation epoch) used as the HDU name. Sample FITS and ECSV files are provided in [Sample Files](#sed-sample-files).
##### Normalization Representation[¶](#normalization-representation)
The SED format supports both differential and integral representations of the source normalization. Integral representations correspond to quantities integrated over an energy bin as defined by the `e_min`
and `e_max` columns. Differential representations are quantities evaluated at discrete energies defined by the `e_ref` column. The supported [Normalization Columns](#norm-columns) are:
* `dnde` : Differential photon flux at `e_ref`.
Dimensionality: photons / (time * area * energy)
* `e2dnde` : Defined as `(e_ref ^ 2) * dnde`.
A commonly published and plotted differential flux quantity.
Dimensionality: energy / (time * area)
* `rate` : Photon rate between `e_min` and `e_max`.
Dimensionality: photons / time.
* `flux` : Photon flux (integral of `dnde`) between `e_min` and
`e_max`. Dimensionality: photons / ( time * area )
* `eflux` : Energy flux (integral of E times `dnde`) between
`e_min` and `e_max`. Dimensionality: energy / ( time * area )
* `npred` : Photon counts between `e_min` and `e_max`.
Dimensionality: photons
* `norm` : Normalization in units of the reference model.
Dimensionality: unitless
An SED should contain at least one of the normalization representations listed above. Multiple representations (e.g. `flux`
and `dnde`) may be included in a single SED.
The `dnde` and `e2dnde` representations are equivalent. We define both here, because both are in common use for publications and plots.
Errors and upper limits on the normalization are defined with the
[Error Columns](#error-columns) by appending the appropriate suffix to the normalization column name. For example in the case of photon flux the error and upper limit columns are:
* `flux_err` : Symmetric 1-sigma error.
* `flux_errp` : Asymmetric 1-sigma positive error.
* `flux_errn` : Asymmetric 1-sigma negative error.
* `flux_ul` : Upper limit with confidence level given by `UL_CONF`
header keyword.
A row may have a value and any combination of upper limits and errors:
```
e_ref dnde dnde_err dnde_errp dnde_errn dnde_ul
1000.0 1.0 0.1 0.1 0.1 1.16
3000.0 1.0 0.1 0.1 0.1 1.16
```
A `nan` value should be used for empty or missing data, such as a bin for which there is an upper limit but no measured value (or vice versa), e.g.
```
e_ref dnde dnde_err dnde_ul
1000.0 1.0 0.1 nan
3000.0 nan nan 1.16
```
The `is_ul` column is an optional boolean flag that can be used to designate whether the measurement in a given row should be interpreted as a two-sided confidence interval or an upper limit:
```
e_ref dnde dnde_err dnde_ul is_ul
1000.0 1.0 0.1 nan False
3000.0 nan nan 1.16 True
```
Setting UL values to `nan` is an implicit way of flagging rows that do not contain an upper limit. When parsing an SED one should first check for the existence of the `is_ul` column and otherwise check for `nan` values in the UL column. The `is_ul` column is only required if you want to explicitly flag ULs when both the UL and two-sided interval may be defined in a row.
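A minimal sketch of this parsing rule (`table` is assumed to be any mapping from column name to array, e.g. an `astropy` table or a plain dict):
```
import numpy as np

def row_is_upper_limit(table, row, norm_rep="dnde"):
    """Decide whether a given SED row should be treated as an upper limit.

    Prefer the explicit ``is_ul`` flag if present; otherwise fall back to
    checking for a non-NaN value in the corresponding ``*_ul`` column.
    """
    if "is_ul" in table:
        return bool(table["is_ul"][row])
    ul_col = f"{norm_rep}_ul"
    return ul_col in table and not np.isnan(table[ul_col][row])

# Illustrative usage with a plain dict standing in for the SED table:
sed = {"e_ref": [1000.0, 3000.0], "dnde": [1.0, float("nan")],
       "dnde_err": [0.1, float("nan")], "dnde_ul": [float("nan"), 1.16]}
print([row_is_upper_limit(sed, i) for i in range(2)])  # [False, True]
```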
##### Reference Model[¶](#reference-model)
The *reference model* of an SED is the global parameterization that was used to extract the normalization in each energy bin. The choice of reference model is relevant when considering models that are rapidly changing across a bin or when energy dispersion is large relative to the bin size. The [Reference Model Columns](#refmodel-columns) define the reference model in different representations. When an SED includes a reference model, the normalizations, errors, and upper limits can be given in the `norm` representation which is expressed in units of the reference model. `norm` columns can be converted to another representation by performing an element-wise multiplication of the column with the `ref` column of the desired representation.
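For instance, converting the `norm` representation to photon flux is an element-wise product with `ref_flux` (a minimal sketch; the column contents are illustrative arrays):
```
import numpy as np

# Illustrative arrays; in practice these are columns of the SED table
norm = np.array([1.2, 0.8, 0.0])             # measured normalisation
norm_err = np.array([0.3, 0.2, 0.1])
ref_flux = np.array([2e-12, 1e-12, 5e-13])   # 1 / (cm2 s), reference-model flux per bin

flux = norm * ref_flux                       # measured photon flux per bin
flux_err = norm_err * ref_flux               # errors and limits convert the same way
print(flux, flux_err)
```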
##### Likelihood Profiles[¶](#likelihood-profiles)
The [Likelihood Columns](#like-columns) contain values of the model likelihood and likelihood profiles versus normalization. Likelihood profiles provide additional information about the measurement uncertainty in each bin.
A more detailed discussion of the motivation for SED likelihood profiles can be found in [Likelihood SED](index.html#likelihood-sed).
##### SED Types[¶](#sed-types)
By default all columns in the SED format are optional. To facilitate interoperability of files produced by different packages/tools, the SED format defines an *SED Type* string which is set with the
`SED_TYPE` header keyword. The SED type defines a minimal set of columns that must be present in the SED. The SED types and their required columns are given in the following list:
* `dnde`: `e_ref`, `dnde`
* `e2dnde`: `e_ref`, `e2dnde`
* `flux`: `e_min`, `e_max`, `flux`
* `eflux`: `e_min`, `e_max`, `eflux`
* `likelihood`: See [Likelihood SED](index.html#likelihood-sed).
##### Sample Files[¶](#sample-files)
* Differential Flux Points: [`FITS`](_downloads/9f918f1f7a866b941010a57081e31c8d/diff_flux_points.fits) [`ECSV`](_downloads/9defe6a1fff51048c19436241d0b040f/diff_flux_points.ecsv) [`H5`](_downloads/cc5b5a482c5c172fcd7b288535df86a8/diff_flux_points.h5)
* Integral Flux Points: [`FITS`](_downloads/50b2df15cb6e2ed3103592ef7bc5acbd/flux_points.fits) [`ECSV`](_downloads/6222d4fba54637795d3c69123740c443/flux_points.ecsv) [`H5`](_downloads/41dae92de15db07f852fcd3b6d460d68/flux_points.h5)
* Likelihood SED: [`FITS`](_downloads/b23d383d77052c251bb953e994a40452/binlike.fits) [`H5`](_downloads/3d6552c85761d99f6b03f138b1f9fa00/binlike.h5)
* H.E.S.S. 1ES0229 Spectrum: [`FITS`](_downloads/1ca78e0a87588d7451610db4166ef6b8/1es0229_hess_spectrum.fits) [`ECSV`](_downloads/7c4b89e8d9f17fc6a8236c927918677e/1es0229_hess_spectrum.ecsv)
##### Header Keywords[¶](#header-keywords)
* `UL_CONF`, **optional**
+ Confidence level of the upper limit (range: 0 to 1) of the value in the `{NORM_REP}_ul` column.
* `SED_TYPE`, **optional**
+ SED type string (see [SED Types](#sed-types) for more details).
##### Columns[¶](#columns)
This section lists the column specifications. Unless otherwise specified the data type of all columns should be float32 or float64.
###### Energy Columns[¶](#energy-columns)
* `e_min` – ndim: 1, unit: MeV
+ Dimension: nebins
+ ucd : `em.energy`
+ Lower edge of energy bin. This defines the lower integration
bound for integral representations of the normalization. Can
also define the energy band used to evaluate differential
representations (`dnde` or `e2dnde`).
* `e_max` – ndim: 1, unit: MeV
+ Dimension: nebins
+ ucd : `em.energy`
+ Upper edge of energy bin. This defines the upper integration
bound for integral representations of the normalization. Can
also define the energy band used to evaluate differential
representations (`dnde` or `e2dnde`).
* `e_ref` – ndim: 1, unit: MeV
+ Dimension: nebins
+ ucd : `em.energy`
+ Energy at which differential representations of the normalization
are evaluated (e.g. `dnde`). This can be the geometric center
of the bin or some weighted average of the energy distribution
within the bin.
###### Normalization Columns[¶](#normalization-columns)
* `norm` – ndim: 1, unit: None
+ Dimension: nebins
+ Measured normalization in units of the reference model.
* `dnde` – ndim: 1, unit: 1 / (cm2 s MeV)
+ Dimension: nebins
+ ucd : `phot.flux.density`
+ Measured differential photon flux at `e_ref`.
* `e2dnde` – ndim: 1, unit: MeV / (cm2 s)
+ Dimension: nebins
+ ucd : `phot.flux.density`
+ Measured differential photon flux at `e_ref`, multiplied with `e_ref ^ 2`.
* `flux` – ndim: 1, unit: 1 / (cm2 s)
+ Dimension: nebins
+ ucd : `phot.count`
+ Measured photon flux between `e_min` and `e_max`.
* `eflux` – ndim: 1, unit: MeV / (cm2 s)
+ Dimension: nebins
+ ucd : `phot.flux`
+ Measured energy flux between `e_min` and `e_max`.
* `npred` – ndim: 1
+ Dimension: nebins
+ Measured counts between `e_min` and `e_max`.
###### Error Columns[¶](#error-columns)
The error columns define the error and upper limit for a given representation of the normalization. In the following column specifications `{NORM_REP}` is a placeholder for the name of the normalization column (e.g. `{NORM_REP}_err` becomes `flux_err` for the `flux` representation).
* `{NORM_REP}_err` – ndim: 1
+ Dimension: nebins
+ Symmetric error on the normalization in the representation
`{NORM_REP}`.
* `{NORM_REP}_errp` – ndim: 1
+ Dimension: nebins
+ Positive error on the normalization in the representation
`{NORM_REP}`.
* `{NORM_REP}_errn` – ndim: 1
+ Dimension: nebins
+ Negative error on the normalization in the representation
`{NORM_REP}`. A negative or NaN value indicates that the
negative error is undefined.
* `{NORM_REP}_ul` – ndim: 1
+ Dimension: nebins
+ Upper limit on the normalization in the representation
`{NORM_REP}`. The upper limit confidence level is specified
with the `UL_CONF` header keyword.
* `is_ul` – ndim: 1, type: bool
+ Dimension: nebins
+ Boolean flag indicating whether a row should be interpreted as
an upper limit. If `True` then one should represent the
measurement with the one-sided confidence interval defined by
`{NORM_REP}_ul`. If `False` then the measurement should be
represented by the two-sided intervals defined by
`{NORM_REP}_err` or `{NORM_REP}_errp` and
`{NORM_REP}_errn`.
###### Reference Model Columns[¶](#reference-model-columns)
* `ref_dnde` – ndim: 1, unit: 1 / (MeV cm2 s)
+ Dimension: nebins
+ Differential flux of reference model at `e_ref`.
* `ref_eflux` – ndim: 1, unit: MeV / (cm2 s)
+ Dimension: nebins
+ Energy flux (integral of E times `dnde`) of reference model
from `e_min` to `e_max`.
* `ref_flux` – ndim: 1, unit: 1 / (cm2 s)
+ Dimension: nebins
+ Flux (integral of `dnde`) of reference model from `e_min` to `e_max`.
* `ref_dnde_e_min` – ndim: 1, unit: 1 / (MeV cm2 s)
+ Dimension: nebins
+ Differential flux of reference model at `e_min`.
* `ref_dnde_e_max` – ndim: 1, unit: 1 / (MeV cm2 s)
+ Dimension: nebins
+ Differential flux of reference model at `e_max`.
* `ref_npred` – ndim: 1, unit: counts
+ Dimension: nebins
+ Number of photon counts of reference model.
###### Likelihood Columns[¶](#likelihood-columns)
* `ts` – ndim: 1, unit: none
+ Dimension: nebins
+ Source test statistic in each energy bin defined as twice the
difference between best-fit and null log-likelihood values. In the
asymptotic limit this is the square of the significance.
* `loglike` – ndim: 1, unit: none
+ Dimension: nebins
+ Log-Likelihood value of the best-fit model.
* `loglike_null` – ndim: 1, unit: none
+ Dimension: nebins
+ Log-Likelihood value of the zero normalization model.
* `{NORM_REP}_scan` – ndim: 2, unit: None
+ Dimension: nebins x nnorms
+ Likelihood scan points in each energy bin in the representation
`{NORM_REP}`.
* `dloglike_scan` – ndim: 2, unit: none
+ Dimension: nebins x nnorms
+ Scan over delta LogLikelihood value vs. normalization in each
energy bin. The Delta-Loglikelihood is evaluated with respect
to the zero normalization model (`loglike_null`).
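As an illustration of how the likelihood columns can be used (a sketch, not a prescribed algorithm): the delta log-likelihood of an arbitrary spectral model can be approximated by interpolating `dloglike_scan` at the model-predicted normalisation in each energy bin and summing over bins.
```
import numpy as np

def model_delta_loglike(norm_scan, dloglike_scan, norm_pred):
    """Sum the interpolated delta log-likelihood over energy bins.

    norm_scan     : (nebins, nnorms) scan points in the chosen representation
    dloglike_scan : (nebins, nnorms) delta log-likelihood at the scan points
    norm_pred     : (nebins,) normalisations predicted by the model under test

    Assumes the scan points in each bin are in ascending order.
    """
    total = 0.0
    for scan, dll, pred in zip(norm_scan, dloglike_scan, norm_pred):
        total += np.interp(pred, scan, dll)
    return total
```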
#### Bin-by-bin Likelihood Profiles[¶](#bin-by-bin-likelihood-profiles)
This page describes formats for bin-by-bin likelihood profiles as currently used in some LAT analyses. The bin-by-bin likelihood extends the concept of an SED by providing a representation of the likelihood function in each energy bin. [Likelihood SED](#likelihood-sed) and
[Likelihood SED Cube](#likelihood-sed-cube) are two formats for serializing bin-by-bin likelihoods to a FITS file. A Likelihood SED stores the bin-by-bin likelihood for a single source or test source position while a Likelihood SED Cube stores a sequence of bin-by-bin likelihoods
(e.g. for a grid of positions or a group of sources).
In the following we describe some advantages and limitations of using bin-by-bin likelihoods. Relative to a traditional SED, the bin-by-bin likelihood retains more information about the shape of the likelihood function around the maximum. This can be important when working in the low statistics regime, where the likelihoods are non-Gaussian and a flux value with a one-sigma uncertainty is insufficient to describe the shape of the likelihood function. Applications in which bin-by-bin likelihoods may be useful include:
* Deriving upper limits on the global spectral distribution of a source. Likelihood SEDs can be used to construct the likelihood function for arbitrary spectral models without recomputing the experimental likelihood function. This is particularly useful for DM searches in which one tests a large number of spectral models
(e.g. for mass and annihilation channel) and recomputing the experimental likelihood function for all models would be very expensive. The bin-by-bin likelihoods are also a convenient way of distributing analysis results in a format that allows other spectral models to be easily tested. Two of the most recent LAT publications on dSph DM searches have publicly released the analysis results in this format (see [2015PhRvL.115w1301A](http://adsabs.harvard.edu/abs/2015PhRvL.115w1301A) and
[2014PhRvD..89d2001A](http://adsabs.harvard.edu/abs/2014PhRvD..89d2001A)).
* Stacking analyses that combine measurements from multiple sources or multiple epochs of observation of a single source. Forming a joint likelihood from the product of Likelihood SEDs fully preserves the information in each data set and is equivalent to doing a joint fit as long as the data sets are independent (a minimal sketch of this summation is given after this list).
* Analyses combining spectral measurements from multiple experiments.
Likelihoods from two or more experiments can be multiplied to derive a joint likelihood function incorporating the measurements of each experiment. As for stacking analyses, the joint likelihood approach avoids merging or averaging data or IRFs. The bin-by-bin likelihoods further allow joint analyses to be performed without having access to the data sets or tools that produced the original measurement. For an application of this approach in the context of DM searches see [2016JCAP…02..039M](http://adsabs.harvard.edu/abs/2016JCAP...02..039M).
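As noted in the stacking item above, combining independent measurements amounts to summing their delta log-likelihood profiles. A minimal sketch, assuming numpy/scipy; `norm_scans` and `dloglike_scans` are per-target 1D arrays taken from the `norm_scan` and `dloglike_scan` columns of independent Likelihood SEDs, already converted to a common physical normalization:

```
import numpy as np
from scipy.interpolate import interp1d

def joint_dloglike(norm_scans, dloglike_scans, common_grid):
    """Sum per-target delta log-likelihood profiles on a shared normalization grid."""
    total = np.zeros_like(common_grid)
    for norm, dll in zip(norm_scans, dloglike_scans):
        # interpolate each profile onto the common grid and add it to the joint profile
        total += interp1d(norm, dll, bounds_error=False,
                          fill_value="extrapolate")(common_grid)
    return total
```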
There are a few important caveats to bin-by-bin likelihoods which may limit their use for certain applications:
* Large correlations between the normalizations of two or more model components (e.g. when the spatial models are partially degenerate)
can limit the utility of this approach. Although such correlations can be accounted for by profiling the corresponding nuisance parameters, this may result in unphysical background models with large bin-to-bin fluctuations in the model amplitude. One technique to avoid this issue (see [2015PhRvD..91j2001B](http://adsabs.harvard.edu/abs/2015PhRvD..91j2001B) and
[2016PhRvD..93f2004C](http://adsabs.harvard.edu/abs/2016PhRvD..93f2004C)) is to apply a Gaussian prior that constrains the spectral distribution of the background components to lie within a certain range of the global spectral model of that source
(computed without the test source).
* Because the likelihoods in each energy bin are calculated independently, this technique cannot fully account for bin-to-bin correlations caused by energy dispersion. The effect of energy dispersion can be corrected to first order by scanning the likelihood with a spectral model (e.g. a power-law with index 2)
that is close in shape to the spectral models of interest. However, in analyses where the energy response matrix is particularly broad or non-diagonal, the systematic errors arising from the approximate treatment of energy dispersion may exceed the statistical errors.
In LAT analyses energy dispersion can become a significant effect when using data below 100 MeV (see [LAT_edisp_usage](http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Pass8_edisp_usage.html)). However, when using Index=2.0 and considering energies above 100 MeV, the spectral bias is less than 3% for models with indices between 1 and 3.5.
##### Likelihood SED[¶](#likelihood-sed)
The likelihood SED is a representation of the spectral energy distribution of a source that contains a likelihood for the source normalization in each energy bin. This format is a special case of the more general
[SED](index.html#flux-points) format. Depending on the requirements of the analysis the likelihoods can be evaluated with either profiled or fixed nuisance parameters. The likelihood SED can be used in the same way as a traditional SED but contains additional information about the shape of the likelihood function around the maximum. A 2D visualization of the likelihood functions can be produced by creating a colormap with intensity mapped to the likelihood value:
(Figure: 2D visualizations of the bin-by-bin likelihood functions, with intensity mapped to the likelihood value, for a low significance source and a high significance source.)
In the following we use *nebins* to designate the number of energy bins and *nnorms* to designate the number of points in the normalization scan. The format is a BINTABLE with one row per energy bin containing the columns listed below.
The best-fit model amplitudes, errors, and upper limits are all normalized to a reference spectral model. The `ref` columns define the amplitude of the reference model in different units. The reference model amplitudes are arbitrary and could for instance be set to the best-fit amplitude in each energy bin. The `norm` columns contain the best-fit value, its errors, and its upper limit in units of the reference model amplitude. Unit conversion of the `norm`
columns can be performed by doing a row-wise multiplication with the respective `ref` column.
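A minimal sketch of this conversion, assuming astropy and a hypothetical file name `likelihood_sed.fits`; the row-wise multiplication below is exactly the unit conversion described above:

```
from astropy.table import Table

sed = Table.read("likelihood_sed.fits", hdu=1)
dnde = sed["norm"] * sed["ref_dnde"]          # differential flux at e_ref
dnde_err = sed["norm_err"] * sed["ref_dnde"]  # symmetric error in the same units
flux_ul = sed["norm_ul"] * sed["ref_flux"]    # integral-flux upper limit (optional column)
```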
###### Sample Files[¶](#sample-files)
See the `likelihood` files in [Sample Files](index.html#sed-sample-files).
###### Header Keywords[¶](#header-keywords)
* `SED_TYPE`
+ SED type string. Should be set to `likelihood`.
* `UL_CONF`, **optional**
+ Confidence level (range: 0 to 1) of the upper limit given in the `norm_ul` column.
###### Columns[¶](#columns)
The columns listed here are a subset of the columns defined in the
[SED](index.html#flux-points) format. See [Columns](index.html#sed-columns) for the full column specifications.
####### Required Columns[¶](#required-columns)
* `e_min` – ndim: 1, Dimension: nebins
* `e_max` – ndim: 1, Dimension: nebins
* `e_ref` – ndim: 1, Dimension: nebins
* `ref_dnde` – ndim: 1, Dimension: nebins
* `ref_eflux` – ndim: 1, Dimension: nebins
* `ref_flux` – ndim: 1, Dimension: nebins
* `ref_npred` – ndim: 1, Dimension: nebins
* `norm` – ndim: 1, Dimension: nebins
* `norm_err` – ndim: 1, Dimension: nebins
* `norm_scan` – ndim: 2, Dimension: nebins x nnorms
* `ts` – ndim: 1, Dimension: nebins
* `loglike` – ndim: 1, Dimension: nebins
* `dloglike_scan` – ndim: 2, Dimension: nebins x nnorms
####### Optional Columns[¶](#optional-columns)
* `ref_dnde_e_min` – ndim: 1, Dimension: nebins
* `ref_dnde_e_max` – ndim: 1, Dimension: nebins
* `norm_errp` – ndim: 1, Dimension: nebins
* `norm_errn` – ndim: 1, Dimension: nebins
* `norm_ul` – ndim: 1, Dimension: nebins
##### Likelihood SED Cube[¶](#likelihood-sed-cube)
The Likelihood SED Cube is a format for storing a sequence of Likelihood SEDs in a single table. The format defines a file with two BINTABLE HDUs: [SCANDATA](#scandata) and [EBOUNDS](#ebounds).
SCANDATA has one row per Likelihood SED while EBOUNDS has one row per energy bin. Table rows in SCANDATA can be mapped to a list of sources, spatial pixels, or observation epochs. Because the row mapping is not defined by the format itself, additional columns can be added to SCANDATA that define the mapping of each row. Examples would be columns for source name, pixel coordinate, or observation epoch.
In the following we use *nrows* to designate table rows, *nebins* to designate the number of energy bins and *nnorms* to designate the number of points in the normalization scan. As for the Likelihood SED format, columns that contain `norm` are expressed in units of the reference model amplitude. These can be multiplied by `ref_eflux`,
`ref_flux`, `ref_dnde`, or `ref_npred` columns in the EBOUNDS HDU to get the normalization in the respective units.
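A minimal sketch of this multiplication, assuming astropy/numpy and a hypothetical file name `tscube.fits`; the SCANDATA `norm` column has shape (nrows, nebins) and the EBOUNDS reference columns (upper-case names, see the EBOUNDS Table below) have shape (nebins,), so a broadcasted product gives fluxes per row and energy bin:

```
import numpy as np
from astropy.io import fits

with fits.open("tscube.fits") as hdul:
    scandata = hdul["SCANDATA"].data
    ebounds = hdul["EBOUNDS"].data

flux = scandata["norm"] * ebounds["REF_FLUX"][np.newaxis, :]        # shape (nrows, nebins)
flux_err = scandata["norm_err"] * ebounds["REF_FLUX"][np.newaxis, :]
```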
Sample FITS files:
* Low Significance Source: [`tscube_lowts.fits`](_downloads/c89ac6eaf520858aa7e94cff01ae2cc7/tscube_lowts.fits)
* High Significance Source: [`tscube_hights.fits`](_downloads/af198b955e28e6c216ac32f05c777299/tscube_hights.fits)
###### SCANDATA Table[¶](#scandata-table)
The SCANDATA HDU is a BINTABLE with the following columns. The columns listed here are a subset of the columns in the
[SED](index.html#flux-points) format. Relative to the 1D SED formats the dimensionality of all columns is increased by one with the first dimension (rows) mapping to spatial pixels. See [Columns](index.html#sed-columns)
for the full column specifications.
####### Header Keywords[¶](#id3)
* `UL_CONF`, **optional**
+ Confidence level of the upper limit given in the `norm_ul` column.
####### Required Columns[¶](#id4)
* `dloglike_scan` – ndim: 3, Dimension: nrows x nebins x nnorms
* `norm_scan` – ndim: 3, Dimension: nrows x nebins x nnorms
* `norm` – ndim: 2, Dimension: nrows x nebins
* `norm_err` – ndim: 2, Dimension: nrows x nebins
* `ts` – ndim: 2, Dimension: nrows x nebins
* `loglike` – ndim: 2, Dimension: nrows x nebins
####### Optional Columns[¶](#id5)
* `ref_npred` – ndim: 2, Dimension: nrows x nebins
* `norm_errp` – ndim: 2, Dimension: nrows x nebins
* `norm_errn` – ndim: 2, Dimension: nrows x nebins
* `norm_ul` – ndim: 2, Dimension: nrows x nebins
* `bin_status` – ndim: 2, unit: None
+ Dimension: nrows x nebins
+ Fit status code. 0 = OK, >0 = Not OK
###### EBOUNDS Table[¶](#ebounds-table)
The EBOUNDS HDU is a BINTABLE with one row per energy bin and the following columns. The columns listed here are a subset of the columns in the [SED](index.html#flux-points) format. See [Columns](index.html#sed-columns) for the full column specifications. Note that, for backwards compatibility with the existing EBOUNDS table convention (e.g. as used for WCS counts cubes), column names are upper case.
####### Required Columns[¶](#id6)
* `E_MIN`, unit: keV, Dimension: nebins
* `E_REF`, unit: keV, Dimension: nebins
* `E_MAX`, unit: keV, Dimension: nebins
* `REF_DNDE` – ndim: 1, Dimension: nebins
* `REF_EFLUX` – ndim: 1, Dimension: nebins
* `REF_FLUX` – ndim: 1, Dimension: nebins
####### Optional Columns[¶](#id7)
* `REF_DNDE_E_MIN` – ndim: 1, Dimension: nebins
* `REF_DNDE_E_MAX` – ndim: 1, Dimension: nebins
* `REF_NPRED` – ndim: 1, Dimension: nebins
##### TSCube Output Format[¶](#tscube-output-format)
Recent releases of the Fermi ScienceTools provide a *gttscube*
application that fits a test source on a grid of spatial positions within the ROI. At each test source position this tool calculates the following information:
* TS and best-fit amplitude of the test source.
* A likelihood SED.
The output of the tool is a FITS file containing a Likelihood SED Cube with *nrows* rows, in which each table row maps to a pixel in the grid scan.
The PRIMARY HDU contains the same output as *gttsmap* – a 2-dimensional FITS IMAGE with the test source TS evaluated at each position. The primary fit results are contained in the following BINTABLE HDUs:
* A [SCANDATA Table](#scandata) containing the likelihood SEDs for each spatial pixel.
* A [FITDATA Table](#fitdata) containing fit results for the reference model at each spatial pixel over the full energy range.
* An [EBOUNDS Table](#ebounds) containing the bin definitions and the amplitude of the reference model.
The mapping of rows to pixels is defined by the WCS header keywords in the SCANDATA HDU. Following the usual FITS convention, both tables use column-wise ordering for mapping rows to pixel indices (a minimal sketch is given after the HDU list below).
Here is the list of HDUs:
TS Cube HDUs[¶](#id8)
| HDU | HDU Type | HDU Name | Description |
| --- | --- | --- | --- |
| 0 | IMAGE | PRIMARY | TS map of the region using the test source |
| 1 | BINTABLE | SCANDATA | Table with the data from the likelihood vs. normalization scans. Follows the format specification given in [SCANDATA Table](#scandata). |
| 2 | BINTABLE | FITDATA | Table with the data from the reference model fits. |
| 3 | BINTABLE | BASELINE | Parameters and covariances of the baseline fit. |
| 4 | BINTABLE | EBOUNDS | Energy bin edges, fluxes and NPREDs for test source in each energy bin. Follows format specification given in [EBOUNDS Table](#ebounds). |
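A minimal sketch of reading such a file, assuming astropy and one of the sample files above; the row-to-pixel mapping assumes column-wise (Fortran) ordering, i.e. the x index varies fastest, as stated before the HDU list, and the chosen row index is illustrative only:

```
from astropy.io import fits

with fits.open("tscube_lowts.fits") as hdul:
    hdul.info()                            # PRIMARY, SCANDATA, FITDATA, BASELINE, EBOUNDS
    nx = hdul["PRIMARY"].header["NAXIS1"]  # number of pixels along x
    row = 42                               # example SCANDATA row index
    ix, iy = row % nx, row // nx           # column-wise ordering: x varies fastest
    ts_map_value = hdul["PRIMARY"].data[iy, ix]    # TS map value at that pixel
    ts_per_bin = hdul["SCANDATA"].data["ts"][row]  # per-bin TS for the same pixel
```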
###### FITDATA Table[¶](#fitdata-table)
The FITDATA HDU is a BINTABLE with 1 row per spatial pixel (*nrows*)
and the following columns:
* `fit_norm` – ndim: 1, unit: None
+ Dimension: nrows
+ Best-fit normalization for the global model in units of the reference model amplitude.
* `fit_norm_err` – ndim: 1, unit: None
+ Dimension: nrows
+ Symmetric error on the global model normalization in units of the reference model amplitude.
* `fit_norm_errp` – ndim: 1, unit: None
+ Dimension: nrows
+ Positive error on the global model normalization in units of the reference model amplitude.
* `fit_norm_errn` – ndim: 1, unit: None
+ Dimension: nrows
+ Negative error on the global model normalization in units of the reference model amplitude.
* `fit_norm_ul` – ndim: 1, unit: None
+ Dimension: nrows
+ Upper limit on the global model normalization in units of the reference model amplitude.
* `fit_ts` – ndim: 1, unit: None
+ Dimension: nrows
+ Test statistic of the best-fit global model.
* `fit_status` – ndim: 1, unit: None
+ Dimension: nrows
+ Status code for the fit. 0 = OK, >0 = Not OK
#### 1D counts spectra[¶](#d-counts-spectra)
This section describes a data format for 1D counts spectra and associated reduced responses (ARF and RMF) that has been used for decades in X-ray astronomy,
and is also used in VHE gamma-ray astronomy.
The [EVENTS](index.html#iact-events) and 2D [IRFs](index.html#iact-irf) can be transformed into a 1D counts vector and 1D IRFs that can serve as input to general X-ray spectral analysis packages such as [Sherpa](http://cxc.harvard.edu/sherpa/). For an introduction to this so-called OGIP data format please refer to the official [Documentation](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html)
on [HEASARC](index.html#glossary-heasarc).
The following sections only highlight the differences and modifications made to the OGIP standard in order to meet the requirements of gamma-ray astronomy.
##### PHA[¶](#pha)
The OGIP standard PHA file format is described [here](https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/node5.html).
TODO: Should an EBOUNDS extension be added to the PHA file (channels -> energy)?
In OGIP this info has to be extracted from the RMF file.
The values of the following header keywords need some attention when using them for
[IACT](index.html#glossary-iact) analysis.
* `BACKSCAL`
+ For now it is assumed that the exposure ratio between signal and background counts does not depend on energy, thus this keyword must be set
+ The `BACKSCAL` keywords in the PHA and the BKG file must be set so that
\[\alpha = \frac{\mathrm{PHA}_{\mathrm{backscal}}}{\mathrm{BKG}_{\mathrm{backscal}}}\]
+ It is recommended to set the `BACKSCAL` keyword to 1 in the PHA file and to \(1/\alpha\) in the BKG file (a minimal sketch of the \(\alpha\) computation is given below)
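A minimal sketch, assuming astropy and hypothetical file names `pha_on.fits` and `pha_off.fits`; it simply evaluates the ratio of the two `BACKSCAL` keywords:

```
from astropy.io import fits

backscal_pha = fits.getval("pha_on.fits", "BACKSCAL", ext=1)
backscal_bkg = fits.getval("pha_off.fits", "BACKSCAL", ext=1)
alpha = backscal_pha / backscal_bkg  # equals 1 / BACKSCAL(BKG) if BACKSCAL(PHA) = 1
```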
Additional header keywords that can be stored in the PHA header for
[IACT](index.html#glossary-iact) analysis are listed below.
* `OFFSET` type: float, unit: deg
+ Distance between observation position and target of a spectral analysis
* `MUONEFF` type: float
+ Muon efficiency of the observation
* `ZENITH` type: tbd, unit: deg
+ Zenith angle of the observation
* `ON-RAD` type: float, unit: deg
+ Radius of the spectral extraction region
+ Defines the spectral extraction region together with the standard keywords `RA-OBJ` and `DEC-OBJ` (see the sketch after this list)
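A minimal sketch of selecting events in the ON region, assuming astropy; the keyword values (`RA-OBJ`, `DEC-OBJ`, `ON-RAD`) and the event coordinates below are illustrative only:

```
import astropy.units as u
from astropy.coordinates import SkyCoord

target = SkyCoord(ra=83.633 * u.deg, dec=22.014 * u.deg)  # RA-OBJ, DEC-OBJ
on_rad = 0.1 * u.deg                                       # ON-RAD
events = SkyCoord(ra=[83.60, 84.20] * u.deg, dec=[22.00, 21.50] * u.deg)
in_on_region = target.separation(events) <= on_rad         # boolean mask of ON events
```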
##### BKG[¶](#bkg)
The values of the following header keywords need some attention when using them for
[IACT](index.html#glossary-iact) analysis.
* `BACKSCAL`
+ It is recommended to set the `BACKSCAL` keyword to \(1/\alpha\) in the BKG file (see above)
##### ARF[¶](#arf)
The OGIP standard ARF file format is described [here](http://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_002/cal_gen_92_002.html#tth_sEc4).
Additional header keywords that can be stored in the ARF header for
[IACT](index.html#glossary-iact) analysis are listed below.
* `LO_THRES` type: tbd, unit: TeV
+ Low energy threshold of the analysis
* `HI_THRES` type: tbd, unit: TeV
+ High energy threshold of the analysis
##### RMF[¶](#rmf)
The OGIP standard RMF file format is described [here](http://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_002/cal_gen_92_002.html#tth_sEc3.1).
How an RMF file can be extracted from an IACT 2D energy dispersion file is explained in [Energy Dispersion](index.html#iact-edisp).
### Light curves[¶](#light-curves)
For light curves, we recommend storing the information in a format as similar as possible to the one used for [Spectra](index.html#spectra) and specifically [SED](index.html#flux-points).
For measurements at a given time, a `TIME` column should be added;
for measurements over a given time interval, `TIME_MIN` and `TIME_MAX`
columns should be added.
As explained in [Time](index.html#time), times should be given as 64-bit floats,
using the FITS time standard to specify a reference time point.
Note that this allows some flexibility, and e.g. the commonly used
“MJD values” for light curves are supported via these header keys:
```
MJDREFI = 0
MJDREFF = 0
TIMEUNIT= 'd'
```
Light curve producers are highly encouraged to always give the `TIMESYS` header keyword.
For archival data this is often not given, and it is unclear whether the times are e.g. in `UTC`, `TT`, or some other `TIMESYS`.
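A minimal sketch of converting such a `TIME` column to absolute times, assuming astropy and a hypothetical light-curve file `lc.fits` whose header carries `MJDREFI`/`MJDREFF`, `TIMEUNIT='d'` and `TIMESYS` (e.g. `TT`):

```
from astropy.io import fits
from astropy.table import Table
from astropy.time import Time

lc = Table.read("lc.fits", hdu=1)
hdr = fits.getheader("lc.fits", ext=1)
mjdref = hdr["MJDREFI"] + hdr["MJDREFF"]
# TIME is assumed to be in days relative to the reference time (TIMEUNIT='d')
times = Time(mjdref + lc["TIME"], format="mjd", scale=hdr["TIMESYS"].lower())
```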
This is a very preliminary description, based on the discussion here:
<https://github.com/open-gamma-ray-astro/gamma-astro-data-formats/pull/61>
If someone has time to provide a more detailed description, and to produce example files in FITS or possibly also ECSV format, this would be highly welcome.