content: string (lengths 0–557k)
url: string (lengths 16–1.78k)
timestamp: timestamp[ms]
dump: string (lengths 9–15)
segment: string (lengths 13–17)
image_urls: string (lengths 2–55.5k)
netloc: string (lengths 7–77)
Difference between revisions of "Barion Pixel", revision as of 14:51, 22 March. Consent calls: bat('consent', 'grantConsent'); bat('consent', 'revokeConsent'); bat('consent', 'rejectConsent'). Table 6: Properties of events related to content browsing. Table 10: Properties of signUp event
https://docs.barion.com/index.php?title=Barion_Pixel&diff=next&oldid=1637
2019-12-05T18:16:07
CC-MAIN-2019-51
1575540481281.1
[]
docs.barion.com
Diff format¶ This section provides details on how nbdime represents diffs, and will mostly be relevant for those who want to use nbdime as a library, or who want to contribute to nbdime. Figure: nbdime’s content-aware diff Basics¶ A diff object represents the difference B-A between two objects, A and B, as a list of operations (ops) to apply to A to obtain B. Each operation is represented as a dict with at least two items: { "op": <opname>, "key": <key> } The objects A and B are either mappings (dicts) or sequences (lists or strings). A different set of ops is legal for mappings and sequences. Depending on the op, the operation dict usually contains an additional argument, as documented below. The diff objects in nbdime are: - json-compatible nested structures of dicts (with string keys) and - lists of values with heterogeneous datatypes (strings, ints, floats). The difference between these input objects is represented by a json-compatible results object. A JSON schema for validating diff entries is available in diff_format.schema.json. Diff format for mappings¶ For mappings, the key is always a string. Valid operations (ops) are: - remove - delete existing value at key: { "op": "remove", "key": <string> } - add - insert new value at key not previously existing: { "op": "add", "key": <string>, "value": <value> } - replace - replace existing value at key with new value: { "op": "replace", "key": <string>, "value": <value> } - patch - patch existing value at key with another diff object: { "op": "patch", "key": <string>, "diff": <diffobject> } Diff format for sequences¶ For sequences (list and string) the key is always an integer index. This index is relative to object A of length N. Valid operations (ops) are: - removerange - delete the values A[key:key+length]: { "op": "removerange", "key": <int>, "length": <n> } - addrange - insert new items from valuelist before A[key], at end if key=len(A): { "op": "addrange", "key": <int>, "valuelist": <values> } - patch - patch existing value at key with another diff object: { "op": "patch", "key": <int>, "diff": <diffobject> } Relation to JSONPatch¶ The diff representation format described above has similarities with the JSONPatch standard but also differs in a few ways: - operations - JSONPatch contains operations move, copy, test not used by nbdime. - nbdime contains operations addrange, removerange, and patch not in JSONPatch. - patch - JSONPatch uses a deep JSON pointer based path item in each operation instead of providing a recursive patch op. - nbdime uses a key item in its patch op. - diff object - JSONPatch can represent the diff object as a single list. - nbdime uses a tree of lists. To convert an nbdime diff object to the JSONPatch format, use the to_json_patch function: from nbdime.diff_format import to_json_patch jp = to_json_patch(diff_obj) Note This function to_json_patch is currently a draft, subject to change, and not yet covered by tests. Examples¶ For examples of diffs using nbdime, see test_patch.py.
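Not part of the original page: a minimal Python sketch that applies a hand-written mapping diff in the format described above to a plain dict. It does not call nbdime itself; the sample objects and the helper function are illustrative assumptions.

# Illustration only: apply an nbdime-style mapping diff (a list of ops) to a dict.
def patch_mapping(a, diff):
    b = dict(a)
    for op in diff:
        key = op["key"]
        if op["op"] == "remove":
            del b[key]
        elif op["op"] in ("add", "replace"):
            b[key] = op["value"]
        elif op["op"] == "patch":
            # Nested diff object: recurse into the existing value.
            b[key] = patch_mapping(b[key], op["diff"])
    return b

a = {"cells": {"n": 1}, "metadata": {"kernel": "py"}}
diff = [
    {"op": "replace", "key": "metadata", "value": {"kernel": "py3"}},
    {"op": "patch", "key": "cells", "diff": [{"op": "add", "key": "m", "value": 2}]},
]
print(patch_mapping(a, diff))  # {'cells': {'n': 1, 'm': 2}, 'metadata': {'kernel': 'py3'}}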
https://nbdime.readthedocs.io/en/stable/diffing.html
2019-08-17T10:38:36
CC-MAIN-2019-35
1566027312128.3
[array(['_images/nbdiff-web.png', "example of nbdime's content-aware diff"], dtype=object) ]
nbdime.readthedocs.io
Add a Reservation Form to a Page or Blog Post Novice Novice tutorials require no prior knowledge of any specific web programming language. Information In this video we have used Beat Heaven as an example, but the process of creating and adding a reservation form is the same for all our WordPress themes built on our Fuse Framework. Creating a reservation form Create a new reservation form by following the instructions below: - Log in to your WordPress Dashboard (Ex:) - Go to the Reservations panel - Select the All Reservations Forms sub-panel - On the next page click the Add New button - In the Add/Edit Forms section enter the Form name - In the Date Pickers box select the type of date pickers you need: Check in only (usually used for appointment forms) or Check in & Check out (used for hotel check-ins) - Add more fields to your form and mark them as Required if you want them to be mandatory fields when your users fill in the form. You can also choose the Type, Label and modify the Width of your form fields. - Click the Save Form button in order to save your changes - Copy the reservation form shortcode - And then, paste this shortcode in any page or blog post More options can be found in the Message Settings panel, from where you can modify the default texts of your reservation form: - Form header text - Submit button text - Reset button text, and others Click the Save Form button - In the Date settings panel you can exclude dates on which users cannot make reservations - Click on the tiny calendar icon on the right - Indicate the dates you want to be excluded and click the Ok button in the small pop-up - If you have finished, click the Save Form button - Finally, you can visit your page to see the result.
http://docs.themefuse.com/conexus/your-theme/reservation-forms/add-a-reservation-form-to-a-page-or-blog-post-fuse-framework
2019-08-17T10:52:35
CC-MAIN-2019-35
1566027312128.3
[]
docs.themefuse.com
Nltest Examples Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 NLTest Examples Example 1: Verify DCs in a domain In this example, the /dclist parameter is used to create a list of domain controllers of the domain fourthcoffee.com: nltest /dclist:fourthcoffee Output displays Example 2: Get detailed information about a user In this example, you want to find out detailed information about a certain user. At the command prompt, type: nltest /user:"TestAdmin" Output displays The detailed information provided can be used to troubleshoot many issues. Example 3: Verify trust relationship with a specific server In this example, you want to verify that the server fourthcoffee-dc-01 has a valid trust relationship with the domain. At the command prompt, type: nltest.exe /server:fourthcoffee-dc-01 /sc_query:fourthcoffee Output displays similar to the following: Flags: 30 HAS_IP HAS_TIMESERV Trusted DC Name \\fourthcoffee-dc-01.fourthcoffee.com Trusted DC Connection Status Status = 0 0x0 NERR_Success The command completed successfully Example 4: Determine the PDC emulator for a domain In this example, you want to determine which DC in your domain the Windows NT 4.0–based computers are looking to as the PDC. At the command prompt, type: nltest /dcname:fourthcoffee Output displays similar to the following: PDC for Domain fourthcoffee is \\fourthcoffee-dc-01 The command completed successfully You can see that fourthcoffee-dc-01 is the PDC emulator for your domain. Example 5: Show trust relationships for a domain In this example, you want to view the established trust relationships for your domain. At the command prompt, type: nltest /domain_trusts Output displays similar to the following: List of domain trusts: 0: fourthcoffee fourthcoffee.com (NT 5) (Forest Tree Root) (Primary Domain) The command completed successfully This example shows that the domain trusts itself and no others. See Also Concepts Nltest Overview Nltest Syntax Alphabetical List of Tools Spcheck Overview Netdom Overview Netdiag Overview Netcap Overview Httpcfg Overview Dnslint Overview Dnscmd Overview Dhcploc Overview Dcdiag Overview Browstat Overview
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc783306(v%3Dws.10)
2019-08-17T12:07:35
CC-MAIN-2019-35
1566027312128.3
[]
docs.microsoft.com
If the PBX is local and trying to communicate with a remote SIP trunk, see PBX VoIP NAT How-to for more ideas. Disable source port rewriting¶ By default pfSense® software rewrites the source port on all outbound traffic. This is necessary for proper NAT in some circumstances, such as having multiple SIP phones behind a single public IP registering to a single external PBX. With a minority of providers, rewriting the source port of RTP can cause one-way audio. In that case, set up manual outbound NAT and Static Port on all UDP traffic, potentially with the exclusion of UDP 5060. In old versions (pfSense 1.2.x and before) the firewall performed static port NAT on UDP 5060 traffic by default, but that is no longer desirable because it currently breaks more scenarios than it helps. However, in cases where static port on UDP 5060 is required, configuring manual outbound NAT to perform static port NAT for udp/5060 will allow it to function.
https://docs.netgate.com/pfsense/en/latest/nat/configuring-nat-for-voip-phones.html
2019-08-17T11:46:17
CC-MAIN-2019-35
1566027312128.3
[]
docs.netgate.com
KanterFrampton748 Computer security is a growing field in London, in Edinburgh, and across the rest of the UK. Worldwide, information security has evolved from a specialist discipline into a central component of an organisation's security function. Since the arrival of the personal computer in the 1980s, computers have spread from the data handling centre to colonise every office in the country, and in great numbers. Right behind them is the consequent increased need for computer security, due to the exponential growth in the numbers of malicious hackers and opportunistic cybercriminals in recent years. As an industry sector in the UK, computer security is comparatively new, whether in London, Edinburgh, or other towns and cities. Information security has yet to reach its natural level in most areas of the United Kingdom, and in principle there is still room for expansion, at least among the major financial and professional centres such as Edinburgh. In practice, however, further growth depends on how far organisations in London and elsewhere adhere to information security standards. Because of this, computer security in London is a highly active industry sector, with many specialist agency organisations as well as regular conferences, but only a few education or online courses in the subject. This is without doubt due to the relative lack of large companies that are controlled by regulatory pressures on information security governance. Without such pressures, organisations can be tempted to relegate computer security to a lower priority, a policy which may be appealing in the short term given training costs and the attractive power of London for employee flexibility, senior management posts, and international recognition. This situation has built up over many years, over decades even, and will not change in a matter of a few years.
http://docs.aiapir.com/index.php?title=KanterFrampton748&oldid=1924
2017-05-22T19:29:09
CC-MAIN-2017-22
1495463607046.17
[]
docs.aiapir.com
To overcome the 2 GB limitation, the ftp_raw solution below is probably the nicest. You can also perform this command using regular FTP commands: <?php $response = ftp_raw($ftpConnection, "SIZE $filename"); $filesize = floatval(str_replace('213 ', '', $response[0])); ?> [However, this] is insufficient for use on directories. As per RFC 3659 (), servers should return error 550 (File not found) if the command is issued on something else than a file, or if some other error occurred. For example, Filezilla indeed returns this string when using the ftp_raw command on a directory: array(1) { [0]=> string(18) "550 File not found" } RFC 959 () dictates that the returned string always consists of exactly 3 digits, followed by 1 space, followed by some text. (Multi-line text is allowed, but I am ignoring that.) So it is probably better to split the string with substr, or even a regular expression. <?php $response = ftp_raw($ftp, "SIZE $filename"); $responseCode = substr($response[0], 0, 3); $responseMessage = substr($response[0], 4); ?> Or with a regular expression: <?php $response = ftp_raw($ftp, "SIZE $filename"); if (preg_match("/^(\\d{3}) (.*)$/", $response[0], $matches) == 0) throw new Exception("Unable to parse FTP reply: ".$response[0]); list($response, $responseCode, $responseMessage) = $matches; ?> You could then decide to assume that response code '550' means that it's a directory. I guess that's just as 'dangerous' as assuming that ftp_size -1 means that it's a directory.
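Not from the original comment thread: a rough Python analogue of the same reply-parsing idea, using the standard library's ftplib, for readers coming from Python. The filename handling and the decision to treat 550 as "not a plain file" mirror the reasoning above and are assumptions.

import ftplib
import re

def remote_size(ftp: ftplib.FTP, filename: str):
    """Return the size reported by SIZE, or None when the server answers 550
    (file not found / not a plain file, e.g. a directory)."""
    try:
        reply = ftp.sendcmd("SIZE " + filename)   # e.g. "213 4822"
    except ftplib.error_perm as exc:              # 5xx replies raise error_perm
        if str(exc).startswith("550"):
            return None
        raise
    m = re.match(r"^(\d{3}) (.*)$", reply)
    if m is None or m.group(1) != "213":
        raise RuntimeError("Unable to parse FTP reply: " + reply)
    return int(m.group(2))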
http://docs.php.net/manual/en/function.ftp-size.php
2017-05-22T19:27:48
CC-MAIN-2017-22
1495463607046.17
[]
docs.php.net
A material design switch. Used to toggle the on/off state of a single setting. The switch itself does not maintain any state. Instead, when the state of the switch changes, the widget calls the onChanged callback. Most widgets that use a switch will listen for the onChanged callback and rebuild the switch with a new value to update the visual appearance of the switch. Requires one of its ancestors to be a Material widget. See also: - SwitchListTile, which combines this widget with a ListTile so that you can give the switch a label. - Checkbox, another widget with similar semantics. - Radio, for selecting among a set of explicit values. - Slider, for selecting a value in a range. - material.google.com/components/selection-controls.html#selection-controls-switch - Inheritance - Object - Widget - StatefulWidget - Switch Constructors - Switch({Key key, bool value, ValueChanged<bool> onChanged, Color activeColor, ImageProvider activeThumbImage, ImageProvider inactiveThumbImage }) Creates a material design switch. const Properties - activeColor → Color The color to use when this switch is on. final - activeThumbImage → ImageProvider An image to use on the thumb of this switch when the switch is on. final - inactiveThumbImage → ImageProvider An image to use on the thumb of this switch when the switch is off. final - onChanged → ValueChanged<bool> Called when the user toggles the switch on or off. final - value → bool Whether this switch is on or off. final
https://docs.flutter.io/flutter/material/Switch-class.html
2017-05-22T19:15:16
CC-MAIN-2017-22
1495463607046.17
[]
docs.flutter.io
Creating a custom events chart Custom events metrics are used to display the usage of the unique features of your game. When you have successfully implemented the "custom events" tracking events in your game, you can start building charts using custom events. Step 1: Choose a metric Choose a "custom events" metric in the configurator. We offer 3 custom events metrics. Click on the links below to see more details: Step 2: Pick a feature As soon as you choose a "custom events" metric, a tab named "custom events" pops up in the configurator. This tab contains 3 lists to choose from: feature type, feature subtype and feature sub-subtype. You can just choose a feature and press apply, or go into more detail and also choose a subtype and sub-subtype for this feature if you want to see something very specific. Step 3: Select an x-axis When the chart is created you are able to choose an x-axis in the bottom bar of the chart. When you pick (as an example) the x-axis "feature subtype", all subtypes of your chosen feature will be displayed automatically and you don't need to create an extra metric for every subtype.
https://docs.honeytracks.com/wiki/Creating_a_custom_events_chart
2017-05-22T19:08:29
CC-MAIN-2017-22
1495463607046.17
[]
docs.honeytracks.com
New in version 2.1. - junos-eznc - name: load configure file into device junos_config: src: srx.cfg comment: update config provider: "{{ netconf }}" - name: load configure lines into device junos_config: lines: - set interfaces ge-0/0/1 unit 0 description "Test interface" - set vlans vlan01 description "Test vlan" comment: update config provider: "{{ netconf }}" - name: rollback the configuration to id 10 junos_config: rollback: 10 provider: "{{ netconf }}" - name: zero out the current configuration junos_config: zeroize: yes provider: "{{ netconf }}" - name: confirm a previous commit junos_config: provider: "{{ netconf }}" Common return values are documented here Return Values, the following are the fields unique to this module: Note.
http://docs.ansible.com/ansible/junos_config_module.html
2017-05-22T19:20:38
CC-MAIN-2017-22
1495463607046.17
[]
docs.ansible.com
Deleting an AWS Installation from the Console Page last updated: When you deploy Pivotal Cloud Foundry (PCF) to Amazon Web Services (AWS), you provision a set of resources. This topic describes how to delete the AWS resources associated with a PCF deployment. You can use the AWS console to remove an installation of all components, but retain the objects in your bucket for a future deployment. - Log into your AWS Console. - Navigate to your EC2 dashboard. Select Instances from the menu on the left side. - Terminate all your instances. - Select Load Balancers. Delete all load balancers. - From the AWS Console, select RDS. - Select Instances from the menu on the left side. Delete the RDS instances. - Select Create final Snapshot from the drop-down menu. Click Delete. - From the AWS Console, select VPC. - Select Your VPCs from the menu on the left. Delete the VPCs. - Check the box to acknowledge that you want to delete your default VPC. Click Yes, Delete.
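Not part of the original topic, which uses the AWS console throughout: a minimal boto3 sketch of just the first step (terminating the deployment's EC2 instances), assuming a hypothetical Name tag pattern such as pcf-* identifies the instances and that the region is known.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Find instances whose Name tag matches the hypothetical pcf-* pattern.
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Name", "Values": ["pcf-*"]}]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instance_ids:
    # Mirrors the "Terminate all your instances" console step above.
    ec2.terminate_instances(InstanceIds=instance_ids)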
http://docs.pivotal.io/pivotalcf/1-10/customizing/deleting-aws-install.html
2017-05-22T19:34:16
CC-MAIN-2017-22
1495463607046.17
[]
docs.pivotal.io
Documentation for pronouncing¶ Pronouncing is a simple interface for the CMU Pronouncing Dictionary. The library is designed to be easy to use, and has no external dependencies. For example, here’s all you need to do in order to find rhymes for a given word: >>> import pronouncing >>> pronouncing.rhymes("climbing") ['diming', 'liming', 'priming', 'rhyming', 'timing'] Read the documentation here:. I made this library because I wanted to be able to use the CMU Pronouncing Dictionary in my projects without having to install the grand behemoth that is NLTK. It’s designed to be friendly to beginner programmers who want to get started with creative language generation and analysis, and for experts who want to make quick prototypes of projects that deal with English pronunciation. Installation¶ Install with pip like so: pip install pronouncing You can also download the source code and install manually: python setup.py install
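Not part of the original page: a couple of additional calls, as a sketch of quick prototyping with the library; the exact phone strings returned depend on the bundled CMU dictionary version.

import pronouncing

# Look up ARPAbet pronunciations and count syllables from the stress digits.
phones = pronouncing.phones_for_word("climbing")
if phones:  # empty list if the word is not in the dictionary
    print(phones[0])                              # e.g. "K L AY1 M IH0 NG"
    print(pronouncing.syllable_count(phones[0]))  # 2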
http://pronouncing.readthedocs.io/en/stable/
2017-05-22T19:15:00
CC-MAIN-2017-22
1495463607046.17
[]
pronouncing.readthedocs.io
Bugfixing Scripts¶ fix/* scripts fix various bugs and issues, some of them obscure. Contents fix/stable-temp¶ Instantly sets the temperature of all free-lying items to be in equilibrium with the environment, which stops temperature updates until something changes. To maintain this efficient state, use tweak fast-heat.
http://dfhack.readthedocs.io/en/stable/docs/_auto/fix.html
2017-05-22T19:09:21
CC-MAIN-2017-22
1495463607046.17
[]
dfhack.readthedocs.io
🔗Stream Exploration Streams are the primary method of ingesting real-time data into CDAP. It is often useful to be able to examine data in a stream in an ad-hoc manner through SQL-like queries Each event in a stream contains a timestamp, a map of headers, and a body. When a stream is created, a corresponding Hive table is created that allows queries to be run over those three columns. Many times, stream event bodies are also structured and have a format and schema of their own. For example, event bodies may be a format such as comma-delimited text or Avro-encoded binary data. In those cases, it is possible to set a format and schema on a stream, enabling more powerful queries. Some formats have a schema that describes the structure of data the format can read. A format such as the Avro format requires that a schema be explicitly given. Other formats, such as the CSV format, do not have a schema. CDAP supplies a default schema for formats such as CSV that you can—and usually will need to—replace with a custom schema to match your data's structure. Let's take a closer look at attaching formats and schemas on streams. 🔗Formats A format defines how the bytes in an event body can be read as a higher-level object. For example, the CSV (comma-separated values) format can read each value in comma-delimited text as a separate column of some given type. Format is a configuration setting that can be set on a stream. This can be done either through the HTTP RESTful API or by using the CDAP Command Line Interface (CLI): set stream format <stream-id> <format-name> [<schema>] [<settings>] set stream format mystream csv "f1 int, f2 string" It is important to note that formats are applied at read time. When events are added to a stream, there are no checks performed to enforce that all events added to the stream are readable by the format you have set on a stream. If any stream event cannot be read by the format you have set, your entire query will fail and you will not get any results. 🔗Schemas As mentioned above, a format may support different schemas. CDAP schemas are adapted from the Avro Schema Declaration with a few differences: - Map keys do not have to be of type name string, but can be of any type. - No "name" property for the enumtype. - No support of "doc" and "aliases" in recordand enumtypes. - No support of "doc" and "default" in recordfields. - No fixedtype. There are a few additional limitations on the types of schemas that can be used for exploration: - For all formats: - Schemas must be a record of at least one field. - For all formats except avro: - Enums are not supported. - Unions are not supported, unless it is a union of a null and another type, representing a nullable type. - Recursive types are not supported. This means you cannot have a record field that references itself in one of its fields. Data exploration using Cloudera Impala has an additional limitation: - Fields must be a scalar type: no maps, arrays, or records allowed. On top of these general limitations, each format has its own restrictions on the types of schemas they support. For example, the CSV/TSV formats do not support maps or records as data types. 🔗Schema Syntax Schemas are represented as JSON Objects, following the same format as Avro schemas. The JSON representation is used by the HTTP RESTful APIs, while the CDAP CLI supports a SQL-like syntax. 
For example, the SQL-like schema: f1 int, f2 string, f3 array<int> not null, f4 map<string, int> not null, f5 record<x:int, y:double> not null is equivalent to the Avro-like JSON schema: { "type": "record", "name": "rec1", "fields": [ { "name": "f1", "type": [ "int", "null" ] }, { "name": "f2", "type": [ "string", "null" ] }, { "name": "f3", "type": { "type": "array", "items": [ "int", "null" ] } }, { "name": "f4", "type": { "type": "map", "keys": [ "string", "null" ], "values": [ "int", "null" ] } }, { "name": "f5", "type": { "type": "record", "name": "rec2", "fields": [ { "name": "x", "type": [ "int", "null" ] }, { "name": "y", "type": [ "double", "null" ] } ] } } ] } 🔗Accepted Formats Accepted formats (some of which include schemas) are: avro(Avro: format); clf(Apache Combined Log Format, schema); csv(comma-separated values: format); grok(format); syslog(Syslog Message Format, schema); text(format); and tsv(tab-separated values: format). 🔗Avro Format The avro format reads event bodies as binary-encoded Avro. The format requires that a schema be given and has no settings. For example: $ cdap cli call set stream format mystream avro "col1 string, col2 map<string,int> not null, col3 record<x:double, y:float>" > cdap cli call set stream format mystream avro "col1 string, col2 map<string,int> not null, col3 record<x:double, y:float>" 🔗Combined Log Format The Apache Combined Log Format ( clf) is a very common web server log file format. It is a super-set of the similarly-named Common Logfile Format, adding to it two fields ("Referer" and "User-Agent") of request headers. Though described as a format, it is actually a combination of format and schema. The format consists of fields separated by spaces, with fields containing spaces surrounded by quotes, and the request_time field enclosed in square brackets. The schema is: remote_host: Remote hostname or IP number; type string remote_login: The remote logname of the user; type string auth_user: The username as which the user has authenticated; type string request_time: Date and time of the request, enclosed in square brackets; type string request: The request line exactly as it came from the client, enclosed in double quotes; type string status: The HTTP status code returned to the client; type integer content_length: The content-length of the document transferred, in bytes; type integer referrer: "Referer" [sic] HTTP request header, with the site that the client reports having been referred from, enclosed in double quotes; type string user_agent: User-Agent HTTP request header. This is the identifying information that the client browser reports about itself, enclosed in double quotes; type integer Note that in CDAP's implementation, the "Referer" field uses the correct spelling of the word "referrer". 🔗CSV and TSV Formats The csv (comma-separated values) and tsv (tab-separated values) formats read event bodies as delimited text. They have three settings: charset for the text charset, delimiter for the delimiter, and mapping for column-index-to-schema-field mapping. The charset setting defaults to utf-8. The delimiter setting defaults to a comma for the csv format and to a tab for the tsv format. The mapping setting is optional, and is in the zero-based format index0:field0,index1:field1. If provided, the CSV/TSV field order will be decided by the mapping rather than using the schema field order. For example, if the mapping is 1:age,0:name, then the stream event foo,123,82 will be parsed as {"age":123, "name":"foo"}. 
These formats only support scalars as column types, except for the very last column, which can be an array of strings. All types can be nullable. If no schema is given, the default schema is an array of strings. Neither maps nor records are supported as data types. For example: $ cdap cli set stream format mystream csv "col1 string, col2 int not null, col3 array<string>" > cdap cli set stream format mystream csv "col1 string, col2 int not null, col3 array<string>" 🔗Grok Formats grok allows unstructured data to be parsed into a structured format using grok filters. The grok filters are passed as a setting with the key "pattern". For example, to create a stream-view mygrok on an existing stream mystream using the CDAP CLI: $ cdap cli create stream-view mystream mygrok format grok \ schema "facility string, priority string, message string" \ settings "pattern=(?<facility>\b(?:[0-9]+)\b).(?<priority>\b(?:[0-9]+)\b) (?<message>.*)" > cdap cli create stream-view mystream mygrok format grok ^ schema "facility string, priority string, message string" ^ settings "pattern=(?<facility>\b(?:[0-9]+)\b).(?<priority>\b(?:[0-9]+)\b) (?<message>.*)" 🔗Syslog Format The Syslog Message Format ( syslog) is a combination of a format and this schema: timestamp: date-timestamp; type string logsource: type string program: type string pid: type integer message: type string 🔗Text Format The text format simply interprets each event body as a string. The format supports a very limited schema, namely a record with a single field of type string. The format supports a charset setting that allows you to specify the charset of the text. It defaults to utf-8. For example: $ cdap cli set stream format mystream text "data string not null" "charset=ISO-8859-1" > cdap cli set stream format mystream text "data string not null" "charset=ISO-8859-1" 🔗End-to-End Example In the following example, we will create a stream, send data to it, attach a format and schema to the stream, then query the stream. Suppose we want to create a stream for stock trades. 
We first create the stream and send some data to it as comma-delimited text: $" If we run a query over the stream, we can see each event as text: cdap > execute "select * from stream_trades" +===================================================================================================+ | stream_trades.ts: BIGINT | stream_trades.headers: map<string,string> | stream_trades.body: STRING | +===================================================================================================+ | 1422493022983 | {} | AAPL,50,112.98 | | 1422493027358 | {} | AAPL,100,112.87 | | 1422493031802 | {} | AAPL,8,113.02 | | 1422493036080 | {} | NFLX,10,437.45 | +===================================================================================================+ Since we know the body of every event is comma-separated text and that each event contains three fields, we can set a format and schema on the stream to allow us to run more complicated queries: cdap > set stream format trades csv "ticker string, num_traded int, price double" cdap > execute "select ticker, count(*) as transactions, sum(num_traded) as volume from stream_trades group by ticker order by volume desc" +========================================================+ | ticker: STRING | transactions: BIGINT | volume: BIGINT | +========================================================+ | AAPL | 3 | 158 | | NFLX | 1 | 10 | +========================================================+ 🔗Formulating Queries When creating your queries, keep these limitations in mind: - The query syntax of CDAP is a subset of the variant of SQL that was first defined by Apache Hive. - Writing into a stream using SQL is not supported. - The SQL command DELETEis not supported. - When addressing your streams in queries, you need to prefix the stream name with stream_. For example, if your stream is named Purchases, then the corresponding table name is stream_purchases. Note that the table name is all lower-case, regardless of how it was defined. - If your stream name contains a '.' or a '-', those characters will be converted to '_' for the Hive table name. For example, if your stream is named my-stream.name, the corresponding Hive table name will be stream_my_stream_name. Beware of name collisions. For example, my.streamwill use the same Hive table name as my_stream. - CDAP uses a custom storage handler to read streams through Hive. This means that queries must be run through CDAP and not directly through Hive unless you place CDAP jars in your Hive classpath. This also means that streams cannot be queried directly by Impala. If you wish to use Impala to explore data in a stream, you can create a CDAP pipeline that converts stream data into a TimePartitionedFileSet. This is also described in the section Introduction to CDAP: Transforming Your Data. - Some versions of Hive may try to create a temporary staging directory at the table location when executing queries. If you are seeing permission errors, try setting hive.exec.stagingdirin your Hive configuration to /tmp/hive-staging. For more examples of queries, please refer to the Hive language manual.
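Not part of the original page: a tiny Python sketch of the stream-name-to-Hive-table-name rule described in the list above (prefix with stream_, lower-case the name, and convert '.' and '-' to '_').

def hive_table_name(stream_name: str) -> str:
    # CDAP exposes a stream as Hive table stream_<name>: lower-cased,
    # with '.' and '-' converted to '_' (beware of resulting name collisions).
    return "stream_" + stream_name.lower().replace(".", "_").replace("-", "_")

print(hive_table_name("Purchases"))       # stream_purchases
print(hive_table_name("my-stream.name"))  # stream_my_stream_name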
http://docs.cask.co/cdap/4.1.1/en/developers-manual/data-exploration/streams.html
2017-05-22T19:33:27
CC-MAIN-2017-22
1495463607046.17
[]
docs.cask.co
Bandwidth monitor¶ BandwidthD¶ BandwidthD tracks usage of TCP/IP network subnets and builds graphs to display utilization. After installation, BandwidthD is automatically started. Graphs can be accessed using the Server Manager. ntopng¶ ‘admin’ user - Admin user password. This password is not related to the NethServer admin password. - Interfaces - Interfaces on which ntopng will listen.
http://docs.nethserver.org/en/v7/bandwidth_monitor.html
2017-05-22T19:14:40
CC-MAIN-2017-22
1495463607046.17
[]
docs.nethserver.org
Netatmo Binding The Netatmo binding integrates the following Netatmo products: - Personal Weather Station. Reports temperature, humidity, air pressure, carbon dioxide concentration in the air, as well as the ambient noise level. - Thermostat. Reports ambient temperature, and allows you to check the target temperature and to consult and change the furnace heating status. See for details on their product. Binding Configuration The binding has no configuration options itself; all configuration is done at the ‘Things’ level, but first you’ll have to grant openHAB access to the Netatmo API. Here is the procedure: 1. Application Creation Create an application at The variables you’ll need in order to set up the binding are: <CLIENT_ID> Your client ID taken from your App at <CLIENT_SECRET> A token provided along with the <CLIENT_ID>. <USERNAME> The username you use to connect to the Netatmo API (usually your email address). <PASSWORD> The password attached to the above username. 2. Bridge and Things Configuration Once you have the needed information from the Netatmo API, you’ll be able to configure the bridge and things. E.g. Bridge netatmo:netatmoapi:home [ clientId="<CLIENT_ID>", clientSecret="<CLIENT_SECRET>", username = "<USERNAME>", password = "<PASSWORD>", readStation=true|false, readThermostat=true|false] { Thing NAMain inside [ equipmentId="aa:aa:aa:aa:aa:aa", [refreshInterval=60000] ] Thing NAModule1 outside [ equipmentId="yy:yy:yy:yy:yy:yy", parentId="aa:aa:aa:aa:aa:aa" ] Thing NAPlug plugtherm [ equipmentId="bb:bb:bb:bb:bb:bb", [refreshInterval=60000] ] Thing NATherm1 thermostat [ equipmentId="xx:xx:xx:xx:xx:xx", parentId="bb:bb:bb:bb:bb:bb" ] ... } Configure Things The IDs for the modules can be extracted from the developer documentation on the Netatmo site. First log in with your user. Some examples in the documentation then contain the real results of your weather station. Get the IDs of your devices (indoor, outdoor, rain gauge) here. main_device is the ID of the “main device”, the indoor sensor. This is equal to the MAC address of the Netatmo. You can recognize the other modules by “module_name” and then note the “_id”, which you need later. Another way to get the IDs is to calculate them. You have to calculate the ID for the outside module as follows (it cannot be read from the app): - if the first serial character is “h”: start with “02” - if the first serial character is “i”: start with “03” Then append “:00:00:”, split the rest into three parts of two characters each, and append them with a colon as delimiter. For example, your serial number “h00bcdc” should end up as “02:00:00:00:bc:dc” (a small sketch of this calculation follows below). Discovery If you don’t manually create things in the *.things file, the Netatmo Binding is able to automatically discover all dependent modules and devices from the Netatmo website.
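Not part of the original page: a minimal Python sketch of the outdoor-module ID calculation described above, assuming the serial number follows the "h"/"i" prefix convention the page describes.

def outdoor_module_id(serial: str) -> str:
    # First serial character selects the prefix: "h" -> "02", "i" -> "03".
    prefixes = {"h": "02", "i": "03"}
    prefix = prefixes[serial[0].lower()]
    rest = serial[1:]
    # Split the remaining characters into pairs, colon-delimited.
    pairs = [rest[i:i + 2] for i in range(0, len(rest), 2)]
    return prefix + ":00:00:" + ":".join(pairs)

print(outdoor_module_id("h00bcdc"))  # 02:00:00:00:bc:dc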
Channels Weather Station Main Indoor Device Example item for the indoor module: Number Netatmo_Indoor_CO2 "CO2" <carbondioxide> { channel = "netatmo:NAMain:home:inside:Co2" } Supported types for the indoor module: - Temperature - TemperatureTrend - Humidity - Co2 - Pressure - PressureTrend - AbsolutePressure - Noise - HeatIndex - Humidex - Dewpoint - DewpointDepression - WifiStatus - Location - TimeStamp - LastStatusStore Weather Station Outdoor module Example item for the outdoor module Number Netatmo_Outdoor_Temperature "Temperature" { channel = "netatmo:NAModule1:home:outside:Temperature" } Supported types for the outdoor module: - Temperature - TemperatureTrend - Humidity - RfStatus - BatteryVP - TimeStamp - Humidex - HeatIndex - Dewpoint - DewpointDepression - LastMessage - LowBattery Weather Station Additional Indoor module Example item for the indoor module Number Netatmo_Indoor2_Temperature "Temperature" { channel = "netatmo:NAModule4:home:insidesupp:Temperature" } Supported types for the additional indoor module: - Co2 - Temperature - Humidity - RfStatus - BatteryVP - TimeStamp - Humidex - HeatIndex - Dewpoint - DewpointDepression - LastMessage - LowBattery Rain Example item for the rain gauge Number Netatmo_Rain_Current "Rain [%.1f mm]" { channel = "netatmo:NAModule3:home:rain:Rain" } Supported types for the rain guage: - Rain - Rain1 - Rain24 - RfStatus - BatteryVP - LastMessage - LowBattery Weather Station Wind module Example item for the wind module: Number Netatmo_Wind_Strength "Wind Strength [%.0f KPH]" { channel = "netatmo:NAModule2:home:wind:WindStrength" } Supported types for the wind module: - WindStrength - WindAngle - GustStrength - GustAngle - LastMessage - LowBattery - RfStatus - BatteryVP Thermostat Relay Device Supported types for the thermostat relay device: - LastStatusStore - WifiStatus - Location Thermostat Module Supported types for the thermostat module: - Temperature - SetpointTemperature - SetpointMode - BoilerOn - BoilerOff - TimeStamp Common problems Missing Certificate Authority This version of the binding has been modified to avoid the need to impoort StartCom certificate in the local JDK can be solved by installing the StartCom CA Certificate into the local JDK like this: - Download the certificate from or use wget - Then import it into the keystore (the password is “changeit”) $JAVA_HOME/bin/keytool -import -keystore $JAVA_HOME/jre/lib/security/cacerts -alias StartCom-Root-CA -file ca.pem If $JAVA_HOME is not set then run the command: update-alternatives --list java This should output something similar to: /usr/lib/jvm/java-8-oracle/jre/bin/java Use everything before /jre/… to set the JAVA_HOME environment variable: export JAVA_HOME=/usr/lib/jvm/java-8-oracle After you set the environment variable, try: ls -l $JAVA_HOME/jre/lib/security/cacerts If it’s set correctly then you should see something similar to: -rw-r--r-- 1 root root 101992 Nov 4 10:54 /usr/lib/jvm/java-8-oracle/jre/lib/security/cacerts Now try and rerun the keytool command. If you didn’t get errors, you should be good to go source. Alternative approach if above solution does not work: sudo keytool -delete -alias StartCom-Root-CA -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit Download the certificate from to $JAVA_HOME/jre/lib/security/ and save it as api.netatmo.net.crt (X.509 / PEM). sudo $JAVA_HOME/bin/keytool -import -keystore $JAVA_HOME/jre/lib/security/cacerts -alias StartCom-Root-CA -file api.netatmo.net.crt The password is “changeit”. 
Sample data If you want to evaluate this binding but have not got a Netatmo station yourself yet, you can add the Netatmo office in Paris to your account: Icons The following icons are used by original Netatmo web app: Modules - - - Battery status - - - - - Signal status - - - - - Wifi status - - - -
http://docs.openhab.org/addons/bindings/netatmo/readme.html
2017-05-22T19:10:39
CC-MAIN-2017-22
1495463607046.17
[]
docs.openhab.org
Reference Architectures Introduction A PCF reference architecture describes a proven approach for deploying Pivotal Cloud Foundry on a specific IaaS, such as AWS, Azure, GCP, and vSphere, so that meets the following requirements: - Secure - Publicly-accessible - Includes common PCF-managed services such as MySQL, RabbitMQ, and Spring Cloud Services - Can host at least 100 app instances, or far more These documents detail PCF reference architectures for different IaaSes, to help you determine the best configuration for your PCF deployment. Products Covered by the Reference Architectures Pivotal has validated the following PCF products on its own deployments based on these reference architectures: - Pivotal Cloud Foundry Ops Manager - Pivotal Cloud Foundry Elastic Runtime
http://docs.pivotal.io/pivotalcf/1-10/refarch/index.html
2017-05-22T19:33:43
CC-MAIN-2017-22
1495463607046.17
[]
docs.pivotal.io
AppDNA solutions provide the information you need to make changes to your application environment, without requiring the assistance of consultants. You provide basic information about your current and target deployments in a Solutions wizard and then review the easy-to-interpret reports to see which applications will work in the new environment, either without changes or after remediation.
https://docs.citrix.com/de-de/dna/7-8/configure/solution-configure.html
2017-05-22T19:13:58
CC-MAIN-2017-22
1495463607046.17
[]
docs.citrix.com
Applies To: Windows Server 2016. Use the Publish-BCFileContent command or the Publish-BCWebContent command, depending on the type of content server, to trigger hash generation and to add data to a data package. After all the data has been added to the data package, export it by using the Export-BCCachePackage command.
https://docs.microsoft.com/en-us/windows-server/networking/branchcache/deploy/prehashing-and-preloading
2017-05-22T20:25:05
CC-MAIN-2017-22
1495463607046.17
[]
docs.microsoft.com
GUI Scripts¶ gui/* scripts implement dialogs in the main game window. In order to avoid user confusion, as a matter of policy all these tools display the word DFHack on the screen somewhere while active. When that is not appropriate because they merely add keybinding hints to existing DF screens, they deliberately use red instead of green for the key. Contents - GUI Scripts - gui/advfort - gui/advfort_items - gui/assign-rack - gui/autobutcher - gui/choose-weapons - gui/clone-uniform - gui/companion-order - gui/confirm-opts - gui/create-item - gui/dfstatus - gui/extended-status - gui/family-affairs - gui/gm-editor - gui/gm-unit - gui/guide-path - gui/hack-wish - gui/hello-world - gui/liquids - gui/load-screen - gui/manager-quantity - gui/mechanisms - gui/mod-manager - gui/no-dfhack-init - gui/power-meter - gui/prerelease-warning - gui/quickcmd - gui/rename - gui/room-list - gui/settings-manager - gui/siege-engine - gui/stockpiles - gui/unit-info-viewer - gui/workflow - gui/workshop-job gui/advfort¶ This script allows to perform jobs in adventure mode. For more complete help ? while script is running. It’s most comfortable to use this as a keybinding. (e.g. keybinding set Ctrl-T gui/advfort). Possible arguments: An example of player digging in adventure mode: WARNING: changes only persist in non procedural sites, namely: player forts, caves, and camps. gui/advfort_items¶ Does something with items in adventure mode jobs. gui/assign-rack¶ This script requires a binpatch, which has not been available since DF 0.34.11 See Bug 1445 for more info about the patches. Keybinding: P in dwarfmode/QueryBuilding/Some/Weaponrack gui/autobutcher¶ An in-game interface for autobutcher. This script must be called from either the overall status screen or the animal list screen. Keybinding: ShiftB in pet/List/Unit gui/choose-weapons¶ Activate in the Equip->View/Customize page of the military screen. Depending on the cursor location, it rewrites all ‘individual choice weapon’ entries in the selected squad or position to use a specific weapon type matching the assigned unit’s top skill. If the cursor is in the rightmost list over a weapon entry, it rewrites only that entry, and does it even if it is not ‘individual choice’. Rationale: individual choice seems to be unreliable when there is a weapon shortage, and may lead to inappropriate weapons being selected. Keybinding: CtrlW in layer_military/Equip/Customize/View gui/clone-uniform¶ When invoked, the script duplicates the currently selected uniform template, and selects the newly created copy. Activate in the Uniforms page of the military screen with the cursor in the leftmost list. Keybinding: CtrlC in layer_military/Uniforms gui/companion-order¶ A script to issue orders for companions. Select companions with lower case chars, issue orders with upper case. Must be in look or talk mode to issue command on tile. - move - orders selected companions to move to location. If companions are following they will move no more than 3 tiles from you. - equip - try to equip items on the ground. - pick-up - try to take items into hand (also wield) - unequip - remove and drop equipment - unwield - drop held items - wait - temporarily remove from party - follow - rejoin the party after “wait” - leave - remove from party (can be rejoined by talking) Keybinding: ShiftO in dungeonmode gui/confirm-opts¶ A basic configuration interface for the confirm plugin. gui/create-item¶ A graphical interface for creating items. 
See also: createitem, modtools/create-item, Issue 735 gui/dfstatus¶ Show a quick overview of critical stock quantities, including food, drinks, wood, and various bars. Sections can be enabled/disabled/configured by editing dfhack-config/dfstatus.lua. Keybinding: CtrlShiftI in dwarfmode/Default Keybinding: CtrlShiftI in dfhack/lua/dfstatus gui/extended-status¶ Adds more subpages to the z status screen. Usage: gui/extended-status enable|disable|help|subpage_names enable|disable gui/extended-status gui/family-affairs¶ A user-friendly interface to view romantic relationships, with the ability to add, remove, or otherwise change them at your whim - fantastic for depressed dwarves with a dead spouse (or matchmaking players...). The target/s must be alive, sane, and in fortress mode. gui/family-affairs [unitID] - shows GUI for the selected unit, or the specified unit ID gui/family-affairs divorce [unitID] - removes all spouse and lover information from the unit and it’s partner, bypassing almost all checks. gui/family-affairs [unitID] [unitID] - divorces the two specificed units and their partners, then arranges for the two units to marry, bypassing almost all checks. Use with caution. gui/gm-editor¶ This editor allows to change and modify almost anything in df. Press ? for in-game help. There are three ways to open this editor: - Callling gui/gm-editorfrom a command or keybinding opens the editor on whatever is selected or viewed (e.g. unit/item description screen) - using gui/gm-editor <lua command> - executes lua command and opens editor on its results (e.g. gui/gm-editor "df.global.world.items.all"shows all items) - using gui/gm-editor dialog - shows an in game dialog to input lua command. Works the same as version above. - using gui/gm-editor toggle- will hide (if shown) and show (if hidden) editor at the same position you left it gui/gm-unit¶ An editor for various unit attributes. gui/guide-path¶ Activate in the Hauling menu with the cursor over a Guide order. The script displays the cached path that will be used by the order; the game computes it when the order is executed for the first time. Keybinding: AltP in dwarfmode/Hauling/DefineStop/Cond/Guide gui/hack-wish¶ An alias for gui/create-item. Deprecated. gui/hello-world¶ A basic example for testing, or to start your own script from. gui/liquids¶ This script is a gui front-end to liquids and works similarly, allowing you to add or remove water & magma, and create obsidian walls & floors. Warning There is no undo support. Bugs in this plugin have been known to create pathfinding problems and heat traps. The b key changes how the affected area is selected. The default Rectangle mode works by selecting two corners like any ordinary designation. The p key chooses between adding water, magma, obsidian walls & floors, or just tweaking flags. When painting liquids, it is possible to select the desired level with + -, and choose between setting it exactly, only increasing or only decreasing with s. In addition, f allows disabling or enabling the flowing water computations for an area, and r operates on the “permanent flow” property that makes rivers power water wheels even when full and technically not flowing. After setting up the desired operations using the described keys, use Enter to apply them. Keybinding: AltL in dwarfmode/LookAround gui/load-screen¶ A replacement for the “continue game” screen. 
Usage: gui/load-screen enable|disable gui/manager-quantity¶ Sets the quantity of the selected manager job Sample usage: keybinding add Alt-Q@jobmanagement gui/manager-quantity Keybinding: AltQ in jobmanagement gui/mechanisms¶ Lists mechanisms connected to the building, and their links. Navigating the list centers the view on the relevant linked buildings. To exit, press Esc or Enter; Esc recenters on the original building, while Enter leaves focus on the current one. Shift Enter has an effect equivalent to pressing Enter, and then re-entering the mechanisms UI. Keybinding: CtrlM in dwarfmode/QueryBuilding/Some gui/mod-manager¶ A simple way to install and remove small mods, which are not included in DFHack. Examples are available here. Each mod is a lua script located in <DF>/mods/, which MUST define the following variables: Of course, this doesn’t actually make a mod - so one or more of the following should also be defined: gui/no-dfhack-init¶ Shows a warning at startup if no valid dfhack.init file is found. gui/power-meter¶ Activate an in-game interface for power-meter after selecting Pressure Plate in the build menu. The script follows the general look and feel of the regular pressure plate build configuration page, but configures parameters relevant to the modded power meter building. Keybinding: CtrlShiftM in dwarfmode/Build/Position/Trap gui/prerelease-warning¶ Shows a warning on world load for pre-release builds. With no arguments passed, the warning is shown unless the “do not show again” option has been selected. With the force argument, the warning is always shown. gui/quickcmd¶ A list of commands which you can edit while in-game, and which you can execute quickly and easily. For stuff you use often enough to not want to type it, but not often enough to be bothered to find a free keybinding. gui/rename¶ Backed by rename, this script allows entering the desired name via a simple dialog in the game ui. gui/rename [building]in qmode changes the name of a building. The selected building must be one of stockpile, workshop, furnace, trap, or siege engine. It is also possible to rename zones from the imenu. gui/rename [unit]with a unit selected changes the nickname. Unlike the built-in interface, this works even on enemies and animals. gui/rename unit-professionchanges the selected unit’s custom profession name. Likewise, this can be applied to any unit, and when used on animals it overrides their species string. The building or unit options are automatically assumed when in relevant UI state. Keybinding: CtrlShiftN Keybinding: CtrlShiftT -> "gui/rename unit-profession" gui/room-list¶ Activate in q mode, either immediately or after opening the assign owner page. The script lists other rooms owned by the same owner, or by the unit selected in the assign list, and allows unassigning them. Keybinding: AltR in dwarfmode/QueryBuilding/Some gui/settings-manager¶ An in-game manager for settings defined in init.txt and d_init.txt. Keybinding: AltS in title Keybinding: AltS in dwarfmode/Default gui/siege-engine¶ Activate an in-game interface for siege-engine, after selecting a siege engine in q mode. The main mode displays the current target, selected ammo item type, linked stockpiles and the allowed operator skill range. The map tile color is changed to signify if it can be hit by the selected engine: green for fully reachable, blue for out of range, red for blocked, yellow for partially blocked. 
Pressing r changes into the target selection mode, which works by highlighting two points with Enter like all designations. When a target area is set, the engine projectiles are aimed at that area, or units within it (this doesn’t actually change the original aiming code, instead the projectile trajectory parameters are rewritten as soon as it appears). After setting the target in this way for one engine, you can ‘paste’ the same area into others just by pressing p in the main page of this script. The area to paste is kept until you quit DF, or select another area manually. Pressing t switches to a mode for selecting a stockpile to take ammo from. Exiting from the siege engine script via Esc reverts the view to the state prior to starting the script. Shift Esc retains the current viewport, and also exits from the q mode to main menu. Keybinding: AltA in dwarfmode/QueryBuilding/Some/SiegeEngine gui/stockpiles¶ An in-game interface for stocksettings, to load and save stockpile settings from the q menu. Usage: Don’t forget to enable stockpiles and create the stocksettings directory in the DF folder before trying to use the GUI. Keybinding: AltL -> "gui/stockpiles -load" in dwarfmode/QueryBuilding/Some/Stockpile Keybinding: AltS -> "gui/stockpiles -save" in dwarfmode/QueryBuilding/Some/Stockpile gui/unit-info-viewer¶ Displays age, birth, maxage, shearing, milking, grazing, egg laying, body size, and death info about a unit. Recommended keybinding Alt I. gui/workflow¶ Bind to a key (the example config uses Alt-W), and activate with a job selected in a workshop in q mode. This script provides a simple interface to constraints managed by workflow. When active, it displays a list of all constraints applicable to the current job, and their current status. A constraint specifies a certain range to be compared against either individual item or whole stack count, an item type and optionally a material. When the current count is below the lower bound of the range, the job is resumed; if it is above or equal to the top bound, it will be suspended. Within the range, the specific constraint has no effect on the job; others may still affect it. Pressing i switches the current constraint between counting stacks or items. Pressing r lets you input the range directly; e, r, d, f adjust the bounds by 5, 10, or 20 depending on the direction and the i setting (counting items and expanding the range each gives a 2x bonus). Pressing a produces a list of possible outputs of this job as guessed by workflow, and lets you create a new constraint by choosing one as template. If you don’t see the choice you want in the list, it likely means you have to adjust the job material first using job item-material or gui/workshop-job, as described in the workflow documentation. In this manner, this feature can be used for troubleshooting jobs that don’t match the right constraints. If you select one of the outputs with Enter, the matching constraint is simply added to the list. If you use Shift Enter, the interface proceeds to the next dialog, which allows you to edit the suggested constraint parameters to suit your need, and set the item count range. Pressing s (or, with the example config, Alt-W in the z stocks screen) opens the overall status screen: This screen shows all currently existing workflow constraints, and allows monitoring and/or changing them from one screen. The constraint list can be filtered by typing text in the field below. 
The color of the stock level number indicates how “healthy” the stock level is, based on current count and trend. Bright green is very good, green is good, red is bad, bright red is very bad. The limit number is also color-coded. Red means that there are currently no workshops producing that item (i.e. no jobs). If it’s yellow, that means the production has been delayed, possibly due to lack of input materials. The chart on the right is a plot of the last 14 days (28 half day plots) worth of stock history for the selected item, with the rightmost point representing the current stock value. The bright green dashed line is the target limit (maximum) and the dark green line is that minus the gap (minimum). Keybinding: AltW in dwarfmode/QueryBuilding/Some/Workshop/Job Keybinding: AltW -> "gui/workflow status" in overallstatus Keybinding: AltW -> "gui/workflow status" in dfhack/lua/status_overlay gui/workshop-job¶ Run with a job selected in a workshop in the q mode. The script shows a list of the input reagents of the selected job, and allows changing them like the job item-type and job item-material commands. Specifically, pressing the i key pops up a dialog that lets you select an item type from a list. Pressing m, unless the item type does not allow a material, lets you choose a material. Since there are a lot more materials than item types, this dialog is more complex and uses a hierarchy of sub-menus. List choices that open a sub-menu are marked with an arrow on the left. Warning Due to the way input reagent matching works in DF, you must select an item type if you select a material, or the material will be matched incorrectly in some cases. If you press m without choosing an item type, the script will auto-choose if there is only one valid choice, or pop up an error message box instead of the material selection dialog. Note that both materials and item types presented in the dialogs are filtered by the job input flags, and even the selected item type for material selection, or material for item type selection. Many jobs would let you select only one input item type. For example, if you choose a plant input item type for your prepare meal job, it will only let you select cookable materials. If you choose a barrel item instead (meaning things stored in barrels, like drink or milk), it will let you select any material, since in this case the material is matched against the barrel itself. Then, if you select, say, iron, and then try to change the input item type, now it won’t let you select plant; you have to unset the material first. Keybinding: AltA in dwarfmode/QueryBuilding/Some/Workshop/Job
http://dfhack.readthedocs.io/en/stable/docs/_auto/gui.html
2017-05-22T19:07:31
CC-MAIN-2017-22
1495463607046.17
[array(['../../_images/advfort.png', '../../_images/advfort.png'], dtype=object) array(['../../_images/companion-order.png', '../../_images/companion-order.png'], dtype=object) array(['../../_images/family-affairs.png', '../../_images/family-affairs.png'], dtype=object) array(['../../_images/gm-editor.png', '../../_images/gm-editor.png'], dtype=object) array(['../../_images/guide-path.png', '../../_images/guide-path.png'], dtype=object) array(['../../_images/liquids.png', '../../_images/liquids.png'], dtype=object) array(['../../_images/mechanisms.png', '../../_images/mechanisms.png'], dtype=object) array(['../../_images/mod-manager.png', '../../_images/mod-manager.png'], dtype=object) array(['../../_images/power-meter.png', '../../_images/power-meter.png'], dtype=object) array(['../../_images/room-list.png', '../../_images/room-list.png'], dtype=object) array(['../../_images/siege-engine.png', '../../_images/siege-engine.png'], dtype=object) array(['../../_images/workflow.png', '../../_images/workflow.png'], dtype=object) array(['../../_images/workflow-new1.png', '../../_images/workflow-new1.png'], dtype=object) array(['../../_images/workflow-new2.png', '../../_images/workflow-new2.png'], dtype=object) array(['../../_images/workflow-status.png', '../../_images/workflow-status.png'], dtype=object) array(['../../_images/workshop-job.png', '../../_images/workshop-job.png'], dtype=object) array(['../../_images/workshop-job-item.png', '../../_images/workshop-job-item.png'], dtype=object) array(['../../_images/workshop-job-material.png', '../../_images/workshop-job-material.png'], dtype=object)]
dfhack.readthedocs.io
This example shows how to generate the "golden ratio" to 28 digits of precision using the decimal datatype and a Fibonacci Sequence generator function. // shows use of infinite fibonacci sequence with decimal datatype to generate golden ratio to 28 digits def fib(): a as decimal = 0; b as decimal = 1 while true: yield b a, b = b, a + b i = 0 diff as decimal = 10.0**-28 lastf as decimal = 1 ratio as decimal = 0 for f in fib(): // print "$i $f $(f/lastf) $ratio $(f/lastf - ratio)" break if ++i > 2 and System.Math.Abs(f/lastf - ratio) < diff ratio, lastf = f/lastf, f print "error factor less than $diff on $(i)th iteration" print "golden ratio = $ratio" Output: error factor less than 0.0000000000000000000000000001 on 72th iteration golden ratio = 1.6180339887498948482045868344
http://docs.codehaus.org/display/BOO/High-precision+math+with+decimal+datatype
2014-08-20T10:43:54
CC-MAIN-2014-35
1408500804220.17
[]
docs.codehaus.org
Tutorials Feature Guides AJAX Maven Support - Maven Jetty Plugin - Maven Jetty JSP Compilation Plugin Glassfish JBoss EJB3 and JPA Useful Servlets and Filters Integrations - DWR - JIRA - ActiveMQ - Jetspeed2 - Atomikos Transaction Manager - JOTM - Bitronix Transaction Manager - MyFaces - JSF Reference Implementation - Jakarta Slide - Jetty with Spring - Jetty with XBean - see also Embedding Jetty and Maven Jetty Plugin Connectors - Configuring SSL - AJP13 and mod_jk - Running on port 80 as non-root user
http://docs.codehaus.org/pages/viewpage.action?pageId=68408
2014-08-20T10:51:23
CC-MAIN-2014-35
1408500804220.17
[]
docs.codehaus.org
Developing C++ Programs on Windows

Geode uses the Visual Studio 2010 Service Pack 1 compiler for C++ programs on Windows, which invokes Microsoft® cl.exe from the command line at compile time. When you install Geode on Windows, the installer performs these tasks:
- Sets the GFCPP environment variable to product-dir, where product-dir is the path to the native client product directory.
- Adds the %GFCPP%\bin executable directory to the Windows PATH environment variable.

Step 2. Choose 32-bit or 64-bit Command-line Prompt
For 32-bit: Start > Programs > Microsoft Visual Studio > 2010 > Visual Studio Tools > Visual Studio 2010 Command Prompt
For 64-bit: Start > Programs > Microsoft Visual Studio 2010 > Visual Studio Tools > Visual Studio 2010 x64 Win64 Command Prompt
To build using the Microsoft Visual Studio interface, choose Win32 or x86 from the Build menu's Solution Platforms list for a 32-bit build, or x64 for a 64-bit build.

Step 3. Compile C++ Clients and Dynamically Link Them to Native Client Library
The following table lists the compiler and linker switches that must be present on the cl.exe command line. Note: If you want to use the Visual Studio user interface instead of invoking cl.exe from the command line, be sure to supply these parameters. (An illustrative command line is sketched after Step 4.)

Step 4. Verify that You Can Load the Native Client Library
Because Geode does not provide a library that can be linked statically into an application on Windows, you must dynamically link to the native client library. To make the native client library available for loading, verify that the directory product-dir/bin is included in the PATH environment variable, where product-dir is the path to the Geode product directory.
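The original table of required switches does not survive in this extract, so the command below is only an illustrative sketch. The MSVC options shown (/MD, /EHsc, /I, /link, /LIBPATH) are common choices for building a client that links dynamically against a DLL, while the source file name, output name, and import library name are placeholders that should be checked against the product documentation.

REM Illustrative sketch only: file names, the output name, and the import library
REM name (apache-geode.lib) are placeholders, not the product's official switch table.
cl /MD /EHsc /DWIN32 /I"%GFCPP%\include" myclient.cpp ^
   /link /LIBPATH:"%GFCPP%\lib" apache-geode.lib /OUT:myclient.exe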
http://gemfire-native-90.docs.pivotal.io/native/introduction/developing-windows.html
2018-01-16T09:45:12
CC-MAIN-2018-05
1516084886397.2
[]
gemfire-native-90.docs.pivotal.io
You can add vCenter Servers as data source to vRealize Network Insight. About this task Multiple vCenter Servers can be added to vRealize Network Insight to start monitoring data. Procedure - Click Add vCenter. - Select Enable Netflow (IPFIX) on this vCenter to enable IPFIX. - Click Add new source and customize the options. - Click Validate. - Add advanced data collection sources to your vCenter Server system. - (Optional) : Click Submit to add the vCenter Server system. The vCenter Server systems appear on the homepage.
https://docs.vmware.com/en/VMware-vRealize-Network-Insight/3.6/com.vmware.vrni.install.doc/GUID-B9F6B6B4-5426-4752-B852-B307E49E86D1.html
2018-01-16T10:00:33
CC-MAIN-2018-05
1516084886397.2
[]
docs.vmware.com
For cPanel & WHM version 68 What: .dynamiccontent .pl .plx .perl .cgi .php .php4 .php5 .php6 .php3 .shmtl - Blocks SSH and FTP logins to the source server for the transferred accounts. Note /usr/local/cpanel/scripts/xferpointfile.: cd /var du -sh Note: The output of these commands shows the used and available space in each directory, as well as the file sizes of each file in the current directory. -:? Note example.comand the username is user.: cd /home/user/ tar czvf public_html_user.tgz public_html mv public_html_user.tgz /home/cptemp/ mv public_html /home/cptemp/ Move the site's weblogs. To do this, run the following commands: cd /usr/local/apache/userlogs/ gzip example.com mv example.com.gz /home/cptemp/ After all of the necessary files are in the /home/cptemp/directory, move the account in WHM. This sets up the new account on the new server. NoteFor more information, read our How to Copy an Account with SSH Keys documentation. Navigate to the /home/cptemp/directory and move the files. To do this, run the following command: scp public_html_dom.tgz [email protected]:/home/dom/ Enter the rootpassword for the new server and move the domain.com.gzfile. To do this, run the following command: scp domain.com.gz [email protected]:/usr/local/apache/domlogs/ - Enter the required login information. Go to the new server and unpack the two large files that you just moved. To do this, run the following commands: cd /home/dom/ gzip -d domain.com.gz tar xzvf public_html_dom.tgz cd /usr/local/apache/domlogs/: -: lsattr /etc/group lsattr /etc/shadow If these files do not contain the flag, run the following commands: chattr -i /etc/passwd chattr -i /etc/group.: Note: How do I disable or enable WHM access from the command line? To disable WHM access for all users, block incoming traffic on port 2086 and 2087. Warning: This will lock you out as well.). Note: # EasyApache documentation. How do I delete an account? You can delete an account with WHM's Terminate Accounts interface (WHM >> Home >> Multi Account Functions >> Terminate Accounts). Note? Important: EasyApache 3 does not support Tomcat for new installations. EasyApache 4 does not support Tomcat and we do not plan to provide support in the future. Navigate to WHM's Install Servlets interface (WHM >> Home >> Account Functions >> Install Servlets). Select an account or domain on which you wish to install servlets, and apply the settings. Notes: If you have not installed Tomcat, you cannot view the Install Servlets interface.: /scripts/fixndc: cp -R /home/dave/public_html/photos/ /home/john/public_html/images/: -rwxr--r-- 1 root root 8192 Dec 26 20:18 aquota.user* -rwxr--r-- 1 root root 2097120 Apr 30 04:19 quota.user* -? Note: You will need an IP address for each nameserver... Important: Ruby on Rails does not function on Amazon Linux servers and is not currently available on CentOS 7 servers... Note example.comstands for the customer's domain.: . Important: You can use older versions of these browsers. However, we do not support older versions of the listed browsers. We strongly encourage you to upgrade to the latest version of your preferred browser.). Note). Additional documentation There is no content with the specified labels
https://docs.cpanel.net/display/68Docs/WHM+FAQ
2018-01-16T09:09:32
CC-MAIN-2018-05
1516084886397.2
[]
docs.cpanel.net
Ping.PingCompleted Event

Definition

Occurs when an asynchronous operation to send an Internet Control Message Protocol (ICMP) echo message and receive the corresponding ICMP echo reply message completes or is canceled.

public:
 event System::Net::NetworkInformation::PingCompletedEventHandler ^ PingCompleted;
public event System.Net.NetworkInformation.PingCompletedEventHandler PingCompleted;
member this.PingCompleted : System.Net.NetworkInformation.PingCompletedEventHandler
Public Custom Event PingCompleted As PingCompletedEventHandler

Examples

The following code example demonstrates specifying a callback method for the PingCompleted event. The complete example is available in the Ping class overview.

Ping ^ pingSender = gcnew Ping;
// When the PingCompleted event is raised,
// the PingCompletedCallback method is called.
pingSender->PingCompleted += gcnew PingCompletedEventHandler( PingCompletedCallback );

Ping pingSender = new Ping ();
// When the PingCompleted event is raised,
// the PingCompletedCallback method is called.
pingSender.PingCompleted += new PingCompletedEventHandler (PingCompletedCallback);

Remarks

Applications use the PingCompleted event to get information about the completion status and data collected by a call to one of the SendAsync methods. The PingCompletedEventHandler delegate provides the callback method invoked when SendAsync raises this event.
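For context, here is a hedged C# sketch of how the callback referenced in the example above might look and how the asynchronous send is started; the host name is a placeholder, and the authoritative version is the full example in the Ping class overview.

using System;
using System.Net.NetworkInformation;

static class PingExample
{
    // Sketch only: the callback inspects the PingCompletedEventArgs delivered by SendAsync.
    static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
    {
        if (e.Cancelled || e.Error != null)
        {
            Console.WriteLine("Ping was canceled or failed.");
            return;
        }

        // e.Reply carries the ICMP echo reply details.
        Console.WriteLine("Status: {0}, time: {1} ms", e.Reply.Status, e.Reply.RoundtripTime);
    }

    static void Main()
    {
        Ping pingSender = new Ping();
        pingSender.PingCompleted += new PingCompletedEventHandler(PingCompletedCallback);

        // The host name is a placeholder; SendAsync returns immediately and the
        // callback above runs when the reply (or an error) arrives.
        pingSender.SendAsync("www.contoso.com", null);
        Console.ReadLine(); // keep the process alive long enough for the async reply
    }
}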
https://docs.microsoft.com/en-us/dotnet/api/system.net.networkinformation.ping.pingcompleted?view=netframework-4.7.2
2019-06-16T00:44:20
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
It’s possible to enforce redirection of connections on the particular port of connection manager with force-redirect-to set to Integer with the following general setting option: <connection_manager> { connections { <listening_port> { 'force-redirect-to' = <destination_port> } } } for example, enable additional port 5322 for c2s connection manager and enforce all connections to be redirected to port 5222 (it will utilize hostname retrieved from SeeOtherHost implementation and will be only used when such value is returned): c2s { connections { ports = [ 5222, 5322 ] 5322 { 'force-redirect-to' = 5222 socket = 'plain' type = 'accept' } } }
https://docs.tigase.net/tigase-server/snapshot/Administration_Guide/webhelp/_enforcing_redirection.html
2019-06-16T00:42:57
CC-MAIN-2019-26
1560627997508.21
[]
docs.tigase.net
The Kubernetes-native platform (v2). The Package manager for Kubernetes. The Kubernetes-native deis certs is only useful for custom domains. Default application domains are SSL-enabled already and can be accessed simply by using https, Deis. Unless you've already done so, add the domain specified when generating the CSR to your app with: $ deis domains:add -a foo Adding to foo... done Add your certificate, any intermediate certificates, and private key to the endpoint with the certs:add command. $ deis certs:add example-com server.crt server.key Adding SSL endpoint... done Note The name given to the certificate can only contain a-z (lowercase), 0-9 and hyphens The Deis Deis. Importantly, your site’s certificate must be the first one: $ cat server.crt server.ca > server.bundle After that, you can add them to Deis with the certs:add command: $ deis certs:add example-com server.bundle server.key Adding SSL endpoint... done Certificates are not automagically connected up to domains, instead you will have to attach a certificate to a domain $ deis certs:attach example-com example.com Each certificate can be connected to many domains. There is no need to upload duplicates. To remove an association $ deis certs:detach example-com example.com You can verify the details of your domain's SSL configuration with deis certs. $ deis certs Name | Common Name | SubjectAltName | Expires | Fingerprint | Domains | Updated | Created +-------------+-------------------+-------------------+-------------------------+-----------------+--------------+-------------+-------------+ example-com | example.com | blog.example.com | 31 Dec 2017 (in 1 year) | 8F:8E[...]CD:EB | example.com | 30 Jan 2016 | 29 Jan 2016 or by looking at at each certificates detailed information $ deis=Deis/OU=Engineering/CN=example.com/[email protected] Subject: /C=US/ST=CA/L=San Francisco/O=Deis/OU=Engineering/CN=example.com/[email protected] $ deis tls:enable -a foo Enabling https-only requests for foo... done Users hitting the HTTP endpoint for the application will now receive a 301 redirect to the HTTPS endpoint. To disable enforced TLS, run $ deis tls:disable -a foo Disabling https-only requests for foo... done You can remove a certificate using the certs:remove command: $ deis: $ deis certs:detach example-com-2017 example.com $ deis Deis and re-run the certs:add command.
https://docs.teamhephy.com/applications/ssl-certificates/
2019-06-16T01:50:07
CC-MAIN-2019-26
1560627997508.21
[]
docs.teamhephy.com
CubicWeb - The Semantic Web is a construction game!¶ CubicWeb is a semantic web application framework, licensed under the LGPL, that empowers developers to efficiently build web applications by reusing components (called cubes) and following the well known object-oriented design principles. Main. QuickStart¶ The impatient developer will move right away to Installation of a CubicWeb environment then to Set-up of a CubicWeb environment. Narrative Documentation¶ A.k.a. “The Book” - Repository development - 1. Cubes - 2. The Registry, selectors and application objects - 3. Data model - 4. Data as objects - 5. Core APIs - 6. Repository customization - 7. Tests - 8. Migration - 9. Profiling and performance - 10. Full Text Indexing in CubicWeb - 11. Dataimport - Web side development - Publisher - Controllers - The Request class (cubicweb.web.request) - RQL search bar - The View system - Configuring the user interface - Ajax - Javascript - Conventions - Server-side Javascript API - Javascript events - Important javascript AJAX APIS - A simple example with asyncRemoteExec - Anatomy of a reloadComponent call - A simple reloadComponent example - Anatomy of a loadxhtml call - A simple example with loadxhtml - A more real-life example - Javascript library: overview - API - Testing javascript - CSS Stylesheet - Edition control - The facets system - Internationalization - The property mecanism - HTTP cache management - Locate resources - Static files handling - Pyramid - Narrative Documentation - Api Documentation - Administration - 1. Installation of a CubicWeb environment - 2. Installing a development environement on Windows - 3. Set-up of a CubicWeb environment - 4. cubicweb-ctltool - 5. Creation of your first instance - 6. Configure an instance - 7. User interface for web site configuration - 8. Multiple sources of data - 9. LDAP integration - 10. Migrating cubicweb instances - benefits from a distributed architecture - 11. Backups (mostly with postgresql) - 12. RQL logs - Additional services - Appendixes Changes¶ Reference documentation¶ Developpers¶ Indexes¶ - the Index, - the Module Index, Social¶
https://cubicweb.readthedocs.io/en/latest/
2019-06-16T01:49:11
CC-MAIN-2019-26
1560627997508.21
[]
cubicweb.readthedocs.io
Microsoft Teams apps permissions and considerations Microsoft Teams apps are a way to aggregate one or more capabilities into an app package that can be installed, upgraded, and uninstalled. The capabilities include: - Bots - Messaging extensions - Tabs - Connectors Apps are consented to by users and managed by IT from a policy perspective. However, for the most part, an app's permissions and risk profile are defined by the permissions and risk profiles of the capabilities it contains. Therefore, this article focuses on permissions and considerations at the capability level. The permissions listed below in capital letters, for example RECEIVE_MESSAGE and REPLYTO_MESSAGE, don't appear anywhere in the Microsoft Teams developer documentation or the permissions for Microsoft Graph. They're simply a descriptive shorthand for the purpose of this article. Global app permissions and considerations Bots and messaging extensions Note - If a bot has its own sign-in, there's a second—different—consent experience the first time the user signs in. - Currently, the Azure AD permissions associated with any of the capabilities inside a Teams app (bot, tab, connector, or messaging extension) are completely separate from the Teams permissions listed here. Tabs A tab is a website running inside Teams. Connectors A connector posts messages to a channel when events in an external system occur. Note It's not currently possible to know which connectors support actionable messages (REPLYTO_CONNECTOR_MESSAGE permission). Outgoing webhooks Outgoing webhooks are created on the fly by team owners or team members if sideloading is enabled for a tenant. They aren't capabilities of Teams apps; this information is included for completeness. Feedback Send feedback about:
https://docs.microsoft.com/en-us/MicrosoftTeams/app-permissions
2019-06-16T00:53:40
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
ListView.BorderStyle Property

Definition

Overrides the BorderStyle property. Setting this property is not supported by the ListView control.

public:
 virtual property System::Web::UI::WebControls::BorderStyle BorderStyle { System::Web::UI::WebControls::BorderStyle get(); void set(System::Web::UI::WebControls::BorderStyle value); };
[System.ComponentModel.Browsable(false)]
public override System.Web.UI.WebControls.BorderStyle BorderStyle { get; set; }
member this.BorderStyle : System.Web.UI.WebControls.BorderStyle with get, set
Public Overrides Property BorderStyle As BorderStyle

Property Value

NotSet, which indicates that the property is not set.

Exceptions

An attempt was made to set the BorderStyle property.

Remarks

Style properties are not supported by the ListView control. If you try to set the BorderStyle property, an exception is thrown.
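A short, hedged C# illustration of the behavior described above. The control ID (ProductsList) is hypothetical, and the exception is caught as a general Exception because this extract does not name the exact exception type that the setter throws.

protected void Page_Load(object sender, EventArgs e)
{
    // Reading the property is harmless and simply returns the default value, NotSet.
    BorderStyle current = ProductsList.BorderStyle;

    try
    {
        // Style properties are not supported by ListView, so assigning a value throws.
        ProductsList.BorderStyle = BorderStyle.Solid;
    }
    catch (Exception ex)
    {
        // Expected: the control rejects the assignment. Style the layout templates
        // with CSS instead of the control's style properties.
        Trace.Warn("ListView", "BorderStyle cannot be set.", ex);
    }
}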
https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.webcontrols.listview.borderstyle?view=netframework-4.7.2
2019-06-16T01:43:31
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
Contents Now Platform Custom Business Applications Previous Topic Next Topic Getting started with the Automated Test Framework Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Getting started with the Automated Test Framework If you are new to the Automated Test Framework, read this overview to learn what the framework can do. Next, follow the tutorial to create and run a test that uses the most basic of ATF features. After you feel comfortable with the basics, explore more advanced features provided by the ATF. ATF features provide flexibility in how you test your instance. Test step configuration categories Category Description Service Catalog in Service Portal Perform end-to-end testing for a catalog item in the Service Portal. Open a record producer, catalog item, or order guide. Set variable values and catalog item quantity. Validate variable values, states, price, and items included in an order guide. Navigate in an order guide. Open and toggle catalog items in an order guide. Add an item or an order guide to a shopping cart. Order a catalog item or an order guide. Submit a record producer. Application Navigator Create tests to check navigation features. Verify that application menus are listed in the left navigation bar. Verify that application modules are listed in the left navigation bar. Navigate to a module as if a user clicked the module in the left navigation bar. Custom UI Create simple tests that mimic user actions with no scripting. Set component values. Assert that specified text is or is not on a page. Validate component values. Click components. Validate the states of components (read-only or not read-only). Form Create tests of forms. Open a new form or an existing record. Set field values. Validate field values or field states (such as mandatory, not mandatory, read only, not read only, visible, and not visible). Validate whether a UI action is visible. Click a button on a modal page. Click a UI action. Submit a form. Service Catalog Perform end-to-end testing for a catalog item. Open a catalog item or a record producer. Search for a catalog item. Set variable values and catalog item quantity. Validate variable values, states, and price. Add an item to a shopping cart. Order a catalog item. Submit a record producer. Forms in Service Portal Create tests of forms in the Service Portal. Open a form. Set field values. Validate field values or field states (such as mandatory, not mandatory, read only, not read only, visible, and not visible). Validate whether a UI action is visible. Click a UI action. Submit a form. REST Create and send an Inbound REST request and verify the response. Test any REST endpoint on the instance. Use a REST request to create records, as well as retrieve, update, or delete records created in a previous test step or that already existed on the instance. Verify the response status code, response headers, response time, and response payload. Server Perform more complex operations, including the following: Perform unit tests using JavaScript, including tests using the Jasmine test framework. Test business rules, script includes, and other scripts. Create tests that operate on data that you define. Output variables Many test steps return output variables whose values you can use as inputs to a later step. 
For example, you can use output variables to accomplish the following tasks: Perform a server-side assert on a record that you previously inserted. Create a record as one user, and then reopen its form as a different user. Custom test step configurations In addition to the steps built into the Automated Test Framework, you can create custom test step configurations. These custom steps can take input variables and return output variables that you define. Note: You can only define custom test steps that run on the server. The Automated Test Framework does not support creating custom step configurations that run on the browser. Data preservation The Automated Test Framework automatically tracks and deletes any data created by running tests, and automatically rolls back changes after testing. Test suites Test suites enable you to execute a batch of tests in a specified order. In addition, test suites can be hierarchical, with suites nested within other suites. You can associate test suites with schedules that determine when the system runs the test suites. Build and run your first automated test: Follow these step-by-step instructions to create and run your first automated test. This test creates a new user record. Next steps with the Automated Test Framework: After you feel comfortable creating and running simple tests, explore the more advanced features of the Automated Test Framework. Domain separation in Automated Test Framework: This is an overview of domain separation and the Automated Test Framework. Domain separation enables you to separate data, processes, and administrative tasks into logical groupings called domains. You can then control several aspects of this separation, including which users can see and access data.
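The Server test step category listed earlier mentions Jasmine-based unit tests. As a rough, hedged sketch of what such a server-side test could look like, the snippet below uses plain Jasmine syntax; the MathUtils script include and its add method are invented for illustration and are not part of the platform, and the exact server-side step template should be taken from the product itself.

// Hypothetical Jasmine-style unit test for a hypothetical script include.
describe('MathUtils', function () {
    it('adds two positive numbers', function () {
        var util = new MathUtils();
        expect(util.add(2, 3)).toBe(5);
    });
});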
https://docs.servicenow.com/bundle/madrid-application-development/page/administer/auto-test-framework/concept/atf-intro.html
2019-06-16T01:19:56
CC-MAIN-2019-26
1560627997508.21
[]
docs.servicenow.com
CCPSearch Table The CCPSearch table contains the list of file signatures used for the Compliance Checking Program (CCP). At least one of these files needs to be present on a user's computer for the user to be in compliance with the program. The CCPSearch table has the following column. Column Signature_ The Signature_ represents a unique file signature and is also the external key into the Signature, RegLocator, IniLocator, CompLocator, and DrLocator tables. Remarks This table is referred to when the CCPSearch action or the RMCCPSearch action is executed. Validation
https://docs.microsoft.com/en-us/windows/desktop/Msi/ccpsearch-table
2019-06-16T00:46:24
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
All content with label as5+buddy_replication+gridfs+hotrod+infinispan+jbossas+jsr-107+notification+setup. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, partitioning, query, lock_striping, nexus, guide, schema, listener, cache, amazon, s3, grid, memcached, test, jcache, api, xsd, maven, documentation, write_behind, ec, cachestore, data_grid, cacheloader, resteasy, cluster, br, development, websocket, transaction, interactive, xaresource, build, searchable, demo, installation, cache_server, scala, client, migration, non-blocking, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, murmurhash, standalone, webdav, repeatable_read, snapshot, docs, consistent_hash, batching, store, jta, faq, 2lcache, jgroups, lucene, locking, rest, hot_rod more » ( - as5, - buddy_replication, - gridfs, - hotrod, - infinispan, - jbossas, - jsr-107, - notification, - setup )
https://docs.jboss.org/author/label/as5+buddy_replication+gridfs+hotrod+infinispan+jbossas+jsr-107+notification+setup
2019-06-16T01:44:43
CC-MAIN-2019-26
1560627997508.21
[]
docs.jboss.org
All content with label as5+cache+gridfs+infinispan+jboss_cache+listener+podcast+webdav. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, partitioning,, transaction, async, interactive, xaresource, build, searchable, demo, installation, scala, client, migration, non-blocking, jpa, filesystem, tx, user_guide, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, repeatable_read, hotrod, snapshot, docs, consistent_hash, store, whitepaper, jta, faq, 2lcache, spring, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - as5, - cache, - gridfs, - infinispan, - jboss_cache, - listener, - podcast, - webdav )
https://docs.jboss.org/author/label/as5+cache+gridfs+infinispan+jboss_cache+listener+podcast+webdav
2019-06-16T01:25:18
CC-MAIN-2019-26
1560627997508.21
[]
docs.jboss.org
All content with label client+demo+distribution+gridfs+hinting+infinispan+query+remoting., wcm, write_behind, ec2, 缓存, s, hibernate, aws, getting, custom_interceptor, clustering, setup, eviction, out_of_memory, concurrency, jboss_cache, examples, import, index, events, batch, hash_function, configuration, buddy_replication, loader, colocation, write_through, cloud, mvcc, notification, tutorial, murmurhash2, jbosscache3x, started, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, br, development, transaction, async, interactive, xaresource, build, gatein, searchable,, - hinting, - infinispan, - query, - remoting )
https://docs.jboss.org/author/label/client+demo+distribution+gridfs+hinting+infinispan+query+remoting
2019-06-16T02:03:56
CC-MAIN-2019-26
1560627997508.21
[]
docs.jboss.org
All content with label gridfs+infinispan+installation+jcache+jsr-107+locking+repeatable_read+rest+searchable. Related Labels: json, expiration, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, rest_security, archetype, lock_striping, nexus, guide, schema, listener, cache, s3, amazon, grid, test, api, xsd, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, s, hibernate, jwt, getting, aws, interface, custom_interceptor, setup, clustering, eviction, out_of_memory, concurrency, examples, jboss_cache, import, index, events, hash_function, configuration, batch, buddy_replication, loader, write_through, cloud, mvcc, tutorial, notification, read_committed, xml, distribution, jose, meeting, started, cachestore, data_grid, resteasy, hibernate_search, cluster, development, websocket, transaction, async, interactive, xaresource, build, gatein, demo, ispn, client, non-blocking, migration, jpa, filesystem, tx, json_encryption, eventing, client_server, testng, infinispan_user_guide, standalone, webdav, hotrod, snapshot, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, jgroups, lucene, json_signature, hot_rod more » ( - gridfs, - infinispan, - installation, - jcache, - jsr-107, - locking, - repeatable_read, - rest, - searchable )
https://docs.jboss.org/author/label/gridfs+infinispan+installation+jcache+jsr-107+locking+repeatable_read+rest+searchable
2019-06-16T02:17:40
CC-MAIN-2019-26
1560627997508.21
[]
docs.jboss.org
Create, configure, and manage elastic jobs In this article, you will learn how to create, configure, and manage elastic jobs. If you have not used Elastic jobs, learn more about the job automation concepts in Azure SQL Database. Create and configure the agent Create or identify an empty S0 or higher SQL database. This database will be used as the Job database during Elastic Job agent creation. Create an Elastic Job agent in the portal, or with PowerShell. Create, run, and manage jobs Create a credential for job execution in the Job database using PowerShell, or T-SQL. Define the target group (the databases you want to run the job against) using PowerShell, or T-SQL. Create a job agent credential in each database the job will run (add the user (or role) to each database in the group). For an example, see the PowerShell tutorial. Create a job using PowerShell, or T-SQL. Add job steps using PowerShell or T-SQL. Run a job using PowerShell, or T-SQL. Monitor job execution status using the portal, PowerShell, or T-SQL. Credentials for running jobs Jobs use database scoped credentials to connect to the databases specified by the target group upon execution. If a target group contains servers or pools, these database scoped credentials are used to connect to the master database to enumerate the available databases. Setting up the proper credentials to run a job can be a little confusing, so keep the following points in mind: - The database scoped credentials must be created in the Job database. - All target databases must have a login with sufficient permissions for the job to complete successfully ( jobuserin the diagram below). - Credentials can be reused across jobs, and the credential passwords are encrypted and secured from users who have read-only access to job objects. The following image is designed to assist in understanding and setting up the proper job credentials. Remember to create the user in every database (all target user dbs) the job needs to run. Security best practices A few best practice considerations for working with Elastic Jobs: - Limit usage of the APIs to trusted individuals. - Credentials should have the least privileges necessary to perform the job step. For more information, see Authorization and Permissions SQL Server. - When using a server and/or pool target group member, it is highly suggested to create a separate credential with rights on the master database to view/list databases that is used to expand the database lists of the server(s) and/or pool(s) prior to the job execution. Agent performance, capacity, and limitations Elastic Jobs use minimal compute resources while waiting for long-running jobs to complete. Depending on the size of the target group of databases and the desired execution time for a job (number of concurrent workers), the agent requires different amounts of compute and performance of the Job database (the more targets and the higher number of jobs, the higher the amount of compute required). Currently, the preview is limited to 100 concurrent jobs. Prevent jobs from reducing target database performance To ensure resources aren't overburdened when running jobs against databases in a SQL elastic pool, jobs can be configured to limit the number of databases a job can run against at the same time. Set the number of concurrent databases a job runs on by setting the sp_add_jobstep stored procedure's @max_parallelism parameter in T-SQL, or Add-AzSqlElasticJobStep -MaxParallelism in PowerShell. 
Best practices for creating jobs

Idempotent scripts

A job's T-SQL scripts must be idempotent. Idempotent means that if the script succeeds, and it is run again, the same result occurs. A script may fail due to transient network issues. In that case, the job will automatically retry running the script a preset number of times before desisting. An idempotent script has the same result even if it's been successfully run twice (or more). A simple tactic is to test for the existence of an object before creating it:

IF NOT EXISTS (SELECT * FROM sys.objects WHERE name = 'some_object')
BEGIN
    -- Create the object
END
-- If it exists, drop the object before recreating it.

Similarly, a script must be able to execute successfully by logically testing for and countering any conditions it finds.

Next steps
https://docs.microsoft.com/en-us/azure/sql-database/elastic-jobs-overview
2019-06-16T01:25:44
CC-MAIN-2019-26
1560627997508.21
[array(['media/elastic-jobs-overview/job-credentials.png', 'Elastic Jobs credentials'], dtype=object) ]
docs.microsoft.com
IDTExtensibility2.OnAddInsUpdate Method Occurs whenever an add-in is loaded or unloaded from the Visual Studio integrated development environment (IDE). Namespace: Extensibility Assembly: Extensibility (in Extensibility.dll) Syntax 'Declaration Sub OnAddInsUpdate ( _ ByRef custom As Array _ ) void OnAddInsUpdate( ref Array custom ) void OnAddInsUpdate( [InAttribute] Array^% custom ) abstract OnAddInsUpdate : custom:Array byref -> unit function OnAddInsUpdate( custom : Array ) Parameters - custom Type: System.Array% An empty array that you can use to pass host-specific data for use in the add-in. Remarks. Examples Public Sub OnAddInsUpdate(ByRef custom As Array) Try Dim addIn As AddIn = applicationObject.AddIns. _ Item("MyAddin1.Connect") If addInInstance.Connected = True Then System.Windows.Forms.MessageBox.Show("This add-in is _ connected.") Else System.Windows.Forms.MessageBox.Show("This add-in is not _ connected.") End If Catch ex As Runtime.Interop.COMException System.Windows.Forms.MessageBox.Show("Not a registered add- _ in.") End Try End Sub."); } } .NET Framework Security - Full trust for the immediate caller. This member cannot be used by partially trusted code. For more information, see Using Libraries from Partially Trusted Code. See Also Reference IDTExtensibility2 Interface
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/ms166253%28v%3Dvs.100%29
2019-06-16T01:32:43
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
Configuration in non-scoped HR

An option in Human Resources Configuration provides a list of HR profile fields that can be enabled for edit. Understand the difference between how the personal and the employment information fields are updated in the HR profile based on this configuration. Typically, organizations allow employees to update certain personal information, but not sensitive and employment information. HR agents or the manager of the employee changes sensitive information. For example, employees can change their home address and personal email address, but the manager must update the position when the employee is promoted. The HR profile fields that users or managers can edit without HR approval are controlled by this configuration option. The following lists indicate which of the configurable fields contain personal information and which contain sensitive and employment information.

Personal information fields: Home address, Home city, Home country, Home phone, Home state/province, Home zip/postal, Middle name, Personal email, Personal mobile phone, Prefix, Work email, Work mobile, Work phone.

Sensitive and employment information fields: Date of birth, Department, Employee number, Employment end date, Employment start date, Employment status, Employment type, Ethnicity, First name, Gender, Last name, Location, Location type, Manager, Marital status, Nationality, Notice period, Place of birth, Position, Probation end date, Probation period, Time type.

In the configuration option field list, all personal information fields are enabled for edit by default. All employees can open their HR profile from the HRSM Portal and update these fields. If you do not want to allow employees to update one or more of these fields, disable editing by clearing the check box in the configurable list. For any HR profile fields that are not editable, employees or their managers submit an employee information change request. An HR case is created and the HR Employee Change Workflow is started. The workflow requires that the change request is approved. When it is approved, the fields are updated and the HR case is closed.
https://docs.servicenow.com/bundle/jakarta-hr-service-delivery/page/product/human-resources-global/task/t_ConfigureHRServiceManagement-global.html
2019-06-16T01:12:05
CC-MAIN-2019-26
1560627997508.21
[]
docs.servicenow.com
Welcome to Octavia! Octavia is an open source, operator-scale load balancing solution designed to work with OpenStack. Octavia builds on several other OpenStack projects: Nova - For managing amphora lifecycle and spinning up compute resources on demand. Neutron - For network connectivity between amphorae, tenant environments, and external networks. Barbican - For managing TLS certificates and credentials, when TLS session termination is configured on the amphorae. Keystone - For authentication against the Octavia API, and for Octavia to authenticate with other OpenStack projects. Glance - For storing the amphora virtual machine image. Oslo - For communication between Octavia controller components, making Octavia work within the standard OpenStack framework and review system, and project code structure. Taskflow - Is technically part of Oslo; however, Octavia makes extensive use of this job flow system when orchestrating back-end service configuration. Octavia supports third-party vendor drivers just like Neutron LBaaS, and fully replaces Neutron LBaaS as the load balancing solution for OpenStack. Octavia 4.0 consists of the following major components: amphorae - Amphorae are the individual virtual machines, containers, or bare metal servers that accomplish the delivery of load balancing services to tenant application environments. In Octavia version 0.8, the reference implementation of the amphorae image is an Ubuntu virtual machine running HAProxy. controller - The Controller is the “brains” of Octavia. It consists of five sub-components, which are individual daemons. They can be run on separate back-end infrastructure if desired: API Controller - As the name implies, this subcomponent runs Octavia's API. It takes API requests, performs simple sanitizing on them, and ships them off to the controller worker over the Oslo messaging bus. Controller Worker - This subcomponent takes sanitized API commands from the API controller and performs the actions necessary to fulfill the API request. Health Manager - This subcomponent monitors individual amphorae to ensure they are up and running, and otherwise healthy. It also handles failover events if amphorae fail unexpectedly. Housekeeping Manager - This subcomponent cleans up stale (deleted) database records, manages the spares pool, and manages amphora certificate rotation. Driver Agent - The driver agent receives status and statistics updates from provider drivers. network - Octavia cannot accomplish what it does without manipulating the network environment. Amphorae are spun up with a network interface on the “load balancer network,” and they may also plug directly into tenant networks to reach back-end pool members, depending on how any given load balancing service is deployed by the tenant. For a more complete description of Octavia's components, please see the Octavia v0.5 Component Design document within this documentation repository. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/octavia/latest/reference/introduction.html
2019-06-16T00:32:11
CC-MAIN-2019-26
1560627997508.21
[]
docs.openstack.org
ASP.NET Core Web API help pages using Swagger / Open API By Christoph Nienaber and Rico Suter When consuming a Web API, understanding its various methods can be challenging for a developer. Swagger, also known as Open API, solves the problem of generating useful documentation and help pages for Web APIs. It provides benefits such as interactive documentation, client SDK generation, and API discoverability. In this article, the Swashbuckle.AspNetCore and NSwag .NET Swagger implementations are showcased: Swashbuckle.AspNetCore is an open source project for generating Swagger documents for ASP.NET Core Web APIs. NSwag is another open source project for integrating Swagger UI or ReDoc into ASP.NET Core Web APIs. It offers approaches to generate C# and TypeScript client code for your API. What is Swagger / Open API? Swagger is a language-agnostic specification for describing REST APIs. The Swagger project was donated to the OpenAPI Initiative, where it's now referred to as Open API. Both names are used interchangeably; however, Open API.json) The core to the Swagger flow is the Swagger specification—by default, a document named swagger.json. It's generated by the Swagger tool chain (or third-party implementations of it) based on your service. It describes the capabilities of your API and how to access it with HTTP. It drives the Swagger UI and is used by the tool chain to enable discovery and client code generation. Here's an example of a Swagger specification, reduced for brevity: { "swagger": "2.0", "info": { "version": "v1", "title": "API V1" }, "basePath": "/", "paths": { "/api/Todo": { "get": { "tags": [ "Todo" ], "operationId": "ApiTodoGet", "consumes": [], "produces": [ "text/plain", "application/json", "text/json" ], "responses": { "200": { "description": "Success", "schema": { "type": "array", "items": { "$ref": "#/definitions/TodoItem" } } } } }, "post": { ... } }, "/api/Todo/{id}": { "get": { ... }, "put": { ... }, "delete": { ... }, "definitions": { "TodoItem": { "type": "object", "properties": { "id": { "format": "int64", "type": "integer" }, "name": { "type": "string" }, "isComplete": { "default": false, "type": "boolean" } } } }, "securityDefinitions": {} } Swagger UI Swagger UI offers a web-based UI that provides information about the service, using the generated Swagger specification. Both Swashbuckle and NSwag include an embedded version of Swagger UI, so that it can be hosted in your ASP.NET Core app using a middleware registration call. The web UI looks like this: Each public action method in your controllers can be tested from the UI. Click a method name to expand the section. Add any necessary parameters, and click Try it out!. Note The Swagger UI version used for the screenshots is version 2. For a version 3 example, see Petstore example.
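Neither setup path is shown in this excerpt, so here is a hedged sketch of the Swashbuckle.AspNetCore wiring for an ASP.NET Core 2.x-era Startup class. The document name, title, and endpoint route are assumptions chosen to mirror the sample swagger.json above, and the exact Info type and namespace depend on the Swashbuckle.AspNetCore version installed.

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Swashbuckle.AspNetCore.Swagger;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // Register the Swagger generator and define one Swagger document named "v1".
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new Info { Title = "API V1", Version = "v1" });
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        // Serve the generated swagger.json and the interactive Swagger UI.
        app.UseSwagger();
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "API V1");
        });

        app.UseMvc();
    }
}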
https://docs.microsoft.com/en-us/aspnet/core/tutorials/web-api-help-pages-using-swagger?tabs=visual-studio
2018-03-17T06:40:51
CC-MAIN-2018-13
1521257644701.7
[array(['web-api-help-pages-using-swagger/_static/swagger-ui.png', 'Swagger UI'], dtype=object) array(['web-api-help-pages-using-swagger/_static/get-try-it-out.png', 'Example Swagger GET test'], dtype=object) ]
docs.microsoft.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. A complex type that describes how CloudFront processes requests. You must create at least as many cache behaviors (including the default cache behavior) as you have origins if you want CloudFront to distribute objects from all of the origins. Each cache behavior specifies the one origin from which you want CloudFront to get objects. If you have two origins and only the default cache behavior, the default cache behavior will cause CloudFront to get objects from one of the origins, but the other origin is never used. For the current limit on the number of cache behaviors that you can add to a distribution, see Amazon CloudFront Limits in the AWS General Reference. If you don't want to specify any cache behaviors, include only an empty CacheBehaviors element. Don't include an empty CacheBehavior element, or CloudFront returns a MalformedXML error. To delete all cache behaviors in an existing distribution, update the distribution configuration and include only an empty CacheBehaviors element. To add, change, or remove one or more cache behaviors, update the distribution configuration and specify all of the cache behaviors that you want to include in the updated distribution. For more information about cache behaviors, see Cache Behaviors in the Amazon CloudFront Developer Guide. Namespace: Amazon.CloudFront.Model Assembly: AWSSDK.CloudFront.dll Version: 3.x.y.z The CacheBehavior
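As a hedged C# sketch of how this type is typically populated with the AWS SDK for .NET, the snippet below builds a CacheBehaviors wrapper containing a single CacheBehavior. The path pattern, origin ID, and the enclosing distributionConfig variable are hypothetical, only a few of the available properties are shown, and the property names should be checked against the current SDK reference.

using System.Collections.Generic;
using Amazon.CloudFront.Model;

// Hypothetical example: one extra cache behavior for objects under images/*.
var imagesBehavior = new CacheBehavior
{
    PathPattern = "images/*",                 // which requests this behavior applies to
    TargetOriginId = "my-s3-origin",          // must match an origin Id in the distribution
    ViewerProtocolPolicy = ViewerProtocolPolicy.RedirectToHttps
};

var cacheBehaviors = new CacheBehaviors
{
    Quantity = 1,
    Items = new List<CacheBehavior> { imagesBehavior }
};

// distributionConfig is assumed to be an existing DistributionConfig instance.
distributionConfig.CacheBehaviors = cacheBehaviors;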
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CloudFront/TCacheBehavior.html
2018-03-17T06:04:59
CC-MAIN-2018-13
1521257644701.7
[]
docs.aws.amazon.com
Web Application Firewall Setup Contents these files back to your site if there are any problems. Once you have downloaded the files, you can click Continue to complete the setup Solutions to Error Messages and Setup Issues. Still need help? If you cannot complete the setup and the steps above do not help, you can contact us on the support forum if you are a free customer or open a ticket if you are a premium user.
https://docs.wordfence.com/index.php?title=Web_Application_Firewall_Setup&oldid=519
2018-03-17T06:07:05
CC-MAIN-2018-13
1521257644701.7
[]
docs.wordfence.com
JTableSession/exists

From Joomla! Documentation

Find out if a user has one or more active sessions.

Syntax

exists($userid)

Returns boolean True if a session for this user exists

Defined in

libraries/joomla/database/table/session.php

Importing

jimport( 'joomla.database.table.session' );

Source Body

function exists($userid)
{
    $query = 'SELECT COUNT(userid) FROM #__session'
        . ' WHERE userid = '. $this->_db->Quote( $userid );
    $this->_db->setQuery( $query );

    if ( !$result = $this->_db->loadResult() ) {
        $this->setError($this->_db->stderr());
        return false;
    }

    return (boolean) $result;
}

Examples
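The Examples section of the original page is empty, so here is a hedged usage sketch. It assumes a Joomla 1.5 context in which JTable::getInstance() resolves the core session table and in which $userId already holds a valid user id; both assumptions should be checked in your own extension.

<?php
// Hypothetical usage inside a Joomla 1.5 component or plugin.
jimport('joomla.database.table.session');

$userId  = 62;                                // placeholder user id
$session =& JTable::getInstance('session');   // core #__session table object

if ($session->exists($userId)) {
    echo JText::_('This user has at least one active session.');
} else {
    echo JText::_('No active session found for this user.');
}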
https://docs.joomla.org/index.php?title=API15:JTableSession/exists&oldid=98349
2015-07-28T06:28:14
CC-MAIN-2015-32
1438042981576.7
[]
docs.joomla.org
Development Guide Import a file into a BlackBerry application project - In the Package Explorer view, select a BlackBerry® application project. - Right-click the project and click Import. - Expand the General folder. - Select the File System folder. - Click Next. - In the From directory dialog box, browse to the location of the source files and select the files that you want to import. - In the Into folder field, browse to the location where you want to save the files. - In the Options section, select the appropriate options. - Click Finish.
http://docs.blackberry.com/en/developers/deliverables/23671/Import_file_to_a_BB_app_project_655894_11.jsp
2015-07-28T06:02:18
CC-MAIN-2015-32
1438042981576.7
[]
docs.blackberry.com
Bitcask Bitcask is an Erlang application that provides an API for storing and retrieving key/value data using log-structured hash tables that provide very fast access. The design of Bitcask was inspired, in part, by log-structured filesystems and log file merging. Bitcask's Strengths Low latency per item read or written This is due to the write-once, append-only nature of Bitcask database files. High throughput, especially when writing an incoming stream of random items Write operations to Bitcask generally saturate I/O and disk bandwidth, which is a good thing from a performance perspective. This saturation occurs for two reasons: because (1) data that is written to Bitcask doesn't need to be ordered on disk, and (2) the log-structured design of Bitcask allows for minimal disk head movement during writes. Ability to handle datasets larger than RAM without degradation Access to data in Bitcask involves direct lookup from an in-memory hash table. This makes finding data very efficient, even when datasets are very large. Single seek to retrieve any value Bitcask's in-memory hash table of keys points directly to locations on disk where the data lives. Bitcask never uses more than one disk seek to read a value and sometimes even that isn't necessary due to filesystem caching done by the operating system. Predictable lookup and insert performance For the reasons listed above, read operations from Bitcask have fixed, predictable behavior. This is also true of writes to Bitask because write operations require, at most, one seek to the end of the current open file followed by and append to that file. Fast, bounded crash recovery Crash recovery is easy and fast with Bitcask because Bitcask files are append only and write once. The only items that may be lost are partially written records at the tail of the last file that was opened for writes. Recovery operations need to review only the last record or two written and verify CRC data to ensure that the data is consistent. Easy Backup In most systems, backup can be very complicated. Bitcask simplifies this process due to its append-only, write-once disk format. Any utility that archives or copies files in disk-block order will properly back up or copy a Bitcask database. Weaknesses Keys must fit in memory Bitcask keeps all keys in memory at all times, which means that your system must have enough memory to contain your entire keyspace, plus additional space for other operational components and operating- system-resident filesystem buffer space. Installing Bitcask Bitcask is the default storage engine for Riak. You can verify that Bitcask is currently being used as the storage backend with the riak command interface: riak config effective | grep backend If this operation returns anything other than bitcask, read the next section for instructions on switching the backend to Bitcask. Enabling Bitcask You can set Bitcask as the storage engine using each node's configuration files: storage_backend = bitcask {riak_kv, [ {storage_backend, riak_kv_bitcask_backend}, %% Other riak_kv settings... ]}, Configuring Bitcask Bitcask enables you to configure a wide variety of its behaviors, from filesystem sync strategy to merge settings and more. Riak 2.0 enables you to use either the newer configuration system based on a single riak.conf file or the older system, based on an app.config configuration file. Instructions for both systems will be included below. 
Narrative descriptions of the various settings will be tailored to the newer configuration system, whereas instructions for the older system will largely be contained in the code tabs. The default configuration values for Bitcask are as follows: bitcask.data_root = ./data/bitcask bitcask.io_mode = erlang {bitcask, [ {data_root, "/var/lib/riak/bitcask"}, {io_mode, erlang}, %% Other Bitcask-specific settings ]} All of the other available settings listed below can be added to your configuration files. Open Timeout The open timeout setting specifies the maximum time Bitcask will block on startup while attempting to create or open the Bitcask data directory. The default is 4 seconds. In general, you will not need to adjust this setting. If, however, you begin to receive log messages of the form Failed to start bitcask backend: ..., you may want to consider using a longer timeout. Open timeout is specified using the bitcask.sync.open_timeout parameter, and can be set in terms of seconds, minutes, hours, etc. The following example sets the parameter to 10 seconds: bitcask.sync.open_timeout = 10s {bitcask, [ ..., {open_timeout, 10} %% This value must be expressed in seconds ... ]} Sync Strategy Bitcask enables you to configure the durability of writes by specifying when to synchronize data to disk, i.e. by choosing a sync strategy. The default setting ( none) writes data into operating system buffers that will be written to disk when those buffers are flushed by the operating system. If the system fails before those buffers are flushed, e.g. due to power loss, that data is lost. This possibility holds for any database in which values are asynchronously flushed to disk. Thus, using the default setting of none protects against data loss in the event of application failure, i.e. process death, but leaves open a small window in which data could be lost in the event of a complete system failure, e.g. hardware or OS failure. This possibility can be prevented by choosing the o_sync sync strategy, which forces the operating system to flush to stable storage at write time for every write. The effect of flushing each write is better durability, although it should be noted that write throughput will suffer because each write will have to wait for the write to complete. The following sync strategies are available: none— lets the operating system manage syncing writes (default) o_sync— uses the O_SYNCflag, which forces syncs on every write - Time interval — Riak will force Bitcask to sync at specified intervals The following are possible configurations: bitcask.sync.strategy = none bitcask.sync.strategy = o_sync bitcask.sync.interval = 10s {bitcask, [ ..., {sync_strategy, none}, {sync_strategy, o_sync}, {sync_strategy, {seconds, 10}}, %% The time interval must be specified in seconds ... ]} Max File Size The max_file_size setting describes the maximum permitted size for any single data file in the Bitcask directory. If a write causes the current file to exceed this size threshold then that file is closed, and a new file is opened for writes. The default is 2 GB. Increasing max_file_size will cause Bitcask to create fewer, larger files that are merged less frequently, while decreasing it will cause Bitcask to create more numerous, smaller files that are merged more frequently. To give an example, if your ring size is 16, your servers could see as much as 32 GB of data in the bitcask directories before the first merge is triggered, irrespective of your working set size. 
You should plan storage accordingly and be aware that it is possible to see disk data sizes that are larger than the working set. The max_file_size setting can be specified using kilobytes, megabytes, etc. The following example sets the max file size to 1 GB: bitcask.max_file_size = 1GB %% The max_file_size setting must be expressed in bytes, as in the %% example below {bitcask, [ ..., {max_file_size, 16#40000000}, %% 1 GB expressed in bytes ... ]} Hint File CRC Check During startup, Bitcask will read from .hint files in order to build its in-memory representation of the key space, falling back to .data files if necessary. This reduces the amount of data that must be read from the disk during startup, thereby also reducing the time required to start up. You can configure Bitcask to either disregard .hint files that don't contain a CRC value or to use them anyway. If you are using the newer, riak.conf-based configuration system, you can instruct Bitcask to disregard .hint files that do not contain a CRC value by setting the hintfile_checksums setting to strict (the default). To use Bitcask in a backward-compatible mode that allows for .hint files without CRC signatures, change the setting to allow_missing. The following example sets the parameter to strict: bitcask.hintfile_checksums = strict %% In the app.config-based system, substitute "require_hint_crc" for %% "hintfile_checksums", "true" for "strict", and "false" for %% "allow_missing" {bitcask, [ ..., {require_hint_crc, true}, ... ]} I/O Mode The io_mode setting specifies which code module Bitcask should use for file access. The available settings are: erlang(default) — Writes are made via Erlang's built-in file API nif— Writes are made via direct calls to the POSIX C API The following example sets io_mode to erlang: bitcask.io_mode = erlang {bitcask, [ ..., {io_mode, erlang}, ... ]} In general, the nif IO mode provides higher throughput for certain workloads, but it has the potential to negatively impact the Erlang VM, leading to higher worst-case latencies and possible throughput collapse. O_SYNC on Linux Synchronous file I/O via o_sync is supported in Bitcask if io_mode is set to nif and is not supported in the erlang mode. If you enable o_sync by setting io_mode to nif, however, you will still get an incorrect warning along the following lines: [warning] <0.445.0>@riak_kv_bitcask_backend:check_fcntl:429 {sync_strategy,o_sync} not implemented on Linux If you are using the older, app.config-based configuration system, you can disable the check that generates this warning by adding the following to the riak_kv section of your app.config: {riak_kv, [ ..., {o_sync_warning_logged, false}, ... ]} Disk Usage and Merging Settings Riak stores each vnode of the ring as a separate Bitcask directory within the configured Bitcask data directory. Each of these directories will contain multiple files with key/value data, one or more “hint” files that record where the various keys exist within the data files, and a write lock file. The design of Bitcask allows for recovery even when data isn't fully synchronized to disk (partial writes). This is accomplished by maintaining data files that are append-only (i.e. never modified in-place) and are never reopened for modification (i.e. they are only for reading). This data management strategy trades disk space for operational efficiency. There can be a significant storage overhead that is unrelated to your working data set but can be tuned in a way that best fits your use case. 
In short, disk space is used until a threshold is met at which point unused space is reclaimed through a process of merging. The merge process traverses data files and reclaims space by eliminating out-of-date or deleted key/value pairs, writing only the current key/value pairs to a new set of files within the directory. The merge process is affected by all of the settings described in the sections below. In those sections, “dead” refers to keys that no longer contain the most up-to-date values, while “live” refers to keys that do contain the most up-to-date value and have not been deleted. Merge Policy Bitcask enables you to select a merge policy, i.e. when during the day merge operations are allowed to be triggered. The valid options are: always— No restrictions on when merge operations can occur (default) never— Merge will never be attempted window— Merge operations occur during specified hours If you are using the newer, riak.conf-based configuration system, you can select a merge policy using the merge.policy setting. The following example sets the merge policy to never: bitcask.merge.policy = never {bitcask, [ ..., {merge_window, never}, ... ]} If you opt to specify start and end hours for merge operations, you can do so with the merge.window.start and merge.window.end settings in addition to setting the merge policy to window. Each setting is an integer between 0 and 23 for hours on a 24h clock, with 0 meaning midnight and 23 standing for 11 pm. The merge window runs from the first minute of the merge.window.start hour to the last minute of the merge.window.end hour. The following example enables merging between 3:00 am and 5:59 pm: bitcask.merge.policy = window bitcask.merge.window.start = 3 bitcask.merge.window.end = 17 %% In the app.config-based system, you specify the merge window using %% a tuple, as in the following example: {bitcask, [ ..., {merge_window, {3, 17}}, ... ]} merge_window and the Multi backend If you are using the older configuration system and using Bitcask with the Multi backend, please note that if you wish to use a merge window, you must set it in the global bitcask section of your configuration file. merge_window settings in per-backend sections are ignored. If merging has a significant impact on the performance of your cluster, or if your cluster has quiet periods in which little storage activity occurs, you may want to change this setting from the default. A common way to limit the impact of merging is to create separate merge windows for each node in the cluster and ensure that these windows do not overlap. This ensures that at most one node at a time can be affected by merging, leaving the remaining nodes to handle requests. The main drawback of this approach is that merges will occur less frequently, leading to increased disk space usage. Merge Triggers Merge triggers determine the conditions under which merging will be invoked. These conditions fall into two basic categories: Fragmentation — This describes the ratio of dead keys to total keys in a file that will trigger merging. The value of this setting is an integer percentage (0-100). For example, if a data file contains 6 dead keys and 4 live keys, a merge will be triggered by the default setting (60%). Increasing this value will cause merging to occur less often, whereas decreasing the value will cause merging to happen more often. Dead Bytes — This setting describes how much data stored for dead keys in a single file will trigger merging. 
If a file meets or exceeds the trigger value for dead bytes, a merge will be triggered. Increasing the value will cause merging to occur less often, whereas decreasing the value will cause merging to happen more often. The default is 512 MB. When either of these constraints is met by any file in the directory, Bitcask will attempt to merge files. You can set the triggers described above using merge.triggers.fragmentation and merge.triggers.dead_bytes, respectively. The former is expressed as an integer between 0 and 100, whereas the latter can be expressed in terms of kilobytes, megabytes, gigabytes, etc. The following example sets the fragmentation trigger to 55% and the dead bytes trigger to 1 GB: bitcask.merge.triggers.fragmentation = 55 bitcask.merge.triggers.dead_bytes = 1GB %% The equivalent settings in the app.config-based system are %% frag_merge_trigger and dead_bytes_merge_trigger, respectively. The %% latter must be expressed in bytes. {bitcask, [ ..., {frag_merge_trigger, 55}, {dead_bytes_merge_trigger, 1073741824}, ... ]} Merge Thresholds Merge thresholds determine which files will be chosen for inclusion in a merge operation. Fragmentation — This setting describes which ratio of dead keys to total keys in a file will cause it to be included in the merge. The value of this setting is a percentage (0-100). For example, if a data file contains 4 dead keys and 6 live keys, it will be included in the merge at the default ratio (40%). Increasing the value will cause fewer files to be merged, while decreasing the value will cause more files to be merged. Dead Bytes — This setting describes the minimum amount of data occupied by dead keys in a file that will cause it to be included in the merge. Increasing this value will cause fewer files to be merged, while decreasing this value will cause more files to be merged. The default is 128 MB. Small File — This setting describes the minimum size a file must have to be excluded from the merge. Files smaller than the threshold will be included. Increasing the value will cause more files to be merged, while decreasing the value will cause fewer files to be merged. The default is 10 MB. You can set the thresholds described above using the merge.thresholds.fragmentation, merge.thresholds.dead_bytes, and merge.thresholds.small_file settings, respectively. The fragmentation setting is expressed as an integer between 0 and 100, and the dead_bytes and small_file settings can be expressed in terms of kilobytes, megabytes, gigabytes, etc. The following example sets the fragmentation threshold to 45%, the dead bytes threshold to 200 MB, and the small file threshold to 25 MB: bitcask.merge.thresholds.fragmentation = 45 bitcask.merge.thresholds.dead_bytes = 200MB bitcask.merge.thresholds.small_file = 25MB %% In the app.config-based system, the settings corresponding to those %% listed above are frag_threshold, dead_bytes_threshold, and %% small_file_threshold, respectively. The latter two settings must be %% expressed in bytes: {bitcask, [ ..., {frag_threshold, 45}, {dead_bytes_threshold, 209715200}, {small_file_threshold, 26214400}, ... ]} The values for the fragmentation and dead bytes thresholds must be equal to or less than their corresponding trigger values. If they are set higher, Bitcask will trigger merges in cases where no files meet the threshold, which means that Bitcask will never resolve the conditions that triggered merging in the first place. 
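To make that relationship concrete, the following is a minimal riak.conf sketch (not taken from the page above) in which each threshold sits at or below its corresponding trigger; the values shown are simply the defaults quoted in this document, so treat them as illustrative rather than as a tuning recommendation:
bitcask.merge.triggers.fragmentation = 60
bitcask.merge.thresholds.fragmentation = 40
bitcask.merge.triggers.dead_bytes = 512MB
bitcask.merge.thresholds.dead_bytes = 128MB
bitcask.merge.thresholds.small_file = 10MB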
Merge Interval Bitcask periodically runs checks to determine whether merges are necessary. You can determine how often those checks take place using the bitcask.merge_check_interval parameter. The default is 3 minutes. bitcask.merge_check_interval = 3m %% In the app.config-based system, this setting is expressed in %% milliseconds and found in the riak_kv section rather than the bitcask %% section: {riak_kv, [ %% Other configs {bitcask_merge_check_interval, 180000}, %% Other configs ]} If merge check operations happen at the same time on different vnodes on the same node, this can produce spikes in I/O usage and undue latency. Bitcask makes it less likely that merge check operations will occur at the same time on different vnodes by applying a jitter to those operations. A jitter is a random variation applied to merge times that you can alter using the bitcask.merge_check_jitter parameter. This parameter is expressed as a percentage of bitcask.merge_check_interval. The default is 30%. bitcask.merge_check_jitter = 30% %% In the app.config-based system, this setting is expressed as a float %% and found in the riak_kv section rather than the bitcask section: {riak_kv, [ %% Other configs {bitcask_merge_check_jitter, 0.3}, %% Other configs ]} For example, if you set the merge check interval to 4 minutes and the jitter to 25%, merge checks will occur at intervals between 3 and 5 minutes. With the default of 3 minutes and 30%, checks will occur at intervals between roughly 2 and 4 minutes. Log Needs Merge If you are using the older, app.config-based configuration system, you can use the log_needs_merge setting to tune and troubleshoot Bitcask merge settings. When set to true (as in the example below), each time a merge trigger is met, the partition/vnode ID and mergeable files will be logged. {bitcask, [ ..., {log_needs_merge, true}, ... ]} log_needs_merge and the Multi backend If you are using Bitcask with the Multi backend in conjunction with the older, app.config-based configuration system, please note that log_needs_merge must be set in the global bitcask section of your app.config. All log_needs_merge settings in per-backend sections are ignored. Fold Keys Threshold Fold keys thresholds will reuse the keydir (a) if another fold was started less than a specified time interval ago and (b) there were fewer than a specified number of updates. Otherwise, Bitcask will wait until all current fold keys complete and then start. The default time interval is 0, while the default number of updates is unlimited. Both thresholds can be disabled. The conditions described above can be set using the fold.max_age and fold.max_puts parameters, respectively. The former can be expressed in terms of minutes, hours, days, etc., while the latter is expressed as an integer. Each threshold can be disabled by setting the value to unlimited. The following example sets the max_age to ½ second and the max_puts to 1000: bitcask.fold.max_age = 0.5s bitcask.fold.max_puts = 1000 %% In the app.config-based system, the corresponding parameters are %% max_fold_age and max_fold_puts, respectively. The former must be %% expressed in milliseconds, while the latter must be an integer: {bitcask, [ ..., {max_fold_age, 500}, {max_fold_puts, 1000}, ... ]} %% Each of these thresholds can be disabled by setting the value to -1 Automatic Expiration By default, Bitcask keeps all of your data. But if your data has limited time value or if you need to purge data for space reasons, you can configure object expiration, aka expiry. 
This feature is disabled by default. You can enable and configure object expiry using the expiry setting and either specifying a time interval in seconds, minutes, hours, etc., or turning expiry off (off). The following example configures objects to expire after 1 day: bitcask.expiry = 1d %% In the app.config-based system, expiry is expressed in terms of %% seconds: {bitcask, [ ..., {expiry_secs, 86400}, %% Sets the duration to 1 day ... ]} %% Expiry can be turned off by setting this value to -1 Space occupied by stale data may not be reclaimed immediately, but the data will become immediately inaccessible to client requests. Writing to a key will set a new modification timestamp on the value and prevent it from being expired. By default, Bitcask will trigger a merge whenever a data file contains an expired key. This may result in excessive merging under some usage patterns. You can prevent this by configuring an expiry grace time. Bitcask will defer triggering a merge solely for key expiry by the configured amount of time. The default is 0, signifying no grace time. If you are using the newer, riak.conf-based configuration system, you can set an expiry grace time using the expiry.grace_time setting, expressed in terms of minutes, hours, days, etc. The following example sets the grace period to 1 hour: bitcask.expiry.grace_time = 1h %% The equivalent setting in the app.config-based system is %% expiry_grace_time. This must be expressed in seconds: {bitcask, [ ..., {expiry_grace_time, 3600}, %% Sets the grace period to 1 hour ... ]} Automatic expiration and Riak Search If you are using Riak Search in conjunction with Bitcask, please be aware that automatic expiry does not apply to Search indexes. If objects are indexed using Search, those objects can be expired by Bitcask yet still registered in Search indexes, which means that Search queries may return keys that no longer exist. Riak's active anti-entropy (AAE) subsystem will eventually catch this discrepancy, but this depends on AAE being enabled (which is the default) and could take some time. If search queries returning expired keys is a problem for your use case, then we would recommend not using automatic expiration. Tuning Bitcask When tuning your environment, there are a number of things to bear in mind that can assist you in making Bitcask as stable and reliable as possible and in minimizing latency and maximizing throughput. Tips & Tricks Bitcask depends on filesystem caches Some data storage layers implement their own page/block buffer cache in-memory, but Bitcask does not. Instead, it depends on the filesystem's cache. Adjusting the caching characteristics of your filesystem can impact performance. Be aware of file handle limits Review the documentation on the open files limit. Avoid the overhead of updating file metadata (such as last access time) on every read or write operation You can achieve a substantial speed boost by adding the noatime mounting option to Linux's /etc/fstab. This will disable the recording of the last accessed time for all files, which results in fewer disk head seeks. If you need last access times but you'd like some of the benefits of this optimization, you can try relatime. /dev/sda5 /data ext3 noatime 1 1 /dev/sdb1 /data/inno-log ext3 noatime 1 2 Small number of frequently changed keys When keys are changed frequently, fragmentation rapidly increases. To counteract this, you should lower the fragmentation trigger and threshold. 
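For that last tip, a hedged riak.conf sketch is shown below; the specific numbers are illustrative assumptions rather than recommendations, chosen only to sit below the defaults of 60 and 40 quoted earlier in this document:
bitcask.merge.triggers.fragmentation = 40
bitcask.merge.thresholds.fragmentation = 20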
Limited disk space When disk space is limited, limiting the space occupied by dead keys is of paramount importance. Lower the dead bytes threshold and trigger to counteract wasted space. Purging stale entries after a fixed period To automatically purge stale values, set the object expiry value to the desired cutoff time. Keys that are not modified for a period equal to or greater than this time interval will become inaccessible. High number of partitions per node Because each node hosts many partitions, Bitcask will have many open files. To reduce the number of open files, we suggest increasing the max file size so that larger files will be written. You could also decrease the fragmentation and dead-bytes settings and increase the small file threshold so that merging will keep the number of open files small. High daytime traffic, low nighttime traffic In order to cope with a high volume of writes without performance degradation during the day, you might want to limit merging to non-peak periods. Setting the merge window to hours of the day when traffic is low will help. Multi-cluster replication (Riak Enterprise) If you are using Riak Enterprise with the replication feature enabled, your clusters might experience higher rates of fragmentation and dead bytes. Additionally, because the fullsync feature operates across entire partitions, it will be made more efficient by accessing data as sequentially as possible (across fewer files). Lowering both the fragmentation and dead-bytes settings will improve performance. - Why does it seem that Bitcask merging is only triggered when a Riak node is restarted? - If the size of the key index exceeds the amount of memory, how does Bitcask handle it? - Bitcask Capacity Planning Bitcask Implementation Details Riak will create a Bitcask database directory for each vnode in a cluster. In each of those directories, at most one database file will be open for writing at any given time. The file being written to will grow until it exceeds a specified size threshold, at which time it is closed and a new file is created for additional writes. Once a file is closed, whether purposely or due to server exit, it is considered immutable and will never again be opened for writing. The file currently open for writes is only written by appending, which means that sequential writes do not require disk seeking, which can dramatically speed up disk I/O. Note that this effect can be hampered if you have atime enabled on your filesystem, because the disk head will have to move to update both the data blocks and the file and directory metadata blocks. The primary speed advantage of a log-based database stems from its ability to minimize disk head seeks. Deleting a value from Bitcask is a two-step process: first, a tombstone is recorded in the open file for writes, which indicates that a value was marked for deletion at that time, while references to that key are removed from the in-memory “keydir” information; later, during a merge operation, non-active data files are scanned, and only those values without tombstones are merged into the active data file. This effectively removes the obsolete data and reclaims disk space associated with it. This data management strategy may use up a lot of space over time, since Bitcask writes new values without touching the old ones. The compaction process referred to as “merging” solves this problem. The merge process iterates over all non-active (i.e. 
immutable) files in a Bitcask database and produces as output a set of data files containing only the “live” or latest versions of each present key. Bitcask Database Files Below are two directory listings showing what you should expect to find on disk when using Bitcask. In this example, we use a 64-partition ring, which results in 64 separate directories, each holding its own Bitcask database. ls ./data/bitcask The result: 0 1004782375664995756265033322492444576013453623296 1027618338748291114361965898003636498195577569280 ... etc ... 981946412581700398168100746981252653831329677312 Note that when starting up the directories are created for each vnode partition's data. At this point, however, there are not yet any Bitcask-specific files. After performing one PUT (write) into the Riak cluster running Bitcask: curl -XPUT \ -H "Content-Type: text/plain" \ -d "hello" The “N” value for this cluster is 3 (the default), so you'll see that the three vnode partitions responsible for this data now have Bitcask database files: bitcask/ ... etc ... |-- 1118962191081472546749696200048404186924073353216-1316787078245894 | |-- 1316787252.bitcask.data | |-- 1316787252.bitcask.hint | `-- bitcask.write.lock ... etc ... |-- 1141798154164767904846628775559596109106197299200-1316787078249065 | |-- 1316787252.bitcask.data | |-- 1316787252.bitcask.hint | `-- bitcask.write.lock ... etc ... |-- 1164634117248063262943561351070788031288321245184-1316787078254833 | |-- 1316787252.bitcask.data | |-- 1316787252.bitcask.hint | `-- bitcask.write.lock ... etc ... As more data is written to the cluster, more Bitcask files are created until merges are triggered. bitcask/ |-- 0-1317147619996589 | |-- 1317147974.bitcask.data | |-- 1317147974.bitcask.hint | |-- 1317221578.bitcask.data | |-- 1317221578.bitcask.hint | |-- 1317221869.bitcask.data | |-- 1317221869.bitcask.hint | |-- 1317222847.bitcask.data | |-- 1317222847.bitcask.hint | |-- 1317222868.bitcask.data | |-- 1317222868.bitcask.hint | |-- 1317223014.bitcask.data | `-- 1317223014.bitcask.hint |-- 1004782375664995756265033322492444576013453623296-1317147628760580 | |-- 1317147693.bitcask.data | |-- 1317147693.bitcask.hint | |-- 1317222035.bitcask.data | |-- 1317222035.bitcask.hint | |-- 1317222514.bitcask.data | |-- 1317222514.bitcask.hint | |-- 1317223035.bitcask.data | |-- 1317223035.bitcask.hint | |-- 1317223411.bitcask.data | `-- 1317223411.bitcask.hint |-- 1027618338748291114361965898003636498195577569280-1317223690337865 |-- 1050454301831586472458898473514828420377701515264-1317223690151365 ... etc ... This is normal operational behavior for Bitcask.
http://docs.basho.com/riak/latest/ops/advanced/backends/bitcask/
2015-07-28T05:46:22
CC-MAIN-2015-32
1438042981576.7
[]
docs.basho.com
.) - Develop a naming convention for each data center and rack, for example: DC1, DC2 or 100, 200 and RAC1, RAC2 or R101, R102. - Other possible configuration settings are described in the cassandra.yaml configuration file. -: - Packaged installs: $ sudo service dse stop $ sudo rm -rf /var/lib/cassandra/* ## Clears the data from the default directories - Tarball installs:_address. - auto_bootstrap: false (Add this setting only when initializing a fresh cluster with no data.) -: - If necessary, change the dse.yaml file on each node to specify the snitch to be delegated by the DseDelegateSnitch. For more information about snitches, see the About Snitches. - Packaged installs: /etc/dse/dse.yaml - Tarball installs: install_location/resources/dse/conf/dse.yaml Example of specifying. - Packaged installs: /etc/dse/cassandra/cassandra-topology.properties - Tarball installs: install_location. - What's next - Configuring system_auth keyspace replication - Replication in a physical or virtual data center (Applies only to the single-token-per-node architecture.)
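To make the DC/rack naming convention concrete, a hedged sketch of cassandra-topology.properties entries for a property-file-based snitch is shown below; the IP addresses and the DC1/DC2 and RAC1/RAC2 names are placeholders following the convention suggested above, so substitute your own values:
# Cassandra node IP = data center : rack
10.0.0.10=DC1:RAC1
10.0.0.11=DC1:RAC2
10.0.1.10=DC2:RAC1
10.0.1.11=DC2:RAC2
# default assignment for nodes not listed above
default=DC1:RAC1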
http://docs.datastax.com/en/datastax_enterprise/4.0/datastax_enterprise/deploy/deployMultiDC.html
2015-07-28T05:48:34
CC-MAIN-2015-32
1438042981576.7
[array(['../images/deploy_NodetoolStatusMulti.png', None], dtype=object)]
docs.datastax.com
PuppetDB 3.0 » Overview This version of PuppetDB is not included in Puppet Enterprise 3.8. PE 3.8 users should see the PuppetDB 2.3 docs. PuppetDB collects data generated by Puppet. It enables advanced Puppet features like exported resources. The 3.0.x series of PuppetDB releases adds several new features and contains some breaking changes since the 2.x series..
http://docs.puppetlabs.com/puppetdb/latest/index.html
2015-07-28T05:44:23
CC-MAIN-2015-32
1438042981576.7
[]
docs.puppetlabs.com
Decision Insight 20180122 Upgrade Primary / Replica clusters Important: All nodes within an Axway Decision Insight (DI) cluster must use the same version of DI. Stop replicas and upgrade all nodes Stop all replica nodes. Upgrade each node like a standalone deployment: see Upgrade the deployment. Start all nodes If you left the primary node running, restart the primary node before the replica nodes so that the database is migrated as well. Otherwise, nodes can be restarted in any order. Check the following page to get instructions on how to start Decision Insight: Linux: Manage a node under Linux Windows: Manage a node under Windows If needed, the replica nodes will download the last checkpoint migrated from the primary node. If a node fails to start, check the <logging directory>/migration-error.log and the <logging directory>/node-error.log. Related Links
https://docs.axway.com/bundle/DecisionInsight_20180122_allOS_en_HTML5/page/upgrade_primary___replica_clusters.html
2018-04-19T13:58:41
CC-MAIN-2018-17
1524125936969.10
[]
docs.axway.com
In order to continue to use self-signed certificates for larger evaluation deployments, a certificate can be generated for a specific hostname. This will allow clients to properly verify the hostname presented in the certificate as the host that they requested in the request URL. To create a self-signed certificate:
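A minimal sketch of the step this sentence introduces, using the Knox CLI; the gateway install directory and the hostname below are placeholders, and the exact command should be confirmed against your Knox version:
cd {GATEWAY_HOME}
bin/knoxcli.sh create-cert --hostname $gateway-hostname
Afterwards, restart the gateway so that it picks up the newly generated certificate.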
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_security/content/knox_self_signed_certificate_specific_hostname_evaluations.html
2018-04-19T14:15:59
CC-MAIN-2018-17
1524125936969.10
[]
docs.hortonworks.com
This API is not supported on tvOS. On Android, audio recording requires the RECORD_AUDIO permission, which must be added to the build.settings file: settings = { android = { usesPermissions = { "android.permission.RECORD_AUDIO", }, }, } To record to the .3gp format, simply append .3gp to the end of the path. Note that .3gp files can only be played by the media.playSound() function. On iOS, to enable audio recording on devices, the NSMicrophoneUsageDescription key must be added to the plist table of build.settings: settings = { iphone = { plist = { NSMicrophoneUsageDescription = "This app would like to access the microphone.", }, }, } media.newRecording( [path] ) The following platforms can only record to the following audio formats: local filePath = system.pathForFile( "newRecording.aif", system.DocumentsDirectory ) r = media.newRecording( filePath ) r:startRecording()
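Building on the snippet above, here is a short hedged sketch of a complete record-then-playback cycle; the 5-second delay and the handling shown are arbitrary choices for illustration, not part of the original example:
local filePath = system.pathForFile( "newRecording.aif", system.DocumentsDirectory )
local r = media.newRecording( filePath )
r:startRecording()

-- Stop after 5 seconds and play back what was captured
timer.performWithDelay( 5000, function()
    r:stopRecording()
    media.playSound( "newRecording.aif", system.DocumentsDirectory )
end )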
http://docs.coronalabs.com.s3-website-us-east-1.amazonaws.com/api/library/media/newRecording.html
2018-04-19T13:29:44
CC-MAIN-2018-17
1524125936969.10
[]
docs.coronalabs.com.s3-website-us-east-1.amazonaws.com
Microsoft Visual Basic). The Point UDT consists of X and Y coordinates implemented as property procedures. The following namespaces are required when defining a UDT: Imports System Imports System.Data.SqlTypes Imports Microsoft.SqlServer.Server using System; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; The Microsoft.SqlServer.Server namespace contains the objects required for various attributes of your UDT, and the System.Data.SqlTypes namespace contains the classes that represent Microsoft SQL Server native data types available to the assembly. There may of course be additional namespaces that your assembly requires in order to function correctly. The Point UDT also uses the System.Text namespace for working with strings. Note Managed C++ database objects, such as UDTs, that have been compiled with the /clr:pure Visual C++ compiler option are not supported for execution in SQL Server 2005 RTM. Specifying Attributes. <Serializable(), SqlUserDefinedTypeAttribute(Format.Native, _ IsByteOrdered:=True)> _ Public Structure Point Implements INullable [Serializable] [Microsoft.SqlServer.Server.SqlUserDefinedType(Format.Native, IsByteOrdered=true)] public struct Point : INullable { Implementing Nullability. Private is_Null As Boolean Public ReadOnly Property IsNull() As Boolean _ Implements INullable.IsNull Get Return (is_Null) End Get End Property Public Shared ReadOnly Property Null() As Point Get Dim pt As New Point pt.is_Null = True Return (pt) End Get End Property private bool is_Null; public bool IsNull { get { return (is_Null); } } public static Point Null { get { Point pt = new Point(); pt.is_Null = true; return pt; } } IS NULL vs. IsNull Consider a table that contains the schema Points(id int, location Point), where Point is a CLR UDT, and the following queries: --Query 1 SELECT ID FROM Points WHERE NOT (location IS NULL) -- Or, WHERE location IS NOT NULL --Query 2: SELECT ID FROM Points WHERE location.IsNull = 0. Implementing the Parse Method. ]); return pt; } Implementing the ToString Method. Private _x As Int32 Private _y As Int32 Public Overrides Function ToString() As String If Me.IsNull Then Return "NULL" Else Dim builder As StringBuilder = New StringBuilder builder.Append(_x) builder.Append(",") builder.Append(_y) Return builder.ToString End If End Function private Int32 _x; private Int32 _y; public override string ToString() { if (this.IsNull) return "NULL"; else { StringBuilder builder = new StringBuilder(); builder.Append(_x); builder.Append(","); builder.Append(_y); return builder.ToString(); } } Exposing UDT Properties The Point UDT exposes X and Y coordinates that are implemented as public read-write properties of type System.Int32. Public Property X() As Int32 Get Return (Me._x) End Get Set(ByVal Value As Int32) _x = Value End Set End Property Public Property Y() As Int32 Get Return (Me._y) End Get Set(ByVal Value As Int32) _y = Value End Set End Property public Int32 X { get { return this._x; } set { _x = value; } } public Int32 Y { get { return this._y; } set { _y = value; } } Validating UDT Values When working with UDT data, SQL Server Database Engine automatically converts binary values to UDT values. This conversion process involves checking that values are appropriate for the serialization format of the type and ensuring that the value can be deserialized correctly.. 
<Serializable(), SqlUserDefinedTypeAttribute(Format.Native, _ IsByteOrdered:=True, _ ValidationMethodName:="ValidatePoint")> _ Public Structure Point [Serializable] [Microsoft.SqlServer.Server.SqlUserDefinedType(Format.Native, IsByteOrdered=true, ValidationMethodName = "ValidatePoint")] public struct Point : INullable { If a validation method is specified, it must have a signature that looks like the following code fragment. Private Function ValidationFunction() As Boolean If (validation logic here) Then Return True Else Return False End If End Function private bool ValidationFunction() { if (validation logic here) { return true; } else { return false; } }. Private Function ValidatePoint() As Boolean If (_x >= 0) And (_y >= 0) Then Return True Else Return False End If End Function private bool ValidatePoint() { if ((_x >= 0) && (_y >= 0)) { return true; } else { return false; } }. )) ' Call ValidatePoint to enforce validation ' for string conversions. If Not pt.ValidatePoint() Then Throw New ArgumentException("Invalid XY coordinate values.") End If]); // Property X() As Int32 Get Return (Me._x) End Get Set(ByVal Value As Int32) Dim temp As Int32 = _x _x = Value If Not ValidatePoint() Then _x = temp Throw New ArgumentException("Invalid X coordinate value.") End If End Set End Property Public Property Y() As Int32 Get Return (Me._y) End Get Set(ByVal Value As Int32) Dim temp As Int32 = _y _y = Value If Not ValidatePoint() Then _y = temp Throw New ArgumentException("Invalid Y coordinate value.") End If End Set End Property."); } } } Coding UDT Methods. For information about how to install the AdventureWorks CLR samples, see "Installing Samples" in SQL Server Books Online. Function Distance() As Double Return DistanceFromXY(0, 0) End Function ' Distance from Point to the specified point. <SqlMethod(OnNullCall:=False)> _ Public Function DistanceFrom(ByVal pFrom As Point) As Double Return DistanceFromXY(pFrom.X, pFrom.Y) End Function ' Distance from Point to the specified x and y values. <SqlMethod(OnNullCall:=False)> _ Public Function DistanceFromXY(ByVal ix As Int32, ByVal iy As Int32) _ As Double Return Math.Sqrt(Math.Pow(ix - _x, 2.0) + Math.Pow(iy - _y, 2.0)) End Function //. Note The SqlMethodAttribute class inherits from the SqlFunctionAttribute class, so SqlMethodAttribute inherits the FillRowMethodName and TableDefinition fields from SqlFunctionAttribute. This implies that it is possible to write a table-valued method, which is not the case. The method compiles and the assembly deploys, but an error about the IEnumerable return type is raised at runtime with the following message: "Method, property, or field '<name>' in class '<class>' in assembly '<assembly>' has invalid return type." The following table describes some of the relevant Microsoft.SqlServer.Server.SqlMethodAttribute properties that can be used in UDT methods, and lists their default values. - DataAccess Indicates whether the function involves access to user data stored in the local instance of SQL Server. Default is DataAccessKind.None. - IsDeterministic Indicates whether the function produces the same output values given the same input values and the same database state. Default is false. - IsMutator Indicates whether the method causes a state change in the UDT instance. Default is false. - IsPrecise Indicates whether the function involves imprecise computations, such as floating point operations. Default is false. - OnNullCall Indicates whether the method is called when null reference input arguments are specified. 
Default is true.. Note Mutator methods are not allowed in queries. They can be called only in assignment statements or data modification statements. If a method marked as mutator does not return void (or is not a Sub in Visual Basic), CREATE TYPE fails with an error. The following statement assumes the existence of a Triangles UDT that has a Rotate method. The following Transact-SQL update statement invokes the Rotate method: UPDATE Triangles SET t.RotateY(0.6) WHERE id=5. <SqlMethod(IsMutator:=True, OnNullCall:=False)> _ Public Sub Rotate(ByVal anglex as Double, _ ByVal angley as Double, ByVal anglez As Double) RotateX(anglex) RotateY(angley) RotateZ(anglez) End Sub [SqlMethod(IsMutator = true, OnNullCall = false)] public void Rotate(double anglex, double angley, double anglez) { RotateX(anglex); RotateY(angley); RotateZ(anglez); } Implementing a UDT with a User-Defined Format 2005. For information about how to install the CLR samples, see Installing Samples. Installing Samples. Currency Attributes The Currency UDT is defined with the following attributes. <Serializable(), Microsoft.SqlServer.Server.SqlUserDefinedType( _ Microsoft.SqlServer.Server.Format.UserDefined, _ IsByteOrdered:=True, MaxByteSize:=32), _ CLSCompliant(False)> _ Public Structure Currency Implements INullable, IComparable, _ Microsoft.SqlServer.Server.IBinarySerialize [Serializable] [SqlUserDefinedType(Format.UserDefined, IsByteOrdered = true, MaxByteSize = 32)] [CLSCompliant(false)] public struct Currency : INullable, IComparable, IBinarySerialize { Creating Read and Write Methods with IBinarySerialize Sub Write(ByVal w As System.IO.BinaryWriter) _ Implements Microsoft.SqlServer.Server.IBinarySerialize.Write If Me.IsNull Then w.Write(nullMarker) w.Write(System.Convert.ToDecimal(0)) Return End If If cultureName.Length > cultureNameMaxSize Then Throw New ApplicationException(String.Format(CultureInfo.CurrentUICulture, _ "{0} is an invalid culture name for currency as it is too long.", cultureNameMaxSize)) End If Dim paddedName As String = cultureName.PadRight(cultureNameMaxSize, CChar(vbNullChar)) For i As Integer = 0 To cultureNameMaxSize - 1 w.Write(paddedName(i)) Next i ' Normalize decimal value to two places currencyVal = Decimal.Floor(currencyVal * 100) / 100 w.Write(currencyVal) End Sub Public Sub Read(ByVal r As System.IO.BinaryReader) _ Implements Microsoft.SqlServer.Server.IBinarySerialize.Read Dim name As Char() = r.ReadChars(cultureNameMaxSize) Dim stringEnd As Integer = Array.IndexOf(name, CChar(vbNullChar)) If stringEnd = 0 Then cultureName = Nothing Return End If cultureName = New String(name, 0, stringEnd) currencyVal = r.ReadDecimal() End Sub // Installing Samples. See Also Other Resources Creating a User-Defined Type Help and Information Getting SQL Server 2005 Assistance
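To tie the pieces above together, here is a hedged Transact-SQL sketch of registering and using the Point UDT once the assembly is built; the assembly path, assembly name, and table name are placeholders rather than part of the original sample:
CREATE ASSEMBLY PointAssembly FROM 'C:\samples\Point.dll';
CREATE TYPE dbo.Point EXTERNAL NAME PointAssembly.[Point];

CREATE TABLE dbo.Points (ID int PRIMARY KEY, Pnt Point);
INSERT INTO dbo.Points (ID, Pnt) VALUES (1, CONVERT(Point, '3,4'));

-- Property and method names on the UDT column are case-sensitive,
-- and methods require parentheses.
SELECT ID, Pnt.ToString() AS Location, Pnt.X AS X, Pnt.Y AS Y,
       Pnt.Distance() AS DistanceFromOrigin
FROM dbo.Points;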
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms131069(v=sql.90)
2018-04-19T14:26:44
CC-MAIN-2018-17
1524125936969.10
[]
docs.microsoft.com
1. Unwrap with Control Painting. This tutorial will explain how to use the control painting option to improve the result of your UV unwrap. When you decide to use UV Master, the plugin will produce the least stretched UVs with the best ratio of pixels per polygons, but sometimes the seams won’t be where you expect them to be as it is an automatic process. Perhaps for the quality of your final model, you may need to have more pixels for a polygon area (lips, eyes, nose) and less for another (the back of a character). The first step is to load the model to unwrap. For this tutorial, this retopologized character will be used. Feel free to use any kind of model. 1.1 Unwrap and things to fix First, let’s do a simple Unwrap of the model and check the seams and the UVs. The steps to do these operations will be explained later in this tutorial. The purpose is to see any potential problems: - The default result is pretty good with the UV seams which go on the back of the model, but the UV unwrap can be improved. - In orange, we can see the UV seams on the model. The forehead has a seam which goes between the eyes (1), which is visible in the UV unwrap on the right. - The seams on the arm go from the top to bottom and not in a straight line (2) and the same appears on the legs (3). Let’s fix these problems and do some improvements. 1.2 Density For this model, we will need larger UVs on the head, to use more pixels on the texture and reduce the UV density on the back as this part will be less visible on the model. To do this we will change the UV pixel ratio for some areas by using Control Painting. Please note that this option, like all other Control Painting options, will remove any existing Polypainting. You are strongly advised to use the Work on Clone command, which will create a clone of the current Tool or SubTool and will remove the highest subdivision levels. - Click on the Enable Control Painting button to enable the Control painting Tools. - Click on the Density option to enable the painting. Adjust the Density to define the desired density, using the slider or the preset buttons. We want larger UVs on the head, so with the ‘x’ (multiply) button set, push the slider to 4 and paint on the head. You should paint a green color. (1 on the illustration below). - For the purpose of this tutorial, we will also adjust the density on the hands. Change the density value to 2 and paint on them. You should see a lighter green on them while painting. (2 on the below illustration). - Now, let’s work on the back of the character to define a lower UV density. Set the ‘/’ (divide) option button and then change the slider value to 4. Your painting should be a Cyan blue. (3 on the below illustration). As the Control Painting is based on Polypainting, we can use the Smooth Brush (press Shift and turn off ZAdd or ZSub while pressing) to soften the Density painting: it will make the transition smoother between density parts of your UVs. Note: Be aware of your active sculpting brush, alpha and stroke when painting areas. Now the density work is done. But at a later stage, if you need to refine the density values (even after the unwrap) switch Density Control Painting on and edit it again. Then press unwrap again and refine as needed until you are satisfied with the result. 1.3 Protect and Attract UV seams On the first unwrap shown in the first chapter of this tutorial, the UV seams go too far on the forehead (close to the eyes) and are going from the back to the front of the legs and arms. 
To improve the seams placement, we will use Control Painting – similar to Density but dedicated to the protection of an area, or to attract the seams. First, we will protect the front of the character. This part is most of the time the one which mustn’t have seams: - First, we will protect the front of the character. This part is usually the one which mustn’t have seams: - Click on the Protect button, below the Use Control Painting button. (Which should be on, based on the previous Density step.) - Paint the front of the character, from the head to the legs. Paint also the front or top part of the arms. Depending of your own model, choose which part to protect. - Paint the front of the character, from the head to the legs. Paint also the front or top part of the arms. Depending of your own model, choose which part to protect. - Now that all the desired parts are protected, we will provide the plugin with which parts we need to drive the UV seams. Click on the Attract button and paint on the back of the model where you would like to have the seams. Keep in mind that it’s not an accurate solution to create UV seams. Painting large areas provides better results. Note: Be aware of your active sculpting brush, alpha and stroke when painting areas. 1.4 Unwrap! It is now time to do the UV Unwrap of the model, using the previously made Control Painting. Press the Unwrap button. To do so, simply press the Unwrap button of the plugin. After a few seconds, the process will be completed with some statistics in the notebar. 1.5 Coloring Seams and validation The unwrapping doesn’t change the appearance of your model, so it’s impossible to visualize the result of the UV unwrapping. The solution is to use two utilities of UV Master: - Click on the Check Seams button located in the Utilities: it will paint the seams in orange and the openings in brown. if you are not satisfied with the existing seams you can redo the previous couple of steps by painting different Protect or Attract areas and doing a new Unwrap. - Click on the Flatten button to transform your 3D model in a 2D model corresponding to the UV island(s). All your ZBrush brushes and Transpose can be use to alter the UVs. When done, press Unflatten to bring your model back to its 3D shape. 1.6 Conclusion Now, your model has UVs which better suit your needs. You have seen that in a couple of minutes you can create more accurate UVs and change the UV density of local parts. Keep in mind that painting areas is better than painting thin lines as the plugin is not designed to create seams based on accurately painted lines. 2. Unwrapping a model with existing UVs or seams. This short tutorial will explain how to optimize or create the UVs of an imported model with split edges or existing UVs made in another 3D package to use the power of the UV Master algorithm. Only a few steps are needed and can dramatically improve your UVs but it is important to remember that your UV Island position, orientation, scale and seams position will change. For this short tutorial, we will use the famous Demo Head. 2.1 Unwrap from Existing UVs 1. In the software of your choice, create UVs. You don’t need to create clean UVs because UV Master will completely recreate them. You only need to worry about where on the model the seams will be located. The two UV island created from the model. Note: The face is bigger than the other part of the head, because the two parts have been unwrapped separately then manually packed and resized. 
Press the Check Seam button in the Utility section of the plugin: You should see the UV seams painted like below (of course your own results will vary based on where you put the cuts in your UVs): In orange, the UVs seams and in brown, the border seams. We can clearly see the seam around the face, splitting it from the rest of the head. Now, it’s time to work on the UVs themselves. The first step is to press the Use Existing UV Seams option to disable the creation of the seams as we want to use the existing ones. Now press the Unwrap button to start the operation. When the process is finished, press the Flatten button to visualize your UVs: You should see your mesh flattened like below. Compare with your original unwrap to see the improvement. When your are done, don’t forget to Unflatten your model! 2.2 Unwrap from Existing seams In your 3D package of choice, split your geometry where you need to have your UV seams. Save your 3D model as an OBJ file and import it into ZBrush. Open the UV Master plugin menu and before unwrapping, click on the Check Seams to visualize your existing seams: You should see a set of brown seams, which will show you the split areas of the model, as opposed to orange seams which show UV seams as in the previous chapter. Now, enable the Use Existing UV Seams option. This way no new seams will be created in the unwrap process. Press the Unwrap button. When the note which indicates the end of the process appears, click on it to close it and then press the Flatten button to visualize your UVs. You should see your mesh flattened like below:
http://docs.pixologic.com/user-guide/zbrush-plugins/uv-master/unwrap-tutorials/
2018-04-19T13:34:12
CC-MAIN-2018-17
1524125936969.10
[array(['http://docs.pixologic.com/wp-content/uploads/2013/01/4R7-UVM-39.jpg', 'The two UV island created from the model. Note: The face is bigger than the other part of the head, because the two parts have been unwrapped separately then manually packed and resized.'], dtype=object) array(['http://docs.pixologic.com/wp-content/uploads/2013/01/4R7-UVM-41.jpg', 'In orange, the UVs seams and in brown, the border seams. We can clearly see the seam around the face, splitting it from the rest of the head.'], dtype=object) array(['http://docs.pixologic.com/wp-content/uploads/2013/01/4R7-UVM-42.jpg', '4R7-UVM-42'], dtype=object) array(['http://docs.pixologic.com/wp-content/uploads/2013/01/4R7-UVM-43.jpg', '4R7-UVM-43'], dtype=object) array(['http://docs.pixologic.com/wp-content/uploads/2013/01/4R7-UVM-44.jpg', '4R7-UVM-44'], dtype=object) array(['http://docs.pixologic.com/wp-content/uploads/2013/01/4R7-UVM-45.jpg', '4R7-UVM-45'], dtype=object) array(['http://docs.pixologic.com/wp-content/uploads/2013/01/4R7-UVM-46.jpg', '4R7-UVM-46'], dtype=object) array(['http://docs.pixologic.com/wp-content/uploads/2013/01/4R7-UVM-49.jpg', '4R7-UVM-49'], dtype=object) ]
docs.pixologic.com
Microsoft Docs contributor guide overview Welcome to the docs.microsoft.com (Docs) Contributor Guide! About this guide Here you'll find all of the information you need to contribute to Docs articles, by using the Microsoft Open Publishing Services (OPS) platform and the supporting tools and processes. The table of contents to your left is designed to help you get started and be productive contributing to Microsoft Docs. The introductory articles provide a quick start for the tasks that are common to any contribution activity. Later articles are specific to different tasks, and you should focus on the section that describes the activity that interests you. Many of the articles can also serve as reference content, which you might want to save as a favorite/bookmark in your browser. Also note that there are several links to other sites, which will take you to pages that are off the docs.microsoft.com domain and outside this guide. Contribution tasks There are several ways to contribute to docs: - You can create issues to recommend new articles, or improve existing articles. - You can quickly edit articles to make small changes in the GitHub online editor. - You can review drafts of new articles to ensure quality and technical accuracy. - You can create new articles for topics when you want to help drive the content story. - You can update or create samples to improve the code samples that reinforce important concepts. All our public repositories are hosted on GitHub and written in Markdown. You'll need the following to contribute: - If you don't already have one, create a GitHub account. - Docs articles are written in a markup language called Markdown. You should have a basic understanding of Markdown syntax. Quick start to propose an article change If you don't have time to digest the entire guide or install tools, and you just need to make a minor contribution, here are the essentials. Use the web editing workflow to submit your contribution via a GitHub pull request. You'll be editing the content and submitting the PR in the browser. Additional ways to contribute to docs.microsoft.com content You can learn more about the different tasks in our article on how to contribute.
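Since contributions are written in Markdown, here is a tiny, generic sketch of the syntax you will use most often (not taken from any specific Docs article):
# Article title

An introductory sentence with a [link](https://docs.microsoft.com) and some **bold** text.

- First list item
- Second list item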
https://docs.microsoft.com/zh-cn/contribute/
2018-04-19T14:28:20
CC-MAIN-2018-17
1524125936969.10
[]
docs.microsoft.com
How to keep the display on during audio/video playback (XAML) [ This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation ]. // Create this variable at a global scope. Set it to null. private DisplayRequest dispRequest = null; ' Create this variable at a global scope. Set it to nothing. Private dispRequest As DisplayRequest dispRequest = Nothing. } } Public Sub StartVideoPlayback() If dispRequest Is Nothing Then '. End If End Sub); } } Note Windows automatically deactivates your app's active display requests when it is moved off screen, and re-activates them when your app comes back to the foreground. You can also use this API to keep the screen on while giving directions in a GPS app. In this case, substitute navigation events for video playback events. To see similar code in the context of functions, download the display power state sample. Related topics Quickstart: video and audio Display power state sample Media playback, start to finish
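A minimal hedged C# sketch of the pattern this article describes, using the Windows.System.Display.DisplayRequest class; the method names and the points at which your app calls them (play/pause handlers) are illustrative placeholders:
using Windows.System.Display;

private DisplayRequest dispRequest = null;

public void StartVideoPlayback()
{
    if (dispRequest == null)
    {
        // Activate a display-required request: the screen stays on
        // for as long as the request is active.
        dispRequest = new DisplayRequest();
        dispRequest.RequestActive();
    }
}

public void StopVideoPlayback()
{
    if (dispRequest != null)
    {
        // Release the request when playback stops, pauses, or ends.
        dispRequest.RequestRelease();
        dispRequest = null;
    }
}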
https://docs.microsoft.com/en-us/previous-versions/windows/apps/jj152728(v=win.10)
2018-04-19T14:27:55
CC-MAIN-2018-17
1524125936969.10
[]
docs.microsoft.com
columns.field String The field to which the column is bound. The value of this field is displayed in the column's cells during data binding. Only columns that are bound to a field can be sortable or filterable. The field name should be a valid JavaScript identifier and should contain only alphanumeric characters (or "$" or "_"), and may not start with a digit. Example - specify the column field <div id="grid"></div> <script> $("#grid").kendoGrid({ columns: [ // create a column bound to the "name" field { field: "name" }, // create a column bound to the "age" field { field: "age" } ], dataSource: [ { name: "Jane", age: 30 }, { name: "John", age: 33 }] }); </script>
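Because only field-bound columns can be sorted or filtered, a hedged variation of the example above simply turns on the grid's sortable and filterable options for those bound columns (same hypothetical data as above): <div id="grid"></div> <script> $("#grid").kendoGrid({ sortable: true, filterable: true, columns: [ { field: "name" }, { field: "age" } ], dataSource: [ { name: "Jane", age: 30 }, { name: "John", age: 33 } ] }); </script>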
https://docs.telerik.com/kendo-ui/api/javascript/ui/grid/configuration/columns.field
2018-04-19T13:36:27
CC-MAIN-2018-17
1524125936969.10
[]
docs.telerik.com
A string identifying the type of notification event: "local"— From events created locally. "remote"— From remote push events originating from a server. "remoteRegistration"— Used for initializing remote push events. If successful, the event will also contain a "token"property which contains a string needed by your push server to communicate with Apple's Push Notification server; if not, then the event will have an "error"property. -- The launch arguments provide a notification event if this app was started when the user tapped on a notification local launchArgs = ... if ( launchArgs and launchArgs.notification ) then print( launchArgs.notification.type ) print( launchArgs.notification.name ) print( launchArgs.notification.sound ) print( launchArgs.notification.alert ) print( launchArgs.notification.badge ) print( launchArgs.notification.applicationState ) end
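Beyond reading the launch arguments as shown above, notification events are typically received at runtime through a listener; a minimal hedged sketch follows (the handler logic is illustrative only):
local function onNotification( event )
    if ( event.type == "remoteRegistration" ) then
        -- event.token is the string your push server needs
        print( "push token: " .. tostring( event.token ) )
    else
        print( "notification: " .. event.type )
    end
end

Runtime:addEventListener( "notification", onNotification )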
http://docs.coronalabs.com.s3-website-us-east-1.amazonaws.com/api/event/notification/type.html
2018-04-19T13:27:54
CC-MAIN-2018-17
1524125936969.10
[]
docs.coronalabs.com.s3-website-us-east-1.amazonaws.com
Using the Dynamic Brush When creating drawings for a panel, there may be a specific object that is repeated many times to create a bigger picture, such as a landscape. The object can be a blade of grass, tree, or rock. Instead of creating this drawing and then copy/pasting it over and over again, you can create a pattern and assign it as its own brush. You can create your pattern, select it, and add it to your brush styles using the Dynamic brush. - Use the Brush drawing tool to create a small drawing. - Use the Select tool to choose the parts of the drawing you want to use as the pattern. - In the Tool Properties view, click the New Dynamic Brush button to use the current layer as a new brush pattern. If you do not want to use the entire layer as the brush pattern, select the parts of the drawing you want to use as the pattern. If you do this, you must reselect the Brush tool before you click the New Dynamic Brush button. When you add the Dynamic Brush to your brush styles list, it is given a default name and a preview appears in the Tool Properties view. You can use the Rename Brush button to give the Dynamic Brush a more meaningful name. - Use the new Dynamic Brush to quickly repeat a pattern. - Create drawings on the same layer of multiple panels or multiple layers of the same panel. - In the Thumbnails, Timeline, or Stage view, Shift + click to multiselect all the layers you want to use to create the Dynamic brush. If you are creating your brush with panels, Ctrl + Shift + click (Windows) or ⌘ + Shift + click (Mac OS X) the panels to use to create the Dynamic brush. - Select the Brush tool and click the Add Dynamic Brush button. - In the Tool Properties view, adjust the slider to see the properties of the Dynamic brush. Your new Dynamic brush will contain all the selected drawings. When you use this brush, you will cycle through the drawings.
https://docs.toonboom.com/help/storyboard-pro-5-5/storyboard/drawing/use-dynamic-brush.html
2018-04-19T13:59:41
CC-MAIN-2018-17
1524125936969.10
[array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../Resources/Images/_ICONS/Producer.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Harmony.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyEssentials.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyAdvanced.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyPremium.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Paint.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/StoryboardPro.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Activation.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/System.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Adobe_PDF_file_icon_32x32.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/SBP/dynamic_grass_109x131.png', None], dtype=object) array(['../../Resources/Images/SBP/repeat_grass.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/SBP/mutiple_dynamic_brush.png', None], dtype=object) array(['../../Resources/Images/SBP/layers_synamic.png', None], dtype=object) array(['../../Resources/Images/SBP/dynamic_multiple.png', None], dtype=object) ]
docs.toonboom.com
User Guide > Structure > Scene > Flipping Flipping Scenes T-SBFND-009-024 This functionality enables you to easily flip a selection of scenes without flipping their layers one by one. This can be useful when re-using background elements. Flipping the scene will flip all panels included in that scene, as well as the camera movements. Everything else will be kept the same, for example, keyframe timing. Before: After: NOTE: Flipping the scene flips all panels included in the scene. It is not possible to flip one or more panels in a scene without affecting the others. A 3D scene cannot be flipped. - In the Timeline view, select one or more scenes. - Do one of the following: - Select Storyboard > Flip Selected Scenes. - In the Thumbnails view, right-click on your selection and select Flip Selected Scenes. - In the Timeline view, right-click on your selection and select Flip Selected Scenes. - In the Storyboard Pro toolbar, click the Flip Selected Scenes button. To add this button to the Storyboard Pro toolbar, see Toolbar Manager Dialog Box. Use the Flip Selected Scenes shortcut.
https://docs.toonboom.com/help/storyboard-pro-6/storyboard/structure/flip-scene.html
2018-04-19T13:59:35
CC-MAIN-2018-17
1524125936969.10
[array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../Resources/Images/_ICONS/Producer.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Harmony.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyEssentials.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyAdvanced.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyPremium.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Paint.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/StoryboardPro.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Activation.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/System.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Adobe_PDF_file_icon_32x32.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/SBP/Scenes_and_Panels/About_Flipping_Before.png', None], dtype=object) array(['../../Resources/Images/SBP/Scenes_and_Panels/About_Flipping_After.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
: Notably missing from this table are begin blocks and if blocks, which do not introduce new scope blocks. All three types of scopes follow somewhat different rules which will be explained below as well as some extra rules for certain blocks.. julia> module A a = 1 # a global in A's scope end; julia> module B module C c = 2 end b = C.c # can access the namespace of a nested global scope # through a qualified access import ..A # makes module A available d = A.a end; julia> module D b = a # errors as D's global scope is separate from A's end; ERROR: UndefVarError: a not defined julia> module E import ..A # make module A available A.a = 2 # throws below error end; ERROR: cannot assign variables in other modules Note that the interactive prompt (aka REPL) is in the global scope of the module Main. Local Scope A new local scope is introduced by most code-blocks, see above table for a complete list. A local scope usually inherits all the variables from its parent scope, both for reading and writing. There are two subtypes of local scopes, hard and soft, with slightly different rules concerning what variables are inherited. Unlike global scopes, local scopes are not namespaces, thus variables in an inner scope cannot be retrieved from the parent scope through some sort of qualified access. The following rules and examples pertain to both hard and soft (Note, in this and all following examples it is assumed that their top-level is a global scope with a clean workspace, for instance a newly started REPL.) Inside a local scope a variable can be forced to be a local variable using the local keyword: julia> x = 0; julia> for i = 1:10 local x x = i + 1 end julia> x 0 Inside a local scope a new global variable can be defined Soft Local Scope In a soft local scope, all variables are inherited from its parent scope unless a variable is specifically marked with the keyword local. Soft local scopes are introduced by for-loops, while-loops, comprehensions, try-catch-finally-blocks, and let-blocks. There are some extra rules for Let Blocks and for For Loops and Comprehensions. In the following example the x and y refer always to the same variables as the soft local scope inherits both read and write variables: julia> x, y = 0, 1; julia> for i = 1:10 x = i + y + 1 end julia> x 12 Within soft scopes, the global keyword is never necessary, although allowed. The only case when it would change the semantics is (currently) a syntax error: julia> let local j = 2 let global j = 3 end end ERROR: syntax: `global j`: j is local variable in the enclosing scope Hard Local Scope Hard local scopes are introduced by function definitions (in all their forms), struct type definition blocks, and macro-definitions. In a hard local scope, all variables are inherited from its parent scope unless: an assignment would result in a modified global variable, or a variable is specifically marked with the keyword local. Thus global variables are only inherited for reading but behave differently to functions defined in the global scope as they (1, 2) The distinction between inheriting global and local variables for assignment can lead to some slight differences between functions defined in local vs. global scopes. 
Consider the modification of the last example by moving bar to the global scope: julia> x, y = 1, 2; julia> function bar() x = 10 # local return x + y end; julia> function quz() x = 2 # local return bar() + x # 12 + 2 (x is not modified) end; julia> quz() 14 julia> x, y (1, 2) Note that above subtlety does (::#1) (generic function with 1 method) julia> f(3) ERROR: UndefVarError: a not defined Stacktrace: [1] (::##1#2)(::Int64) at ./none:1 taken as examples. Hard vs. Soft Local Scope Blocks which introduce a soft local scope, such as loops, are generally used to manipulate the variables in their parent scope. Thus their default is to fully access all variables in their parent scope. Conversely, the code inside blocks which introduce a hard local scope (function, type, and macro definitions) can be executed at any place in a program. Remotely changing the state of global variables in other modules should be done with care and thus this is an opt-in feature requiring the global keyword. The reason to allow modifying local variables of parent scopes in nested functions is to allow constructing closures which have a private state, for instance the state variable in the following example: julia> let state = 0 global counter counter() = state += 1 end; julia> counter() 1 julia> counter() 2 See also the closures in the examples in the next two sections.: julia> x, y, z = -1, -1, -1; julia> let x = 1, z println("x: $x, y: $y") # x is local variable, y the global println("z: $z") # errors as z has not been assigned yet but is local end x: 1, y: -1 ERROR: UndefVarError: z not defined = Array{Any}(2); i = 1; julia> = Array{Any}(2); i = 1; julia> and Comprehensions have the following behavior: any new variables introduced in their body scopes are freshly allocated for each loop iteration. This is in contrast to while loops which reuse the variables for all iterations. Therefore these constructs are similar to while loops with let blocks inside: julia> Fs = Array{Any}(2); julia> for j = 1:2 Fs[j] = ()->j end julia> Fs[1]() 1 julia> Fs[2]() 2 for loops will reuse existing variables for its iteration variable: julia> i = 0; julia> for i = 1:3 end julia> i 3 However, comprehensions do not do this, and always freshly allocate their iteration variables: julia> x = 0; julia> [ x for x = 1:3 ]; julia> x struct keywords, are constant by default. Note that const only affects the variable binding; the variable may be bound to a mutable object (such as an array), and that object may still be modified.
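As a small illustration of that last point about const and mutable objects (a sketch; the exact warning or error text for rebinding a constant varies between Julia versions):

julia> const a = [1, 2, 3];

julia> a[1] = 42;   # allowed: only the binding is constant, the array itself is mutable

julia> a
3-element Array{Int64,1}:
 42
  2
  3

julia> a = [4, 5, 6]   # rebinding the constant itself triggers a warning or an error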
https://docs.julialang.org/en/stable/manual/variables-and-scoping/
2018-04-19T13:50:10
CC-MAIN-2018-17
1524125936969.10
[]
docs.julialang.org
This Developer Guide contains PHP code snippets and ready-to-run programs. You can find these code samples in the following sections: Note The Amazon DynamoDB Getting Started Guide contains additional PHP sample programs. The way you use the SDK for PHP depends on your environment and how you want to run your application. The code samples in this document run from the command line, but you can modify them if you want to run them in a different environment (such as a web server). To Run the PHP Code Samples (from the command line) Verify that your environment meets the minimum requirements for the SDK for PHP, as described in Minimum Requirements. Download and install the AWS SDK for PHP. Depending on the installation method that you use, you might have to modify your code to resolve dependencies among the PHP extensions. For more information, see the Getting Started section of the AWS SDK for PHP documentation. Copy the code sample (from the documentation page you are reading) into a file on your computer. Run the code from the command line. For example: php myProgram.php Note The code samples in this Developer Guide are intended for use with the latest version of the AWS SDK for PHP. PHP: Setting Your AWS Credentials The SDK for PHP requires that you provide AWS credentials to your application at runtime. The code samples in this Developer Guide assume that you are using an AWS credentials file, as described in Credentials in the AWS SDK for PHP documentation. The following is an example of an AWS credentials file named ~/.aws/credentials, where the tilde character ( ~) represents your home directory: [default] aws_access_key_id = AWS access key ID goes here aws_secret_access_key = Secret key goes here PHP: Setting the AWS Region and Endpoint You must specify an AWS region when you create a DynamoDB client. To do this, you provide the Aws\Sdk object with the region you want. The following code snippet instantiates a new Aws\Sdk object, using the US West (Oregon) region. It then creates a DynamoDB client that uses this region. $sdk = new Aws\Sdk([ 'region' => 'us-west-2', // US West (Oregon) Region 'version' => 'latest' // Use the latest version of the AWS SDK for PHP ]); // Create a new DynamoDB client $dynamodb = $sdk->createDynamoDb(); If you want to run the code samples using DynamoDB locally on your computer, you need to set the endpoint, as shown following: $sdk = new Aws\Sdk([ 'endpoint' => '', // Use DynamoDB running locally 'region' => 'us-west-2', // US West (Oregon) Region 'version' => 'latest' // Use the latest version of the AWS SDK for PHP ]); // Create a new DynamoDB client $dynamodb = $sdk->createDynamoDb();
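For instance, once the client has been created you could sanity-check the connection by listing the tables in the region (a minimal sketch; it assumes the SDK was installed via Composer and that the credentials file described above is in place):

<?php
// list-tables.php -- print the names of all DynamoDB tables in us-west-2
require 'vendor/autoload.php';

$sdk = new Aws\Sdk([
    'region'  => 'us-west-2',
    'version' => 'latest'
]);

$dynamodb = $sdk->createDynamoDb();

// listTables returns a result object whose 'TableNames' entry is an array of strings
$result = $dynamodb->listTables();
foreach ($result['TableNames'] as $tableName) {
    echo $tableName . PHP_EOL;
}

Run it the same way as the other samples: php list-tables.php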
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/CodeSamples.PHP.html
2016-10-21T11:32:02
CC-MAIN-2016-44
1476988717963.49
[]
docs.aws.amazon.com
1 Introduction MxAssist Performance Bot is an intelligent virtual co-developer bot that helps you improve the performance of your project by inspecting your project model against Mendix development best practices in Mendix Studio Pro. It detects anti-patterns during design and development, points you to these anti-patterns, suggests how to resolve them, and in many cases can automatically fix these issues. MxAssist Performance Bot is built using statistical analysis of thousands of anonymized Mendix projects to learn common anti-patterns, as well as Mendix Expert Services best practices in the development of microflows, domain models, pages, security, etc. It offers three levels of assistance: - Detection – the bot inspects the model, identifies issues, and points you to the document/element causing the issue. - Recommendation – the bot explains the identified issue, the potential impact, and how to fix it. There is also a detailed best practice guide with a dedicated step-by-step guideline on how to fix the issue. - Auto-fixing – the bot can automatically implement the best practice and fix the issue. 2 MxAssist Performance Bot Pane To access the settings of MxAssist Performance Bot, open Edit > Preferences > the General tab > the MxAssist Performance Bot tab. For more information, see Preferences. MxAssist Performance Bot is enabled by default and is designed as a pane. To access the MxAssist Performance Bot pane, click View > MxAssist Performance Bot. The pane gives you information on each anti-pattern and contains MxAssist Performance Bot settings and configurations: 2.1 Options and Configuration At the top of the MxAssist Performance Bot pane you can see the following options: Inspect now – inspects your project model for performance issues. Limit to current tab – limits the messages displayed in the pane to the current document. Configuration – defines the modules and documents that the MxAssist Performance Bot will analyze. Click the Configuration button to open the MxAssist Performance Bot Configuration dialog box that contains the Project Model and Best Practice tabs. The Project Model tab lists all relevant documents in your project. You can choose which specific modules or documents to inspect or leave out. The Best Practice tab lists the available best practices. You can choose your preferred best practices and inspect your model against them: You can use both project model and best practice configuration together. 2.2 Anti-Pattern Overview Each anti-pattern line in the pane provides you with the following information: Icon – indicates if the anti-pattern can be automatically fixed; if the icon has the letter "A", the issue can be auto-fixed Code – a unique code that is specific to the anti-pattern type Blue circle – indicates a newly detected anti-pattern Message – description/explanation of the anti-pattern Element – the element causing the issue Document – the document containing the element Module – the module containing the document Right-clicking the message line of an anti-pattern in the pane opens the drop-down menu: The following actions are available in the drop-down menu: - Go to Cause – takes you to the element causing the issue. - Go to Usage – opens the corresponding locations where the anti-pattern is used. - View MxAssist Performance recommendation – opens the pop-up window with recommendations (similar to double-clicking the message). - Mark as read – marks the issue as read. This makes the blue circle disappear.
- Suppress this recommendation – suppresses the issue. This will gray out the issue and send it to the bottom of the list. The related indicator in the editor will disappear. 3 Using MxAssist Performance Bot in App Development 3.1 Detecting an Anti-Pattern The first level of assistance is detection, which includes inspecting the project model, identifying anti-patterns, and pointing you to the document causing the issue. To inspect your project model, click Inspect now in the MxAssist Performance Bot pane. The Inspect now option will be disabled if there are consistency errors in the project. In this case, you need to resolve the consistency errors first. The bot will detect performance anti-patterns and list them in the pane under the associated anti-pattern type. To learn more about each anti-pattern type, click the anti-pattern code link. Click the plus icon next to the anti-pattern type to see the detected cases of this type: To view the element or the document where the anti-pattern is located, double-click the message line, or right-click the message line and choose Go to Cause or Go to Usage in the drop-down menu. 3.2 Recommending a Fix The second level of assistance is recommendation – giving you an overview of the issue and recommending how to fix it. There are two ways to view the recommendations: Right-click an anti-pattern message on the pane and select View MxAssist Performance Recommendation in the drop-down menu. Click an indicator in the visual editor to view the detected issue: The recommendation contains the description of the identified issue, its potential impact, the way to fix it, and a link to more detailed guidance on fixing the issue: 3.3 Auto-Fixing the Anti-Pattern The third level of assistance is auto-fixing, where the bot can automatically implement the best practice and fix the issue in just one click. To avoid undesirable changes, auto-fixing is only available when the bot can safely refactor the code without creating an error or making other undesirable changes in the model. Each performance issue has an icon in the pane that indicates whether it is auto-fixable. If the icon has the letter "A", the issue can be auto-fixed: To auto-fix the issue, follow the steps below: Right-click the message line in the pane and select View MxAssist Performance Recommendation in the drop-down menu, or click the corresponding indicator in the editor to open the recommendation. In the MxAssist Performance Recommendation pop-up window, click the available action button, for example, Fix the Commit: After the issue is auto-fixed, a pop-up window listing the changes appears. You can click Show the fix to view the changed document and element.
https://docs.mendix.com/refguide/mx-assist-performance-bot
2021-01-16T06:52:14
CC-MAIN-2021-04
1610703500028.5
[array(['attachments/mx-assist-performance-bot/performance-bot-pane.png', 'Performance Bot Pane'], dtype=object) array(['attachments/mx-assist-performance-bot/drop-down-menu.jpg', 'Drop-Down Menu'], dtype=object) array(['attachments/mx-assist-performance-bot/viewing-anti-pattern.jpg', 'Viewing Anti-Pattern'], dtype=object) array(['attachments/mx-assist-performance-bot/indicator-in-editor.jpg', 'Indicator in the Editor'], dtype=object) array(['attachments/mx-assist-performance-bot/performance-recommendation.jpg', 'Performance Recommendation'], dtype=object) array(['attachments/mx-assist-performance-bot/auto-fixable.png', 'Auto-Fixable Icon'], dtype=object) ]
docs.mendix.com
Kerberos Troubleshooting Adam Saxton posted his great Kerberos troubleshooting guide, which covers both Reporting Services in native mode as well as SharePoint integrated mode configurations: If you are specifically troubleshooting Kerberos with a Reporting Services 2008 deployment, you may also want to check out the following whitepaper: Update (July 18): A new whitepaper for Configuring Kerberos Authentication for SharePoint 2010 products has just been released. Section 4 of that whitepaper explains identity delegation for Reporting Services.
https://docs.microsoft.com/en-us/archive/blogs/robertbruckner/kerberos-troubleshooting
2021-01-16T07:24:13
CC-MAIN-2021-04
1610703500028.5
[]
docs.microsoft.com
Introduction to Astor¶ Intended audience: administrators, developers, users Goal¶ Principle¶ On each host to be controlled, a device server (called Starter) takes care of all device servers running (or supposed to be running) on this computer. The controlled server list is read from the TANGO database. A graphical client (called Astor) is connected to all Starter servers and is able to: - Display the control system status and component status using coloured icons. - Execute actions on components (start, stop, test, configure, display information, …). - Execute diagnostics on components. - Execute global analysis on a large number of crates or on the database. To control a host remotely, the TANGO device server Starter must be running on it. Warning The starter device must have a specific name to be recognized by Astor. This name must be tango/admin/{hostname} (e.g. tango/admin/hal). Running Astor¶ Astor is a Java program using Swing classes. The classes have been compiled and the jar file has been built with Java 1.7. To start the application, start the script file: $TANGO_HOME/bin/astor There are 3 modes to start Astor. Display¶ At startup, Astor displays a tree where a node can be a family of hosts (see Starter properties), and a leaf is a host where a Starter device server is registered in the database. The icon of the leaf depends on the status of the controlled device servers, as defined below:
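Because every controlled host exposes its Starter as a regular TANGO device with the name described in the warning, it can also be reached from a script. A minimal sketch using PyTango (the host name hal is just the example from the warning above):

import tango

# The Starter on host "hal" must be registered in the TANGO database as tango/admin/hal
starter = tango.DeviceProxy("tango/admin/hal")

print(starter.state())   # overall state of the Starter device, e.g. ON or FAULT
print(starter.ping())    # round-trip time to the device, in microseconds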
https://tango-controls.readthedocs.io/en/latest/tools-and-extensions/built-in/astor/introduction.html
2021-01-16T05:52:35
CC-MAIN-2021-04
1610703500028.5
[]
tango-controls.readthedocs.io
Frame Rate The frames-per-second (FPS) that a Timeline runs at. Frame rate is stored in the Timeline's Time COMP in the parameter called Rate. It can be set through the Timeline's UI, directly via the component's Rate parameter or via the rate Member of the timeCOMP Class The frame rate of a project at any network location can be set or read through the project.cookRate Member of the Project Class. All CHOPs created have their sample rate set to this rate by default. Every component can have its own frame rate and start-end range. See the Time COMP.
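In scripting terms, the two members mentioned above can be read and written directly. A rough sketch (the Time COMP path /local/time is an assumption — use the path of the Time COMP that actually drives your component):

# Project-wide cook rate at the root of the project
print(project.cookRate)
project.cookRate = 60

# Rate of a particular Timeline, via its Time COMP's "rate" member
# (the path below is an assumption; adjust it to your network)
t = op('/local/time')
print(t.rate)
t.rate = 30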
https://docs.derivative.ca/Frame_Rate
2021-01-16T05:48:58
CC-MAIN-2021-04
1610703500028.5
[]
docs.derivative.ca
Music¶ Music covers the buzzer driver on the MoonBot Kit Controller Module and the external Speaker Module. By calling these blocks, you can make the MoonBot Kit play music. Speaker initialization¶ - Introduction Initializes the speaker connected to the specified port. - Parameters - port 2, 7, 9 Speaker Setting Play Mode¶ - Introduction Sets the playback mode of the speaker. - Parameters - Play mode Single Play: stop playing after the specified music has played Single tune circulation: play the specified music in a loop Play all: automatically play the next music in the music list after the specified music has played Random Play: play a random entry from the music list after the specified music has played Speakers Play Music¶ - Introduction Plays music with a given name. - Parameters - Music Name: chosen from the drop-down menu of the block Speaker Plays Custom Music¶ - Introduction Plays music with the specified music name. You need to copy the corresponding custom music onto the speaker before this operation (see "How to save music?"). The first four characters of a music name should be letters or numbers. - Parameters - Music Name: the first four characters of the custom music name; only English letters or numbers are supported Speaker Play Setting¶ - Introduction Sets the current speaker playback status. - Parameters - Play settings Play/pause: play or pause the current music Next song: play the next music in the music list Last song: play the previous music in the music list Stop: stop playing music Buzzer Plays Scales¶ - Introduction Makes the buzzer play scales with a set beat. - Parameters - Scale high, middle and low levels - Rhythm 1/16~4 beats: the length of a single beat can be set for the buzzer Buzzer pauses play¶ - Introduction Pauses the buzzer for a given number of beats. - Parameters - Rhythm 1/16~4 beats: the length of a single beat can be set for the buzzer Buzzer Sets Play Rhythm¶ - Introduction Sets the number of beats per minute (BPM) of the buzzer. - Parameters - beats per minute - Buzzer Play Frequency¶ - Introduction Makes the buzzer play at a specified frequency for a given time. - Parameters - frequency 0~65535: setting the frequency within the range audible to the human ear (20~20000 Hz) is recommended - time 0: play continuously other: stop playing after the specified length of time
https://morpx-docs.readthedocs.io/en/latest/MoonBot/MoonBot_Mixly/API_reference/music.html
2019-12-05T17:30:22
CC-MAIN-2019-51
1575540481281.1
[array(['../../../_images/31.png', '../../../_images/31.png'], dtype=object) array(['../../../_images/32.png', '../../../_images/32.png'], dtype=object) array(['../../../_images/33.png', '../../../_images/33.png'], dtype=object) array(['../../../_images/34.png', '../../../_images/34.png'], dtype=object) array(['../../../_images/35.png', '../../../_images/35.png'], dtype=object) array(['../../../_images/36.png', '../../../_images/36.png'], dtype=object) array(['../../../_images/38.png', '../../../_images/38.png'], dtype=object) array(['../../../_images/39.png', '../../../_images/39.png'], dtype=object) array(['../../../_images/310.png', '../../../_images/310.png'], dtype=object) array(['../../../_images/311.png', '../../../_images/311.png'], dtype=object) ]
morpx-docs.readthedocs.io
"httpd_config" Module Description This module allows the server configuration to be viewed over HTTP via the /config path. Configuration To load this module use the following <module> tag: <module name="m_httpd_config.so"> This module requires no other configuration. Special Notes Leaking your server configuration over the internet is a privacy and security risk. You should avoid doing this by either configuring the httpd module to only listen for local connections or by using the httpd_acl module to restrict who can view your server configuration.
https://docs.inspircd.org/2/modules/httpd_config/
2019-12-05T17:38:13
CC-MAIN-2019-51
1575540481281.1
[]
docs.inspircd.org
Difference between revisions of "TDAbleton" Revision as of 18:20, 4 August 2019 TDAbleton is a tool for linking TouchDesigner tightly with Ableton Live. It offers full access to. See also: TDAbleton System Components, Creating Custom TDAbleton Components, and TDAbletonCompBaseExt Extension Contents - 1 Getting Started - 2 TDAbleton Feature Tour - 3 Using TDAbleton Package - 4 Caveats and Gotchas - 5 Troubleshooting Getting Started[edit] System Requirements[edit] - TouchDesigner version 099 2018.26590 and up. (Some features may work on older versions) - Ableton Live 9.7.2 and up. - Max for Live 7.3.3 and up. (Note: Max 7.3.3 has a MIDI pitch bend bug. Use 7.3.4!) Install the latest TDAbleton system[edit] - Open the Samples/TDAbleton folder by choosing Browse Samples in the TouchDesigner Help menu, then entering the TDAbleton folder. - Copy the TouchDesigner folder into your Ableton Remote Scripts folder. For Windows standard installation, this should be: \ProgramData\Ableton\Live x.x\Resources\MIDI Remote Scripts\. For Mac standard installation, this should be: /Contents/App-Resources/MIDI Remote Scripts/. See How to install a third party Remote Script to locate this folder and for more info on this. Set up Ableton Live[edit] - Start Ableton Live - Drag the TouchDesigner Remote Script folder (the one you just copied above) into the "PLACES" section in Live's Browser sidebar (on the left). When you select this new folder in Live, you should see the TDA MIDI.amxdand TDA Master.amxddevices in the browser (on the right). -] - In the Palette, you will find the tdAbletonPackage.toxin the TDAbleton section. Drop this in your project. It will have an error if it is not connecting to Ableton Live! -. - If the tdAbletonComponent does not have a disconnected error flag, the connection is successful. (If you see error messages or are not connected, check the Troubleshooting section below.) - Use the Add TDA Master Device parameter on the abletonSongComponent to add a master Max device to the master track in your Live set. Some features of TDAbleton will work without this, but functionality will be limited. Connecting Ableton Live To Multiple TouchDesigner Instances[edit] To connect Live to multiple instances of TouchDesigner, use the Broadcast feature. You can set Broadcast mode either on the TDA Master Max Device or on the tdAbleton Component. This will broadcast information from Live to all computers on the network. Be sure all tdAbleton devices' TouchDesigner In Port parameter is set to this port. Upgrading TDAbleton[edit] All TDAbleton components have a version number on their TDAbleton parameter page. You can check against the version numbers on palette and forum components to see if you have the latest. The latest updates will be available on the top entry of the TDAbleton Forum Thread. Installation instructions are there. You can also upgrade to a new version from a new TouchDesigner installation: - Follow the Install the latest TDAbleton system instructions above, re-copying the TouchDesigner remote script into the Ableton Remote Scripts folder and restarting your Live Set. - Delete the tdAbletonPackagein your project. - Copy a new tdAbletonPackageinto your project from the palette. - In the tdAbletoncomponent in tdAbletonPackage, pulse the Update Ableton Comps parameter. 
Important: When installing from either location, the TouchDesigner Max devices in your set (names start with TDA_) may have to be replaced with new versions if you have saved them out separately (using Collect All and Save or other means). You can tell if you are using the current ones (in the MIDI Remote Scripts/TouchDesigner) folder by looking at the path that shows up in the lower Ableton info bar when mousing over the devices' name in your set. Note also that the devices in the TDADemo set do not point there, so they have to be replaced if you based your Ableton set on that file. You can either recreate the devices in your set or replace the files in your set's file structure. Common Ableton Tasks[edit] The following is a list of commonly used data from Ableton Live and where to find it in TDAbleton. - Scene: abletonSongComponent, channels song/info/triggered_sceneand song/info/last_started_scene. Also available via callbacks. - Time Data: abletonSongComponent, channels song/info/bars, song/info/beats, song/info/sub_divisions,. - MIDI Data: abletonMIDIComponent. - MIDI Notes in a Clip: abletonClipSlotComponent. - Audio Levels: abletonLevelComponent. This includes level data on a per-track basis, and can be combined with filters to provide spectrum analysis. - Sending/Receiving Rack Macro data: abletonRackComponent. This is also the smoothest way to receive parameter data.. To find your log file, see this Ableton help page. There is also an Ableton Log File parameter on tdAbleton where you can enter the location and then pulse the Open Log File parameter to get a view of the log inside TouchDesigner. Tip: the Log TDA Debug Msgs parameter turns on and off verbose debugging in Ableton's log file. If this parameter is off, only errors and the most basic information will be put in the Ableton log. Unlike a Wire that connects nodes in the same Operator Family, a Link is the dashed lines between nodes that represent other data flowing between nodes, like CHOP Exports, node paths in parameters, and expressions in parameters referencing CHOP channels, DAT tables and other nodes.. A Tscript-only script that executes one or more scripting language commands. The F1 to F12 keys run macros. The F1 macro puts you in Perform Mode. See also Script, DAT and Python. TOuch Environment file, the file type used by TouchDesigner to save your project. An Operator Family that reads, creates and modifies 3D polygons, curves, NURBS surfaces, spheres, meatballs and other 3D surface data.'.
https://docs.derivative.ca/index.php?title=TDAbleton&diff=prev&oldid=16526
2019-12-05T18:26:57
CC-MAIN-2019-51
1575540481281.1
[]
docs.derivative.ca
OleDbDataReader.GetChars(Int32, Int64, Char[], Int32, Int32) Method Definition Reads a stream of characters from the specified column offset into the buffer as an array starting at the given buffer offset. public: virtual long GetChars(int ordinal, long dataIndex, cli::array <char> ^ buffer, int bufferIndex, int length); public long GetChars (int ordinal, long dataIndex, char[] buffer, int bufferIndex, int length); override this.GetChars : int * int64 * char[] * int * int -> int64 Public Function GetChars (ordinal As Integer, dataIndex As Long, buffer As Char(), bufferIndex As Integer, length As Integer) As Long Parameters Returns Remarks This method is useful when the OleDbDataReader is reading a large data structure into a buffer. For more information, see the SequentialAccess setting for CommandBehavior. If you pass a buffer that is null, GetChars returns the length of the field in characters. No conversions are performed; therefore, the data retrieved must already be a character array.
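A short usage sketch of the chunked-read pattern described in the remarks (the connection string, query, and column ordinal are placeholders):

using System;
using System.Data;
using System.Data.OleDb;

class ReadLongTextColumn
{
    static void Main()
    {
        // Connection string and query are placeholders for this sketch
        using (var connection = new OleDbConnection("Provider=...;Data Source=..."))
        using (var command = new OleDbCommand("SELECT Id, Notes FROM Documents", connection))
        {
            connection.Open();
            using (OleDbDataReader reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                while (reader.Read())
                {
                    var buffer = new char[1024];
                    long dataIndex = 0;
                    long charsRead;
                    // Pull the Notes column (ordinal 1) in 1 KB chunks, in order of increasing offset
                    while ((charsRead = reader.GetChars(1, dataIndex, buffer, 0, buffer.Length)) > 0)
                    {
                        Console.Write(buffer, 0, (int)charsRead);
                        dataIndex += charsRead;
                    }
                    Console.WriteLine();
                }
            }
        }
    }
}

Because the reader is opened with CommandBehavior.SequentialAccess, the chunks must be requested with a non-decreasing dataIndex.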
https://docs.microsoft.com/en-us/dotnet/api/system.data.oledb.oledbdatareader.getchars?view=netframework-4.8
2019-12-05T18:52:06
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Your C application requires two configuration values: - Application's name: app_name - New Relic license key: license_key All other configuration values are optional, and typically the default settings do not need to be changed. However, when necessary, you can adjust New Relic's C SDK configuration. This document is a quick reference for using some basic configuration options with the C SDK API. For detailed information about changing your configuration settings, including code values and examples, see the C SDK's configuration GUIDE.md on GitHub. Change configuration settings Here are examples of some available configuration options you can change, as defined in the C SDK's public header, libnewrelic.h. Change app name (alias) in UI Owner, Admins, or Add-on Managers You can change your application's alias from the Application settings page in the New Relic UI. This is useful, for example, to give your application a different name, yet keep historic data under the new alias. For more information, see Name your application. New Relic's C SDK does not support server-side configuration. However, you can also use this Application settings page in the UI to set your application's Apdex T threshold. To change the application's alias or Apdex T threshold in the UI: Go to rpm.newrelic.com/apm > (select an app) > Settings > Application. OR Go to rpm.newrelic.com/apm. Then, from the index of applications, select the app's gear fa-gear icon, and select View settings. Change app name in configuration If you change your application's name in your configuration settings, this will result in the same app appearing in the UI with a new name. Any historic data (based on the data retention schedule) will only exist under the old name. (To rename your application but still keep historic data, use the UI settings to change the alias.) If you need to change your application's name in your configuration after your application is connected to the daemon: - Make a new configwith a call to newrelic_create_app_config()using the new application name. - Make a new connected app with a call to newrelic_create_app(). Timing is everything. Switching application names during a single application execution may mean that your instrumented data is sent under the new application name.
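As a sketch of how the two required values fit together in code (adapted loosely from the pattern in the SDK's GUIDE.md; the application name, license key placeholder, and 10-second timeout are assumptions, and the daemon must already be running):

#include <stdio.h>
#include <stdlib.h>
#include "libnewrelic.h"

int main(void) {
  /* The only two required configuration values: app_name and license_key */
  newrelic_app_config_t *config =
      newrelic_create_app_config("Example C App", "YOUR_LICENSE_KEY_HERE");

  /* Connect to the daemon, waiting up to 10000 ms (an arbitrary example timeout) */
  newrelic_app_t *app = newrelic_create_app(config, 10000);
  newrelic_destroy_app_config(&config);

  if (app == NULL) {
    fprintf(stderr, "could not connect to the New Relic daemon\n");
    return EXIT_FAILURE;
  }

  /* ... record transactions here ... */

  newrelic_destroy_app(&app);
  return EXIT_SUCCESS;
}

To report under a different application name later, build a new config with newrelic_create_app_config() and call newrelic_create_app() again, as described above.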
https://docs.newrelic.com/docs/agents/c-sdk/install-configure/c-sdk-configuration
2019-12-05T18:25:00
CC-MAIN-2019-51
1575540481281.1
[]
docs.newrelic.com
Access to this feature depends on your subscription level. Requires Infrastructure Pro. New Relic Infrastructure's integrations include a Redis integration that reports your Redis data to New Relic products. This document explains how to install and configure the Redis integration, and describes the data the integration collects. This integration is released as Open Source under the MIT license on GitHub. A change log is also available there for the latest updates. Features New Relic's Redis integration reports critical performance data from your Redis server to New Relic products. You can view this metric data and inventory data in New Relic Infrastructure and Insights. You can view pre-built dashboards of your Redis data, create alert policies, and create custom queries and charts. You can also specify keys that are important to your application and get information about their length. The integration obtains data by executing Redis commands: INFOcommand: Data from the INFO command populates metric data and some inventory data. CONFIG GETcommand: Most inventory data comes from this command. -). If you edited the names of the Redis commands mentioned above, the integration will not be able to retrieve the Redis data. Compatibility and requirements To use the Redis integration, ensure your system meets these requirements: - New Relic Infrastructure installed on host - Linux distribution compatible with New Relic Infrastructure - Redis versions 3.0 or higher Install and activate On-host integrations do not automatically update. For best results, you should occasionally update the integration and update the Infrastructure agent. To install the Redis integration: Follow the instructions for installing an integration, using the file name nri-redis. Via the command line, change the directory to the integrations configuration folder: cd /etc/newrelic-infra/integrations.d Create a copy of the sample configuration file by running: sudo cp redis-config.yml.sample redis-config.yml Edit the redis-config.ymlconfig file based on your Redis server connection methods: - Connect with Unix socket If you connect using Unix socket, specify the unix_socket_pathin the configuration file. Be sure that the user executing the Redis integration has correct permissions for accessing that Unix socket. The permissions of the Unix socket are set in the Redis configuration (value of unixsocketperm). - Connect with TCP If you connect via TCP, the config file is by default set to localhostand port 6379. You can change this by specifying hostnameand/or the portargument. If you use this method, the unix_socket_pathparameter cannot be set. - If required: Set other configuration file settings based on your Redis setup, as described in Configuration. - Restart the Infrastructure agent. It is also possible to manually install integrations from a tarball file. For more information, see Install manually from a tarball archive. Configure the integration New Relic's Redis integration includes commands and arguments for setting up required Redis login information and configuring how data is reported from Redis. You would normally set these configuration steps as part of the Install process. The exact configuration will depend on your Redis setup and preferences. - metrics metricsreports the metric data. Accepts these arguments: hostname: Redis server hostname. Default value: localhost. port: Port where Redis server is listening. Default value: 6379. unix_socket_path: Unix socket path on which Redis server is listening (if set). 
Default value: None. keys: List of the keys for retrieving their lengths. Default value: None. See keyspace config for more information. keys_limit: Max number of the keys to retrieve their lengths. Default value: 30. For more information, see About this integration and Keyspace metrics configuration. password: Password to use when connecting to the Redis server. Use only if your Redis server is password-protected. Default value: None. - inventory inventorycaptures all the Redis configuration parameters, with the exception of requirepass. To disable collection of inventory data, delete the inventory command from the config file. - keyspace If you want to see metrics related to the length of selected keys, specify the keysparameter in single-line JSON format in redis-config.yml. For example: keys: '{"0":["KEY_1"],"1":["KEY_2"]}' If your selected keys are stored only in Redis database 0, then you can set the keysparameter as follows: keys: '["KEY_1","KEY_2"]' The keys_limitparameter defaults to 30for performance reasons. Be aware that increasing this number could have a substantial impact on your Redis server performance. To see the keyspace metric data that is collected, see RedisKeyspaceSample. Activating remote monitoring For version Redis server hostname. Example: HOSTNAME='Redis DB' - PORT The port where Redis server is listening. Example: PORT=7199 - UNIX_SOCKET_PATH Unix socket path on which Redis server is listening (if set). Example: UNIX_SOCKET_PATH='tpf_unix_sock.server' - KEYS List of the keys for retrieving their lengths. Example: KEYS='{"0":["KEY_1"],"1":["KEY_2"]}' - KEYS_LIMIT Max number of the keys to retrieve their lengths. Example: KEYS_LIMIT=50 Password to use when connecting to the Redis server. Example: PASSWORD='Hhv*$jIV' Find and use data To find your integration data in Infrastructure, go to infrastructure.newrelic.com > Third-party services and select one of the Redis integration links. In New Relic Insights, Redis data is attached to the RedisSample and RedisKeyspaceSample event types. For more on how to find and use your data, see Understand integration data. Metrics The Redis integration collects the following metric data attributes: Redis sample metrics These attributes can be found by querying the RedisSample event types in Insights. Keyspace metrics The Redis integration collects the following keyspace metadata and metrics. These attributes can be found by querying the RedisKeyspaceSample event type in Insights. service:
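To make the arguments above concrete, a redis-config.yml along these lines is one plausible shape (a sketch only — start from the shipped redis-config.yml.sample, since field names and layout can differ between integration versions; the instance names, label values, and password-free setup here are invented):

integration_name: com.newrelic.redis

instances:
  - name: redis-server-metrics
    command: metrics
    arguments:
      hostname: localhost
      port: 6379
      keys: '{"0":["KEY_1"],"1":["KEY_2"]}'
    labels:
      env: production
      role: cache
  - name: redis-server-inventory
    command: inventory
    arguments:
      hostname: localhost
      port: 6379
    labels:
      env: production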
https://docs.newrelic.com/docs/integrations/host-integrations/host-integrations-list/redis-monitoring-integration
2019-12-05T18:27:49
CC-MAIN-2019-51
1575540481281.1
[]
docs.newrelic.com
Microsoft Azure Azure IoT Nodes¶ How can you add an Azure IoT Node to the ResIOT platform - Login in your Azure portal (Microsoft Azure account required). - Click on "New" and search "IoT Hub" then click on "Create". - Fill the form for the Hub creation and create it. - Copy the name of the Hub in the "Azure IOT Hub Name" field of the connector. - Search the newly created Hub in the searchbar using its name. - Under the left menu group called "Settings" click on "Shared access policies". - Click then on the "device" policy and copy the "Primary Key" to the "Primary Key" field of the connector. - Go back to the dashboard, and select your Hub. - In the left menu, search IoT Devices and navigate to that page. - Click on "Add" and fill the form for the creation of the node. "Must use the Device EUI in the Device ID field!" - The full JSON objects will be delivered as the body of each message.
http://docs.resiot.io/Azure_Setup/
2019-12-05T17:50:37
CC-MAIN-2019-51
1575540481281.1
[]
docs.resiot.io
Selection.Delete method (Word) Deletes the specified number of characters or words. Syntax expression.Delete( Unit , Count ) expression Required. A variable that represents a Selection object. Parameters Return value Long Remarks This method returns a Long value that indicates the number of items deleted, or it returns 0 (zero) if the deletion was unsuccessful. Example See also Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
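A minimal, hypothetical macro using the method (wdWord is the standard WdUnits constant for deleting whole words; wdCharacter is the equivalent for characters):

Sub DeleteNextWord()
    ' Deletes one word starting at the current selection / insertion point
    Selection.Delete Unit:=wdWord, Count:=1
End Sub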
https://docs.microsoft.com/en-us/office/vba/api/word.selection.delete
2019-12-05T17:46:57
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Add Method (Messages Collection) Add Method (Messages Collection) The Add method creates and returns a new AppointmentItem or Message object in the Messages collection. Syntax Set objMessage = objMsgColl.Add( [subject] [, text**] [, type] [, importance] )** objMessage On successful return, represents the new AppointmentItem or Message object added to the collection. The type of object added depends on the parent folder of the Messages collection. objMsgColl Required. The Messages collection object. subject Optional. String. The subject of the message. When this parameter is not supplied, the default value is an empty string. text Optional. String. The body text of the message. When this parameter is not supplied, the default value is an empty string. type Optional. String. The message class of the message, such as the default, IPM.Note. importance Optional. Long. The importance of the message. The following values are defined: Remarks The method parameters correspond to the Subject, Text, Type, and Importance properties of the Message object. Note If you are adding an AppointmentItem object to a calendar folder, you cannot use any of the parameters of the Add method. You can, however, set the values later by using the corresponding properties. You should create new messages in the Inbox or Outbox folder, and new appointments in the calendar folder. The user must have permission to Add or Delete a Message object. Most users have this permission in their mailbox and their Personal Folders. The new Message object is saved in the MAPI system when you call its Update method. Example This code fragment replies to an original message: ' from the sample function Util_ReplyToConversation Set objNewMsg = objSession.Outbox.Messages.Add ' verify objNewMsg created successfully ... then supply properties Set objSenderAE = objOriginalMsg.Sender ' sender as AddressEntry With objNewMsg .Text = "How about a slightly used bicycle?" ' new text .Subject = objOriginalMsg.Subject ' copy original properties .ConversationTopic = objOriginalMsg.ConversationTopic ' append time stamp; compatible with Microsoft Exchange client Set objOneRecip = .Recipients.Add( _ Name:=objSenderAE.Name, _ Address:=objSenderAE.Type & ":" & objSenderAE.Address, _ Type:=CdoTo) .Recipients.Resolve .Update .Send showDialog:=False End With See Also Concepts Messages Collection Object
https://docs.microsoft.com/en-us/previous-versions/exchange-server/exchange-10/ms526702%28v%3Dexchg.10%29
2019-12-05T17:50:49
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Get List of Subscribed and Unsubscribed Users listOptInOptOut GET Return the list of users who have opted in (subscribed) or opted out (unsubscribed) from the daily analytics report. By default, all organization administrators are automatically subscribed to receive daily analytics summary reports through email. A value of "optout": 1 in the response corresponds to the user having opted out, meaning unsubscribed. Resource URL /organizations/{org_name}/stats/preferences/reports/dailysummaryreport/users
http://docs.apigee.com/management/apis/get/organizations/%7borg_name%7d/stats/preferences/reports/dailysummaryreport/users
2016-12-03T02:27:30
CC-MAIN-2016-50
1480698540804.14
[]
docs.apigee.com
JBoss.orgCommunity Documentation Version: 4.0.0 Starting from 3.0.0.Alpha1 version, the JBoss set of plugins includes tools for supporting JBoss Portal and JSR-186/JSR-286 portlets. Thus, this guide provides instructions Chapter 2,. This chapter shows how to create a Dynamic Web Project, add a Java Portlet to it and deploy it to the JBoss Portal. Follow the next procedure to create a Web project with JBoss Portlet capabilities pointed to the JBoss Portal runtime. Select Web perspective or select → → → → in any other perspective. This will display the New Dynamic Web Project wizard.→ → if you are in the Specify the name of the project. Click the Target Runtime area to create a JBoss Portal runtime. Choose JBoss Community > JBoss 4.2 Runtime and select the Create a new local server check box below. Click the button.in the The New Server Runtime Environment wizard appears. In the Name field, type JBoss Portal 2.7 Runtime, and then use the button to point to the location of JBoss Portal + JBoss AS extracted. Click button to proceed. At this point a new JBoss Server instance will be created. On the next page you can verify the runtime information and configuration. If something is incorrect, press thebutton to return to the previous wizard page. Click the button. Click the Configuration area to enable a portlet facet for the project.button in the In the Project Facets dialog, check JBoss Core Portlet and click the button. If the portlet libraries aren't available in the runtime you targeted, JBoss Portlets facets will be hidden on this page. To make them always visible no matter what the runtime is set, you should enable the appropriate option in Section 3.3, “JBoss Portlet Preferences”. The Java and Web Module pages are for configuring Java and Web modules in the project. Here the default values are fine, so leave everything as it is. The last wizard page will ask you to add JBoss Portlet capabilities to the project. Select Portlet Target Runtime Provider and click the button select→ → → → . The Create Portlet wizard starts (for information about the wizard options, see Section 3.2.1, “Java Portlet Wizard” in the guide reference). The wizard fills in the Project and Source Folder fields for you. You should specify a Java package and a class name (for instance, org.example and TestPortlet). Then click the button. You may leave the next three pages with default values, on the last one click thebutton. Once a Java portlet is created, new resources are added to the project structure: a Java portlet class ( TestPortlet.java), default-object.xml and portlet-instances.xml files and the portlet.xml descriptor is updated as well. Now the project is ready to be built and deployed. You can deploy a portlet project in the way you deploy any other web application. Right-click the project and select Run On Server wizard starts.→ . The Select JBoss Portal 2.7 Server created before and click the button. (see Section 2.2.1.1, “Creating a Dynamic Web Project with the JBoss Portlet Capabilities”) Or create a JSF project using the wizard provided by JBoss JSF Tools, then enable JSF and JBoss Portlet facets and add JBoss Portlet capabilities (see Section 2.2.1.2, “Creating a JSF Project and adding the JBoss Portlet Capabilities”) Refer to the further sections for the procedures on how to do this. The basic steps to create a dynamic Web project with the JBoss Portlet capabilities are as follows: Start the Dynamic Web Project wizard navigating to → → → → . 
You can also select the Java EE perspective and then selecting → → . Specify the project name and set the target runtime to JBoss Portal by following the points 3, 4 and 5 in the Section 2.1.1, “Creating a Web Project with JBoss Portlet Capabilities” procedure. In the Configurationarea, click the button and enable JavaServer Faces,JBoss Core Portlet and JBoss JSF Portlet facets. Click the button. You may leave the next two wizard pages with their defaults, just press thebutton to proceed. On the JBoss Portlet Capabilities page, select Portlet Target Runtime Provider and click the button.Add/Change Richfaces Libraries. You can select the JSF Portletbridge Runtime Provider type. Then it is necessary to set the home of the Portlet Bridge distribution. For information about all the JSF Portlet facet library providers, refer to the wiki article at:. Click thebutton. The project will be created in the workbench. For information on how to organize a JSF project you can read the JSF Tools User Guide. Just remember to point the target runtime to JBoss Portal directory location (see how it is done for a dynamic Web project with steps 3, 4 and 5 in the Section 2.1.1, “Creating a Web Project with JBoss Portlet Capabilities” procedure). To add the JBoss Portlet capabilities to the JSF project you should complete the next steps: Right-click the project and click Properties sheet. Select Project Facets on the left and enable the JavaServer Faces, JBoss Core Portlet and JBoss JSF Portlet facets.to open the project Notice that the Section 2.1.1, “Creating a Web Project with JBoss Portlet Capabilities” procedure. Click the button. To apply the changes click the Properties sheet.button and then the button to close the project The previous section demonstrated how to create a JSF project with JBoss Portlet and JSF Portlet capabilities enabled. Use the following procedure to add a JSF portlet to the created project and deploy it to JBoss Portal. Display the Create Portlet wizard by selecting → → → → from the context menu of the project (for information about the wizard options, see Section 3.2.2, button, on the last one click the button to complete the JSF portlet creation. Complete the steps described in the Section 2.1.3, “Deploying a Portlet to JBoss Portal” procedure to deploy a JSF portlet to JBoss Portal. Just use the other URL to see it in the browser:. This chapter covers the steps required to configure a Seam portlet within a Seam project with the help of the JBoss Portlet Tools features. One of the following two procedures can be used to create a Seam project with JBoss Portlet capabilities enabled: Create a dynamic Web project with the Seam and JBoss Portlets facets enabled (see Section 2.3.1.1, “Creating a Dynamic Web Project with Seam and JBoss Portlet Capabilities”) Create a Seam project with the JBoss Seam portlet configuration using the wizard JBoss Seam Tools provides and follow all the wizard steps to enable JBoss Portlet capabilities (see Section 2.3.1.2, “Creating a Seam Project with JBoss Portlet Capabilities”) To create a dynamic Web project with Seam and JBoss Portlet capabilities you should complete the following steps: Select. New Dynamic Web Project wizard will then be displayed.→ → → → . The Give the project a name and follow the steps 3, 4, 5 of the Section 2.1.1, “Creating a Web Project with JBoss Portlet Capabilities” procedure to set the target runtime. In the Configuration area of the first wizard page, select JBoss Seam Portlet Project v2.0. 
It will add needed facets to the project. If you now click the JavaServer Faces,Jboss Portlets and Seam facets enabled.button, you should see the The next two pages are for adjusting the project Java and Web modules. They include the default values, so you can skip them by clicking thebutton. thebutton to proceed. On the Seam Facet page, set a Seam runtime and a connection profile. For details about how to set a Seam runtime and a connection profile, see Configure Seam Facet Settings in the Chapter 2 of. Click thebutton to complete the project creation. The steps to create a Seam project with JBoss Portlet capabilities are as follows: Select New Seam Project wizard will be displayed.→ → → → . The Next steps are the same as in the Section 2.3.1.1, “Creating a Dynamic Web Project with Seam and JBoss Portlet Capabilities” procedure starting from the step 2. The previous section has demonstrated how to create a Web project with Seam and JBoss Portlet capabilities. Now you can create a Seam portlet and deploy it to JBoss Portal by following the next procedure: The Create Portlet wizard is displayed (for more information about wizard options, see Section 3.2.2, “JSF/Seam Portlet Wizard” in the guide reference). As the Seam configuration is set for the project, the wizard enters the values for the Seam portlet. Next two pages are filled out with default values, just click thebutton to proceed. On the last one click the button to complete the procedure. To deploy and run the portlet on JBoss Portal complete the steps described in the Section 2.1.3, → and then →.
http://docs.jboss.org/tools/4.0.0.Final/en/jboss_portal_tools_ref_guide/html_single/index.html
2018-01-16T18:22:01
CC-MAIN-2018-05
1516084886476.31
[]
docs.jboss.org
Reconcile freight in transportation management This article describes the freight reconciliation process. Freight reconciliation can be done manually, or it can be set up to occur automatically. To use automatic freight reconciliation, you must set up an audit master where you can define criteria that determine which freight bills are matched automatically. The freight reconciliation process Freight rates are calculated by the rate engine that is associated with the relevant shipping carrier. When a load is confirmed, a freight bill is generated, and the freight rates are transferred to it. The freight rates are apportioned as miscellaneous charges to the relevant source document (purchase order, sales order, and/or transfer order), depending on the setup that is used for the regular billing process. The freight reconciliation process (which is also known as the matching process) can start as soon as the freight invoice arrives from the shipping carrier. The invoice can be received electronically or on paper. If the invoice is received on paper, you can generate an electronic invoice by using the freight bill as a template. Manual reconciliation If you're reconciling freight manually, you must match each invoice line with the freight bill line or lines for the load that is being invoiced. You do this matching on the Freight bill and invoice matching page. If the amount on the invoice line doesn’t match the freight bill amount, you must select a reconciliation reason for the difference. If there are multiple reasons for reconciliation, you can split the unmatched amount across them. The reconciliation reason determines how the difference amounts are posted in the general ledger. When the reconciliation of the whole invoice amount is accounted for, it's submitted for approval, and then the journal is posted. The following illustration shows how to generate a freight invoice and do freight reconciliation in Microsoft Dynamics 365 for Finance and Operations. Automatic reconciliation To use automatic reconciliation, you must specify the schedule for reconciliation, and the invoices and shipping carriers to use. The matching of the invoice lines and freight bills is done according to the setup of the audit master and freight bill type. After you run the automatic reconciliation, you must handle any invoices that the system can't match. You must then process these invoices manually before you can post all the invoices for payment.
https://docs.microsoft.com/en-us/dynamics365/unified-operations/supply-chain/transportation/reconcile-freight-transportation-management
2018-01-16T17:43:44
CC-MAIN-2018-05
1516084886476.31
[array(['media/processflowforfreightreconciliation.jpg', 'Freight reconcilation tasks in Dynamics AX'], dtype=object)]
docs.microsoft.com
InvalidOperationException Constructor [ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ] Initializes a new instance of the InvalidOperationException class. This member is overloaded. For complete information about this member, including syntax, usage, and examples, click a name in the overload list. Overload List See Also Reference InvalidOperationException Class InvalidOperationException Members
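A small, hypothetical illustration of the most commonly used overload (the class and message below are made up for the example):

using System;

class Motor
{
    private bool _isRunning;

    public void Start()
    {
        if (_isRunning)
        {
            // The overload that takes a descriptive message string
            throw new InvalidOperationException("The motor is already running.");
        }
        _isRunning = true;
    }
}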
https://docs.microsoft.com/en-us/previous-versions/windows/apps/6a55sk86(v=vs.105)
2018-01-16T18:10:35
CC-MAIN-2018-05
1516084886476.31
[]
docs.microsoft.com
Maintenance and Support What is the typical MimioProjector bulb life? In Normal mode: 3500 hours. In Eco mode: 5000 hours. The MimioProjector device can be delivered in three configurations: Non-interactive (no pens) Single-pen Interactive Dual-pen Interactive What comes in the box with the MimioProjector device? The MimioProjector device will work with any source capable of producing a compliant video signal.
http://wp-docs.ru/2016/08/26/8139-%D0%BB%D0%B0%D0%BC%D0%BF%D0%B0-loomi-%D1%88%D0%B0%D0%B1%D0%BB%D0%BE%D0%BD
2018-01-16T17:35:54
CC-MAIN-2018-05
1516084886476.31
[array(['http://www.relaxon.net/images/Trafaret-zlaya-tykva.jpg', None], dtype=object) array(['http://www.cosmorelax.ru/upload/pic/2272626802742/3.jpg', None], dtype=object) ]
wp-docs.ru
Private Projects You can make a project in your portfolio private by toggling the Private switch: This will make the project detail page visible to you and to other users linked to the project, and to potential clients that you have been matched with, but it will not be publicly visible, or visible within the project search results. This is useful for scenarios where you can't share publicly certain things you've worked on due to NDAs.
http://docs.commercehero.io/article/149-private-project
2018-01-16T17:30:07
CC-MAIN-2018-05
1516084886476.31
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/57eea01e9033602e61d4a311/images/59b2d1e42c7d3a73488cb7e5/file-2cvVIh6rSh.png', None], dtype=object) ]
docs.commercehero.io
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here. Class: Aws::RDS::PendingMaintenanceAction - Inherits: - Aws::Resources::Resource - Object - Aws::Resources::Resource - Aws::RDS::PendingMaintenanceAction - Defined in: - (unknown) Instance Attribute Summary collapse - #action ⇒ String readonly The type of pending maintenance action that is available for the resource. - #auto_applied_after_date ⇒ Time readonly The date of the maintenance window when the action is applied. - #current_apply_date ⇒ Time readonly The effective date when the pending maintenance action is applied to the resource. - #description ⇒ String readonly A description providing more detail about the maintenance action. - #forced_apply_date ⇒ Time readonly The date when the maintenance action is automatically applied. - #name ⇒ String readonly - #opt_in_status ⇒ String readonly Indicates the type of opt-in request that has been received for the resource. - #target_arn ⇒ String readonly Attributes inherited from Aws::Resources::Resource Instance Method Summary collapse - #apply_immediately ⇒ ResourcePendingMaintenanceActionList - #apply_on_next_maintenance ⇒ ResourcePendingMaintenanceActionList - #initialize ⇒ Object constructor - #undo_opt_in ⇒ ResourcePendingMaintenanceActionList Methods inherited from Aws::Resources::Resource add_data_attribute, add_identifier, #data, data_attributes, #data_loaded?, identifiers, #load, #wait_until Methods included from Aws::Resources::OperationMethods #add_batch_operation, #add_operation, #batch_operation, #batch_operation_names, #batch_operations, #operation, #operation_names, #operations Constructor Details #initialize(target_arn, name, options = {}) ⇒ Object #initialize(options = {}) ⇒ Object Instance Attribute Details #action ⇒ String (readonly) The type of pending maintenance action that is available for the resource. #auto_applied_after_date ⇒ Time (readonly) The date of the maintenance window when the action is applied. The maintenance action is applied to the resource during its first maintenance window after this date. If this date is specified, any next-maintenance opt-in requests are ignored. #current_apply_date ⇒ Time (readonly) The effective date when the pending maintenance action is applied to the resource. #description ⇒ String (readonly) A description providing more detail about the maintenance action. #forced_apply_date ⇒ Time (readonly) The date when the maintenance action is automatically applied. The maintenance action is applied to the resource on this date regardless of the maintenance window for the resource. If this date is specified, any immediate opt-in requests are ignored. #name ⇒ String (readonly) #opt_in_status ⇒ String (readonly) Indicates the type of opt-in request that has been received for the resource.
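A hedged usage sketch based only on the constructor and methods listed above (the ARN, action name and region are placeholders, not values from this page):

require 'aws-sdk'

# Construct the resource from its identifiers, per #initialize(target_arn, name, options = {}).
action = Aws::RDS::PendingMaintenanceAction.new(
  'arn:aws:rds:us-east-1:123456789012:db:example-instance',  # target_arn (hypothetical)
  'system-update',                                            # name (hypothetical)
  client: Aws::RDS::Client.new(region: 'us-east-1')
)

puts action.description
puts action.current_apply_date

# Opt in so the action runs during the next maintenance window.
action.apply_on_next_maintenance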
https://docs.aws.amazon.com/sdkforruby/api/Aws/RDS/PendingMaintenanceAction.html
2018-01-16T17:49:27
CC-MAIN-2018-05
1516084886476.31
[]
docs.aws.amazon.com
Throw [oref]

Try
   SET x=2
   PRINTLN "about to divide by ",x
   Throw
   SET a=7/x
   PRINTLN "Success: the result is ",a
Catch myvar
   PRINTLN "this is the exception handler"
   PRINTLN "Error number: ",Err.Number
   PRINTLN "Error is: ",Err.Description
   PRINTLN "Error code: ",myvar.Code
END Try
PRINTLN "this is where the code falls through"
http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RBAS_cthrow
2017-04-23T05:35:44
CC-MAIN-2017-17
1492917118477.15
[]
docs.intersystems.com
Math library¶ The math library falls into two categories: vector math and computational geometry. All of the following functionality is present inside the Kriti::Math namespace, and as such this will be dropped from the descriptions for brevity. Vector math¶ TODO: fill out There are several useful classes in the vector math library. The first, and most-oft-used, is Vector. This represents a three-dimensional vector, and has overloaded operators for most common operations. Other member functions that may be of interest are length(), length2(), cross(), dot(), projectOnto(), and the toString() function. Individual components are accessible by vector.x() etc. as well as vector[1]. There is also a Point class, which inherits from Vector. This is intended for use in places where you distinctly have a location in 3D space as opposed to a difference between locations. Due to the inheritance, a Point can be used anywhere a Vector can, but not vice-versa. The Matrix class represents a standard 4x4 matrix, stored in column-major ordering internally. Components are accessible by matrix(0, 3), which accesses the last element in the first column. Note that the Matrix class has overloaded operators, and in particular the Matrix * Point operator behaves differently than the Matrix * Vector operator. Finally, the Quaternion class is present, providing a full library for the use of unit quaternions for representing rotations. Useful functionality is present in the overloaded operators, the toMatrix() function, and the slerp() function. The constructor takes an axis vector and an angle to use to represent the initial rotation. TODO: add view calculation etc. Computational geometry¶ The computational geometry library is currently not yet complete and includes basic 2D geometry functionality, such as: - Geometry::closestPoint: finds the closest point on a line to another point p. - Geometry::closestSegmentPoint: finds the closest point on a line segment to another point p. - Geometry::intersectAARect: calculates the intersection of two axis-aligned rectangles and returns another axis-aligned rectangle. TODO: finish filling out
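To make the class descriptions above concrete, here is a small hypothetical C++ sketch; the names follow the text, but the header paths and constructor signatures are assumptions to check against the actual Kriti headers:

#include <iostream>
// Hypothetical header paths; the real layout of the Kriti headers may differ.
#include "kriti/math/Vector.h"
#include "kriti/math/Point.h"
#include "kriti/math/Matrix.h"
#include "kriti/math/Quaternion.h"

using namespace Kriti::Math;

int main() {
    // Component-wise construction is assumed here.
    Vector a(1.0, 0.0, 0.0), b(0.0, 1.0, 0.0);

    Vector sum = a + b;            // overloaded operators
    double len = sum.length();     // length(), length2() also available
    Vector c = a.cross(b);         // cross product
    double d = a.dot(b);           // dot product
    double x = a.x();              // component access by name...
    double y = a[1];               // ...or by index

    // A Point is-a Vector, so it can be passed wherever a Vector is expected.
    Point p(1.0, 2.0, 3.0);

    // Quaternion from an axis and an angle, then converted to a Matrix.
    Quaternion q(Vector(0.0, 0.0, 1.0), 3.14159265 / 2.0);
    Matrix m = q.toMatrix();

    // Matrix * Point and Matrix * Vector behave differently by design
    // (translation applies to Points, not to direction Vectors).
    Point movedPoint = m * p;
    Vector rotatedDir = m * c;

    std::cout << movedPoint.toString() << "\n";
    std::cout << len << " " << d << " " << x << " " << y << " "
              << rotatedDir.toString() << "\n";
    return 0;
}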
http://kriti.readthedocs.io/en/latest/math.html
2017-04-23T05:23:05
CC-MAIN-2017-17
1492917118477.15
[]
kriti.readthedocs.io
9 years ago by erin_yueh@… - Status changed from new to assigned
John updated this gsmd script and uploaded it to OE. Thanks! Werner! :)
comment:4 Changed 9 years ago by hiciu
What about this: ?
comment:5 Changed 8 years ago by john_lee - Status changed from assigned to closed - Resolution set to wontfix - HasPatchForReview unset - Component changed from openmoko-dialer to hardware
hardware issue. refer to
http://docs.openmoko.org/trac/ticket/1322
2017-04-23T05:29:10
CC-MAIN-2017-17
1492917118477.15
[]
docs.openmoko.org
Value  Description
A  Aggregate function
B  Combined aggregate and ordered analytical function
C  Table operator parser contract function
D  JAR
E  External stored procedure
F  Standard function
G  Trigger
H  Instance or constructor method
I  Join index
J  Journal
K  Foreign server object. K is supported on the Teradata-to-Hadoop and Teradata-to-Teradata connectors.
L  User-defined table operator
M  Macro
N  Hash index
O  Table with no primary index and no partitioning
P  Stored procedure
Q  Queue table
R  Table function
S  Ordered analytical function
T  Table with a primary index or primary AMP index, partitioning, or both. Or a partitioned table with NoPI
U  User-defined type
V  View
X  Authorization
Y  GLOP set
Z  UIF
1  A DATASET schema object created by CREATE SCHEMA.
2  Function alias object.
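As a hedged example of how this column is typically consulted, the data dictionary view DBC.TablesV exposes TableKind alongside the object names (the database name below is a placeholder; verify column availability against your release):

-- List the views ('V') and macros ('M') in one database, using the codes above.
SELECT DatabaseName, TableName, TableKind
FROM DBC.TablesV
WHERE DatabaseName = 'my_database'
  AND TableKind IN ('V', 'M')
ORDER BY TableKind, TableName;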
https://docs.teradata.com/r/Teradata-VantageTM-Data-Dictionary/July-2021/View-Column-Values/TableKind-Column
2022-06-25T13:50:52
CC-MAIN-2022-27
1656103035636.10
[]
docs.teradata.com
Navigation¶ Screenshots¶ Content element with the type [Start: menus]. You can select one of 11 possible menus. In the illustration above, the top bar - the main navigation - is selected. See the result in the frontend in the illustration below. Menus¶ Start provides 11 menus. Usually they are responsive. Selected pages: If you enter a value, several menus will use the item (or items) to build the menu. If the label of a menu is extended with "from directory", a selected page will be handled as a directory.
https://docs.typo3.org/p/netzmacher/start/main/en-us/Users/BestPractice/Navigation/Index.html
2022-06-25T13:13:11
CC-MAIN-2022-27
1656103035636.10
[]
docs.typo3.org
Note When using a remixed MP4 as input, it's better not rely on track ID's to select tracks, but use --track_type and --track_filter instead. This is because the order in which Remix puts tracks in a remixed MP4 should not be relied on (because the logic it uses to define the order may change). - --track_filter¶ This option allows the use of the Using dynamic track selection functionality to select input tracks from a source. #!/bin/bash mp4split -o video_1000k.ismv \ example.mp4 --track_type=video --track_filter='(systemBitrate==1000000 && FourCC == "AVC1")' mp4split -o audio_description.isma \ example.mp4 --track_type=audio --track_filter='(roles contains "description")' mp4split -o regular_audio_aac.isma\ example.mp4 --track_type=audio --track_filter='!(roles contains "description")' using --key command-line option). Or using a CPIX document: #!/bin/bash mp4split -o decrypted.mp4 --decrypt_cpix=filename.cpix encrypted.mp4 This is similar to how you use the '--decrypt_cpix=filename.cpix' option to decrypt a stream using Capture (see: Decrypt using CPIX with --decrypt. If the S3 account requires using temporary security tokens, as provided by AWS Identity and Access Management (see Temporary security credentials in IAM), specify the security token using the --s3_security_token option. Should there be a need to run mp4split or unified_remix locally on an ec2 instance to package or prepare media a shell function() can be used to replace the existing mp4split command enabling the S3 security token to be used. The following example demonstrates how this can be achieved. For unified_remix simply duplicate the following functions() and substitute all instances of mp4split with unified_remix. mp4split(){ # Define imdsv2 endpoint REQUEST=$(curl -s -X GET -o /dev/null -w "%{http_code}") # Define if statement to check imdsv2 endpoint is available. # Endpoint will only be available if a instance-profile has been assigned to the ec2 via either the console or launch-template. if [[ $REQUEST == "200" ]]; then # Define imdsv2 variable to obtain credentials TOKEN=$(curl -s -X PUT "" -H "X-aws-ec2-metadata-token-ttl-seconds: 60") ROLE=$(curl -s -X GET -H "X-aws-ec2-metadata-token: $TOKEN") CREDS=$(curl -s -X GET -H "X-aws-ec2-metadata-token: $TOKEN""$ROLE") # Define temp variables to populate with newly obtains credentials S3_ACCESS_KEY=$( jq -r '.AccessKeyId' <<< "${CREDS}" ) S3_SECRET_KEY=$( jq -r '.SecretAccessKey' <<< "${CREDS}" ) S3_SECURITY_TOKEN=$( jq -r '.Token' <<< "${CREDS}" ) # Invoke mp4slit binary with s3 options /usr/bin/mp4split \ --s3_access_key=${S3_ACCESS_KEY} \ --s3_secret_key=${S3_SECRET_KEY} \ --s3_security_token=${S3_SECURITY_TOKEN} \ "$@" else # Invoke mp4split binary without s3 options /usr/bin/mp4split "$@" fi } A source file located on a secure S3 bucket with limited access using the role my-test-role can then be read to generate a server manifest file without the need to manually invoke the --s3_security_token option. mp4split --s3_region=eu-west-1 -o foo.ism mp4split version=1.11.13 (25718) Copyright 2007-2022 CodeShop B.V. 
I0.060 Manifest
I0.060 Track 1:
I0.060 src=tears-of-steel-avc1-1000k.cmfv
I0.060 video bitrate=1001000/1144232 name=video_eng
I0.060 id=1 timescale=12288 lang=en
I0.060 vide/avc1 dref=1 bitrate=1001000/1144232 size=784x350 sar=1:1 dar=56:25 codecs=avc1.4D401F
I0.060 writing 1 buckets for a total of 1094 bytes
I0.060 stat: url=, reads=2, size=131 KB
Status: 200 FMP4_OK
By default, mp4split will add AWS S3 authentication specific query parameters to the URL, in addition to any existing parameters. To make mp4split use HTTP headers for authentication instead, use the --s3_use_headers option.
Warning
Please be reminded that the majority of S3 buckets require requests to be authenticated using Signature version 4. Therefore the --s3_region option needs to be used to match the location of the bucket being accessed. For more information please see AWS Signature v4.
Note
For more information on how to obtain S3 security tokens please see AWS Security Tokens.
The same options can be used for any storage system with an S3 compatible API, e.g. MinIO or Ceph RGW.
404 - FMP4_HLS_FRAGMENT_NOT_FOUND¶
See 404 - FMP4_ISS_FRAGMENT_NOT_FOUND.
404 - FMP4_HDS_FRAGMENT_NOT_FOUND¶
See 404 - FMP4_ISS_FRAGMENT_NOT_FOUND.
404 - FMP4_MPD_FRAGMENT_NOT_FOUND¶
See 404 - FMP4_ISS_FRAGMENT_NOT_FOUND.
412 - FMP4_ISS_FRAGMENT_NOT_YET_AVAILABLE¶
This is a special error code for live smooth streaming. It means the client request is too far ahead of the encoder stream.
404 - FMP4_HLS_FRAGMENT_NOT_YET_AVAILABLE¶
See 412 - FMP4_ISS_FRAGMENT_NOT_YET_AVAILABLE.
404 - FMP4_MPD_FRAGMENT_NOT_YET_AVAILABLE¶
See 412 - FMP4_ISS_FRAGMENT_NOT_YET_AVAILABLE.
503 - FMP4_HDS_FRAGMENT_NOT_YET_AVAILABLE¶
See 412 - FMP4_ISS_FRAGMENT_NOT_YET_AVAILABLE.
https://beta.docs.unified-streaming.com/documentation/package/usage.html
2022-06-25T14:52:03
CC-MAIN-2022-27
1656103035636.10
[]
beta.docs.unified-streaming.com
The Napatech stateful flow management supports a wide range of features and capabilities. Supported speeds and flow capacities - NT200A02: 2 × 100 Gbit/s, 2 × 40 Gbit/s or 8 × 10 Gbit/s wire speed processing. - NT100A01: 4 × 25/10 Gbit/s or 4 × 10/1 Gbit/s wire speed processing. - The flow table capacity: 140 M bidirectional flows on NT200A02 (64-byte flow records, using the default 10.5 Gbytes in the onboard SDRAM), 90 M bidirectional flows on NT100A01 (64-byte flow records, using the default 7 Gbytes in the onboard SDRAM). - Between 85 M and 130 M lookups per second (LPS), depending on the level of metrics collection. Other supported features - Learning rate exceeds 1 M flows/s using 1 CPU core and reaches a maximum just above 3 M flows/s using multiple CPU cores. - Based on fast DMA access. - Support for up to 256 RX network streams per system performing learning in parallel. - Distribute traffic to a maximum of 128 host buffers per SmartNIC. - Full stateful operation with flow-record updates on a per-frame basis. - Flow termination based on TCP state, timeout or application. - Flow info records can be generated. The application can use flow info records generating NetFlow/IPFIX. - Flow info record generation can be enabled on a per-flow basis. - Zero packet loss. Frames that cannot be looked up by the SmartNIC can be handled in the application. - Fast path forward latency is below 3.5 µs. Software and API support Napatech stateful flow management provides the software packages including the driver and Napatech proprietary API as well as sample applications.
https://docs.napatech.com/r/Stateful-Flow-Management/Stateful-Flow-Management-Capabilities
2022-06-25T13:13:02
CC-MAIN-2022-27
1656103035636.10
[]
docs.napatech.com
Tintri VMstore is a smart storage that sees, learns, and adapts for cloud and virtualization. The Tintri Block Storage driver interacts with configured VMstore running Tintri OS 4.0 and above. It supports various operations using Tintri REST APIs and NFS protocol. To configure the use of a Tintri VMstore with Block Storage, perform the following actions: Edit the etc/cinder/cinder.conf file and set the cinder.volume.drivers.tintri options: volume_driver=cinder.volume.drivers.tintri.TintriDriver # Mount options passed to the nfs client. See section of the # nfs man page for details. (string value) nfs_mount_options = vers=3,lookupcache=pos # # Options defined in cinder.volume.drivers.tintri # # The hostname (or IP address) for the storage system (string # value) tintri_server_hostname = {Tintri VMstore Management IP} # User name for the storage system (string value) tintri_server_username = {username} # Password for the storage system (string value) tintri_server_password = {password} # API version for the storage system (string value) # tintri_api_version = v310 # Following options needed for NFS configuration # File with the list of available nfs shares (string value) # nfs_shares_config = /etc/cinder/nfs_shares # Tintri driver will clean up unused image snapshots. With the following # option, users can configure how long unused image snapshots are # retained. Default retention policy is 30 days # tintri_image_cache_expiry_days = 30 # Path to NFS shares file storing images. # Users can store Glance images in the NFS share of the same VMstore # mentioned in the following file. These images need to have additional # metadata ``provider_location`` configured in Glance, which should point # to the NFS share path of the image. # This option will enable Tintri driver to directly clone from Glance # image stored on same VMstore (rather than downloading image # from Glance) # tintri_image_shares_config = <Path to image NFS share> # # For example: # Glance image metadata # provider_location => # nfs://<data_ip>/tintri/glance/84829294-c48b-4e16-a878-8b2581efd505 Edit the /etc/nova/nova.conf file and set the nfs_mount_options: [libvirt] nfs_mount_options = vers=3 Edit the /etc/cinder/nfs_shares file and add the Tintri VMstore mount points associated with the configured VMstore management IP in the cinder.conf file: {vmstore_data_ip}:/tintri/{submount1} {vmstore_data_ip}:/tintri/{submount2} Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/newton/config-reference/block-storage/drivers/tintri-volume-driver.html
2022-06-25T13:42:36
CC-MAIN-2022-27
1656103035636.10
[]
docs.openstack.org
Upload access providers To upload your access provider configuration, you need a fauna/providers directory containing .ts/ .js files that hold your configuration information. These files look like the following example: import { query as q } from "faunadb"; export default { name: "auth0", issuer: "https://<your-auth0-domain>.auth0.com", jwks_uri: "https://<your-auth0-domain>.auth0.com/.well-known/jwks.json", roles: [ q.Role("user") ] } The issuer domain can be found in your Auth0 dashboard, and the jwks_uri is simply that domain with /.well-known/jwks.json appended. When uploading the access provider, an audience url will be logged to the console. This audience URL should be used in the identifier field when creating a new API in the Auth0 dashboard. Refer to Setting up SSO authentication in Fauna with Auth0 by Brecht De Rooms for more in depth instructions.
https://fgu-docs.com/usage/upload-access-providers/
2022-06-25T14:50:05
CC-MAIN-2022-27
1656103035636.10
[]
fgu-docs.com
Source code for spack.compilers.nag
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from typing import List  # novm

import spack.compiler


class Nag(spack.compiler.Compiler):
    # Subclasses use possible names of C compiler
    cc_names = []  # type: List[str]

    # Subclasses use possible names of C++ compiler
    cxx_names = []  # type: List[str]

    # Subclasses use possible names of Fortran 77 compiler
    f77_names = ['nagfor']

    # Subclasses use possible names of Fortran 90 compiler
    fc_names = ['nagfor']

    # Named wrapper links within build_env_path
    # Use default wrappers for C and C++, in case provided in compilers.yaml
    link_paths = {
        'cc': 'cc',
        'cxx': 'c++',
        'f77': 'nag/nagfor',
        'fc': 'nag/nagfor'}

    version_argument = '-V'
    version_regex = r'NAG Fortran Compiler Release ([0-9.]+)'

    @property
    def verbose_flag(self):
        # NAG does not support a flag that would enable verbose output and
        # compilation/linking at the same time (with either '-#' or '-dryrun'
        # the compiler only prints the commands but does not run them).
        # Therefore, the only thing we can do is to pass the '-v' argument to
        # the underlying GCC. In order to get verbose output from the latter
        # at both compile and linking stages, we need to call NAG with two
        # additional flags: '-Wc,-v' and '-Wl,-v'. However, we return only
        # '-Wl,-v' for the following reasons:
        #   1) the interface of this method does not support multiple flags in
        #      the return value and, at least currently, verbose output at the
        #      linking stage has a higher priority for us;
        #   2) NAG is usually mixed with GCC compiler, which also accepts
        #      '-Wl,-v' and produces meaningful result with it: '-v' is passed
        #      to the linker and the latter produces verbose output for the
        #      linking stage ('-Wc,-v', however, would break the compilation
        #      with a message from GCC that the flag is not recognized).
        #
        # This way, we at least enable the implicit rpath detection, which is
        # based on compilation of a C file (see method
        # spack.compiler._get_compiler_link_paths): in the case of a mixed
        # NAG/GCC toolchain, the flag will be passed to g++ (e.g.
        # 'g++ -Wl,-v ./main.c'), otherwise, the flag will be passed to nagfor
        # (e.g. 'nagfor -Wl,-v ./main.c' - note that nagfor recognizes '.c'
        # extension and treats the file accordingly). The list of detected
        # rpaths will contain only GCC-related directories and rpaths to
        # NAG-related directories are injected by nagfor anyway.
        return "-Wl,-v"

    @property
    def openmp_flag(self):
        return "-openmp"

    @property
    def debug_flags(self):
        return ['-g', '-gline', '-g90']

    @property
    def opt_flags(self):
        return ['-O', '-O0', '-O1', '-O2', '-O3', '-O4']

    @property
    def cxx11_flag(self):
        # NAG does not have a C++ compiler
        # However, it can be mixed with a compiler that does support it
        return "-std=c++11"

    @property
    def f77_pic_flag(self):
        return "-PIC"

    @property
    def fc_pic_flag(self):
        return "-PIC"

    # Unlike other compilers, the NAG compiler passes options to GCC, which
    # then passes them to the linker. Therefore, we need to doubly wrap the
    # options with '-Wl,-Wl,,'
    @property
    def f77_rpath_arg(self):
        return '-Wl,-Wl,,-rpath,,'

    @property
    def fc_rpath_arg(self):
        return '-Wl,-Wl,,-rpath,,'

    @property
    def linker_arg(self):
        return '-Wl,-Wl,,'

    @property
    def disable_new_dtags(self):
        # Disable RPATH/RUNPATH forcing for NAG/GCC mixed toolchains:
        return ''

    @property
    def enable_new_dtags(self):
        # Disable RPATH/RUNPATH forcing for NAG/GCC mixed toolchains:
        return ''
https://spack.readthedocs.io/en/v0.17.0/_modules/spack/compilers/nag.html
2022-06-25T13:47:58
CC-MAIN-2022-27
1656103035636.10
[]
spack.readthedocs.io
Working with PDF Pages in C#
Aspose.PDF for .NET lets you insert a page into a PDF document at any location in the file, as well as add pages to the end of a PDF file. This section shows how to add pages to a PDF without Acrobat Reader. You can add text or images in the headers and footers of your PDF file, and choose different headers in your document with the C# library by Aspose. You can also crop pages in a PDF document programmatically using C#. This section shows you how to add watermarks to your PDF file using the Artifact class, with a programming sample for the task. Add page numbers using the PageNumberStamp class. To add a stamp to your document, use the ImageStamp and TextStamp classes. Adding a watermark can also be used to create background images in your PDF file with Aspose.PDF for .NET. You are able to do the following:
- Add Pages - add pages at a desired location or to the end of a PDF file, and delete a page from your document.
- Move Pages - move pages from one document to another.
- Delete Pages - delete a page from your PDF file using the PageCollection collection.
- Change Page Size - you can change the PDF page size with a code snippet using the Aspose.PDF library.
- Rotate Pages - you can change the page orientation of pages in an existing PDF file.
- Split Pages - you can split PDF files into one or multiple PDFs.
- Add Headers and/or Footers - add text or images in the headers and footers of your PDF file.
- Crop Pages - you can crop pages in a PDF document programmatically with different page properties.
- Add Watermarks - add watermarks to your PDF file with the Artifact class.
- Add Page Numbering in PDF File - the PageNumberStamp class will help you add a page number to your PDF file.
- Add Backgrounds - background images can be used to add a watermark.
- Stamping - you can use the ImageStamp class to add an image stamp to a PDF file and the TextStamp class to add text.
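As a rough sketch of the page-manipulation tasks listed above (class and method names follow Aspose.PDF for .NET conventions, but the exact signatures and the file names used here are assumptions to verify against the current API reference):

// Hypothetical sketch: open a PDF, append a page, insert one at a specific
// position, delete a page, and save the result.
using Aspose.Pdf;

class PageSample
{
    static void Main()
    {
        // File names are placeholders.
        Document document = new Document("input.pdf");

        // Add an empty page to the end of the document.
        Page appended = document.Pages.Add();

        // Insert an empty page at position 2.
        Page inserted = document.Pages.Insert(2);

        // Delete the first page.
        document.Pages.Delete(1);

        document.Save("output.pdf");
    }
}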
https://docs.aspose.com/pdf/net/working-with-pages/
2022-06-25T13:49:10
CC-MAIN-2022-27
1656103035636.10
[]
docs.aspose.com
- When to use feature flags
- Feature flags in GitLab development
- Risk of a broken main branch
- Types of feature flags
- Feature flag definition and validation
- Create a new feature flag
- Delete a feature flag
- Develop with a feature flag
- Changelog
- Feature flags in tests
For an overview of the feature flag lifecycle, or if you need help deciding if you should use a feature flag or not, please see the feature flag lifecycle handbook page.
When to use feature flags
Moved to the "When to use feature flags" section in the handbook.
All newly-introduced feature flags must be disabled by default. Features that are developed and merged behind a feature flag should not include a changelog entry. The entry should be added either in the merge request removing the feature flag or the merge request where the default value of the feature flag is set to enabled. If the feature contains any database migrations, it should include a changelog entry for the database changes.
--ee flag: bin/feature-flag --ee
Risk of a broken master (main) branch
rspec:feature-flags job that only runs on the master branch.
When using a feature flag for UI elements, make sure to also use a feature flag for the underlying backend code, if there is any. This ensures there is absolutely no way to use the feature until it is enabled. Please see Feature flag controls for more details on working with feature flags.
Include automated tests for all code affected by a feature flag, both when enabled and disabled, to ensure the feature works properly. If automated tests are not included for both states, the functionality associated with the untested code path should be manually tested before deployment to production.
When using the testing environment, all feature flags are enabled by default. The behavior of FlipperGate is as follows:
- You can enable an override for a specified actor to be enabled.
- You can disable (remove) an override for a specified actor, falling back to the default state.
- There's no way to model that you explicitly disabled.
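As a hedged sketch of the usual development pattern (the flag name and actor are placeholders, and the canonical helpers should be taken from the GitLab development docs rather than this sketch):

# Guard the new code path behind a flag that is disabled by default.
def widget_enabled?(project)
  Feature.enabled?(:my_widget_flag, project)
end

if widget_enabled?(project)
  render_new_widget
else
  render_legacy_widget
end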
https://docs.gitlab.com/14.10/ee/development/feature_flags/
2022-06-25T13:34:03
CC-MAIN-2022-27
1656103035636.10
[]
docs.gitlab.com
sDP-1000, sDP-1000EX Battery Replacement Instruction Remove the outer four Phillips screws carefully Remove the outer two connectors by pulling down. Leave the center connector alone. Carefully pry the battery packs off the back plate using a tool such as a straight razor. Remove residual tape on the back-plate. Remove peels of tape from the new battery pack. Center it carefully on the two holes, making a little space between the two batteries. Wrap batteries and back-plate with the provided nylon reinforced tape. Screw the back-plate with batteries to the battery holder plate. Hold the new battery pack at the angle like the image above, and plug the outer (battery) terminals in. Screw the battery holder plate (outer four) screws in place, and make sure the foot is positioned in the lower and outside corner.
https://docs.sotm-audio.com/doku.php?id=en:how_to_replace_sdp-1000_battery_pack
2022-06-25T13:49:06
CC-MAIN-2022-27
1656103035636.10
[]
docs.sotm-audio.com
Disaster recovery Sitecore Content Hub™ provides business continuity and disaster recovery documentation for the hosted service as well as a business continuity runbook that defines: - The length of time the system remains inaccessible before a business continuity event initiates. - The process for restoration. - The ownership of the process. Note We update the disaster recovery documentation every three months and test the strategy no less than once a year.
https://docs.stylelabs.com/contenthub/4.1.x/content/user-documentation/security/business-continuity/disaster-recovery.html
2022-06-25T13:11:12
CC-MAIN-2022-27
1656103035636.10
[]
docs.stylelabs.com
Arithmetic Operations An arithmetic expression is an expression that results in a TealType.uint64 value. In PyTeal, arithmetic expressions include integer and boolean operators (booleans are the integers 0 or 1). The table below summarized all arithmetic expressions in PyTeal. Most of the above operations take two TealType.uint64 values as inputs. In addition, Eq(a, b) ( ==) and Neq(a, b) ( !=) also work for byte slices. For example, Arg(0) == Arg(1) and Arg(0) != Arg(1) are valid PyTeal expressions. Both And and Or also support more than 2 arguments when called as functions: - And(a, b, ...) - Or(a, b, ...) The associativity and precedence of the overloaded Python arithmetic operators are the same as the original python operators . For example: - Int(1) + Int(2) + Int(3)is equivalent to Add(Add(Int(1), Int(2)), Int(3)) - Int(1) + Int(2) * Int(3)is equivalent to Add(Int(1), Mul(Int(2), Int(3))) Byteslice Arithmetic Byteslice arithemetic is available for Teal V4 and above. Byteslice arithmetic operators allow up to 512-bit arithmetic. In PyTeal, byteslice arithmetic expressions include TealType.Bytes values as arguments (with the exception of BytesZero) and must be 64 bytes or less. The table below summarizes the byteslize arithmetic operations in PyTeal. Currently, byteslice arithmetic operations are not overloaded, and must be explicitly called. Bit and Byte Operations In addition to the standard arithmetic operators above, PyTeal also supports operations that manipulate the individual bits and bytes of PyTeal values. To use these operations, you’ll need to provide an index specifying which bit or byte to access. These indexes have different meanings depending on whether you are manipulating integers or byte slices: For integers, bit indexing begins with low-order bits. For example, the bit at index 4 of the integer 16 ( 000...0001000in binary) is 1. Every other index has a bit value of 0. Any index less than 64 is valid, regardless of the integer’s value. Byte indexing is not supported for integers. For byte strings, bit indexing begins at the first bit. For example, the bit at index 0 of the base16 byte string 0xf0( 11110000in binary) is 1. Any index less than 4 has a bit value of 1, and any index 4 or greater has a bit value of 0. Any index less than 8 times the length of the byte string is valid. Likewise, byte indexing begins at the first byte of the string. For example, the byte at index 0 of that the base16 string 0xff00( 1111111100000000in binary) is 255 ( 111111111in binary), and the byte at index 1 is 0. Any index less than the length of the byte string is valid. Bit Manipulation The GetBit expression can extract individual bit values from integers and byte strings. For example, GetBit(Int(16), Int(0)) # get the 0th bit of 16, produces 0 GetBit(Int(16), Int(4)) # get the 4th bit of 16, produces 1 GetBit(Int(16), Int(63)) # get the 63rd bit of 16, produces 0 GetBit(Int(16), Int(64)) # get the 64th bit of 16, invalid index GetBit(Bytes("base16", "0xf0"), Int(0)) # get the 0th bit of 0xf0, produces 1 GetBit(Bytes("base16", "0xf0"), Int(7)) # get the 7th bit of 0xf0, produces 0 GetBit(Bytes("base16", "0xf0"), Int(8)) # get the 8th bit of 0xf0, invalid index Additionally, the SetBit expression can modify individual bit values from integers and byte strings. 
For example, SetBit(Int(0), Int(4), Int(1)) # set the 4th bit of 0 to 1, produces 16 SetBit(Int(4), Int(0), Int(1)) # set the 0th bit of 4 to 1, produces 5 SetBit(Int(4), Int(0), Int(0)) # set the 0th bit of 4 to 0, produces 4 SetBit(Bytes("base16", "0x00"), Int(0), Int(1)) # set the 0th bit of 0x00 to 1, produces 0x80 SetBit(Bytes("base16", "0x00"), Int(3), Int(1)) # set the 3rd bit of 0x00 to 1, produces 0x10 SetBit(Bytes("base16", "0x00"), Int(7), Int(1)) # set the 7th bit of 0x00 to 1, produces 0x01 Byte Manipulation In addition to manipulating bits, individual bytes in byte strings can be manipulated. The GetByte expression can extract individual bytes from byte strings. For example, GetByte(Bytes("base16", "0xff00"), Int(0)) # get the 0th byte of 0xff00, produces 255 GetByte(Bytes("base16", "0xff00"), Int(1)) # get the 1st byte of 0xff00, produces 0 GetByte(Bytes("base16", "0xff00"), Int(2)) # get the 2nd byte of 0xff00, invalid index GetByte(Bytes("abc"), Int(0)) # get the 0th byte of "abc", produces 97 (ASCII 'a') GetByte(Bytes("abc"), Int(1)) # get the 1st byte of "abc", produces 98 (ASCII 'b') GetByte(Bytes("abc"), Int(2)) # get the 2nd byte of "abc", produces 99 (ASCII 'c') Additionally, the SetByte expression can modify individual bytes in byte strings. For example, SetByte(Bytes("base16", "0xff00"), Int(0), Int(0)) # set the 0th byte of 0xff00 to 0, produces 0x0000 SetByte(Bytes("base16", "0xff00"), Int(0), Int(128)) # set the 0th byte of 0xff00 to 128, produces 0x8000 SetByte(Bytes("abc"), Int(0), Int(98)) # set the 0th byte of "abc" to 98 (ASCII 'b'), produces "bbc" SetByte(Bytes("abc"), Int(1), Int(66)) # set the 1st byte of "abc" to 66 (ASCII 'B'), produces "aBc"
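The byteslice arithmetic operators described earlier are called as ordinary functions rather than overloaded operators. A short hedged sketch (expression names follow the PyTeal byteslice API; the operand values are placeholders, and results should be checked against your PyTeal version):

from pyteal import Bytes, BytesAdd, BytesGt, BytesMul, BytesZero, Int

# Wide operands encoded as byte strings (values are placeholders).
a = Bytes("base16", "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF")
b = Bytes("base16", "0x00000000000000000000000000000001")

sum_ab = BytesAdd(a, b)      # byteslice addition, results up to 512 bits
prod = BytesMul(a, b)        # byteslice multiplication
is_greater = BytesGt(a, b)   # comparison, evaluates to a uint64 (0 or 1)
zeros = BytesZero(Int(32))   # a 32-byte string of zero bytes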
https://pyteal.readthedocs.io/en/stable/arithmetic_expression.html
2022-06-25T13:36:59
CC-MAIN-2022-27
1656103035636.10
[]
pyteal.readthedocs.io
Network XML format - Element and attribute overview This page provides an introduction to the network port XML format. This stores information about the connection between a virtual interface of a virtual domain, and the virtual network it is attached to. Element and attribute overview ¶ The root element required for all virtual network ports is named networkport and has no configurable attributes The network port XML format is available since 5.5.0 General metadata ¶'/> ... uuid - The content of the uuidelement ownernode records the domain object that is the owner of the network port. It contains two child nodes: uuid - The content of the uuidelement provides a globally unique identifier for the virtual domain. name - The unique name of the virtual domain group - The port group in the virtual network to which the port belongs. Can be omitted if no port groups are defined on the network. mac - The addressattribute provides the MAC address of the virtual port that will be see by the guest. The MAC address must not start with 0xFE as this byte is reserved for use on the host side of the port. Common elements ¶ The following elements are common to one or more of the plug types listed later ... <bandwidth> <inbound average='1000' peak='5000' floor='200' burst='1024'/> <outbound average='128' peak='256' burst='256'/> </bandwidth> <rxfilters trustGuest='yes'/> <virtualport type='802.1Qbg'> <parameters managerid='11' typeid='1193047' typeidversion='2'/> </virtualport> ... bandwidth - This part of the network port XML provides setting quality of service. Incoming and outgoing traffic can be shaped independently. The bandwidthelement and its child elements are described in the QoS section of the Network XML. In addition the classIDattribute may exist to provide the ID of the traffic shaping class that is active. rxfilters - The rxfilterselement property trustGuestprovides the capability for the host to detect and trust reports from the guest regarding changes to the interface mac address and receive filters by setting the attribute to yes. The default setting for the attribute is nofor security reasons and support depends on the guest network device model as well as the type of connection on the host - currently it is only supported for the virtio device model and for macvtap connections on the host. virtualport - The virtualportelement describes metadata that needs to be provided to the underlying network subsystem. It is described in the domain XML interface documentation. Plugs ¶ The plug element has varying content depending on the value of the type attribute. Network ¶ The network plug type refers to a managed virtual network plug that is based on a traditional software bridge device privately managed by libvirt. ... <plug type='network' bridge='virbr0'/> ... The bridge attribute provides the name of the privately managed bridge device associated with the virtual network. Bridge ¶ The bridge plug type refers to an externally managed traditional software bridge. ... <plug type='bridge' bridge='br2'/> ... The bridge attribute provides the name of the externally managed bridge device associated with the virtual network. Direct ¶. Host PCI ¶.
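Putting the elements above together, a complete network port document might look like the following sketch; the UUIDs, domain name, port group and MAC address are illustrative placeholders, not values taken from this page:

<networkport>
  <uuid>2cdbcdbc-4e68-44cf-9906-43f3f7a1f2c5</uuid>
  <owner>
    <uuid>06578fc1-c686-46fa-bc2c-220893b466a6</uuid>
    <name>myguest</name>
  </owner>
  <group>web</group>
  <!-- Must not start with 0xFE; that byte is reserved for the host side. -->
  <mac address='52:54:00:5d:c7:9e'/>
  <bandwidth>
    <inbound average='1000' peak='5000' burst='1024'/>
    <outbound average='128' peak='256' burst='256'/>
  </bandwidth>
  <plug type='network' bridge='virbr0'/>
</networkport>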
https://docs.virtuozzo.com/libvirt-docs-5.6.0/html/formatnetworkport.html
2022-06-25T14:45:05
CC-MAIN-2022-27
1656103035636.10
[]
docs.virtuozzo.com
While configuring business policies, you can select the existing object groups to match the source or destination. This includes the range of IP addresses or port numbers available in the object groups in the business policy definition. Procedure - In the Enterprise portal, click - Select a profile from the list and click the Business Policy tab. - Click New Rule or . - Enter a name for the business rule. - In the Match area, click Object Group for the source. - Select the relevant Address Group and Port Group from the drop-down list.If the selected address group contains any domain names, then they would be ignored when matching for the source. - If required, you can select the Address and Port Groups for the destination as well. - Choose the other actions as required and click OK. Results - Navigate to Business Policy tab., select an Edge, and click the - Click New Rule or . - Define the rule with relevant object groups and other actions. The Business Policy tab of the Edge displays the policies from the associated profile along with the policies specific to the Edge. Note: By default, the business policies are assigned to the global segment. If required, you can choose a segment from the Select Segment drop-down and create business policies specific to the selected segment.
https://docs.vmware.com/en/VMware-SD-WAN/4.0/VMware-SD-WAN-by-VeloCloud-Administration-Guide/GUID-2F9FED5E-B5D4-475D-97E2-736E0A5B40DA.html
2022-06-25T13:51:40
CC-MAIN-2022-27
1656103035636.10
[array(['images/GUID-0415790E-250D-4396-BA65-2DBE35A75D53-low.png', None], dtype=object) ]
docs.vmware.com
Each XPIO I/O bank contains four global clock input pins (GCs) to bring user clocks onto the device clock management and routing resources. Every HDIO bank contains two GC pins. The global clock inputs bring user clocks onto: - XPLLs in the same bank (XPHY) and adjacent banks. - MMCM and DPLL next to the XPIO banks (XPHY). - Any of the 24 BUFGCEs, 8 BUFGCTRLs, and 4 BUFGCE_DIVs co-located with the MMCMs/DPLLs. - Two GC pins per HDIO bank can drive DPLLs and four clock buffers (only BUFGCE) next to them. Each device has three general purpose global clock buffers: BUFGCTRL, BUFGCE, and BUFGCE_DIV. In addition, there is a local BUFDIV_LEAF clock buffer for driving leaf clocks from horizontal distribution to various blocks in the device. The BUFDIV_LEAF is a physical only buffer and cannot be instantiated by the user. BUFGCTRL has derivative software representations of types BUFGMUX, BUFGMUX1, BUFGMUX_CTRL, and BUFGCE_1. BUFGCE is for improved clock gating and has a software derivative BUFG (BUFGCE with clock enable tied High). The global clock buffers drive routing and distribution tracks into the device logic through the HCS rows. There are 12 routing and 24 distribution tracks in each HCS row. Bottom and top clocking rows have 24 routing and 24 distribution tracks. There is BUFG_PS that drives from PS through horizontal and vertical routing tracks and also BUFG_GT that generates divided clocks for GT clocking. The clock buffers: - Can be used as a clock enable circuit to enable or disable clocks either globally, locally, or within a CR for fine-grained power control - Can be used as a glitch-free multiplexer to: - Select between two clock sources - Switch away from a failed clock source - Are often driven by an MMCM/XPLL/DPLL to: - Eliminate the clock distribution delay - Adjust clock delay relative to another clock See Versal Architecture Clocking Resources for further details on global clocks, I/O, and GT clocking. It also describes which clock routing resources to use for various applications. In Versal ACAP, there is a new group of multi-clock buffers (MBUFG) that are similar to BUFGs but with multiple clock outputs. Refer to Multi-Clock Buffers.
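As a hedged illustration of the clock-enable use case mentioned above, a BUFGCE can be instantiated directly in RTL; the port names below match the standard primitive, but parameters and device-specific attributes should be confirmed in the Versal libraries guide:

module clock_gate_example (
    input  wire clk_in,      // e.g. from a GC pin or an MMCM/XPLL/DPLL output
    input  wire clk_enable,  // fine-grained clock gating control
    output wire clk_gated
);

    // Global clock buffer with clock enable, used here for power-aware gating.
    BUFGCE u_bufgce (
        .O  (clk_gated),
        .CE (clk_enable),
        .I  (clk_in)
    );

endmodule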
https://docs.xilinx.com/r/en-US/am003-versal-clocking-resources/Clock-Buffers-and-Routing
2022-06-25T13:33:55
CC-MAIN-2022-27
1656103035636.10
[]
docs.xilinx.com