On-Boarding docker image model or docker URI model user guide
1: How to onboard a docker image model?
Acumos allows users to onboard their docker image models. Each model dockerised outside Acumos by modelers can be onboarded in Acumos. You just have to use the "Onboard dockerised model" panel in the "on-boarding model" page of the Acumos portal. In this panel, just type the name of the model and you will receive the Acumos image reference to be used to push your docker image model into Acumos. This Acumos image reference looks like:
<acumos_domain>:<docker_proxy_port>/modelname_solution_id:tag
Then users have to follow the three steps depicted here:
1 : Authenticate in the Acumos docker registry
docker login https://<acumos_domain>:<docker_proxy_port> -u <acumos_userid> -p <acumos_password>
2 : Tag the docker image model with the Acumos image reference
docker tag my_image_model <acumos_domain>:<docker_proxy_port>/modelname_solution_id:tag
3 : Push the model in Acumos
docker push <acumos_domain>:<docker_proxy_port>/modelname_solution_id:tag
The process of on-boarding a docker image model in Acumos is reduced to creating a solution ID and uploading the model. No micro-service, TOSCA file, metadata file, or protobuf file is created. Acumos doesn't request a license file during the on-boarding; if needed, modelers can add a license file to their docker image model before the on-boarding.
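For example, assuming a hypothetical Acumos instance at acumos.example.org with docker proxy port 8002, a user ID of alice, and an image reference returned by the portal, the complete flow looks like this:

docker login https://acumos.example.org:8002 -u alice -p <password>
docker tag my_image_model acumos.example.org:8002/mymodel_a1b2c3d4:1.0
docker push acumos.example.org:8002/mymodel_a1b2c3d4:1.0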
2: How to onboard a docker URI model?
Acumos allows users to save the URIs of their docker image models. For each dockerised model that has been previously stored in a docker repository (Docker Hub, for example), modelers just have to use the "Onboard dockerised model URI" panel in the "on-boarding model" page of the Acumos portal. In this panel, type the name of the model and the host; optionally, you can also fill in the port and the tag.
It is also possible to on-board a licence file associated with your docker URI model; just drag and drop or browse your licence file. The process of on-boarding a docker URI model in Acumos is reduced to creating a solution ID, saving the URI and, if needed, associating a license file with this URI. No micro-service, TOSCA file, metadata file, or protobuf file is created.
Routing in a cluster
Routing in a cluster works in much the same way as routing in a standalone system. A few points to note:
All routing configurations must be performed from the cluster IP address and the configurations are propagated to the other cluster nodes.
Routing runs only on spotted SNIP addresses and NSIP addresses.
Routes are limited to the maximum number of ECMP routes supported by the upstream router.
Node-specific routing configurations must be performed by using the owner-node argument, wrapping the node-specific commands between owner-node and exit-owner-node as shown in the examples below.
Retrieve node-specific routing configurations by specifying the node(s) in the owner-node argument as follows:
> vtysh
ns# owner-node 0 1
ns(node-0 1)# show cluster state
ns(node-0 1)# exit-owner-node
Clear node-specific routing configurations by specifying the node(s) in the owner-node argument as follows:
> vtysh
ns# owner-node 0 1
ns(node-0 1)# clear config
ns(node-0 1)# exit-owner-node
Routing protocol daemons can run and adjacencies can be formed on active and inactive nodes of a cluster.
Only active nodes advertise host routes to striped VIP addresses. Spotted VIP addresses are advertised by active owner node.
Active and inactive nodes can learn dynamic routes and install them into the routing table.
Routes learnt on a node are propagated to other nodes in the cluster only if route propagation is configured. This is mostly needed in asymmetric topologies where the unconnected nodes may not be able to form adjacencies.
ns(config)# ns route-install propagate
Note
Make sure that route propagation is not configured in a symmetric cluster topology as it can result in making the node unavailable to the cluster.
Best Practices for Energy Efficiency
Platform
The Windows platform is highly reliable and enables fast on-and-off performance. However, extensions provided with mobile PC systems, such as services, system tray applets, drivers, and other software, can significantly affect performance, reliability, and energy efficiency.
Hardware Components
- Frequency and depth to which hardware can enter lower power states
- Hardware support of lower power states
- Driver optimization for energy efficiency
Operating System–Directed Power Management
- Efficiency of Windows code while under a load versus while idle
- Cooperation level of all components with Windows-directed power management
- Proper configuration of the operating system to optimize for power management through power policy settings
Application Software and Services
- Efficiency of applications, drivers, and services while under a load versus while idle
- Cooperation level of applications with Windows–directed power management
- Software allowance of the system or devices to enter into low-power idle states
A single application or service component can prevent a system from realizing optimal battery life. Although Windows provides many power configuration options, preinstalled software or power policy settings on many systems are not optimized for the host hardware platform.
- Invest in performance optimizations
- Adjust to user power policy
- Reduce resource usage when the system is on battery power
- Do not render to the display when it is off
- Avoid polling and spinning in tight loops
- Do not prevent the system from turning off the display or idling to sleep
- Respond to common power-management events
- Do not enable debug logging by default; use Event Tracing for Windows instead
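As an illustration of several of these points (a minimal Win32 C++ sketch; the window-procedure wiring and the chosen power-setting GUIDs are assumptions, not requirements from the list above), an application can register for power notifications and scale back its work when the system runs on battery or the display turns off:

#include <windows.h>

// Sketch: register for power-setting notifications (link with user32.lib).
void RegisterPowerNotifications(HWND hwnd)
{
    RegisterPowerSettingNotification(hwnd, &GUID_ACDC_POWER_SOURCE, DEVICE_NOTIFY_WINDOW_HANDLE);
    RegisterPowerSettingNotification(hwnd, &GUID_MONITOR_POWER_ON, DEVICE_NOTIFY_WINDOW_HANDLE);
}

// Sketch: call this from the window procedure when it receives WM_POWERBROADCAST.
LRESULT HandlePowerBroadcast(WPARAM wParam, LPARAM lParam)
{
    if (wParam == PBT_POWERSETTINGCHANGE)
    {
        const POWERBROADCAST_SETTING* setting = (const POWERBROADCAST_SETTING*)lParam;
        if (IsEqualGUID(setting->PowerSetting, GUID_ACDC_POWER_SOURCE))
        {
            // Data: 0 = AC, 1 = DC. On battery, reduce polling, timers, and background work.
        }
        else if (IsEqualGUID(setting->PowerSetting, GUID_MONITOR_POWER_ON))
        {
            // Data: 0 = display off. Stop rendering to the display while it is off.
        }
    }
    else if (wParam == PBT_APMSUSPEND)
    {
        // The system is about to sleep: save state quickly and release resources.
    }
    return TRUE;
}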
Find software on the network

After using a discovery tool, you can find a definitive list of all the software found on the network.

Before you begin
Role required: sam

About this task
Note: A user with the Asset role can delete software installations, but it is not recommended. As an alternative, archive software installation information.

Procedure
1. Navigate to Software Asset > Discovery > Software Installations. A software administrator can, for example, look at the list and see that Adobe Acrobat 9.0, 9.2, 9.3, and 9.5 were found. Then, the administrator can edit software discovery models so all the dot versions are considered version 9.0 when doing reconciliation.
2. Click a Display Name in a row. All installations that map to an individual software discovery model are displayed. All fields on the form are read-only.

Table 1. Software installation fields
- Display name: Name of the software installation as it appears in record lists.
- Publisher: Publisher of the software.
- Version: Version of the software.
- Discovery model: Software discovery model that represents the installed software.
- Prod id: Number created by the publisher to identify the software.
- Install location: Path under which the software is installed.
- Install date: Date on which the software was installed.
- Revision: Revision of the software.
- Instance key: Encrypted credentials for the software installation.
- Installed on: Hardware on which the software is installed.
- Uninstall string: Identifier used to uninstall the software.
- ISO serial number: ISO number of the software.
- Foreground: Duration of foreground usage of the software.
- Background: Duration of background usage of the software.
- Last scanned: Date and time on which the software was last discovered on this hardware.
- Last used: Date and time on which the software was last used on this hardware.
- Counted by: The counter summary name that the installation is counted on.
- Entitlement: Entitlement that is associated with the software installation.
- Inferred suite: Software suite inferred by the inference parameters.
- Valuation: Indicates the number of rights the install has.
- Cached: If checked, the license installation has already been counted.
- Omit from suites: If checked, the license is ignored for any suite calculations. This box is automatically checked if the install finds a possible entitlement of the exact software model for this configuration item.

Note: Third-party discovery tools can use software normalization to more effectively manage the software installation database. Software normalization allows you to standardize your software installation data, such as the display name, publisher, revision, and version. You can personalize the software installation form to include these normalization fields. For more information, see Personalizing forms.

What to do next
For more information on forms, see Configure a form.
Has the transform changed since the last time the flag was set to 'false'?
A change to the transform can be anything that can cause its matrix to be recalculated: any adjustment to its position, rotation or scale. Note that operations which can change the transform will not actually check if the old and new value are different before setting this flag. So setting, for instance, transform.position will always set hasChanged on the transform, regardless of there being any actual change.
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    void Update()
    {
        if (transform.hasChanged)
        {
            print("The transform has changed!");
            transform.hasChanged = false;
        }
    }
}
By default, Orchestrator restricts JavaScript access to a limited set of Java classes. If you require JavaScript access to a wider range of Java classes, you must set an Orchestrator system property to allow this access.
About this task
Allowing the JavaScript engine full access to the Java virtual machine (JVM) presents potential security issues. Malformed or malicious scripts might have access to all of the system components to which the user who runs the Orchestrator server has access. Consequently, by default the Orchestrator JavaScript engine can access only the classes in the java.util.* package.
If you require JavaScript access to classes outside of the java.util.* package, you can list in a configuration file the Java packages to which to allow JavaScript access. You then set the com.vmware.scripting.rhino-class-shutter-file system property to point to this file.
Procedure
- Create a text configuration file to store the list of Java packages to which to allow JavaScript access.
For example, to allow JavaScript access to all the classes in the java.net package and to the java.lang.Object class, you add the following content to the file.
java.net.*
java.lang.Object
- Save the configuration file with an appropriate name and in an appropriate place.
- Log in to Control Center as root.
- Click System Properties.
- Click the Add icon.
- In the Key text box enter com.vmware.scripting.rhino-class-shutter-file.
- In the Value text box enter the path to your configuration file.
- In the Description text box enter a description for the system property.
- Click Add.
- Click Save changes from the pop-up menu.
A message indicates that you have saved successfully.
- Restart the Orchestrator server.
Results
The JavaScript engine has access to the Java classes that you specified.
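As a quick check (a minimal sketch; the URL is illustrative), a scriptable task can now use one of the whitelisted classes directly from JavaScript:

var url = new java.net.URL("http://example.org/status");
var connection = url.openConnection();  // allowed once java.net.* is whitelisted
System.log("Connection type: " + connection.getClass().getName());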
If you don't want to display all the product reviews at the same time, you can set the maximum number to display through the "Number of reviews to display" option. A value of zero means no limit.
In this case, to allow users to view more reviews, you must activate the "load more" feature in the settings dashboard.
Depending on how it is set up, "load more" can be a link or a button.
Coat Materials
Coat materials – Specifies materials to use as coatings.
Blend amount – Specifies how strongly the corresponding coat material is blended over the base material (and any coat materials below it); a value of 0.0 (black) leaves the base unchanged, while 1.0 (white) means the coat material completely covers it.
Notes
- If any of the Coat materials is a VRayMtl with a Fog color different from white, the fog color will be ignored. Fog color is considered only for the Base material.
Configure a Resource-based Policy: Knox
How to add a new policy to an existing Knox service.
- On the Service Manager page, select an existing service under Knox. The List of Policies page appears.
- Click Add New Policy. The Create Policy page appears.
- Complete the fields on the Create Policy page.
Since Knox does not provide a command line methodology for assigning privileges or roles to users, the User and Group Permissions portion of the Knox Create Policy form is especially important.
- You can use the Plus (+) symbol to add additional conditions. Conditions are evaluated in the order listed in the policy. The condition at the top of the list is applied first, then the second, then the third, and so on.
- Click Add.
Define a CSV lookup in Splunk Web
CSV lookups are file-based lookups that match field values from your events to field values in the static table represented by a CSV file. They output corresponding field values from the table to your events. CSV lookups are best for small sets of data. The general workflow for creating a CSV lookup in Splunk Web is to upload a file, share the lookup table file, and then create the lookup definition from the lookup file. CSV lookup table files, and lookup definitions that use CSV files, are both dataset types. See Dataset types and usage.
About the CSV files
There are some restrictions to the files that can be used for CSV lookups.
- The table in the CSV file should have at least two columns; a file with only one column is not supported.
- CSV files cannot have "\r" line endings (OSX 9 or earlier)
- CSV files cannot have header rows that exceed 4096 characters.
Upload the lookup table file
To use a lookup table file, you must upload the file to your Splunk platform.
Prerequisites
- See Lookup example in Splunk Web for an example of how to define a CSV lookup.
- An available .csv or .gz table file.
Steps
- Select Settings > Lookups to go to the Lookups manager page.
- In the Actions column, click Add new next to Lookup table files.
- Select a Destination app from the list.
Your lookup table file is saved in the directory where the application resides. For example:
$SPLUNK_HOME/etc/users/<username>/<app_name>/lookups/.
- Click Choose File to look for the CSV file to upload.
- Enter the destination filename. This is the name the lookup table file will have on the Splunk server. If you are uploading a gzipped CSV file, enter a filename ending in ".gz". If you are uploading a plaintext CSV file, use a filename ending in ".csv".
- Click Save.
After you upload the lookup file, tell the Splunk software which applications can use this file. The default app is Launcher.
- Select Settings > Lookups.
- From the Lookup manager, click Lookup table files.
- Click Permissions in the Sharing column of the lookup you want to share.
- In the Permissions dialog box, under Object should appear in, select All apps to share globally. If you want the lookup to be specific to this app only, select This app only. You can also keep your lookup private by selecting Keep private.
- Click Save.
Create a CSV lookup definition
You must create a lookup definition from the lookup table file.
Prerequisites
In order to create the lookup definition, share the lookup table file so that Splunk software can see it.
Steps
- Select Settings > Lookups.
- Click Lookup definitions.
- Click New.
- Select a Destination app from the drop-down list. Your lookup table file is saved in the directory where the application resides. For example:
$SPLUNK_HOME/etc/users/<username>/<app_name>/lookups/.
- Give your lookup definition a unique Name.
- Select File-based as the lookup Type.
- Select the Lookup file from the drop-down list. For a CSV lookup, the file extension must be .csv.
- (Optional) If the CSV file contains time fields, make the CSV lookup time-bounded by selecting the Configure time-based lookup check box.
- (Optional) To define advanced options for your lookup, select the Advanced options check box.
- Click Save.
Your lookup is defined as a file-based CSV lookup and appears in the list of lookup definitions.
After you create the lookup definition, specify in which apps you want to use the definition.
- Select Settings > Lookups.
- Click Lookup definitions.
- In the Lookup definitions list, click Permissions in the Sharing column of the lookup definition you want to share.
- In the Permissions dialog box, under Object should appear in, select All apps to share globally. If you want the lookup to be specific to this app only, select This app only. You can also keep your lookup private by selecting Keep private.
- Click Save.
Permissions for lookup table files must be at the same level or higher than those of the lookup definitions that use those files.
You can use this field lookup to add information from the lookup table file to your events. You can use the field lookup with the
lookup command in a search string. Or, you can set the field lookup to run automatically. For information on creating an automatic lookup, see Create a new lookup to run automatically.
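For example (the lookup and field names here are hypothetical), a search can enrich events with columns from the lookup table:

sourcetype=access_combined | lookup my_csv_lookup product_id OUTPUT product_name price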
Make the lookup automatic
Instead of using the lookup command in your search when you want to apply a field lookup to your events, you can set the lookup to run automatically. See Define an automatic lookup for more information.
Configure a CSV lookup with .conf files
CSV lookups can also be configured using .conf files. See Configure CSV lookups for more information.
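As a sketch of the equivalent .conf-based setup (stanza, file, and field names here are hypothetical), the lookup is defined in transforms.conf and can be applied automatically in props.conf:

# transforms.conf
[my_csv_lookup]
filename = my_lookup_file.csv

# props.conf (optional, makes the lookup run automatically for a source type)
[access_combined]
LOOKUP-product_info = my_csv_lookup product_id OUTPUT product_name price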
Regarding section 11, my testing shows that the default for "maximum matches" is actually 100, not 1000. The same fix is probably necessary in section 5 but I have not tested that type.
Using empty types
You can use empty types in entities in NGSI9/NGSI10 operations. In fact, convenience operations implicitly use empty types in this way by default (you can use the /type/<type>/id/<id> pattern instead of <id> in convenience operations URLs to specify a type).
Moreover, you can use empty entity types in discover context availability or query context operations. In this case, the absence of type in the query is interpreted as "any type".
For example, let's consider having the following context in Orion Context Broker:
- Entity 1:
- ID: Room1
- Type: Room
- Entity 2:
- ID: Room1
- Type: Space
A discoveryContextAvailability/querycontext using:
... "entities": [ { "type": "", "isPattern": "false", "id": "Room1" } ] ...
will match both Entity 1 and Entity 2.
Regarding attributes, they can be created without type in updateContext APPEND. If the attribute type is left empty in subsequent updateContext UPDATEs, then the type is not updated and the attribute keeps its previous type.
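For example, an updateContext APPEND that creates an attribute with an empty type could look like this (entity and attribute names are illustrative, following the NGSIv1 payload style used above):

{
  "contextElements": [
    {
      "type": "Room",
      "isPattern": "false",
      "id": "Room1",
      "attributes": [
        { "name": "temperature", "type": "", "value": "23" }
      ]
    }
  ],
  "updateAction": "APPEND"
}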
Endpoints for the AWS GovCloud (US) Regions. AWS GovCloud (US-West) and AWS GovCloud (US-East) use FIPS 140-2 validated cryptographic modules to support compliance with FIPS 140-2 in all our HTTPS endpoints unless otherwise noted. For more information about FIPS 140-2, see "Cryptographic Module Validation Program" on the NIST Computer Security Resource Center website.
When using the endpoints, note the following:
Amazon S3 has the following website endpoint:
For a list of all AWS endpoints, see Regions and Endpoints in the AWS General Reference.
Endpoints for the AWS GovCloud (US) Regions
The following table lists each AWS service available in the AWS GovCloud (US) Regions and the corresponding endpoints.
For information about giving federated users single sign-on access to the AWS Management Console, see Giving Federated Users Direct Access to the AWS Management Console.
Note
* Amazon API Gateway edge-optimized API and edge-optimized custom domain name are not supported.
* Application Auto Scaling Scaling API and CLI are available. The AWS Auto Scaling console is not available. The AWS Auto Scaling API and command line interface (CLI) are not available.
* Amazon Route 53 hosted Zone ID for the regional endpoint in the AWS GovCloud (US) region is Z1K6XKP9SAGWDV.
** The AWS Key Management Service endpoint kms.us-gov-west-1.amazonaws.com is active, but does not support FIPS 140-2 for TLS connections.
*** Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. For more information, see Using Dual-Stack Endpoints.
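For example (a sketch; the CLI profile name is illustrative), pointing the AWS CLI at the AWS GovCloud (US-West) Region by name routes requests to the regional endpoints listed above:

aws kms list-keys --region us-gov-west-1 --profile govcloud
aws s3 ls --region us-gov-west-1 --profile govcloud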
Take Online Payments with Square APIs and SDKs
Use Square payment APIs with custom solutions to accept payments online in the United States, Canada, Australia, the United Kingdom, and Japan. Connect, Square's API suite, and the SqPaymentForm JavaScript library provide secure payment solutions for online commerce.
Build a custom payment experience
Square payment form + Transactions API: Add our PCI-compliant payment form to your own checkout page and customize the layout and appearance of the form by using stylesheet classes and HTML. You can add a digital wallet that includes Apple Pay, Masterpass, and Google Pay buttons.
The payment form encodes the buyer's payment card information in a secure token that you send to your own server-side resource for payment processing. The payment is processed by the Transaction API charge endpoint. The secure token from the payment form is added to the charge request so that the Square payments processing server charges the correct payment card.
To start taking payments, get a Square access token and then add a few blocks of code to your checkout page and a backend module that makes a call on the Square Transaction API.
See the Payment Form Setup Guide to learn about building a custom online payment solution. The guide includes code templates that start you off with a production quality payment form. No need to design your form from scratch.
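As a rough sketch of the server-side step (the endpoint path, header values, and field names are illustrative and tied to the Connect v2 Transactions API of this era), the nonce produced by the payment form is forwarded in a charge request such as:

curl -X POST https://connect.squareup.com/v2/locations/LOCATION_ID/transactions \
  -H "Authorization: Bearer ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "idempotency_key": "UNIQUE_KEY",
        "card_nonce": "NONCE_FROM_PAYMENT_FORM",
        "amount_money": { "amount": 100, "currency": "USD" }
      }'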
Use a pre-built Square solution
Take payments with the Checkout API: We host the pre-built payment flow in our servers securely and handle any required certificates. We provide the checkout and payment confirmation pages but you have the option to create your own payment confirmation page in place of ours.
Just create an order item with our Checkout API, and post it to our servers in a Checkout API request that returns a unique checkout URL.
Use the checkout URL to navigate to the checkout page. We present the buyer with our checkout flow pre-populated with your order details and then handle the payment for you.
After the buyer clicks the Place Order button, Square directs them to an order confirmation page.
See the Checkout API Setup guide to learn about integrating with a pre-build payment solution.
Add a payments plug-in
Choose an online payments plug-in from the Square App Marketplace: If you do not want to write a lot of code, take payments with a plug-in solution built for the eCommerce application you use.
Go to the Square App Marketplace to find our list of partners and leverage proven online payments plug-in solutions from partners like WooCommerce, Wix, BigCommerce, and many others.
Take payments securely on your website with full control of the checkout process using the Transactions API and Square payment form.
At some point, you might want to change the host name of the Red Hat Enterprise Linux machine on which you have installed Unified Manager. For example, you might want to rename the host to more easily identify your Unified Manager servers by type, workgroup, or monitored cluster group when you list your Red Hat Enterprise Linux machines.
You can use the host name (or the host IP address) to access the Unified Manager web UI. If you configured a static IP address for your network during deployment, then you would have designated a name for the network host. If you configured the network using DHCP, the host name should be taken from the DNS server.
Regardless of how the host name was assigned, if you change the host name and intend to use the new host name to access the Unified Manager web UI, you must generate a new security certificate.
If you access the web UI by using the server's IP address instead of the host name, you do not have to generate a new certificate if you change the host name. However, it is the best practice to update the certificate, so that the host name in the certificate matches the actual host name. The new certificate does not take effect until the Red Hat Enterprise Linux machine is restarted.
If you change the host name in Unified Manager, you must manually update the host name in OnCommand Workflow Automation (WFA). The host name is not updated automatically in WFA.
Delete objects
Deleting objects may be a bit confusing at first: If you have NOT selected any objects then it may look as if the delete button is not working as nothing is getting deleted. See a simple tip at the end of this topic
The reason is due to the presence of helpers on the current object. Usually TAD works on either the current object or on the selected set of objects. However, in case of deletion, TAD will work ONLY on the selected set of objects! The reason is simple: TAD is quite religious about ensuring that the helpers are always located on some object or the other.
If the object where the helpers are sitting is itself to be deleted, then logically speaking, TAD gets confused: it does not know where the helpers should go next! Hence it will not permit the deletion of the current object.
If you want TAD to delete that object too, i.e. delete the object where the helpers are placed, then first shift the helpers to another object (an easy way: click on another object name in the list of objects displayed in the top-left of the screen), then select the earlier object (the one the helpers used to sit on) and add it to the current selection list. Then press Delete.
Tip
There is an easier way to delete ONE object at a time (if that is what you prefer): just right-click on the object name in the list of object names displayed (next to the class tree) and then select the delete sub-menu item! If it is grayed out, that object is the current object and cannot be deleted.
Press F1 inside the application to read context-sensitive help directly in the application itself
When network segmentation occurs, a distributed system that does not handle the partition condition properly allows multiple subgroups to form. This condition can lead to numerous problems, including distributed applications operating on inconsistent data.
For example, because thin clients connecting to a server cluster are not tied into the membership system, a client might communicate with servers from multiple subgroups. Or, one set of clients might see one subgroup of servers while another set of clients cannot see that subgroup but can see another one.
GemFire XD handles this problem by allowing only one subgroup to form and survive. The distributed systems and caches of other subgroups are shut down as quickly as possible. Appropriate alerts are raised through the GemFire XD logging system to alert administrators to take action.
If both a locator and the lead member leave the distributed system abnormally within a configurable period of time, a network partition is declared and the caches of members who are unable to see the locator and the lead member are immediately closed and disconnected.
If a locator or lead member's distributed system are shut down normally, GemFire XD automatically elects a new one and continues to operate.
If no locator can be contacted by a member, it declares a network partition has occurred, closes itself, and disconnects from the distributed system.
You enable network partition detection by setting the enable-network-partition-detection distributed system property to true.
The session timeout field in Manager sets both of them (the splunkweb and splunkd timeouts). To set the timeout in Splunk Web:
1. Click Manager in the upper right-hand corner of Splunk Web.
2. Under System configurations, click System settings.
3. Click General settings.
4. In the System timeout field, enter a timeout value.
5. Click Save.
This sets the user session timeout value for both splunkweb and splunkd. Initially, they share the same value of 60 minutes, and they will continue to maintain identical values if you change the value through Manager. Whether you change the timeout through Manager or in configuration files, you must restart Splunk for the change to take effect.
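For reference, the corresponding settings live in web.conf and server.conf (values below are illustrative; the web timeout is in minutes):

# web.conf
[settings]
tools.sessions.timeout = 60

# server.conf
[general]
sessionTimeout = 1h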
Jhatsplunk, I don't believe that there's a way to set user timeouts at the user role level. You might ask this question on Splunk Answers; perhaps someone has found some way to do so.
How does it work for real-time searches? splunkd and splunkweb are active, so will the session ever time out if the user leaves the browser open?
Focus Overview
Focus is an input system concept that is relevant to controls in Silverlight. The control that is currently focused can receive the keyboard input events KeyUp and KeyDown and can thus use keyboard input. Focus is also relevant to the automation system and accessibility. Within a Silverlight-based application, the user can traverse controls in the UI by using a tab sequence. Traversing the tab sequence sets focus to the controls. You can influence the tab sequence as well as navigation behavior by setting several properties defined on the Control class.
Controls and Focus
Only an element that is a Control class can receive focus. Notable visual elements that are thus not focusable include:
- Panels, such as StackPanel or Grid
- Inline text objects, such as Run
In order for a control to receive focus, the following must be true:
- Visibility is Visible.
- Focus cannot be entirely outside the Silverlight content area (for example, it cannot be in the browser host's UI).
- IsTabStop must be true. Most controls use true as the default value.
Controls that can receive and process keyboard input are the most obvious cases where obtaining focus is important. This is because only the focused element can raise the KeyUp and KeyDown events. But other controls might be designed to receive focus in order to use keyboard keys as activators or accelerators. For example, a Button supports an access mode whereby you can tab to the button in a tab sequence, then press the SPACE key in order to "click" the Button without moving the mouse.
GotFocus and LostFocus
GotFocus is raised whenever focus changes from one control to another control within the Silverlight content area, and also in the case where an application first starts and the first time a control receives focus. GotFocus is raised by the control that receives focus. LostFocus is raised by the control that was previously focused.
Both GotFocus and LostFocus are routed events that bubble; for more information about routed events, see Events Overview for Silverlight. The scenario for bubbling is for composition of controls. At the composition level, focus might be on an object that is a composite part of a larger control. The implementer of the composite control might want the control design to take actions that apply regardless of which component part is focused, or to take different actions if focus leaves the immediate composition and goes to a different control.
If you are consuming existing controls, you generally are most interested in the focus behaviors of the control and not its composite parts, which is the first point at which you as control consumer can attach event handlers without resorting to retemplating the control. But you also have the option of checking for bubbling GotFocus and LostFocus events on your containers for larger page composition: objects such as StackPanel or Grid, or also the root element UserControl. However, because of the bubbling behavior, it might be useful to check the OriginalSource value in each GotFocus and LostFocus event. You might do this to verify that the source of that focus event is relevant to something that should be done on the object where the handler is attached. That object might be one that is upwards in the object tree from the OriginalSource.
Also, if you are authoring a control, you might be interested in the event-related override methods OnGotFocus and OnLostFocus. These methods provide a way to implement the handling of the event as a class method rather than as an attached handler on instances.
The focus events have an asynchronous implementation, which is a necessary performance optimization in Silverlight. However, you can check focus synchronously in the body of a handler by calling FocusManager.GetFocusedElement.
The Focus Method
The Control base class provides a Focus method. Calling this method programmatically attempts to set focus to the control where the method is called. Focus has a Boolean return value that informs you whether the focus attempt is successful. Possible reasons for failure to set focus include each of the points mentioned in Controls and Focus previously in this topic. If you call Focus on the control that already has focus, no events are raised, but the method still returns true so that the method call is not interpreted as a failure to force focus to the intended target.
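As a small C# sketch (the control and handler names are hypothetical), a handler can move focus programmatically, check the result, and query the currently focused element:

private void OnPageLoaded(object sender, RoutedEventArgs e)
{
    // Attempt to move focus to a TextBox named InputBox.
    bool focused = InputBox.Focus();
    if (!focused)
    {
        // Focus() returns false if the control cannot take focus,
        // for example when IsTabStop is false or Visibility is not Visible.
    }
}

private void OnAnyGotFocus(object sender, RoutedEventArgs e)
{
    // Synchronously check which element currently has focus.
    object focusedElement = FocusManager.GetFocusedElement();
}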
Tab Sequences
Controls that are rendered in a UI are placed in a default tab sequence. The TAB key has special handling on controls to enable this, such that the element that receives the TAB key input loses focus and the next control in the sequence gets focus, with the appropriate GotFocus and LostFocus events raised. SHIFT+TAB is similar to TAB, but traverses the tab sequence in the reverse direction.
If you call the Focus method, focus will potentially exit the current position in the tab sequence, and be positioned at a new point in the tab sequence. Other user-initiated actions that are based on mouse behavior or accelerators might also cause focus to change and a repositioning in the tab sequence. Note that such user actions are typically handled by individual controls or other UI regions, which handle another input event and then call Focus as part of their implementation.
In order to be in the tab sequence, the control must be focusable as per the requirements listed in Controls and Focus.
In addition to IsTabStop, controls have the TabIndex property and the TabNavigation property. These properties enable you to alter the default tab sequence and behavior. TabIndex declares the sequencing, and TabNavigation declares the behavior for any nested tab sequences within the control.
IsTabStop is true by default for most controls. Certain control classes, such as Label, initialize with IsTabStop false. If IsTabStop is false, you cannot programatically set focus to the control with Focus, users cannot focus it, and the control receives no keyboard input events. For a control such as Label, that behavior is all by intention, because you want the user to interact with the control that is labeled, not the label itself.
Layout containers or any collection that contains multiple controls have a default ordering of the tab sequence. The default tab sequence order is the order that the items appear in the collection. If the UI is defined in XAML, the order in the collection is the same order as they are defined in the XAML. For example, the following XAML declares a UI where the default tab sequence would first tab to Button1, and then to Button2. Neither of the Button elements declare an explicit TabIndex value.
<StackPanel>
    <Button Name="Button1">Button 1</Button>
    <Button Name="Button2">Button 2</Button>
</StackPanel>
To change the default tab order, you provide a TabIndex value for one or more elements in the UI. Within a container, the lowest value for TabIndex is the first item in the tab sequence. If no explicit TabIndex is specified, then the TabIndex is interpreted to be the maximum value when compared to any explicit TabIndex set value. For example, the following modification to the previous XAML changes the tab sequence so that Button2 is first in tab order, when compared to Button1 that does not have an explicit TabIndex value.
<StackPanel>
    <Button Name="Button1">Button 1</Button>
    <Button Name="Button2" TabIndex="1">Button 2</Button>
</StackPanel>
An element where IsTabStop is false is not in the tab sequence at all, and its value for TabIndex is irrelevant.
Tab Navigation
Certain containers are intended to change the tab sequence behavior when they are tabbed to. In particular, rather than tabbing into the container’s own children, the tab sequence should continue to the next peer element. This is the case for lists, and in particular for the ListBox control. A control enables this behavior by using a value of Once for the TabNavigation property.
Another possible value for TabNavigation is Cycle. This mode causes the tab sequence to remain inside the container where TabNavigation=Cycle is applied. You should use this mode with caution. The Cycle tab navigation container can create a “focus trap” for users who attempt to use tab navigation to traverse the entire UI, and this can be an issue for accessibility-related tools such as screen readers.
Tab Sequence and Browser Hosts
For a browser-hosted application, the tab sequence of Silverlight content is a nested sequence within the browser host's tab sequence. By default, users are able to tab out of the Silverlight content area and then focus other elements on the hosting HTML page, as well as bring focus to the UI controls of the hosting browser application (for example an address bar).
This behavior can be changed by setting TabNavigation to Cycle on the root UserControl. However, this practice should be used with caution, for the same accesibility reasons as listed in the previous section.
Visual Indication of Current Focus
As part of default behavior, most Silverlight controls provide a focus-specific style that visually indicates which control in a Silverlight UI has focus. In WPF without the visual state model, a separate property FocusVisualStyle provided this style. In Silverlight or when using the visual state model with WPF, you instead provide named states that make adjustments in control visual appearance. The states are changed whenever control overrides of OnGotFocus(RoutedEventArgs) or OnLostFocus(RoutedEventArgs) are invoked. Typically such states are named Focused or Unfocused, and the states sometimes apply to parts of a composite control rather than the full visual tree. For an example of how visual state models can handle focus states, see Walkthrough: Customizing the Appearance of a Button by Using a ControlTemplate.
Automation
Focus on controls has relevance to Silverlight accessibility and automation. For more information, see Silverlight Accessibility Overview.
Law of war crimes: command responsibility and the Yamashita precedent
Summary of the essay.
Outline of the essay
- Introduction.
- The legal framework for Yamashita's trial in Manila.
- Yamashita and command responsibility: A debatable unique approach.
- A controversial trial disrespectful of the rights of the accused.
- The lack of real evidence of command responsibility: The birth of a questionable strict liability standard.
- Conclusion and perspectives on the notion of command responsibility.
- Notes.
Excerpts from the essay
[...] Obviously, by virtue of those wide provisions, Yamashita could be charged on the fact that between October and September . thereby violated the laws of war'.4 Following this analysis, we can actually assert that the proceedings against Yamashita were legally justified by the Commission jurisdiction and the provisions setting out command responsibility. [...]
[...] Thus, they stated two things: that there had been widespread atrocities and crimes, and that Yamashita ?failed to provide effective control as was required by the circumstances'.11 We think that, since all the extenuating circumstances were left out, the validity and the scope of the sentence is weakened. Indeed, for the Commission, as Yamashita was Commander of the Japanese forces, he was therefore guilty of every crime committed by every soldier assigned to his command. Hence, we could not approve of the sentence. Our viewpoint is consolidated in Toyoda's case. In fact, Toyoda, Japan's highest ranking naval officer, testified that the orders to defend Manila to the very end were not given by Yamashita himself. [...]
[...] Lael, Yamashita precedent: War Crimes and Command Responsibility', Scholarly Resources Inc (1982). The Yamashita case by the Supreme Court: Ilias Bantekas, Contemporary Law of Superior Responsibility' 93 American Journal of International Law (1999) 573-595. Bruce D. Landrum Yamashita War Crimes Trial: Command Responsibility Then and Now' 149 Military Law Review (1995) 293-301. Kai Ambos ?Superior Responsibility' in A. Cassese, P. Gaeta and J. R. [...]
[...] With Article 28(a) military commanders are imposed with individual responsibility for crimes committed by forces under their effective command and control if they: ?either knew or, owing to the circumstances at the time, should have known that the forces were committing or about to commit such crimes.?15 Thus, the latest development of the definition of superior responsibility can be found in this Article 28 based on the same principle as that elaborated in Celebici, an indirect command responsibility case put before the International Criminal Tribunal for the Former Yugoslavia by virtue of the article of the Tribunal Statute.16 Indeed, this case sets up a threefold requirement for the existence of command responsibility, which has been confirmed by subsequent jurisprudence: the existence of the superior-subordinate relationship; that the superior knew or had reason to know that the criminal act was about to be or had been committed; and that the superior failed to take the reasonable measures to prevent the criminal act or to punish the perpetrator thereof. Clearly, at present time, Yamashita standard has been left out. And although, the outcome of the case has been much debated and criticised, the case has been used to establish what should be understood by command responsibility when the principle is applied as a basis for a person's criminal responsibility for acts committed by his subordinates. NOTES 1. [...]
[...] Indeed, some of the ex parte affidavits and depositions which were presented even included second- and third-hearsay evidence. Furthermore, speed had obviously become the key word for the trial, as Lael pointed out.7 Finally, all those restrictions could not have been other than highly prejudicial to Yamashita. The lack of real evidence of command responsibility: the birth of a questionable strict liability standard During the trial, the prosecution was unable to connect Yamashita with the atrocities and even failed to prove that he had had knowledge about these crimes. [...]
About the author: Orianne L., student
- Level
- Advanced
- Course of study
- sciences...
- School, university
- Institut...
Description of the essay
- Date of publication
- 2006-09-22
- Last updated
- 2006-09-22
- Language
- English
- Format
- Word
- Type
- essay
- Number of pages
- 7 pages
- Level
- advanced
- Downloaded
- 1 time
- Validated by
- the review committee
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
New-WAFRWebACL -ChangeToken <String> -MetricName <String> -Name <String> -DefaultAction_Type <WafActionType>
Creates a WebACL, which contains the Rules that identify the CloudFront web requests that you want to allow, block, or count. AWS WAF evaluates Rules in order based on the value of Priority for each Rule. You also specify a default action, either ALLOW or BLOCK. If a web request doesn't match any of the Rules in a WebACL, AWS WAF responds to the request with the default action. To create and configure a WebACL, perform the following steps:
- Create and update the ByteMatchSet objects and other predicates that you want to include in Rules.
- Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateWebACL request.
- Submit a CreateWebACL request.
- Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateWebACL request.
MetricName: The name can contain only alphanumeric characters (A-Z, a-z, 0-9); the name can't contain white space. You can't change MetricName after you create the WebACL.
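A usage sketch (parameter values are placeholders, and retrieving the change token with Get-WAFRChangeToken is an assumption based on the module's GetChangeToken pattern):

$token = Get-WAFRChangeToken
New-WAFRWebACL -Name "MyWebACL" -MetricName "MyWebACL" -DefaultAction_Type BLOCK -ChangeToken $token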
6.1.13
Splunk Enterprise 6.1.13 was released on March 2, 2017.
The following issues have been resolved in this release. For information about security fixes not related to authentication or authorization, refer to the Splunk Security Portal.
Issues are listed in all relevant sections. Some issues appear more than once.
Indexer and indexer clustering issues
This documentation applies to the following versions of Splunk® Enterprise: 6.1.13, 6.1.14
If applicable, you should install the latest patches available for your data sources to take advantage of the latest features and enhancements. After uploading a data source patch, you can install it on all of the data sources of the same type.
You must have contacted technical support and obtained the .zip file that contains the latest data source patches by providing them with the version you are upgrading from and the version you want to upgrade to.
Square Payment Form: Add Digital Wallets
Digital Wallet Setup Guides
Add HTML tags, CSS classes, and JavaScript to a web application to support Apple Pay on the Web, Google Pay, and Masterpass payments in the Square Payment Form.
Add a digital wallet to the Square Payment Form to accept payments with Apple Pay for Web, Google Pay, and Masterpass.
Digital wallet payment services
Self Service User Login Setting
The User Login setting for Self Service allows you to configure the method for logging in to Self Service.
Requirements
To require or allow users to log in to Self Service with an LDAP directory account, you need an LDAP server set up in the JSS. (For more information, see Integrating with LDAP Directory Services.)
To require or allow a user to log in using a JSS user account, you must first create an account for that user. (For more information, see JSS User Accounts and Groups.)
Configuring the User Login Setting for Self Service
Log in to the JSS with a web browser.
In the top-right corner of the page, click Settings.
Click Computer Management.
In the "Computer Management–Management Framework" section, click Self Service.
Click Edit.
Click the Login tab.
Configure the User Login setting.
Click Save.
The change is applied the next time computers check in with the JSS.
When developing services on DC/OS, you may find it helpful to access your cluster from your local machine via SOCKS proxy, HTTP proxy, or VPN. For instance, you can work from your own development environment and immediately test against your DC/OS cluster.
SOCKS
DC/OS Tunnel can run a SOCKS proxy over SSH to the cluster. SOCKS proxies work for any protocol, but your client must be configured to use the proxy, which runs on port 1080 by default.
HTTP
The HTTP proxy can run in two modes: transparent and standard.
Transparent Mode
In transparent mode, the HTTP proxy runs as superuser on port 80 and does not require modification to your application. Access URLs by appending the
mydcos.directory domain. You can also use DNS SRV records as if they were URLs. The HTTP proxy cannot currently access HTTPS in transparent mode.
Standard Mode
Though you must configure your client to use the HTTP proxy in standard mode, it does not have any of the limitations of transparent mode. As in transparent mode, you can use DNS SRV records as URLs.
SRV Records
An SRV DNS record is a mapping from a name to an IP/port pair. DC/OS creates SRV records in the form
_<port-name>._<service-name>._tcp.marathon.mesos. The HTTP proxy exposes these as URLs. This feature can be useful for communicating with DC/OS services.
VPN
DC/OS Tunnel provides you with full access to the DNS, masters, and agents from within the cluster. OpenVPN requires root privileges to configure these routes.
DC/OS Tunnel Options at a Glance
Using DC/OS Tunnel
Prerequisites
- Only Linux and macOS are currently supported.
- The DC/OS CLI.
- The DC/OS Tunnel package. Run
dcos package install tunnel-cli --cli.
- SSH access (key authentication only).
- The OpenVPN client for VPN functionality.
Example Application
All examples will refer to this sample application:
- Service Name:
myapp
- Group:
mygroup
- Port:
555
- Port Name:
myport
myapp is a web server listening on port
555. We’ll be using
curl
as our client application. Each successful example will result in the HTML
served by
myapp being output as text.
Using DC/OS Tunnel to run a SOCKS Proxy
Run the following command from the DC/OS CLI:
dcos tunnel socks

## Example
curl --proxy socks5h://127.0.0.1:1080 myapp-mygroup.marathon.agentip.dcos.thisdcos.directory:555
Configure your application to use the proxy on port 1080.
Using DC/OS Tunnel to run a HTTP Proxy
Transparent Mode
Run the following command from the DC/OS CLI:
sudo dcos tunnel http

## Example
curl _myport._myapp.mygroup._tcp.marathon.mesos.mydcos.directory

### Watch out!
## This won't work because you can't specify a port in transparent mode
curl myapp-mygroup.marathon.agentip.dcos.thisdcos.directory.mydcos.directory:555
In transparent mode, the HTTP proxy works by port forwarding. Append
.mydcos.directory to the end of your domain when you enter commands.
Standard mode
To run the HTTP proxy in standard mode, without root privileges, use the
--port flag to configure it to use another port:
dcos tunnel http --port 8000

## Example
curl --proxy 127.0.0.1:8000 _myport._myapp.mygroup._tcp.marathon.mesos
curl --proxy 127.0.0.1:8000 myapp-mygroup.marathon.agentip.dcos.thisdcos.directory:555
Configure your application to use the proxy on the port you specified above.
SRV Records
The HTTP proxy exposes DC/OS SRV records as URLs in the form
_<port-name>._<service-name>._tcp.marathon.mesos.mydcos.directory (transparent mode) or
_<port-name>._<service-name>._tcp.marathon.mesos (standard mode).
Find your Service Name
The
<service-name> is the entry in the ID field of a service you create from the DC/OS web interface or the value of the
id field in your Marathon application definition.
Add a Named Port from the DC/OS Web Interface
To name a port from the DC/OS web interface, go to the Services > Services tab, click the name of your service, and then click Edit. Enter a name for your port on the Networking tab.
Add a Named Port in a Marathon Application Definition
Alternatively, you can add
name to the
portMappings or
portDefinitions field of a Marathon application definition. Whether you use
portMappings or
portDefinitions depends on whether you are using
BRIDGE or
HOST networking. Learn more about networking and ports in Marathon.
"portMappings": [ { "name": "<my-port-name>", "containerPort": 3000, "hostPort": 0, "servicePort": 10000, "labels": { "VIP_0": "1.1.1.1:30000" } } ]
"portDefinitions": [ { "name": "<my-port-name>", "protocol": "tcp", "port": 0, } ]
Using DC/OS Tunnel to run a VPN
Run the following command from the DC/OS CLI:
```bash
sudo dcos tunnel vpn

## Example
curl myapp-mygroup.marathon.agentip.dcos.thisdcos.directory:555
```
The VPN client attempts to auto-configure DNS, but this functionality does not work on macOS. To use the VPN client on macOS, add the DNS servers that DC/OS Tunnel instructs you to use.
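One way to apply those DNS servers on macOS is the built-in `networksetup` utility. The network service name (`Wi-Fi`) and the addresses below are placeholders; substitute your active service and the addresses that DC/OS Tunnel reports:

```bash
# Find the name of your active network service
networksetup -listallnetworkservices

# Apply the DNS servers reported by `dcos tunnel vpn` (placeholder addresses)
sudo networksetup -setdnsservers "Wi-Fi" 198.51.100.1 198.51.100.2

# Revert to DHCP-provided DNS when you are done
sudo networksetup -setdnsservers "Wi-Fi" Empty
```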
When you use the VPN, you are virtually within your cluster. You can access your master and agent nodes directly:
```bash
ping master.mesos
ping slave.mesos
```
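To confirm that cluster DNS is resolving through the VPN, a quick check with `dig` (assuming it is installed locally) might look like this:

```bash
# Resolve a master node and the example app's SRV record through cluster DNS
dig +short master.mesos
dig +short _myport._myapp.mygroup._tcp.marathon.mesos SRV
```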
macOS OpenVPN Client Installation
If you use Homebrew, install it with:

```bash
brew install openvpn
```
Then, to use it, either add `/usr/local/sbin` to your `$PATH`, or add the `--client=/usr/local/sbin/openvpn` flag like so:

```bash
sudo dcos tunnel vpn --client=/usr/local/sbin/openvpn
```
Another option is to install Tunnelblick (don't run it; we are only installing it for the `openvpn` executable) and add the `--client=/Applications/Tunnelblick.app/Contents/Resources/openvpn/openvpn-*/openvpn` flag like so:

```bash
sudo dcos tunnel vpn --client=/Applications/Tunnelblick.app/Contents/Resources/openvpn/openvpn-*/openvpn
```
Linux OpenVPN Client Installation
`openvpn` should be available via your distribution's package manager.
For example:
- Ubuntu: `apt-get update && apt-get install openvpn`
- Arch Linux: `pacman -S openvpn`
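After installation, you can confirm the client is on your `PATH` (the tunnel itself must still be started with `sudo`):

```bash
openvpn --version
```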
Migration from ArangoDB 2.8 to 3.0
Problem
I want to use ArangoDB 3.0 from now on but I still have data in ArangoDB 2.8. I need to migrate my data. I am running an ArangoDB 3.0 cluster (and possibly a cluster with ArangoDB 2.8 as well).
Solution
The internal data format changed completely from ArangoDB 2.8 to 3.0,
therefore you have to dump all data using
arangodump and then
restore it to the new ArangoDB instance using
arangorestore.
General instructions for this procedure can be found in the manual. Here, we cover some additional details about the cluster case.
Dumping the data in ArangoDB 2.8
Basically, dumping the data works with the following command (use
arangodump
from your ArangoDB 2.8 distribution!):
arangodump --server.endpoint tcp://localhost:8530 --output-directory dump
or a variation of it, for details see the above mentioned manual page and
this section.
If your ArangoDB 2.8 instance is a cluster, simply use one of the
coordinator endpoints as the above
--server.endpoint.
Restoring the data in ArangoDB 3.0
The output consists of JSON files in the output directory, two for each
collection, one for the structure and one for the data. The data format
is 100% compatible with ArangoDB 3.0, except that ArangoDB 3.0 has
an additional option in the structure files for synchronous replication,
namely the attribute
replicationFactor, which is used to specify,
how many copies of the data for each shard are kept in the cluster.
Therefore, you can simply use this command (use the
arangorestore from
your ArangoDB 3.0 distribution!):
arangorestore --server.endpoint tcp://localhost:8530 --input-directory dump
to import your data into your new ArangoDB 3.0 instance. See
this page
for details on the available command line options. If your ArangoDB 3.0
instance is a cluster, then simply use one of the coordinators as
--server.endpoint.
That is it, your data is migrated.
Controling the number of shards and the replication factor
This procedure works for all four combinations of single server and cluster for source and destination respectively. If the target is a single server all simply works.
So it remains to explain how one controls the number of shards and the replication factor if the destination is a cluster.
If the source was a cluster,
arangorestore will use the same number
of shards as before, if you do not tell it otherwise. Since ArangoDB 2.8
does not have synchronous replication, it does not produce dumps
with the
replicationFactor attribute, and so
arangorestore will
use replication factor 1 for all collections. If the source was a
single server, the same will happen, additionally,
arangorestore
will always create collections with just a single shard.
There are essentially 3 ways to change this behaviour:
- The first is to create the collections explicitly on the ArangoDB 3.0 cluster, and then set the
--create-collection falseflag. In this case you can control the number of shards and the replication factor for each collection individually when you create them.
- The second is to use
arangorestore's options
--default-number-of-shardsand
--default-replication-factor(this option was introduced in Version 3.0.2) respectively to specify default values, which are taken if the dump files do not specify numbers. This means that all such restored collections will have the same number of shards and replication factor.
- If you need more control you can simply edit the structure files in the dump. They are simply JSON files, you can even first use a JSON pretty printer to make editing easier. For the replication factor you simply have to add a
replicationFactorattribute to the
parameterssubobject with a numerical value. For the number of shards, locate the
shardssubattribute of the
parametersattribute and edit it, such that it has the right number of attributes. The actual names of the attributes as well as their values do not matter. Alternatively, add a
numberOfShardsattribute to the
parameterssubobject, this will override the
shardsattribute (this possibility was introduced in Version 3.0.2).
Note that you can remove individual collections from your dump by
deleting their pair of structure and data file in the dump directory.
In this way you can restore your data in several steps or even
parallelise the restore operation by running multiple
arangorestore
processes concurrently on different dump directories. You should
consider using different coordinators for the different
arangorestore
processes in this case.
All these possibilities together give you full control over the sharding layout of your data in the new ArangoDB 3.0 cluster. | https://docs.arangodb.com/3.2/cookbook/Administration/Migrate2.8to3.0.html | 2017-09-19T20:50:18 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.arangodb.com |
The
When a container is deleted, a container with the same name cannot be created for at least 30 seconds; the container may not be available for more than 30 seconds if the service is still processing the request. While the container is being deleted, attempts to create a container of the same name will fail with status code 409 (Conflict), with the service returning additional error information indicating that the container is being deleted. All other operations, including operations on any blobs under the container, will fail with status code 404 (Not Found) while the container is being deleted.
See Also
Status and Error Codes
Blob Service Error Codes
Specifying Conditional Headers for Blob Service Operations | https://docs.microsoft.com/en-us/rest/api/storageservices/Delete-Container?redirectedfrom=MSDN | 2017-09-19T21:15:27 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.microsoft.com |
How to determine data availability?
Last updated on August 28, 2017
The OpenX UI allows you to generate reports that help you analyze and evaluate performance. However, performing an accurate analysis or evaluation requires an understanding of the completeness of your data set. The System Status tab indicates the latest availability of reporting data and can let you know if a delay in data could be impacting your reporting.
To check the status of your reporting data, navigate to Reports > System Status from within the OpenX UI.
Note: To ensure you are reviewing current information, click the Refresh button. The System Status is continually updated based on data processing status.
| https://docs.openx.com/Content/publishers/reporting-doc-data-availability.html | 2017-09-19T20:43:07 | CC-MAIN-2017-39 | 1505818686034.31 | [array(['../Resources/Images/AdExchangeLozenge.png',
'This topic applies to Ad Exchange. This topic applies to Ad Exchange.'],
dtype=object)
array(['../Resources/Images/ProgrammaticDirectLabel.png',
'This topic applies to Programmatic Direct. This topic applies to Programmatic Direct.'],
dtype=object)
array(['../Resources/Images/SSPLozenge.png',
'This topic applies to SSP. Most SSP activities are completed by OpenX. This topic applies to SSP. Most SSP activities are completed by OpenX.'],
dtype=object) ] | docs.openx.com |
A summary of the improvements and new features in Relay Modern.
Compat mode allows the Relay Modern APIs to be incrementally adopted in an existing Relay app. This approach enables the following features compared to Relay Classic:
QueryRenderer, the restrictions on queries from Relay Classic are removed: queries may contain multiple root fields that use arbitrary arguments and return singular or plural values. The
viewerroot field is now optional.
QueryRenderercan be used without defining a route. More in the routing guide.
QueryRenderersupports rendering small amounts of data directly, instead of requiring a container to access data. Containers are optional and can be used as your application grows in size and complexity.
For new Relay apps or existing apps that have been fully converted to the Compat API, the Relay Modern runtime can be enabled to activate even more features. In addition to those described above, this includes:
The new Relay Modern core is more light-weight and significantly faster than the previous version. It is redesigned to work with static queries, which allow us to push more work to build/compilation time. The Modern core is much smaller as a result of removing a lot of the complex features required for dynamic queries. The new core is also an order of magnitude faster in processing the response with an optimized parsing instruction set that is generated at build time. We no longer keep around tracking information needed for dynamic query generation, which drastically reduces the memory overhead of using Relay. This means more memory is left for making the UI feel responsive. Relay Modern also supports persisted queries, reducing the upload size of the request from the full query text to a simple id.
The Relay runtime bundle is roughly 20% of the size of Relay Classic.
The runtime automatically removes cached data that is no longer referenced, helping to reduce memory usage.
Relay Modern supports GraphQL Subscriptions, using the imperative update API to allow modifications to the store whenever a payload is received. It also features experimental support for GraphQL Live Queries via polling.
Some fields - especially those for paginated data - can require post-processing on the client in order to merge previously fetched data with new information. Relay Modern supports custom field handlers that can be used to process these fields to work with various pagination patterns and other use cases.
An area we've gotten a lot of questions on was mutations and their configs. Relay Modern introduces a new mutation API that allows records and fields to be updated in a more direct manner.
The Relay Modern Core adds support for client schema extensions. These allow Relay to conveniently store some extra information with data fetched from the server and be rendered like any other field fetched from the server. This should be able to replace some use cases that previously required a Flux/Redux store on the side.
Relay Modern comes with automatic Flow type generation for the fragments used in Relay containers based on the GraphQL schema. Using these Flow types can help make an application less error-prone, by ensuring all possible
null or
undefined cases are considered even if they don't happen frequently.
Routes no longer need to know anything about the query root in Relay Modern. Relay components can be rendered anywhere wrapped in a
QueryRenderer. This should bring more flexibility around picking routing frameworks.
Relay Modern's core is essentially an un-opinionated store for GraphQL data. It can be used independent of rendering views using React and can be extended to be used with other frameworks.
© 2013–present Facebook Inc.
Licensed under the BSD License. | http://docs.w3cub.com/relay/new-in-relay-modern/ | 2017-09-19T20:42:53 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.w3cub.com |
Send Docs Feedback
Scaling Edge for Private Cloud
Edge for Private Cloud v. 4.17.01
Scaling an instance of Edge for Private Cloud refers to the process of adding either additional processing nodes, or adding an entire data center, or region, to the installation.
The following topics describe different types of scaling processes that you can perform:
Help or comments?
- If something's not working: Ask the Apigee Community or see Apigee Support.
- If something's wrong with the docs: Send Docs Feedback
(Incorrect? Unclear? Broken link? Typo?) | http://ja.docs.apigee.com/private-cloud/v4.17.01/scaling-edge-private-cloud | 2017-09-19T20:45:53 | CC-MAIN-2017-39 | 1505818686034.31 | [] | ja.docs.apigee.com |
When you install or uninstall Microsoft Enterprise Desktop Virtualization (MED-V) 2.0, you have the option of running the installation files at the command prompt. This section describes different options that you can specify when you install or uninstall MED-V at the command prompt.
Command-Line Arguments
You can use the following command-line arguments together with their respective MED-V installation files.
Examples of Command-Line Arguments
The following example installs the MED-V workspace created by the MED-V workspace Packager. The installation file creates a log file in the Temp directory and runs the installation file in quiet mode, but does not start the MED-V Host Agent on completion. The installation file overwrites any VHD left behind by a previous installation that has the same name.
setup.exe /l* %temp%\medv-workspace-install.log /qn SUPPRESSMEDVLAUNCH=1 OVERWRITEVHD=1
The following example uninstalls the MED-V workspace that was previously installed. The installation file creates a log file in the Temp directory and runs the installation file in quiet mode. The installation file deletes any remaining virtual hard disk files from the file system.
%ProgramData%\Microsoft\Medv\Workspace\uninstall.exe /l* %temp%\medv-workspace-uninstall.log /qn DELETEDIFFDISKS=1
Related topics
Deploy the MED-V Components
Technical Reference for MED-V | https://docs.microsoft.com/en-us/microsoft-desktop-optimization-pack/medv-v2/command-line-options-for-med-v-installation-files | 2017-09-19T20:31:30 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.microsoft.com |
Deletes update-service .
Note
When you delete a service, if there are still running tasks that require cleanup, the service status moves from ACTIVE to DRAINING , and the service is no longer visible in the console or in list-services API operations. After the tasks have stopped, then the service status moves from DRAINING to INACTIVE . Services in the DRAINING or INACTIVE status can still be viewed with describe-services API operations; however, in the future, INACTIVE services may be cleaned up and purged from Amazon ECS record keeping, and describe-services API operations on those services will return a ServiceNotFoundException error.
See also: AWS API Documentation
delete-service [--cluster <value>] --service <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--cluster (string)
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to delete. If you do not specify a cluster, the default cluster is assumed.
--service (string)
The name of the service service
This example command deletes the my-http-service service. The service must have a desired count and running count of 0 before you can delete it.
Command:
aws ecs delete-service --service my-http-service
service -> (structure)
The full description of the deleted service.
serviceArn -> (string)The Amazon Resource Name (ARN) that identifies the service. The ARN contains the arn:aws:ecs namespace, followed by the region of the service, the AWS account ID of the service owner, the service namespace, and then the service name. For example, ``arn:aws:ecs:region :012345678910 :service/my-service `` .
serviceName -> (string)The name of your service. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a region or across multiple regions.
clusterArn -> (string)The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers -> (list)
A list of Elastic Load Balancing load balancer objects, containing the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer.
(structure)
Details on a load balancer that is used with a service.
targetGroupArn -> (string)The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group associated with a service.
loadBalancerName -> (string)The name of a Classic load balancer.
containerName -> (string)The name of the container (as it appears in a container definition) to associate with the load balancer.
containerPort -> (integer)The port on the container to associate with the load balancer. This port must correspond to a containerPort in the service's task definition. Your container instances must allow ingress traffic on the hostPort of the port mapping.
status -> (string)The status of the service. The valid values are ACTIVE , DRAINING , or INACTIVE .
desiredCount -> (integer)The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with create-service , and it can be modified with update-service .
runningCount -> (integer)The number of tasks in the cluster that are in the RUNNING state.
pendingCount -> (integer)The number of tasks in the cluster that are in the PENDING state.
taskDefinition -> (string)The task definition to use for tasks in the service. This value is specified when the service is created with create-service , and it can be modified with update-service .
deploymentConfiguration -> (structure)
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
maximumPercent -> (integer)The upper limit (as a percentage of the service's desiredCount ) of the number of tasks that are allowed in the RUNNING or PENDING state in a service during a deployment. The maximum number of tasks during a deployment is the desiredCount multiplied by maximumPercent /100, rounded down to the nearest integer value.
minimumHealthyPercent -> (integer)The lower limit (as a percentage of the service's desiredCount ) of the number of running tasks that must remain in the RUNNING state in a service during a deployment. The minimum healthy tasks during a deployment is the desiredCount multiplied by minimumHealthyPercent /100, rounded up to the nearest integer value.
deployments -> (list)
The current state of deployments for the service.
(structure)
The details of an Amazon ECS service deployment.
id -> (string)The ID of the deployment.
status -> (string)The status of the deployment. Valid values are PRIMARY (for the most recent deployment), ACTIVE (for previous deployments that still have tasks running, but are being replaced with the PRIMARY deployment), and INACTIVE (for deployments that have been completely replaced).
taskDefinition -> (string)The most recent task definition that was specified for the service to use.
desiredCount -> (integer)The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount -> (integer)The number of tasks in the deployment that are in the PENDING status.
runningCount -> (integer)The number of tasks in the deployment that are in the RUNNING status.
createdAt -> (timestamp)The Unix timestamp for when the service was created.
updatedAt -> (timestamp)The Unix timestamp for when the service was last updated.
roleArn -> (string)The Amazon Resource Name (ARN) of the IAM role associated with the service that allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events -> (list)
The event stream for your service. A maximum of 100 of the latest events are displayed.
(structure)
Details on an event associated with a service.
id -> (string)The ID string of the event.
createdAt -> (timestamp)The Unix timestamp for when the event was triggered.
message -> (string)The event message.
createdAt -> (timestamp)The Unix timestamp for when the service was created.
placementConstraints -> (list)
The placement constraints for the tasks in the service.
(structure)
An object representing a constraint on task placement. For more information, see Task Placement Constraints in the Amazon EC2 Container Service Developer Guide .
type -> (string)The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict selection to a group of valid candidates. Note that distinctInstance is not supported in task definitions.
expression -> (string)A cluster query language expression to apply to the constraint. Note you cannot specify an expression if the constraint type is distinctInstance . For more information, see Cluster Query Language in the Amazon EC2 Container Service Developer Guide .
placementStrategy -> (list)
The placement strategy that determines how tasks for the service are placed.
(structure)
The task placement strategy for a task or service. For more information, see Task Placement Strategies in the Amazon EC2 Container. | http://docs.aws.amazon.com/cli/latest/reference/ecs/delete-service.html | 2017-09-19T20:55:53 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.aws.amazon.com |
StatsD Integration
About StatsD Integration
This section is intended for app developers who are going to use the Altoros Heartbeat service to collect custom metrics from their apps. The section describes how to enable an app to send metrics to the StatsD server. These metrics are subsequently pushed to Graphite’s Carbon component for caching and persistence.
Integrating Heartbeat’s StatsD into the App
Download a language-specific StatsD client library from this Github page and add it into your app code.
Follow the steps in Binding a Service Instance to the App to bind the Heartbeat service to your app.
Parse environment variables in the code to get a StatsD endpoint. To do this:
a. Parse environment variables of your app.
b. Find the service called Heartbeat and retrieve the variables highlighted below:
System-Provided: { "VCAP_SERVICES": { "heartbeat": [ { "credentials": { "host": "192.168.222.59", <-- "jmxtrans_prefix": "jmxtrans.admin.demo.", "port": 8125, <-- "statsd_prefix": "apps.admin.demo." <-- }, "label": "heartbeat", "name": "heartbeat",
Prepare a metrics prefix in the following format
statsd_prefix+
app_name+
.+
app_index, where:
statsd_prefix: statsd_prefix is described in the previous step.
app_name: For information about the app’s name, see this documentation.
app_index: For information about the app’s index, see this documentation.
For an example, see this GitHub repository.
Configure the StatsD client library (e.g. set host, port, prefix). For an example, see this GitHub repository.
Define custom metrics inside the app’s code (using the StatsD library). For an example, see this GitHub repository.
Push the app to Pivotal Cloud Foundry (PCF).
Create a custom dashboard in Grafana. For a sample dashboard, see this GItHub repository.
For an example of an app (written in golang) that sends custom metrics to a Grafana dashboard, see this GitHub repository.
Example of an App with StatsD Integration
Navigate to cf marketplace.
cf marketplace Getting services from marketplace in org admin / space demo as admin... OK service plans description app-autoscaler standard Scales bound applications in response to load (beta)
Find Heartbeat in the list of the available services and select the service plan (currently, only standard is available—it will be selected by default).
Create a service instance.
cf create-service heartbeat standard heartbeat
Bind the service to your app.
cf bind-service app_name heartbeat
Restage the app to start the metrics flow.
cf restage app_name
Create a Custom Dashboard
For information about creating and editing Grafana graphs and panels, see Grafana documentation. For an example of an app’s dashboard, see this Github repository. | https://docs.pivotal.io/partners/altoros-heartbeat/statsd.html | 2017-09-19T20:31:43 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.pivotal.io |
Networking for On-Demand Services
Page last updated:
This section describes networking considerations for the Redis for Pivotal Cloud Foundry (PCF) on-demand service.
BOSH 2.0 and the Service Network_0<<
When they deploy an on-demand service, operators select the service network when configuring the tile for that on-demand service.
Default Network and Service Network
Like other on-demand PCF services, on-demand Redis for PCF relies, such as RabbitMQ for PCF, running on a separate services network, while other components run on the default network.
. | https://docs.pivotal.io/redis/1-9/odnetworking.html | 2017-09-19T20:42:47 | CC-MAIN-2017-39 | 1505818686034.31 | [array(['http://docs.pivotal.io/svc-sdk/odb/img/service-network-checkbox.png',
'Service Network checkbox'], dtype=object)
array(['images/ODB-architecture.png', 'Architecture Diagram'],
dtype=object) ] | docs.pivotal.io |
_17<<_18<<
To learn how to create environment, job, and scene for your colour models, refer to the Control Center and Server Guide.
Colour Model Scene Structure
To store your colour models in a scene, it is recommended that you create one drawing layer for each character, prop, effect, or location. You should name these according to the model.
You can also load other colour references in the scene to balance your overall colours. For example, if you work in a character colour model scene, it:
The Rename Drawing dialog box opens.
| https://docs.toonboom.com/help/harmony-11/paint/Content/_CORE/_Workflow/012_Colour_Styling_Colour_Models/001_H1_Prep.html | 2017-09-19T20:38:12 | CC-MAIN-2017-39 | 1505818686034.31 | [array(['../../../Resources/Images/_ICONS/Home_Icon.png', None],
dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stage.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/draw.png',
'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/sketch.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/scan.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageXsheet.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Trad_Anim/004_Colour/small_abby_character.png',
'Four model categories Four model categories'], dtype=object)
array(['../../../Resources/Images/HAR/Trad_Anim/004_Colour/color_styling_solo_organization.png',
'Example naming scheme for a project Example naming scheme for a project'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Trad_Anim/004_Colour/rename_drawing.png',
None], dtype=object) ] | docs.toonboom.com |
{"_id":"59bc03d31d2d8d001a3445800:16:24.238Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[]},"auth":"required","params":[],"url":""},"isReference":false,"order":14,"body":"UI Control through which the user interacts with the 3D world, enabled by default. In daydream, this would represent the daydream controller and it's laser pointer. In cardboard ios, cardboard Android and GearVR, the controller is effectively the reticle. \n\nThe ViroController is also notified of all events that occur within the scene, with the exception of hover. Thus, include this controller in your scene if you would like to register to be always notified of such events. You can also toggle certain UI expects of the controller as well, such as reticle or daydream visibility.\n\nExample Code\n```\n<ViroController\n reticleVisibility={true}\n controllerVisibility={true}\n onClick={this._onClickListenerForAllEvents/>\n```\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"Props\"\n}\n[/block]\n##Optional Props\n[block:parameters]\n{\n \"data\": {\n \"0-0\": \"**controllerVisibility**\",\n \"12-0\": \"**reticleVisibility**\",\n \"h-0\": \"PropKey\",\n \"h-1\": \"PropType\",\n \"0-1\": \"**PropTypes.bool**\\n\\nFlag for *displaying* the daydream controller. True by default. Note: this only applies to Daydream headsets.\",\n \"12-1\": \"**PropTypes.bool**\\n\\nFlag for *displaying* the reticle. True by default.\",\n \"1-0\": \"**onClick**\",\n \"1-1\": \"**React.PropTypes.func**\\n\\nCalled when any object has been clicked.\\n\\nExample code:\\n``` \\n_onClick(source) {\\n // user has clicked the object\\n}\\n```\",\n \"2-0\": \"**onClickState**\",\n \"2-1\": \"**React.PropTypes.func**\\n\\nCalled for each click state any object goes through as it is clicked. \\nSupported click states and their values are the following:\\n\\n|State Value|Description|\\n|:------|:----------:|\\n|1| Click Down: Triggered when the user has performed a click down action while hovering on that control.|\\n|2| Click Up: Triggered when the user has performed a click up action while hovering on that control.|\\n|3| Clicked: Triggered when the user has performed both a click down and click up action on that control sequentially, thereby having \\\"Clicked\\\" the object.|\\n\\nExample code:\\n``` \\n_onClickState(stateValue, source) {\\n if(stateValue == 1) {\\n // Click Down\\n } else if(stateValue == 2) {\\n // Click Up\\n } else if(stateValue == 3) { \\n // Clicked\\n }\\n}\\n```\\nFor the mapping of sources to controller inputs, see the [Events](doc:events) section.\",\n \"9-0\": \"**onScroll**\",\n \"9-1\": \"**React.PropTypes.func**\\n\\nCalled when the user performs a scroll action, while hovering on any views.\\n\\nFor example:\\n``` \\n_onScroll(scrollPos, source) {\\n // scrollPos[0]: x scroll position from 0.0 to 1.0. \\n // scrollPos[1]: y scroll position from 0.0 to 1.0.\\n}\\n```\\nFor the mapping of sources to controller inputs, see the [Events](doc:events) section.\\n\\nUnsupported VR Platforms: Cardboard(Android and iOS)\",\n \"10-0\": \"**onSwipe**\",\n \"10-1\": \"**React.PropTypes.func**\\n\\nCalled when the user performs a swipe gesture on the physical controller, while hovering on any views. 
\\n\\nFor example:\\n``` \\n_onSwipe(state, source) {\\n if(state == 1) {\\n // Swiped up\\n } else if(state == 2) {\\n // Swiped down\\n } else if(state == 3) { \\n // Swiped left\\n } else if(state == 4) { \\n // Swiped right\\n }\\n}\\n```\\nFor the mapping of sources to controller inputs, see the [Events](doc:events) section.\\n\\nUnsupported VR Platforms: Cardboard(Android and iOS)\",\n \"11-0\": \"**onTouch**\",\n \"11-1\": \"**React.PropTypes.func**\\n\\nCalled when the user performs a touch action, while hovering on the control. Provides the touch state type, and the x/y coordinate at which this touch event has occurred.\\n\\n|State Value|Description|\\n|:------|:----------:|\\n|1| Touch Down: Triggered when the user makes physical contact with the touch pad on the controller. |\\n|2| Touch Down Move: Called when the user moves around the touch pad immediately after having performed a Touch Down action. |\\n|3| Touch Up: Triggered after the user is no longer in physical contact with the touch pad after a Touch Down action. |\\n\\nFor example:\For the mapping of sources to controller inputs, see the [Events](doc:events) section.\\n\\nUnsupported VR Platforms: Cardboard(Android and iOS).\",\n \"3-0\": \"**onControllerStatus**\",\n \"3-1\": \"**React.PropTypes.func**\\n\\nCalled when the status of the controller has changed. This is only triggered for wireless controllers, else the return value would always be \\\"Connected\\\".\\n\\n|Status Value|Description|\\n|:------|:----------:|\\n|1| Unknown: The controller state is being initialized and is not yet known.|\\n|2| Connecting: The controller is currently scanning and attempting to connect to your device or phone.|\\n|3| Connected: The controller is connected to your device.|\\n|4| Disconnected: The controller has disconnected from your device.|\\n|5| Error: The controller has encountered an internal error and is currently unusable. This is usually triggered upon initialization. |\",\n \"5-1\": \"**PropTypes.oneOfType**\\n``` \\nPropTypes.oneOfType([\\n React.PropTypes.shape({\\n callback: React.PropTypes.func.isRequired,\\n timeToFuse: PropTypes.number\\n }),\\n React.PropTypes.func,\\n])\\n``` \\nAs shown above, onFuse takes one of the types - either a callback, or a dictionary with a callback and duration. \\n\\nIt is called after the user hovers onto and remains hovered on the control for a certain duration of time, as indicated in timeToFuse that represents the duration of time in milliseconds. \\n\\nWhile hovering, the reticle will display a count down animation while fusing towards timeToFuse.\\n\\nNote that timeToFuse defaults to 2000ms.\\n\\nFor example:\\n``` \\n_onFuse(source){\\n // User has hovered over object for timeToFuse milliseconds\\n}\\n```\\nFor the mapping of sources to controller inputs, see the [Events](doc:events) section.\",\n \"5-0\": \"**onFuse**\",\n \"4-0\": \"**onDrag**\",\n \"4-1\": \"**React.PropTypes.func**\\n\\nCalled when any view is dragged. The dragToPos parameter provides the current 3D location of the dragged object. \\n\\nExample code:\\n``` \\n_onDrag(dragToPos, source) {\\n // dragtoPos[0]: x position\\n // dragtoPos[1]: y position\\n // dragtoPos[2]: z position\\n}\\n``` \\nFor the mapping of sources to controller inputs, see the [Events](doc:events) section. 
\\n\\nUnsupported VR Platforms: Cardboard iOS\",\n \"6-0\": \"**onHover**\",\n \"7-0\": \"**onPinch**\",\n \"8-0\": \"**onRotate**\",\n \"6-1\": \"**React.PropTypes.func**\\n\\nCalled when the user hovers on or off the control.\\n\\nFor example:\\n``` \\n_onHover(isHovering, source) {\\n if(isHovering) {\\n // user is hovering over the box\\n } else {\\n // user is no longer hovering over the box\\n }\\n}\\n```\\nFor the mapping of sources to controller inputs, see the [Events](doc:events) section.\",\n \"7-1\": \"**React.PropTypes.func**\\n\\nCalled when the user performs a pinch gesture on the control. When the pinch starts, the scale factor is set to 1 is relative to the points of the two touch points. \\n\\nFor example:\\n```\\n _onPinch(pinchState, scaleFactor, source) {\\n if(pinchState == 3) {\\n // update scale of obj by multiplying by scaleFactor when pinch ends.\\n return;\\n }\\n //set scale using native props to reflect pinch.\\n }\\n```\\npinchState can be the following values:\\n\\n|State Value|Description|\\n|:------|:----------:|\\n|1| Pinch Start: Triggered when the user has started a pinch gesture.|\\n|2| Pinch Move: Triggered when the user has adjusted the pinch, moving both fingers. |\\n|3| Pinch End: When the user has finishes the pinch gesture and released both touch points. |\\n\\n**This event is only available in AR iOS**.\",\n \"8-1\": \"**React.PropTypes.func**\\n\\nCalled when the user performs a rotation touch gesture on the control. Rotation factor is returned in degrees.\\n\\nWhen setting rotation, the rotation should be relative to it's current rotation, *not* set to the absolute value of the given rotationFactor.\\n\\nFor example:\\n\\n```\\n _onRotate(rotateState, rotationFactor, source) {\\n\\n if (rotateState == 3) {\\n //set to current rotation - rotationFactor.\\n return;\\n }\\n //update rotation using setNativeProps\\n },\\n\\n```\\nrotationFactor can be the following values:\\n\\n|State Value|Description|\\n|:------|:----------:|\\n|1| Rotation Start: Triggered when the user has started a rotation gesture.|\\n|2| Rotation Move: Triggered when the user has adjusted the rotation, moving both fingers. |\\n|3| Rotation End: When the user has finishes the rotation gesture and released both touch points. |\\n\\n**This event is only available in AR iOS**.\"\n },\n \"cols\": 2,\n \"rows\": 13\n}\n[/block]\n\n[block:api-header]\n{\n \"title\": \"Methods\"\n}\n[/block]\n\n[block:parameters]\n{\n \"data\": {\n \"h-0\": \"setNativeProps(nativeProps: object)\",\n \"0-0\": \"A wrapper function around the native component's setNativeProps which allow users to set values on the native component without changing state/setting props and re-rendering. Refer to the React Native documentation on [Direct Manipulation]() for more information.\\n\\n|Parameter|Description|\\n|---|---|\\n|nativeProps | an object where the keys are the properties to set and the values are the values to set |\\n\\nFor example, setting position natively would look like this:\\n\\n```\\ncomponentRef.setNativeProps({\\n position : [0, 0, -1]\\n});\\n```\"\n },\n \"cols\": 1,\n \"rows\": 1\n}\n[/block]","excerpt":"","slug":"virocontroller","type":"basic","title":"ViroController"} | http://docs.viromedia.com/docs/virocontroller | 2017-09-19T20:27:58 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.viromedia.com |
Send Docs Feedback
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
Environment: Send Docs Feedback
(Incorrect? Unclear? Broken link? Typo?) | http://ja.docs.apigee.com/api-services/content/environment-keyvalue-maps | 2017-09-19T20:26:56 | CC-MAIN-2017-39 | 1505818686034.31 | [] | ja.docs.apigee.com |
Send Docs Feedback
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
Get User
GetUser
GET
Get User
Edge on-premises installation only. For an Edge cloud installation, contact Apigee Customer Support.
Returns information about?) | http://ja.docs.apigee.com/management/apis/get/users/%7Buser_email%7D | 2017-09-19T20:27:20 | CC-MAIN-2017-39 | 1505818686034.31 | [] | ja.docs.apigee.com |
{"_id":"59bc03d41d2d8d001a3445c20:23:11.768Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[]},"auth":"required","params":[],"url":""},"isReference":false,"order":14,"body":"Software License Agreement\n\nIMPORTANT – PLEASE READ THE TERMS OF THIS Software LICENSE AGREEMENT (“AGREEMENT”) CAREFULLY. BY DOWNLOADING THE SOFTWARE, (1) YOU ACKNOWLEDGE THAT YOU HAVE READ, UNDERSTAND, AND AGREE TO BE BOUND BY THIS AGREEMENT, AND (2) YOU REPRESENT THAT YOU HAVE THE AUTHORITY TO ENTER INTO THIS AGREEMENT, PERSONALLY OR ON BEHALF OF THE COMPANY YOU HAVE NAMED AS THE DEVELOPER (THE “DEVELOPER”), AND TO BIND THE DEVELOPER TO THE TERMS OF THIS AGREEMENT. IF YOU DO NOT AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT, DO NOT DOWNLOAD OR USE THE SOFTWARE.\n\nTHIS AGREEMENT IS A LEGAL AGREEMENT BETWEEN DEVELOPER AND VIRO MEDIA, INC. “VIRO” “WE” OR “US”) FOR THE ACCOMPANYING SOFTWARE PRODUCT, WHICH INCLUDES VIRO’S SOFTWARE AND MAY INCLUDE MEDIA, PRINTED MATERIALS AND “ONLINE” OR ELECTRONIC DOCUMENTATION (THE “SOFTWARE”). \n\nThis Agreement governs your use of the Software. \n\n1.\tLicense. \n(a)\tSoftware. Subject to the terms of this Agreement, VIRO grants to Developer a limited, revocable, nontransferable, nonexclusive, non-sublicenseable license to (a) internally use, perform, display, and reproduce the Software in object code form for the sole purpose of integrating such code into the Developer’s software application (“Developer App”); (b) use, perform, display, reproduce, and distribute the Software in executable object code format solely as incorporated into a Developer App to end users pursuant to a binding written agreement that contains terms no less restrictive than the Minimum EULA Terms set forth below. \n(b)\tThird Party Software. The Software may contain third party software which requires notices and/or additional terms and conditions. Such required third party software notices and/or additional terms and conditions are located at and are made a part of and incorporated by reference into this Agreement. By accepting this Agreement, you are also accepting the additional terms and conditions, if any, set forth therein.\n(c)\tOpen Source Software. Certain items of independent, third-party code may be included in the Software that are subject to the GNU General Public License (“GPL”) or other open source licenses (“Open Source Software”). Such Open Source Software is licensed under the terms of the license that accompanies such Open Source Software. Nothing in this Agreement limits Developer’s rights under, or grants Developer rights that supersede, the terms and conditions of any applicable end user license for such Open Source Software. In particular, nothing in this Agreement restricts Developer’s right to copy, modify, and distribute such Open Source Software that is subject to the terms of the GPL.\n(d)\tTrademarks. Developer shall: (i) use the term “Viro Media” in a referential phrase, e.g., “powered by VIRO” or “powered by ViroMedia” in the Application or Software description; and (ii) use VIRO’s logo in an app scene; options include the splash screen and/or start screen. Examples of approved uses of VIRO’s logo may be found at:. Developer will, upon request, provide VIRO with samples of its uses of VIRO’s trademarks pursuant to this Section 1(d), and will make any changes to such uses as VIRO may request. All uses of VIRO’s name, trademarks, logos and branding shall inure solely to the benefit of VIRO. 
Except as set forth herein, Developer does not have any rights in or to VIRO’s name, trademarks, logos or branding.\n(e)\tSupport. VIRO has no obligation to provide any support or engineering assistance of any sort unless otherwise agreed in writing by VIRO. You will bear sole responsibility for any and all support requested or required by end users of the Developer App. \n\n2.\tRestrictions. The rights granted to Developer in this Agreement are subject to the following restrictions: Except as expressly permitted in this Agreement, if at all, (a) Developer shall not license, sell, rent, lease, transfer, assign, distribute, display, host, outsource, disclose or otherwise commercially exploit or make the Software available to any third party; (b) Developer shall not modify, make derivative works of, disassemble, reverse compile or reverse engineer any part of the Software; (c) no part of the Software may be copied, reproduced, distributed, republished, downloaded, displayed, posted or transmitted in any form or by any means, including but not limited to electronic, mechanical, photocopying, recording or other means; and (d) any future release, update, or other addition to functionality of the Software shall be subject to the terms of this Agreement. Developer must reproduce, on all copies made by or for Developer, and must not remove, alter, or obscure in any way all proprietary rights notices (including copyright notices) of VIRO or its suppliers on or within the copies of the Software. \n\n3.\tAcceptable Developer App Policy. \n(a)\tThe following constitute the “Acceptable Developer App Policy”: The Developer App may not: (i) circumvent or claim to circumvent limitations on features or functionality of the Software; (ii) violate any third-party right, including any copyright, trademark, patent, trade secret, moral right, privacy right, right of publicity, or any other intellectual property or proprietary right; or (iii) be unlawful. The Developer App must (1) include, in addition to the Software, a substantial amount of other software developed by Developer or licensed by Developer from third parties; (2) provide substantially more than the same functionality as the Software; and (3) not have the purpose of building other platforms that compete with the Software. Developer is solely responsible for the Developer App and may not state or imply that VIRO in any way endorses, certifies, or is affiliated with the Developer App. Developer is solely responsible for compliance with, and will comply with, all applicable laws and regulations in connection with the Developer App, including in connection with any user data collected by, or sent to VIRO via, the Developer App. \n(b)\tVIRO reserves the right (but has no obligation) to review any Developer App, and to block any Developer App, or take other actions VIRO deems appropriate in connection with any Developer App, if VIRO, in its sole discretion: (i) disapproves of the Developer App; (ii) believes that the Developer or the Developer App violates the Acceptable Developer App Policy or any other provision of this Agreement; (iii) believes that the Developer App otherwise creates liability for us, our users, or any other person. \n\n4.\tDISCLAIMER OF WARRANTIES. VIRO IS PROVIDING THE SOFTWARE ON AN “AS IS” BASIS, FOR USE BY DEVELOPER AT ITS OWN RISK. VIRO PROVIDES NO TECHNICAL SUPPORT, WARRANTIES OR REMEDIES FOR THE SOFTWARE. 
VIRO AND ITS SUPPLIERS DISCLAIM ALL EXPRESS, IMPLIED OR STATUTORY WARRANTIES RELATING TO THE SOFTWARE, INCLUDING BUT NOT LIMITED TO, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, AND NON-INFRINGEMENT. VIRO DOES NOT WARRANT THAT USE OF THE SOFTWARE WILL BE STABLE, AVAILABLE, CONTAIN CERTAIN FEATURES, UNINTERRUPTED, OR ERROR-FREE, THAT DEFECTS WILL BE CORRECTED, OR THAT THE SOFTWARE IS FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS. IF APPLICABLE LAW REQUIRES ANY WARRANTIES WITH RESPECT TO THE SOFTWARE, ALL SUCH WARRANTIES ARE LIMITED IN DURATION TO NINETY (90) DAYS FROM THE DATE OF DOWNLOAD. NO ORAL OR WRITTEN INFORMATION OR ADVICE GIVEN BY VIRO OR OTHERS SHALL CREATE A WARRANTY, CONDITION OR REPRESENTATION OR IN ANY WAY EXTEND THE SCOPE OF THE “AS IS” PROVISION OF THE SOFTWARE.\n\n5.\tLIMITATION OF REMEDIES AND DAMAGES. NEITHER VIRO NOR ITS SUPPLIERS SHALL BE RESPONSIBLE OR LIABLE WITH RESPECT TO ANY SUBJECT MATTER OF THIS AGREEMENT OR TERMS OR CONDITIONS RELATED THERETO UNDER ANY CONTRACT, NEGLIGENCE, STRICT LIABILITY OR OTHER THEORY (A) FOR LOSS OR INACCURACY OF DATA OR COST OF PROCUREMENT OF SUBSTITUTE GOODS, SERVICES OR TECHNOLOGY, OR (B) FOR ANY INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES INCLUDING, BUT NOT LIMITED TO LOSS OF REVENUES AND LOSS OF PROFITS. VIRO’S AGGREGATE CUMULATIVE LIABILITY HEREUNDER SHALL NOT EXCEED FIFTY DOLLARS ($50.00). CERTAIN STATES AND/OR JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES OR LIMITATION OF LIABILITY FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE EXCLUSIONS SET FORTH ABOVE MAY NOT APPLY TO DEVELOPER.\n\n6.\tBasis of Bargain. The warranty disclaimer and limitation of liability set forth above are fundamental elements of the basis of the agreement between VIRO and Developer. VIRO would not be able to provide the Software on an economic basis without such limitations. The warranty disclaimer and limitation of liability inure to the benefit of VIRO’s suppliers\n\n7.\tTerm and Termination. This Agreement and the licenses granted hereunder are effective on the date Developer accepts the terms of this Agreement and shall continue unless this Agreement is terminated by either party pursuant to this section. VIRO may terminate this Agreement immediately upon notice to Developer in the event that Developer materially breaches any of the terms hereof. VIRO may terminate this Agreement for convenience upon providing sixty (60) days notice to DeveloperDeveloper may terminate this Agreement at any time, with or without cause. Developer may terminate this Agreement by sending either an email to info:::at:::viromedia.com with Developer’s name and the subject “REMOVE” or a letter by United States mail to: Viro Media, 1601 5th Ave 11th Floor, Seattle, WA 98101 or to such other address as VIRO may specify in writing by posting the new address on the VIRO website. Upon termination, the license granted hereunder shall terminate and Developer shall immediately destroy any copies of the Software in its possession, but the terms of this Agreement which are intended to survive termination will remain in effect, including Sections 5, 6 and 9 through 14.\n\n8.\tModifications. VIRO reserves the right, at any time, to modify, suspend, or discontinue the Software, or change access requirements, with or without notice. Developer agrees that VIRO will not be liable to Developer or to any third party for any modification, suspension, or discontinuance of the Software. 
VIRO reserves the right to change the terms and conditions of this Agreement or its policies relating to the Software at any time, and such changes will be effective thirty (30) days after notice to Developer. Developer’s continued use of the Software after any such changes take effect shall constitute Developer’s consent to such changes. Developer is responsible for providing VIRO with Developer’s most current e-mail address. In the event that the last e-mail address provided by Developer is not valid, VIRO’s dispatch of an e-mail containing such notice will nonetheless constitute effective notice of the changes described in the notice.\n\n9.\tOwnership. The Software, and all worldwide intellectual property rights therein, are the exclusive property of VIRO and its suppliers. All rights in and to the Software not expressly granted to Developer in this Agreement are reserved by VIRO and its suppliers. Subject to VIRO’s rights in the Software, the Developer App, and all worldwide Intellectual Property Rights therein, are the exclusive property of Developer and its suppliers. Developer may from time-to-time provide suggestions, comments, ideas or other feedback (“Feedback”) to VIRO with respect to the Software. VIRO shall be free to use, disclose, reproduce, license or otherwise distribute and exploit the Feedback provided to it as it sees fit, entirely without obligation or restriction of any kind on account of intellectual property rights or otherwise.\n\n10.\tConfidentiality. “Confidential Information” includes the non-public elements and portions of the Software, including its source code and structure, and any other materials of VIRO that VIRO designates as confidential or which Developer should reasonably believe to be confidential. Developer shall hold VIRO’s Confidential Information in confidence and shall neither disclose such Confidential Information to third parties nor use VIRO’s Confidential Information for any purpose other than as necessary to perform under this Agreement. Developer agrees to limit access to the Confidential Information to those employees, agents, and representatives who are necessary for Developer to perform its obligations under this Agreement. All such employees, agents, and representatives must have a written confidentiality agreement with Developer that is no less restrictive than the terms contained herein. Developer will protect the Confidential Information from unauthorized use, access, or disclosure in the same manner as Developer protects its own confidential or proprietary information of a similar nature VIRO’s Confidential Information.\n\n11.\tIndemnity. Developer agrees to indemnify and hold VIRO harmless, including costs and attorneys’ fees, from any claim or demand made by any third party due to or arising out of (a) the Developer App; (b) Developer’s violation of this Agreement; or (c) Developer’s violation of applicable laws or regulations. VIRO reserves the right, at Developer’s expense, to assume the exclusive defense and control of any matter for which Developer is required to indemnify VIRO and Developer agrees to cooperate with VIRO defense of these claims. License agrees not to settle any matter without the prior written consent of VIRO. VIRO will use reasonable efforts to notify Developer of any such claim, action or proceeding upon becoming aware of it.\n\n12.\tExport. The Software and related technology are subject to U.S. export control laws and may be subject to export or import regulations in other countries. 
Developer agrees to strictly comply with all such laws and regulations and acknowledges that it has the responsibility to obtain authorization to export, re-export, or import the Software and related technology, as may be required. Developer will indemnify and hold VIRO harmless from any and all claims, losses, liabilities, damages, fines, penalties, costs and expenses (including attorney’s fees) arising from or relating to any breach by Developer of its obligations under this section.\n\n13.\tMiscellaneous. CALIF THIS AGREEMENT OR THE TRANSACTIONS CONTEMPLATED HEREBY.. \n\n14.\tQuestions Or Additional Information. If you have questions regarding this Agreement, or wish to obtain additional information, please send an e-mail to [email protected].","excerpt":"","slug":"license","type":"basic","title":"License"} | http://docs.viromedia.com/docs/license | 2017-09-19T20:37:11 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.viromedia.com |
MP4 Plugin¶
This module provides streaming media server support for MP4 files.
User can send a HTTP request to the server with
start argument
which is measured in seconds, and the server will respond with the
stream such that its start position corresponds to the requested time,
for example:
This allows performing a random seeking at any time. We can use flash player, vlc, mplayer, firefox or chrome to play the streaming media.
This plugin can be used as a remap plugin. We can write this in remap.config:
map @plugin=mp4.so | https://docs.trafficserver.apache.org/en/latest/admin-guide/plugins/mp4.en.html | 2017-09-19T20:37:12 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.trafficserver.apache.org |
You can now make top quality documents online, in minutes, at Andreyev Online.
Why have we done this?
The feedback we’ve received is that although people love the quality of our documents, and our friendly support, sometimes they would like to do a bit more of the work themselves online, to save some time and money.
But they also want to be able to give us a call and ask a key question now and then, to ensure they get the best outcome.
Andreyev Online is our response to these requests.
You can now log on and prepare a document in minutes: either completely ‘self-serve’, or with our friendly expertise and assistance along the way.
Can I really make a document in minutes?
Yes you can.
This is a completely automated document assembly system. Simply answer the questions, and then click “Make my document!” Your document will be prepared and ready for you to download or retrieve from your inbox within seconds, (sometimes a minute or two for the really sophisticated documents).
Can I see what the document is going to look like before I pay for it?
Yes you can.
You can prepare a draft of the document before paying. You can also make as many drafts as you wish. (We also note that some of our documents are free to use.)
Some documents also come with a handy ‘Summary’ of the key data you have entered so that you are able to check all the important bits before finalising your document.
What format do I get the document in?
Before you have paid the document is in PDF format.
After you have paid the document is in both PDF format and editable Microsoft Word format.
Are the documents I get online the same as the ones I get if I ring you up or send in a form?
Yes, and no.
The documents are exactly the same – but you answer the questions and make the choices, so the document you get out the other end is your creation.
We have tried to make the questions as clear and simple as possible. But if you need some help along the way, please feel free to give us a call on 1300 654 590 or email [email protected].
We also love feedback, so keep it coming! | http://docs.andreyev.com.au/making-documents-at-andreyev-lawyers/ | 2017-09-19T20:26:05 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.andreyev.com.au |
Update Manager scans vSphere objects to determine how they comply with baselines and baseline groups that you attach. You can filter scan results by text search, group selection, baseline selection, and compliance status selection.
When you select a container object, you view the overall compliance status of the container against the attached baselines as a group. You also see the individual compliance statuses of the objects in the selected container against all baselines. If you select an individual baseline attached to the container object, you see the compliance status of the container against the selected baseline.
If you select an individual virtual machine, appliance, or host, you see the overall compliance status of the selected object against all attached baselines and the number of updates. If you select an individual baseline attached to this object, you see the number of updates grouped by the compliance status for that baseline.
The compliance information is displayed on the Update Manager tab. For more information about viewing compliance information, see Viewing Scan Results and Compliance States for vSphere Objects. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.update_manager.doc/GUID-D8DFE1E6-CD37-4B0D-919C-82F7DE218F46.html | 2017-09-19T20:58:25 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.vmware.com |
If you encounter problems when running or using Update Manager, you can use a troubleshooting topic to understand and solve the problem, if there is a workaround.
Update Manager Web Client Remains Visible in the vSphere Web Client After Uninstalling Update Manager Server: After you uninstall Update Manager server, the Update Manager tab might remain visible under the Monitor tab in the vSphere Web Client.
Connection Loss with Update Manager Server or vCenter Server in a Single vCenter Server System: Because of loss of network connectivity or the restart of the servers, the connection between the Update Manager plug-in and the Update Manager server or vCenter Server system might get interrupted.
Gather Update Manager Log Bundles: You can gather information about recent events on the Update Manager server for diagnostic purposes.
Gather Update Manager and vCenter Server Log Bundles: When the Update Manager server and vCenter Server are installed on the same computer, you can gather information about recent events on the Update Manager server and vCenter Server system for diagnostic purposes.
Log Bundle Is Not Generated: Although the script seems to complete successfully, an Update Manager log bundle might not be generated. Because of limitations in the ZIP utility that Update Manager uses, the cumulative log bundle size cannot exceed 2 GB. If the log exceeds 2 GB, the operation might fail.
Host Extension Remediation or Staging Fails Due to Missing Prerequisites: Some host extension remediation or staging operations fail because Update Manager does not automatically download and install missing prerequisites.
No Baseline Updates Available: Baselines are based on metadata that Update Manager downloads from the VMware and third-party Web sites.
All Updates in Compliance Reports Are Displayed as Not Applicable: Scan results usually consist of a mix of installed, missing, and not applicable results. Not applicable entries are only a concern when this is the universal result or when you know that the patches should be applicable.
All Updates in Compliance Reports Are Unknown: Scanning is the process in which you generate compliance information about vSphere objects against attached baselines and baseline groups. The compliance statuses of objects can be All Applicable, Non Compliant, Incompatible, Unknown, and Compliant.
VMware Tools Upgrade Fails if VMware Tools Is Not Installed: Update Manager upgrades only an existing installation of VMware Tools in a virtual machine running on a host of version ESXi 5.x or later.
ESXi Host Scanning Fails: Scanning is the process in which you generate compliance information about the vSphere objects against attached baselines and baseline groups. In some cases, the scan of ESXi hosts might fail.
ESXi Host Upgrade Fails: The remediation process of an ESXi host against an upgrade baseline or a baseline group containing an upgrade baseline might fail.
The Update Manager Repository Cannot Be Deleted: When you uninstall the Update Manager server, you might want to delete the Update Manager repository.
Incompatible Compliance State: After you perform a scan, the compliance state of the attached baseline might be incompatible. The incompatible compliance state requires more attention and further action to be resolved.
You can start the requested machine reconfiguration immediately or schedule it to start at a particular day and time. You can also specify the power option for the machine before reconfiguring it.
Prerequisites
Specify Machine Reconfiguration Settings.
Procedure
- If the Execution tab is visible, you can select it to specify additional reconfiguration settings. If it is not visible, click Submit to start machine reconfiguration.
- If the Execution tab is visible, click Execution to schedule the reconfiguration action.
- Memory change where hot memory is not supported or is disabled
- Storage change where hot storage is disabled.
In Version 9, there are two types of Breadcrumbs:
If you see ellipsis dots on a tab, it means that the browser window isn't wide enough to show all of the tabs. Click on the ellipsis dots to see what additional tabs are available.
Everything in the admin interface can be considered a "record". A product is a record. A customer account is a record, etc. There are many types of records in the admin interface, and most of them can be edited. For example, you can edit a product and change the retail price of that product.
There are two types of editing in the admin interface: "quick edit" and "full edit". Some types of records only have quick edit. Some types of records have both quick edit and full edit mode.
In Version 9.0005 and later you can reorganize tabs. | http://docs.miva.com/reference-guide/new-features | 2017-04-23T13:55:17 | CC-MAIN-2017-17 | 1492917118707.23 | [] | docs.miva.com |
Generally there should be no reason to alter the output volume levels within OtsAV other than via the Deck level control on the software mixer. For details on the software mixer, see the software mixer topic.
Go to the Windows Control Panel.
If you are using Windows 2000 go to Start -> Settings -> Control Panel
If you are using Windows XP go to Start -> Control Panel
Double-click on either the Sounds and Multimedia icon (Windows 2000), or the Sounds and Audio Devices icon (Windows XP).
If you are using Windows 2000 select the Audio tab and enable the Show volume control on the taskbar option.
If you are using Windows XP select the Volume tab and enable the Place volume icon in the taskbar option.
Related topics: OtsAV Dynamics Processor, OtsAV mixer, Adjusting cue channel levels
DescribeTags
Retrieves a list of configuration items that are tagged with a specific tag. Or retrieves a list of all tags assigned to a specific configuration item.
Request Syntax
{ "filters": [ { "name": "
string", "values": [ "
string" ] } ], "maxResults":
number, "nextToken": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- filters
You can filter the list using a key-value format. You can separate these items by using logical operators. Allowed filters include
tagKey,
tagValue, and
configurationId.
Type: array of TagFilter objects
Required: No
- maxResults
The total number of items to return in a single page of output. The maximum value is 100.
Type: Integer
Required: No
- nextToken
A token to start the list. Use this token to get the next set of results.
Type: String
Required: No
Response Syntax
{ "nextToken": "string", "tags": [ { "configurationId": "string", "configurationType": "string", "key": "string", "timeOfCreation": number, "value": "string" } ] }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- nextToken
The call returns a token. Use this token to get the next set of results.
Type: String
- tags
Depending on the input, this is a list of configuration items tagged with a specific tag, or a list of tags for a specific configuration item.
Type: array of ConfigurationTag objects
Errors
For information about the errors that are common to all actions, see Common Errors.
- AuthorizationErrorException
The AWS user account does not have permission to perform the action. Check the IAM policy associated with this account.
- ResourceNotFoundException
The specified configuration ID was not located. Verify the configuration ID and try again.
HTTP Status Code: 400
- ServerInternalErrorException
The server experienced an internal error. Try again.
HTTP Status Code: 500
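As an illustration only (not part of the official reference), the operation can be called from Python with the AWS SDK (boto3); the tag key and value below are placeholders:

import boto3

# Requires AWS credentials configured for the Application Discovery Service.
client = boto3.client("discovery")

response = client.describe_tags(
    filters=[{"name": "tagKey", "values": ["environment"]}],  # placeholder filter
    maxResults=100,
)
for tag in response.get("tags", []):
    print(tag["configurationId"], tag["key"], tag["value"])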
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | http://docs.aws.amazon.com/application-discovery/latest/APIReference/API_DescribeTags.html | 2017-04-23T13:53:41 | CC-MAIN-2017-17 | 1492917118707.23 | [] | docs.aws.amazon.com |
API¶
The module version can be read from
perf.VERSION (tuple of int) or
perf.__version__ (str).
See API examples.
Functions¶
add_runs(filename: str, result)¶
Append a Benchmark or BenchmarkSuite to an existing benchmark suite file, or create a new file.
If the file already exists, adds runs to existing benchmarks.
See the BenchmarkSuite.add_runs() method.
perf_counter() → float
On Python 3.3 and newer, it's time.perf_counter(). On older versions, it's time.clock() on Windows and time.time() on other platforms. See PEP 418 for more information on Python clocks.
See also PEP 418 – Add monotonic time, performance counter, and process time functions.
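A minimal sketch of using this clock directly (the workload below is an arbitrary placeholder):

import perf

t0 = perf.perf_counter()
sum(range(10**6))                     # arbitrary workload
elapsed = perf.perf_counter() - t0
print("elapsed: %.6f sec" % elapsed)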
python_implementation()¶
Name of the Python implementation in lower case.
Examples:
cpython
ironpython
jython
pypy
Use sys.implementation.name or platform.python_implementation().
Run class¶
- class
Run(values: Sequence[float], warmups: Sequence[float]=None, metadata: dict=None, collect_metadata=True)¶
A benchmark run result is made of multiple values.
values must be a sequence of numbers (integer or float) greater than zero. Values must be normalized per loop iteration. Usually, values is a list of number of seconds.
warmups is an optional sequence of (loops: int, value) tuples where value must be a number (integer or float) greater than or equal to zero. Warmup values must be normalized per loop iteration.
values and/or warmups must be a non-empty sequence. If values is empty, the run is a calibration run.
Values must not be equal to zero. If a value is zero, use more loop iterations: see Runs, values, warmups, outer and inner loops.
metadata are metadata of the run, see Metadata. Important metadata:
name (mandatory, non-empty str): benchmark name
loops (int >= 1): number of outer-loops
inner_loops (int >= 1): number of inner-loops
unit (str): unit of values: 'second', 'byte' or 'integer'
Set collect_metadata to false to not collect system metadata.
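A minimal sketch of building a run by hand (the values and metadata below are made-up placeholders; collect_metadata is disabled to keep the example self-contained):

import perf

run = perf.Run(
    [0.022, 0.023, 0.022],                           # values, normalized per loop iteration
    warmups=[(8, 0.025)],                            # (loops, value) warmup tuples
    metadata={'name': 'bench_example', 'loops': 8},  # 'name' is mandatory
    collect_metadata=False,
)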
Methods:
get_metadata() → dict¶
Get run metadata.
The format_metadata() function can be used to format values.
get_loops() → int¶
Get the number of outer loop iterations from metadata.
Return 1 if metadata have no 'loops' entry.
New in version 1.3.
get_inner_loops() → int¶
Get the number of inner loop iterations from metadata.
Return 1 if metadata have no 'inner_loops' entry.
New in version 1.3.
get_total_loops() → int¶
Get the total number of loops of the benchmark run: get_loops() x get_inner_loops().
Attributes:
Benchmark class¶
- class
Benchmark(runs)¶
A benchmark is made of multiple
Run objects.
runs must be a non-empty sequence of Run objects. Runs must have a name metadata (all runs must have the same name).
Methods:
add_run(run: Run)¶
Add a benchmark run: run must be a Run object.
The new run must be compatible with existing runs, the following metadata must be the same (same value or no value for all runs):
aslr
cpu_count
cpu_model_name
hostname
inner_loops
name
platform
python_executable
python_implementation
python_unicode
python_version
unit
add_runs(bench: Benchmark)¶
Add runs of the benchmark bench.
See the BenchmarkSuite.add_runs() method and the add_runs() function.
dump(file, compact=True, replace=False)¶
Dump the benchmark as JSON into file.

get_dates() → (datetime.datetime, datetime.datetime) or None¶
Get the start date of the first run and the end date of the last run.
Return a (start, end) tuple where start and end are datetime.datetime objects if at least one run has a date metadata.
Return None if no run has the date metadata.
get_metadata() → dict¶
Get metadata common to all runs.
The format_metadata() function can be used to format values.
get_nwarmup() → int or float¶
Get the number of warmup values per run.
Return an int if all runs use the same number of warmups, or return the average as a float.
get_total_duration() → float¶
Get the total duration of the benchmark in seconds.
Use the duration metadata of runs, or compute the sum of their raw values including warmup values.
get_loops() → int or float¶
Get the number of outer loop iterations of runs.
Return an int if all runs have the same number of outer loops, return the average as a float otherwise.
New in version 1.3.
get_inner_loops() → int or float¶
Get the number of inner loop iterations of runs.
Return an int if all runs have the same number of inner loops, return the average as a float otherwise.
New in version 1.3.
get_total_loops() → int or float¶
Get the total number of loops per value (outer-loops x inner-loops).
Return an int if all runs have the same number of loops, return the average as a float otherwise.
get_unit() → str¶
Get the unit of values:
'byte': File size in bytes
'integer': Integer number
'second': Duration in seconds
- classmethod
load(file) → Benchmark¶
Load a benchmark from a JSON file which was created by
dump().
file can be a filename, '-' string to load from sys.stdin, or a file object open to read.
Raise an exception if the file contains more than one benchmark.
See the perf JSON format.
- classmethod
loads(string) → Benchmark¶
Load a benchmark from a JSON string.
Raise an exception if JSON contains more than one benchmark.
See the perf JSON format.
mean()¶
Compute the arithmetic mean of
get_values().
The mean is greater than zero: add_run() raises an error if a value is equal to zero.
Raise an exception if the benchmark has no values.
median()¶
Compute the median of
get_values().
The median is greater than zero: add_run() raises an error if a value is equal to zero.
Raise an exception if the benchmark has no values.
percentile(p)¶
Compute the p-th percentile of
get_values().
p must be in the range [0; 100]:
stdev()¶
Compute the standard deviation of
get_values().
Raise an exception if the benchmark has less than 2 values.
median_abs_dev()¶
Compute the median absolute deviation (MAD) of
get_values().
Raise an exception if the benchmark has no values.
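A short sketch of loading a benchmark file and reading its statistics (the filename and benchmark are placeholders):

import perf

bench = perf.Benchmark.load('telco.json')      # placeholder filename
print('mean:   %.6f %s' % (bench.mean(), bench.get_unit()))
print('median: %.6f' % bench.median())
print('stdev:  %.6f' % bench.stdev())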
BenchmarkSuite class¶
- class
BenchmarkSuite(benchmarks, filename=None)¶
A benchmark suite is made of Benchmark objects.
benchmarks must be a non-empty sequence of Benchmark objects. filename is the name of the file from which the suite was loaded.
Methods:
add_benchmark(benchmark: Benchmark)¶
A suite cannot contain two benchmarks with the same name, because the name is used as a unique key: see the get_benchmark() method.
add_runs(bench: Benchmark or BenchmarkSuite)¶
Add runs of benchmarks.
bench can be a Benchmark or a BenchmarkSuite.
See the Benchmark.add_runs() method and the add_runs() function.
dump(file, compact=True, replace=False)¶
Dump the benchmark suite as JSON into file.

get_benchmark(name: str) → Benchmark¶
Get the benchmark called name.
name must be non-empty.
Raise KeyError if there is no benchmark called name.
get_dates() → (datetime.datetime, datetime.datetime) or None¶
Get the start date of the first benchmark and end date of the last benchmark.
Return a (start, end) tuple where start and end are datetime.datetime objects if at least one benchmark has dates.
Return None if no benchmark has dates.
get_metadata() → dict¶
Get metadata common to all benchmarks (common to all runs of all benchmarks).
The format_metadata() function can be used to format values.
See the Benchmark.get_metadata() method and Metadata.
get_total_duration() → float¶
Get the total duration of all benchmarks in seconds.
See the Benchmark.get_total_duration() method.
- classmethod
load(file)¶
Load a benchmark suite from a JSON file which was created by
dump().
file can be a filename, '-' string to load from sys.stdin, or a file object open to read.
See the perf JSON format.
- classmethod
loads(string) → Benchmark¶
Load a benchmark suite from a JSON string.
See the perf JSON format.
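For example (the filename and benchmark name are placeholders), a suite can be loaded and a single benchmark looked up by name:

import perf

suite = perf.BenchmarkSuite.load('results.json')   # placeholder filename
bench = suite.get_benchmark('telco')               # placeholder benchmark name
print(bench.mean(), bench.stdev())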
Attributes:
Runner class¶
- class
Runner(values=3, warmups=1, processes=20, loops=0, min_time=0.1, max_time=1.0, metadata=None, show_name=True, program_args=None, add_cmdline_args=None)¶
Tool to run a benchmark in text mode.
Spawn processes worker processes to run the benchmark.
metadata is passed to the Run constructor.
values, warmups and processes are the default number of values, warmup values and processes. These values can be changed with command line options. See Runner CLI for command line options.
program_args is a list of strings passed to Python on the command line to run the program. By default, (sys.argv[0],) is used. For example, python3 -m perf timeit sets program_args to ('-m', 'perf', 'timeit').
add_cmdline_args is an optional callback used to add command line arguments to the command line of worker processes. The callback is called with add_cmdline_args(cmd, args) where cmd is the command line (list) which must be modified in place and args is the args attribute of the runner.
If show_name is true, displays the benchmark name.
If isolated CPUs are detected, the CPU affinity is automatically set to these isolated CPUs. See CPU pinning and CPU isolation.
Methods to run benchmarks:
Methods:
bench_func(name, func, *args, inner_loops=None, metadata=None)¶
Benchmark the function
func(*args).
name is the benchmark name, it must be unique in the same script.
The inner_loops parameter is used to normalize timing per loop iteration.
The design of bench_func() has a non-negligible overhead on microbenchmarks: each loop iteration calls func(*args) but Python function calls are expensive. The timeit() and bench_time_func() methods are recommended if func(*args) takes less than 1 millisecond (0.001 second).
To call func() with keyword arguments, use functools.partial.
Return a Benchmark instance.
See the bench_func() example.
timeit(name, stmt, setup="pass", inner_loops=None, duplicate=None, metadata=None, globals=None)¶
Run a benchmark on timeit.Timer(stmt, setup, globals=globals).
name is the benchmark name, it must be unique in the same script.
stmt is a Python statement. It can be a non-empty string or a non-empty sequence of strings.
setup is a Python statement used to setup the benchmark: it is executed before computing each benchmark value. It can be a string or a sequence of strings.
Parameters:
- inner_loops: Number of inner-loops. Can be used when stmt manually duplicates the same expression inner_loops times.
- duplicate: Duplicate the stmt statement duplicate times to reduce the cost of the outer loop.
- metadata: Metadata of this benchmark, added to the runner
metadata.
- globals: Namespace used to run setup and stmt. By default, an empty namespace is created. It can be used to pass variables.
See the timeit() example.
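A minimal benchmark script using this method might look like the following (the statement and setup are arbitrary placeholders); run it directly with Python so the Runner can spawn its worker processes:

import perf

runner = perf.Runner()
runner.timeit("sort a list",
              stmt="sorted(s)",
              setup="s = list(range(1000))")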
bench_command(name, command)¶
Benchmark the execution time of a command using the perf_counter() timer. Measure the wall-time, not CPU time.
command must be a sequence of arguments, the first argument must be the program.
Basically, the function measures the timing of Popen(command).wait(), but tries to reduce the benchmark overhead.
Standard streams (stdin, stdout and stderr) are redirected to /dev/null (or NUL on Windows).
Use the --inherit-environ and --no-locale command line options to control environment variables.
If the resource.getrusage() function is available, also measure the maximum RSS memory and store it in the command_max_rss metadata.
See the bench_command() example.
Changed in version 1.1: Measure the maximum RSS memory (if available).
bench_time_func(name, time_func, *args, inner_loops=None, metadata=None)¶
Benchmark time_func(loops, *args). The time_func function must return raw timings: the total elapsed time of all loops. Runner will divide raw timings by loops x inner_loops (loops and inner_loops parameters).
perf_counter() should be used to measure the elapsed time.
name is the benchmark name, it must be unique in the same script.
To call time_func() with keyword arguments, use functools.partial.
Return a Benchmark instance.
See the bench_time_func() example.
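A sketch of a time_func-based benchmark (the measured operation is an arbitrary placeholder); note that the function returns the total elapsed time of all loops, as required:

import perf

def bench_dict_get(loops):
    d = {'key': 1}
    t0 = perf.perf_counter()
    for _ in range(loops):
        d.get('key')
    return perf.perf_counter() - t0   # total elapsed time for all loops

runner = perf.Runner()
runner.bench_time_func('dict.get', bench_dict_get)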
parse_args(args=None)¶
Parse command line arguments using argparser and put the result into the args attribute.
If args is set, the method must only be called once.
Return the args attribute.
Attributes:
args¶
Namespace of arguments: result of the parse_args() method, None before parse_args() is called.
Metadata¶
The
Run class collects metadata by default.
Benchmark:
date (str): date when the benchmark run started, formatted as ISO 8601
duration (int or float >= 0): total duration of the benchmark run in seconds (float)
name (non-empty str): benchmark name
loops (int >= 1): number of outer-loops per value (int)
inner_loops (int >= 1): number of inner-loops of the benchmark (int)
timer: Implementation of
perf.perf_counter(), and also resolution if available
Python metadata:
python_cflags: Compiler flags used to compile Python.
python_executable: path to the Python executable
python_hash_seed: value of the PYTHONHASHSEED environment variable (random string or an int)
python_implementation: Python implementation. Examples:
cpython,
pypy, etc.
python_version: Python version, with the architecture (32 or 64 bits) if available, ex:
2.7.11 (64bit)
python_unicode: Implementation of Unicode, UTF-16 or UCS-4, only set on Python 2.7, Python 3.2 and older
Memory metadata:
command_max_rss (int): Maximum resident set size in bytes (int) measured by Runner.bench_command().
mem_max_rss (int): Maximum resident set size in bytes (int). On Linux, kernel 2.6.32 or newer is required.
mem_peak_pagefile_usage (int): Get PeakPagefileUsage of GetProcessMemoryInfo() (of the current process): the peak value of the Commit Charge during the lifetime of this process. Only available on Windows.
CPU metadata:
cpu_affinity: if set, the process is pinned to the specified list of CPUs
cpu_config: Configuration of CPUs (ex: scaling governor)
cpu_count: number of logical CPUs (int)
cpu_freq: Frequency of CPUs
cpu_machine: CPU machine
cpu_model_name: CPU model name
cpu_temp: Temperature of CPUs
System metadata:
aslr: Address Space Layout Randomization (ASLR)
boot_time(str): Date and time of the system boot
hostname: Host name
platform: short string describing the platform
load_avg_1min (int or float >= 0): Load average figures giving the number of jobs in the run queue (state R) or waiting for disk I/O (state D) averaged over 1 minute
runnable_threads: number of currently runnable kernel scheduling entities (processes, threads). The value comes from the 4th field of /proc/loadavg: 1 in 0.20 0.22 0.24 1/596 10123 for example (596 is the total number of threads).
uptime (int or float >= 0): Duration since the system boot (float, number of seconds since boot_time)
Other:
perf_version: Version of the perf module
unit: Unit of values: byte, integer or second
calibrate_loops (int >= 1): number of loops computed in a loops calibration run
recalibrate_loops (int >= 1): number of loops computed in a loops recalibration run
calibrate_warmups (bool): True for runs used to calibrate the number of warmups
recalibrate_warmups (bool): True for runs used to recalibrate the number of warmups
perf JSON format¶
perf stores benchmark results as JSON in files. By default, the JSON is
formatted to produce small files. Use the
python3 -m perf convert --indent
(...) command (see perf convert) to get readable
(indented) JSON.
perf supports JSON files compressed by gzip: use gzip if filename ends with
.gz.
Example of JSON,
... is used in the example for readability:
{ "benchmarks": [ { "runs": [ { "metadata": { "date": "2016-10-21 03:14:19.670631", "duration": 0.33765527700597886, }, "warmups": [ [ 1, 0.023075559991411865 ], [ 2, 0.022522017497976776 ], [ 4, 0.02247579424874857 ], [ 8, 0.02237467262420978 ] ] }, { "metadata": { "date": "2016-10-21 03:14:20.496710", "duration": 0.7234010050015058, }, "values": [ 0.022752201875846367, 0.022529058374857414, 0.022569017250134493 ], "warmups": [ [ 8, 0.02249833550013136 ] ] }, ... { "metadata": { "date": "2016-10-21 03:14:52.549713", "duration": 0.719920061994344, ... }, "values": [ 0.022562820375242154, 0.022442164625317673, 0.02241712374961935 ], "warmups": [ [ 8, 0.02249412499986647 ] ] } ] } ], "metadata": { "cpu_count": 4, "cpu_model_name": "Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz", "description": "Telco decimal benchmark", "hostname": "selma", "loops": 8, "name": "telco", "perf_version": "0.8.2", ... }, "version": "1.0" }
See also the jq tool: “lightweight and flexible command-line JSON processor”. | http://perf.readthedocs.io/en/latest/api.html | 2017-04-23T13:46:44 | CC-MAIN-2017-17 | 1492917118707.23 | [] | perf.readthedocs.io |
The Pass block causes the geometry of an object to be rendered once.
Pass { [Name and Tags] [RenderSetup] }
The basic pass command contains a list of render state setup commands.
A Pass can define its Name and arbitrary number of Tags - name/value strings that communicate Pass’ intent to the rendering engine.
A pass sets up various states of the graphics hardware, for example should alpha blending be turned on, should depth testing be used, and so on. The commands are these:
Cull Back | Front | Off
Set polygon culling mode. See Cull and Depth page for details.
ZTest (Less | Greater | LEqual | GEqual | Equal | NotEqual | Always)
Set depth buffer testing mode. See Cull and Depth page for details.
ZWrite On | Off
Set depth buffer writing mode. See Cull and Depth page for details.
Blend sourceBlendMode destBlendMode
BlendOp colorOp
AlphaToMask On | Off
Set alpha blending, blend operation, and alpha-to-coverage modes. See Blending page for details.
Offset OffsetFactor, OffsetUnits
Set Z buffer depth offset. See Cull and Depth page for details.
A number of commands are used for writing legacy “fixed function style” shaders. This is considered deprecated functionality, as writing surface shaders or shader programs allows much more flexibility. However, for very simple shaders writing them in fixed function style might be somewhat easier, so here’s the commands. All of them are ignored when not using fixed function shaders.
Lighting On | Off Material { Material Block } SeparateSpecular On | Off Color Color-value ColorMaterial AmbientAndDiffuse | Emission
All these control fixed function per-vertex lighting: they turn it on, set up material colors, turn on specular highlights, provide a default color if vertex lighting is off, and control how the mesh vertex colors affect lighting. See Material page for details.
Fog { Fog Block }
Set fixed function fog parameters. See Fog page for details.
AlphaTest (Less | Greater | LEqual | GEqual | Equal | NotEqual | Always) CutoffValue
Turns on fixed function alpha testing. See Alpha Testing page for details.
After the render state setup, you can specify a number of textures and their combining modes to apply using SetTexture commands:
SetTexture textureProperty { combine options }
Shader passes interact with Unity’s rendering pipeline in several ways; for example a pass can indicate that it should only be used for deferred shading using Tags command. Certain passes can also be executed multiple times on the same object; for example in forward rendering the “ForwardAdd” pass type will be executed multiple times, based on how many lights are affecting the object. See Render Pipeline page for details.
There are several special passes available for reusing common functionality or implementing various high-end effects: | https://docs.unity3d.com/es/current/Manual/SL-Pass.html | 2017-04-23T13:49:19 | CC-MAIN-2017-17 | 1492917118707.23 | [] | docs.unity3d.com |
Details
General
This SOP applies to work within GMP and Non-GMP areas where PM is scheduled. This includes work that is related to Environment Health & Safety regulatory requirements. This SOP applies to all associates responsible for performing any type of Preventive Maintenance on instrumentation, equipment and utilities.
Regulatory basis, reference documents
- EU GMP Chapter 3
- 21 CFR 211.58
Table of Content (just Headers):
1 Purpose
2 Objective
3 Regulatory basis, reference documents
3.1 Quality Management Representative / Quality Assurance
3.2 Engineering Head
3.3 Engineering Technician
4 Related documents
5 Definitions
6 Procedure
6.1 Preventive Maintenance Work Order
6.1.1 Execution of Work Order
6.1.2 Incomplete Preventive Maintenance
6.1.3 Non performed Preventive Maintenance
6.1.4 Preventive Maintenance Frequency
6.2 Trending
6.3 Archiving of PM Documentation
7 Attachments
7.1 Attachment 1: Management of preventive maintenance (1 page)
7.2 Attachment 2: Request for new preventative maintenance plan (1 page)
8 SOP distribution
9 Health, safety and environmental considerations
Size and Format:
Microsoft Office 2003
Word File
11 pages procedure | http://www.qm-docs.com/preventive-maintenance.html | 2017-04-23T13:49:02 | CC-MAIN-2017-17 | 1492917118707.23 | [] | www.qm-docs.com |
Storage Backends¶
Storage scheme¶
limits uses a url style storage scheme notation (similar to the JDBC driver connection string notation) for configuring and initializing storage backends. This notation additionally provides a simple mechanism to both identify and configure the backend implementation based on a single string argument.
The storage scheme follows the format
{scheme}://{parameters}
limits.storage.storage_from_string() is provided to
look up and construct an instance of a storage based on the storage scheme. For example:
import limits.storage uri = "redis://localhost:9999" options = {} redis_storage = limits.storage.storage_from_string(uri, **options)
Examples¶
- In-Memory
- The in-memory storage takes no parameters so the only relevant value is
memory://
- Memcached
- Requires the location of the memcached server(s). As such, the parameter is a comma separated list of {host}:{port} locations such as memcached://localhost:11211 or memcached://localhost:11211,localhost:11212,192.168.1.1:11211 etc...
- Memcached on Google App Engine
- Requires that you are working in the GAE SDK and have those API libraries available. gaememcached://
- Redis
Requires the location of the redis server and optionally the database number: redis://localhost:6379 or redis://localhost:6379/1 (for database 1).
If the database is password protected the password can be provided in the url, for example redis://:foobared@localhost:6379.
- Redis over SSL
- Redis does not support SSL natively, but it is recommended to use stunnel to provide SSL support. The official Redis client redis-py supports redis connections over SSL with the scheme rediss. rediss://localhost:6379/0 works just like the normal redis connection, just with the new scheme.
- Redis with Sentinel
Requires the location(s) of the redis sentinel instances and the service-name that is monitored by the sentinels: redis+sentinel://localhost:26379/my-redis-service or redis+sentinel://localhost:26379,localhost:26380/my-redis-service.
If the database is password protected the password can be provided in the url.
- Redis Cluster
- Requires the location(s) of the redis cluster startup nodes (one is enough): redis+cluster://localhost:7000 or redis+cluster://localhost:7000,localhost:7001
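For example (hosts and ports are placeholders), the same factory function covers all of the schemes above:

import limits.storage

memcached_store = limits.storage.storage_from_string(
    "memcached://localhost:11211")
redis_cluster_store = limits.storage.storage_from_string(
    "redis+cluster://localhost:7000,localhost:7001")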
Difference between revisions of "Developing a Model-View-Controller Component/2.5/Adding verifications" From Joomla! Documentation < Developing a Model-View-Controller Component | 2.5Redirect page Revision as of 18:16, 3 May 2013 (view source)Tom Hutchison (Talk | contribs) (Hutchy68 moved page Developing a Model-View-Controller Component/2.5/Adding verifications to Developing a MVC Component/2.5/Adding verifications) Latest revision as of 18:25, 3 May 2013 (view source) JoomlaWikiBot (Talk | contribs) m (Robot: Fixing double redirect to J2.5:Developing a MVC Component/Adding verifications) Line 1: Line 1: −#REDIRECT [[Developing a MVC Component/2.5/Adding verifications]]+#REDIRECT [[J2.5:Developing a MVC Component/Adding verifications]] Latest revision as of 18:25, 3 May 2013 J2.5:Developing a MVC Component/Adding verifications Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Developing_a_Model-View-Controller_Component/2.5/Adding_verifications&diff=prev&oldid=88987 | 2015-04-18T13:55:52 | CC-MAIN-2015-18 | 1429246634333.17 | [] | docs.joomla.org |
Difference between revisions of "Should PHP run as a CGI script or as an Apache module?"
From Joomla! Documentation
Revision as of 20:01, 13 July 2009 Permissions FAQ
can now be more restrictive. CGI mode is also claimed to be more flexible in many respects as you should now not see, with phpSuExec ( refer. | https://docs.joomla.org/index.php?title=Should_PHP_run_as_a_CGI_script_or_as_an_Apache_module%3F&diff=prev&oldid=14914 | 2015-04-18T14:00:30 | CC-MAIN-2015-18 | 1429246634333.17 | [] | docs.joomla.org |
Content Article Manager Edit
From Joomla! Documentation
Contents
- 1 How to Access
- 2 Description
- 3 Screenshot
- 4 Column Headers
Column Headers
Edit Article
Enter the heading information for the Article,
Images and Links
Parameters - Advanced.
Article Permissions
Toolbar
At the top right you will see the toolbar:
The functions are:
- Save. Saves the article and stays in the current screen.
- Save & Close. Saves the article and closes the current screen.
- Save & New. Saves the article and keeps the editing screen open and ready to create another article.
- Cancel/Close. Closes the current screen and returns to the previous screen without saving any modifications you may have made.
- Help. Opens this help screen.
- For help on using TinyMCE and other editors: Content editors | https://docs.joomla.org/index.php?title=Help25:Content_Article_Manager_Edit&oldid=67904 | 2015-04-18T13:51:07 | CC-MAIN-2015-18 | 1429246634333.17 | [] | docs.joomla.org |
User talk: Rvsjoen/tutorial/Developing a Module/Part 05
IMHO, the helper contains the business logic of the module. In addition, we can add one more example of helper use: calling a component method for interacting with other parts of Joomla. oc666 13:59, 13 March 2012 (CDT)
The Jikes RVM manages bugs, feature requests, tasks and patches using an issue tracker. When submitting an issue, please take a moment to read and follow our advice for Reporting Bugs.
The Research Archive is also maintained within another issue tracker.
In 2007, we migrated between different issue trackers. The historic issue trackers are listed below. | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=74082 | 2015-04-18T13:23:16 | CC-MAIN-2015-18 | 1429246634333.17 | [] | docs.codehaus.org |
If you have the correct permissions within a space or group, you can escalate a discussion as a case manually, rather than wait for it to be auto-escalated.
If you think you should have rights to escalate cases, and you don't see Escalate as Case as an available action in your Actions menu when you read a question, ask your Community Administrator to adjust your permissions.
To escalate a question as a case: | https://docs.jivesoftware.com/jive/6.0/community_admin/topic/com.jivesoftware.help.crm.online/admin/EscalatingCasesManually.html | 2015-04-18T13:10:04 | CC-MAIN-2015-18 | 1429246634333.17 | [] | docs.jivesoftware.com |
Difference between revisions of "Switching templates"
From Joomla! Documentation
Revision as of 08:17,. →. | https://docs.joomla.org/index.php?title=J3.2:Switching_templates&diff=100703&oldid=100574 | 2015-04-18T14:20:23 | CC-MAIN-2015-18 | 1429246634333.17 | [] | docs.joomla.org |
Difference between revisions of "Local wiki extensions"
From Joomla! Documentation
Local wiki templates • Local wiki extensions • Local interwiki links
Extensions are additions to the MediaWiki code that perform special functions.
Syntax Highlighting
Available lang attributes are:
Link Search: search for a URL and find every article that contains the external link: Special:Linksearch
An Act to renumber 49.165 (4) and 165.93 (4); to renumber and amend 968.075 (4); to amend 7.08 (10), 165.85 (4) (b) 1d. a., 950.01, 968.075 (4) (title), 968.075 (8) and 968.075 (9) (a) 2. and (b); and to create 49.165 (4) (b), 165.85 (2) (as), 165.85 (4) (cp), 165.93 (4) (b), 968.075 (4) (a) (intro.), 968.075 (4) (a) 2. and 968.075 (9) (a) 1m. of the statutes; Relating to: training standards for law enforcement officers regarding domestic abuse incidents and complaints, and law enforcement reports following a domestic abuse incident. (FE) | https://docs.legis.wisconsin.gov/2013/proposals/sb160 | 2015-04-18T13:14:31 | CC-MAIN-2015-18 | 1429246634333.17 | [] | docs.legis.wisconsin.gov |
Difference between revisions of "JError::handleVerbose"
From Joomla! Documentation
JError::handleVerbose
Description
Verbose error handler
public static function handleVerbose ( &$error, $options )
See also
JError::handleVerbose source code on BitBucket
Class JError
Subpackage Error
- Other versions of JError::handleVerbose
Revision history of "JTable::bind/1.5"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 13:46, 20 June 2013 JoomlaWikiBot (Talk | contribs) deleted page JTable::bind/1.5 (cleaning up content namespace and removing duplicated API references) | https://docs.joomla.org/index.php?title=JTable::bind/1.5&action=history | 2015-04-18T14:01:46 | CC-MAIN-2015-18 | 1429246634333.17 | [] | docs.joomla.org |
Revision history of "JHtmlRules/ getUserGroups"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 20:41, 1 May 2013 JoomlaWikiBot (Talk | contribs) deleted page JHtmlRules/ getUserGroups (Robot: Deleting all pages from category Candidates for deletion) | https://docs.joomla.org/index.php?title=JHtmlRules/_getUserGroups&action=history | 2015-04-18T14:20:34 | CC-MAIN-2015-18 | 1429246634333.17 | [] | docs.joomla.org |
The Tempo Message Audience Groups is a system group to which other, member groups can be added. Member groups can be used to define participants on News posts or recipients of News messages.
For users to be able to see and select a group as a participant on a News post or recipient of a message, the group must be added to the Tempo Message Audience Groups system group by a system administrator.
NOTE: Any users or groups added to these system groups also gain the same functionality within Appian for Mobile Devices applications.
The Tempo Message Audience Groups system group is configured with the following security settings:
The Health Check Viewers group allows you to automatically share Health Check reports. Members of the group will be notified via email each time a report becomes available, and will be able to download the report from a secured News post. By default, all system administrators are added as members of the Health Check Viewers group via an editable membership rule.
Health Check must be set up in the Administration Console, and automatic upload must be enabled in order for these viewers to see the Health Check report.
You can access the Health Check Viewers group from the link on the Health Check Settings page or by searching for the group in the objects view of Appian Designer. You can add both individual users and groups as members (see Group Management).
The Health Check Viewers group is configured with the following security settings:
Microsoft Graph Toolkit overview
Microsoft Graph Toolkit is a collection of reusable, framework-agnostic components and authentication providers for accessing and working with Microsoft Graph. The components are fully functional right out of the box, with built-in providers that authenticate with and fetch data from Microsoft Graph.
Components
Microsoft Graph Toolkit includes a collection of web components for the most commonly built experiences powered by Microsoft Graph APIs.
The components are also available as React components.
Providers
Providers enable authentication, provide the implementation for acquiring access tokens on various platforms, and expose a Microsoft Graph client for calling the Microsoft Graph APIs. The components work best when used with a provider, but the providers can be used on their own.
Why use Microsoft Graph Toolkit?
Microsoft Graph Toolkit enables you to quickly and easily integrate common experiences powered by Microsoft Graph into your own application. The toolkit:
Cuts development time. The work to connect to Microsoft Graph APIs and render the data in a UI that looks and feels like a Microsoft 365 experience is done for you, with no customization required.
Works everywhere. All components are based on web standards and work seamlessly with any modern browser and web framework (such as React, Angular, or Vue).
Is beautiful but flexible. The components are designed to look and feel like Microsoft 365 experiences but are also customizable by using CSS custom properties and templating.
Who should use it?
Microsoft Graph Toolkit is great for developers of all experience levels that want to develop an app that connects to and accesses data from Microsoft Graph, such as a:
- Web app
- Microsoft Teams tab
- Progressive Web App (PWA)
- Electron app
- SharePoint web part
Where can I use it?
Microsoft Graph Toolkit is supported in the following browsers:
Next steps
- Try out the components in the playground.
Creating Inbox rules by using the EWS Managed API 2.0
Last modified: October 13, 2012

You can use the EWS Managed API to create Inbox rules, which are sets of conditions and associated actions that enable clients to manipulate incoming e-mail messages. If an incoming e-mail message meets the conditions that are defined in the rule, the specified actions will be taken.
To create an Inbox rule
Create a Rule object that represents the new rule to be created.
Rule newRule = new Rule();
Add properties to the rule. The Conditions property uses the ContainsSubjectStrings condition to match messages whose subject contains the specified string. The Actions property specifies that the e-mail message be moved to the Junk E-mail folder. In the following example, the DisplayName, Priority, IsEnabled, Conditions, and Actions properties are set.
newRule.DisplayName = "MoveInterestingToJunk"; newRule.Priority = 1; newRule.IsEnabled = true; newRule.Conditions.ContainsSubjectStrings.Add("Interesting"); newRule.Actions.MoveToFolder = WellKnownFolderName.JunkEmail;
Create a CreateRuleOperation object, as shown in the following example.
CreateRuleOperation createOperation = new CreateRuleOperation(newRule);
Update the mailbox with the newly defined rule, as shown in the following example.
service.UpdateInboxRules(new RuleOperation[] { createOperation }, true);
This procedure assumes that a valid ExchangeService object is bound to the primary user's account.
Example
The following example shows how to create an Inbox rule. In this example, the rule specifies that if an e-mail message subject contains the word "Interesting", the message is moved to the Junk E-mail folder.
// Create an Inbox rule.
// If "Interesting" is in the subject, move it to the Junk E-mail folder.
Rule newRule = new Rule();
newRule.DisplayName = "MoveInterestingToJunk";
newRule.Priority = 1;
newRule.IsEnabled = true;
newRule.Conditions.ContainsSubjectStrings.Add("Interesting");
newRule.Actions.MoveToFolder = WellKnownFolderName.JunkEmail;

// Create the CreateRuleOperation object.
CreateRuleOperation createOperation = new CreateRuleOperation(newRule);
service.UpdateInboxRules(new RuleOperation[] { createOperation }, true);
The following example shows the XML request that is sent by the UpdateInboxRules method.
<soap:Envelope xmlns: <soap:Header> <t:RequestServerVersion </soap:Header> <soap:Body> <m:UpdateInboxRules> <m:RemoveOutlookRuleBlob>true</m:RemoveOutlookRuleBlob> <m:Operations> <t:CreateRuleOperation> <t:Rule> <t:DisplayName>MoveInterestingToJunk</t:DisplayName> <t:Priority>1</t:Priority> <t:IsEnabled>true</t:IsEnabled> <t:Conditions> <t:ContainsSubjectStrings> <t:String>Interesting</t:String> </t:ContainsSubjectStrings> </t:Conditions> <t:Exceptions /> <t:Actions> <t:MoveToFolder> <t:DistinguishedFolderId </t:MoveToFolder> </t:Actions> </t:Rule> </t:CreateRuleOperation> </m:Operations> </m:UpdateInboxRules> </soap:Body> </soap:Envelope>
The following example shows the XML response that is returned by using the UpdateInboxRules method.
<?xml version="1.0" encoding="utf-8"?> <s:Envelope xmlns: <s:Header> <h:ServerVersionInfo </s:Header> <s:Body xmlns: <UpdateInboxRulesResponse ResponseClass="Success" xmlns=""> <ResponseCode>NoError</ResponseCode> </UpdateInboxRulesResponse> </s:Body> </s:Envelope>
This example assumes that the ExchangeService object is configured correctly to connect to the user's Client Access server.
Overview
Thank you for choosing RadDataFilter!
Build.
Key Features: a myriad of filter functions.
Seamless Integration: RadDataFilter can communicate with any kind of collection (even a plain IEnumerable). An ItemsControl (RadGridView, RadTreeListView, RadComboBox, RadTreeView etc.) can then be bound to the filtered endpoint called FilteredSource. This allows extreme flexibility and loose coupling.
Unbound Mode: The RadDataFilter control allows you to use its UI without passing any data to it. Read more about this in the Unbound Mode article.
Build Complex Filter Criteria: You can easily build filter criteria that contain multiple logical (Boolean) operators combining simple filter conditions. Read more about this in the End User Manual article.
Get started with the control with its Getting Started help article that shows how to use it in a basic scenario. | https://docs.telerik.com/devtools/silverlight/controls/raddatafilter/datafilter-overview | 2022-06-25T13:42:18 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['images/datafilter_overview.jpg', None], dtype=object)] | docs.telerik.com |
Interpolated fine phase shift mode has a linear shift behavior independent of the CLKOUT_DIVIDE/CLKFBOUT_MULT value. There is an individual phase interpolator (PI) for each of the O clock output counters and the M counter in the MMCM. The phase shift resolution only depends on the VCO frequency. The phase shift resolution is 1/32nd of the VCO frequency ((VCO period in ps)/32)) and is based on one of the eight phases out of the VCO selected. In this mode, the output clocks can be rotated 360° round robin. The phase interpolators can be controlled by either of the two deskew phase detectors (PDs) or by the phase shift interface.
If the VCO runs at 3 GHz, the phase resolution is approximately (rounded) 10 ps, and at 4 GHz it is approximately (rounded) 8 ps.
When using fine phase shift, the initial phase step value of every PI can be independently setup. The phase can be dynamically incremented or decremented. The dynamic phase shift is controlled by the PS interface of the MMCME5_ADV. This phase shift mode affects each individual CLKOUT output. In interpolated fine phase shift mode, a clock must always be connected to the PSCLK pin of the MMCM. Fixed or dynamic phase shifting of the feedback path results in a negative phase shift of all output clocks with respect to CLKIN. | https://docs.xilinx.com/r/en-US/am003-versal-clocking-resources/Dynamic-Interpolated-Fine-Phase-Shift-in-MMCM-and-XPLL-variable-phase-shift | 2022-06-25T13:25:28 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.xilinx.com |
Install core components
The core components are the Delivery Controller, Studio, Director, and License Server.
(In versions before 7.15 LTSR CU6, core components included StoreFront. You can still install StoreFront by choosing Citrix StoreFront from the Extend Deployment section, or by running the command available on the installation media.)
Important: Before you start an installation, review Prepare to install. Also, review this article before starting an installation.
This article describes the installation wizard sequence when installing core components. Command-line equivalents are provided. For more information, see Install using the command line.
Step 1. Download the product software and launch the wizardStep 1. Download the product software and launch the wizard
Use your Citrix account credentials to access the XenApp and XenDesktop download page. Download the product ISO file.
Unzip the file. Optionally, burn a DVD of the ISO file.
Log on to the machine where you are installing the core components, using a local administrator account.
Insert the DVD in the drive or mount the ISO file. If the installer does not launch automatically, double-click the AutoSelect application or the mounted drive.
Step 2. Choose which product to installStep 2. Choose which product to install
Click Start next to the product to install: XenApp or XenDesktop.
(If the machine already has XenApp or XenDesktop components installed on it, this page does not appear.)
Command-line option: /xenapp to install XenApp; XenDesktop is installed if option is omitted
Step 3. Choose what to installStep 3. Choose what to install
If you’re just getting started, select Delivery Controller. (On a later page, you select the specific components to install on this machine.)
If you’ve already installed a Controller (on this machine or another) and want to install another component, select the component from the Extend Deployment section.
Command-line option: /components
Step 4. Read and accept the license agreementStep 4. Read and accept the license agreement
On the Licensing Agreement page, after you read the license agreement, indicate that you have read and accepted it. Then click Next.
Step 5. Select the components to install and the installation locationStep 5. Select the components to install and the installation location
On the Core components page:
- Location: By default, components are installed in C:\Program Files\Citrix. The default is fine for most deployments. If you specify a different location, it must have execute permissions for network service.
- Components: By default, the check boxes for all core components are selected. Installing all core components on one server is fine for proof of concept, test, or small production deployments. For larger production environments, Citrix recommends installing Director, StoreFront, and the License Server on separate servers.
Select only the components you want to install on this machine. After you install components on this machine, you can run the installer again on other machines to install other components.
An icon alerts you when you choose not to install a required core component on this machine. That alert reminds you to install that component, although not necessarily on this machine.
Click Next.
Command-line options: /installdir, /components, /exclude
Step 6. Enable or disable featuresStep 6. Enable or disable features
On the Features page:
- Choose whether to install Microsoft SQL Server Express for use as the Site database. By default, this selection is enabled. If you’re not familiar with the XenApp and XenDesktop databases, review Databases.
- When you install Director, Windows Remote Assistance is installed automatically. You choose whether to enable shadowing in Windows Remote Assistance for use with Director user shadowing. Enabling shadowing opens TCP port 3389. By default, this feature is enabled. The default setting is fine for most deployments. This feature appears only when you are installing Director.
Click Next.
Command-line options: /nosql (to prevent installation), /no_remote_assistance (to prevent enabling)
Step 7. Open Windows firewall ports automaticallyStep 7. Open Windows firewall ports automatically
By default, the ports on the Firewall page are opened automatically if the Windows Firewall Service is running, even if the firewall is not enabled. The default setting is fine for most deployments. For port information, see Network ports.
Click Next.
(The graphic shows the port lists when you install all the core components on this machine. That type of installation is usually done only for test deployments.)
Command-line option: /configure_firewall
Step 8. Review prerequisites and confirm installationStep 8. Review prerequisites and confirm installation
The Summary page lists what will be installed. Use the Back button to return to earlier wizard pages and change selections, if needed.
When you’re ready, click Install.
The display shows the progress of the installation:
Step 9. Connect to Smart Tools and Call HomeStep 9. Connect to Smart Tools and Call Home
When installing or upgrading a Delivery Controller, the Smart Agent page offers several options:
- Enable connections to Smart Tools and Call Home. This is the recommended selection.
- Enable connections to Call Home. During an upgrade, this option does not appear if Call Home is already enabled or if the installer encounters an error related to the Citrix Telemetry Service.
- Do not enable connections to Smart Tools or Call Home.
If you install StoreFront (but not a Controller), the wizard displays the Smart Tools page. If you install other core components (but not a Controller or StoreFront), the wizard does not display either the Smart Tools or Call Home pages.
If you choose an option to enable connections to Smart Tools and/or Call Home:
- Click Connect.
- Provide your Citrix or Citrix Cloud credentials.
- After your credentials are validated, the process downloads a Smart Agent certificate. After this completes successfully, a green check mark appears next to the Connect button. If an error occurs during this process, change your participation selection (to “I do not want to …”). You can enroll later.
- Click Next to continue with the installation wizard.
If you choose not to participate, click Next.
Command-line option: /exclude “Smart Tools Agent” (to prevent installation)
Step 10. Finish this installationStep 10. Finish this installation
The Finish page contains green check marks for all prerequisites and components that installed and initialized successfully.
Click Finish.
Step 11: Install remaining core components on other machinesStep 11: Install remaining core components on other machines
If you installed all the core components on one machine, continue with Next steps. Otherwise, run the installer on other machines to install other core components. You can also install more Controllers on other servers.
Next stepsNext steps
After you install all the required core components, use Studio to create a Site.
After creating the Site, install VDAs.
At any time, you can use the full-product installer to extend your deployment with the following components:
- Universal Print Server server component: Launch the installer on the print server. Select Universal Print Server in the Extend Deployment section. Accept the license agreement, then proceed to the end of the wizard. There is nothing else to specify or select. To install this component form the command line, see Install using the command line.
- Federated Authentication Service: See Federated Authentication Service.
- Self-Service Password Reset Service: See the Self-Service Password Reset Service documentation.
In this article
- Step 1. Download the product software and launch the wizard
- Step 2. Choose which product to install
- Step 3. Choose what to install
- Step 4. Read and accept the license agreement
- Step 5. Select the components to install and the installation location
- Step 6. Enable or disable features
- Step 7. Open Windows firewall ports automatically
- Step 8. Review prerequisites and confirm installation
- Step 9. Connect to Smart Tools and Call Home
- Step 10. Finish this installation
- Step 11: Install remaining core components on other machines
- Next steps | https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-15-ltsr/install-configure/install-core.html | 2021-01-15T18:57:23 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['/en-us/xenapp-and-xendesktop/7-15-ltsr/media/install-xaxd75.png',
'localized image'], dtype=object)
array(['/en-us/xenapp-and-xendesktop/7-15-ltsr/media/core-what-to-install-75.png',
'localized image'], dtype=object)
array(['/en-us/xenapp-and-xendesktop/7-15-ltsr/media/core-license-75.png',
'localized image'], dtype=object)
array(['/en-us/xenapp-and-xendesktop/7-15-ltsr/media/core-components-75.png',
'localized image'], dtype=object)
array(['/en-us/xenapp-and-xendesktop/7-15-ltsr/media/core-features-75.png',
'localized image'], dtype=object)
array(['/en-us/xenapp-and-xendesktop/7-15-ltsr/media/core-firewall-75.png',
'localized image'], dtype=object)
array(['/en-us/xenapp-and-xendesktop/7-15-ltsr/media/core-summary-75.png',
'localized image'], dtype=object)
array(['/en-us/xenapp-and-xendesktop/7-15-ltsr/media/core-during-install-75.png',
'localized image'], dtype=object)
array(['/en-us/xenapp-and-xendesktop/7-15-ltsr/media/core-smart-tools-75.png',
'localized image'], dtype=object)
array(['/en-us/xenapp-and-xendesktop/7-15-ltsr/media/core-finish-75.png',
'localized image'], dtype=object) ] | docs.citrix.com |
Confidence Intervals, Histograms, & Distributions
A confidence interval (CI) indicates that you are confident to a certain degree that the observed value of your metric would fall within the given range.
The probability that the observed metric would take on different values is encoded in a probability distribution. A histogram visualizes a distribution with the likelihood of observing certain values (y-axis) against those values (x-axis).
Guesstimate can translate your intuitive CI to some common distributions. This is covered in the 'Modeling' section. | https://docs.getguesstimate.com/theory/confidence_intervals.html | 2021-01-15T17:51:37 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.getguesstimate.com |
Configure Central Authentication Service (CAS)
Authentication
Many campuses use some kind of single sign on, such as JASIG's Central Authentication Service, or CAS. This guide describes how to integrate Opencast into such a system.
Step 1
First, you need to edit the file
etc/org.apache.karaf.features.cfg and add the
opencast-security-cas to the
featuresBoot variable.
featuresBoot = ..., opencast-security-cas
Step 2
In a single-tenant deployment, your
security.xml file is under
OPENCAST_HOME/etc/security/mh_default_org.xml. In an
RPM/DEB based installation, it is located in
/etc/opencast/security/mh_default_org.xml. You should make a backup copy of
the file and substitute it by the sample file named
security_sample_cas.xml-example. In other words:
$> cd etc/security $> mv mh_default_org.xml mh_default_org.xml.old $> cp security_sample_cas.xml-example mh_default_org.xml
The sample file should be exactly the same as the default security file, except for the parts only relevant to the CAS. If you have done custom modifications to your security file, make sure to incorporate them to the new file, too.
Step 3
Add the necessary configuration values to the CAS section of the new security file. The comments should be self-explanatory.
You must modify several settings in the sample to point to your CAS server:
<bean id="casEntryPoint" class="org.springframework.security.cas.web.CasAuthenticationEntryPoint"> <property name="loginUrl" value=""/> <property name="serviceProperties" ref="serviceProperties"/> </bean> <bean id="casAuthenticationProvider" class="org.springframework.security.cas.authentication.CasAuthenticationProvider"> <property name="userDetailsService" ref="userDetailsService"/> <property name="serviceProperties" ref="serviceProperties" /> <property name="ticketValidator"> <bean class="org.jasig.cas.client.validation.Cas20ServiceTicketValidator"> <constructor-arg </bean> </property> <property name="key" value="cas"/> </bean>
You will also need to set the public URL for your Opencast server:
<bean id="serviceProperties" class="org.springframework.security.cas.ServiceProperties"> <property name="service" value=""/> <property name="sendRenew" value="false"/> </bean>
Authorization
Now the system knows all the information necessary to authenticate users against CAS, but also need some authorization information, to tell which services the user is allowed to use and which resources is allowed to see and/or modify.
You will need to configure a UserProvider to look up users as identified by CAS.
- Sakai User Provider
- LDAP User Provider (Section
Authorization/Step 2)
Original documentation from University of Saskatchewan
University of Saskatchewan CAS and LDAP integration | https://docs.opencast.org/r/4.x/admin/configuration/security.cas/ | 2021-01-15T18:04:47 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.opencast.org |
MuleSoft Documentation
Your customers and employees need data-rich, delightful digital experiences on a variety of devices from smart watches to desktop computers. To deliver these experiences, your systems must be connected to each other, and the data must flow among those systems (integration).
The MuleSoft approach to integration, integration of data from different systems using a layer of APIs, allows you to spend less time on IT projects and more time on your core business. Whenever you turn a business process into an API, you make it easier to implement that process in the next project. And APIs have proven to be the best size of reusable code.
Anypoint Platform helps you build a structured application network that connects applications, data, and devices with reusable APIs. The unified Anypoint Platform makes it easy to discover, create, and manage APIs in a modular, organized layer. Instead of retrieving random and possibly unstable code snippets, you can “shop” for APIs created using the industry’s best practices.
Explore Anypoint Exchange
The APIs you build in MuleSoft to integrate applications and services are, by design, reusable and built with enterprise security in mind. You can discover these APIs, as well as connectors, samples, and templates in Exchange.
Exchange also provides RAML fragments, custom packages, videos, links to documentation, and other assets.
Discover Anypoint Platform APIs on the MuleSoft developer portal.
Design and Build API-based Integrations
Anypoint Studio is an Eclipse-based IDE that helps you create integrations. Design Center, a web-based tool, helps you create API specifications, the foundation for data integrations.
Studio 7 includes Mule Runtime (Mule) 4, the runtime environment where you deploy Mule apps and APIs. Earlier versions of Studio include Mule 3.
Test Integrations
Test the integrations you create before you deploy them.
Deploy Integrations
Deploy your integration into a production environment.
Manage Integrations
Once your integration is running in production, you can monitor its performance.
Manage Anypoint Platform Features
During development or after deployment, you can make changes to your integration.
More Resources
Archived Documentation
When an older version of a product is no longer supported, including products with end-of-life status, the documentation moves to an archive site. | https://docs.mulesoft.com/general/ | 2021-01-15T17:45:23 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['_images/api-led-architecture.png',
'Diagram of structured API layers for experience'], dtype=object)] | docs.mulesoft.com |
The VLINGO/STREAMS component implements the Reactive Streams specification for the VLINGO/PLATFORM. It is fully based on VLINGO/ACTORS to provide reactive concurrency for stream processing.
There are four abstractions used by the Reactive Streams specification, which are entirely implemented in VLINGO/STREAMS.
It is possible that you will never implement a
Publisher,
Subscriber,
Subscription, or
Processor yourself. Default implementations of these are provided by VLINGO/STREAMS. You will, instead implement
Source and
Sink types.
The
Subscriber requests a
Subscription from a
Publisher and specifies the number of elements it can accept. By specifying the number of elements it can accept, backpressure is enforced on the
Publisher so that it does not overwhelm the
Subscriber.
The following is an example of how to create a
Publisher that produces a sequenced series of
Long values and a
Subscriber with a
Sink that prints those as
String values.
final long max = 10;final Publisher publisher =world.actorFor(Publisher.class,StreamPublisher.class,Source.rangeOf(1, max + 1),PublisherConfiguration.defaultDropHead());final Subscriber subscriber =world.actorFor(Subscriber.class,StreamSubscriber.class,Sink.printToStdout("> "),max);publisher.subscribe(subscriber);
With no further code the above produces the following output.
// RESULTS> 1> 2> 3> 4> 5> 6> 7> 8> 9> 10
Similarly a
Processor is created, which is both a
Subscriber to an upstream
Publisher and a
Publisher to an downstream
Subscriber.
The following example reuses the
Publisher and
Subscriber from the previous example, but injects the
Processor in between the two, where the
Processor transforms the
Long values to
Double values.
final LongToDoubleMapper transformer = new LongToDoubleMapper();final Processor<String,Integer> processor =world.actorFor(Processor.class,StreamProcessor.class,transformer,10,PublisherConfiguration.defaultDropHead());processor.subscribe(subscriber);publisher.subscribe(processor);
The above produces this output.
// RESULTS> 1.0> 2.0> 3.0> 4.0> 5.0> 6.0> 7.0> 8.0> 9.0> 10.0
A
Source is the source of a stream of elements relayed by a
Publisher. A
Sink is the destination of the elements provided to a
Subscriber by its
Publisher. Next are ways that these may be used.
You may reuse predefined
Source types, but you will also develop your own implementations. This is the
Source protocol as provided by VLINGO/STREAMS.
package io.vlingo.reactivestreams;public interface Source<T> {Completes<Elements<T>> next();Completes<Elements<T>> next(final int maximumElements);Completes<Elements<T>> next(final long index);Completes<Elements<T>> next(final long index, final int maximumElements);Completes<Boolean> isSlow();}
The job of a
Source is to take requests for the next elements in a stream, which are returned as a
Completes<Elements<T>>. This means that the
Elements<T> are completed at some future time. See
Completes<T> for details on use.
The
isSlow() protocol answers a
Completes<Boolean> indicating whether the
Source will tend to be slow in providing next elements. The following demonstrates how you may answer.
@Overridepublic Completes<Boolean> isSlow() {return Completes.withSuccess(false);}
Of course, answering
true or
false accurately is vitally important. If your Source is slow, stating so enables the
Publisher to decide on the kind of
Scheduler to use between probes for next elements. A slow
Source will be managed by repeated schedule-once timer intervals, while a fast
Source will be managed by a consistently schedule-many repeating time interval.
The following is an example of a fast
Source that lazily (not all preallocated in a
List) provides a range of
Long values.
package io.vlingo.reactivestreams.source;import io.vlingo.common.Completes;import io.vlingo.reactivestreams.Elements;import io.vlingo.reactivestreams.Source;public class LongRangeSource implements Source<Long> {private long current;public final long endExclusive;public final long startInclusive;public LongRangeSource(final long startInclusive, final long endExclusive) {assert(startInclusive <= endExclusive);assert(startInclusive >= 0 && startInclusive <= Long.MAX_VALUE);this.startInclusive = startInclusive;assert(endExclusive >= 0 && endExclusive <= Long.MAX_VALUE);this.endExclusive = endExclusive;this.current = startInclusive;}@Overridepublic Completes<Elements<Long>> next() {if (current < endExclusive) {final Long[] element = new Long[1];element[0] = current++;return Completes.withSuccess(new Elements<>(element, false));}return Completes.withSuccess(new Elements<>(new Long[0], true));}@Overridepublic Completes<Elements<Long>> next(final int maximumElements) {return next();}@Overridepublic Completes<Elements<Long>> next(long index) {return next();}@Overridepublic Completes<Elements<Long>> next(final long index, final int maximumElements) {return next();}@Overridepublic Completes<Boolean> isSlow() {return Completes.withSuccess(false);}@Overridepublic String toString() {return "LongRangeSource [startInclusive=" + startInclusive +" endExclusive=" + endExclusive + " current=" + current + "]";}}
See the Javadocs on
Source for full API explanations.
There are factory methods available to create
Source instances, and default implementation types of which these produce instances.
public interface Source<T> {static <T> Source<T> empty() ...static <T> Source<T> only(final T... elements) ...static Source<Long> rangeOf(final long startInclusive, final long endExclusive) ...static <T> Source<T> with(final Iterable<T> iterable) ...static <T> Source<T> with(final Iterable<T> iterable, final boolean slowIterable) ...static <T> Source<T> with(final Supplier<T> supplier) ...static <T> Source<T> with(final Supplier<T> supplier, final boolean slowSupplier) ...}
The number of these and the backing
Source types will grow over future releases.
You may reuse predefined
Sink types, but you will develop your own implementations. This is the
Sink protocol as provided by VLINGO/STREAMS.
public interface Sink<T> {void ready();void terminate();void whenValue(final T value);}
The
ready() indicates that the
Sink should become prepared to handle incoming values. The
terminate() indicates that the
Sink is being terminated and will no longer receive values. The
whenValue(T value) is used to provide the next available value from the
Subscriber.
The following is an example of a
Sink that prints received values.
package io.vlingo.reactivestreams.sink;import java.io.PrintStream;import io.vlingo.reactivestreams.Sink;public class PrintSink<T> implements Sink<T> {private final PrintStream printStream;private final String prefix;private boolean terminated;public PrintSink(final PrintStream printStream, final String prefix) {this.printStream = printStream;this.prefix = prefix;this.terminated = false;}@Overridepublic void ready() {// ignored}@Overridepublic void terminate() {terminated = true;}@Overridepublic void whenValue(final T value) {if (!terminated) {printStream.println(prefix + value.toString());}}@Overridepublic String toString() {return "PrintSink[terminated=" + terminated + "]";}}
See the Javadocs on
Sink for full API explanations.
There are factory methods available to create
Sink instances, and default implementation types of which these produce instances.
public interface Sink<T> {static <T> Sink<T> consumeWith(final Consumer<T> consumer) ...static <T> Sink<T> printToStdout(final String prefix) ...static <T> Sink<T> printToStderr(final String prefix) ...static <T> Sink<T> printTo(final PrintStream printStream, final String prefix) ...}
The number of these and the backing
Sink types will grow over future releases.
Some functions in the VLINGO/PLATFORM, such as queries provided by VLINGO/SYMBIO, answer a
Completes<Stream>. The
Stream that is eventually available inside the
Completes is used to consume the elements of streaming data.
package io.vlingo.reactivestreams;public interface Stream {<S> void flowInto(final Sink<S> sink);<S> void flowInto(final Sink<S> sink, final long flowElementsRate);<S> void flowInto(final Sink<S> sink, final long flowElementsRate, final int probeInterval);void request(final long flowElementsRate);void stop();}
The
flowInto() overrides are used to start the stream flowing into a given
Sink<S>. Besides causing the flow to begin, additional options are available. To control the initial rate at which elements flow, as in the number of elements that will arrive in a single burst, pass a value for
flowElementsRate. If no parameter is provided, the default is
100.
public static final long DefaultFlowRate = 100;
To indicate the number of milliseconds between probes for
Source<T> elements, pass
probeInterval. The default if not explicitly provided by the client is
5 milliseconds. There are other constants that you may use. Ultimately the performance of stream throughput is based on the amount of throughput available to the
Source<S> and the speed at which the
Source<T> is probed by the
StreamPublisher to retrieve that data.
// 5 millisecondspublic static final int DefaultProbeInterval = PublisherConfiguration.DefaultProbeInterval;// 2 millisecondspublic static int FastProbeInterval = PublisherConfiguration.FastProbeInterval;// 1 millisecondpublic static int FastestProbeInterval = PublisherConfiguration.FastestProbeInterval;
The
request() operation may be used to change the flow rate after the initial rate has been established. The
stop() method may be used to completely terminate the flow of elements from the
Source<T> and
Publisher<T>. Note, however, that since the flow is asynchronous it is possible that some elements are already incoming to the
Sink<S> and thus will not be prevented from arriving. If no parameter is provided, the default is
5.
For examples of how
Stream is used, see Streaming Persistent Data.
In VLINGO/STREAMS an operator is used to perform a specific kind of data transformation. When using a
StreamProcessor you will probably want to transform the
StreamSubscriber side of the processor's incoming data to another type that is outgoing through its
StreamPublisher side. You can use an operator to do that.
package io.vlingo.reactivestreams;public interface Operator<T,R> {void performInto(final T value, final Consumer<R> consumer);}
The
Operator<T, R> takes type
T as an input
value and provides type
R as output. Output is delivered to a
Consumer<R> of type
R, which means the
R value is produced before the
Consumer<R> receives it. This is where the implementation of the
Operator<T, R> plays in. Note that there are currently two factory methods provided on the
Operator interface.
package io.vlingo.reactivestreams;public interface Operator<T,R> {static <T> Operator<T,T> filterWith(final Predicate<T> filter) {return new Filter<>(filter);}static <T,R> Operator<T,R> mapWith(final Function<T,R> mapper) {return new Mapper<>(mapper);}...}
Thus, there are two basic kinds of operators, a filter and a mapper. A filter outputs the same type that it takes as input, but may produce less output than it receives as input. The mapper outputs a different type than it takes as input, as it is responsible to map/transform the input to another type. Actually the mapper may produce the same type of output as the input, but perhaps the data inside the type has been enriched or restricted in some way.
Here is an example filter.
final List<String> list =Arrays.asList("ABC", "321", "123", "456", "DEF", "214");final List<String> results = new ArrayList<>();final Operator<String,String> filter =Operator.filterWith((s) -> s.contains("1"));list.forEach(possible ->filter.performInto(possible, (match) -> results.add(match)));Assert.assertEquals(3, results.size());Assert.assertEquals("321", results.get(0));Assert.assertEquals("123", results.get(1));Assert.assertEquals("214", results.get(2));
In this example (broken down into multiple steps for clarity) a
filter is created that filters in all
String instances that contain the substring
"1", and filters out all others. Then the
List<String> of six elements is iterated over and the filter's
performInto() operation is used. If the filter
Predicate<T> of
s.contains("1") is satisfied, the match is added to the
List<String> of
results, which then contains elements
"321",
"123", and
"214".
Next is an example of a mapper.
final List<String> list = Arrays.asList("123", "456", "789");final List<Integer> results = new ArrayList<>();final Operator<String,Integer> mapper =Operator.mapWith((s) -> Integer.parseInt(s));list.forEach(digits ->mapper.performInto(digits, (number) -> results.add(number)));Assert.assertEquals(3, results.size());Assert.assertEquals(123, (int) results.get(0));Assert.assertEquals(456, (int) results.get(1));Assert.assertEquals(789, (int) results.get(2));
In this example (broken down into multiple steps for clarity) a
mapper is created that maps all
String instances of digit characters to
Integer numbers. The
List<String> of three elements is iterated over and the mapper's
performInto() operation is used. The new value is added to the
List<Integer> of
results, which then contains elements
123,
456, and
789.
The following demonstrates how a mapper can use flat-map.
final List<String> list1 = Arrays.asList("1", "2", "3");final List<String> list2 = Arrays.asList("4", "5", "6");final List<String> list3 = Arrays.asList("7", "8", "9");final List<List<String>> lists = Arrays.asList(list1, list2, list3);final List<Integer> results = new ArrayList<>();final Function<List<List<String>>, List<Integer>> mapper =(los) -> los.stream().flatMap(list -> list.stream().map(s -> Integer.parseInt(s))).collect(Collectors.toList());final Operator<List<List<String>>,List<Integer>> flatMapper =Operator.mapWith(mapper);flatMapper.performInto(lists, (numbers) -> results.addAll(numbers));Assert.assertEquals(9, results.size());Assert.assertEquals(1, (int) results.get(0));Assert.assertEquals(2, (int) results.get(1));Assert.assertEquals(3, (int) results.get(2));
Note that there is a
List<List<String>> lists that is a list of lists. The
mapper is created that internally uses the Java
Stream::flatMap. The
mapper first streams over the
List<List<String>> lists. It then uses
flatMap to see each of the single
List<String> list. It then streams over each of those lists, and each individual
String is transformed to an
Integer.
You can give a
StreamProcessor an
Operator<T, R> such as these, or use them in your
Source and
Sink implementations.
We will be adding more functional interfaces to our various types over future releases.
There are several ways to stream your data. You have already seen specific examples in the above content. In this section you are introduced to other ways.
Data that is persisted inside a VLINGO/STREAMS store can be streamed out to one or more subscribers. One of the most basic examples is provided by the
StateStore.
package io.vlingo.symbio.store.state;public interface StateStoreReader {...Completes<Stream> streamAllOf(final Class<?> stateType);Completes<Stream> streamSomeUsing(final QueryExpression query);}
A
StateStoreReader is provided as part of the
StateStore protocol. Thus, you may ask a
StateStore to stream data from it's internal storage. The two interfaces provided support streaming all of a given state type; that is, a
stateType is a Java
Class<?> of which all states of that type are stored in a single container. In some storage mechanisms that container may be a database table or an in-memory grid region.
By requesting a stream of all of a given
stateType, the
StateStoreReader queries that data from the container and provides an instance of
io.vlingo.reactivestreams.Stream.
// somewhere many EquityState instances are writtenfinal Equity equity = new Equity(...);...store.write(equity.id, equity.state, equity.version, interest);...final Completes<Stream> stream =store.streamAllOf(EquityState.class);stream.andThen(all ->all.flowInto(new ConsumerSink<>((equityState) -> reportOn(equityState));
You may also constrain your query to a subset of the whole container. The query is storage-type dependent.
final Completes<Stream> stream =store.streamSomeUsing(QueryExpression.using(EquityState.class,"select ... from tbl_equity where ..."));stream.andThen(some ->some.flowInto(new ConsumerSink<>((equityState) -> reportOn(equityState));
You may also stream persisted
Source<?> types, such as
DomainEvent and
Command instances. For example, the
Journal<T> provides this streaming interface through the
JournalReader<T>, which in turn is an
EntryReader<T>.
// Journalpackage io.vlingo.symbio.store.journal;public interface Journal<T> {...<ET extends Entry<?>> Completes<JournalReader<ET>>journalReader(final String name);}// JournalReaderpackage io.vlingo.symbio.store.journal;public interface JournalReader<T extends Entry<?>> extends EntryReader<T> { }// EntryReaderpackage io.vlingo.symbio.store.journal;public interface EntryReader<T extends Entry<?>> {...Completes<Stream> streamAll();}
These are used in the following example, first to request the
JournalReader, then the
Stream of all
EntryBundle instances, and then to flow the stream into the
Sink<EntryBundle> for reporting on each event.
final Sink<EntryBundle> sink =Sink.consumeWith((bundle) -> reportOn(bundle.source));journal.journalReader("events-reporter").andThenTo(reader -> reader.streamAll()).andThen(stream -> stream.flowInto(sink, 50));
All stores—
ObjectStore,
StateStore, and
Journal—support persisting a totally ordered stream of
Source<?> entries, such as
DomainEvent and
Command. You may obtain any of the total streams using the same basic techniques. | https://docs.vlingo.io/vlingo-streams/ | 2021-01-15T16:50:36 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.vlingo.io |
futhark-run¶
DESCRIPTION¶
Execute the given program by evaluating the
main function with
arguments read from standard input, and write the results on standard
output.
futhark run is very slow, and in practice only useful for testing,
teaching, and experimenting with the language. Certain special
debugging functions are available in
futhark run:
trace 'a : a -> a
Semantically identity, but prints the value on standard output.
break 'a : a -> a
Semantically identity, but interrupts execution at the calling point, such that the environment can be inspected. Continue execution by entering an empty input line. Breakpoints are only respected when starting a program from the prompt, not when passing a program on the command line.
OPTIONS¶
- -e NAME
Run the given entry point instead of
main.
- -h
Print help text to standard output and exit.
- -V
Print version information on standard output and exit.
- -w, --no-warnings
Disable interpreter warnings.
SEE ALSO¶
futhark-repl, futhark-test | https://futhark.readthedocs.io/en/latest/man/futhark-run.html | 2021-01-15T17:22:57 | CC-MAIN-2021-04 | 1610703495936.3 | [] | futhark.readthedocs.io |
@Internal public interface Archiver
An archiver is an object capable of creating archive files and extracting their content. Each archiver should be capable of handling at least one type of archive, e.g. zip or gz.
void compressFiles(@NotNull Iterable<File> sourceFiles, @NotNull File archiveFile, @Nullable File baseDirectory) throws IOException
sourceFiles- Files to archive. Each file will exist. If baseDirectory argument is given, all files will be contained within that directory.
archiveFile- Archive file to create. Will not exist.
baseDirectory- Optional argument: base directory of all the files to compress. If given, all file paths in the resulting archive should be calculated relatively to this directory. If skipped, all source files should be placed in the archive root. If the argument is present, the file will exist and will be a directory.
IOException- if any IO operation fails
void extractArchive(@NotNull File archiveFile, @NotNull File destinationDirectory) throws IOException
archiveFile- Archive file. Will exists.
destinationDirectory- Directory to which the content of the archive should be copied. This file will exist and will be a directory.
IOException- if any IO operation fails
@NotNull List<String> getArchiveFileExtensions()
Copyright © 2016 Atlassian Software Systems Pty Ltd. All rights reserved. | https://docs.atlassian.com/atlassian-bamboo/5.10.1.1/com/atlassian/bamboo/archive/Archiver.html | 2021-01-15T19:33:21 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.atlassian.com |
Audio streams¶
Вступ that while WAV files may contain looping information in their metadata, Ogg Vorbis files do not. If looping an Ogg Vorbis file is desired, it must be set up using the import options:
There are other types of AudioStreamPlayer,.
Примітка
Area2Ds can be used to divert sound from any AudioStreamPlayer2Ds they contain to specific buses. This makes it possible to create buses with different reverb or sound qualities to handle action happening in a particular parts of your game world.
.
:
At the same time, a special bus layout is created where each area receives the reverb info from each area. A Reverb effect needs to be created and configured in each reverb bus to complete the setup for the desired effect:
.
Доплер:
Enable it by setting it depending on how objects will be moved:
use Idle for objects moved using
_process, or Physics
for objects moved using
_physics_process. The tracking will
happen automatically. | https://docs.godotengine.org/uk/stable/tutorials/audio/audio_streams.html | 2021-01-15T17:58:50 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['../../_images/audio_stream_import1.png',
'../../_images/audio_stream_import1.png'], dtype=object)
array(['../../_images/audio_stream_2d_area.png',
'../../_images/audio_stream_2d_area.png'], dtype=object)
array(['../../_images/audio_stream_3d_area.png',
'../../_images/audio_stream_3d_area.png'], dtype=object)
array(['../../_images/audio_stream_reverb_bus.png',
'../../_images/audio_stream_reverb_bus.png'], dtype=object)
array(['../../_images/audio_stream_reverb_bus2.png',
'../../_images/audio_stream_reverb_bus2.png'], dtype=object)
array(['../../_images/audio_stream_doppler.png',
'../../_images/audio_stream_doppler.png'], dtype=object)] | docs.godotengine.org |
Host Dependencies
This page describes how to set up your host machine to build and run seL4 and its supported projects. To compile and use seL4 you can either:
- Recommended: Use Docker to isolate the dependencies from your machine. Detailed instructions for using Docker for building seL4, Camkes, and L4v can be found here.
- Install the following dependencies on your local OS
The following instructions describe how to set up the required dependencies on your local OS. This page assumes you are building in a Linux OS. We however encourage site contributions for building in alternative OSes (e.g. macOS).
Get Google’s Repo tool
The primary way of obtaining and managing seL4 project source is through the use of Google’s repo tool. To get repo, follow the instructions described in the section “Installing Repo” here.
See the RepoCheatsheet page for a quick explanation of how we use Repo.
seL4 Build Dependencies
To build seL4-based projects, ensure you have installed the dependencies described in the Base Build Dependencies and Python Dependencies sections below.
Base Build Dependencies
To establish a usable development environment it is important to install your distributions basic build packages.
Ubuntu
The following instructions cover the build dependencies tested on Ubuntu 18.04 LTS. Note that earlier versions of Ubuntu (e.g. 16.04) may not be sufficient for building as some default development packages are stuck at older versions (e.g CMake 3.5.1, GCC 5.4 for 16.04). As dependencies and packages may be frequently changed, deprecated or updated these instructions may become out of date. If you discover any missing dependencies and packages we welcome new contributions to the page.
Note that we require a minimum CMake version of 3.12.0 while Ubuntu 18.04 contains 3.10.2. In order to correct this, a custom installation of CMake may be required which can be downloaded from:
The basic build package on Ubuntu is the
build-essential package. To install run:
sudo apt-get update sudo apt-get install build-essential
Additional base dependencies for building seL4 projects on Ubuntu include installing:
sudo apt-get install cmake ccache ninja-build cmake-curses-gui sudo apt-get install python-dev python-pip python3-dev python3-pip sudo apt-get install libxml2-utils ncurses-dev sudo apt-get install curl git doxygen device-tree-compiler sudo apt-get install u-boot-tools sudo apt-get install protobuf-compiler python-protobuf
To build for ARM targets you will need a cross compiler. In addition, to run seL4 projects on a simulator you will need
qemu. Installation of these additional base dependencies include running:
sudo apt-get install gcc-arm-linux-gnueabi g++-arm-linux-gnueabi sudo apt-get install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu sudo apt-get install qemu-system-arm qemu-system-x86 qemu-system-misc
(you can install the hardware floating point versions as well if you wish”
sudo apt-get install gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf
Debian
For Debian Stretch or later
The dependencies listed in our docker files repository will work for a Debian installation. You can refer to this repository for an up-to-date list of base build dependencies. Specifically refer to the dependencies listed in the:
The version of
cmake in Debian stretch is too old to build seL4 projects (buster and later are OK). If you are on stretch, install
cmake from stretch-backports:
Add the stretch-backports repository like this (substitute a local mirror for if you like)
sudo sh -c "echo 'deb stretch-backports main' > /etc/apt/sources.list.d/backports.list"
Then install
cmake with
sudo apt-get update sudo apt-get -t stretch-backports install cmake
Python Dependencies
Regardless of your Linux distribution, python dependencies are required to build seL4, the manual and its proofs. To install you can run:
pip3 install --user setuptools pip3 install --user sel4-deps # Currently we duplicate dependencies for python2 and python3 as a python3 upgrade is in process pip install --user setuptools pip install --user sel4-deps
(Some distributions use
pip for python3 and
pip2 for python2; others uses
pip for python2 and
pip3 for python3. Use the Python 3 version for your distribution)
CAmkES Build Dependencies
To build a CAmkES based project on seL4, additional dependencies need to be installed on your host machine. Projects using CAmkES (the seL4 component system) need Haskell and some extra python libraries in addition to the standard build tools. The following instructions cover the CAmkES build dependencies for Ubuntu/Debian. Please ensure you have installed the dependencies listed in sections sel4 Build Dependencies and Get Google’s Repo tool prior to building a CAmkES project.
Python Dependencies
The python dependencies required by the CAmkES build toolchain can be installed via pip:
pip3 install --user camkes-deps # Currently we duplicate dependencies for python2 and python3 as a python3 upgrade is in process pip install --user camkes-deps
Haskell Dependencies
The CAmkES build toolchain additionally requires Haskell. You can install the Haskell stack on your distribution by running:
curl -sSL | sh
If you prefer not to bypass your distribution’s package manager, you can do
sudo apt-get install haskell-stack
Build Dependencies
Ubuntu
Tested on Ubuntu 18.04 LTS
Install the following packages on your Ubuntu machine:
sudo apt-get install clang gdb sudo apt-get install libssl-dev libclang-dev libcunit1-dev libsqlite3-dev sudo apt-get install qemu-kvm
Debian
For Debian Stretch or later
The dependencies listed in our docker files repository will work for a Debian installation. You can refer to this repository for an up-to-date list of base build dependencies. Specifically refer to the dependencies listed in the:
Proof and Isabelle Dependencies
Proof Dependencies
Linux Packages - Debian
On Buster or Bullseye, \ rsync
There is no package for the MLton compiler on Buster or Bullseye, so you will need to install it from the MLton website.
The Haskell Stack package is unavailable on Bullseye and out-of-date on Buster, so you will need to install it from the Haskell Stack website.
Linux Packages - Ubuntu
On Ubuntu 18.04, \ mlton-compiler haskell-stack
Python
The build system for the seL4 kernel requires several python packages:
sudo pip3 install --upgrade pip sudo pip3 install sel4-deps
Haskell Stack
After installing
haskell-stack, make sure
you’ve adjusted your
PATH to include
$HOME/.local/bin, and that you’re
running an up-to-date version:
stack upgrade --binary-only which stack # should be $HOME/.local/bin/stack
MacOS
Other than the cross-compiler
gcc toolchain, setup on MacOS should be similar
to that on Ubuntu. To set up a cross-compiler, try the following:
- Install
XCodefrom the AppStore and its command line tools. If you are running MacPorts, you have these already. Otherwise, after you have XCode installed, run
gcc --versionin a terminal window. If it reports a version, you’re set. Otherwise it should pop up a window and prompt for installation of the command line tools.
- Install the seL4 Python dependencies, for instance using
sudo easy_install sel4-deps.
easy_installis part of Python’s
setuptools.
- Install the
misc/scripts/cppwrapper for clang, by putting it in
~/bin, or somewhere else in your
PATH.
Isabelle Setup
After the repository is set up using
repo with
seL4/verification-manifest, you should have following directory
structure, where
l4v is the repository you are currently looking at:
verification/ isabelle/ l4v/ seL4/
To set up Isabelle for use in
l4v/, assuming you have no previous
installation of Isabelle, run the following commands in the directory
verification/l4v/:
mkdir -p ~/.isabelle/etc cp -i misc/etc/settings ~/.isabelle/etc/settings ./isabelle/bin/isabelle components -a ./isabelle/bin/isabelle jedit -bf ./isabelle/bin/isabelle build -bv HOL-Word
These commands perform the following steps:
- create an Isabelle user settings directory.
- install L4.verified Isabelle settings. These settings initialise the Isabelle installation to use the standard Isabelle
contribtools from the Munich Isabelle repository and set up paths such that multiple Isabelle repository installations can be used side by side without interfering with each other.
- download
contribcomponents from either the Munich or TS repository. This includes Scala, a Java JDK, PolyML, and multiple external provers. You should download these, even if you have these tools previously installed elsewhere to make sure you have the right versions. Depending on your internet connection, this may take some time.
- compile and build the Isabelle PIDE jEdit interface.
- build basic Isabelle images, including
HOL-Wordto ensure that the installation works. This may take a few minutes.
Alternatively, it is possible to use the official Isabelle2020 release
bundle for your platform from the Isabelle website. In this case, the
installation steps above can be skipped, and you would replace the directory
verification/isabelle/ with a symbolic link to the Isabelle home directory
of the release version. Note that this is not recommended for development,
since Google repo will overwrite this link when you synchronise repositories
and Isabelle upgrades will have to be performed manually as development
progresses. | https://docs.sel4.systems/projects/buildsystem/host-dependencies.html | 2021-01-15T17:45:20 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.sel4.systems |
Elfloader
The elfloader is responsible for preparing the hardware for seL4 on ARM and RISC-V. It loads the kernel and user image from an embedded CPIO archive, initialises secondary cores (if SMP is enabled), and sets up an initial set of page tables for the kernel.
ARM
On ARM platforms, the elfloader supports being booted in four ways: as a binary image, as a u-boot uImage, as an ELF file, and as an EFI executable. Each of these methods differs slightly. It can also provide seL4 with a DTB - either from the bootloader or included in the embedded CPIO archive.
- (EFI only) Elfloader is entered at
_gnuefi_startentry point.
- (EFI only) Elfloader relocates itself
- Elfloader
_startis called. This is in
arch-arm/<arch_bitness>/crt0.S.
- Elfloader initialises the driver framework, which enables UART/printf.
- Elfloader loads the kernel, user image, and DTB, determining where the kernel needs to be mapped in memory.
- If the kernel window overlaps the elfloader’s code:
- (AArch32 EFI only) the elfloader relocates itself. See
relocate_below_kernelfor a detailed explanation of the relocation logic.
- (Other platforms) the elfloader aborts.
- The elfloader resumes booting. If it relocated itself, it will re-initialise the driver model.
- If the elfloader is in HYP mode but seL4 is not configured to support HYP, it will leave HYP mode.
- The elfloader sets up the initial page tables for the kernel (see
init_hyp_boot_vspaceor
init_boot_vspace).
- If SMP is enabled, the elfloader boots all secondary cores.
- The elfloader enables the MMU.
- The elfloader launches seL4, passing information about the user image and the DTB.
Binary
The elfloader expects to be executed with a base address as generated by the
shoehorn utility.
You can determine the correct address for a given image by running
aarch64-linux-gnu-objdump -t elfloader/elfloader | grep _text
from the kernel build directory. The first field in the output contains the base address.
On aarch64, the elfloader will try and move itself to the right address, however, this will fail if the load address and the correct address are too close, as the relocation code will be overwritten.
It is also possible to override
shoehorn and hardcode a load address by setting IMAGE_START_ADDR in CMake.
U-Boot
The elfloader can be booted according to the Linux kernel’s booting convention for ARM/ARM64. The DTB, if provided, will be passed to seL4 (which will then pass it to the root task).
ELF
The elfloader supports being executed as an ELF image (via
bootelf in U-Boot or similar).
EFI
The elfloader integrates EFI support based on the
gnu-efi project. It will relocate itself as appropriate,
and supports loading a DTB from the EFI implementation.
RISC-V
On RISC-V the elfloader is launched by the
bbl, which is integrated in the seL4 build system.
The
bbl brings up secondary cores, and the elfloader uses SBI to provide a serial output on RISC-V.
Driver framework
The elfloader provides a driver framework to reduce code duplication between platforms. Currently the driver framework is only used for UART output, however it is designed with extensibility in mind. In practice, this is currently only used on ARM, as RISC-V uses SBI for UART, and SBI has no device tree entries. However, in the future it may become useful on RISC-V.
The driver framework uses a header file containing a list of devices generated by the
hardware_gen.py utility
included in seL4. Currently, this header only includes the UART specified by the
stdout-path property in the DTB.
Each device in the list has a compatible string (
compat), and a list of addresses (
region_bases[]) which correspond to the regions specified
by the
reg property in the DTB.
Each driver in the elfloader has a list of compatible strings, matching those found in the device tree. For instance, the 8250 UART driver, used on Tegra and TI platforms has the following:
static const struct dtb_match_table uart_8250_matches[] = { { .compatible = "nvidia,tegra20-uart" }, { .compatible = "ti,omap3-uart" }, { .compatible = "snps,dw-apb-uart" }, { .compatible = NULL /* sentinel */ }, };
Each driver also has a ‘type’. Currently the only type supported is
DRIVER_UART. The
type
indicates the type of struct that is found in the
ops pointer of each driver object,
and provides type-specific functionality.
(For instance, UART drivers have a
elfloader_uart_ops struct which contains a
putc function).
Finally, drivers also provide an
init function, which is called when the driver is matched with a device,
and can be used to perform device-specific setup (e.g. setting the device as the UART output).
Finally, each driver has a
struct elfloader_driver and a corresponding
ELFLOADER_DRIVER statement.
Taking the 8250 UART driver as an example again:
static const struct elfloader_driver uart_8250 = { .match_table = uart_8250_matches, .type = DRIVER_UART, .init = &uart_8250_init, .ops = &uart_8250_ops, }; ELFLOADER_DRIVER(uart_8250);
UART
The driver framework provides a “default” (
__attribute__((weak))) implementation of
plat_console_putchar, which calls
the
putc function for the elfloader device provided to
uart_set_out - discarding all characters
that are given to it before
uart_set_out is called. This can be overridden if you do not wish to use
the driver framework (e.g. for very early debugging).
Porting the elfloader
To ARM
Once a kernel port has been started (and a DTB provided), porting the elfloader to a platform is reasonably straightforward.
Most platform-specific information is extracted from a DTB, including available physical memory ranges. If the
platform uses a UART compatible with another platform, even the UART will work out of the box. In other cases,
it might be necessary to add a new
dtb_match_table entry to an existing driver, or add a new driver
(which is fairly trivial - only the
match_table and
putchar functions from an existing driver would
need to be changed).
An appropriate image type needs to be selected. By default
ElfloaderImage is set to
elf, however,
various platform-specific overrides exist and can be found in
ApplyData61ElfLoaderSettings in this repo, at
cmake-tool/helpers/application_settings.cmake.
To RISC-V
TODO - it seems there’s not actually that much that needs to be done on the elfloader side.
File included from github repo edit | https://docs.sel4.systems/projects/elfloader/ | 2021-01-15T18:21:28 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.sel4.systems |
Ubuntu.Components.BottomEdge
A component to handle bottom edge gesture and content. More...
Properties
- activeRegion : BottomEdgeRegion
- contentComponent : Component
- contentItem : Item
- contentUrl : url
- dragDirection : DragDirection
- dragProgress : real
- hint : BottomEdgeHint
- preloadContent : bool
- regions : list<BottomEdgeRegion>
- status : Status
Signals
Methods
Detailed Description
The component provides bottom edge content handling. The bottom egde feature is typically composed of a hint and some content. The contentUrl is committed (i.e. fully shown) when the drag is completed after it has been dragged for a certain amount, that is 30% of the height of the BottomEdge. The contentUrl can be anything, defined by contentUrl or contentComponent.
As the name suggests, the component automatically anchors to the bottom of its parent and takes the width of the parent. The drag is detected within the parent area, and the height drives till what extent the bottom edge content should be exposed on commit call. The content is centered into a panel which is dragged from the bottom of the BottomEdge. The content must specify its width and height.
import QtQuick 2.4 import Ubuntu.Components 1.3 MainView { width: units.gu(40) height: units.gu(70) Page { id: page title: "BottomEdge" BottomEdge { height: parent.height - units.gu(20) hint.text: "My bottom edge" contentComponent: Rectangle { width: page.width height: page.height color: UbuntuColors.green } } } }
Note: The content is specified either through contentUrl or contentComponent, where contentComponent has precedence over contentUrl.
There can be situations when the content depends on the progress of the drag. There are two possibilities to follow this, depending on the use case. The dragProgress provides live updates about the fraction of the drag.
BottomEdge { id: bottomEdge height: parent.height hint.text: "progression" contentComponent: Rectangle { width: bottomEdge.width height: bottomEdge.height color: Qt.rgba(0.5, 1, bottomEdge.dragProgress, 1); } }
The other use case is when the content needs to be completely different in certain regions of the area. These regions can be defined through BottomEdgeRegion elements listed in the regions property.
import QtQuick 2.4 import Ubuntu.Components 1.3 MainView { width: units.gu(40) height: units.gu(70) Page { title: "BottomEdge" BottomEdge { id: bottomEdge height: parent.height - units.gu(20) hint.text: "My bottom edge" contentComponent: Rectangle { width: bottomEdge.width height: bottomEdge.height color: bottomEdge.activeRegion ? bottomEdge.activeRegion.color : UbuntuColors.green } regions: [ BottomEdgeRegion { from: 0.4 to: 0.6 property color color: UbuntuColors.red }, BottomEdgeRegion { from: 0.6 to: 1.0 property color color: UbuntuColors.silk } ] } } }
Note: Custom regions override the default declared ones. Therefore there must be one region which has its to limit set to 1.0 otherwise the content will not be committed at all.
Note: Regions can also be declared as child elements the same way as resources.
The BottomEdge takes ownership over the custom BottomEdgeRegions, therefore we cannot 'reuse' regions declared in other BottomEdge components, as those will be destroyed together with the reusing BottomEdge component. The following scenario only works if the customRegion is not used in any other regions.
Page { BottomEdge { id: bottomEdge hint.text: "reusing regions" // put your content and setup here regions: [customRegion] } BottomEdgeRegion { id: customRegion from: 0.2 } }
Page As Content
BottomEdge accepts any component to be set as content. Also it can detect whether the content has a PageHeader component declared, and will inject a collapse navigation action automatically. In case the content has no header, the collapse must be provided by the content itself by calling the collapse function.
BottomEdge { id: bottomEdge height: parent.height hint.text: "Sample collapse" contentComponent: Rectangle { width: bottomEdge.width height: bottomEdge.height color: Qt.rgba(0.5, 1, bottomEdge.dragProgress, 1); Button { text: "Collapse" onClicked: bottomEdge.collapse() } } }
Alternatively you can put a PageHeader component in your custom content as follows:
BottomEdge { id: bottomEdge height: parent.height hint.text: "Injected collapse" contentComponent: Rectangle { width: bottomEdge.width height: bottomEdge.height color: Qt.rgba(0.5, 1, bottomEdge.dragProgress, 1); PageHeader { title: "Fancy content" } } }
Styling
Similar to the other components the default style is expected to be defined in the theme's BottomEdgeStyle. However the style is not parented to the BottomEdge itself, but to the BottomEdge's parent item. When loaded, the style does not fill the parent but its bottom anchor is set to the bottom of the BottomEdge. Beside this the hint is also parented to the style instance. Custom styles are expected to implement the BottomEgdeStyle API.
See also BottomEdgeRegion.
Property Documentation
Specifies the current active region.
The property holds the component defining the content of the bottom edge. The property behaves the same way as Loader's sourceComponent property.
The property holds the item created either from contentUrl or contentComponent properties.
The property holds the url to the document defining the content of the bottom edge. The property behaves the same way as Loader's source property.
The property reports the current direction of the drag. The direction is flipped when the drag passes the drag threshold.
Defaults to Undefined
The property specifies the proggress of the drag within [0..1] interval.
The property holds the component to display the hint for the bottom edge element.
If set, all the contents set in the component and in regions will be loaded in the background, so it will be available before it is revealed.
The property holds the custom regions configured for the BottomEdge. The default configuration contains one region, which commits the content when reached. The defaults can be restored by setting an empty list to the property or by calling regions.clear(). See BottomEdgeRegion.
The property reports the actual state of the bottom edge. It can have the following values:
Note: Once Commited status is set, no further draging is possible on the content.
Signal Documentation
Signal emitted when the content collapse is completed.
Signal emitted when the content collapse is started.
Signal emitted when the content commit is completed.
Signal emitted when the content commit is started.
Method Documentation
The function forces the bottom edge content to be hidden. Emits collapseStarted and collapseCompleted signals to notify the start and the completion of the collapse operation.
The function forces the bottom edge content to be fully exposed. Emits commitStarted and commitCompleted signals to notify the start and the completion of the commit operation. It is safe to call commit() multiple times. | https://phone.docs.ubuntu.com/en/apps/api-qml-development/Ubuntu.Components.BottomEdge | 2021-01-15T18:03:45 | CC-MAIN-2021-04 | 1610703495936.3 | [] | phone.docs.ubuntu.com |
Note
This is a community-driven FAQ for RPKI, originally written by Alex Band, Job Snijders, David Monosov and Melchior Aelmans. Network operators around the world have contributed to these questions and answers. The contents are available on Github, allowing you to send a pull request with edits or additions, or fork the contents for usage elsewhere.
RPKI Mechanism¶
What is RPKI and why was it developed?¶
The global routing system of the Internet consists of a number of functionally independent actors (Autonomous Systems) which use BGP (Border Gateway Protocol) to exchange routing information. The system is very dynamic and flexible by design. Connectivity and routing topologies are subject to change. Changes easily propagate globally within a few minutes. One weakness of this system is that these changes cannot be validated against information existing outside of the BGP protocol itself.
RPKI is a way to define data in an out-of-band system such that the information exchanged by BGP can be validated to be correct. The RPKI standards were developed by the IETF (Internet Engineering Task Force) to describe some of the resources of the Internet's routing and addressing scheme in a cryptographic system. This information is public, and anyone can access it to validate its integrity using cryptographic methods.
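To make the idea of verifiable out-of-band data more concrete, the central RPKI object for routing is the Route Origin Authorisation (ROA): a signed statement binding a prefix and a maximum prefix length to the AS number authorised to originate it. The sketch below is a deliberately simplified model for illustration only; real ROAs are cryptographically signed objects published in RPKI repositories, and the prefix and AS number shown are documentation values:

from dataclasses import dataclass

@dataclass(frozen=True)
class ROA:
    """Simplified view of a Route Origin Authorisation.

    Only the routing-relevant fields are modelled here; a real ROA is a
    signed object published in the RPKI repository system.
    """
    prefix: str       # e.g. "192.0.2.0/24"
    max_length: int   # longest prefix length the AS may announce
    origin_asn: int   # AS authorised to originate the prefix

# The holder of 192.0.2.0/24 authorises AS 64511 to originate it,
# allowing announcements up to a /25.
example_roa = ROA(prefix="192.0.2.0/24", max_length=25, origin_asn=64511)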
I thought we were all using the IRR to check route origin, why do we need RPKI now?¶
If you’ve been involved in default-free zone Internet engineering for any length of time, you’re probably familiar with RPSL, a routing policy specification language originally defined in RFC 2280 back in 1998. While RPSL created considerable early enthusiasm and saw some traction, the Internet was rapidly growing at the time, and the primary focus was on data availability rather than data trustworthiness. Everyone was busy opportunistically documenting the minimal policy that was necessary to "make things work" with everyone else's policy specification language parsing scripts so that something would finally ping!
Over time, this has created an extensive repository of obsolete data of uncertain validity spread across dozens of route registries around the world. Additionally, the RPSL language and supporting tools have proven to be too complex to consistently transpose policy into router configuration language - resulting in most published RPSL data being neither sufficiently accurate and up to date for filtering purposes, nor sufficiently comprehensive or precise for being the golden master in router configuration.
RPKI aims to complement and expand upon this effort focusing primarily on trustworthiness, timeliness, and accuracy of data. RPKI ROAs are hierarchically delegated by RIRs based on strict criteria, and are cryptographically verifiable. This offers the Internet community an opportunity to build an up to date and accurate information of IP address origination data on the Internet.
Why are we investing in RPKI, isn’t it easier to just fix the Internet Routing Registry (IRR) system?¶ two problems, as you can be absolutely sure that an authoritative, cryptographically verifiable statement can be made by any legitimate IP resource holder in the world.
Is it true that BGP4 is just not up to the task any longer?¶
Unfortunately it’s practically impossible to replace BGP right now. We should, however, work on fixing the broken parts and improving the situation.
What is the value of RPKI based BGP Origin Validation without Path Validation?¶
While Path Validation is a desirable characteristic, the existing RPKI origin validation functionality addresses a large portion of the problem surface.
Existing operational and economic incentives ensure that the most important prefixes for each network are seen via the shortest AS path possible. One such example are network operators setting a higher local preference for prefixes learned via an Internet exchange or private peers (“peerlock”). This reduces the risk that an invalid route could win the BGP route selection process even if it originates from an impersonated but correct origin AS.
For transit providers, direct interconnections and short AS paths are a defining characteristic, positioning them ideally to act on RPKI data and accept only valid routes for redistribution.
Furthermore, operational experience suggests that the vast majority of route hijacks are unintentional rather than malicious, and are caused by ‘fat-fingering’, where an operator accidentally originates a prefix they are not the holder of. Origin Validation would mitigate many of these problems.
While a malicious party willing to intentionally impersonate the origin AS could still take advantage of the lack of Path Validation in some circumstances, widespread RPKI Origin Validation implementation would make such instances easier to pinpoint and address.
When comparing the ROA data set to the announcements my router sees, what are possible outcomes?¶
In short, routes can have the state Valid, Invalid, or NotFound (a.k.a. Unknown).
- Valid: The route announcement is covered by at least one ROA
- Invalid: The prefix is announced from an unauthorised AS or the announcement is more specific than is allowed by the maximum length set in a ROA that matches the prefix and AS
- NotFound: The prefix in this announcement is not covered (or only partially covered) by an existing ROA
To understand how more specifics, less specifics and partial overlaps are treated, please refer to section 2 of RFC 6811.
I’ve heard the term “route leak” and “route hijack”. What’s the difference?¶
A route leak is a propagation of one or more routing announcements that are beyond their intended scope. That is an announcement from an Autonomous System (AS) of a learned BGP route to another AS is in violation of the intended policies of the receiver, the sender, and/or one of the ASes along the preceding AS path.
A route hijack is the unauthorised origination of a route.
Note that in either case, the cause may be accidental or malicious and in either case, the result can be path detours, redirection, or denial of services. For more information, please refer to RFC 7908.
If a ROA is cryptographically invalid, will it make my route invalid?¶
An invalid ROA means that the object did not pass cryptographic validation and is therefore discarded. The statement about routing that was made within the ROA is simply not taken into consideration. An invalid route on the other hand, is the result of a valid ROA, specifically one that had the outcome that a prefix is announced from an unauthorised AS or the announcement is more specific than is allowed by the maximum length set in a ROA that matches the prefix and AS.
Operations and Impact¶
Will my router have a problem with all of this cryptographic validation?¶
No, routers do not do any cryptographic operations to perform Route Origin Validation. The signatures are checked by external software, called Relying Party software or RPKI Validator, which feeds the processed data to the router over a light-weight protocol. This architecture causes minimal overhead for routers.
Does RPKI reduce the BGP convergence speed of my routers?¶
No, filtering based on an RPKI validated cache has a negligible influence on convergence speed. RPKI validation happens in parallel with route learning (for new prefixes which aren’t yet in cache), and those prefixes will be marked as valid, invalid, or notfound (and the correct policy applied) as the information becomes available.
Why do I need rsync on my system to use a validator?¶
In the original standards, rsync was defined as the main means of distribution of RPKI data. While it has served the system well in the early years, rsync has several downsides:
- When RPKI relying party software is used on a client system, it has a dependency on rsync. Different versions and different supported options, such as
--contimeout, cause unpredictable results. Furthermore, calling rsync is inefficient. It’s an additional process and the output can only be verified by scanning the disk.
- Scaling becomes more and more problematic as the global RPKI data set grows and more operators download and validate data, as with rsync the server in involved in processing the differences.
To overcome these limitations the RRDP protocol was developed and standardised in RFC 8182, which relies on HTTPS. RRDP was specifically designed for scaling and allows CDNs to participate in serving the RPKI data set globally, at scale. In addition, HTTPS is well supported in programming languages so development of relying party software becomes easier and more robust.
Currently, RRDP is implemented on the server side by the ARIN, RIPE NCC and APNIC. Most RPKI Validator implementations either already have RRDP support, or have it on the short term roadmap.
The five RIRs provide a Hosted RPKI system, so why would I want to run a Delegated RPKI system myself instead?¶
The RPKI system was designed to be a distributed system, allowing each organisation to run their own CA and publish the certificate and ROAs themselves. The hosted RIR systems are in place to offer a low entry barrier into the system, allowing operators to gain operational experience before deciding if they want to run their own CA.
For many operators, the hosted system will be good enough, also in the long term. However, organisations who for example don’t want to be dependent on a web interface for management, who manage address space across multiple RIR regions, or have BGP automation in place that they would like to integrate with ROA management, can all choose to run a CA on their own systems.
Should I run a validator myself, when I can use an external data source I found on the Internet?¶
The value of signing the authoritative statements about routing intent by the resource holder comes from being able to validate that the data is authentic and has not been tampered with in any way.
When you outsource the validation to a third party, you lose the certainty of data accuracy and authenticity. Conceptually, this is similar to DNSSEC validation, which is best done by a local trusted resolver.
Section 3 of RFC 7115 has an extensive section on this specific topic.
How often should I fetch new data from the RPKI repositories?¶
According to section 3 of RFC 7115 you should fetch new data at least every 4 to 6 hours. At the moment, the publication of new ROAs in the largest repositories takes about 10-15 minutes. This means fetching every 15-30 minutes is reasonable, without putting unnecessary load on the system.
What if the Validator I use crashes and my router stops getting a feed. What will happen to the prefixes I learn over BGP?¶
All routers that support Route Origin Validation allow you to specify multiple Validators for redundancy. It is recommended that you run multiple instances, preferably from independent publishers and on separate subnets. This way you rely on multiple caches.
In case of a complete failure, all routes will fall back to the NotFound state, as if Origin Validation were never used.
I don’t want to rely on the RPKI data set in all cases, but I want to have my own preferences for some routes. What can I do?¶
You can always apply your own, local overrides on specific prefixes/announcements and override the RPKI data you fetch from the repositories. Specifying overrides is in fact standardised in RFC 8416, “Simplified Local Internet Number Resource Management with the RPKI (SLURM)”.
Is there any point in signing my routes with ROAs if I don’t validate and filter myself?¶
Yes, signing your routes is always a good idea. Even if you don’t validate yourself someone else will, or in worst case someone else might try to hijack your prefix. Imagine what could happen if you haven’t signed your prefixes…
Miscellaneous¶
Why isn’t the ARIN RPKI TAL like other public key files?¶
Unlike the other RIRs, which distribute their TAL publicly, ARIN has a policy requiring users to explicitly agree to terms and conditions concerning its TAL. Note that this policy is not without controversy as discussed here and here on the NANOG list.
Job Snijders made a video explaining his perspective on the ARIN TAL. Christopher Yoo and David Wishnick authored a paper titled Lowering Legal Barriers to RPKI Adoption.
Ben Cox performed various RPKI measurements and concluded that the ARIN TAL is used far less than TALs from their RIR counter parts. This has led to a situation where ROAs created under the ARIN TAL offer less protection against BGP incidents than other RIRs. State of RPKI: Q4 2018.
What is the global adoption and data quality of RPKI like?¶
There are several initiatives that measure the adoption and data quality of RPKI:
- RPKI Analytics, by NLnet Labs
- Global certificate and ROA statistics, by RIPE NCC
- Cirrus Certificate Transparency Log, by Cloudflare
- The RPKI Observatory, by nusenu
- RPKI Deployment Monitor, by NIST
I want to use the RPKI services from a specific RIR that I’m not currently a member of. Can I transfer my resources?¶
The RPKI services that each RIR offers differ in conditions, terms of service, availability and usability. Most RIRs have a transfer policy that allow their members to transfer their resources from one RIR region to another. Organisations may wish to do this so that they bring all resources under one entity, simplifying management. Others may do this because they are are looking for a specific set of terms with regards to the holdership of their resources. Please check with your RIR for the possibilities and conditions for resource transfers.
Will RPKI be used as a censorship mechanism allowing governments to make arbitrary prefixes unroutable on a whim?¶
Unlikely. In order to suppress a prefix, it would be necessary to both revoke the existing ROA (if one is present) and publish a conflicting ROA with a different origin.
These characteristics make using RPKI as a mechanism for censorship a rather convoluted and uncertain way of achieving this goal, and has broad visibility (as the conflicting ROA, as well as the Regional Internet Registry under which it was issued, will be immediately accessible to everyone). A government would be much better off walking into the data center and confiscate your equipment.
What are the long-term plans for RPKI?¶
With RPKI Route Origin Validation being deployed in more and more places, there are several efforts to build upon this to offer out-of-band Path Validation. Autonomous System Provider Authorisation (ASPA) currently has the most traction in the IETF, defined in these drafts: draft-azimov-sidrops-aspa-profile and draft-azimov-sidrops-aspa-verification. | https://rpki.readthedocs.io/en/latest/about/faq.html | 2021-01-15T16:47:06 | CC-MAIN-2021-04 | 1610703495936.3 | [] | rpki.readthedocs.io |
@groovy.transform.CompileStatic abstract class Plugin extends java.lang.Object
Super class for plugins to implement. Plugin implementations should define the various plugin hooks (doWithSpring, doWithApplicationContext, doWithDynamicMethods etc.)
List of ArtefactHandler instances provided by this plugin
Whether the plugin is enabled
The current Grails Environment
The GrailsApplication instance
The GrailsPlugin definition for this plugin
The GrailsPluginManager instance
Allows a plugin to define beans at runtime. Used primarily for reloading in development mode
beanDefinitions- The bean definitions
Invokes once the org.springframework.context.ApplicationContext has been refreshed and after {#doWithDynamicMethods()} is invoked. Subclasses should override
Invoked in a phase where plugins can add dynamic methods. Subclasses should override
Sub classes should override to provide implementations
The GrailsPluginManager instance
Invoked when a object this plugin is watching changes
event- The event
Invoked when the application configuration changes
event- The event
Invoked when the org.springframework.context.ApplicationContext is closed
event- The event | http://docs.grails.org/3.2.3/api/grails/plugins/Plugin.html | 2021-01-15T18:11:39 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.grails.org |
This VLINGO/LATTICE component provides an API for distributed computations and data processing across multiple nodes in a cluster. It supports distributed parallel processing by sending computational execution requests to actors on any node in a VLINGO/CLUSTER with the potential to receive results in return, if the protocol specifies a completable future outcome. This as well as elastic resiliency are embodied in the VLINGO/LATTICE Grid.
The VLINGO/LATTICE
Grid is a distributed compute construct that is implemented as a VLINGO/ACTORS
Stage. The
Grid API is the same as you would expect from any
Stage with one exception: the
Actor implementations that should be started on a
Grid must have a grid-compatible
Address. There are no other requirements, such as implementing a special interface or extending an abstract base class.
To start the
Grid use one of the static
Grid.start(...) methods. You may then start any actor that has a grid-compatible
Address as you would normally do on a
Stage.
final Grid grid = Grid.start("product-service", "product-grid");final Product product = grid.actorFor(Product.class, ProductEntity.class);
In the above example the
ProductEntity actor instance is assigned a grid-compatible
Address, and is therefore reachable by any message sender on the
Grid.
All VLINGO/LATTICE base model types are compatible with the grid without any changes. The following are the types.
// model entitiesimport io.vlingo.lattice.model.sourcing.EventSourced;import io.vlingo.lattice.model.sourcing.CommandSourced;import io.vlingo.lattice.model.object.ObjectEntity;import io.vlingo.lattice.model.stateful.StatefulEntity;// model processes (a.k.a. sagas)import io.vlingo.lattice.model.process.ObjectProcess;import io.vlingo.lattice.model.process.SourcedProcess;import io.vlingo.lattice.model.process.StatefulProcess;
Use all of these base types as you would if not using the grid.
Each node in the grid has a representative abstraction name
GridNode. Each
GridNode in the cluster has a copy of a hash ring data structure. Without going into details on how a hash ring works, it is used to determine where in the cluster a given actor is to be distributed and subsequently found within the grid. An actor's node location is determined by where the hash of its
Address distributes on the hash ring data structure. For example, if there are three nodes in the cluster, each actor
Address must hash into one of the three nodes. This means that a given actor purposely has a single instance in a single grid node in the cluster; that is, that one actor instance is pinned to a given node.
When a message is sent to a grid-compatible actor, the grid node of the message sender looks up in the hash ring for the node on which that actor is located. The message is then serialized and sent over the network to that node. The receiving grid node searches up the actor locally and delivers the message through the actor's mailbox.
There is, of course, an optimization if the message is sent to an actor that is on the same node as the sender. Such a condition delivers the message directly through the target actor's mailbox without any network overhead.
There are several grid operational scenarios to be aware of. Although service and application developers will not need to do anything special when creating actors in the grid and sending messages from many client actors to many target actors, it is useful to understand the common scenarios.
Some of the support for the following scenarios is strictly handled by the cluster. In such cases the local grid node is involved only to update its knowledge of the current cluster conditions. Such include updating its hash ring and providing the service/application with a status that indicates that this node is considered healthy and operational.
An actor on a given grid node sends a message to an actor on a grid node somewhere in the cluster. The sender has a protocol reference to the target actor and sends a message via a protocol method invocation. The target actor's grid-compatible
Address is used to look up the grid node on which it is pinned. The message is serialized, sent across a network socket channel to the node of the target actor, deserialized, and delivered to the actor via its mailbox.
It is possible/likely that some sender actors will be on the same node as the target actor. In that case, the message send is optimized and delivered directly to the actor's mailbox.
A node that experiences a network partition will fail to receive continuous health updates from other nodes that can no longer see it. The partitioned node will be considered lost from the other nodes that can't see it, and that node will in short time understand that it has lost the cluster quorum with the other cluster nodes (as per the VLINGO/CLUSTER specification). This partitioned node then enters into an idle state. When the node understands that it is in an idle state it will provide this information to the clients running in the local node.
In some network partition cases any N nodes out of M total may see each other, and yet there must be P nodes excluded from the cluster. In the case of a three-node cluster, N=2, M=3, and P=1.
One of the N nodes will be elected leader of the quorum. Any node in the set of P that is not seen by the leader will be prevented from entering the cluster. Should any partitioned node within the set of P claim to be leader among the N and/or P nodes that it can see (it has the highest value node id among any other nodes it can see), the nodes that receive its declaration of leadership will reject that claim. In turn any of the rejecting nodes will inform the failed leadership-declaring node (within P) of the real leader (within N) that it recognizes. This is strictly handled by the cluster and involves the grid only to provide information that it is not in the cluster quorum.
A node that is downed within the cluster has left it, which will be communicated to all remaining nodes.
The actors that were distributed to the node that left must continue to operate within the grid on the nodes still in the cluster quorum. Thus, the actors that would otherwise be lost can be recovered onto a different node. Consider the following types of recovery.
Actors with ephemeral or transient state, and model actors that have persistent state, will be recovered on a different node according to the adjusted hash ring. This will not actually occur until a message is sent to any such actor.
Actors that have non-transient state but that are not persistent model actor types cannot be fully recovered on other nodes with the state they previously held on the downed node. One way to ensure that the actor state can be maintained is to place in a VLINGO/LATTICE Space. A Space is a distributed, in-memory, key-value store. As long as the cluster nodes remain running on which a given Space key-value is persistent in memory, any actor using that Space key-value can restore from that state as needed.
A node that newly joins a cluster will cause the hash ring to adjust to that node, meaning that the actors that were on any preexisting nodes may be repartitioned on to one or more other nodes. In other words, a node joining the cluster changes the hashing results because there are now more nodes than previously available.
This repartitioning could be quite expensive if the actors were to be immediately relocated. Instead what happens is the same that occurs when a cluster node leaves. The recovery onto a different node will occur only when a message is sent to any such relocated actor.
The implementation of the typical scenario is: when the cluster node updates its hash ring to include the newly joined node, it scans its internal directory for actors whose Address indicates that they no longer belong, and evicts those actors. The actors evicted from that node will not be repartitioned until receiving their next message. Any messages for the evicted actors that are received latently are relayed to the node that now contains the given actors.
There are a few challenges to this typical scenario.
An evicted actor with messages still in its mailbox must be given the opportunity to process those messages. Thus, before evicting such an actor, its remaining mailbox messages must be sent to the grid node that is now responsible for it.
It is possible that when multiple nodes are joining within a short time frame, or one or more nodes leave the cluster near the same time as one or more joining, there may be some shuffling of actors until the repartitioning settles. The best that a given evicting node can do is relay received messages to actors that it is no longer responsible for to the node it currently understands to be the one hosting such actors. This will cause some additional latency.
There is no way to avoid these challenges, because there is no way for a given node to predict that something about the cluster will change just following its update of its own hash ring.
In distributed computing, there is no now.
Yet, these challenges do emphasize that the eviction process must be asynchronous to the ongoing receipt of cluster health and actor targeted messages. That enables the node to immediately update its hash ring to its current understanding of possible ongoing cluster changes. As a result, the asynchronous eviction process has the opportunity to check for a possible refreshed hash ring as it decides whether or not to evict a given actor, and if evicted, which node should receive it's messages—at least as of that very instant in time. | https://docs.vlingo.io/vlingo-lattice/grid | 2021-01-15T18:33:46 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.vlingo.io |
Creating Android modules¶
Introduction")
Resources¶
Future through the developer mailing list. | https://godot-es-docs.readthedocs.io/en/latest/development/cpp/creating_android_modules.html | 2020-11-24T00:47:26 | CC-MAIN-2020-50 | 1606141169606.2 | [] | godot-es-docs.readthedocs.io |
Selecting actions for an account
You can select an account to work with by clicking anywhere in the row that contains the account name to display the account details or by clicking the check box for a row. Selecting an account displays the Actions menu from which you can select the action you want to perform.
The actions available depend on the type of account you have selected and the permissions you have been granted. For example, you might see some or all of the following actions on the Actions menu:
- Login to log on to the target system remotely using the selected account and stored password.
- Checkout to check out the password for the selected account.
- Extend to extend the check out time period.
- Checkin to check in the password for the selected account.
- Manage accounts to convert unmanaged accounts into managed accounts. If you are converting a single account and it doesn't have a password configured, you are prompted to enter a password for the selected account. If you select multiple accounts, Privileged Access Service sends an email notifying you of the success/failure of the multiple account management task. The Managed column on the Accounts page indicates the managed accounts. To convert all accounts associated with a set to managed, see Managing accounts in a set
- Update Password to update the password stored in the Privileged Access Service for an account.
- Rotate Password(s) to change the password stored in the Privileged Access Service for managed account(s) immediately without waiting for the rotation period to expire. This action is only available if the selected accounts are accounts with a managed password. If you select multiple accounts, Privileged Access Service sends an email notifying you of the success/failure of the multiple account rotate password task. To rotate all account passwords associated with a set, see Rotating passwords in a set on demand.
- Set as Admin Account to identify the selected account as a local administrative account.
- Clear as Admin Account to remove the selected account as a local administrative account.
- Add to Set to add the selected account to a new or existing set.
- Verify Credential to make sure the selected account credential (password or SSH key) in Privileged Access Service is in sync with the domain controller credential. Credential verification is performed when Verify Credential is selected from the Actions menu, when the account credential is changed, when an account or administrative account is added, and when resolving an account.The Last Verify Result column on the Accounts page displays nothing if the last credential verification for the account was successful. If the credential verification for the account failed for any reason the column displays Failed, Missing Credential, or Unknown.The Last Verify column on the Accounts page displays the date and time of the last credential verification.
- Show Offline Passcode to get a code to log into an offline system.
- Delete to remove the selected account from the Privileged Access Service.
- Unlock Account to manually unlock the selected account. This option is only visible if the user has the Unlock Account permission, and if the domain the account belongs to has an administrative account defined and has the Enable manual account unlock using administrative account policy enabled. To unlock an account, the user selecting the account must have the Unlock Account permission in the domain the account belongs to. For more information, see Assigning permissions and Enable manual account unlock using administrative account.
- Launch <desktop application name> to launch the specified application. If you have desktop applications configured and the system is referenced in the Command Line field, then you will see the launch action for that application. See Adding Desktop Apps using the Admin Portal for information on configuring desktop applications.
If an account is configured to require the approval of a designated user or role, you might see the Request Login or Request Checkout actions. Selecting Request Login or Request Checkout sends an email request to the designated user or to the members of a designated role for approval. If your request is approved, you have limited period of time to take the action you requested.
The steps for checking out, checking in, or updating a password are the same whether you start from a system, domain, database, service, or account. Only the navigation to where you find the accounts listed and the specific tasks you see listed on an Actions menu vary based on where you are.
For more information about performing the account‑related tasks, see the following topics:
For more information about configuring, requesting, and responding to access requests, see Managing access requests. | https://docs.centrify.com/Content/Infrastructure/resources-access/svr-mgr-select-account.htm | 2020-11-24T01:40:54 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.centrify.com |
Configure SMTP for outbound emails
In order to configure outbound email, follow the steps below:
Browse to the Settings -> General settings -> Configure outgoing email server -> Create menu.
Fill the required information. The settings below configure Odoo to send emails through a Gmail account. Replace USERNAME and PASSWORD with your Gmail account username and password respectively.
Description: smtp.gmail.com #### Connection Information SMTP Server: smtp.gmail.com SMTP Port: 587 #### Security and Authentication Connection Security: TLS Username: [email protected] Password: PASSWORD
Save the changes.. | https://docs.bitnami.com/aws/apps/odoo/configuration/configure-smtp/ | 2020-11-24T00:53:20 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.bitnami.com |
Keep the Survey add-on > Configure Fact Sheet > Advanced
Both ways bring up the same table:
In this table, you have two dropdowns for each Fact Sheet section:
- Weight: The weight controls whether this section is counted in the completeness calculation. Set it to "1" to include the position into the calculation, set it to "0" to leave it out of the calculation. If you feel that a particular section is of higher importance, you can also use higher numbers for your most important sections.
- Status: It is good practice to deactivate any section which is not in use in order to make life easier for your stakeholders. Those sections are still accessible, e.g. via API, and any data already stored will of course remain present.
Quality Seal
The quality seal is a great mechanism to combine collaborative editing with clear responsibilities and a rigid governance process. The main concept is very simple:
- As a Fact Sheet responsible or accountable, it is your task to approve the quality of a Fact Sheet (see below).
-.
Information:
The Quality Seal is not available in all LeanIX editions. Contact your Administrator or get in touch in case you are unsure whether the feature is available to you.
How to configure the Quality Seal
As an administrator, go to the Fact Sheet Definition page.
For each Fact Sheet, you can choose between the following options:
- Disable the Quality Seal
- Enable it, i.e. the Seal breaks in case of any edit of non-Responsibles, but no time period for re-approval is configured.
- Enable it and set a time period for re-approval
How to set the Quality Seal
For each Fact Sheet where the Quality Seal is enabled, a little button on the top right shows the state (Check needed or not) and allows you to approve the Quality. the Survey add-on to collect and maintain data
Maintaining high data quality is a great use case for our Survey Add-On as well.
Updated 10 months ago | https://docs.leanix.net/docs/increase-your-data-quality | 2020-11-24T00:36:21 | CC-MAIN-2020-50 | 1606141169606.2 | [array(['https://files.readme.io/8459131-DataQuality1.png',
'DataQuality1.png'], dtype=object)
array(['https://files.readme.io/8459131-DataQuality1.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/7413357-DataQuality2.png',
'DataQuality2.png'], dtype=object)
array(['https://files.readme.io/7413357-DataQuality2.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/ade784b-DataQuality3.png',
'DataQuality3.png'], dtype=object)
array(['https://files.readme.io/ade784b-DataQuality3.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/1e8c504-DataQuality2.png',
'DataQuality2.png'], dtype=object)
array(['https://files.readme.io/1e8c504-DataQuality2.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/b0120bf-DataQuality4.png',
'DataQuality4.png'], dtype=object)
array(['https://files.readme.io/b0120bf-DataQuality4.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/ad6587e-DataQuality5.png',
'DataQuality5.png'], dtype=object)
array(['https://files.readme.io/ad6587e-DataQuality5.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/26713f7-DataQuality6.png',
'DataQuality6.png'], dtype=object)
array(['https://files.readme.io/26713f7-DataQuality6.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/b7ad740-DataQuality7.png',
'DataQuality7.png'], dtype=object)
array(['https://files.readme.io/b7ad740-DataQuality7.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/ae6ed75-DataQuality8.png',
'DataQuality8.png'], dtype=object)
array(['https://files.readme.io/ae6ed75-DataQuality8.png',
'Click to close...'], dtype=object) ] | docs.leanix.net |
View the available log files that uses SDK functionality.
Prerequisites
Use the View Logs feature from the actions menu to quickly access available log files pertaining to applications that use SDK functionality.
Procedure
- Navigate to Apps & Books >Applications > Native and select the Internal tab.
- Select the application.
- Select the View Logs option from the actions menu. | https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/services/Web_Admin_Guide/GUID-3952A756-41B2-4B20-B449-DF96A51DD341.html | 2020-11-24T01:53:48 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.vmware.com |
Analyze MEF Assemblies from the Command Line. | https://docs.microsoft.com/en-us/archive/blogs/nblumhardt/analyze-mef-assemblies-from-the-command-line | 2020-11-24T01:09:36 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.microsoft.com |
Zoe Analytics - Container-based Analytics as a Service¶
Zoe Analytics provides a simple way to provision data analytics applications. It hides the complexities of managing resources, configuring and deploying complex distributed applications on private clouds. Zoe is focused on data analysis applications, such as Spark or Tensorflow. A generic, very flexible application description format lets you easily describe any kind of data analysis application.
Downloading¶
Get Zoe from the GitHub repository. Stable releases are tagged on the master branch and can be downloaded from the releases page.
Zoe is written in Python 3.4+ and requires a number of third-party packages to function. Deployment scripts for the supported back-ends, install and setup instructions are available in the installation guide.
Quick tutorial¶
To use the Zoe command-line interface, first of all you have to create a configuration file called
~/.zoerc containing your login information:
url= # address of the zoe-api instance user=joe # User name pass=joesecret # Password
Now you can check if you are up and running with this command:
./zoe.py info
It will return some version information, by querying the zoe-api and zoe-master processes.
Zoe applications are passed as JSON files. A few sample ZApps are available in the
contrib/zapp-shop-sample/ directory. To start a ZApp use the following command:
./zoe.py start joe-spark-notebook contrib/zapp-shop-sample/jupyter-r/r-notebook.json
ZApp execution status can be checked this way:
./zoe.py exec-ls # Lists all executions, past and present ./zoe.py exec-get <execution id> # Inspects an execution
Where
execution id is the ID of the ZApp execution to inspect, taken from the
exec-ls command.
Or you can just connect to the web interface (port 5001 by default).
Where to go from here¶
Main documentation¶
Zoe applications¶
Development and contributing to the project¶
External resources¶
Zoe is licensed under the terms of the Apache 2.0 license. | http://docs.zoe-analytics.eu/en/latest/ | 2020-11-24T00:20:03 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.zoe-analytics.eu |
Modifying member permissions for a set
You can modify the permissions for the members of a set to control what other users can do on the databases in the set. For example, you can assign member permissions to enable other users to view, edit, or delete the members of a set or to manage sessions on any member of the set. Member permissions are the same as the permissions you can assign to individual databases or globally for all databases. You can only assign member-level permissions on manual databases sets, however.
For more information about the permissions you can assign to databases, see Setting database-specific permissions.
To assign member-level permissions:
- In the Admin Portal, click Resources, then click Databases to display the list of databases.
- In the Sets section, right-click a set name, then click Modify.
- Click Member Permissions.
- Click Add to search for and select the users, groups, or roles to which you want to grant set‑specific permissions, then click Add.
- Select the appropriate permissions for each user, group, or role you have added.
- Click Save. | https://docs.centrify.com/Content/Infrastructure/resources-manage/svr-mgr-database-set-member-perms.htm | 2020-11-24T01:01:28 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.centrify.com |
Searching¶
2D and 3D coordinates¶
By default, compounds are returned with 2D coordinates. Use the
record_type keyword argument to specify otherwise:
pcp.get_compounds('Aspirin', 'name', record_type='3d')
Advanced search types¶
By default, requests look for an exact match with the input. Alternatively, you can specify substructure,
superstructure, similarity and identity searches using the
searchtype keyword argument:
pcp in the
PUG REST Specification.
Note: These types of search are slow.
Getting a full results list for common compound names¶
For some very common names, PubChem maintains a filtered whitelist of human-chosen CIDs with the intention of reducing confusion about which is the ‘right’ result. In the past, a search for Glucose would return four different results, each with different stereochemistry information. But now, a single result is returned, which has been chosen as ‘correct’ by the PubChem team.
Unfortunately it isn’t directly possible to return to the previous behaviour, but there is a straightforward workaround: Search for Substances with that name (which are completely unfiltered) and then get the compounds that are derived from those substances.
There area a few different ways you can do this using PubChemPy, but the easiest is probably using the
get_cids
function:
>>> pcp.get_cids('2-nonenal', 'name', 'substance', list_return='flat') [17166, 5283335, 5354833]
This searches the substance database for ‘2-nonenal’, and gets the CID for the compound associated with each substance.
By default, this returns a mapping between each SID and CID, but the
list_return='flat' parameter flattens this into
just a single list of unique CIDs.
You can then use
Compound.from_cid to get the full Compound record, equivalent to what is returned by get_compounds:
>>> cids = pcp.get_cids('2-nonenal', 'name', 'substance', list_return='flat') >>> [pcp.Compound.from_cid(cid) for cid in cids] [Compound(17166), Compound(5283335), Compound(5354833)] | https://pubchempy.readthedocs.io/en/latest/guide/searching.html | 2020-11-24T01:15:51 | CC-MAIN-2020-50 | 1606141169606.2 | [] | pubchempy.readthedocs.io |
Grade Processing- January/June
At the end of each semester grades and transcripts are processed.
Semester Grades Posted (District IT) – Confirm that grades are posted and there are no issues. District office Human Resources department provides a grading calendar each school year. Check individual student historical grades in PowerSchool.
Below are the state requirements for each grade type that are described in detail throughout this document –
ISBE required terms for all transcript records
*** Effective 9/24/2018 outside course assignment submission to ISBE will no longer be required…Per ISBE SIS team:
Outside Course Assignment data is not being used by the State Board of Education in reporting and is an Optional collection. Submitting this data is NOT required.
When entering outside grades in Powerschool check IL box –
==>OCA- Outside Course Assignment Report – Start date = 8/1/xx of the school year that matches your OCA entries , End date = TODAY
===>CCA- College Course Assignments (see calendar (June), item 3 Course assignments for a more detailed description)
After semester grades are posted enter the college name in stored grades for any course assignment that is dual
enrollment. District IT can help identify these courses/students during the grading process.
Extended courses – Extended courses receive 1 credit each semester for a double period class. The second period course must be uploaded by the IT department and the end of each semester grade process in order for it to receive elective credit. Prior to the end of the semester create a database for elective extended classes that need to be uploaded to Powerschool.
- Powerschool/school/sections – sort by course number.
- Choose the enrollment heading/# of students in the section
- Make the current selection
List students
Add output fields for list- student_number,lastfirst. click submit
Highlight the list of students and copy/paste to Excel
Repeat for each course section/teacher
Excel column headings:
Student_number
Student name (remove column before sending to IT for upload)
Course name Extended English Elective/Extended Algebra Elective/Extended Geometry Elective
Credit Type EL
Excludefromclassrank 1=exclude; 0=include
excludefromgpa 1=exclude; 0=include
Excludefromhonorroll 1=exclude; 0=include
Earnedcrhrs passing grade gets .5
potentialcrhrs all get .5
gpa added value
Gpa Points same as grade
Grade grade
Gradescale Name
Grade Level
Storecode S1 or S2
Year 12(example 12-13,year =12)
Teacher insert teacher name
NOTE: You can also put in the teacher’s name in your spreadsheet. Send webhelp with excel file attached to IT powerschool admin for download. Once downloaded to Powerschool, enter teacher/section number for each student manually in the state screen in stored grades. You should have this information from previous years where we each assigned section numbers to specific teachers.
ALL Hand entered grades as of 01/2015
Due to ISBE changes in the TERM configuration you must use the obsolete terms(01,02,03) for 2013-2014 and prior.
2014-2015 and forward use the new terms S1, S2, and S3. If grades were earned outside of Glenbard (private school) all years
use the obsolete terms(01,02,03).
Off Campus students – District Office Special Ed Department to contact off campus schools early to request senior grades end of May. Must have grades for student’s to be listed in graduation program.
Create a list of off campus students. Check off as grades are received.
search il_rcdts_serving_school#
(be sure you are in Term 2)
click on “quick export” and put in
student_number,lastfirst,se_program,entrydate
click submit
Enter off campus grades with state required information in Powerschool/Stored grades, the only state field that must be entered is exclude from state reporting for the majority of students. Students who change serving schools may require submission to ISBE.
Enter senior grades first.
Incomplete List – select all students in Powerschool/special functions/search by grades and attendance, click submit. Copy/paste student list to excel.
Blank Grades – Send a webhelp to IT for the “blank grade report” to identify students that should have grades but do not. Send email to teachers asking to get you information ASAP or call them. This is the teachers responsibility to have this information in powerschool.
Technology Center of DuPage Grades – TCD grades are requested and collected by the district special education department. The special education department will send grades to each building’s data specialist and registrar. TCD grades are entered by the Data specialist into the powerschool gradebook. After grades are completed in the Gradebook and posted at the end of the semester the registrar needs to fill in “state” screen portion of stored grades screen. The only field that must be entered is exclude from state reporting for the majority of students. Students who change serving schools may require submission to ISBE.
June Graduates– Beginning of June submit webhelp ticket to IT for update all grade 12 to il_graduate=1 (include your school’s graduation date). When this is complete remove graduation date and uncheck HS Graudate for all non-graduates on the registrar screen.
Summer Graduates– submit webhelp ticket to IT for summer graduates. Enter the graduation date and check the graduated box on the registrar screen in Powerschool.
Run powerschool inquiry to verify students are marked correctly.
January – Grade_level=12;il_graduate=1
June – Grade_level=12;il_graduate=
State reporting- Graduates/Non graduates
Non-Graduates- Non graduates must be identified in Powerschool before the state EOY process end of JUNE.
This is very important and directly affects your school’s graduation rate at the state level.
Identify your non grads.
Work with your building data specialist to identify non graduates in Powerschool
/powerschool/scheduling setup -To keep a student in grade ‘12’ (non graduate) the data specialist must enter next year grade =12 and next school = current school – See below.
A quick and easy way to double check graduates credits (make sure all grades are entered)
a. Select grade level
b. Select Quick Export
c. Inquiry
student_number
lastfirst
^(*GPA method = “Earned Credit”)
Audit/Review Grades – Counselors will submit a form to you when a student is to receive an audit or review grade.
After grades post the form should be reviewed to make sure the proper grade and credit are issued. See board policy #6:280-R9.
Entering a Review Grade
A review grade happens when a student takes a course and passes and then retakes the same course to improve their grade.
Begin by comparing the grades. The first grade earned always receives the .5 credit. The second class taken always gets *REV* after the course title and the earned credit removed. The higher of the grades gets included in GPA, Honor Roll and Class Rank. The lower of the grades gets excluded from GPA, Honor Roll and Class Rank.
When the edit is complete, send request for grade update to ISBE (Webhelp to IT/state reporting).
Audit Grades
The teacher is notified by the counselor that the student is taking the class as an audit. Counselor gives the registrar a copy of the audit form. The registrar holds the audit form in a folder until grades post to historic. After grades have posted the registrar double checks to make sure that an AUD was issued by teacher and .001 credits is earned. If the teacher has issued a grade, change the grade to AUD, change the credit to .001 and exclude from honor roll, GPA and class rank.
When the edit is complete, send request for grade update to ISBE (Webhelp to IT/state reporting) if applicable.
Edgenuity (formerly Novanet courses) Credit recovery – Beginning with the 14-15 School year a list of Glenbard course codes will be provided for registrar entry in stored grades for state requirements. State course setting=Online learning.
Recalculate GPAS– all grade processing must be completed.
a. School
b. Grading
c. Class rank
d. Recalc Frequency – Hit “Recalc Now”
Run GPA List (This report will include GPAS and rank, you can edit/select all and copy/paste to excel.
Remove the rank column, not used.
a. System Reports
b. Grades and Gradebooks
1. Class ranking
2. Grade Level – Always start with Seniors
3. Weighted – Submit
4. Save to Excel for future reference
Compare new rank list to previous semester rank list. Check for major movement.
We are NOT runing a rank list any longer. Instead you will run a report asking for the cum GPA.
Seniors – graduation_year<2016
Quick Export
Student_number
Lastfirst
^(*gpa type = “cumulative”)
Honor Roll – Send a Web Help to IT to have Honor Roll populated for each grade level.
(This process must be completed before exiting Semester Graduates)
Honor Roll Report- Choose graduation year on start page, choose system
reports/Honor roll report/choose student group from the graduation year query,report
displays, copy/paste to excel, modify columns per webpage requirements (paragraph format).
Reset Glenbard Grade Levels (Grade level based on credits)
a. Extended Reports
b. Functions
c. Glenbard Grade Level Change.
d. Review list and click submit to run.
Transcripts
Make sure to Save each grade level to a PDF when report is completed.
Senior group – search on graduation_year<[current junior class]
To include ALL seniors and 5th year seniors.
Select print a report
Class of 2014 and prior – Report=Graduated Transcript With Rank
Underclassmen – select class of for each group
Select print a report
Class of 2015 and after – Graduated Class = Graduated Transcript without Rank
Underclassmen = Transcript without Rank
Click on report in the report que and save to pdf or print
Save each grade level to PDF by counselor and grade level
Semester only -Senior group – search on graduation_year<current junior class;counselorname contains(counselorname)
To include ALL seniors and 5th year seniors.
Underclassmen – select class of for each group
Counselorname contains(counselorname)
Print senior transcripts at the end of the 8th semester to be kept as the student’s permanent record.
Attendace Labels-JUNE only
Attendance labels are attached to the student transcript page 2.
1. Make sure that you have the term set to the full school year.
2. From the start page go to ‘Reportworks/System’
3. Next click on ADA/ADM by Student report
a. Mid screen – Students to Include Click ‘All Students’
b. Grades – Choose a grade level
c. Beginning and End Dates should match powerschool.
d. Click “include absent column”
e. Submit
4. Copy and paste information into excel file
5. Delete columns you will not use. These are the columns I used on my labels;
a. ID Number, Name, School Year, Grade Level, ADM, ADA and Absences.
b. Line 1, ‘ID and Name’
c. Line 2, ‘School Year and Grade Level’
e. Line 4, ‘absences’
Word Mail Merge (Attendance Labels)
1. Database with information must be in excel and each column must be marked. Close and
save on your desktop.
2. Open word document.
3. Go to Mailings
4. Start merge
a. Select labels and enter label type and hit ok.
5. Select recipients.
a. Use existing list
b. Open desktop and choose database.
6. Insert merge fields.
a. Each merge field must be inserted individually and closed in between. When
information needs to be identified it must be entered ADM:space before choosing the
information. Labels should look as follows:
<<id>> <<Name>>
<<School _yr>> Grade: <<Grade>>
ADM: <<ADM>> ADA: <<ADA>>
Abs: <<Absences>>
You may need to select all (home screen) to remove added space between lines and/or
reduce font size to fit label.
7. Once you have the label filled you must update label.
8. Finish and Merge
a. Edit individual
b. All then hit OK.
9. Print labels and save to a PDF .
Test Score Labels
This is required by ISBE to be part of a student’s permanent record.
To be done for the Senior Class – December Grads and May Grads (prior to students being moved to Graduated Students in July).
Open Naviance
Analytics
Reports
Under Student Reports ➢ Student Data ➢ Customize
Select Student under “Required”
Highest PSAT
Highest SAT EBRW
Highest SAT M
Highest Comb. SAT 1600
Highest ACT
Under “Required” you may personalize it as you wish (i.e., ID #, GPA, Weighted GPA)
Make sure you have the correct class year (grade 12)
View Report
Copy and paste to an Excel Spreadsheet. Save
Create Mail merge using Microsoft Word, 30 per page- Address Labels
Labels are placed on the same student record sheet as the attendance labels. | https://docs.glenbard.org/index.php/ps-2/admin-ps/grades/grade-processing-end-of-school-year/ | 2020-11-24T00:48:51 | CC-MAIN-2020-50 | 1606141169606.2 | [array(['https://docs.glenbard.org/wp-content/uploads/2014/08/grade-table-edit.png',
None], dtype=object)
array(['https://docs.glenbard.org/wp-content/uploads/2014/08/exclude-IL.png',
None], dtype=object)
array(['https://docs.glenbard.org/wp-content/uploads/2014/08/testscore1.png',
None], dtype=object)
array(['https://docs.glenbard.org/wp-content/uploads/2019/11/testscore2.png',
None], dtype=object)
array(['https://docs.glenbard.org/wp-content/uploads/2019/11/Picture3.png',
None], dtype=object) ] | docs.glenbard.org |
Message-ID: <948056489.139178.1606177804804.JavaMail.confluence@ip-172-31-1-120.ec2.internal> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_139177_651371226.1606177804803" ------=_Part_139177_651371226.1606177804803 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Released on: June 30, 2020
Cameo DataHub 19.0 SP4 is significantly= improved with bug fixes specifically implemented into the following areas.= Get it today on nomagic.com or contact your sales represen= tative, and don't forget to give us your feedback on Twitter= or Facebook. Also, please check the lat= est documentation and a= dditional resources.
Synchroniz= ation is no longer applied with non-selected stereotypes that the data are = copied twice to properties inherited from the parent stereotype containing = the same property name.
Information
You can check the list of publi=
cly available issues or your own reported issues fixed in Cameo DataHub 19.=
0 SP4.
Note: You will be required to login. Use the same usernam= e and password as for.
Cameo DataHub documentation
Other resources | https://docs.nomagic.com/exportword?pageId=55865926 | 2020-11-24T00:30:04 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.nomagic.com |
Introduction
The HDL application is a combined iOS App, HDL Control, available from the iTunes App Store, and a Cloud based tool, both of which are administered by DemoPad on behalf of HDL. The Cloud is where you organise payments, store automatic backup copies of all your configuration files and generate unique QR codes for each customer Graphical User Interface system. The App is used to both commission the HDL/AV system, and is used by the end user to control their system.
App Basics
A System is organised into Areas and Rooms. Rooms are contained within Areas, eg.
- Colour.
The next section will guide you through Creating a New System. | http://docs.demopad.com/basics/introduction/ | 2018-11-12T19:42:03 | CC-MAIN-2018-47 | 1542039741087.23 | [array(['/images/1-home.png', None], dtype=object)] | docs.demopad.com |
Gateway Configuration
If you need an introduction to the purpose and function of Gateways and Merchant Accounts, or need help obtaining one, please see our FAQ on Basic Payment Gateway Requirements.
The Chargify Test Gateway does not do any real authorizations or captures on credit cards. It allows you to simulate successful and unsuccessful credit card transactions.
When using the Chargify Test Gateway:
- Use the credit card number “1” to simulate a credit card in good status. All payments and authorizations will succeed.
- Use the credit card number “2” to simulate a credit card that will not accept any payments. Authorizations will succeed.
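To exercise these test numbers end to end, you can create a subscription against a Site that uses the test gateway via the API. The sketch below is illustrative only: the subdomain, API key and product handle are placeholders, and the exact payload shape should be confirmed against the Chargify API reference.

# Create a test subscription using test card "1" (always succeeds).
# YOUR_SUBDOMAIN, YOUR_API_KEY and your-product-handle are placeholders.
curl -u YOUR_API_KEY:x \
  -H "Content-Type: application/json" \
  -d '{
        "subscription": {
          "product_handle": "your-product-handle",
          "customer_attributes": {
            "first_name": "Test",
            "last_name": "Customer",
            "email": "test@example.com"
          },
          "credit_card_attributes": {
            "full_number": "1",
            "expiration_month": "12",
            "expiration_year": "2030"
          }
        }
      }' \
  https://YOUR_SUBDOMAIN.chargify.com/subscriptions.json

To exercise the failure path, send the same request with "full_number": "2", which accepts the authorization but declines payments.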
Setting or Changing your Payment Gateway
You may only change gateways if the Site has no live subscriptions. If you need to change gateways, you may either create another Site, or cancel all of the subscriptions within your Site. If you are working with a test site, you may also consider clearing your Site data for a quick way to “reset” your Site and change gateways.
To set up your gateway and credentials, start by selecting the site you want to modify.
Next, click the Settings tab.
| http://docs.chargify.com/gateway-configuration | 2013-12-05T08:07:33 | CC-MAIN-2013-48 | 1386163041955 | [select_site_normal.png, site_settings_tab_normal.png, use_gateway_normal.png, set_gateway_credentials_normal.png] | docs.chargify.com
Why can't I see all the items for sale or download in the BlackBerry App World storefront on my device?
Some items that are listed on the BlackBerry App World storefront on a computer might not be listed in the BlackBerry App World storefront on your device, because they are not available for your current device. The BlackBerry App World™ storefront is designed to display only the items that are available for your current BlackBerry® device and BlackBerry® Device Software version.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/19369/Why_cant_I_see_all_items_for_sale_1232109_11.jsp | 2013-12-05T08:09:54 | CC-MAIN-2013-48 | 1386163041955 | [] | docs.blackberry.com |
's classloading architecture
Jetty provides configuration options to control all three of these options. The method
org.mortbay.jetty.webapp.WebAppContext.setParentLoaderPriority(boolean) allows the normal java 2 behaviour to be used and all classes will be loaded from the system classpath if possible. This is very useful if the libraries that a web application uses are having problems loading classes that are both in a web application and on the system classpath.
The methods
org.mortbay.jetty.webapp.WebAppContext.setSystemClasses(String[]) and
org.mortbay.jetty.webapp.WebAppContext.setServerClasses(String[]) may be called to allow fine control over what classes can be seen or overridden by a web application.
- SystemClasses cannot be overridden by webapp context classloaders. The defaults are; {"java.","javax.servlet.","javax.xml.","org.mortbay.","org.xml.","org.w3c."}
- ServerClasses (on the container classpath) cannot be seen by webapp context classloaders but can be overridden by the webapp.. file and using that instead.
If you want to add a couple of class directories or jars to jetty, but you can't put them in $jetty.home/lib/ext/ for some reason, or you don't want to create a custom start.config file, you can simply use the System property
-Djetty.class.path on the runline instead. Here's how it would look: | http://docs.codehaus.org/pages/viewpage.action?pageId=13631590 | 2013-12-05T08:10:37 | CC-MAIN-2013-48 | 1386163041955 | [] | docs.codehaus.org |
User Guide
Local Navigation
Search This Document
Turn off automatic initialization of the random number generator
By default, each time that you connect your BlackBerry® smartphone to your computer and open the BlackBerry® Desktop Software, the certificate synchronization tool initializes the random number generator on your smartphone. If you turn off automatic initialization, your smartphone uses the same starting point each time it generates a random number.
Previous topic: Change the default security level for private keys
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/29244/Turn_off_auto_init_random_num_generator_602_1478813_11.jsp | 2013-12-05T08:09:57 | CC-MAIN-2013-48 | 1386163041955 | [] | docs.blackberry.com |
PushToTest TestMaker.
PushToTest source code is licensed under the GPL version 2 open-source license.
Details found at the PushToTest.com Web site | http://docs.pushtotest.com/javadoc/testmaker/overview-summary.html | 2008-05-16T08:50:37 | crawl-001 | crawl-001-011 | [] | docs.pushtotest.com |
Server Rules
Note
All players are required by the community and the management to read through this agreement and the rules when joining the Sunrise Roleplay Server. You are responsible for checking this rules list for any updates (Last Update: 2019-08-20 3:31 PM GMT). Rules may be updated at any time; it is your job to stay up to date with them. By joining the server, it is implied that you accept these terms and agree to follow them.
By entering any of our community servers, you agree to abide by these rules.
General Rules
- Please remain in character at all times in game. Breaking character via voice is not tolerated. (Using voice chat or tweets for out of character information is prohibited.)
- In Character means that you are acting as your character. You are taking in the in-game world through the eyes, ears, and mind of your character. You are not acting as yourself. It’s important to make this distinction.
- Out of Character refers to your responses to people as yourself rather than your character. Being out of character means that you are not role playing. Out of character communication is limited to /ooc and local chat (white text).
- No Meta-Gaming. (Using info that you have gained out of character for an In-Character benefit) Metagaming refers to when a player uses knowledge they’ve obtained through OOC means in the IC world to gain an advantage. Examples of Metagaming include:
- Seeing another player’s discord gang tag and using that IC to know someone else’s character belongs to a gang.
- Hearing a story about a role play that’s happened while in a lounge channel in discord and then making your character privy to the information gained by hearing said story.
- Asking “how many cops are on?” in /ooc before committing a crime.
- No Power Gaming. (Powergaming covers three different parts of RP: RPing things that are impossible or unrealistic, forcing your role play onto another player, or making up RP scenarios for the benefit of your character without any of the negative consequences.) Examples include:
- Using /chance or a /me to turn a losing roleplay scenario into a win, e.g. “/me breaks handcuffs.” or “/me tears through rope ties to pull a hidden weapon.”
- Using /me to do actions that make no sense in the role play, such as: “/me ties the man up.” While you are on the other side of the room.
- Forcing another character to comply with all biddings and actions without a chance to resist. /chance does not overrule this.
- Giving your character supernatural powers.
- Using knowledge of the rules to stop, avoid, or manipulate a RP.
- No FailRP (Doing actions that are impossible In Real Life to gain an advantage in roleplay unfairly.) For example:
- Escaping the police in a chase by flying your car off a mountain.
- Riding a tuned supercar up a rocky mountain road at 200mph.
- Dodging and trying to outrun bullets.
- You must value your life at all times. An average person with things to lose will nearly always value their life over their belongings, just as you likely would in real life; so too should your character when presented with the opportunity.
- Put yourself in the shoes of your character: if that gun was in your face IRL, you’d hand over your wallet without a second thought.
- If you were bleeding out in a shootout, you would beg for medical attention, not do everything you could to die in order to escape justice. Real life has no respawn, and your character does not know he/she is going to survive these wounds.
- No VDM (Vehicle DeathMatch is when you attempt to use your vehicle as a weapon against another player without reason or roleplay behind it)
- Ramming your car into another player’s vehicle without any RP interaction is VDM.
- Running over pedestrians without any RP is VDM.
- No RDM (Random DeathMatch is when you attack or kill another player without reason or roleplay behind it)
- Running up and punching a player in the face without any reason or initiation is RDM.
- Shooting at a group of people with a sniper without being initiated into the RP is RDM.
- If your character is executed during an RP scene your character may not be re-initiated into the scene after respawning. For example:
- A gang-on-gang shootout is taking place. If your character is killed during the shootout and respawns at the hospital, your character may not rejoin that role play scenario. It is not realistic as they have been ‘killed’ in that RP.
- No Fail Driving (Driving your vehicle in areas it wouldn’t survive IRL)
- Driving a vehicle not suitable for the terrain is fail driving. Ex: Driving a sports car up a mountain.
- Continuing to drive after a high speed impact is fail driving. Realistically the car and your character would be injured.
- What happens in RolePlay stays in RolePlay. (Do not bring in-character arguments or love affairs into real life; you are not your character and he/she is not you. Keep it separate.)
- No advertising anything unless given explicit permission from an Admin or above.
- All forms of communication in the game shall remain in English. This includes, but is not limited to voice chat, OOC, and Twitter.
- No internal/external mods or hacks.
- No crosshairs
- No mod menus
- No third-party programs that provide any sort of advantage over other players.
- Do not abuse bugs or use game limitations to gain an advantage.
- No Rape RP. (All ERP must be consensual from both sides; if it is not, it is explicitly prohibited.)
- No Terrorism RP. (Deliberately creating a roleplay to strike terror into many, such as stealing a plane and threatening to crash it into a high-population area for no real reason.)
- No blatant racism (In character or out of character). This includes but is not limited to:
- Hate Speech is not tolerated.
- No harmful or directed stereotyping. Be respectful.
- No Homophobia (Dislike of or prejudice/hate speech against homosexual people).
- No harmful or directed stereotyping. Be respectful.
- Hate speech is not tolerated.
- Microphones are a requirement.
- You are not allowed to trade in-game items/currency for anything IRL outside of the Members Shop on the website
- No excessive sexual harassment (if asked to stop OOC, please respect their boundaries).
- Using an invalid date of birth that doesn’t match the format YYYY-MM-DD or isn’t a realistic DoB is strictly forbidden.
- No AFK Farming in drug locations for longer than 5 minutes (coffee break).
- All jobs received at the job center must be completed in the work vehicle provided at each job’s locker room. Doing these jobs in a personal/NPC vehicle is against the rules.
- The provided work vehicles are immersive and are capable of carrying the work materials. For example, another smaller car wouldn’t realistically be able to carry the logs for the lumberjack job or the rocks from the miner job.
- In order to maintain balance everyone must complete the job center jobs in the same vehicle.
- You cannot run to a private/invite-only instance in the middle of a RolePlay scenario.
- This rule exists because, unlike IRL where that person could follow you into the property, in FiveM a private instance puts you in a protective bubble. Whether the scenario is with another civilian or a cop, it is not allowed unless both parties agree someone can enter.
- Whilst you are allowed to enter the military base, you are NOT allowed to interact with any military equipment (such as, but not limited to, fighter jets and tanks).
- All in character voice communication must be spoken in-game at all times. When using out-of-game voice communications, such as a bluetooth call using discord, you must also speak in game in at least “normal” range. It is unrealistic that people nearby wouldn’t be able to hear words spoken into a bluetooth. Examples include:
- You have been pulled over by a police officer. Your window is rolled down and the police officer is stood next to your door. You are in a bluetooth call with your buddies and would like to call them to back you up. While you speak in discord you are required to simultaneously speak in game. This can be done by holding down your in-game push to talk as you speak in discord.
- You are on a criminal meet, during the scenario you grow suspicious and wish to speak to your friend in bluetooth. While within normal range of the other party, you may not communicate privately within a bluetooth call.
- These rules also cover walkie-talkies, wireless(satellite) headsets and various other forms of wireless communication.
- You are required to have a physical item in your ear to be allowed to use Bluetooth communications, for example: earpieces or a helmet with a microphone.
- Please keep OOC chat to a minimum; no arguing in OOC chat. If you feel someone has broken server rules, please put in a /report (player ID that you are reporting) (why you are reporting them). If there is no admin available to take the report in real time, please visit the server’s website and fill out an in-game support ticket.
Green Zone Rules
- No crimes are to be committed within Green Zones. Crimes that were started outside of a Green Zone may be carried out if the scene moves into a Green Zone. For example,
- You are in a police chase and go into the public garage. The police may still pursue you as the RP started outside of the Green Zone.
- No Green Zone Baiting. Green Zone baiting is starting or provoking someone within a green zone knowing that it may result in violence, theft, or any crime. For example:
- You are in the green zone and someone starts verbally threatening your life. This is green zone baiting.
- You are standing within a green zone and tweeting out sensitive information about another character/sending threats via twitter. This is green zone baiting.
- Whilst within a designated safe zone or green zone (such as PD or the hospitals), you are not allowed to release information about a character via Twitter, using the zone as a shield to avoid consequences such as, but not limited to, violent retaliation by the victimised party. This includes both public and private release of information, but does not include a police investigation. You may also not initiate on anyone via Twitter while in a green zone.
Police Rules
- Cop baiting without a role play reason is not tolerated. Ramming police cars because you’re bored is not allowed. Luring a police officer to take them hostage for a role play scene is allowed.
- Players that are part of LSPD or BCSO may not have a second character that is a member of a gang.
- Noticed Role Players may apply to have a second character who can join an official gang.
- Players who RP a gang leader may not have a character in LSPD/BCSO.
- No stealing Emergency Service Vehicles unless you’ve been granted access by an Admin within RP.
- Police are NOT allowed to plate check, arrest, or start any form of police-on-civilian RP within the 5 public garages. If the RP began outside the public garage, this is not relevant. Police may question for information but not arrest, detain, search, cuff, taze or shoot under any circumstances. If someone else does, it is a green zone violation; call an admin.
- Cell phones cannot be used when handcuffed, or after they have been removed in RP, or if you are incapacitated. Please see #📌death-rules for more information.
EMS Rules
- Medics should not be within 200 meters of an ongoing shooting. Reviving during active combat is not allowed. No active-combat defibs either.
- Characters that are part of LSFD may not be in a gang. This is to prevent an unfair advantage based on script limitations.
- Preventing rival gangs from using defibs by going on duty during, or as a direct result of, a scenario is not allowed.
- No stealing Emergency Service Vehicles unless you’ve been granted access by an Admin within RP.
- In order to take an EMS hostage, there must be at least 3 EMS on duty, or OOC consent must be given by said EMS.
Whitelist Rules
- If you call/text any Whitelist business, you must provide a detailed message about what you need.
- To EMS: I’ve been in a car accident.
- To Car Dealer: I’m looking to buy a car. Are you open?
- While communicating with (or as) a business via phone, you may not add or write down a player’s phone number. Realistically, each business would have their own phone line but due to FiveM limitations, players are forced to use their own phones and phone numbers to reply. Taking down numbers after players have texted a business number is considered meta and powergaming. *All government institutions are exempt from this rule as they are not businesses.
- Car dealers are not to use any cars from the dealership for personal use; if found doing so, their job will be removed. (This is here to stop the unfair advantage of unlimited access to cars someone can’t afford, as well as for realism: once you put miles on a new car it is no longer saleable as new, so you’d be fired at best, sued and jailed for fraud at worst.)
- You may NOT abuse any whitelisted position in any way; whitelisted jobs are key to making a successful server. If a whitelisted job no longer functions properly, it may cause potential issues for the server and the community.
Death Rules
- When doing Hitman RP you must notify the person that you are a hitman who has been hired to end their life. This must be done before the person has been attacked/injured and does count as a form of initiation.
- Executions must be realistic. Your character must realistically be able to perform the action and have the tools to match the /me that is written (ex: to slit a throat you must have a knife.)
- If you get executed at an illegal location (drug locations, etc.), you cannot return to said location for 1 hour after respawning.
- New Life Rule: Upon ‘death’, your character forgets the events leading up to their death and after respawning, you can’t go back to your area of death for at least 15 minutes. Exceptions:
- In the case of an accidental death (forgetting to eat, aka “pulling an EMILY”, a motorcycle accident, falling from too high, drowning) your character is not required to forget the events leading up to their death and they may immediately return to the RP scene.
- If your character is executed at an illegal location (drug location, black market, etc) your character may not return to said location for 1 hour.
- If a witness survives, or someone finds and inspects your body, their character can inform your character of what they know.
Combat Rules
- The driver of all forms of vehicles may never shoot from their vehicle unless completely stationary. Passengers, however, may fire at will so long as all other RP requirements have been met (ex: initiation has happened).
- GTA makes this extremely easy to do, as the game was designed for casual NPC slaying. IRL it is very difficult to accomplish, at best extremely inaccurate, and likely to end in the driver crashing.
- Initiation Timer - When you initiate on someone, that initiation is valid for 15 minutes (ex: if you are chasing someone, lose them, and see them an hour later, you MUST reinitiate).
- If a player is being compliant during a robbery they cannot be killed unless there is dirty money, drugs or some form of an illegal transaction involved. For example:
- You come to the gym and rob someone who is working out for their cash; you cannot kill them at the end of the robbery if they’ve complied with your orders.
- You meet up with someone who is looking to purchase drugs from you. Instead, you rob them. In this scenario you may kill them.
- You cannot initiate crime on workers that are doing non-whitelist jobs. They must be in a work uniform with a work vehicle. For example,
- You cannot rob a tailor at the wool collection job location if they are wearing their work uniform and their work van is parked nearby.
- You can rob a person who is at the wool collection that is not using the correct vehicle.
- You can initiate crime on a person who is at the butcher job who is not wearing the job uniform.
- In order to restrain someone or remove their weapons in RP they must be compliant. You can’t simply do a /me takes someone’s weapon and expect it to be relevant unless they are being forced to value their life or doing it voluntarily during RP. For example,
- If you’ve asked someone to submit to a pat down, or have asked them to hand over all of their weapons, you can reasonably take the items if the other party complies.
- OOC Consent from all parties involved must be obtained to participate in graphic and excessively violent RP. This includes but is not limited to:
- Heavy torture role play.
- Detailed /me’s involving dismemberment or disfigurement.
- ERP and any RP of an overtly sexual nature.
- Consent may be revoked at any time during the scene.
- If consent is not granted the scene can fade-to-black – a sort of “we’ll say it happened” time-skip.
- Refusing consent does not grant your character immunity from combat RP. For example:
- In the case you are kidnapped and are being tortured, refusal to consent to graphic RP doesn’t mean that the RP will stop and your character will be unharmed.
- If you initially granted permission to participate in graphic RP and revoke consent mid-RP the scene doesn’t stop or restart. It merely switches to a fade-to-black or a significantly scaled down scene.
Quality of Life Rules
- You are not allowed to use public garages to collect license plates or information on other players’ vehicles. Unlike real life, every player is forced to get their cars from these locations and using this to gain an in-game advantage is powergaming.
- Stolen cell phones cannot be used to acquire information about the character in-game. This rule is in place to stop potential metagaming via discord gang tags, or information acquired out of character. You can, however, force them to either give you a single contact if you already know they have it in character or delete your own number from their phone. For example:
- While kidnapping someone, you cannot take their phone and then force them to give you the phone’s password so that you can read all their texts and get all their contacts.
- Your character is looking for information about Character A. Your character hears character X mention texting Character A. As your character now knows that Character X has Character A’s number, your character may now steal Character X’s phone to acquire Character A’s number.
- Your number has fallen into the hands of someone you don’t want to have it. If you manage to get this person’s phone you are allowed to remove your number from their phone.
- Information can be shared via mutual consent. Characters may voluntarily, while not coerced or under duress, share their texts, contacts, and any other information acquired fairly within roleplay that would be stored in their phone.
- You cannot force someone to go to an ATM and give you all their money. Whilst you could take them to an ATM in real life, you’d be unable to get all of their money as real life ATMs have a limit on the amount of cash you can withdraw in a day. This also applies to bank transfers.
- If your character ‘leaves’ for an extended period of time (2 weeks or more), you must first notify your gang or criminal friends that you are leaving. Upon returning, your character cannot immediately use their criminal knowledge to detrimentally affect storylines without first letting their gang/criminal friends know that they are back. Informing criminal ties of your return starts a 3-day grace period to allow for the facilitation of storyline in a fair way. During this 3-day grace period you must play your character somewhat regularly. This rule is in place to prevent someone from leaving on bad terms, staying away to avoid in-character consequences and then returning for one day for the sole purpose of throwing their criminal buddies under the bus. Examples include:
- Someone has an OOC argument with a friend or gang member and leaves the server. Days or weeks pass and he comes back with the purpose of getting OOC revenge.
- The RP has gotten risky for your character. You may not take a long break to avoid RP consequences and then return to disrupt the role play as a counter-play. | http://docs.sunrise-roleplay.eu/en/latest/rules/server-rules.html | 2019-12-05T16:57:32 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.sunrise-roleplay.eu |