With VMware Horizon with View, you can create desktop pools that include one, hundreds, or thousands of virtual desktops. You can deploy desktops that run on virtual machines, physical machines, and Windows Remote Desktop Services (RDS) hosts. Create one virtual machine as a base image, and View can generate a pool of virtual desktops from that image. You can also create application pools that give users remote access to applications.
Enable the Shopping Cart widget

The shopping cart widget is enabled automatically for instances upgrading to Istanbul; however, there are several ways to manually enable or disable the widget.

To enable the shopping cart for a catalog item:

1. Navigate to a catalog item on the Service Catalog page in Service Portal.
2. CTRL+right-click a catalog item widget to open the widget instance options.
3. Select or clear the Show Add Cart Button option to enable or disable the shopping cart for that particular catalog item.

Figure 1. Catalog item cart option

To enable the shopping cart in the portal header:

1. From the Service Portal configuration page, select the Portal editor.
2. Select the SP Header Menu in the portal hierarchy.
3. In the Additional options section, make sure the enable cart value is set to true. Set the value to false to hide the shopping cart.

Figure 2. Shopping cart in the header menu
Certification follow-on tasks

The ServiceNow system can automatically generate and assign follow-on tasks to correct discrepancies detected during compliance audits. The system attribute glide.allow.new.cert_follow_on_task is set to true by default, allowing new follow-on tasks to be created for the same failure at each audit run. You can set this property to false to configure the audit to use the same follow-on task for the same audit failure across multiple runs. You configure and assign follow-on tasks as needed to qualified users or groups in the audit record. Any follow-on task can be reassigned by a user with the certification_admin role. The Audit Results related list in the Follow On Task form contains links to the records that failed.

Access follow-on tasks: Users with the certification role can only access follow-on tasks assigned to them but can reassign these tasks to other users.

Manage follow-on tasks: Users with the certification_admin or admin role can see all follow-on tasks.
svds

Format

s = svds(x)

Parameters

x (matrix or array) – NxP matrix or K-dimensional array where the last two dimensions are NxP, whose singular values are to be computed.

Returns

s (min(N,P)x1 vector or K-dimensional array) – singular values of x arranged in descending order; for array input, the last two dimensions are min(N,P)x1.

Examples
// Create a 10x3 matrix
x = { -0.60   3.50   0.47,
       8.40  16.50   0.27,
      11.40   6.50   0.17,
       7.40  -0.50  -2.43,
      -9.60 -10.50   0.57,
     -17.60  -5.50   0.67,
     -12.60 -14.50   0.87,
      18.40  12.50  -1.43,
     -11.60 -19.50   0.77,
       6.40  11.50   0.07 };

// Calculate the singular values
s = svds(x);
After the code above, s will be equal to:
49.58 14.96 2.24
Remarks
If x is an array, the result will be an array containing the singular values of each of the 2-dimensional arrays described by the two trailing dimensions of x. In other words, for a 10x4x5 array x, s will be a 10x4x1 array containing the singular values of each of the 10 4x5 arrays contained in x.
If the singular values cannot be computed, either the program will be terminated with an error message, or the first element of the return, \(s[1]\), is set to a missing value. This behavior is controlled by the trap command. Below is an example with error trapping:
// Turn on error trapping
trap 1;

// Calculate singular values
s = svds(x);

// Check for success or failure
if ismiss(s);
    // Code to handle failure case
endif;
Note that in the trap 1 case, if the input to svds() is a multi-dimensional array and the singular values for a submatrix fail to compute, only the first value of that s submatrix will be set to a missing value. For a 3-dimensional array, you could change the if, else, elseif, endif check in the above example to:
// Check for success or failure of each submatrix
if ismiss(s[., 1, 1]);
Call either svdcusv() or svdusv() to also calculate the right and left singular vectors.
fn:id( arg as xs:string*, [node as node()] ) as element()*
Returns the sequence of element nodes that have an ID value matching the value of one or more of the IDREF values supplied in $arg.
The function returns a sequence, in document order with duplicates eliminated, containing every element node E that satisfies all the following conditions:
for $s in $arg return fn:tokenize(fn:normalize-space($s), ' ') [. castable as xs:IDREF]
Notes: schema-defined type is an.
let $x := document {
  <html xmlns="">
    <p id="myID">hello</p>
  </html> }
return fn:id("myID", $x)
=> <p id="myID" xmlns="">hello</p>
xquery version "1.0-ml";
declare namespace xh = "";
let $x := document {
  <html xmlns="">
    <p id="myID">hello</p>
    <p>hello</p>
  </html> }
return $x/xh:html/xh:p[. is fn:id("myID")]
=> <p id="myID" xmlns="">hello</p>
Camera
- class Camera
A node that can be positioned around in the scene graph to represent a point of view for rendering a scene.
Inheritance diagram
- explicit Camera(std::string const &name, Lens *lens = (new PerspectiveLens()))
- int cleanup_aux_scene_data(Thread *current_thread = Thread::get_current_thread())
Walks through the list of currently-assigned
AuxSceneData objects and releases any that are past their expiration times. Returns the number of elements released.
- bool clear_aux_scene_data(NodePath const &node_path)
Removes the
AuxSceneData associated with the indicated
NodePath. Returns true if it is removed successfully, false if it was already gone.
- void clear_tag_state(std::string const &tag_state)
Removes the association established by a previous call to
set_tag_state().
- void clear_tag_states(void)
Removes all associations established by previous calls to
set_tag_state().
- AuxSceneData *get_aux_scene_data(NodePath const &node_path) const
Returns the
AuxSceneData associated with the indicated
NodePath, or NULL if nothing is associated.
- DrawMask get_camera_mask(void) const
Returns the set of bits that represent the subset of the scene graph the camera will render. See
set_camera_mask().
- static TypeHandle get_class_type(void)
- BoundingVolume *get_cull_bounds(void) const
Returns the custom cull volume that was set by
set_cull_bounds(), if any, or NULL if no custom cull volume was set.
- NodePath const &get_cull_center(void) const
Returns the point from which the culling operations will be performed, if it was set by
set_cull_center(), or the empty
NodePath otherwise.
- DisplayRegion *get_display_region(std::size_t n) const
Returns the nth display region associated with the camera.
- ConstPointerTo<RenderState> get_initial_state(void) const
Returns the initial state as set by a previous call to
set_initial_state().
- NodePath const &get_lod_center(void) const
Returns the point from which the LOD distances will be measured, if it was set by
set_lod_center(), or the empty
NodePath otherwise.
- std::size_t get_num_display_regions(void) const
Returns the number of display regions associated with the camera.
- NodePath const &get_scene(void) const
Returns the scene that will be rendered by the camera. See
set_scene().
- ConstPointerTo<RenderState> get_tag_state(std::string const &tag_state) const
Returns the state associated with the indicated tag state by a previous call to
set_tag_state(), or the empty state if nothing has been associated.
- std::string const &get_tag_state_key(void) const
Returns the tag key as set by a previous call to
set_tag_state_key().
- bool has_tag_state(std::string const &tag_state) const
Returns true if
set_tag_state() has previously been called with the indicated tag state, false otherwise.
- void list_aux_scene_data(std::ostream &out) const
Outputs all of the
NodePaths and AuxSceneDatas in use.
- void set_active(bool active)
Sets the active flag on the camera. When the camera is not active, nothing will be rendered.
- void set_aux_scene_data(NodePath const &node_path, AuxSceneData *data)
Associates the indicated
AuxSceneData object with the given
NodePath, possibly replacing a previous data defined for the same
NodePath, if any.
- void set_camera_mask(DrawMask mask)
Changes the set of bits that represent the subset of the scene graph the camera will render.
During the cull traversal, a node is not visited if none of its draw mask bits intersect with the camera’s camera mask bits. These masks can be used to selectively hide and show different parts of the scene graph from different cameras that are otherwise viewing the same scene.
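For instance, the following C++ fragment (a sketch only; it assumes an existing Camera pointer cam and a NodePath debug_np, both names illustrative) restricts a camera to a single draw-mask bit and then clears that bit on one node so that this camera skips it during culling:

// Reserve draw-mask bit 1 for this camera.
DrawMask bit1 = DrawMask::bit(1);

// During this camera's cull traversal, only nodes whose draw mask
// still intersects bit 1 are visited.
cam->set_camera_mask(bit1);

// Clear bit 1 on this node: the camera above no longer renders it,
// while cameras whose masks contain other bits continue to see it.
debug_np.hide(bit1);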
- void set_cull_bounds(BoundingVolume *cull_bounds)
Specifies the bounding volume that should be used to perform culling from this camera. Normally, this is the bounding volume returned from the active lens’ make_bounds() call, but you may override this to specify a custom volume if you require. The specified bounding volume will be understood to be in the coordinate space of the
get_cull_center() node.
- void set_cull_center(NodePath const &cull_center)
Specifies the point from which the culling operations are performed. Normally, this is the same as the camera, and that is the default if this is not specified; but it may sometimes be useful to perform the culling from some other viewpoint, particularly when you are debugging the culling itself.
- void set_initial_state(RenderState const *state)
Sets the initial state which is applied to all nodes in the scene, as if it were set at the top of the scene graph.
- void set_lod_center(NodePath const &lod_center)
Specifies the point from which the LOD distances are measured. Normally, this is the same as the camera, and that is the default if this is not specified; but it may sometimes be useful to perform the distance test from some other viewpoint. This may be used, for instance, to reduce LOD popping when the camera rotates in a small circle about an avatar.
- void set_lod_scale(PN_stdfloat value)
Sets the multiplier for LOD distances. This value is multiplied with the LOD scale set on LodNodes.
- void set_scene(NodePath const &scene)
Sets the scene that will be rendered by the camera. This is normally the root node of a scene graph, typically a node called ‘render’, although it could represent the root of any subgraph.
Note that the use of this method is now deprecated. In the absence of an explicit scene set on the camera, the camera will render whatever scene it is parented into. This is the preferred way to specify the scene, since it is the more intuitive mechanism.
- void set_tag_state(std::string const &tag_state, RenderState const *state)
Associates a particular state transition with the indicated tag value. When a node is encountered during traversal with the tag key specified by
set_tag_state_key(), if the value of that tag matches tag_state, then the indicated state is applied to this node–but only when it is rendered by this camera.
This can be used to apply special effects to nodes when they are rendered by certain cameras. It is particularly useful for multipass rendering, in which specialty cameras might be needed to render the scene with a particular set of effects.
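A minimal C++ sketch (the camera pointer cam, the NodePath np, and the key/value strings are illustrative assumptions, not part of the API):

// This camera inspects the "cam-fx" tag on the nodes it renders.
cam->set_tag_state_key("cam-fx");

// Nodes whose "cam-fx" tag has the value "tinted" are drawn with a flat
// blue color, but only from this camera's point of view.
cam->set_tag_state("tinted",
    RenderState::make(ColorAttrib::make_flat(LColor(0, 0, 1, 1))));

// Opt a node into the state defined above.
np.set_tag("cam-fx", "tinted");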
- void set_tag_state_key(std::string const &tag_state_key)
Sets the tag key which, when encountered as a tag on nodes in the scene graph, causes this Camera to apply an arbitrary state transition based on the value of the tag (as specified to
set_tag_state()).
- This feature was formerly known as "target matching."
Known Limitations
Targets are applied only after initial type inferencing has been applied to the loaded dataset.
Creating Targets
Sources for creating a target
The schema used to define a target can be imported and assigned from any of the following objects, including:
- Output of a recipe in the same flow
- A reference dataset from another flow
- An imported dataset
Ideally, the source of the target schema should come from the publishing target. If you are publishing to a pre-existing target, you can do one of the following:
- Reference the target: If the schema is represented in a dataset to which you have access, you can reference that dataset.
- Job Details Page: After you have successfully run a job, you can create a new dataset from the Output Destinations tab. Through Flow View, this imported dataset can be used as the schema for wrangling. See Job Details Page.
You can get help directly from the Tcl console. Every Vivado
command supports the
-help command line argument that
can be used anywhere in the line.
For example:
Vivado% create_clock -help Vivado% create_clock -name CLK1 -period 10 -help
In addition, there is a help command that provides additional information. Providing a command name to the help command (that is, help <command>) reports the same help information as <command> -help:
Vivado% help create_clock
The
help command can also just return a short description of the arguments using the
-args option:
Vivado% help create_clock -args
create_clock

Description:
Create a clock object

Syntax:
create_clock -period <arg> [-name <arg>] [-waveform <args>] [-add] [-quiet] [-verbose] [<objects>]

Returns:
new clock object

Usage:
  Name          Description
  ------------------------
  -period       Clock period: Value > 0
  [-name]       Clock name
  [-waveform]   Clock edge specification
  [-add]        Add to the existing clock in source_objects
  [-quiet]      Ignore command errors
  [-verbose]    Suspend message limits during command execution
  [<objects>]   List of clock source ports, pins or nets
A short summary of the syntax of a command is also available with the
-syntax option:
Vivado% help create_clock -syntax
create_clock

Syntax:
create_clock -period <arg> [-name <arg>] [-waveform <args>] [-add] [-quiet] [-verbose] [<objects>]
In addition to providing help for the specific commands, the
help command can also provide information on categories of commands or classes of objects. A list of categories can be obtained by executing the
help command without any argument or option. A non-exhaustive list of categories is:
Vivado% help
ChipScope
DRC
FileIO
Floorplan
GUIControl
IPFlow
Object
PinPlanning
Power
Project
PropertyAndParameter
Report
SDC
Simulation
TclBuiltIn
Timing
ToolLaunch
Tools
XDC
The list of commands available under each category can be also reported with the
-category option. For example, the following command reports all the commands under the Tools category:
Vivado% help -category tools

Topic             Description
link_design       Open a netlist design
list_features     List available features.
load_features     Load Tcl commands for a specified feature.
opt_design        Optimize the current netlist. This will perform the retarget, propconst, and sweep optimizations by default.
phys_opt_design   Optimize the current placed netlist.
place_design      Automatically place ports and leaf-level instances
route_design      Route the current design
synth_design      Synthesize a design using Vivado Synthesis and open that design
Optimizing Memory Usage while Working with Big Files having Large Datasets
When building a workbook with large data sets, or reading a big Microsoft Excel file, the total amount of RAM the process will take is always a concern. There are measures that can be adapted to cope with the challenge. Aspose.Cells provides some relevant options and API calls to lower, reduce and optimize memory use. Also, it can help the process work more efficiently and run faster.
Use the MemorySetting.MemoryPreference option to optimize memory use for cells data and decrease the overall memory cost. When building a large data set for cells, it can save a certain amount of memory compared to using the default setting (MemorySetting.Normal).
Optimizing Memory
Reading Large Excel Files
The following example shows how to read a large Microsoft Excel file in optimized mode.
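A minimal C# sketch (the file name is illustrative):

using Aspose.Cells;

// Ask the loader to favor lower memory use for the cells data model.
LoadOptions loadOptions = new LoadOptions(LoadFormat.Xlsx);
loadOptions.MemorySetting = MemorySetting.MemoryPreference;

// Open the large file with the optimized setting applied.
Workbook workbook = new Workbook("LargeSampleFile.xlsx", loadOptions);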
Writing Large Excel Files
The following example shows how to write a large dataset to a worksheet in an optimized mode.
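A minimal C# sketch (the loop bounds and output file name are illustrative):

using Aspose.Cells;

// Apply the memory preference to the workbook before filling it with data.
Workbook workbook = new Workbook();
workbook.Settings.MemorySetting = MemorySetting.MemoryPreference;

Cells cells = workbook.Worksheets[0].Cells;

// Write a large block of numeric values cell by cell.
for (int row = 0; row < 100000; row++)
{
    for (int col = 0; col < 20; col++)
    {
        cells[row, col].PutValue(row + col);
    }
}

workbook.Save("LargeOutput.xlsx");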
Caution
The default option, MemorySetting.Normal is applied for all versions. For some situations, such as building a workbook with a large data set for cells, the MemorySetting.MemoryPreference option may optimize the memory use and decrease the memory cost for the application. However, this option may degrade performance in some special cases such as follow.
- Accessing Cells Randomly and Repeatedly: The most efficient sequence for accessing the cells collection is cell by cell in one row, and then row by row. Especially, if you access rows/cells by the Enumerator acquired from Cells, RowCollection and Row, the performance would be maximized with MemorySetting.MemoryPreference.
- Inserting & Deleting Cells & Rows: Please note that if there are lots of insert/delete operations for Cells/Rows, the performance degradation will be notable for MemoryPreference mode as compared to the Normal mode.
- Operating on Different Cell Types: If most of the cells contain string values or formulas, the memory cost will be the same as Normal mode but if there are lots of empty cells, or cell values are numeric, bool and so on, the MemorySetting.MemoryPreference option will give better performance.
Implementing automatic WLM
With automatic workload management (WLM), Amazon Redshift manages query concurrency and memory allocation. You can create up to eight queues with the service class identifiers 100–107. Each queue has a priority. For more information, see Query priority.
In contrast, manual WLM requires you to specify values for query concurrency and memory allocation. The default for manual WLM is concurrency of five queries, and memory is divided equally between all five. Automatic WLM determines the amount of resources that queries need, and adjusts the concurrency based on the workload. When queries requiring large amounts of resources are in the system (for example, hash joins between large tables), the concurrency is lower. When lighter queries (such as inserts, deletes, scans, or simple aggregations) are submitted, concurrency is higher.
For details about how to migrate from manual WLM to automatic WLM, see Migrating from manual WLM to automatic WLM.
Automatic WLM is separate from short query acceleration (SQA) and it evaluates queries differently. Automatic WLM and SQA work together to allow short running and lightweight queries to complete even while long running, resource intensive queries are active. For more information about SQA, see Working with short query acceleration.
Amazon Redshift enables automatic WLM through parameter groups:
If your clusters use the default parameter group, Amazon Redshift enables automatic WLM for them.
If your clusters use custom parameter groups, you can configure the clusters to enable automatic WLM. We recommend that you create a separate parameter group for your automatic WLM configuration.
To configure WLM, edit the
wlm_json_configuration parameter in a parameter
group that can be associated with one or more clusters. For more information, see Modifying the WLM configuration.
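As an illustration only, an automatic WLM configuration in wlm_json_configuration has roughly the following shape; the exact property names and allowed values are documented under Modifying the WLM configuration, so verify them against that reference before use:

[
  {
    "user_group": [],
    "query_group": [],
    "name": "Queue 1",
    "priority": "high",
    "queue_type": "auto",
    "auto_wlm": true
  },
  {
    "short_query_queue": true
  }
]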
You define query queues within the WLM configuration. You can add additional query queues to the default WLM configuration, up to a total of eight user queues. You can configure the following for each query queue:
Priority
Concurrency scaling mode
User groups
Query groups
Query monitoring rules
Priority
You can define the relative importance of queries in a workload by setting a priority value. The priority is specified for a queue and inherited by all queries associated with the queue. For more information, see Query priority.
Concurrency scaling mode
When concurrency scaling is enabled, Amazon Redshift automatically adds additional cluster capacity when you need it to process an increase in concurrent read and write queries. Your users see the most current data, whether the queries run on the main cluster or on a concurrency scaling cluster.
You manage which queries are sent to the concurrency scaling cluster by configuring WLM queues. When you enable concurrency scaling for a queue, eligible queries are sent to the concurrency scaling cluster instead of waiting in a queue. For more information, see Working with concurrency scaling.
User groups
You can assign a set of user groups to a queue by specifying each user group name or by using wildcards. When a member of a listed user group runs a query, that query runs in the corresponding queue. There is no set limit on the number of user groups that can be assigned to a queue. For more information, see Assigning queries to queues based on user groups.
Query groups
You can assign a set of query groups to a queue by specifying each query group name or by using wildcards. A query group is simply a label. At runtime, you can assign the query group label to a series of queries. Any queries that are assigned to a listed query group run in the corresponding queue. There is no set limit to the number of query groups that can be assigned to a queue. For more information, see Assigning a query to a query group.. Thus, if
you add
dba_* to the list of user groups for a queue, any user-run query
that belongs to a group with a name that begins with
dba_ is assigned to
that queue. Examples are
dba_admin or
DBA_primary. The '?'
wildcard character matches any single character. Thus, if the queue includes user-group
dba?1, then user groups named
dba11 and
dba21
match, but
dba12 doesn't match.
By default, wildcards aren't enabled.
Query monitoring rules. For more information, see WLM query monitoring rules.
Checking for automatic WLM
To check whether automatic WLM is enabled, run the following query. If the query returns at least one row, then automatic WLM is enabled.
select * from stv_wlm_service_class_config where service_class >= 100;
The following query shows the number of queries that went through each query queue (service class). It also shows the average execution time, the number of queries with wait time at the 90th percentile, and the average wait time. Automatic WLM queries use service classes 100 to 107.
select final_state, service_class, count(*), avg(total_exec_time), percentile_cont(0.9) within group (order by total_queue_time), avg(total_queue_time) from stl_wlm_query where userid >= 100 group by 1,2 order by 2,1;
To find which queries were run by automatic WLM, and completed successfully, run the following query.
select a.queue_start_time, a.total_exec_time, label, trim(querytxt) from stl_wlm_query a, stl_query b where a.query = b.query and a.service_class >= 100 and a.final_state = 'Completed' order by b.query desc limit 5;
Connections (preview) overview
Connections are the key to enable data sharing to and from Customer Insights. Each connection establishes data sharing with a specific service. Connections are the foundation to configure third-party enrichments and configure exports. The same connection can be used multiple times. For example, one connection to Dynamics 365 Marketing works for multiple exports and one Leadspace connection can be used for several enrichments.
Go to Admin > Connections to create and view connections.
The Connections tab shows you all active connections. The list shows a row for each connection.
Get a quick overview, description, and find out what you can do with each extensibility option on the Discover tab.
Exports
Only administrators can configure new connections but they can grant access to contributors to use existing connections. Administrators control where data can go, contributors define the payload and frequency fitting their needs. For more information, see Allow contributors to use a connection for exports.
Enrichments
Only administrators can configure new connections but the created connections are always available to both administrators and contributors. Administrators manage credentials and give consent to data transfers. The connections can then be used for enrichments by both administrators and contributors.
Add a new connection
To add connections, you need to have administrator permissions. If you connect to other Microsoft services, we assume both services are in the same organization.
Go to Admin > Connections (preview).
Go to the Connections tab.
Select Add connection to create a new connection. Choose from the dropdown menu what type of connection you want to create.
In the Set up connection pane, provide the required details.
- The Display name and the type of the connection describe a connection. We recommend choosing a name that explains the purpose and target of this connection.
- The exact fields depend on what service you're connecting to. You can learn about details of a specific connection type in the article about the target service.
- If you use your own Key Vault to store secrets, activate Use Key Vault and choose the secret from the list.
To create the connection, select Save.
You can also select Set up on a tile on the Discover tab.
Allow contributors to use a connection for exports
When setting up or editing an export connection, you choose which users are allowed to use this specific connection to define exports. By default a connection is available to users with an administrator role. You can change this setting under Choose who can use this connection and allow users with contributor role to use this connection.
- Contributors won't be able to view or edit the connection. They'll only see the display name and its type when creating an export.
- By sharing a connection, you allow contributors to use a connection. Contributors will see shared connections when they set up exports. They can manage every export that uses this specific connection.
- You can change this setting while keeping the exports that have already been defined by contributors.
Edit a connection
Go to Admin > Connections (preview).
Go to the Connections tab.
Select the vertical ellipsis (⋮) for the connection you want to edit.
Select Edit.
Change the values you want to update and select Save.
Remove a connection
If the connection you're removing is used by enrichments or exports, you first need to detach or remove them. The remove dialog will guide you to the relevant enrichments or exports.
Detached enrichments and exports become inactive. You reactivate them by adding another connection to them on the Enrichments or Exports page.
Go to Admin > Connections (preview).
Go to the Connections tab.
Select the vertical ellipsis (⋮) for the connection you want to remove.
Select Remove from the dropdown menu. A confirmation dialog appears.
- If there are enrichments or exports using this connection, select the button to see what's using the connection.
- Exports: You can choose to either remove or disconnect the exports to be able to remove the connection. To disconnect an export, administrators can use the Disconnect action. This action is available for individual and multiple selected exports. By disconnecting you keep the export config, but it won't be run until another connection is added to it.
- Enrichments: You can choose to either remove or deactivate the enrichments to be able to remove the connection.
- Once the connection has no more dependencies, go back to Admin > Connections and try removing the connection again.
To confirm the deletion, select Remove.
Set up connections with secrets managed by your own Key Vault
Some connections need secrets like API keys or passwords. Some connections support secrets stored in your own Key Vault. Learn more about supported connections and how to set up your own Key Vault for Customer Insights.
Out-Null
Hides the output instead of sending it down the pipeline or displaying it.
Syntax
Out-Null [-InputObject <PSObject>] [<CommonParameters>]
Description
The
Out-Null cmdlet sends its output to NULL, in effect, removing it from the pipeline and
preventing the output to be displayed at the screen.
Examples
Example 1: Delete output
Get-ChildItem | Out-Null
This command gets items in the current location/directory, but its output is not passed through the pipeline nor displayed at the command line. This is useful for hiding output that you do not need.
Parameters
Specifies the object to be sent to NULL (removed from pipeline). Enter a variable that contains the objects, or type a command or expression that gets the objects.
Inputs
You can pipe any object to this cmdlet.
Outputs
None
This cmdlet does not generate any output.
Notes
- The cmdlets that contain the Out verb (the Out cmdlets) do not have parameters for names or file paths. To send data to an Out cmdlet, use a pipeline operator (
|) to send the output of a PowerShell command to the cmdlet. You can also store data in a variable and use the InputObject parameter to pass the data to the cmdlet. For more information, see the examples.
- Out-Null does not return any output objects. If you pipe the output of Out-Null to the Get-Member cmdlet, Get-Member reports that no objects have been specified.
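For example, the following command passes objects to Out-Null through the InputObject parameter instead of the pipeline; the output is discarded in the same way:

Out-Null -InputObject (Get-ChildItem)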
Scripture Burrito Documentation
After several years of development and testing, we are pleased to announce the availability of Scripture Burrito 1.0-rc1 specification! We recommend that developers of Scripture and Scripture-related applications test and adopt this specification for interchanging data with other systems. Out of the box Scripture Burrito is designed to support the following types of data:
- Scripture Text
- Scripture Print
- Scripture Audio
- Scripture Sign Language
- Scripture Braille
- Scriptural Text Stories
As interoperability is our primary goal, we are happy to accept proposals for new flavors based on common interchange scenarios. We have provided instructions and examples for Extending Scripture Burrito by testing and implementing new flavors (using
x- flavors). When multiple implementations can be demonstrated, we will consider adding them as official flavors in new schema releases.
If you learn best by example, see the minimal flavor examples.
This work has been a multi-year collaboration between several organizations, including American Bible Society, Clear.Bible, Eldarion, Bridge Connectivity Solutions, SIL, unfoldingWord, United Bible Societies, and the work has been sponsored by illumiNations.
Future Development
See future development milestones here. The Scripture Burrito Committee invites comments on all aspects of the schema and documentation. Please use Github Issues or Github Discussions to provide feedback.
Documentation
- Introduction
- Overview
- History
- Committee
- Schema Documentation
- Overall Design
- Agencies
- Agency
- Common Definitions
- Confidential
- Meta (Derived)
- Metadata (Derived)
- Flavor Details: Glossed Text Stories
- idAuthorities
- Identification
- Ingredient
- Ingredients
- Language
- Languages
- Localized Name
- Localized Names
- Meta Comments
- Meta Date created
- Meta Default Language
- Meta Version
- Metadata
- Normalization
- Numbering System
- Progress
- Promotional Statements
- Recipes
- Recipe Element
- Recipe Section
- Relationship
- Relationships
- Role
- Scope
- Flavor Details: Audio Translation
- Flavor Details: Scripture: Braille Publication
- Flavor Details: Scripture: Sign Language Video Translation
- Flavor Details: scripture/textTranslation
- Flavor Details: Scripture: Print Publication
- Scripture Flavor Type
- Software and User Info
- Meta (Source)
- Metadata (Default)
- Target Area
- Target Areas
- Meta (Template)
- Metadata (Template)
- Type
- UNM49 enum
- Flavor Details: X-Flavor
- Flavors
- Scripture Flavors
- Scripture Text
- Scripture Audio
- Scripture Sign Language
- Scripture Print
- Scripture Braille
- Gloss Flavors
- Parascriptural Flavors
- Peripheral Flavors
- Extending Scripture Burrito
- Examples
- Glossary
We need help filling out this section! Feel free to follow the edit this page link and contribute.
Shipping is enabled in Drupal Commerce 2.x with an external module Commerce Shipping. This module only provides an API and plugins for flat rate (both per order and per item) shipping functionality. As with Drupal Commerce 1.x, you will need a plugin or plugins provided by other module(s) for calculating actual shipping costs with shipping services. Currently available plugins with status as of 6 November 2018:
Anatomy of a Standard Case Detail Page
While every Case Type can be configured to display case-specific information, the layout of the case detail page usually remains similar to other request types.
For new installations of ServiceJourney, there is a standard Case Detail Page for Complaints, Enquiries, and Requests. The information displayed in each section of a standard case page depends on the individual case type.
Standard Case Detail Page
Header title
This is the topmost section of the Case detail page.
Header toolbar
Right under the header title is the header toolbar that contains the major actions to do on the Case. By default it contains:
Summary section
This section contains basic information about the Case.
Central tabs
The central tabs contain detailed information about the Case.
Right tabs
By default, this section shows:
- Introduced a new form in GitLab 13.11 with a flag named
fork_project_form. Disabled by default.
- Enabled on GitLab.com and self-managed in GitLab 14.8. Feature flag
fork_project_form removed.
To fork an existing project in GitLab:
- On the project’s home page, in the top right, select Fork:
- Optional. Edit the Project name.
- For Project URL, select the namespace your fork should belong to.
- Add a Project slug. This value becomes part of the URL to your fork. It must be unique in the namespace.
- Optional. Add a Project description.
- Select the Visibility level for your fork. For more information about visibility levels, read Project and group visibility.
- Select Fork project.
GitLab creates your fork, and redirects you to the new fork’s page. select Submit merge request to conclude the process. When successfully merged, your changes are added to the repository and branch you’re merging into.
Removing a fork relationship
You can unlink your fork from its upstream project in the advanced settings.
Mesh outlines overview -- MRTK3
Many mesh outline techniques are done using a post processing technique. Post processing provides great quality outlines, but can be prohibitively expensive on many mixed reality devices.
MeshOutline.cs and MeshOutlineHierarchy.cs can be used to render an outline around a mesh renderer. Enabling this component introduces another render pass of the object being outlined, but it's designed to run optimally on mobile mixed reality devices and doesn't utilize any post processes.
Note
Limitations of this effect include that it does not work well on objects that are not watertight (or that are required to be two-sided), and that depth sorting issues can occur on overlapping objects.
Sample
See the Mesh Outlines sample for demonstrations of the outline system.
Material setup
The outline behaviors are used with the Graphics Tools/Standard shader. Outline materials are usually a solid unlit color but can be configured to achieve a wide array of effects. The default configuration of an outline material is as follows:
- Depth Write - should be disabled for outline materials to make sure the outline doesn't prevent other objects from rendering.
- Vertex Extrusion - needs to be enabled to render the outline.
- Use Smooth Normals - this setting is optional for some meshes. Extrusion occurs by moving a vertex along a vertex normal, on some meshes extruding along the default normals will cause discontinuities in the outline. To fix these discontinuities, you can check this box to use another set of smoothed normals that get generated by MeshSmoother.cs.
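The outline component can also be attached from a script. A minimal Unity C# sketch (the serialized material field and the width value are illustrative; verify the property names against your Graphics Tools package version):

using Microsoft.MixedReality.GraphicsTools;
using UnityEngine;

public class AddOutlineOnStart : MonoBehaviour
{
    // Assign a Graphics Tools/Standard material configured as described above.
    [SerializeField] private Material outlineMaterial = null;

    private void Start()
    {
        // Add the outline behavior to this object and configure it.
        MeshOutline outline = gameObject.AddComponent<MeshOutline>();
        outline.OutlineMaterial = outlineMaterial;
        outline.OutlineWidth = 0.01f; // in meters; tune per asset
    }
}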
Mesh smoothing
MeshSmoother.cs is a component which can be used to automatically generate smoothed normals on a mesh. This method groups vertices in a mesh that share the same location in space then averages the normals of those vertices. This process creates a copy of the underlying mesh and should be used only when required.
In the above image, cube one is utilizing mesh smoothing while cube two is not. Notice the discontinuities at the corners of the cube without mesh smoothing.
Tip
Certain meshes (like spheres) don't display these discontinuities. So, it's best to test for meshes that need mesh smoothing.
You can use the RTL design flow to create modules, instantiate IP, or assemble the top-level design, similar to previous architectures. However, you must follow Xilinx recommendations for using Versal device-specific blocks in the RTL design flow, including the CIPS and NoC IP. The CIPS IP provides access to device configuration features, and the NoC IP connects PL to one or several DDRMC hardened IP.
Xilinx highly recommends using the Vivado IP integrator to instantiate and configure the CIPS and NoC IP. However, you do not need to use the IP integrator for your entire design. You can configure the CIPS and NoC in IP integrator, specify the interface to the rest of the design, and instantiate the resulting block design in the top-level RTL. Using this approach, IP integrator automates the CIPS and NoC configuration, allowing you to apply additional changes as needed.
Microsoft SQL Server
Description
A Microsoft SQL Server Connection can be used to easily connect to your MSSQL server. You'll need the following credentials for connecting to the server:
- MSSQL server URL
- DB port
- A valid username
- A valid password
- The database where to connect to
Enter the credentials into the corresponding field and click on Test Connection to test the DB connection.
Use the Microsoft SQL Server Connection
After you've set up the Microsoft SQL Server Connection, you can use it in a SQL Node within a Flow.
infinidat.infinibox.infini_port module – Add and Delete fiber channel and iSCSI ports to a host on Infinibox
Note
This module is part of the infinidat.infinibox collection (version 1.3). To use it in a playbook, specify: infinidat.infinibox.infini_port.
New in version 2.1 of infinidat.infinibox
Synopsis
This module adds or deletes fiber channel or iSCSI ports to hosts on Infinibox.
Requirements
The below requirements are needed on the host that executes this module.
python2 >= 2.7 or python3 >= 3.6
infinisdk ()
Parameters

Examples

- name: Host bar is available with wwn/iqn ports
  infini_host:
    name: bar.example.com
    state: present
    wwns:
      - "00:00:00:00:00:00:00"
      - "11:11:11:11:11:11:11"
    iqns:
      - "iqn.yyyy-mm.reverse-domain:unique-string"
    system: ibox01
    user: admin
    password: secret
Collection links
Issue Tracker
Repository (Sources)
[ aws . redshift-data ]
Fetches the temporarily cached result of an SQL statement. A token is returned to page through the statement results.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
get-statement-result is a paginated operation. Multiple API calls may be issued in order to retrieve the entire data set of results. You can disable pagination by providing the --no-paginate argument. When using --output text and the --query argument on a paginated response, the --query argument must extract data from the results of the following query expressions: Records
get-statement-result --id <value> [--cli-input-json <value>] [--starting-token <value>] [--max-items <value>] [--generate-cli-skeleton <value>]
--id (string)
The identifier of the SQL statement whose results are to be fetched. This value is a universally unique identifier (UUID) generated by Amazon Redshift Data API. A suffix indicates the number of the SQL statement within a batch query. This identifier is returned by BatchExecuteStatement, ExecuteStatement, and ListStatements.
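A typical invocation looks like the following (the statement identifier is illustrative):

aws redshift-data get-statement-result \
    --id d9b6c0c9-0747-4bf4-b142-e8883122f766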
ColumnMetadata -> (list)
The properties (metadata) of a column.
(structure)
The properties (metadata) of a column.
columnDefault -> (string) The default value of the column.
isCaseSensitive -> (boolean) A value that indicates whether the column is case-sensitive.
isCurrency -> (boolean) A value that indicates whether the column contains currency values.
isSigned -> (boolean) A value that indicates whether an integer column is signed.
label -> (string) The label for the column.
length -> (integer) The length of the column.
name -> (string) The name of the column.
nullable -> (integer) A value that indicates whether the column is nullable.
precision -> (integer) The precision value of a decimal number column.
scale -> (integer) The scale value of a decimal number column.
schemaName -> (string) The name of the schema that contains the table that includes the column.
tableName -> (string) The name of the table that includes the column.
typeName -> (string) The database-specific data type of the column.
NextToken -> (string).
Records -> (list)
The results of the SQL statement.
(list)
(structure)
A data value in a column.
blobValue -> (blob) A value of the BLOB data type.
booleanValue -> (boolean) A value of the Boolean data type.
doubleValue -> (double) A value of the double data type.
isNull -> (boolean) A value that indicates whether the data is NULL.
longValue -> (long) A value of the long data type.
stringValue -> (string) A value of the string data type.
TotalNumRows -> (long)
The total number of rows in the result set returned from a query. You can use this number to estimate the number of calls to the
GetStatementResult operation needed to page through the results.
Information about each database object is stored in the system tables.
System tables cannot be directly modified and are used by the system to manage and maintain the integrity of the database.
Information in the system tables is used to create, access, modify and execute objects and user data stored in Vantage.
The primary system tables include:
- Dbase: This table contains information about every database installed on the system. Database information includes names associated with this database (for example, database name, owner name, and account name), timestamps, passwords, and so on.
- DataBaseSpace: This table contains space allocation for each database.
- TVM and TVFields: These two tables contain information about every table, view, macro and other objects that are stored in the databases.
- Accessrights: This table contains all of the information about user permissions for each type of object stored in the database.
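In practice, the dictionary is read through the corresponding system views in the DBC database rather than by querying these tables directly. A small illustrative query (the database name is hypothetical) lists the tables that the dictionary records for one database:

SELECT DataBaseName, TableName, TableKind
FROM DBC.TablesV
WHERE DataBaseName = 'Sales'
ORDER BY TableName;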
Vantage supports the following database objects:
- Stored Procedures written in SQL and external procedures written in C/C++ and Java.
- User-defined types (UDTs), user-defined functions (UDFs), and user-defined methods (UDMs). These functions and methods provide you with the toolset to perform whatever type of processing and manipulation of data is required.
For detailed descriptions of these objects, see Database Objects.
Accept Apple Pay with UI components
Use Unzer UI component to add Apple Pay payment to your checkout page.
Overview
Using UI components for Apple Pay you create a payment type resource that will be used to make the payment in the server-side integration. You will need to create an Apple Pay button for this payment method.
Before you begin
- Check the basic integration requirements.
- Familiarize yourself with the general guide on integrating using UI components.
- See the list of prerequisites for Accepting Apple Pay Payments through the Unzer payment system here: Apple Pay Prerequisites
Using Apple Pay
Apple Pay guidelines
Before you can use Apple Pay you must make sure that your website or app comply with all of the guidelines mentioned by Apple:
Apple Pay version compatibility
You can accept payments via Apple Pay with the Unzer Payment API. In our code examples, we have chosen version 6 to provide a good mix between compatibility with most Apple devices available and the data which you can request from a customer to process orders.
Apple Pay - Good to know
Here are some things which you should keep in mind while implementing Apple Pay in your application:
- The
domainName-parameter from the merchant validation-step must be equal to the one which has been validated in the Apple developer account.
- Apple Pay is only available on supported Apple devices. See the full list of supported devices here:
Step 2: Add Apple Pay to your project
Create Apple Pay Button client side
Place the Apple Pay button with this code in the desired place on the page.
<div class="apple-pay-button apple-pay-button-black" lang="us" onclick="yourOnClickHandlerMethod()" title="Start Apple Pay" role="link" tabindex="0"></div>
To see other options for the available button display, see Apple Pay Documentation.
Create an Apple Pay Session client side
First you need to set-up an
ApplePayPaymentRequest which is then used to create an Apple Pay session. For more information, see Apple Pay documentation.
In this example, we define the function
startApplePaySession that can be called when the pay button is clicked.
function startApplePaySession() {
    let applePayPaymentRequest = {
        countryCode: 'DE',
        currencyCode: 'EUR',
        supportedNetworks: ['visa', 'masterCard'],
        merchantCapabilities: ['supports3DS'],
        total: { label: 'Unzer GmbH', amount: '12.99' },
        lineItems: [
            { "label": "Subtotal", "type": "final", "amount": "10.00" },
            { "label": "Free Shipping", "amount": "0.00", "type": "final" },
            { "label": "Estimated Tax", "amount": "2.99", "type": "final" }
        ]
    };

    // We adhere to Apple Pay version 6 to handle the payment request.
    let session = new ApplePaySession(6, applePayPaymentRequest);

    session.onvalidatemerchant = function (event) {
        // Call the merchant validation in your server-side integration
    }

    session.onpaymentauthorized = function (event) {
        // The event will contain the data you need to pass to our server-side
        // integration to actually charge the customers card
        let paymentData = event.payment.token.paymentData;
        // event.payment also contains contact information if needed.

        // Create the payment method instance at Unzer with your public key
        unzerApplePayInstance.createResource(paymentData)
            .then(function (createdResource) {
                // Hand over the payment type ID (createdResource.id) to your backend.
            })
            .catch(function (error) {
                // Handle the error. E.g. show error.message in the frontend.
                abortPaymentSession(session);
            })
    }

    // Add additional event handler functions ...

    session.begin();
}
The Apple Pay session provides various event handlers to define the behaviour of your checkout. Most relevant for payment integration with Unzer are the
onvalidatemerchant and
onpaymentauthorized events. Refer to Apple Developer Documentation to get more information about available event handlers.
After this, create an event handler for the
OnClick-function for the Apple Pay button which is defined above.
Inside the event handler function you need to start the Apple Pay session as described previously, for example:
startApplePaySession();
This constructs the
ApplePaySession object and starts the session.
Provide Merchant Validation server side
To accept payments via Apple Pay, your server-side integration should be able to process the Apple Pay merchant validation. With this, Apple adds a security layer so that the customer is sure that the merchant and the shop on which they are buying are the same as they expect.
This is a synchronous call from the
ApplePaySession inside the Safari-browser to your backend. For security reasons, the actual call to the Apple Pay server for the validation must be done from your server-side integration. You are able to create the call to your merchant validation server-side integration by yourself via the
onvalidatemerchant event handler. Find more information about the merchant validation step here: Apple Developer Documentation
The Unzer SDKs also provide an adapter function to process the merchant validation for you.
To construct an
ApplepaySession object, the following parameters are needed:
$applepaySession = new ApplepaySession('your.merchantIdentifier', 'your.merchantName', 'your.domainName');
$appleAdapter = new ApplepayAdapter();
$appleAdapter->init('/path/to/merchant_id.pem', '/path/to/rsakey.key');

// Get the merchant validation url from the frontend.
$merchantValidationURL = urldecode($_POST['merchantValidationUrl']);

try {
    $validationResponse = $appleAdapter->validateApplePayMerchant(
        $merchantValidationURL,
        $applepaySession
    );
    print_r($validationResponse);
} catch (\Exception $e) {
    ...
}
String merchantValidationUrl = getMerchantValidationUrlFromFrontend();
ApplePaySession applePaySession = new ApplePaySession(applePayMerchantIdentifier, applePayDisplayName, domainName);
KeyManagerFactory kmf = getKeyManagerFactory();
TrustManagerFactory tmf = getTrustManagerFactory();

String response = ApplePayAdapterUtil.validateApplePayMerchant(merchantValidationUrl, applePaySession, kmf, tmf);
return response;

// TrustManagerFactory creation
private TrustManagerFactory getTrustManagerFactory() {
    KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
    InputStream is = new ClassPathResource("path/to/file").getInputStream();
    trustStore.load(is, "password".toCharArray());
    is.close();
    TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance("SunX509");
    trustManagerFactory.init(trustStore);
    return trustManagerFactory;
}

// KeyManagerFactory creation
private KeyManagerFactory getKeyManagerFactory() {
    KeyStore keyStore = KeyStore.getInstance("PKCS12");
    InputStream is = new ClassPathResource("path/to/file").getInputStream();
    keyStore.load(is, "password".toCharArray());
    is.close();
    KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance("SunX509");
    keyManagerFactory.init(keyStore, "password".toCharArray());
    return keyManagerFactory;
}
All of the SDKs require the
Apple Pay Merchant ID Certificate to be present and provided to the adapter-function. See this article from Apple on how to request an
Apple Pay Merchant ID Certificate: Apple Developer Documentation
In the Java SDK you also need to provide the Apple Pay certificate keychain in a truststore.
Provide a Payment Authorized Endpoint server side
After the customer has authorized the payment via the Apple Pay overlay (Face ID, Touch ID or device passcode), Safari will return an object (encrypted Apple Pay token) with data which you need to create the Apple Pay payment type on the Unzer API. The Unzer payment type will be needed to perform the actual transaction. For this you should provide a backend controller to accept the
typeId from your frontend. This controller returns the result of the API authorization because Apple Pay uses this to display the result to the customer.
As an example you can have a look at this
RequestMapping:
$jsonData = json_decode(file_get_contents('php://input'), false);
$paymentTypeId = $jsonData->typeId;

// Catch API errors, write the message to your log and show the ClientMessage to the client.
$response = ['transactionStatus' => 'error'];
try {
    // Create an Unzer object using your private key and register a debug handler if you want to.
    $unzer = new Unzer('s-priv-xxxxxxxxxxxxxx');

    // -> Here you can place the Charge or Authorize call as shown in Step 3 <-
    // E.g. $transaction = $unzer->charge(...);
    // Or   $transaction = $unzer->authorize(...);

    $response['transactionStatus'] = 'pending';
    if ($transaction->isSuccess()) {
        $response['transactionStatus'] = 'success';
    }
} catch (UnzerApiException $e) {
    $merchantMessage = $e->getMerchantMessage();
    $clientMessage = $e->getClientMessage();
} catch (RuntimeException $e) {
    $merchantMessage = $e->getMessage();
}
echo json_encode($response);
String paymentTypeId = getApplePayPaymentTypeIdFromFrontend(); // Create an Unzer object using HttpClientBasedRestCommunication and your private key Unzer unzer = new Unzer(new HttpClientBasedRestCommunication(), privateKey); boolean authStatus = false; Applepay applepay = unzer.fetchPaymentType(paymentTypeId); try { // -> Here you can place the Charge or Authorize call as shown in Step 3 <- // E.g Charge charge = unzer.charge(...); // Or Authorize authorize = unzer.authorize(...); // Set the authStatus based on the resulting Status of the Payment-Transaction // The Functions charge.getStatus() or authorize.getStatus() will return the Status-Enum (SUCCESS, PENDING, ERROR) if(charge.getStatus().equals(AbstractTransaction.Status.SUCCESS)) { authStatus = true; } } catch (Exception ex) { log.error(ex.getMessage()); } return authStatus;
Step 3: Make a payment server side
Make a Charge transaction
Now make a
charge or
authorize transaction with the
Applepay resource that you created. With a successful
Chargetransaction money is transferred from the customer to the merchant and a
payment resource is created. In case of the
authorize transaction it can be charged when it has been successful.
POST Body: { "amount" : "49.99", "currency" : "EUR", "returnUrl": "", "resources" : { "typeId" : "s-apl-xxxxxxxxxxxx" } }
$unzer = new Unzer('s-priv-xxxxxxxxxx'); $applePay = $unzer->fetchPaymentType('s-apl-xxxxxxxxxxx'); $charge = $applePay->charge(49.99, 'EUR', '');
Unzer unzer = new Unzer("s-priv-xxxxxxxxxx"); Charge charge = unzer.charge(BigDecimal.valueOf(49.99), Currency.getInstance("EUR"), "s-apl-wqmqea8qkpqy", new URL(""));
The response looks similar to the following example:
{ "id": "s-chg-1", "isSuccess": true, "isPending": false, "isError": false, "redirectUrl": "", "message": { "code": "COR.000.100.112", "merchant": "Request successfully processed in 'Merchant in Connector Test Mode'", "customer": "Your payments have been successfully processed in sandbox mode." }, "amount": "49.9900", "currency": "EUR", "returnUrl": "", "date": "2021-05-14 16:01:24", "resources": { "paymentId": "s-pay-xxxxxxx", "traceId": "c6dc23c6fe91a3e1129da83ebd29deb0", "typeId": "s-apl-xxxxxxxxxxxx" }, "paymentReference": "", "processing": { "uniqueId": "31HA07BC810C911B825D119A51F5A57C", "shortId": "4849.3448.4721", "traceId": "c6dc23c6fe91a3e1129da83ebd29deb0" } }
Step 4: Check status of payment server side
Once the customer is redirected to the
returnUrl, you can fetch the payment details from the API, by using the resources
paymentId from the transaction above to handle the
payment according to its status. If the status of the
payment is
completed, the payment process has been finished successfully and can be considered as paid. Check all the possible payment states here.
GET{payment_ID} { "id": "s-pay-222305", "state": { "id": 1, "name": "completed" }, "amount": { "total": "49.9900", "charged": "49.9900", "canceled": "0.0000", "remaining": "0.0000" }, "currency": "EUR", "orderId": "", "invoiceId": "", "resources": { "customerId": "", "paymentId": "s-pay-222305", "basketId": "", "metadataId": "", "payPageId": "", "traceId": "70ddf3152a798c554d9751a6d77812ae", "typeId": "s-apl-wqmqea8qkpqy" }, "transactions": [ { "date": "2021-05-10 00:51:03", "type": "charge", "status": "success", "url": "", "amount": "49.9900" } ] }
Step Apple Pay payments, see Manage Apple Pay. | https://docs.unzer.com/payment-methods/applepay/accept-applepay-ui-component/ | 2022-06-25T10:49:32 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.unzer.com |
Changes
We regularly publish updates to documentation housed at docs.fastly.com. On the dates listed here, we made changes we believe are significant according to our Terms of Service. You can subscribe to email updates of documentation changes (see the form on this page) and we also make these changes available a machine-readable feed for use with RSS-compatible software.
Special Service Guide Update for September 23, 2016
Friday, September 23 2016 -
On September 23, 2016, Fastly self-certified to the US-EU Privacy Shield. In connection with this certification, Fastly posted its Privacy Shield Notice and then updated its Privacy Policy. Information about the Privacy Shield can be found at the U.S. Department of Commerce's Privacy Shield website.
Special Service Guide Update for May 20, 2016 was published on May 20, 2016. You can view prior versions of the Privacy Policy in our privacy archive.
Special Service Guide Update for January 15, 2016
Friday, January 15 2016 -
On January 15, 2016, Fastly posted an updated privacy policy. This new policy will be effective on February 15, 2016, 30 days from the date of posting. You can view the prior versions of our privacy policy in our archive.
Documentation Updates for the Period Ending June 27, 2015.
Documentation Updates for the Period Ending April 04, 2015.
Documentation Updates for the Period Ending January 24, 2015
Saturday, January 24 2015 -?
A list of all content updates is also available. | https://docs.fastly.com/changes/significant/ | 2016-12-03T00:16:12 | CC-MAIN-2016-50 | 1480698540798.71 | [] | docs.fastly.com |
DataS.
In other words, the DataSource fully supports CRUD (Create, Read, Update, Destroy) data operations, and provides both client-side and server-side support for sorting, paging, filtering, grouping, and aggregates.
For detailed information on the capabilities of the DataSource, refer to its configuration API, methods, and events, and demos.
DataSource Binding
This article provides simple examples, which show how to create Kendo UI DataSource instances bound to local or remote data, and DataSource instances, which are used by a single Kendo UI widget or by multiple widgets.
To Local Data
In this scenario an array of JavaScript objects is assigned to the
data configuration property of the DataSource instance, as demonstrated in the example below.
Example
var movies = [{ title: "Star Wars: A New Hope", year: 1977 }, { title: "Star Wars: The Empire Strikes Back", year: 1980 }, { title: "Star Wars: Return of the Jedi", year: 1983 }]; var localDataSource = new kendo.data.DataSource({ data: movies });
To Remote Service
In this scenario the DataSource needs information about the web service URLs, the request type, the response data type, and the structure (
schema) of the response, if it is more complex than a plain array of objects. You are also able to provide custom parameters, which are going to be submitted during the data request.
Example
var dataSource = new kendo.data.DataSource({ transport: { read: { // the remote service url url: "", // the request type type: "get", // the data type of the returned result dataType: "json", // additional custom parameters sent to the remote service data: { lat: 42.42, lon: 23.20, cnt: 10 } } }, // describe the result format schema: { // the data, which the data source will be bound to is in the "list" field of the response data: "list" } });
Mixed Data Operations Mode
Make sure that all data operations (paging, sorting, filtering, grouping, and aggregates) are configured to take place either on the server, or on the client. While it is possible to use a mixed data operations mode, setting some of the data operations on the server and others on the client
Many Kendo UI widgets support data binding, and the Kendo UI DataSource is an ideal binding source for both local and remote data.
To Local DataSource
A DataSource can be created in-line with other Kendo UI widget configuration settings, as demonstrated in the example below.
Example
$("" } });
To Remote DataSource
You can also create a shared DataSource to allow multiple Kendo UI widgets to bind to the same data collection. The main benefits of using a shared DataSource are fewer data requests, better performance and automatic synchronized refreshing of all widgets bound to the same DataSource instance, when the data changes.
Example") #' }] });
See Also
Other articles on the Kendo UI DataSource component: | http://docs.telerik.com/kendo-ui/framework/datasource/overview | 2016-12-03T00:21:08 | CC-MAIN-2016-50 | 1480698540798.71 | [] | docs.telerik.com |
RadColorPicker Overview
Telerik RadColorPicker for ASP.NET AJAX is a lightweight UI component that allows users to select colors from the RGB or HEX color spaces using a configurable palette view. The control is completely customizable in terms of appearance and offers numerous configuration options including:
25 Preset Color Palettes To speed up your work RadColorPicker is shipped with 25 ready-to-use color palettes like Grayscale, Web216, ReallyWebSafe, Office, etc.
Custom Color Palettes - you can easily define your custom color palette, as well as combine custom with preset palettes when necessary.
Automatic Picker Button - RadColorPicker can render a picker button, which will open the color palette. The picker button will also display the currently selected color. Alternatively, the palette can be always visible or can be evoked using the client-side API.
"No Color" and Color Preview Elements - depending on the particular settings, RadColorPicker can display a "No Color" button in the color palette. A color preview area displays the actual color and its hex code.
Configurable Palette Layout - you can define the number of color columns to be displayed in the color palette or leave them to be configured automatically.
Automatic color box sizing - the size of the color boxes in the palette is automatically calculated according to the Columns property to fit the given Width property. The boxes are of equal width and height. By default the size of each color box is 15px/15px
Advanced Skinning - the visual appearance of the color palette can be easily customized through skins. You can use one of the predefined skins or create your own. | http://docs.telerik.com/devtools/aspnet-ajax/controls/colorpicker/overview | 2016-12-03T00:19:44 | CC-MAIN-2016-50 | 1480698540798.71 | [array(['images/colorpicker-overview006.png', None], dtype=object)] | docs.telerik.com |
Guides
Enabling cross-origin resource sharing (CORS) header button. The Create a new header page appears.
Fill out the Create a new header fields as follows:
- From the Type menu, select Cache, and from the Action menu, select Set.
- In the Destination field, type
http.Access-Control-Allow-Origin.
- In the Source field, type
"*".
- Leave the Ignore if set menu and the Priority field set to their default values.
- In the Description field, type a descriptive name for the new header (e.g.,
CORS S3 Allow). This name is displayed in the Fastly web interface.
Click the Create button. The new header appears on the Content page.
Click the Activate button to deploy your configuration changes.
Test it out
Running the command
curl -I your-hostname.com/path/to/resource should include similar information to the following in your header:
Access-Control-Allow-Origin: Access-Control-Allow-Methods: GET Access-Control-Expose-Headers: Content-Length, Connection, Date...
Back to Top | https://docs.fastly.com/guides/performance-tuning/enabling-cross-origin-resource-sharing.html | 2016-12-03T00:15:40 | CC-MAIN-2016-50 | 1480698540798.71 | [] | docs.fastly.com |
Authors
- 1 The Basics
- 1.1 Pre-requisites
- 1.2 Issue Management - JIRA
- 1.2.1 Versioning Guidelines
- 1.3 Source control - Git
- 1.3.1 Setting up your IDE
- 1.3.2 Eclipse
- 1.4 Build - Maven
- 1.5 Testing - TestNG
- 1.6 Communicating with other Infinispan contributors
- 1.7 Style Requirements
- 1.7.1 Spelling
- 1.7.2 License header
- 1.7.3 Check-in comments
- 1.8 Configuration
- 1.9 Logging
- 2 Source Control
- 2.1 Pre-requisites
- 2.2 Repositories
- 2.3 Roles
- 2.3.1 Occasional Contributor
- 2.3.2 Frequent Contributor
- 2.3.3 Project Admin
- 2.4.7.7 Shell prompt
- 3 Building Infinispan
- 3.1 Requirements
- 3.2 Quick command reference
- 3.3 Publishing releases to Maven
- 3.3.1 Publishing snapshots
- 3.3.2 Publishing releases
- 3.4 The Maven Archetypes
- 3.4.1 Starting a new project
- 3.4.2 Writing a test case for Infinispan
- 3.4.3 Versions
- 3.4.4 Source Code
- 4.
The Basics
Pre-requisites
Issue Management - JIRA
Infinispan uses JIRA for issue management, hosted on issues.jboss.org. You can log in using your normal jboss.org username and password.
If you need to create a new issue then follow these steps.
- Choose between
- Feature Request if you want to request an enhancement or new feature for Infinispan
- Bug if you have discovered an issue
- Task if you wish to request a documentation, sample or process (e.g. build system) enhancement or issue
- Then enter a Summary, describing briefly the problem - please try to be descriptive!
- You should not set Priority.
- Now, enter the version you are reporting an issue against in the Affects Version field, and leave the Fix Version field blank.
- In the Environment text area, describe in as much detail as possible your environment (e.g. Java runtime and version, operating system, any network topology which is relevant).
- In the Description field enter a detailed description of your problem or request.
- If the issue has been discussed on the forums or the mailing list, enter a reference in the Forum Reference field
- Finally, hit Create
Versioning Guidelines
When setting the Fix Version field for bugs and issues in JIRA, the following guidelines apply:
Version numbers are defined as MAJOR.MINOR.MICRO.MODIFIER. For example, 4.1.0.BETA1 would be:
If the issue relates to a Task or Feature Request, please ensure that the .FINAL version is included in the Fixed In field. For example, a new feature should contain 4.1.0.BETA1, 4.1.0.FINAL if it is new for 4.1.0 and was first made public in BETA1. For example, see ISPN-299.
If the issue relates to a bug which affected a previous FINAL version, then the Fixed In field should also contain the .FINAL version which contains the fix, in addition to any ALPHA, BETA or CR release. For example, see ISPN-546.
If the issue pertains to a bug in the current release, then the .FINAL version should not be in the Fixed In field. For example, a bug found in 4.1.0.ALPHA2 (but not in 4.1.0.ALPHA1) should be marked as fixed in 4.1.0.ALPHA3, but not in 4.1.0.FINAL. For example, see ISPN-416.
Source control - Git
Infinispan uses git, hosted on GitHub, for version control. You can find the upstream git repository at. To clone the repository:
or to clone your fork:
For more information, read the Git chapter.
Setting up your IDE
Maven supports generating IDE configuration files for easy setup of a project. Tested are Eclipse, IntelliJ IDEA and Netbeans.
Before we import the project, we need to clone the project as described above.
Eclipse
- Install the m2eclipse plugin if you have not already installed it. Eclipse is including it since version "Indigo" 3.7, for older versions follow instructions at.
- Import the Infinispan maven project. Select File -> Import in your eclipse workbench. Select the Existing Maven Project importer.
- Select the root directory of your Infinispan checkout.
- Select Infinispan modules that you want to import.
- Finally, from Infinispan 5.0 onwards, annotation processing is used to allow log messages to be internationalized. This processing can be done directly from Eclipse as part of compilation but it requires some set up:
- Open the properties for infinispan-core and locate Annotation Processing
- Tick Enable project specific settings
- Enter target/generated-sources/annotations as the Generated source directory
Code Formatting. From the menu Window->Preferences-> select Java -> Code Style -> Formatter. Import formatter.xml
Code template. From the menu Window->Preferences-> select Java -> Code Style -> Code Templates. Import codetemplates.xml
- From Infinispan 5.0 onwards, annotation processing is used to allow log messages to be internationalized. This processing can be done directly from IntelliJ as part of compilation but it requires some set up:
- Go to "Preferences/Compiler/Annotation Processor" and click on Enable annotation processing
- Add an annotation processor with "Processor FQN Name" as org.jboss.logging.LoggingToolsProcessor
- In "Processed Modules", add all modules except the root and the parent modules.
-.
Testing - TestNG
Infinispan uses TestNG for unit and functional tests, and all Infinispan tests are run in parallel. For more information see the Test Suite chapter; this chapter gives advice on writing tests which can safely execute in parallel.
Communicating with other Infinispan contributors
Infinispan contributors use a mix of mailings lists and IRC to communicate ideas and designs, with more detailed designs often making their way into wiki pages.
Style Requirements
Infinispan uses the K&R code style for all Java source files, with two exceptions:
- use 3 spaces instead of a tab character for indentations.
- braces start on the same line for class, interface and method declarations as well as code blocks.
In addition, sure all new line characters used must be LF (UNIX style line feeds). Most good IDEs allow you to set this, regardless of operating system used.
All patches or code committed must adhere to this style. Code style settings which can be imported into IntelliJ IDEA and Eclipse are committed in the project sources, in ide-settings.
Spelling
Ensure correct spelling in code, comments, Javadocs, etc. (use American English spelling). It is recommended that you use a spellchecker plugin for your IDE.
License header
All source files must have up-to-date license headers as described in Copyright Ownership and Licenses. Never remove existing headers or copyrights.
Check-in comments
Please ensure any commit comments use this format if related to a task or issue in JIRA. This helps JIRA pick out these checkins and display them on the issue, making it very useful for back/forward porting fixes. If your comment does not follow this format, your commit may not be merged into upstream.
Configuration
Infinispan offers both programmatic configuration and XML configuration. For more information read the Configuration chapter.
Logging
Infinispan follows the JBoss logging standards, which can be found here.
From Infinispan 5.0 onwards, Infinispan uses JBoss Logging to abstract over the logging backend. Infinispan supports localization of log message for categories of INFO or above as explained in the JBoss Logging guidelines. As a developer, this means that for each INFO, WARN, ERROR, FATAL message your code emits, you need to modify the Log class in your module and add an explicit method for it with the right annotations. For example:
And then, instead of calling log.info(...), you call the method, for example log.anInformativeMessage(param1, param2). If what you're trying to log is an error or similar message and you want an exception to be logged as cause, simply use @Cause annotation, example:
The last thing to figure out is which id to give to the message. Each module that logs something in production code that could be internationalized has been given an id range, and so the messages should use an available id in the range for the module where the log call resides. Here are the id range assignments per module:
Source Control
Pre:
Building Infinispan
Inf!
API, Commons and Core
In order to provide proper separation between public APIs, common utilities and the actual implementation of Infinispan, these are split into 3 modules: infinispan-api, infinispan-commons and infinispan-core. This separation also makes sure that modules, such as the remote clients, don't have to depend on infinispan-core and its transitive dependencies. The following paragraphs describe the role of each of these modules and give indication as to what goes where.
API
The infinispan-api module should only contain the public interfaces which can be used in any context (local, remote, etc). Any additions and/or modifications to this module must be discussed and approved by the team beforehand. Ideally it should not contain any concrete classes: rare exceptions may be made for small, self-contained classes which need to be referenced from the API interfaces and for which the introduction of an interface would be deemed cumbersome.
Commons
The infinispan-commons module contains utility classes which can be reused across other modules. Classes in infinispan-commons should be self-contained and not pull in any dependencies (apart from the existing jboss-logging and infinispan-api). They should also make no reference to configuration aspects specific to a particular environment.
Core
The infinispan-core module contains the actual implementation used for local/embedded mode. When adding new functionality to the APIs, it is generally safe to start by putting them in infinispan-core and promoting them to infinispan-api only when it is deemed to do so and it makes sense across the various use-cases.
Running and Writing Tests
Tests are written using the TestNG framework.
Running the tests
Before running the actual tests it is highly recommended to configure adjust suite's memory setting by updating the MAVEN_OPTS variables. E.g..
Alternatively, you can always pass your own Log4j configuration file via -Dlog4.configuration with your own logging settings.
Patterns:
Enabling code coverage generation
When you run tests, you can enable code coverage recorder for calculating and analysing the Infinispan code coverage. You can do this using coverage and jacocoReport profiles.
As a code coverage evaluation tool, the JaCoCo is used.
Please note, that -Dmaven.test.failure.ignore=true is used for generating JaCoCo code coverage report, even if there are test failures.
Executing this will generate jacoco.exec file in each module's target/ directory. These are the JaCoCo execution data files, which contain full data about the specific module's coverage.
As soon as the coverage execution command is finished, you will need to generate the JaCoCo report, which will merge the generated jacoco.exec files as well as will create the code coverage report.
For having the report in place, run the following command from your Infinispan home directory:
The jacoco-html/ directory will be generated in Infinispan Home directory, which will contain the code coverage report..
Helping Others Out
Inf.
Adding Configuration
Note, these guides assume you are adding an element to the cache configuration, but apply equally to the global configuration.
Before you start adding a configuration property, identify whether you want to add a property to an existing configuration group/element, or whether you need to create a child object. We call the configuration group XXX in the steps below.
Adding a property
Add the property to the relevant XXXConfiguration class. Add a private final field, add a parameter to the constructor, and assign the value to the field in the constructor body. Add a accessor for the property. If the property should be mutable at runtime, then add a mutator as well. Most configuration properties are not mutable at runtime - if the configuration is runtime mutable, then Infinispan needs to take notice of this update whilst the cache is running (you can't cache the value of the configuration in your implementation class). Mutators and accessors don't use the classic JavaBean pattern of prepending accessors with "get" and mutators with "set". Instead, the name of the property is used for an accessor. A mutator is an overloaded version of the accessor which takes a parameter, the new value.
Add the property to the matching XXXConfigurationBuilder. You'll need to add a mutable field to the class, and initialise it to it's default value in the field declaration. Add a mutator (following the above pattern).
The create() method is called by the parent object in order to instantiate the XXXConfiguration object from the builder. Therefore, make sure to pass the value of the field in the builder to the XXXConfiguration object's constructor here. Additionally, if you require a complex default (for example, the value of a configuration property is defaulted conditionally based on the value of some other configuration property), then this is the place to do this.
The validate() method is called by the parent object to validate the values the user has passed in. This method may also be called directly by user code, should they wish to manually validate a configuration object. You should place any validation logic here related to your configuration property. If you need to "cross-validate" properties (validate the value of your property conditionally upon the value of another property), and the other property is on another builder object, increase the visibility of that other property field to "default", and reference it from this builder, by calling the getBuilder() method, which will gives you a handle on the root configuration builder.
The final step is to add parsing logic to the Parser class. First, add the attribute to name to the Attribute enum (this class simply provides a mapping between the non-type-safe name of the attribute in XML and a type-safe reference to use in the parser). Locate the relevant parseXXX() method on the class, and add a case to the switch statement for the attribute. Call the builder mutator you created above, performing any XML related validation (you are unlikely to need this), and type conversion (using the static methods on the primitive wrapper classes, String class, or relevant enum class).
Adding a group
In some situations you may additionally want to add a configuration grouping object, represented in XML as an element. You might want to do this if you are adding a new area of functionality to Infinispan. Identify the location of the new configuration grouping object. It might be added to the root Configuration object, or it might be added to one it's children, children's children. We'll call the parent YYY in the steps below.
Create the XXXConfiguration object. Add any properties required following the guide for adding properties. The constructors visibility should be "default".
Create the XXXConfigurationBuilder object. It should subclass the relevant configuration child builder – use the YYYConfigurationChildBuilder as the superclass. This will ensure that all builder methods that allow the user to "escape" are provided correctly (i.e provide access to other grouping elements), and also require you to provide a create() and validate() method. The constructor needs to take the the YYYConfigurationBuilder as an argument, and pass this to the superclass (this simply allows access to the root of the builder tree using the getBuilder() method).
Follow the property adding guide to add any properties you need to the builder. The create() method needs to return a new instance of the XXXConfiguration object. Implement any validation needed in the validate() method.
In the YYYConfiguration object, add your new configuration class as a private final field, add an accessor, and add initialiser assignment in the constructor
In the YYYConfigurationBuilder, add your new configuration builder as a private final field, and initialise it in the constructor with a new instance. Finally, add an accessor for it following the standard pattern discussed in the guide.
In the YYYConfigurationBuilder ensure that your validate method is called in it's validate method, and that result of the XXXConfiguration instances' create method is passed to the constructor of YYYConfiguration.
Finally, add this to the parser. First, add the element to the Element class, which provides a type safe representation of the element name in XML. In the Parser class, add a new parseXXX method, copying one of the others that most matches your requirements (parse methods either parse elements only - look for ParseUtils.requireNoAttributes(), attributes only – look for ParseUtils.requireNoContent() or a combination of both – look for an iterator over both elements and attributes). Add any attributes as discussed in the adding a property guide. Finally, wire this in by locating the parseYYY() method, and adding an element to the switch statement, that calls your new parseXXX() method.
Don't forget to update the XSD and XSD test
Add your new elements or attributes to the XSD in src/main/resources. Update src/test/resources/configs/all.xml to include your new elements or attributes.
Bridging to the old configuration
Until we entirely swap out the old configuration you will need to add your property to the old configuration (no need to worry about jaxb mappings though!), and then add some code to the LegacyConfigurationAdaptor to adapt both ways. It's fairly straightforward, just locate the relevant point in the adapt() method (near the configuration group you are using) and map from the legacy configuration to the new configuration, or vs versa. You will need to map both ways, in both adapt methods.
Writing Documentation and FAQs
Introduction
Infinispan uses this Confluence instance to store documentation and FAQs. The documentation is organised into:
Editing and adding pages on Confluence is restricted to regular contributors to Infinispan (if you think you should have access, or want to become a regular contributor to the documentation, then please email [email protected].
What goes where?
Infinispan has both this Confluence instance, and the SBS instance at. Documentation and FAQs belong in Confluence, whilst design notes, meeting notes and user contributed articles belong on SBS.
Wiki markup or rich text
You should always use wiki markup, it gives you much greater control over output format. The "Full notation guide" link in the "Help Tips" panel to the right (on the edit page) gives a full list of markup available.
Markup guide
This section discusses the typical markup you would use to write a documentation chapter. The [Style and Grammar guide] will discuss the writing style you should use.
Let's start at the beginning - headers, page structure, and the table of contents.
Headers, Page Structure and the Table of Contents
This Confluence instance is styled to produce structured documentation. Each node in the Infinispan space can have children, producing a tree of nodes. Note that pages are actually stored in a flat structure in the Infinispan space, they are just logically arranged into the tree. This means that URLs do not reflect the tree structure, and names of documents must be unique within the space.
The child nodes of the Infinispan space represent the various guides, and the FAQs. The child nodes of each guide represent the sections of the guides. Some sections of a guide may be further split into subsections stored in different pages (it is likely this was done because a section was getting large).
The include macro is used to display inline the contents of the various sub pages into the top level guide page, and if a section is made up of child pages, each child page should be inlined into the section page using the include macro.
Confluence auto-generates the table of contents (using the toc macro) for the various guides, based on the headings used in the guide. As the include macro does not print the title of the page you are including, it is necessary to add the title above the include macro. You should not use the toc macro except on the main guide page.
h1 headers should only be used to name sections of guide, h2 headers to name sub-sections and so on. You should not skip header levels. Headings should follow the same capitalization rules as a sentence - only capitalize the first letter and proper nouns.
For an example, take a look at the wiki markup for the Contributing to Infinispan guide.
Marking up text
You will likely want to introduce some inline formatting for text. Here are the various styles you should use.
Lists and tables
Markup
You will normally wish to use an unordered list, however a numbered list is useful when expressing a set of steps the user should take. There are no "definition list" macros available in Confluence, so a table makes a good alternative.
You can make nested lists by using "double" markup.
Grammar in lists and tables
The trailing sentence in a list or a table should normally not have a full stop at the end.
Often, you will want to introduce a list or a table using a sentence. If you do this, a colon is often used to punctuate the end of the sentence, rather than a full stop/period.
Links
Links should be used as normal!
Admonitions
Confluence supports three admonition styles, and you are encouraged to use them in your documentation as they allow the flow of information to the reader to be controlled, by moving orthogonal information out of the main body of text.
You can use the title attribute to give the admonition a title
Images and other media
If you are describing the use of a GUI, or showing results of some operation, images embedded in the page can bring the documentation to life for the reader. Images can be attached to the page using the Tools menu, and then linked. The "Full notation guide" discusses the syntax for embedding images. If you are embedding the image to describe a series of steps taken in a GUI, it is not necessary to title your image, otherwise you should give every image a title.
Giffy is supported in this Confluence instance, allowing you to easily create drawings online. For more, read the Giffy documentation for more information.
Confluence supports a charting macro, however it is recommended that you embed charts as images, generating the chart using your favourite program.
The use of the panel macro is not recommended.
Confluence allows you to natively embed video, however the use of this is not recommended, instead it is recommended the widget macro is used to connect to Vimeo or YouTube. The [Screencasts] section describes the creation of screencasts and upload of video to Vimeo or YouTube. To embed a video using the widget macro simply reference the URL to the video, for example:
This produces
You can also embed Google Docs documents, Twitter searches, slide decks from SlideShare, and presentations from SlideRocket. Just follow the above example, substituting the URL for your media.
Code samples
Confluence includes a code macro, but unfortunately it is not very advanced. This Confluence instance also supports the snippet macro which can be used to include text from other sites. The use of the snippet macro is strongly recommended as it ensures that code samples do not get out of date. It is critical that you add a title to the your snippet, and it is also recommended you enable linenumbers and trim the text. For example
Which results in:
Voice and grammar guide
By using a consistent voice throughout the documentation, the Infinispan documentation appears more professional. The aim is to make it feel to the user like the documentation was written by a single person. This can only be completely achieved by regular editing, however in order to make the workload of the editor lighter, following these rules will produce a pretty consistent voice.
- Never use abbreviations. On the other hand, contractions are fine.
- Always use the project name "Infinispan". Never abbreviate it.
- Always write in the second or third person, never the first (plural or singular forms). Use the second person to emphasize you are giving instructions to the user.
- Use a "chatty" style. Although the use of the first person is avoided, the documentation shouldn't be too dry. Use the second person as needed. Short sentences and good use of punctuation help too!
- If you define a list, keep the ordering of the list the same whenever you express the list. For example, if you say "In this section you will learn about interceptors, commands and factories" do not go on to say "First, let's discuss factories". This will subconsciously confuse the user
- You should only capitalize proper nouns only. For example "data grid" is lower case (it's a concept), whilst "Infinispan" is capitalized (it's a project/product name)
- You should always use American spelling. Enable a spell checker!
- Use the definite article when discussing a specific instance or the indefinite article when describing a generalization of something; generally you omit the article when using a name for a project or product.
- Keep the tense the same. It's very easy to slip between the present, past and future tenses, but this produces text that is feels "unnatural" to the reader. Here's an example:
- If you are telling the user about a procedure they can follow, do be explicit about this, and enumerate the steps clearly
Glossary
When writing a glossary entry, you should follow the Basically Available, Soft-state, Eventually-consistent (BASE) as a template.
- If the entry is commonly referred to using an acronym, then the title should consistent of the fully expanded name, with the acronym in brackets. You can then use the acronym always within the main text body
- If you want to refer to other glossary articles using links in the text body, then just link them with no alternative text
- If you want to make external links (e.g. wikipedia, user guide), then add a h2 header "More resources", and list them there. This clearly indicates to users when they are moving outside of our definitions
Screencasts
TODO. | https://docs.jboss.org/author/pages/viewpage.action?pageId=4587620 | 2016-12-03T01:57:03 | CC-MAIN-2016-50 | 1480698540798.71 | [array(['/author/download/attachments/4784485/git_wf_1.png?version=1&modificationDate=1309952321000',
None], dtype=object)
array(['/author/download/attachments/4784485/git_wf_2.png?version=1&modificationDate=1309952321000',
None], dtype=object)
array(['/author/download/attachments/4784485/git_wf_3.png?version=1&modificationDate=1309952321000',
None], dtype=object)
array(['/author/download/attachments/4784485/git_wf_4.png?version=1&modificationDate=1309952321000',
None], dtype=object) ] | docs.jboss.org |
A meta class for closures generated by the Groovy compiler. These classes have special characteristics this MetaClass uses. One of these is that a generated Closure has only additional doCall methods, all other methods are in the Closure class as well. To use this fact this MetaClass uses a MetaClass for Closure as static field And delegates calls to this MetaClass if needed. This allows a lean implementation for this MetaClass. Multiple generated closures will then use the same MetaClass for Closure. For static dispatching this class uses the MetaClass of Class, again all instances of this class will share that MetaClass. The Class MetaClass is initialized lazy, because most operations do not need this MetaClass.
The Closure and Class MetaClasses are not replaceable.
This MetaClass is for internal usage only! | http://docs.groovy-lang.org/latest/html/gapi/org/codehaus/groovy/runtime/metaclass/ClosureMetaClass.html | 2016-12-03T00:24:11 | CC-MAIN-2016-50 | 1480698540798.71 | [] | docs.groovy-lang.org |
100% garanti
The word innovation derives from the Latin word innovatus, which is the noun form of innovare "to renew or change," stemming from in?"into" + novus?"new". Diffusion of innovation research was first started in 1903 by seminal researcher Gabriel Tarde, who first plotted the S-shaped diffusion curve. Tarde (1903) defined the innovation-decision process as a series of steps that includes:
-First knowledge
-Forming an attitude
-A decision to adopt or reject
-Implementation and use
-Confirmation of the decision
Enter the password to open this PDF file:
-
Consultez plus de 91303 études en illimité sans engagement de durée. Nos formules d'abonnement | https://www.docs-en-stock.com/business-comptabilite-gestion-management/creation-innovative-company-140753.html | 2016-12-03T00:25:08 | CC-MAIN-2016-50 | 1480698540798.71 | [] | www.docs-en-stock.com |
Within.
Generally.
The process for adding and removing Working Group members is defined by the Working Group coordinator and because of that it varies from group to group..
Joomla! is released under the GNU GPL (Gnu General Public) version 2 or later. Details of the GPL can be found.
No..
Sometimes,.
You can send or vote ideas at:
And you can request a feature here:
You will need to register at joomlacode.org to submit a feature request. | http://docs.joomla.org/index.php?title=Project_Leadership_FAQs&oldid=103867 | 2014-03-07T13:36:33 | CC-MAIN-2014-10 | 1393999642523 | [] | docs.joomla.org |
The redirects app¶
Django comes with an optional redirects application. It lets you store simple redirects in a database and handles the redirecting for you.
Installation¶¶
manage.py syncdb creates a django_redirect table in your database. This is a simple is not empty, it redirects to new_path.
- If it finds a match, and new_path_CLASSES.. | https://docs.djangoproject.com/en/1.2/ref/contrib/redirects/ | 2014-03-07T13:33:44 | CC-MAIN-2014-10 | 1393999642523 | [] | docs.djangoproject.com |
User Guide
Local Navigation
Create a custom status
If you have a status that you use often, such as when you're going on vacation, you can save it for another time. Your custom status is added to the bottom of the status list.
Next topic: Change your display picture or display name
Previous topic: Show what you're listening to
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/41228/Create_edit_or_delete_a_custom_status_60_1328733_11.jsp | 2014-03-07T13:41:08 | CC-MAIN-2014-10 | 1393999642523 | [] | docs.blackberry.com |
Help Center
Local Navigation® Device Software 6.0 converts the context menu items to command items and provides them to the command item provider. For more information, see Support for legacy context menus.
Next topic: Create a pop-up menu
Previous topic: Support for legacy context menus
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/17971/Creating_a_pop-up_menu_1223665_11.jsp | 2014-03-07T13:35:42 | CC-MAIN-2014-10 | 1393999642523 | [] | docs.blackberry.com |
Installing.
Installing Sonar Server
- Prior to the installation, check the requirements.
- Download and unzip the distribution JEE Server
- Running Sonar as a Service on Windows or Linux
- Running Sonar behind a Proxy. | http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=113541598&selectedPageVersions=144&selectedPageVersions=143 | 2014-03-07T13:37:27 | CC-MAIN-2014-10 | 1393999642523 | [] | docs.codehaus.org |
.
Chunk:Help screen column header Status
Chunk:Help screen column header Access - Chunk:Help screen column header Created By
Chunk:Help screen column header Language
At the top right you will see the toolbar (depending on the template in use):
The functions are:
Chunk:Help screen toolbar Check In
Chunk:Help screen toolbar Options:
Chunk:Help screen column header Select Access | http://docs.joomla.org/index.php?title=Help17:Content_Featured_Articles&oldid=66885 | 2014-03-07T13:39:23 | CC-MAIN-2014-10 | 1393999642523 | [] | docs.joomla.org |
changes.mady.by.user jti user
Saved on Sep 05, 2017
Saved on Sep 14, 2017
The:
Figure 1. NIRSpec, highlighted on the left, in the JWST focal plane
Image Modified
The JWST focal plane with NIRSpec on the left. The 4 magenta rectangles represent the 4-quadrants of the micro-shutter assembly (MSA). Fixed slits (in red) and the IFU (in yellow) are located between the MSA. The NIRSpec aperture position angle is rotated by approximately 137.5° in comparison to NIRCam, NIRISS and the FGS fields of view.
For more information, see the NIRSpec optics page.
0.6–5.3 μm (prism)0.7–1.27 μm
0.97–1.89 μm
1.66–3.17 μm
2.87–5.27 μm
0.20 × 0.46(individual shutter size in the 3.6' × 3.4' FOV)
~100 (prism),~1000,~2700
** These resolving powers correspond to the values at the central wavelength in the measured spectral range.
Figure 2 shows the NIRSpec optical design. The key instrument elements that are important for science observation specifications are the filters, dispersers, science apertures and detectors.
Figure 2. NIRSpec optical elements
A schematic layout of the NIRSpec instrument, including the key filter wheel assembly, grating wheel, detector housing and apertures used for science. Also shown are the locations of pickoff mirrors, fore optics, refocus mechanism and calibration assembly.
Figure 3 shows NIRSpec predicted sensitivity in MOS mode observations for a point source observed in ten 966 s exposures, with the medium resolution R = 1000 paired filter and grating settings used for science. Observers testing NIRSpec performance and preparing proposals should always use the JWST Exposure Time Calculator (ETC) to obtain the most recent sensitivity estimates.
Figure 3. An example of NIRSpec point-source sensitivity in R = 1000 spectral observations
Four curves show the NIRSpec point-source sensitivity in the main spectral settings of the R = 1000 observations, based on instrument sensitivity information derived from the ground test calibration campaign in winter 2016. The plots show curves of limiting sensitivity (defined as S/N of 10 on a source of the presented brightness) that can be achieved in ten 966s exposures with a 0.2" MSA slit width.
Each mode in NIRSpec has its own planning interface template in the Astronomer's Proposal Tool (APT) software. The NIRSpec MOS mode has a specialized MSA planning tool (MPT) to help optimize NIRSpec MSA science.
Dorner, B., Giardino, G., Ferruit, P. et al. 2016, A&A, 592, A113 A model-based approach to the spatial and spectra calibration of NIRSpec onboard JWST.
Merge landing & overview
Space Telescope Science Institute
HELP DESK
JWST WEBSITE
Report website problems
The NASA James Webb Space Telescope, developed in partnership with ESA and CSA, is operated by AURA’s Space Telescope Science Institute. | https://jwst-docs.stsci.edu/pages/diffpagesbyversion.action?pageId=17137873&selectedPageVersions=52&selectedPageVersions=51 | 2018-01-16T13:39:45 | CC-MAIN-2018-05 | 1516084886436.25 | [] | jwst-docs.stsci.edu |
changes.mady.by.user jpp user
Saved on Sep 05, 2017
Saved on Sep 14, 2017
Instructions for designing JWST NIRCam Coronagraphic Imaging observations using APT, the Astronomer's Proposal Tool, v25.1.
Coronagraphic imaging is 1 of 5 observing modes available with the Near Infrared Camera (NIRCam). NIRCam offers 5 coronagraphic occulting masks in the focal plane and 2 Lyot stops in the pupil plane. One Lyot stop is used with the round coronagraphic masks, and the other Lyot stop is used with the bar-shaped coronagraphic masks. NIRCam's 3 round coronagraphic masks, MASK210R, MASK335R, and MASK430R, have inner working angles IWA = 0.40", 0.63", and 0.81" (radius), corresponding to 6λ/D at 2.1, 3.35 and 4.1 μm. The 2 bar-shaped coronagraphic masks, MASKSWB and MASKLWB, are tapered, with IWA varying by a factor of 3 3 primary parameters for NIRCam coronagraphic imaging:
Performing multiple coronagraphic observations enables PSF subtraction as described below in PSF Reference Observations.
Allowed values are documented in the NIRCam Coronagraphic Imaging Template parameters.
Note: mosaics are not available for NIRCam coronagraphic imaging.
NIRCam coronagraphic imaging is available only using Module A, with Module B as a back-up. This parameter is not user changeable. is required for coronagraphic imaging. This section specifies the Target ACQ and Acq Exposure Time.).
A Target ACQ must be completed by selecting a MULTIACCUM exposure configuration. Each exposure is configured by setting the readout pattern and characteristic parameters: Acq Readout Pattern and Acq Groups per Integration.
All 9 642 (SW) or 1282 .
If Yes is selected, confirmation images are taken. These are 2 identical, full-frame exposures using the 4 SW SCAs (for the SW masks), or the 1parameter are: FULL, SUB640 (for the SW channel), and SUB320 (for the LW channel).
Table 1. NIRCam coronagraphic imaging array and subarray properties type
Number of dithers
0
5
DITHER PATTERNvalues available for the bar-shaped coronagraphic masks (MASKSWB, MASKLWB) are:
Table 3. Dither patterns for bar-shaped coronagraphic masks (MASKSWB, MASKLWB).
In most cases, three or more coronagraphic observations should be defined to support PSF subtraction:
For a detailed description of the observational strategies, see the Coronagraphic Sequences page..
If this observation is of a science target, then use this field to associate it with an appropriate PSF reference observation (based on those that have been defined elsewhere in the observing proposal).
For a survey of many targets, they may serve as PSF references for one another. In that case, the user may check this box and explain with additional text in the science justification section of a submitted proposal. The user must still select the science targets as PSF Reference Observations in the list above.
A variety of observatory level special requirements may be chosen, including Position Angle (PA) offsets and non-interruptible observing sequences.
The comments field should be used only to record observing notes.
NIRCam Coronagraphic ImagingNIRCam Coronagraphic Imaging Template ParametersNIRCam Coronagraphic Target AcquisitionNIRCam Coronagraphic Occulting Masks and Lyot StopsNIRCam Filters for CoronagraphyJWST High Contrast Imaging OverviewJWST Coronagraphy StrategiesJWST Coronagraphic Observation PlanningJWST Coronagraphic SequencesJWST High Contrast Imaging in APTNIRCam Coronagraphic PSF EstimationJWST Exposure Time Calculator OverviewNIRCam Detector Readout PatternsNIRCam OverviewNIRCam ModulesNIRCam Detector SubarraysNIRCam DetectorsNIRCam Field of View
Reference papers and reports.
Last updated
Updated June 1, 2017
Published May. | https://jwst-docs.stsci.edu/pages/diffpagesbyversion.action?pageId=20425136&selectedPageVersions=35&selectedPageVersions=34 | 2018-01-16T13:39:53 | CC-MAIN-2018-05 | 1516084886436.25 | [] | jwst-docs.stsci.edu |
About web messaging
From Genesys Documentation
This topic is part of the manual Genesys Predictive Engagement Administrator's Guide for version Current of Genesys Predictive Engagement.
Feature coming soon!Learn how to work with web messaging in Genesys Predictive Engagement.
Overview
Learn what web messaging is and how it works in Genesys Predictive Engagement.
Configuration and deployment
Learn how to configure and deploy messenger.
Web messaging metrics and performance
Learn about the web messaging metrics that appear in the Action Map Performance report. | https://all.docs.genesys.com/ATC/Current/AdminGuide/About_web_messaging | 2021-01-16T03:54:01 | CC-MAIN-2021-04 | 1610703499999.6 | [] | all.docs.genesys.com |
The withdraw function sends an application request into the protocol, this application request consists of the amount in which the user wants to withdraw, the address of which the users wants to transfer the asset and the market of which the asset is locked.
function withdraw(uint _amount, address _to, bool _isMarket) external;
_amount : The amount of asset you want to withdraw from the pool.
_to : Withdraw to the address of the user. The withdrawal address could be the address of the user or any address provided by the user.
_isMarket : true if the asset is a base market and false if the asset is the Token
CEGORAS egoras = CEGORAS(0xd9E373F....);require(egoras.withdraw(50,0xd9E373F.,true/false ), "unable to withdraw");
const instance = await new web3.eth.Contract(abi, address)await instance.methods.withdraw("50,0xd9E373F.,true/false").send({from: 0xMyAccount}); | https://docs.egoras.com/on-chain-liquidity-module/untitled | 2021-01-16T03:07:58 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.egoras.com |
The oldest synallactid sea cucumber (Echinodermata: Holothuroidea: Aspidochirotida)
Reich, Mike, 2010: The oldest synallactid sea cucumber (Echinodermata: Holothuroidea: Aspidochirotida). In: Paläontologische Zeitschrift, Band 84, 541 - 546, DOI 10.1007/s12542-010-0067-8.
Aspidochirote holothurian ossicles were discovered in Upper Ordovician-aged Öjlemyr cherts from Gotland, Sweden. The well-preserved material allows definitive assignment to the family Synallactidae, a deep-sea sea cucumber group that is distributed worldwide today. The new taxon Tribrachiodemas ordovicicus gen. et sp. nov. is described, representing the oldest member of the Aspidochirotida. The further fossil record of Synallactidae and evolutionary implications are also discussed. | https://e-docs.geo-leo.de/handle/11858/7110 | 2021-01-16T03:16:37 | CC-MAIN-2021-04 | 1610703499999.6 | [] | e-docs.geo-leo.de |
datacoco-secretsmanager¶
datacoco-secretsmanager provides basic interaction with the Amazon Web Service (AWS) Secrets Manager service.
Installation¶
datacoco-secretsmanager requires Python 3.6+
python3 -m venv venv source venv/bin/activate python -m pip install datacoco_secretsmanager
Quickstart¶
datacoco-secretsmanager utilizes the boto3 library to interact with the AWS Secrets Manager service, requiring AWS credentials configuration. Lookup of credentials by boto3 is documented here.
Based on how you store your AWS credentials, you can use datacoco-secretsmanager in the following ways.
If you have AWS credentials stored in the default ~/.aws/credentials, instantiate a SecretsManager class using:
from datacoco_secretsmanager import SecretsManager sm = SecretsManager()
You can also pass in AWS authentication keys directly:
from datacoco_secretsmanager import SecretsManager sm = SecretsManager( aws_access_key_id, aws_secret_access_key, aws_role_arn, # only required if you are using role based access )
Otherwise, if you are running on an Amazon EC2 instance, and credentials are not passed in either way above, you can have boto3 load credentials from the instance metadata service. datacoco-secretsmanager will then assume the same IAM role as you specified when you launched your EC2 instance.
One Secret¶
Store a secret in AWS Secrets manager:
AWS Secret name
<AWS-secret-name-for-connection>
| Key | Value | | ---------- | -------------| | <db-name> | <db-name> | | <user> | <user-name> | | <host> | <host> | | <port> | <port-value> | | ... | ... |
To fetch a single secret, use:
sm.get_secret(<aws_secret_resource_name>)
Many Secrets¶
For a project, you may have more than one secret or credentials for more than one system.
You can handle by storing key/value mapping for all required credentials in an AWS secret for the project, then further store credentials in a separate AWS secret for each credential name indicated in a key’s value.
For example, storing a single AWS secret to map or provide lookup to all
required system/db connections is known as the
cfg_store name in our
module:
AWS Secret name
<project-name>/<environment>
Note: If using environment, environment variable named
ENVIRONMENT
should be stored and assigned with the same environment name indicated in your AWS secret name.
Additionally, if working in organization with multiple teams using AWS Secrets Manager, you can further denote secrets per team, by using naming convention:
<team-name>/<project-name>/<environment>.
Store key/values for your
cfg_store with the following:
| Key | Value | | --------------------- | ----------------------------------- | | <db-connection1-name> | <AWS-secret-name-for-db-connection1>| | <db-connection2-name> | <AWS-secret-name-for-db-connection2>|
For each Secret value in your cfg_store, store the full credentials in an additional AWS Secret, ie:
AWS Secret name
<AWS-secret-name-for-db-connection1>
| Key | Value | | ---------- | -------------| | <db-name1> | <db-name1> | | <user> | <user-name> | | <host> | <host> | | <port> | <port-value> | | ... | ... |
AWS Secret name
<AWS-secret-name-for-db-connection2>
| Key | Value | | ---------- | -------------| | <db-name2> | <db-name2> | | <user> | <user-name> | | <host> | <host> | | <port> | <port-value> | | ... | ... |
To fetch secrets for a full project/cfg store, use:
sm.get_config( project_name='your-project-name', team_name='your-team-name', # include only if you want to save as part of your cfg_store name )
Development¶
Getting Started¶
It is recommended to use the steps below to set up a virtual environment for development:
python3 -m venv <virtual env name> source <virtual env name>/bin/activate pip install -r requirements.txt
Testing¶
pip install -r requirements-dev.txt
To run the testing suite, simply run the command:
tox or
python -m unittest discover tests | https://datacoco-secretsmanager.readthedocs.io/en/latest/?badge=latest | 2021-01-16T03:02:28 | CC-MAIN-2021-04 | 1610703499999.6 | [] | datacoco-secretsmanager.readthedocs.io |
KPI definitions.
Next steps
- Learn about the KPI reports available in Swrve. For more information, see Intro to Swrve analytics.
- Start monitoring KPI data by creating a customized Trend Report. For more information, see Trend Reports.
- Learn more about the datasets available in Swrve and how to perform in depth analysis of them with user database exports. For more information, see Intro to user databases and Setting up raw data export. | https://docs.swrve.com/user-documentation/analytics/kpi-definitions/ | 2021-01-16T03:17:27 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.swrve.com |
Breaking: #65317 - TypoScriptParser sortList sanitizes input on numerical sort¶
See Issue #65317
Description¶
When calling the
:= sortList() with a “numeric” modifier of the TypoScript parser with a string, the
sort() method
differs between PHP versions. In order to make this behavior more strict, a check is done before the elements are
sorted to only have numeric values in the list, otherwise an Exception is thrown.
Impact¶
An exception is thrown if non-numerical values are given for a numeric sort in TypoScripts
sortList. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.5/Breaking-65317-TypoScriptParserSortListSanitizesInputOnNumericalSort.html | 2021-01-16T02:44:28 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.typo3.org |
Updating Studies, Samples, Experiments and Runs Interactively¶
The interactive submission interface supports some editing of your submitted objects. Access these existing objects by clicking the relevant tab after logging in to Webin.
Note that under no circumstances can an object’s own accession or alias attribute be edited.
When editing the XML version of an object, you should in general leave the element tags unchanged. These are the capitalised words enclosed in ‘<>’. For example, in the below XML snippet you should leave the words ‘ELEMENT’ unchanged and edit the ‘value’.
<ELEMENT>value</ELEMENT>
Study Edits¶
Some parts of the study object can be edited. These include the release date, title, description and publication references.
Login to Webin and find the studies tab.
Find the study in the list, or search for it by its accession.
If your study is confidential you can change the release date by clicking on the pencil icon and navigating to the required date in the calendar. To release the study simply select the current date/present day and set the following processes in motion:
- Moving relevant sequence data files to a public archive.
- Indexing and rendering the study and its objects so that they can be linked-to and visualised in the ENA browser.
- Mirroring to INSDC databases, who will then make the data available through their services.
Please allow up to 48 hours for newly released data to appear in the public database. Read more about this in our Data Release Policies FAQ
For edits besides changing the release date, click the ‘Edit’ button.
- The short name for the study will be visible in search outputs and overview pages whereas the descriptive title and abstract will be presented in the study’s public page.
- You can add the PMID of any papers related to your data. There will then be a link to the paper from your study’s public page.
- Study attributes are optional tag-value pairs you can specify to add extra information or to make your study more searchable. For example, you could add a ‘DOI’ tag with your paper’s DOI as the value.
- Save your changes when you are satisfied with your updates, or click ‘Cancel’ to abandon them.
Sample Edits¶
To edit a sample, find it in the list (note the search box) and click the ‘Edit’ button next to it.
Your sample will be shown as an XML document which you can edit directly. Make changes as required and click the ‘Save’ button; your changes will not be saved if they invalidate the file. General XML errors and specific errors defined by us are prevented in this way. Note that not all fields can be edited: the sample alias and accession are immutable, and you will not be allowed to remove an attribute which is required by the specified checklist.
This method is useful for one-off edits but it is not feasible for editing many samples at once. For this you can use the programmatic method.
Experiment And Run Edits¶
Experiments and runs are both listed in the ‘Runs’ tab, where matched pairs of experiments and runs share a row in the table. Note that their are separate ‘Edit’ buttons for the two object types:
Be sure to use the correct edit button for the object you wish to edit. When you click the edit button, you will be shown the relevant object in XML format. Locate the element you wish to change and make the required changes, then click ‘Save’. You will not be able to save changes which invalidate the file.
Common Experiment Updates¶
The experiment object provides important metadata about how your data was produced. Common updates might include:
- Changing the library descriptor where a mistake has been made e.g. the library source could be listed as ‘GENOMIC’ when in fact it is ‘METAGENOMIC’
- Changing the <SAMPLE_DESCRIPTOR> where the experiment is pointing at the wrong sample
- Changing the <STUDY_REF> where the experiment is pointing a the wrong study
- Adding new <EXPERIMENT_ATTRIBUTE> elements to provide additional information about your experiment
All of the above can be achieved by editing the XML displayed when you click the ‘Edit’ button.
Common Run Updates¶
The most common run edit would be an MD5 update. You may need to do this if:
- An incorrect MD5 value has been registered for a file
- An invalid file has been replaced with a valid one, which has a different MD5 value
Find the <FILE> element’s ‘checksum’ attribute and correct the 32-digit value.
It is not possible to replace the uploaded file in this way; entering a new filename will not be accepted. If the submitted file has passed validation and been archived, it cannot be replaced. If the submitted file has failed validation, it must be replaced with an identically-named, corrected file. | https://ena-docs.readthedocs.io/en/latest/update/metadata/interactive.html | 2021-01-16T01:51:07 | CC-MAIN-2021-04 | 1610703499999.6 | [array(['../../_images/mod_05_p03.png', '../../_images/mod_05_p03.png'],
dtype=object)
array(['../../_images/mod_05_p04.png', '../../_images/mod_05_p04.png'],
dtype=object)
array(['../../_images/mod_05_p02.png', '../../_images/mod_05_p02.png'],
dtype=object)
array(['../../_images/mod_05_p02_b.png', '../../_images/mod_05_p02_b.png'],
dtype=object)
array(['../../_images/mod_05_p05.png', '../../_images/mod_05_p05.png'],
dtype=object) ] | ena-docs.readthedocs.io |
block face triangulate command
Syntax
- block face triangulate <keyword> <range>
Triangulate faces of rigid blocks to increase the number of vertices on a face. If no keyword is supplied, the minimum number of triangles are created.
- radial
Add a center vertex to all faces.
- radial-8
Add a center vertex and mid-edge vertices to all faces. | http://docs.itascacg.com/3dec700/3dec/block/doc/manual/block_manual/block_commands/block.face/cmd_block.face.triangulate.html | 2021-01-16T03:09:54 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.itascacg.com |
Changelog for package naoqi_dcm_driver
0.0.3 (2017-11-16)
Update README
Update README
Fix typo in README
changing the speed of set_angles
changes in stiffness when shutting down
updating joint comparison
fixing the diagnostics
adding electical current and battery info to Diagnostics
adding a possibility to change stiffness
few changes in ReadJoints
Read joints names from pepper_control config reading joints names from yaml file
adding headers
define controlled joints from ROS controllers
removed joint publishing from ALMotion
removing velocity control and adding moveto subscriber since velocity control is already in Naoqi Driver
Smooth robot motion
Merge pull request
#1
from ros-naoqi/update_links updated repo URL
smooth changed in stiffness
update repo urls
few changes in log
reduce Naoqi log
fix reading motor groups from launch
fixing crash at shutting down
clean robot.hpp
fixing typos
Contributors: Mikael Arguedas, Natalia Lyubova
0.0.2 (2016-09-20)
fixed Autonomous Life call
wakeup the robot during initialization
Contributors: nlyubova
0.0.1 (2016-09-16)
refactoring
adding a wrapper for Memory Proxy
updating the README
exit Touch service
adding the diagnostics class
adding tools for AnyValue conversion
fixing velocity control
initial commit
Contributors: Natalia Lyubova | http://docs.ros.org/en/kinetic/changelogs/naoqi_dcm_driver/changelog.html | 2021-01-16T03:01:37 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.ros.org |
Setting up FTS to search across multiple forms
You can set up full text search (FTS) to search across multiple forms.
To set up FTS to search across multiple forms
- Configure each form so that it can be included in a multi-form FTS (see Configuring forms for multiple-form FTS).
- Configure the server for multi-form FTS (see Configuring the relevancy weight for fields in a multiple-form FTS).
- Update the AR System Multi-Form Search form (see Performing searches on multiple forms).
- Create or modify a form and workflow that enable users to search across multiple forms (see Creating a form and workflow to search across multiple forms).
- (Optional) Use date and time fields on your search form (see Using date and time fields on your search form0.
Behavior of multi-form FTS
- If a user does not have permission to see a form or field, that form or field does not participate in multi-form FTS.
- If a user does not have permission to the Request ID field (field ID 1), the entire row is eliminated from the multi-form FTS results.
- If a user does not have permission to see any of the FTS-indexed fields on a form (based on row-level security), the entire row is eliminated from the multi-form FTS result. The reason for this policy is because BMC Remedy AR System cannot determine the value of the field that caused the entry to appear in the search results and whether the user has permission to see that field.
- If a user does not have permission to see the Full Text MFS Category Name field or Title field, those fields are empty in the search results. The special handling of these fields is because they are used as filters to narrow down the search results. (These fields do not participate in multi-form FTS.)
Was this page helpful? Yes No Submitting... Thank you
Where is the published version of this page?
Hello Bruce,
The page is now visible.
Thanks,
Prachi | https://docs.bmc.com/docs/ars9000/setting-up-fts-to-search-across-multiple-forms-509972453.html | 2021-01-16T03:31:47 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.bmc.com |
SODA Multicloud provides a cloud vendor agnostic data management for hybrid cloud, intercloud, or intracloud. The goal is to provide a unified interface to support file, block, and object services across multiple cloud vendors.
Details about the design and use cases can be found at: multi-cloud File storage and multi-cloud block storage
This guide shows how to use the File and Block storage services of multi-cloud to create and manage cloud volumes and cloud fileshares. This will illustrate the operations through SODA Dashboard.
To refer the API specs, please check multi-cloud API specs
Please refer SODA installation using Ansible Or Check the developer guide multi-cloud lcoal cluster installation through repo
http://{your_host_ip}:8088/#/home
admin/opensds@123
Click on (+) for registering a storage backend. Choose appropriate backend type for Block or File storage
Tags and Metadata need to be chosen appropriately
Tags and Metadata need to be chosen appropriately
Please note that the volume or file share attributes, metadata and tags are compatible with Cloud vendors. Please refer cloud vendor docs for more details | https://docs.sodafoundation.io/guides/user-guides/multi-cloud/file-and-block/ | 2021-01-16T02:00:44 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.sodafoundation.io |
- Product
- Customers
- Solutions
Amazon Step Functions (States) enables you to coordinate the components of distributed applications and microservices using visual workflows.
Enable this integration to see all your Step Functions metrics in Datadog.
If you haven’t already, set up the Amazon Web Services integration first. Then, add the following permissions to the policy document for your AWS/Datadog Role:
states:ListStateMachines, states:DescribeStateMachine
Step Functions (States)is checked under metric collection. If your state machines use AWS Lambda, also ensure that
Lambdais checked.
If your Step Functions states are Lambda functions, installing this integration will add additional tags to your Lambda metrics. This lets you see which state machines your Lambda functions belong to, and you can visualize this on the Serverless page.
Configure Amazon Step Functions to send logs either to a S3 bucket or to Cloudwatch.
Note: If you log to a S3 bucket, make sure that
amazon_step_functions is set as Target prefix.
If you haven’t already, set up the Datadog log collection AWS Lambda function.
Once the lambda function is installed, manually add a trigger on the S3 bucket or Cloudwatch log group that contains your Amazon Step Functions logs in the AWS console:
To enable distributed tracing for your AWS Step Functions:
The Amazon Step Functions integration does not include any events.
The Amazon Step Functions integration does not include any service checks.
Need help? Contact Datadog support.
On this Page | https://docs.datadoghq.com/integrations/amazon_step_functions/ | 2021-01-16T03:48:16 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.datadoghq.com |
@Incubating public interface ModularitySpec
A classpath is a list of JAR files, classes and resources folders passed to one of the JDK's commands like javac, java, or javadoc. Since Java 9, these commands offer two different approaches to handle such a classpath:
Wherever a classpath (a list of files and folders) is configured in Gradle, it can be accompanied by a ModularClasspathHandling object to describe how the entries of that list are to be passed to the --classpath and --module-path parameters.
@Input Property<Boolean> getInferModulePath()
An entry is considered to be part of the module path if it meets one of the following conditions: | https://docs.gradle.org/current/javadoc/org/gradle/api/jvm/ModularitySpec.html | 2021-01-16T03:17:43 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.gradle.org |
acm-certificate-expiration-check
Checks whether ACM Certificates in your account are marked for expiration within the specified number of days. Certificates provided by ACM are automatically renewed. ACM does not automatically renew certificates that you import.
Identifier: ACM_CERTIFICATE_EXPIRATION_CHECK
Trigger type: Configuration changes and periodic
AWS Region: All supported AWS Regions except China (Beijing), China (Ningxia), Africa (Cape Town) and Europe (Milan)
Parameters:
- daysToExpiration
Specify the number of days before the rule flags the ACM Certificate as NON_COMPLIANT.
AWS CloudFormation template
To create AWS Config managed rules with AWS CloudFormation templates, see Creating AWS Config Managed Rules With AWS CloudFormation Templates. | https://docs.aws.amazon.com/config/latest/developerguide/acm-certificate-expiration-check.html | 2021-01-16T02:44:06 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.aws.amazon.com |
If you are from Hong Kong, you can pay for Civic Liker by PayMe or FPS, the yearly subscription fee is HKD468. If you would like to subscribe monthly in HKD38, you can choose to pay by credit card or PayPal.
PayMe ID: 6636369
PayMe QR code:
After payment is done, please collect your transaction record together with your registered email address, send to Like.co / Liker Land lower right hand corner help desk dialogue box (icon in green), or email to [email protected]. The transaction record can be a screenshot of the money transfer.
Thanks for your support, please clap for media and creators to create a healthy content ecosystem. | https://docs.like.co/user-guide/civic-liker/civic-liker-paid-by-payme | 2021-01-16T02:51:55 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.like.co |
After you deploy an MCP cluster, you can renew your expired certificates or replace them by the endpoint certificates provided by a customer as required. When you renew a certificate, its key remains the same. When you replace a certificate, a new certificate key is added accordingly.
You can either push certificates from pillars or regenerate them as follows:
Generate and update by
salt-minion (signed by
salt-master)
Generate and update by external certificate authorities, for example, by Let’s Encrypt
Certificates generated by
salt-minion can be renewed by the
salt-minion
state. The renewal operation becomes available within 30 days before the
expiration date. This is controlled by the
days_remaining parameter of the
x509.certificate_managed Salt state.
Refer to
Salt.states.x509
for details.
You can force renewal of certificates by removing old certificates
and running
salt.minion.cert state on each target node. | https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/openstack-operations/manage-certificates.html | 2021-01-16T02:22:06 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.mirantis.com |
CB Defense API¶
This page documents the public interfaces exposed by cbapi when communicating with a CB Defense server.
Main Interface¶
To use cbapi with Carbon Black Defense, you will be using the CBDefenseAPI. The CBDefenseAPI object then exposes two main methods to select data on the Carbon Black server:
- class
cbapi.psc.defense.rest_api.
CbDefenseAPI(*args, **kwargs)¶
The main entry point into the Cb Defense API.
Usage:
>>> from cbapi import CbDefenseAPI >>> cb = CbDefense_auditlogs()¶
- Retrieve queued audit logs from the Carbon Black Cloud Endpoint Standard server.
- Note that this can only be used with a ‘API’ key generated in the CBC console.
get_notifications()¶
Retrieve queued notifications (alerts) from the Cb Defense server. Note that this can only be used with a ‘SIEM’ key generated in the Cb Defense console..
notification_listener(interval=60)¶
Generator to continually poll the Cb Defense server for notifications (alerts). Note that this can only be used with a ‘SIEM’ key generated in the Cb Defense console.
- class
cbapi.psc.defense.rest_api.
Query(doc_class, cb, query=None)¶
Represents a prepared query to the Cb Defense server.
This object is returned as part of a
CbDefenseAPI.select()operation on models requested from the Cb Defense server. You should not have to create this class yourself.
The query is not executed on the server until it’s accessed, either as an iterator (where it will generate values on demand as they’re requested) or as a list (where it will retrieve the entire result set and save to a list). You can also call the Python built-in
len()on this object to retrieve the total number of items matching the query.
Examples:
>>> from cbapi.psc.defense import CbDefenseAPI >>> cb = CbDefenseAPI()
- Notes:
- The slicing operator only supports start and end parameters, but not step.
[1:-1]is legal, but
[1:2:-1]is not.
- You can chain where clauses together to create AND queries; only objects that match all
whereclauses will be returned.
Models¶
- class
cbapi.psc.defense.models.
DefenseMutableModel(cb, model_unique_id=None, initial_data=None, force_init=False, full_doc=False)¶
Represents a DefenseMutableModel object in the Carbon Black server.
- class
cbapi.psc.defense.models.
Device(cb, model_unique_id, initial_data=None)¶
Represents a Device object in the Carbon Black server.
- class
cbapi.psc.defense.models.
Event(cb, model_unique_id, initial_data=None)¶
Represents a Event object in the Carbon Black server.
- class
cbapi.psc.defense.models.
Policy(cb, model_unique_id=None, initial_data=None, force_init=False, full_doc=False)¶
Represents a Policy object in the Carbon Black server. | https://cbapi.readthedocs.io/en/latest/defense-api.html | 2021-01-16T02:59:26 | CC-MAIN-2021-04 | 1610703499999.6 | [] | cbapi.readthedocs.io |
The access to our Support and Service program
GUARANTEE – for access to software usage rights, upgrade, support and services
The Software Assurance (SA) provides running and working software:
- FREE – Usage rights in accordance with the end user license agreement.
- FREE – 1st Level basic support through our Support & Service center (ticket system).
- FREE – Availability to special discounts, tools, knowledge base, downloads.
- ACCESS – To renewal, upgrade, patch license key and licenses key file.
- ACCESS – The full support & service program
- GUARANTEE – On software defect for up to 3 months after our GA version release date
- GUARANTEE – Money back guarantee for our service, caused by a defect in our software
Accept of terms and conditions
- When purchasing or renewal of the Software Assurance these terms and condition are at the same time accepted – also when the Software Assurance is incl./part of a product
- The link for the Software Assurance is incl./part of e-mail, invoice and/or payment, and It is the responsibility of the costumer’s to be reading the terms before accepting
Mandatory
According to the end user license agreement (EULA) on nearly all our software, it is mandatory to have an active Software Assurance (SA) when using/running the software, unless the software has been purchased as a perpetual license and this is stated on the invoice
Make sure the software is running and working correctly
In most of our software, there is an expiry date connected and this date is synchronized with the Software Assurance period.
- Sometimes as a license key
- Sometimes as a license key file
- Sometimes as an automated online approval service
- Sometimes all the above
Software expire date
- Normal our software will run according to the Software Assurance (SA) period
- Some software will run limited up-to 30 days after the end of SA period
- Some software need online service for approval verification of SA period
- Some old software versions, will seem to work after end SA period, but are very unreliable and break down without any further warning.
Software License Key
Automatic Updates:
If the Ontolica Software in use, have the possibility and are configured to automatic updates, you will get your License Key automatic after payment is approved.
Manual Updates:
If the Ontolica Software in use, don’t have the possibility or are not configured to automatic updates, then it is the customer’s responsibility to request and follow our guide in the Support & Service Center for a new License Key or License Key File.
- Remember to request on time, delivery time is normally 5-10 working days
- The request/order process for a new License Key or License Key File, can start as soon as payment is received and registered in our systems
- Make sure your payment is done in good time and you have requested for a new Key License if needed for Renewal planning etc.
- In most cases – depending on the software – a service can be added the PO Renewal process for a fixed fee “Renewal Process Fee – Manage Key Delivery” and new License Key File will automatic be provided after approved payment.
Support for the 2 latest software versions
The latest and newest General Available (GA) version and one version back.
Example: v7.1 as GA and v6.x as one version back
When a software version is not supported anymore, there is no more guarantee for access to software maintenance, upgrades, patches, support, and services.
Read more on Software Version Supported
Support & Service Center
If an issue occurs, end-user can submit a normal support ticket in our Support & Service Center. Then our team will find the most effective and constructive way the solve the issue.
The issues will normally be prioritized in the order;
- Critical, High, Medium, Low
- First in – First out
All issues must be submitted in our ticket system, any other type of support method it is considered as payable service time.
All inquiries and dialog regarding finance, payments and cancellation issues, must be submitted and done in our ticket system to avoid misunderstanding.
Money back guarantee
If your company or organization purchases our service assistance and later the issue turns out to be caused mainly by a defect in our software, we will refund your purchases of our service assistance.
Assurance Duration, Period and Cancellation
The Software Assurance continues automatically for a new period, into a cancellation and/or closing is done.
The Software Assurance secure continually delivery of professional support & service. We are using cost on planning and booking of ready resources for minimum a remaining active period and the next upcoming full period.
Cancellation must be done in writing minimum 1 period before a new period starts
Example; if the period is 1 year, cancellation must be done minimum 12 months before a new period starts.
Cancellation is done by submitting a ticket to our Support & Service Center
(Cancellation are first applicable when received and accepted in writing)
IMPORTANT! Make sure to pay the last invoice and period on time, to save further expenses on reminder fees, interest rate etc. Any outstanding must be paid in full before accept of cancellation is applicable
Renewal and Purchase Order Process (PO/POP)
If your organization need to purchase the assurance by a yearly renewal with a light or wide Purchase Order Process (PO/POP), then this is possible by an additional handling and process fee(s).
IMPORTANT! Make sure to pay the last invoice and period on time, to save further expenses on reminder fees, interest rate etc.
*Invoicing and payment *
- Software Assurance invoicing and payment is normally handled directly from the software manufacturer to the end-user.
- Normally up-front alerts, invoice and/or PO Process will be started before a new period.
- It is end-users responsibility to update and inform the software manufacturer or supplier of any changes minimum 4 percent calculated on the end-users registered inventory of software licenses, but never less than the minimum price on the assurance.
Before the start of a new assurance period, the assurance price for the next period is calculated based on:
- The currently active and valid price sheet.
- is added the invoice
- A renewal or PO Process fee will be added the invoice
- For extended services (as fill out of forms, IT systems etc.) a fee is added
- Bank cost and handling (as bank checks, transatlantic transfer cost etc.) will be added
Read more – Handling & Service Fees
Stable and fixed renewal prices
Select a renewal subscription plan with a longer period, example 3, 5 or 10 years period
When adding more to the registered software license inventory
The assurance price on any software adding will be calculated and invoiced for the remaining months before an alignment with the main assurance period can be executed.
Additional cost
Some requests may attract an extra cost due to the issue or task type, complexity or amount of work required to process and/or to solve it. In such situation, we will inform and request an accept before proceeding, unless otherwise agreed.
Disclaimer and change eligibility
We accept no liability for errors and reserve the right to make changes to new software version or/and once a year without any further notice!
Post your comment on this topic. | http://docs.surfray.com/ontolica-search-preview/1/en/topic/software-assurance | 2018-04-19T11:25:30 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.surfray.com |
Point Point Point Point Point Struct
Definition
public : struct Point
struct winrt::Windows::Foundation::Point
public struct Point
Public Structure Point
var point = { x: /* Your value */, y: /* Your value */ }
<object property="X,Y"/> -or <object property="X Y"/>
- Attributes
-
Remarks
A Point value sometimes represents a coordinate space in the plane of the main window of an app, but there are other possible interpretations of Point values that can vary, depending on the property that uses the Point value.
JavaScript In JavaScript, a Point is an object with two data properties: x and y. Other than x and y, the remaining API listed here in the Point members lists (or shown in the table of contents) don't apply to JavaScript programming.
Notes on XAML syntax
Point values are used extensively for graphics properties and similar UI-related properties throughout the XAML vocabulary for Windows Runtime app.
Either a space or a comma can be used as the delimiter between X and Y values. The common convention for coordinate points is to use a comma delimiter.
Point structures cannot be declared as resources in a ResourceDictionary. A potential workaround is to use an x:String resource and then inject it into a string that's parsed with XamlReader.Load.
Some XAML usages use the Point type to represent a logical point. See "Logical Points" in Remarks.
Projection and members of Point
If you are using a Microsoft .NET language (C# or Microsoft Visual Basic), or Visual C++ component extensions (C++/CX), then Point. For more info on WRL, see Windows Runtime C++ Template Library (WRL).. However, this can introduce the possibility of sub-pixel rendering, which can change colors and anti-alias any straight line along the sub-pixel edge. That's how the XAML rendering engine treats sub-pixel boundaries. So for best results, use integer values when declaring coordinates and shapes that are used for UI positioning and rendering. Runtime app do not relate to coordinate frames of reference directly. Instead, these are logical points, where the value of X and Y are each expected to be between 0 and 1 inclusive. This is one reason why the X and Y values support floating-point values rather than being constrained to integer values. Logical point values are interpreted by a context: the ultimate presentation or behavior might be specified or modified by a different property or setting. Sometimes the points express information that doesn't relate to coordinate space at all. Examples of the logical point concept in the specifics vary by property. Value constraints are typically noted in the reference pages for individual properties, rather than here in the Point reference.
Point values and XAML graphics API. In other words they're data for a more comprehensive graphics model. For example, the graphics elements you use to compose Path.Data, such as LineSegment, often have a Point -value property.
Some graphics elements use multiple Point values represented in a single property. These properties use the PointCollection type. Any Windows Runtime property that takes a PointCollection supports a XAML syntax that parses the attribute string to get X and Y values for multiple points. An example of a graphics property that uses PointCollection is Polygon.Points.
Point values from XAML input events, what that frame of reference represents. For input events, the frame of reference by default is the main app window, not the overall screen/display. This enables a consistent frame of reference in case the window is moved or resized. Some API such as GetCurrentPoint and GetPosition also provide a way to translate to an element-specific frame of reference, which is useful when working with input events that are handled by an individual control. For more info, see Handle pointer input.
XAML UI development also has a concept known as hit testing, where you can use utility methods to test how input events would report info if the user were to perform a pointer action in a particular coordinate location of the UI. To support the hit testing concept, 2 signatures of FindElementsInHostCoordinates use a Point input parameter, as does FindSubElementsForTouchTargeting. For more info, see Mouse interactions.
Animating Point values
The Windows Runtime provides a means to animate the values of any property that uses a Point as a value, so long as that property is implemented as a dependency property. Point has its own animation support type because it's not possible to individually animate the x and y values of a Point. Structures can't support dependency properties. Use the PointAnimation type for from-to animations, or use PointAnimationUsingKeyFrames derived types for key-frame animation behavior. For more info on how to animate a Point value and how animations work in XAML UI, see Storyboarded animations. | https://docs.microsoft.com/en-us/uwp/api/Windows.Foundation.Point | 2018-04-19T12:47:59 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.microsoft.com |
Abstract
Contents
- Abstract
- PEP Deferral
- Standard Extension Namespace
- The python.details extension
- The python.project extension
- The python.integrator extension
- The python.exports extension
- The python.commands extension
- The python.constraints extension
This PEP describes several standard extensions to the Python metadata.
Like all metadata extensions, each standard extension format is independently versioned. Changing any of the formats requires an update to this PEP, but does not require an update to the core packaging metadata.
PEP Deferral
This PEP depends on PEP 426, which has itself been deferred. See the PEP Deferral section in that PEP for details. python.project extension
The python.project extension allows for more information to be provided regarding the creation and maintenance of the distribution.
The python.project extension contains three custom python.integrator extension
Structurally, this extension is largely identical to the python.project extension (the extension name is the only difference). python.commands extension
The python.commands extension contains three custom.. | http://docs.activestate.com/activepython/2.7/peps/pep-0459.html | 2018-04-19T11:21:58 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.activestate.com |
Docker containers are designed for comfortable application distribution by means of fast and lightweight templates, which allows to run your projects almost everywhere. Thus, this technology represents a perfect solution for for those developers and sysadmins, who looks for speeding up the application delivery workflow and avoiding the repetitive adjustment issues.
In order to handle your own Docker image, the appropriate registry is needed..So, let’s discover how to get it at Jelastic platform in a matter of minutes through following the next steps:
Subsequently, you’ll be able to easily deploy the added image from your custom registry and leverage the possibilities that Docker containers’ support at Jelastic ensures in the full force. | https://docs.jelastic.com/ru/docker-private-registry | 2018-04-19T11:28:38 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.jelastic.com |
The Wavefront Query Builder is an easy-to-use interface that makes Wavefront accessible to all users in your organization. You can construct queries using Query Builder, Query Editor, or Query Wizard. Both Query Builder and Query Editor support autocomplete.
- Query Builder allows you to construct queries from building blocks. Query Builder supports most elements of the Wavefront Query Language. A few of the more advanced functions are only available in the Query Editor, so you can switch from Query Builder to Query Editor to use those functions. You cannot switch from Query Editor to Query Builder.
- Query Editor expect that you type the query using the elements of the query language.
- Query Wizards allows you to customize predefined recipes, for example, to create a Moving Average for a specified time duration, for your environment.
See Wavefront Query Language Quick Reference for a complete reference.
Toggling Query Builder User Preferences
Query Builder is enabled by default. You can toggle Query Builder settings in your user preferences.
- If Enable Query Builder and Always Open the Query Builder are both checked, then Query Builder always displays when you navigate to a blank chart or new alert.
- If Always Open the Query Builder is not checked, then the Query Editor displays by default. Query Builder displays only if you click the Query Builder toggle.
To switch from Query Builder to Query Editor, click the toggle.
Note: You cannot switch from Query Editor to Query Builder if any part of the query has changed.
Constructing Queries
You use Query Builder to assemble a query from its building blocks:
- A metric, constant, or other query.
- Zero or more filters (i.e. sources, source tags, and point tags). The metric determines which filters make sense.
- Zero or more functions for manipulating the output.
As you assemble a query, it displays below the query builder. The chart associated with the query updates as you add filters and use functions to manipulate the output.
To use a constant value or the value of another query instead of a metric, you can toggle the metrics field by clicking the metrics selector:
You can preview the result of each evaluation step in real-time:
- For filters, click the bar chart icon at the end of each field (shown below).
- For functions, hover over the function to get documentation and a preview of the changed chart.
Filters and Functions
Query Builder helps you construct your queries like this:
- You can
ANDand
ORmultiple elements together. Unlike manually constructed
ts()queries, which allow mixed
ANDs and
ORs, the Query Builder applies either
ANDor
OR.
- The order of evaluation is left to right – metrics, then filters, then functions.
- Wildcard matching is supported for metrics, sources, and tags.
- You can remove any element in the chain by clicking the X icon to the right of that element. The rest of your expression remains intact.
| https://docs.wavefront.com/query_language_query_builder.html | 2018-04-19T11:41:16 | CC-MAIN-2018-17 | 1524125936914.5 | [array(['images/query_builder_new.png', 'Query builder new'], dtype=object)
array(['images/query_builder_04x.png', 'Query builder'], dtype=object)
array(['images/query_builder_showing_chart.png',
'Query builder with chart'], dtype=object)
array(['images/metric_selector.png', 'Metric selector'], dtype=object)
array(['images/display_query.png', 'Display query'], dtype=object)
array(['images/filter_and.png', 'filter and'], dtype=object)] | docs.wavefront.com |
ZP3102 Multi Sensor Dual: motion and temperature sensor
This describes the Z-Wave device ZP3102, manufactured by Zipato with the thing type UID of
zipato_zp3102_00_000.
Multi Sensor Dual: motion and temperature sensor
Overview
Presence detector and temperature meter in one device. Zipato Multisensor Duo offers elaborate security and ambient sensing options. The multifunctional nature of this product allows you to detect motion, and measure the room’s temperature. It can be used to automatically trigger other Z-Wave devices when activated.
Inclusion Information
Put your Z-Wave Controller into “inclusion” mode, and follow the instructions to add Multisensor Duo to your controller. To get in the “inclusion” mode, the distance between sensor and controller should be up to 1 meter. Press the program switch of Multisensor Duo once. The LED on the sensor should stop flashing, if not, please try again.
Exclusion Information
Put your Z-Wave Controller into “exclusion” mode, and follow the instructions to remove the Multisensor Duo from you controller network. Press the program switch of Multisensor Duo once to be excluded. The LED on the Multisensor Duo should start to flash.
Wakeup Information
Remove the rear cover to wake up the device, or set the wake up interval time from 10 minutes to 1 week. The battery will be drained quickly if you fail to replace the cover after using that method to wake up the device.
Channels
The following table summarises the channels available for the ZP3102 Multi Sensor Dual: motion and temperature sensor.
Sensor (temperature)
Scale
Select the scale for temperature readings
Device Configuration
The following table provides a summary of the configuration parameters available in the ZP3102 Multi Sensor Dual: motion and temperature sensor. Detailed information on each parameter can be found below.
1: On time in minutes
Delay before sending OFF
2: Celsius / Fahrenheit
0 = Celsius, 1 = Fahrenheit
3: Infrared sensor sensitivity adjustment
1 is most sensitive, 7 is least
Overview
(Parameter 3) Infrared sensor sensitivity adjustment, 7 levels sensitivity, 1 = most sensitive, 7 = most insensitive, default value= 4
1: Control Command
Did you spot an error in the above definition or want to improve the content? You can edit the database here. | https://docs.openhab.org/addons/bindings/zwave/doc/zipato_zp3102_00_000.html | 2018-04-19T13:53:05 | CC-MAIN-2018-17 | 1524125936969.10 | [] | docs.openhab.org |
Get-Role
Assignment Policy
Syntax
Get-RoleAssignmentPolicy [[-Identity] <MailboxPolicyIdParameter>] [-DomainController <Fqdn>] [<CommonParameters>]
Description
For more information about assignment policies, see Understanding management role assignment-RoleAssignmentPolicy
This example returns a list of all the existing role assignment policies.
-------------------------- Example 2 --------------------------
Get-RoleAssignmentPolicy "End User Policy" | Format-List
This example returns the details of the specified assignment policy. The output of the Get-RoleAssignmentPolicy cmdlet is piped to the Format-List cmdlet.
For more information about pipelining and the Format-List cmdlet, see Pipelining () and Working with command output ().
-------------------------- Example 3 --------------------------
Get-RoleAssignmentPolicy | Where { $_.IsDefault -eq $True }
This example returns the default assignment policy.
The output of the Get-RoleAssignmentPolicy cmdlet is piped to the Where cmdlet. The Where cmdlet filters out all of the policies except the policy that has the IsDefault property set to $True.
For more information about pipelining and the Format-List cmdlet, see Pipelining () and Working with command name of the assignment policy to view. If the name contains spaces, enclose the name in quotation marks (").. | https://docs.microsoft.com/en-us/powershell/module/exchange/role-based-access-control/Get-RoleAssignmentPolicy?view=exchange-ps | 2018-04-19T14:22:50 | CC-MAIN-2018-17 | 1524125936969.10 | [] | docs.microsoft.com |
GitHub
You can use the data from your GitHub commits to help find and fix bugs faster.
Configure GitHub
Note
GitHub (github.com). Make sure you have given Sentry access to these repositories in GitHub in the previous steps.
The GitHub integration is available for all projects under your Sentry organization. You can connect multiple GitHub organizations to one Sentry organization, but you cannot connect a single GitHub organization to multiple Sentry organizations.
GitHub Enterprise
Add new GitHub App
Confirm Sentry's IP ranges are allowed for your GitHub Enterprise instance.
In your GitHub Enterprise organization, navigate to Settings > Developer Settings > GitHub Apps and click to add a new New GitHub App.
Register new GitHub App
First, you'll need to generate a webhook secret. For example, in terminal:Copied:Copied. Make sure you have given Sentry access to these repositories in GitHub in the previous steps.
GitHub Enterprise should now be enabled for all projects under your Sentry organization. GitHub issues from within Sentry, and link Sentry issues to existing GitHub Issues.
Once you’ve navigated to a specific issue, you’ll find the Linked Issues section on the right hand panel. Here, you’ll be able to create or link GitHub issues.
Resolving in Commit/Pull Request
Once you are sending commit data, you can start resolving issues by including
fixes <SENTRY-SHORT-ID> in your commit messages. For example, a commit message might look like:
Prevent empty queries on users Fixes MYAPP-317
You can also resolve issues with pull requests by including
fixes <SENTRY-SHORT-ID> in the title or description.
When Sentry sees this, we’ll automatically annotate the matching issue with a reference to the commit or pull request, and, later, when that commit or pull request is part of a release, we’ll mark the issue as resolved. | https://docs.sentry.io/workflow/integrations/github/ | 2020-08-03T18:08:58 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.sentry.io |
Warning
This document is for an old release
galaxy.security.
get_permitted_actions(filter=None)[source]¶
Utility method to return a subset of RBACAgent’s permitted actions
galaxy.security.idencoding module¶
- class
galaxy.security.idencoding.
IdEncodingHelper(**config)[source]¶
encode_dict_ids(a_dict, kind=None, skip_startswith=None)[source]¶
Encode all ids in dictionary. Ids are identified by (a) an ‘id’ key or (b) a key that ends with ‘_id’
encode_all_ids(rval, recursive=False)[source]¶
Encodes all integer values in the dict rval whose keys are ‘id’ or end with ‘_id’ excluding tool_id which are consumed and produced as is via the API.. | https://docs.galaxyproject.org/en/release_19.09/lib/galaxy.security.html | 2020-08-03T17:08:26 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.galaxyproject.org |
countBy
countBy(Array<T>, (T) -> Boolean): Number
Counts the elements in an array that return
true when the matching function is applied to the value of each element.
Example
This example counts the number of elements in the input array ([1, 2, 3, 4]) that return
true when the function
(($ mod 2) == 0) is applied their values. In this case, the values of two of the elements, both
2 and
4, match because
2 mod 2 == 0 and
4 mod 2 == 0. As a consequence, the
countBy function returns
2. Note that
mod returns the modulus of the operands.
Source
%dw 2.0 import * from dw::core::Arrays output application/json --- { "countBy" : [1, 2, 3, 4] countBy (($ mod 2) == 0) } | https://docs.mulesoft.com/mule-runtime/4.3/dw-arrays-functions-countby | 2020-08-03T18:33:45 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.mulesoft.com |
ShortCut is an online service that allows users to precisely define changes in a road network. These changes can then be published and communicated to navigation service providers.
If you want a ShortCut for your city or study, get in touch, we're happy to help!
By default working with ShortCut is done in three steps:
This guide will help you get started with doing this yourself, from mapping the new situation to publishing the data.
We've added a small example below, a short section of road is set to 'under construction' and a default detour has been added.
Editing the network is done using the iD editor. A customized version is avialable in ShortCut.
To start editing, first zoom-in to the area where you want to edit the network and then click on the edit menu.
Once clicked on the edit menu, the map switches to the edit in the exact same area.
If on the previous page and before clicking on the edit menu you have not zoomed-in enough on the edit area, editable map will also open in a zoomed-out level and you will see no link to edit. However, as soon as you zoom to the edit area, the links start appearing for you to edit.
You can test your changes by planning routes on the network.
Once the network is edited in accordance with the scenario in hand (e.g., Road Constructions/works), one can experiment changes that this scenario will bring about by switching to the Test mode.
Test mode allows you to verify accessibility as well as possible routing changes for an Origin and Destination pair. This verification is simply done by 2 mouse clicks on the map where:
Subsequently you can select your desired routing profile/s in the "ROUTING" box at the top-right corner of the map.
ShortCut offers up to 10 instances (0-9). It is recommended to keep instance "0” as the Base-Scenario representing the existing situation and edit the network in other 9 instances (1-9) for creating new scenarios (e.g., future plans, etc.). This allows for an easier routing comparison between the existing situation and created scenarios as well as routing comparison among created scenarios themselves. See below for an example:
When you publish, the map will become public and data about the instance will be available. Publishing can be done by clicking "Publish" setting a name and a description and changing the status to published. | https://docs.anyways.eu/shortcut/index.html | 2020-08-03T17:42:07 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.anyways.eu |
So you found an SDK documentation bug...
What do you do?
Do you post in the forums?
Do you email [email protected]?
The best way is to click the link which is on the bottom of virtually every page in the MSDN library (thanks for the correction, alimbada) that says: "Send comments about this topic to Microsoft". This will let you email Microsoft directly with comments about a specific topic.
By the way, If you are just interested in giving generic feedback or filing bugs on Microsoft products outside of MSDN, we have a site for that.
You're probably thinking now that this feedback goes into a giant black hole at Microsoft. It does. Just kidding :-) But seriously, when the mail is received, the feedback propagates through our publishing system into a database that contains the then anonymized feedback. Systems exist on top of that db that writers can use to gather insights about their topics. For example, one such system tells writers all the feedback that has been written for any pages within their content area ... for all time.
Some teams, such as ours, have an even more in depth methodology for handling this feedback. A very passionate individual on our team combs through all of the feedback for all of the content areas for our team (all the Windows content areas!) and classifies each piece of feedback. IF the feedback is a bug, she files a bug against the owner for the topic area and the owner of that topic will then correct their content as appropriate.
Yeah, you read that right. We read the comments posted to any of our pages on MSDN, even if the comment is "roflcopter lol lol lol" - which we appreciate receiving from time to time, especially if the topic had comments attacking our beloved company or product. Not every writer will respond directly to your feedback, but when you email us, we at the very least see it and consider your suggestions.
So there you have it, you can directly give us documentation feedback through the feedback mechanisms that have existed on MSDN for the past 10 years.
Oh, also of note. If you give feedback, it's best if it is very specific. For example:
"There's a bug on this page: The fSomeNumber parameter is actually a float, NOT a double as the documentation says."
versus:
"There's a bug on this page, and it makes me very upset."
Which we typically look at, feel bad about, but can't do very much about. | https://docs.microsoft.com/en-us/archive/blogs/seealso/so-you-found-an-sdk-documentation-bug | 2020-08-03T18:42:04 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.microsoft.com |
NLog
Sentry has an integration with
NLog through the Sentry.NLog NuGet package.
Installation
Using package manager:
Install-Package Sentry.NLog -Version 2.1.5
Or using the .NET Core CLI:
dotnet add package Sentry.NLog , Sentry will keep any message with log level
Info or higher as a
Breadcrumb.
The default value to report a log entry as an event to Sentry is
Error.
This means that out of the box, any
Error call will create an
Event which will include all log messages of level
Info,
Warn and also
Error and
Critical.
Configuration
You can configure the Sentry NLog target via code as follows:
LogManager.Configuration = new LoggingConfiguration(); LogManager.Configuration .AddSentry(o => { // Optionally specify a separate format for message o.Layout = "${message}"; // Optionally specify a separate format for breadcrumbs o.BreadcrumbLayout = "${logger}: ${message}"; // Debug and higher are stored as breadcrumbs (default is Info) o.MinimumBreadcrumbLevel = LogLevel.Debug; // Error and higher is sent as event (default is Error) o.MinimumEventLevel = LogLevel.Error; // Send the logger name as a tag o.AddTag("logger", "${logger}"); // All Sentry Options are accessible here. });
It's also possible to initialize the SDK through the NLog integration (as opposed to using
SentrySdk.Init).
This is useful when NLog is the only integration being used in your application. To initialize the Sentry SDK through the NLog integration, provide it with the DSN:
LogManager.Configuration = new LoggingConfiguration(); LogManager.Configuration .AddSentry(o => { // The NLog integration will initialize the SDK if DSN is set: o.Dsn = new Dsn("PUBLIC_DSN")); });
Note
The SDK needs to be initialized only.
Minimum log level
Two log levels are used to configure this integration (see options below). One will configure the lowest level required for a log message to become an event (
MinimumEventLevel) sent to Sentry. The other option (
MinimumBreadcrumbLevel) configures the lowest level a message has to be to become a breadcrumb. Breadcrumbs are kept in memory (by default the last 100 records) and are sent with events. For example, by default, if you log 100 entries with
logger.Info or
logger.Warn, no event is sent to Sentry. If you then log with
logger.Error, an event is sent to Sentry which includes those 100
Info or
Warn messages. For this to work,
SentryTarget needs to receive all log entries in order to decide what to keep as breadcrumb or sent as event. Make sure to set the
NLog
LogLevel configuration to a value lower than what you set for the
MinimumBreadcrumbLevel and
MinimumEventLevel to make sure
SentryTarget receives these log messages.
The SDK can also be configured via
NLog.config XML file:
<?xml version="1.0" encoding="utf-8" ?> <nlog xmlns="" xmlns: <extensions> <add assembly="Sentry.NLog" /> </extensions> <targets> <target xsi: <!-- Advanced options can be configured here--> <options environment="Development" attachStacktrace="true" sendDefaultPii="true" shutdownTimeoutSeconds="5" > <!--Advanced options can be specified as attributes or elements--> <includeEventDataOnBreadcrumbs>true</includeEventDataOnBreadcrumbs> </options> <!--Add any desired additional tags that will be sent with every message --> <tag name="logger" layout="${logger}" /> <tag name="example" layout="sentry-nlog" /> </target> </targets> <rules> <logger name="*" writeTo="sentry" /> </rules> </nlog>
Options
MinimumBreadcrumbLevel
A
LogLevel that indicates the minimum level a log message needs to be in order to become a breadcrumb. By default, this value is
Info..
IgnoreEventsWithNoException
To ignore log messages that don't contain an exception.
SendEventPropertiesAsData
Determines whether event-level properties will be sent to sentry as additional data. Defaults to true.
SendEventPropertiesAsTags
Determines whether event properties will be sent to sentry as Tags or not. Defaults to false.
IncludeEventDataOnBreadcrumbs
Determines whether or not to include event-level data as data in breadcrumbs for future errors. Defaults to false.
BreadcrumbLayout
Custom layout for breadcrumbs. See NLog layout renderers for more.
Layout
Configured layout for the NLog logger.
Any additional tags to apply to each logged message. | https://docs.sentry.io/platforms/dotnet/nlog/ | 2020-08-03T18:24:48 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.sentry.io |
Tarbell¶..
Requirements¶
Tarbell requires Python 2.7 and Git (v1.5.2+). Tarbell does not currently support Python 3.
Tarbell does not work on Windows machines.
Anatomy of a Tarbell project¶
Tarbell projects are made up of four pieces:
- The core Tarbell library, installed when you run pip install tarbell
- A Tarbell blueprint directory
- Your project template files
- A Google spreadsheet (optional)
Using Tarbell¶
- Installation
- Tutorial
- Set up a new project
- Structure your project
- Add content
- Displaying data
- Adding CSS
- Using Javascript
- Using
{{ super() }}
- Overriding default templates
- Putting it all together: Leaflet maps
-?
- Adding custom template types
- Anatomy of a project directory
- Using and extending template filters and functions
- Adding custom routes
- Using Flask Extensions
- Using Google spreadsheets
- Publishing
- Developing Tarbell blueprints
Reference¶
- Remote configuration
- Managing projects
- Hooks
- Command line reference
- Configuration reference
- Contributing | https://tarbell.readthedocs.io/en/1.0.10/ | 2020-08-03T17:04:24 | CC-MAIN-2020-34 | 1596439735823.29 | [] | tarbell.readthedocs.io |
Update a publication You can make changes to a publication that is in the Author or Review stages. Before you beginRole required: sn_publications.author or sn_publications.admin Procedure Navigate to one of the following: Targeted Communications > Active Publications Targeted Communications > Draft Publications Open the desired publication. The stage of the publication must be either Author or Review. Make the desired changes. Click Update. | https://docs.servicenow.com/bundle/jakarta-customer-service-management/page/product/customer-service-management/task/t_TargetCommEditAPublication.html | 2018-01-16T17:33:37 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.servicenow.com |
Using the HTTP Caching Policy in Policy Manager
Learn how to enhance performance of message processing through caching responses to previously made service requests using the HTTP Caching Policy.
Related topics: About Policies, Managing Policies, About Operational Policies
Supported Platforms: 7.2, 8.0
Table of Contents
- Introduction
- HTTP Caching Policy Modes
- How Stale Caches are Managed
- HTTP Caching Policy Options
- Public/Private Caching
- Configuration
Introduction
An SOA Container is involved in the processing of messages prior to their delivery to a service implementation. The performance of message processing can be enhanced through the caching of responses to previously made service requests. The cached responses can be returned without paying the overhead of invoking the service implementation.
The HTTP Caching Policy is an Operational policy that allows you to:
- Define how long a response can be cached for HTTP requests.
- Select a caching mode (HTTP Proxy Mode or HTTP Mediation/Server Mode).
Note: This policy was introduced with Policy Manager 7.2.
HTTP Caching Policy Modes
The HTTP Caching policy supports the following modes:
- HTTP Proxy Mode: Using this mode, the container expects the downstream call to be HTTP and to return cache control headers. If the downstream call does not meet these requirements, responses are not cached when the Authorization header is sent in the request.
- HTTP Mediation / Server Mode: Using this mode, the presence or absence of downstream headers is not taken into consideration. The caching decisions are based completely on the client cache-control headers, and the default behavior is used for the Authorization header. This means that the Authorization header is used in the cache key, and the responses are cached individually per authenticated user.
How Stale Caches are Managed
When a message is processed and the headers are read, a determination is made as to whether the cache is stale or not. If a cache is discovered to be stale, response validators are used to send a conditional response to the originating server. If it's determined that the cache is not stale, the cached response is used. Otherwise, a full response will be received that will be used instead of the cached response. For more information on the stale cache handling approach used for this policy, see
HTTP Caching Policy Options
The policy includes the following configuration options:
- Time To Live: Allows you to specify the maximum time in seconds a response will be cached. If not specified, the maximum time is determined by the container settings.
- Staleness Period Seconds: If a value is entered, any cached entry will live in the cache for the number of seconds in Time To Live, plus the number of seconds in Staleness Period. The "stale" portion of the entry will only be used if Cache-Control directives on the request allow for a stale entry to be used, by the use of a max-stale directive. The default is 0.
- Act as HTTP Proxy: Uncheck to enable either the HTTP Proxy Mode (checked) or HTTP Mediation / Server Mode.
- Shared Cache: Uncheck to enable the ability to use a private cache. In this case, you must also select a Subject Category (below).
- Subject Category: Allows you to select a subject category if Shared Cache is unchecked. The subject category is used to find the cache's principal name that is set when the cache is created, and use it as part of the cache key.
Public/Private Caching
By default, the caching module considers itself to be a shared (public) cache (Shared Cache checked), (Shared Cache unchecked).
Configuration
Let's take a quick walkthrough of the HTTP Caching Policy configuration process to get you started.
Step 1: Add Policy / Use System Policy
In Policy Manager, to create an HTTPS Caching Quota Policy instance, go to Policies > Operational Policies and choose Add Policy.
Step 2: Modify Policy
When you click Modify to make changes to the HTTP Caching policy on the Policy Details page, the initial policy looks like this:
Configure the policy options based on your requirements and click Apply.
Step 3: Attach Policy
After you've saved your policy, you can attach it to a web service, binding, or binding operation that you would like to enhance the message processing of.
Step 4: Test Policy and View Monitoring Data
After you've attached the HTTPS Caching. | http://docs.akana.com/ag/policies/using_the_http_caching_policy.htm | 2018-01-16T17:19:41 | CC-MAIN-2018-05 | 1516084886476.31 | [array(['images/modify_http_caching_policy.jpg', None], dtype=object)
array(['images/http_cache_policy_summary.jpg', None], dtype=object)] | docs.akana.com |
Redirects
Deactivate WordPress Dashboard for Vendors
By default Vendors will be redirected to their BuddyPress 'Member Profile Vendor Dashboard' if they try to access the back-end ( /wp-admin ). All other roles will be able to access the wp admin. In the WC Vendor Pro settings you can set WordPress dashboard to "only administrators can access the /wp-admin/ dashboard".
Turn off the redirect and enable admin back-end access
Vendor Store Settings
You can redirect the Vendor Store to the BuddyPress Vendor Profile | http://docs.themekraft.com/article/487-redirects | 2018-01-16T17:41:01 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.themekraft.com |
FOR INSTRUCTORS¶
This material is for current and aspiring Carpentries Instructors. Find material here on becoming an Instructor, how you can develop as an Instructor, and what networking opportunities our community offers.
Thank You to Instructors!
We are very grateful to our community of Instructors! From those who teach once or twice a year to those who organise entire programs, from those who have just gotten involved to those who have deeply embedded themselves as mentors, discussion hosts, committee and Task Force members, and instructor Trainers: EVERY Instructor helps make The Carpentries what we are. Thank you!!
- For Current Carpentries Instructors
- Become a Carpentries Instructor | https://docs.carpentries.org/topic_folders/for_instructors/index.html | 2019-10-14T06:56:37 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.carpentries.org |
Last updated 13th March 2018
Objective
You can configure Exchange accounts on email clients, if they are compatible. By doing so, you can use your email address through your preferred email application.
Find out how to configure an Exchange account on Android, via the Gmail app.
Requirements
- You must have an Exchange solution.
- will be different.
Instructions
Step 1: Add the account
On your device’s homepage, open the
Gmail app. You can add an account in two different ways:
If no account has been set: Tap through the welcome screen, and tap
Add email address. Next, choose
Exchange and Office 365.
If an account has already been set: Tap the three-line icon on the top left-hand corner, then the arrow icon to the right of the account name that has already been set. Next, tap
Add account, and choose
Exchange and Office 365.
Enter your email address, then tap
Enter your email password, do not select any client certificates, then tap
Next to continue configuring your account. You can make connections to the OVH server to configure your account. If you would like to do so, a notification will appear on your device. Tap
OK to make these connections.
Enter the incoming server settings. Some fields may be auto-filled.
Then tap
Next. If all the information you have entered is correct, you will be able to log in to your account straight away.
To finalise your configuration, you will need to authorise the OVH server to control certain security features on your device. Tap
OK, read the information on the page, and tap
Activate device administrator.
Set a name for your account, so that you can distinguish it from any other accounts associated with your app. Then press
To check that the account has been correctly configured, you can send a test email.
Step 2: Use the email address
Once you have configured your email address, you can start using it! You can now send and receive emails.
OVH also offers a web application that has collaborative features, accessible via. You can log in using your email credentials.
Go further
Configuring an email address included in an MX Plan or web hosting plan on Android, via the Gmail app.
Configuring an Email Pro account on Android via the Gmail app.
Join our community of users on. | https://docs.ovh.com/lt/microsoft-collaborative-solutions/exchange_20132016_konfiguravimas_android/ | 2019-10-14T05:27:23 | CC-MAIN-2019-43 | 1570986649232.14 | [array(['https://docs.ovh.com/cz/cs/microsoft-collaborative-solutions/konfigurace-android/images/configuration-exchange-gmail-application-android-step1.png',
'Exchange'], dtype=object)
array(['https://docs.ovh.com/cz/cs/microsoft-collaborative-solutions/konfigurace-android/images/configuration-exchange-gmail-application-android-step2.png',
'Exchange'], dtype=object) ] | docs.ovh.com |
Create and Configure Models¶
Models are powerful tools that provide connections to external systems, allowing the builder to pull data into the page. However, models follow rules that govern how they can be connected to data objects or entities. It’s important to follow these rules when creating models:
- You can connect any individual model to only one object within any external system. (For example, a model can be connected to the Account object in Salesforce, or a User object in SAP.)
- You can create one—or more—models for each page, and each model can connect to entirely different objects.
- A model can also connect to the same object as another model. That’s right: you can have multiple models on the same external system object—provided the models each have unique names.
Note
Why would you want to connect multiple models to the same object? Using model conditions, you can pre-filter what comes into each model.
- You may have one model on the Salesforce Account object that pulls in only certain information about the accounts, and displays the data in a table that is always on screen. This data will load immediately when the page displays, because the user always needs to see it.
- Another model may pull in specific internal account information (such as the account owner ID, last time modified, modified by whom, etc.). This model displays information in a “More about the Account” popup or modal. The records in this model don’t load with the page; instead, they only load when the user opens the popup to drill down into specific content. This streamlines the initial page load.
To prevent loading a single model with all the content that will ultimately be needed from that object—even though much of it is not needed immediately—it can be a best practice to create different copies of the model for each specific component or usage on the page.
Create a Model¶
Adding a model to a page involves three steps:
- Create a model and name it.
- Connect the model to an external system, such as a data source (like Amazon DynamoDB, Salesforce, Microsoft Dynamics, or SAP) or a service (for example, Amazon’s Lex or Simple Notification Service/SNS). This makes it possible for the model to pull in content and display it in any associated components.
- Customize the model to determine what part of the connected content to display and how the model behaves by using model properties, fields, conditions and actions.
Create [[]]¶
You can create as many models as the application needs—even multiple models that point to the same external system.
ClickModels Models in the App Elements pane.
Click theAdd Model.
Under Model ID, give the model a practical name, one that is distinctive enough to distinguish it from others on the page.
Note
This Model ID is not visible to end users.
Model name best practices¶
- Use alphanumeric characters:
SalesEntity4
- Do not include spaces between words; use an underscore to represent a space:
Opportunity_List_Data
- When possible, use a name that indicate the model’s function:
ContactDetail_Recordor
AccountNameSource_forCustomfilter
- Creating more than one model from the same external connection? Be sure to label them accordingly:
Account1,
Account2,
Account3
Connect [[]]¶
A newly-created model is just an empty “container,” waiting to be filled. Turning that container into a powerful and effective model is a matter of pinpointing exactly what content the model can access by connecting it to an external system.
Note
UI-Only models do not require a connection to an external system.
Warning
To connect Skuid models to a specific instance of an external system, you must first set up a connection to that system. This is done from the Configure tab in the Skuid navigation bar, under Configure Data Sources. Do not attempt to connect a model until you have created a system connector to the desired data source or service.
There are a series of necessary choices to hone in on the exact content you want to pull into the model:
- Select the associated Skuid system connector that will connect the model to the system. (Skuid provides a variety of out-of-the box connectors for various data sources and other systems and services, such as Lex, SES, and SNS.)
- Once you select the connector, Skuid winnows the list external system connections associated with that connector. If you only have access to one connection, its name will pre-populate; if there is more than one instance of the system connector to choose from, Skuid will present all options in a dropdown list.
- Finally, use the dropdown next to Object or Entity to select the specific object within the system to pull the content from.
Note
UI-only models and fields work slightly differently than models built to connect to external systems such as data sources or services. See UI-Only Models and Fields to learn more.
Customize [[]]¶
There are several ways to customize a model to gain even more control over the content pulled into a Skuid page.
- Model properties control essential model behaviors.
- Fields specify which content attributes are available in the model from the external system’s object.
- Model conditions limit or filter the specific records that are pulled into the model.
- Model actions trigger actions that run automatically when model-level events occurs.
Note
Depending upon the choices made, these options may impact page load times and performance.
Adjust the properties [[]]¶
Model properties—found in the properties pane—allow the builder more nuanced control over the model’s behaviors. The properties that are available vary, depending upon the external system connector being used. To learn about the specific properties available for a specific external connector, locate that topic in the Data section of Skuid’s documentation.
UI-Only models¶
UI-Only models are unique: they do not connect to any external system. A model using this type of system connector can store temporary values at runtime, but it does not save that data to any external system. UI-Only models are commonly used for variables that power interactive elements in the user interface or for other bits of logic within a page. See UI-Only Models and Fields to learn more.
Add model fields [[]]¶
Each model connected to an external system—and specifically, to a particular object in that external system. Most external connections include fields that are accessible to the model. By selecting which of those available fields to add to the model, you specify the data or content to be included and avoid including extraneous data or content that you don’t need.
There are several ways to add fields to a model.
Add fields using the model list [[]]¶
- In the App Elements pane, click on the desired model.
- Below the model name, click Fields. Skuid opens a list of fields available for that model in the property pane.
- In the All Fields tab, check a field to add it to the model.
- Made a mistake? Uncheck any fields to remove them from the model.
- The fields selected are added to the model’s fields list in the order in which they are selected. They are also listed under the Selected Fields tab.
Add components—then add fields to them [[]]¶
Certain data components, such as the Table, Field Editor, or Queue components, are designed to contain and display model fields, making that content accessible to the end user. Because these components rely on model fields for their content, the component itself includes an Add Field(s) button. Adding fields directly to a component also adds them to the model’s field list.
- Drag and drop a Table, Field Editor, or Queue component into the page.
- In the Properties pane:
- Model: Select the desired model to link to the component from the drop-down list. (By default, the last model modified will be pre-selected.)
To add fields to component, either:
Click the Add Field(s) button to open the Add Fields dialog box, then:
To select a field already added to the model: Check a field from the Selected Fields tab to add it to the component.
Note
Use the Search bar to find fields quickly.
To select a field that has not yet been added to a model: Check a field from the All Fields tab to add it to the component. (The field will also be added to the model’s field list/)
- No check box next to a field? Clicknext to the field to add it to the model and make it selectable, then check the field to add it to the component.
Click Apply.
Or you can drag and drop fields into the component from the model’s fields listing by grabbing either:
- a field from the model’s field list (below the model’s name in the App Elements pane, under Fields)
- a field from the model’s field’s list in the Properties pane.
Having trouble adding fields to a component?¶
A component can only accept fields from the model associated with it. When dragging a field into a component, if the component does not display an orange visual indicator for the area where you can drop the field, you may be trying to add a field to a component that is connected to a different model. Ensure that the component’s model matches the model you are pulling your fields from.
View selected fields [[]]¶
To see a list of the fields currently included in the model, click the model from the App Elements pain, then either:
- Below the model name, click Fields.
- In the Properties pane, click.
Add model conditions [[]]¶
A model condition is filters the content before it is loaded into the model. Without any conditions, a component attached to a model displays data for all the records in the object. Adding conditions lets you limit the data pulled into the page.
To add a model condition:
- In the App Elements pane, click on the desired model.
- Below the model name, click Conditions.
- ClickAdd.
To learn more, see Model Conditions.
Add model actions [[]]¶
Skuid Model Actions let you to specify actions that will initiate when certain model-leve events occur on a given model.
To add a model action:
- In the App Elements pane, click on the desired model.
- Below the model name, click Actions.
- ClickAdd.
To learn more, see Model Actions.
Working with Models¶
Change the order of models on the model list [[]]¶
Click, drag, and drop models to reposition them within the App Elements pane.
Note
Model order matters: In a Skuid page, models load in the order they are listed in the models list with models at the top loading first. If you have models that are dependent on another model for data, the dependent models must load after their primary models.
Clone a model [[]]¶
Cloning makes an exact copy of the model (including properties, fields, conditions, and actions). This is useful when you have a model that you want to duplicate and revise, without affecting the original model.
- In the App Elements pane, clickModels to open the models list.
- Select the model you want to clone from the list and click.
The clone appears at the bottom of the model list. Cloned models default to the name of the original model with a “1” appended. For clarity, remember to assign cloned models a unique name to help distinguish between it and the original model.
Delete a model [[]]¶
Warning
Deleting a model also deletes the components that are attached to that model.
To delete a model:
- ClickModels in the App Elements pane.
- In the list of models, click the model to delete.
- Click. Because deleting a model removes any associated components attached to the model, Skuid confirms the deletion.
- Click OK. The model and associated components are removed from the page .
Best Practices¶
When creating models, consider both the data users need to access and the experience you want to offer those users.
Model order matters: Models load in the order they’re arranged in the models list: models at the top of the list load first. When a model is dependent on another for data, ensure that dependent model loads after the primary model. If necessary, re-order the models.
Set a Max # of Records to a smaller number: Generally, users only really need to see around 10 records at a time; more records may present too much information to process, and in many cases they can load more records using pagination options.
Uncheck “Query on Page Load” to load models only when needed. Unchecking this property allows components and user action to determine when model data is loaded.This prevents all of the page’s models from loading at the same time during the initial page load, especially valuable for pages with numerous models.
Don’t specify Fields to order records by. If you need to use this feature, only specify indexed fields. (Ordering by a non-indexed field will make the page run slower.)
Properties¶
Warning
- Not all model properties listed below will be available for all data source types. If you do not see a property listed here in the app composer, it may not be available for your data source type.
Models for some data source types may have unique properties not documented here. These properties are detailed in the topics for those respective data source types.
Basic Tab [[]]¶
Model Id: The unique name by which components refer to this model. Each model must have a unique name within one Skuid page. If other pages are included within that page—such as through the Page Include component or child pages—then the models in those pages must have also have unique names.
Data Source Type: The data source type <data/> to use for this model, which narrows the selectable options in the Data Source property.
Data Source: The data source <data/>—a connection to a system that has been configured by the Skuid builder—that the model uses to access records.
Model Object / Entity: The data object to pull data from. The label for this property can vary based on data source type, but they all mean the same thing:
- External Object Name
- Model Entity
- Salesforce Object Name
Model Behavior: Some data sources allow the builder to select a specific type of model behavior:
- Basic: The default Skuid model.
- Aggregate: A model that collects, groups, and summarizes multiple data records into a single end result, such as a sum or a count.
- Read-Only: (REST data sources) A model that can only query (and not update) data.
- Read/Write: (REST data sources) A model that can use multiple data source URLs for different data operations.
Query on Page Load: If unchecked, no data rows are loaded into the model when the page initially loads. Uncheck this box to use this model to create new records, or to load this model later via the Action Framework (for example, opening it in a drawer, popup, or tab).
Allow Page Render Before Query Completes: Controls whether Skuid must finish loading the metadata for the model before rendering the Skuid page. (Only available if Query on Page Load is checked.) By default, Skuid loads metadata for every model prior to rendering a page, even those without visible UI components. A model containing a picklist with hundreds or thousands of values could extend page load times, especially if there are many models on a page. Use this property to tailor which models have a higher priority, giving users access to the most meaningful data and UI elements first.
When unchecked, the model is considered synchronous—meaning it is given priority and its query must complete before the page renders.
When checked, the model is considered asynchronous, and the page will render even if its query is not yet finished.
Note
This model property is unavailable for server-side models.. (If no sort command is specified, ASC is assumed.)
DESC: Descending, meaning records of higher “value”—alphabetically or numerically—appear at the top of the record list..)
Note.
Advanced tab [[]]¶ Plural Label:. | https://docs.skuid.com/latest/en/skuid/models/create-model.html | 2019-10-14T07:11:06 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.skuid.com |
Once you have created an application and are ready to work on it, you can invite contributors to participate in its development, testing and management tasks as follows:
- Log in to the App Factory portal (for the online version, try) as a user who is assigned the
Application Ownerrole.
Only the app owner has permission to invite users to participate in the application. See roles/permissions.
- Select an application from the list of apps that you own or are invited to participate in.
- Click the Team tab from the left panel.
The Team page opens. Initially, only the application creator is assigned to the application as the owner.
Click the Add Members button, enter the email address of user you want to invite and click the Add to List button.You can only invite users who are registered to WSO2 App Factory directly or to WSO2 Oxygen Tank. You must also provide the complete email.
Next, select the appropriate role for the user and click Invite.
By default, WSO2 App Factory comes with 4 user roles as Application Owner, Developer, QA and DevOps. Their default permissions are given below. You can configure these permissions or add new roles using the
<ApplicationRoles>element in
<AF_HOME>/repository/conf/appfactory/appfactory.xmlfile. For information, see Configuring Application Roles.
Default permissions of the default user roles:
- The user receives an e-mail notification of the invitation and is immediately added to the team. To configure the e-mail template, see Modifying the e-mail Notification Template.
- After adding the user, the application owner can change his/her role or remove him/her from the users list.
You have now created an application and added contributors to it. Next, see Checking in and Branching the Code. | https://docs.wso2.com/display/AF100/Building+your+Team | 2019-10-14T05:31:55 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.wso2.com |
All content with label events+hibernate_search+hot_rod+infinispan+jboss_cache+listener+mvcc+publish+read_committed+scala+websocket+write_behind.
Related Labels:
expiration, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, partitioning, query, deadlock, archetype, lock_striping, jbossas, nexus, guide, cache, amazon,
s3, memcached, grid, test, jcache, api, xsd, ehcache, maven, documentation, ec2, 缓存, hibernate, aws, interface, setup, clustering, eviction, gridfs, concurrency, out_of_memory, import, index, hash_function, configuration, buddy_replication, loader, xa, write_through, cloud, notification, tutorial, xml, jbosscache3x, distribution, cachestore, data_grid, cacheloader, resteasy, cluster, br, development, transaction, async, interactive, xaresource, build, searchable, demo, installation, cache_server, client, migration, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, hotrod, snapshot, repeatable_read, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, lucene, jgroups, locking, rest
more »
( - events, - hibernate_search, - hot_rod, - infinispan, - jboss_cache, - listener, - mvcc, - publish, - read_committed, - scala, - websocket, - write_behind )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/events+hibernate_search+hot_rod+infinispan+jboss_cache+listener+mvcc+publish+read_committed+scala+websocket+write_behind | 2019-10-14T06:36:18 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.jboss.org |
Rid¶
The
rid argument, contains the name of the environ variable containing the request id if any, for example when using GAE:
app = zunzuncito.ZunZun(root, versions, hosts, routes, rid='REQUEST_LOG_ID')
This helps to add a Request-ID header to all your responses, example when you make a request like:
curl -i
The response is something like:
This can help to trace the request in both server/client side.
If you do not specify the
rid argument the Request-ID will
automatically be generated using an UUID
, so for example you can run the app like this:
app = zunzuncito.ZunZun(root, versions, hosts, routes) # notice there is no rid argument
And the output will be something like:
A great amount of time has been spent creating, crafting and maintaining this software, please consider donating.
Donating helps ensure continued support, development and availability. | https://zunzuncito.readthedocs.io/en/stable/zunzun/Rid.html | 2019-10-14T05:49:20 | CC-MAIN-2019-43 | 1570986649232.14 | [] | zunzuncito.readthedocs.io |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
PUT Object Copy
The PUT object copy operation copies each object specified in the manifest. You can copy objects to a different bucket in the same AWS Region or to a bucket in a different Region. Amazon S3 batch operations support most options available through Amazon S3 for copying objects. These options include setting object metadata, setting permissions, and changing an object's storage class. For more information about the functionality available through Amazon S3 for copying objects, see Copying Objects.
Restrictions and Limitations
All source objects must be in one bucket.
All destination objects must be in one bucket.
You must have read permissions for the source bucket and write permissions for the destination bucket.
Objects to be copied can be up to 5 GB in size.
PUT Object Copy jobs must best created in the destination region, i.e. the region you intend to copy the objects to.
All PUT Object Copy options are supported except for conditional checks on ETags and server-side encryption with customer-provided encryption keys.
If the buckets are unversioned, you will overwrite objects with the same key names.
Objects are not necessarily copied in the same order as they are listed in the manifest. So for versioned buckets, if preserving current/non-current version order is important, you should copy all non-current versions first and later copy the current versions in a subsequent job after the first job is complete. | https://docs.aws.amazon.com/AmazonS3/latest/dev/batch-ops-copy-object.html | 2019-10-14T06:18:06 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.aws.amazon.com |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
AssociateDelegateToResource
Adds a member (user or group) to the resource's set of delegates.
Request Syntax
{ "EntityId": "
string", "OrganizationId": "
string", "ResourceId": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- EntityId
The member (user or group) to associate to the resource.
Type: String
Length Constraints: Minimum length of 12. Maximum length of 256.
Required: Yes
- OrganizationId
The organization under which the resource exists.
Type: String
Pattern:
^m-[0-9a-f]{32}$
Required: Yes
- ResourceId
The resource for which members (users or groups) are associated.: | https://docs.aws.amazon.com/workmail/latest/APIReference/API_AssociateDelegateToResource.html | 2019-10-14T06:06:36 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.aws.amazon.com |
Keying Sets¶
The Active Keying Sets data ID in the Timeline.
Keying Sets are a collection of properties. They are used to record multiple properties at the same time.
Now when you press. | https://docs.blender.org/manual/ko/dev/animation/keyframes/keying_sets.html | 2019-10-14T06:18:48 | CC-MAIN-2019-43 | 1570986649232.14 | [array(['../../_images/editors_timeline_keying-sets.png',
'../../_images/editors_timeline_keying-sets.png'], dtype=object)
array(['../../_images/animation_keyframes_keying-sets_scene-keying-set-panel.png',
'../../_images/animation_keyframes_keying-sets_scene-keying-set-panel.png'],
dtype=object)
array(['../../_images/animation_keyframes_keying-sets_scene-active-keying-set-panel.png',
'../../_images/animation_keyframes_keying-sets_scene-active-keying-set-panel.png'],
dtype=object) ] | docs.blender.org |
Whether or not the API is better to libssh2 is debatable. This comparison looks at more objective criteria.
phpseclib is designed to be ultra-portable, even to the point of working on PHP4. Not a single extension is required, either (although if they're available they'll be used, for speed).
libssh2, in contrast... if you're on a shared host that doesn't have it installed (and most don't) you're S.O.L. And even if your server is running on a dedicated box it's still one extra step you have to do to install it. As if migrating to a new server wasn't enough of a challenge do you really want to compound the challenge by adding an additional server dependency like libssh2?
The following table shows how long, in seconds, it took to transfer a 1mb file via phpseclib and libssh2 to localhost and to a remote host.
This test was conducted on an Intel Core i5-3320M CPU @ 2.6GHz running Windows 7 64-bit and PHP 5.4.19 and libssh 0.12 and the latest Git version of phpseclib with the gmp and mcrypt extensions installed.
The connection to the remote host was done with a 1MB upload speed. Here's the speedtest.net results:
The code used to conduct these tests is at upload.phps and download.phps.
The motivation for doing a separate test for remote hosts and localhosts was to compare the speeds with different bottlenecks. ie. with the remote host the bandwidth is the bottleneck. With localhost bandwidth is eliminated as a bottleneck. This shows that phpseclib performs better under a multitude of conditions.
How you do it with libssh2:
<?php $ssh = ssh2_connect('domain.tld'); ssh2_auth_pubkey_file($ssh, 'username', '/home/ubuntu/pubkey', '/home/ubuntu/privkey'/*, 'password'*/); $stream = ssh2_exec($ssh, 'ls -la'); echo stream_get_contents($stream);
Both have to be of the right format too. If you didn't use ssh-keygen to generate your keys good luck in converting them.
<?php include('Net/SSH2.php'); include('Crypt/RSA.php'); $rsa = new Crypt_RSA(); //$rsa->setPassword('password'); $rsa->loadKey('...'); $ssh = new Net_SSH2('domain.tld'); $ssh->login('username', $rsa); echo $ssh->exec('ls -la');
Ignoring the API for the time being there are a few clear ways phpseclib comes out on top here:
Why didn't top or sudo work? With phpseclib you can get logs. They look like this:
You can also do print_r($ssh->getErrors()) or echo $ssh->getLastError()
I don't see any cd or chdir functions at. phpseclib, however, has it - Net_SFTP::chdir(...)
Let's try to do sudo on the remote system.
With phpseclib: examples.html#sudo
With libssh2? I have no clue. My best guess (doesn't work):
<?php $ssh = ssh2_connect('domain.tld'); ssh2_auth_password($ssh, 'username', 'password'); $shell = ssh2_shell($ssh); echo fread($shell, 1024*1024); fwrite($shell, "sudo ls -la\n"); $output = fread($shell, 1024*1024); echo $output; if (preg_match('#[pP]assword[^:]*:#', $output)) { fwrite($shell, "password\n"); } echo fread($shell, 1024*1024);
It is additionally unclear how to get top working with libssh2 but it works perfectly fine with phpseclib: examples.html#top. | https://docs.phpseclib.org/ssh/compare.html | 2019-10-14T06:30:56 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.phpseclib.org |
This command prints the version of kubeadm.
Print the version of kubeadm
Print the version of kubeadm
kubeadm version [flags]
Was this page helpful?
Thanks for the feedback. If you have a specific, answerable question about how to use Kubernetes, ask it on Stack Overflow. Open an issue in the GitHub repo if you want to report a problem or suggest an improvement. | https://v1-13.docs.kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-version/ | 2019-10-14T05:24:15 | CC-MAIN-2019-43 | 1570986649232.14 | [] | v1-13.docs.kubernetes.io |
Back
Ease
Back Ease
Back Ease
Back Ease
Class
Definition
Represents an easing function that retracts the motion of an animation slightly before it begins to animate in the path indicated.
public ref class BackEase : System::Windows::Media::Animation::EasingFunctionBase
public class BackEase : System.Windows.Media.Animation.EasingFunctionBase
type BackEase = class inherit EasingFunctionBase
Public Class BackEase Inherits EasingFunctionBase
- Inheritance
-
Examples>
Remarks<<
Note
Because this animation causes values to retract before progressing, the animation might interpolate into negative numbers unexpectedly. This can cause errors when animating properties that do not allow negative numbers. For example, if you apply this animation to the Height of an object (e.g. from 0 to 200 with an EasingMode of EaseIn), the animation will attempt to interpolate through negative numbers for Height which will throw an error.
There are several other easing functions besides BackEase. In addition to using the easing functions included in the run-time, you can create your own custom easing functions by inheriting from EasingFunctionBase.
XAML Object Element Usage
<BackEase .../> | https://docs.microsoft.com/en-us/dotnet/api/system.windows.media.animation.backease?view=netframework-4.8 | 2019-10-14T06:11:51 | CC-MAIN-2019-43 | 1570986649232.14 | [array(['../media/backease-graph.png?view=netframework-4.8',
'BackEase EasingMode graphs. BackEase EasingMode graphs.'],
dtype=object)
array(['../media/backease-formula.png?view=netframework-4.8',
'BackEase formula. BackEase formula.'], dtype=object) ] | docs.microsoft.com |
Xaml
Services
Xaml Services
Xaml Services
Xaml Services
Class
Definition
Provides higher-level services (static methods) for the common XAML tasks of reading XAML and writing an object graph; or reading an object graph and writing XAML file output for serialization purposes.
public ref class XamlServices abstract sealed
public static class XamlServices
type XamlServices = class
Public Class XamlServices
- Inheritance
-
Remarks methods.
Important
XamlServices is not the recommended XAML reading or XAML writing API set if you are processing WPF-defined types, or types based on WPF. For WPF usage, use System.Windows.Markup.XamlReader for reading or loading XAML (or BAML); and System.Windows.Markup.XamlWriter for writing back XAML. These classes use .NET Framework XAML Services APIs and the XAML readers and XAML writers internally in their implementation; however, they also provide support and specialized XAML schema context for WPF-specific concepts, such as optimizations for dependency properties and WPF known types. | https://docs.microsoft.com/en-us/dotnet/api/system.xaml.xamlservices?redirectedfrom=MSDN&view=netframework-4.8 | 2019-10-14T06:46:26 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.microsoft.com |
Azure Blob Storage
To use the built-in Azure Blob provider that comes with RadCloudUpload, you must:
Adding References
RadCloudUpload can upload files to Azure Blob Storage. It is built on the top of Windows Azure Blob Storage Service in .NET. To work properly, the control needs a reference to some of the client libraries included in the Windows Azure Storage which is a part of Windows Azure SDK for .NET.
The Windows Azure Storage package is distributed through a NuGet feed and can be easily installed through the Package Management Console. Table 1 shows the command to install the version you need according to the Telerik.Web.UI version you use.
Table 1: Package Manager command to restore the appropriate NuGet package, depending on the Telerik version and the .NET your project is targetting
Figure 1: Example of using the NuGet Package Manager Console to add the Azure package
For version
3.0.2, only the
Microsoft.WindowsAzure.Storage.dll assembly with version
3.0.2.0 is used by RadCloudUpload.
For version
1.7.0.0, only the
Microsoft.WindowsAzure.StorageClient.dll assembly with version
6.0.6002.18488 is used by RadCloudUpload.
The
Microsoft.WindowsAzure.Configuration.dll assembly is not used, but the other dependencies may be used by the Azure code.
When a Web Application type of project is used the Copy Local property in the Reference Properties dialog box, available from the References pane of the Project Designer must be set to True .
If you use newer versions of the
WindowsAzure.Storage package, there is a risk of a breaking change in the library to break RadCloudUpload. File uploads may stop working or you may get errors such as
NullReferenceException from
Telerik.Web.UI.CloudUploadHandler.GetEncryptedText.
Configuration
From the RadCloudUpload's smart tag choose Azure as provider tag and open the Configuration Wizard:
In the Configuration Wizard dialog enter Azure Access Key, Account Name and Blob Container Name.
Specifying the Uncommitted Files Expiration Period(TimeSpan Structure), you could easily configure the time, after which the unprocessed files will be removed from the storage.When Ensure Container is checked, the control will create a new Container if it doesn't exists. In case it is not checked and the Container doesn't exists - an exception will be thrown.This will add configuration setting in the web.config file:
XML
<telerik.web.ui> <radCloudUpload> <storageProviders> <add name="Azure" type="Telerik.Web.UI.AzureProvider" accountKey="" accountName="" blobContainer="" subFolderStructure="" ensureContainer="true" uncommitedFilesExpirationPeriod="2" defaultEndpointsProtocol="https" /> </storageProviders> </radCloudUpload> </telerik.web.ui>
Uploading in Azure is done on chunks. Every chunk has size of 2MB. These chunks that were cancelled during the uploading are removed automatically by Azure. When older browsers are used (IE9 or below), files are uploaded at once, because chunking is not supported. In order to upload files larger than 4MB, it is needed to increase the maximum allowed file size. For more details please refer to this article. | https://docs.telerik.com/devtools/aspnet-ajax/controls/cloudupload/cloud-storage-providers/azure-blob-storage | 2019-10-14T05:50:43 | CC-MAIN-2019-43 | 1570986649232.14 | [array(['images/cloudupload-azure-nuget.png', 'cloudupload-azure-nuget'],
dtype=object) ] | docs.telerik.com |
.
1.1 Requirements and important information on the iOS SDK
The Accengage iOS SDK is provided as a dynamic framework and is compatible with:The Accengage iOS SDK is provided as a dynamic framework and is compatible with:
- Xcode 9
- iOS 8 and higher.
In terms of size, the SDK will increase your app download size by ~608 Kb.).
1.2 Activate Push Notifications
Enable).
1.3 Configure the Accengage dashboard for Push Notifications
Once
It is very important that you respect the following configuration:
-!
1.4 Configure URL Schemes
Creating URL schemes
Deep Linking is becoming very important, it's a great way to enhance the effectiveness of your:
-. | https://docs.accengage.com/display/IOS/Section++1+++-+Getting++started | 2019-10-14T05:41:32 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.accengage.com |
This chapter describes the compatibility issues that users should consider when purchasing MPT version 1.4.
Users may have to change the way they run MPI programs because internal buffering is no longer done as of MPT 1.4 on Cray T3E-900 systems or later:
If a user is accustomed to turn off internal buffering by setting the environment variable MPI_BUFFER_MAX to 0, doing so is no longer necessary.
If an illegal program relies on internal buffering, the user will have to set the environment variable MPI_BUFFER_MAX to some value for the program to run. | http://docs.cray.com/books/004-3689-001/html-004-3689-001/z832952721jlb.html | 2008-05-14T15:11:56 | crawl-001 | crawl-001-011 | [] | docs.cray.com |
Software included in MPT was designed to be used with the Cray Programming Environment 3.3 release or later.
MPT software is self-configuring, based on the operating system configuration in effect at the time of installation. You are not required to do any configuration at the initial installation of the product. If, however, you upgrade your operating system level to a new major release or change the system host name, you will need to reconfigure the MPT software. This reconfiguration can be done as follows:
Note: You can also change to the /opt/ctl/mpt/version directory and do the configuration from there.
The Modules software package is used to support the installation of both the Programming Environment and MPT. To use the MPT software, load the mpt module in addition to loading the Programming Environment module. For information on using modules, see Installing Programming Environment Products, or, if the Programming Environment has already been installed on your system, see the online ASCII file /opt/ctl/doc/README. After you have initialized the modules, enter the following command to access the MPT software:
To unload the mpt module, enter the following command: | http://docs.cray.com/books/004-3689-001/html-004-3689-001/zfixedsllc0bet.html | 2008-05-14T15:13:22 | crawl-001 | crawl-001-011 | [] | docs.cray.com |
The PDFAnalyzer class
- class ferenda.PDFAnalyzer(pdf)
public interface Unmarshaller
Defines the contract for Object XML Mapping unmarshallers.
Implementations of this interface can deserialize a given XML Stream to an Object graph.
Object unmarshal(Source source) throws XmlMappingException, IOException
Unmarshals the given Source into an object graph.
source - the source to marshal from
XmlMappingException - if the given source cannot be mapped to an object
IOException - if an I/O Exception occurs
boolean supports(Class clazz)
clazz - the class that this unmarshaller is being asked if it can marshal
true if this unmarshaller can indeed unmarshal to the supplied class;
false otherwise
Perform this procedure if, in Step 4 of Configure a SalesForce synchronization, you have selected Contacts.
NOTE: If simultaneously one and the same item is being modified in both Sitefinity CMS and SalesForce, you can set which system has priority – which of the modifications is saved in both systems. To do this, click Administration » Settings » Advanced » SalesForceConnector. In Master adapter field, enter which system wins in case of conflict. You can enter SalesForce or Sitefinity. By default, items modified in SalesForce are persisted in case of conflict between the two systems.
EXAMPLE: You have several thousand leads in SalesForce, but you want to sync only leads from a specific company. In this case, you can filter SalesForce leads according to the value of their Company field. You must also choose to sync these leads with a specific Sitefinity CMS role.
IMPORTANT: Since Last name is a required field for SalesForce contacts and both Last name and Company are required fields for SalesForce leads, the Administration: User profiles that you are using, must also have the Last name field, if you are synchronizing contacts, and Last name and Company fields, if you are synchronizing leads.
For more information, see Policy Variables in the Using IAM guide.
The main difference between Amazon SQS policies and IAM policies is that you can grant another AWS Account permission to your queues with an Amazon SQS policy, but you can't do that with an IAM policy.
Note
When you grant other AWS accounts access to your AWS resources, be aware that all AWS accounts can delegate their permissions to users under their accounts. This is known as cross-account access. Cross-account access enables you to share access to your AWS resources without having to manage additional users. For information about using cross-account access, go to Enabling Cross-Account Access in Using IAM.
This section describes how the Amazon SQS policy system works with IAM.
You can use an Amazon SQS policy with a queue to specify which AWS Accounts have access to
the queue. You can specify the type of access and conditions (e.g., permission to use
SendMessage,
ReceiveMessage, if the
request is before December 31, 2010). The specific actions you can grant permission for
are a subset of the overall list of Amazon SQS actions. When you write an Amazon SQS policy and
specify * to mean "all the Amazon SQS actions", that means all actions in that subset.
The following diagram illustrates the concept of one of these basic Amazon SQS policies that covers the subset of actions. The policy is for queue_xyz, and it gives AWS Account 1 and AWS Account 2 permission to use any of the allowed actions with the queue.
So for example, according to the Amazon SQS policy shown in the preceding figure, anyone possessing the security credentials for AWS Account 1 or AWS Account 2 could access queue_xyz. Also, Users Bob and Susan in your own AWS Account (with ID 123456789012) can access the queue.
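For illustration only, a resource-based Amazon SQS policy of the kind described above might look something like the following sketch. The account IDs 111111111111 and 222222222222 stand in for AWS Account 1 and AWS Account 2, and the Region and queue ARN are assumed for the example rather than taken from the figure.

{
   "Version": "2012-10-17",
   "Id": "queue_xyz_policy",
   "Statement":[{
      "Sid":"AllowTwoAccounts",
      "Effect":"Allow",
      "Principal":{
         "AWS":["111111111111", "222222222222"]
      },
      "Action":"sqs:*",
      "Resource":"arn:aws:sqs:us-east-1:123456789012:queue_xyz"
   }]
}

Here "sqs:*" means all the actions in the subset described above, and the Principal element is what lets the policy grant access to other AWS Accounts.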
Before the introduction of IAM, Amazon SQS automatically gave the creator of a queue full control over the queue (e.g., access to all possible Amazon SQS actions with that queue). This is no longer true, unless the creator is using the AWS security credentials. Any User who has permission to create a queue must also have permission to use other Amazon SQS actions in order to do anything with the queues they create.
There are two ways you can give your Users permissions for your Amazon SQS resources: through
the Amazon SQS policy system or the IAM policy system. You can use one or the other,
or both. For the most part, you can achieve the same results with either. For example,
the following diagram shows an IAM policy and an Amazon SQS policy that are equivalent. The IAM
policy allows the Amazon SQS
ReceiveMessage and
SendMessage actions for the queue called queue_xyz in your
AWS Account, and it's attached to the Users Bob and Susan (which means Bob and Susan have
the permissions stated in the policy). The Amazon SQS policy also gives Bob and Susan
permission to access
ReceiveMessage and
SendMessage for the same queue.
Note
The preceding example shows simple policies with no conditions. You could specify a particular condition in either policy and get the same result.
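To make the comparison concrete, hedged sketches of the two equivalent policies might look like the following. The account ID 123456789012, the Region, and the user names are assumptions carried over from the surrounding example, and the exact JSON is illustrative rather than copied from the figure. The first (IAM) policy would be attached to Bob and Susan; the second (Amazon SQS) policy would be attached to queue_xyz and name them as principals.

{
   "Version": "2012-10-17",
   "Statement":[{
      "Effect":"Allow",
      "Action":["sqs:ReceiveMessage", "sqs:SendMessage"],
      "Resource":"arn:aws:sqs:us-east-1:123456789012:queue_xyz"
   }]
}

{
   "Version": "2012-10-17",
   "Statement":[{
      "Effect":"Allow",
      "Principal":{
         "AWS":[
            "arn:aws:iam::123456789012:user/Bob",
            "arn:aws:iam::123456789012:user/Susan"
         ]
      },
      "Action":["sqs:ReceiveMessage", "sqs:SendMessage"],
      "Resource":"arn:aws:sqs:us-east-1:123456789012:queue_xyz"
   }]
}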
There is one difference between IAM and Amazon SQS policies: the Amazon SQS policy system lets you grant permission to other AWS Accounts, whereas IAM doesn't.
It's up to you how you use both of the systems together to manage your permissions, based on your needs. The following examples show how the two policy systems work together.
Example 1
In this example, Bob has both an IAM policy and an Amazon SQS policy that apply
to him. The IAM policy gives him permission to use
ReceiveMessage on queue_xyz, whereas the Amazon SQS policy gives
him permission to use
SendMessage on the same queue. The
following diagram illustrates the concept.
If Bob were to send a request to receive a message from queue_xyz, the IAM policy would allow the action. If Bob were to send a request to send a message to queue_xyz, the Amazon SQS policy would allow the action.
Example 2
In this example, we build on example 1 (where Bob has two policies that apply to him). Let's say that Bob abuses his access to queue_xyz, so you want to remove his entire access to that queue. The easiest thing to do is add a policy that denies him access to all actions on the queue. This third policy overrides the other two, because an explicit deny always overrides an allow (for more information about policy evaluation logic, see Evaluation Logic). The following diagram illustrates the concept.
Alternatively, you could add an additional statement to the Amazon SQS policy that denies Bob any type of access to the queue. It would have the same effect as adding an IAM policy that denies him access to the queue.
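As a sketch only (the Region and queue ARN are assumed), the third, overriding policy could be as simple as a single explicit deny attached to Bob in IAM:

{
   "Version": "2012-10-17",
   "Statement":[{
      "Effect":"Deny",
      "Action":"sqs:*",
      "Resource":"arn:aws:sqs:us-east-1:123456789012:queue_xyz"
   }]
}

Because an explicit deny always overrides an allow, this one statement blocks Bob's access to queue_xyz no matter what the other two policies grant.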
For examples of policies that cover Amazon SQS actions and resources, see Example IAM Policies for Amazon SQS. For more information about writing Amazon SQS policies, go to the Amazon Simple Queue Service Developer Guide.
For Amazon SQS, queues are the only resource type you can specify in a policy. Following is the Amazon Resource Name (ARN) format for queues:
arn:aws:sqs:region:account_ID:queue_name
For more information about ARNs, go to IAM ARNs in Using IAM.
Following is an ARN for a queue named my_queue in the US East (Northern Virginia) Region, belonging to AWS Account 123456789012.
arn:aws:sqs:us-east-1:123456789012:my_queue
If you had a queue named my_queue in each of the different Regions that Amazon SQS supports, you could specify the queues with the following ARN.
arn:aws:sqs:*:123456789012:my_queue
You can use * and ? wildcards in the queue name. For example, the following could
refer to all the queues Bob has created, which he has prefixed with
bob_.
arn:aws:sqs:*:123456789012:bob_*
As a convenience to you, Amazon SQS has a queue attribute called
Arn whose value
is the queue's ARN. You can get the value by calling the Amazon SQS
GetQueueAttributes action.
All Amazon SQS actions that you specify in a policy must be prefixed with the lowercase string sqs:. For example, sqs:CreateQueue.
Before the introduction of IAM, you could use an Amazon SQS policy with a queue to
specify which AWS Accounts have access to the queue. You could also specify the type of
access (e.g., sqs:SendMessage, sqs:ReceiveMessage, etc.). The specific actions you could grant
permission for were a subset of the overall set of Amazon SQS actions. When you wrote an Amazon SQS
policy and specified * to mean "all the Amazon SQS actions", that meant all actions in that
subset. That subset originally included:
sqs:SendMessage
sqs:ReceiveMessage
sqs:ChangeMessageVisibility
sqs:DeleteMessage
sqs:GetQueueAttributes (for all attributes except Policy)
With the introduction of IAM, that list of actions expanded to include the following actions:
sqs:CreateQueue
sqs:DeleteQueue
sqs:ListQueues
The actions related to granting and removing permissions from a queue
(sqs:AddPermission, etc.) are reserved and so don't
appear in the preceding two lists. This means that Users in the AWS Account can't use those
actions. However, the AWS Account can use those actions.
Amazon SQS implements the following policy keys, but no others. For more information about policy keys, see Condition.
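As an illustration only, and assuming that an AWS-wide key such as aws:SourceIp is among the keys Amazon SQS supports, a statement could add a Condition element to restrict where requests may come from. The queue ARN and the address range (a documentation-only range) are placeholders.

{
   "Version": "2012-10-17",
   "Statement":[{
      "Effect":"Allow",
      "Action":"sqs:SendMessage",
      "Resource":"arn:aws:sqs:*:123456789012:queue_xyz",
      "Condition":{
         "IpAddress":{"aws:SourceIp":"192.0.2.0/24"}
      }
   }]
}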
This section shows several simple IAM policies for controlling User access to Amazon SQS.
Note
In the future, Amazon SQS might add new actions that should logically be included in one of the following policies, based on the policy’s stated goals.
Example 1: Allow a User to create and use his or her own queues
In this example, we create a policy for Bob that lets him access all Amazon SQS
actions, but only with queues whose names begin with the literal string
bob_queue.
Note
Amazon SQS doesn't automatically grant the creator of a queue permission to
subsequently use the queue. Therefore, in our IAM policy, we must
explicitly grant Bob permission to use all the Amazon SQS actions in addition to
CreateQueue.
{
   "Version": "2012-10-17",
   "Statement":[{
      "Effect":"Allow",
      "Action":"sqs:*",
      "Resource":"arn:aws:sqs:*:123456789012:bob_queue*"
   }]
}
Example 2: Allow developers to write messages to a shared test queue
In this example, we create a group for developers and attach a policy that lets
the group use the Amazon SQS
SendMessage action, but only with
the AWS Account's queue named CompanyTestQueue.
{
   "Version": "2012-10-17",
   "Statement":[{
      "Effect":"Allow",
      "Action":"sqs:SendMessage",
      "Resource":"arn:aws:sqs:*:123456789012:CompanyTestQueue"
   }]
}
Example 3: Allow managers to get the general size of queues
In this example, we create a group for managers and attach a policy that lets the
group use the Amazon SQS
GetQueueAttributes action with all of
the AWS Account's queues.
{
   "Version": "2012-10-17",
   "Statement":[{
      "Effect":"Allow",
      "Action":"sqs:GetQueueAttributes",
      "Resource":"*"
   }]
}
Example 4: Allow a partner to send messages to a particular queue
You could do this with an Amazon SQS policy or an IAM policy. Using an Amazon SQS policy might be easier if the partner has an AWS Account. However, anyone in the partner's company who possesses the AWS security credentials could send messages to the queue (and not just a particular User). We'll assume you want to limit access to a particular person (or application), so you need to treat the partner like a User within your own company, and use a IAM policy instead of an Amazon SQS policy.
In this example, we create a group called WidgetCo that represents the partner company, then create a User for the specific person (or application) at the partner company who needs access, and then put the User in the group.
We then attach a policy that gives the group
SendMessage
access on the specific queue named WidgetPartnerQueue.
We also want to prevent the WidgetCo group from doing anything else with queues,
so we add a statement that denies permission to any Amazon SQS actions besides
SendMessage on any queue besides WidgetPartnerQueue. This
is only necessary if there's a broad policy elsewhere in the system that gives Users
wide access to Amazon SQS.
{ "Version": "2012-10-17", "Statement":[{ "Effect":"Allow", "Action":"sqs:SendMessage", "Resource":"arn:aws:sqs:*:123456789012:WidgetPartnerQueue" }, { "Effect":"Deny", "NotAction":"sqs:SendMessage", "NotResource":"arn:aws:sqs:*:123456789012:WidgetPartnerQueue" } ] }
In addition to creating IAM users with their own security credentials, IAM also enables you to grant temporary security credentials to any user allowing in making requests to Amazon SQS. The API libraries compute the necessary signature value using those credentials to authenticate your request. If you send requests using expired credentials Amazon SQS denies the request.
First, use IAM to create temporary security credentials, which include a security token, an Access Key ID, and a Secret Access Key. Second, prepare your string to sign with the temporary Access Key ID and the security token. Third, use the temporary Secret Access Key instead of your own Secret Access Key to sign your Query API request. Finally, when you submit the signed Query API request, don't forget to use the temporary Access Key ID instead of your own Access Key ID and include the security token. For more information about IAM support for temporary security credentials, go to Granting Temporary Access to Your AWS Resources in Using IAM.
To call an Amazon SQS Query API action using Temporary Security Credentials
Request a temporary security token with AWS Identity and Access Management. For more information, go to Creating Temporary Security Credentials to Enable Access for IAM Users in Using IAM.
IAM returns a security token, an Access Key ID, and a Secret Access Key.
Prepare your Query as you normally would, but use the temporary Access Key ID in place of your own Access Key ID and include the security token. Sign your request using the temporary Secret Access Key instead of your own.
Submit your signed query string with the temporary Access Key ID and the security token.
The following example demonstrates how to use temporary security credentials to authenticate an Amazon SQS request. ?Action=CreateQueue &DefaultVisibilityTimeout=40 &QueueName=testQueue &Attribute.1.Name=VisibilityTimeout &Attribute.1.Value=40 &Version=2011-10-01 &Signature=Dqlp3Sd6ljTUA9Uf6SGtEExwUQEXAMPLE &SignatureVersion=2 &SignatureMethod=HmacSHA256 &Expires=2011-10-18T22%3A52%3A43PST &SecurityToken=
SecurityTokenValue&AWSAccessKeyId=
Access Key ID provided by AWS Security Token Service
The following example uses Temporary Security Credentials to send two messages with
SendMessageBatch. ?Action=SendMessageBatch &SendMessageBatchRequestEntry.1.Id=test_msg_001 &SendMessageBatchRequestEntry.1.MessageBody=test%20message%20body%201 &SendMessageBatchRequestEntry.2.Id=test_msg_002 &SendMessageBatchRequestEntry.2.MessageBody=test%20message%20body%202 &SendMessageBatchRequestEntry.2.DelaySeconds=60 &Version=2011-10-01 &Expires=2011-10-18T22%3A52%3A43PST &Signature=Dqlp3Sd6ljTUA9Uf6SGtEExwUQEXAMPLE &SignatureVersion=2 &SignatureMethod=HmacSHA256 &SecurityToken=
SecurityTokenValue&AWSAccessKeyId=
Access Key ID provided by AWS Security Token Service | http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/UsingIAM.html | 2014-03-07T08:25:14 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.aws.amazon.com |
The GCD Charter
GCD By-Laws
Statement of Purpose
The Grand Comics Database [herein called the GCD] is a nonprofit, Internet-based organization of international volunteers dedicated to the collection, preservation, and dissemination of information about all printed comics throughout the world. Its primary objective is to build a publicly accessible database using the latest technology available in order to document the medium of printed comics wherever they may be published. The GCD may also pursue other activities that relate to the general purpose described above.
Membership
Membership shall be conferred upon those individuals who volunteer their time in furtherance of the GCD's goals in any of the following manners:
- Submitting a requisite number of new and/or corrected indexes or cover scans, as specified in the current operating guidelines.
-.
Members of the GCD shall elect members from within their ranks to hold positions on the governing board, and must approve all amendments to the governing by-laws by a two-thirds majority vote of those voting.
Furthermore, any group of members equal in number to at least twice the current size of the governing board may formulate a proposal, or petition the board asking them to formulate a proposal, for the consideration of the membership on any of the following matters:
- Recall or reprimand of a board member or coordinator.
- The challenging of any existing board ruling by putting it to a vote of the membership.
- The initiation of a Charter Amendment.
- No such vote of the membership should be taken less than 28 days from an announcement to the membership.
Board
Role & Responsibility of Board
The Board is the official representative body of the GCD membership. The board is responsible for serving as the leadership necessary to ensure that the GCD's mission statement is being met. In pursuit of the GCD mission, the Board is empowered to appoint agents and committees to achieve authorized activities, to coordinate and monitor these activities, and make such administrative and organizational changes needed to achieve smooth operation in accordance with the GCD mission including but not limited to:
- finding or recruiting additional resources to help those that need it,
- identifying and fulfilling the need to create additional coordinator positions and committees as needed to meet the project's statement of purpose, and
- recruiting replacements for vacated coordinator positions or committee seats.
The board is also responsible for establishing, maintaining, and updating the organizational structure. This structure will define the relationships of the various administrative entities to each other and the place of each within the operational flow.
Board Structure
Operations of the GCD shall be governed by a board of directors. The board shall consist of nine members serving two year terms each. Elections shall be held annually for at least four of the nine seats. Vacancies occurring within three months of a previous election shall be filled from nearest runner-ups from that most recent election. Vacancies occurring more than three months from the previous election and more than three months from the next election will be filled by a special election to be organized by the remaining board members. Vacancies occurring less than three months from an upcoming election will be filled during that election.
Should they find it necessary in order to ensure adequate representation by members for whom English is not their first language, the Board is authorized to set aside up to three positions on the Board to be filled in subsequent elections by said members. This authority may be renewed or terminated at the Board's discretion at the time said positions come up for election. Procedures for nominating and electing members to these positions will be the same as for other Board members except that the Board will delegate someone to verify the language qualification.
Board members will elect officers as needed from among their membership. Board meetings will be conducted by means of a semi-closed mailing list, available to members in a non-participatory digest form, with bi-weekly summaries of activities being posted to the general membership by means of a web page, a post to the general email list, and/or an FTPable text archive.
Board & Electorate
Candidates who wish to run for a board seat must post a message to the gcd-main email list to that effect anytime during the period three weeks prior to the election to two days prior.
A recall vote may be forced by the membership if at least 9 members demand it within any given one-week period. A 2/3 majority vote of those of the membership voting is required for the recall to pass.
A repeal of a board action may be forced to vote by the membership if at least 9 members demand it within any given one-week period, provided no such vote has transpired in the previous six months. A 3/5 majority vote of those of the membership voting is required for the repeal to pass. Successful repeals may not themselves be put up for repeal for at least six months.
Amendments
Board Candidate Announcement & Campaign Period
November 2010: The candidate announcement and campaign period identified in Section 4.3 was shortened to 2-21 days prior to election (reduced from 3-5 weeks).
Original Text: "Candidates who wish to run for a board seat must post a brief note to the main email list no sooner than five weeks prior to the election and no later than three weeks prior to the election."
Changed Text: "Candidates who wish to run for a board seat must post a message to the gcd-main email list to that effect anytime during the period three weeks prior to the election to two days prior."
The Amendment took effect during the 2011 Board Elections. | http://docs.comics.org/wiki/The_GCD_Charter | 2014-03-07T08:25:11 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.comics.org |
Features
Applications
In mezzio, you define a
Mezzio\Application instance and
execute it. The
Application instance is itself middleware
that composes:
- a
Mezzio\MiddlewareFactoryinstance, used to prepare middleware arguments to pipe into:
- a
Laminas\Stratigility\MiddlewarePipeinstance, representing the application middleware pipeline.
- a
Mezzio\Router\RouteCollectorinstance, used to create
Mezzio\Router\Routeinstances based on a combination of paths and HTTP methods, and which also injects created instances into the application's router.
- a
Laminas\HttpHandlerRunner\RequestHandlerRunnerinstance which will ultimately be responsible for marshaling the incoming request, passing it to the
MiddlewarePipe, and emitting the response.
You can define the
Application instance in two ways:
- Direct instantiation, which requires providing several dependencies.
- Via a dependency injection container; we provide a factory for setting up all aspects of the instance via configuration and other defined services.
Regardless of how you setup the instance, there are several methods you will likely interact with at some point or another.
Instantiation
Constructor
If you wish to manually instantiate the
Application instance, it has the
following constructor:
public function __construct( Mezzio\MiddlewareFactory $factory, Laminas\Stratigility\MiddlewarePipeInterface $pipeline, Mezzio\Router\RouteCollector $routes, Laminas\HttpHandlerRunner\RequestHandlerRunner $runner ) {
Container factory
We also provide a factory that can be consumed by a PSR-11 dependency injection container; see the container factories documentation for details.
Adding routable middleware
We discuss routing vs piping elsewhere; routing is the act of dynamically matching an incoming request against criteria, and it is one of the primary features of mezzio.
Regardless of which router implementation you use, you
can use the following
Application methods to provide routable middleware:
route()
route() has the following signature:
public function route( string $path, $middleware, array $methods = null, string $name = null ) : Mezzio\Router\Route
where:
$pathmust be a string path to match.
$middlewaremust.
$methodsmust be an array of HTTP methods valid for the given path and middleware. If null, it assumes any method is valid.
$nameis the optional name for the route, and is used when generating a URI from known routes. See the section on route naming for details.
This method is typically only used if you want a single middleware to handle multiple HTTP request methods.
get(), post(), put(), patch(), delete(), any()
Each of the methods
get(),
put(),
patch(),
delete(), and
any()
proxies to
route() and has the signature:
function ( string $path, $middleware, string $name = null ) : Mezzio\Router\Route
Essentially, each calls
route() and specifies an array consisting solely of
the corresponding HTTP method for the
$methods argument.
Piping
Because mezzio builds on laminas-stratigility,
and, more specifically, its
MiddlewarePipe definition, you can also pipe
(queue) middleware to the application. This is useful for adding middleware that
should execute on each request, defining error handlers, and/or segregating
applications by subpath.
The signature of
pipe() is:
public function pipe($middlewareOrPath, $middleware = null)
where:
$middlewareOrPathis either a string URI path (for path segregation), PSR-15
MiddlewareInterfaceor
RequestHandlerInterface, or the service name for a middleware or request handler to fetch from the composed container.
$middlewareis required if
$middlewareOrPathis a string URI path. It can be one.
Unlike
Laminas\Stratigility\MiddlewarePipe,
Application::pipe() allows
fetching middleware and request handlers by service name. This facility allows
lazy-loading of middleware only when it is invoked. Internally, it wraps the
call to fetch and dispatch the middleware inside a
Mezzio\Middleware\LazyLoadingMiddleware instance.
Read the section on piping vs routing for more information.
Registering routing and dispatch middleware
Routing and dispatch middleware must be piped to the application like any other middleware. You can do so using the following:
$app->pipe(Mezzio\Router\Middleware\RouteMiddleware::class); $app->pipe(Mezzio\Router\Middleware\DispatchMiddleware::class);
We recommend piping the following middleware between the two as well:
$app->pipe(Mezzio\Router\Middleware\ImplicitHeadMiddleware::class); $app->pipe(Mezzio\Router\Middleware\ImplicitOptionsMiddleware::class); $app->pipe(Mezzio\Router\Middleware\MethodNotAllowedMiddleware::class);
These allow your application to return:
HEADrequests for handlers that do not specifically allow
HEAD; these will return with a 200 status, and any headers normally returned with a
GETrequest.
OPTIONSrequests for handlers that do not specifically allow
OPTIONS; these will return with a 200 status, and an
Allowheader indicating all allowed HTTP methods for the given route match.
- 405 statuses when the route matches, but not the HTTP method; these will also include an
Allowheader indicating all allowed HTTP methods.
See the section on piping to see how you can register non-routed middleware and create layered middleware applications.
Executing the application: run()
When the application is completely setup, you can execute it with the
run()
method. The method proxies to the underlying
RequestHandlerRunner, which will
create a PSR-7 server request instance, pass it to the composed middleware
pipeline, and then emit the response returned.
Found a mistake or want to contribute to the documentation? Edit this page on GitHub! | https://docs.mezzio.dev/mezzio/v3/features/application/ | 2020-09-18T11:26:05 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.mezzio.dev |
# Deep Dive
# Objectives of the Review Process
The Subspace review process has several objectives:
- Promote high quality code, and filter out poorly written code
- Encourage collaboration and discourage animosity
- Elevate the influence of coders who perform well, and limit the influence of coders who do not
- Do not create bottlenecks -- reviews should be relatively quick
We've been through several iterations of the protocols, and we will continue to seek improvements. Originally we started with a voting model, but thanks to this blog post by Nadia Eghbal we went through some major rewrites.
# Review Actions
Here we will outline the basic interactions with the review system.
# Submit a contribution
You wrote the code. You submitted a contribution. This is a
merge request on steroids.
# Support a contribution
You reviewed a contribution, and you think it's great (or at least headed in the right direction). Not only that, but you want to get involved and help to shepherd it across the finish line. You've put yourself in the same boat as the contributor, and you've staked your reputation on getting this done successfully. If you, the contributor, and any other supporters get this code approved, then you'll see a boost in your Subspace
rating. Each person to support the contribution brings it closer to approval. Contributions are weighed against
suggestions / concerns, which we'll jump into next.
# Make a suggestion
You've reviewed the contribution, and you see some room for improvement, or have some concerns. You write up a suggestion detailing the specifics of your proposed alterations (perhaps with some example code). Suggestions serve as blockers to getting the contribution approved. Here you can begin a dialogue with the community over specific issues.
# Contribution outcomes: Approved, Pending, or Declined
A contribution is approved if the
rating-weighted proportion of
support exceeds a threshold relative to the proportion of
suggestions. If after a certain amount of time (the time and thresholds are discussed below) there are still too many suggestions, then the contribution will transition to
pending. This is fine, and during this time the
contributor and
supporters can work to
address
suggestions. If a contribution stays in the
pending state for too long (usually 1-2 weeks) then it will be declined.
# Suggestion states
When a suggestion is made, it begins in the
SUGGESTED state. It now holds some weight against the contribution becoming approved. A suggestion can also be withdrawn while in this state. The contributor can submit additional commits, and if they feel they have addressed the concerns, they can move the suggestion to the
ADDRESSED state. From here the suggester can choose to accept the changes (and move it into the
ACCEPTED state), or renew their suggestion (and move it back to the
SUGGESTED state). Once a suggestion is accepted it no longer holds weight against the contribution approval.
Suggestion life-cycle:
WITHDRAWN<--
SUGGESTED<-->
ADDRESSED-->
ACCEPTED
# Approval thresholds
Initially when a contribution is made there is an approval threshold that is set. This threshold is dictated by overall project activity and the ratings of project members. It is set at a fairly high level, which gives everyone who wants to review it a chance to do so. However, the threshold is lowered as time passes, and is determined by an exponential decay function. At some point the activity (support and suggestions) will cross this lowering threshold and the contribution will get pushed into either the
Approved or
Pending state. If for some reason there are no supporters and no suggestions, then after a very long time the contribution will eventually be accepted. We had open-source projects in mind here: even if a project has very little support and the owner has dumped the project into
no longer maintained status, the project can continue on along happily in their absence, with support from the occasional passerby 😃
# Ratings
While we actively seek to foster a community of knowledge-sharing and collaboration, Subspace is primarily a tool for break-neck, mind-bending, reality-questioning, software development. For this meritocracy to function at it's best, the ratings need to be accurate estimates of user skill and experience level. On the backend we have used models similar to those found in competitive chess (and some video games). Most actions in Subspace will have some affect on reputation. Similar to chess, a novice will not be penalized very much for losing to a grand-master. A user trying to contribute code to a very popular and high-quality open-source project will not lose very many points if their contribution is declined. Most of the actions effectively model the user vs. the project, and we keep internal ratings for each project.
All users start off with a rating of
0. At the end of the day, every user's reputation is scaled up or down slightly to maintain an average rating for all users of
1000. The median rating (where there are an equal number of developers above and below) is approximately
700.
# Actions and ratings
Below are two tables outlining the possible rating-influencing scenarios.
+R and
-R indicate an increase or decrease in rating, respectively.
++R and
--R indicate a larger increase or decrease in rating. These are approximate representations -- the actual model is fairly involved.
# Contributions
# Suggestions
From the tables above we can see a few things:
- Submitting a contribution affects your rating more than supporting a contribution. And actually, the later you are to the party the less your rating is affected.
- Most suggestion scenarios increase your rating -- unless someone is being obnoxious and the contributor ignores their suggestion (but manages to get the contribution approved anyway).
We welcome any feedback on the protocol!
# Contribution Mana (quadratic voting)
Quadratic voting has recently become popular as a method for group decision-making. The basic premise is that votes (given some set of choices) are not free, but instead come from a finite resource. Placing a single vote on a given choice will cost you 1 vote point, or, if you really care about an issue you can place multiple votes, but it will cost you n^2 vote points; the important piece is that it is nonlinear. Vitalik Buterin has a great blog post about the benefits of quadratic voting, in the context of cryptocurrency governance.
Users in Subspace have contribution vote points -- what is playfully termed 'Mana'. Actions related to contributions (submission, approval, suggestion feedback) all require contribution mana as a means of communicating conviction / value / importance. Mana regenerates at a rate of 1 point per hour, and all users have a mana cap of 325. Use your mana wisely! 😃
← Getting Started FAQs → | https://docs.subspace.net/deep-dive/ | 2020-09-18T10:21:05 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.subspace.net |
TOPICS×
Command Line Start and Stop
Starting Adobe Experience Manager from the Command Line
The start script is available under the <cq-installation>/bin directory. Both Unix and Windows versions are provided. The script starts the instance installed in <cq-installation> directory.
Those two versions support a list of environment variables that could be used to start and tune the AEM instance.
To stop AEM, do one of the following:
- Depending on the platform you are using:
- If you started AEM from either a script or the command line, press Ctrl+C to shut down the server.
- If you have used the start script on UNIX, you must use the stop script to stop AEM.
- If you started AEM by double-clicking the jar file, click the On button on the startup window (the button then changes to Off ) to shut down the server.. | https://docs.adobe.com/content/help/en/experience-manager-65/deploying/deploying/command-line-start-and-stop.html | 2020-09-18T11:52:24 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.adobe.com |
Recommended post-installation configuration¶
Once you’ve installed SQream DB, you can and should tune your system for better performance and stability.
This page provides recommendations for production deployments of SQream DB.
In this topic:
- Recommended BIOS settings
- Use a dedicated SQream DB administration account
- Configure the OS locale and timezone
- Configure NTP for clock synchronization
- Install recommended utilities
- Tuning OS parameters for performance and stability
- Disable SELinux
- Secure the server with a firewall
Recommended BIOS settings¶
Use a dedicated SQream DB administration account¶
Create a user for SQream DB, and optionally assign it to the
wheel group for
sudo access.
$ useradd -m -u 1132 -U sqream
$ passwd sqream
$ usermod -aG wheel sqream
Note
- The UID (1132 in the example above) is set to ensure all shared files are accessible by all workers.
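To confirm that the account was created with the intended UID and group membership, you can check it with id (the exact group IDs may differ on your system):

$ id sqream
uid=1132(sqream) gid=1132(sqream) groups=1132(sqream),10(wheel)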
Configure the OS locale and timezone¶
Set your OS to use UTF-8, which SQream DB uses for non-English language support.
$ sudo localectl set-locale LANG=en_US.UTF-8
Set the correct timezone for your server. Refer to the list of available timezones to find a timezone that matches your location.
$ sudo timedatectl set-timezone America/New_York
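You can confirm that both the locale and the timezone took effect by checking the current values:

$ localectl status
$ timedatectl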
Configure NTP for clock synchronization¶
SQream DB clusters rely on clock synchronization to function correctly.
$ sudo yum install -y ntp ntpdate
$ sudo systemctl enable ntpd
$ sudo systemctl start ntpd
If your organization has an NTP server, configure it by adding records to
/etc/ntp.conf, reloading the service, and checking that synchronization is enabled:
$ echo -e "\nserver <your NTP server address>\n" | sudo tee -a /etc/ntp.conf
$ sudo systemctl restart ntpd
$ sudo timedatectl
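To verify that the server is actually synchronizing, you can list the configured peers; a reachable peer shows a non-zero reach value:

$ ntpq -p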
Install recommended utilities¶
The following packages contain tools that are recommended but not required for using SQream DB.
$ sudo yum install -y bash-completion.noarch vim-enhanced.x86_64 vim-common.x86_64 net-tools iotop htop psmisc screen xfsprogs wget yum-utils deltarpm dos2unix tuned pciutils
Tuning OS parameters for performance and stability¶
SQream DB requires certain OS parameters to be set on all hosts in your cluster.
These settings affect:
- Shared memory - Default OS settings may restrict the shared memory available to high-throughput software like SQream DB.
- Network - For high-throughput operations like ingest, tuning network connection parameters can boost performance.
- User limits - SQream DB may open a large number of files. The default OS settings may cause some statements to fail if the system runs out of file descriptors.
- Core dump creation rules
Create a directory for core dumps
In this step, you will create a directory for writing core dumps - which you will configure in the next step.
$ sudo mkdir /tmp/core_dumps
Note
Core dumps can be large - up to the size of the system memory (i.e. for a machine with 512GB of RAM, the size of the core dump will be 512GB).
Make sure the directory has enough space for writing a core dump.
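The directory must also be writable by the user running the SQream DB processes. A world-writable mode is the simplest option (an assumption here, not an official requirement; tighten permissions to match your security policy):

$ sudo chmod 777 /tmp/core_dumps    # assumption: adjust to your security policy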
Set sysctl overrides to tune system performance
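The official parameter values are not reproduced here. The block below is only a sketch of the mechanism (a drop-in file under /etc/sysctl.d/ applied with sysctl --system); the values shown are assumptions and should be replaced with the settings recommended for your workload. The kernel.core_pattern entry directs core dumps to the directory created in the previous step.

$ sudo tee /etc/sysctl.d/100-sqream.conf > /dev/null <<EOT
# Illustrative values only - replace with the officially recommended settings
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
vm.swappiness = 10
kernel.core_pattern = /tmp/core_dumps/core.%e.%p.%h.%t
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
EOT
$ sudo sysctl --system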
Note
The settings above include provisioning for core dumps. Core dumps can be a valuable source of information in scenarios where stack traces and error logs are not enough.
By default, the kernel writes core dump files to the current working directory of the process. SQream recommends overriding this setting and writing the core dump files to a fixed directory.
The kernel.core_pattern setting uses the directory you created in the previous step (/tmp/core_dumps).
Increase the limit of open files and processes
$ sudo tee -a /etc/security/limits.conf > /dev/null <<EOT
* soft nproc 524288
* hard nproc 524288
* soft nofile 524288
* hard nofile 524288
* soft core unlimited
* hard core unlimited
EOT
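After logging out and back in, you can verify that the new limits are in effect for the sqream user (the values below reflect the settings added above):

$ ulimit -n
524288
$ ulimit -u
524288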
Verify mount options for drives
SQream recommends XFS for local data storage. The recommended XFS mount options are:
rw,nodev,noatime,nobarrier,inode64
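For illustration only (the device name and mount point are hypothetical), an /etc/fstab entry using these options could look like the following:

# /etc/fstab (hypothetical device and mount point)
/dev/sdb1   /mnt/sqream_data   xfs   rw,nodev,noatime,nobarrier,inode64   0 0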
Note
Reboot your system for the above settings to take effect.
Disable SELinux¶
SELinux may interfere with NVIDIA driver installation and some SQream DB operations. Unless absolutely necessary, we recommend disabling it.
Check if SELinux is enabled
$ sudo sestatus
SELinux status: disabled
You can disable SELinux by changing the value of the SELINUX parameter to disabled in /etc/selinux/config and rebooting.
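One way to apply the change non-interactively is with sed (a sketch; verify the path and the existing contents of the file on your distribution before running it):

$ sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
$ sudo reboot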
Secure the server with a firewall¶
Opening up ports in the firewall¶
The example below shows how to open up all ports required by SQream DB and related management interfaces. The example also takes into account up to 4 workers on the host.
$ sudo systemctl start firewalld
$ sudo systemctl enable firewalld
$ for p in {2812,3000,3001,3105,3108,5000-5003,5100-5103}; do sudo firewall-cmd --zone=public --permanent --add-port=${p}/tcp; done
$ sudo firewall-cmd --reload
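To confirm that the ports are now open in the public zone, list them (output varies by system):

$ sudo firewall-cmd --zone=public --list-ports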
Disabling the built-in firewall¶
If not required, you can disable the server’s firewall. This will reduce connectivity issues, but should only be done inside your internal network.
$ sudo systemctl disable firewalld
$ sudo systemctl stop firewalld
What’s next? | https://docs.sqream.com/en/latest/guides/operations/setup/recommended_configuration.html | 2020-09-18T09:59:08 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.sqream.com |
Table of Contents
Product Index
Cataleya is a tough girl, very hard, tanned in a thousand battles, rarely appears in her eyes the sweetness. But that sweetness is in her, hidden almost, imperceptible, but it is, you just have to look at it more closely and you will see it appear as if by magic.
Cataleya has a great versatility that makes it fit in any type of scene, whether fantasy or realism.
Cataleya for Genesis 8 female is a character full of strength, with an athletic body and features touching the hardness and coldness. It brings three different shaders, many makeup options, fibermesh eyebrows, HD details, HD scars and much more that will make it one of the favorite characters of its Libraries. Do not lose the opportunity to. | http://docs.daz3d.com/doku.php/public/read_me/index/59739/start | 2020-09-18T10:04:02 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.daz3d.com |