Claim processing reports
Smart Claims Engine provides baseline reporting on claims processing. These reports are available by selecting the Claims processing category from the dropdown on the reports screen.
The reports denoted by an asterisk (*) are also displayed in the Claims manager portal and provide weekly, monthly, and yearly breakdowns.
Available claims processing reports include:
Additionally, the Claims manager portal displays the following reports:
Live Streaming
Get API Key for Live Streaming
To start streaming with Qencode, you need your API key.
Get Access Token
Once you have your API key, you can exchange it for an access token.
Here's an example of an API request to get an access token:
curl -X POST
Response example:
{ "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyjpYXQiOjE2MjgxNTgyNJMsInN1YiI6MTMwLCJleHAiOjE2MjgyNDQ2NjN9.SxS3zLx2CZbZ9ylTpd25kj9el6_4TqqTWUA9RT2iJ9I" }
The token returned should be passed in the Authorization header to all other live streaming API methods.
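For illustration, a hedged sketch of passing the token as a Bearer header (the API host below is a placeholder; the /start path is the one referenced later in this guide):

curl -H "Authorization: Bearer $ACCESS_TOKEN" \
     -X POST https://<qencode-api-host>/v1/live-streams/<stream_id>/start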
Create Live Stream
Now that you have an access token, you can create a live stream. The created stream contains a wide range of parameters with all the information you need to start streaming.
Choose an Input Protocol for Stream
The way you stream depends on your ingest protocol. Qencode supports RTMP, WebRTC and SRT protocols for live streaming.
RTMP Streaming
To start broadcasting with RTMP, enter the following values into your streaming client or software.
- Server URL
rtmp://rtmp-live.qencode.com/qlive
You can also get this URL from the server_urls object returned when you create the live stream:
... "server_urls": { "rtmp": "rtmp://rtmp-live.qencode.com/qlive", } ...
- Stream key
There are three ways to get the stream key:
- As part of the response when you create a live stream
- When you query stream details, or call the /v1/live-streams/<stream_id>/start method and take it from the response (which also contains the WebRTC server URL)
- By clicking on a stream on the Stream List page.
If you want to reduce startup time, you can start the encoder in advance using the /v1/live-streams/<stream_id>/start method.
Otherwise, you can start streaming using your credentials and the system will allocate an encoder automatically.
WebRTC Streaming
For WebRTC input, the typical setup is the following: the client's webpage gets access to a webcam and pushes the stream from it to Qencode. We provide a powerful WebRTC Kit you can use to integrate this feature into your website. For testing purposes you can use this WebRTC Test page.
- Start your WebRTC stream
You should call the /v1/live-streams/<stream_id>/start method; its response contains the server URLs, including the WebRTC one:
... "server_urls": { "rtmp": "rtmp://rtmp-live.qencode.com/qlive", "webrtc": "wss://live-df6607d4-8866-4d30-906d-0fed54e58f97.qencode.com:3334/qlive/1234e5b1-dfd7-4daa-8121-bc5c68bbeb37" } ...
Alternatively, you can start the stream using the Start stream link on the Stream detail page in your Qencode account UI. Available server URLs will be shown there as well.
- Push stream to Server URL
To start broadcasting with WebRTC, you need the webrtc value from the server_urls object in the response.
You can start streaming using our WebRTC Test page or any web client that supports WebRTC.
Note: The timeout to receive input for a stream is 120 seconds, so please make sure you call the start method no more than 120 seconds before you begin pushing the stream.
If you are using the test page, make sure you:
- Allow webcam and microphone access. You can verify this if the video is visible on the page.
- Press the Connect button to start pushing the stream to Qencode.
SRT Streaming
To set up your client app you need a Server URL for SRT input. This URL also includes the stream ID (the streamid query parameter shown below).
You can also start the stream by calling the /v1/live-streams/<stream_id>/start method:
... "server_urls": { "rtmp": "rtmp://rtmp-live.qencode.com/qlive", "srt": "srt://live-53801710-8c4e-433e-befb-5571d4843d4f.qencode.com:9999?streamid=1234e5b1-dfd7-4daa-8121-bc5c68bbeb37" } ...
Specify the SRT Server URL in your streaming client app settings (for example, in the OBS stream settings).
The ingest timeout for a stream is 120 seconds. Please make sure that less than 120 seconds have passed since you called the start method before you begin pushing the stream.
Play your live stream
Each stream has a unique playback_id value identifying a playback URL.
HLS
For HLS output, the stream playback URL format will be the following:
You can use any player supporting ABR streams playback.
Using Qencode Player is described here
Stop live stream
Stopping a broadcast in your streaming client will bring the stream to an Idle status. Resuming the broadcast within a two-minute interval after stopping it will make your stream Live again.
If more than two minutes pass after the broadcast is stopped or interrupted, the stream gets the Stopped status.
You can also stop a stream explicitly by calling the API.
Here's an example of API request to stop a stream:
curl -H "Authorization: Bearer $ACCESS_TOKEN" -X POST
Receive stream event callbacks
You can set up an endpoint on your web server and get webhook notifications each time a stream event occurs, e.g. when the stream status is changed. To specify a callback for a stream, you need to call the API with a callback_url parameter:
{ "callback_url": " }
After you set up a callback URL for a stream, Qencode will send an HTTP POST request to it each time a stream event occurs. The body of the HTTP POST request is the stream object data in JSON format.
Connection Types¶
Before being able to talk to a remote GMP or OSP server using one of the provided command line clients, the user has to choose a connection type for establishing a communication channel. Currently three different connection types are supported for being used as transport protocol:
For the most common use case (querying openvasmd/gvmd via GMP on the same host) the socket connection should be chosen. The other connection types require some setup and possible adjustments at the server side, if no Greenbone OS based system is used.
Using a Unix Domain Socket¶
The Unix Domain Socket is the default connection type of gvmd in the Greenbone Source Edition. It is only usable when running the client tool on the same host as the daemon.
The location and name of the Unix Domain Socket provided by
gvmd/openvasmd highly depends on the environment and
GVM installation. Additionally, its name changed from
openvasmd.sock in
GVM 9 to
gvmd.sock in GVM 10.
For GOS 4 the path is either
/run/openvas/openvasmd.sock or
/usr/share/openvas/gsa/classic/openvasmd.sock and for
GOS 5 the path is either
/run/gvm/gvmd.sock or
/usr/share/gvm/gsad/web/gvmd.sock.
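As a quick connectivity check (a sketch only; adjust the socket path to match your installation, e.g. one of the paths listed above), you can ask gvmd for its version with gvm-cli:

gvm-cli socket --socketpath /run/gvm/gvmd.sock --xml "<get_version/>"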
OSPd based scanners may be accessed via Unix Domain Sockets as well. The location and name of these sockets is configurable and depends on the used OSPd scanner implementation.
Warning
Accessing a Unix Domain Socket requires sufficient Unix file permissions for the user running the command line interface tool.
Please do not start a tool as root user via sudo or su only to be able to access the socket path. Instead, adjust the socket file permissions, e.g. by setting the --listen-owner, --listen-group or --listen-mode arguments of gvmd.
Using TLS¶
The TLS connection type was the default connection type for remote and local communication in GOS 3.1 and before. It is used to secure the transport protocol connection of GMP or OSP. It requires to provide a TLS certificate file, TLS key file and TLS certificate authority file. | https://gvm-tools.readthedocs.io/en/latest/connectiontypes.html | 2022-05-16T21:02:43 | CC-MAIN-2022-21 | 1652662512249.16 | [] | gvm-tools.readthedocs.io |
Represents a controller class in Grails.
The general name to use when referring to action artefacts.
The name of the after interceptor property.
The name of the before interceptor property.
The general name to use when referring to controller artefacts.
The name of the index action.
The name of the namespace property
The general name to use when referring to action view.
Returns the default action for this Controller.
Initialize the controller class
Invokes a controller action on the given controller instance
controller- The controller instance
action- The action
Tests if a controller maps to a given URI.
Register a new UrlConverter with the controller
urlConverter- The UrlConverter to register | http://docs.grails.org/4.0.12/api/grails/core/GrailsControllerClass.html | 2022-05-16T21:21:47 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.grails.org |
{warning} This online page relates to the latest released JChem PostgreSQL Cartridge. Find the version specific documentation in your installed package in
/opt/jchem-psql/doc/directory.
During chemical operations, JChem sessions are opened, closed and assigned to PostgreSQL sessions. In order to release JChem sessions properly, PostgreSQL statements must be closed when using JDBC.
This can be done either by enabling autocommit on the JDBC connection or by calling commit or rollback after every statement execution (even query statements).
Setting SQL log level to debug may produce high amount of log entries. JDBC keeps these log messages connected to Statements, Connections, and ResultSets which may fill the JVM heap causing memory or GC problems.
To avoid this problem use the
clearWarnings() methods of the classes above. | https://docs.chemaxon.com/display/lts-europium/jdbc-caution.md | 2022-05-16T21:00:53 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.chemaxon.com |
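A minimal sketch of both recommendations using plain JDBC (the connection details are placeholders, not values from this documentation):

import java.sql.*;

public class JChemPsqlExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost:5432/mydb"; // placeholder connection details
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
                rs.clearWarnings();  // drop warning messages kept by the ResultSet
                st.clearWarnings();  // drop warning messages kept by the Statement
            }                        // try-with-resources closes the statement here
            con.commit();            // commit (or roll back) even after query statements
            con.clearWarnings();     // drop warning messages kept by the Connection
        }
    }
}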
Manage licensing and enable analytics on virtual servers
Note
- By default, the Auto Licensed Virtual Servers option is enabled. You must ensure that you have sufficient licenses to license the virtual servers. If you have limited licenses and want to license only selected virtual servers based on your requirement, disable the Auto Licensed Virtual Servers option. Navigate to Settings > Licensing & Analytics Configuration and disable the Auto Licensed Virtual Servers option under Virtual Server License Allocation.
The process of enabling analytics is simplified. You can license the virtual server and enable analytics in a single workflow.
Navigate to Settings > Licensing & Analytics Configuration to:
View the Virtual Server Licence Summary
View the Virtual Server Analytics Summary
When you click Configure License or Configure Analytics, the All Virtual Servers page is displayed.
Under Instance level options:
Enable HTTP X-Forwarded-For - Select this option to identify the IP address for the connection between client and application, through HTTP proxy or load balancer.
Citrix Gateway - Select this option to view analytics for Citrix Gateway.
The following table describes the features of Citrix ADM that support IPFIX and Logstream as the transport.
RENAME
Purpose
Use this statement to rename schemas and schema objects.
Prerequisites
- If the object is a schema, it must belong to the user or one of the user roles.
- If the object is a schema object, the object must belong to the user or one of the user roles (that is, located in one's own schema or that of an assigned user role).
- If the object is a user or role, then the user requires the CREATE USER or CREATE ROLE privileges.
- If the object is a consumer group, the user needs to have the system privilege MANAGE CONSUMER GROUPS.
- If the object is a connection, at least one of the following prerequisites must be fulfilled:
- User has the system privilege ALTER ANY CONNECTION
- The connection is granted to the user with the WITH ADMIN OPTION
- The connection belongs to the current user or one of the user’s roles
Syntax
rename::=
Usage Notes
- Schema objects cannot be shifted to another schema with the RENAME statement. For example, 'RENAME TABLE s1.t1 TO s2.t2' is not allowed.
- Distinguishing between schema or tables is optional and only necessary if two identical objects share the same name. | https://docs.exasol.com/db/latest/sql/rename.htm | 2022-05-16T21:43:32 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.exasol.com |
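A few illustrative statements (object names are hypothetical):

RENAME SCHEMA s1 TO s2;
RENAME TABLE t1 TO t2;
RENAME CONNECTION my_conn TO my_connection;
-- The object type may be omitted when the name is unambiguous:
RENAME t2 TO t3;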
Markdown#
The Markdown node converts between Markdown and HTML formats.
Options#
You can configure the node's output using Options. Click Add Option to view and select your options.
Test out the options
Some of the options depend on each other, or can interact. We recommend testing out options to check the effects are what you want.
Markdown to HTML#
HTML to Markdown#
Parsers#
n8n uses the following parsers:
- To convert from HTML to Markdown: node-html-markdown
- To convert from Markdown to HTML: Showdown. Some options allow you to extend your Markdown with GitHub Flavored Markdown. | https://docs.n8n.io/integrations/core-nodes/n8n-nodes-base.markdown/ | 2022-05-16T22:51:31 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.n8n.io |
“Don’t skip weeks.”
“Good god I have no freaking clue.” — Arbitrage
Extended.
Text analytics infrastructure upgraded to Java version 1.7
Valid from Pega Version 7.2.2
The JAR libraries that support the text analytics features of the Pega 7 Platform have been upgraded to Java 1.7. As a result of the upgrade, you can validate the Ruta scripts that contain entity extraction models and view error reports if the imported Ruta scripts are invalid. This upgrade helps you to ensure that the structure of the entity extraction models that you import to your application is always correct.
For more information, see Text Analyzer. | https://docs.pega.com/platform/release-notes-archive?f%5B0%5D=%3A9031&f%5B1%5D=%3A31541&f%5B2%5D=releases_capability%3A9031&f%5B3%5D=releases_capability%3A9076&f%5B4%5D=releases_note_type%3A983&f%5B5%5D=releases_version%3A7091 | 2022-05-16T23:13:34 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.pega.com |
How to implement xunit-style set-up
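As a brief hedged sketch, the classic xunit-style hooks that pytest supports at module, class, and method level look like this:

# test_example.py
def setup_module(module):
    """Runs once before any test in this module."""

def teardown_module(module):
    """Runs once after all tests in this module."""

class TestResource:
    @classmethod
    def setup_class(cls):
        """Runs once before any test method in this class."""
        cls.resource = {"ready": True}

    @classmethod
    def teardown_class(cls):
        """Runs once after all test methods in this class."""
        cls.resource = None

    def setup_method(self, method):
        """Runs before every test method."""

    def teardown_method(self, method):
        """Runs after every test method."""

    def test_resource_ready(self):
        assert self.resource["ready"]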
GraphQL API
Defining the Schema Class
A plugin can extend NetBox's GraphQL API by registering its own schema class. By default, NetBox will attempt to import
graphql.schema from the plugin, if it exists. This path can be overridden by defining
graphql_schema on the PluginConfig instance as the dotted path to the desired Python class. This class must be a subclass of
graphene.ObjectType.
Example
# graphql.py
import graphene
from netbox.graphql.types import NetBoxObjectType
from netbox.graphql.fields import ObjectField, ObjectListField
from . import filtersets, models

class MyModelType(NetBoxObjectType):
    class Meta:
        model = models.MyModel
        fields = '__all__'
        filterset_class = filtersets.MyModelFilterSet

class MyQuery(graphene.ObjectType):
    mymodel = ObjectField(MyModelType)
    mymodel_list = ObjectListField(MyModelType)

schema = MyQuery
GraphQL Objects
NetBox provides two object type classes for use by plugins.
BaseObjectType (DjangoObjectType)
Base GraphQL object type for all NetBox objects. Restricts the model queryset to enforce object permissions.
NetBoxObjectType (ChangelogMixin, CustomFieldsMixin, JournalEntriesMixin, TagsMixin, BaseObjectType)
GraphQL type for most NetBox models. Includes support for custom fields, change logging, journaling, and tags.
GraphQL Fields
NetBox provides two field classes for use by plugins.
ObjectField (Field)
Retrieve a single object, identified by its numeric ID.
ObjectListField (DjangoListField)
Retrieve a list of objects, optionally filtered by one or more FilterSet filters. | https://docs.netbox.dev/en/stable/plugins/development/graphql-api/ | 2022-05-16T22:16:42 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.netbox.dev |
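Assuming the MyQuery class from the example above, and that MyModel has a name field (both the field and filter names here are illustrative assumptions), the resulting GraphQL API could be queried roughly like this:

query {
  mymodel(id: 123) {
    id
  }
  mymodel_list(name: "example") {
    id
    name
  }
}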
Project Migration#
Moving to the Component-Based Architecture#
In SSv5 version 5.3, the Gecko Bootloader, Zigbee, and Z-Wave changed to the Silicon Labs Configurator component-based architecture. For more information on transitioning these projects from earlier versions of SSv5 see:
Zigbee AN1301: Transitioning from Zigbee EmberZNet SDK 6.x to SDK 7.x
Gecko Bootloader AN1326: Transitioning to the Updated Gecko Bootloader in GSDK 4.0 and Higher
Z-Wave
If you are migrating one of these project types from Simplicity Studio 4 (SSv4), first follow the instructions below.
Migrate from Simplicity Studio 4 to Simplicity Studio 5#
The process to migrate a project from Simplicity Studio® 4 (SSv4) to SSv5 depends on the type of project.
If you are migrating a Bootloader, EFM32, or EFM8 project, use the Migrate Project tool. If you are migrating a Z-Wave, Bluetooth/Bluetooth Mesh, Proprietary Flex, or Zigbee project, follow the instructions in the specified documents.
Bootloader, EFM32, and EFM8 Projects#
Bootloader note: Use this procedure as a first step in migration.
Click the Tools toolbar button to open the Tools dialog. Select Migrate Projects and click OK. Select the project to be migrated and click Next.
Verify the information displayed and click Next.
Decide if you want to copy the project (recommended) and click Finish.
The project is migrated from SSv4 to SSv5.
Z-Wave Projects#
(Through Simplicity Studio 5.2) Follow the instructions in the knowledge base article Migrating a Z-Wave project from GSDK 2.7.6 to GSDK 3.0.0.
Zigbee, Flex, and Bluetooth/Bluetooth Mesh Projects#
These projects cannot be migrated using the tool. Instead, refer to the following documents:
Zigbee (Through Simplicity Studio 5.2): QSG106: Getting Started with EmberZNet PRO
Flex: AN1254: Transitioning from the v2.x to the v3.x Proprietary Flex SDK
Bluetooth: AN1255: Transitioning from the v2.x to the v3.x Bluetooth SDK
Bluetooth Mesh: AN1298: Transitioning from the v1.x to the v2.x Bluetooth Mesh SDK
If you try using the tool, in most cases the tool will point you to the documentation for the migration process. | https://docs.silabs.com/simplicity-studio-5-users-guide/5.3.0/ss-5-users-guide-getting-started/migrate-from-ss-4 | 2022-05-16T22:43:59 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.silabs.com |
For Redshift, you can publish CSV, JSON, or Avro results from S3 to Redshift.
NOTE: To publish to Redshift, results must be written first to S3.
NOTE: By default, data is published to Redshift using the public schema. To publish using a different schema, or even if you are publishing to the default schema, you must preface the table value with the name of the schema to use: MySchema.MyTable.
The description of the CLI command (default this.name)
An example of how to run this CLI command
Should the server initialize before running this command?
The inputs of the CLI command (default: {})
The name of the CLI command.
Should the server start before running this command?
An optional method to append additional information to the --help response for this CLI command
The main "do something" method for this CLI command. It is an async method.
If error is thrown in this method, it will be logged to STDERR, and the process will terminate with a non-0 exit code.
An Actionhero CLI Command. For inputs, you can provide Options (--thing=stuff) with the "Inputs" object, or define Arguments in the name of the command (
greet [name]) | https://docs.actionherojs.com/classes/CLI.html | 2022-05-16T21:57:35 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.actionherojs.com |
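A minimal sketch of such a command in TypeScript (the import path, option fields, and run signature follow common Actionhero patterns and may differ between versions):

import { CLI } from "actionhero";

export class Greet extends CLI {
  constructor() {
    super();
    this.name = "greet [name]";            // an Argument defined in the command name
    this.description = "Say hello to someone";
    this.example = "actionhero greet Evan --shout=true";
    this.inputs = { shout: { required: false, default: "false" } }; // an Option: --shout=true
    this.initialize = false;               // no server initialization needed
    this.start = false;                    // do not start the server for this command
  }

  async run({ params }: { params: { name?: string; shout?: string } }) {
    const greeting = `Hello, ${params.name ?? "stranger"}!`;
    console.log(params.shout === "true" ? greeting.toUpperCase() : greeting);
    return true;
  }
}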
Grenadine Event Manager allows you to create user roles so you and your team can plan and manage events with ease.
Introduction
Each person on your event planning team needs a username in order to log in and use Grenadine Event Management Software.
Note: Most roles require a user license to access Grenadine Event Manager except for Door Monitor and Moderator.
User roles give you the ability to delegate tasks to different team members. Each user role comes with a different level of access and capabilities.
Access and Capabilities
Below you can find details about how to create a user role as well as information about each user role. The list presented below has been ordered going from those with the least access and capabilities to those with unlimited access and capabilities.
You can scope user roles to specific events or to your whole organization. | https://docs.grenadine.co/admin-users-overview.html | 2022-05-16T21:44:14 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.grenadine.co |
New Login UI and process
New Login Process and UI: We have completely redesigned the account creation and login UI and process in this release to make it simpler and faster. We know it’s critical for attendees and participants. We reduced the number of steps to a minimum.
Magic Links: Magic links have been added to the event websites login process to simplify the attendees and participants authentication. It’s a passwordless authentication to facilitate and speed up the authentication and account creation process. The user receives an email with a link and can login directly when clicking on it.
Schedule Improvements
Better and Faster Publishing Feature: We put a lot of effort into the last months to design and develop a leaner Publishing model and architecture. The results are here; the publishing is now almost instant even with very large schedules and it provides more control and flexibility with a “Publish now” feature at the session and people session assignments levels.
Publish Now: The Session edit dialog box now contains a Publish now button allowing you to instantly publish a session. There is also the same button for people session assignments.
Visible From/Until: “Visible From” and “Visible Until” are two new fields added to the Session edit dialog box. They allow you to control the period during which the session will be visible (Event Website, Mobile App, APIs).
Additional Roles: We added a new role category called “Staff”. So now we have three roles categories: Participants (Speakers, Moderators, Performers, etc.), Audience (Attendees, Signed-up, Walk-in, etc.), and Staff (Organizers, Hosts, Volunteers, etc.). In each category we have multiple roles to match as much as possible the languages used by event organizers. The “People” tab of the “Session” detail panel allow you to filter the people grid by category.
Check-in/Scan Improvements
New version for the Check-in Mobile App on both iOS and Android: A new version of the check-in app has been released. The priorities of this new version are simplicity and reliability. You can scan QR codes (of badges or tickets) for the event or for a session. The scans will be sent to the server immediately if a wireless connection is available, if not, they will be queued up in a local database and you will be able to send them to the server at a later time. If for speed purposes or if the check-in location has no (or slow) WiFi, you can disable the admissibility validation by clicking on the shield icon in the top right corner. You can send the scans manually to the server, if there are any that are queued up in the local database, by clicking on the sync icon in the top right corner.
New Scans information in Grenadine Event Manager:
Grenadine servers
New uptime web page: You can now consult the Grenadine server uptime at the following address:
Grenadine Registrations
New timeline for registration steps: During the checkout process a new timeline at the top of the page has been added to indicate how many steps the registration process includes and what is the current step.
Other Improvements
- New registration APIs.
- New “external_id” column to the people table.
- The People grid has been replaced with a new grid in the mass emailing’s add recipients dialog.
- We now show the registration status on the My Account page
- There are three new Scheduling conflicts reports: staff, participants and audience.
WiFi
RSSI()
WiFi.RSSI() returns the signal strength of a Wi-Fi network from -127 (weak) to -1dB (strong) as an
int. Positive return values indicate an error with 1 indicating a Wi-Fi chip error and 2 indicating a time-out error.
// SYNTAX
int rssi = WiFi.RSSI();
WiFiSignal rssi = WiFi.RSSI();
Since 0.8.0
WiFi.RSSI() returns an instance of
WiFiSignal class.
// SYNTAX
WiFiSignal sig = WiFi.RSSI();
If you are passing the RSSI value as a variable argument, such as with Serial.printlnf, Log.info, snprintf, etc. make sure you add a cast:
Log.info("RSSI=%d", (int8_t) WiFi.RSSI()).
This is necessary for the compiler to correctly convert the WiFiSignal class into a number. | https://docs.particle.io/cards/firmware/wifi/rssi/ | 2022-05-16T21:05:40 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.particle.io |
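As a short sketch of working with the returned object (assuming the WiFiSignal accessors getStrength() and getQuality(), which return percentage values in recent Device OS releases):

WiFiSignal sig = WiFi.RSSI();
Log.info("RSSI=%d dB", (int8_t) WiFi.RSSI());  // cast needed for variadic logging
Log.info("strength=%.1f%%, quality=%.1f%%", sig.getStrength(), sig.getQuality());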
This article explains the application profiles and PKI profile configurations.
Starting with NSX Advanced Load Balancer release 18.2.3, this has been extended to L4 SSL/TLS applications (via the NSX Advanced Load Balancer CLI).
When necessary, you can set an environment variable so that a product suite installed in that location can use it. For example, the SM_BROKER_DEFAULT variable, the value of which is set during installation, is used in this manner.
To set an environment variable so that it can be used by the programs of an product suite, add it to the runcmd_env.sh file, which is located in the BASEDIR/smarts/local/conf directory of that product suite.
Use sm_edit to open the runcmd_env.sh file. Invoke sm_edit from the BASEDIR/smarts/bin directory:
sm_edit conf/runcmd_env.sh
Add the environment variable and its value by using the following syntax:
SM_AUTHORITY="<STD>"
Save the runcmd_env.sh file and close it.
Any program within a product suite started after this point will use the applicable environment variables specified in the runcmd_env.sh file. Programs that are already running need to be restarted for any new environment variable to take effect.
The credentials are the collection configuration settings, for example, user names and passwords, that the adapters use to authenticate the connection on the external data sources. Other credentials can include values such as domain names, pass phrases, or proxy credentials. You can configure for one or more solutions to connect to data sources as you manage your changing environment.
Where You Find Credentials
From the left menu, click the Accounts tab, then click the Credentials link on the upper right side.
UnitCell
This page gives hints on how to specify the unit cell with the ABINIT package.
Introduction¶
ABINIT needs three dimensioned non-coplanar vectors, forming the unit cell, to set up the real space lattice.
An initial set of three vectors, specified in real space by rprim or as unit vectors with angles angdeg, is dimensioned in a second step using scaling factors as specified by acell or by rescaling their cartesian coordinates, as specified by scalecart. Internally, only the final result, %rprimd, matters. The most detailed explanation can be found by looking at rprim.
Note that ABINIT expects the mixed product of the three vectors (R1xR2).R3 to be positive. If it is not the case, exchange two of them with the associated reduced coordinates. More information about the way the real space lattice, the reciprocal lattice, and symmetries are defined in ABINIT can be found here.
Also note that the Abinit space group routines use strict tolerances by default for the recognition of the symmetry operations. This means that lattice vectors and atomic positions must be given with enough figures so that the code can detect the correct space group. This is especially true in the case of hexagonal or rhombohedral lattices. Remember that one can specify rational numbers with the syntax:
xred 1/3 1/3 1/3
instead of the less precise:
xred 0.33333 0.33333 0.33333
Using acell and angdeg instead of %rprimd may solve possible issues with the space group recognition.
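For instance, a hedged illustration of two equivalent styles for an FCC primitive cell (numerical values are only an example, roughly corresponding to silicon with a ≈ 10.26 Bohr):

# Explicit primitive vectors, scaled by acell
acell 3*10.26
rprim  0.0  0.5  0.5
       0.5  0.0  0.5
       0.5  0.5  0.0

# The same cell via vector lengths and angles
acell 3*7.2549
angdeg 60 60 60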
If your input parameters correspond to a high-symmetry structure but the numerical values at hand
are noisy, you may want to increase the value of tolsym in the input file
so that Abinit will resymmetrize automatically the input parameters.
Finally, one can use the structure input variable to initialize the crystalline geometry from an external file. For instance, it is possible to read the crystalline structure from an external netcdf file or other formats such as POSCAR without having to specify the value of natom, ntypat, typat and znucl.
Smart symmetriser
ABINIT also has a smart symmetriser capability, used when spgroup!=0 and brvltt=-1. In this case, the CONVENTIONAL unit cell must be input through the usual input variables rprim, angdeg, acell and/or scalecart. ABINIT will fold the conventional cell to the primitive cell, and also generate all the nuclei positions from the irreducible ones. See topic_SmartSymm.
Related Input Variables¶
basic:
useful:
- angdeg ANGles in DEGrees
- brvltt BRaVais LaTTice type
- chkprim CHecK whether the cell is PRIMitive
- scalecart SCALE CARTesian coordinates
- spgroup SPace GROUP number
- supercell_latt SUPERCELL LATTice
expert:
- expert_user EXPERTise of the USER
internal:
Selected Input Files¶
tutorial:
v1:
v2:
v3:
v5:
v6:
v9:
Tutorials¶
- Fourth basic tutorial Determination of the surface energy of aluminum (100): changing the orientation of the unit cell. | https://docs.abinit.org/topics/UnitCell/ | 2022-05-16T22:23:32 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.abinit.org |
Function:
a!barChartField()
Displays numerical data as horizontal bars. Use a bar chart to display several values at the same point in time.
See also: Line Chart, Column Chart, Pie Chart, Chart Series, UX Charts Best Practices, Chart Color Schemes
There are two ways to configure a bar chart.
PERCENT_TO_TOTAL also shows a single bar for each category, but bars have a total height of 100%. Each value shows the percent contribution to the total within each category.
[Series #], with # as the index number of the data value. For example, [Series 1].
true.
When the "AUTO" height is used, the chart will show as Medium height with 20 or fewer categories. If more categories are provided, the chart will expand in height to ensure categories are not cut off.
DataStax Agent port setting conflict
Ensure there are no conflicts with port 7199 used by the DataStax Agent.
If there are problems with OpsCenter, check for conflicts in port settings. The DataStax Agent uses port 7199 by default. If you have not changed the default port, check that Cassandra or another process on the node is not set up to use port 7199.
If you set the DataStax Agent port to a host name instead of an IP address, the DNS provider must be online to resolve the host name. If the DNS provider is not online, expect some intermittent problems. | https://docs.datastax.com/en/dse-trblshoot/doc/troubleshooting/opsc/opscTroubleshootingAgentPortConflict_r.html | 2022-05-16T21:45:00 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.datastax.com |
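To see which process is currently bound to the port (generic OS commands, not DataStax tools), something like the following can be used on Linux:

sudo lsof -i :7199
# or
sudo netstat -tulpn | grep 7199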
Usage
What are the expected performance figures for the fabric?
The performance of any chain network depends on several factors: proximity of the validating nodes, number of validators, encryption method, transaction message size, security level set, business logic running, and the consensus algorithm deployed, among others.
The current performance goal for the fabric is to achieve 100,000 transactions per second in a standard production environment of about 15 validating nodes running in close proximity. The team is committed to continuously improving the performance and the scalability of the system.
Do I have to own a validating node to transact on a chain network?
No. You can still transact on a chain network by owning a non-validating node (NV-node).
Although transactions initiated by NV-nodes will eventually be forwarded to their validating peers for consensus processing, NV-nodes establish their own connections to the membership service module and can therefore package transactions independently. This allows NV-node owners to independently register and manage certificates, a powerful feature that empowers NV-node owners to create custom-built applications for their clients while managing their client certificates.
In addition, NV-nodes retain full copies of the ledger, enabling local queries of the ledger data.
What does the error string “state may be inconsistent, cannot query” as a query result mean?
Sometimes, a validating peer will be out of sync with the rest of the network. Although determining this condition is not always possible, validating peers make a best effort determination to detect it, and internally mark themselves as out of date.
When under this condition, rather than reply with out of date or potentially incorrect data, the peer will reply to chaincode queries with the error string “state may be inconsistent, cannot query”.
In the future, more sophisticated reporting mechanisms may be introduced such as returning the stale value and a flag that the value is stale. | https://openblockchain.readthedocs.io/en/latest/FAQ/usage_FAQ/ | 2022-05-16T21:42:50 | CC-MAIN-2022-21 | 1652662512249.16 | [] | openblockchain.readthedocs.io |
SOLIDWORKS is a 3-D mechanical CAD (computer-aided design) program that runs on Microsoft Windows and is being developed by Dassault Systèmes SOLIDWORKS Corp., a subsidiary of Dassault Systèmes, S. A. (Vélizy, France).
When opening SOLIDWORKS assembly files, the referenced part files must be in the same folder as the assembly file. Rhino 6 reads SOLIDWORKS file formats up to version 2019.
From the File menu, click Open, Insert, Import, or Worksession > Attach.
SOLIDWORKS Import Options
Imports parts of a SOLIDWORKS assembly as linked blocks. When disabled, parts are imported as objects.
Rotates the model on import so that the z direction is up.
Saves the current settings and turns off the dialog display.
See also: ResetMessageBoxes command.
Wikipedia: SOLIDWORKS
SOLIDWORKS
Percent and Conditional Routing
You can also use the Target Block for:
- Percentage routing or for
- Conditional routing using the threshold functions
Percentage Routing
When using the Target Block, If percentage is selected for the Statistics Order property, the Targets dialog box, which opens from the Targets property, displays a Weight column as shown below.
Here you can enter numbers to specify percentage allocation for each target.
Conditional Routing
Universal Routing Server's threshold functions can be used in the Target block for conditional routing, such as "share agents by service level agreement routing" as described in the Universal Routing 8.1 Routing Application Configuration Guide. The Targets dialog box, which opens from the Targets property, displays a Threshold column. Click the button under Threshold to open Expression Builder where you can create a threshold expression.
All content with label client+demo+gridfs+hotrod+infinispan+installation+xsd.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, archetype, lock_striping, jbossas, nexus, guide, schema, listener,
cache, amazon, s3, memcached, grid, test, jcache, api, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, s, hibernate, aws, getting, interface, clustering, setup, eviction, out_of_memory, concurrency, jboss_cache, examples, import, events, hash_function, configuration,, cache_server, scala, command-line, migration, filesystem, jpa, tx, gui_demo, eventing, shell, client_server, testng, murmurhash, infinispan_user_guide, webdav, snapshot, repeatable_read, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, lucene, jgroups, locking, rest, hot_rod
more »
( - client, - demo, - gridfs, - hotrod, - infinispan, - installation, - xsd )
Start System Monitor (Windows)
SQL Server
Azure SQL Database
Azure SQL Data Warehouse
Parallel Data Warehouse
On the Start menu, point to Run, type perfmon in the Run dialog box, and then select OK.
Read the Docs Commercial Features¶
Read the Docs has a community solution for open source projects on readthedocs.org and we offer commercial documentation building and hosting on readthedocs.com. Features in this section are specific to our commercial offering.
- Private repositories and private documentation
- The largest difference between the community solution and our commercial offering is the ability to connect to private repositories, to restrict documentation access to certain users, or to share private documentation via private hyperlinks.
- Additional build resources
- Do you have a complicated build process that uses large amounts of CPU, memory, disk, or networking resources? Our commercial offering has much higher default resources that result in faster documentation build times and we can increase it further for very demanding projects.
- Priority support
- We have a dedicated support team that responds to support requests during business hours. If you need a quick turnaround, please signup for readthedocs.com.
- Advertising-free
- All commercially hosted documentation is always ad-free.
Additional commercial features | https://docs.readthedocs.io/en/latest/commercial/index.html | 2019-06-16T06:57:14 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.readthedocs.io |
Your documents are stored in folders. Folders are paginated and you can move to the next or previous page using the navigation menu.
The folder tree is expanded by default.
Folders
Use folders to organize your documents better. The root is the
pool folder, which is created by default with each new project.
Create a new folder
Click on the folder you want to be the parent of the new folder. Click on the folder action
Add new, write the name of the new folder and press the key ↵.
Rename a folder
Click on the folder you want to rename. Click on the folder action
Rename, write the new name of the new folder and press ↵.
Remove a folder
Click on the folder you want to remove. Click on the folder action
Remove. Please remember that all the documents stored in this folder will be also removed.
Upload text
In order to upload a new document, please select the folder you want to upload documents to and click on
. Once clicked, a modal menu is displayed.
The different formats accepted are described here: Input formats
Upload box where you can select how to import text
Upload files with predefined document labels
If you have document labels defined in your project, you can pre-annotate these labels for the document you want to upload. This is very handy if you have metadata (e.g. time stamp, type of document, industry, severity, etc.) that you want to have readily available for your annotators or your ML model.
For example, let's say your model uses Webhooks to generate predictions once a document is uploaded. If the user has pre-annotated this document before, your model has valuable information to generate these predictions based on the pre-annotations. Language can significantly vary between departments, contexts, industries, time, etc., so you have an opportunity here to pick up this info and generate better predictions accordingly.
This option in the user interface is only available when you upload files. Expand the Advanced menu and set the document labels.
Using the API you can automatically pre-annotate documents uploading together the document/text and the
ann.json file with the annotations.
Predefining document labels before uploading the file
Upload pre-annotated documents
If you have pre-annotated documents, you can upload them directly to tagtog. You will need these two files:
The file with the text content. The format of this file should be one of our supported input formats.
The file with the annotations. From the GUI, the only supported format for annotations is the
ann.json.
Please remember to name both files the same, except for the extension. For example:
mydoc.pdf and
mydoc.ann.json. You can upload multiple pre-annotated documents at the same time. For example, 5 text files and 5 annotation files.
Please check the API for more options, such as replacing existing annotations.
Remove a document
You can remove a document on the web editor view or in the document list view by clicking on the remove button.
To remove documents in batch, you can use the search bar or the API for batch removal.
Manually confirmed documents
In the document list view, each document has a check mark; when green, it means the document is confirmed.
Manually confirmed documents are those with all annotations complete. Depending on the project, it can also mean that the annotations have been reviewed by a human, and they can be used as training data.
Confirming documents is helpful to keep track of the progress of the annotation tasks.
array(['/assets/img/preann-doclabels.png', None], dtype=object)] | docs.tagtog.net |
Tutorial step 4: Adjust your flipper power
We casually mentioned in the previous step that MPF uses a very low default power setting for coils–mainly because we don’t want to risk blowing apart some 40-year-old coil mechanism with a power setting that’s too high. (Ask us how we know this! :)
So at this step in the tutorial, we’re going to look at how you can adjust and fine-tune the power of your flipper coils. The good news is that everything you learn here will 100% apply to all the other coils in your machine (slingshots, pop bumpers, ball ejects, the knocker, drop target resets, etc.)
1. Adjust coil pulse times
Modern pinball controllers that MPF uses have the ability to precisely control how long (in milliseconds) the full power is applied to a coil. (Longer time = more power.) This is called the “pulse time” of a coil, as it controls how long the coil is pulsed when it’s fired.
You can set the default pulse time for each coil in
the coil’s entry in the
coils: section of your config file. If you
don’t specify a time for a particular coil, then MPF will use a default
pulse time of 10ms.
So in the last step, we got your flipper coils working, but as they are now, they each use 10ms for their pulse times. (Remember for flippers we’re talking about the strong initial pulse to move the flipper from the down to up position. Then after that pulse is over, if you have dual-wound coils, the main winding is shut off while the hold winding stays on, and if you have single wound coils the pulse time specifies how long the coil is on solid for before it goes to the on/off pwm switching.)
So right now your flippers have a pulse time of 10ms. But what if that’s too strong? In that case you risk breaking something. Or if your coil is too weak, then your ball will be too slow or not be able to make it to the top of the playfield or up all your ramps. So now you have to play with different settings to see what “feels” right.
Unfortunately there’s no universal pulse time setting that will work on every machine. It depends on how many windings your coils have, how worn out your coils are, how clean your coil sleeves are, how tight your flipper bats are to the playfield, how free-moving your linkages are, and how much voltage you’re using. Some machines have coil pulse times set really low, like 12 or 14ms. Others might be 60 or 70ms. Our 1974 Big Shot machine has several coils with pulse times over 100ms. It all really depends.
You adjust the pulse time for each coil by adding a
default_pulse_ms: setting to
the coil’s entry in the
coils: section of your config file. (Notice
that you make this change in the
coils: section of your config, not
the
flippers: section.) So let’s try changing your flipper coils
from the default of 10ms to 20ms. Change your config file so it looks
like this:
coils:
  c_flipper_left_main:
    number: 00
    default_pulse_ms: 20
  c_flipper_left_hold:
    number: 01
    allow_enable: true
  c_flipper_right_main:
    number: 02
    default_pulse_ms: 20
  c_flipper_right_hold:
    number: 03
    allow_enable: true
Notice that we only added
default_pulse_ms: entries to the two main coils,
since the hold coils are never pulsed so it doesn’t matter what their
pulse times are. Now play your game and see how it feels. Then keep on
adjusting the
default_pulse_ms: values up or down until your flippers
feel right. In the future we’ll create a coil test tool that makes it
easy to dial-in your settings without having to manually change the
config file and re-run your game, but we don’t have that yet. You
might find that you have to adjust this
default_pulse_ms: setting down the
road too. If you have a blank playfield then you might think that your
coils are fine where they are, but once you add some ramps you might
realize it’s too hard to make a ramp shot and you have to increase the
power a bit. Later on when you have a real game, you can even expose
these pulse settings to operators via the service menu.
2. Adjusting coil "hold" strength
If you’re using single-wound flipper coils, you should also take a
look at the
default_hold_power: values. (Again, to be clear, you only have
to do this if your flippers have a single winding. If you have dual-wound
coils then the hold winding is designed to be held on for long
periods of time so you can safely keep it on full strength solid and
you can skip to the next step.)
We don’t have any good guidance for
what your
default_hold_power: values should be. Really you can just start
with a value of 0.125 or 0.25 and then keep increasing it (0.0 for 0% power to 1.0 for 100% power)
until your flipper holds are strong enough not to break their
hold when a ball hits them. Some hardware platforms have additional
options for fine-tuning the hold power if this setting
results in weird buzzing sounds or doesn't feel right. See the coils:
section of each hardware platform’s How To guide for details for your
platform.
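As a rough sketch (values are examples only, for a single-wound flipper coil; see the coils: documentation for your hardware platform), the relevant entry could look like this:

coils:
  c_flipper_left_main:
    number: 00
    default_pulse_ms: 20
    default_hold_power: 0.25
    allow_enable: true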
By the way there are a lot of other settings you can configure for your flippers. (As detailed in the flippers: section of the config file reference.) They’re not too important now, but we wanted to at least look at the power settings to make sure you don’t get too far into this tutorial with a risk of burning them up.
3. Check out the complete config.yaml file so far
The complete config file for step 4 should be at
/mpf-examples/tutorial/config/step4.yaml.
You can run this file directly by switching to that folder and then running the following command:
C:\mpf-examples\tutorial>mpf -c step4 | http://docs.missionpinball.org/en/latest/tutorial/4_adjust_flipper_power.html | 2018-05-20T12:14:11 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.missionpinball.org |
Overview
As a RightScale customer, I'd like to know if RightScale will support Cloud Networks within the RackSpace Open Cloud?
Resolution
RightScale does have plans to fully support RackSpace Open Cloud Networks, however we have no time table or estimate on a feature delivery date. If you wish to use Cloud Networks, it's also important to remember that they are not available on the RackSpace First Generation/Legacy cloud. | http://docs.rightscale.com/faq/clouds/rackspace/Will_Rightscale_support_Rackspace_Cloud_Networks_on_Rackspace_Open_Cloud.html | 2018-05-20T12:00:55 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.rightscale.com |
When the NetScaler appliance communicates with the physical servers or peer devices, by default, it uses one of its own IP addresses as the source IP. The appliance maintains a pool of mapped IP addresses (MIPs) and subnet IP addresses (SNIPs), and selects an IP address from this pool to use as the source IP address for a connection to the physical server. The decision of whether to select a MIP or a SNIP depends on the subnet in which the physical server resides.
As an alternative to USIP mode, you have the option of inserting the client's IP address (CIP) in the request header of the server-side connection for an application server that needs the client's IP address.
For more information about the Use Proxy Port option, see "Using the Client Port When Connecting to the Server."
The following figure shows how the NetScaler uses IP addresses in USIP mode.
At the command prompt, type one of the following commands:
At the command prompt, type:
Example
set service Service-HTTP-1 -usip YES | https://docs.citrix.com/ko-kr/netscaler/11-1/networking/ip-addressing/enabling-use-source-ip-mode.html | 2018-05-20T12:26:07 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.citrix.com |
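If you want to enable USIP mode globally rather than for a single service, the standard NetScaler mode command should apply (a hedged example, since the global commands are not shown above):

enable ns mode USIP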
Write business logic using C# and X++ source code
The primary goal of this tutorial is to illustrate the interoperability between C# and X++ in Microsoft Dynamics AX. In this tutorial, you’ll write business logic in C# source code and in X++ source code.
In this tutorial, you’ll write business logic in C# source code and in X++ source code. You'll get experience with the following:
- New tools in Visual Studio.
- The handling of events in C#.
- The use of Language Integrated Query (LINQ) in C# to fetch data.
Prerequisite
This tutorial requires that you access the Dynamics AX environment using Remote Desktop, and be provisioned as an administrator on the Dynamics AX instance. Note: Debugging support for the C# project does not work if the Load symbols only for items in the solution check box is selected. Since this option is selected by default, it must be changed prior to running the lab. In Visual Studio, click Dynamics AX > Options, and clear the Load symbols only for items in the solution check box.
Scenario
Too many cars have been rented to drivers who have a history of unsafe driving habits. The Fleet Management rental company needs to check driving records from external sources. Upper management has decided to subscribe to a service that is hosted by the Department of Transportation (DOT), which is the legal entity that manages drivers’ licenses and associated information. This service retrieves the number of citations for the given unique license number. It’s not easy to call external services directly from X++ source code. Visual Studio has tools for generating the “code-behind” (in C#) that calls the services, and these tools make the development effort easy. The obvious choice would be to leverage Visual Studio to write the code. However, in this tutorial your code won’t actually call an external service, because the logistics are beyond the scope of the simple lab environment. Instead, we provide a mock implementation of a service call. The goal of this tutorial is to teach an understanding of the current state of C# and of interoperability with X++.
Create a C# class library
Dynamics AX enables you to create a reference from a Dynamics AX project to the C# class library, or to any other type of C# project that generates an assembly. Such references affect the build order. The C# project is built before the Dynamics AX project that references and depends on it. The Dynamics AX infrastructure understands the references, and will make sure that the C# assemblies are deployed correctly to the cloud before execution. Follow these steps to create a C# class library in the Fleet Management solution:
- In Visual Studio, click File > Open project/solution.
- In the Open Project dialog box, in the File name text box, type the following path, and then press Enter - C:\users\public\desktop\FleetManagement.
- Select the file named FleetManagement.sln, and then click Open. If the solution file is not on your computer, the steps to create it are listed in Tutorial: Create a Fleet Management solution file out of the Fleet Management models in the AOT.
- Right-click the FleetManagement solution, and then click Add > New Project. The Add New Project dialog is displayed.
- In the left pane, click Visual C#, and then in the middle pane, click Class Library.
- At the bottom in the Name text box, type the name DriversLicenseEvaluator.
- In the Location text box, type the following directory path: C:\users\public\desktop\FleetManagement.
- Verify that your project is set to “.NET Framework 4.5” in the drop-down list at the top.
- Click OK to create the project.
- In Solution Explorer, under the DriversLicenseEvaluator project, right-click the file name Class1.cs and rename it DriversLicenseChecker.cs.
- Click Yes, when prompted to rename all references to the class.
Write a C# method named CheckDriversLicense
In this section, you add C# code for a method named CheckDriversLicense. The method must validate the driver’s license. To do this, the method must retrieve the driver’s license number, which is stored in the customer table. The method is given the RecId value for the customer record that contains the information required by the method. Your C# code uses the Dynamics AX LINQ provider to read from the customer table. For LINQ to work, you must first add references pointing to the LINQ assemblies. You add these references to the C# project named DriversLicenseEvaluator.
In Solution Explorer, expand the DriversLicenseEvaluator project node, right-click References, and then click Add Reference.
Click Browse and then enter the following path: C:\Packages\bin
- In some environments, the location of the packages folder is not on the c: drive.
In the File name field, type the pattern *LINQ*.dll and then press Enter. You'll see a list of assemblies with the name LINQ in them. From that list, select the following files, and then click Add:
- Microsoft.Dynamics.AX.Framework.Linq.Data.dll
- Microsoft.Dynamics.AX.Framework.Linq.Data.Interface.dll
- Microsoft.Dynamics.AX.Framework.Linq.Data.Msil.dll
You must also add the support assemblies that contain the Common type that you'll use in the code below. Click Browse again, and then type the following file name into the field:
- Microsoft.Dynamics.AX.Xpp.Support.dll
- Microsoft.Dynamics.AX.Data.Core.dll
Click Add, and then click OK. The assemblies now appear under the references node in the project.
Repeat the Add Reference process, except this time, add the following DLL file from the indicated path:
- Dynamics.Ax.FleetManagement.dll, in C:\Packages\FleetManagement\bin
In Solution Explorer, select the Dynamics.Ax.FleetManagement.dll reference and set the property Copy Local = False.
In Solution Explorer, right-click DriversLicenseChecker.cs, and then click View Code.
Add the following three using statements to the DriversLicenseEvaluator namespace, to reduce the verbosity of code that references external classes:
using Dynamics.AX.Application;
using Microsoft.Dynamics.AX.Framework.Linq.Data;
using Microsoft.Dynamics.Ax.Xpp;
Your C# code should now look something like the following example.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace DriversLicenseEvaluator
{
    using Dynamics.AX.Application;
    using Microsoft.Dynamics.AX.Framework.Linq.Data;
    using Microsoft.Dynamics.Ax.Xpp;

    public class DriversLicenseChecker
    {
    }
}
Replace the DriversLicenseChecker class with the following code, which adds the CheckDriversLicense method. Tip: If you prefer, you can paste in the code from the DriversLicenseChecker.cs file in the C:\FMLab directory.
public class DriversLicenseChecker
{
    public static bool CheckDriversLicense(long customerId)
    {
        // Use LINQ to get back to the information about the license number
        FMCustomer customer;
        QueryProvider provider = new AXQueryProvider(null);
        var customers = new QueryCollection<FMCustomer>(provider);

        // Build the query (but do not execute it)
        var query = from c in customers where c.RecId == customerId select c;

        // Execute the query:
        customer = query.FirstOrDefault();

        if (customer == null)
        {
            throw new ArgumentException("The customerId does not designate a customer");
        }

        if (string.IsNullOrEmpty(customer.DriverLicense))
        {
            // No driver's license was recorded. Veto the rental.
            return false;
        }

        // Call the DOT web service to validate the license number.
        // This is not practical for this lab, because all the service providers
        // charge for this service. Instead, just assume that any license number
        // that contains the sequence "89" is valid.
        // In the demo data, this is true for Adrian Lannin,
        // but not for Phil Spencer.
        return customer.DriverLicense.Contains("89");
    }
}
Understand the LINQ code
Before proceeding with more C# code, verify that you understand the LINQ code you just added. More details about LINQ are provided in the Technical Concepts Guide, so only the basics are described below.
- First, a provider is created. It provides access to all the Microsoft Dynamics AX tables.
- Next, a collection of all customers is created. The customer of interest is retrieved from this collection.
- Then, a query is created with a where clause that designates the requested customer by RecId.
- The call to the FirstOrDefault method forces execution of the query.
- The method assigns the single matching customer to the customer variable. (Null is assigned if the RecId value matches no customer.)
- Finally, the customer data is tested to see if the associated driver's license is valid. (Does the license contain "89"?)
Handle the event when a record is added
The following subsections provide the following:
- Explain the upcoming code items and their inter-relationships.
- Show the code for an event handler.
- Associate the handler with the event occurrences.
Preparatory overview
When an attempt is made to add a record to a table, the OnValidatedWrite event is raised by Dynamics AX before the record is written to the database. You want your CheckDriversLicense method to be called each time the OnValidatedWrite event is raised for the FMRental table. To do this, you now need to write a C# method that is invoked by the event, and which calls your CheckDriversLicense method. In other words, you need to write an event handler that calls your CheckDriversLicense method. The event handler method receives a parameter of the type DataEventArgs. The event handler can set a value in the DataEventArgs structure to accept or reject the record. After you write your event handler method, you connect it to the event by assigning, or adding, it to the OnValidatedWrite delegate that is a member of the FMRental table. You write this assignment in the init method of the data source of the FMRental form. This assignment to a delegate might seem odd. After all, we're modifying existing code (FMRental) to add handlers, which contradicts the main value proposition of loose coupling that eventing is supposed to offer. This assignment step is temporary. We'll eventually have the same story in C# as we do in X++, where an attribute is applied to the C# event handler as the mechanism that ties the delegate to the handler. Note: The data source init method is called when the form is opened. Technically, the init method is inherited from the FormDataSource class.
Write an event handler method
In C#, write the following event handler method and add it to the DriversLicenseChecker class.
public static void OnValidatedWriteHandler(Common table, DataEventArgs args)
{
    var validateEventArgs = args as ValidateEventArgs;

    // Do not check if already rejected.
    if (validateEventArgs.parmValidateResult())
    {
        var rentalTable = table as FMRental;
        if (rentalTable == null)
        {
            throw new ArgumentNullException("table");
        }
        var result = CheckDriversLicense(rentalTable.Customer);
        validateEventArgs.parmValidateResult(result);
    }
}
Build the DriversLicenseEvaluator project by right-clicking the project node and then clicking Build.
Add a reference pointing to the DriversLicenseEvaluator project
Create a reference from the X++ project named FleetManagement Migrated to the C# project named DriversLicenseEvaluator, by completing the following steps.
- Right-click the FleetManagement Migrated project, click Add, and then click Reference. Select the row for the DriversLicenseEvaluator project in the Projects references tab, and then click OK.
- Under the FleetManagement Migrated project, expand the References node, and there you see new reference to the DriversLicenseEvaluator project.
Build sequence
Your C# DriversLicenseEvaluator project will be built before the FleetManagement Migrated project is built. This is because the added reference makes the Fleet project dependent on your project. The build sequence is easy to see if you right-click the FleetManagement solution, click Project Build Order, and then click Dependencies.
Add your event handler to a delegate
In Solution Explorer, navigate to FleetManagement Migrated > User Interface > Forms > FMRental.
Double-click the FMRental form. The Visual Studio designer opens to the form.
Expand the Data Sources node to show the data sources used in the form.
Expand the FMRental data source, and then the Methods node to list the methods defined on the data source.
Right-click Methods, and then click Override > init. The list displays all of the methods on the data source that haven't yet been overridden. When you select init, this opens the file FMRental.xpp in the X++ code editor with the cursor near the template for the init method.
At the end of the init method body, use the += operator to add one assignment to a delegate.
FMRental.onValidatedWrite += eventhandler (DriversLicenseEvaluator.DriversLicenseChecker::OnValidatedWriteHandler);
Save the form, and then build the entire solution.
Final test
In this section, you set breakpoints and run the Fleet application under the Visual Studio debugger. This enables you to prove the following:
- Your LINQ query runs when the OnValidatedWrite event is raised.
- Your LINQ query successfully retrieves the data for one customer.
Prepare the test
In Solution Explorer, navigate to FleetManagement Migrated > User Interface > Forms.
Right-click FMRental, and then click Set as Startup Object.
In the code editor for DriversLicenseChecker.cs, find the OnValidatedWriteHandler method. Find the following line of code.
var result = CheckDriversLicense(rentalTable.Customer);
Set a breakpoint on that line of code. You do this by clicking in the left margin at that line. A red dot displays when the breakpoint is set.
In the CheckDriversLicense method, set another breakpoint at the following line.
if (string.IsNullOrEmpty(customer.DriverLicense))
Run the test
For this test, we'll be debugging the C# code that we've written. To do this, we need to inform Visual Studio to load the symbols for the assembly that contains the C# code. Go to Dynamics AX > Options > Debugging and verify that the Load symbols only for items in the solution check box is not selected.
Tip: If you're unable to get to the breakpoint in the C# code, you may want to open the Modules window (Debug > Windows > Modules), find the C# module and load it explicitly.
- Click Debug > Start Debugging. This starts the Fleet application, and a browser window with the FMRental form is displayed.
- Click on any Vehicle rental ID to view details.
- Click the Edit icon near the top left of the form. The icon looks like a pencil.
- In the To field of the Rental section, increase the date by one day.
- Click the Save button. This causes the focus to shift to Visual Studio at your highlighted breakpoint. This line shows that the OnValidatedWrite event was raised, and that your handler method was called.
- Press F5 to continue the run. Instantly, your other breakpoint becomes highlighted.
- Find the variable customer a few lines above your breakpoint.
- Right-click the customer variable, and then click QuickWatch. Any long integer value proves that your LINQ query worked.
- Press F5 to complete the Save operation.
ConfigurationSetting Method - RemoveURL
Removes a URL reserved for use by the report server.
UrlString does not include the Virtual Directory name – the SetVirtualDirectory Method (WMI MSReportServer_ConfigurationSetting) method is provided for that purpose.
Before calling the ReserveURL method, you must supply a value for the VirtualDirectory configuration property for the Application parameter. Use the SetVirtualDirectory Method (WMI MSReportServer_ConfigurationSetting) method to set the VirtualDirectory property.
If an SSL Certificate was provisioned by Reporting Services and no other URLs need it, it is removed.
This method causes all non-configuration app domains to hard recycle and stop during this operation; app domains are restarted after this operation completes.
Requirements
Namespace: root\Microsoft\SqlServer\ReportServer\<InstanceName>\v13\Admin
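A typical way to invoke this method is from Windows PowerShell through WMI. The sketch below is illustrative only: the instance name, application name, URL reservation, and LCID are placeholders, and the property used to read the result may differ in your environment.

# Connect to the Reporting Services WMI provider (instance name is a placeholder).
$ns = "root\Microsoft\SqlServer\ReportServer\RS_MSSQLSERVER\v13\Admin"
$rsconfig = Get-WmiObject -Namespace $ns -Class MSReportServer_ConfigurationSetting

# Remove a previously reserved URL for the Report Server Web service.
# Parameters: Application, UrlString, Lcid (all placeholder values here).
$result = $rsconfig.RemoveURL("ReportServerWebService", "http://+:80", 1033)
$result.HRESULT   # 0 indicates success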
See Also
MSReportServer_ConfigurationSetting Members | https://docs.microsoft.com/en-us/sql/reporting-services/wmi-provider-library-reference/configurationsetting-method-removeurl?view=sql-server-2017 | 2018-05-20T12:27:15 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.microsoft.com |
Leadpages
This documentation will show you how to take your promotion and set it up on Leadpages.
1) Using the Drag & Drop Editor
Using the drag and drop editor in Leadpages choose your template.
2) Drag and Drop < > HTML
Once you are inside of the editor, drag and drop the < > HTML element anywhere you want into the page.
3) Paste in ViralSweep code
Next, copy your ViralSweep code, and then paste it into the < > HTML element on your template.
4) You're finished!
That's it, the promotion is now installed on your Leadpages template.
Still having trouble with adding your campaign to Leadpages? Simply click the support or live chat icon to get in touch with us. | http://docs.viralsweep.com/en/articles/85111-leadpages | 2018-05-20T11:27:25 | CC-MAIN-2018-22 | 1526794863410.22 | [array(['https://s3.amazonaws.com/elevio-article-assets/56c3521d59cf1/5880f89be70ab_screen-shot-2017-01-19-at-71450-am.png',
None], dtype=object)
array(['https://s3.amazonaws.com/elevio-article-assets/56c3521d59cf1/5880f8d848247_screen-shot-2017-01-19-at-71539-am.png',
None], dtype=object)
array(['https://s3.amazonaws.com/elevio-article-assets/56c3521d59cf1/5880f96c31823_screen-shot-2017-01-19-at-71608-am.png',
None], dtype=object)
array(['https://s3.amazonaws.com/elevio-article-assets/56c3521d59cf1/5880fcefb6c49_screen-shot-2017-01-19-at-71631-am.png',
None], dtype=object) ] | docs.viralsweep.com |
Testing
Writing and running tests in Perl 6
Testing code is an integral part of software development. Tests provide automated, repeatable verifications of code behaviour, and ensures your code works as expected.
In Perl 6, the Test module provides a testing framework. Perl 6's official spectest suite uses
Test.
The testing functions emit output conforming to the Test Anything Protocol. In general, they are used in sink context:
ok check-name($meta, :relaxed-name), "name has a hyphen rather than '::'"
but all functions also return a Boolean indicating whether the test was successful or not, which can be used to print a message if the test fails:
ok check-name($meta, :relaxed-name), "name has a hyphen rather than '::'"
    or diag "\nTo use hyphen in name, pass :relaxed-name to meta-ok\n";
Reference
plan— declare how many tests you expect to run
done-testing— indicate the test suite has finished (used instead of
plan)
bail-out— abort the test suite by exiting (considered a failure)
todo— mark a given number of tests as TODO
skip— mark a given number of tests as SKIP
skip-rest— mark all of the remaining tests as SKIP
diag— display a diagnostic message
subtest— declare a subtest
pass— proclaim one test as passed
flunk— proclaim one test as failed
-
-
cmp-ok— compare with a value using the given operator
is— is a given value (
cmpsemantics)
is-deeply— is a given value (
eqvsemantics)
isnt— is not a given value (
cmpsemantics)
is-approx— approximately equals to a numerical value
like— matches a given regex
unlike— does not match a given regex
use-ok— a module can be
used
isa-ok— an object
.isagiven type
does-ok— an object
doesa given role
can-ok— an object
.^cana method
dies-ok— a given block of code dies
lives-ok— a given block of code does not die
eval-dies-ok— a given string of code dies when evaled
eval-lives-ok— a given string of code does not die when evaled
throws-like— a given block of code (or string to be evaled) throws a given exception
fails-like— a given block of code (or string to be evaled) fails with a given exception
Writing tests
As with any Perl project, the tests live under the
t directory in the project's base directory.
A typical test file looks something like this:
use v6.c;
use Test;       # a Standard module included with Rakudo
use lib 'lib';

plan $num-tests;

# .... tests

done-testing;   # optional with 'plan'
We ensure that we're using Perl 6, via the
use v6.c pragma, then we load the
Test module and specify where our libraries are. We then specify how many tests we plan to run (such that the testing framework can tell us if more or fewer tests were run than we expected) and when finished with the tests, we use done-testing to tell the framework we are done.
Thread Safety
Note that routines in
Test module are not thread-safe. This means you should not attempt to use the testing routines in multiple threads simultaneously, as the TAP output might come out of order and confuse the program interpreting it.
There are no current plans to make it thread safe. If threaded-testing is crucial to you, you may find some suitable ecosystem modules to use instead of
Test for your testing needs.
Running tests
Tests can be run individually by specifying the test filename on the command line:
$ perl6 t/test-filename.t
Or via the prove command from Perl 5, where
perl6 is specified as the executable that runs the tests:
$ prove --exec perl6 -r t
To abort the test suite upon first failure, set the
PERL6_TEST_DIE_ON_FAIL environmental variable:
$ PERL6_TEST_DIE_ON_FAIL=1 perl6 t/test-filename.t
The same variable can be used within the test file. Set it before loading the
Test module:
BEGIN %*ENV<PERL6_TEST_DIE_ON_FAIL> = 1;
use Test;
...
Test plans
Testing return values
The
Test module exports various functions that check the return value of a given expression and produce standardized test output.
In practice, the expression will often be a call to a function or method that you want to unit-test.
By
Bool value
The
ok function marks a test as passed if the given
$value evaluates to
True. The
nok function marks a test as passed if the given value evaluates to
False. Both functions accept an optional
$description of the test.
my $response; my $query; ...;
ok $response.success, 'HTTP response was successful';
nok $query.error,     'Query completed without error';
In principle, you could use
ok for every kind of comparison test, by including the comparison in the expression passed to
$value:
sub factorial($x) { [*] 1..$x };
ok factorial(6) == 720, 'Factorial - small integer';
However, where possible it's better to use one of the specialized comparison test functions below, because they can print more helpful diagnostics output in case the comparison fails.
By string comparison
is($value, $expected, $description?)
Marks a test as passed if
$value and
$expected compare positively with the eq operator, unless
$expected is a type object, in which case
=== operator will be used instead; accepts an optional
$description of the test.
By approximate numeric comparison
Marks a test as passed if the
$value and
$expected numerical values are approximately equal to each other. The subroutine can be called in numerous ways that let you test using relative tolerance (
$rel-tol) or absolute tolerance (
$abs-tol) of different values.
If no tolerance is set, the function will base the tolerance on the absolute value of
$expected: if it's smaller than
1e-6, use absolute tolerance of
1e-5; if it's larger, use relative tolerance of
1e-6.
my Numeric ($value, $expected, $abs-tol, $rel-tol) = ...

is-approx $value, $expected;
is-approx $value, $expected, 'test description';

is-approx $value, $expected, $abs-tol;
is-approx $value, $expected, $abs-tol, 'test description';

is-approx $value, $expected, :$rel-tol;
is-approx $value, $expected, :$rel-tol, 'test description';

is-approx $value, $expected, :$abs-tol;
is-approx $value, $expected, :$abs-tol, 'test description';

is-approx $value, $expected, :$abs-tol, :$rel-tol;
is-approx $value, $expected, :$abs-tol, :$rel-tol, 'test description';
Absolute Tolerance
When an absolute tolerance is set, it's used as the actual maximum value by which the
$value and
$expected can differ. For example:
is-approx 3, 4, 2;       # success
is-approx 3, 6, 2;       # fail
is-approx 300, 302, 2;   # success
is-approx 300, 400, 2;   # fail
is-approx 300, 600, 2;   # fail
Regardless of values given, the difference between them cannot be more than
2.
Relative Tolerance
When a relative tolerance is set, the test checks the relative difference between values. Given the same tolerance, the larger the numbers given, the larger the value they can differ by can be.
For example:
is-approx 10, 10.5, :rel-tol<0.1>;   # success
is-approx 10, 11.5, :rel-tol<0.1>;   # fail
is-approx 100, 105, :rel-tol<0.1>;   # success
is-approx 100, 115, :rel-tol<0.1>;   # fail
Both versions use
0.1 for relative tolerance, yet the first can differ by about
1 while the second can differ by about
10. The function used to calculate the difference is:
rel-diff = |value - expected| / max(|value|, |expected|)
and the test will fail if
rel-diff is higher than
$rel-tol.
Both Absolute and Relative Tolerance Specified
is-approx $value, $expected, :rel-tol<.5>, :abs-tol<10>;
When both absolute and relative tolerances are specified, each will be tested independently, and the
is-approx test will succeed only if both pass.
By structural comparison
Marks a test as passed if
$value and
$expected are equivalent, using the same semantics as the eqv operator. This is the best way to check for equality of (deep) data structures. The function accepts an optional
$description of the test.
use v6.c;
use Test;
plan 1;

sub string-info(Str() $str) {
    Map.new: (
        length      => $str.chars,
        char-counts => Bag.new-from-pairs: (
            letters => +$str.comb(/<:letter>/),
            digits  => +$str.comb(/<:digit>/),
            other   => +$str.comb(/<-:letter -:digit>/),
        ),
    )
}

is-deeply string-info('42 Butterflies ♥ Perl'),
    Map.new((
        :21length,
        char-counts => Bag.new-from-pairs: ( :15letters, :2digits, :4other, ),
    )),
    'string-info gives right info';
Note: for historical reasons, Seq:D arguments to
is-deeply get converted to Lists. If you want to ensure strict
Seq comparisons, use
cmp-ok $got, 'eqv', $expected, $desc instead.
By arbitrary comparison
Compares
$value and
$expected with the given
$comparison comparator and passes the test if the comparison yields a
True value. The
$description of the test is optional.
The
$comparison comparator can be either a Callable or a Str containing an infix operator, such as
'==', a
'~~', or a user-defined infix.
cmp-ok 'my spelling is apperling', '~~', /perl/, "bad speller";
Meta operators cannot be given as a string; pass them as a Callable instead:
cmp-ok <a b c>, &[!eqv], <b d e>, 'not equal';
A Callable
$comparison lets you use custom comparisons:
sub my-comp { $^a / $^b < rand };
cmp-ok 1, &my-comp, 2, 'the dice giveth and the dice taketh away';
cmp-ok 2, -> $a, $b { $a.is-prime and $b.is-prime and $a < $b }, 7,
    'we got primes, one larger than the other!';
By object type
Marks a test as passed if the given object
$value is, or inherits from, the given
$expected-type. For convenience, types may also be specified as a string. The function accepts an optional
$description of the test.
class Womble {}
class GreatUncleBulgaria is Womble {}

my $womble = GreatUncleBulgaria.new;

isa-ok $womble, Womble, "Great Uncle Bulgaria is a womble";
isa-ok $womble, 'Womble';    # equivalent
By method name
Marks a test as passed if the given
$variable can run the given
$method-name. The function accepts an optional
$description. For instance:
class Womble {};
my $womble = Womble.new;

# with automatically generated test description
can-ok $womble, 'collect-rubbish';
#  => An object of type 'Womble' can do the method 'collect-rubbish'

# with human-generated test description
can-ok $womble, 'collect-rubbish', "Wombles can collect rubbish";
#  => Wombles can collect rubbish
By role
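does-ok marks a test as passed if the given object does the given role, and accepts an optional description (see the reference list above). A minimal sketch — the Cleaner role and its method are invented for illustration:

role Cleaner { method collect-rubbish() { True } }
class Womble does Cleaner {}

does-ok Womble.new, Cleaner, "Wombles do the Cleaner role";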
By regex
like 'foo', /fo/, 'foo looks like fo';
Marks a test as passed if the
$value, when coerced to a string, matches the
$expected-regex. The function accepts an optional
$description of the test.
unlike 'foo', /bar/, 'foo does not look like bar';
Marks a test as passed if the
$value, when coerced to a string, does not match the
$expected-regex. The function accepts an optional
$description of the test.
Testing modules
Marks a test as passed if the given
$module loads correctly.
use-ok 'Full::Qualified::ModuleName';
Testing exceptions
Marks a test as passed if the given
$code throws an exception.
The function accepts an optional
$description of the test.
sub saruman(Bool :$ents-destroy-isengard) {
    die "Killed by Wormtongue" if $ents-destroy-isengard;
}

dies-ok { saruman(:ents-destroy-isengard) }, "Saruman dies";
Marks a test as passed if the given
$code does not throw an exception.
The function accepts an optional
$description of the test.
sub frodo(Bool :$destroys-ring) {
    die "Oops, that wasn't supposed to happen" unless $destroys-ring;
}

lives-ok { frodo(:destroys-ring) }, "Frodo survives";
Marks a test as passed if the given
$string throws an exception when
evaled as code.
The function accepts an optional
$description of the test.
eval-dies-ok q[my $joffrey = "nasty";
               die "bye bye Ned" if $joffrey ~~ /nasty/],
    "Ned Stark dies";
Marks a test as passed if the given
$string does not throw an exception when
evaled as code.
The function accepts an optional
$description of the test.
eval-lives-ok q[my $daenerys-burns = False;
                die "Oops, Khaleesi now ashes" if $daenerys-burns],
    "Dany is blood of the dragon";
throws-like($code, $expected-exception, $description?, *%matcher)
Marks a test as passed if the given
$code throws the specific exception
$expected-exception. The function accepts an optional $description of the test.
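A minimal sketch (the exception type and matcher are chosen for illustration; each named argument in %matcher is checked against the thrown exception):

throws-like { die X::AdHoc.new(:payload<oops>) }, X::AdHoc,
    'code throws the expected exception',
    message => /oops/;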
fails-like($code, $expected-exception, $description?, *%matcher)
Grouping tests
The result of a group of subtests is only
ok if all subtests are
ok.
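A short sketch of a subtest (the description and individual assertions are invented for illustration):

subtest 'my grouped tests' => {
    plan 2;
    ok 1 + 1 == 2,          'addition works';
    is 'perl6'.uc, 'PERL6', 'uc works';
};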
Skipping tests.
Skip
$count tests, giving a
$reason as to why. By default only one test will be skipped. Use such functionality when a test (or tests) would die if run.
sub num-forward-slashes($arg) { ... } ;

if $*KERNEL ~~ 'linux' {
    is num-forward-slashes("hello/world"),    1;
    is num-forward-slashes("hello/my/world"), 2;
}
else {
    skip "Can't use forward slashes on Windows", 2;
}
Note that if you mark a test as skipped, you must also prevent that test from running.
Manual control
If the convenience functionality documented above does not suit your needs, you can use the following functions to manually direct the test harness output.
The
pass function marks a test as passed.
flunk marks a test as not passed. Both functions accept an optional test
$description.
pass "Actually, this test has passed";

flunk "But this one hasn't passed";
Since these subroutines do not provide indication of what value was received and what was expected, they should be used sparingly, such as when evaluating a complex test condition.
Begin a menu column definition
.COLUMN name, text[, justification][, DISABLE][, SELECT(character)]
Arguments
name
The name of the column (a maximum of 15 characters).
text
A quoted or alphanumeric string that contains the text of the column’s heading.
justification
(optional) Designates how the column is aligned beneath the heading: LEFT (default), RIGHT, CENTERED.
DISABLE
(optional) Specifies that when the column is placed on the menu bar, it will be disabled.
SELECT (character)
(optional) Specify a single, printable quick‑select character.
The .COLUMN command marks the beginning of a new menu column definition. This column can be used either as a primary or a submenu column.
Name is case insensitive, and must be a unique window name within the window library.
Text is the text that will be displayed on the menu bar. Upper and lowercase will appear as entered. Text is required for submenu column definitions, but is currently ignored.
Justification defaults to LEFT if you do not specify LEFT, RIGHT, or CENTERED. If you specify RIGHT justification, the right sides of the column and heading will be aligned. If you specify CENTERED justification, the column will be centered beneath the heading. (See figure 1.) When the column is being used as a submenu, the justification setting will be ignored. On Windows, justification has no effect.
Menu column placement is limited by available screen space.
If you specify DISABLE, you can later re‑enable the column with the M_ENABLE subroutine.
If you do not specify SELECT(character), UI Toolkit uses the first non‑blank character of text. A column’s quick‑select character will not work if any menu column is dropped down, which sometimes happens automatically. You can use the M_DEFCOL(0) subroutine to disable automatic drop‑down.
The width of the column’s heading is two spaces larger than the length of text, with the text beginning at the second column. If you want the heading text to have more than one leading or trailing space, you can include spaces in the quoted text string.
To create a submenu column, define the column and the parent column (the menu from which the submenu will cascade) as you would any other, but in the parent menu column’s definition, use the SUB argument for the menu entry that will cause the submenu to be displayed. And, for the same menu entry, use the submenu column’s name (the name argument for .COLUMN) as the entry name (the name argument for .ENTRY). See Examples below.
See also
.ENTRY for more information about defining column entries.
This example generates a new menu column named util whose text in the menu bar is “General utilities,” with an extra leading space. The column is left‑aligned beneath the heading by default.
.column util, "General utilities"
The column generated by the second example is named receive; the text entry in the menu bar is “Receivables.” The column is centered beneath the heading.
.column receive, Receivables, centered
This example creates a menu column primary that references a submenu column submenu.
.column primary, "Primary"
.entry submenu, "Submenu", sub
.end

.column submenu, "Submenu"
.entry subent1, "Sub entry 1"
.entry subent2, "Sub entry 2"
.end
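One more sketch, based only on the syntax line above (the column name, heading text, and quick-select character are arbitrary), shows the optional arguments combined in a single definition:

.column tools, "Reporting tools", centered, disable, select(R)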
Using Unicode
- 1 Introduction
- 2 Prerequisites for Acrobat full version
- 3 Prerequisites for Adobe Reader®
- 4 Available fonts
- 5 Character encoding
- 6 Multi-line UTF8/UTF16 stamps
- 7 Resources
Introduction
AppendPDF Pro can stamp Unicode text, including Asian-language characters, into PDF documents.
Prerequisites for Acrobat full version
In order to use Acrobat to view and print Asian text, you must have Asian language support files installed for both the Operating System (OS) and Acrobat. The table below shows whether Asian font support is automatically installed for your combination of OS and Acrobat, or whether you have to manually install it.
Unicode Font Support for Windows and Mac OS X and Acrobat.
Operating System
Asian font support is automatically installed for all OS platforms except Windows 2000/NT. To install Asian font support, open Regional Options in the Control Panel, and add the fonts you want. You may need your original installation disk. Refer to the Windows on-line help for more information. You can also install keyboard support.
Acrobat
Asian font support is automatically installed only in Acrobat 6.
Windows/UNIX
Download and install the Asian Font Pack for Adobe Reader.
Mac OS X
You cannot download the Asian Font Pack for Mac OS X. You must choose to install it when you install Acrobat Reader. If you did not, you must reinstall Reader.
Available fonts
The table below lists the seven fonts that are available for double-byte character stamping:
Use the font name in the left-hand column above in the Font parameter in your stamp file.
Character encoding
Stamp files can be encoded as Plain Text (ISO-8859 or ISO Latin 1) or as UTF-8. If you are going to use Asian characters, we recommend using UTF-8 stamp files and a text editor that supports UTF-8.
Note: UTF-8 encoded characters are converted to UTF-16 before stamping into the document. UTF-16 characters are stamped directly into the document with no intervention by AppendPDF Pro.
Refer to Resources for help finding codes.
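As a sketch, a stamp block for each encoding might look like the following. The layout mirrors the multi-line example in the next section; the fonts are two of those listed above, and the sample text and hex codes are placeholders:

Type (UTF8)
Font (HeiseiMin-W3)
Text (内密の)

Type (UTF16)
Font (HeiseiKakuGo-W5)
Text (51855BC6306E)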
Multi-line UTF8/UTF16 stamps
In order to get multi-line text stamps, you must set the MultiLine parameter to yes.
For UTF16, place the Unicode line separator character 2028 in your text string where you want the new line to start. For example:
Type (UTF16)
Font (HeiseiKakuGo-W5)
MultiLine (yes)
Text (51855BC6306E202851855BC6306E)
Both Text parameters result in this stamp:
Resources
- The Unicode Consortium has the complete Unicode specification, providing a wealth of information on Unicode, character sets, and conversions.
- SC UniPad provides a free trial of UniPad, a Windows-based Unicode text editor.
- IT and communication provides an extensive tutorial on character sets, including Unicode.
At its core, F2 is an open framework. To create a truly open and flexible foundation with F2.js, F2 can be extended with custom plugins. Extending F2 with plugins provides direct access to F2.js SDK methods and can save your teams a lot of time.
Now that you’re comfortable with F2 and all the individual components of the framework, you are ready to extend F2 and add your own custom logic in the form of an F2 plugin.
There is a separate repository on GitHub dedicated to F2 plugin development. If you write a plugin you’d like to contribute back to the community, commit it to F2Plugins.
Download F2 Plugins View on GitHub
Plugins are encapsulated in JavaScript closures as demonstrated below. There are three arguments which can be passed into
F2.extend():
namespace,
object, and
overwrite. For full details, read the F2.js SDK documentation.
F2.extend('YourPluginName', (function(){
    return {
        doSomething: function(){
            F2.log("Something has been done.");
        }
    };
})());
To call your custom method shown above:
... F2.YourPluginName.doSomething(); ...
This method call writes
Something has been done. to the Console.
The purpose of developing a plugin is to encapsulate clever logic in a single javascript function to save time and effort performing repetitive tasks. Here are some best practices to keep in mind:
F2.extend()and wrap your plugin in a closure.
F2namespace with more custom plugins than you need.
Have a question? Ask it on the F2 Google Group.
In Telerik Platform, each user is a member of an account but is not constrained to a single account. Instead, to ease collaboration and project management, in Telerik Platform, a single user can be a member of and access multiple accounts.
Each account has a quota for the available account seats. By default, each account member, including the Account owner, takes up an available account seat.
In this section, you will find the following resources.
Here is a simple matrix to sum up which technical libraries could be used to cover a new language with SonarQube™. For all languages, the following features are natively provided by SonarQube™ without a big effort:
- Duplications detection
- Commented-out code detection
- '//NOSONAR' tag detection
- Calculation of basic metrics : 'lines of code', 'physical lines', 'blank lines' and 'comment lines'.
User Guide
About the BlackBerry Client for Microsoft SharePoint
You can use the BlackBerry Client for Microsoft SharePoint to organize and share documents, files, and information with your colleagues on a site. You can preview, download, add, and edit files in libraries. Using a variety of lists—such as calendars, blogs, and wikis—you can actively collaborate on projects.
You can use your existing Microsoft SharePoint login information to access the BlackBerry Client for Microsoft SharePoint.
Section 776 addresses backup power systems installed on the property of residential and small commercial customers by telephony providers. The first step in the investigation will be to determine the telephony providers' current practices regarding backup power systems, including the feasibility of establishing such systems where they do not exist. The second step will be to obtain the telephony providers' and other interested parties' recommendations for reliability standards and the associated costs and benefits.
To this end, the Commission's Communications Division (CD) is directed to convene a technical workshop of subject matter experts to inform the Commission on this matter. The workshop to discuss "back up power installed on the property of residential and small commercial customers" will be held June 5, 2007. CD will provide timely notice on the Commission's Calendar and to the service list.
The outcome of the workshop will be an informational request that will seek more detailed information, concerns and issues related to backup power systems on the property of residential and small commercial customers. The request will direct respondents to provide recommendations along with associated implementation costs and benefits. While the bill concerns itself with only backup power, a cost/benefit analysis should be viewed holistically. For example, there is no customer benefit if power is maintained/restored but the lines are flooded under water.
The request will be sent to all facilities-based telephony providers and other interested parties. Upon receipt of the responses to the request, CD will compile the information into a report that:
1. Identifies the concerns and issues that the Commission must address, including current best practices and the technical feasibility of establishing battery backup requirements;
2. Identifies recommendations presented by the parties and their level of support;
3. Identifies a recommended course of action, as well as any other viable options;
4. Discusses the costs and benefits of implementing the recommended course of action;
5. Proposes a definition of small businesses for the purpose of this investigation; and
6. Identifies any concerns or issues that remain to be addressed.
The draft report will be sent to the parties for comment. Upon receipt of the comments, CD, in consultation with the assigned Commissioner, will prepare a revised draft report, which will be provided to the parties for comment.5 A proposed decision, which adopts a final report, then will be prepared.
5 For any or all of these three workshop topics, CD may evaluate a gradation of possibilities with varying costs and benefits. Option A, for example, may have some benefits but relatively high costs. Option B may be the opposite with several other options falling in between. All possibilities may be feasible, and CD will specify its recommended options in accordance with the requirements of §§ 776, 2872.5 and 2892.1.
web interface in the body of a request to the
/auth/login endpoint of the Identity and Access Management Service API. It returns an authentication token as shown below.
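For example, a login request might look like this (the cluster URL and credentials are placeholders, and the exact endpoint path can vary by DC/OS version):

curl -k -X POST https://<cluster-url>/acs/api/v1/auth/login \
     -H 'Content-Type: application/json' \
     -d '{"uid": "<username>", "password": "<password>"}'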
{
  "token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9..."
}

Via the DC/OS CLI
When you log into the DC/OS CLI using
dcos auth login, it stores the authentication token value locally. You can reference this value as a variable in cURL commands (discussed in the next section).
Alternatively, you can use the following command to get the authentication token value.
dcos config show core.dcos_acs_token
Passing an authentication token
Via the HTTP header
Copy the token value and pass it in the Authorization field of the HTTP header as the string value token=<token>.
Authorization field of the HTTP header, string value
Using
curl, for example, you would pass this value as follows.
curl -H DC/OS CLI variable
You can then reference this value in your
curl commands, as shown below.
curl -H "Authorization: token=$(dcos config show core.dcos_acs_token)"
Refreshing the authentication token
Authentication tokens expire after five days by default. If your program needs to run longer than five days, you will need a service account. Please see Provisioning custom services for more information.
API reference
Logging
While the API returns informative error messages, you may also find it useful to check the logs of the service. Refer to Service and Task Logging for instructions.
Helping charitable organisations with technology
A guest post by Belinda Gorman, Community Affairs Manager at Microsoft New Zealand Limited.
The Asia Pacific region provides some interesting examples.
One of Microsoft’s Australian NGO partners, Infoxchange, has been instrumental in the establishment of Carbonxchange. Carbonxchange is a social enterprise established to facilitate the establishment of the first certified Tree Cooperative in Timor-Leste. The creation of an online carbon trading service will allow direct carbon trading between the Tree Cooperatives and those wishing to purchase carbon credits.
Further east, UNICEF colleagues in the Solomon Islands and Vanuatu are embarking on efforts to increase birth registration, commonly take for granted in New Zealand. Birth Registration ensures greater protection of children in cases of particular vulnerability, such as risk of violence and abuse, child trafficking. The project will enable midwives in remote areas to register births using software designed by a Masters Student in Auckland.
New Zealand has a vibrant NGO sector – with 25,685 organisations registered with the Charities Commission. Information technology is being proactively and creatively harnessed by a growing number of NGOs in New Zealand to improve the way they work and respond to increasingly complex issues. One such organization is Plunket.
Plunket has embarked on an innovative new information system, PlunketPlus, to improve child health outcomes through access to in-depth, accurate and timely information. The system will provide greater support to Plunket nurses, allowing them to capture and access information electronically in the field, freeing up their valuable time from administration and record taking and allowing them to focus even more on the children and families they are working with.
As part of its corporate social responsibility initiative, Microsoft works in partnership with NGOs and the IT industry to increase capacity in the sector. We were pleased to see a broad range of NGOs across New Zealand have now taken up donated software valued at a total of more than NZ$17 million.
New Zealand charities who wish to take advantage of Microsoft technology are encouraged to visit the TechSoup New Zealand website.
Fix for RestartWWWService=false in Visual Studio 2010
We recently published a fix for the following problem:
You create a deployment project for a web application in Visual Studio 2010, set the deployment property RestartWWWService=false and deploy to a Windows 2003 server.
In this situation the w3wp.exe process will still be recycled even though the RestartWWWService is set to false.
The fix is available at and should only be installed on development machines if you experience this problem.
Have a good one,
Tess
Intro to Mutators.
Objectives
What will be covered in this guide:
- What are Sensu mutators?
- Creation of a Sensu event data mutator
What are Sensu mutators?
Create an event data mutator
Coming soon. Please see the Mutators reference documentation. If you have questions about event data mutators, please contact Sensu Support.
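Until that guide is published, a minimal definition gives the idea: a mutator is a command that reads the event JSON on STDIN and writes the transformed event to STDOUT. The mutator name and command path below are placeholders, following the Sensu 1.x JSON configuration layout:

{
  "mutators": {
    "example_mutator": {
      "command": "/etc/sensu/plugins/example_mutator.rb"
    }
  }
}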
9. Formalism for program alm
9.1. Interatomic force constants (IFCs)
The starting point of the computational methodology is to approximate the potential energy of interacting atoms by a Taylor expansion with respect to atomic displacements by
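(Shown here schematically, in the standard anharmonic form:)

\[
U = U_{0} + \sum_{n=1}^{N}\frac{1}{n!}\sum_{\{\ell\kappa\},\{\mu\}}
\Phi_{\mu_{1}\dots\mu_{n}}(\ell_{1}\kappa_{1};\dots;\ell_{n}\kappa_{n})\,
u_{\mu_{1}}(\ell_{1}\kappa_{1})\cdots u_{\mu_{n}}(\ell_{n}\kappa_{n}).
\]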
Here, \(u_{\mu}(\ell\kappa)\) is the atomic displacement of \(\kappa\)th atom in the \(\ell\)th unit cell along \(\mu\)th direction, and \(\Phi_{\mu_{1}\dots\mu_{n}}(\ell_{1}\kappa_{1};\dots;\ell_{n}\kappa_{n})\) is the \(n\)th-order interatomic force constant (IFC).
9.2. Symmetry relationship between IFCs
The are several relationships between IFCs which may be used to reduce the number of independence IFCs.
- Permutation
Firstly, IFCs should be invariant under the exchange of triplet \((\ell,\kappa,\mu)\), e.g.\[\Phi_{\mu_{1}\mu_{2}\mu_{3}}(\ell_{1}\kappa_{1};\ell_{2}\kappa_{2};\ell_{3}\kappa_{3})=\Phi_{\mu_{1}\mu_{3}\mu_{2}}(\ell_{1}\kappa_{1};\ell_{3}\kappa_{3};\ell_{2}\kappa_{2})=\dots.\]
- Periodicity
Secondly, since IFCs should depend on interatomic distances, they are invariant under a translation in units of lattice vector, namely\[\Phi_{\mu_{1}\mu_{2}\dots\mu_{n}}(\ell_{1}\kappa_{1};\ell_{2}\kappa_{2};\dots;\ell_{n}\kappa_{n})=\Phi_{\mu_{1}\mu_{2}\dots\mu_{n}}(0\kappa_{1};\ell_{2}-\ell_{1}\kappa_{2};\dots;\ell_{n}-\ell_{1}\kappa_{n}).\]
- Crystal symmetry
A crystal symmetry operation maps an atom \(\vec{r}(\ell\kappa)\) to another equivalent atom \(\vec{r}(LK)\) by rotation and translation. Since the potential energy is invariant under any crystal symmetry operations, IFCs should transform under a symmetry operation as follows:(2)¶\[\sum_{\nu_{1},\dots,\nu_{n}}\Phi_{\nu_{1}\dots\nu_{n}}(L_{1}K_{1};\dots;L_{n}K_{n}) O_{\nu_{1}\mu_{1}}\cdots O_{\nu_{n}\mu_{n}} = \Phi_{\mu_{1}\dots\mu_{n}}(\ell_{1}\kappa_{1};\dots;\ell_{n}\kappa_{n}),\]
where \(O\) is the rotational matrix of the symmetry operation. Let \(N_{s}\) be the number of symmetry operations, there are \(N_{s}\) relationships between IFCs which may be used to find independent IFCs.
Note
In the current implementation of alm, independent IFCs are searched in Cartesian coordinate where the matrix element \(O_{\mu\nu}\) is 0 or \(\pm1\) in all symmetry operations except for those of hexagonal (trigonal) lattice. Also, except for hexagonal (trigonal) systems, the product \(O_{\nu_{1}\mu_{1}}\cdots O_{\nu_{n}\mu_{n}}\) in the left hand side of equation (2) becomes non-zero only for a specific pair of \(\{\nu\}\) (and becomes 0 for all other \(\{\nu\}\)s). Therefore, let \(\{\nu^{\prime}\}\) be such a pair of \(\{\nu\}\), the equation (2) can be reduced to(3)¶\[\Phi_{\nu_{1}^{\prime}\dots\nu_{n}^{\prime}}(L_{1}K_{1};\dots;L_{n}K_{n}) = s \Phi_{\mu_{1}\dots\mu_{n}}(\ell_{1}\kappa_{1};\dots;\ell_{n}\kappa_{n}),\]
where \(s=\pm1\). The code employs equation (3) instead of equation (2) to reduce the number of IFCs. If IFCs of the left-hand side and the right-hand side of equation (3) are equivalent and the coupling coefficient is \(s=-1\), the IFC is removed since it becomes zero. For hexagonal (trigonal) systems, there can be symmetry operations where multiple terms in the left-hand side of equation (2) become non-zero. For such cases, equation (2) is not used to reduce the number of IFCs. Alternatively, the corresponding symmetry relationships are imposed as constraints between IFCs in solving fitting problems.
9.3. Constraints between IFCs
Since the potential energy is invariant under rigid translation and rotation, it may be necessary for IFCs to satisfy corresponding constraints.
The constraints for translational invariance are given by
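(Schematically, this is the usual acoustic sum rule:)

\[
\sum_{\ell_{1}\kappa_{1}}\Phi_{\mu_{1}\mu_{2}\dots\mu_{n}}(\ell_{1}\kappa_{1};\ell_{2}\kappa_{2};\dots;\ell_{n}\kappa_{n}) = 0,
\]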
which should be satisfied for arbitrary pairs of \(\ell_{2}\kappa_{2},\dots,\ell_{n}\kappa_{n}\) and \(\mu_{1},\dots,\mu_{n}\). The code alm imposes equation (4) by default (
ICONST = 1).
The constraints for rotational invariance are
which must be satisfied for arbitrary pairs of \((\ell_{1}\kappa_{1},\dots,\ell_{n}\kappa_{n};\mu_{1},\dots,\mu_{n};\mu,\nu)\). This is complicated since \((n+1)\)th-order IFCs (first line) are related to \(n\)th-order IFCs (second line).
For example, the constraints for rotational invariance related to harmonic terms can be found as
and
When
NORDER = 1, equation (5) will be considered if
ICONST = 2, whereas equation (6) will be neglected. To further consider equation (6), please use
ICONST = 3, though it may enforce a number of harmonic IFCs to be zero since cubic terms don’t exist in harmonic calculations (
NORDER = 1).
9.4. Estimate IFCs by least-square fitting
The code alm extracts harmonic and anharmonic IFCs from a displacement-force data set by solving the following linear least-square problem:
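(Schematically, the fit minimizes the squared residual between model and DFT forces over all configurations:)

\[
\{\Phi\} = \underset{\{\Phi\}}{\arg\min}\;\sum_{t=1}^{m}\sum_{i}\left(F_{i,t}^{\mathrm{DFT}} - F_{i,t}^{\mathrm{ALM}}\right)^{2}.
\]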
Here, \(m\) is the number of atomic configurations and the index \(i = (\ell,\kappa,\mu)\) is the triplet of coordinates. The model force \(F_{i,t}^{\mathrm{ALM}}\) is a linear function of IFCs \(\{\Phi\}\) which can be obtained by differentiating \(U\) (Eq. (1)) by \(u_{i}\). The parameters (IFCs) are determined so as to best mimic the atomic forces obtained by DFT calculations, \(F_{i,t}^{\mathrm{DFT}}\).
To evaluate goodness of fit, alm reports the relative error \(\sigma\) defined by
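(A conventional definition consistent with the description below is:)

\[
\sigma = \sqrt{\frac{\sum_{t,i}\left(F_{i,t}^{\mathrm{DFT}} - F_{i,t}^{\mathrm{ALM}}\right)^{2}}{\sum_{t,i}\left(F_{i,t}^{\mathrm{DFT}}\right)^{2}}}.
\]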
where the numerator is the residual of fit and the denominator is the square sum of DFT forces.
This module allows you to customize the formatting and text of emails sent by GoClixy. You can design them, following your Logo and incorporating social media, photos and other information.
To edit an email template, click its name under the Name column as shown above. Modify the information with available tags and click the Save button.
Note: Different tags are available for different email templates.
Log messages and report errors to MarkLogic Server. You do not need to subclass this class. More...
#include <MarkLogic.h>
Log messages and report errors to MarkLogic Server. You do not need to subclass this class.
Messages logged by Reporter::log are written to the MarkLogic Server log. For example, to marklogic_dir/Logs/ErrorLog.txt.
Rather than throwing exceptions, your AggregateUDF implementation should report errors by calling Reporter::error. Errors reported this way write a message to the log file, cancel the current job, and raise a MarkLogic Server exception to the calling application.
Log an error and cancel the current job.
The task that calls this function stops immediately. That is, control does not return to your code. In-progress tasks for the same job may still run to completion.
Determine the current log level.
Only messages at this level and above appear in the log.
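As an illustrative sketch only — the Reporter::Info log level and the helper shown here are assumptions, not verified against the SDK headers — an AggregateUDF implementation might use the reporter like this:

#include <MarkLogic.h>

using namespace marklogic;

// Illustrative helper invoked from an AggregateUDF implementation.
void reportBatch(Reporter& r, bool inputLooksValid)
{
    if (!inputLooksValid) {
        // Writes the message to the server log, cancels the current job,
        // and raises an exception to the caller; control does not return.
        r.error("Aggregate received malformed input");
    }

    // Written to the MarkLogic Server log (e.g. marklogic_dir/Logs/ErrorLog.txt).
    r.log(Reporter::Info, "Aggregate batch processed");
}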
Click To Call
Click To Call is a form of Web-based communication in which a visitor clicks a button to request an immediate connection with the business owner in real-time, either by phone call, Voice-over-Internet-Protocol (VoIP), or text.
FILTERING OPTIONS
SEARCH
To find the number of calls made to a particular listing, use the advanced search. For example, to find out the number of Desktop calls to GoClixy Listing between February 1, 2015 to June 30, 2015, fill the information as shown below and then click the Submit button.
Understanding backup devices and media
Storage technology changes rapidly, so it is important to research your options before making a purchase. When selecting a tape drive and tapes, consider the initial cost of the drive and individual tapes, as well as capacity, speed, and reliability.
The primary format of tape drives used for backup is digital data storage (DDS). The two most common tape drives are DDS3, which stores up to 24 gigabytes (GB) of compressed data, and DDS4, which stores up to 40 GB of compressed data. These tape drives offer the highest ratio of cost to value. Other media include quarter-inch cartridges (QIC), digital audio tapes (DAT), 8-millimeter (mm) cassettes, and digital linear tapes (DLT). High-capacity, high-performance tape drives typically use small computer system interface (SCSI) controllers. You should have more than enough capacity to back up the entire server. It is also important to plan for the future demands of user data.
Note
You cannot back up to floppy disk or writable CD or DVD.
Your budget may require you to make a trade-off between the type of drive you choose and your backup schedule when purchasing the necessary number of backup tapes. For example, you should consider how many tapes you need to implement your backup plan for one year and then purchase as many tapes up-front as possible. You should also consider purchasing spare tapes and plan to replace worn tapes per the manufacturer's recommendation.
See Also
Concepts
Backup Overview
Backing up Windows Small Business Server
MSP Delay Tutorial 1: Delay Lines
Effects achieved with delayed signals
One of the most basic yet versatile techniques of audio processing is to delay a signal and mix the delayed version with the original signal. The delay time can range from a few milliseconds to several seconds, limited only by the amount of RAM you have available to store the delayed signal.
When the delay time is just a few milliseconds, the original and delayed signals interfere and create a subtle filtering effect but not a discrete echo. When the delay time is about 100 ms we hear a ‘slapback’ echo effect in which the delayed copy follows closely behind the original. With longer delay times, we hear the two signals as discrete events, as if the delayed version were reflecting off a distant mountain.
This tutorial patch delays each channel of a stereo signal independently, and allows you to adjust the delay times and the balance between direct signal and delayed signal.
Creating a delay line: tapin~ and tapout~
The MSP object tapin~ is a buffer that is continuously updated so that it always stores the most recently received signal. The amount of signal it stores is determined by a typed-in argument. For example, a tapin~ object with a typed-in argument of 1000 stores the most recent one second of signal received in its inlet.
The only object to which the outlet of tapin~ should be connected is a tapout~ object. This connection links the tapout~ object to the buffer stored by tapin~. The tapout~ object ‘taps into’ the delayed signal at certain points in the past. In the above example, tapout~ gets the signal from tapin~ that occurred 500 ms ago and sends it out the left outlet; it also gets the signal delayed by 1000 ms and sends that out its right outlet. It should be obvious that tapout~ can't get signal delayed beyond the length of time stored in tapin~.
A patch for mixing original and delayed signals
The tutorial patch sends the sound coming into the computer to two places: directly to the output of the computer and to a tapin~-tapout~ delay pair. You can control how much signal you hear from each place for each of the stereo channels, mixing original and delayed signal in whatever proportion you want.
At this point you don't hear any delayed signal because the ‘Direct Level’ for each channel is set at 1 and the ‘Delay Level’ for each channel is set at 0. The signal is being delayed, but you simply don't hear it because its amplitude is scaled to 0.
The slider in the left part of the Patcher window serves as a balance fader between a ‘Dry’ (all direct) output signal and a ‘Wet’ (fully processed) output signal.
Changing the parameters while the sound is playing can result in clicks in the sound because this patch does not protect against discontinuities. You could create a version of this patch that avoids this problem by interpolating between parameter values with line~ or number~ objects.
In future tutorial chapters, you will see how to create delay feedback, how to use continuously variable delay times for flanging and pitch effects, and other ways of altering sound using delays, filters, and other processing techniques.
Summary
The tapin~ object is a continuously updated buffer which always stores the most recently received signal. Any connected tapout~ object can use the signal stored in tapin~, and access the signal from any time in the past (up to the limits of the tapin~ object's storage). A signal delayed with tapin~ and tapout~ can be mixed with the undelayed signal to create discrete echoes, early reflections, or comb filtering effects.
Trans 300.59(2)
(2)
The forwardmost seat on the right side of the bus shall be located so as not to interfere with the driver's vision.
Trans 300.59(3)
(3)
A minimum of 36 inches of headroom for the sitting position above the top of the undepressed cushion line of all seats shall be provided. The measurement shall be made vertically not more than 11 inches from the side wall at cushion height and at the fore and aft center of the cushion.
Trans 300.59(4)
(4)
Trans 300.59(4)(a)
(a)
The backs of seats of similar size shall be of the same width at the top and of the same height from the floor and shall slant at the same angle with the floor. The top corners, and at least 10 inches of the top of the back surface of the seat backs shall be padded sufficiently to reduce the likelihood of injury upon impact. Seat cushions and seat backs may not have any torn or worn-through covering material.
Trans 300.59(4)(b)
(b)
The seat back of the rearmost seat shall be of the same dimension as the seat immediately forward. Failure to comply with this standard will result in the loss of one seating position, or 2 seating positions if this situation occurs in both rows, when determining the capacity of the bus. This requirement shall apply only to type A-I, B, C or D school buses manufactured after January 1, 1984.
Trans 300.59(5)
(5)
Fold down, fold up or reclining seats or seat backs is not permitted in a school bus except as allowed in
s.
Trans 300.33 (3)
.
Trans 300.59(6)
(6)
A child restraint seat may be installed in place of a standard seat. The replacement seat shall meet all of the requirements in this section, except that the seat back may exceed the seat height of the remaining bus seats by not more than 4 inches.
Trans 300.59 History
History:
Cr.
Register, February, 1983, No. 326
, eff. 3-1-83; am. (4) (b), (5), cr. (6),
Register, February, 1995, No. 470
, eff. 3-1-95; am. (4) and (5),
Register, December, 1997, No. 504
, eff. 1-1-98;
CR 03-116
: am. (1)
Register April 2004 No. 580
, eff. 5-1-04.
Trans 300.60
Trans 300.60
Service door.
Trans 300.60(1)
(1)
The service door shall be under control of the driver and so designed as to prevent accidental opening. When a hand lever is used, no parts shall come together so as to shear or crush any fingers.
Trans 300.60(2)
(2)
The service door shall be located on the right side of the bus and within the view of the driver.
Trans 300.60(3)
(3)
The service door shall have a minimum horizontal opening of 24 inches and a minimum vertical opening of 68 inches.
Trans 300.60(4)
(4)
The upper and lower glass panels of the service door shall be of safety glass. The bottom of the lower glass panel may not be more than 35 inches from the ground when the bus is unloaded. The top of the upper glass panel may not be more than 6 inches from the top of door. The upper glass panel shall be of insulated glass or of a thermo electric design that performs at least as well as insulated glass.
Trans 300.60(5)
(5)
Any lock used in conjunction with the service door must be constructed to insure that the door is not in the locked position while transporting passengers.
Trans 300.60(6)
(6)
The service door shall be equipped with a seal to prevent dust and cold air from entering the vehicle.
Trans 300.60(7)
(7)
Type A-II buses need not comply with
subs.
(3)
and
(4)
.
Trans 300.60 History
History:
Cr.
Register, February, 1983, No. 326
, eff. 3-1-83; am. (4), (5), r. (8),
Register, February, 1995, No. 470
, eff. 3-1-95; am. (4) and (7),
Register, December, 1997, No. 504
, eff. 1-1-98;
CR 03-116
: am. (4)
Register April 2004 No. 580
, eff. 5-1-04.
Trans 300.61
Trans 300.61
Signs and lettering.
Trans 300.61(1)
(1)
Only signs and lettering approved by state law or rule shall appear on or in the bus.
Trans 300.61(2)
(2)
The body shall bear words “School Bus" in black letters at least 8 inches high and one-inch stroke on both front and rear or on yellow signs attached thereto. The lettering shall be placed above the rear window and the front windshield. This lettering shall only appear on buses painted the yellow and black school bus colors and meeting all the requirements of this chapter.
Trans 300.61(3)
(3)
Each school bus painted the yellow and black color scheme shall have a fleet number consisting of no more than 4 digits. The fleet number shall appear on the front and the rear of the bus. Additional fleet number locations may be utilized at the owner's option.
Trans 300.61(4)
(4)
Fleet numbers shall be no less than 3 inches nor more than 6 inches high with a
1
⁄
2
inch brush stroke.
Trans 300.61(5)
(5)
Fleet numbers are prohibited in the black area around the alternately flashing red lights.
Trans 300.61(6)
(6)
The name and address (and telephone number, if desired) of the owner or operator shall be displayed below the window line in the panel to the rear of, and as close as possible to, the service door in letters not less than 2 inches high nor more than 3 inches high by
1
⁄
4
inch stroke. If desired, this marking may also be painted on the left side of the bus below the driver's window. Owner's decals may be used to comply with this subsection if the decals do not violate other provisions of this section.
Trans 300.61(7)
(7)
The name of the school bus firm may appear on the sides of the bus between the seat line rub rail and the bottom window line in contrasting yellow or black letters not more than 10 inches high. The owner's name may also appear on the rear bumper in school bus yellow. The lettering may not exceed 6 inches in height with a
1
⁄
2
inch brush stroke. These options do not relieve the owner or operator from the requirements of
sub.
(6)
.
Down
Down
/code/admin_code/trans/300
true
administrativecode
/code/admin_code/trans/300/II/54/1/n/3
Department of Transportation (Trans)
administrativecode/Trans 300.54(1)(n)3.
administrativecode/Trans 300.54(1)(n)3.. | http://docs.legis.wisconsin.gov/code/admin_code/trans/300/II/54/1/n/3 | 2018-02-18T05:02:40 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.legis.wisconsin.gov |
-
We recommend a virtualenv for using these scripts.
-
Install the dependencies for the scripts - Fabric and Cuisine.pip install -r requirements.txt
-
Use the following commands to install and configure a standalone Eden instance.
Install :fab -H targetmachine setup_eden_standalone
Configure :fab -H targetmachine configure_eden_standalone
The following illustrates the procedure to spawn a standalone Sahana Eden instance on EC2.
-
Create a file called .boto in your home directory with the following contents with the access key and the secret access key as per your account.[Credentials] aws_access_key_id = REPLACE_ME_WITH_ACCESS_KEY aws_secret_access_key = REPLACE_ME_WITH_SECRET_ACCESS_KEY
-
Upload your public key to be used with the instances created with EC2.fab aws_import_key:key_name=awskey,public_key=path_to_your_public_key,ZONE='us-east-1b'
-
Create a security group to be used with the instances spawned with EC2.fab aws_create_security_group:name=default,ZONE='us-east-1b'
-
Create a standalone Sahana Eden instance.fab aws_eden_standalone
Sahana Eden deployment script
Cleans AWS instances in the specific region.
Wrapper around boto’s create_image
This function creates security groups with access to the ports given from ALL ips.
Deletes the AMI and the EBS snapshot associated with the image_id
Spawns a standalone AWS instance of Eden with Postgres, uwsgi and Cherokee.
Arguments are same as those of fabfile.aws_spawn()
Imports a RSA key into AWS
Lists out AWS instances launched in the specific region
Spawns an AWS instance and installs Postgres with Eden - The uwsgi Eden instance is not started. Eden install on this machine is used only for initilization of DB and migration.
Arguments are same as those of fabfile.aws_spawn()
Spawns an AWS instance with the given specs.
Spawns an AWS instance with TSUNG set up to run load testing.
Arguments are same as those of fabfile.aws_spawn()
Configure an installed Eden - Postgres instance - This is to be run after installing Eden with other helpers provided.
Installs packages necessary for Eden.
Initializes the Debian env to work with Cuisine.
Installs memcached on the remote machine
Install Postgres on a remote machine
Runs tsung tests with a given xml against the given target - Replaces localhost in the xml with the target and fetches the reports dir.
Example Usage:
fab -i awskey.pem -u root -H machine_with_tsung run_tsung:xml=tsung-tests/test.xml,target=machine_to_test_against,additional_file=tsung-tests/test.csv
Sets up Postgres, uwsgi and Cherokee
Installs snmpd on a given host for monitoring. NOTE: Allows everyone to connect
Installs Tsung for Load testing | http://spawn-eden.readthedocs.io/en/latest/ | 2018-02-18T04:58:44 | CC-MAIN-2018-09 | 1518891811655.65 | [] | spawn-eden.readthedocs.io |
Choose the Column to Use for Testing a Mining Model
APPLIES TO:
SQL Server Analysis Services
Azure Analysis Services
Before you can measure the accuracy of a mining model, you must decide which outcome it is that you want to assess. Most data mining models require that you choose at least one column to use as the predictable attribute when you create the model. Therefore, when you test the accuracy of the model, you generally must select that attribute to test.
The following list describes some additional considerations for choosing the predictable attribute to use in testing:
Some types of data mining models can predict multiple attributes—such as neural networks, which can explore the relationships between many attributes.
Other types of mining models—such as clustering models—do not necessarily have a predictable attribute. Clustering models cannot be tested unless they have a predictable attribute.
To create a scatter plot or measure the accuracy of a regression model requires that you choose a continuous predictable attribute as the outcome. In that case, you cannot specify a target value. If you are creating anything other than a scatter plot, the underlying mining structure column must also have a content type of Discrete or Discretized.
If you choose a discrete attribute as the predictable outcome, you can also specify a target value, or you can leave the Predict Value field blank. If you include a Predict Value, the chart will measure only the model’s effectiveness at predicting the target value. If you do not specify a target outcome, the model is measured for its accuracy in predicting all outcomes.
If you want to include multiple models and compare them in a single accuracy chart, all models must use the same predictable column.
When you create a cross-validation report, Analysis Services will automatically analyze all models that have the same predictable attribute.
When the option, Synchronize Prediction columns and Values, is selected, Analysis Services automatically chooses predictable columns that have the same names and matching data types. If your columns do not meet these criteria, you can turn off this option and manually choose a predictable column. You might need to do this if you are testing the model with an external data set that has different columns than the model. However, if you choose a column with the wrong the type of data you will either get an error or bad results.
Specify the outcome to predict
Double-click the mining structure to open it in Data Mining Designer.
Select the Mining Accuracy Chart tab.
Select the Input Selection tab.
On the Input Selection tab, under Predictable Column Name, select a predictable column for each model that you include in the chart.
The mining model columns that are available in the Predictable Column Name box are only those with the usage type set to Predict or Predict Only.
If you want to determine the lift for a model, you must select a specific outcome value to measure, for by choosing from the Predict Value list.
See Also
Choose and Map Model Testing Data
Choose an Accuracy Chart Type and Set Chart Options | https://docs.microsoft.com/en-us/sql/analysis-services/data-mining/choose-the-column-to-use-for-testing-a-mining-model | 2018-02-18T05:16:25 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.microsoft.com |
How to: Map a Domain Method Result to a Single Scalar Value
This article is relevant to entity models that utilize the deprecated Visual Studio integration of Telerik Data Access. The current documentation of the Data Access framework is available here.
Persistent and complex types are only one of the types that can be mapped to data returned from a stored procedure. This topic demonstrates how to create a domain method for a stored procedure that returns a scalar type.
To complete this walkthrough, you will need to create a new domain model based on the SofiaCarRental database.
Mapping a Domain Method Result to a Single Scalar Value
Suppose, you have the following stored procedure (see the code-snippet below). It is based on the SofiaCarRental database. The stored procedure is named IsCarAvailable. It takes a single @CarId parameter and returns a Boolean.
CREATE PROCEDURE IsCarAvailable( @CarId INT ) AS BEGIN SELECT Available FROM Cars WHERE CarID = @CarId END
If you have already generated a domain model, you could include the IsCarAvailable stored procedure by using the Update From Database Wizard.
Follow the same steps as in the How to: Create a Domain Method for a Stored Procedure topic. Except this time, you need to select the Scalar result type rather than a Persistent Type. The Scalar option indicates that the result set contains only a single unit of data. After you select the Scalar option, the Scalar drop-down will be enabled so that you can choose the type of the return value. Select System.Boolean from the drop-down.
In case your stored procedure returns a collection of scalar values, then the Scalar option will no longer work for you. In this scenario, you will need to use the Complex Type option. The Scalar option will generate a method that returns a single scalar value.
If you do not know the schema of the result returned by a stored procedure, you could use the Retrieve Result Shape button to display the result schema.
What Just Happened ?
When you click OK, the new IsCarAvailable method will be added to your context class. The signature of the generated method is:
public System.Boolean IsCarAvailable(int? carId) { }
Public Function IsCarAvailable(ByVal carId As Integer?) As System.Boolean End Function
How It Works ?
The function internally uses the OpenAccessContext.ExecuteScalar method to execute the stored procedure and return the first column of the first row in the result set. All other columns and rows will be ignored. For more information, please refer to How the Generated Methods Retrieve the Data.
public System.Boolean IsCarAvailable(int? carId) { OAParameter parameterCarId = new OAParameter(); parameterCarId.ParameterName = "CarId"; parameterCarId.Value = carId; System.Boolean queryResult = this.ExecuteScalar<System.Boolean>("IsCarAvailable", CommandType.StoredProcedure, parameterCarId); return queryResult; }
Public Function IsCarAvailable(ByVal carId As Integer?) As System.Boolean Dim parameterCarId As New Telerik.OpenAccess.Data.Common.OAParameter parameterCarId.ParameterName = "CarId" parameterCarId.Value = carId Dim queryResult As System.Boolean = Me.ExecuteScalar(Of System.Boolean)("IsCarAvailable", CommandType.StoredProcedure, parameterCarId) Return queryResult End Function
The generated context method allows you to call the corresponding stored procedure from your code.
using (EntitiesModel dbContext = new EntitiesModel()) { bool isCarAvailable = dbContext.IsCarAvailable(1); }
Using dbContext As New EntitiesModel() Dim isCarAvailable As Boolean = dbContext.IsCarAvailable(1) End Using
Next Steps
This topic demonstrated you how to map a stored procedure result to a single scalar value. In some scenarios, it will be useful to translate (materialize) the stored procedure result to a collection of persistent or non-persistent types. You could achieve this by selecting the Persistent Type or Complex Type result type in the Domain Method Editor. For more information, read How to: Map a Domain Method Result to a Persistent Type and How to: Map a Domain Method Result to a Complex Type. | https://docs.telerik.com/data-access/deprecated/developers-guide/stored-procedures-and-functions/developer-guide-crud-sp-support-scalar-value.html | 2018-02-18T04:42:25 | CC-MAIN-2018-09 | 1518891811655.65 | [array(['/data-access/images/1devguide-crud-spsupport-scalartypes-010.png',
None], dtype=object) ] | docs.telerik.com |
Filtering and Refreshing the Displayed Data
This article is relevant to entity models that utilize the deprecated Visual Studio integration of Telerik Data Access. The current documentation of the Data Access framework is available here.
In this step you will add functionality to MainViewModel allowing filtering the displayed cars by their manufacturer and refreshing the diplayed data. To do that you will need to implement the appropriate methods and define properties which will expose command objects based on them.
Implementing the required methods
Remember the RetrieveCarsToDisplayMethod which you defined in a previous step. If you take a look at it again, you will notice that when retrieving cars it takes into account the CarMakerFilter, and if it is available, retrieves only the cars which comply with it.
private void RetrieveCarsToDisplay() { this.ClearCache(); int? selectedCarId = null; //save the Id of the currently selected car if (this.SelectedCar != null) { selectedCarId = this.SelectedCar.CarID; } if (string.IsNullOrEmpty(this.carMakerFilter)) { //retrieve all cars this.CarsToDisplay = this.context.Cars.ToList(); } else { //retrieve cars which comply to the filter this.CarsToDisplay = this.context.Cars.Where(car => car.Make == this.carMakerFilter).ToList(); } //restore the selected car using its saved Id if (selectedCarId.HasValue) { this.SelectedCar = this.CarsToDisplay.FirstOrDefault(car => car.CarID == selectedCarId); } this.Status = string.Format(STATUS_MESSAGE, this.CarsToDisplay.Count); }
Private Sub RetrieveCarsToDisplay() Me.ClearCache() Dim selectedCarId As Integer? = Nothing 'save the Id of the currently selected car If IsNothing(Me.SelectedCar) = False Then selectedCarId = Me.SelectedCar.CarID End If If String.IsNullOrEmpty(Me.CarMakerFilter) Then 'retrieve all cars Me.CarsToDisplay = Me._context.Cars.ToList() Else 'retrieve cars which comply to the filter Me.CarsToDisplay = Me._context.Cars.Where(Function(car) car.Make = Me.CarMakerFilter).ToList() End If 'restore the selected car using its saved Id If selectedCarId.HasValue Then Me.SelectedCar = Me.CarsToDisplay.FirstOrDefault(Function(car) car.CarID = selectedCarId) End If Me.Status = String.Format(STATUS_MESSAGE, Me.CarsToDisplay.Count) End Sub
In MainViewModel add a new method called FilterCars. In this method, all you need to do is call RetrieveCarsToDisplay which will retrieve the cars according to the filter and set the filtering flag. This method will be used in the filter command.
private void FilterCars() { this.RetrieveCarsToDisplay(); this.isFilteringActive = (string.IsNullOrEmpty(this.CarMakerFilter) == false); }
Private Sub FilterCars() Me.RetrieveCarsToDisplay() Me._isFilteringActive = (String.IsNullOrEmpty(Me.CarMakerFilter) = False) End Sub
In MainViewModel add a method called ResetFilter. This method will be used to form the reset command - it clears the filter, clears the filtering flag and calls RetrieveCarsToDisplay which will retrieve all cars from the database.
private void ResetFilter() { this.CarMakerFilter = string.Empty; this.isFilteringActive = false; this.RetrieveCarsToDisplay(); }
Private Sub ResetFilter() Me.CarMakerFilter = String.Empty Me._isFilteringActive = False Me.RetrieveCarsToDisplay() End Sub
In MainViewModel define a method called CanResetFilter. This method will be used to form the reset command and will decide if it can be executed.
private bool CanResetFilter() { return this.isFilteringActive; }
Private Function CanResetFilter() As Boolean Return Me._isFilteringActive End Function
In MainViewModel add a method named Refresh. This method clears the CarMakerFilter if the filtering flag is not raised and calls RetrieveCarsToDisplay, which will retrieve the cars from the database according to the current filter. It will be used to form the refresh command.
private void Refresh() { if (this.isFilteringActive == false) { this.CarMakerFilter = string.Empty; } this.RetrieveCarsToDisplay(); }
Private Sub Refresh() If Me._isFilteringActive = False Then Me.CarMakerFilter = String.Empty End If Me.RetrieveCarsToDisplay() End Sub
Exposing commands to the the view
At this point you have defined the methods required to execute filter, reset and refresh operations. However they are not yet exposed to the view and thus cannot be called. You will need to expose command objects for those methods via properties.
In MainViewModel add a property FilterCarsCommand. In its getter, initialize a RelayCommand object based on the FilterCars method and return it.
public RelayCommand FilterCarsCommand { get { this.command = new RelayCommand(this.FilterCars); return this.command; } set { this.command = value; } }
Public Property FilterCarsCommand As RelayCommand Get Me._command = New RelayCommand(AddressOf Me.FilterCars) Return Me._command End Get Set(value As RelayCommand) Me._command = value End Set End Property
In MainViewModel add a property ResetFilterCommand. In its getter, initialize a RelayCommand object based on the ResetFilter and CanResetFilter methods and return it.
public RelayCommand ResetFilterCommand { get { this.command = new RelayCommand(this.ResetFilter, this.CanResetFilter); return this.command; } set { this.command = value; } }
Public Property ResetFilterCommand As RelayCommand Get Me._command = New RelayCommand(AddressOf Me.ResetFilter, AddressOf Me.CanResetFilter) Return Me._command End Get Set(value As RelayCommand) Me._command = value End Set End Property
In MainViewModel add a property RefreshCommand. In its getter, initialize a RelayCommand object based on the Refresh method and return it.
public RelayCommand RefreshCommand { get { this.command = new RelayCommand(this.Refresh); return this.command; } set { this.command = value; } }
Public Property RefreshCommand As RelayCommand Get Me._command = New RelayCommand(AddressOf Me.Refresh) Return Me._command End Get Set(value As RelayCommand) Me._command = value End Set End Property
Checkpoint
The sample application can now filter the displayed cars by their manufacturer, reset the available filter and refresh the displayed data.
Next step: Deleting Cars from the Database | https://docs.telerik.com/data-access/deprecated/quick-start-scenarios/wpf/quickstart-wpf-filter-reset-refresh.html | 2018-02-18T04:42:44 | CC-MAIN-2018-09 | 1518891811655.65 | [array(['/data-access/images/1quickstart-wpf-filter-reset-refresh-010.png',
None], dtype=object) ] | docs.telerik.com |
Workflow¶
GeoServer documentation aims to mirror the development process of the software itself. The process for writing/editing documentation is as follows:
- Step 1: Check out source
- Step 2: Make changes
- Step 3: Build and test locally
- Step 4: Commit changes
Repository¶
This documentation source code exists in the same repository as the GeoServer source code:
Within this repository are the various branches and tags associated with releases, and the documentation is always inside a
doc path. Inside this path, the repository contains directories corresponding to different translations. The languages are referred to by a two letter code, with
en (English) being the default.
For example, the path review the English docs is:
Inside this directory, there are four directories:
user/ developer/ docguide/ theme/
Software¶
You must use a version control software to retrieve files.
-
-
-
- Or use git on the command line
Follow these instructions to fork the GeoServer repository:
Make changes¶
Documentation in Sphinx is written in reStructuredText, a lightweight markup syntax. For suggestions on writing reStructuredText for use with Sphinx, please see the section on Sphinx Syntax. For suggestions about writing style, please see the Style Guidelines.
Build and test locally¶
You should install Sphinx on your local system to build the documentation locally and view any changes made. Sphinx builds the reStructuredText files into HTML pages and PDF files.
HTML¶
On a terminal, navigate to your GeoServer source checkout and change to the
doc/en/userdirectory (or whichever project you wish to build).
Run the following command:
make html
The resulting HTML pages will be contained in
doc/en/user/build/html.
Watch the output of the above command for any errors and warnings. These could be indicative of problems with your markup. Please fix any errors and warnings before continuing.
PDF¶
On a terminal, navigate to your GeoServer source checkout and change to the
doc/en/userdirectory (or whichever project you wish to build).
Run the following command:
make latex
The resulting LaTeX pages will be contained in
doc/en/user/build/latex.
Change to the
doc/en/user/build/latexdirectory.
Run the following command:
pdflatex [GeoServerProject].tex
This will create a PDF file called
GeoServerProject.pdfin the same directory
Note
The exact name of
GeoServerProjectdepends on which project is being built. However, there will only be one file with the extension
.texin the
doc/en/user/build/latexdirectory, so there should hopefully be little confusion.
Watch the output of the above command for any errors and warnings. These could be indicative of problems with your markup. Please fix any errors and warnings before continuing.
Commit changes¶
Warning
If you have any errors or warnings in your project, please fix them before committing!
The final step is to commit the changes to the repository. If you are using Subversion, the command to use is:
git add [path/file(s)] git commit -m "message describing your fix" git push
where
path/file(s) is the path and file(s) you wish to commit to the repository.
When ready return to the GitHub website and submit a pull request:
The GitHub website provides a link to CONTRIBUTING.md outlining how we can accept your patch. Small fixes may be contributed on your behalf, changes larger than a file (such as a tutorial) may require some paperwork. | http://docs.geoserver.org/latest/en/docguide/workflow.html | 2018-02-18T04:53:42 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.geoserver.org |
Remote Administration (RA) follows a specific workflow implemented by automation that takes into account subscription-specific preferences. If you are a Remote Administration customer, setting your application update preferences correctly ensures that security updates can be smoothly integrated into your workflow.
Reviewing your RA settings
To review your RA application update preferences, completing the following steps:
- Select the application whose Remote Administration settings you want to view.
- In the left menu, click RA.
Acquia Cloud displays the RA update settings in place for your application.
Modifying your RA settings
Remote Administration customers have several available update methods, and whichever method is selected:
- Update and Deploy (default) - When a security update is required, Acquia pulls a branch according to your preferences, applies security updates, and pushes the branch to the repository.
The new branch is then deployed to your RA environment for testing. Acquia lets you know using a ticket that the secure branch is available for testing. Code will not be deployed to the Production environment without your explicit approval.
- Inform only - Acquia informs you of pending updates using a security update notification. No further action is taken without your specific request.
- Do not inform - Acquia does not inform you of any pending updates. No further action is taken without your specific request.
- Depending on your selection in the Update process list, Acquia Cloud may display additional fields regarding the implementation details for your selected method.
- Click Save.
Remote Administration will now use your selected update settings for your application. | https://docs.acquia.com/index.php/node/16646 | 2018-02-18T05:12:31 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.acquia.com |
Microsoft network server: Amount of idle time required before suspending session
Each Server Message Block (SMB) session consumes server resources. Establishing numerous null sessions will cause the server to slow down or possibly fail. A malicious user might repeatedly establish SMB sessions until the server stops responding; at this point, SMB services will become slow or unresponsive.
The Microsoft network server: Amount of idle time required before suspending session policy setting determines the amount of continuous idle time that must pass in an SMB session before the session is suspended due to inactivity. You can use this policy setting to control when a computer suspends an inactive SMB session. The session is automatically reestablished when client computer activity resumes.
Possible values
A user-defined number of minutes from 0 through 99,999
For this policy setting, a value of 0 means to disconnect an idle session as quickly as is reasonably possible. The maximum value is 99999, which is 208 days. In effect, this value disables the policy.
Not defined
Best practices
-.
Countermeasure
The default behavior on a server mitigates this threat by design in Windows Server 2003 and later.
Potential impact
There is little impact because SMB sessions are reestablished automatically if the client computer resumes activity. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852253(v=ws.11) | 2018-02-18T05:08:30 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.microsoft.com |
Help Center
Local Navigation
I receive a prompt to switch networks when I stream a song or video
If you receive a prompt to switch networks, you cannot box.
Next topic: I cannot open a new tab
Previous topic: I cannot play a song or video on a web page
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/22178/I_receive_a_prompt_to_switch_browsers_60_1131209_11.jsp | 2015-07-28T03:57:51 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.blackberry.com |
Hortonworks Data Platform deploys Apache Oozie for your Hadoop cluster.
Oozie is a server-based workflow engine specialized in running workflow jobs with actions that execute Hadoop jobs, such as MapReduce, Pig, Hive, Sqoop, HDFS operations, and sub-workflows. Oozie supports coordinator jobs, which are sequences of workflow jobs that are created at a given frequency and start when all of the required input data is available. A command-line client and a browser interface allow you to manage and administer Oozie jobs locally or remotely.
For additional Oozie documentation, use the following resources:
Developer Documentation
Administrator Documentation | http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.0/bk_dataintegration/content/ch_using-oozie.html | 2015-07-28T03:26:05 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.hortonworks.com |
UAB Galaxy RNA Seq Step by Step Tutorial
-.
Cufflinks QC
- Check that all the FPKM values in these files are not zero.
- Check that the gene symbols show up where you want them. If all the gene_id's are "CUFF.#.#", it will be hard to figure things out.
- Visualize assembled_transcripts.
cuffcompare outputs
- combined_transcripts.gtf
- For this exercise, we only use this one output, which will be the augmented genome annotation we'll use in our final step, cuffdiff.
Fold Change Between Conditions: CuffDiff
Cuffdiff does the real work determining differences between experimental conditions. In our case, we have two conditions (control, drugged), with 1 replicate each. However, cuffdiff can handle many conditions, each with several replicates, in one go.. | https://docs.uabgrid.uab.edu/wiki/UAB_Galaxy_RNA_Seq_Step_by_Step_Tutorial | 2015-07-28T03:24:54 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.uabgrid.uab.edu |
You can set limits on the size of tables that users can create.
Overview of User Query Limits
- You can restrict the number of fields, rows, columns, wafers, or total cells that users can add to a table.
- The query limits apply to groups rather than individual users. If you have not already done so, you will need to organise your users into groups. See Users, Groups and Permissions for more information about this step.
- There are also some default limits that apply to all users. The defaults are designed to prevent users from creating very large tables.
- If a user belongs to multiple groups, and you set different limits for those groups, then the lower limit will apply to that user.
- If a limit is not defined for a particular group (or is defined incorrectly) then there will be no restrictions applied to that group (apart from the default limits).
- If a user loads a predefined table with more items than allowed by their limits or you have configured mandatory fields and these fields push the user over their limit, then the table will open but the user will not be able to add anything else to the table unless they remove some items to get under their configured limit.
- If a user hits one of the axis limits, then they can continue to add items to the other axes until they hit their limit on that axis too. For example if a user hits the column limit they can continue to add to the table rows until they reach the row limit.
Configure the Limits
Query limits are configured in SuperADMIN, using the
cfg command. You will not need to restart SuperWEB2 to pick up the changes you make in SuperADMIN (although if you currently have SuperWEB2 Table View open, you will need to go back to the catalogue and reopen the dataset to see any changes). This also applies to any users who currently have Table View open. They will not see the effect of your changes until they reselect the dataset from the catalogue.
In addition, you should note that any changes to users and groups are not applied until users log out and back in again. So if you have just changed group memberships (with the
account command) as part of setting up the query limits, then those changes will not be applied until users log in again.
Configure the Default Limits for All Users
The
defaultQueryLimits are active by default and apply to all users.
You can check the current settings using the following command:
> cfg global superweb2.rules.defaultQueryLimits superweb2.rules.defaultQueryLimits : { "cells":10000000, "wafers":1000, "columns":1000000, "rows":1000000, "fields":10 }
These defaults will prevent any user from creating a table with more than 10,000,000 cells, 1,000 wafers, 1,000,000 columns, 1,000,000 rows or 10 fields.
You should not need to change these settings. However, if you wish to adjust the limits, you can use the following commands:
cfg global superweb2.rules.defaultQueryLimits.cells set <cells> cfg global superweb2.rules.defaultQueryLimits.wafers set <wafers> cfg global superweb2.rules.defaultQueryLimits.columns set <columns> cfg global superweb2.rules.defaultQueryLimits.rows set <rows> cfg global superweb2.rules.defaultQueryLimits.fields set <fields>
For example, if you wish to decrease the total number of cells users can include in a table to 1,000,000, you would use the following command
> cfg global superweb2.rules.defaultQueryLimits.cells set 1000000 superweb2.rules.defaultQueryLimits.cells : Updated
Configure User/Group Limits
Before starting to configure the group limits, it is a good idea to use the following command to check whether any group limits are currently defined:
cfg global superweb2.rules.groupQueryLimits
SuperWEB2 will either display the list of existing limits (if some are already defined), or display "not found".
Scenario A: No Existing Limits Defined
If there are no existing limits defined, SuperADMIN displays the message "not found":
> cfg global superweb2.rules.groupQueryLimits superweb2.rules.groupQueryLimits : not found
To create a new set of limits for a particular group, use the following commands:
cfg global superweb2.rules.groupQueryLimits[<index>].groupId set <group_id> cfg global superweb2.rules.groupQueryLimits[<index>].limits.fields set <fields> cfg global superweb2.rules.groupQueryLimits[<index>].limits.cells set <cells> cfg global superweb2.rules.groupQueryLimits[<index>].limits.rows set <rows> cfg global superweb2.rules.groupQueryLimits[<index>].limits.columns set <columns> cfg global superweb2.rules.groupQueryLimits[<index>].limits.wafers set <wafers>
Where:
<index>is the index for this set of query limits. Each set of limits has a unique index number, starting from zero.
<group_id>is the ID of the group that this set of limits applies to.
<fields>,
<cells>,
<rows>,
<columns>and
<wafers>are the maximum number of each item that users belonging to this group will be allowed to add to a table.
For example, the following commands specify the limits for users who belong to a group called Guests. As this is the first set of limits we have defined it has the index value of 0.
cfg global superweb2.rules.groupQueryLimits[0].groupId set "Guests" cfg global superweb2.rules.groupQueryLimits[0].limits.fields set 5 cfg global superweb2.rules.groupQueryLimits[0].limits.cells set 50 cfg global superweb2.rules.groupQueryLimits[0].limits.rows set 10 cfg global superweb2.rules.groupQueryLimits[0].limits.columns set 10 cfg global superweb2.rules.groupQueryLimits[0].limits.wafers set 1
In this example, any users who belong to the Guests group will not be able to create a table that contains any of the following:
- More than 5 fields.
- More than 50 cells in total.
- More than 10 rows.
- More than 10 columns.
- More than 1 wafer.
To add a second set of limits, use the same commands but increment the index value. For example:
cfg global superweb2.rules.groupQueryLimits[1].groupId set "Administrators" cfg global superweb2.rules.groupQueryLimits[1].limits.fields set 100 cfg global superweb2.rules.groupQueryLimits[1].limits.cells set 10000 cfg global superweb2.rules.groupQueryLimits[1].limits.rows set 100 cfg global superweb2.rules.groupQueryLimits[1].limits.columns set 100 cfg global superweb2.rules.groupQueryLimits[1].limits.wafers set 10
Scenario B: Existing Limits Defined
If there are already some existing limits defined, then SuperADMIN displays the details. For example:
> cfg global superweb2.rules.groupQueryLimits superweb2.rules.groupQueryLimits : [ { "limits":{ "cells":50, "wafers":1, "columns":10, "rows":10, "fields":5 }, "groupId":"Guests" }, { "limits":{ "cells":10000, "wafers":10, "columns":100, "rows":100, "fields":100 }, "groupId":"Administrators" } ]
In this example there are two existing sets of limits defined, one for Guests and one for Administrators (each one is enclosed in curly brackets).
You can either add a new set of limits or update one of these existing ones:
To add a new set of limits for a third group, simply make sure that you use the next index value up. In this example, there are already two sets of limits, so to create a third one you must use the index value of 2 (because the index numbering starts at 0). For example:
cfg global superweb2.rules.groupQueryLimits[2].groupId set "Managers" cfg global superweb2.rules.groupQueryLimits[2].limits.fields set 10 cfg global superweb2.rules.groupQueryLimits[2].limits.cells set 1000 cfg global superweb2.rules.groupQueryLimits[2].limits.rows set 10 cfg global superweb2.rules.groupQueryLimits[2].limits.columns set 10 cfg global superweb2.rules.groupQueryLimits[2].limits.wafers set 1
To update an existing set of limits, simply use the index value of the existing rule. For example, to change the row limit for the Guests group, use the index value of 0:
cfg global superweb2.rules.groupQueryLimits[0].limits.rows set 15
When you have finished configuring the limits, you may wish to login to SuperWEB2 with a user account that belongs to the relevant group and check the limits are being applied correctly.
Deactivate User Query Limits
Query limits are applied by SuperWEB2's rules engine. The query limit rule is activated by default.
If you do not want to have any user query limits (including the default query limits) then you can turn this rule off by editing <tomcat_home>\webapps\webapi\WEB-INF\data\.repository\RulesEngine.xml in a text editor:
Locate the following section of RulesEngine.xml:
="noConcatenationRule"/> --> <rules:rule-name
Add comments around the
GroupQueryLimitRuleto deactivate it:
="GroupQueryLimitRule"/> -->
Save your changes to the file.
Restart Tomcat or the SuperWEB2 service to apply the change to RulesEngine.xml.
This change is not recommended for production use, as it will deactivate the default query limits, in addition to the user/group specific query limits. If you just want to turn off all user/group limits, then it is better to leave the rule active in RulesEngine.xml and use the following command in SuperADMIN instead:
cfg global superweb2.rules.groupQueryLimits remove
This will remove all defined group query limits, but will leave the default limits in place. | http://docs.spacetimeresearch.com/display/SuperSTAR98/Configure+User+Query+Limits+-+SuperWEB2 | 2017-05-22T17:21:40 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.spacetimeresearch.com |
Example: Testing SAP
Overview
This document contains a basic example for testing SAP-based applications.
The SAP connector requires a set of specific libraries which are listed in the SAP Connector requirements documentation. These libraries need to be added manually in order to test the SAP-based application correctly.
Find below the configuration needed to test a SAP connector using studio or using Maven respectively:
Configuration
From Studio
In your Studio environment:
Add the SAP libraries to the classpath:
Navigate to Build Path > Configure Build Path
In the Libraries tab, click the Add Library … button
Select Anypoint Connectors Dependencies
Select the SAP Connector option.
You may also need to make the
java.library.pathpoint to the folder where the native libraries are located.
The MUnit Studio plugin will try to find these native libraries and configure them automatically when you try to run your tests. In case they are not found, you need to add the following vm argument in the run configuration:
In Studio’s top navigation bar, click run
Click Run Configurations…
Select the Arguments tab
In the VM Arguments dialog box, type the path to your libraries with the
java.library.pathargument.
Example:
-Djava.library.path=path/to/lib
From Maven
One way of adding the libraries to the classpath is using the additionalClasspathElements parameter in the maven plugin.
You can provide the path to each of the SAP libraries that you want to add:
Additionally, you need to make the
java.library.path property point to the native libraries directory, similar to how it’s done in Studio.
To accomplish this, you can use the argLine parameter to add the additional vm argument. | https://docs.mulesoft.com/munit/v/1.3/testing-sap | 2017-05-22T17:14:10 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.mulesoft.com |
This topic shows how to use the Azure portal with Azure Resource Manager to manage your Azure resources. To learn about deploying resources through the portal, see Deploy resources with Resource Manager templates and Azure portal.
Currently, not every service supports the portal or Resource Manager. For those services, you need to use the classic portal. For the status of each service, see Azure portal availability chart.
Manage resource groups
A resource group.
To see all the resource groups in your subscription, select Resource groups.
To create an empty resource group, select Add.
Provide a name and location for the new resource group. Select Create.
You may need to select Refresh to see the recently created resource group.
To customize the information displayed for your resource groups, select Columns.
Select the columns to add, and then select Update.
- To learn about deploying resources to your new resource group, see Deploy resources with Resource Manager templates and Azure portal.
For quick access to a resource group, you can pin the blade to your dashboard.
The dashboard displays the resource group and its resources. You can select either the resource groups or any of its resources to navigate to the item.
Tag resources
You can apply tags to resource groups and resources to logically organize your assets. For information about working with tags, see Using tags to organize your Azure resources.
To view the tags for a resource or resource group, select the Tags icon.
You see the existing tags for the resource. If you have not previously applied tags, the list is empty.
To add a tag, type a key and value, or select an existing one from the dropdown menu. Select Save.
To view all the resources with a tag value, select > (More services), and enter the word Tags into the filter text box. Select Tags from the available options.
You see a summary of the tags in your subscriptions.
Select any of the tags to display the resources and resource groups with that tag.
Select Pin blade to dashboard for quick access.
You can select the pinned tag from the dashboard to see the resources with that tag.
Monitor resources
When you select a resource, the resource blade presents default graphs and tables for monitoring that resource type.
Select a resource and notice the Monitoring section. It includes graphs that are relevant to the resource type. The following image shows the default monitoring data for a storage account.
You can pin a section of the blade to your dashboard by selecting the ellipsis (...) above the section. You can also customize the size the section in the blade or remove it completely. The following image shows how to pin, customize, or remove the CPU and Memory section.
After pinning the section to the dashboard, you will see the summary on the dashboard. And, selecting it immediately takes you to more details about the data.
To completely customize the data you monitor through the portal, navigate to your default dashboard, and select New dashboard.
Give your new dashboard a name and drag tiles onto the dashboard. The tiles are filtered by different options.
To learn about working with dashboards, see Creating and sharing dashboards in the Azure portal.
Manage resources
In the blade for a resource, you see the options for managing the resource. The portal presents management options for that particular resource type. You see the management commands across the top of the resource blade and on the left side.
From these options, you can perform operations such as starting and stopping a virtual machine, or reconfiguring the properties of the virtual machine.
Move resources
If you need to move resources to another resource group or another subscription, see Move resources to new resource group or subscription.
Lock resources
You can lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. For more information, see Lock resources with Azure Resource Manager..
View your subscription and costs
You can view information about your subscription and the rolled-up costs for all your resources. Select Subscriptions and the subscription you want to see. You might only have one subscription to select.
Within the subscription blade, you see a burn rate.
And, a breakdown of costs by resource type.
Export template
After setting up your resource group, you may want to view the Resource Manager template for the resource group. Exporting the template offers two benefits:
- You can easily automate future deployments of the solution because the template contains all the complete infrastructure.
- You can become familiar with template syntax by looking at the JavaScript Object Notation (JSON) that represents your solution.
For step-by-step guidance, see Export Azure Resource Manager template from existing resources.
Delete resource group or resources
Deleting a resource group deletes all the resources contained within it. You can also delete individual resources within a resource group. You want to exercise caution when you delete a resource group because there might be resources in other resource groups that are linked to it. Resource Manager does not delete linked resources, but they may not operate correctly without the expected resources.
Next Steps
- To view activity logs, see Audit operations with Resource Manager.
- To view details about a deployment, see View deployment operations.
- To deploy resources through the portal, see Deploy resources with Resource Manager templates and Azure portal.
- To manage access to resources, see Use role assignments to manage access to your Azure subscription resources.
- For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see Azure enterprise scaffold - prescriptive subscription governance. | https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-portal | 2017-05-22T17:28:17 | CC-MAIN-2017-22 | 1495463605485.49 | [array(['media/resource-group-portal/manage-resources.png',
'manage resources'], dtype=object)
array(['media/resource-group-portal/select-subscription.png',
'subscription'], dtype=object)
array(['media/resource-group-portal/burn-rate.png', 'burn rate'],
dtype=object)
array(['media/resource-group-portal/cost-by-resource.png',
'resource cost'], dtype=object)
array(['media/resource-group-portal/delete-group.png', 'delete group'],
dtype=object) ] | docs.microsoft.com |
In this section:
You can monitor and collect coverage data from .NET managed code during manual or automated functional tests performed on a running web application that is deployed on IIS.
The following components are required for collecting coverage:.
The following steps are required to prepare the application under test (AUT):
Run the following test configuration on the solution:
The dottestcli console output will indicate where the static coverage data is saved:
By default, coverage is measured for the entire web application. You can customize the scope of coverage by adding the following switches when collecting static coverage to measure specific parts of the application (see Configuring the Test Scope, for usage information)::
Invoke the dotTEST IIS Manager tool on this machine to enable runtime coverage collection inside IIS::
Go to the following address to check the status of the coverage agent:
You should receive the following response:
You can collect coverage information for multiple users that are simultaneously accessing the same web application server. This requires launching the dotTEST IIS Manager with the
-multiuser switch:
See the Coverage Agent Manager (CAM) section of the DTP documentation for details..
For tests executed by SOAtest, the SOAtest XML report will need to be uploaded to DTP. See the "Uploading Rest Results to DTP" section in the Application Coverage topic in the SOAtest documentation for details. with).
You can stop the process of collecting dynamic coverage data in one of the following ways:
Write
exit in the open console when the following message will be printed to the output to stop dottest_iismanager:
Send a request to the service by entering the following URL in the browser:
If any errors occur when dottest_iismanager exits, which prevent the clean-up of the Web Server environment, then execute dottest_iismanager with the
-stop option to bring back the original Web Server environment and settings:
You can use the Coverage Explorer in DTP to review the application coverage achieved during test execution. See the DTP documentation for details on viewing coverage information. | https://docs.parasoft.com/plugins/viewsource/viewpagesrc.action?pageId=38643244 | 2020-10-19T22:12:43 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.parasoft.com |
Some objects may show only a limited set of details, or may not have any details available at the time of execution.
The tab may provide the Block Object option for adding the object to the User-Defined Suspicious Object List.
You can further examine objects with "Malicious" ratings in Threat Connect or VirusTotal. | https://docs.trendmicro.com/en-us/smb/worry-free-business-security-services-67-server-help/detection-and-respon/noteworthy-events_001/analysis-chains/object-details_-prof.aspx | 2020-10-19T22:06:21 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.trendmicro.com |
To place Light Probes in your Scene, you must use a GameObject with a Light Probe Group component attached. You can add a Light Probe Group component from the menu: Component > Rendering > Light Probe Group.
You can add the Light Probe Group component to any GameObject in the Scene. However, it’s good practice to create a new empty GameObject (menu: GameObject > Create Empty) and add it to that, to decrease the possibility of accidentally removing it from the Project.
Under certain circumstances, Light Probes exhibit an unwanted behaviour called “ringing”. This often happens when there are significant differences in the light surrounding a Light Probe. For example, if you have bright light on one side of a Light Probe, and no light on the other side, the light intensity can “overshoot” on the back side. This overshoot causes a light spot on the back side.
There are several ways to deal with this:
When editing a Light Probe Group, you can manipulate individual Light Probes in a similar way to GameObjects. However, Light Probes are not GameObjects; they are a set of points in the Light Probe Group component.
When you begin editing a new Light Probe Group, you start with a default formation of eight probes arranged in a cube, as shown below:
You can now use the controls in the Light Probe Group inspector to add new probe positions to the group. The probes appear in the Scene as yellow spheres which you can position in the same way as GameObjects. You can also select and duplicate individual probes or in groups, by using the usual “duplicate” keyboard shortcut (ctrl+d/cmd+d).
Remember to disable the Light Probe Group edit mode when you’ve finished editing the Light Probes, so that you can continue to edit and move GameObjects in your Scene as normal.
Unlike lightmaps, which usually have a continuous resolution across the surface of an object, the resolution of the Light Probe information depends on how closely packed you choose to position the Light Probes.
To optimise the amount of data that Light Probes store, and the amount of computation done while the game is playing, you should generally attempt to place as few Light Probes as possible. However, you should also place enough Light Probes so Light Probes are placed more densely near and between the buildings where there is more contrast and color variation, and less densely along the road, where the lighting does not significantly change.
The simplest approach to positioning Light Probes is to arrange them in a regular 3D grid pattern. While this setup is simple and effective, it is likely to consume more memory than necessary, and you may have lots of redundant Light Probes. For example, in the Scene above, if there were lots of Light Probes placed along the road it would be a waste of resources. The light does not change much along the length of the road, so many Light Probes would be storing almost identical lighting data to their neighbouring Light Probes. In situations like this, it is much more efficient to interpolate this lighting data between fewer, more spread-out, Light Probes.
Light Probes individually do not store a large amount of information. From a technical perspective, each probe is a spherical, panoramic HDR image of the view from the sample point, encoded using Spherical Harmonics L2 which is stored as 27 floating point values. However, in large Scenes with hundreds of Light Probes they can add up, and having unnecessarily densely packed Light Light Probes.
In the example below, you can see on the left the Light Probes are arranged only across the surface of the ground. This does not result in good lighting because the Light Probe system cannot calculate sensible tetrahedral volumes from the Light Probes.
On the right, the Light Probes are arranged in two layers, some low to the ground and others higher up, so that together they form a 3D volume made up of lots of individual tetrahedra. This is a good layout.
Unity’s real-time GI allows moving lights to cast dynamic bounced light against your static scenery. However, you can also receive dynamic bounced light from moving lights on moving GameObjects when you are using Light Probes.
Light Probes therefore perform two very similar but distinct functions - they store static baked light, and at run time they represent sampling points for dynamic real-time global illumination (GI, or bounced light) to affect the lighting on moving objects.
Therefore, if you are using dynamic moving lights, and want real-time Light Probes - because the light does not change across a wide area. However, if you plan to have moving lights within this area, and you want moving objects to receive bounced light from them, you need a more dense network of Light Probes within the area so that there is a high enough level of accuracy to match your light’s range and style.
How densely placed your Light Probes need to be varies depending on the size and range of your lights, how fast they move, and how large the moving objects are that you want to receive bounced light.
Your choice of Light Probe positions must take into account that the lighting is interpolated between sets of Light Probes. Problems can arise if your Light “bleeds” across the dark gap, on moving objects. This is because the lighting is being interpolated from one bright point to another, with no information about the dark area in-between.
If you are using Realtime or Mixed lights, this problem may be less noticeable, because only the indirect light bleeds across the gap. The problem is more noticable if you are using fully baked lights, is occurring between one brightly lit end of the street to the other.
This is an undesired effect - the ambulance remains brightly lit while passing through a dark area, because no Light Probes were placed in the dark area.
To solve this, you should place more Light Probes in the dark area, as shown below:
Now the Scene has Light Probes in the dark area too. As a result, the moving ambulance takes on the darker lighting as it travels from one side of the Scene to the other.
2018–10–17 Page amended
2017–06–08 Page published
Remove Ringing added in 2018.3
Light Probes updated in 5.6 | https://docs.unity3d.com/es/2020.2/Manual/class-LightProbeGroup.html | 2020-10-19T22:22:00 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.unity3d.com |
Internet Data Query Files
Note
Indexing Service is no longer supported as of Windows XP and is unavailable for use as of Windows 8. Instead, use Windows Search for client side search and Microsoft Search Server Express for server side search.
Internet Data Query files (files with an .idq extension) for Indexing Service, together with the form parameters, specify the query that Indexing Service will run. The .idq file is divided into two sections: the names section (which is optional and need not be supplied for standard queries) and the query section.
This part of the documentation contains the following topics:
- Names Section of .Idq Files: Defines nonstandard column names in an .idq file that can be referred to in a query.
- Defining Friendly Names: Examples of defining friendly names for properties.
- Defining Custom Properties: Examples of custom properties from various IFilter implementations, including Microsoft Office and HTML.
- Query Section of .Idq Files: Explains the parameters that can be used in a query.
- Effect of Parameters on Query Performance: Tells how to set parameters for the best performance.
- Sequential vs. Nonsequential Execution: Describes the two ways to execute a query.
- Enumerated vs. Indexed Resolution: Describes conditions under which queries are enumerated against the system disk rather than the Content Index.
- Deferring Nonindexed Trimming: Describes how to defer the trimming of records returned from certain queries.
Note
All paths to .idq files must be the full path name from a virtual root, not a relative path or a physical path. In other words, all paths must start with a forward slash and cannot contain "." or ".." components. See the following examples:
Valid Paths
/scripts/myquery.idq
/iissamples/issamples/query.idq
Invalid Paths
c:\iissrv\scripts\myquery.idq
scripts/query.idq
/samples/../scripts/example.idq
You must put the .idq files into a virtual directory with Execute or Script permission. You cannot put .idq files into a virtual directory pointing to a remote Universal Naming Convention (UNC) share.
Note
Indexing Service does not support physical paths longer than the Microsoft Windows 2000 shell limit (260 characters). | https://docs.microsoft.com/en-us/previous-versions/windows/desktop/indexsrv/internet-data-query-files | 2020-10-19T22:15:44 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.microsoft.com |
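To make the pieces above concrete, a minimal .idq file might look like the sketch below. The parameter names come from the Query Section documentation referenced above; the values (catalog path, columns, template, limits) are placeholders you would adapt to your own catalog and form.

```
[Query]
CiColumns=filename,size,rank,characterization
CiRestriction=%CiRestriction%
CiMaxRecordsInResultSet=300
CiScope=/
CiCatalog=d:\
CiTemplate=/scripts/results.htx
```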
How to self-sign certificates
This topic describes one way you can use OpenSSL to self-sign certificates for securing forwarder-to-indexer and Inter-Splunk communication.
If you already possess or know how to generate the needed certificates, you can skip this topic and go directly to the configuration steps, described later in this manual:
- How to prepare your signed certificates for Splunk
- Configure Splunk forwarding to use your own certificates
- About securing inter-Splunk communication
Self-signed certificates are best for data communication that occurs within an organization or between known entities. If you communicate with unknown entities, we recommend CA-signed certificates to secure your data.
Before you begin
In this discussion,
$SPLUNK_HOME refers to the Splunk Enterprise installation directory:
- For Windows, Splunk software is installed in
C:\Program Files\splunk by default
- For most Unix platforms, the default installation directory is at
/opt/splunk
- For Mac OS, it is
/Applications/splunk
See the Administration Guide to learn more about working with Windows and *nix.
Create a new directory for your certificates
Create a new directory to work from when creating your certificates. In our example, we are using
$SPLUNK_HOME/etc/auth/mycerts:
# mkdir $SPLUNK_HOME/etc/auth/mycerts # cd $SPLUNK_HOME/etc/auth/mycerts
This ensures you do not overwrite the Splunk-provided certificates that reside in
$SPLUNK_HOME/etc/auth.
Create the root certificate
First you create a root certificate that serves as your root certificate authority. You use this root CA to sign the server certificates that you generate and distribute to your Splunk instances.
Generate a private key for your root certificate
1. Create a key to sign your certificates.
In *nix:
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out myCAPrivateKey.key 2048
In Windows:
$SPLUNK_HOME\bin\splunk cmd openssl genrsa -aes256 -out myCAPrivateKey.key 2048
2. When prompted, create a password for the key.
When the step is completed, the private key
myCAPrivateKey.key appears in your directory.
Generate and sign the certificate
1. Generate a new Certificate Signing Request (CSR):
In *nix:
$SPLUNK_HOME/bin/splunk cmd openssl req -new -key myCAPrivateKey.key -out myCACertificate.csr
In Windows:
$SPLUNK_HOME\bin\splunk cmd openssl req -new -key myCAPrivateKey.key -out myCACertificate.csr
2. When prompted, enter the password you created for the private key in
$SPLUNK_HOME/etc/auth/mycerts/myCAPrivateKey.key.
3. Provide the requested certificate information, including the common name if you plan to use common name checking in your configuration.
A new CSR
myCACertificate.csr appears in your directory.
4. Use the CSR
myCACertificate.csr to generate the public certificate:
In *nix:
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in myCACertificate.csr -sha512 -signkey myCAPrivateKey.key -CAcreateserial -out myCACertificate.pem -days 1095
In Windows:
$SPLUNK_HOME\bin\splunk cmd openssl x509 -req -in myCACertificate.csr -sha512 -signkey myCAPrivateKey.key -CAcreateserial -out myCACertificate.pem -days 1095
5. When prompted, enter the password for the private key
myCAPrivateKey.key.
A new file
myCACertificate.pem appears in your directory. This is the public CA certificate that you will distribute to your Splunk instances.
Create the server certificate
Now that you have created a root certificate to serve as your CA, you must create and sign your server certificate.
A note about common name checking
This topic shows you how to create a new private key and server certificate.
You can distribute this server certificate to all forwarders and indexers, as well as any Splunk instances that communicate on the management port. If you plan to use a different common name for each instance, you simply repeat the process described here to create different certificates (each with a different common name) for your Splunk instances.
For example, if configuring multiple forwarders, you can use the following example to create the certificate
myServerCertificate.pem for your indexer, then create another certificate
myForwarderCertificate.pem using the same root CA and install that certificate on your forwarder. Note that an indexer will only accept a properly generated and configured certificate from a forwarder that is signed by the same root CA.
See Configure Splunk forwarding to use your own certificates for more information about configuring your forwarders and indexers.
Generate a key for your server certificate
1. Generate a new RSA private key for your server certificate. In this example we are again using AES encryption and a 2048 bit key length:
In *nix:
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out myServerPrivateKey.key 2048
In Windows:
$SPLUNK_HOME\bin\splunk cmd openssl genrsa -aes256 -out myServerPrivateKey.key 2048
2. When prompted, create a new password for your key.
A new key
myServerPrivateKey.key is created. You will use this key to encrypt the outgoing data on any Splunk Software instance where you install it as part of the server certificate.
Generate and sign a new server certificate
1. Create a new certificate signing request (CSR) using your server private key.

2. When prompted, enter the password you created for the private key
myServerPrivateKey.key.
3. Provide the requested information for your certificate, including a Common Name if you plan to configure Splunk Software to authenticate via common-name checking.
A new CSR
myServerCertificate.csr appears in your directory.
4. Use the CSR
myServerCertificate.csr and your CA certificate and private key to generate a server certificate.
(The *nix and Windows commands are identical apart from the $SPLUNK_HOME path style, as in the earlier steps.)
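The command itself is missing from this copy of the page. Following the pattern of the CA signing step in "Generate and sign the certificate" — but signing with your CA certificate and key rather than self-signing — it is likely along these lines; treat this as a sketch and verify it against the documentation for your Splunk version:

```
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in myServerCertificate.csr -sha512 \
    -CA myCACertificate.pem -CAkey myCAPrivateKey.key -CAcreateserial \
    -out myServerCertificate.pem -days 1095
```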
5. When prompted, provide the password for the certificate authority private key
myCAPrivateKey.key. Make sure to sign this with your private key and not the server key you just created.
A new public server certificate
myServerCertificate.pem appears in your directory., prepare your server certificate (including appending any intermediate certificates), and then configure Splunk to find and use them:
- Splunk to Splunk! | https://docs.splunk.com/Documentation/Splunk/8.0.2/Security/Howtoself-signcertificates | 2020-10-19T21:06:22 | CC-MAIN-2020-45 | 1603107866404.1 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
public static interface ResponseCookie.ResponseCookieBuilder
ResponseCookie.ResponseCookieBuilder maxAge(Duration maxAge)
A positive value indicates when the cookie should expire relative to the current time. A value of 0 means the cookie should expire immediately. A negative value results in no "Max-Age" attribute in which case the cookie is removed when the browser is closed.
ResponseCookie.ResponseCookieBuilder maxAge(long maxAgeSeconds)
Variant of maxAge(Duration) accepting a value in seconds.
ResponseCookie.ResponseCookieBuilder path(String path)
ResponseCookie.ResponseCookieBuilder domain(String domain)
ResponseCookie.ResponseCookieBuilder secure(boolean secure)
ResponseCookie.ResponseCookieBuilder httpOnly(boolean httpOnly)
ResponseCookie.ResponseCookieBuilder sameSite(@Nullable String sameSite)
This limits the scope of the cookie such that it will only be attached to same-site requests if "Strict", or also to cross-site requests if "Lax".
ResponseCookie build() | https://docs.spring.io/spring-framework/docs/5.2.7.RELEASE/javadoc-api/org/springframework/http/ResponseCookie.ResponseCookieBuilder.html | 2020-10-19T22:14:41 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.spring.io |
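For orientation, a typical use of this builder (obtained via ResponseCookie.from) looks like the following sketch; the cookie name, value, and attribute choices are only illustrative:

```java
import java.time.Duration;
import org.springframework.http.ResponseCookie;

ResponseCookie sessionCookie = ResponseCookie.from("SESSION", "abc123")
        .maxAge(Duration.ofHours(1))   // expires one hour from now
        .path("/")
        .domain("example.org")
        .secure(true)                  // only send over HTTPS
        .httpOnly(true)                // not visible to JavaScript
        .sameSite("Lax")
        .build();
```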
This article describes how to implement the basic features of PK Host.
Flowchart
Refer to the flowcharts for the following functions:
- Joins and leaves the room.
- Co-host across channels.
Integrate the SDK
Refer to the following table to integrate the SDKs into your project:
Core API call sequence
The following diagrams show the core APIs that the Agora Live Demo app uses to implement a PK Host scenario. Refer to them to implement the various functions in your project.
- The host joins the room and sends a PK host invitation.
- The audience chat through text messages and switch channels..
Mute the local audio and video
Call
muteLocalAudioStream or
muteLocalVideoStream to stop sending the local audio or video stream.
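On Android, for example, the calls look roughly like the sketch below; mRtcEngine stands for the RtcEngine instance your app created during the SDK integration steps above:

```java
// Stop publishing the local audio stream (pass false to resume)
mRtcEngine.muteLocalAudioStream(true);

// Stop publishing the local video stream (pass false to resume)
mRtcEngine.muteLocalVideoStream(true);
```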
Agora provides an open-source PK Host demo project on GitHub that you can download as a source code reference.
Bitnami Moodle Deployment¶
What is Moodle ?¶
Moodle is a free and open-source learning management system written in PHP and distributed under the GNU General Public License.It is widely used at universities, schools, and corporations worldwide. It is modular and highly adaptable to any type of online learning.
Why use the Bitnami Moodle Stack?¶
The Bitnami Moodle Stack is always up-to-date and secure. The installation and configuration of the stack are automated completely, making it easy for everyone, including those who are not very technical, to get them up and running.
Where To Find Moodle Console & MySQL credentials?¶
The instance/node/machine root credentials would be sent to your registered email address as soon as you launch the Moodle one-click deployment.
How To Access The Moodle Console?¶
- Access the Moodle console by browsing to your server's IP address or domain name in a web browser.
- To log in to your site, use the username and password that you retrieved from the standalone file /home/bitnami/bitnami_credentials.
How to open MySQL 3306 port using UFW Firewall?¶
By default, the MySQL port (3306) on the Bitnami Moodle stack is not open to remote hosts.
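Assuming UFW is the active firewall on the instance, opening the port could look like the sketch below. Note that MySQL/MariaDB may additionally need to be configured to listen on a non-local interface before remote clients can connect.

```bash
# Allow incoming MySQL connections on TCP port 3306
sudo ufw allow 3306/tcp

# Reload the firewall and confirm the rule is active
sudo ufw reload
sudo ufw status
```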
Moodle is the world’s open source learning platform that allows educators to create a private space online and easily build courses and activities with flexible software tools for collaborative online learning. You can refer to the Bitnami documentation for full details.
Australian Quality Framework Review submissions
In December 2018, the Chair of the Australian Qualifications Framework (AQF) Review, Professor Peter Noonan, released a discussion paper calling for public submissions and consultation to the Review.
The submission process was closed on 15 March 2019. A total of 134 written submissions were received and considered by the Expert Panel.
Note: Not all submissions have been published, including three stakeholders who requested their submissions remain confidential.
For further information about the review, please visit the Australian Qualifications Framework Review page.
Submission List
Before upgrading, please ensure the machine hosting Loome Publish is running version 4.6.2 or above of the .NET Framework.
You can find steps to check the version number here -
You can find dot net framework downloads here -
Once completed, follow the steps under Running the Upgrade to install your upgrade. | https://docs.loomesoftware.com/publish/upgrades/adfs-on-premise-upgrade-guide/version-4.5/ | 2020-10-19T21:32:35 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.loomesoftware.com |
MASSO Documentation
Work offsets allow the user to position the work piece and cutting tool so that cutting happens at the required position. There are different types of offsets, such as work and temporary offsets. It is very important to understand these concepts, as they will help you generate G-code from your CAD/CAM software and then position the work piece on the machine.
Toolbar Plugins
Fred supports adding functionality to individual Elements by registering new buttons in the toolbar above each Element.
Each Toolbar Plugin is defined by an init function that receives the fred instance, the ToolbarPlugin base class, and a pluginTools utility object, and returns a class that extends ToolbarPlugin (see the example below).
Example
var TestToolbarPluginInit = function(fred, ToolbarPlugin, pluginTools) {
    class TestToolbarPlugin extends ToolbarPlugin {
        static title = "Test Plugin";
        static icon = "fred--element-settings";

        onClick() {
            console.log("Test Plugin icon pressed from the toolbar");
        }
    }

    return TestToolbarPlugin;
};
This will create an additional toolbar icon at the end of the toolbar with the same icon as the Settings icon, a gear.
Icons
Toolbar icons are button elements with specific classes. For example, the delete button from the toolbar is marked up as follows:
<button class="fred--trash" role="button" title="Delete"></button>
The CSS class
fred--trash determines the appearance of the button. To style a new toolbar icon, you need to target the pseudo-element
::before in your plugin’s CSS with inline SVG code for a background image, and a background color. You can optionally have a different background color when hovered:
.fred--my_plugin_button::before {
    background-repeat: no-repeat;
    background-position: center center;
    background-image: url("data:image/svg+xml, %3Csvg xmlns='' viewBox='0 0 512 512' fill='%23fff'%3E%3Cpath d='M512 144v288c0 26.5-21.5 48-48 48H48c-26.5 0-48-21.5-48-48V144c0-26.5 21.5-48 48-48h88l12.3-32.9c7-18.7 24.9-31.1 44.9-31.1h125.5c20 0 37.9 12.4 44.9 31.1L376 96h88c26.5 0 48 21.5 48 48zM376 288c0-66.2-53.8-120-120-120s-120 53.8-120 120 53.8 120 120 120 120-53.8 120-120zm-32 0c0 48.5-39.5 88-88 88s-88-39.5-88-88 39.5-88 88-88 88 39.5 88 88z'/%3E%3C/svg%3E");
    background-color: #e46363;
}

.fred--my_plugin_button:hover::before {
    background-color: #061323;
}
Toolbar Plugin Order
The buttons registered to the toolbars are always added after the built-in default buttons. If there are multiple Toolbar Plugins registered, they will render in the order of the MODX Plugin’s rank in the MODX Manager.
Limiting Plugins for Elements
By default, all Toolbar Plugins will register for every Element. To specify the order and/or omit some plugins, modify an Element’s Option Set setting to either include or exclude specific Fred Plugins with a
toolbarPluginsInclude or
toolbarPluginsExclude attribute.
Note: The plugins are unique names of the class created for the plugins. As a general rule, this should match the plugin name used for the MODX Package Provider.
If a
toolbarPluginsInclude attribute is included, it will ignore any
toolbarPluginsExclude lines. To include only specific Plugins for an Element, use a
toolbarPluginsInclude Options setting:
{ "toolbarPluginsInclude": ["gallery","mapmarker"], "settings": [ { … } ] }
To exclude one or more specific Plugins on an Element, use a
toolbarPluginsExclude option:
{ "toolbarPluginsExclude": ["fredfontawesome5iconeditor"], "settings": [ { … } ] }
To prevent all Plugins from registering on an Element completely, specify an empty array for a
toolbarPluginsInclude option:
{ "toolbarPluginsInclude": [], "settings": [ { … } ] }
Register your Plugin
When you have the
init function returning your plugin's class, you need to register it for Fred by creating a MODX Plugin on the FredBeforeRender event.
Include the JS file containing the init function using includes, and register the Plugin using
beforeRender.
To register the toolbar Plugin, you call the
registerToolbarPlugin function from Fred with two arguments:
name – a unique name for your plugin. Fred cannot register multiple Plugins with the same name.
init function – the TestToolbarPluginInit function we created in the Init function step, above.
Example
$includes = '
<script type="text/javascript" src="/path/to/plugin/file.js"></script>
<link rel="stylesheet" href="/path/to/stylsheet/style.css" />
';

$beforeRender = '
    this.registerToolbarPlugin("TestToolbarPlugin", TestToolbarPluginInit);
';

$modx->event->_output = [
    'includes' => $includes,
    'beforeRender' => $beforeRender
];
The Plugin class
The sample Class in the
Init function step above can do much more than just logging to the console via
console.log. In fact, much of Fred's functionality is already coded as Plugins. To review the current Toolbar Plugins for a sense of how to create your own, review the source code on Github.
Custom Data
Your Plugin can save and load custom data when the page is saved. Be aware, though, that custom data is only saved when a user saves the entire page.
Element Data
Toolbar Plugins typically should affect the Elements on which they act. This data is attached to the Fred Element where the toolbar action occurred. To save data, use
this.el.setPluginValue('Namespace', 'VariableName', 'Data'). To load data, use
this.el.getPluginValue('Namespace', 'VariableName'), which takes two arguments:
namespace- the same namespace used when calling
setPluginValue
name- the same name used when calling
setPluginValue
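For illustration, here is a minimal sketch of saving and loading Element data from inside a Toolbar Plugin's onClick handler. It assumes setPluginValue takes the namespace, name, and value, mirroring the setPluginsData signature shown in the Global Data section below.

```js
class TestToolbarPlugin extends ToolbarPlugin {
    static title = "Test Plugin";
    static icon = "fred--element-settings";

    onClick() {
        // Read a previously saved value for this Element (falls back to 0 on first use)
        const clicks = this.el.getPluginValue('TestToolbarPlugin', 'clicks') || 0;

        // Save the updated value; it is persisted when the user saves the page
        this.el.setPluginValue('TestToolbarPlugin', 'clicks', clicks + 1);
    }
}
```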
Global Data
Data associated with a Plugin can optionally be saved globally, not attached to a specific Element. To save data this way, call
pluginTools.fredConfig.setPluginsData('Namespace', 'VariableName', 'Data'). This function takes the same arguments as
this.el.setPluginValue.
To load global data, call
pluginTools.fredConfig.getPluginsData('Namespace', 'VariableName'). This function takes same arguments as
this.el.getPluginValue. | https://docs.modx.org/current/en/extras/fred/developer/toolbar_plugins | 2020-10-19T21:38:18 | CC-MAIN-2020-45 | 1603107866404.1 | [array(['/2.x/en/extras/fred/developer/../media/toolbar.png',
'Element Toolbar'], dtype=object) ] | docs.modx.org |
Configuring OpenID Connect Back-Channel Logout¶
OpenID Connect back-channel logout feature enables logging out users from a client application/Relying Party (RP) by directly communicating the logout requests between the client application and authorization server.
What is direct communication?
Direct communication enables communicating the requests between the client application and authorization server through direct network links without having to rely on a browser/agent.
This approach is more reliable as it does not require communicating the logout requests through a user agent/browser and maintain active RP browser sessions for the communication to succeed.
Message flow¶
Let's take a look at the underlying message flow of the OpenID Connect back-channel logout.
- The client application or authorization server triggers a user logout.
- The authorization server identifies all the client applications that share the same user session.
- The authorization server generates the logout token, which is a special JWT containing claims and sends it with the logout request to the logout endpoints of the client applications.
- Upon receiving the logout token, the client application validates the logout token and invalidates the user session.
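As an illustration of step 4, the sketch below shows how a Java client application might check the core claims of a logout token using the Nimbus JOSE+JWT library. The claim names follow the OIDC Back-Channel Logout specification; signature verification against the identity provider's JWKS and session termination are omitted, and the class and parameter names are only examples.

```java
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

public class LogoutTokenValidator {

    private static final String BACKCHANNEL_LOGOUT_EVENT =
            "http://schemas.openid.net/event/backchannel-logout";

    public boolean isValidLogoutToken(String logoutToken, String expectedIssuer,
                                      String clientId) throws Exception {
        // Parse the JWT (verify the signature against the IdP's JWKS in a real deployment)
        SignedJWT jwt = SignedJWT.parse(logoutToken);
        JWTClaimsSet claims = jwt.getJWTClaimsSet();

        // iss must be the authorization server, aud must include this client
        boolean issuerOk = expectedIssuer.equals(claims.getIssuer());
        boolean audienceOk = claims.getAudience().contains(clientId);

        // A sub or sid claim is required to identify the user/session to terminate
        boolean subjectOk = claims.getSubject() != null || claims.getClaim("sid") != null;

        // The events claim must contain the back-channel logout member,
        // and a logout token must not contain a nonce claim
        Object events = claims.getClaim("events");
        boolean eventsOk = events != null && events.toString().contains(BACKCHANNEL_LOGOUT_EVENT);
        boolean nonceAbsent = claims.getClaim("nonce") == null;

        return issuerOk && audienceOk && subjectOk && eventsOk && nonceAbsent;
    }
}
```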
Configuring the sample applications¶
Follow the steps here to download, deploy, and register the playground2 application.
Make a copy of playground2.war and rename it to playground3.war in the same location described in step 1.
Configuring OpenID Connect back-channel logout¶
Follow the steps below to configure OpenID Connect back-channel logout in WSO2 Identity Server:
Tip
In the previous section, you registered the playground2 web app. If you have completed that, you can skip step 2 below.
To register a web application as a service provider:
- On the Main menu, click Identity > Service Providers > Add.
- Enter playground2 in the Service Provider Name text box.
- Click Register. Note that you will be redirected to the Service Providers screen.
- Under Inbound Authentication Configuration, click OAuth/OpenID Connect Configuration > Configure.
Enter the configurations as follows:
Click Add. Note that a client ID and client secret have been created.
You have successfully added the playground2 service provider. Similarly, register another service provider performing all the sub-steps in above Step2 with the following data:
- Service Provider Name :
playground3
- Callback URL :
- Logout URL :
To view the identity provider's logout endpoint URL, which gets called when the logout is triggered from the service provider:
- On the Main menu, click Identity > Identity Providers > Resident.
- Under Inbound Authentication Configuration, click OAuth2/OpenID Connect Configuration.
Note that the identity provider's logout endpoint URL is listed out.
Testing OpenID Connect back-channel logout with the sample applications¶
Follow the steps below to test OpenID Connect back-channel logout with the newly registered service provider:
To sign in to the playground2 web application:
Navigate to http://<TOMCAT_HOST>:<TOMCAT_PORT>/playground2 in your browser.
Note
Even though localhost is used in this documentation, it is recommended to use a hostname that is not localhost to avoid browser errors. To achieve this, modify the /etc/hosts entry on your computer.
The following screen appears.
Click Import Photos.
Enter the required details as follows:
Click Authorize. The login page of the identity provider appears.
- Sign in as an admin user. The user consent screen appears.
- Select the necessary attributes and click Continue. The playground2 home screen appears with the Logged in user set to
admin.
Similarly, sign in to the playground3 application by navigating to http://<TOMCAT_HOST>:<TOMCAT_PORT>/playground3 in a separate browser tab.
- Click Logout. A confirmation message appears.
- Click Yes. A success message appears.
- Go back to the playground2 application and refresh. Note that the Logged in user has changed from admin to null, indicating that you are also logged out from the playground2 application.
Product Overview
Welcome to Aspose.BarCode for Java. Aspose.BarCode for Java supports the most established barcode standards and barcode specifications. It has the ability to export to multiple image formats including: BMP, GIF, JPEG, PNG and TIFF.
This section introduces Aspose.BarCode for Java and its features, gives examples as case studies and lists some customers who choose to use Aspose.BarCode in their solutions. This section also includes information about Aspose.BarCode for Java installation, evaluation and licensing.
Product Description
Aspose.BarCode for Java provides fully featured demos and working examples written in Java for developers to have a better understanding of our product. Using these demos, developers can quickly learn about the features provided by Aspose.BarCode.
There is no printer limitation for the barcodes generated with Aspose.BarCode. Developers can use any kind of printer to print barcodes but, naturally, the quality of the printed barcode images will be affected by printers with low resolution.
Input Image Formats
- JPEG
- TIFF
- PNG
- BMP
- GIF
- EXIF
Output Image Formats
- JPEG
- TIFF
- PNG
- BMP
- GIF
- EXIF
- EMF
- SVG
- Swiss QR (QR Bill)
Aspose.BarCode for Java provides encoding and decoding features for all above-mentioned symbologies, with exception of Australia Post and Aztec. At the moment we only support encoding for these two symbols. | https://docs.aspose.com/barcode/java/product-overview/ | 2020-10-19T21:08:09 | CC-MAIN-2020-45 | 1603107866404.1 | [array(['product-overview_1.png', 'todo:image_alt_text'], dtype=object)] | docs.aspose.com |
Introduction¶
E2E CDN is a fast content delivery network (CDN) service that securely delivers data, and static contents to customers globally with low latency, high transfer speeds.
The idea of a CDN is essentially to have multiple edges in different locations of the world which caches all the static content so that it can be delivered from the customers closest location. We intend to integrate CDN such that people hosting applications through us can use CDNs to improve their static asset delivery.
With E2E’s CDN, you can take the initial steps to speed up the delivery of your static content (for example: Images, JavaScripts, Style Sheets and many others). With E2E CDN, you can guarantee your viewers a fast, reliable and safe experience across the globe. To create CDN with E2E, please follow the following steps:
How to Create a CDN¶
Logging into E2E Networks MyAccount¶
Please go to My Account and log in using your credentials set up at the time of creating and activating the E2E Networks My Account.
Creation of CDN¶
- Click on the “Create CDN” button. You will be redirected to the “Setup CDN” page.
- Here, you need to specify the ‘Origin Domain Name’, ‘Origin Path’, and ‘Origin ID’.
- Origin Domain Name: Enter the “Origin Domain Name”. For example, xyz.abc.com / abc.com.
It is the DNS domain name of HTTP Server or E2E Object storage bucket from which your static content gets delivered. For example, xyz.abc.com / abc.com. The files in your origin must be publicly readable. E2E CDN will contact this origin for the content requested by the user if not found on CDN edge server cache.
- Origin Path: Enter the “Origin Path” beginning with /. For example, xyz.com/static. Do not include / at the end of the directory name.
It is the path of the folder or directory from where the content should be delivered. CDN appends the directory name to the value of Origin Domain Name when forwarding the request to the origin. If the content from this directory fails which, CDN will give a “Not Found” response to the user.
- Origin ID: Enter a description for the “Origin Path”. This value lets you distinguish multiple origins in the same distribution from one another. The description for each origin must be unique within the distribution.
- After specifying the above value, click on the ‘Create distribution’ button. You will automatically redirect to the “CDN Services” page and a confirmation message will appear on screen.
Note
After creation of CDN distribution, it will take upto 30 minutes to deploy the configuration changes on the CDN Edge servers.
- When State is ‘InProgress’ and status is ‘Disabled’.
This signifies that the configuration changes deploy is In-progress on the CDN Edge servers and the CDN is inactive, the requests will not be served through the CDN Edge network.
- When State is ‘Deployed’ and status is ‘Enabled’.
This signifies that the configuration changes deployed on the CDN Edge servers and the CDN is active, the requests will be served through the CDN Edge network.
Manage/Update a CDN settings¶
There are basically 4 types of settings that user can change for any CDN:
Basic Settings¶
The ‘Origin Path’, and ‘Origin ID’ details specified at the time of CDN creation.
- To update Origin path, select the CDN and click on the “edit” button beside the Origin Path field under the Basic Settings tab.
- Update the value with the desired one. Click on the “Tick” icon to confirm the modification.
Origin Settings¶
Origin settings defines how CDN is going to communicate with the Origin server for any content or resource.
- SSL Protocol: Defines the minimum protocol when CDN communicates with Origin. To update, select desired value from the drop down besides “SSL Protocol”. The allowed protocol versions are “TLSv1”, “TLSv1.1” or “TLSv1.2”.
- Protocol Policy: Defines whether CDN will communicate with the origin using HTTP only, HTTPS only or to forward the protocol used by the end user. To update, select desired value from the drop down besides “Protocol Policy”.
- Response Timeout: Defines the amount of time that CDN waits for a response from Origin Server. The valid values are from 4 to 60 seconds. To update, click on the “edit” button corresponding to Response Timeout.
- KeepAlive Timeout: Defines the amount of time that CDN maintains an idle connection with the Origin Server before closing the connection. The valid values are from 1 to 60 seconds. To update, click on the “Tick” icon to confirm the modification
Cache Settings¶
Cache settings define how the content will be delivered from the CDN Edge Servers to the user.
- Viewer Protocol Policy: Defines whether users will be able to access the content with HTTP or HTTPS protocol. To update, select desired value from the drop down corresponding to “Viewer Protocol Policy”. The allowed protocol versions are “HTTPS only”, “HTTP only” or “Redirect HTTP to HTTPS”.
- Allowed HTTP Methods: Defines the list of HTTP methods that you want to allow to be cached. To update, select desired value from the drop down corresponding to “Allowed HTTP Methods”.
- Minimum TTL: Defines the minimum time for which content will remain cached in edge server caches before CDN will request the updated content again from the origin server. To update, click on the “Tick” icon to confirm the modification.
- Maximum TTL: Defines the maximum time for which content will remain cached in edge server caches before CDN will request the updated content again from the origin server. To update, click on the “Tick” icon to confirm the modification.
- Default TTL: Defines the default time for which content will remain cached in edge server caches before CDN will request the updated content again from the origin server. To update, click on the “Tick” icon to confirm the modification.
Domain Settings¶
Domain settings define how the response will be served from the domain for which CDN was enabled.
- Supported HTTP Versions: Defines whether CDN should accept the HTTP 2.0 requests from the users with HTTP 1.0. To update, select desired value from the drop down corresponding to Supported HTTP Versions.
- Root Object: Defines the resource that CDN will return when the user requests your root URL. To update, click on the “Tick” icon to confirm the modification.
CDN Usage¶
The ‘Usage Details’ report provides information about your usage of E2E CDN. This report contains usage for the last 5 days.
Enable/Disable the CDN¶
- Click on the “Action” button for the CDN that you want to ‘Enable’/ ‘Disable’.
- The state of the CDN will change to “InProgress” with the desired status.
Note
It takes approximately 30 minutes to deploy the configuration changes to all the CDN Edge servers. Once the changes are completed, the state of CDN changes to “Deployed”.
Deleting a CDN¶
- Click on the “Action” button for the CDN that you want to delete and select “Delete icon” from the drop down.
Note
To delete the CDN, your CDN should be in “Deployed” state with “Disabled” status. Else, you will not be able to delete the CDN on any other state and status.
- Click on the “Delete” button in the confirmation popup.
| https://docs.e2enetworks.com/cdn/cdn.html | 2020-10-19T21:27:09 | CC-MAIN-2020-45 | 1603107866404.1 | [array(['../_images/2.png', '../_images/2.png'], dtype=object)
array(['../_images/3.png', '../_images/3.png'], dtype=object)
array(['../_images/4.png', '../_images/4.png'], dtype=object)
array(['../_images/5.png', '../_images/5.png'], dtype=object)
array(['../_images/6.png', '../_images/6.png'], dtype=object)
array(['../_images/7.png', '../_images/7.png'], dtype=object)
array(['../_images/8.png', '../_images/8.png'], dtype=object)
array(['../_images/9.png', '../_images/9.png'], dtype=object)
array(['../_images/10.png', '../_images/10.png'], dtype=object)
array(['../_images/11.png', '../_images/11.png'], dtype=object)
array(['../_images/12.png', '../_images/12.png'], dtype=object)
array(['../_images/13.png', '../_images/13.png'], dtype=object)
array(['../_images/14.png', '../_images/14.png'], dtype=object)
array(['../_images/15.png', '../_images/15.png'], dtype=object)] | docs.e2enetworks.com |
Getting started
From Technologic Systems Manuals
A Linux PC is recommended for development, and will be assumed for this documentation. For users in Windows or OSX we recommend virtualizing a Linux PC. Most of our platforms run Debian and if you have no other distribution preference this is what we recommend.
Virtualization
Suggested Linux Distributions
It may be possible to develop using a Windows or OSX system, but this is not supported. Development will include accessing drives formatted for Linux and often Linux based tools. | https://docs.embeddedarm.com/index.php?title=Getting_started&diff=8953&oldid=8075&printable=yes | 2020-10-19T23:02:16 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.embeddedarm.com |
Paris release notes. ServiceNow organizes its releases into families. A family is a set of releases that are named after a major city, such as Paris. The Paris release notes cover the release lifecycle and Paris patch-to-patch upgrades, and you can use Automated Test Framework quick start tests to validate the product after upgrading.
Documentation for txtAlert API¶
Please Call Me API documentation¶
A "Please Call Me" is a text message that is sent from one subscriber to another, asking the latter to call the sender of the message back.
PCMs (Please Call Me’s) are usually sent by dialing a specific USSD code and arrive at the receivers handset as a sponsored SMS message informing the receiver of the request.
In South Africa all major telecom operators provide a number of free PCMs, to all subscribers. This allows for those who cannot afford credit on their phones to still communicate when needed.
In txtAlert, PCMs are used as a low-cost means for the patient and the clinic to get in touch. When a patient is unable to attend an appointment, he or she can send a PCM to txtAlert. This PCM is registered by txtAlert and the patient is identified by the MSISDN. The clinic is notified of the PCM and will call the patient back to schedule a new appointment.
Registering PCMs¶
- URI
- /api/v1/pcm.json
- HTTP Method
Example¶
An example of HTTP POSTing with parameters with cURL:
$ curl --user 'user:password' \
>      --data 'sender_msisdn=271234567890&sms_id=abfegvcd&recipient_msisdn=271234567810' \
>      <txtAlert PCM endpoint URL, i.e. .../api/v1/pcm.json>
Please Call Me registered
FrontlineSMS supports this out of the box. Check the FrontlineSMS documentation for how to do this.
An example of HTTP POSTing JSON from Vumi’s APIs with cURL:
$ curl --user 'user:password' \
>      --data '{"from_addr": "271234567890", "to_addr": "271234567890", "content":"message", "message_id": "abfegvcd"}' \
>      <txtAlert PCM endpoint URL>
Both API calls:
- return an
HTTP 201 Createdheader if successful.
- return an
HTTP 409 Conflictif an exact same SMS was received in the last 2 hours.
- return an
HTTP 400 Bad Requestif not all parameters are present.
Sending SMSs¶
SMSs are sent via Vumi Go’s HTTP API. txtAlert should have an account configured and appropriate token set to allow for outbound messaging.
Receiving Delivery Reports¶
The path for receiving network acknowledgements & delivery reports is:
/api/v1/events.json
This URL endpoint expects a Vumi event message. Event messages of type
ack
and
delivery_report are supported. These update the original outbound
message with the appropriate txtAlert status matching pending, failed
or delivered. | https://txtalert.readthedocs.io/en/latest/api.html | 2020-10-19T21:07:00 | CC-MAIN-2020-45 | 1603107866404.1 | [] | txtalert.readthedocs.io |
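For illustration, a delivery-report event POSTed to this endpoint might look like the hedged sketch below. The field names are those used by Vumi transport event messages, and the URL host is a placeholder for your txtAlert installation.

```
$ curl --user 'user:password' \
>      --data '{"event_type": "delivery_report", "user_message_id": "abfegvcd", "delivery_status": "delivered"}' \
>      https://<your txtAlert host>/api/v1/events.json
```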
GVP Minutes
This page provides information about how the GVP Minutes metric is defined and calculated, and describes the key values used to designate these metrics in the billing files.
GVP Minutes is a usage metric.
Metric specification
Configuration
Define the following:
- Regions and locations — Within each tenant configuration, define a set of regions and locations in which the tenant is enabled.
- In the globals section, define credentials at the top level in the following sections:
- cme_credentials: the credentials to access all the Configuration Servers.
- gimdb: the default credentials to access the GIM DBs.
- gvpdb: the default credentials to access the GVP Reporting Server DBs.
- Each location within a tenant contains a list of switch names valid for that location, within the switches parameter.
- In the GVP section:
- For each location, define the shared GVP deployment details within the gvp section:
- Configuration Server coordinates
- GVP Reporting Server database coordinates
- Credentials (inherited from the gvpdb section under the globals section)
- Globally defined GVP Reporting Server database parameters for each location and default credentials are propagated down to the tenant level, and can be overridden at the tenant level.
- For each tenant:
- A "shared_gvp" flag that defaults to true;
- An optional parameter "shared_gvp_alias", which you can use specify a name of the tenant as defined in Shared GVP CME, if required.
- Within each location, a list of billable IVR Profiles DBIDs can be defined (ivr_usage_profiles: ['000'] - default value meaning all IVR Profiles are billable).
- You must configure the GVP Reporting Server to retain CDRs; for example, in the Reporting Server configuration:
- [dbmp]
- …
- rs.db.retention.cdr.default=30
Metric datasets
- gvp_cdrs_db — statement executed for the GVP RS DBs in each location that tenant has enabled to extract day worth of RM CDRs.
Resulting data file details
- Each record in the resulting file represents a call undergoing a GVP treatment, with the duration of the treatment presented in milliseconds.
- File content complies with the IT Integration Specification.
- A separate file is produced for each region in which the tenant is deployed.
This page was last modified on April 12, 2019, at 13:16.
Data structure requirements for visualizations
In this topic we cover the data structure requirements of the different types of visualizations offered for our reports and dashboards. If you're trying to generate a visualization and are wondering why certain visualizations are unavailable, this topic should help you understand the underlying data requirements. For more information about building searches with reporting commands, see "Use reporting commands."
- For more information about building reports with the Report Builder, see ""Define reports and generate charts" in this manual.
- For more information about using the Visualization Editor to design visualizations for dashboard panels, see "Define and edit dashboard panels."
Column, line, and area charts
It's important to understand that column, line, and area charts are built from search results structured as a table, where the first column provides the x-axis values and the remaining columns provide the series plotted on the y-axis.
Note: To create a scatter plot chart with a search like this, you need to enter the reporting commands directly into the Report Builder by clicking Define report data using search language in the Report Builder. You can run this report from the search bar, but when you open up Report Builder, it adds a timechart command that you should remove before formatting the report.
More complex scatter charts can be set up in dashboards using Splunk's view XML. For more information see the Custom charting configuration reference chapter in the Developer manual.
Gauges and single value visualizations
Gauges and single value visualizations are designed to display a single aggregated value returned by a reporting search.
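For example, a search that ends with a single aggregation produces the kind of result these visualizations expect (a minimal sketch):

```
index=_internal | stats count AS events
```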
This documentation applies to the following versions of Splunk® Enterprise: 4.3, 4.3.1, 4.3.2, 4.3.3, 4.3.4, 4.3.5, 4.3.6, 4.3.7
@FunctionalInterface public interface Function<T> extends Identifiable<java.lang.String>
Functions can be of different types. Some can have results while others need not return any result. Some functions require writing in the targeted
Region while some may just be read operations. Consider extending
FunctionAdapter, which provides default implementations of these methods.
default boolean hasResult()
If
hasResult() returns false,
ResultCollector.getResult() throws
FunctionException.
If
hasResult() returns true,
ResultCollector.getResult() blocks and waits for the result of function execution.
void execute(FunctionContext<T> context)
The method that is invoked when the function is executed via an Execution. The context provided to this function is the one which was built using Execution. The contexts can be data-dependent or data-independent, so the user should check whether the context provided is an instance of
RegionFunctionContext.
context- as created by
Execution
default java.lang.String getId()
FunctionService
getIdin interface
Identifiable<java.lang.String>
default boolean optimizeForWrite()
Return true to indicate to GemFire the method requires optimization for writing the targeted
FunctionService.onRegion(org.apache.geode.cache.Region) and any associated
routing objects.
Returning false will optimize for read behavior on the targeted
FunctionService.onRegion(org.apache.geode.cache.Region) and any associated
routing objects.
This method is only consulted when Region passed to FunctionService#onRegion(org.apache.geode.cache.Region) is a partitioned region
FunctionService
default boolean isHA()
FunctionContext.isPossibleDuplicate()
default java.util.Collection<ResourcePermission> getRequiredPermissions(java.lang.String regionName)
By default, functions require DATA:WRITE permission. If your function requires other permissions, you will need to override this method.
Please be as specific as possible when you set the required permissions for your function, e.g. if your function reads from a region, it would be good to include the region name in your permission. It's better to return "DATA:READ:regionName" as the required permission rather than "DATA:READ", because the latter means only users with read permission on ALL regions can execute your function.
All the permissions returned from this method will be ANDed together.
regionName- the region this function will be executed on. The regionName is optional and will only be present when the function is executed by an onRegion() executor. In other cases, it will be null. This method returns permissions appropriate to the context, independent of the presence of the regionName parameter.
ResourcePermissions indicating the permissions required to execute the function. | http://gemfire-95-javadocs.docs.pivotal.io/org/apache/geode/cache/execute/Function.html | 2019-07-15T22:15:54 | CC-MAIN-2019-30 | 1563195524254.28 | [] | gemfire-95-javadocs.docs.pivotal.io |
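To make the contract above concrete, here is a sketch of a simple read-only function. The class name is illustrative, the result returned is just the region size, and the permission follows the pattern described under getRequiredPermissions.

```java
import java.util.Collection;
import java.util.Collections;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;
import org.apache.geode.security.ResourcePermission;

public class RegionCountFunction implements Function<Void> {

    @Override
    public void execute(FunctionContext<Void> context) {
        // Data-dependent invocation: check for a RegionFunctionContext as the javadoc suggests
        if (context instanceof RegionFunctionContext) {
            Region<?, ?> region = ((RegionFunctionContext) context).getDataSet();
            context.getResultSender().lastResult(region.size());
        } else {
            context.getResultSender().lastResult(-1);
        }
    }

    @Override
    public boolean hasResult() {
        return true; // callers may block on ResultCollector.getResult()
    }

    @Override
    public boolean optimizeForWrite() {
        return false; // read-only behaviour on the targeted region
    }

    @Override
    public Collection<ResourcePermission> getRequiredPermissions(String regionName) {
        // Read-only function: request only DATA:READ on the target region
        return Collections.singletonList(new ResourcePermission(
                ResourcePermission.Resource.DATA,
                ResourcePermission.Operation.READ,
                regionName));
    }
}
```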
4.2. Before getting openMosix
First of all, you must understand that openMosix is made up of a kernel patch and some user-space tools. The kernel patch is needed to make the kernel capable of talking to other openMosix-enabled machines on the network. If you download openMosix as a binary package (such as an rpm file), you don't even need to take care about the kernel patch because the kernel has been patched and compiled with the most common default options and modules for you.
The user-space tools are needed in order to make an effective use of an openMosix-enabled kernel. They are needed to start/stop the migration daemon, the openMosix File System, to migrate jobs to certain nodes and other tasks which are usually accomplished with the help our good old friend: the command line interface. About binary packages: the same as in the kernel patch goes for the user-space tools: if you install an rpm you don't need to care about compiling them or configuring anything; just let them install and run. That's all. Really :)
Once you get to the download page (which we'll talk about in a second), you'll need to get two distinct parts: the kernel and the user-space tools. You can either download two binary packages or get the kernel patch plus the user-space tools' sources. The kernel patch is usually named after this scheme: openMosix-x.y.z-w where x.y.z is the version of the vanilla Linux Kernel against which the patch should be applied and w is the patch revision for that particular kernel release. For the precompiled kernel binaries, please refer to the README-openMosix-kernel.txt file you'll find in the download page. This file also contains updated info about manually compiling a kernel.
About the user-space tools: you'll find those in a package named openmosix-tools. We use the terms user-space tools, userspace-tools and openmosix-tools interchangeably. Updated info about precompiled binaries and manually compiling the tools are also provided in the README-openmosix-tools.txt file. Please note that since version 0.3 of the openmosix-tools, the openmosix.map file is deprecated and the use of the autodiscovery daemon is highly encouraged since it tends to make your life easier. | http://tldp.docs.sk/howto/openmosix/x320.html | 2019-07-15T22:57:40 | CC-MAIN-2019-30 | 1563195524254.28 | [] | tldp.docs.sk |
Constantly – Symbolic Constants in Python¶
Overview¶
It is often useful to define names which will be treated as constants. constantly provides APIs for defining such symbolic constants with minimal overhead and some useful features beyond those afforded by the common Python idioms for this task.
This document will explain how to use these APIs and what circumstances they might be helpful in.
Constant Names¶
Constants which have no value apart from their name and identity can be defined by subclassing
constantly.Names.
Consider this example, in which some HTTP request method constants are defined.
from constantly import NamedConstant, Names

class METHOD(Names):
    """
    Constants representing various HTTP request methods.
    """
    GET = NamedConstant()
    PUT = NamedConstant()
    POST = NamedConstant()
    DELETE = NamedConstant()
Only direct subclasses of
Names are supported (i.e., you cannot subclass
METHOD to add new constants to the collection).
Given this definition, constants can be looked up by name using attribute access on the
METHOD object:
>>> METHOD.GET <METHOD=GET> >>> METHOD.PUT <METHOD=PUT>
If it’s necessary to look up constants from a string (e.g. based on user input of some sort), a safe way to do it is using
lookupByName :
>>> METHOD.lookupByName('GET')
<METHOD=GET>
>>> METHOD.lookupByName('__doc__')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "constantly/_constants.py", line 145, in lookupByName
    raise ValueError(name)
ValueError: __doc__
As demonstrated, it is safe because any name not associated with a constant (even those special names initialized by Python itself) will result in
ValueError being raised, not some other object not intended to be used the way the constants are used.
The constants can also be enumerated using the
iterconstants method:
>>> list(METHOD.iterconstants()) [<METHOD=GET>, <METHOD=PUT>, <METHOD=POST>, <METHOD=DELETE>]
Constants can be compared for equality or identity:
>>> METHOD.GET is METHOD.GET True >>> METHOD.GET == METHOD.GET True >>> METHOD.GET is METHOD.PUT False >>> METHOD.GET == METHOD.PUT False
Ordered comparisons (and therefore sorting) also work. The order is defined to be the same as the instantiation order of the constants:
>>> from constantly import NamedConstant, Names
>>> class Letters(Names):
...     a = NamedConstant()
...     b = NamedConstant()
...     c = NamedConstant()
...
>>> Letters.a < Letters.b < Letters.c
True
>>> Letters.a > Letters.b
False
>>> sorted([Letters.b, Letters.a, Letters.c])
[<Letters=a>, <Letters=b>, <Letters=c>]
A subclass of
Names may define class methods to implement custom functionality.
Consider this definition of
METHOD :
from constantly import NamedConstant, Names

class METHOD(Names):
    """
    Constants representing various HTTP request methods.
    """
    GET = NamedConstant()
    PUT = NamedConstant()
    POST = NamedConstant()
    DELETE = NamedConstant()

    @classmethod
    def isIdempotent(cls, method):
        """
        Return True if the given method is side-effect free, False otherwise.
        """
        return method is cls.GET
This functionality can be used as any class methods are used:
>>> METHOD.isIdempotent(METHOD.GET) True >>> METHOD.isIdempotent(METHOD.POST) False
Constants With Values¶
Constants with a particular associated value are supported by the
constantly.Values base class.
Consider this example, in which some HTTP status code constants are defined.
from constantly import ValueConstant, Values

class STATUS(Values):
    """
    Constants representing various HTTP status codes.
    """
    OK = ValueConstant("200")
    FOUND = ValueConstant("302")
    NOT_FOUND = ValueConstant("404")
As with
Names , constants are accessed as attributes of the class object:
>>> STATUS.OK <STATUS=OK> >>> STATUS.FOUND <STATUS=FOUND>
Additionally, the values of the constants can be accessed using the
value attribute of one of these objects:
>>> STATUS.OK.value '200'
As with
Names , constants can be looked up by name:
>>> STATUS.lookupByName('NOT_FOUND') <STATUS=NOT_FOUND>
Constants on a
Values subclass can also be looked up by value:
>>> STATUS.lookupByValue('404')
<STATUS=NOT_FOUND>
>>> STATUS.lookupByValue('500')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "constantly/_constants.py", line 244, in lookupByValue
    raise ValueError(value)
ValueError: 500
Multiple constants may have the same value.
If they do,
lookupByValue will find the one which is defined first.
Iteration is also supported:
>>> list(STATUS.iterconstants()) [<STATUS=OK>, <STATUS=FOUND>, <STATUS=NOT_FOUND>]
Constants can be compared for equality, identity and ordering:
>>> STATUS.OK == STATUS.OK True >>> STATUS.OK is STATUS.OK True >>> STATUS.OK is STATUS.NOT_FOUND False >>> STATUS.OK == STATUS.NOT_FOUND False >>> STATUS.NOT_FOUND > STATUS.OK True >>> STATUS.FOUND < STATUS.OK False
Note that like
Names ,
Values are ordered by instantiation order, not by value, though either order is the same in the above example.
As with
Names , a subclass of
Values can define custom methods:
from constantly import ValueConstant, Values

class STATUS(Values):
    """
    Constants representing various HTTP status codes.
    """
    OK = ValueConstant("200")
    NO_CONTENT = ValueConstant("204")
    NOT_MODIFIED = ValueConstant("304")
    NOT_FOUND = ValueConstant("404")

    @classmethod
    def hasBody(cls, status):
        """
        Return True if the given status is associated with a response body,
        False otherwise.
        """
        return status not in (cls.NO_CONTENT, cls.NOT_MODIFIED)
This functionality can be used as any class methods are used:
>>> STATUS.hasBody(STATUS.OK) True >>> STATUS.hasBody(STATUS.NO_CONTENT) False
Constants As Flags¶
Integers are often used as a simple set for constants.
The values for these constants are assigned as powers of two so that bits in the integer can be set to represent them.
Individual bits are often called flags .
constantly.Flags supports this use-case, including allowing constants with particular bits to be set, for interoperability with other tools.
POSIX filesystem access control is traditionally done using a bitvector defining which users and groups may perform which operations on a file.
This state might be represented using
Flags as follows:
from constantly import FlagConstant, Flags

class Permission(Flags):
    """
    Constants representing user, group, and other access bits for reading,
    writing, and execution.
    """
    OTHER_EXECUTE = FlagConstant()
    OTHER_WRITE = FlagConstant()
    OTHER_READ = FlagConstant()
    GROUP_EXECUTE = FlagConstant()
    GROUP_WRITE = FlagConstant()
    GROUP_READ = FlagConstant()
    USER_EXECUTE = FlagConstant()
    USER_WRITE = FlagConstant()
    USER_READ = FlagConstant()
As for the previous types of constants, these can be accessed as attributes of the class object:
>>> Permission.USER_READ <Permission=USER_READ> >>> Permission.USER_WRITE <Permission=USER_WRITE> >>> Permission.USER_EXECUTE <Permission=USER_EXECUTE>
These constant objects also have a
value attribute giving their integer value:
>>> Permission.USER_READ.value 256
These constants can be looked up by name or value:
>>> Permission.lookupByName('USER_READ') is Permission.USER_READ True >>> Permission.lookupByValue(256) is Permission.USER_READ True
Constants can also be combined using the logical operators
& (and ),
| (or ), and
^ (exclusive or ).
>>> Permission.USER_READ | Permission.USER_WRITE <Permission={USER_READ,USER_WRITE}> >>> (Permission.USER_READ | Permission.USER_WRITE) & Permission.USER_WRITE <Permission=USER_WRITE> >>> (Permission.USER_READ | Permission.USER_WRITE) ^ Permission.USER_WRITE <Permission=USER_READ>
These combined constants can be deconstructed via iteration:
>>> mode = Permission.USER_READ | Permission.USER_WRITE >>> list(mode) [<Permission=USER_READ>, <Permission=USER_WRITE>] >>> Permission.USER_READ in mode True >>> Permission.USER_EXECUTE in mode False
They can also be inspected via boolean operations:
>>> Permission.USER_READ & mode <Permission=USER_READ> >>> bool(Permission.USER_READ & mode) True >>> Permission.USER_EXECUTE & mode <Permission={}> >>> bool(Permission.USER_EXECUTE & mode) False
The unary operator
~ (not ) is also defined:
>>> ~Permission.USER_READ <Permission={GROUP_EXECUTE,GROUP_READ,GROUP_WRITE,OTHER_EXECUTE,OTHER_READ,OTHER_WRITE,USER_EXECUTE,USER_WRITE}>
Constants created using these operators also have a
value attribute.
>>> (~Permission.USER_WRITE).value 383
Note the care taken to ensure the
~ operator is applied first and the
value attribute is looked up second.
A
Flags subclass can also define methods, just as a
Names or
Values subclass may.
For example,
Permission might benefit from a method to format a flag as a string in the traditional style.
Consider this addition to that class:
from twisted.python import filepath
from constantly import FlagConstant, Flags

class Permission(Flags):
    ...

    @classmethod
    def format(cls, permissions):
        """
        Format permissions flags in the traditional 'rwxr-xr-x' style.
        """
        return filepath.Permissions(permissions.value).shorthand()
Use this like any other class method:
>>> Permission.format(Permission.USER_READ | Permission.USER_WRITE | Permission.GROUP_READ | Permission.OTHER_READ)
'rw-r--r--'
8.5.200.11
Genesys Knowledge Center CMS Release Notes
What's New
This release contains the following new features and enhancements:
- This release now enables the user to delete the previously set date in the Valid to date field by pressing on the clear button.
Formatting Dates for Gravity Forms in Microsoft Excel
Fixing date formatting in Excel
Gravity Forms fields can be formatted in many ways
If your current spreadsheet isn't using the correct format, the import may not work. Here's how to fix the formatting in Microsoft Excel.
Open your spreadsheet in Microsoft Excel
Note: in this tutorial, we're using Mac Excel 2010 in this example, so your look and feel may be different. The process should remain the same. So is the pain of using Excel!
Select the column by clicking on the header
If "Date" doesn't work, select the "Custom…" option next
This will allow us to create our own date format to match the field's date format.
Click on Custom from the "Category" list and define your own date format
To create a format that will be accepted by Gravity Forms, use the following values:
ddis the day using two digits (01-31)
mmis the month using two digits (01-12)
yyyyis the year using four digits (2016)
As a reminder, here are the date formats that are used in Gravity Forms. Enter the format you need into the "Type" field:
mm/dd/yyyy(example: 06/18/2016)
dd/mm/yyyy(example: 18/06/2016)
dd-mm-yyyy(example: 18-06-2016)
dd.mm.yyyy(example: 18.06.2016)
yyyy/mm/dd(example: 2016/06/18)
yyyy-mm-dd(example: 2016-06-18)
yyyy.mm.dd(example: 2016-06-18)
Click OK to save the format, and you're done!
🎉 Now that date is formatted the way you need it!
I don't know if dates ever look great (even 08/08/2008, which looked pretty wonderful), but your dates are looking a lot better now than they did a few minutes ago.
As always, if you have any questions, please contact us at [email protected] and we'll be happy to help. | https://docs.gravityview.co/article/396-formatting-dates-for-gravity-forms-in-microsoft-excel | 2019-07-15T21:58:13 | CC-MAIN-2019-30 | 1563195524254.28 | [array(['https://gravityview.co/wp-content/uploads/2018/01/gravity-forms-fields-can-be-formatted-in-many-ways.png?1480725505',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/open-your-spreadsheet-in-microsoft-excel.png?1480725506',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/select-the-column-by-clicking-on-the-header.png?1480725506',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/if-date-doesn-t-work-select-the-custom-hellip-option-next.png?1480725507',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/click-on-custom-from-the-category-list-and-define-your-own-date-format.png?1480725508',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/55356-57225-now-that-date-is-formatted-the-way-you-need-it-.png?1480725508',
None], dtype=object) ] | docs.gravityview.co |
Overview
With Optimail, it is easy to automatically send personalized transaction confirmation emails to customers. These transactional emails are typically generated in response to events such as purchase made, shipment sent or game level reached.
This document outlines the steps required to set up automatic transactional emails.
Setting Up Transactional Emails in Optimail
Set Up a New Optimail Account Dedicated to Transactional Emails
Contact your CSM to set up a new Optimail sub-account that will be dedicated to sending transactional emails. If you are working with multiple brands, you must set up a separate Optimail sub-account for each of your brands. For each sub-account, you will need to select the domain name, default From Name and default From Email addresses.
For each account, your CSM can define a set of particular personalization tags that will be available to dynamically personalize each email, using data passed via the API, per transactional send.
IP Warm-up
Optimail uses a dedicated IP address for each sub-account. Because many spam detection systems assign a reputation ranking to individual IP addresses, spam systems may treat emails coming from a new Optimail account as automatically suspect. In order to avoid the rejection of emails sent to customers, it is important to slowly “warm up” the new IP. This is done by gradually sending an increasing number of real customer emails over a period of 30 days. By the end of this period, you will be able to send customer emails freely. Contact your CSM to learn more about this process.
Setting up Templates in the Optimove UI
In the Manage Templates page, select the relevant brand from the drop-down list on top and then select the Transactional Templates tab.
Create your transactional email templates, either using the Optimail editor or by importing HTML code prepared externally.
Each template can include personalization tags that will be replaced, at the time that each email is generated, with specific values provided by your system via API.
Note that transactional email templates are maintained completely separately from regular Optimail templates in the system.
Sending an Email via the Optimove API
In order to trigger transactional emails from your system, you need to execute code that will connect to the Optimove API and initiate email sends for specific users/events.
The following API call must be made by your system in order to activate a transactional email:
Authorization-Token: <user token returned by API login call>
Accept: application/json
Content-Type: application/json
POST to
{
  "TemplateID": integer,
  "ScheduleTime": "DateTime",
  "Recipients": [{
    "Personalizations": [{
      "Tag": "string",
      "Value": "string"
    }]
  }]
}
Notes:
- Refer to the Optimove API User Guide for detailed documentation of the Optimove API.
- Your CSM will provide you with credentials for accessing the Optimove API if you do not already have them.
- The TemplateID for each template can be found at the bottom of its Manage Templates page in the Optimove UI:
- Each SendMail call can contain one or more recipients (email addresses) and associated personalization tags.
- For each recipient, you can provide a set of personalization values using Tag/Value pairs. The Tag string must be identical to the personalization tag name string defined in the system by your CSM and must begin with the prefix ‘TRANS:’ (example: [%TRANS:CART_URL%]).
- You can provide the schedule time (in UTC) for each SendMail call using the optional ScheduleTime parameter. If a schedule time is not provided, the email will be sent out immediately.
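For illustration, a populated request body might look like the following. The template ID, schedule time and tag values are hypothetical, and the fields identifying each recipient's email address are omitted here because they are not shown in the excerpt above (see the Optimove API User Guide for the complete recipient schema):

{
  "TemplateID": 14,
  "ScheduleTime": "2018-06-18 14:30:00",
  "Recipients": [{
    "Personalizations": [
      { "Tag": "TRANS:CART_URL", "Value": "https://www.example.com/cart/abc123" },
      { "Tag": "TRANS:FIRST_NAME", "Value": "Dana" }
    ]
  }]
}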
Retrieving Metrics for Past Transactional Emails
You can use the Optimove API in order to retrieve post-execution metrics per template or recipient, using the GetTemplateMetrics and GetUserMetrics API calls, respectively. Refer to the Optimove API User Guide for details. | https://docs.optimove.com/optimail-transactional-emails/ | 2019-07-15T23:09:44 | CC-MAIN-2019-30 | 1563195524254.28 | [array(['/wp-content/uploads/2017/03/c-users-hadar_s-appdata-local-microsoft-windows-i-1.png',
None], dtype=object)
array(['/wp-content/uploads/2017/03/word-image-12.png', None],
dtype=object) ] | docs.optimove.com |
See Azure Availability Sets in the High Availability section to learn how to set up Azure Availability Sets. | https://docs.softnas.com/display/SD/Planning+your+Instance%3A+Azure+Availability+Sets | 2019-07-15T22:01:56 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.softnas.com |
Example 7.35. Banking Tutorial: Example3.java
public class Example3 {
public static void main(String[] args) throws Exception {
Number[] numbers = new Number[] {wrap(3), wrap(1), wrap(4), wrap(1), wrap(5)};
new RuleRunner().runRules( new String[] { "Example3.drl" },
numbers );
}
private static Integer wrap(int i) {
return new Integer(i);
}
}
Again we insert our
Integer objects, but this time the
rule is slightly different:
Example 7.36. Banking Tutorial: Rule in Example3.drl
rule "Rule 03"
    when
        $number : Number( )
        not Number( intValue < $number.intValue )
    then
        System.out.println("Number found with value: " + $number.intValue() );
        retract( $number );
end
The first line of the rule identifies a
Number and
extracts the value. The second line ensures that there does not exist
a smaller number than the one found by the first pattern. We might
expect to match only one number - the smallest in the set. However,
the retraction of the number after it has been printed means that the
smallest number has been removed, revealing the next smallest number,
and so on.
The resulting output shows that the numbers are now sorted numerically.
Example 7.37. Banking Tutorial: Output of Example3.java
Loading file: Example3.drl
Inserting fact: 3
Inserting fact: 1
Inserting fact: 4
Inserting fact: 1
Inserting fact: 5
Number found with value: 1
Number found with value: 1
Number found with value: 3
Number found with value: 4
Number found with value: 5
We are ready to start moving towards our personal accounting
rules. The first step is to create a
Cashflow object.
Example 7.38. Banking Tutorial: Class Cashflow
public class Cashflow {
    private Date   date;
    private double amount;

    public Cashflow() {
    }

    public Cashflow(Date date, double amount) {
        this.date = date;
        this.amount = amount;
    }

    // standard getters and setters for date and amount omitted for brevity

    public String toString() {
        return "Cashflow[date=" + date + ",amount=" + amount + "]";
    }
}
Class
Cashflow has two simple attributes, a date
and an amount. (Note that using the type
double for
monetary units is generally not a good idea
because floating point numbers cannot represent most numbers accurately.)
There is also an overloaded constructor to set the values, and a
method
toString to print a cashflow. The Java code of
Example4.java inserts five Cashflow objects,
with varying dates and amounts.
Example 7.39. Banking Tutorial: Example4.java
public class Example4 {
public static void main(String[] args) throws Exception {
Object[] cashflows = {
new Cashflow(new SimpleDate("01/01/2007"), 300.00),
new Cashflow(new SimpleDate("05/01/2007"), 100.00),
new Cashflow(new SimpleDate("11/01/2007"), 500.00),
new Cashflow(new SimpleDate("07/01/2007"), 800.00),
new Cashflow(new SimpleDate("02/01/2007"), 400.00),
};
new RuleRunner().runRules( new String[] { "Example4.drl" },
cashflows );
}
}
The convenience class
SimpleDate extends
java.util.Date, providing a constructor taking
a String as input and defining a date format. The code is
listed below
Example 7.40. Banking Tutorial: Class SimpleDate
public class SimpleDate extends Date {
private static final SimpleDateFormat format = new SimpleDateFormat("dd/MM/yyyy");
public SimpleDate(String datestr) throws Exception {
setTime(format.parse(datestr).getTime());
}
}
Now, let’s look at
Example4.drl to see how
we print the sorted
Cashflow objects:
Example 7.41. Banking Tutorial: Rule in Example4.drl
rule "Rule 04"
    when
        $cashflow : Cashflow( $date : date, $amount : amount )
        not Cashflow( date < $date)
    then
        System.out.println("Cashflow: "+$date+" :: "+$amount);
        retract($cashflow);
end
Here, we identify a
Cashflow and extract the date
and the amount. In the second line of the rule we ensure that there
is no Cashflow with an earlier date than the one found. In the
consequence, we print the
Cashflow that satisfies the
rule and then retract it, making way for the next earliest
Cashflow. So, the output we generate is:
Example 7.42. Banking Tutorial: Output of Example4.java
Loading file: Example4.drl
Inserting fact: Cashflow[date=Mon Jan 01 00:00:00 GMT 2007,amount=300.0]
Inserting fact: Cashflow[date=Fri Jan 05 00:00:00 GMT 2007,amount=100.0]
Inserting fact: Cashflow[date=Thu Jan 11 00:00:00 GMT 2007,amount=500.0]
Inserting fact: Cashflow[date=Sun Jan 07 00:00:00 GMT 2007,amount=800.0]
Inserting fact: Cashflow[date=Tue Jan 02 00:00:00 GMT 2007,amount=400.0]
Cashflow: Mon Jan 01 00:00:00 GMT 2007 :: 300.0
Cashflow: Tue Jan 02 00:00:00 GMT 2007 :: 400.0
Cashflow: Fri Jan 05 00:00:00 GMT 2007 :: 100.0
Cashflow: Sun Jan 07 00:00:00 GMT 2007 :: 800.0
Cashflow: Thu Jan 11 00:00:00 GMT 2007 :: 500.0
Next, we extend our
Cashflow, resulting in a
TypedCashflow which can be a credit or a debit operation.
(Normally, we would just add this to the
Cashflow type, but
we use extension to keep the previous version of the class intact.)
Example 7.43. Banking Tutorial: Class TypedCashflow
public class TypedCashflow extends Cashflow {
public static final int CREDIT = 0;
public static final int DEBIT = 1;
private int type;
public TypedCashflow() {
}
public TypedCashflow(Date date, int type, double amount) {
super( date, amount );
this.type = type;
}
public int getType() {
return type;
}
public void setType(int type) {
this.type = type;
}
public String toString() {
return "TypedCashflow[date=" + getDate() +
",type=" + (type == CREDIT ? "Credit" : "Debit") +
",amount=" + getAmount() + "]";
}
}
There are lots of ways to improve this code, but for the sake of the example this will do.
Now let's create Example5, a class for running our code.
Example 7.44. Banking Tutorial: Example5.java
public class Example5 {
public static void main(String[] args) throws Exception {
Object[] cashflows = {
new TypedCashflow(new SimpleDate("01/01/2007"),
TypedCashflow.CREDIT, 300.00),
new TypedCashflow(new SimpleDate("05/01/2007"),
TypedCashflow.CREDIT, 100.00),
new TypedCashflow(new SimpleDate("11/01/2007"),
TypedCashflow.CREDIT, 500.00),
new TypedCashflow(new SimpleDate("07/01/2007"),
TypedCashflow.DEBIT, 800.00),
new TypedCashflow(new SimpleDate("02/01/2007"),
TypedCashflow.DEBIT, 400.00),
};
new RuleRunner().runRules( new String[] { "Example5.drl" },
cashflows );
}
}
Here, we simply create a set of
Cashflow objects
which are either credit or debit operations. We supply them and
Example5.drl to the RuleEngine.
Now, let’s look at a rule printing the sorted
Cashflow objects.
Example 7.45. Banking Tutorial: Rule in Example5.drl
rule "Rule 05"
    when
        $cashflow : TypedCashflow( $date : date,
                                   $amount : amount,
                                   type == TypedCashflow.CREDIT )
        not TypedCashflow( date < $date,
                           type == TypedCashflow.CREDIT )
    then
        System.out.println("Credit: "+$date+" :: "+$amount);
        retract($cashflow);
end
Here, we identify a
Cashflow fact with a type
of
CREDIT and extract the date and the amount. In the
second line of the rule we ensure that there is no
Cashflow
of the same type with an earlier date than the one found. In the
consequence, we print the cashflow satisfying the patterns and then
retract it, making way for the next earliest cashflow of type
CREDIT.
So, the output we generate is
Example 7.46. Banking Tutorial: Output of Example5.java
Loading file: Example5.drl
Inserting fact: TypedCashflow[date=Mon Jan 01 00:00:00 GMT 2007,type=Credit,amount=300.0]
Inserting fact: TypedCashflow[date=Fri Jan 05 00:00:00 GMT 2007,type=Credit,amount=100.0]
Inserting fact: TypedCashflow[date=Thu Jan 11 00:00:00 GMT 2007,type=Credit,amount=500.0]
Inserting fact: TypedCashflow[date=Sun Jan 07 00:00:00 GMT 2007,type=Debit,amount=800.0]
Inserting fact: TypedCashflow[date=Tue Jan 02 00:00:00 GMT 2007,type=Debit,amount=400.0]
Credit: Mon Jan 01 00:00:00 GMT 2007 :: 300.0
Credit: Fri Jan 05 00:00:00 GMT 2007 :: 100.0
Credit: Thu Jan 11 00:00:00 GMT 2007 :: 500.0
Continuing our banking exercise, we are now going to process both
credits and debits on two bank accounts, calculating the account balance.
In order to do this, we create two separate
Account objects
and inject them into the
Cashflows objects before passing
them to the Rule Engine. The reason for this is to provide easy access
to the correct account without having to resort to helper classes. Let’s
take a look at the
Account class first. This is a simple
Java object with an account number and balance:
Example 7.47. Banking Tutorial: Class Account
public class Account {
private long accountNo;
private double balance = 0;
public Account() {
}
public Account(long accountNo) {
this.accountNo = accountNo;
}
public long getAccountNo() {
return accountNo;
}
public void setAccountNo(long accountNo) {
this.accountNo = accountNo;
}
public double getBalance() {
return balance;
}
public void setBalance(double balance) {
this.balance = balance;
}
public String toString() {
return "Account[" + "accountNo=" + accountNo + ",balance=" + balance + "]";
}
}
Now let’s extend our
TypedCashflow, resulting in
AllocatedCashflow, to include an
Account
reference.
Example 7.48. Banking Tutorial: Class AllocatedCashflow
public class AllocatedCashflow extends TypedCashflow {
private Account account;
public AllocatedCashflow() {
}
public AllocatedCashflow(Account account, Date date, int type, double amount) {
super( date, type, amount );
this.account = account;
}
public Account getAccount() {
return account;
}
public void setAccount(Account account) {
this.account = account;
}
public String toString() {
return "AllocatedCashflow[" +
"account=" + account +
",date=" + getDate() +
",type=" + (getType() == CREDIT ? "Credit" : "Debit") +
",amount=" + getAmount() + "]";
}
}
The Java code of
Example6.java creates
two
Account objects and passes one of them into each
cashflow, in the constructor call.
Example 7.49. Banking Tutorial: Example6.java
public class Example6 {
public static void main(String[] args) throws Exception {
Account acc1 = new Account(1);
Account acc2 = new Account(2);
Object[] cashflows = {
new AllocatedCashflow(acc1,new SimpleDate("01/01/2007"),
TypedCashflow.CREDIT, 300.00),
new AllocatedCashflow(acc1,new SimpleDate("05/02/2007"),
TypedCashflow.CREDIT, 100.00),
new AllocatedCashflow(acc2,new SimpleDate("11/03/2007"),
TypedCashflow.CREDIT, 500.00),
new AllocatedCashflow(acc1,new SimpleDate("07/02/2007"),
TypedCashflow.DEBIT, 800.00),
new AllocatedCashflow(acc2,new SimpleDate("02/03/2007"),
TypedCashflow.DEBIT, 400.00),
new AllocatedCashflow(acc1,new SimpleDate("01/04/2007"),
TypedCashflow.CREDIT, 200.00),
new AllocatedCashflow(acc1,new SimpleDate("05/04/2007"),
TypedCashflow.CREDIT, 300.00),
new AllocatedCashflow(acc2,new SimpleDate("11/05/2007"),
TypedCashflow.CREDIT, 700.00),
new AllocatedCashflow(acc1,new SimpleDate("07/05/2007"),
TypedCashflow.DEBIT, 900.00),
new AllocatedCashflow(acc2,new SimpleDate("02/05/2007"),
TypedCashflow.DEBIT, 100.00)
};
new RuleRunner().runRules( new String[] { "Example6.drl" },
cashflows );
}
}
Now, let’s look at the rule in
Example6.drl
to see how we apply each cashflow in date order and calculate and print
the balance.
Example 7.50. Banking Tutorial: Rule in Example6.drl
rule "Rule 06 - Credit"
    when
        $cashflow : AllocatedCashflow( $account : account,
                                       $date : date,
                                       $amount : amount,
                                       type == TypedCashflow.CREDIT )
        not AllocatedCashflow( account == $account, date < $date)
    then
        System.out.println("Credit: " + $date + " :: " + $amount);
        $account.setBalance($account.getBalance()+$amount);
        System.out.println("Account: " + $account.getAccountNo() +
                           " - new balance: " + $account.getBalance());
        retract($cashflow);
end

rule "Rule 06 - Debit"
    when
        $cashflow : AllocatedCashflow( $account : account,
                                       $date : date,
                                       $amount : amount,
                                       type == TypedCashflow.DEBIT )
        not AllocatedCashflow( account == $account, date < $date)
    then
        System.out.println("Debit: " + $date + " :: " + $amount);
        $account.setBalance($account.getBalance() - $amount);
        System.out.println("Account: " + $account.getAccountNo() +
                           " - new balance: " + $account.getBalance());
        retract($cashflow);
end
Although we have separate rules for credits and debits, we do not specify a type when checking for earlier cashflows. This is so that all cashflows are applied in date order, regardless of the cashflow type. In the conditions we identify the account to work with, and in the consequences we update it with the cashflow amount.
Example 7.51. Banking Tutorial: Output of Example6.java
Loading file: Example6.drl
Inserting fact: AllocatedCashflow[account=Account[accountNo=1,balance=0.0],date=Mon Jan 01 00:00:00 GMT 2007,type=Credit,amount=300.0]
Inserting fact: AllocatedCashflow[account=Account[accountNo=1,balance=0.0],date=Mon Feb 05 00:00:00 GMT 2007,type=Credit,amount=100.0]
Inserting fact: AllocatedCashflow[account=Account[accountNo=2,balance=0.0],date=Sun Mar 11 00:00:00 GMT 2007,type=Credit,amount=500.0]
Inserting fact: AllocatedCashflow[account=Account[accountNo=1,balance=0.0],date=Wed Feb 07 00:00:00 GMT 2007,type=Debit,amount=800.0]
Inserting fact: AllocatedCashflow[account=Account[accountNo=2,balance=0.0],date=Fri Mar 02 00:00:00 GMT 2007,type=Debit,amount=400.0]
Inserting fact: AllocatedCashflow[account=Account[accountNo=1,balance=0.0],date=Sun Apr 01 00:00:00 BST 2007,type=Credit,amount=200.0]
Inserting fact: AllocatedCashflow[account=Account[accountNo=1,balance=0.0],date=Thu Apr 05 00:00:00 BST 2007,type=Credit,amount=300.0]
Inserting fact: AllocatedCashflow[account=Account[accountNo=2,balance=0.0],date=Fri May 11 00:00:00 BST 2007,type=Credit,amount=700.0]
Inserting fact: AllocatedCashflow[account=Account[accountNo=1,balance=0.0],date=Mon May 07 00:00:00 BST 2007,type=Debit,amount=900.0]
Inserting fact: AllocatedCashflow[account=Account[accountNo=2,balance=0.0],date=Wed May 02 00:00:00 BST 2007,type=Debit,amount=100.0]
Debit: Fri Mar 02 00:00:00 GMT 2007 :: 400.0
Account: 2 - new balance: -400.0
Credit: Sun Mar 11 00:00:00 GMT 2007 :: 500.0
Account: 2 - new balance: 100.0
Debit: Wed May 02 00:00:00 BST 2007 :: 100.0
Account: 2 - new balance: 0.0
Credit: Fri May 11 00:00:00 BST 2007 :: 700.0
Account: 2 - new balance: 700.0
Credit: Mon Jan 01 00:00:00 GMT 2007 :: 300.0
Account: 1 - new balance: 300.0
Credit: Mon Feb 05 00:00:00 GMT 2007 :: 100.0
Account: 1 - new balance: 400.0
Debit: Wed Feb 07 00:00:00 GMT 2007 :: 800.0
Account: 1 - new balance: -400.0
Credit: Sun Apr 01 00:00:00 BST 2007 :: 200.0
Account: 1 - new balance: -200.0
Credit: Thu Apr 05 00:00:00 BST 2007 :: 300.0
Account: 1 - new balance: 100.0
Debit: Mon May 07 00:00:00 BST 2007 :: 900.0
Account: 1 - new balance: -800.0
The Pricing Rule decision table demonstrates the use of a decision table in a spreadsheet, in Excel's XLS format, in calculating the retail cost of an insurance policy. The purpose of the provided set of rules is to calculate a base price and a discount for a car driver applying for a specific policy. The driver's age, history and the policy type all contribute to what the basic premium is, and an additional chunk of rules deals with refining this with a discount percentage.
Name: Example Policy Pricing
Main class: org.drools.examples.decisiontable.PricingRuleDTExample
Module: drools-examples
Type: Java application
Rules file: ExamplePolicyPricing.xls
Objective: demonstrate spreadsheet-based decision tables.
Open the file
PricingRuleDTExample.java and
execute it as a Java application. It should produce the following
output in the Console window:
Cheapest possible
BASE PRICE IS: 120
DISCOUNT IS: 20
The code to execute the example follows the usual pattern. The rules are loaded, the facts inserted and a Stateless Session is created. What is different is how the rules are added.
DecisionTableConfiguration dtableconfiguration =
KnowledgeBuilderFactory.newDecisionTableConfiguration();
dtableconfiguration.setInputType( DecisionTableInputType.XLS );
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
Resource xlsRes = ResourceFactory.newClassPathResource( "ExamplePolicyPricing.xls",
getClass() );
kbuilder.add( xlsRes,
ResourceType.DTABLE,
dtableconfiguration );
Note the use of the
DecisionTableConfiguration object.
Its input type is set to
DecisionTableInputType.XLS.
If you use the BRMS, all this is of course taken care of for you.
There are two fact types used in this example,
Driver
and
Policy. Both are used with their default values. The
Driver is 30 years old, has had no prior claims and
currently has a risk profile of
LOW. The
Policy
being applied for is
COMPREHENSIVE, and it has not yet been
approved.
In this decision table, each row is a rule, and each column is a condition or an action.
Referring to the spreadsheet shown above, we have the
RuleSet declaration, which provides the package name.
There are also other optional items you can have here, such as
Variables for global variables, and
Imports
for importing classes. In this case, the namespace of the rules is
the same as the fact classes we are using, so we can omit it.
Moving further down, we can see the
RuleTable
declaration. The name after this (Pricing bracket) is used as the
prefix for all the generated rules. Below that, we have
"CONDITION or ACTION", indicating the purpose of the column, i.e.,
whether it forms part of the condition or the consequence of the rule
that will be generated.
You can see that there is a driver, with data spanning three
cells, which means that the template expressions below it apply to that
fact. We observe the driver's age range (which uses
$1 and
$2 with comma-separated values),
locationRiskProfile, and
priorClaims in the
respective columns. In the action columns, we set the policy's base price and log a message.
In the preceding spreadsheet section, there are broad category brackets, indicated by the comment in the leftmost column. As we know the details of our drivers and their policies, we can tell (with a bit of thought) that they should match row number 18, as they have no prior accidents, and are 30 years old. This gives us a base price of 120.
The above section contains the conditions for the discount we
might grant our driver. The discount results from the
Age
bracket, the number of prior claims, and the policy type. In our case,
the driver is 30, with no prior claims, and is applying for a
COMPREHENSIVE policy, which means we can give a discount
of 20%. Note that this is actually a separate table, but in the same
worksheet, so that different templates apply.
It is important to note that decision tables generate rules. This means they aren't simply top-down logic, but more a means to capture data resulting in rules. This is a subtle difference that confuses some people. The evaluation of the rules is not necessarily in the given order, since all the normal mechanics of the rule engine still apply.
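To make the relationship between spreadsheet rows and rules concrete, here is a rough sketch of what a single generated rule could look like for the base-price row discussed above. It is illustrative only: the actual rule names, field names and constraint values come from the spreadsheet template, and the ones used here are assumptions rather than a copy of the generated DRL:

rule "Pricing bracket_18"
    when
        Driver( age >= 25, age <= 30,
                priorClaims == "0",
                locationRiskProfile == "LOW" )
        $policy : Policy( type == "COMPREHENSIVE" )
    then
        $policy.setBasePrice( 120 );
        System.out.println( "Driver 25-30 with no prior claims, base price 120" );
end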
Name: Pet Store
Main class: org.drools.examples.petstore.PetStoreExample
Module: drools-examples
Type: Java application
Rules file: PetStore.drl
Objective: Demonstrate use of Agenda Groups, Global Variables and integration with a GUI, including callbacks from within the rules
The Pet Store example shows how to integrate Rules with a GUI, in this case a Swing based desktop application. Within the rules file, it demonstrates how to use Agenda groups and auto-focus to control which of a set of rules is allowed to fire at any given time. It also illustrates the mixing of the Java and MVEL dialects within the rules, the use of accumulate functions and the way of calling Java functions from within the ruleset.
All of the Java code is contained in one file,
PetStore.java, defining the following principal
classes (in addition to several classes to handle Swing Events):
Petstore contains the
main()
method that we will look at shortly.
PetStoreUI is responsible for creating and
displaying the Swing based GUI. It contains several smaller
classes, mainly for responding to various GUI events such as
mouse button clicks.
TableModel holds the table data. Think of it
as a JavaBean that extends the Swing class
AbstractTableModel.
CheckoutCallback allows the GUI to interact with the Rules.
Ordershow keeps the items that we wish to
buy.
Purchase stores details of the order and
the products we are buying.
Product is a JavaBean holding details of
the product available for purchase, and its price.
Much of the Java code is either plain JavaBeans or Swing-based. Only a few Swing-related points will be discussed in this section, but a good tutorial about Swing components can be found at Sun's Swing website.
The pieces of Java code in
Petstore.java
that relate to rules and facts are shown below.
Example 7.52. Creating the PetStore RuleBase in PetStore.main
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add( ResourceFactory.newClassPathResource( "PetStore.drl",
PetStore.class ),
ResourceType.DRL );
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );
// Create the stock.
Vector<Product> stock = new Vector<Product>();
stock.add( new Product( "Gold Fish", 5 ) );
stock.add( new Product( "Fish Tank", 25 ) );
stock.add( new Product( "Fish Food", 2 ) );
// A callback is responsible for populating the
// Working Memory and for firing all rules.
PetStoreUI ui = new PetStoreUI( stock,
new CheckoutCallback( kbase ) );
ui.createAndShowGUI();
The code shown above loads the rules from a DRL file on the
classpath. Unlike other examples where the facts are asserted and
fired straight away, this example defers this step to later. The
way it does this is via the second last line where a
PetStoreUI object is created using a constructor
accepting the
Vector object
stock
collecting our products, and an instance of
the CheckoutCallback class containing the Rule Base that we have just loaded.
The Java code that fires the rules is within the CheckoutCallBack.checkout() method. This is triggered (eventually) when the Checkout button is pressed by the user.
Example 7.53. Firing the Rules - extract from CheckoutCallBack.checkout()
public String checkout(JFrame frame, List<Product> items) {
Order order = new Order();
// Iterate through list and add to cart
for ( Product p: items ) {
order.addItem( new Purchase( order, p ) );
}
// Add the JFrame to the ApplicationData to allow for user interaction
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
ksession.setGlobal( "frame", frame );
ksession.setGlobal( "textArea", this.output );
ksession.insert( new Product( "Gold Fish", 5 ) );
ksession.insert( new Product( "Fish Tank", 25 ) );
ksession.insert( new Product( "Fish Food", 2 ) );
ksession.insert( new Product( "Fish Food Sample", 0 ) );
ksession.insert( order );
ksession.fireAllRules();
// Return the state of the cart
return order.toString();
}
Two items get passed into this method. One is the handle to the
JFrame Swing component surrounding the output text
frame, at the bottom of the GUI. The second is a list of order items;
this comes from the
TableModel storing the information
from the "Table" area at the top right section of the GUI.
The for loop transforms the list of order items coming from the
GUI into the
Order JavaBean, also contained in the
file
PetStore.java. Note that it would be
possible to refer to the Swing dataset directly within the rules,
but it is better coding practice to do it this way, using simple
Java objects. It means that we are not tied to Swing if we wanted
to transform the sample into a Web application.
It is important to note that all state in this
example is stored in the Swing components, and that the rules are
effectively stateless. Each time the "Checkout" button is
pressed, this code copies the contents of the Swing
TableModel into the Session's Working Memory.
Within this code, there are nine calls to the Working Memory.
The first of these creates a new Working Memory, as a Stateful
Knowledge Session from the Knowledge Base. Remember that we passed
in this Knowledge Base when we created the
CheckoutCallback class in the
main() method. The next two calls pass in
two objects that we will hold as global variables in the rules: the
Swing text area and the Swing frame used for writing messages.
More inserts put information on products into the Working Memory,
as well as the order list. The final call is the standard
fireAllRules(). Next, we look at what this method causes
to happen within the rules file.
Example 7.54. Package, Imports, Globals and Dialect: extract from PetStore.drl
package org.drools.examples
import org.drools.WorkingMemory
import org.drools.examples.petstore.PetStoreExample.Order
import org.drools.examples.petstore.PetStoreExample.Purchase
import org.drools.examples.petstore.PetStoreExample.Product
import java.util.ArrayList
import javax.swing.JOptionPane;
import javax.swing.JFrame
global JFrame frame
global javax.swing.JTextArea textArea
The first part of file
PetStore.drl
contains the standard package and import statements to make various
Java classes available to the rules. New to us are the two globals
frame and
textArea. They hold references
to the Swing components
JFrame and
JTextArea
components that were previously passed on by the Java code calling
the
setGlobal() method. Unlike variables in rules,
which expire as soon as the rule has fired, global variables retain
their value for the lifetime of the Session.
The next extract from the file
PetStore.drl
contains two functions that are referenced by the rules that we will
look at shortly.
Example 7.55. Java Functions in the Rules: extract from PetStore.drl
function void doCheckout(JFrame frame, WorkingMemory workingMemory) {
Object[] options = {"Yes",
"No"};
int n = JOptionPane.showOptionDialog(frame,
"Would you like to checkout?",
"",
JOptionPane.YES_NO_OPTION,
JOptionPane.QUESTION_MESSAGE,
null,
options,
options[0]);
if (n == 0) {
workingMemory.setFocus( "checkout" );
}
}
function boolean requireTank(JFrame frame, WorkingMemory workingMemory, Order order, Product fishTank, int total) {
Object[] options = {"Yes",
"No"};
int n = JOptionPane.showOptionDialog(frame,
"Would you like to buy a tank for your " + total + " fish?",
"Purchase Suggestion",
JOptionPane.YES_NO_OPTION,
JOptionPane.QUESTION_MESSAGE,
null,
options,
options[0]);
System.out.print( "SUGGESTION: Would you like to buy a tank for your "
+ total + " fish? - " );
if (n == 0) {
Purchase purchase = new Purchase( order, fishTank );
workingMemory.insert( purchase );
order.addItem( purchase );
System.out.println( "Yes" );
} else {
System.out.println( "No" );
}
return true;
}
Having these functions in the rules file just makes the Pet Store
example more compact. In real life you probably have the functions
in a file of their own, within the same rules package, or as a
static method on a standard Java class, and import them, using
import function my.package.Foo.hello.
The purpose of these two functions is:
doCheckout() displays a dialog asking users
whether they wish to checkout. If they do, focus is set to the checkout agenda group, allowing rules in that group to (potentially) fire.
requireTank() displays a dialog asking
users whether they wish to buy a tank. If so, a new fish tank
Product is added to the order list in Working
Memory.
We'll see the rules that call these functions later on. The
next set of examples are from the Pet Store rules themselves. The
first extract is the one that happens to fire first, partly because
it has the
auto-focus attribute set to true.
Example 7.56. Putting items into working memory: extract from PetStore.drl
// Insert each item in the shopping cart into the Working Memory
rule "Explode Cart"
    agenda-group "init"
    auto-focus true
    salience 10
    dialect "java"
    when
        $order : Order( grossTotal == -1 )
        $item : Purchase() from $order.items
    then
        insert( $item );
        kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( "show items" ).setFocus();
        kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( "evaluate" ).setFocus();
end
This rule matches against all orders that do not yet have their
grossTotal calculated. It loops through each purchase item
in that order. Some parts of the "Explode Cart" rule should be familiar:
the rule name, the salience (suggesting the order for the rules being
fired) and the dialect set to
"java". There are three
new features:
agenda-group
"init" defines the name
of the agenda group. In this case, there is only one rule in the
group. However, neither the Java code nor a rule consequence sets
the focus to this group, and therefore it relies on the next
attribute for its chance to fire.
auto-focus
true ensures that this rule,
while being the only rule in the agenda group, gets a chance to fire
when
fireAllRules() is called from the Java code.
kcontext....setFocus() sets the focus to the
"show items" and
"evaluate" agenda groups
in turn, permitting their rules to fire. In practice, we loop
through all items on the order, inserting them into memory, then
firing the other rules after each insert.
The next two listings show the rules within the
"show items" and
evaluate agenda groups.
We look at them in the order that they are called.
Example 7.57. Show Items in the GUI - extract from PetStore.drl
rule "Show Items"
    agenda-group "show items"
    dialect "mvel"
    when
        $order : Order( )
        $p : Purchase( order == $order )
    then
        textArea.append( $p.product + "\n");
end
The
"show items" agenda-group has only one rule,
called "Show Items" (note the difference in case). For each purchase
on the order currently in the Working Memory (or Session), it logs
details to the text area at the bottom of the GUI. The
textArea variable used to do this is one of the global
variables we looked at earlier.
The
evaluate Agenda group also gains focus from
the
"Explode Cart" rule listed previously. This
Agenda group has two rules,
"Free Fish Food Sample" and
"Suggest Tank", shown below.
Example 7.58. Evaluate Agenda Group: extract from PetStore.drl
// Free Fish Food sample when we buy a Gold Fish if we haven't already bought
// Fish Food and don't already have a Fish Food Sample
rule "Free Fish Food Sample"
    agenda-group "evaluate"
    dialect "mvel"
    when
        $order : Order()
        not ( $p : Product( name == "Fish Food") && Purchase( product == $p ) )
        not ( $p : Product( name == "Fish Food Sample") && Purchase( product == $p ) )
        exists ( $p : Product( name == "Gold Fish") && Purchase( product == $p ) )
        $fishFoodSample : Product( name == "Fish Food Sample" );
    then
        System.out.println( "Adding free Fish Food Sample to cart" );
        purchase = new Purchase($order, $fishFoodSample);
        insert( purchase );
        $order.addItem( purchase );
end

// Suggest a tank if we have bought more than 5 gold fish and don't already have one
rule "Suggest Tank"
    agenda-group "evaluate"
    dialect "java"
    when
        $order : Order()
        not ( $p : Product( name == "Fish Tank") && Purchase( product == $p ) )
        ArrayList( $total : size > 5 ) from collect( Purchase( product.name == "Gold Fish" ) )
        $fishTank : Product( name == "Fish Tank" )
    then
        requireTank(frame, drools.getWorkingMemory(), $order, $fishTank, $total);
end
The rule
"Free Fish Food Sample" will only fire if
we don't already have any fish food, and
we don't already have a free fish food sample, and
we do have a Gold Fish in our order.
If the rule does fire, it creates a new product (Fish Food Sample), and adds it to the order in Working Memory.
The rule
"Suggest Tank" will only fire if
we don't already have a Fish Tank in our order, and
we do have more than 5 Gold Fish Products in our order.
If the rule does fire, it calls the
requireTank() function
that we looked at earlier (showing a Dialog to the user, and adding a Tank to
the order / working memory if confirmed). When calling the
requireTank() function the rule passes
the global frame variable so that the
function has a handle to the Swing GUI.
The next rule we look at is
"do checkout".
Example 7.59. Doing the Checkout - extract (6) from PetStore.drl
rule "do checkout"
    dialect "java"
    when
    then
        doCheckout(frame, drools.getWorkingMemory());
end
The rule
"do checkout" has no
agenda group set and no auto-focus attribute. As such, it is
deemed part of the default (MAIN) agenda group. This group gets focus by
default when all the rules in agenda-groups that explicitly had focus set
to them have run their course.
There is no LHS to the rule, so the RHS will always call the
doCheckout() function. When calling the
doCheckout() function, the rule passes the global
frame variable to give the function a handle to the Swing GUI.
As we saw earlier, the
doCheckout() function shows a
confirmation dialog to the user. If confirmed, the function sets the focus
to the checkout agenda-group, allowing
the next lot of rules to fire.
Example 7.60. Checkout Rules: extract from PetStore.drl
rule "Gross Total"
    agenda-group "checkout"
    dialect "mvel"
    when
        $order : Order( grossTotal == -1)
        Number( total : doubleValue )
            from accumulate( Purchase( $price : product.price ), sum( $price ) )
    then
        modify( $order ) { grossTotal = total };
        textArea.append( "\ngross total=" + total + "\n" );
end

rule "Apply 5% Discount"
    agenda-group "checkout"
    dialect "mvel"
    when
        $order : Order( grossTotal >= 10 && < 20 )
    then
        $order.discountedTotal = $order.grossTotal * 0.95;
        textArea.append( "discountedTotal total=" + $order.discountedTotal + "\n" );
end

rule "Apply 10% Discount"
    agenda-group "checkout"
    dialect "mvel"
    when
        $order : Order( grossTotal >= 20 )
    then
        $order.discountedTotal = $order.grossTotal * 0.90;
        textArea.append( "discountedTotal total=" + $order.discountedTotal + "\n" );
end
There are three rules in the checkout agenda-group:
If we haven't already calculated the gross total,
Gross Total accumulates the product prices into a total,
puts this total into Working Memory, and displays it via the Swing
JTextArea, using the
textArea global
variable yet again.
If our gross total is between 10 and 20,
"Apply 5% Discount" calculates the discounted total and
adds it to the Working Memory and displays it in the text area.
If our gross total is not less than 20,
"Apply 10% Discount" calculates the discounted total and
adds it to the Working Memory and displays it in the text area.
Now that we've run through what happens in the code, let's have a
look at what happens when we actually run the code. The file
PetStore.java contains a
main() method,
so that it can be run as a standard Java application, either from the
command line or via the IDE. This assumes you have your classpath set
correctly. (See the start of the examples section for more information.)
The first screen that we see is the Pet Store Demo. It has a list of available products (top left), an empty list of selected products (top right), checkout and reset buttons (middle) and an empty system messages area (bottom).
To get to this point, the following things have happened:
The
main() method has run and loaded the Rule Base
but not yet fired the rules. So far, this is the
only code in connection with rules that has been run.
A new
PetStoreUI object has been created and given a
handle to the Rule Base, for later use.
Various Swing components do their stuff, and the above screen is shown and waits for user input.
Clicking on various products from the list might give you a screen similar to the one below.
Note that no rules code has been fired here. This
is only Swing code, listening for mouse click events, and adding some
selected product to the
TableModel object for display in the
top right hand section. (As an aside, note that this is a classic use of
the Model View Controller design pattern).
It is only when we press the "Checkout" button that we fire our business rules, in roughly the same order that we walked through the code earlier.
Method CheckoutCallBack.checkout() is called
(eventually) by the Swing class waiting for the click on the
"Checkout" button. This takes the content of the
TableModel object (top right hand side of the GUI),
and inserts it into the Session's Working Memory. It then fires
the rules.
The
"Explode Cart" rule is the first to fire,
given that it has
auto-focus set to true. It loops through
all the products in the cart, ensures that the products are in the
Working Memory, and then gives the
"Show Items" and
Evaluation agenda groups a chance to fire. The rules
in these groups add the contents of the cart to the text area
(at the bottom of the window), decide whether or not to give us free
fish food, and to ask us whether we want to buy a fish tank. This
is shown in the figure below.
The Do Checkout rule is the next to fire as it (a) No other agenda group currently has focus and (b) it is part of the default (MAIN) agenda group. It always calls the doCheckout() function which displays a 'Would you like to Checkout?' Dialog Box.
The doCheckout() function sets the focus to the checkout agenda group, giving the rules in that group the option to fire.
The rules in the checkout agenda group total the contents of the cart and apply the appropriate discount.
Swing then waits for user input to either checkout more products (and to cause the rules to fire again), or to close the GUI - see the figure below.
We could add more System.out calls to demonstrate this flow of events. The output, as it currently appears in the Console window, is given in the listing below.
Example 7.61. Console (System.out) from running the PetStore GUI
Adding free Fish Food Sample to cart
SUGGESTION: Would you like to buy a tank for your 6 fish? - Yes
Name: Honest Politician
Main class: org.drools.examples.honestpolitician.HonestPoliticianExample
Module: drools-examples
Type: Java application
Rules file: HonestPoliticianExample.drl
Objective: Illustrate the concept of "truth maintenance" based on the logical insertion of facts
The Honest Politician example demonstrates truth maintenance with
logical assertions. The basic premise is that an object can only exist
while a statement is true. A rule's consequence can logically insert an
object with the
insertLogical() method. This means the object
will only remain in the Working Memory as long as the rule that logically
inserted it remains true. When the rule is no longer true the object is
automatically retracted.
In this example there is the class
Politician, with a
name and a boolean value for being honest. Four politicians with honest
state set to true are inserted.
Example 7.62. Class Politician
public class Politician {
private String name;
private boolean honest;
...
}
Example 7.63. Honest Politician: Execution
Politician blair = new Politician("blair", true);
Politician bush = new Politician("bush", true);
Politician chirac = new Politician("chirac", true);
Politician schroder = new Politician("schroder", true);
ksession.insert( blair );
ksession.insert( bush );
ksession.insert( chirac );
ksession.insert( schroder );
ksession.fireAllRules();
The Console window output shows that, while there is at least one honest politician, democracy lives. However, as each politician is in turn corrupted by an evil corporation, so that all politicians become dishonest, democracy is dead.
Example 7.64. Honest Politician: Console Output
Hurrah!!! Democracy Lives
I'm an evil corporation and I have corrupted schroder
I'm an evil corporation and I have corrupted chirac
I'm an evil corporation and I have corrupted bush
I'm an evil corporation and I have corrupted blair
We are all Doomed!!! Democracy is Dead
As soon as there is at least one honest politician in the
Working Memory a new
Hope object is logically asserted.
This object will only exist while there is at least one honest
politician. As soon as all politicians are dishonest, the
Hope object will be automatically retracted. This rule
is given a salience of 10 to ensure that it fires before any other
rule, as at this stage the "Hope is Dead" rule is actually true.
Example 7.65. Honest Politician: Rule "We have an honest politician"
rule "We have an honest Politician"
    salience 10
    when
        exists( Politician( honest == true ) )
    then
        insertLogical( new Hope() );
end
As soon as a
Hope object exists the "Hope Lives" rule
matches and fires. It has a salience of 10 so that it takes priority
over "Corrupt the Honest".
Example 7.66. Honest Politician: Rule "Hope Lives"
rule "Hope Lives"
    salience 10
    when
        exists( Hope() )
    then
        System.out.println("Hurrah!!! Democracy Lives");
end
Now that there is hope and we have, at the start, four honest
politicians, we have four activations for this rule, all in conflict.
They will fire in turn, corrupting each politician so that they are
no longer honest. When all four politicians have been corrupted we
have no politicians with the property
honest == true.
Thus, the rule "We have an honest Politician" is no longer true and
the object it logically inserted (due to the last execution of
new Hope()) is automatically retracted.
Example 7.67. Honest Politician: Rule "Corrupt the Honest"
rule "Corrupt the Honest"
    when
        politician : Politician( honest == true )
        exists( Hope() )
    then
        System.out.println( "I'm an evil corporation and I have corrupted " + politician.getName() );
        modify ( politician ) { honest = false };
end
With the
Hope object being automatically retracted,
via the truth maintenance system, the conditional element
not
applied to
Hope is no longer true so that the following
rule will match and fire.
Example 7.68. Honest Politician: Rule "Hope is Dead"
rule "Hope is Dead"
    when
        not( Hope() )
    then
        System.out.println( "We are all Doomed!!! Democracy is Dead" );
end
Let's take a look at the Audit trail for this application:
The moment we insert the first politician we have two activations.
The rule "We have an honest Politician" is activated only once for the first
inserted politician because it uses an
exists conditional
element, which matches once for any number. The rule "Hope is Dead" is
also activated at this stage, because we have not yet inserted the
Hope object. Rule "We have an honest Politician" fires first,
as it has a higher salience than "Hope is Dead", which inserts the
Hope object. (That action is highlighted green.) The
insertion of the
Hope object activates "Hope Lives" and
de-activates "Hope is Dead"; it also activates "Corrupt the Honest"
for each inserted honest politician. Rule "Hope Lives" executes,
printing "Hurrah!!! Democracy Lives". Then, for each politician, rule
"Corrupt the Honest" fires, printing "I'm an evil corporation and I
have corrupted X", where X is the name of the politician, and modifies
the politician's honest value to false. When the last honest politician
is corrupted,
Hope is automatically retracted, by the truth
maintenance system, as shown by the blue highlighted area. The green
highlighted area shows the origin of the currently selected blue
highlighted area. Once the
Hope fact is retracted, "Hope is
dead" activates and fires printing "We are all Doomed!!! Democracy is
Dead".
Name: Sudoku
Main class: org.drools.examples.sudoku.SudokuExample
Type: Java application
Rules file: sudoku.drl, validate.drl
Objective: Demonstrates the solving of logic problems, and complex pattern matching.
This example demonstrates how Drools can be used to find a solution in a large potential solution space based on a number of constraints. We use the popular puzzle of Sudoku. This example also shows how Drools can be integrated into a graphical interface and how callbacks can be used to interact with a running Drools rules engine in order to update the graphical interface based on changes in the Working Memory at runtime.
Sudoku is a logic-based number placement puzzle. The objective is to fill a 9x9 grid so that each column, each row, and each of the nine 3x3 zones contains the digits from 1 to 9, once, and only once.
The puzzle setter provides a partially completed grid and the puzzle solver's task is to complete the grid with these constraints.
The general strategy to solve the problem is to ensure that when you insert a new number it should be unique in its particular 3x3 zone, row and column.
See Wikipedia for a more detailed description.
Download and install drools-examples as described above and then
execute
java org.drools.examples.DroolsExamplesApp and
click on "SudokuExample".
The window contains an empty grid, but the program comes with a number of grids stored internally which can be loaded and solved. Click on "File", then "Samples" and select "Simple" to load one of the examples. Note that all buttons are disabled until a grid is loaded.
Loading the "Simple" example fills the grid according to the puzzle's initial state.
Click on the "Solve" button and the Drools-based engine will fill out the remaining values, and the buttons are inactive once more.
Alternatively, you may click on the "Step" button to see the next digit found by the rule set. The Console window will display detailed information about the rules which are executing to solve the step in a human readable form. Some examples of these messages are presented below.
single 8 at [0,1]
column elimination due to [1,2]: remove 9 from [4,2]
hidden single 9 at [1,2]
row elimination due to [2,8]: remove 7 from [2,4]
remove 6 from [3,8] due to naked pair at [3,2] and [3,7]
hidden pair in row at [4,6] and [4,4]
Click on the "Dump" button to see the state of the grid, with cells showing either the established value or the remaining candidate values.
       Col: 0 Col: 1 Col: 2 Col: 3 Col: 4 Col: 5 Col: 6 Col: 7 Col: 8
Row 0: 2 4 7 9 2 456 4567 9 23 56 9 --- 5 --- --- 1 --- 3 67 9 --- 8 --- 4 67
Row 1: 12 7 9 --- 8 --- 1 67 9 23 6 9 --- 4 --- 23 67 1 3 67 9 3 67 9 --- 5 ---
Row 2: 1 4 7 9 1 456 --- 3 --- 56 89 5 78 5678 --- 2 --- 4 67 9 1 4 67
Row 3: 1234 12345 1 45 12 5 8 --- 6 --- 2 5 78 5 78 45 7 --- 9 ---
Row 4: --- 6 --- --- 7 --- 5 --- 4 --- 2 5 8 --- 9 --- 5 8 --- 1 --- --- 3 ---
Row 5: --- 8 --- 12 45 1 45 9 12 5 --- 3 --- 2 5 7 567 4567 2 4 67
Row 6: 1 3 7 1 3 6 --- 2 --- 3 56 8 5 8 3 56 8 --- 4 --- 3 567 9 1 678
Row 7: --- 5 --- 1 34 6 1 4 678 3 6 8 --- 9 --- 34 6 8 1 3 678 --- 2 --- 1 678
Row 8: 34 --- 9 --- 4 6 8 --- 7 --- --- 1 --- 23456 8 3 56 8 3 56 6 8
Now, let us load a Sudoku grid that is deliberately invalid. Click on "File", "Samples" and "!DELIBERATELY BROKEN!". Note that this grid starts with some issues, for example the value 5 appears twice in the first row.
A few simple rules perform a sanity check, right after loading a grid. In this case, the following messages are printed on standard output:
cell [0,8]: 5 has a duplicate in row 0
cell [0,0]: 5 has a duplicate in row 0
cell [6,0]: 8 has a duplicate in col 0
cell [4,0]: 8 has a duplicate in col 0
Validation complete.
Nevertheless, click on the "Solve" button to apply the solving rules to this invalid grid. This will not complete; some cells remain empty.
The solving functionality has been achieved by the use of rules that implement standard solving techniques. They are based on the sets of values that are still candidates for a cell. If, for instance, such a set contains a single value, then this is the value for the cell. A little less obvious is the single occurrence of a value in one of the groups of nine cells. The rules detecting these situations insert a fact of type Setting with the solution value for some specific cell. This fact causes the elimination of this value from all other cells in any of the groups the cell belongs to. Finally, it is retracted.
Other rules merely reduce the permissible values for some cells. Rules "naked pair", "hidden pair in row", "hidden pair in column" and "hidden pair in square" merely eliminate possibilities but do not establish solutions. More sophisticated eliminations are done by "X-wings in rows", "X-wings in columns", "intersection removal row" and "intersection removal column".
The Java source code can be found in the /src/main/java/org/drools/examples/sudoku directory, with the two DRL files defining the rules located in the /src/main/rules/org/drools/examples/sudoku directory.
The package
org.drools.examples.sudoku.swing
contains a set of classes which implement a framework for Sudoku
puzzles. Note that this package does not have any dependencies on
the Drools libraries.
SudokuGridModel defines an
interface which can be implemented to store a Sudoku puzzle as a 9x9
grid of
Cell objects.
SudokuGridView is
a Swing component which can visualize any implementation of
SudokuGridModel.
SudokuGridEvent and
SudokuGridListener are used to
communicate state changes between the model and the view: events are
fired when a cell's value is resolved or changed. If you are familiar
with the model-view-controller patterns in other Swing components such
as
JTable then this pattern should be familiar.
SudokuGridSamples provides a number of partially filled
Sudoku puzzles for demonstration purposes.
Package
org.drools.examples.sudoku.rules contains a
utility class with a method for compiling DRL files.
The package
org.drools.examples.sudoku contains a
set of classes implementing the elementary
Cell object
and its various aggregations: the
CellFile subtypes
CellRow and
CellCol as well as
CellSqr, all of which are subtypes of
CellGroup. It's interesting to note that
Cell
and
CellGroup are subclasses of
SetOfNine,
which provides a property
free with the
type
Set<Integer>. For a
Cell it
represents the individual candidate set; for a
CellGroup
the set is the union of all candidate sets of its cells, or, simply,
the set of digits that still need to be allocated.
With 81
Cell and 27
CellGroup objects and
the linkage provided by the
Cell properties
cellRow,
cellCol and
cellSqr
and the
CellGroup property
cells, a list of
Cell objects, it is possible to write rules that
detect the specific situations that permit the allocation of a
value to a cell or the elimination of a value from some candidate
set.
An object of class
Setting is used for triggering
the operations that accompany the allocation of a value: its removal
from the candidate sets of sibling cells and associated cell groups.
Moreover, the presence of a
Setting fact is used in
all rules that should detect a new situation; this is to avoid
reactions to inconsistent intermediary states.
An object of class
Stepping is used in a
low priority rule to execute an emergency halt when a "Step"
does not terminate regularly. This indicates that the puzzle
cannot be solved by the program.
The class
org.drools.examples.sudoku.SudokuExample
implements a Java application combining the components described.
Validation rules detect duplicate numbers in cell groups. They are combined in an agenda group which enables us to activate them, explicitly, after loading a puzzle.
The three rules "duplicate in cell..." are very similar. The first pattern locates a cell with an allocated value. The second pattern pulls in any of the three cell groups the cell belongs to. The final pattern would find a cell (other than the first one) with the same value as the first cell and in the same row, column or square, respectively.
Rule "terminate group" fires last. It prints a message and calls halt.
There are three types of rules in this file: one group handles the allocation of a number to a cell, another group detects feasible allocations, and the third group eliminates values from candidate sets.
Rules "set a value", "eliminate a value from Cell" and
"retract setting" depend on the presence of a
Setting
object. The first rule handles the assignment to the cell and the
operations for removing the value from the "free" sets of the
cell's three groups. Also, it decrements a counter that, when
zero, returns control to the Java application that has called
fireUntilHalt(). The purpose of rule
"eliminate a value from Cell" is to reduce the candidate lists
of all cells that are related to the newly assigned cell. Finally,
when all eliminations have been made, rule "retract setting"
retracts the triggering
Setting fact.
There are just two rules that detect a situation where an
allocation of a number to a cell is possible. Rule "single" fires
for a
Cell with a candidate set containing a single
number. Rule "hidden single" fires when there is no cell with a
single candidate but when there is a cell containing a candidate but
this candidate is absent from all other cells in one of the three
groups the cell belongs to. Both rules create and insert a
Setting fact.
Rules from the largest group of rules implement, singly or in groups of two or three, various solving techniques, as they are employed when solving Sudoku puzzles manually.
Rule "naked pair" detects identical candidate sets of size 2 in two cells of a group; these two values may be removed from all other candidate sets of that group.
A similar idea motivates the three rules "hidden pair in..."; here, the rules look for a subset of two numbers in exactly two cells of a group, with neither value occurring in any of the other cells of this group. This, then, means that all other candidates can be eliminated from the two cells harbouring the hidden pair.
A pair of rules deals with "X-wings" in rows and columns. When there are only two possible cells for a value in each of two different rows (or columns) and these candidates lie also in the same columns (or rows), then all other candidates for this value in the columns (or rows) can be eliminated. If you follow the pattern sequence in one of these rules, you will see how the conditions that are conveniently expressed by words such as "same" or "only" result in patterns with suitable constraints or prefixed with "not".
The rule pair "intersection removal..." is based on the restricted occurrence of some number within one square, either in a single row or in a single column. This means that this number must be in one of those two or three cells of the row or column; hence it can be removed from the candidate sets of all other cells of the group. The pattern establishes the restricted occurrence and then fires for each cell outside the square and within the same cell file.
These rules are sufficient for many but certainly not for all Sudoku puzzles. To solve very difficult grids, the rule set would need to be extended with more complex rules. (Ultimately, there are puzzles that cannot be solved except by trial and error.)
Name: Number Guess Main class: org.drools.examples.numberguess.NumberGuessExample Module: droolsjbpm-integration-examples (Note: this is in a different download, the droolsjbpm-integration download.) Type: Java application Rules file: NumberGuess.drl Objective: Demonstrate use of Rule Flow to organise Rules
The "Number Guess" example shows the use of Rule Flow, a way of controlling the order in which rules are fired. It uses widely understood workflow diagrams for defining the order in which groups of rules will be executed.
Example 7.69. Creating the Number Guess RuleBase: NumberGuessExample.main() - part 1
final KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add( ResourceFactory.newClassPathResource( "NumberGuess.drl",
ShoppingExample.class ),
ResourceType.DRL );
kbuilder.add( ResourceFactory.newClassPathResource( "NumberGuess.rf",
ShoppingExample.class ),
ResourceType.DRF );
final KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );
The creation of the package and the loading of the rules (using the
add() method) is the same as
the previous examples. There is an additional line to add the Rule Flow (
NumberGuess.rf), which
provides the option of specifying different rule flows for the same Knowledge Base. Otherwise, the Knowledge Base is
created in the same manner as before.
Example 7.70. Starting the RuleFlow: NumberGuessExample.main() - part 2
final StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
KnowledgeRuntimeLogger logger =
KnowledgeRuntimeLoggerFactory.newFileLogger(ksession, "log/numberguess");
ksession.insert( new GameRules( 100, 5 ) );
ksession.insert( new RandomNumber() );
ksession.insert( new Game() );
ksession.startProcess( "Number Guess" );
ksession.fireAllRules();
logger.close();
ksession.dispose();
Once we have a Knowledge Base, we can use it to obtain a Stateful Session. Into our session we insert our facts,
i.e., standard Java objects. (For simplicity, in this sample, these classes are all contained within our
NumberGuessExample.java file. Class
GameRules provides the maximum range and the
number of guesses allowed. Class
RandomNumber automatically generates a number between 0 and 100 and
makes it available to our rules, by insertion via the
getValue() method. Class
Game keeps
track of the guesses we have made before, and their number.
Note that before we call the standard
fireAllRules() method, we also start the process that we
loaded earlier, via the
startProcess() method. We'll learn where to obtain the parameter we pass ("Number
Guess", i.e., the identifier of the rule flow) when we talk about the rule flow file and the graphical Rule Flow
Editor below.
Before we finish the discussion of our Java code, we note that in some real-life application we would examine
the final state of the objects. (Here, we could retrieve the number of guesses, to add it to a high score table.) For
this example we are content to ensure that the Working Memory session is cleared by calling the
dispose()
method.
If you open the
NumberGuess.rf file in the Drools IDE (provided you have the JBoss Rules
extensions installed correctly in Eclipse) you should see the above diagram, similar to a standard flowchart. Its
icons are similar (but not exactly the same) as in the JBoss jBPM workflow product. Should you wish to edit the
diagram, a menu of available components should be available to the left of the diagram in the IDE, which is called the
palette. This diagram is saved in XML, an (almost) human readable format, using XStream.
If it is not already open, ensure that the Properties View is visible in the IDE. It can be opened by clicking "Window", then "Show View" and "Other", where you can select the "Properties" view. If you do this before you select any item on the rule flow (or click on the blank space in the rule flow) you should be presented with the following set of properties.
Keep an eye on the Properties View as we progress through the example's rule flow, as it presents valuable
information. In this case, it provides us with the identification of the Rule Flow Process that we used in our earlier
code snippet, when we called
session.startProcess().
In the "Number Guess" Rule Flow we encounter several node types, many of them identified by an icon.
The Start node (white arrow in a green circle) and the End node (red box) mark beginning and end of the rule flow.
A Rule Flow Group box (yellow, without an icon) represents a Rule Flow Groups defined in our rules (DRL)
file that we will look at later. For example, when the flow reaches the Rule Flow Group "Too High", only those
rules marked with an attribute of
ruleflow-group
"Too High" can potentially
fire.
Action nodes (yellow, cog-shaped icon) perform standard Java method calls. Most action nodes in this
example call
System.out.println(), indicating the program's progress to the user.
Split and Join Nodes (blue ovals, no icon) such as "Guess Correct?" and "More guesses Join" mark places where the flow of control can split, according to various conditions, and rejoin, respectively
Arrows indicate the flow between the various nodes.
The various nodes in combination with the rules make the Number Guess game work. For example, the "Guess" Rule
Flow Group allows only the rule "Get user Guess" to fire, because only that rule has a matching attribute of
ruleflow-group
"Guess".
Example 7.71. A Rule firing only at a specific point in the Rule Flow: NumberGuess.drl
rule "Get user Guess" ruleflow-group "Guess" no-loop when $r : RandomNumber() rules : GameRules( allowed : allowedGuesses ) game : Game( guessCount < allowed ) not ( Guess() ) then System.out.println( "You have " + ( rules.allowedGuesses - game.guessCount ) + " out of " + rules.allowedGuesses + " guesses left.\nPlease enter your guess from 0 to " + rules.maxRange ); br = new BufferedReader( new InputStreamReader( System.in ) ); i = br.readLine(); modify ( game ) { guessCount = game.guessCount + 1 } insert( new Guess( i ) ); end
The rest of this rule is fairly standard. The LHS section (after
when) of the rule states
that it will be activated for each
RandomNumber object inserted into the Working Memory where
guessCount is less than
allowedGuesses from the
GameRules object and where the
user has not guessed the correct number.
The RHS section (or consequence, after
then) prints a message to the user and then awaits
user input from
System.in. After obtaining this input (the
readLine() method call blocks
until the return key is pressed) it modifies the guess count and inserts the new guess, making both available to the
Working Memory.
The rest of the rules file is fairly standard: the package declares the dialect as MVEL, and various Java classes are imported. In total, there are five rules in this file:
Get User Guess, the Rule we examined above.
A Rule to record the highest guess.
A Rule to record the lowest guess.
A Rule to inspect the guess and retract it from memory if incorrect.
A Rule that notifies the user that all guesses have been used up.
One point of integration between the standard Rules and the RuleFlow is via the
ruleflow-group attribute on the rules, as discussed above. A second point of integration
between the rules (.drl) file and the Rules Flow .rf files is that the Split Nodes (the blue ovals) can use
values in the Working Memory (as updated by the rules) to decide which flow of action to take. To see how this works,
click on the "Guess Correct Node"; then within the Properties View, open the Constraints Editor by clicking the button
at the right that appears once you click on the "Constraints" property line. You should see something similar to the
diagram below.
Click on the "Edit" button beside "To node Too High" and you'll see a dialog like the one below. The values in the "Textual Editor" window follow the standard rule format for the LHS and can refer to objects in Working Memory. The consequence (RHS) is that the flow of control follows this node (i.e., "To node Too High") if the LHS expression evaluates to true.
Since the file
NumberGuess.java contains a
main() method, it can be run as a
standard Java application, either from the command line or via the IDE. A typical game might result in the interaction
below. The numbers in bold are typed in by the user.
Example 7.72. Example Console output where the Number Guess Example beat the human!
You have 5 out of 5 guesses left. Please enter your guess from 0 to 100 50 Your guess was too high You have 4 out of 5 guesses left. Please enter your guess from 0 to 100 25 Your guess was too low You have 3 out of 5 guesses left. Please enter your guess from 0 to 100 37 Your guess was too low You have 2 out of 5 guesses left. Please enter your guess from 0 to 100 44 Your guess was too low You have 1 out of 5 guesses left. Please enter your guess from 0 to 100 47 Your guess was too low You have no more guesses The correct guess was 48
A summary of what is happening in this sample is:
The
main() method of
NumberGuessExample.java loads a Rule Base, creates a
Stateful Session and inserts
Game,
GameRules and
RandomNumber (containing
the target number) objects into it. The method also sets the process flow we are going to use, and fires all
rules. Control passes to the Rule Flow.
File
NumberGuess.rf, the Rule Flow, begins at the "Start" node.
Control passes (via the "More guesses" join node) to the Guess node.
At the Guess node, the appropriate Rule Flow Group ("Get user Guess") is enabled. In this case the Rule
"Guess" (in the
NumberGuess.drl file) is triggered. This rule displays a message to the user,
takes the response, and puts it into Working Memory. Flow passes to the next Rule Flow Node.
At the next node, "Guess Correct", constraints inspect the current session and decide which path to take.
If the guess in step 4 was too high or too low, flow proceeds along a path which has an action node with normal Java code printing a suitable message and a Rule Flow Group causing a highest guess or lowest guess rule to be triggered. Flow passes from these nodes to step 6.
If the guess in step 4 was right, we proceed along the path towards the end of the Rule Flow. Before we get there, an action node with normal Java code prints a statement "you guessed correctly". There is a join node here (just before the Rule Flow end) so that our no-more-guesses path (step 7) can also terminate the Rule Flow.
Control passes as per the Rule Flow via a join node, a guess incorrect Rule Flow Group (triggering a rule to retract a guess from Working Memory) onto the "More guesses" decision node.
The "More guesses" decision node (on the right hand side of the rule flow) uses constraints, again looking at values that the rules have put into the working memory, to decide if we have more guesses and if so, goto step 3. If not, we proceed to the end of the rule flow, via a Rule Flow Group that triggers a rule stating "you have no more guesses".
The loop over steps 3 to 7 continues until the number is guessed correctly, or we run out of guesses.
Name: Conway's Game Of Life Main class: org.drools.examples.conway.ConwayAgendaGroupRun org.drools.examples.conway.ConwayRuleFlowGroupRun Module: droolsjbpm-integration-examples (Note: this is in a different download, the droolsjbpm-integration download.) Type: Java application Rules file: conway-ruleflow.drl conway-agendagroup.drl Objective: Demonstrates 'accumulate', 'collect' and 'from'
Conway's Game Of Life, described in's_Game_of_Life and in, is a famous cellular automaton conceived in the early 1970's by the mathematician John Conway. While the system is well known as "Conway's Game Of Life", it really isn't a game at all. Conway's system is more like a simulation of a form of life. Don't be intimidated. The system is terribly simple and terribly interesting. Math and Computer Science students alike have marvelled over Conway's system for more than 30 years now. The application presented here is a Swing-based implementation of Conway's Game of Life. The rules that govern the system are implemented as business rules using Drools. This document will explain the rules that drive the simulation and discuss the Drools parts of the implementation.
We'll first introduce the grid view, shown below, designed for the visualisation of the game, showing the "arena" where the life simulation takes place. Initially the grid is empty, meaning that there are no live cells in the system. Each cell is either alive or dead, with live cells showing a green ball. Preselected patterns of live cells can be chosen from the "Pattern" drop-down list. Alternatively, individual cells can be doubled-clicked to toggle them between live and dead. It's important to understand that each cell is related to its neighboring cells, which is fundamental for the game's rules. Neighbors include not only cells to the left, right, top and bottom but also cells that are connected diagonally, so that each cell has a total of 8 neighbors. Exceptions are the four corner cells which have only three neighbors, and the cells along the four border, with five neighbors each.
So what are the basic rules that govern this game? Its goal is to show the development of a population, generation by generation. Each generation results from the preceding one, based on the simultaneous evaluation of all cells. This is the simple set of rules that govern what the next generation will look like:
If a live cell has fewer than 2 live neighbors, it dies of loneliness.
If a live cell has more than 3 live neighbors, it dies from overcrowding.
If a dead cell has exactly 3 live neighbors, it comes to life.
That is all there is to it. Any cell that doesn't meet any of those criteria is left as is for the next generation. With those simple rules in mind, go back and play with the system a little bit more and step through some generations, one at a time, and notice these rules taking their effect.
The screenshot below shows an example generation, with a number of live cells. Don't worry about matching the exact patterns represented in the screen shot. Just get some groups of cells added to the grid. Once you have groups of live cells in the grid, or select a pre-designed pattern, click the "Next Generation" button and notice what happens. Some of the live cells are killed (the green ball disappears) and some dead cells come to life (a green ball appears). Step through several generations and see if you notice any patterns. If you click on the "Start" button, the system will evolve itself so you don't need to click the "Next Generation" button over and over. Play with the system a little and then come back here for more details of how the application works.
Now lets delve into the code. As this is an advanced example we'll
assume that by now you know your way around the Drools framework and are
able to connect the presented highlight, so that we'll just focus at a
high level overview. The example has two ways to execute, one way
uses Agenda Groups to manage execution flow, and the other one uses
Rule Flow Groups to manage execution flow. These two versions are
implemented in
ConwayAgendaGroupRun and
ConwayRuleFlowGroupRun, respectively. Here,
we'll discuss the Rule Flow version, as it's what most people will
use.
All the
Cell objects are inserted into the Session
and the rules in the
ruleflow-group "register neighbor" are
allowed to execute by the Rule Flow process. This group of four rules
creates
Neighbor relations between some cell and its
northeastern, northern, northwestern and western neighbors. This
relation is bidirectional, which takes care of the other four directions.
Border cells don't need any special treatment - they simply won't be
paired with neighboring cells where there isn't any. By
the time all activations have fired for these rules, all cells are related
to all their neighboring cells.
Example 7.73. Conway's Game of Life: Register Cell Neighbour relations
rule "register north east" ruleflow-group "register neighbor" when $cell: Cell( $row : row, $col : col ) $northEast : Cell( row == ($row - 1), col == ( $col + 1 ) ) then insert( new Neighbor( $cell, $northEast ) ); insert( new Neighbor( $northEast, $cell ) ); end rule "register north" ruleflow-group "register neighbor" when $cell: Cell( $row : row, $col : col ) $north : Cell( row == ($row - 1), col == $col ) then insert( new Neighbor( $cell, $north ) ); insert( new Neighbor( $north, $cell ) ); end rule "register north west" ruleflow-group "register neighbor" when $cell: Cell( $row : row, $col : col ) $northWest : Cell( row == ($row - 1), col == ( $col - 1 ) ) then insert( new Neighbor( $cell, $northWest ) ); insert( new Neighbor( $northWest, $cell ) ); end rule "register west" ruleflow-group "register neighbor" when $cell: Cell( $row : row, $col : col ) $west : Cell( row == $row, col == ( $col - 1 ) ) then insert( new Neighbor( $cell, $west ) ); insert( new Neighbor( $west, $cell ) ); end
Once all the cells are inserted, some Java code applies the pattern to the grid, setting certain cells to Live. Then, when the user clicks "Start" or "Next Generation", it executes the "Generation" ruleflow. This ruleflow is responsible for the management of all changes of cells in each generation cycle.
The rule flow process first enters the "evaluate" group, which means
that any active rule in the group can fire. The rules in this group apply
the Game-of-Life rules discussed in the beginning of the example,
determining the cells to be killed and the ones to be given life. We use
the "phase" attribute to drive the reasoning of the Cell by specific
groups of rules; typically the phase is tied to a Rule Flow Group in the
Rule Flow process definition. Notice that it doesn't actually change the
state of any
Cell objectss at this point; this is because
it's evaluating the grid in turn and it must complete the full evaluation
until those changes can be applied. To achieve this, it sets the cell to
a "phase" which is either
Phase.KILL or
Phase.BIRTH, used later to control actions applied
to the
Cell object.
Example 7.74. Conway's Game of Life: Evaluate Cells with state changes
rule "Kill The Lonely" ruleflow-group "evaluate" no-loop when // A live cell has fewer than 2 live neighbors theCell: Cell( liveNeighbors < 2, cellState == CellState.LIVE, phase == Phase.EVALUATE ) then modify( theCell ){ setPhase( Phase.KILL ); } end rule "Kill The Overcrowded" ruleflow-group "evaluate" no-loop when // A live cell has more than 3 live neighbors theCell: Cell( liveNeighbors > 3, cellState == CellState.LIVE, phase == Phase.EVALUATE ) then modify( theCell ){ setPhase( Phase.KILL ); } end rule "Give Birth" ruleflow-group "evaluate" no-loop when // A dead cell has 3 live neighbors theCell: Cell( liveNeighbors == 3, cellState == CellState.DEAD, phase == Phase.EVALUATE ) then modify( theCell ){ theCell.setPhase( Phase.BIRTH ); } end
Once all
Cell objects in the grid have been evaluated,
we first clear any calculation activations that occurred from any previous
data changes. This is done via the "reset calculate" rule, which clears
any activations in the "calculate" group. We then enter a split in the
rule flow which allows any activations in both the "kill" and the "birth"
group to fire. These rules are responsible for applying the state
change.
Example 7.75. Conway's Game of Life: Apply the state changes
rule "reset calculate" ruleflow-group "reset calculate" when then WorkingMemory wm = drools.getWorkingMemory(); wm.clearRuleFlowGroup( "calculate" ); end rule "kill" ruleflow-group "kill" no-loop when theCell: Cell( phase == Phase.KILL ) then modify( theCell ){ setCellState( CellState.DEAD ), setPhase( Phase.DONE ); } end rule "birth" ruleflow-group "birth" no-loop when theCell: Cell( phase == Phase.BIRTH ) then modify( theCell ){ setCellState( CellState.LIVE ), setPhase( Phase.DONE ); } end
At this stage, a number of
Cell objects have been
modified with the state changed to either
LIVE or
DEAD. Now we get to see the power of the
Neighbor facts defining the cell relations. When a cell
becomes live or dead, we use the
Neighbor relation to
iterate over all surrounding cells, increasing or decreasing the
liveNeighbor count. Any cell that has its count changed
is also set to to the
EVALUATE phase, to make sure
it is included in the reasoning during the evaluation stage of the
Rule Flow Process. Notice that we don't have to do any iteration
ourselves; simply by applying the relations in the rules we make
the rule engine do all the hard work for us, with a minimal amount of
code. Once the live count has been determined and set for all cells,
the Rule Flow Process comes to and end. If the user has initially
clicked the "Start" button, the engine will restart the rule flow;
otherwise the user may request another generation.
Example 7.76. Conway's Game of Life: Evaluate cells with state changes
rule "Calculate Live" ruleflow-group "calculate" lock-on-active when theCell: Cell( cellState == CellState.LIVE ) Neighbor( cell == theCell, $neighbor : neighbor ) then modify( $neighbor ){ setLiveNeighbors( $neighbor.getLiveNeighbors() + 1 ), setPhase( Phase.EVALUATE ); } end rule "Calculate Dead" ruleflow-group "calculate" lock-on-active when theCell: Cell( cellState == CellState.DEAD ) Neighbor( cell == theCell, $neighbor : neighbor ) then modify( $neighbor ){ setLiveNeighbors( $neighbor.getLiveNeighbors() - 1 ), setPhase( Phase.EVALUATE ); } end
A Conversion for the classic game Pong. Use the keys A, Z and K, M. The ball should get faster after each bounce.
Name: Example Pong Main class: org.drools.games.pong.PongMain
Based on the Adventure in Prolog, over at the Amzi website,, we started to work on a text adventure game for Drools. They are ideal as they can start off simple and build in complexity and size over time, they also demonstrate key aspects of declarative relational programming.
Name: Example Text Adventure Main class: org.drools.games.adventure.TextAdventure
You can view the 8 minute demonstration and introduction for the example at
Name: Example Wumpus World Main class: org.drools.games.wumpus.WumpusWorldMain
Wumpus World is an AI example covered in the book "Artificial Intelligence : A Modern Approach". When the game first starts all the cells are greyed out. As you walk around they become visible. The cave has pits, a wumpus and gold. When you are next to a pit you will feel a breeze, when you are next to the wumpus you will smell a stench and see glitter when next to gold. The sensor icons are shown above the move buttons. If you walk into a pit or the wumpus, you die. A more detailed overview of Wumpus World can be found at. A 20 minute video showing how the game is created and works is at.
Name: Miss Manners Main class: org.drools.benchmark.manners.MannersBenchmark Module: drools-examples Type: Java application Rules file: manners.drl Objective: Advanced walkthrough on the Manners benchmark, covers Depth conflict resolution in depth.
Miss Manners is throwing a party and, being a good host, she wants to arrange good seating. Her initial design arranges everyone in male-female pairs, but then she worries about people have things to talk about. What is a good host to do? She decides to note the hobby of each guest so she can then arrange guests not only pairing them according to alternating sex but also ensuring that a guest has someone with a common hobby, at least on one side.
Five benchmarks were established in the 1991 paper "Effects of Database Size on Rule System Performance: Five Case Studies" by David Brant, Timothy Grose, Bernie Lofaso and Daniel P. Miranker:
Manners uses a depth-first search approach to determine the seating arrangements alternating women and men and ensuring one common hobby for neighbors.
Waltz establishes a three-dimensional interpretation of a line drawing by line labeling by constraint propagation.
WaltzDB is a more general version of Waltz, supporting junctions of more than three lines and using a database.
ARP is a route planner for a robotic air vehicle using the A* search algorithm to achieve minimal cost.
Weaver VLSI router for channels and boxes using a black-board technique.
Manners has become the de facto rule engine benchmark. Its behavior, however, is now well known and many engines optimize for this, thus negating its usefulness as a benchmark which is why Waltz is becoming more favorable. These five benchmarks are also published at the University of Texas.
After the first seating arrangement has been assigned, a
depth-first recursion occurs which repeatedly assigns correct
seating arrangements until the last seat is assigned. Manners
uses a
Context instance to control execution flow.
The activity diagram is partitioned to show the relation of the
rule execution to the current
Context state.
Before going deeper into the rules, let's first take a look at the asserted data and the resulting seating arrangement. The data is a simple set of five guests who should be arranged so that sexes alternate and neighbors have a common hobby.
The Data
The data is given in OPS5 syntax, with a parenthesized list of name and value pairs for each attribute. Each person has only one hobby.
(guest (name n1) (sex m) (hobby h1) )
(guest (name n2) (sex f) (hobby h1) )
(guest (name n2) (sex f) (hobby h3) )
(guest (name n3) (sex m) (hobby h3) )
(guest (name n4) (sex m) (hobby h1) )
(guest (name n4) (sex f) (hobby h2) )
(guest (name n4) (sex f) (hobby h3) )
(guest (name n5) (sex f) (hobby h2) )
(guest (name n5) (sex f) (hobby h1) )
(last_seat (seat 5) )
The Results
Each line of the results list is printed per execution of the
"Assign Seat" rule. They key bit to notice is that each line has
a "pid" value one greater than the last. (The significance of this
will be explained in the discussion of the rule "Assign Seating".)
The "ls", "rs", "ln" and "rn" refer to the left and right
seat and neighbor's name, respectively. The actual implementation
uses longer attribute names (e.g.,
leftGuestName,
but here we'll stick to the notation from the original
implementation.
[Seating id=1, pid=0, done=true, ls=1, ln=n5, rs=1, rn=n5]
[Seating id=2, pid=1, done=false, ls=1, ln=n5, rs=2, rn=n4]
[Seating id=3, pid=2, done=false, ls=2, ln=n4, rs=3, rn=n3]
[Seating id=4, pid=3, done=false, ls=3, rn=n3, rs=4, rn=n2]
[Seating id=5, pid=4, done=false, ls=4, ln=n2, rs=5, rn=n1]
Manners has been designed to exercise cross product joins and Agenda activities. Many people not understanding this tweak the example to achieve better performance, making their port of the Manners benchmark pointless. Known cheats or porting errors for Miss Manners are:
Using arrays for a guests hobbies, instead of asserting each one as a single fact massively reduces the cross products.
Altering the sequence of data can also reduce the amount of matching, increasing execution speed.
It's possible to change the
not Conditional
Element so that the test algorithm only uses the
"first-best-match", which is, basically, transforming
the test algorithm to backward chaining. The results are only
comparable to other backward chaining rule engines or ports of
Manners.
Removing the context so the rule engine matches the guests and seats prematurely. A proper port will prevent facts from matching using the context start.
It's possible to prevent the rule engine from performing combinatorial pattern matching.
If no facts are retracted in the reasoning cycle, as a
result of the
not CE, the port is incorrect.
The Manners benchmark was written for OPS5 which has two conflict resolution strategies, LEX and MEA. LEX is a chain of several strategies including salience, recency and complexity. The recency part of the strategy drives the depth first (LIFO) firing order. The CLIPS manual documents the Recency strategy as follows:
However Jess and CLIPS both use the Depth strategy, which is simpler and lighter, which Drools also adopted. The CLIPS manual documents the Depth strategy as:
The initial Drools implementation for the Depth strategy would not work for Manners without the use of salience on the "make_path" rule. The CLIPS support team had this to say:
Investigation into the CLIPS code reveals there is undocumented functionality in the Depth strategy. There is an accumulated time tag used in this strategy; it's not an extensively fact by fact comparison as in the recency strategy, it simply adds the total of all the time tags for each activation and compares.
Once the context is changed to
START_UP,
activations are created for all asserted guest. Because all
activations are created as the result of a single Working Memory
action, they all have the same Activation time tag. The last
asserted
Guest object would have a higher fact
time tag, and its Activation would fire because it has the highest
accumulated fact time tag. The execution order in this rule has little
importance, but has a big impact in the rule "Assign Seat". The
activation fires and asserts the first
Seating
arrangement and a
Path, and then sets the
Context attribute
state to create
an activation for rule
findSeating.
rule assignFirstSeat when context : Context( state == Context.START_UP ) guest : Guest() count : Count() then String guestName = guest.getName(); Seating seating = new Seating( count.getValue(), 1, true, 1, guestName, 1, guestName); insert( seating ); Path path = new Path( count.getValue(), 1, guestName ); insert( path ); modify( count ) { setValue ( count.getValue() + 1 ) } System.out.println( "assign first seat : " + seating + " : " + path ); modify( context ) { setState( Context.ASSIGN_SEATS ) } end
This rule determines each of the
Seating
arrangements. The rule creates cross product solutions for
all asserted
Seating arrangements
against all the asserted guests except
against itself or any already assigned chosen solutions.
rule findSeating when context : Context( state == Context.ASSIGN_SEATS ) $s : Seating( pathDone == true ) $g1 : Guest( name == $s.rightGuestName ) $g2 : Guest( sex != $g1.sex, hobby == $g1.hobby ) count : Count() not ( Path( id == $s.id, guestName == $g2.name) ) not ( Chosen( id == $s.id, guestName == $g2.name, hobby == $g1.hobby) ) then int rightSeat = $s.getRightSeat(); int seatId = $s.getId(); int countValue = count.getValue(); Seating seating = new Seating( countValue, seatId, false, rightSeat, $s.getRightGuestName(), rightSeat + 1, $g2.getName() ); insert( seating ); Path path = new Path( countValue, rightSeat + 1, $g2.getName() ); insert( path ); Chosen chosen = new Chosen( seatId, $g2.getName(), $g1.getHobby() ); insert( chosen ); System.err.println( "find seating : " + seating + " : " + path + " : " + chosen); modify( count ) {setValue( countValue + 1 )} modify( context ) {setState( Context.MAKE_PATH )} end
However, as can be seen from the printed results shown earlier,
it is essential that only the
Seating with the highest
pid cross product be chosen. How can this be possible
if we have activations, of the same time tag, for nearly all
existing
Seating and
Guest objects? For
example, on the third iteration of
findSeating the
produced activations will be as shown below. Remember, this is from
a very small data set, and with larger data sets there would be many
more possible activated
Seating solutions, with multiple
solutions per
pid:
=>]
The creation of all these redundant activations might seem
pointless, but it must be remembered that Manners is not about good
rule design; it's purposefully designed as a bad ruleset to fully
stress-test the cross product matching process and the Agenda, which
this clearly does. Notice that each activation has the same time tag
of 35, as they were all activated by the change in the
Context object to
ASSIGN_SEATS. With OPS5
and LEX it would correctly fire the activation with the
Seating asserted last. With Depth, the accumulated fact
time tag ensures that the activation with the last asserted
Seating fires.
Rule
makePath must always fire before
pathDone. A
Path object is asserted for
each
Seating arrangement, up to the last asserted
Seating. Notice that the conditions in
pathDone are a subset of those in
makePath - so how do we ensure that
makePath
fires first?
rule makePath when Context( state == Context.MAKE_PATH ) Seating( seatingId:id, seatingPid:pid, pathDone == false ) Path( id == seatingPid, pathGuestName:guestName, pathSeat:seat ) not Path( id == seatingId, guestName == pathGuestName ) then insert( new Path( seatingId, pathSeat, pathGuestName ) ); end
rule pathDone when context : Context( state == Context.MAKE_PATH ) seating : Seating( pathDone == false ) then modify( seating ) {setPathDone( true )} modify( context ) {setState( Context.CHECK_DONE)} end
Both rules end up on the Agenda in conflict and with identical activation time tags. However, the accumulate fact time tag is greater for "Make Path" so it gets priority.
Rule
areWeDone only activates when the last seat
is assigned, at which point both rules will be activated. For the
same reason that
makePath always wins over
path Done,
areWeDone will take
priority over rule
continue.
rule areWeDone when context : Context( state == Context.CHECK_DONE ) LastSeat( lastSeat: seat ) Seating( rightSeat == lastSeat ) then modify( context ) {setState(Context.PRINT_RESULTS )} end
rule continue when context : Context( state == Context.CHECK_DONE ) then modify( context ) {setState( Context.ASSIGN_SEATS )} end
Assign First seat
=>[fid:13:13]:[Seating id=1, pid=0, done=true, ls=1, ln=n5, rs=1, rn=n5]
=>[fid:14:14]:[Path id=1, seat=1, guest=n5]
==>[ActivationCreated(16):]
==>[ActivationCreated(16): rule=findSeating
[fid:13:13]:[Seating id=1 , pid=0, done=true, ls=1, ln=n5, rs=1, rn=n5]
[fid:9:9]:[Guest name=n5, sex=f, hobbies=h1]
[fid:5:5]:[Guest name=n4, sex=m, hobbies=h1]*
Assign Seating
=>[fid:15:17] :[Seating id=2 , pid=1 , done=false, ls=1, lg=n5, rs=2, rn=n4]
=>[fid:16:18]:[Path id=2, seat=2, guest=n4]
=>[fid:17:19]:[Chosen id=1, name=n4, hobbies=h1]
=>[ActivationCreated(21): rule=makePath
[fid:15:17] : [Seating id=2, pid=1, done=false, ls=1, ln=n5, rs=2, rn=n4]
[fid:14:14] : [Path id=1, seat=1, guest=n5]*
==>[ActivationCreated(21): rule=pathDone
[Seating id=2, pid=1, done=false, ls=1, ln=n5, rs=2, rn=n4]*
Make Path
=>[fid:18:22:[Path id=2, seat=1, guest=n5]]
Path Done
Continue Process
=>[ActivationCreated(25): rule=findSeating
[fid:15:23]:[Seating id=2, pid=1, done=true, ls=1, ln=n5, rs=2, rn=n4]
[fid:7:7]:[Guest name=n4, sex=f, hobbies=h3]
[fid:4:4] : [Guest name=n3, sex=m, hobbies=h3]*
=>[ActivationCreated(25):], [fid:12:20] : [Count value=3]
=>[ActivationCreated(25)::19:26]:[Seating id=3, pid=2, done=false, ls=2, lnn4, rs=3, rn=n3]]
=>[fid:20:27]:[Path id=3, seat=3, guest=n3]]
=>[fid:21:28]:[Chosen id=2, name=n3, hobbies=h3}]
=>[ActivationCreated(30): rule=makePath
[fid:19:26]:[Seating id=3, pid=2, done=false, ls=2, ln=n4, rs=3, rn=n3]
[fid:18:22]:[Path id=2, seat=1, guest=n5]*
=>[ActivationCreated(30): rule=makePath
[fid:19:26]:[Seating id=3, pid=2, done=false, ls=2, ln=n4, rs=3, rn=n3]
[fid:16:18]:[Path id=2, seat=2, guest=n4]*
=>[ActivationCreated(30): rule=done
[fid:19:26]:[Seating id=3, pid=2, done=false, ls=2, ln=n4, rs=3, rn=n3]*
Make Path
=>[fid:22:31]:[Path id=3, seat=1, guest=n5]
Make Path
=>[fid:23:32] [Path id=3, seat=2, guest=n4]
Path Done
Continue Processing
=>], [fid:12:29]*
=>:24:36]:[Seating id=4, pid=3, done=false, ls=3, ln=n3, rs=4, rn=n2]]
=>[fid:25:37]:[Path id=4, seat=4, guest=n2]]
=>[fid:26:38]:[Chosen id=3, name=n2, hobbies=h3]
==>[ActivationCreated(40): rule=makePath
[fid:24:36]:[Seating id=4, pid=3, done=false, ls=3, ln=n3, rs=4, rn=n2]
[fid:23:32]:[Path id=3, seat=2, guest=n4]*
==>[ActivationCreated(40): rule=makePath
[fid:24:36]:[Seating id=4, pid=3, done=false, ls=3, ln=n3, rs=4, rn=n2]
[fid:20:27]:[Path id=3, seat=3, guest=n3]*
=>[ActivationCreated(40): rule=makePath
[fid:24:36]:[Seating id=4, pid=3, done=false, ls=3, ln=n3, rs=4, rn=n2]
[fid:22:31]:[Path id=3, seat=1, guest=n5]*
=>[ActivationCreated(40): rule=done
[fid:24:36]:[Seating id=4, pid=3, done=false, ls=3, ln=n3, rs=4, rn=n2]*
Make Path
=>fid:27:41:[Path id=4, seat=2, guest=n4]
Make Path
=>fid:28:42]:[Path id=4, seat=1, guest=n5]]
Make Path
=>fid:29:43]:[Path id=4, seat=3, guest=n3]]
Path Done
Continue Processing
=>[ActivationCreated(46):(46): rule=findSeating
[fid:24:44]:[Seating id=4, pid=3, done=true, ls=3, ln=n3, rs=4, rn=n2]
[fid:2:2]:[Guest name=n2, sex=f, hobbies=h1]
[fid:1:1]:[Guest name=n1, sex=m, hobbies=h1]*
=>[ActivationCreated(46)::30:47]:[Seating id=5, pid=4, done=false, ls=4, ln=n2, rs=5, rn=n1]
=>[fid:31:48]:[Path id=5, seat=5, guest=n1]
=>[fid:32:49]:[Chosen id=4, name=n1, hobbies=h1] | https://docs.drools.org/5.4.0.Final/drools-expert-docs/html/ch07.html | 2019-07-15T23:00:15 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.drools.org |
Microsoft 365 Enterprise E5
When you add a subscription through the admin center, the new subscription is associated with the same organization (domain namespace) as your existing subscription. This makes it easier to move users in your organization between subscriptions, or to assign them a license for the additional subscription they need.
Try or buy a Microsoft 365 subscription
1. Sign in to the admin center, and then go to Billing > Purchase services.
2. On the Purchase services page, the subscriptions that are available to your organization are listed. Choose the Microsoft 365 plan that you want to try or buy.
3. On the next page, choose Get free trial, which gives you 25 user licenses for a one-month term, or you can choose Buy.
Note: If you start a free trial, skip to step 8.
4. If you buy, enter the number of user licenses you need, choose whether to pay each month or for the whole year, and then choose Check out now.
5. Your cart opens. Review the pricing information and choose Next.
6. Provide your payment information, then choose Place order.
7. On the confirmation page, choose Go to admin home. You're all set!
8. Choose to receive a text or a call, enter your phone number, then choose Text me or Call me.
9. Enter the verification code, then choose Start your free trial.
10. On the Check out page, choose Try now.
11. On the order receipt page, choose Continue.
Not using preview yet?
If you have preview turned off, watch the following video to sign up for a trial Microsoft 365 subscription.
Next steps
After you get the new subscription, you have to assign a license to the users who will use that subscription. To learn how, see Assign licenses to users in Office 365 for business.
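The procedure above uses the admin center UI, which is the documented path. If you later want to script the license-assignment step instead, the same operation is available through the Microsoft Graph API. The sketch below is a minimal, hypothetical illustration in Python using the requests library; the token, user name, and skuId values are placeholders, and the token must carry the appropriate Graph permissions.

```python
# Hypothetical sketch: assigning a license via Microsoft Graph instead of the admin center UI.
# The token, user principal name, and skuId below are placeholders, not real values.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"  # obtained through your usual OAuth2 flow, with suitable Graph permissions
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# 1. List the SKUs your organization holds and note the skuId of the new subscription.
skus = requests.get(f"{GRAPH}/subscribedSkus", headers=headers).json()
for sku in skus.get("value", []):
    print(sku["skuPartNumber"], sku["skuId"])

# 2. Assign that SKU to a user (the user must already have a usageLocation set).
body = {
    "addLicenses": [{"skuId": "<skuId-from-step-1>", "disabledPlans": []}],
    "removeLicenses": [],
}
resp = requests.post(f"{GRAPH}/users/alice@contoso.example/assignLicense",
                     headers=headers, json=body)
resp.raise_for_status()
```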
vSphere 6.0.x includes the VMware Certificate Authority (VMCA). By default, VMCA generates all internal certificates used in the vSphere environment, including certificates for newly added ESXi hosts and storage VASA providers that manage or represent Virtual Volumes storage systems.
Communication with the VASA provider is protected by SSL certificates. These certificates can come from the VASA provider or from VMCA.
- Certificates can be directly provided by the VASA provider for long-term use, and can be either self-generated and self-signed, or derived from an external Certificate Authority.
- Certificates can be generated by VMCA for use by the VASA provider.
When a host or VASA provider is registered, VMCA follows these steps automatically, without involvement from the vSphere administrator.
- When a VASA provider is first added to the vCenter Server storage management service (SMS), it produces a self-signed certificate.
- After verifying the certificate, SMS requests a Certificate Signing Request (CSR) from the VASA provider.
- After receiving and validating the CSR, SMS presents it to VMCA on behalf of the VASA provider, requesting a CA signed certificate.
VMCA can be configured to function as a standalone CA, or as a subordinate to an enterprise CA. If you set up VMCA as a subordinate CA, VMCA signs the CSR with the full chain.
- The signed certificate along with the root certificate is passed to the VASA provider, so it can authenticate all future secure connections originating from SMS on vCenter Server and on ESXi hosts.
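SMS and VMCA perform this exchange automatically, so there is nothing for an administrator to script. Purely as a conceptual illustration of the certificate mechanics described above (a self-signed bootstrap certificate, a CSR, and a CA-signed certificate derived from it), the following Python sketch uses the cryptography package. It does not call any VMware API, and every name in it is invented.

```python
# Conceptual illustration only: the self-signed -> CSR -> CA-signed sequence described above,
# modeled with the 'cryptography' package. No VMware or vSphere APIs are involved.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

def name(cn):
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

now = datetime.datetime.utcnow()

# 1. The "VASA provider" starts with a key pair and a self-signed certificate.
provider_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
self_signed = (
    x509.CertificateBuilder()
    .subject_name(name("vasa-provider.example"))
    .issuer_name(name("vasa-provider.example"))
    .public_key(provider_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=30))
    .sign(provider_key, hashes.SHA256())
)

# 2. After the self-signed certificate is verified, the provider answers with a CSR.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(name("vasa-provider.example"))
    .sign(provider_key, hashes.SHA256())
)

# 3. A CA (standing in for VMCA) validates the CSR and issues a signed certificate.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
issued = (
    x509.CertificateBuilder()
    .subject_name(csr.subject)
    .issuer_name(name("toy-ca.example"))
    .public_key(csr.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())
)
print("issued:", issued.subject.rfc4514_string(), "by", issued.issuer.rfc4514_string())
```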
Converting legacy rendering
This page is only relevant to you if you upgraded from the old 'Jira Automation' add-on by Atlassian Labs to 'Automation for Jira' by Code Barrel. If you are a new user and never upgraded, then this documentation does not apply!
For more details on the different versions, please see our lite vs pro comparison.
Who this guide is for
The old 'Jira Automation' add-on used Velocity to render issue values in the 'Comment' and 'Edit issue' actions of an automation rule. So, for example, you could write a comment like
The issue $issue.key was just updated.
and this would get rendered as
The issue TEST-1234 was just updated.
The new 'Automation for Jira' add-on uses Mustache to render issue values. We chose Mustache mainly for simplicity, but also for security: Velocity allows far more access to Jira internals and could lead to insecure code execution.
However, to make the upgrade as simple as possible, we introduced legacy Velocity rendering into the new 'Automation for Jira', so that your existing rules will continue to work after the upgrade. The only restrictions are that, for security reasons, those rules cannot be converted to project-specific rules, and Velocity rendering won't work for imported rules in Cloud (you will have to convert imported rules to use smart-values). Fewer features are available in legacy mode, so you should upgrade your rules to Mustache as soon as possible.
Converting legacy values
Luckily only the 'Comment' and 'Edit issue' actions used this rendering mechanism. For 'Edit issue' this was only the case if the 'Allow variable expansion' checkbox was checked. When upgrading to the new Automation for Jira we detect which components are using legacy rendering. After the upgrade these components will show the following warning:
This means that currently this comment will be rendered using Velocity. Once the 'Enable smart-value rendering' button is clicked, this comment action will switch to Mustache rendering. This operation cannot be undone!
In general the conversion should be pretty straightforward. The old Velocity renderer only provided the following context objects:
- issue
- reporter
- project
- customfields
Here's a general guide for how to convert these:
In general, replace each Velocity reference of the form $object.property with the equivalent smart-value {{object.property}}. For example, $issue.key becomes {{issue.key}}, and reporter and project references follow the same pattern. Custom field values are typically accessed through the issue itself (for example {{issue.customfield_10100}}) rather than through a separate customfields object.
See our smart-value documentation for more detailed examples!
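If you want to sanity-check how a Mustache template will resolve before saving it in a rule, you can render it with any standard Mustache implementation. The snippet below is an illustrative sketch using the third-party chevron package for Python; the sample issue data is invented and is not how Automation for Jira supplies values internally.

```python
# Illustrative only: rendering a smart-value style Mustache template against sample data.
# Install the renderer first:  pip install chevron
import chevron

template = "The issue {{issue.key}} was just updated."
context = {"issue": {"key": "TEST-1234"}}  # made-up sample data

print(chevron.render(template, context))
# -> The issue TEST-1234 was just updated.
```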
So in our comment example shown in the screenshot above:
- You'd simply convert 'Hello $issue.key' to 'Hello {{issue.key}}'
- Then hit the 'Enable smart-value rendering' button
That's it. This is what the comment action would look like:
Once you've upgraded, you can't create new actions that use Velocity any longer. Velocity is dead - Mustache and smart-values are the way of the future.