Find your phone number
Perform one of the following actions:
- To view your active phone number, from the Home screen, press the Send key. Your active phone number appears beside the My Number field at the top of the screen.
- If you have multiple phone numbers associated with your BlackBerry® device, to view a list of your phone numbers, from the Home screen, press the Send key. Click the My Number field at the top of the screen. If your wireless service plan supports SMS text and MMS messaging, the first phone number in the list is the phone number that you use to send and receive SMS text and MMS messages.
Extensions and GPL
- What GPL version is Joomla! licensed under? Joomla! is licensed under GPL version 2 or later.
Revision history of "JCacheStorageCachelite::_construct/11.1": the page was moved to API17:JCacheStorageCachelite::_construct without leaving a redirect (Robot: Moved page).
Cloudera Manager Failover Protection
A CDH cluster managed by Cloudera Manager can have only one instance of Cloudera Manager active at a time. A Cloudera Manager instance is backed by a single database instance that stores configurations and other operational data. If two instances of Cloudera Manager are active at the same time and attempt to access the same database, data corruption can result, making Cloudera Manager unable to manage the cluster.
If a second Cloudera Manager instance is started against the same database, its log contains entries similar to the following:
2016-02-17 09:47:27,915 WARN main:com.cloudera.server.cmf.components.ScmActive: ScmActive detected spurious CM : hostname=sysadmin-scm-2.mycompany.com/172.28.197.136, bootup true
2016-02-17 09:47:27,916 WARN main:com.cloudera.server.cmf.components.ScmActive: ScmActive: The database is owned by sysadmin-scm-1.mycompany.com/172.28.197.242
2016-02-17 09:47:27,917 ERROR main:com.cloudera.server.cmf.bootstrap.EntityManagerFactoryBean: ScmActive at bootup: The configured database is being used by another instance of Cloudera Manager.
2016-02-17 09:47:27,919 ERROR main:com.cloudera.server.cmf.Main: Server failed.
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.cloudera.server.cmf.TrialState': Cannot resolve reference to bean 'entityManagerFactoryBean' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactoryBean': FactoryBean threw exception on object creation; nested exception is java.lang.RuntimeException: ScmActive at bootup: Failed to validate the identity of Cloudera Manager.
When a Cloudera Manager instance fails or becomes unavailable and remains offline for more than 30 seconds, any new instance that is deployed claims ownership of the database and continues to manage the cluster normally.
Disabling Automatic Failover Protection
- On the host where Cloudera Manager server is running, open the following file in a text editor:
/etc/default/cloudera-scm-server
- Add the following property (separate each property with a space) to the line that begins with export CMF_JAVA_OPTS:
-Dcom.cloudera.server.cmf.components.scmActive.killOnError=false
For example:
export CMF_JAVA_OPTS="-Xmx2G -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -Dcom.cloudera.server.cmf.components.scmActive.killOnError=false"
- Restart the Cloudera Manager server by running the following command on the Cloudera Manager server host:
sudo service cloudera-scm-server restart
You can synchronize with any of the supported issue tracking platforms.
You can integrate Jira Cloud issues with the following platforms:
Install the add-on from the marketplace and accept the registration form.
Check Licensing and pricing for more details.
Set up a Connection
Synchronization between two instances requires a configured Connection.
One side needs to initiate the connection and send an invitation to the partner (the Destination instance).
The other side, depending on the Exalate version, needs to finish the configuration from their side.
You can synchronize with different issue tracking platforms. For example, you can sync between your JIRA Cloud and JIRA Server, even if your JIRA server is not accessible from the outside network.
You can also sync local projects within the same Jira instance. For more details check typical use cases.
To start synchronization with your partner - Initiate Connection.
If you have an Invitation code - Accept an Invitation.
Go to an issue and use the Exalate button to start synchronization.
Find out more ways to synchronize issues automatically.
Configure the synchronization behavior of your use case with the help of Jira Cloud configuration guides.
Using Printers & Other Devices
- Setting up wired Ethernet for Internet access
- Can I print without showing the iOS print dialog?
- Can I use a print button within my page powered by JavaScript window.print()?
- Print only part of the webpage?
- Which thermal printers are supported?
- Using Kiosk Pro with an external screen
- Accepting data from magnetic stripe card readers
- Using unencrypted card readers
- Accessing the iPad's cameras within Kiosk Pro
- Using the native camera as a proximity or motion sensor
- Scanning QR codes and/or UPC barcodes
- Can I remove Previous/Next buttons when using a Bluetooth keyboard or scanner?
Managed metadata input file format (SharePoint Server 2010)
Applies to: SharePoint Server 2010
Managed metadata in SharePoint Server 2010 is imported from a comma-separated values (csv) file. Each file must contain one term set, and the terms within the term set may be nested as many as seven levels deep.
Even if your organization does not have data to import, you could consider creating your taxonomy outside the Term Store Management tool, and then importing the taxonomy. The Term Store Management Tool provides a convenient, simple way to create term sets and manage terms, but using it to create many term sets might take longer than importing the term sets. The Term Store Management Tool is convenient to use for the day-to-day management of term sets after the term sets have been created.
In this article:
Format of the import file
View a sample managed metadata import file
Import managed metadata
Before reading this article, you should understand the concepts described in the Managed metadata overview (SharePoint Server 2010) topic.
Format of the managed metadata import file
The managed metadata import file is a comma-separated values (.csv) file that contains a header row and additional rows that define the term set and the terms in the term set.
The first line of the file must contain 12 items separated by commas. Consider these items to be column headings (such as in a table) for the values that you will provide in the next lines. It is good practice to enclose each value in quotation marks ("").
The following line is the first line from the sample managed metadata import file. For more information about how to view the sample managed metadata import file, see View a sample managed metadata import file.
"Term Set Name","Term Set Description","LCID","Available for Tagging","Term Description","Level 1 Term","Level 2 Term","Level 3 Term","Level 4 Term","Level 5 Term","Level 6 Term","Level 7 Term"
The second line of the managed metadata import file represents the term set, and should contain the following information, in the order specified:
- The name of the term set
Note
It is recommended that the name of the term set, the name of a term, and all descriptions be enclosed in quotation marks (""). Quotation marks are optional, unless the value itself contains a comma. It is safer always to use the quotation marks.
- A comma (,)
- Optionally, a description of the term set
- Two commas (,,)
- The word TRUE or the word FALSE that indicates whether users should be able to add the terms in this term set to Microsoft SharePoint Server items. If you do not provide a value, the term set is available for tagging.
- Eight commas (,,,,,,,,)
The third line of the managed metadata import file and each successive line represents a term. Separate the values in the third line and in each successive line with commas. You may omit an optional value, but do not omit the accompanying comma, because commas are required as separators regardless of whether values are present.
The values in a line represent the following information, and must be provided in the order in which they are listed:
- Term set name: Leave this value blank.
- Term set description: Leave this value blank.
- Locale identifier: The decimal value that identifies the locale that corresponds to the language of the term. If you do not provide a value, the default locale of the term store to which you import this managed metadata is used.
- Available for tagging: This value determines whether users should be able to add this term to a SharePoint Server item. Use the word TRUE to let users use the term; use the word FALSE to forbid users to use the term. If you do not provide a value, users can use the term.
- Term description: A description of the term. This value is optional.
- Level 1 term -- level 7 term: If the term set is organized as a hierarchy, a level 1 term is a term at the top of the hierarchy, a level 2 term is lower than a level 1 term, and so on. You must provide a value for all levels down to the level of the term that you are representing. This is best illustrated by the example that follows this list.
In this example, you want to import a term set that represents all the office locations of your organization. The term set will be organized hierarchically. The following list is a fragment of the term set:
Sites (term set)
  North America
    Washington
      Seattle
      Redmond
      Tacoma
    Massachusetts
      Boston
      Cambridge
“North America” is a level 1 term. “Washington” and “Massachusetts” are level 2 terms. “Redmond”, “Seattle”, “Tacoma”, “Boston”, and “Cambridge” are level 3 terms. To import this term set, you would use a file that contained the following lines:
"Term Set Name","Term Set Description","LCID","Available for Tagging","Term Description","Level 1 Term","Level 2 Term","Level 3 Term","Level 4 Term","Level 5 Term","Level 6 Term","Level 7 Term"
"Sites","Locations where the organization has offices",,TRUE,,,,,,,,
,,1033,TRUE,,"North America",,,,,,
,,1033,TRUE,,"North America","Massachusetts",,,,,
,,1033,TRUE,,"North America","Massachusetts","Boston",,,,
,,1033,TRUE,,"North America","Massachusetts","Cambridge",,,,
Note
It is possible to combine the line that defines the term set (line 2) and the first line that defines a term (line 3). This is done in the sample import file.
You cannot represent synonyms or translations of terms by using a managed metadata import file. To create synonyms or translations you must either use the Term Store Management Tool or write a custom program to import and add the synonyms or translations.
See Wictor Wilén's blog for information about a tool to import term sets that was developed by a member of the SharePoint community.
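Because the import format is plain CSV, an import file can also be generated from an existing hierarchy rather than typed by hand. The following sketch is only an illustration and is not part of the SharePoint documentation; the term set contents, the file name, and the use of Python's csv module are assumptions. It writes the 12-column layout described above, one line per term, padding the hierarchy out to seven levels.

import csv

HEADER = ["Term Set Name", "Term Set Description", "LCID", "Available for Tagging",
          "Term Description", "Level 1 Term", "Level 2 Term", "Level 3 Term",
          "Level 4 Term", "Level 5 Term", "Level 6 Term", "Level 7 Term"]

# Hypothetical fragment of the "Sites" term set; each entry lists the levels of one term.
terms = [
    ("North America",),
    ("North America", "Washington"),
    ("North America", "Washington", "Seattle"),
    ("North America", "Massachusetts"),
    ("North America", "Massachusetts", "Boston"),
]

with open("sites.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow(HEADER)
    # Second line: the term set itself (name, description, available for tagging).
    writer.writerow(["Sites", "Locations where the organization has offices", "", "TRUE",
                     "", "", "", "", "", "", "", ""])
    # One line per term: blank set name and description, locale ID, tagging flag,
    # blank term description, then the term hierarchy padded to seven level columns.
    for path in terms:
        levels = list(path) + [""] * (7 - len(path))
        writer.writerow(["", "", "1033", "TRUE", ""] + levels)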
View a sample managed metadata import file
You can view a sample import file by clicking View a sample import file from the properties pane of a managed metadata service in the Term Store Management Tool. The simplest way to create an import file for your term set is to use the sample import file as a template. Save the sample import file; delete everything except the first row. Then add more rows to represent your term set and terms.
Import managed metadata
For instructions about how to import metadata, see Office.com.
See Also
Concepts
Managed metadata overview (SharePoint Server 2010)
Plan to import managed metadata (SharePoint Server 2010)
class Gem::Resolver::VendorSpecification
A VendorSpecification represents a gem that has been unpacked into a project and is being loaded through a gem dependencies file through the path: option.
Public Instance Methods
install(options = {}) { |nil| ... }
This is a null install as this gem was unpacked into a directory.
options are ignored.
# File lib/rubygems/resolver/vendor_specification.rb, line 20
def install(options = {})
  yield nil
end
How to cancel my subscription
Thanks for choosing our products! As we mentioned, an automated subscription is not required; you can easily cancel your subscription and just subscribe again when you want. Keep in mind that without an active license you won't be able to get updates and support.
FastSpring gateway (payment available starting with July 2017)
For payments made with the FastSpring method, click on the following link and fill in your email address.
After you press the "Continue" button, you'll receive an email with a link from where you can Cancel your subscription.
PayPal gateway (payment available until July 2017) (If you used PayPal but paid through FastSpring, read the first section)
- Click on the Profile icon which is next to "Log Out" and select Profile and settings.
- Select My money.
- In the My preapproved payments section, click Update.
- Search for Vertigo Studio SRL (our company name) and click Cancel.
- Click Cancel Profile to confirm your request.
2checkout gateway (payment available until July 2017)
If you have used your credit card for payments, then please go to the 2Checkout order lookup page and follow the next steps:
1. Use your 2CO Order Number/PayPal Invoice ID, or the First 6 Digits of Card Charged and Last 2 Digits of Card Charged, then input your email and click on Find my Order.
2. Now find "I'd like to..." and click on "Stop my recurring membership or subscription," then select the desired subscription.
3. Click on "Stop Selected"
4. Your subscription is now canceled.
If your query was not resolved yet, we can still help you. Please contact us.
AccountingIntegrator 2.3.0 Installation Guide
Prerequisites
This chapter describes the system requirements for AccountingIntegrator:
- Software prerequisites
- Platforms
- Software and license keys
- Set environment variables
- Additional prerequisites
Use custom metric indexes in Splunk App for Infrastructure
You can create custom indexes to store metrics data in the Splunk App for Infrastructure (SAI). For more information about creating custom indexes, see Create custom indexes.
The default index for metrics data in Splunk App for Infrastructure is em_metrics.
About the em_metrics source type
The em_metrics sourcetype is specifically for use with SAI, collectd, and the write_splunk plugin for collectd. This sourcetype performs important data transforms before indexing that are not available in the standard collectd sourcetype. Use the sourcetype in any custom metrics index that you create.
Use a custom metrics index in SAI
Include a custom metrics index in the metrics index macro so you can monitor hosts in your infrastructure that send data to the custom index. You can also add multiple metrics indexes.
- Go to Settings > Advanced search and select Search macros.
- For App, select Splunk App for Infrastructure (splunk_app_infrastructure).
- Select the sai_metrics_indexes macro.
- For the Definition, include the custom index you want to use. If you use multiple metrics indexes, add each one like this:
index = linux_metrics OR index = windows_metrics
- When you're done, save the macro.
- Go to Settings > Data inputs and select HTTP Event Collector.
- For the HEC token you use to collect metrics, update the allowed indexes list and specify a new Default Index.
- When you're done, save the configuration.
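After saving the token configuration, you can send a test data point through HEC to confirm that metrics reach the custom index. This is only a sketch: the host, port, token, index name, and metric name below are placeholders for your own environment, and it assumes the token accepts metrics events for the index you added to the macro.

import requests

HEC_URL = "https://splunk.example.com:8088/services/collector"   # placeholder host and port
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                # placeholder HEC token

payload = {
    "event": "metric",
    "host": "test-host-01",
    "source": "hec-test",
    "sourcetype": "em_metrics",        # the sourcetype recommended above
    "index": "linux_metrics",          # the custom metrics index added to sai_metrics_indexes
    "fields": {
        "metric_name": "cpu.user",     # single-metric form: metric_name plus _value
        "_value": 12.5,
        "os": "linux",
    },
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": "Splunk " + HEC_TOKEN},
    json=payload,
    verify=False,    # often needed on test instances with self-signed certificates
    timeout=10,
)
print(resp.status_code, resp.text)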
This documentation applies to the following versions of Splunk® App for Infrastructure: 1.3.0, 1.3.1, 1.4.0, 1.4.1, 2.0.0
DROP TABLE Statement
Removes an Impala table. Also removes the underlying HDFS data files for internal tables, although not for external tables.
Syntax:
DROP TABLE [IF EXISTS] [db_name.]table_name [PURGE]
IF EXISTS clause: The optional IF EXISTS clause makes the statement succeed whether or not the table exists. If the table does exist, it is dropped; if it does not exist, the statement has no effect.
PURGE clause:
The optional PURGE keyword, available in CDH 5.5 / Impala 2.3 and higher, causes Impala to remove the associated HDFS data files immediately, rather than going through the HDFS trashcan mechanism. Use this keyword when dropping a table if it is crucial to remove the data as quickly as possible to free up space, or if there is a problem with the trashcan, such as the trashcan not being configured or being in a different HDFS encryption zone than the data files.
If you intend to issue a DROP DATABASE statement, first issue DROP TABLE statements to remove all the tables in that database.
Examples:
create database temporary;
use temporary;
create table unimportant (x int);
create table trivial (s string);
-- Drop a table in the current database.
drop table unimportant;
-- Switch to a different database.
use default;
-- To drop a table in a different database...
drop table trivial;
ERROR: AnalysisException: Table does not exist: default.trivial
-- ...use a fully qualified name.
drop table temporary.trivial;
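The same statements can be issued from a client program as well. The sketch below uses the impyla package purely as an illustration; the coordinator host name is a placeholder, and any Impala-compatible client (JDBC, ODBC, impala-shell) would work the same way.

from impala.dbapi import connect

# Placeholder coordinator host; 21050 is the usual HiveServer2-protocol port for Impala.
conn = connect(host="impalad.example.com", port=21050)
cur = conn.cursor()

# IF EXISTS keeps the script idempotent; PURGE skips the HDFS trashcan as described above.
cur.execute("DROP TABLE IF EXISTS temporary.trivial PURGE")

cur.execute("SHOW TABLES IN temporary")
for (table_name,) in cur.fetchall():
    print(table_name)

cur.close()
conn.close()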
For other tips about managing and reclaiming Impala disk space, see Managing Disk Space for Impala Data.
Amazon S3 considerations:
In CDH 5.8 / Impala 2.6 and higher, Impala DDL statements such as CREATE DATABASE, CREATE TABLE, DROP DATABASE CASCADE, DROP TABLE, and ALTER TABLE [ADD|DROP] PARTITION can create or remove folders as needed in the Amazon S3 system. Prior to CDH 5.8 / Impala 2.6, you had to create folders yourself and point Impala database, tables, or partitions at them, and manually remove folders when no longer needed. See Using Impala with the Amazon S3 Filesystem for details about reading and writing S3 data with Impala.
FashionTIY ("we", "us" or "our") values your privacy and takes steps to protect the privacy and security of information that you provide to us that either identifies you or could, when combined with other information, be used to identify you ("Personal Information"). FashionTIY complies with all applicable laws regarding your privacy. This Privacy Policy explains how and why we collect and use the Personal Information you provide to FashionTIY.
By accessing or otherwise using FashionTIY, you agree to the terms and conditions of this Privacy Policy.
Personal Information that You Provide: In general, we collect Personal Information that you submit to us voluntarily through FashionTIY. We also collect information that you submit in the process of creating or editing your account and user profile on FashionTIY. For example, our registration and login process requires you to provide us with your name, valid email address and password of your choice. When you personalize your profile and use the features of FashionTIY, we will collect any information you voluntarily provide, and we may also request optional information to support your use of FashionTIY, such as your year of birth, gender, and other demographic information. We collect information in the form of the content that you submit during your use of FashionTIY, such as photos, ratings and other information you choose to submit. We may also collect information about you and your friends who use FashionTIY, from any social network you may have connected from, in order to provide you with a more personalized experience. When you purchase items, you will need to submit your credit card or other payment information, which we may collect and store, so that we can process your payment for the items. When you sign up for FashionTIY or engage in other activities that we make available on FashionTIY, we will collect the information designated along with such activity, which may include your contact information such as your address and phone number. We will also collect transactional information based on your activity on FashionTIY, such as buying, billing, and any other information you provide to purchase or ship items. If you choose to sign up to receive information about products or services that may be of interest to you, we will collect your email address and other related information.
FashionTIY may require you to submit additional personal information to authenticate yourself if we believe you are violating FashionTIY policies, such as identification or a utility bill to verify your address. If you do not wish to share such information with us, you will be unable to complete any further transactions on FashionTIY.
Personal Information from Other Sources: We may receive Personal Information about you from other sources with which you have registered, companies who we have partnered with (collectively, "Partners") or other third parties. We may associate this information with the other Personal Information we have collected about you.
E-mail and E-mail Addresses: If you send an e-mail to us, or fill out our "Feedback" form through FashionTIY, we will collect your e-mail address and the full content of your e-mail, including attached files, and other information you provide. We may use and display your full name and email address when you send an email notification to a friend through FashionTIY or the social network from which you have connected to FashionTIY (such as in an invitation, or when sharing your content). Additionally, we use your email address to contact you on behalf of your friends (such as when someone sends you a personal message) or to deliver notifications from a social network or other websites. When you use FashionTIY, certain information may also be passively collected and stored on our or our service providers' server logs, including your Internet protocol address, browser type, type of device, operating system, and the address of the referring website. We may also place small data files on your computer or other devices. These data files may be cookies, pixel tags, Flash Cookies, Silverlight Cookies, or other local storage provided by your browser or associated applications ("Cookies"). We use these technologies to recognize you as a customer and to customize FashionTIY. To use FashionTIY, your web browser must accept Cookies. If you choose to disable Cookies, some aspects of FashionTIY may not work properly, and you may not be able to receive our Services. This type of information is collected to make FashionTIY and our solutions more useful to you and to tailor the experience with FashionTIY to meet your special interests and needs.
We engage in remarketing to market our sites across the web. When a user visits our site, a cookie is placed on the user's computer. This cookie may be used by FashionTIY to assist with ad targeting.
Except as otherwise stated in this Privacy Policy, we do not sell, trade, or rent the Personal Information that we collect to third parties for marketing purposes unless you ask or authorize us to do so.
In general, Personal Information you submit to us is used by us to provide you access to FashionTIY, to improve FashionTIY, and to operate and support FashionTIY, along with those companies and persons you have asked us to share your information with.
We may provide your Personal Information to third-party service providers who work on behalf of or with us to provide some of the services and features of FashionTIY. If you do not agree with this Privacy Policy, do not use FashionTIY. We share information only with companies that have also committed to compliance with the GDPR and bind each of those companies via a Data Privacy Agreement to provide you with substantially equivalent privacy and security protections as those that FashionTIY provides to you. FashionTIY may be liable for any onward transfers of your Personal Information in violation of our commitments to you pursuant to this Privacy Policy, the Privacy Shield Frameworks or GDPR.
Additionally, we may create Anonymous Information records from Personal Information by excluding information (such as your name) that would otherwise make the Anonymous Information personally identifiable to you. Generally, we aggregate this information and use it in statistical analysis to help us analyze patterns in the use of FashionTIY. We may also share your Personal Information with FashionTIY affiliated companies that are under a common control, in which case we will require them, via a written contract, to honor this Privacy Policy.
By using FashionTIY and providing us with your Personal Information, you understand and agree that we may disclose your Personal Information if we believe in good faith that such disclosure is necessary to: (i) satisfy any applicable law, regulation, legal process or governmental request, (ii) enforce the FashionTIY Terms, including investigation of potential violations hereof, (iii) detect, prevent, or otherwise address fraud, security or technical issues, (iv) respond to user support requests, or (v) protect the rights, property or safety of FashionTIY, its users, and the public. You hereby consent to us sharing your Personal Information under the circumstances described herein.
FashionTIY may also be required to disclose an individual’s Personal Information in response to a lawful request by government officials, including to satisfy law enforcement or national security requirements.
Third Party Sites and Advertising: FashionTIY may contain links to third-party websites. This Privacy Policy applies only to FashionTIY and does not apply to these third-party websites. The ability to access information of third parties from FashionTIY does not mean that we endorse or are responsible for those third parties or their privacy practices.
Despite your indicated email marketing preferences, we may send you administrative emails regarding FashionTIY, including, for example, administrative and transactional confirmations, and notices of updates to our Privacy Policy. You may ask us to stop processing your information or to delete your information by contacting us. We will delete that information within a commercially reasonable period of time unless prevented from doing so by applicable law.
FashionTIY complies with all applicable privacy laws and regulations that apply to residents of the EU. Residents of the European Union ("EU") may only use our Services after providing FashionTIY with freely given, informed consent for FashionTIY to collect, transfer, store, and share your Personal Data, as that term is defined in the EU's General Data Protection Regulation ("GDPR").
FashionTIY complies with the EU GDPR and makes it easy for EU residents to exercise their rights described in that regulation. The purposes for which FashionTIY collects your Personal Information, which is defined in the GDPR as Personal Data, the categories and specific types of Personal Data we collect and our practices and policies regarding your Personal Data are described in this Privacy Policy. As discussed throughout this Privacy Policy, FashionTIY makes it easy for you to access, correct, delete, or demand deletion of your Personal Data. You may object to our processing of your Personal data by emailing us, although if you prohibit our processing, it may make some of our Services either impossible to offer or less useful. Any of those requests should be sent to [email protected]. Should you ever wish to leave FashionTIY and take an electronic copy of the Personal Data and information we have collected about you, you may make that request at [email protected].
FashionTIY complies with the Privacy Shield Principles. In the event of any conflict between the terms of this Privacy Policy and the Privacy Shield Principles, the Privacy Shield Principles shall govern.
With respect to personal data received or transferred pursuant to the Privacy Shield Frameworks, FashionTIY is subject to the regulatory enforcement powers of the U.S. Federal Trade Commission. Individuals with questions or complaints about our handling of their Personal Information under Privacy Shield should direct their query to [email protected]. We will offer you the opportunity to choose (opt out) before we share your Personal Information with third parties other than our agents, or before we use it for a purpose other than that for which it was originally collected or subsequently authorized. To limit the use and disclosure of your personal information, please submit a written request to [email protected].
FashionTIY’s accountability for Personal Information that it receives under the Privacy Shield and subsequently transfers to a third party is described in the Privacy Shield Principles. In particular, FashionTIY remains responsible and liable under the Privacy Shield Principles if third-party agents that it engages to process the personal data on its behalf do so in a manner inconsistent with the Principles unless FashionTIY proves that it is not responsible for the event giving rise to the damage.
In compliance with the Privacy Shield Principles, FashionTIY commits to resolve complaints about your privacy and our collection or use of your personal information transferred to the United States pursuant to Privacy Shield.
If your Privacy Shield complaint cannot be resolved through the above channels, under certain conditions, you may invoke binding arbitration for some residual claims not resolved by other redress mechanisms.
No matter where they live, children under the age of 13 are not permitted to use FashionTIY.
Children who reside in some countries in the EU may be prohibited by law from using services such as FashionTIY until they are older than 13 years of age. In some countries the age at which you may be able to use our service may be 14, 15 or 16 years of age. By using FashionTIY, you are either (i) representing that you are at least 18 years old; (ii) that you are at least 13 years old and have a parent's or guardian's permission to use FashionTIY; or (iii) if you reside in an EU country with an older age of consent, that you are at least the appropriate number of years old and have a parent's or guardian's permission to use FashionTIY.
If you have questions or concerns about our Privacy Policy, please contact us. If you are a resident of the EU and wish to exercise your rights pursuant to GDPR, please contact us.
This Privacy Policy is subject to occasional revision at our discretion, and if we make any substantial changes in the way we use your Personal Information, we will post an alert on this page and send an email to registered users. If you object to any such changes, you must cease using FashionTIY. Continued use of FashionTIY following notice of any such changes shall indicate your acknowledgment of such changes and agreement to be bound by the terms and conditions of such changes.
Business Results reports
This page describes reports you can use to learn more about the business outcomes resulting from activity in your contact center. The reports in the Business Results folder are ready-to-use, but as always, can be modified to suit your specific business needs.
About Business Results reports
The following reports are available in the CX Insights > Business Results folder:
Related Topics:
- Go back to the complete list of available reports.
- Learn how to generate historical reports.
- Learn how to read and understand reports.
- Learn how to create or customize reports.
Live Assist for Dynamics 365 for Customer Engagement powered by Café X
Live Assist for Microsoft Dynamics 365 Powered by CaféX is a fully integrated omnichannel solution. With Live Assist for Microsoft Dynamics 365 Powered by CaféX, create more personalized, intelligent experiences within websites and apps using chat and co-browse. Features include:
Agents interact with customers within Dynamics 365 for Customer Engagement: Full integration with Dynamics 365 for Customer Engagement and Unified Service Desk for Dynamics 365 means agents interact with their customers via chat without leaving the Customer Engagement application.
Live omnichannel: Customers connect with your agents across mobile and web.
Proactive customer engagement with chat: Customer assistance via chat that provides contextual customer information to agents including past history, preferences, and purchases.
Faster problem solving with co-browse: View your customer’s app or browser with sensitive data hidden.
Important
- This feature is currently available in North America (NAM), Canada (CAN), and Europe, Middle East, Africa (EMEA) regions.
Next steps
View the resources available on liveassistfor365.com for more information.
List Payment Refunds API
Refunds API v2
GET https://api.mollie.com/v2/refunds
GET https://api.mollie.com/v2/payments/*paymentId*/refunds
Authentication: API keys, Organization access tokens, App access tokens
Retrieve refunds.
- If the payment-specific endpoint is used, only refunds for that specific payment are returned.
- When using the top level endpoint v2/refunds with an API key, only refunds for the corresponding website profile and mode are returned.
- When using the top level endpoint with OAuth, you can specify the profile and mode with the profileId and testmode parameters respectively. If you omit profileId, you will get all refunds for the organization.
The results are paginated. See pagination for more information.
Parameters
When using the payment-specific endpoint, replace paymentId in the endpoint URL by the payment's ID, for example tr_7UhSN1zuXS.
Access token parameters
If you are using organization access tokens or are creating an OAuth app, the following query string parameters are also available. With the profileId parameter, you can specify which profile you want to look at when listing refunds. If you omit the profileId parameter, you will get all refunds on the organization.
After choosing "Sign up now" to begin the account registration process, some users are reporting an issue where the "Create User" button is unresponsive, though all required information has been provided.
This may be related to our verification process. Before a user can been created you will need to verify your email address by sending yourself a verification code. Below are the steps to follow.
Step 1 - Verify your email
Send verification code
To verify your email first enter your email address into the required Email textbox. Next click the "Send verification code" button located below the Email textbox.
Verification Code
You should receive an email in your inbox which contains a unique code which is needed for the next step.
It may take a few minutes for this email to arrive. If you haven't received the email and would like to request a new one you can select the "Send new code" button that is now visible next to the "Verify code" button.
Enter verification code
Next, enter the code provided in the email you received into the "Verification code" textbox and press "Verify code".
Step 2 - Password and Display Name
After successfully verifying your email your last step is to provide a password and enter your display name. Once both are provided click "Create User" to enter our Account Registration process.
Is Cloud still not working?
If Cloud is still having issues please contact us through the help button in the bottom right hand corner of this screen and one of our team members will assist you.
SSL
There are advanced policy expressions to parse SSL certificates and SSL client hello messages.
Parse SSL certificates
You can use advanced policy expressions to evaluate X.509 Secure Sockets Layer (SSL) client certificates. A client certificate is an electronic document that can be used to authenticate a user’s identity. A client certificate contains (at a minimum) version information, a serial number, a signature algorithm ID, an issuer name, a validity period, a subject (user) name, a public key, and signatures.
You can examine both SSL connections and data in client certificates. For example, you may want to send SSL requests that use low-strength ciphers to a particular load balancing virtual server farm. The following command is an example of a Content Switching policy that parses the cipher strength in a request and matches cipher strengths that are less than or equal to 40:
add cs policy p1 -rule "client.ssl.cipher_bits.le(40)"
As another example, you can configure a policy that determines whether a request contains a client certificate:
add cs policy p2 -rule "client.ssl.client_cert exists"
Or, you can configure a policy that examines particular information in a client certificate. For example, the following policy verifies that the certificate has one or more days before expiration:
add cs policy p2 -rule "client.ssl.client_cert exists && client.ssl.client_cert.days_to_expire.ge(1)"
Note
For information on parsing dates and times in a certificate, see Format of Dates and Times in an Expression and Expressions for SSL Certificate Dates.
Prefixes for text-based SSL and certificate data
The following table describes expression prefixes that identify text-based items in SSL transactions and client certificates.
Table 1. Prefixes That Return Text or Boolean Values for SSL and Client Certificate Data
Prefixes for numeric data in SSL certificates
The following table describes prefixes that evaluate numeric data other than dates in SSL certificates. These prefixes can be used with the operations that are described in Basic Operations on Expression Prefixes and Compound Operations for Numbers.
Table 2. Prefixes That Evaluate Numeric Data Other Than Dates in SSL Certificates
Note
For expressions related to expiration dates in a certificate, see Expressions for SSL Certificate Dates.
Expressions for SSL certificates
You can parse SSL certificates by configuring expressions that use the following prefix:
CLIENT.SSL.CLIENT_CERT
This section discusses the expressions that you can configure for certificates, except expressions that examine certificate expiration. Time-based operations are described in Advanced Policy Expressions: Working with Dates, Times, and Numbers.
The following table describes operations that you can specify for the CLIENT.SSL.CLIENT_CERT prefix.
Table 3. Operations That Can Be Specified with the CLIENT.SSL.CLIENT_CERT Prefix
Parse SSL client hello
You can parse the SSL client hello message by configuring expressions that use the following prefix:
CLIENT.SSL.CLIENT_HELLO
These expressions can be used at the CLIENTHELLO_REQ bind point. For more information, see SSL policy binding.
Call Flows
Direct calls between two destinations by calling a feature code.
- Name: Define the name of the call flow
- Extension: Define what extension to use. (This will make an extension not already created)
- Feature Code: Define what * number to use
- Context: Domain context (typically leave as is)
- Status: Define what currently is in use.
- Pin Number: Define a pin number in order to execute either mode.
- Destination: Define where the call will go in the initial mode.
- Sound: Define the sound that will play once mode is engaged.
- Destination: Define what the destination will be.
- Alternative Label: Label that will show when alternative mode is in use.
- Alternative Sound: Define the sound that will play once alternative mode is engaged.
- Alternative Destination: Define where the call will go in the alternative mode.
- Description: Label what this call flow does.
Call Flow Example
In the Call Flow example below we have the name as Call Flow. Made the Extension number 30 that didn't exist until now. Feature code we made with a *code as *30. Kept the context as is with training.fusionpbx.com. Status to show which mode. Made a pin number to help secure the call flow. Made the destination label as Day Mode. Picked a sound to familiarize which mode is activated. Chose a destination for the initial mode. Made the alternative destination label as Night Mode. Picked an alternative sound to familiarize which mode is activated. Chose a destination for the alternative mode. Finally, describe what this call flow does.
Voicemail
If a red circle with a white number is displayed on the main menu button, you have voicemail waiting for you. Click the button to view a list of your voicemail boxes.
You have a personal voicemail box and possibly a group voicemail box associated with the agent group you belong to. Your mailboxes are only displayed when you have at least one message in the mailbox. When you do have mail, the number of messages in each of your voicemail boxes is displayed beside the name of the voicemail box. Select the voicemail box to open it and listen to your voicemail. The main menu automatically collapses when the voice mail box is selected.
Windows Workflow Conceptual Overview
This section contains a set of topics discussing the larger concepts behind Windows Workflow Foundation (WF).
In This Section
Windows Workflow Overview
Describes the foundation that enables users to create system or human workflows in their applications written for Windows Vista, Windows XP, Windows Server 2003, and Windows Server 2008 operating systems.
Fundamental Windows Workflow Concepts
Describes fundamental concepts used in Windows Workflow Foundation development that may be new to some developers.
Windows Workflow Architecture
Describes components used in Windows Workflow Foundation development.
Please download the registration form, fill it in completely and sign it. Then either send us the filled-in and signed registration form or scan and e-mail it to us as a PDF or JPG file. You will find our postal and e-mail address on our contact page.
We will then send you an order confirmation with invoice via e-mail.
The invoice amount is due immediately and must arrive in our account within 14 days. The course price is 2500 €. After we receive your payment, we will send you a confirmation of registration, which you can also present to the (Foreigner) Authorities.
If you meet the government's requirements, the Arbeitsagentur (employment agency) or Jobcenter may sponsor your course. Please talk to us.
Piwik
We have a Piwik server that members of our co-operative can use to generate web stats (as an alternative to Google Analytics).
Our server is set to respect the Do Not Track HTTP header and to only save the first two parts of visitors' IP addresses.
Adding Piwik tracking to a site is simply a matter of copying and pasting some JavaScript into your web pages, or, if you use one of the popular content management systems, there may well be a plugin or module available for this.
Returns a set of temporary credentials for an AWS account or IAM user. The credentials consist of an access key ID, a secret access key, and a security token. Typically, you use get-session-token if you want to use MFA to protect programmatic calls to specific AWS APIs like Amazon EC2 StopInstances. MFA-enabled IAM users would need to call get-session-token and submit an MFA code that is associated with their MFA device. Using the temporary security credentials that are returned from the call, IAM users can then make programmatic calls to APIs that require MFA authentication. For a comparison of get-session-token with the other APIs that produce temporary credentials, see Requesting Temporary Security Credentials and Comparing the AWS STS APIs in the IAM User Guide.
The temporary security credentials created by get-session-token can be used to make API calls to any AWS service with the following exceptions:
- You cannot call any IAM APIs unless MFA authentication information is included in the request.
- You cannot call any STS API except assume-role or get-caller-identity.
Note
We recommend that you do not call get-session-token with root account credentials. Instead, follow our best practices by creating one or more IAM users, giving them the necessary permissions, and using IAM users for everyday interaction with AWS.
The permissions associated with the temporary security credentials returned by get-session-token are based on the permissions associated with the account or IAM user whose credentials are used to call the action. If get-session-token is called using root account credentials, the temporary credentials have root account permissions. Similarly, if get-session-token is called using the credentials of an IAM user, the temporary credentials have the same permissions as the IAM user.
For more information about using get-session-token to create temporary credentials, go to Temporary Credentials for Users in Untrusted Environments in the IAM User Guide .
See also: AWS API Documentation
get-session-token [--duration-seconds <value>] [--serial-number <value>] [--token-code <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--duration-seconds (integer)
The duration, in seconds, that the credentials should remain valid. Acceptable durations for IAM user sessions range from 900 seconds (15 minutes) to 129600 seconds (36 hours), with 43200 seconds (12 hours) as the default. Sessions for AWS account owners are restricted to a maximum of 3600 seconds (one hour).
--serial-number (string)
The identification number of the MFA device that is associated with the IAM user who is making the get-session-token call. The regex used to validate this parameter is a string of characters consisting of upper- and lower-case alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@-
--token-code (string)
The value provided by the MFA device, if MFA is required. If any policy requires the IAM user to submit an MFA code, specify this value; a user who fails to provide the code receives an "access denied" response when requesting resources that require MFA authentication.
Credentials -> (structure)
The temporary security credentials, which include an access key ID, a secret access key, and a security (or session) token.
AccessKeyId -> (string) The access key ID that identifies the temporary security credentials.
SecretAccessKey -> (string) The secret access key that can be used to sign requests.
SessionToken -> (string) The token that users must pass to the service API to use the temporary credentials.
Expiration -> (timestamp) The date on which the current credentials expire.
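The same operation is available from the AWS SDKs. The following boto3 sketch is an illustration rather than part of the CLI reference; the MFA serial number and token code are placeholders and can be omitted if MFA is not required.

import boto3

sts = boto3.client("sts")

response = sts.get_session_token(
    DurationSeconds=3600,
    SerialNumber="arn:aws:iam::123456789012:mfa/my-user",   # placeholder MFA device ARN
    TokenCode="123456",                                     # current code from the MFA device
)

creds = response["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])

# The temporary credentials can then be used for subsequent calls.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)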
The default operational mode is for multi_trepctl list to output the status. A specific mode can also be specified on the command line.
Table 9.11. multi_trepctl Commands
In addition to the two primary commands, multi_trepctl can execute commands that would normally be applied to trepctl, running them on each selected host, service or directory according to the options. The output format and expectation is controlled through the list and run commands.
For example:
shell> multi_trepctl status
Outputs the long form of the status information (as per trepctl status) for each identified host.
Lists the available backups across all replicators.
shell> multi_trepctl backups
| host | servicename | backup_date | prefix | agent |
| host1 | alpha | 2014-08-15 09:40:37 | store-0000000002 | mysqldump |
| host1 | alpha | 2014-08-15 09:36:57 | store-0000000001 | mysqldump |
| host2 | alpha | 2014-08-12 07:02:29 | store-0000000001 | mysqldump |
Runs the trepctl heartbeat command on all hosts that are identified as masters.
shell> multi_trepctl heartbeat
host: host1
servicename: alpha
role: master
state: ONLINE
appliedlastseqno: 8
appliedlatency: 2.619
output:
Lists which hosts are masters of others within the configured services.
shell> multi_trepctl masterof
| servicename | host | uri |
| alpha | host1 | thl://host1:2112/ |
The multi_trepctl list mode is the default mode for multi_trepctl and outputs the current status across all hosts and services as a table:
shell> multi_trepctl
| host | servicename | role | state | appliedlastseqno | appliedlatency |
| host1 | firstrep | master | OFFLINE:ERROR | -1 | -1.000 |
| host2 | firstrep | slave | GOING-ONLINE:SYNCHRONIZING | 5271 | 4656.264 |
| host3 | firstrep | slave | OFFLINE:ERROR | -1 | -1.000 |
| host4 | firstrep | slave | OFFLINE:ERROR | -1 | -1.000 |
Or selected hosts and services if options are specified. For example, to get the status only for host1 and host2:
shell> multi_trepctl --hosts=host1,host2
| host | servicename | role | state | appliedlastseqno | appliedlatency |
| host1 | firstrep | master | ONLINE | 5277 | 0.476 |
| host2 | firstrep | slave | ONLINE | 5277 | 0.000 |
The multi_trepctl command implies that the status or information is being output from each of the commands executed on the remote hosts and services.
The multi_trepctl run command can be used where the output of the corresponding trepctl command cannot be formatted into a convenient list. For example, to execute a backup on every host within a deployment:
shell> multi_trepctl run backup
The same filters and host or service selection can also be made:
shell> multi_trepctl run backup --hosts=host1,host2,host3
host: host1
servicename: firstrep
output: | Backup completed successfully; URI=storage://file-system/store-0000000005.properties
---
host: host2
servicename: firstrep
output: | Backup completed successfully; URI=storage://file-system/store-0000000001.properties
---
host: host3
servicename: firstrep
output: | Backup completed successfully; URI=storage://file-system/store-0000000001.properties
...
Return from the command will only take place when remote commands on each host have completed and returned.
These are the Golden Rules of localization direction setting initially used by Translate.org.za. They are born out of the goal of wanting to have the greatest potential impact on language speakers with localised Free Software.
There are three rules. Software that we translate must be:
- End-user focused
- Free Software
- Cross-platform
With these basic rules in mind you can more intelligently select your software targets.
The person who will most benefit from localized software is the desktop bound end-user. It is not the sysadmin or programmer. These are the people most likely to have less command of English while sysadmins have probably in the course of their work had to come to terms with English on the computers that they use.
This also means that you need to examine what type of software is generally used by an end-user. This would include: office suites, email programs and web-browsers, instant messaging and even games.
Again, if you want to make the most impact on the most people you need to remove barriers. Free Software removes two of them: the cost of the software and the legal restrictions on copying and sharing it.
The reality is that most computer users are using Microsoft Windows. Until that changes, a heavy focus should be on cross-platform products. Thus you can provide a solution to Windows users while at the same time providing an avenue for them to move to a Free operating system.
It is much easier to provide a localised office suite solution that does not require a complete retooling of the users operating environment.
These are the Golden Rules applied to the target selection at Translate.org.za:
1. OpenOffice.org
2. Mozilla (Firefox and Thunderbird)
3. Desktop environments (KDE and GNOME)
4. Linux installers and distribution-specific tools
Here is the explanation for this choice. OpenOffice.org is a leading piece of Free Software, end-user focused and cross-platform, and so meets all three of the requirements. However, so is Mozilla (and its offspring, Thunderbird and Firefox). We place OpenOffice.org before Mozilla because when using OpenOffice.org you are immersed in the language: you are probably editing an Afrikaans document while using an Afrikaans interface, while a web browser gives you an Afrikaans window frame onto an English Internet. The argument does not apply as well to an email reader such as Thunderbird, in that you are creating content in an email program. This is when you need to make a call.
Mozilla is localized before we look at any Linux desktops simply because people use Windows and Mozilla fills a gap in the needs of the end-user.
Translate.org.za has not made any firm decisions about desktops. We have localized both KDE and GNOME in the past and continue to do varying amounts of work on each, depending on conditions and funding. The project sees this as an important step in creating a fully localised Free Software environment, but it is at a lower priority while there are still low-hanging fruit in the cross-platform area.
We relagate Linux installer and distribution specific localisation to a lower rung. Distribution specific configuration programs are problematic in that they have limited impact to the distribution as apposed to a whole sea of Free Software users. But they are important for a seemless end-user experience. Some of them are also only used once, for example during software installation making localisation of them wasteful of resources.
Not listed are the localisation of kernel messages and command line tools. This is moving far beyond the realm of the average end-user and until a growth is seen in usage in this area they will remain off our radar.
The programs mentioned above are very big, and you might not have enough time to attempt such large projects yet. It is also good to start with something smaller, which will give you the satisfaction of completing a project, and give you something with which to encourage others to help you.
Here are some ideas of projects to look into: | http://docs.translatehouse.org/projects/localization-guide/en/latest/guide/golden_rules.html?id=guide/golden_rules | 2017-03-23T08:12:02 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.translatehouse.org |
Believe it or not some languages have more than one plural form. This fact does come as a surprise to many, especially programmers.
There are two aspects to plurals that you need to be aware of:
English adds s to the end of most words to form a plural. Thus you will see many instances where a programmer has simply add (s) to the word to indicate that it can be both singular and plural. Here is an example in Tsonga, in this case plurals add/change text at the beginning of the word, not the end. This often simply looks ugly.
"Show/Hide Axis Description(s)" "Kombisa/Fihla (ti)nhlamuselo ya tikhona"
Furthermore, the grammar in other parts of the sentence might need to agree with either the singular or the plural case, but can’t agree with both.
Here are the options available to you to deal with these type of plurals:
Programs do have the ability handle plurals correctly. This is usually achieved by using the Gettext library or some similar method. When the application runs it will determine which plural form to use based on the number that is being displayed. So in English it would display:
Gettext uses the “Plural-Forms” header in the PO file to define:
nplural– the number of plural forms.
plural– an expression which when evaluated determines which form is appropriate for that number. If a definition does not exist for your language then you will need to create one or get someone to help you.
Once this is defined then your PO editing tool will display the correct number of fields for you to enter the plural forms.
A list with these settings in some languages is available here. To find out how to define these entries, see Plural forms (gettext manual)
Note
KDE now uses standard Gettext plurals. This section is just for historical reference.
xxx
The plural form for KDE is defined in the kdelibs.po file. Choose one of the options or ask for help on their mailing list.
You will recognise KDE plural messages as they all start with “
_n: ``" and
each form is separated by "n``”. in your translation you leave out the
“
_n: ``" and include as many translations as there are plural forms in your
language, with each one separated by a "n``” | http://docs.translatehouse.org/projects/localization-guide/en/latest/guide/translation/plurals.html?id=guide/translation/plurals | 2017-03-23T08:13:37 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.translatehouse.org |
Hiera 1: Release Notes 1.3.4
Released June 10, 2014.
Hiera 1.3.4 is a security fix release in the Hiera 1.3 series. It has no other bug fixes or new features.
Security Fix
CVE-2014-3248 (An attacker could convince an administrator to unknowingly execute malicious code on platforms with Ruby 1.9.1 and earlier)
Platforms running Ruby 1.9.1 or earlier would load Ruby source files from the current working directory during a Hiera lookup. This could lead to the execution of arbitrary code.
Hiera 1.3.3
Released May 22, 2014.
Hiera 1.3.3 is a backward-compatible performance and fixes release in the 1.3 series. It provides a substantial speed increase for lookups compared to Hiera 1.3.2. This release also adds support for Ubuntu 14.04 (Trusty Tahr) and discontinues support for Fedora 18 and Ubuntu 13.04 (Raring Ringtail).
Performance Improvements
- HI-239: Backport speed improvement to 1.3.x codebase, resulting in a substantial speed increase in lookups compared to Hiera 1.3.2.
Operating System Support
- HI-149: Remove Fedora 18 from default build targets
- HI-236: Remove Raring (Ubuntu 13.04) from build_defaults, it is EOL
- HI-185: Add Trusty (Ubuntu 14.04) support
Bug Fixes
Hiera 1.3.2
Released February 26, 2014. (RC1: February 11; RC2: February 20.)
Hiera 1.3.2 is a bug fix release in the 1.3 series. It adds packages for Red Hat Enterprise Linux 7, support for deploying to Solaris and Windows vCloud instances, and fixes a bug on Debian.
RHEL 7 Support
Bug Fixes
- HI-176: Hiera would fail to find the correct ruby binary on Debian when an alternative version was installed. Hiera now uses
/usr/bin/ruby, which fixes the issue.
- HI-178: Acceptance tests have been added for Solaris and Windows vCloud machines.
- HI-115: Hiera would show an incorrect
recursive_guardwarning if the same variable was interpolated twice in a hierarchy definition, even if the usage was not recursive.
Hiera 1.3.1
Released January 23, 2014. (RC1: December 12, 2013.)
Hiera 1.3.1 is a bug fix release in the 1.3 series. It fixes one bug:
HI-65: Empty YAML files can raise an exception (backported to stable as HI-71)
Hiera 1.3.0
Released November 21, 2013. (RC1: never published; RC2: November 8, 2013.)
Hiera 1.3.0 contains three new features, including Hiera lookups in interpolation tokens. It also contains bug fixes and packaging improvements.
Most of the features contributed to Hiera 1.3 are intended to provide more power by allowing new kinds of value interpolation.
Feature: Hiera Sub-Lookups from Within Interpolation Tokens
In addition to interpolating variables into strings, you can now interpolate the value of another Hiera lookup. This uses a new lookup function syntax, which looks like
"%{hiera('lookup_key')}". See the docs on using interpolation tokens for more details.
- Feature #21367: Add support for a hiera variable syntax which interpolates data by performing a hiera lookup
Feature: Values Can Now Be Interpolated Into Hash Keys
Hashes within a data source can now use interpolation tokens in their key names. This is mostly useful for advanced
create_resources tricks. See the docs on interpolating values into data for more details.
- Feature #20220: Interpolate hash keys
Feature: Pretty-Print Arrays and Hashes on Command Line
This happens automatically and makes CLI results more readable.
- Feature #20755: Add Pretty Print to command line hiera output
Bug Fixes
Most of these fixes are error handling changes to improve silent or unattributable error messages.
- Bug #17094: hiera can end up in and endless loop with malformed lookup variables
- Bug #20519: small fix in hiera/filecache.rb
- Bug #20645: syntax error cause nil
- Bug #21669: Hiera.yaml will not interpolate variables if datadir is specified as an array
- Bug #22777: YAML and JSON backends swallow errors
- Feature #21211: Hiera crashes with an unfriendly error if it doesn’t have permission to read a yaml file
Build/Packaging Fixes and Improvements
We are now building Hiera packages for Ubuntu Saucy, which previously was unable to use Puppet because a matching Hiera package couldn’t be built. Fedora 17 is no longer supported, and hardcoded hostnames in build_defaults.yaml were removed.
- Bug #22166: Remove hardcoded hostname dependencies
- Bug #22239: Remove Fedora 17 from build_defaults.yaml
- Bug #22905: Quilt not needed in debian packaging
- Bug #22924: Update packaging workflow to use install.rb
- Feature #14520: Hiera should have an install.rb
Hiera 1.2.1
Hiera 1.2.1 contains one bug fix.
Hiera 1.2.0
Hiera 1.2.0 contains new features and bug fixes.
Features
- Deep-merge feature for hash merge lookups. See the section of this manual on hash merge lookups for details.
- (#16644) New generic file cache. This expands performance improvements in the YAML backend to cover the JSON backend, and is relevant to those who write custom backends. It is implemented in the Hiera::Filecache class.
- (#18718) New logger to handle fallback. Sometimes a logger has been configured, but is not suitable for being used. An example of this is when the puppet logger has been configured, but Hiera is not being used inside Puppet. This adds a FallbackLogger that will choose among the provided loggers for one that is suitable.
Bug Fixes
(#17434) Detect loops in recursive lookup
The recursive lookup functionality was vulnerable to infinite recursion when the values ended up referring to each other. This keeps track of the names that have been seen in order to stop a loop from occurring. The behavior for this was extracted to a class so that it didn’t clutter the logic of variable interpolation. The extracted class also specifically pushes and pops on an internal array in order to limit the amount of garbage created during these operations. This modification should be safe so long a new Hiera::RecursiveLookup is used for every parse that is done and it doesn’t get shared in any manner.
(#17434) Support recursive interpolation
The original code for interpolation had, hidden somewhere in its depths, supported recursive expansion of interpolations. This adds that support back in. | https://docs.puppet.com/hiera/1/release_notes.html | 2017-03-23T08:14:53 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.puppet.com |
Actions and Condition Context Keys for AWS XRay
AWS XRay provides the following service-specific actions and condition context keys for use in IAM policies.
Actions for AWS XRay
Condition context keys for AWS XRay
AWS XRay has no service-specific context keys that can be used in an IAM policy. For the list of the global condition context keys that are available to all services, see Global Condition Keys in the IAM Policy Elements Reference. | http://docs.aws.amazon.com/IAM/latest/UserGuide/list_xray.html | 2017-03-23T08:12:30 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.aws.amazon.com |
Inception Score¶
Module Interface¶
- class torchmetrics.image.inception.InceptionScore(feature='logits_unbiased', splits=10, **kwargs)[source]
Calculates the Inception Score (IS) which is used to access how realistic generated images are. It is defined as
where
is the KL divergence between the conditional distribution
and the margianl distribution
. Both the conditional and marginal distribution is calculated from features extracted from the images. The score is calculated on random splits of the images such that both a mean and standard deviation of the score are returned..
splits¶ (
int) – integer determining how many splits the inception score calculation should be split among
kwargs¶ (
Any) – Additional keyword arguments, see Advanced metric settings for more info.
References
[1] Improved Techniques for Training GANs Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen
stror
intand
torch-fidelityis not installed
ValueError – If
featureis set to an
stror
intand not one of
['logits_unbiased', 64, 192, 768, 2048]
TypeError – If
featureis not an
str,
intor
torch.nn.Module
Example
>>> import torch >>> _ = torch.manual_seed(123) >>> from torchmetrics.image.inception import InceptionScore >>> inception = InceptionScore() >>> # generate some images >>> imgs = torch.randint(0, 255, (100, 3, 299, 299), dtype=torch.uint8) >>> inception.update(imgs) >>> inception.compute() (tensor(1.0544), tensor(0.0117))
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- compute()[source]
Override this method to compute the final metric value from state variables synchronized across the distributed backend.
- Return type
- | https://torchmetrics.readthedocs.io/en/latest/image/inception_score.html | 2022-06-25T02:09:52 | CC-MAIN-2022-27 | 1656103033925.2 | [] | torchmetrics.readthedocs.io |
ClickHouse ODBC Driver Installation for Windows
The ClickHouse ODBC Driver for Microsoft Windows allows users to connect different applications to a ClickHouse database. There are two versions available: 64-Bit and 32-Bit based on which version of Windows is being used, and the requirements of the applications connecting to ClickHouse.
Prerequisites
- The ClickHouse ODBC Driver relies on the Microsoft Visual C++ Redistributable for Visual Studio 2019. This is included with the ClickHouse ODBC Installer for Windows.
- A local Administrator account is required to install the ClickHouse ODBC driver on Windows 10.
Installation Steps
To install the ClickHouse ODBC Driver for Microsoft Windows:
With a browser, navigate to the clickhouse-odbc releases page.
Select the most recent release and select one of the following ClickHouse ODBC installer for Windows, replacing
{version}with the version that will be downloaded:
- For 32-bit versions of Windows:
clickhouse-odbc-{version}-win32.msi
- For 64-bit versions of Windows:
clickhouse-odbc-{version}-win64.msi
Launch the downloaded ClickHouse ODBC installer.
- Note: There may be a warning from Microsoft Windows Defender that the installer is an unrecognized application. If this occurs, select More Info, then select Run Anyway.
Follow the ODBC installation process as detailed by the application. The default installation are typically sufficient, but refer to the clickhouse-odbc guide for full details.
Once finished, the ClickHouse ODBC Driver will be installed.
Verifying the ClickHouse ODBC Driver Installation
To verify the ClickHouse ODBC Driver has been installed:
Launch the Windows 10 application ODBC Data Source Administrator - there are two versions: 32 bit and 64 bit. Select the version that matches your operating system.
Select the System DSN tab. Under a standard ClickHouse ODBC installation, both the ClickHouse DSN (ANSI) and the ClickHouse DSN (Unicode) will be available.
Example Connecting to ClickHouse with ODBC
Once the ClickHouse ODBC driver has been installed, connections can be made to specific ClickHouse servers via the Data Source Name(DSN). Two connection types are recommended:
- User DSN: These are ODBC connections that are available for the Windows 10 user.
- System DSN: These are ODBC connections available to all users of the Windows 10 operating system.
The following example demonstrates how to create a User DSN connection to a ClickHouse server.
Launch the Windows 10 application ODBC Data Source Administrator - there are two versions: 32 bit and 64 bit. Select the version that matches your operating system and the applications that will be connecting to ClickHouse.
- For example: If running the 64 bit version of Windows 10, but the application is 32 bit, then select the 32 bit version of the ODBC driver.
Select the User DSN tab.
Select Add.
Select ClickHouse ODBC Driver (Unicode), then select Finish.
There are two methods of setting up the DSN connection: URL or Host Name. To set up the connection via URL:
Name: The name you set for your connection.
Description (Optional): A short description of the ODBC connection.
URL: The URL for the ClickHouse server. This will be the HTTP or HTTPS connection based on the ClickHouse HTTP Interface.
- This will be in the format:
{connection type}//{url}:{port}
For example:
To set up the connection via Host Name, provide the following:
- Host: The hostname or IP address of the ClickHouse server.
- Port: The port to be used. This will be either the HTTP port, default
8123, or the HTTPS port default
8443.
- Database (Optional): The name of the database on the ClickHouse server.
- SSLMode (Optional)
- Set to require if SSL will be used and fail if it can not be verified.
- Set to allow if SSL will be used with self-signed certificates.
- User (Optional): Set to provide a specific username when connecting, leave blank to be prompted.
- Password (Optional): Set to provide a specific password when connecting, leave blank to be prompted.
- Timeout (Optional): Set a timeout period before giving up on the connection.
Test Connection
One method of testing the connection to ClickHouse through the ODBC driver is with Powershell. This script will make an ODBC connection to the specified database, then show all tables available to the authenticating ClickHouse user.
- Launch Powershell.
- If using the 64 bit version of the ClickHouse ODBC Driver, then select
Windows Powershell ISE.
- If using the 32 bit version of the ClickHouse ODBC Driver, select
Windows Powershell ISE (x86).
- Paste the following script, replacing the following:
- DSN: The DSN of your ClickHouse ODBC Connection.
- Uid: The ClickHouse user being used.
- Pwd: The password of the ClickHouse user being used.
- Run the script.
$connectstring = "DSN=ClickHouseDemo;Uid=demo;Pwd=demo;" $sql = @' show tables; '@ $connection = New-Object System.Data.Odbc.OdbcConnection($connectstring) $connection.open() $command = New-Object system.Data.Odbc.OdbcCommand($sql,$connection) $data = New-Object system.Data.Odbc.OdbcDataAdapter($command) $datatable = New-Object system.Data.datatable $null = $data.fill($datatable) $conn.close() $datatable | https://beta.docs.altinity.com/integrations/clickhouse-odbc-driver/windows-clickhouse-odbc/ | 2022-06-25T01:31:20 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['/images/operations/integrations/odbc/odbc_powershell_testconnection.png',
'ODBC Powershell Test'], dtype=object) ] | beta.docs.altinity.com |
This module displays the list of all the users that are registered in the billing Application. The Users can be searched on the basis of their Name, Username, Company Name, Mobile, Email, Country, and Industries.
In the data table below is the list of all the registered users, their email and the date on which they were registered. Also, there is a column called Status which tells whether the user has done his email and OTP Verification. If yes, then the color changes to green or else it is red.
On the right -the top of the data table is a button to create users.
Create Users
When this button is clicked a page opens where all the details need to be filled. Few fields on this page are described below:
- Role: This determines what role is assigned to the registered users(User or Admin). If the admin is selected and saved, the user will now have access to the admin panel.
- Position: The User can be assigned as Manager which allows him access to receive all the mails related to billing.
- All the fields can similarly be filled and saved to create a new user and retain the information.
Edit Users
Here all the information of the created user can be edited and saved. | https://docs.agorainvoicing.com/all-users/ | 2022-06-25T01:18:28 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.agorainvoicing.com |
In OpenShift Container Platform 4.8, you can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content.
You reviewed details about the persistent storage for your cluster. To deploy a private image registry, your storage must provide the ReadWriteMany access mode.
The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible.
If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed.
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to..8, administrative. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation.. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. from the Red Hat OpenShift Cluster Manager. (6) apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: (7) fips: false) - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev.
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1) --log-level=info (2)
... -l docker-registry=default
No resourses found in openshift-image-registry namespace
Check the registry configuration:
$ oc edit configs.imageregistry.operator.openshift.io
storage: pvc: claim: (1)
Check the
clusteroperator status:
$ oc get clusteroperator image-registry
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m.
Set up your registry and configure registry storage. | https://docs.openshift.com/container-platform/4.8/installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.html | 2022-06-25T02:23:50 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.openshift.com |
Chrome Certificate Error
Chrome certificate error with HSTS
Pritunl will set the HSTS header to force the web browser to use HTTPS. When changing certificates it is possible to be locked out of the web console with the error message You cannot visit example.domain right now because the website uses HSTS. If this occurs the domain must be removed from the Chrome HSTS database. This is done by first closing any open tabs with the Pritunl web console then going to chrome://net-internals/#hsts. At the bottom of the page put the domain of the Pritunl web console in the Delete domain security policies field and click Delete. Then open a new tab and go to the web console and click Proceed to example.domain (unsafe).
Updated over 2 years ago | https://docs.pritunl.com/docs/chrome-certificate-error | 2022-06-25T01:35:26 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.pritunl.com |
The following functions allow you to get all the information needed to successfully render Mulberry offers.
The response objects that get returned from each method are compatible with and can be injected into the Mulberry frontend Inline, Modal and various other offers. See section "Rendering Offers" for more information.
Embedding the SDK
The first step to getting started with the Mublerry SDK is to embed the script into the
<head> of your HTML
Staging:
<script src=""></script>
Production:
<script src=""></script>
mulberry.core.init()
The first step to getting started with the Mulberry SDK is to embed the script and initialize it. You can do so as shown below.
async function initializeMulberry() { await window.mulberry.core.init({ publicToken: '5NvkZva_L5MQ6NW8h3siiiR23z5' }); } initializeMulberry();
Your 'publicToken'
Make sure to use your own publicToken . You can find yours by logging into the Mulberry dashboard and clicking the settings/gear icon.
Once initialized, window.mulberry.core.settings is automatically populated with your brand colors and settings.
mulberry.core.getCoverageDetails()
You can fetch coverage details for different insurance categories.
const coverage = await window.mulberry.core.getCoverageDetails( '74828474' );
mulberry.core.getWarrantyOffer()
Fetches offers from Mulberry API in real time.
const offers = await mulberry.core.getWarrantyOffer({ title: 'Oslo Chaise in Velvet', id: 'OSLO-XYZ', price: '2999.00', images: [ '', '' ] });
Once you have an offer returned to display, you can inject it and render it with one of our pre-built components. | https://docs.getmulberry.com/reference/initialization | 2022-06-25T01:36:08 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.getmulberry.com |
Microsoft Dynamics 365 - Service Update 141 Release Notes
Service Update 141 141 will now correspond to version number 141 141 resolves the following issues:
Repaired Functionality
The following list details issues whose resolutions repair items in Dynamics that are not functioning.
Knowledge Management
- Embedded knowledge search for cases was not showing results in the "Custom" web app.
- Rich text was not displaying in mobile offline mode.
Outlook
- The ellipsis did not display in the Account card or the Recipient card when tracking an email regarding an account, or opening an email that was tracked and pertaining to an account.
- The task due date could not be entered in the Quick Create form.
Platform Services
- After adding a privilege role, the audit history did not specify which role was added, and no value was in the "Old value" and "New value" fields.*
- Retrieve Group Teams would intermittently fail.*
- Workflows containing the trigger "Does not contain 'test'" did not execute.*
- When capturing a new lead in CRM, the workflow stopped sending notifications that a lead was created.
- A generic SQL error occurred when adding or making changes to anything displayed on the Timeline.
- After creating an entity that contains an "Image" field, that field disappeared when the entity was refreshed.
- An error occurred when opening an entity ("The entity could not be feched: An unexpected error occurred").
- Solutions containing a custom SDK Filter component could not be deleted.
- Tasks were not synchronized from Dynamics 365 to Dynamics to Outlook.*
- App updates failed with an error when feature control block (FCB) was enabled ("The requested service ... has not been registered.").*
- Performing a search using a "Contain Values" operator on the "Multiple Select Option" field produced unexpected results.*
Sales
- Dropdown menu in "My Activities" table was expanded and collapsed by using the left and right arrow keys instead of the "Enter" or "Space" keys.*
- When the French language was selected, the "Import log" associated view header displayed in English.
- Skype for Business 9.0.2.2008 could not be updated to the latest release.*
Solutions
- The "Business Unit ID" field in the Scheduling tab of the Customer Service Hub displayed in English when a non-English language was selected.
- When importing a solution containing an entity with an "Active" SLA state, the entity did not appear as an option in the entity dropdown menu while creating an SLA.*
Unified Interface
- An expected error message did not appear when using Safari to debug an app with a total script download size of 25MB.
- When an address was displayed on Bing Maps and then removed, the pinpoint was removed but the infobox remained displaying the old address.
- After importing a solution, non-English ribbon buttons were unexpectedly replaced with base language (English) labels.*
- An expected popup displayed before a dialog window was closed instead of after the window was closed.
- Field list sorting was case sensitive.
- The value for weeks in the mobile app for Dynamics 365 displayed as "(n-1)" instead of "n".*
- The TimeControl could not be edited using a keyboard.
- The wrong label displayed on the Phone Call entity's out-of-box quick form.
- The Map control loaded in new forms with a specific pinpoint added to it by default.
- The "Start time" and "End time" fields in an appointment displayed the wrong value when the "All day event" check box was selected in the form.
- The command bar did not load for some entities (Account, Contact, and Custom).*
- New memberships could not be created using a custom form.
- The "Sign Out" button did not function.
- Importing a solution failed when the design map contained comments.*
- For emails in a draft state, its subject displayed twice on the Activity Wall.
- The "+New Record" button in the input dropdown did not function when the "Most Recently Used" (MRU) setting was disabled.
- Columns with the 'Onchange' handler in Editable Grids within entities were not updating when the associated column was changed.*
- Organization language displayed in the default language, instead of the selected language.
- Dashboards could not be viewed within the Dashboards page.
Error Messages, Exceptions, and Failures
The following list details issues whose resolutions correct actions that produce errors, unhandled exceptions, or system or component failures.
Folders
- A database update failed with a "duplicate ID puid" exception.
- An error occurred when validating the business process flow on a project task entity ("A field with this name already exists, Please enter different unique name").*
Knowledge Management
- Timeline filters were unavailable ("Unable to load filters due to an unexpected error.").*
Outlook
- A syntax error occurred when accessing Unified Interface.*
Platform Services
- With component definitions enabled, an error occurred when making a basic solution component summaries request with no parameters aside from "$filter" ("The entity with a name ... with namemapping ... was not found in the MetadataCache.").
- Out-of-box reports were failing to run and gave an "Unhandled exception" error message.
- Importing a solution failed with an error ("'LocalizedLabel' metadata entity doesn't contain metadata attribute with Name = 'LocalizedLabel'").*
- Data could not be entered in charts by users with certain roles assigned to them.*
- The error message, "Cannot assign roles or profiles to an access team" did not include the Team ID.*
- Admins would receive a "Cannot delete a team which owns records," even after reassigning all records of that team.*
- An error occurred when saving forms containing values for Look Up fields to Custom entities ("Try this action again….").*
- An error occurred when accessing Dynamics 365 for Outlook ("We are unable to show Dynamics 365 App for Outlook because current user role does not have required permissions. Please contact your administrator to have required security role assigned to the Dynamics 365 App for Outlook solution.").*
- An "Arithmetic overflow" SQL error occurred when retrieving a calculated field containing Money type parameters.*
Sales
- When viewing a chart on the Opportunity Sales Processes home grid in Unified Interface and switching to "Process Funnel", the chart failed with an error ("There was an error retrieving the chart from the server. Please try again.").*
- Selecting "View Hierarchy" resulted in an error ("Quick view forms are not available to render the hierarchy form."), and the Form Switcher did not function.
Unified Interface
- An error occurred when editing an exception feedback message ("The attachment is too large. The maximum file size allowed is {0} kilobytes.”).
- Error message for missing privilege did not contain privilege information.*
- Entity icons did not display in the “Report” and “Resources” roles entities in Unified Interface.*
- Unified Interface access did not function when "FCB.BrowserOfflineEnabledForTest" was enabled.*
Return to the all version availability page.
If you have any feedback on the release notes, please provide your thoughts here | https://docs.microsoft.com/en-us/dynamics365/released-versions/weekly-releases/update141 | 2022-06-25T02:02:30 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.microsoft.com |
Class find_embedding::embedding¶
- template<typename embedding_problem_t>
class find_embedding::embedding¶
This class is how we represent and manipulate embedding objects, using as much encapsulation as possible.
We provide methods to view and modify chains.
Public Functions
- inline embedding(embedding_problem_t &e_p)¶
constructor for an empty embedding
- inline embedding(embedding_problem_t &e_p, map<int, vector<int>> &fixed_chains, map<int, vector<int>> &initial_chains)¶
constructor for an initial embedding: accepts fixed and initial chains, populates the embedding based on them, and attempts to link adjacent chains together.
- inline embedding<embedding_problem_t> &operator=(const embedding<embedding_problem_t> &other)¶
copy the data from
other.var_embeddinginto
this.var_embedding
- inline int max_weight(const int start, const int stop) const¶
Get the maximum of all qubit weights in a range.
- inline bool has_qubit(const int v, const int q) const¶
Check if variable v is includes qubit q in its chain.
- inline void fix_chain(const int u, const vector<int> &incoming)¶
Permanently assign a chain for variable u.
NOTE: This must be done before any chain is assigned to u.
- inline bool operator==(const embedding &other) const¶
check if
thisand
otherhave the same chains (up to qubit containment per chain; linking and parent information is not checked)
- inline void construct_chain(const int u, const int q, const vector<vector<int>> &parents)¶
construct the chain for
u, rooted at
q, with a vector of parent info, where for each neibor
vof
u, following
q->
parents[v][q]->
parents[v][parents[v][q]]…
terminates in the chain for
v
- inline void construct_chain_steiner(const int u, const int q, const vector<vector<int>> &parents, const vector<vector<distance_t>> &distances, vector<vector<int>> &visited_list)¶
construct the chain for
u, rooted at
q.
for the first neighbor
vof
u, we follow the parents until we terminate in the chain for
v
q->
parents[v][q]-> …. adding all but the last node to the chain of
u. for each subsequent neighbor
w, we pick a nearest Steiner node,
qw, from the current chain of
u, and add the path starting at
qw, similar to the above…
qw->
parents[w][qw]-> … this has an opportunity to make shorter chains than
construct_chain
- inline void flip_back(int u, const int target_chainsize)¶
distribute path segments to the neighboring chains — path segments are the qubits that are ONLY used to join link_qubit[u][v] to link_qubit[u][u] and aren’t used for any other variable
if the target chainsize is zero, dump the entire segment into the neighbor
if the target chainsize is k, stop when the neighbor’s size reaches k
- inline void tear_out(int u)¶
short tearout procedure blank out the chain, its linking qubits, and account for the qubits being freed
- inline int freeze_out(int u)¶
undo-able tearout procedure.
similar to
tear_out(u), but can be undone with
thaw_back(u). note that this embedding type has a space for a single frozen chain, and
freeze_out(u)overwrites the previously-frozen chain consequently,
freeze_out(u)can be called an arbitrary (nonzero) number of times before
thaw_back(u), but
thaw_back(u)MUST be preceeded by at least one
freeze_out(u). returns the size of the chain being frozen
- inline void thaw_back(int u)¶
undo for the freeze_out procedure: replaces the chain previously frozen, and destroys the data in the frozen chain
thaw_back(u)must be preceeded by at least one
freeze_out(u)and the chain for
umust currently be empty (accomplished either by
tear_out(u)or
freeze_out(u))
- inline void steal_all(int u)¶
grow the chain for
u, stealing all available qubits from neighboring variables
- inline int statistics(vector<int> &stats) const¶
compute statistics for this embedding and return
1if no chains are overlapping when no chains are overlapping, populate
statswith a chainlength histogram chains do overlap, populate
statswith a qubit overfill histogram a histogram, in this case, is a vector of size (maximum attained value+1) where
stats[i]is either the number of qubits contained in
i+2chains or the number of chains with size
i
- inline bool linked() const¶
check if the embedding is fully linked — that is, if each pair of adjacent variables is known to correspond to a pair of adjacent qubits
- inline void print() const¶
print out this embedding to a level of detail that is useful for debugging purposes TODO describe the output format.
- inline void long_diagnostic(std::string current_state)¶
run a long diagnostic, and if debugging is enabled, record
current_stateso that the error message has a little more context.
if an error is found, throw a CorruptEmbeddingException | https://docs.ocean.dwavesys.com/en/stable/docs_minorminer/source/cpp/class/classfind__embedding_1_1embedding.html | 2022-06-25T02:20:25 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.ocean.dwavesys.com |
Class lispa\amos\cwh\models\CwhPubblicazioni
All Classes | Properties | Methods | Events | Constants
This is the model class for table "cwh_pubblicazioni".
Public Properties
Hide inherited properties
Protected Properties
Hide inherited properties
Public Methods
Protected Methods
Events
Constants
Method Details
Unlink and delete all CwhPubliccazioniCwhNodiEditoriMm associated to this CwhPubblicazioni
Delete all CwhPubblicczioniCwhNodiValidatoriMm associated to this CwhPubblicazioni
Deprecated Old method used when publication id was a string content::tableName()-content->id eg. 'news-99' This method will be removed. | http://docs.open2.0.appdemoweb.org/lispa-amos-cwh-models-cwhpubblicazioni.html | 2022-06-25T01:20:36 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.open2.0.appdemoweb.org |
fortinet.fortimanager.fmgr_ips_sensor_entries_exemptip module – Traffic from selected source or destination IP addresses is exempt from this signature._ips_sensor_entries_exempt: Traffic from selected source or destination IP addresses is exempt from this signature. fmgr_ips_sensor_entries_exemptip: bypass_validation: False workspace_locking_adom: <value in [global, custom adom including root]> workspace_locking_timeout: 300 rc_succeeded: [0, -2, -3, ...] rc_failed: [-2, -3, ...] adom: <your own value> sensor: <your own value> entries: <your own value> state: <value in [present, absent]> ips_sensor_entries_exemptip: dst-ip: <value of string> id: <value of integer> src-ip: <value of string>
Return Values
Common return values are documented here, the following are the fields unique to this module:
Collection links
Issue Tracker Homepage Repository (Sources) | https://docs.ansible.com/ansible/latest/collections/fortinet/fortimanager/fmgr_ips_sensor_entries_exemptip_module.html | 2022-06-25T02:06:47 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.ansible.com |
This document offers a basic understanding of the REST API used by automation controller.
REST stands for Representational State Transfer and is sometimes spelled as “ReST”. It relies on a stateless, client-server, and cacheable communications protocol, usually the HTTP protocol.
You may find it helpful to see which API calls the user interface makes in sequence. To do this, you can use the UI from Firebug or Chrome with developer plugins.
Another alternative is Charles Proxy (), which offers a visualizer that you may find helpful. While it is commercial software, it can insert itself as an OS X proxy, for example, and intercept both requests from web browsers as well as curl and other API consumers.
Other alternatives include:
Fiddler ()
mitmproxy ()
Live HTTP headers FireFox extension ()
Paros () | https://docs.ansible.com/automation-controller/latest/html/towerapi/tools.html | 2022-06-25T01:25:21 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.ansible.com |
CreateLabelingJob.
You can use this operation to create a static labeling job or a streaming labeling
job. A static labeling job stops if all data objects in the input manifest file
identified in
ManifestS3Uri have been labeled. A streaming labeling job
runs perpetually until it is manually stopped, or remains idle for 10 days. You can send
new data objects to an active (
InProgress) streaming labeling job in real
time. To learn how to create a static labeling job, see Create a Labeling Job
(API) in the Amazon SageMaker Developer Guide. To learn how to create a streaming
labeling job, see Create a Streaming Labeling
Job.
Request Syntax
{ "HumanTaskConfig": { "AnnotationConsolidationConfig": { "AnnotationConsolidationLambdaArn": "
string" }, "MaxConcurrentTaskCount":
number, "NumberOfHumanWorkersPerDataObject":
number, "PreHumanTaskLambdaArn": "
string", "PublicWorkforceTaskPrice": { "AmountInUsd": { "Cents":
number, "Dollars":
number, "TenthFractionsOfACent":
number} }, "TaskAvailabilityLifetimeInSeconds":
number, "TaskDescription": "
string", "TaskKeywords": [ "
string" ], "TaskTimeLimitInSeconds":
number, "TaskTitle": "
string", "UiConfig": { "HumanTaskUiArn": "
string", "UiTemplateS3Uri": "
string" }, "WorkteamArn": "
string" }, "InputConfig": { "DataAttributes": { "ContentClassifiers": [ "
string" ] }, "DataSource": { "S3DataSource": { "ManifestS3Uri": "
string" }, "SnsDataSource": { "SnsTopicArn": "
string" } } }, "LabelAttributeName": "
string", "LabelCategoryConfigS3Uri": "
string", "LabelingJobAlgorithmsConfig": { "InitialActiveLearningModelArn": "
string", "LabelingJobAlgorithmSpecificationArn": "
string", "LabelingJobResourceConfig": { "VolumeKmsKeyId": "
string", "VpcConfig": { "SecurityGroupIds": [ "
string" ], "Subnets": [ "
string" ] } } }, "LabelingJobName": "
string", "OutputConfig": { "KmsKeyId": "
string", "S3OutputPath": "
string", "SnsTopicArn": "
string" }, "RoleArn": "
string", "StoppingConditions": { "MaxHumanLabeledObjectCount":
number, "MaxPercentageOfInputDatasetLabeled":
number}, "Tags": [ { "Key": "
string", "Value": "
string" } ] }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- HumanTaskConfig
Configures the labeling task and how it is presented to workers; including, but not limited to price, keywords, and batch size (task count).
Type: HumanTaskConfig object
Required: Yes
- InputConfig
Input data for the labeling job, such as the Amazon S3 location of the data objects and the location of the manifest file that describes the data objects.
You must specify at least one of the following:
S3DataSourceor
SnsDataSource.
Use
SnsDataSourceto specify an SNS input topic for a streaming labeling job. If you do not specify and SNS input topic ARN, Ground Truth will create a one-time labeling job that stops after all data objects in the input manifest file have been labeled.
Use
S3DataSourceto specify an input manifest file for both streaming and one-time labeling jobs. Adding an
S3DataSourceis optional if you use
SnsDataSourceto create a streaming labeling job.
If you use the Amazon Mechanical Turk workforce, your input data should not include confidential information, personal information or protected health information. Use
ContentClassifiersto specify that your data is free of personally identifiable information and adult content.
Type: LabelingJobInputConfig object
Required: Yes
- LabelAttributeName
The attribute name to use for the label in the output manifest file. This is the key for the key/value pair formed with the label that a worker assigns to the object. The
LabelAttributeNamemust meet the following requirements.
The name can't end with "-metadata".
If you are using one of the following built-in task types, the attribute name must end with "-ref". If the task type you are using is not listed below, the attribute name must not end with "-ref".
Image semantic segmentation (
SemanticSegmentation), and adjustment (
AdjustmentSemanticSegmentation) and verification (
VerificationSemanticSegmentation) labeling jobs for this task type.
Video frame object detection (
VideoObjectDetection), and adjustment and verification (
AdjustmentVideoObjectDetection) labeling jobs for this task type.
Video frame object tracking (
VideoObjectTracking), and adjustment and verification (
AdjustmentVideoObjectTracking) labeling jobs for this task type.
3D point cloud semantic segmentation (
3DPointCloudSemanticSegmentation), and adjustment and verification (
Adjustment3DPointCloudSemanticSegmentation) labeling jobs for this task type.
3D point cloud object tracking (
3DPointCloudObjectTracking), and adjustment and verification (
Adjustment3DPointCloudObjectTracking) labeling jobs for this task type.
Important
If you are creating an adjustment or verification labeling job, you must use a different
LabelAttributeNamethan the one used in the original labeling job. The original labeling job is the Ground Truth labeling job that produced the labels that you want verified or adjusted. To learn more about adjustment and verification labeling jobs, see Verify and Adjust Labels.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 127.
Pattern:
^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,126}
Required: Yes
- LabelCategoryConfigS3Uri
The S3 URI of the file, referred to as a label category configuration file, that defines the categories used to label the data objects.
For 3D point cloud and video frame task types, you can add label category attributes and frame attributes to your label category configuration file. To learn how, see Create a Labeling Category Configuration File for 3D Point Cloud Labeling Jobs.
For named entity recognition jobs, in addition to
"labels", you must provide worker instructions in the label category configuration file using the
"instructions"parameter:
"instructions": {"shortInstruction":"<h1>Add header</h1><p>Add Instructions</p>", "fullInstruction":"<p>Add additional instructions.</p>"}. For details and an example, see Create a Named Entity Recognition Labeling Job (API) ."}]
}
Note the following about the label category configuration file:
For image classification and text classification (single and multi-label) you must specify at least two label categories. For all other task types, the minimum number of label categories required is one.
Each label category must be unique, you cannot specify duplicate label categories.
If you create a 3D point cloud or video frame adjustment or verification labeling job, you must include
auditLabelAttributeNamein the label category configuration. Use this parameter to enter the
LabelAttributeNameof the labeling job you want to adjust or verify annotations of.
Type: String
Length Constraints: Maximum length of 1024.
Pattern:
^(https|s3)://([^/]+)/?(.*)$
Required: No
- LabelingJobAlgorithmsConfig
Configures the information required to perform automated data labeling.
Type: LabelingJobAlgorithmsConfig object
Required: No
- LabelingJobName
The name of the labeling job. This name is used to identify the job in a list of labeling jobs. Labeling job names must be unique within an AWS account and region.
LabelingJobNameis not case sensitive. For example, Example-job and example-job are considered the same labeling job name by Ground Truth.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 63.
Pattern:
^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}
Required: Yes
- OutputConfig
The location of the output data and the AWS Key Management Service key ID for the key used to encrypt the output data, if any.
Type: LabelingJobOutputConfig object
Required: Yes
- RoleArn
The Amazon Resource Number (ARN) that Amazon SageMaker assumes to perform tasks on your behalf during data labeling. You must grant this role the necessary permissions so that Amazon SageMaker can successfully complete data labeling.
Type: String
Length Constraints: Minimum length of 20. Maximum length of 2048.
Pattern:
^arn:aws[a-z\-]*:iam::\d{12}:role/?[a-zA-Z_0-9+=,.@\-_/]+$
Required: Yes
- StoppingConditions
A set of conditions for stopping the labeling job. If any of the conditions are met, the job is automatically stopped. You can use these conditions to control the cost of data labeling.
Type: LabelingJobStoppingConditions object
Required: No
An array of key/value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
Type: Array of Tag objects
Array Members: Minimum number of 0 items. Maximum number of 50 items.
Required: No
Response Syntax
{ "LabelingJobArn": "string" }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- LabelingJobArn
The Amazon Resource Name (ARN) of the labeling job. You use this ARN to identify the labeling job.
Type: String
Length Constraints: Maximum length of 2048.
Pattern:
arn:aws[a-z\-]*:sagemaker:[a-z0-9\-]*:[0-9]{12}:labeling-job/.*
Errors
For information about the errors that are common to all actions, see Common Errors.
- ResourceInUse
Resource being accessed is in use.
HTTP Status Code: 400
-: | https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateLabelingJob.html | 2022-06-25T02:58:05 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.aws.amazon.com |
Most relevant administration functions are directly accessible from the IJC user interface. This document describes some things that are not currently possible through the IJC UI and must be performed manually.
On rare occasions the meta data that describes the database contents to Instant JChem may become corrupted. When this happens there may be items that are not understood and cause error messages to be displayed on startup. The only current way to avoid this is to manually remove these bad items from the IJC meta data tables. The error message will report the ID of the offending item (a 32 character hexadecimal string). This ID will correspond to an item in the IJC_SCHEMA table.
Before performing this operation you are advised to back up the IJC_SCHEMA table in case the wrong item(s) are deleted.
Connect to the database using a SQL editor (e.g. the database explorer provided with IJC). Examine the contents of the IJC_SCHEMA table to confirm that there is a row which has a ITEM_ID with the corresponding value. Delete this row using SQL such as this:
DELETE FROM IJC_SCHEMA WHERE ITEM_ID = 'uuid' OR PARENT_ID = 'uuid'
where uuid is replaced with the actual 32 character ID.
It is clearly possible for a user with administration access to drop a table or remove a column. The previously reported warnings are now replaced with a link which the user will see at logging in time in the output window. On clicking the link, the user can optionally chose to amend the schema by removing either the field or entity which references the missing column or table respectively. Any link not processed will subsequently re-appear if the user disconnect/connect again.
Missing Edge
In the case of forms and edges which reference entities that no longer exist, IJC will remove the edge item and any bound form widget will be shown as invalid.
his article [1] summarizes the default ORDER BY behaviour for many RDBMs. In short the default behaviour is as follows:
[1] | https://docs.chemaxon.com/display/docs/manual-instant-jchem-schema-admin-functions.md | 2022-06-25T01:32:27 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.chemaxon.com |
psscale¶
Plot gray scale or color scale bar
Synopsis¶
gmt psscale [ -B[p|s]parameters ] [ -Ccpt ] [ -Drefpoint ] [ -Fpanel ] [ -Gzlo/zhi ] [ -I[max_intens|low_i/high_i] ] [ -Jparameters ] [ -K ] [ -L[i][gap] ] [ -M ] [ -N[p|dpi ]] [ -O ] [ -P ] [ -Q ] [ -Rregion ] [ -S[+aangle][+c|n][+s][+xlabel][+yunit] ] [ -U[stamp] ] [ -V[level] ] [ -Wscale ] [ -X[a|c|f|r][xshift] ] [ -Y[a|c|f|r][yshift] ] [ -Zwidthfile ] [ -pflags ] [ -ttransp ] [ --PAR=value ]
Description¶
Plots gray scales or color scales on maps. Both horizontal and vertical scales are supported. For color palette tables . For a full overview of CPTs, see the Cookbook section on Color palette tables.
Example of a horizontal colorbar placed below a geographic map.¶
Required Arguments¶
None.
Optional Arguments¶
- -B[p|s]parameters
Set annotation, tick, and gridline interval for the colorbar. The x-axis label will plot beneath a horizontal bar (or vertically to the right of a vertical bar), except when using the +m modifier of the -D option. As an option, use the y-axis label to plot the data unit to the right of a horizontal bar (and above a vertical bar). If -B is omitted, or no annotation intervals are provided (classic mode only), the default is to annotate every color level based on the numerical entries in the CPT (which may be overridden by ULB flags in the CPT). The exception to this rule is for CPT files that were scaled to fit the range of a grid exactly and thus have arbitrary color levels; these will trigger an automatic -Baf setting. To specify custom text annotations for intervals, you must append ;annotation to each z-slice in the CPT. Note: The -B option relies on the -R and -J settings of the given hierarchical level to plot correctly. For standard -B operations, (See full description) (See cookbook information).
- -C[cpt]
cpt is the CPT to be used. If no cpt is appended or no -C is given then we use the current CPT (modern mode only). In classic mode, if no -C is given then we read stdin. By default all color changes are annotated. To use a subset, add an extra column to the CPT with a L, U, or B to annotate Lower, Upper, or Both color segment boundaries (but see -B). Like grdview, we.
- -D[g|j|J|n|x]refpoint[+wlength[/width]][+e[b|f][length]][+h|v][+jjustify][+m[a|c|l|u]][+n[txt]][+odx[/dy]][+r]. If length is not given then it defaults to 80% of the corresponding map side dimension. If either length or width end with % then those percentages are used instead to set the dimensions, where width is defined as a percentage of the colorbar length. Give a negative length to reverse the scale bar, or append +r. Append +h to get a horizontal scale [Default is vertical (+v)]... If not given, the default argument is JBC (Place color bar centered beneath current plot).
- -F[+cclearances][+gfill][+i[[gap/]pen]][+p[pen]][+r[radius]][+s[[dx/dy/][shade]]]
Without further options, draws a rectangular border around the colorbar -max_intens to +max_intens. If not specified, 1 is used. Alternatively, append low/high intensities to specify an asymmetric range [Default is no illumination].
- -Jparameters
Specify the projection. (See full description) (See cookbook summary) (See projections table).
-L[i][gap]
Gives equal-sized color rectangles. Default scales rectangles according to the z-range in the CPT (Also see -Z). should be encoded graphically. To preferentially draw color rectangles (e.g., for discrete colors), append p. Otherwise we will preferentially draw images (e.g., for continuous colors). Optionally append effective dots-per-inch for rasterization of color scales [600].
- -Q
Select logarithmic scale and power of ten annotations. All z-values in the CPT will be converted to p = log10(z) and only integer p values will be annotated using the 10^p format [Default is linear scale].
[+aangle][+c|n][+s][+xlabel][+yunit]
Control various aspects of color bar appearance when -B is not used. Append +a to place annotations at the given angle [default is no slanting]. Append +c to use custom labels if given in the CPT as annotations. Append +n to use numerical labels [Default]. Append +s to skip drawing gridlines separating different color intervals [Default draws gridlines]. If -L is used then -B cannot be used, hence you may optionally set a bar label via +xlabel and any unit (i.e., y-label) via +yunit.
- -U[label|+c][+jjust][+odx[/dy]]
Draw GMT time stamp logo on plot. (See full description) (See cookbook information).
- -V[level]
Select verbosity level [w]. (See full description) (See cookbook information).
- -Wscale
Multiply all z-values in the CPT by the provided scale. By default the CPT is used as is.
- -X[a|c|f|r][xshift]
Shift plot origin. (See full description) (See cookbook information).
- -Y[a|c|f|r][yshift]
Shift plot origin. (See full description) (See cookbook information).
- -Zwidthfile
File with colorbar-width per color entry. By default, width of entry is scaled to color range, i.e., z = 0-100 gives twice the width as z = 100-150 (Also see -L). Note: The widths may be in plot distance units or given as relative fractions and will be automatically scaled so that the sum of the widths equals the requested bar length.
- -p[x|y|z]azim[/elev[/zlevel]][+wlon0/lat0[/z0]][+vx0/y0] (more …)
Select perspective view. (Required -P >. See option -N for affecting these decisions. Also note that for years now, Apple’s Preview insists on smoothing deliberately course CPT color images to a blur. Use another PDF viewer if this bothers you.
For cyclic (wrapping) color tables the cyclic symbol is plotted to the right of the color bar. If annotations are specified there then we place the cyclic symbol at the left, unless +n was used in which case we center of the color bar instead.
Discrete CPTs may have transparency applied to all or some individual slices. Continuous CPTs may have transparency applied to all slices, but not just some.
See Also¶
gmt, makecpt gmtlogo, grd2cpt psimage, pslegend | https://docs.generic-mapping-tools.org/6.3/psscale.html | 2022-06-25T02:41:25 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['_images/GMT_colorbar.png', '_images/GMT_colorbar.png'],
dtype=object) ] | docs.generic-mapping-tools.org |
How do you hide/unhide composer or typing area on the bot?
At times, you do not want users to type out their queries, but would rather have them choose from the menu options provided to guide them. To do that, you first need to disable the typing area. Once you have done that, you can simply add menu options and various quick replies to your bot for the user to reach the end goal.
This approach usually works well for e-commerce sites. For example, you can have menu options like product information, track order, return, or refund, with each option leading to an actionable next step. Here, the user just needs to select the options presented and proceed through the conversational journey. You should not disable the composer or typing area on bots that deal with customer service, as users prefer explaining their problem in their own words instead of following a set journey.
How to hide the typing area?
In order to hide or disable the typing area of your bot, you need to navigate to the Business Manager > Channels > SDK Configurations.
Here, scroll to typingArea in the Disable features dropdown and select it.
Initially, when the typing area was not hidden, the bot screen displayed the composer where users could type. Once the typing area was disabled, the composer no longer appeared, and users could only interact through the options provided.
- What is Knowledge Base
- Knowledge Base Integration
- Points to note
- Analytics for Knowledge Base Integration
Knowledge Base Integration
This is the ability of the Haptik virtual assistants to connect with third-party knowledge management systems and answer user queries. The solution works by dynamically picking the most relevant articles from the third-party knowledge base and suggesting them to the user on the virtual assistant.
The screenshot below illustrates this: a user's query "Tell me about Android SDK" is answered with the relevant articles from the knowledge base.
Knowledge Base solutions are provided by multiple vendors, and Haptik can enable those to be integrated with your virtual assistant depending on your requirements. For example, the Haptik platform can integrate out of the box with Zendesk (i.e., Zendesk Guide) and USU Knowledge Solutions.
More such out-of-the-box integrations are on the Roadmap with other key knowledge base providers. To add to that, the Haptik platform has the capability to integrate with other custom or enterprises’ in-house knowledge base solutions too, as long as the content can be accessed over REST APIs.
It is important to understand what the user's experience has been while conversing with the bot, which is why we collect feedback from the users at the end of the conversational journey. The feedback collection message is shown after the article suggestions - “Was I able to help you?”. This message ONLY shows if the feedback collection is enabled for the bot on the Business Manager.
Points to note
- When a user message is received on the bot, the bot will try to find the response in the following order of priority -
- STEPs present on the Graph
- FAQ STEPs
- Articles from Knowledge Base
- Small talk
- During the course of a conversation, a user will be sent article suggestions from the Knowledge Base only -
- If the user’s message is the first message in a conversation.
- If the previous user message was responded by the bot using some STEP
The GIF here, shows the representation where the user asks "What is Conversation Studio?", to which the bot responds with a set of articles related to the user's intent, i.e. Conversation studio.
Analytics for Knowledge Base Integration
Analyzing the responses served from the Knowledge Base is essential to measure its effectiveness. The user messages that received article suggestions from the Knowledge Base can be analyzed on the Intelligent Analytics tool.
You can download the messages from the “Message Analysis” section on the Intelligent Analytics tool. The “Message Analysis” CSV has a column “Response Container”. The messages that receive Knowledge Base article suggestions would have the value of “Response Container” marked as “KMSResponseContainer”.
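For example, once the CSV has been downloaded, the Knowledge Base responses could be isolated with a short script like the sketch below (the file name is a placeholder, and pandas is assumed to be available):

import pandas as pd

messages = pd.read_csv("message_analysis.csv")
# Keep only user messages that were answered with Knowledge Base article suggestions
kms_messages = messages[messages["Response Container"] == "KMSResponseContainer"]
print(f"{len(kms_messages)} of {len(messages)} messages were answered from the Knowledge Base")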
To integrate your Knowledge Base with the virtual assistant, please get in touch with your Haptik account executive or solution consultant. You can write to us at [email protected].
Large Map Coloring¶
This example solves a map coloring problem to demonstrate an out-of-the-box use of
Ocean’s classical-quantum hybrid sampler, dwave-hybrid
KerberosSampler, that enables you to solve problems
of arbitrary structure and size.
Map coloring is an example of a constraint satisfaction problem (CSP). CSPs require that all of a problem's variables be assigned values, out of a finite domain, that satisfy all constraints. The map-coloring CSP is to assign a color to each region of a map such that any two regions sharing a border have different colors.
Solve the Problem by Sampling¶
Ocean’s dwave_networkx can return a
minimum vertex coloring for a graph,
which assigns a color to the vertices of a graph in a way that no adjacent vertices
have the same color, using the minimum number of colors. Given a graph representing a
map and a sampler, the
min_vertex_coloring() function tries to
solve the map coloring problem.
dwave-hybrid
KerberosSampler
is a classical-quantum hybrid asynchronous decomposition sampler, which can decompose large problems
into smaller pieces that
it can run both classically (on your local machine) and on the D-Wave system.
Kerberos finds best samples by running in parallel tabu search,
simulated annealing, and D-Wave subproblem sampling on
problem variables that have high impact. The only optional parameters set here
are a maximum number of iterations and number of iterations with no improvement that
terminates sampling. (See the Problem With Many Variables example for more details on configuring
the classical and quantum workflows.)
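The calls below assume a NetworkX graph G whose nodes are the map's regions and whose edges join regions that share a border. The original example builds G from real map data; a minimal stand-in (with placeholder region names) could be created like this:

>>> import networkx as nx
>>> G = nx.Graph()
>>> # Each edge joins two regions that share a border
>>> G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])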
>>> import dwave_networkx as dnx
>>> from hybrid.reference.kerberos import KerberosSampler
>>> coloring = dnx.min_vertex_coloring(G, sampler=KerberosSampler(), chromatic_ub=4, max_iter=10, convergence=3)
>>> set(coloring.values())
{0, 1, 2, 3}
Note
The next code requires Matplotlib.
Plot the solution, if valid.
>>> import matplotlib.pyplot as plt
>>> node_colors = [coloring.get(node) for node in G.nodes()]
>>> # Adjust the next line if using a different map
>>> if dnx.is_vertex_coloring(G, coloring):
...    nx.draw(G, pos=nx.shell_layout(G, nlist = [list(G.nodes)[x:x+10] for x in range(0, 50, 10)] + [[list(G.nodes)[50]]]), with_labels=True, node_color=node_colors, node_size=400, cmap=plt.cm.rainbow)
>>> plt.show()
The resulting plot from one such run shows a valid coloring of the map.
Institutions
What is an Institution?
Please see Yapily's guide on Institutions for more information.
How can I test without using live accounts?
Please see Yapily's guide on sandboxes.
What set of features can each Institution provide?
The features array in the Institution response lists all the features a particular Institution supports. The full list of features available in Yapily is given in FeatureEnum.
Yapily's signed customers can request the latest coverage from Yapily in spreadsheet form by emailing [email protected].
How do I find out when an Institution is experiencing downtime?
Monitoring information is available for Yapily clients using the GET Institution and GET Institutions endpoints. If not already enabled, please reach out to [email protected] to request this to be enabled:
{ "monitoring": { "ACCOUNT": { "status": "Up", "lastTested": "2020-08-26T17:46:40.901Z", "span": "P14DT9H5M26.458S" }, "IDENTITY": { "status": "Expired", "lastTested": "2020-08-25T18:33:45.255Z", "span": "PT22H15M37.897S" }, "ACCOUNT_TRANSACTIONS": { "status": "Up", "lastTested": "2020-08-26T17:46:40.008Z", "span": "P14DT9H5M27.347S" }, "EXISTING_PAYMENTS_DETAILS": { "status": "Expired", "lastTested": "2020-08-21T12:43:51.838Z", "span": "P5DT4H5M31.314S" } } } | https://docs.yapily.com/pages/home/faqs/institutions/ | 2022-06-25T02:49:36 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.yapily.com |
Court Records
Look up your user's court records
Overview
This merit looks up a user's court records after the user has been verified by the other merits in your metamap.
Features
MetaMap looks up a user's court records as requested.
Availability
Court Records are available for the following countries:
- Argentina
- Brazil
- Chile
- Colombia
- Ecuador
- Mexico
- Peru
- Paraguay
- Venezuela
If you would like to add a court records check to your metamap, or if a country you are interested in is not listed, reach out to [email protected].
Court record lookups are a per transaction service
You will be charged for each court record lookup initiated.
User Flow
Your user must upload a national identity document as part of our Document Verification product.
Currently, Document Verification is a required step that must be added before this block.
Document Verification
Flowchart of Metamap government check screens: start, select country, enter ID, upload ID front and back, and done.
How it Works
For metamaps with court record lookups, the lookup is available only if a user has been verified by all other merits in the metamap and has provided enough information to initiate a lookup.
If the user has been verified for all other merits and we have obtained enough information, the court record lookup will execute automatically.
Manually initiate a court record lookup
If a user has failed a check, been rejected, or requires review, the court record lookup will not be available unless you have manually changed their status to "verified."
After you have manually verified a user's status, you can initiate a court record lookup.
Dashboard screen showing a user's background check is available. The "Request Background Check" button is to the left of center.
Otherwise, you will be notified that the court record lookup step is not available.
Dashboard screen showing a user's court record lookup is unavailable. The "Request Background Check" button is disabled.
Setup
Step 1: Setup verification flow
Metamap with Document Verification and Background Checks (Court Records) merits selected.
Add the Background Checks merit in the Dashboard to review a user's court records.
The merit requires the Document Verification merit
Users must upload valid identifying documents as part of this merit.
Step 2: Integrate
There are 3 ways you can use MetaMap's Court Records:
- Direct Link — Send your users a link to enter their information on MetaMap's prebuilt UX
- Court Records in the dashboard
- Install and implement an SDK framework
Integrate via API
Use our API Integration if you want to use MetaMap's Court Records endpoints but design your own experience for your users.
Step 3: Process Verification Results
Court Records Results
When we have obtained the completed court records, the Dashboard screen will refresh to show:
- A summary indicating that the user is:
- Approved
- Low risk (Available for BR only)
- High risk
- User data
- A list of searches run against the user. Each search result will have one of three statuses:
- Approved
- Low risk (Available for BR only)
- High risk
Court Records: Approved. All 60 checks returned show that the user is approved.
Court Records: High risk. 2 of 60 checks returned show that the user is high risk.
Webhook verification results
You will need to configure your webhooks, then handle the webhook responses that will be sent to your webhook URL.
- Brazilian Court Records Webhooks
- Mexican Court Records Webhooks
- Court Records Webhooks for Argentina, Chile, Colombia, Ecuador, Peru, Paraguay, and Venezuela
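As a rough sketch only (the actual payload schema and event types are defined in the webhook guides linked above), a minimal receiver could look like this; Flask is used purely for illustration, and the route path is a placeholder:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/metamap/webhooks", methods=["POST"])
def court_record_webhook():
    event = request.get_json(force=True)
    # Inspect the payload and route it to your own handling logic;
    # field names in the payload depend on MetaMap's webhook documentation.
    print("received webhook event:", event)
    return jsonify({"received": True}), 200

if __name__ == "__main__":
    app.run(port=8000)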
This module is used to install and manage ruby installations and gemsets with RVM, the Ruby Version Manager. Different versions of ruby can be installed and gemsets created. RVM itself will be installed automatically if it's not present. This module will not automatically install packages that RVM depends on or ones that are needed to build ruby. If you want to run RVM as an unprivileged user (recommended), you will have to create this user yourself. This is what a state configuration could look like:
rvm:
  group.present: []
  user.present:
    - gid: rvm
    - home: /home/rvm
    - require:
      - group: rvm

rvm-deps:
  pkg.installed:
    - pkgs:
      - bash
      - coreutils
      - gzip
      - bzip2
      - gawk
      - sed
      - curl
      - git-core
      - subversion

mri-deps:
  pkg.installed:
    - pkgs:
      - -1-dev
      - autoconf
      - libc6-dev
      - libncurses5-dev
      - automake
      - libtool
      - bison
      - subversion
      - ruby

jruby-deps:
  pkg.installed:
    - pkgs:
      - curl
      - g++
      - openjdk-6-jre-headless

ruby-1.9.2:
  rvm.installed:
    - default: True
    - user: rvm
    - require:
      - pkg: rvm-deps
      - pkg: mri-deps
      - user: rvm

jruby:
  rvm.installed:
    - user: rvm
    - require:
      - pkg: rvm-deps
      - pkg: jruby-deps
      - user: rvm

jgemset:
  rvm.gemset_present:
    - ruby: jruby
    - user: rvm
    - require:
      - rvm: jruby

mygemset:
  rvm.gemset_present:
    - ruby: ruby-1.9.2
    - user: rvm
    - require:
      - rvm: ruby-1.9.2
salt.states.rvm.
gemset_present(name, ruby='default', user=None)¶
Verify that the gemset is present.
name: The name of the gemset.
ruby: The ruby version this gemset belongs to.
user: The user to run rvm as.
New in version 0.17.0.
salt.states.rvm.
installed(name, default=False, user=None, opts=None, env=None)¶
Verify that the specified ruby is installed with RVM. RVM is installed when necessary.
name: The version of ruby to install.
default: Whether to make this ruby the default.
user: The user to run rvm as.
env: A list of environment variables to set (ie, RUBY_CONFIGURE_OPTS).
opts: A list of option flags to pass to RVM (ie -C, --patch).
New in version 0.17.0.
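For example, the env and opts parameters could be supplied from a state like the following sketch (the ruby version, flags, and variables shown are only illustrative):

ruby-2.7.2:
  rvm.installed:
    - user: rvm
    - opts:
      - -C --enable-shared
    - env:
      - RUBY_CONFIGURE_OPTS=--disable-install-doc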
- NAME
- DESCRIPTION
- Supported Encodings
- Unsupported encodings
- Encoding vs. Charset -- terminology
- Encoding Classification (by Anton Tagunov and Dan Kogai)
- Glossary
- See Also
- References
NAME
Encode::Supported -- Encodings supported by Encode
DESCRIPTION
Supported Encodings
The following encodings are always available.
Encode::Unicode -- other Unicode encodings
Unicode coding schemes other than native utf8 are supported by Encode::Unicode, which will be autoloaded on demand.
Encode::Byte -- Extended ASCII encodings
Unsupported encodings
The following encodings are not supported, because they need an algorithmic approach that is currently unsupported by enc2xs; the list of affected encodings is given in the full documentation.
Encoding vs. Charset -- terminology
Encoding Classification (by Anton Tagunov and Dan Kogai)
In the classification tables of the full documentation, a name marked (**) is a proprietary name.
Microsoft-related naming mess
Microsoft products misuse the following names:
- KS_C_5601-1987
Microsoft extension to EUC-KR.
Proper names: CP949, UHC, x-windows-949 (as used by Mozilla).
See the referenced link for details.
Encode aliases KS_C_5601-1987 to cp949 to reflect this common misusage. Raw KS_C_5601-1987 encoding is available as kcs5601-raw.
See Encode::KR for details.
- GB2312
Microsoft extension to EUC-CN.
Proper names: CP936, GBK.
GB2312 has been registered in the EUC-CN meaning at IANA. This has partially repaired the situation: Microsoft's GB2312 has become a superset of the official GB2312.
Encode aliases GB2312 to euc-cn in full agreement with IANA registration. cp936 is supported separately. Raw GB_2312-80 encoding is available as gb2312-raw.
See Encode::CN for details.
- Big5
Microsoft extension to Big5.
Proper name: CP950.
Encode separately supports Big5 and cp950.
- Shift_JIS
Microsoft's understanding of Shift_JIS.
JIS has not endorsed the full Microsoft standard however. The official Shift_JIS includes only JIS X 0201 and JIS X 0208 character sets, while Microsoft has always used Shift_JIS to encode a wider character repertoire. See the IANA registration for Windows-31J.
As a historical predecessor, Microsoft's variant probably has more rights for the name, though it may be objected that Microsoft shouldn't have used JIS as part of the name in the first place.
Unambiguous name: CP932.
IANA name (also used by Mozilla, and provided as an alias by Encode): Windows-31J.
Encode separately supports Shift_JIS and cp932.
Shift_JIS has lost this meaning in MIME context since [RFC 2130], the charset abbre…
See Also
Encode
References
Ken Lunde, CJKV Information Processing, 1999, O'Reilly & Associates, ISBN 1-56592-224-7.
Add folders
Overview
The folders from where the inSync Client backs up data are called backup folders. Your inSync administrator can allow you to add folders for backup.
Note: The inSync administrator can restrict backing up of certain folders to comply with security regulations. If you try to add a folder that is restricted for backup, "The folder has been excluded from backup by your IT administrator" message is displayed. For more information on folders that are excluded from backup by the administrator, see View Global Exclusions list.
To add folders for backup
- Start the inSync Client.
- In the navigation pane, click Backup & Restore.
- In the right pane, click Add folder. The Add Folder window appears.
- Select the Show hidden items check box located at the top-right corner of the Add folder window. Selecting this check box enables you to view hidden files and folders in the folders that you navigate and select for back up.
- Under Quick Configuration or My Computer, navigate and click the folder that you want to back up and then click Next.
- If the parent folder is checked, all the files and folders inside that directory will be included for backup.
If you want to back up only certain files and folders within a parent folder, clear the parent folder check box and select only the files and folders that you want to include in the backup.
- Select the files or folders that you want to back up. When you select a folder for backup, all files and folders, including the hidden files and folders, are selected for backup. You can explicitly remove the files and folders that you want to exclude from backup by clearing the adjacent check box.
Note: On the Mac client, if you configure a folder for backup that contains a symlinked (SYMLINK) folder, the contents of that symlinked folder are not displayed.
However, if the folder you configure for backup is itself a symlinked folder, all of its contents are displayed.
- Click Add.
Configure backup settings of the selected folder
You can change the configuration settings of the folders that you configure for backup. By default, all files from a selected folder are backed up. You can include or exclude some files or file types from backup.
Note: You cannot change the configuration settings of the folders that your administrator has included in the backup.
To configure backup settings
- Start the inSync Client.
- In the navigation pane, click Backup & Restore.
- In the right pane, click Add folder. The Add Folder window appears.
- Under Quick Configuration or My Computer, navigate and click the folder that you want to back up and then click Next.
- Click Backup Rules... for the backup folder that you want to configure.
- Under Local > Folder Content, if you want to back up all the files, select the Backup All Files option.
If you want to include only specific files for backup, click Backup Specific Files and select the required file filters.
- Provide the appropriate information for each field.
- Click Apply.
- The folder filters specified under the Global Exclusions tab are defined by the administrator. For more information, see View Global Exclusions list.
inSync IPv6 Support
Overview
With the growing need for IPv4 and IPv6 dual-stack support, inSync supports IPv6 along with IPv4. By default, inSync Server does not support an inSync Client connecting over an IPv6 address. However, a Client can connect to the server using an IPv6 address if the Client uses NAT64-enabled Microsoft DirectAccess as its VPN solution or if the server lies behind a NAT64 translator. In the case of Microsoft DirectAccess, the NAT64/DNS64 gateway that is integrated with DirectAccess creates an IPv6 address to allow the external Client to connect to the inSync Server over an IPv6 route. Data from the IPv6 network is routed via the NAT64/DNS64 gateway, which performs all the necessary translations for transferring data between the IPv6 address and the IPv4-only server.
How does it work?
Here is a description of how Microsoft DirectAccess NAT64 and DNS64 work together to provide DirectAccess users access to IPv4 machines on the corporate network:
- It all starts when the DirectAccess Client tries to connect to an application server, it sends a DNS query to the DNS64 to get the address of the application server. It is important to note that DirectAccess Clients have connectivity to the corporate network only over IPv6, therefore their DNS queries are always IPv6 DNS queries that are called “AAAA” (quad A).
- After it gets the query from the Client, the DNS64 sends two DNS queries: an IPv4 query (A query) and an IPv6 query (AAAA query) to the corporate DNS.
- If DNS64 got only an IPv4 address in response, it is assumed that there is only IPv4 connectivity to this server and therefore NAT64 will have to bridge all traffic. Since the Client needs an IPv6 address, DNS64 generates an IPv6 address from the IPv4 address based on the NAT64 prefix configured on the DirectAccess prefixes page.
- After the Client machine has the address of the application server, it starts sending data packets to this server. The packets are sent to the DirectAccess NAT64 since all IPv6 addresses that are included in the NAT64 prefix are routed to DirectAccess.
- NAT64 receives the data packet and tries to determine the IPv4 address that is associated with the destination IPv6 address. Then it creates a new IPv4 packet that has the same payload and sends it to the application server.
FAQs
What is IPv6?
IPv6 (Internet Protocol version 6) is the latest version of the Internet Protocol that is designed to supplement and eventually be the successor of IPv4, which is the protocol predominantly in use today. IPv6 was developed by the Internet Engineering Task Force (IETF).
What’s the difference between IPv4 and IPv6?
The key difference between the versions of the protocol is that IPv6 has significantly more address space.
What are the different transition techniques for IPv6 transition?
The main transition techniques are as follows:
- Dual Stack – The network stack supports both IPv4 and IPv6.
- Tunneling – The IPv6 packets are encapsulated within IPv4 packets.
- Translation – Protocol translation between IPv4 and IPv6 is performed.
What is DirectAccess?
DirectAccess is a Microsoft remote-access technology that gives managed client machines seamless, always-on connectivity to the corporate network without requiring a traditional VPN connection.
What is NAT64?
NAT64 is an IPv6 transition mechanism that facilitates communication between IPv6 and IPv4 hosts by using a form of network address translation (NAT). The NAT64 gateway is a translator between the IPv4 and IPv6 protocols; to function, it needs at least one IPv4 address and an IPv6 network segment comprising a 32-bit address space.
Parameters
Parameters are used to customize Routing Strategies. For example, you can use a Schedule parameter to define the opening and closing hours of your contact center. Other parameters allow you to perform tasks such as modify call flows or insert holiday greetings on defined days.
Parameters are grouped by Genesys into a collection of parameters known as a Parameter Group Template. Genesys deploys the Parameter Group Template to you, and then you can customize the values of the parameters in the template. These parameters are then read by a Routing Strategy and incorporated into the call flow.
The Parameters screen displays a list of all parameters that are available to you. Click a parameter in the list to see its properties displayed in a panel that opens to the right.
Note: To modify parameters within a parameter group, see Parameter Groups.
Ten Year Agile Retrospective: How We Can Improve In The Next Ten Years
Jeff Sutherland
June 2011
Ten years after the publication of the Agile Manifesto, Jeff Sutherland describes the successes of Agile and pinpoints four key success factors for the next ten years.
Applies To
Agile, Scrum, software development, project management, Team Foundation Server
Introduction
Key Success Factor #1: Demand Technical Excellence
Key Success Factor #2: Promote Individual Change and Lead Organizational Change
Key Success Factor #3: Organize Knowledge and Improve Education
Key Success Factor #4: Maximize Value Creation Across the Entire Process
Conclusion
Ten years after the Agile Manifesto was published, some of the original signatories met with a larger group of Agile thought leaders at Snowbird, Utah, to do a retrospective on 10 years of Agile software development. They celebrated the success of the Agile approach to product development and reviewed the key impediments to building on that success. And they came to unanimous agreement on four key success factors for the next 10 years:
Demand technical excellence.
Promote individual change and lead organizational change.
Organize knowledge and improve education.
Maximize value creation across the entire process.
In this article, I describe the key success factors as determined by the 10 Year Agile Retrospective, and then I list the highest priority problems that keep organizations from acting on those success factors.
Key Success Factor #1: Demand Technical Excellence
Teams that deploy applications in short increments and get rapid feedback from end users have largely driven the explosion of the Internet and the proliferation of applications on smartphones. This is formalized in Agile practice by developing products in short time boxes, which are called sprints and last no more than a month and most often two weeks. We framed this issue in the Agile Manifesto by saying that “we value working software over comprehensive documentation.”
The 10 Year Agile Retrospective concluded that the majority of Agile teams are still having difficulty testing their work within sprints because neither the management, nor the business, nor the customers, nor the development teams demand technical excellence.
Impediment #1: Lack of READY Product Backlog
The Product Owner is responsible for working with stakeholders and customers to develop a product backlog that is broken down into small pieces, clear to the developers, immediately actionable, estimated in points by the team that will implement it, and testable (i.e., acceptance tests are clearly defined that determine whether a backlog item is done). A strong definition of done that is continuously improved is a hallmark of a high performing Agile team.
Most development teams do not have a good product backlog. I regularly query Scrum practitioners in the Americas, Europe, and Asia on what type of backlog they have. Over 80% have user stories for their product backlog, but less than 10% find their current user stories acceptable.
Systematic, a CMMI Level 5 company with extensive published data, has shown that their teams consistently double their velocity when product backlog is in a high ready state because the team must do only half the work[1]. So poor product backlog will typically reduce team performance by at least 50%. Complicating the problem, customers rarely or never use 65% of features that are created on average worldwide[2]. Poor product backlogs will have more unused features, as well as poor quality in the features that are used. If you develop less than half as fast as you could and 65% of what you build is waste, your performance is only 17.5% of what it could be.
Impediment #2: Lack of DONE Product at the End of a Sprint
You calculate team velocity by adding up the points for the product backlog items that are done at the end of each sprint. The product owner uses velocity to build release plans and a product roadmap. Teams use velocity to assess performance improvement. And management and boards of directors use velocity to assess the accuracy of product development plans and impact on revenue.
Lack of ready product backlog makes it impossible to get software done at the end of a sprint. Incomplete software is often the result of poor engineering practices, such as the following:
lack of testing during a sprint
poor configuration management
failure to implement continuous integration and automated testing
minimal or no pair programming or code reviews
All this leads to software that is not done. This means the team has no clear velocity, and the product owner cannot predict release dates or build a product roadmap to communicate with customers. The latest Scrum Guide[3] specifies that the product owner has a Release Burndown Chart that includes any work that is not done. For example, if 60 points burn down and there are still 40 points of bug fixing, integration testing, system testing, security testing, and documentation before release, the release burndown must burn up 40 points. Making this visible in the chart will clarify to the team and to management that for every six sprints completed, four sprints of undone work will remain before software can be put into production.
At the same time, if software is not done, testing occurs in later sprints. When coaching Palm, Inc. in 2006, we found that one hour of testing at code complete turned into 24 hours of testing effort three weeks later. The impact was that it would take two years to finish features that could be done in a month. Although this might be an extreme case, at Openview Venture Partners, we use Scrum in the venture teams and in portfolio companies[4]. We have never seen a company that did not take twice as long to deliver software when testing occurred in a sprint later than the sprint in which code was completed.
For these and many other reasons, Agile thought leaders agree that demanding technical excellence is the top priority for Agile management, Agile customers, Agile developers, and Agile stakeholders for the next 10 years. We know from Systematic data[1, 5, 6] and data from many other companies[7] that this change will at least double productivity in most teams, and it will quadruple productivity in high-performing teams.
Key Success Factor #2: Promote individual change and lead organizational change
In addition to technical excellence, Agile adoption requires rapid response to changing requirements. This was the fourth principle of the Agile Manifesto – “respond to change over following a plan.” However, individuals adapting to change is not enough. Organizations must be structured for Agile response. Failure to remove impediments that block progress destroys existing high-performing teams and prevents the formation of new high-performing teams.
Agility requires a significant mindset change: from focusing on a big upfront plan, to focusing on delivering the maximum value to customers, who are always changing their minds for good reasons. For example, customers get a better understanding of their business or their market changes, and they need software to adapt to those changes. Over 65% of requirements change during development for the average project worldwide, and the pace of change is increasing[2].
Impediment #1: Failure to See Impediments
The Scrum Daily Meeting is designed to surface blocks or impediments so that the team can remove them. Often individual team members are trying to work in isolation on their individual stories. A large number of items are open, almost nothing is done, sprint failure is almost certain, and team members will say they have no impediments working on their own little piece. Thus the team is actually a group of isolated individuals and not working as a team. They fail to see that they are their own biggest impediment. They need to individually change their behavior to work as a team.
Impediment #2: Tolerating Defects
A second major pattern of failure is a team not seeing any impediments despite having many open defects that are not quickly repaired. Industry data shows that fixing bugs on the same day as they are discovered will double the velocity of a team[6].
In order for Agile teams to succeed, Agile individuals must train, motivate, and lead the organization through a major change process. Failure or inability to lead this organizational change is the second most important issue that blocks Agile progress in enterprises today.
Scrum is a continuous process improvement approach to identifying and removing impediments to performance on a daily basis. Teams need to identify impediments in daily meetings and retrospectives and remove them quickly. Often 80% of the impediments require management help to remove. Management needs to understand Agile development and participate fully in its success. That means management must change along with the teams and lead the company forward. The widespread failure to do this in the face of increasing Agile adoption is of great concern to Agile leadership. Thus Agile leaders must lead organizational change.
Impediment #3: ScrumMaster Is Not An Agent of Change
A recent survey of 91 countries showed that 75% of Agile development worldwide is based on Scrum[8]. The ScrumMaster owns the process, is responsible for team performance, and must educate everyone involved on how to continuously improve by removing impediments. When the ScrumMaster finds an impediment that is outside the team but hinders team performance, the ScrumMaster is responsible for educating, training, and motivating people to take action. A good ScrumMaster is a catalyst for change in the organization. This responsibility is embodied in the work of Takeuchi and Nonaka that inspired Scrum[9].
In order to lead organizational change, we must provide better training and motivation for people at all levels in the organization. This means that the number three priority for Agile leadership in the next 10 years is to organize knowledge and improve education.
Key Success Factor #3: Organize Knowledge and Improve Education
Most managers and many developers are unaware of the large body of knowledge regarding teams and productivity. Scrum was designed to help teams incorporate best practices that evolved over many decades. Many of these best practices were formalized by the patterns movement and directly influenced the creation of Scrum and Extreme Programming[10]. Here are a few basic principles that are not well understood in the development community.
Impediment #1: Software Development is Inherently Unpredictable
Few people are aware of Ziv’s Law, that software development is unpredictable[11]. The failure rate on projects worldwide is over 65%, largely due to lack of understanding of this problem and the proper approach to deal with it. Traditional project management creates a plan that is expected to be delivered on time, within budget, and with predefined features. Yet, in the average project, over 65% of features change during development. Process engineering experts Ogunnaike and Ray[12] informed the co-creators of Scrum that using a predictive control system (waterfall) to control an empirical process (software development) was the origin of almost all chemical plant explosions and was at the root of most software project failures. These experts advised us to make sure that Scrum was an empirical control system based on inspect and adapt feedback loops.
Impediment #2: Users Do Not Know What They Want Until They See Working Software
Traditional project management assumes that users know what they want before software is built. As a result, over 65% of features built are either rarely or never used by the customers. This problem was formalized as “Humphrey’s Law” [13], yet it is systematically ignored in university and industry training of managers and project leaders.
Impediment #3: The Structure of the Organization Will Be Embedded in the Code
A third example of a major problem that is not generally understood is “Conway’s Law:” the structure of the organization will be reflected in the code[10]. A traditional hierarchical organizational structure will negatively affect object-oriented design, resulting in brittle code, bad architecture, poor maintainability and adaptability, along with excessive costs and high failure rates. Agile organizational patterns are designed to provide an organizational structure that supports good object design, which includes flexibility, adaptability, self-organization, reflection, and effective communication throughout the system via message passing, inheritance, and so on. Outmoded organizational structures fail to do this and produce bad code.
These are a few of the many principles that any organization that produces software must understand well. Fortunately, these principles are all encapsulated in good Agile practices. However, getting management support for these practices is a major impediment in most companies due to lack of ongoing education in fundamental principles for everyone from university students to boardroom directors.
Key Success Factor #4: Maximize value creation across the entire process
Agile practices can easily double or triple the productivity of a software development team if the product backlog is ready and the software is done at the end of a sprint. The bottleneck in most companies today is testing. Too often, software is not fully tested inside a sprint. That delay at least doubles the required testing later and, in some cases, requires 24 times as much testing. Fixing this problem will at least double a team's productivity.
Once a team starts going twice as fast, the product owner or product owner team must produce twice as much product backlog information. If there are poor user stories to begin with, they will be even worse when a development teams asks for twice as many of them.
Impediment #1: Lack of Agility in Operations and Infrastructure
As soon as talent and resources are applied to improve the product backlog, the flow of software to production will as least double. In some cases, the flow will be 5-10 times higher. This increase can cripple production, and any problems with development operations and infrastructure must be fixed. For example, a recent company transformation doubled the velocity of 27 Scrum teams. The lack of continuous integration produced twice as many bugs that could not be fixed within the sprint in which they were created. This delay generated several months of bug fixing after code complete and before product release. Even with this resultant drag on productivity, the other improvements in Agile approaches within the company enabled it to cut its development and deployment time from 19 months to 9 months with more features[14].
Impediment #2: Lack of Agility in Management, Sales, Marketing, and Product Management
At the front end of the process, business goals, strategies, and objectives are often not clear. This results in a flat or decaying revenue stream even when production of software doubles.
For this reason, everyone in an organization needs to be educated and trained on how to optimize performance across the whole value stream. Agile individuals need to lead this educational process by improving their ability to organize knowledge and train the whole organization. The few organizations that have trained everyone in Scrum all at once have always doubled, sometimes quadrupled, and occasionally gained market domination in one year.
The Bottom Line.
In order to improve, Agile teams must promote individual change in development practices and, at the same time, lead organizational changes that enable greater output of software to the market.
To move the organization forward, Agile developers need to organize their knowledge and develop data-driven communication that motivates the organization to change.
At the end of the day, everyone in the organization needs to focus on the value stream, from the initial concept to revenue generation. Agile developers understand this problem. They need to communicate solutions and lead organizational change. Only then will the principles and values articulated in the Agile Manifesto be fully realized.
References
[1] C. Jakobsen and J. Sutherland, "Scrum and CMMI – Going from Good to Great: are you ready-ready to be done-done?," in Agile 2009, Chicago, 2009.
[2] J. Johnson, "Standish Group Study Report," presented at XP2002, Sardinia, 2002.
[3] K. Schwaber and J. Sutherland, Scrum Guide: Scrum.org and Scrum, Inc., 2011.
[4] J. Sutherland and I. Altman, "Take No Prisoners: How a Venture Capital Group Does Scrum," in Agile 2009, Chicago, 2009.
[5] C. R. Jakobsen and K. A. Johnson, "Mature Agile with a Twist of CMMI," in Agile 2008, Toronto, 2008.
[6] J. Sutherland, C. Jakobsen, and K. Johnson, "Scrum and CMMI Level 5: A Magic Potion for Code Warriors!," in Agile 2007, Washington, D.C., 2007.
[7] G. Benefield, "Rolling Out Agile at a Large Enterprise," in HICSS'41, Hawaii International Conference on Software Systems, Big Island, Hawaii, 2008.
[8] VersionOne, "5th Annual Survey: 2010 - The State of Agile Development," VersionOne, 2011.
[9] H. Takeuchi and I. Nonaka, "The New New Product Development Game," Harvard Business Review, 1986.
[10] J. Coplien and N. Harrison, Organizational Patterns of Agile Software Development: Prentice Hall, 2004.
[11] H. Ziv and D. Richardson, "The Uncertainty Principle in Software Engineering," submitted to Proceedings of the 19th International Conference on Software Engineering (ICSE'97), 1997.
[12] B. A. Ogunnaike and W. H. Ray, Process Dynamics, Modeling, and Control: Oxford University Press, 1994.
[13] W. Humphrey, "The Watts New? Collection: Columns by the SEI's Watts Humphrey," news@sei, 2009.
[14] J. Sutherland and R. Frohman, "Hitting the Wall: What to Do When High Performing Scrum Teams Overwhelm Operations and Infrastructure," in Hawaii International Conference on Software Systems, Kauai, Hawaii, 2011.
Migrating mutation and property change events to mutation observers
Mutation observers provide developers with a way to detect insertion and removal of a DOM node. You can migrate existing code using mutation events and / or property change events to use mutation observers.
Note Mutation observers gained support from Internet Explorer 11 forward. They offer a fast-performing replacement for all of the same scenarios supported by the now deprecated mutation events, and an alternative to the scenarios supported by property change events.
Legacy techniques for monitoring DOM mutations
Mutation events play a key role in the web platform. They allow web apps to synchronously monitor dynamic changes to elements in the Document Object Model (DOM) of a webpage. While useful, mutation events are also known to cause app performance regressions mainly because of their synchronous nature and the event architecture the work on.
Note Mutation events (as defined in the W3C DOM Level 3 Events) have been deprecated in favor of mutation observers (W3C DOM4).
Property change events provide similar behavior as mutation events. They also carry a performance penalty because of the legacy browser event system they need to function correctly.
Note The onpropertychange event is only supported with the legacy attachEvent IE-only event registration model, which has been deprecated since Windows Internet Explorer 9 (and discontinued in IE11) in favor of the W3C standard "addEventListener" event model.
Identifying mutation events
The mutation events, first available in Internet Explorer 9, can be easily identified by their name, which is a string parameter passed to either the addEventListener or removeEventListener platform APIs:
- DOMNodeInserted
- DOMNodeRemoved
- DOMSubtreeModified
- DOMAttrModified
- DOMCharacterDataModified
Note Two additional mutation events are defined by the standard, but not supported by Internet Explorer: DOMNodeInsertedIntoDocument and DOMNodeRemovedFromDocument.
Here's an example of what one of these events might look like in JavaScript code:
someElement.addEventListener("DOMAttrModified", function() { //... }, false);
The DOMNodeInserted, DOMNodeRemoved, and DOMSubtreeModified mutation events monitor structural changes to an element's children—either elements are added to the element's children or they're removed. The DOMSubtreeModified event is for both:it's fired for removals and adds. However, it doesn't contain any information as to why it was fired (you can't distinguish an add from a remove based on the event alone).
The DOMAttrModified mutation event reports changes to an element's attribute list. This single event includes information related to attribute insertions, removals, or changes.
The DOMCharacterDataModified mutation event reports changes to an element's text content. Text content is grouped into logical units called text nodes, and only modifications to an existing text node will fire the DOMCharacterDataModified event. If new text nodes are inserted / created, they're reported as DOMNodeInserted events instead.
It should be simple to find mutation events in your code, like using the Find in Files... search feature of your favorite editor. Remember that variables are often used in the addEventListener method, so be sure to search first for the use of the mutation event strings ("DOMNodeInserted", "DOMNodeRemoved", etc.), and then double-check all occurrences of addEventListener to be sure you've found them all.
Identifying property change events
Property change events can be identified by the onpropertychange event name used along with the legacy attachEvent or detachEvent IE-only event registration APIs. Search for all occurrences of attachEvent and check the first parameter for onpropertychange to find these usages in your code.
The property change event fires when a DOM element's properties change. The event doesn't bubble and has been deprecated since Internet Explorer 9 in favor of the W3C standard "addEventListener" event model. The event includes the name of the property that changed in the events propertyName getter. Unfortunately, to dispatch a property change event, a number of other event attributes are also calculated, some of which force the layout engine to recalculate, causing a substantial performance cost to any application using these events.
Unlike with mutation events, the property change event doesn't cleanly map to mutation observers. However, its possible to replace the usage of property change events with mutation observers if the property names of interest are reflected in HTML attributes. For example, id, which reflects the id attribute, style.color which is reflected in the serialized style attribute, and className which corresponds to the class attribute.
Note For properties that aren't reflected in HTML attributes (such as value on input elements), you can use the ECMAScript 5 (JavaScript) feature called defineProperty. This document doesn't describe how to migrate property change events using the Object.defineProperty JavaScript API.
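As a sketch of that migration (the element variable and the logging are only illustrative), an onpropertychange handler that watched className could be replaced by observing the reflected class attribute:

// Legacy registration (deprecated, IE-only):
// element.attachEvent("onpropertychange", function (e) {
//     if (e.propertyName === "className") { /* react to class changes */ }
// });

// Replacement: observe only the reflected "class" attribute
var classObserver = new MutationObserver(function (records) {
    records.forEach(function (record) {
        // record.oldValue is populated because attributeOldValue is requested below
        console.log("class changed from", record.oldValue, "to", record.target.className);
    });
});
classObserver.observe(element, { attributes: true, attributeFilter: ["class"], attributeOldValue: true });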
How mutation observers differ
Mutation observers aren't based on the web platform's event model. This is an important difference that enables them to dispatch much faster and without needing to bubble an event through the DOM element hierarchy.
Additionally, mutation observers are designed to record multiple changes before notifying your observer. They batch mutation records to avoid spamming your app with events. By contrast, mutation events are synchronous and interrupt normal code execution to notify your app of mutations. Despite the delayed notification model employed by mutation observers, your apps's observer is still guaranteed to receive (and have a chance to process) all the mutation records before the next repaint.
Both of these changes impact how your app must be adapted to support mutation observers.
Mutation observer registration
Mutation observers must first be created before they can be registered on a given element. To create a mutation observer, use the JavaScript new operator and specify a callback method:
var mutationObserver = new MutationObserver(callback);
The callback that you provide to the mutation observer constructor will be different than the callback you are likely using for your current mutation events. This will be explained in more detail below.
Having created the observer, you now instruct it to observe a particular element. Generally, this will be the same element on which you were previously registering the mutation event:
mutationObserver.observe(someElement, options);
If you don't save a reference to it, the mutation observer instance will be preserved in-memory by the web platform as long as it is observing at least one element. If you don't save a reference to the observer, you can still reference it from the observer's callback (it will be the this object in the callback's scope, as well as the 2nd parameter to the callback function).
The options parameter is a simple JavaScript object with properties that you must provide to describe exactly what kinds of mutations you want to observe. The property options correspond to the three categories of mutations noted earlier:
- childList
- attributes
- characterData
The childList option with a value of true means observe changes to this element's child elements (both removals and additions). This option includes text nodes that are added or removed as children of this element.
The attribute option with a value of true means observe changes to this element's attributes (both removals, additions, and changes).
The characterData option with a value of true means observe changes to this element's text nodes (changes to the values of text nodes, excluding when text nodes are removed entirely or newly added).
A fourth subtree option is also important. The three previous options (by default) only observe their target element in isolation, not considering any of its descendants (its subtree). To monitor the given element and all its descendants, set the subtree property to true. Because mutation events have the characteristic of bubbling through the DOM, the use of the subtree option is required to maintain parity with mutation events registered on ancestor elements.
The following list describes which mutation observer options correspond to which mutation event names:
- DOMNodeInserted and DOMNodeRemoved: use the childList option (with subtree set to true to match the events' bubbling behavior).
- DOMSubtreeModified: use the childList option (with subtree set to true).
- DOMAttrModified: use the attributes option (with subtree set to true).
- DOMCharacterDataModified: use the characterData option (with subtree set to true).
Note With mutation observers it is also possible to combine multiple options to observe childLists, attributes, and characterData at the same time.
Finally, there are several options for saving the previous values of attributes and character data changes, and for refining the scope of which attributes are important to observe:
- The attributeOldValue and characterDataOldValue options with a value of true save the previous value when changes to attributes or characterData occur.
- The attributeFilter option with a string array of attribute names limits observation to the specified attributes. This option is only relevant when the attributes option is set to true.
With this information, any code that previously registered for a mutation event can be replaced with code that registers for a mutation observer:
// Watch for all changes to the body element's children
document.body.addEventListener("DOMNodeInserted", nodeAddedCallback, false);
document.body.addEventListener("DOMNodeRemoved", nodeRemovedCallback, false);
Now becomes:
// Watch for all changes to the body element's children
new MutationObserver(nodesAddedAndRemovedCallback).observe(document.body, { childList: true, subtree: true });
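Along the same lines, a DOMAttrModified registration that only cared about particular attributes could be expressed with the attributes and attributeFilter options (the callback name and attribute list here are placeholders):

// Watch only the id and style attributes on someElement and its descendants
new MutationObserver(attributesChangedCallback).observe(someElement, {
    attributes: true,
    attributeFilter: ["id", "style"],
    attributeOldValue: true,
    subtree: true
});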
Mutation observer callbacks
The mutation observer callback function is invoked with two parameters:
- A list of records
- A reference to the mutation observer object that's invoking the callback
Be careful if you're reusing your mutation events callbacks for mutation observers. When a relevant mutation happens, the MutationObserver records the change information you requested in a MutationRecord object and invokes your callback function, but not until all script within the current scope has run. It's possible that more than one mutation (each represented by a single MutationRecord) will occur since the last time the callback was invoked.
The records parameter is a JavaScript array consisting of MutationRecord objects. Each object in the array is representative of one mutation that occurred on the element (or elements) being observed.
A record has the following properties:
- type: the kind of mutation ("childList", "attributes", or "characterData")
- target: the node the mutation affected
- addedNodes and removedNodes: the nodes added to or removed from the target (for childList records)
- previousSibling and nextSibling: the siblings of the added or removed nodes
- attributeName and attributeNamespace: the name (and namespace) of the changed attribute (for attributes records)
- oldValue: the previous attribute value or character data, when attributeOldValue or characterDataOldValue was requested
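A callback that walks these records might look like the following sketch (the logging is illustrative only):

function nodesAddedAndRemovedCallback(records, observer) {
  records.forEach(function (record) {
    if (record.type === "childList") {
      console.log(record.addedNodes.length + " node(s) added and " +
        record.removedNodes.length + " node(s) removed from ", record.target);
    } else if (record.type === "attributes") {
      console.log("Attribute '" + record.attributeName + "' changed; old value: " + record.oldValue);
    }
  });
  // The observer is also passed as the second parameter, so it can be
  // disconnected from inside the callback if it is no longer needed:
  // observer.disconnect();
}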
How Windows 10 uses the Trusted Platform Module
The Windows 10 operating system improves most existing security features in the operating system and adds groundbreaking new security features such as Device Guard and Windows Hello for Business. It places hardware-based security deeper inside the operating system than previous Windows versions had done, maximizing platform security while increasing usability. To achieve many of these security enhancements, Windows 10 makes extensive use of the Trusted Platform Module (TPM). This article offers a brief overview of the TPM, describes how it works, and discusses the benefits that TPM brings to Windows 10—as well as the cumulative security impact of running Windows 10 on a PC that contains a TPM.
TPM Overview
The TPM is a cryptographic module that enhances computer security and privacy. Protecting data through encryption and decryption, protecting authentication credentials, and proving which software is running on a system are basic functionalities associated with computer security. The TPM helps with all these scenarios and more.
Historically, TPMs have been discrete chips soldered to a computer's motherboard. Newer implementations integrate TPM functionality into the same chipset as other platform components, while still providing logical separation similar to discrete TPM chips.
TPMs are passive: they receive commands and return responses. To realize the full benefit of a TPM, the OEM must carefully integrate system hardware and firmware with the TPM to send it commands and react to its responses. TPMs were originally designed to provide security and privacy benefits to a platform’s owner and users, but newer versions can provide security and privacy benefits to the system hardware itself. Before it can be used for advanced scenarios, a TPM must be provisioned. Windows 10 automatically provisions a TPM, but if the user reinstalls the operating system, he or she may need to tell the operating system to explicitly provision the TPM again before it can use all the TPM’s features.
The Trusted Computing Group (TCG) is the nonprofit organization that publishes and maintains the TPM specification. The TCG exists to develop, define, and promote vendor-neutral, global industry standards that support a hardware-based root of trust for interoperable trusted computing platforms. The TCG also publishes the TPM specification as the international standard ISO/IEC 11889, using the Publicly Available Specification Submission Process that the Joint Technical Committee 1 defines between the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
OEMs implement the TPM as a component in a trusted computing platform, such as a PC, tablet, or phone. Trusted computing platforms use the TPM to support privacy and security scenarios that software alone cannot achieve. For example, software alone cannot reliably report whether malware is present during the system startup process. The close integration between TPM and platform increases the transparency of the startup process and supports evaluating device health by enabling reliable measuring and reporting of the software that starts the device. Implementation of a TPM as part of a trusted computing platform provides a hardware root of trust—that is, it behaves in a trusted way. For example, if a key stored in a TPM has properties that disallow exporting the key, that key truly cannot leave the TPM.
The TCG designed the TPM as a low-cost, mass-market security solution that addresses the requirements of different customer segments. There are variations in the security properties of different TPM implementations just as there are variations in customer and regulatory requirements for different sectors. In public-sector procurement, for example, some governments have clearly defined security requirements for TPMs, whereas others do not.
Certification programs for TPMs—and technology in general—continue to evolve as the speed of innovation increases. Although having a TPM is clearly better than not having a TPM, Microsoft’s best advice is to determine your organization’s security needs and research any regulatory requirements associated with procurement for your industry. The result is a balance between scenarios used, assurance level, cost, convenience, and availability.
TPM in Windows 10
The security features of Windows 10 combined with the benefits of a TPM offer practical security and privacy benefits. The following sections start with major TPM-related security features in Windows 10 and go on to describe how key technologies use the TPM to enable or increase security.
Platform Crypto Provider
Windows includes a cryptography framework called Cryptographic API: Next Generation (CNG), the basic approach of which is to implement cryptographic algorithms in different ways but with a common application programming interface (API). Applications that use cryptography can use the common API without knowing the details of how an algorithm is implemented much less the algorithm itself.
Although CNG sounds like a mundane starting point, it illustrates some of the advantages that a TPM provides. Underneath the CNG interface, Windows or third parties supply a cryptographic provider (that is, an implementation of an algorithm) implemented as software libraries alone or in a combination of software and available system hardware or third-party hardware. If implemented through hardware, the cryptographic provider communicates with the hardware behind the software interface of CNG.
The Platform Crypto Provider, introduced in the Windows 8 operating system, exposes the following special TPM properties, which software-only CNG providers cannot offer or cannot offer as effectively:
• Key protection. The Platform Crypto Provider can create keys in the TPM with restrictions on their use. The operating system can load and use the keys in the TPM without copying the keys to system memory, where they are vulnerable to malware. The Platform Crypto Provider can also configure keys that a TPM protects so that they are not removable. If a TPM creates a key, the key is unique and resides only in that TPM. If the TPM imports a key, the Platform Crypto Provider can use the key in that TPM, but that TPM is not a source for making additional copies of the key or enabling the use of copies elsewhere. In sharp contrast, software solutions that protect keys from copying are subject to reverse-engineering attacks, in which someone figures out how the solution stores keys or makes copies of keys while they are in memory during use.
• Dictionary attack protection. Keys that a TPM protects can require an authorization value such as a PIN. With dictionary attack protection, the TPM can prevent attacks that attempt a large number of guesses to determine the PIN. After too many guesses, the TPM simply returns an error saying no more guesses are allowed for a period of time. Software solutions might provide similar features, but they cannot provide the same level of protection, especially if the system restarts, the system clock changes, or files on the hard disk that count failed guesses are rolled back. In addition, with dictionary attack protection, authorization values such as PINs can be shorter and easier to remember while still providing the same level of protection as more complex values when using software solutions.
These TPM features give Platform Crypto Provider distinct advantages over software-based solutions. A practical way to see these benefits in action is when using certificates on a Windows 10 device. On platforms that include a TPM, Windows can use the Platform Crypto Provider to provide certificate storage. Certificate templates can specify that a TPM use the Platform Crypto Provider to protect the key associated with a certificate. In mixed environments, where some computers might not have a TPM, the certificate template could simply prefer the Platform Crypto Provider over the standard Windows software provider. If a certificate is configured as not able to be exported, the private key for the certificate is restricted and cannot be exported from the TPM. If the certificate requires a PIN, the PIN gains the TPM’s dictionary attack protection automatically.
Virtual Smart Card
Smart cards are highly secure physical devices that typically store a single certificate and the corresponding private key. Users insert a smart card into a built-in or USB card reader and enter a PIN to unlock it. Windows can then access the card’s certificate and use the private key for authentication or to unlock BitLocker protected data volumes. Smart cards are popular because they provide two-factor authentication that requires both something the user has (that is, the smart card) and something the user knows (such as the smart card PIN). Smart cards are difficult to use, however, because they require purchase and deployment of both smart cards and smart card readers.
In Windows, the Virtual Smart Card feature allows the TPM to mimic a permanently inserted smart card. The TPM becomes “something the user has” but still requires a PIN. Although physical smart cards limit the number of PIN attempts before locking the card and requiring a reset, a virtual smart card relies on the TPM’s dictionary attack protection to prevent too many PIN guesses.
For TPM-based virtual smart cards, the TPM protects the use and storage of the certificate private key so that it cannot be copied when it is in use or stored and used elsewhere. Using a component that is part of the system rather than a separate physical smart card can reduce total cost of ownership because it eliminates “lost card” and “card left at home” scenarios while still delivering the benefits of smart card–based multifactor authentication. For users, virtual smart cards are simple to use, requiring only a PIN to unlock. Virtual smart cards support the same scenarios that physical smart cards support, including signing in to Windows or authenticating for resource access.
Windows Hello for Business
Windows Hello for Business provides authentication methods intended to replace passwords, which can be difficult to remember and easily compromised. In addition, user name - password solutions for authentication often reuse the same user name – password combinations on multiple devices and services; if those credentials are compromised, they are compromised in many places. Windows Hello for Business provisions devices one by one and combines the information provisioned on each device (i.e., the cryptographic key) with additional information to authenticate users. On a system that has a TPM, the TPM can protect the key. If a system does not have a TPM, software-based techniques protect the key. The additional information the user supplies can be a PIN value or, if the system has the necessary hardware, biometric information, such as fingerprint or facial recognition. To protect privacy, the biometric information is used only on the provisioned device to access the provisioned key: it is not shared across devices.
The adoption of new authentication technology requires that identity providers and organizations deploy and use that technology. Windows Hello for Business lets users authenticate with their existing Microsoft account, an Active Directory account, a Microsoft Azure Active Directory account, or even non-Microsoft Identity Provider Services or Relying Party Services that support Fast ID Online V2.0 authentication.
Identity providers have flexibility in how they provision credentials on client devices. For example, an organization might provision only those devices that have a TPM so that the organization knows that a TPM protects the credentials. The ability to distinguish a TPM from malware acting like a TPM requires the following TPM capabilities (see Figure 1):
• Endorsement key. The TPM manufacturer can create a special key in the TPM called an endorsement key. An endorsement key certificate, signed by the manufacturer, says that the endorsement key is present in a TPM that that manufacturer made. Solutions can use the certificate with the TPM containing the endorsement key to confirm a scenario really involves a TPM from a specific TPM manufacturer (instead of malware acting like a TPM).
• Attestation identity key. To protect privacy, most TPM scenarios do not directly use an actual endorsement key. Instead, they use attestation identity keys, and an identity certificate authority (CA) uses the endorsement key and its certificate to prove that one or more attestation identity keys actually exist in a real TPM. The identity CA issues attestation identity key certificates. More than one identity CA will generally see the same endorsement key certificate that can uniquely identify the TPM, but any number of attestation identity key certificates can be created to limit the information shared in other scenarios.
Figure 1: TPM Cryptographic Key Management
For Windows Hello for Business, Microsoft can fill the role of the identity CA. Microsoft services can issue an attestation identity key certificate for each device, user, and identify provider to ensure that privacy is protected and to help identity providers ensure that device TPM requirements are met before Windows Hello for Business credentials are provisioned.
BitLocker Drive Encryption
BitLocker provides full-volume encryption to protect data at rest. The most common device configuration splits the hard drive into several volumes. The operating system and user data reside on one volume that holds confidential information, and other volumes hold public information such as boot components, system information and recovery tools. (These other volumes are used infrequently enough that they do not need to be visible to users.) Without additional protections in place, if the volume containing the operating system and user data is not encrypted, someone can boot another operating system and easily bypass the intended operating system’s enforcement of file permissions to read any user data.
In the most common configuration, BitLocker encrypts the operating system volume so that if the computer or hard disk is lost or stolen when powered off, the data on the volume remains confidential. When the computer is turned on, starts normally, and proceeds to the Windows logon prompt, the only path forward is for the user to log on with his or her credentials, allowing the operating system to enforce its normal file permissions. If something about the boot process changes, however—for example, a different operating system is booted from a USB device—the operating system volume and user data cannot be read and are not accessible. The TPM and system firmware collaborate to record measurements of how the system started, including loaded software and configuration details such as whether boot occurred from the hard drive or a USB device. BitLocker relies on the TPM to allow the use of a key only when startup occurs in an expected way. The system firmware and TPM are carefully designed to work together to provide the following capabilities:
• Hardware root of trust for measurement. A TPM allows software to send it commands that record measurements of software or configuration information. This information can be calculated using a hash algorithm that essentially transforms a lot of data into a small, statistically unique hash value. The system firmware has a component called the Core Root of Trust for Measurement (CRTM) that is implicitly trusted. The CRTM unconditionally hashes the next software component and records the measurement value by sending a command to the TPM. Successive components, whether system firmware or operating system loaders, continue the process by measuring any software components they load before running them. Because each component’s measurement is sent to the TPM before it runs, a component cannot erase its measurement from the TPM. (However, measurements are erased when the system is restarted.) The result is that at each step of the system startup process, the TPM holds measurements of boot software and configuration information. Any changes in boot software or configuration yield different TPM measurements at that step and later steps. Because the system firmware unconditionally starts the measurement chain, it provides a hardware-based root of trust for the TPM measurements. At some point in the startup process, the value of recording all loaded software and configuration information diminishes and the chain of measurements stops. The TPM allows for the creation of keys that can be used only when the platform configuration registers that hold the measurements have specific values.
• Key used only when boot measurements are accurate. BitLocker creates a key in the TPM that can be used only when the boot measurements match an expected value. The expected value is calculated for the step in the startup process when Windows Boot Manager runs from the operating system volume on the system hard drive. Windows Boot Manager, which is stored unencrypted on the boot volume, needs to use the TPM key so that it can decrypt data read into memory from the operating system volume and startup can proceed using the encrypted operating system volume. If a different operating system is booted or the configuration is changed, the measurement values in the TPM will be different, the TPM will not let Windows Boot Manager use the key, and the startup process cannot proceed normally because the data on the operating system cannot be decrypted. If someone tries to boot the system with a different operating system or a different device, the software or configuration measurements in the TPM will be wrong and the TPM will not allow use of the key needed to decrypt the operating system volume. As a failsafe, if measurement values change unexpectedly, the user can always use the BitLocker recovery key to access volume data. Organizations can configure BitLocker to store the recovery key in Active Directory Domain Services (AD DS).
Device hardware characteristics are important to BitLocker and its ability to protect data. One consideration is whether the device provides attack vectors when the system is at the logon screen. For example, if the Windows device has a port that allows direct memory access so that someone can plug in hardware and read memory, an attacker can read the operating system volume’s decryption key from memory while at the Windows logon screen. To mitigate this risk, organizations can configure BitLocker so that the TPM key requires both the correct software measurements and an authorization value. The system startup process stops at Windows Boot Manager, and the user is prompted to enter the authorization value for the TPM key or insert a USB device with the value. This process stops BitLocker from automatically loading the key into memory where it might be vulnerable, but has a less desirable user experience.
Newer hardware and Windows 10 work better together to disable direct memory access through ports and reduce attack vectors. The result is that organizations can deploy more systems without requiring users to enter additional authorization information during the startup process. The right hardware allows BitLocker to be used with the “TPM-only” configuration giving users a single sign-on experience without having to enter a PIN or USB key during boot.
Device Encryption
Device Encryption is the consumer version of BitLocker, and it uses the same underlying technology. How it works is if a customer logs on with a Microsoft account and the system meets Modern Standby hardware requirements, BitLocker Drive Encryption is enabled automatically in Windows 10. The recovery key is backed up in the Microsoft cloud and is accessible to the consumer through his or her Microsoft account. The Modern Standby hardware requirements inform Windows 10 that the hardware is appropriate for deploying Device Encryption and allows use of the “TPM-only” configuration for a simple consumer experience. In addition, Modern Standby hardware is designed to reduce the likelihood that measurement values change and prompt the customer for the recovery key.
For software measurements, Device Encryption relies on measurements of the authority providing software components (based on code signing from manufacturers such as OEMs or Microsoft) instead of the precise hashes of the software components themselves. This permits servicing of components without changing the resulting measurement values. For configuration measurements, the values used are based on the boot security policy instead of the numerous other configuration settings recorded during startup. These values also change less frequently. The result is that Device Encryption is enabled on appropriate hardware in a user-friendly way while also protecting data.
Measured Boot
Windows 8 introduced Measured Boot as a way for the operating system to record the chain of measurements of software components and configuration information in the TPM through the initialization of the Windows operating system. In previous Windows versions, the measurement chain stopped at the Windows Boot Manager component itself, and the measurements in the TPM were not helpful for understanding the starting state of Windows.
The Windows boot process happens in stages and often involves third-party drivers to communicate with vendor-specific hardware or implement antimalware solutions. For software, Measured Boot records measurements of the Windows kernel, Early-Launch Anti-Malware drivers, and boot drivers in the TPM. For configuration settings, Measured Boot records security-relevant information such as signature data that antimalware drivers use and configuration data about Windows security features (e.g., whether BitLocker is on or off).
Measured Boot ensures that TPM measurements fully reflect the starting state of Windows software and configuration settings. If security settings and other protections are set up correctly, they can be trusted to maintain the security of the running operating system thereafter. Other scenarios can use the operating system’s starting state to determine whether the running operating system should be trusted.
TPM measurements are designed to avoid recording any privacy-sensitive information as a measurement. As an additional privacy protection, Measured Boot stops the measurement chain at the initial starting state of Windows. Therefore, the set of measurements does not include details about which applications are in use or how Windows is being used. Measurement information can be shared with external entities to show that the device is enforcing adequate security policies and did not start with malware.
The TPM provides the following way for scenarios to use the measurements recorded in the TPM during boot:
• Remote Attestation. Using an attestation identity key, the TPM can generate and cryptographically sign a statement (or quote) of the current measurements in the TPM. Windows 10 can create unique attestation identity keys for various scenarios to prevent separate evaluators from collaborating to track the same device. Additional information in the quote is cryptographically scrambled to limit information sharing and better protect privacy. By sending the quote to a remote entity, a device can attest which software and configuration settings were used to boot the device and initialize the operating system. An attestation identity key certificate can provide further assurance that the quote is coming from a real TPM. Remote attestation is the process of recording measurements in the TPM, generating a quote, and sending the quote information to another system that evaluates the measurements to establish trust in a device. Figure 2 illustrates this process.
When new security features are added to Windows, Measured Boot adds security-relevant configuration information to the measurements recorded in the TPM. Measured Boot enables remote attestation scenarios that reflect the system firmware and the Windows initialization state.
Figure 2: Process used to create evidence of boot software and configuration using a TPM
Health Attestation
Some Windows 10 improvements help security solutions implement remote attestation scenarios. Microsoft provides a Health Attestation service, which can create attestation identity key certificates for TPMs from different manufacturers as well as parse measured boot information to extract simple security assertions, such as whether BitLocker is on or off. The simple security assertions can be used to evaluate device health.
Mobile device management (MDM) solutions can receive simple security assertions from the Microsoft Health Attestation service for a client without having to deal with the complexity of the quote or the detailed TPM measurements. MDM solutions can act on the security information by quarantining unhealthy devices or blocking access to cloud services such as Microsoft Office 365.
Credential Guard
Credential Guard is a new feature in Windows 10 that helps protect Windows credentials in organizations that have deployed AD DS. Historically, a user’s credentials (e.g., logon password) were hashed to generate an authorization token. The user employed the token to access resources that he or she was permitted to use. One weakness of the token model is that malware that had access to the operating system kernel could look through the computer’s memory and harvest all the access tokens currently in use. The attacker could then use harvested tokens to log on to other machines and collect more credentials. This kind of attack is called a “pass the hash” attack, a malware technique that infects one machine to infect many machines across an organization.
Similar to the way Microsoft Hyper-V keeps virtual machines (VMs) separate from one another, Credential Guard uses virtualization to isolate the process that hashes credentials in a memory area that the operating system kernel cannot access. This isolated memory area is initialized and protected during the boot process so that components in the larger operating system environment cannot tamper with it. Credential Guard uses the TPM to protect its keys with TPM measurements, so they are accessible only during the boot process step when the separate region is initialized; they are not available for the normal operating system kernel. The local security authority code in the Windows kernel interacts with the isolated memory area by passing in credentials and receiving single-use authorization tokens in return.
The resulting solution provides defense in depth, because even if malware runs in the operating system kernel, it cannot access the secrets inside the isolated memory area that actually generates authorization tokens. The solution does not solve the problem of key loggers because the passwords such loggers capture actually pass through the normal Windows kernel, but when combined with other solutions, such as smart cards for authentication, Credential Guard greatly enhances the protection of credentials in Windows 10.
Conclusion
The TPM adds hardware-based security benefits to Windows 10. When installed on hardware that includes a TPM, Windows 10 delivers remarkably improved security benefits. The following summarizes the key benefits of the TPM's major features:
- Platform Crypto Provider: keys cannot be copied out of the TPM, and PIN guesses are limited by dictionary attack protection.
- Virtual Smart Card: smart card-style two-factor authentication without deploying physical cards and readers.
- Windows Hello for Business: credentials are provisioned per device and protected by the TPM.
- BitLocker and Device Encryption: the volume decryption key is released only when boot measurements match expected values.
- Measured Boot and Health Attestation: reliable measurement and reporting of the software that starts the device.
- Credential Guard: the keys protecting the isolated credential region are bound to TPM measurements.
Although some of the aforementioned features have additional hardware requirements (e.g., virtualization support), the TPM is a cornerstone of Windows 10 security. Microsoft and other industry stakeholders continue to improve the global standards associated with TPM and find more and more applications that use it to provide tangible benefits to customers. Microsoft has included support for most TPM features in its version of Windows for the Internet of Things (IoT) called Windows 10 IoT Core. IoT devices that might be deployed in insecure physical locations and connected to cloud services like Azure IoT Hub for management can use the TPM in innovative ways to address their emerging security requirements.
Version End of Life. 7 June 2019
Release 5.3.1 is a bug fix release that adds support for the GEOMETRY data type in MySQL 5.7 and above, and a number of bug fixes.
The following issues may affect the operation of Tungsten Clustering and should be taken into account when deploying or updating to this release.
It was previously impossible to change from a non-SSL installation to an SSL installation using self-generated certificates if an INI style configuration was being used. This can now be achieved by using the following command line:

shell> tools/tpm update --replace-release --replace-jgroups-certificate --replace-tls-certificate
Issues: CT-442
Previously the system had been configured to dump heap files by default when the system ran out of memory which was useful for debugging by the development team. This has now been disabled.
Issues: CT-604.
Installation and Deployment
Support for the GEOMETRY data type within MySQL 5.7 and above has been added. This provides full support for both extracting and applying the data type to MySQL.
This change is not backwards compatible; when upgrading, you should upgrade slaves first and then the master to ensure compatibility. Once you have extracted data with the GEOMETRY type into THL, the THL will no longer be compatible with any version of the replicator that does not support the GEOMETRY datatype.
Issues: CT-403 | http://docs.continuent.com/tungsten-clustering-5.3/release-notes-5-3-1.html | 2018-08-14T13:42:14 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.continuent.com |
Configure IT data block signing through the block signing settings for the index in indexes.conf; creating a new signature hash database resets the existing signatures (this will delete all of your data!). The block signature data is stored in the internal index _blocksignature.
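As a rough sketch (the index name and block size below are examples only, not recommendations), the relevant indexes.conf stanza would look something like:

[main]
# Sign one block signature for every 100 events in this index; 0 disables block signing.
blockSignSize = 100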
View the integrity of IT data
To view the integrity of indexed data at search time, open the Show source window for results of a search. To bring up the Show source window, click the drop-down arrow at the left of any search result. Select Show source and a window will open displaying the raw data for each search result.
The Show source window displays information as to whether the block of IT data has gaps, has been tampered with, or is valid (no gaps or tampering).
The status shown for types of events are:
- Valid
- Tampered with
- Has gaps in data
If the active recording policy specifies that users are notified when their sessions are recorded, a pop-up window appears displaying a notification message after users type their credentials. The following message is the default notification: “Your activity with one or more of the programs you recently started is being recorded. If you object to this condition, close the programs.” The user clicks OK to dismiss the window and continue the session.
The default notification message appears in the language of the operating system of the computers hosting the Session Recording Server.
You can create custom notifications in languages of your choice; however, you can have only one notification message for each language. Your users see the notification message in the language corresponding to their user preferred locale settings. | https://docs.citrix.com/ja-jp/xenapp-and-xendesktop/7-6-long-term-service-release/xad-monitor-article/xad-session-recording/xad-sr-record-notification-creating.html | 2018-08-14T14:24:28 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.citrix.com |
Verify LDAP mapping aside, this topic covers migration: Migrate SAM Foundation software installations

If you are using Discovery, run this script after installing the Software Asset Management Foundation plugin to copy previously discovered software installation records from the [cmdb_ci_spkg] table to the [cmdb_sam_sw_install] table, which is used by the Software Asset Management Foundation plugin to store software installation records.

Before you begin
Role required: sam_admin

About this task
If you are running Discovery and have used a version of ITSM Software Asset Management previously, there is no need to run this script. When running the Migrate Software Installs script, allow enough time for the process to complete.

Procedure
Navigate to Software Asset > Administration > Migrate Software Installs and click Proceed. The Software Installations list is shown. If the data has already been migrated, a message is shown.
request software activate
request software activate—Activate a software image on the local Viptela device (on vEdge routers and vSmart controllers only).
Starting in Release 15.4, this command replaces the reboot other-boot-partition command.
Command Syntax
request software activate software-image [clean] [now]
Options
- Activate Immediately
- now
Activate the specified software image immediately, with no prompt asking you to confirm that you want to activate.
- Clear All Existing Configuration and Related Files
- clean
Activate the specified software image, but do not associate the existing configuration file, and do not associate any files that store information about the device history, such as log and trace files, with the newly activated software image.
- Software Image Name
- software-image
Name of the software image to activate on the device.
Output Fields
The output fields are self-explanatory.
Example Output
Activate a software image:
vEdge# request software activate 15.3.3
This will reboot the node with the activated version. Are you sure you want to proceed? [yes,NO]
Release Information
Command introduced in Viptela Software Release 15.3.3 for vEdge 100 routers only.
In Release 15.4, this command is supported on all routers and on vSmart controllers. It replaces the reboot other-boot-partition command. | https://sdwan-docs.cisco.com/Product_Documentation/Command_Reference/Operational_Commands/request_software_activate | 2018-08-14T13:29:23 | CC-MAIN-2018-34 | 1534221209040.29 | [] | sdwan-docs.cisco.com |
Use Azure Video Indexer API
Note
The Video Indexer V1 API was deprecated on August 1st, 2018. You should now use the Video Indexer v2 API.
To develop with Video Indexer v2 APIs, please refer to the instructions found here.
Video Indexer consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft in one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platform. You can use the API to upload your files, get detailed video insights, get URLs of insight and player widgets in order to embed them into your application, and other tasks.
When creating a Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you are not limited by the quota). With the free trial, Video Indexer provides up to 600 minutes of free indexing to website users and up to 2,400 minutes of free indexing to API users. With the paid option, you create a Video Indexer account that is connected to your Azure subscription and an Azure Media Services account. You pay for minutes indexed as well as the related Media Services account charges.
This article shows how the developers can take advantage of the Video Indexer API. To read a more detailed overview of the Video Indexer service, see the overview article.
To start developing with Video Indexer, you must first Sign In to the Video Indexer portal.
Important
- You must use the same provider you used when you signed up for Video Indexer.
- Personal Google and Microsoft (outlook/live) accounts can only be used for trial accounts. Accounts connected to Azure require Azure AD.
- There can be only one active account per email address. If a user tries to sign in with [email protected] for LinkedIn and after that with [email protected] for Google, the latter will display an error page saying the user already exists.
Subscribe to the API
Select the Products tab. Then, select Authorization and subscribe.
Note
New users are automatically subscribed to Authorization.
Once you subscribe, you will be able to see your subscription and your primary and secondary keys. The keys should be protected. The keys should only be used by your server code. They should not be available on the client side (.js, .html, etc.).
Obtain access token using the Authorization API
Once you subscribed to the Authorization API, you will be able to obtain access tokens. These access tokens are used to authenticate against the Operations API.
Each call to the Operations API should be associated with an access token, matching the authorization scope of the call.
- User level - user level access tokens let you perform operations on the user level. For example, get associated accounts.
- Account level – account level access tokens let you perform operations on the account level or the video level. For example, Upload video, list all videos, get video insights, etc.
- Video level – video level access tokens let you perform operations on a specific video. For example, get video insights, download captions, get widgets, etc.
You can control whether these tokens are readonly or they allow editing by specifying allowEdit=true/false.
For most server-to-server scenarios, you will probably use the same account token since it covers both account operations and video operations. However, if you are planning to make client side calls to Video Indexer (for example, from javascript), you would want to use a video access token, to prevent clients from getting access to the entire account. That is also the reason that when embedding VideoIndexer client code in your client (for example, using Get Insights Widget or Get Player Widget) you must provide a video access token.
To make things easier, you can use the Authorization API > GetAccounts to get your accounts without obtaining a user token first. You can also ask to get the accounts with valid tokens, enabling you to skip an additional call to get an account token.
Access tokens expire after 1 hour. Make sure your access token is valid before using the Operations API. If it expires, call the Authorization API again to get a new access token.
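For example, a long-running service could cache the account access token and request a new one shortly before the hour is up. The helper below is only a sketch (the method name, the 50-minute refresh window, and the assumption that the client already carries the Ocp-Apim-Subscription-Key header are illustrative choices, not part of the API):

static string cachedAccountToken;
static DateTime accountTokenObtainedAt;

static string GetAccountAccessToken(HttpClient client, string apiUrl, string location, string accountId)
{
    // Re-use the cached token until it is close to the 1-hour expiry.
    if (cachedAccountToken == null || DateTime.UtcNow - accountTokenObtainedAt > TimeSpan.FromMinutes(50))
    {
        var result = client.GetAsync($"{apiUrl}/auth/{location}/Accounts/{accountId}/AccessToken?allowEdit=true").Result;
        cachedAccountToken = result.Content.ReadAsStringAsync().Result.Replace("\"", "");
        accountTokenObtainedAt = DateTime.UtcNow;
    }
    return cachedAccountToken;
}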
You are ready to start integrating with the API. Find the detailed description of each Video Indexer REST API.
Location
All operation APIs require a Location parameter, which indicates the region to which the call should be routed and in which the account was created.
The values described in the following table apply. The Param value is the value you pass when using the API.
Account ID
The Account ID parameter is required in all operational API calls. Account ID is a GUID that can be obtained in one of the following ways:
Use the Video Indexer portal to get the Account ID:
- Browse to the Settings page.
Copy the account ID.
Use the API to programmatically get the Account ID.
Use the Get accounts API.
Tip
You can generate access tokens for the accounts by defining generateAccessTokens=true.
Get the account ID from the URL of a player page in your account.
When you watch a video, the ID appears after the accounts section and before the videos section.
Recommendations
This section lists some recommendations when using Video Indexer API.
If you are planning to upload a video, it is recommended to place the file in some public network location (for example, OneDrive). Get the link to the video and provide the URL as the upload file param.
The URL provided to Video Indexer must point to a media (audio or video) file. Some of the links generated by OneDrive are for an HTML page that contains the file. An easy verification for the URL would be to paste it into a browser – if the file starts downloading, it's likely a good URL. If the browser is rendering some visualization, it's likely not a link to a file but an HTML page.
When you call the API that gets video insights for the specified video, you get a detailed JSON output as the response content. See details about the returned JSON in this topic.
Code sample
The following C# code snippet demonstrates the usage of all the Video Indexer APIs together.
var apiUrl = ""; var accountId = "..."; var location = "westus2"; var apiKey = "..."; System.Net.ServicePointManager.SecurityProtocol = System.Net.ServicePointManager.SecurityProtocol | System.Net.SecurityProtocolType.Tls12; // create the http client var handler = new HttpClientHandler(); handler.AllowAutoRedirect = false; var client = new HttpClient(handler); client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey); // obtain account access token var accountAccessTokenRequestResult = client.GetAsync($"{apiUrl}/auth/{location}/Accounts/{accountId}/AccessToken?allowEdit=true").Result; var accountAccessToken = accountAccessTokenRequestResult.Content.ReadAsStringAsync().Result.Replace("\"", ""); client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key"); // upload a video var content = new MultipartFormDataContent(); Debug.WriteLine("Uploading..."); // get the video from URL var videoUrl = "VIDEO_URL"; // replace with the video URL // as an alternative to specifying video URL, you can upload a file. // remove the videoUrl parameter from the query string below and add the following lines: //FileStream video =File.OpenRead(Globals.VIDEOFILE_PATH); //byte[] buffer =newbyte[video.Length]; //video.Read(buffer, 0, buffer.Length); //content.Add(newByteArrayContent(buffer)); var uploadRequestResult = client.PostAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos?accessToken={accountAccessToken}&name=some_name&description=some_description&privacy=private&partition=some_partition&videoUrl={videoUrl}", content).Result; var uploadResult = uploadRequestResult.Content.ReadAsStringAsync().Result; // get the video id from the upload result var videoId = JsonConvert.DeserializeObject<dynamic>(uploadResult)["id"]; Debug.WriteLine("Uploaded"); Debug.WriteLine("Video ID: " + videoId); // obtain video access token client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey); var videoTokenRequestResult = client.GetAsync($"{apiUrl}/auth/{location}/Accounts/{accountId}/Videos/{videoId}/AccessToken?allowEdit=true").Result; var videoAccessToken = videoTokenRequestResult.Content.ReadAsStringAsync().Result.Replace("\"", ""); client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key"); // wait for the video index to finish while (true) { Thread.Sleep(10000); var videoGetIndexRequestResult = client.GetAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos/{videoId}/Index?accessToken={videoAccessToken}&language=English").Result; var videoGetIndexResult = videoGetIndexRequestResult.Content.ReadAsStringAsync().Result; var processingState = JsonConvert.DeserializeObject<dynamic>(videoGetIndexResult)["state"]; Debug.WriteLine(""); Debug.WriteLine("State:"); Debug.WriteLine(processingState); // job is finished if (processingState != "Uploaded" && processingState != "Processing") { Debug.WriteLine(""); Debug.WriteLine("Full JSON:"); Debug.WriteLine(videoGetIndexResult); break; } } // search for the video var searchRequestResult = client.GetAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos/Search?accessToken={accountAccessToken}&id={videoId}").Result; var searchResult = searchRequestResult.Content.ReadAsStringAsync().Result; Debug.WriteLine(""); Debug.WriteLine("Search:"); Debug.WriteLine(searchResult); // get insights widget url var insightsWidgetRequestResult = client.GetAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos/{videoId}/InsightsWidget?accessToken={videoAccessToken}&widgetType=Keywords&allowEdit=true").Result; var insightsWidgetLink = 
insightsWidgetRequestResult.Headers.Location; Debug.WriteLine("Insights Widget url:"); Debug.WriteLine(insightsWidgetLink); // get player widget url var playerWidgetRequestResult = client.GetAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos/{videoId}/PlayerWidget?accessToken={videoAccessToken}").Result; var playerWidgetLink = playerWidgetRequestResult.Headers.Location; Debug.WriteLine(""); Debug.WriteLine("Player Widget url:"); Debug.WriteLine(playerWidgetLink);
Next steps
Examine details of the output JSON. | https://docs.microsoft.com/ja-jp/azure/cognitive-services/video-indexer/video-indexer-use-apis | 2018-08-14T13:16:59 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.microsoft.com |
See the quick installation method to use an interactive CLI tool that allows you to install and configure a new trial OpenShift Container Platform instance across multiple hosts.
Tutorial: Creating a Data-Driven Subscription.
What You Will Learn
This tutorial shows you how to define a data-driven subscription for a Reporting Services report.
Requirements
To complete this tutorial, you need access to a report server, the AdventureWorks sample reports, and an edition of SQL Server 2005 that supports data-driven subscriptions.
Tip
The easiest way to publish all of the sample reports to a report server is to deploy the report sample solution (AdventureWorks Sample Reports.sln) from Business Intelligence Development Studio. For more information, see AdventureWorks Report Samples.
Note
When reviewing tutorials it is recommended you add next and previous buttons to the document viewer toolbar. For more information, see Adding Next and Previous Buttons to Help.
See Also
Concepts
Reporting Services Tutorials
Other Resources
Deployment Modes for Reporting Services
Data-Driven Subscriptions
Report Samples (Reporting Services)
Installing AdventureWorks Sample Databases and Samples
Help and Information
Getting SQL Server 2005 Assistance | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms169673%28v%3Dsql.90%29 | 2018-08-14T13:16:38 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.microsoft.com |
The Precompiled Resources option is one of the most frequently used resource optimization options in Easy Social Share Buttons. However, many users set the options in that section at random and simply turn all of them on, believing that this gives their site the best loading speed. That is not true.
Each option in the Static Resource Optimizations section has a specific function. If a setting is turned on without need, it can slow down loading or cause visualization issues.
How Precompiled Resources Option Works?
By default, Easy Social Share Buttons adds CSS and JavaScript files only when they are needed. This means that the files generated in the page code vary depending on the settings each user has made. This way of working is the most speed-efficient, because it avoids unnecessary code that would otherwise have to be loaded and executed.
When Easy Social Share Buttons is activated and share buttons are added to an installation, one basic template file is included in the code. If the user makes any specific settings, such as activating another template, additional display methods, native buttons, a mail button, a subscribe button and so on, other separate files are added to the code. When the page is loaded, a download request is made for each of these files, which simply means extra time is needed for those requests to complete.
And Here Is The Situation When Precompiled Is Useful
When the Use plugin precompiled resources option is activated, all the CSS and JavaScript files required by your ESSB settings are combined into one common file saved on your server. When the page is loaded, there is only one file download request, which saves time and loading resources.
With each update of the Easy Social Share Buttons settings, that precompiled file is generated again, so all the settings you have changed are reflected in the output.
When Precompiled Is NOT Useful
In short, the precompiled resources option works the same way cache plugins do. That is why this option should not be used together with a cache of any type.
When you have a separate cache plugin and the precompiled resources option activated, each of these caches is refreshed at a different time. That can cause visualization issues, or the settings made in the ESSB menu are simply not applied to the code.
Technically it is possible for the precompiled option to work with cache plugins without trouble. But if there is a visualization issue, turn off the precompiled option.
Compatibility¶
Hypothesis does its level best to be compatible with everything you could possibly need it to be compatible with. Generally you should just try it and expect it to work. If it doesn’t, you can be surprised and check this document for the details.
Python versions¶
Hypothesis is supported and tested on CPython 2.7 and CPython 3.4+.
Hypothesis also supports PyPy2, and will support PyPy3 when there is a stable release supporting Python 3.4+. Hypothesis does not currently work on Jython, though it probably could (issue #174).
Testing frameworks¶
In general Hypothesis generates test functions that behave as closely to normal Python test functions as possible, so it should work out of the box with most test frameworks. If a framework relies on doing something other than calling a function and seeing if it raises an exception, it probably won't work without help: wrap the test in a decorator so that it takes this form, or ask the framework maintainer to support our hooks for inserting such a wrapper later.
In terms of what’s actually known to work:
- Hypothesis integrates as smoothly with py.test and unittest as we can make it, and this is verified as part of the CI. Note however that @given should only be used on tests, not unittest.TestCase setup or teardown methods.
- pytest fixtures work in the usual way for tests that have been decorated with @given - just avoid passing a strategy for each argument that will be supplied by a fixture (see the sketch after this list). However, each fixture will run once for the whole function, not once per example. Decorating a fixture function is meaningless.
- Nose works fine with hypothesis, and this is tested as part of the CI. yield based tests simply won’t work.
- Integration with Django’s testing requires use of the Hypothesis for Django users package. The issue is that in Django’s tests’ normal mode of execution it will reset the database once per test rather than once per example, which is not what you want.
- Coverage works out of the box with Hypothesis - we use it to guide example selection for user code, and Hypothesis has 100% branch coverage in its own tests.
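A minimal sketch of the fixture case described above (the fixture and test names are invented for illustration):

import pytest
from hypothesis import given, strategies as st

@pytest.fixture
def api_client():
    # Runs once per test function, not once per generated example.
    return object()  # stand-in for a real client

@given(value=st.integers())
def test_accepts_any_integer(api_client, value):
    # "value" is supplied by Hypothesis; "api_client" is supplied by pytest.
    assert api_client is not None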
Optional Packages¶
The supported versions of optional packages, for strategies in hypothesis.extra, are listed in the documentation for that extra. Our general goal is to support all versions that are supported upstream.
Regularly verifying this¶
Everything mentioned above as explicitly supported is checked on every commit with Travis, Appveyor, and CircleCI. Our continuous delivery pipeline runs all of these checks before publishing each release, so when we say they're supported we really mean it.
Hypothesis versions¶
Backwards compatibility is better than backporting fixes, so we use semantic versioning and only support the most recent version of Hypothesis. See Help and Support for more information. | https://hypothesis.readthedocs.io/en/latest/supported.html | 2018-08-14T13:27:42 | CC-MAIN-2018-34 | 1534221209040.29 | [] | hypothesis.readthedocs.io |
Welcome to the Oddballs Anonymous Document Control area.
Released documents are managed here.
Latest document activity is listed here.
Help on using this area is available here.
Unreleased and uncontrolled documents/files are handled via the forms below.
Implementing Online Features in FrameMaker
Implement online features in your output by preparing your Adobe FrameMaker source documents with custom marker types, paragraph formats, and character formats defined by the Stationery designer for your Stationery. These markers and styles define the presentation and behavior of your online content. For example, markers can define the name of the file generated for a topic. Formats can define how content displays online.
Verify LDAP mapping

After creating an LDAP transform map, refresh the LDAP data to verify the transform map works as expected.

Before you begin
Role required: admin

Procedure
1. Navigate to System LDAP > Scheduled Loads.
2. Click your LDAP import job.
3. Click Execute Now.

Related Concepts: Record creation options during an LDAP transform
Related Reference: Differences between LDAP transform maps and legacy import maps; LDAP import default mapping; LDAP scripting
Some more examples¶
This is a collection of examples of how to use Hypothesis in interesting ways. It’s small for now but will grow over time.
All of these examples are designed to be run under py.test (nose should probably work too).
How not to sort by a partial order¶

def deduplicate_nodes_by_label(nodes):
    table = {node.label: node for node in nodes}
    return list(table.values())
We define a function to deduplicate nodes by labels, and can now map that over a strategy for lists of nodes to give us a strategy for lists of nodes with unique labels:
@given(s.lists(NodeStrategy).map(deduplicate_nodes_by_label))
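The Node type and NodeStrategy referenced above are assumed to be defined elsewhere; a minimal stand-in that makes the snippet runnable could look like this (the field names and ranges are illustrative):

from collections import namedtuple
from hypothesis import strategies as s

Node = namedtuple("Node", ("label", "value"))

# Generate nodes with a small pool of labels so duplicates are likely.
NodeStrategy = s.builds(Node, s.integers(0, 10), s.integers())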
This is an example of some tests for pytz which check that various timezone
conversions behave as you would expect them to. These tests should all pass,
and are mostly a demonstration of some useful sorts of thing to test with
Hypothesis, and how the
datetimes() strategy works.
>>> from datetime import timedelta
>>> from hypothesis.extra.pytz import timezones
>>> from hypothesis.strategies import datetimes
- We can generate a type that is much larger than an election, extract an election out of that, and rely on minimization to throw away all the extraneous detail.
The swagger-conformance package provides an excellent example of this!
Triples
In this section you will learn
Introduction
tdt/triples is a repository that hooks into The DataTank core application, and provides the functionality to build your URI space through the configuration of semantic resources, in addition to the URI space of The DataTank.
It reads the triples from the semantic sources, and by default it stores them into a local simulated triple store based on MySQL. Therefore it needs a MySQL database, so make sure your datatank project is configured with a MySQL connection.
Also note that for this kind of "triple caching", it uses the semsol/arc2 library, which works on PHP 5.3 and 5.4. From PHP 5.5 onward, the MySQL driver that semsol/arc2 uses is deprecated. This can be solved by creating a different TriplesRepository instance that uses a genuine triplestore (or other solutions) to store triples in. Because we've built this type of caching with dependency injection, it is easy to provide your own triple caching.
Purpose
The core application allows for the RESTful publication of data sources, from any machine readable format, to web-ready formats. (e.g. SPARQL, SHP, CSV, XLS, ... to JSON-LD, JSON, XML, PHP, RDF/XML, RDF/JSON, and even map visualizations)
Each datasource that is published by the core application has its own URI, and represents a certain chunk of data. Now, to allow for automatic configuration of your URI space from linked data sources and take a step towards the semantic web, the tdt/triples package was created.
This package allows for a set of semantic sources to be added to the datatank, from which the subject of the triples are used to fill in URIs that have not been used in the datatank core. Let's take a look at an example.
The DataTank has 2 published datasources:
-
-
Now if an organization has a set of semantic data that is pointing towards their domain (the domain that has the datatank instance installed), they would have to manually publish the different semantic sources under corresponding (or not) URIs. This is where tdt/triples comes in. By adding the semantic datasources to the datatank, while this package is installed, the subject URIs of the triples that are present in the semantic sources will be dereferenced automatically. No need to publish the different triples to corresponding URI's by hand.
For example, if you configure a turtle file that has triples with a subject of, then that URI will automatically be dereferenced by the datatank. Upon making a request, all triples that can be found in the configured semantic sources, with a subject similar to the URI of the request will be returned.
How it works
The current supported semantic sources are Turtle files, RDF files, LDF servers and SPARQL-endpoints. When tdt/triples is installed the following workflow is applied:
- Request URI serves as an identifier
- The datatank checks if no datasource is published on the identifier by the main (core) application
- If the identifier is not used by core, then all semantic sources are scanned for triples with a subject matching the URI
- If triples are found with the subject, they are returned, if not a 404 is given
An admin user can interact with this package through api/triples, check the /discovery document to see which functionalities are available, or go to the admin interface to interact with the package through a UI.
Installing tdt/triples
This package works with version 4.3 or higher ( if 4.3 is not available, try the development branch) of the datatank core. If you have remarks, suggestions, issues, etc. please don't hesitate to log it on the github repository.
1) Edit composer.json
Edit your composer.json file, and add tdt/triples as a dependency:
"tdt/triples": "dev-master"
After that run the composer update command.
$ composer update
3) Migrate
The package needs a few extra datatables for its configuration, so go ahead and run the migration command!
$ php artisan migrate --package=tdt/triples
4) Notify core
Let the core application know you have added functionality it should take into account. Do this by adding 'Tdt\Triples\TriplesServiceProvider' to the app.php file located in the app/config folder.
Next, tell the core application it has new UI (CSS and JS) assets it has to take into account, do this by the following command:
$ php artisan asset:publish tdt/triples
You're ready to start using tdt/triples. Each api resource in the datatank is located under | http://docs.thedatatank.com/5.12/triples_introduction | 2018-08-14T13:26:26 | CC-MAIN-2018-34 | 1534221209040.29 | [] | docs.thedatatank.com |
Reproducing Failures¶
One of the things that is often concerning for people using randomized testing like Hypothesis is the question of how to reproduce failing test cases.
Fortunately Hypothesis has a number of features in support of this. The one you will use most commonly when developing locally is the example database, which means that you shouldn’t have to think about the problem at all for local use - test failures will just automatically reproduce without you having to do anything.
The example database is perfectly suitable for sharing between machines, but there currently aren’t very good work flows for that, so Hypothesis provides a number of ways to make examples reproducible by adding them to the source code of your tests. This is particularly useful when e.g. you are trying to run an example that has failed on your CI, or otherwise share them between machines.
Providing explicit examples¶
You can explicitly ask Hypothesis to try a particular example, using
hypothesis.
example(*args, **kwargs)[source]¶
A decorator which ensures a specific example is always tested.
Hypothesis will run all examples you’ve asked for first. If any of them fail it will not go on to look for more examples.
As with
@given, it is not permitted for a single example to be a mix of
positional and keyword arguments.
Either are fine, and you can use one in one example and the other in another
example if for some reason you really want to, but a single example must be
consistent.
Reproducing a test run with
@seed¶
hypothesis.
seed(seed)[source]¶
seed: Start the test execution from a specific seed.
May be any hashable object. No exact meaning for seed is provided other than that for a fixed seed value Hypothesis will try the same actions (insofar as it can given external sources of non- determinism. e.g. timing and hash randomization).
Overrides the derandomize setting, which is designed to enable deterministic builds rather than reproducing observed failures.
When a test fails unexpectedly, usually due to a health check failure,
Hypothesis will print out a seed that led to that failure, if the test is not
already running with a fixed seed. You can then recreate that failure using either
the
@seed decorator or (if you are running pytest) with
--hypothesis-seed.
Reproducing an example with with
@reproduce_failure¶
Hypothesis has an opaque binary representation that it uses for all examples it
generates. This representation is not intended to be stable across versions or
with respect to changes in the test, but can be used to to reproduce failures
with the
@reproduce_example decorator.
hypothesis.
reproduce_failure(version, blob)[source]¶
Run the example that corresponds to this data blob in order to reproduce a failure.
A test with this decorator always runs only one example and always fails. If the provided example does not cause a failure, or is in some way invalid for this test, then this will fail with a DidNotReproduce error.
This decorator is not intended to be a permanent addition to your test suite. It’s simply some code you can add to ease reproduction of a problem in the event that you don’t have access to the test database. Because of this, no compatibility guarantees are made between different versions of Hypothesis - its API may change arbitrarily from version to version.
The intent is that you should never write this decorator by hand, but it is
instead provided by Hypothesis.
When a test fails with a falsifying example, Hypothesis may print out a
suggestion to use
@reproduce_failure on the test to recreate the problem
as follows:
>>> from hypothesis import settings, given, PrintSettings >>> import hypothesis.strategies as st >>> @given(st.floats()) ... @settings(print_blob=PrintSettings.ALWAYS) ... def test(f): ... assert f == f ... >>> try: ... test() ... except AssertionError: ... pass Falsifying example: test(f=nan) You can reproduce this example by temporarily adding @reproduce_failure(..., b'AAAA//AAAAAAAAEA') as a decorator on your test case
Adding the suggested decorator to the test should reproduce the failure (as
long as everything else is the same - changing the versions of Python or
anything else involved, might of course affect the behaviour of the test! Note
that changing the version of Hypothesis will result in a different error -
each
@reproduce_failure invocation is specific to a Hypothesis version).
When to do this is controlled by the
print_blob
setting, which may be one of the following values:
- class
hypothesis.
PrintSettings[source]¶
Flags to determine whether or not to print a detailed example blob to use with
reproduce_failure()for failing test cases.
INFER= 1¶
Make an educated guess as to whether it would be appropriate to print the blob.
The current rules are that this will print if both: | https://hypothesis.readthedocs.io/en/latest/reproducing.html | 2018-08-14T13:27:23 | CC-MAIN-2018-34 | 1534221209040.29 | [] | hypothesis.readthedocs.io |
8.3
Denxi Journal
Sage L. Gerard (base64-decode #"c2FnZUBzYWdlZ2VyYXJkLmNvbQ")
This is a development journal for Denxi, with entries sorted in reverse chronological order.
This log does not maintain links to other Denxi documents, because a claim about Denxi in an entry applies only to the context of that entry. It is possible for a change in Denxi to break the logic of at least zero log entries, and such transitions will be captured in a new log entry.
For all documentation, see Denxi Documentation. | https://docs.racket-lang.org/denxi-journal/index.html | 2022-01-16T22:05:58 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.racket-lang.org |
Lab Templates
What's a Template?
The concept of a Lab Template is core to the Snap Labs platform. A Template is a collection of all the data we need to automatically deploy your lab environment within minutes. It includes things like:
- System Details
- Networking Design
- Lab Documents
- Lab Metadata
- Estimated Runtime Cost
Any currently deployed lab can be Templated for automated deployment, all without the lab creator needing to write a single line of code! That's right, no writing Infrastructure as Code deployments. This means any Snap Labs user can instantly build their own lab environment and create a Template to allow anyone with a Snap Labs account deploy it. No hardware, no code, and no fuss.
Creating a Template
There are two approaches to take when creating a new lab template. You can make changes to an existing template, or start completely from scratch.
Launch an Existing Lab Template
To build off of an existing lab template, you'll need to launch a new copy of that lab from the Launch New Lab page. Select from your available Templates and Launch!
You can also browse the Templates available to you by visiting the Templates page. Selecting the Launch icon of your desired template will automatically link you to that template in the lab launching screen.
After launching your desired starter template, make your changes and customizations. For help, check out our guides on Lab Building.
Launch a new Blank Lab Template
You may elect to design a lab that's entirely your own. Awesome! To do this, select New Blank Lab from the lab launching screen.
The Lab Marketplace
It's our goal to make sharing the incredible lab designs and scenarios contributed by our partners and the community as easy as possible. To accomplish this, we're building out a Lab Marketplace which will feature Snap Labs designed templates and labs from the community. This work is underway and we're excited to see what you create!
Featured Templates
For now, the Lab Marketplace is a collection of featured templates developed and maintained by Snap Labs and our partners. Featured Templates are available for anyone to launch and include six distinct environments:
- Shirts Corp - A retail themed lab for pentesters and defenders
- Eagle Bank - A more sophisticated network with financial institution themes and artifacts
- Spark Studio - Attack a segmented network and analyze with an Elastic Stack
- DetectionLab - Chris Long's lab to simplify testing, analysis, research for defensive security practitioners
- Attack Range - Spin up Splunk's popular Attack Range project faster than ever
- Red Team Ops - ZeroPoint Security's Red Team Ops training course, powered by Snap Labs
Sharable Links
Lab Templates are now shareable! Snap Labs users with the Creator role or above can make custom templates in their account public, and provide a shareable link for others to launch their labs.
To make a custom template shareable, select the desired template from the Lab Templates page, then select "Share Template" under the Sharing section. This will make your template public, and anyone with the link will have the ability to Import and Launch your custom lab designs.
Public Templates
By making your Lab Template public, you are allowing any Snap Labs user with the template link to launch your lab design. These users can then further customize and template the lab.
Once a lab has been shared you cannot revoke access to that template from anyone who has already imported it into their account. You should treat sharing a lab template like creating an open source project.
Once a Template is public, the AMI's for template are also made public and will be available to all AWS accounts.
Updated 6 months ago | https://docs.snaplabs.io/docs/launching-from-a-template | 2022-01-16T22:08:36 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.snaplabs.io |
Update By Query¶
The
Update By Query object¶
The
Update By Query object enables the use of the
_update_by_query
endpoint to perform an update on documents that match a search query.
The object is implemented as a modification of the
Search object, containing a
subset of its query methods, as well as a script method, which is used to make updates.
The
Update By Query object implements the following
Search query types:
- queries
- filters
- excludes
For more information on queries, see the Search DSL chapter.
Like the
Search object, the API is designed to be chainable. This means that the
Update By Query object
is immutable: all changes to the object will result in a shallow copy being created which
contains the changes. This means you can safely pass the
Update By Query object to
foreign code without fear of it modifying your objects as long as it sticks to
the
Update By Query object APIs.
You can define your client in a number of ways, but the preferred method is to use a global configuration. For more information on defining a client, see the Configuration chapter.
Once your client is defined, you can instantiate a copy of the
Update By Query object as seen below:
from elasticsearch_dsl import UpdateByQuery ubq = UpdateByQuery().using(client) # or ubq = UpdateByQuery(using=client)
Note
All methods return a copy of the object, making it safe to pass to outside code.
The API is chainable, allowing you to combine multiple method calls in one statement:
ubq = UpdateByQuery().using(client).query("match", title="python")
To send the request to Elasticsearch:
response = ubq.execute()
It should be noted, that there are limits to the chaining using the script method: calling script multiple times will overwrite the previous value. That is, only a single script can be sent with a call. An attempt to use two scripts will result in only the second script being stored.
Given the below example:
ubq = UpdateByQuery().using(client).script(source="ctx._source.likes++").script(source="ctx._source.likes+=2")
This means that the stored script by this client will be
'source': 'ctx._source.likes+=2' and the previous call
will not be stored.
For debugging purposes you can serialize the
Update By Query object to a
dict
explicitly:
print(ubq.to_dict())
Also, to use variables in script see below example:
ubq.script( source="ctx._source.messages.removeIf(x -> x.somefield == params.some_var)", params={ 'some_var': 'some_string_val' } )
Serialization and Deserialization¶
The search object can be serialized into a dictionary by using the
.to_dict() method.
You can also create a
Update By Query object from a
dict using the
from_dict
class method. This will create a new
Update By Query object and populate it using
the data from the dict:
ubq = UpdateByQuery.from_dict({"query": {"match": {"title": "python"}}})
If you wish to modify an existing
Update By Query object, overriding it’s
properties, instead use the
update_from_dict method that alters an instance
in-place:
ubq = UpdateByQuery(index='i') ubq.update_from_dict({"query": {"match": {"title": "python"}}, "size": 42})
Extra properties and parameters¶
To set extra properties of the search request, use the
.extra() method.
This can be used to define keys in the body that cannot be defined via a
specific API method like
explain:
ubq = ubq.extra(explain=True)
To set query parameters, use the
.params() method:
ubq = ubq.params(routing="42")
Response¶
You can execute your search by calling the
.execute() method that will return
a
Response object. The
Response object allows you access to any key
from the response dictionary via attribute access. It also provides some
convenient helpers:
response = ubq.execute() print(response.success()) # True print(response.took) # 12
If you want to inspect the contents of the
response objects, just use its
to_dict method to get access to the raw data for pretty printing. | https://elasticsearch-dsl.readthedocs.io/en/latest/update_by_query.html | 2022-01-16T22:26:56 | CC-MAIN-2022-05 | 1642320300244.42 | [] | elasticsearch-dsl.readthedocs.io |
Table of Contents
Product Index. | http://docs.daz3d.com/doku.php/public/read_me/index/63271/start | 2021-10-16T08:01:39 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.daz3d.com |
Once you activate WooCommerce plugin for the first time, you are invited to set up your shop using the WooCommerce setup wizard. It is pretty intuitive and allows you to get your shop running in no time.
Nothing can describe getting started with WooCommerce better than its original documentation. Here are links to help you get started:
If you want to get started manually, or skipped the setup wizard, then you need to get a set of shop pages.
Here is what you need to do:
Now you should have a set of standard shop pages and are ready to work with WooCommerce. | https://docs.clbthemes.com/ohio/woocommerce/ | 2021-10-16T08:34:15 | CC-MAIN-2021-43 | 1634323584554.98 | [array(['https://colabrio.ams3.cdn.digitaloceanspaces.com/ohio-docs/docs__sc_32-min-scaled.jpg',
None], dtype=object)
array(['https://colabrio.ams3.cdn.digitaloceanspaces.com/ohio-docs/docs__sc_33-min-scaled.jpg',
None], dtype=object)
array(['https://colabrio.ams3.cdn.digitaloceanspaces.com/ohio-docs/docs__sc_34-min-scaled.jpg',
None], dtype=object) ] | docs.clbthemes.com |
Donut Mod is an extensive campaign mod with a new version of the classic Buzz Cola conspiracy story told through all new story missions.
It also features a number of new cars, characters, locations and collectibles.
Public Beta
The public beta version.
Version History
See All Versions. | https://docs.donutteam.com/docs/donutmod/intro | 2021-10-16T08:01:58 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.donutteam.com |
Date: Fri, 4 May 2018 12:04:09 +0700 From: Victor Sudakov <[email protected]> To: Carl Johnson <[email protected]> Cc: [email protected] Subject: Re: Alternative to x11/gnome3 ? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
Carl Johnson wrote: > Victor Sudakov <[email protected]> writes: > > > tech-lists). > >> > > >> > Thanks in advance for any input. > >> > > >> > >> I use xdm with xfce4 for this. > > > > Could you please tell more about it. What packages you had to install > > besides x11/xfce, how you configured xdm and xfce4 to support user > > switching, how you start xfce4. > > I haven't used it, but XFCE does have a 'switch user' button for the > panel. Yes, it does, but the button is grey and cannot be used. > It is part of the 'action buttons' panel addon, but appears to > be part of the basic install. Those action buttons also include things > like lock screen, suspend, hibernate, etc. Let me know if you need more > information. Yes, I need more information please. In my setup, the 'switch user' button is grey (inactive). How do I make it actually work? How is it even *supposed* to work? xdm per se certainly does not do "user switching" in the sense Windows or Ubuntu allow it (with the disconnected user's X-clients still running in the background). -- Victor Sudakov, VAS4-RIPE, VAS47-RIPN AS43859
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=303185+0+/usr/local/www/mailindex/archive/2018/freebsd-questions/20180506.freebsd-questions | 2021-10-16T09:06:27 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.freebsd.org |
QueryProcedures
Synopsis
[SQL] QueryProcedures=n
n is either 1 or 0. The default value is 0.
Description
When QueryProcedures is enabled (n = 1), all SQL class queries project as SQL Stored Procedures, regardless of the query’s SqlProc value. When this parameter is not enabled, only class queries defined with SqlProc=1 project as Stored Procedures.
When changing this setting, you must recompile the classes with the class queries in order for this change to have an affect. Modifying this setting in the CPF does not require a n instance restart to make it active.
Changing This Parameter
To set the desired value for QueryProcedures from the Terminal, use the SetOption(“QueryProcedures”)
method of the %SYSTEM.SQL.Util
class. See the class reference for details.
You can also change QueryProcedures with the Config.SQL
class (as described in the class reference) or by editing the CPF in a text editor (as described in the Editing the Active CPF section of the “Introduction to the Configuration Parameter File” chapter in this book). | https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=RACS_QUERYPROCEDURES | 2021-10-16T09:15:55 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.intersystems.com |
Date: Sat, 16 Oct 2021 08:21:44 +0000 (GMT) Message-ID: <1070563217.46742.1634372504646@9c5033e110b2> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_46741_1721384001.1634372504646" ------=_Part_46741_1721384001.1634372504646 T= ransformation Reference.
There are function equivalents to this transformation:.
true, then the listed value is written to the new = column.
false, then the next case is tested.
true, then t= he default value is written.
case cases: [totalOrdersQ3 < 10, true], [lastO= rderDays > 60, true] default: false as: 'sendCheckinEmail'
Output: If the total orders in Q3 < 10 OR the l=
ast order was placed more than 60 days ago, then write
true in the
sendCheckinEmail. Otherwise, write
f=
alse.
case [if: if_expression] [then:'str_if_true'] [el=
se:'str_if_false] [col:col1] [colCases: [[Match1,Val1]],[[Match2,Val2]]
For more information on syntax standards, see Language Documentation Syntax Notes= .
For if-then-else condition types, this value is an expression to test. E=
xpression must evaluate to
true or
false.
Usage Notes:
For if-then-else condition types, this value is a literal value to write=
in the output column if the expression evaluates to
true=
.
Usage Notes:
For if-then-else condition types, this value is a literal value to write=
in the output column if the expression evaluates to
false.
Usage Notes:
For single-case condition types, this value identifies the = column to test.
Usage Notes:
For single-case condition types, this parameter contains a = comma-separated set of two-value arrays.
You can specify one or more cases as comma-separated two-va= lue arrays.
Usage Notes:
For multi-case condition types, this parameter contains a c= omma-separated set of two-value arrays.
trueor
false.=
You can specify one or more cases as comma-separated two-va= lue arrays.
Usage Notes:
For single-case and multi-case condition types, this parame=
ter defines the value to write in the new column if none of the cases yield=
s a
true result.
Usage Notes:
Name of the new column that is being generated. If the
as parameter is not specified, a default name is used.
Usage Notes:
Tip: For additional examples, see Common Tasks.
See above.
=20=20 | https://docs.trifacta.com/exportword?pageId=160408641 | 2021-10-16T08:21:44 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.trifacta.com |
How to set up data checks and data processings in NAV mappings
This article applies to all import cases, whether the original message was received as an XML file, as an EDIFACT file and so on. The goals of the checks and processings are to determine if all of the required data has been transferred in the message (and if that’s not the case, stop the processing), if all necessary data in the local NAV system can be found (customer, vendors, items etc.) and that all data that has been transferred is also valid (are the prices that were sent for this article correct? Or are they outdated?). For all these checks and processings you need to use a NAV type mapping because any data manipulation can only be done in a NAV type mapping.
The first possibility is the command “TESTFIELD”. When you create a new mapping line, set the Type to “Command” and the Command Type to “TESTFIELD”. The compare type often used is “Not Blank” which checks if a field contains a value or not. This is useful for external item numbers or GTIN numbers when processing orders – the number you don’t have cannot be translated to the internal NAV item no. The same applies to the customer and vendor numbers. When you use a value translation for the contact types and a cross reference to get the customers and vendors from the database via a GLN for example, you can use the testfield to check afterwards if the field “Internal No.” in the corresponding EDI Contact has been filled (e.g. the customer was found) and display an error if the field is empty.
You can use the other compare types like EQUAL / NOT EQUAL and GREATER / LESS to perfom similar checks. To stay at the example of the prices: is the price from the message equal to the one that is set for this item and this customer/vendor in the current period in your database? It’s possible to use a customised error text for every TESTFIELD which will be shown during the (test) conversion and in the processing queue.
When you are receiving an order message and you get the manufacturer’s item number in the message, you might want to identify the item in your database. If you maintain the manufacturer’s item numbers in your item table for example, you can use an NAV mapping to retrieve our item and write it’s number to the EDI Document Line. How do you do this?
At first you write the manufacturer’s item no. to the field “External Item No.” in the table “EDI Document Line in your import mapping. In the following NAV mapping you setup a loop over all EDI Document Lines and indented below the loop you insert a loop over the table “Item”. In the data item link you connect the two tables by the fields that contain the same numbers: “External Item No.” from the table “EDI Document Line” and the one from your database, where you maintain the numbers. If the item table loop finds the appropriate item you can write the item no. to the EDI Document Line, to the field “No.”, in a data mapping line indented below the item table loop.
If you want to check afterwards if there’s really an item no. in your EDI Document Line, you can use the TESTFIELD function on the field “No.” and show an error if it’s empty.
You don’t need to transfer every data from the NAV tables to the EDI Document, when our module creates the records in the NAV tables every OnValidate trigger will be executed and all data pulled from the database automatically will be written to the sales document by the system. | https://docs.anveogroup.com/en/kb/how-to-set-up-data-checks-and-data-processings-in-nav-mappings/ | 2021-10-16T08:57:01 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.anveogroup.com |
Hashtag is a text tag that can be set on task, post or attachment. Those tags can be used to describe object in detail, they can be searched for either in address bar or in Search tab. Every object can be assigned virtually unlimited amount of tags.
You can add hashtag to a task in task properties, in “Hashtags” area, which is opened by clicking # icon, or you can use inline editing to add tags via “Hashtag” column.
To start a quick search on tasks containing a certain hashtag you can leftclick that hashtag with Ctrl button pressed, or you can rightclick on it and select “Search by hashtag” from the menu. This will open Search tab with set criteria and search results.
You can search on several hashtags by Ctrl-leftclicking several hashtags. They will be searched for automatically.
You can add a hashtag while creating or editing a forum post.
Post hashtags are shown at the beginning of text.
By clicking on link you will open a search panel with all project posts that have this hashtag attached.
You can add hashtag to an attachment during creation / editing a forum post containing that attachment. | https://docs.cerebrohq.com/en/articles/3309641-hashtags | 2021-10-16T08:59:26 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.cerebrohq.com |
Flocker is under active deployment and we receive a lot of questions about how this or that will be done in a future release. You can find these questions in the Future Functionality section below. You can also view ideas for future versions of Flocker.
If you want to get involved in a discussion about a future release or have a question about Flocker today, get in touch on our Freenode IRC channel #clusterhq or the Flocker Google group.
There is a good write up of the ZFS and Linux license issues on the ZFS on Linux website. In short, while ZFS won’t be able to make it into mainline Linux proper due to licensing issues, “there is nothing in either license that prevents distributing it in the form of a binary module or in the form of source code.”
ZFS on Linux is already in use in companies and institutions all over the world to the tune of hundreds of petabytes of data. We are also rigorously testing ZFS on Linux to make sure it is stable. ZFS is production quality code.
Flocker manages Docker applications and Docker runs on Linux, so Flocker runs on Linux. However, you do not need to be running Linux on your development machine in order to manage Docker containers with the flocker-cli. See Installing flocker-cli for installation instructions for various operating systems.
Over time, we hope that Flocker becomes the de facto way for managing storage volumes with your favorite orchestration framework. We are interested in expanding libswarm to include support for filesystems and are talking with the various open source projects about the best way to collaborate on storage and networking for volumes. If you’d like work with us on integration, get in touch on our Freenode IRC #clusterhq or the Flocker Google group. You can also submit an issue or a pull request if you have a specific integration that you’d like to propose.
Thankfully no. This is where ZFS makes things really cool. Each clone is essentially free until the clone is modified. This is because ZFS is a copy-on-write filesystem, so a clone is just a set of block pointers. It’s only when a block is modified that the data is copied, so a 2GB database that is cloned five times still just uses 2GB of disk space until a copy is modified. That means, when the database is modified, only the changes are written to disk, so your are only storing the net new data. This also makes it really fast to create database clones.
The idea will be that cloning the app and the database together in some sense allows the containers to maintain what we call independent “links” between 10 instances of the app server (deployed at different staging URLs) and the respective 10 different instances of the cloned database. This works because e.g. port 3306 inside one app server gets routed via an ephemeral port on the host(s) to 3306 inside the corresponding specific instance of the database.
The upshot if which is that you shouldn’t need to change the apps at all, except to configure each clone with a different URL. | https://docs.clusterhq.com/en/0.3.0/faq/index.html | 2021-10-16T09:22:13 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.clusterhq.com |
Learn How to Understand Your Pay Stub
The feelings that comes with getting paid are great. Most of the people are quite enthusiastic to receive their pay checks when it comes to getting paid. Depending of the time of payment people can receive their payment either weekly or even monthly. Unfortunately, the number of people who are not familiar with errors that comes with a pay check are significant. The number might be even higher since there are a large number of people who rarely review their paycheque stubs. Once the payment has been made, the people take them to the bank immediately. Most of the times, in the bank is where most people realize that their paycheques have errors. Avoiding these errors can be done by reviewing the paycheques once they have been received. Understand the paycheck by reading more here. VIEW HERE to easily notice these errors.
The gross pay is one of the most important part of the paycheck stub. After the tax deductions and other deduction is what the gross pay states. The gross pay is affected by some factors. The pay rate is one of the factors. The meaning of the pay rate is the amount of time taken to finish work in hours or the amount of projects done. There is need to verify the amount of time worked is accurately stated after getting paid. The commissions, tips and the bonuses and the deductions are the summary of a gross pay. There are errors in the pay stub if the gross pay is incorrect. To learn more about gross pay visit this PAGE.
The tax deductions is the other way to understand about the paystub. The amount the person receives in the bank is minus the deductions of tax. There is a difference in the amount of taxes made after one has received the paychecks. The deductions are a sum for the federal and the state government in the USA. The reasons the taxes are deducted from the paycheck is to fund the Medicare and the social security programs. Visit THIS SITE to learn more about federal and state taxes. There might be additional deductions depending on the states.
Through the employee benefits is the last way to realize these errors. There are deductions that take place once there are employee benefits. The health insurance is one of the most common type of deductions for employees benefits. Also, to add to their retirement, the employees need to contribute to the program. The employees come across many types of benefits that adds to the deductions that happens on their paycheck. To learn more about the types of deductions for employees, visit this website. After all the deductions, the amount left is what is deposited to the bank of the employee. If you want to learn more about gross pay and paychecks go to this website. | http://docs-prints.com/2021/02/12/a-quick-overlook-of-your-cheatsheet-21/ | 2021-10-16T08:15:29 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs-prints.com |
Literate Programming¶
Agda supports a limited form of literate programming, i.e. code interspersed with prose, if the corresponding filename extension is used.
Literate TeX¶
Files ending in
.lagda or
.lagda.tex are interpreted
as literate TeX files. All code has to appear in code blocks:
Ignored by Agda. \begin{code}[ignored by Agda] module Whatever where -- Agda code goes here \end{code}
Text outside of code blocks is ignored, as well as text right after
\begin{code}, on the same line.
Agda finds code blocks by looking for the first instance of
\begin{code} that is not preceded on the same line by
% or
\ (not counting
\ followed by any code point), then (starting
on the next line) the first instance of
\end{code} that is
preceded by nothing but spaces or tab characters (
\t), and so on
(always starting on the next line). Note that Agda does not try to
figure out if, say, the LaTeX code changes the category code of
%.
If you provide a suitable definition for the code environment, then literate Agda files can double as LaTeX document sources. Example definition:
\usepackage{fancyvrb} \DefineVerbatimEnvironment {code}{Verbatim} {} % Add fancy options here if you like.
The LaTeX backend or the preprocessor lhs2TeX can also be used to produce LaTeX code from literate Agda files. See Known pitfalls and issues for how to make LaTeX accept Agda files using the UTF-8 character encoding.
Literate reStructuredText¶
Files ending in
.lagda.rst are interpreted as literate
reStructuredText files. Agda will parse code following a line ending
in
::, as long as that line does not start with
..:
This line is ordinary text, which is ignored by Agda. :: module Whatever where -- Agda code goes here Another non-code line. :: .. This line is also ignored
reStructuredText source files can be turned into other formats such as HTML or LaTeX using Sphinx.
- Code blocks inside an rST comment block will be type-checked by Agda, but not rendered.
- Code blocks delimited by
.. code-block:: agdaor
.. code-block:: lagdawill be rendered,.
Literate Markdown¶
Files ending in
.lagda.md are interpreted as literate
Markdown files. Code blocks start with
``` or
```agda on
its own line, and end with
```, also on its own line:
This line is ordinary text, which is ignored by Agda. ``` module Whatever where -- Agda code goes here ``` Here is another code block: ```agda data ℕ : Set where zero : ℕ suc : ℕ → ℕ ```
Markdown source files can be turned into many other formats such as HTML or LaTeX using PanDoc.
-.
Literate Org¶
Files ending in
.lagda.org are interpreted as literate
Org files. Code blocks are surrounded by two lines including only
`#+begin_src agda2` and
`#+end_src` (case insensitive).
This line is ordinary text, which is ignored by Agda. #+begin_src agda2 module Whatever where -- Agda code goes here #+end_src Another non-code line.
- Code blocks which should be ignored by Agda, but rendered in the final document may be placed in source blocks without the
agda2label. | https://agda.readthedocs.io/en/latest/tools/literate-programming.html | 2021-10-16T09:36:25 | CC-MAIN-2021-43 | 1634323584554.98 | [] | agda.readthedocs.io |
Message-ID: <719143018.26377.1408518470085.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_26376_782781220.1408518470085" ------=_Part_26376_782781220.1408518470085 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Groovy requires Java, so you need to have a version available (while gro= ovy 1.6 supported JDK 1.4 or greater, for groovy 1.7 onwards, minimum JDK 1= .5 is needed). Here are the steps if you don't already have Java installed:=
Download the Groovy installer or binaries from the downloa= ds page and follow the installation instructions. (There is curre= ntly an issue where you cannot have spaces in the path where Groovy is inst= alled under windows. So, instead of accepting the default installatio= n path of "c:\Program Files\Groovy" you will want to change the p= ath to something like "c:\Groovy")
OR
You may wish to obtain optional jar files, either corresponding to Groov= y modules (see module documentation for details) or corresponding to other = Java classes you wish to make use of from Groovy. Some possibilities are li= sted below:
The recommended way for making Groovy be aware of your additional jar fi=
les is to place them in a predefined location. Your Groovy install should i=
nclude a file called
groovy-starter.conf. Within that file, ma=
ke sure a line such as
load ${user.home}/.groovy/lib/*=20
is not commented out. The
user=
.home system property is set by your operating system. (Mine is
.groovy/lib directory.
(Note: as an alternative, y= ou can set up a CLASSPATH variable and make sure it mentions all of your ad= ditional jar files, otherwise Groovy works fine with an empty or no CLASSPA= TH variable.)
In the top part of the window of the groovyConsole, type the following= p>
println "Hello, World!"=20
And then type <CTRL-R>.
= Notice that the text gets printed out in the OS console window (the black o= ne behind the groovyConsole window) and the bottom part of the groovyConsol= e says:
groovy> println "Hello, World!" null=20
The line starting with "groovy&= gt;" is just the text of what the console processed. The "null&qu= ot; is what the expression "evaluated to". Turns out the expressi= on to print out a message doesn't have any "value" so the groovyC= onsole printed "null".
Next try something with an actual va= lue. Replace the text in the console with:
123+45*67=20
or your favorite arithmetic expressi= on,:
x =3D 1 println x x =3D new java.util.Date() println x x =3D -3.1499392 println x x =3D false println x x =3D "Hi" println x=20
The Groovy language has built-in support for two important data types, l= ists and maps (Lists can be operated as arrays in Java language). Lists are= used to store ordered collections of data. For example an integer list of = your favorite integers might look like this:
myList =3D [1776, -1, 33, 99, 0, 928734928763]=20
You can access a given item in the l= ist with square bracket notation (indexes start at 0):
println myList[0]=20
Should result in this output:
1776=20
You can get the length of the list w= ith the "size" method:
println myList.size()=20
Should print out:
6=20
But generally you shouldn't need the= length, because unlike Java, the preferred method to loop over all the ele= ments in an list is to use the "each" method, which is described = below in the "Code as Data" section.
Another native data st= ructure is called a map. A map is used to store "associative arrays&qu= ot; or "dictionaries". That is unordered collections of heterogen= eous, named data. For example, let's say we wanted to store names with IQ s= cores we might have:
scores =3D [ "Brett":100, "Pete":"Did = not finish", "Andrew":86.87934 ]=20
Note that each of the values stored = in the map is of a different type. Brett's is an integer, Pete's is a strin= g, and Andrew's is a floating point number. We can access the values in a m= ap in two main ways:
println scores["Pete"] println scores.Pete=20
Should produce the output:
Did not finish Did not finish=20
To add data to a map, the syntax is = similar to adding values to an list. For example, if Pete re-took the IQ te= st and got a 3, we might:
scores["Pete"] =3D 3=20
Then later when we get the value bac= k out, it will be 3.
println scores["Pete"]=20
should print out 3.
Also as an= aside, you can create an empty map or an empty list with the following:
emptyMap =3D [:] emptyList =3D []=20
To make sure the lists are empty, yo= u can run the following lines:
println emptyMap.size() println emptyList.size()=20
Should print a size of 0 for the Lis= t and the Map.
One of the most important features of any programming language is the ab= ility to execute different code under different conditions. The simplest wa= y to do this is to use the '''if''' construct. For example:
amPM =3D Calendar.getInstance().get(Calendar.AM_PM) if (amPM =3D=3D Calendar.AM) { =09println("Good morning") } else { =09println("Good evening") }=20&qu= ot; block is not required, but the "then" block is:
amPM =3D Calendar.getInstance().get(Calendar.AM_PM) if (amPM =3D=3D Calendar.AM) { =09println("Have another cup of coffee.") }=20
There is a special data type in most programming languages that is used = to represent truth values, '''true''' and '''false'''. The simplest boolean= expression are simply those words. Boolean values can be stored in variabl= es, just like any other data type:
myBooleanVariable =3D true=20
A more complex boolean expression us= es one of the boolean operators:
* =3D=3D * !=3D * > * >=3D * < * <=3D=20
Most of those are probably pretty in= tuitive. The equality operator is '''=3D=3D''' to distinguish from the assi= gnment operator '''=3D'''. The opposite of equality is the '''!=3D''' opera= tor, that is "not equal"
So some examples:
titanicBoxOffice =3D 1234600000 titanicDirector =3D "James Cameron" trueLiesBoxOffice =3D 219000000 trueLiesDirector =3D "James Cameron" returnOfTheKingBoxOffice =3D 752200000 returnOfTheKingDirector =3D "Peter Jackson" theTwoTowersBoxOffice =3D 581200000 theTwoTowersDirector =3D "PeterJackson" titanicBoxOffice > returnOfTheKingBoxOffice // evaluates to true titanicBoxOffice >=3D returnOfTheKingBoxOffice // evaluates to true titanicBoxOffice >=3D titanicBoxOffice // evaluates to true titanicBoxOffice > titanicBoxOffice // evaluates to false titanicBoxOffice + trueLiesBoxOffice < returnOfTheKingBoxOffice + theTwo= TowersBoxOffice // evaluates to false titanicDirector > returnOfTheKingDirector // evaluates to false, beca= use "J" is before "P" titanicDirector < returnOfTheKingDirector // evaluates to true titanicDirector >=3D "James Cameron" // evaluates to= true titanicDirector =3D=3D "James Cameron" // evaluates to = true=20
Boolean expressions are especially u= seful when used in conjunction with the '''if''' construct. For example:
if (titanicBoxOffice + trueLiesBoxOffice > returnOfTheKingBo= xOffice + theTwoTowersBoxOffice) { =09println(titanicDirector + " is a better director than " + retu= rnOfTheKingDirector) }=20
An especially useful test is to test= whether a variable or expression is null (has no value). For example let's= say we want to see whether a given key is in a map:
suvMap =3D ["Acura MDX":"\$36,700", "F= ord Explorer":"\$26,845"] if (suvMap["Hummer H3"] !=3D null) { =09println("A Hummer H3 will set you back "+suvMap["Hummer = H3"]); }=20
Generally null is used to indicate t= he lack of a value in some location. | http://docs.codehaus.org/exportword?pageId=30854 | 2014-08-20T07:07:50 | CC-MAIN-2014-35 | 1408500800767.23 | [] | docs.codehaus.org |
Help Center
Local Navigation
BlackBerry Java Development Environment
The BlackBerry® Java® Development Environmentis a fully integrated development and simulation environment for building a BlackBerry® Java Application for BlackBerry devices. With the BlackBerry JDE, developers can build applications using the Java® ME programming language and the extended Java APIs for BlackBerry.
The BlackBerry Java Development Environment includes the following development tools:
- BlackBerry® Integrated Development Environment
- BlackBerry Smartphone Simulator
- Java ME APIs and BlackBerry APIs
- sample applications
The BlackBerry IDE includes a full suite of editing and debugging tools that are optimized for the development of a BlackBerry Java Application. TheBlackBerry Smartphone Simulator provides a complete Windows® type environment, and is designed to simulate UIs and user interaction, network connections, email services, and wireless data synchronization.
The BlackBerry Java Development Environment Component Package includes the following development tools for development within third-party IDEs such as NetBeans™ or Eclipse™:
-® Signature Tool: You can use this tool to send code signature requests to the BlackBerry® Signing Authority Tool.
- Preverify Tool: You can use this tool to partially verify your classes before you load your application onto a BlackBerry device.
- JDWP: You can use this tool to debug applications using third-party integrated development environments.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/5827/BlackBerry_JDE_446979_11.jsp | 2014-08-20T07:02:49 | CC-MAIN-2014-35 | 1408500800767.23 | [] | docs.blackberry.com |
GValidation is a validation plugin for Griffon that provides Grails-like constraint support. In Grails, constraints are typically defined on domain classes that are bound to the database; in Griffon, however, models are usually just mere POGOs, thus this plugin is designed to work with Griffon models as well as any POGOs.
GValidation is written purely in Groovy while retaining most of the syntax of Grails constraints support.
GValidation plugin depends on the following libraries, and will automatically add them to your application once the plugin is installed.
Apache Commons Lang 2.5
Apache Commons Validator 1.3.1
Jakarta ORO 2.0.8
griffon install-plugin validation
Please use the Bug Tracker to report any bugs you find.
Once this plugin is installed, an additional annotation, @Validatable, will become available to you. Once annotated, your model object will have a dynamic field errors and a dynamic method validate injected, similar to a Grails domain class. The errors field encapsulates all errors generated on a particular model, and the validate method performs the actual validation against the constraints defined in the model.
l become available to you. Once annotated your model object will have a dyn=
amic field errors and a dynamic method validate injected similar to Grails domain class. The errors=
field encapsulates all errors generated on a particular model, and the
@Validatable
class PersonModel {
    @Bindable String name
    @Bindable String email
    @Bindable String blog

    static constraints = {
        name(blank: false)
        email(blank: false, email: true)
        blog(url: true)
    }
}
if (!model.validate()) {
    doLater {
        // display error messages
        ...
    }
} else {
    doLater {
        // do the real job
        ...
    }
}
All built-in and custom validators provided in this plugin follow the same error message code naming convention. A model-specific error message code is generated in the following format:
<modelClass>.<field>.<validator>.message
So, as in the previous example, the blank constraint on the email field will generate the error message code:
personModel.email.blank.message
Each validator, built-in or custom, also has a global default error message code associated with it, in the following format:
default.<validator>.message
So, as shown in the previous example, you can provide a global error message for the blank validator by using:
default.blank.message
You can then retrieve the error code and default error code from the Error object using the following fields respectively:
error.errorCode
error.defaultErrorCode
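As a minimal illustration of how these two fields can be used together (this is not part of the plugin API, and it assumes your messages live in a plain Java ResourceBundle named messages), you could resolve a display message from the specific code and fall back to the global default code:

def bundle = java.util.ResourceBundle.getBundle('messages')

model.errors.each { error ->
    def message
    try {
        // specific code, e.g. personModel.email.blank.message
        message = bundle.getString(error.errorCode)
    } catch (MissingResourceException e) {
        // fall back to the global code, e.g. default.blank.message
        message = bundle.getString(error.defaultErrorCode)
    }
    println message
}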
Since the v0.8 release, the validation plugin can automatically trigger validation for Griffon model beans if the realTime flag is set to true in the @Validatable annotation.
@Validatable(realTime = true)
class MyModel {
    ....
}
The real-time validation feature is implemented relying on property change support, therefore any property value change, for example one triggered by bind{}, will invoke the validation logic associated with that particular property. The actual validation is performed in a separate thread using GriffonApplication.execAsync() before it switches back to the EDT for error rendering. Currently this flag only works with Griffon MVC model beans; if it is applied to a regular POGO object the flag will be ignored.
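As an illustrative sketch (the model and property names below are made up and not part of the plugin), with realTime enabled you never call validate() yourself for these checks; simply assigning a new value to a bound property is enough to trigger its constraints, and the resulting errors can drive a bound error component (see the binding example further below):

@Validatable(realTime = true)
class LoginModel {
    @Bindable String email

    static constraints = {
        email(blank: false, email: true)
    }
}

// later, e.g. in a controller or through bind{} in the view:
model.email = 'not-an-email'   // the property change triggers the email constraints
// model.errors is populated asynchronously and, being bindable,
// can be rendered by a component bound to it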
The GValidation plugin is shipped with a set of built-in validators, or constraints, as shown in the above example.
Check if the given String field is blank.
Example:
name(blank: false)
Check if the given field is a valid credit card number.
Example:
creditCardNumber(creditCard: true)
Check if the given field is a valid email address.
Example:

homeEmail(email: true)
Check if the given field is a valid network address. It can be either a host name or an IP address. (This is a GValidation-specific constraint not available in Grails.)
Example:
hostServer(inetAddress: true)
Check if the given field is contained in the defined list.
Example:
city(inList: ['New York', 'Toronto', 'London'])
Check if the field matches the given regular expression.
Example:
login(matches:"[a-zA-Z]+")
Ensures a value's size does not exceed the given maximum value. This constraint works with collections, arrays, as well as strings.
Example:
children(maxSize:25)
firstName(maxSize:20)
Ensures a value does not exceed the given maximum value.
Example:
age(max:new Date())
price(max:999F)
Ensures a value's size does not fall below the given minimum value.
Example:
children(minSize:25)
firstName(minSize:2)
Ensures a value does not fall below the given minimum value.
Example:
age(min:new Date())
price(min:0F)
Ensures that a property is not equal to the specified value.
Examples:
login(notEqual:"Bob")
Allows a property to be set to null. By default Grails does not allow null values for properties.
Examples:
age(nullable:true)
Uses a Groovy range to ensure that a property's value occurs within a specified range. Set to a Groovy range which can contain numbers in the form of an IntRange, dates or any object that implements Comparable and provides next and previous methods for navigation.
Examples:
age(range:18..65)
createdOn(range: new Date()-10..new Date())
Uses a Groovy range to restrict the size of a collection or number or the length of a String. Sets the size of a collection or number property or String length.
Examples:
children(size:5..15)
To validate that a String value is a valid URL. Set to true if a string value is a URL. Internally uses the org.apache.commons.validator.UrlValidator class.
Examples:
homePage(url:true)
Adds custom validation to a field. Set to a closure or block to use for custom validation. A single- or no-parameter block receives the value; a two-parameter block receives the value and the object reference. The closure can return: null or true to indicate that the value is valid, or false to indicate an invalid value and use the default message code.
Examples:
// Simple custom validator
even( validator: {
    return (it % 2) == 0
})

// Custom validator with access to the object under validation
password1( validator: { val, obj ->
    obj.properties['password2'] == val
})

// Custom validator with custom error
magicNumber( validator: { val, obj ->
    def result = checkMagicNumber()
    if(!result)
        obj.errors.rejectValue('magicNumber', 'customErrorCode')
    return result
})
Many of the above explanations were borrowed directly from the Grails reference guide.
Since the constraints are defined using static fields following the Grails convention, no real inheritance can be implemented. However, since the 0.6 release the Validation plugin will basically copy the parent class' constraints to the child before performing validation, thus additionally you can also override the parent constraint in the child class. See the following example:
@Validatable
class ServerParameter {
    @Bindable String serverName
    @Bindable int port
    @Bindable String displayName

    def beforeValidation = {
        setDisplayName "${serverName}:${port}"
    }

    static constraints = {
        serverName(nullable: false, blank: false)
        port(range: 0..65535)
        displayName(nullable: false, blank: false)
    }
}

@Validatable
class ProtocolSpecificServerParameter extends ServerParameter {
    @Bindable String protocol

    def beforeValidation = {
        setDisplayName "${protocol}://${serverName}:${port}"
    }

    static constraints = {
        protocol(blank: false, nullable: false)
    }
}
In the above example, ProtocolSpecificServerParameter will not only inherit ServerParameter's serverName and port fields but also their associated constraints. The only restriction you need to be aware of is that if the parent constraint generates an error for a certain condition, then the overriding child constraint has to generate an error as well. In other words, the validation plugin does not allow error-hiding by using constraint override in the child class, similar to how exceptions are treated during method inheritance in Java.
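For instance, the following hypothetical override illustrates the kind of error-hiding the rule forbids: the parent already restricts port to 0..65535, so a child constraint that silently accepts values the parent would flag is not permitted.

class LenientServerParameter extends ServerParameter {
    static constraints = {
        // would accept values such as 99999 that the parent rejects - not allowed
        port(range: -1..99999)
    }
}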
The validator mentioned above allows you to specify a closure as a simple custom constraint easily and quickly; however, there is no easy way to reuse the closure in other scenarios, hence you will be forced to rewrite the validator each time you use it, which is inconvenient and a violation of the DRY principle. Since version 0.3, inspired by the Grails Custom Constraint plugin, the GValidation plugin provides ways to define reusable custom constraints in Griffon.
Once you upgrade the plugin to version 0.3 or above, a new artifact type called Constraint will be added to your Griffon application. You can create new constraints by using the new script added by the plugin:
griffon create-constraint <package>.<constraint-name>
Which in turn will create a Groovy class under the griffon-app/constraints folder with a single method validate defined, where you can perform your reusable custom validation logic. A simple custom constraint typically looks like this:

class MagicConstraint {

    /**
     * Generated message
     *
     * @param propertyValue value of the property to be validated
     * @param bean object owner of the property
     * @param parameter configuration parameter for the constraint
     * @return true if validation passes otherwise false
     */
    def validate(propertyValue, bean, parameter) {
        if (!parameter)
            return true

        return propertyValue == 42
    }
}
Once created, a custom constraint pretty much behaves exactly like a built-in constraint; you can easily invoke it in your model by following the simple naming convention. Following the above example, you can apply the constraint to any field in your model by using the following declaration:
@Validatable
class DemoModel {
    ...
    @Bindable int magicNumber

    static constraints = {
        ...
        magicNumber(magic: true)
        ...
    }
}
The GValidation plugin will also take care of the error message code generation for you: as soon as your validate method returns false, the plugin automatically generates an error for the appropriate field, with the error message code generated by following the same convention as the built-in ones:
<modelClass>.<field>.<validator>.message   // specific error message code
default.<validator>.message                        // default global error message code
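As a concrete sketch, for the magic constraint applied to DemoModel.magicNumber above, the message keys would typically be wired up in griffon-app/i18n/messages.properties along these lines (the exact casing of the class-name segment and the message texts are assumptions on my part):

# specific message for the magic constraint on DemoModel.magicNumber
DemoModel.magicNumber.magic.message=magicNumber must be the magic number

# global fallback used when no field-specific message is defined
default.magic.message=This value failed the magic number check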
As mentioned before, once installed the plugin enhances all your model objects with an additional validate() method. Once invoked, this method performs a validate-all, executing all constraints on the given model. A typical usage scenario:
model.validate()
...
if (model.hasErrors()) {
    // notify user
    ...
}
Originally proposed by Andres Almiray, since v0.3 GValidation offers the capability to perform validation on only a selected number of fields in the model instead of all of them. Here is a typical single-field validation usage scenario:
model.validate('name')
...
if (model.hasErrors()) {
    // notify user
    ...
}
Here is how to perform validation on multiple fields:
model.validate(['name', 'email'])
...
if (model.hasErrors()) {
    // notify user
    ...
}
Inspired by the Rails before_validation callback, GValidation now provides a similar pre-validation callback to give the model developer a chance to manipulate data right before validation. Here is an example of how this kind of callback is defined:
class ServerModel {
    @Bindable String serverName
    @Bindable int port
    @Bindable String stringForm

    def beforeValidation = {
        setStringForm "${serverName}:${port}"
    }
    ...
}
Although GValidation is not built on top of the Spring Validation framework as the Grails constraints are, it still tries to maintain some API consistency when it comes to error generation. However, GValidation only provides a subset implementation of the Spring Error objects in Groovy. For API details please see the Errors and Simple Error classes.
Other than checking the return value of the validate() method itself, once the validation is complete you can use the hasErrors() method that was dynamically injected into your model to check whether there are any validation errors. For example:
model.validate()

// do something else
...

if (model.hasErrors()) {
    notifyUser(model.errors)
}
You can generate a global error at the instance level by calling the reject() method:
model.errors.reject('error.code')
Or reject a specific field using the rejectValue() method:
model.errors.rejectValue('field', 'error.code')
Later on you can iterate through the errors using a Groovy iterator:
model.errors.each { error ->
    // do something with the error
}
Since the 0.4 release the dynamic errors field has been enhanced to be Bindable, which means you can now bind it directly to your component. This is especially handy when building an error notification component such as the built-in ErrorMessagePanel. Here is how the binding can be achieved in the view with the built-in panel:
container(new ErrorMessagePanel(messageSource),
    id: 'errorMessagePanel',
    constraints: NORTH,
    errors: bind(source: model, 'errors'))
GValidation ships with a simple generic errorMessages widget to help you display error messages easily. Of course you can build your own error message feedback component; it is fairly easy to do, and the source code of the built-in ErrorMessagePanel is a good reference. To use the built-in error panel, first declare it in your view:
The following example works with the v0.4+ binary; if you are using an older version you need to update the errors manually in ErrorMessagePanel.
panel(id: 'demoPanel') {
    borderLayout()
    errorMessages(constraints: NORTH, errors: bind(source: model, 'errors'))
    // the rest of your view
}
Later in your controller you can update the error messages:
def doSomething = { evt = null ->
    if (!model.validate()) {
        doLater {
            // do something
        }
    } else {
        doOutside {
            // do something interesting
        }
    }
}
Since the 0.4 release you can enhance any POGO class in your application by adding the @Validatable annotation at the class level; the Groovy AST transformation will take care of the rest.
import net.sourceforge.gvalidation.annotation.Validatable

@Validatable
class AnnotatedModel {
    String id
    String email = " "

    static constraints = {
        id(nullable: false)
        email(email: true)
    }
}
These annotated classes go through essentially the same enhancement as any model class. The only difference is that an annotated class is enhanced at build time using AST transformation, versus the runtime enhancement that happens to model instances.
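Once enhanced, such a POGO can be validated exactly like a model instance; here is a minimal usage sketch (the field values are made up for illustration):

def account = new AnnotatedModel(id: null, email: 'not-an-email')

if (!account.validate()) {
    // id fails the nullable check and email fails the email check
    assert account.hasErrors()
    account.errors.each { error ->
        // react to each error, e.g. log it or display it to the user
    }
}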
One of the common challenges we face when building a UI with any GUI framework is how to effectively and easily notify the user about errors. Ideally a validation framework should not just help the developer define constraints and validation logic, but also handle the presentation of the error messages automatically with little coding involved. With this vision in mind the Error Renderer was created.
An Error Renderer can be declared easily by using the additional synthetic attribute 'errorRenderer' introduced in the 0.7 release. See the following example:
textField(text: bind(target: model, 'email'), errorRenderer: 'for: email, styles: [highlight, popup]')
In the above example, two error renderers were declared on the textField widget for the 'email' field in the model. This means that if any error is detected for the email field in the model, two types of error renderer will be activated to display the error(s). The styles portion of the configuration is optional; if no renderer style is defined, the highlight renderer is used by default. Currently three types of error renderer styles are implemented, and I will go through them quickly here.
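For instance, the declaration below (a sketch rather than an excerpt from the plugin) omits the styles list entirely and therefore falls back to the default highlight renderer:

textField(text: bind(target: model, 'email'), errorRenderer: 'for: email')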
This renderer basically changes the background color of the component to pink. It is mostly used for text-based input fields. Here is a screen shot of the rendering result.
This renderer displays the error message associated with the error using a tooltip-like popup box. Here is a screen shot of the rendering result.
This is an invisible renderer that does not render anything itself but switches the component's visible attribute on when an error is detected. It is commonly used to display an initially invisible custom component when an error occurs. This renderer is used in combination with the new errorIcon widget also introduced in this release. Here is a screen shot of it used with errorIcon.
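Beyond errorIcon, any initially hidden component can be wired up the same way; the label below is a hypothetical example that only becomes visible once the creditCard field has an error:

label('Please double-check the credit card number',
    visible: false,
    errorRenderer: 'for: creditCard, styles: [onWithError]')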
This is basically old wine in a new bottle. This widget is essentially identical to the ErrorMessagePanel class that has existed since v0.2; however, it is now implemented as a widget to make it easier to use. Usage:
errorMessages(constraints: NORTH, errors: bind(source: model, 'errors'))
Screen Shot:
As mentioned before, this widget is mainly used in combination with the onWithError renderer. This icon widget is initially invisible and will only be turned on by the onWithError renderer. Usage:
errorIcon(errorRenderer: 'for: creditCard, styles: [onWithError]')
Screen Shot: | http://docs.codehaus.org/exportword?pageId=164626709 | 2014-08-20T06:58:08 | CC-MAIN-2014-35 | 1408500800767.23 | [] | docs.codehaus.org |
changes.mady.by.user S. Ali Tokmen
Saved on Apr 28, 2011
Saved on May 01, 2011
...
The usage of Cargo for executing functional tests on a container does not mandate this m2 plugin. You could also directly use the Cargo Java API from your Java unit test classes (JUnit, TestNG, etc), as described on the Functional testing page. On the other hand, if you are already used to the maven-surefire-plugin or maven-failsafe-plugin for your tests, then this Cargo Maven2/Maven3 plugin is probably the best option for you. It is also in most cases more straightforward to use.The choice is yours, thought the Maven2 plugin is generally more straightforward to use and integrates better with the whole build process (with profiles, easier to use deployer, proxy server support, etc.)
maven-surefire-plugin
maven-failsafe-plugin
The documentatation for this Maven2 plugin includes:
Powered by a free Atlassian Confluence Open Source Project License granted to Codehaus. Evaluate Confluence today. | http://docs.codehaus.org/pages/diffpages.action?pageId=228167582&originalId=209649968 | 2014-08-20T06:55:46 | CC-MAIN-2014-35 | 1408500800767.23 | [] | docs.codehaus.org |
Posting Multiple Leaderboard Entries as Player
Introduction
Some of our users have asked if it's possible to post more than one score to a single Leaderboard as a player. The answer is yes, and you can see how it's done with the example given in this tutorial.
We'll cover three stages in setting things up and testing:
- Create an Event with two Attributes.
- Configure your Leaderboard using the Event you've created.
- Test your configuration in the Test Harness.
Creating the Event
First, you'll need to create an Event and add two Attributes:
- The score Attribute is a Number, which will track the Maximum value.
- The character Attribute is a String. The important thing to note here is that we set the Default Aggregation Type to Grouped. This means that the underlying Running Total will use this Attribute to group the other Attributes:
Configuring the Leaderboard
Next, let's configure the Leaderboard:
As you can see from the Fields we've added:
- We're tracking the Maximum score Attribute that we just added to our score_ID Event.
- We've set the Group value for character to ID - this means that each entry, grouped by character, will be posted to a single Leaderboard.
Testing in the Test Harness
We can test the configuration in the Test Harness of our game. Simply authenticate a player and send the requests below. For this example, we're posting scores to the Leaderboard as different characters, which are defined in the character field.
First Request
{ "@class": ".LogEventRequest", "eventKey": "score_ID", "score": 50, "character": "Wizard" }
Second Requests
{ "@class": ".LogEventRequest", "eventKey": "score_ID", "score": 65, "character": "Warrior" }
Request to Return Entries
{ "@class": ".LeaderboardsEntriesRequest", "leaderboards": [ "LB_ID" ] }
Response
{ "@class": ".LeaderboardsEntriesResponse", "LB_ID": [ { "userId": "5c52f106212f9e04f44d0f66", "score": 65, "character": "Warrior", "when": "2019-01-31T13:03Z", "city": "Dublin", "country": "IE", "userName": "TestPlayer_02", "externalIds": {}, "rank": 1 }, { "userId": "5c52f106212f9e04f44d0f66", "score": 50, "character": "Wizard", "when": "2019-01-31T13:03Z", "city": "Dublin", "country": "IE", "userName": "TestPlayer_02", "externalIds": {}, "rank": 2 } ] } | http://gsp-docs-a2.s3-website-eu-west-1.amazonaws.com/tutorials/social-features/posting-multiple-leaderboard-entries-as-player.html | 2022-06-25T02:17:21 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['img/postmulti/1.png', None], dtype=object)
array(['img/postmulti/2.png', None], dtype=object)] | gsp-docs-a2.s3-website-eu-west-1.amazonaws.com |
Session Three¶ = grdcut("@earth_relief_05m", region=(-66,-60,30,35), verbose=true):
grdinfo(G) proj. 15-cm-wide Mercator plot and annotate the borders every 2°:
grdcontour(“@earth_relief_05m”, region=(-66,-60,30,35), proj=:Mercator, figsize=15, cont=250, annot=1000, show=true)
Your plot should look like our example 11 below
Result of GMT Tutorial example 11¶
Exercises:
Add smoothing with smooth=4.
Try tick all highs and lows with ticks.
Skip small features with cut=10.
Override region using region=(-70,-60,25,35)
Try another region that clips our data domain.
Scale data to km and use the km unit in the annotations.:
Nearest neighbor gridding¶
Search geometry for nearneighbor.¶ = nearneighbor("@tut_ship.xyz", region=(245,255,20,30), inc="5m", search_radius="40k"); grdcontour(G, proj=:Mercator, figsize=15, cont=250, annot=1000, show=true)
Your plot should look like our example 12 below
Result of GMT Tutorial example 12¶
Since the grid ship.nc is stored in netCDF format that is supported by a host of other modules, you can try one of those as well on the same grid.
Exercises:
Try using a 100 km search radius and a 10 minute grid spacing. region and inc switches, these preprocessors all take the same options shown below:
With respect to our ship data we preprocess it using the median method:
D = blockmedian("@tut_ship.xyz", region=(245,255,20,30), inc="5m", verbose=true);
The output data can now be used with surface:
G = surface(D, region=(245,255,20,30), inc="5m", verbose=true);. Note also that since we are appending layers to the figure, second and on commands use the bang (!) form. Here’s the recipe:
D = blockmedian("@tut_ship.xyz", region=(245,255,20,30), inc="5m", verbose=true); G = surface(D, region=(245,255,20,30), inc="5m", verbose=true); mask(D, region=(245,255,20,30), inc="5m", figsize=15) grdcontour!(G, cont=250, annot=1000) mask!(end_clip_path=true, show=true)
Your plot should look like our example 13 below
Result of GMT Tutorial example 13¶
Exercises: | https://docs.generic-mapping-tools.org/latest/tutorial/session-3_jl.html | 2022-06-25T02:14:23 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['../_images/GMT_tut_11.png', '../_images/GMT_tut_11.png'],
dtype=object)
array(['../_images/GMT_nearneighbor.png',
'../_images/GMT_nearneighbor.png'], dtype=object)
array(['../_images/GMT_tut_12.png', '../_images/GMT_tut_12.png'],
dtype=object)
array(['../_images/GMT_tut_13.png', '../_images/GMT_tut_13.png'],
dtype=object) ] | docs.generic-mapping-tools.org |
Last Updated: May 10, 2022
Perform the following steps to start the JIFFY.ai application.
,in which $MOUNTPOINT is the filesystem path where Mongo DB is installed, for example, /opt,in which $MOUNTPOINT is the filesystem path where Mongo DB is installed, for example, /opt
mongod -f $MOUNTPOINT/mongo/conf/mongo.conf
Make sure Mongo DB is up and running before starting the application. Otherwise, you may get the error as “Unable to fetch the license detail” after logging in to Jiffy.
Log in to the JIFFY.ai core server and run the following command as a Root User to start the NGINX.
- systemctl start jiffy-nginx.service
- systemctl restart jiffy-nginx.service
Server reboot brings Rabbitmq process up automatically. If Status=failed, you need to start the process manually.
systemctl start rabbitmq-server
- sudo /etc/init.d/td-agent start
- sudo /etc/init.d/td-agent start
Jiffy core services, such as stop/start Applications must be performed from jiffyapp-usr
Switch the user to JIFFY.ai app Linux user and run the following commands as JIFFY.ai app user to start the Jiffy Application.
application start all
To enter the Masterkey, you can run the history | grep -i passphrase command or type the command export and press the pg up button to get the latest passphrase.
application status
Run the following commands as JIFFY.ai app user(jiffyapp-usr) whenever there is an application restart to start the vault services.
Vault process will not come up automatically after core server reboot.
To bring up the process manually, run the following command from jiffyapp-usr.
$ application start vault
vault operator unseal {unseal key 1 which was generated during the initialization}
vault operator unseal {unseal key 2 which was generated during the initialization}
$JIFFY_HOME: Environment variable which contains Jiffy installation filesystem path.$JIFFY_HOME: Environment variable which contains Jiffy installation filesystem path.
cd $JIFFY_HOME/.vault.d/
Run the following commands to check the Vault Running Status.Run the following commands to check the Vault Running Status.
This path can be varied from one to another customer server.
ps -ef | grep vault
< | https://docs.jiffy.ai/admin_guide/it-activities/start-stop-procedure/start-core-application-server/ | 2022-06-25T01:59:04 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.jiffy.ai |
Last Updated: May 10, 2022
I am unable to start the recording in the Web UI node, get the error message as "Jiffy Extension in Google Chrome is not enabled".
Perform the following steps to enable Jiffy extension.
While executing Web UI node that has web table extraction, I get the error “Element Not Found“.
This error can occur due to multiple reasons:
While executing Web UI node that has web table extraction, I get the error “List index out of range”.
This error can occur due to multiple reasons:
During execution of Web UI node with Image/Text Recording, I get an error “Task cannot be executed. Current screen width: 1536 or Height 864 is different from the Learned Width 1920 or Height 1080”.
This error occurs when the resolution of your machine is different from the resolution while the elements were recorded.
Check if the layout is set to 100% in the display settings. (Refer to the following snapshot).
Check the Display resolution is the same as mentioned in the error for the learned element Width 1920 or Height 1080. If not change the values.
If the error persists, change the resolution to 100% and re-record the element.
When I try to record the data from Web UI node row-wise, I am getting the warning “Elements might not be correlated”.
This error occurs when after recording the first-row element of the first column, you record the row element of the second column directly. For Web Table recording, for the first column, you need to select two row elements. Ensure the first column data is displayed on the Table learning screen, then record the second column.
I am using Iterations in WebUI node using the Iterate-on option. It gives an error “Not a table object".
Iterate-on requires a Table type variable as an input. This error occurs if the variable is not of type Table. By default, when the Data table is mapped from preceding node it gets mapped as a text type variable. In the Variable tab, change it to Table type and select suitable Table Definition and rerun the task.
I have an issue with the recording of the UI node. The issue is when I type into input element of these websites and the element searched is not first in the list, for example, while recording the following element:
For such cases where there can be multiple search results, provide the complete name as follows.
Google Chrome tab is closing unexpectedly while executing the task execution with WebUI node. The Properties Close Application, Close Application on Error are available for Web UI node. If they are toggled ON, the chrome browser will close after execution or on error. Toggle them to OFF if you want the browser to remain open.
I recorded a Dropdown element, but in Actions, I am not finding 'Select' to pass the values. How do I automate in such scenarios? If the dropdown element recorded is of type Select:
dr=self.engine.get_driver() xpath="{ElementXpath}" ele=dr.find_elements_by_xpath(xpath) return ele[0].click()
The script works only for elements whose XPath remains unchanged.
When executing Web UI node, if you get an error “Chrome not reachable…..” or task execution hangs or Chrome browser does not open.
Change the properties of below Chrome Flags
In UI automation, I am unable to perform control-based familiarization for an element. I am unable to pass actions to the elements. You can perform the required actions on these elements using the user defined function Send_Keys. Create a User Defined function Send_Keys of type UI with the following inputs:
import clr clr.AddReference("System.Windows.Forms") from System.Windows.Forms import SendKeys import time time.sleep(1) SendKeys.SendWait(input)
The variable is not listed for Iterate on option, although the variable is mapped from the preceding node.
This occurs when variables Type is not changed to Table. Iterate on requires a Table type variable as an input. By default, when the Datatable is mapped from preceding node it gets mapped as a Text type variable.In the Variable tab, change it to Table type and select suitable Table Definition.
During UI automation, when I try to upload a file using file upload button on a website, the task execution hangs. This occurs when Allow access to file URLs option is not enabled for the chrome extension. Enable the same to use file upload in chrome mode.
While executing the WebUI node, I am getting the error as "Process exited with an error: 1 (Exit value: 1) [Error: importerror: dll load failed: %1 is not a valid win32 application.;] ".
WebUI node fails with error message "stale element reference element is not attached to the page document…" error.
This error occurs when the Chrome driver version does not match with the version of Chrome browser.
Unable to record elements for automation. Getting a warning “UILearn is not installed…..”
This error occurs if all Jiffy files are not installed properly due to anti-virus blocking the installation.
Check if wpfbase.exe is present in C:\jiffyservice\Learn, If not present, place the file there and restart the recording.
You can also stop the antivirus and reinstall Jiffy Client.
In input fields if I provide " ", it is changed to "& quot;", for example, “Mobile” is converted as "& quot;"Mobile"& quot;". I am unable to perform web automation due to this.
Add a function to convert the entire input including quotes to a string, for example, return str(input_text) and use the converted string as input for the web automation.
When executing WebUI node, getting "Python path is missing" error.
This can occur when there are multiple versions of Python installed in the system.
Uninstall all the already existing versions of Python from the system, remove the path from environment variables and uninstall Jiffy client.
Restart the machine and reinstall Jiffy client.
When executing WebUI node, getting "Chrome Extension is not found or disabled" error.
In the Web UI Configurations, use Chrome Deprecated mode for Browser.
Even though Run even if locked option enabled in advanced properties of the Web UI node, the task is failing if the machine is locked. The property Run even if locked fails if the WEB UI node has the Image or text-based elements in it. To overcome this, use the Login node in the task so that the bot logs in to the machine and executes the task successfully.
There is a UI element that appears only after scrolling down the webpage. During the execution, the bot is unable to scroll down to the check box and throws up an error "Element not found error".
Jiffy scrolls down to the element automatically. But if it throws the error “Element not found error”, use the following selenium expression to scroll down to the element in the dynamic script of the UI Control.
x = ‘xpath of the element' driver=self.engine.get_driver() target=driver.find_element_by_xpath(x) driver.execute_script('arguments[0].scrollIntoView(true);', target) return target | https://docs.jiffy.ai/troubleshooting_guide/web-ui-node/ | 2022-06-25T01:21:01 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.jiffy.ai |
Education#
Jupyter Notebooks offer exciting and creative possibilities in education. The following subprojects are focused on supporting the use of Jupyter Notebook in a variety of educational settings.
Teaching and Learning with Jupyter is a book about using Jupyter in teaching and learning.
- nbgrader#
tools for managing, grading, and reporting of notebook based assignments. Documentation | Repo
- jupyter4edu#
GitHub organization hosting community resources for Jupyter in education | https://docs.jupyter.org/en/latest/projects/education.html | 2022-06-25T02:00:30 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.jupyter.org |
"Can't connect to the Management Reporter server" error when you start Microsoft Management Reporter 2012
This article provides resolutions for the errors that may occur when you start Microsoft Management Reporter 2012.
Applies to: Microsoft Management Reporter 2012, Microsoft Dynamics GP, Microsoft Dynamics SL 2011
Original KB number: 2862020
Symptoms
When you start Microsoft Management Reporter 2012 (MR 2012), you receive one of the following error messages:
A connection to the server could not be established. Check the server address and try again or contact your system administrator.
or
Can't connect to the Management Reporter server. Do you want to specify a different server address?
To troubleshoot Management Reporter connection problems you need to select OK to this message and then select Test Connection to get an additional error message. You also need to go to Event Viewer to get additional information on the error. In Event Viewer, select Windows Logs and select Application. Under the Source column, look for Management Reporter Report Designer or Management Reporter Services.
Here is a list of errors received when you select Test Connection and the possible associated error(s) seen in Event Viewer. Find your error in the list and use the appropriate Cause and Resolution sections.
Connection attempt failed. There is a version mismatch between the client and the server. Contact your system administrator.
- See Cause 1
Connection attempt failed. User does not have appropriate permissions to connect to the server. Contact your system administrator.
- See Cause 2
A connection to the server could not be established. Check the server address and try again or contact your system administrator.
Note
Servername is a placeholder for your actual server name and 4712 is a placeholder for the actual port selected during the MR install. If you check the Event Viewer, you may find the following error messages:
Message: System.ServiceModel.Security.SecurityNegotiationException: SOAP security negotiation with target. See inner exception for more details. ---> System.ComponentModel.Win32Exception: The Security Support Provider Interface (SSPI) negotiation failed."
- See Cause 3
- See Cause 7
- See Cause 9
Message: System.ServiceModel.EndpointNotFoundException: There was no endpoint listening.
- See Cause 5
Message: System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly secured fault was received from the other party. See the inner FaultException for the fault code and detail. ---> System.ServiceModel.FaultException: An error occurred when verifying security for the message.
- See Cause 4
Message: System.TimeoutException: The request channel timed out attempting to send after 00:00:40. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout. ---> System.TimeoutException: The HTTP request to exceeded the allotted timeout of 00:00:39.9660000. The time allotted to this operation may have been a portion of a longer timeout. ---> System.Net.WebException: The operation has timed out
or
Message: System.ServiceModel.Security.MessageSecurityException: The security timestamp is invalid because its creation time ('2017-09-15T18:08:07.177Z') is in the future. Current time is '2017-09-1T18:00:34.847Z' and allowed clock skew is '00:05:00'.
Note
The date/time indicated above is an example of the actual date/time.
- See Cause 4
Message: System.Data.SqlClient.SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.)
- See Cause 6
Message: System.ServiceModel.Security.SecurityNegotiationException: The caller was not authenticated by the service. ---> System.ServiceModel.FaultException: The request for security token could not be satisfied because authentication failed.
- See Cause 7
Message: System.ServiceModel.ProtocolException: The remote server returned an unexpected response: (405) Method Not Allowed. ---> System.Net.WebException: The remote returned an error: (405) Method Not Allowed.
- See Cause 8
Message:Microsoft.Dynamics.Performance.Common.ReportingServerNotFoundException: The server could not be found. Make sure the server address is correct.
- See Cause 5
Message: An error occurred while receiving the HTTP response to server_name\InformationService.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down).
- See Cause 10
Cause
Cause 1
The Management Reporter Client installed is a different version than the MR Server. See Resolution 1 in the Resolution section.
Cause 2
The user trying to run MR has not been set up as a user in MR and therefore cannot connect. See Resolution 2 in the Resolution section.
Cause 3
The computer is not connected to the domain where Management Reporter is installed. See Resolution 3 in the Resolution section.
Cause 4
The time on the client and server is more than five minutes different (differences in time zones are permitted). See Resolution 4 in the Resolution section.
Cause 5
The port used during the MR installation is not set up as an exclusion within the Firewall software. See Resolution 5 in the Resolution section.
Cause 6
The Encrypt connection option was selected during the install but SSL was not configured. See Resolution 6 in the Resolution"section.
Cause 7
The computer is having problems communicating or authenticating with the domain. See Resolution 7 in the Resolution section.
Cause 8
WCF HTTP Activation is not installed on the MR Server. See Resolution 8 in the Resolution section.
Cause 9
MR Services is being run as a Domain user and WCF Authentication is failing when using the UPN (User Principal Name). See Resolution 9 in the Resolution section.
Cause 10
Named Pipes is not enabled on the MR server. See Resolution 10 in the Resolution section.
Resolution
Resolution 1
Check the Management Reporter Client install on the workstation and also check the Management Reporter Server install on the server. To check the version in Management Reporter, select Help, and then selectAbout Management Reporter. The MR Client install needs to be the same version as the MR Server install.
Resolution 2
Set up the user receiving the connection error within MR.
- Run MR as a user that is set up as an MR administrator.
- In MR select Go and then select Security.
- Add the user who is receiving the connection error.
Note
If it is not known what user(s) exist in MR you can run
select * from SecurityUser against the ManagementReporter database to find out.
Resolution 3
Management Reporter will only function while connected to the domain used during the install. Even if all MR server components are on one computer, that computer still needs to be connected to the domain you were using when you installed MR.
Note
This means that Management Reporter will not work when demonstration laptops are not physically connected to the domain or not connected using a VPN connection.
Resolution 4
Verify the time on the client and server. Change the time that is incorrect. The time must be within five minutes of each other.
Resolution 5
Set up an exception in your Firewall program. Steps will vary depending on the Firewall program used but here are high-level steps.
- Select Start and then select Run. Type WF.MSC and then press Enter.
- Select Inbound Rules.
- Select New Rule.
- Select Port and then select Next.
- Select Specific local ports and then type 4712. If you are not using the default port of 4712, you will need to type that here. Select Next.
- Select Allow the connection and then select Next.
- Select Domain, Private, and Public. Select Next.
- Type Management Reporter as the Name and then select Finish.
Resolution 6
The MR Install Guide has the following information regarding the encrypt connection option:
You must configure SSL on the server and install certificates before you can use this option. For more information about encryption in Microsoft SQL Server, see the SQL Server documentation Encrypting Connections to SQL Server.
You could also modify the config files to turn off Encryption (make a backup copy of the files before you modify them).
- In Windows Explorer, go to the MR install folder (the default install is:
C:\Program Files\Microsoft Dynamics ERP\Management Reporter\2.1)
- In the Application Service folder, find the web.config file and right-click the file to open it in Notepad.
- Locate the <connectionstrings> and change the setting Encrypt= from True to False.
- Save the changes.
- In the Process Service folder, find the MRProcessService.exe.config file and right-click the file to open it in Notepad.
- Locate the <connectionstrings> and change the setting Encrypt= from True to False.
- Save the changes.
Resolution 7
Remove the computer from the domain and then add it back to the domain.
Warning
A local administrator account will need to be used to logon to the computer one time after it is removed from the domain.
- Select Start, select Run and type sysdm.cpl to open System Properties.
- Select Change and make a note of the Domain name.
- Select Workgroup, type a name (that is, workgroup), select OK to accept changes and then restart the computer.
- After restarting, select Start, select Run and type sysdm.cpl to open System Properties.
- Select Change and then select Domain.
- Enter the domain noted above, select OK to accept changes and then restart the computer.
Resolution 8
Install WCF HTTP Activation.
- In Windows Server 2008, open Server Manager and then select Features.
- Select Add Features and then expand .NET Framework.
- Expand WCF Activation and then mark HTTP Activation.
- Select Next and then select Install.
Resolution 9
Create an SPN on the computer for the domain account running the MR Service. To create an SPN for this domain account, run the Setspn tool at a command prompt on the MR server with the following commands:
setspn -S HTTP/MRservername domain\customAccountName setspn -S HTTP/MRservername.fullyqualifieddomainname domain\customAccountName
Note
- "MRservername" should be replaced with the MR server name where the MR Application Service is installed.
- "MRservername.FullyQualifiedDomainName" should be replaced with the fully qualified domain name of the MR server where the MR Application Service is installed.
- "domain\customAccountName" should be replaced with the domain account running the MR Services.
Resolution 10
On the MR server, open Server Manager and then select Dashboard. On the right side, select Add roles and Features. This will open a wizard. Select Next until you get to the Features section. Expand .NET Framework 4.6 Features (or whatever the highest version available is). Select Named Pipes Activation. Select Next and finish the wizard.
More information
If you still receive error messages after making changes contact Microsoft Management Reporter support with the errors including details from Event Viewer. | https://docs.microsoft.com/en-US/troubleshoot/dynamics/gp/cannot-connect-to-server-error-message-when-starting-management-reporter | 2022-06-25T01:09:21 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.microsoft.com |
The Apex One rollback procedure involves rolling back Security Agents and then rolling back the Apex One server.
Administrators can only roll back the Apex One server and agents using the following procedure if the administrator chose to back up the server during the installation process. If the server backup files are not available, refer to the previously installed OfficeScan version's Installation and Upgrade Guide for manual rollback procedures.
This version of Apex One only supports rollbacks to the following OfficeScan versions:
OfficeScan XG Service Pack 1
OfficeScan XG
OfficeScan 11.0 Service Pack 1 with a Critical Patch
OfficeScan 11.0 Service Pack 1
OfficeScan 11.0 | https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-one-2019-server-online-help/appendices/product_short_name-r/rolling-back-the-pro.aspx | 2022-06-25T01:23:32 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.trendmicro.com |
Why is the bot displaying a Bot Break message after connecting to an agent?
If you still have questions or prefer to get help directly from an agent, please submit a request.
We’ll get back to you as soon as possible.
- Getting Started
- Bot Building
- Conversation Design
- Developer Guides
- Deployment
- Agent Setup
- Analytics & Reporting
- Troubleshooting Guides
- Release Notes
The ideal flow for handing over the chat to an agent is, when the user asks for chatting with an agent, the chat should be handed over. Also, the bot should display a message similar to this - "Please allow me a minute to connect you to our customer excellence team".
If your bot displays a Bot Break message, after connecting to an agent, you should check the Code Step, and make sure there are no null values present. | https://docs.haptik.ai/troubleshooting/why-is-the-bot-displaying-a-bot-break-message-after-connecting-to-an-agent | 2022-06-25T02:22:58 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.haptik.ai |
.
If you have questions or need help, create a support request, or ask Azure community support.
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-US/troubleshoot/azure/active-directory/cannot-manage-objects | 2022-06-25T02:30:33 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.microsoft.com |
Using Account Lists in Dashboards
Why Use Account Lists in Dashboards?
- Your Marketing Manager would like to view attribution from Accounts touched by the advertising agency.
- Your data analyst has crafted a query to select only Accounts from trade show interactions.
- The Revenue Manager wants to see the attribution report with only prospects from the tech industry.
Open the Engagement Dashboard, click the LIST: dropdown, and select the account list you would like to use to filter.
The dashboard will now filter by just the account_ids present in that list and update the
scorecards with the new data. | https://docs.calibermind.com/article/kbwo4jq1q8-using-account-lists-in-dashboards | 2022-06-25T01:16:56 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.calibermind.com |