content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Login and Registration
Before you can start importing Visio files to Symbio, please make sure to have your PIN at hand.
Enter your PIN and click on "Confirm". If you do not have a PIN yet, please start the registration process on the Homepage by clicking on "Register here". The registration itself requires a valid email address and your agreement to the general Terms and Services.
You will receive an email containing the PIN after you registered successfully. Please contact support if you didn’t receive an email.
Note: The PIN will be needed for future logins. | https://docs.symbioworld.com/admin/services/visio-importer/login/ | 2021-07-23T21:27:33 | CC-MAIN-2021-31 | 1627046150067.51 | [array(['../media/login.png', 'login'], dtype=object)
array(['../media/register.png', 'register'], dtype=object)] | docs.symbioworld.com |
What is the difference between apps and spaces?
Apps and spaces are both designed to help you enhance your web experience and organize your web tools better. In fact, both share many similar features and are developed in parallel on the same technologies.
The only major difference is that:
- Apps let you run multiple web apps in different windows like other desktop apps like Microsoft Outlook or Photoshop. Each app will show up on your Dock (macOS) or task bar (Windows) and you can switch between these apps using familiar keyboard shortcuts such as Cmd+Tab (macOS) or Alt+Tab (Windows).
- Spaces lets you run multiple web apps in a single window. Each website will show up on each space’s side bar as an "account". The concept is similar to how apps such as Slack lets you open multiple workspaces in their apps. With this concept, for example, you can open and switch quickly between multiple Gmail accounts, multiple Discord communities, etc. | https://docs.webcatalog.io/article/18-what-is-the-difference-between-apps-and-spaces | 2021-07-23T21:51:34 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.webcatalog.io |
Citrix Gateway service
Citrix Gateway service provides secure remote access solution with a diverse Identity and Access Management (IdAM) capabilities, delivering a unified experience into SaaS apps, heterogeneous Virtual apps and Desktops, and so forth.
Moving on-premises resources to cloud has the following benefits.
- Better and predictable financial planning
- Elasticity with pay-as-you-grow model
- High availability of service
- Less operational overhead
- Less time to add features and release to world
Citrix Cloud Services hosts a suite of services provided by Virtual Apps and Desktop service, Citrix Gateway service, ShareFile, and so forth. All these services are delivered in a single pane using Workspace experience.
Salient features of Citrix Gateway serviceSalient features of Citrix Gateway service
Some of the notable features of the Citrix Gateway service are as follows.
High Availability
Multi-Layered resiliency approach by the Citrix Gateway service provides resiliency at every level. Within a particular Citrix Gateway service Point of Presence (POP), the micro services and tenants that form the service are deployed in a highly available form. The components are deployed in the N+1 model. In this model, all components are load balanced and can do a quick failover with standby, if there is some failure. In rare cases, when all the services of a particular component within a POP are down, the Citrix Gateway service marks itself as down. This enables the DNS server to redirect users to the next nearest POP, providing a POP level high availability.
Multi-POP deployment of Citrix Gateway service
Multiple instances of Citrix Gateway service are deployed in multiple geographic locations across the globe to handle its entire customer base. All the instances are constantly in sync with each other to provide one reliable service. From anywhere in the world, the user is never too far away from a Citrix Gateway service POP.
Optimal Gateway Routing
Citrix Gateway service is deployed globally and there is a need for a mechanism to choose the nearest POP. Optimal Gateway routing or proximity routing a DNS based service is used to return end users with the closest POP location when they query for the Citrix Gateway IP address. This DNS service uses the source IP address of the query as one of the metadata to return the closest Citrix Gateway service POP IP address.
Solutions offered by Citrix Gateway serviceSolutions offered by Citrix Gateway service
Current offerings of the Citrix Gateway service are as follows.
- HDX connectivity for the Citrix Virtual Apps and Desktops users – a globally available service providing secure connectivity from users in any location to virtual apps and desktops.
- Secure access to SaaS applications – a unified user experience bringing configured SaaS applications to end-users.
- Secure access to Enterprise web apps - a unified user experience for the configured Enterprise web apps. Access Enterprise web applications hosted within your corporate network from anywhere using Citrix Workspace.
- Secure access to mobile apps in a digital workspace – a modern approach to managing all your devices through a single platform, Citrix Endpoint Management. Supported platforms include desktops, laptops, smartphones, tablets, and IoT. | https://docs.citrix.com/en-us/citrix-gateway-service/ | 2021-07-23T23:30:07 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.citrix.com |
System localization Localization allows administrators to accommodate users from a variety of different countries, using different languages and currencies, within the same instance. A digital guidebook for localization is available on the ServiceNow® Developer Site: ServiceNow Application Localization Define localesThe base system allows you to specify your locale so information such as dates, times, and currencies display properly based on your location.Language internationalization supportThe ServiceNow platform supports multiple languages, using UTF-8 for international characters.Localize price fieldsYou can localize currencies for item prices and options.Localization settingsLocalization settings control translation, currency, and locale settings in the instance.Custom translationsYou provide your own translations of applications you create and of modifications you make to the Now Platform®. You can also provide any translations to languages that ServiceNow does not provide translations for. The translation process varies depending on the type of item that you are translating. | https://docs.servicenow.com/bundle/quebec-platform-administration/page/administer/core-configuration/concept/p_Localization.html | 2021-07-23T22:50:04 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.servicenow.com |
This chapter describes some of the common ways to denormalize the physical implementation of a fully-normalized model. The chapter also briefly describing the popular technique of dimensional modeling and shows how the most useful attributes of a dimensional model can be emulated for a fully-normalized or partially-normalized physical database implementation through the careful use of dimensional views.
The term denormalization describes any number of physical implementation techniques that enhance performance by reducing or eliminating the isomorphic mapping of the logical database design on the physical implementation of that design. The result of these operations is usually a violation of the design goal of making databases application-neutral. In other words, a “denormalized” database favors one or a few applications at the expense of all other possible applications.
Strictly speaking, these operations are not denormalization at all. The concept of database schema normalization is logical, not physical. Logical denormalization should be avoided. Develop a fully-normalized design and then, if necessary, adjust the semantic layer of your physical implementation to provide the desired performance enhancement. Finally, use views to tailor the external schema to the usability needs of users and to limit their direct access to base tables (see “Dimensional Views” on page 186). | https://docs.teradata.com/r/ji8nYcbKBTVEaNYVwKF3QQ/AaGxlgY4L89WWQn98bTi1w | 2021-07-23T21:09:38 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.teradata.com |
WP Travel Engine
Documentations
We are excited to announce the release of the new version 2.0.4 with new and exciting features to the Extra Services addon. We have been developing the enhancements to provide you with multiple new options to display and sell your extra services with trips with each, which have all been bundled and released in version 2.0.4.
We have made some of the changes in the addon for our users to make adding, editing and managing Extra Services for their trips easier.
Extra Services menu is now moved from Global Settings to a new menu under WP Travel Engine in WordPress Admin Dashboard. This will let you add/edit and manage all of the global extra services from this menu, just like managing trips. We have also added a bunch of new features that will let you create unlimited extra services with elegance accommodating all of your needs.
All of the old Extra Services that you have created in Global Settings of older version of Extra Services addon will be automatically migrated under the new menu.
You can use the new menu to add/edit Extra Services, to access extra services settings, click in the extra service same as the way you would edit trips. The edit screen has an additional Settings section below the editor to manage extra service type, prices and descriptions.
A new service type option has been added to the extra services edit screen that lets you access the advanced options added in the new version update. This setting consists of two option “Default” and “Advanced”. Default service type has settings same as before that lets you add service name, cost, description and per label.
Advanced service type is the new enhanced extra service settings that lets you add multi-options extra services with multiple selection and single selection types. With this feature extra services of same type can be categorized and displayed in a single drop-down / options lists that will let your users to choose through for each trip.
Advance service type option opens up a new set of options for the Extra Service that lets you add multiple options under the service type. Options like Room types, pickup / drop locations etc. can easily accommodate under these settings.
Field types can be of two types, single selection multiple options and multiple selections multiple options.
You can add multiple options under the service with “Add Service Option” button. Each option can have same price or different prices based on your requirements.
In a single selection field type, your website visitors will be able to choose one of the options from the drop-down list defined here.
In multiple option selection type, website visitors will be able to choose multiple options set under the options.
The options are then displayed in the front-end in a slider of available options to choose from.
Adding Extra Services to your trips are similar to before from the “Extra Services” tab under trip settings metabox. As all of the Extra Services can be managed from the new menu under WP Travel Engine, there’s no need to create extra services in individual trips now. You can simply choose added extra services from the dropdown list and save the changes.
Name: *
Your email address will not be published. Required fields are marked *
Comment
Name*
Website
Save my name, email, and website in this browser for the next time I comment. | https://docs.wptravelengine.com/docs/extra-services-addon-version-2-1-0/ | 2021-07-23T22:26:55 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.wptravelengine.com |
You are viewing version 2.23 of the documentation, which is no longer maintained. For up-to-date documentation, see the latest version.
Configuring GitHub OAuth for Spinnaker
This post describes how to configure GitHub and Spinnaker to use GitHub as an OAuth2 authenticator.
Requirements:
- Ability to modify developer settings for your GitHub organization
- Access to Halyard
- A Spinnaker deployment with DNS and SSL configured
Configuring GitHub OAuth
- Login to GitHub and go to Settings > Developer Settings > OAuth Apps > New OAuth App
- Note the Client ID / Client Secret
- Homepage URL: This would be the URL of your Spinnaker service e.g.
- Authorization callback URL: This is going to match your
--pre-established-redirect-uriin halyard and the URL needs
loginappended to your gate endpoint e.g.
Configuring Spinnaker
Operator
Add the following snippet to your
SpinnakerService manifest under the
spec.spinnakerConfig.config.security.authn level:
oauth2: enabled: true client: clientId: a08xxxxxxxxxxxxx93 clientSecret: 6xxxaxxxxxxxxxxxxxxxxxxx59 # Secret Enabled Field scope: read:org,user:email preEstablishedRedirectUri: provider: github
For additional configuration options review the Spinnaker Operator Reference
Haly:
Last modified October 12, 2020: add missing syntax (#254) (4cc9aea) | https://v2-23.docs.armory.io/docs/armory-admin/authn-github/ | 2021-07-23T22:58:18 | CC-MAIN-2021-31 | 1627046150067.51 | [] | v2-23.docs.armory.io |
Armor PowerShell Module¶
This is a community project that provides a powerful command-line interface for managing and monitoring your Armor Complete (secure public cloud) and Armor Anywhere (security as a service) environments & accounts via a PowerShell module with cmdlets that interact with the published RESTful APIs.
Every code push is built on Windows via AppVeyor, as well as on macOS and Ubuntu Linux via Travis CI, and tested using the Pester test & mock framework.
Code coverage scores and reports showing how much of the project is covered by automated tests are tracked by Coveralls.
Every successful build is published on the PowerShell Gallery.
The source code is available on GitHub. | https://armorpowershell.readthedocs.io/en/stable/ | 2021-07-23T23:17:59 | CC-MAIN-2021-31 | 1627046150067.51 | [] | armorpowershell.readthedocs.io |
J.
Key Features
Equipped with latest technologies like React Hooks.
Practically Unlimited ready to use components.
Special focus on Code Splitting and Re-usability.
Integrated with all leading third party libraries.
Multi-lingual and RTL supported.
100's of pages.
Pixel perfect designing. | https://docs-jumbo-react.g-axon.work/ | 2021-07-23T22:35:20 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs-jumbo-react.g-axon.work |
CoreWeave supports both Linux and Windows Virtual Servers. Both GPU-enabled and CPU-only Virtual Servers are available for deployment, and can be configured from the variety of GPUs and CPUs in the CoreWeave fleet. CoreWeave storage can be mounted in automatically, providing high performance access to shared storage volumes accessible by other Kubernetes workloads including other Virtual Servers.
Virtual Servers, being a Kubernetes custom-resource, can be be deployed onto CoreWeave Cloud easily using such conventional methods as application of a YAML manifest via
kubectl and creation of a release via the Virtual Server helm chart. Additionally, CoreWeave provides a programmatic interface to create and manipulate Virtual Servers via the Kubernetes API server. These methods will be detailed in subsequent sections.
Once a Virtual Server is deployed, tools such as
kubectl and
virtctl can be utilized to manage and control the resources, and state of a Virtual Server.
The examples and demo files that will be used in the following sections are available in the CoreWeave kubernetes-cloud repository. | https://docs.coreweave.com/virtual-servers/getting-started | 2021-07-23T21:22:43 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.coreweave.com |
unlock_keychain
Unlock a keychain
Unlocks the given keychain file and adds it to the keychain search list.
Keychains can be replaced with
add_to_search_list: :replace.
4 Examples
unlock_keychain( # Unlock an existing keychain and add it to the keychain search list path: "/path/to/KeychainName.keychain", password: "mysecret" )
unlock_keychain( # By default the keychain is added to the existing. To replace them with the selected keychain you may use `:replace` path: "/path/to/KeychainName.keychain", password: "mysecret", add_to_search_list: :replace # To only add a keychain use `true` or `:add`. )
unlock_keychain( # In addition, the keychain can be selected as a default keychain path: "/path/to/KeychainName.keychain", password: "mysecret", set_default: true )
unlock_keychain( # If the keychain file is located in the standard location `~/Library/Keychains`, then it is sufficient to provide the keychain file name, or file name with its suffix. path: "KeychainName", password: "mysecret" )
Parameters
* = default value is dependent on the user's system
Documentation
To show the documentation in your terminal, run
fastlane action unlock_keychain
CLI
It is recommended to add the above action into your
Fastfile, however sometimes you might want to run one-offs. To do so, you can run the following command from your terminal
fastlane run unlock_keychain
To pass parameters, make use of the
: symbol, for example
fastlane run | https://docs.fastlane.tools/actions/unlock_keychain/ | 2021-07-23T21:54:34 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.fastlane.tools |
Responses. *
Longitude, Latitude, Accuracy, and Altitude
Description
Retrieves the latitude/longitude/accuracy/altitude of a GPS coordinate.
Syntax
var_GPS.Longitude var_GPS.Latitude var_GPS.Accuracy var_GPS.Altitude
This function retrieves the longitude, latitude, accuracy or altitude from a GPS question. var_GPS is the variable name for the GPS question, followed by “.” and the name of the element that you want to retrieve.
Example 1
Suppose you are collecting data in Uganda and the household samples are along the equator. In a GPS question you are recording the coordinates of each household. You want to make sure that the GPS recorded is in fact along the equator.
For this check, we would write the validation condition for the question like this:
house_GPS.Longitude==0
Example 2
For a GPS question recording the coordinates of a household you want to make sure that the accuracy is equal to or less than 15.
The validation condition for this question would be:
house_GPS.Accuracy<=15
InRectangle
Description
Confirm that a GPS coordinate falls within a rectangle defined by north, west, south, and east boundaries.
Syntax
var_GPS.InRectangle(north,west,south,east)
This function verifies that a coordinate (longitude and latitude) fall
within the rectangle defined by the north, west, south, east corner
coordinates.
Example 1
Assume that you are conducting a study in Ethiopia and you want to make
sure that the GPS coordinates recorded are within the country.
For this check, we would write the validation condition for the question like this:
house_GPS.InRectangle(14.390422862509851,33.0984365234375,3.7756813387012143, 47.993157226562516)
Note that writing values so precisely makes no practical sense. As this discussion thread indicates, “The fifth decimal place is worth up to 1.1 m: it distinguish trees from each other. Accuracy to this level with commercial GPS units can only be achieved with differential correction.”
GpsDistance
Description
Calculates the distance between two coordinates in meters.
Syntax
gpsA.GpsDistance(gpsB)
This function calculates the distance in meters (m) between the coordinates gpsA and gpsB.
Example 1
Assume you have two GPS questions in your survey, one (gpsHome) for the
coordinates of the household’s house, and another for the coordinates of
their field (gpsField). You want to check that the distance between the
two is at least 50 meters.
For this check, the validation condition would be:
gpsHome.GpsDistance(gpsField)>50
GpsDistanceKm
Description
Calculates the distance between two coordinates in kilometers.
Syntax
gpsA.GpsDistanceKm(gpsB)
This function calculates the distance in kilometers (km) between the coordinates gpsA and gpsB**.**
Example 1
Suppose you have two GPS questions in your survey, one for each visit to
the household (visit1_gps and visit2_gps**)**. You want to check that
the distance between the two is less than .5 kilometers.
For this check, the validation condition would be:
visit1_gps.GpsDistanceKm(visit2_gps)>.5 | https://docs.mysurvey.solutions/syntax-guide/questions/syntax-guide-gps-questions/ | 2021-07-23T22:04:19 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.mysurvey.solutions |
Configuring Okta for TaaS
To use Okta as an identity provider for TaaS, you must first configure it. For more information about configuring Okta, see How do I set up Okta as a SAML identity provider in an Amazon Cognito user pool?
Create a SAML application and provide the metadata to Tanium
- Open the Okta Developer Console.
- From the Developer Console drop-down menu, click Classic UI to open the Admin Console.
You must use the Classic UI to create a SAML application.
- From the Main menu, click Applications, and then click Create New App.
- Confirm that the following fields are set correctly, and then click Create.
Platform: Web
- Configure general settings.
- Enter a name, such as Tanium or TaaS.
- (Optional) Upload a logo.
- Verify that Do not display application icon to users and Do not display application icon in the Okta Mobile app are selected and then click Next.
- In the GENERAL section, enter the following values from your welcome e-mail from Tanium.
Single sign on URL: SSO URL
Audience URI (SP Entity ID): Audience URI (SP Entity ID)
- In the ATTRIBUTE STATEMENTS (OPTIONAL) section, enter the following values, and then click Next.
Name:
Value: user.email
- In the Feedback section, select I'm an Okta customer adding an internal app, provide any additional responses, and click Finish.
- In the SIGN ON METHODS section of the Sign On tab of the application, click Identity Provider metadata, and then provide the downloaded file to Tanium.
You can also right-click Identity Provider metadata and Copy Link Address to provide the URL to Tanium instead of downloading the XML file.
Assign the application to users
From the Assignments tab of the application, click Assign to assign the application to any users that you want to have access to TaaS..
(Optional) Create a bookmark application for TaaS
TaaS uses Amazon Cognito user pools, which does not currently support identity provider (IdP) initiated sign-on. To work around this limitation, you can create a Bookmark App. For more information, see Simulating an IdP-initiated Flow with the Bookmark App.
- From the Okta Admin Console, go to Shortcuts > Add Applications.
- Search for bookmark and then select Bookmark App in INTEGRATIONS.
- In the Bookmark App section, click Add.
- In the General Settings • Required section, enter the following values, and then click Done.
Application label: descriptive name such as TaaS or Tanium
URL: the TaaS Console URL from your welcome e-mail from Tanium
- (Optional) Edit the template logo to provide a more appropriate logo. This application is visible to users.
- Click the Assignments tab to assign the bookmark app to any users that you want to have access to the bookmark app.
You must give access to the user that is listed as the Primary TaaS Admin Username in your welcome e-mail from Tanium.
Use groups to assign access to TaaS and assign both the SAML integration application and the Bookmark App to that group to ensure that all users receive both applications.
Last updated: 6/25/2021 3:14 PM | Feedback | https://docs.tanium.com/taas/taas/configuring_okta.html | 2021-07-23T22:44:10 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.tanium.com |
Maintenance Cost as a Function of Number of Hits Per Data Block
The number of hits per block is a measure of how many rows in a data block are accessed (inserted, deleted, or updated) during an operation on a table. Generally speaking, the greater the number of hits per block, the better the performance provided the hits can be combined into one update of the data block, as would be the case, for example, with an INSERT … SELECT involving a set of updates containing multiple rows.
If a large number of the data blocks for a table have become significantly smaller than half the maximum size for the defined maximum data block size, an appropriate specification for the MERGEBLOCKRATIO option of the ALTER TABLE and CREATE TABLE statements can enable the file system to merge up to 8 data blocks into a single larger data block. See SQL Data Definition Language for details.
On the other hand, if each hit of a data block involves a separate read operation, with hits occurring semi‑randomly, performance is worse.
Data Processing Scenarios for the Hit Rates Studied
The following table provides example data processing scenarios that correspond with the hit rates studied.
Standard Test Procedure
The standard procedure for these tests was to make any maintenance changes to the left table in the join.
Maintenance Costs In Terms of Elapsed Time
Elapsed times increase as a function of increased hits per data block because more rows are touched. At the same time, the CPU path per row times decrease because the elapsed times increase at a lesser rate than the increase in the number of rows touched.
Maintenance Costs In Terms of CPU Path Length Per Transaction
Suppose you report the same information in terms of CPU path length per transaction. For this data, the term transaction equates to qualifying row. This number is a measure of the amount of CPU time a transaction requires; roughly, the number of instructions performed per transaction.
CPU path per transaction is a better way to compare various manipulations than elapsed time for two reasons: | https://docs.teradata.com/r/ji8nYcbKBTVEaNYVwKF3QQ/wu7wLmwtGNNQT8vnGvzQ3g | 2021-07-23T21:44:28 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.teradata.com |
We now provide you along with the CometChat SDK, the source code for our UI. This source code can be found in the downloaded package under the
iOS.
Setup to modify the UI.
Add the ‘readyui’ module to your existing application:
In the SDK, locate the sample app in the
iOS/CometChat UI Source. In the sample app, locate a folder called readyui. Import the readyui folder as a module in existing iOS application project.
You can modify the Source code and replace existing images or add new images in ready UI as per your requirement.
Once you are Done with making changes in existing readyui project then you will need to rebuild the project so that you will get new CometChat UI .bundle and .framework file. Kindly refer below steps for the same.
- Clear Derived Date on your mac system.
- Build cometchat-ui-Resource by selecting a device from simulator with iOS version 8 and above.
- Build cometchat-ui by selecting a device from simulator with iOS version 8 and above.
- Build Framework by selecting ‘Generic iOS Device’.
- Once Framework is built successfully cometchat-ui.bundle and cometchat-ui.framework file will get generated on your Mac Desktop.
Now you can use the newly generated UI framework in your Xcode project. | https://docs.cometchat.com/ios-sdk/displaying-ui/modifying-ui/ | 2021-07-23T21:19:02 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.cometchat.com |
Schema Registry Serializer and Formatter¶
This document describes how to use Avro with the Apache Kafka® Java client and console tools.
Assuming that you have Schema Registry source code checked out at
/tmp/schema-registry, the
following is how you can obtain all needed JARs.
mvn package
The JARs can be found in
/tmp/schema-registrypackage/target/package-$VERSION-package/share/java/avro-serializer/
To get a better understanding of what is provided with Confluent Platform, try out the Schema Registry Tutorial and Installing and Configuring Schema Registry. Details on Kafka clients, libraries, configuration, and APIs are at Kafka Clients.
See also
Developer examples of Kafka client producers and consumers, with and without Avro, are on GitHub in examples/clients.
Serializer¶
You can plug
KafkaAvroSerializer into KafkaProducer to send messages of Avro type to Kafka.
Currently, we support primitive types of
null,
Boolean,
Integer,
Long,
Float,
Double,
String,
byte[], and complex type of
IndexedRecord. Sending data of other types
to
KafkaAvroSerializer will
cause a
SerializationException. Typically,
IndexedRecord will be used for the value of the Kafka
message. If used, the key of the Kafka message is often of one of the primitive types., we send a message with key of type string and value of type Avro record
to Kafka. A
SerializationException may occur during the send call, if the data is not well formed. java.util.Properties; Properties props = new Properties(); props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");(); }
You can plug in
KafkaAvroDeserializer to
KafkaConsumer to receive messages of any Avro type from Kafka.
In the following example, we receive messages with key of type
string and value of type Avro record
from Kafka. When getting the message key or value, a
SerializationException may occur if the data is
not well formed.,
Avro serializer registers a schema in Schema Registry under a
subject name, which essentially defines a namespace in the registry:
- Compatibility checks are per subject
- Versions are tied to subjects
- When schemas evolve, they are still associated to the same subject but get a new schema id and version
Overview¶
The subject name depends on the subject name strategy, which you can set to one of the following three values:
TopicNameStrategy(
io.confluent.kafka.serializers.subject.TopicNameStrategy) – this is the default
RecordNameStrategy(
io.confluent.kafka.serializers.subject.RecordNameStrategy)
TopicRecordNameStrategy(
io.confluent.kafka.serializers.subject.TopicRecordNameStrategy)
Clients can set the subject name strategy for either the key or value, using the following configuration parameters:
key.subject.name.strategy
value.subject.name.strategy
Tip
For a quick review of the relationship between schemas, subjects, and topics, see Terminology Review in the Schema Registry Tutorial. multiple
record types. This is useful when your data represents a time-ordered sequence
of events, and the messages have different data structures. In this case, it is
more useful to keep a set following limitations apply to subject naming strategies:
- Subject naming strategies are fully configurable only on Java clients.
- Non Java clients (like Go producers, Python, .NET, libserdes) use only the default,
TopicNameStrategy.
- Go consumers are not fully supported with any naming strategies in Schema Registry.
- KSQL
KafkaAvroSerializer and KafkaAvroDeserializer.
Basic Auth Security¶
Schema Registry supports
http://<username>:<password>@sr-host:<sr-port>.
Formatter¶
You can use
kafka-avro-console-producer and
kafka-avro-console-consumer respectively to send and
receive Avro data in JSON format from the console. Under the hood, they use
AvroMessageReader and
AvroMessageFormatter to convert between Avro and JSON.
To run the Kafka console tools, first make sure that ZooKeeper, Kafka and Schema Registry server are all started. In the following examples, the default Schema Registry URL value is used.
You can configure that by supplying
--property schema.registry.url=address of your |sr|
in the commandline arguments of
kafka-avro-console-producer and
kafka-avro-console-consumer.
In the following example, we send Avro records in JSON as the message value (make sure there is no space in the schema string).
bin/kafka-avro-console-producer --broker-list localhost:9092 --topic t1 \ --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
In the shell, type in the following.
{"f1": "value1"}
In the following example, we read the value of the messages in JSON.
bin/kafka-avro-console-consumer --topic t1 \ --bootstrap-server localhost:9092
You should see following in the console.
{"f1": "value1"}
In the following example, we send strings and Avro records in JSON as the key and the value of the message, respectively.
bin/kafka-avro-console-producer --broker-list localhost:9092 --topic t2 \ --property parse.key=true \ --property key.schema='{"type":"string"}' \ --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
In the shell, type in the following.
"key1" {"f1": "value1"}
The following example reads both the key and the value of the messages in JSON.
bin/kafka-avro-console-consumer --topic t2 \ --bootstrap-server localhost:9092 \ --property print.key=true
You should see following in the console.
"key1" { \ --bootstrap-server localhost:9092 \ --property print.key=true \ --property print.schema.ids=true \ --property schema.id.separator=: You should see following in the console. "key1"
Wire Format¶
Most users can use the serializers and formatter directly and never worry about the details of how Avro messages are mapped to bytes. However, if you’re working with a language that Confluent has not developed serializers for, or simply want a deeper understanding of how the Confluent Platform works, you may need. | https://docs.confluent.io/5.3.1/schema-registry/serializer-formatter.html | 2021-07-23T22:24:20 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.confluent.io |
We are making clarifications and updates to the Amazon Acceptable Use and Data Protection policies governing the appropriate use of the Selling Partner API and Amazon Marketplace Web Service (Amazon MWS). These updates will go into effect on January 1, 2021. We have published the redlined versions of these policies at the links below for your convenience.
The Data Protection Policy ("DPP") governs the treatment (e.g., receipt, storage, usage, transfer, and disposition) of the data vended and retrieved through the Marketplace APIs (including the Marketplace Web Service APIs). This Policy supplements the Amazon Marketplace Developer Agreement and the Acceptable Use Policy. Failure to comply may result in suspension or termination of Marketplace API access.
Definitions
"Application" means a software application or website that interfaces with the Marketplace APIs.
. | https://docs.developer.amazonservices.com/en_IN/dev_guide/DG_DataProtectionPolicy.html | 2021-07-23T22:44:57 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.developer.amazonservices.com |
Issues & Help¶
Often, the quickest way to get support for general questions is through the MongoDB Community Forums for Kafka Connector.
Refer to our support channels documentation for more information.
Bugs / Feature Requests¶
To report a bug or to request a new feature for the Kafka Connector, please open a case in our issue management tool, JIRA:
- Navigate to the KAFKA project.
- Click Create Issue. Please provide as much information as possible about the issue and the steps to reproduce it.
Bug reports in JIRA for the Kafka Connector project are public.
If you have identified a security vulnerability in the connector or any other MongoDB project, please report it according to the instructions found on the Create a Vulnerability Report page.
Pull Requests¶
We are happy to accept contributions to help improve the connector. We review user contributions to ensure they meet the standards of the codebase.
To get started check out the source and work on a branch: | https://docs.mongodb.com/kafka-connector/v1.5/issues-and-help/ | 2021-07-23T22:36:57 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.mongodb.com |
You can modify an organization VDC Kubernetes policy to change its description and the CPU and memory limits.
Procedure
- From the top navigation bar, select Resources and click Cloud Resources.
- In the left panel, select Organization VDCs, and click the name of a flex organization VDC.
- Under Policies, select Kubernetes, select the policy you want to edit and click Edit.The Edit VDC Kubernetes Policy wizard appears.
- Edit the description of the organization VDC Kubernetes policy and click Next.The name of the policy is linked to the Supervisor Namespace, created during the publishing of the policy, and you cannot change it.
- Edit the CPU and Memory limit for the organization VDC Kubernetes policy and click Next.You cannot edit the CPU and Memory reservation.
- Review the new policy details and click Save. | https://docs.vmware.com/en/VMware-Cloud-Director/10.3/VMware-Cloud-Director-Service-Provider-Admin-Portal-Guide/GUID-A3038AF7-11AB-4375-8AF7-A73E4BD4CBC0.html | 2021-07-23T23:41:12 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.vmware.com |
topics provide instructions for setting preferences and options in each console:
- Setting Service Catalog Manager Console preferences
- Setting Business Manager Console preferences
- Setting Service Request Coordinator Console preferences
- Setting Work Order Console preferences
- Setting Request Entry console preferences:
In the above article, it starts with
When you click Functions > Application Preferences
but nowhere does it tell you what form, console, etc that you should be in when clicking on this Functions button/menu, etc....so it's really hard to tell what is meant, and specifically, difficult to 'follow along'.
Hi Lj,
Thank you for your feedback on this documentation - it was not very clear, and there are different menus for setting preferences, depending on the console that's open.
I have updated the topic so that it directs the user to look at the instructions for each individual console.
Let me know if you have any other feedback or concerns about the documentation.
Cathy | https://docs.bmc.com/docs/srm81/setting-application-preferences-and-options-225509608.html | 2021-07-23T23:09:01 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.bmc.com |
update_code_signing_settings
Configures Xcode's Codesigning options
Configures Xcode's Codesigning options of all targets in the project
2 Examples
# manual code signing update_code_signing_settings( use_automatic_signing: false, path: "demo-project/demo/demo.xcodeproj" )
# automatic code signing update_code_signing_settings( use_automatic_signing: true, path: "demo-project/demo/demo.xcodeproj" )
Parameters
* = default value is dependent on the user's system
Documentation
To show the documentation in your terminal, run
fastlane action update_code_signing_settings
CLI
It is recommended to add the above action into your
Fastfile, however sometimes you might want to run one-offs. To do so, you can run the following command from your terminal
fastlane run update_code_signing_settings
To pass parameters, make use of the
: symbol, for example
fastlane run update_code_signing | https://docs.fastlane.tools/actions/update_code_signing_settings/ | 2021-07-23T21:51:45 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.fastlane.tools |
As a grid administrator, you can enable S3 Object Lock for your StorageGRID system and implement a compliant ILM policy to help ensure that objects in specific S3 buckets are not deleted or overwritten for a specified amount of time.
The StorageGRID S3 Object Lock feature is an object-protection solution that is equivalent to S3 Object Lock in Amazon Simple Storage Service (Amazon S3).
As shown in the figure, when the global S3 Object Lock setting is enabled for a StorageGRID system, an S3 tenant account can create buckets with or without S3 Object Lock enabled. If a bucket has S3 Object Lock enabled, S3 client applications can optionally specify retention settings for any object version in that bucket. An object version must have retention settings specified to be protected by S3 Object Lock.
The StorageGRID S3 Object Lock feature provides a single retention mode that is equivalent to the Amazon S3 compliance mode. By default, a protected object version cannot be overwritten or deleted by any user. The StorageGRID S3 Object Lock feature does not support a governance mode, and it does not allow users with special permissions to bypass retention settings or to delete protected objects.
For details on these settings, go to the instructions for implementing S3 client applications and search for
using S3 object lock.
Implementing S3 client applications | https://docs.netapp.com/sgws-115/topic/com.netapp.doc.sg-ilm/GUID-80CBF838-0A2D-4915-AB72-8021B88F571A.html | 2021-07-23T22:34:30 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.netapp.com |
Column Partitions With COLUMN Format
A column partition with COLUMN format packs column partition values into a physical row, or container, up to a system-determined limit. The column partition values must be in the same combined partition to be packed into a container.
The row header occurs once for a container instead of there being a row header for each column partition value.
The format for the row header is either 14 or 20 bytes and consists of the following fields.
The rowID of the first column partition value is the rowID of the container.
The rowID of a column partition value can be determined by its position within the container. If many column partition values can be packed into a container, this row header compression can greatly reduce the space needed for a column‑partitioned object compared to the same object without column partitioning.
If Teradata Database can only place a few column partition values in a container because of their width, there can actually be an increase in the space needed for a column‑partitioned object compared to the object without column partitioning. In this case, ROW format might be more appropriate.
If Teradata Database can only place a few column partition values because the row partitioning is such that only a few column partition values occur for each combined partition,.
In this case, consider altering the row partitioning to allow for more column partition values per combined partition or removing column partitioning.
If the container has autocompression, 2 bytes are used as an offset to compression bits, 1 or more bytes indicate the autocompression types and their arguments, if any, for the container, 1 or bytes, depending on the number of column partition values, of autocompression bits, 0 or more bytes are used for a local value-list dictionary depending the autocompression type, and 0 or more bytes are used for present column partition values.
If the container does not have autocompression either because you specified NO AUTO COMPRESS for the column partition or because no autocompression types are applicable for the column partition values of the container, Teradata Database uses 0 or more bytes for column partition values.
The byte length of a container is rounded up to a multiple of 8.
The formats for single‑column and multicolumn partitions with COLUMN format differ slightly, as listed by the following bullets.
See “Row Structure for Containers (COLUMN Format)” on page 759 for more information. | https://docs.teradata.com/r/ji8nYcbKBTVEaNYVwKF3QQ/zCbpzbHf94gaUFNkAlDvPA | 2021-07-23T21:42:32 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.teradata.com |
Customizing CSS for the administration interface
The Kentico administration interface is styled using the Bootstrap framework, based on a series of LESS stylesheets.
If you need to extend or override the styling of the administration interface (for example when developing a custom module), add your custom styles into the custom.less file in the App_Themes\Default\Bootstrap folder of your Kentico web project.
Note: The custom.less file is explicitly intended for customization, and is never modified by Kentico upgrades or hotfixes.
Compiling the LESS code
Modifying LESS files does not have an immediate effect on the styles in the administration interface. You also need to compile the LESS code into CSS files, which are then linked into the administration interface pages.
For example, you can use Visual Studio 2015 and the Grunt task runner to set up automatic LESS compilation:
- Download the Gruntfile.js and package.json files.
- Copy both files into the CMS folder of your Kentico web project.
- Open your Kentico solution in Visual Studio.
On web site projects, Visual Studio automatically installs the required packages.
For web application projects, you need to perform the following additional steps:
- Include the new files in the CMSApp project:
- Click Show all files at the top of the Solution Explorer.
- Select the Gruntfile.js and package.json files.
- Right-click one of the files and select Include in Project.
- Right-click package.json and select Restore Packages.
- Wait until the required packages are installed.
- Save the CMSApp project.
- Close and then reopen the solution.
You can now see a watch task in the Visual Studio Task Runner Explorer. When you modify the custom.less file, the styles are automatically compiled into the CSS that is applied to the administration interface. | https://docs.xperience.io/k10/custom-development/miscellaneous-custom-development-tasks/customizing-css-for-the-administration-interface | 2021-07-23T23:12:55 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.xperience.io |
Sharing Code Snippets
What are Code Snippets?
Code snippets are small pieces of code, hence "snippets" of code, that can be shared while commenting.
With FastComments, simply pasting a snippet of code should automatically be detected and formatted.
When commenting, the code snippet will be wrapped in
<code> blocks, and the formatting happens
after you submit your comment.
The wrapping in
<code> blocks is automatic when pasting a snippet of code, in many languages.
Alternatively, you can highlight a piece of text and click the
Code button in the toolbar.
Coding Communities
Code Snippet Sharing is useful for communities where sharing code is likely to happen.
This makes shared pieces of code easier to read with little effort from your users.
We hope that this feature makes FastComments an ideal choice for programming related sites and applications.
Languages Supported
When sharing code snippets, FastComments does its best to automatically determine which programming language the code snippet is in to automatically format and apply syntax highlighting.
FastComments syntax highlighting currently supports 39 common languages:
- .properties
- Apache
- config
- Bash
- C
- C#
- C++
- CSS
- CoffeeScript
- Diff
- Go
- HTML, XML
- HTTP
- JSON
- Java
- JavaScript
- Kotlin
- Less
- Lua
- Makefile
- Markdown
- Nginx
- config
- Objective-C
- PHP
- PHP Template
- Perl
- Plain text
- Python
- Python REPL
- R
- Ruby
- Rust
- SCSS
- SQL
- Shell
- Session
- Swift
- TOML and INI
- TypeScript
- Visual Basic .NET
- YAML
When sharing a code snippet in a language we don't support, it will still be rendered like code - but no syntax highlighting or reformatting will be applied.
Setting Up Code Snippet Sharing
No setup is required to enable this feature. It is automatically available to your users when using FastComments. | https://docs.fastcomments.com/guide-sharing-code-snippets.html | 2021-07-23T22:05:49 | CC-MAIN-2021-31 | 1627046150067.51 | [array(['/images/menu.png', 'Open Menu Menu Icon'], dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object)
array(['images/link-internal.png', 'Direct Link Internal Link'],
dtype=object) ] | docs.fastcomments.com |
Lanes
Passing Parameters
To pass parameters from the command line to your lane, use the following syntax:
fastlane [lane] key:value key2:value2 fastlane deploy submit:false build_number:24
To access those values, change your lane declaration to also include
|options|
before_all do |lane, options| # ... end before_each do |lane, options| # ... end lane :deploy do |options| # ... if options[:submit] # Only when submit is true end # ... increment_build_number(build_number: options[:build_number]) # ... end after_all do |lane, options| # ... end after_each do |lane, options| # ... end error do |lane, exception, options| if options[:debug] puts "Hi :)" end end
Switching lanes
To switch lanes while executing a lane, use the following code:
lane :deploy do |options| # ... build(release: true) # that's the important bit hockey # ... end lane :staging do |options| # ... build # it also works when you don't pass parameters hockey # ... end lane :build do |options| build_config = (options[:release] ? "Release" : "Staging") build_ios_app(configuration: build_config) end
fastlane takes care of all the magic for you. You can call lanes of the same platform or a general lane outside of the
platform definition.
Passing parameters is optional.
Returning values
Additionally, you can retrieve the return value. In Ruby, the last line of the
lane definition is the return value. Here is an example:
lane :deploy do |options| value = calculate(value: 3) puts value # => 5 end lane :calculate do |options| # ... 2 + options[:value] # the last line will always be the return value end
Stop executing a lane early
The
next keyword can be used to stop executing a
lane before it reaches the end.
lane :build do |options| if cached_build_available? UI.important 'Skipping build because a cached build is available!' next # skip doing the rest of this lane end match gym end private_lane :cached_build_available? do |options| # ... true end
When
next is used during a
lane switch, control returns to the previous
lane that was executing.
lane :first_lane do |options| puts "If you run: `fastlane first_lane`" puts "You'll see this!" second_lane puts "As well as this!" end private_lane :second_lane do |options| next puts "This won't be shown" end
When you stop executing a lane early with
next, any
after_each and
after_all blocks you have will still trigger as usual :+1:
before_each and
after_each blocks
before_each blocks are called before any lane is called. This would include being called before each lane you've switched to.
before_each do |lane, options| # ... end
after_each blocks are called after any lane is called. This would include being called after each lane you've switched to.
Just like
after_all,
after_each is not called if an error occurs. The
error block should be used in this case.
after_each do |lane, options| # ... end
e.g. With this scenario,
before_each and
after_each would be called 4 times: before the
deploy lane, before the switch to
archive,
sign, and
upload, and after each of these lanes as well.
lane :deploy do archive sign upload end lane :archive do # ... end lane :sign do # ... end lane :upload do # ... end
Lane Context
The different actions can communicate with each other using a shared hash. You can access this in your code (lanes, actions, plugins etc.):
lane_context[SharedValues::VARIABLE_NAME_HERE]
Here are some examples:
lane_context[SharedValues::BUILD_NUMBER] # Generated by `increment_build_number` lane_context[SharedValues::VERSION_NUMBER] # Generated by `increment_version_number` lane_context[SharedValues::SNAPSHOT_SCREENSHOTS_PATH] # Generated by _snapshot_ lane_context[SharedValues::PRODUCE_APPLE_ID] # The Apple ID of the newly created app lane_context[SharedValues::IPA_OUTPUT_PATH] # Generated by _gym_ lane_context[SharedValues::DSYM_OUTPUT_PATH] # Generated by _gym_ lane_context[SharedValues::SIGH_PROFILE_PATH] # Generated by _sigh_ lane_context[SharedValues::SIGH_UDID] # The UDID of the generated provisioning profile lane_context[SharedValues::HOCKEY_DOWNLOAD_LINK] # Generated by `hockey` lane_context[SharedValues::GRADLE_APK_OUTPUT_PATH] # Generated by `gradle` lane_context[SharedValues::GRADLE_ALL_APK_OUTPUT_PATHS] # Generated by `gradle` lane_context[SharedValues::GRADLE_FLAVOR] # Generated by `gradle` lane_context[SharedValues::GRADLE_BUILD_TYPE] # Generated by `gradle`
To get information about available lane variables, run
fastlane action [action_name] or look at the generated table in the action documentation.
Lane Properties
It can be useful to dynamically access some properties of the current lane. These are available in
lane_context as well:
lane_context[SharedValues::PLATFORM_NAME] # Platform name, e.g. `:ios`, `:android` or empty (for root level lanes) lane_context[SharedValues::LANE_NAME] # The name of the current lane preceded by the platform name (stays the same when switching lanes) lane_context[SharedValues::DEFAULT_PLATFORM] # Default platform
They are also available as environment variables:
ENV["FASTLANE_PLATFORM_NAME"] ENV["FASTLANE_LANE_NAME"]
Private lanes
Sometimes you might have a lane that is used from different lanes, for example:
lane :production do # ... build(release: true) appstore # Deploy to the AppStore # ... end lane :beta do # ... build(release: false) crashlytics # Distribute to testers # ... end lane :build do |options| # ... ipa # ... end
It probably doesn't make sense to execute the
build lane directly using
fastlane build. You can hide this lane using
private_lane :build do |options| # ... end
This will hide the lane from:
fastlane lanes
fastlane list
fastlane docs
And also, you can't call the private lane using
fastlane build.
The resulting private lane can only be called from another lane using the lane switching technology.
Control configuration by lane and by platform
In general, configuration files take only the first value given for a particular configuration item. That means that for an
Appfile like the following:
app_identifier "com.used.id" app_identifier "com.ignored.id"
the
app_identfier will be
"com.used.id" and the second value will be ignored. The
for_lane and
for_platform configuration blocks provide a limited exception to this rule.
All configuration files (Appfile, Matchfile, Screengrabfile, etc.) can use
for_lane and
for_platform blocks to control (and override) configuration values for those circumstances.
for_lane blocks will be called when the name of lane invoked on the command line matches the one specified by the block. So, given a
Screengrabfile like:
locales ['en-US', 'fr-FR', 'ja-JP'] for_lane :screenshots_english_only do locales ['en-US'] end for_lane :screenshots_french_only do locales ['fr-FR'] end
locales will have the values
['en-US', 'fr-FR', 'ja-JP'] by default, but will only have one value when running the
fastlane screenshots_english_only or
fastlane screenshots_french_only.
for_platform gives you similar control based on the platform for which you have invoked fastlane. So, for an
Appfile configured like:
app_identifier "com.default.id" for_lane :enterprise do app_identifier "com.forlane.enterprise" end for_platform :mac do app_identifier "com.forplatform.mac" for_lane :release do app_identifier "com.forplatform.mac.forlane.release" end end
you can expect the
app_identifier to equal
"com.forplatform.mac.forlane.release" when invoking
fastlane mac release. | https://docs.fastlane.tools/advanced/lanes/ | 2021-07-23T22:28:55 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.fastlane.tools |
Hashing Mechanisms
In the case of hashing, an index key data value is transformed by a mathematical function called a hash function to produce an abstract value not related to the original data value in an obvious way. Hashed data is assigned to hash buckets, which are memory‑resident routing structures that correspond in a 1:1 manner to the relationship a particular hash code or range of hash codes has with an AMP location.
Hashing is similar to indexing in that it associates an index key with a relative row address.
Hashing differs from indexing in the following ways.
Ordinarily, the rows for primary‑indexed tables in Teradata Database are distributed, or horizontally row partitioned, among the AMPs based on a hash code generated from the primary index, or primary hash key, for the table (see “Row Allocation for Primary‑Indexed Tables” on page 235). The only exceptions to this are rows from global temporary trace tables, NoPI tables, column‑partitioned tables, and column‑partitioned join indexes (see “Row Allocation for Teradata Parallel Data Pump” on page 237 and “Row Allocation for FastLoad Operations Into Nonpartitioned NoPI Tables” on page 238).
Because Teradata Database hash functions are formally mature and mathematically sound, rows with unique primary indexes are always distributed in a uniformly random fashion across the AMPs, even when there is a natural clustering of key values. This behavior both avoids nodal hot spots and minimizes the number of key comparisons required to perform join processing when two rows have the same rowhash value.
All primary indexes are hashed for distribution, and while the stored value for the primary index is retained as data, it is the hashed transformation of that value that is used for distribution and primary index retrieval of the row. The primary indexes of hash and join indexes can, in some circumstances, be stored in value‑order (there are restrictions on the data type and column width of any value‑ordered primary index for a hash or join index: see “CREATE HASH INDEX” and “CREATE JOIN INDEX” in SQL Data Definition Language for details), but their rows are still distributed to the AMPs based on the hash of their primary index value.
Nonpartitioned and column‑partitioned NoPI table rows that are inserted into Teradata Database using Teradata Parallel Data Pump ARRAY insert processing are distributed among the AMPs based on a hash code generated from their Query ID (see “Row Allocation for Teradata Parallel Data Pump” on page 237).
Nonpartitioned NoPI table rows that are FastLoaded onto a system are distributed among the AMPs using a randomization algorithm that is different from the standard hashing algorithm (see “Row Allocation for FastLoad Operations Into Nonpartitioned NoPI Tables” on page 238).
Unique secondary indexes are also hashed and distributed to their appropriate index subtables based on that hash value, where they are stored in rowID order. Rows stored in this way are said to be hash-ordered. Non‑primary index retrievals usually benefit from defining one or more secondary hash keys on a table.
To summarize, although Teradata Database uses the term index for these values, they are really hash keys, not indexes. | https://docs.teradata.com/r/ji8nYcbKBTVEaNYVwKF3QQ/_U4230u6cJ26zqbOLKNaEA | 2021-07-23T21:31:59 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.teradata.com |
You are viewing documentation for Kubernetes version: v1.18
Kubernetes v1.18 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
Disruptions
This guide is for application owners who want to build highly available applications, and thus need to understand what types of disruptions can happen to Pods.
It is also for cluster administrators who want to perform automated cluster actions, like upgrading and autoscaling clusters.
Voluntary and involuntary disruptions
Pods do not disappear until someone (a person or a controller) destroys them, or there is an unavoidable hardware or system software error.
We call these unavoidable cases involuntary disruptions to an application. Examples are:
- a hardware failure of the physical machine backing the node
- cluster administrator deletes VM (instance) by mistake
- cloud provider or hypervisor failure makes VM disappear
- a kernel panic
- the node disappears from the cluster due to cluster network partition
- eviction of a pod due to the node being out-of-resources.:
- deleting the deployment or other controller that manages the pod
- updating a deployment's pod template causing a restart
- directly deleting a pod (e.g. by accident)
Cluster administrator actions include:
- Draining a node for repair or upgrade.
- Draining a node from a cluster to scale the cluster down (learn about Cluster Autoscaling ).
- Removing a pod from a node to permit something else to fit on that node..
Caution: Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example, deleting deployments or pods bypasses Pod Disruption Budgets.
Dealing with disruptions
Here are some ways to mitigate involuntary disruptions:
- Ensure your pod requests the resources it needs.
- Replicate your application if you need higher availability. (Learn about running replicated stateless and stateful applications.)
- For even higher availability when running replicated applications, spread applications across racks (using anti-affinity) or across zones (if using a multi-zone cluster.).
Pod disruption budgets
Kubernetes v1.5 [beta]
Kubernetes offers features to help you run highly available applications even when you introduce frequent voluntary disruptions.
As an application owner, you can create a PodDisruptionBudget (PDB) for each application. A PDB limits the number of PodDisruptionBudgets by calling the Eviction API instead of directly deleting pods or deployments.
For example, the
kubectl drain subcommand lets you mark a node as going out of
service. When you run
kubectl drain, the tool tries to evict all of the Pods on
the Node you're taking out of service. The eviction request that
kubectl submits on
your behalf may be temporarily rejected, so the tool periodically retries all failed
requests until all Pods on the target node workload resource
that is managing those pods. The control plane discovers the owning workload resource by
examining the
.metadata.ownerReferences of the Pod.
PDBs cannot prevent involuntary disruptions from occurring, but they do count against the budget.
Pods which are deleted or unavailable due to a rolling upgrade to an application do count against the disruption budget, but workload resources (such as Deployment and StatefulSet) are not limited by PDBs when doing rolling upgrades. Instead, the handling of failures during application updates is configured in the spec for the specific workload resource.
When a pod is evicted using the eviction API, it is gracefully
terminated, honoring the
terminationGracePeriodSeconds setting in its PodSpec.)
PodDisruptionBudget example-0, would need
to terminate completely before its replacement, which is also called
pod-0:
- how many replicas an application needs
- how long it takes to gracefully shutdown an instance
- how long it takes a new instance to start up
- the type of controller
- the cluster's resource capacity
Separating Cluster Owner and Application Owner Roles
Often, it is useful to think of the Cluster Manager and Application Owner as separate roles with limited knowledge of each other. This separation of responsibilities may make sense in these scenarios:
- when there are many application teams sharing a Kubernetes cluster, and there is natural specialization of roles
- when third-party tools or services are used to automate cluster management
Pod Disruption Budgets support this separation of roles by providing an interface between the roles.
If you do not have such a separation of responsibilities in your organization, you may not need to use Pod Disruption Budgets.
How to perform Disruptive Actions on your Cluster
If you are a Cluster Administrator, and you need to perform a disruptive action on all the nodes in your cluster, such as a node or system software upgrade, here are some options:
- Accept downtime during the upgrade.
- Failover to another complete replica cluster.
- No downtime, but may be costly both for the duplicated nodes and for human effort to orchestrate the switchover.
- Write disruption tolerant applications and use PDBs.
- No downtime.
- Minimal resource duplication.
- Allows more automation of cluster administration.
- Writing disruption-tolerant applications is tricky, but the work to tolerate voluntary disruptions largely overlaps with work to support autoscaling and tolerating involuntary disruptions.
What's next
Follow steps to protect your application by configuring a Pod Disruption Budget.
Learn more about draining nodes
Learn about updating a deployment including steps to maintain its availability during the rollout. | https://v1-18.docs.kubernetes.io/docs/concepts/workloads/pods/disruptions/ | 2021-07-23T21:32:50 | CC-MAIN-2021-31 | 1627046150067.51 | [] | v1-18.docs.kubernetes.io |
Expected identifier (JavaScript)
You used something other than an identifier in a context where one was required. An identifier can be:
a variable,
a property,
an array,
or a function name.
To correct this error
- Change the expression so an identifier appears to the left of the equal sign. | https://docs.microsoft.com/en-us/scripting/javascript/misc/expected-identifier-javascript | 2018-06-18T03:57:51 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.microsoft.com |
Creating a Deployment Package
To create a Lambda function you first create a Lambda function deployment package, a .zip or .jar file consisting of your code and any dependencies. When creating the zip, include only the code and its dependencies, not the containing folder. You will then need to set the appropriate security permissions for the zip package.
Permissions Polices on Lambda Deployment Packages
Zip packages uploaded with incorrect permissions may cause execution failure. AWS Lambda requires global read permissions on code files and any dependent libraries that comprise your deployment package. To ensure permissions are not restricted to your user account, you can check using the following samples:
Linux/Unix/OSX environments: Use
zipinfoas shown in the sample below:
$ zipinfo test.zip Archive: test.zip Zip file size: 473 bytes, number of entries: 2 -r-------- 3.0 unx 0 bx stor 17-Aug-10 09:37 exlib.py -r-------- 3.0 unx 234 tx defN 17-Aug-10 09:37 index.py 2 files, 234 bytes uncompressed, 163 bytes compressed: 30.3%
The
-r--------indicates that only the file owner has read permissions, which can cause Lambda function execution failures. The following indicates what you would see if there are requisite global read permissions:
$ zipinfo test.zip Archive: test.zip Zip file size: 473 bytes, number of entries: 2 -r--r--r-- 3.0 unx 0 bx stor 17-Aug-10 09:37 exlib.py -r--r--r-- 3.0 unx 234 tx defN 17-Aug-10 09:37 index.py 2 files, 234 bytes uncompressed, 163 bytes compressed: 30.3%
To fix this recursively, run the following command:
$ chmod 644 $(find /tmp/package_contents -type f) $ chmod 755 $(find /tmp/package_contents -type d)
The first command changes all files in
/tmp/package_contentsto have read/write permissions to owners, read to group and global.
The second command cascades the same permissions for directories.
Once you have done that, set the requisite IAM permissions on the package. For more information, see Authentication and Access Control for AWS Lambda policies. | https://docs.aws.amazon.com/lambda/latest/dg/deployment-package-v2.html | 2018-06-18T04:03:41 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.aws.amazon.com |
File Server Component
The File Server component enables Plesk administrators to share directories on a network directly from the Plesk. Using the Plesk File Server, you can share access to a directory on your server, grant access to this directory to specific users or hosts,. | https://docs.plesk.com/en-US/12.5/deployment-guide/appendix-g-configuring-additional-plesk-components-linux/file-server-component.70441/ | 2018-06-18T04:02:28 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.plesk.com |
vSphere is a sophisticated product with multiple components to upgrade. For a successful vSphere upgrade, you must understand the sequence of tasks required.
Upgrading vSphere includes the following tasks:
Read the vSphere release notes.
Verify that your system meets vSphere hardware and software requirements. See Upgrade Requirements..
You can connect vCenter Server instances with external Platform Services Controller instances in an Enhanced Linked Mode configuration.Important:
Although you can select to join a vCenter Single Sign-On domain, you should consider vCenter Server with an embedded Platform Services Controller as a standalone installation and do not use it for replication of infrastructure data.
Concurrent upgrades are not supported and upgrade order matters. If you have multiple vCenter Server instances or services that are not installed on the same physical server or virtual machine (VM) as the vCenter Server instance, see Migration of Distributed vCenter Server for Windows Services During Upgrade to vCenter Server 6.0 and Mixed-Version Transitional Environments During vCenter Server Upgrades
Upgrade vCenter Server on a Windows VM or physical server or upgrade the vCenter Server Appliance. For the vCenter Server for Windows upgrade workflow, see About the vCenter Server 6.0 for Windows Upgrade Process. For the vCenter Server Appliance workflow, see About the vCenter Server Appliance Upgrade Process.
Verify that your system meets the hardware and software requirements for upgrading vCenter Server. See vCenter Server for Windows Requirements or vCenter Server Appliance Requirements.
Prepare your environment for the upgrade. See Before Upgrading vCenter Server
Create a worksheet with the information that you need for the upgrade. See Required Information for Upgrading vCenter Server for Windows or Required Information for Upgrading the vCenter Server Appliance.
Upgrade vCenter Server. See Upgrading and Updating vCenter Server for Windows or Upgrading and Patching the vCenter Server Appliance and Platform Services Controller Appliance.
You can upgrade vCenter Server 5.0 to an embedded or external Platform Services Controller deployment. For vCenter Server 5.1 or 5.5 upgrades, your deployment outcome after upgrade depends upon your initial deployment. For more information on deployment details and how they affect upgrades, see About the vCenter Server 6.0 for Windows Upgrade Process,Upgrading the vCenter Server Appliance, Patching the vCenter Server Appliance and Platform Services Controller Appliance, and vCenter Server Example Upgrade Paths.
After upgrading vCenter Server, complete the post-upgrade tasks. Depending on your configuration details before upgrade, you might need to complete some reconfiguration tasks. See After Upgrading vCenter Server.
If you are using vSphere Update Manager, upgrade it. See Upgrading Update Manager.
Upgrade your ESXi hosts.
Review the best practices for upgrading and verify that your system meets the upgrade requirements. See Best Practices for ESXi Upgrades and ESXi Requirements.
Determine the ESXi upgrade option to use. See Upgrade Options for ESXi 6.0.
Determine where you want to locate and boot the ESXi installer. See Media Options for Booting the ESXi Installer. If you are PXE-booting the installer, verify that your network PXE infrastructure is properly set up. See PXE Booting the ESXi Installer.
Upgrade ESXi.
After upgrading ESXi hosts, you must reconnect the hosts to the vCenter Server and reapply the licenses. See After You Upgrade ESXi Hosts.
Consider setting up a syslog server for remote logging, to ensure sufficient disk storage for log files.. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.upgrade.doc/GUID-7AFB6672-0B0B-4902-B254-EE6AE81993B2.html | 2018-06-18T04:05:21 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.vmware.com |
This guide is intended for users who use Magnum to deploy and manage clusters of hosts for a Container Orchestration Engine. It describes common failure conditions and techniques for troubleshooting. To help the users quickly identify the relevant information, the guide is organized as a list of failure symptoms: each has some suggestions with pointers to the details for troubleshooting.
A separate section for developers describes useful techniques such as debugging unit tests and gate tests.
To be filled in
A cluster is deployed by a set of heat stacks: one top level stack and several nested stack. The stack names are prefixed with the cluster name and the nested stack names contain descriptive internal names like kube_masters, kube_minions.
To list the status of all the stacks for a cluster:
heat stack-list -n | grep cluster-name
If the cluster has failed, then one or more of the heat stacks would have failed. From the stack list above, look for the stacks that failed, then look for the particular resource(s) that failed in the failed stack by:
heat resource-list failed-stack-name | grep “FAILED”
The resource_type of the failed resource should point to the OpenStack service, e.g. OS::Cinder::Volume. Check for more details on the failure by:
heat resource-show failed-stack-name failed-resource-name
The resource_status_reason may give an indication on the failure, although in some cases it may only say “Unknown”.
If the failed resource is OS::Heat::WaitConditionHandle, this indicates that one of the services that are being started on the node is hung. Log into the node where the failure occurred and check the respective Kubernetes services, Swarm services or Mesos services. If the failure is in other scripts, look for them as Heat software resource scripts.
When a user creates a cluster, Magnum will dynamically create a service account for the cluster. The service account will be used by the cluster to access the OpenStack services (i.e. Neutron, Swift, etc.). A trust relationship will be created between the user who created the cluster (the “trustor”) and the service account created for the cluster (the “trustee”). For details, please refer <>`_.
If Magnum fails to create the trustee, check the magnum config file (usually in /etc/magnum/magnum.conf). Make sure ‘trustee_*’ and ‘auth_uri’ are set and their values are correct:
[keystone_authtoken] auth_uri = …
[trust] trustee_domain_admin_password = XXX trustee_domain_admin_id = XXX trustee_domain_id = XXX
If the ‘trust’ group is missing, you might need to create the trustee domain and the domain admin:
. /opt/stack/devstack/accrc/admin/admin export OS_IDENTITY_API_VERSION=3 unset OS_AUTH_TYPE openstack domain create magnum openstack user create trustee_domain_admin --password secret \ --domain magnum openstack role add --user=trustee_domain_admin --user-domain magnum \ --domain magnum admin . /opt/stack/devstack/functions export MAGNUM_CONF=/etc/magnum/magnum.conf iniset $MAGNUM_CONF trust trustee_domain_id \ $(openstack domain show magnum | awk '/ id /{print $4}') iniset $MAGNUM_CONF trust trustee_domain_admin_id \ $(openstack user show trustee_domain_admin | awk '/ id /{print $4}') iniset $MAGNUM_CONF trust trustee_domain_admin_password secret
Then, restart magnum-api and magnum-cond to pick up the new configuration. If the problem still exists, you might want to manually verify your domain admin credential to ensure it has the right privilege. To do that, run the script below with the credentials replaced (you must use the IDs where specified). If it fails, that means the credential you provided is invalid.
from keystoneauth1.identity import v3 as ka_v3 from keystoneauth1 import session as ka_session from keystoneclient.v3 import client as kc_v3 auth = ka_v3.Password( auth_url=YOUR_AUTH_URI, user_id=YOUR_TRUSTEE_DOMAIN_ADMIN_ID, domain_id=YOUR_TRUSTEE_DOMAIN_ID, password=YOUR_TRUSTEE_DOMAIN_ADMIN_PASSWORD) session = ka_session.Session(auth=auth) domain_admin_client = kc_v3.Client(session=session) user = domain_admin_client.users.create( name='anyname', password='anypass')
In production deployments, operators run the OpenStack APIs using ssl certificates and in private clouds it is common to use self-signed or certificates signed from CAs that they are usually not included in the systems’ default CA-bundles. Magnum clusters with TLS enabled have their own CA but they need to make requests to the OpenStack APIs for several reasons. Eg Get the cluster CA and sign node certificates (Keystone, Magnum), signal the Heat API for stack completion, create resources (volumes, load balancers) or get information for each node (Cinder, Neutron, Nova). In these cases, the cluster nodes need the CA used for to run the APIs.
To pass the OpenStack CA bundle to the nodes you can set the CA using the openstack_ca_file option in the drivers section of Magnum’s configuration file (usually /etc/magnum/magnum.conf). The default drivers in magnum install this CA in the system and set it in all the places it might be needed (eg when configuring the kubernetes cloud provider or for the heat-agents.)
The cluster nodes will validate the Certificate Authority by default when making requests to the OpenStack APIs (Keystone, Magnum, Heat). If you need to disable CA validation, the configuration parameter verify_ca can be set to False. More information on CA Validation.
The nodes for Kubernetes, Swarm and Mesos are connected to a private Neutron network, so to provide access to the external internet, a router connects the private network to a public network. With devstack, the default public network is “public”, but this can be replaced by the parameter “external-network” in the ClusterTemplate. The “public” network with devstack is actually not a real external network, so it is in turn routed to the network interface of the host for devstack. This is configured in the file local.conf with the variable PUBLIC_INTERFACE, for example:
PUBLIC_INTERFACE=eth1
If the route to the external internet is not set up properly, the ectd discovery would fail (if using public discovery) and container images cannot be downloaded, among other failures.
First, check for connectivity to the external internet by pinging an external IP (the IP shown here is an example; use an IP that works in your case):
ping 8.8.8.8
If the ping fails, there is no route to the external internet. Check the following:
If ping is successful, check that DNS is working:
wget google.com
If DNS works, you should get back a few lines of HTML text.
If the name lookup fails, check the following:
The networking between pods is different and separate from the neutron network set up for the cluster. Kubernetes presents a flat network space for the pods and services and uses different network drivers to provide this network model.
It is possible for the pods to come up correctly and be able to connect to the external internet, but they cannot reach each other. In this case, the app in the pods may not be working as expected. For example, if you are trying the redis example, the key:value may not be replicated correctly. In this case, use the following steps to verify the inter-pods networking and pinpoint problems.
Since the steps are specific to the network drivers, refer to the particular driver being used for the cluster.
Flannel is the default network driver for Kubernetes clusters. Flannel is an overlay network that runs on top of the neutron network. It works by encapsulating the messages between pods and forwarding them to the correct node that hosts the target pod.
First check the connectivity at the node level. Log into two different minion nodes, e.g. node A and node B, run a docker container on each node, attach to the container and find the IP.
For example, on node A:
sudo docker run -it alpine # ip -f inet -o a | grep eth0 | awk '{print $4}' 10.100.54.2/24
Similarly, on node B:
sudo docker run -it alpine # ip -f inet -o a | grep eth0 | awk '{print $4}' 10.100.49.3/24
Check that the containers can see each other by pinging from one to another.
On node A:
# ping 10.100.49.3 PING 10.100.49.3 (10.100.49.3): 56 data bytes 64 bytes from 10.100.49.3: seq=0 ttl=60 time=1.868 ms 64 bytes from 10.100.49.3: seq=1 ttl=60 time=1.108 ms
Similarly, on node B:
# ping 10.100.54.2 PING 10.100.54.2 (10.100.54.2): 56 data bytes 64 bytes from 10.100.54.2: seq=0 ttl=60 time=2.678 ms 64 bytes from 10.100.54.2: seq=1 ttl=60 time=1.240 ms
If the ping is not successful, check the following:
Is neutron working properly? Try pinging between the VMs.
Are the docker0 and flannel0 interfaces configured correctly on the nodes? Log into each node and find the Flannel CIDR by:
cat /run/flannel/subnet.env | grep FLANNEL_SUBNET FLANNEL_SUBNET=10.100.54.1/24
Then check the interfaces by:
ifconfig flannel0 ifconfig docker0
The correct configuration should assign flannel0 with the “0” address in the subnet, like 10.100.54.0, and docker0 with the “1” address, like 10.100.54.1.
Verify the IP’s assigned to the nodes as found above are in the correct Flannel subnet. If this is not correct, the docker daemon is not configured correctly with the parameter –bip. Check the systemd service for docker.
Is Flannel running properly? check the Running Flannel.
Ping and try tcpdump on each network interface along the path between two nodes to see how far the message is able to travel. The message path should be as follows:
If ping works, this means the flannel overlay network is functioning correctly.
The containers created by Kubernetes for pods will be on the same IP subnet as the containers created directly in Docker as above, so they will have the same connectivity. However, the pods still may not be able to reach each other because normally they connect through some Kubernetes services rather than directly. The services are supported by the kube-proxy and rules inserted into the iptables, therefore their networking paths have some extra hops and there may be problems here.
To check the connectivity at the Kubernetes pod level, log into the master node and create two pods and a service for one of the pods. You can use the examples provided in the directory /etc/kubernetes/examples/ for the first pod and service. This will start up an nginx container and a Kubernetes service to expose the endpoint. Create another manifest for a second pod to test the endpoint:
cat > alpine.yaml << END apiVersion: v1 kind: Pod metadata: name: alpine spec: containers: - name: alpine image: alpine args: - sleep - "1000000" END kubectl create -f /etc/kubernetes/examples/pod-nginx-with-label.yaml kubectl create -f /etc/kubernetes/examples/service.yaml kubectl create -f alpine.yaml
Get the endpoint for the nginx-service, which should route message to the pod nginx:
kubectl describe service nginx-service | grep -e IP: -e Port: IP: 10.254.21.158 Port: <unnamed> 8000/TCP
Note the IP and port to use for checking below. Log into the node where the alpine pod is running. You can find the hosting node by running this command on the master node:
kubectl get pods -o wide | grep alpine | awk '{print $6}' k8-gzvjwcooto-0-gsrxhmyjupbi-kube-minion-br73i6ans2b4
To get the IP of the node, query Nova on devstack:
nova list
On this hosting node, attach to the alpine container:
export DOCKER_ID=`sudo docker ps | grep k8s_alpine | awk '{print $1}'` sudo docker exec -it $DOCKER_ID sh
From the alpine pod, you can try to reach the nginx pod through the nginx service using the IP and Port found above:
wget 10.254.21.158:8000
If the connection is successful, you should receive the file index.html from nginx.
If the connection is not successful, you will get an error message like::xs
wget: can’t connect to remote host (10.100.54.9): No route to host
In this case, check the following:
Is kube-proxy running on the nodes? It runs as a container on each node. check by logging in the minion nodes and run:
sudo docker ps | grep k8s_kube-proxy
Check the log from kube-proxy by running on the minion nodes:
export PROXY=`sudo docker ps | grep "hyperkube proxy" | awk '{print $1}'` sudo docker logs $PROXY
Try additional service debugging. To see what’s going during provisioning:
kubectl get events
To get information on a service in question:
kubectl describe services <service_name>
The etcd service is used by many other components for key/value pair management, therefore if it fails to start, these other components will not be running correctly either. Check that etcd is running on the master nodes by:
sudo service etcd status -l
If it is running correctly, you should see that the service is successfully deployed:
Active: active (running) since ....
The log message should show the service being published:
etcdserver: published {Name:10.0.0.5 ClientURLs:[]} to cluster 3451e4c04ec92893
In some cases, the service may show as active but may still be stuck in discovery mode and not fully operational. The log message may show something like:
discovery: waiting for other nodes: error connecting to, retrying in 8m32s
If this condition persists, check for Cluster internet access.
If the daemon is not running, the status will show the service as failed, something like:
Active: failed (Result: timeout)
In this case, try restarting etcd by:
sudo service etcd start
If etcd continues to fail, check the following:
Check the log for etcd:
sudo journalctl -u etcd
etcd requires discovery, and the default discovery method is the public discovery service provided by etcd.io; therefore, a common cause of failure is that this public discovery service is not reachable. Check by running on the master nodes:
. /etc/sysconfig/heat-params curl $ETCD_DISCOVERY_URL
You should receive something like:
{"action":"get", "node":{"key":"/_etcd/registry/00a6b00064174c92411b0f09ad5466c6", "dir":true, "nodes":[ {"key":"/_etcd/registry/00a6b00064174c92411b0f09ad5466c6/7d8a68781a20c0a5", "value":"10.0.0.5=", "modifiedIndex":978239406, "createdIndex":978239406}], "modifiedIndex":978237118, "createdIndex":978237118} }
The list of master IP is provided by Magnum during cluster deployment, therefore it should match the current IP of the master nodes. If the public discovery service is not reachable, check the Cluster internet access.
When deploying a COE, Flannel is available as a network driver for certain COE type. Magnum currently supports Flannel for a Kubernetes or Swarm cluster.
Flannel provides a flat network space for the containers in the cluster: they are allocated IP in this network space and they will have connectivity to each other. Therefore, if Flannel fails, some containers will not be able to access services from other containers in the cluster. This can be confirmed by running ping or curl from one container to another.
The Flannel daemon is run as a systemd service on each node of the cluster. To check Flannel, run on each node:
sudo service flanneld status
If the daemon is running, you should see that the service is successfully deployed:
Active: active (running) since ....
If the daemon is not running, the status will show the service as failed, something like:
Active: failed (Result: timeout) ....
or:
Active: inactive (dead) ....
Flannel daemon may also be running but not functioning correctly. Check the following:
Check the log for Flannel:
sudo journalctl -u flanneld
Since Flannel relies on etcd, a common cause for failure is that the etcd service is not running on the master nodes. Check the etcd service. If the etcd service failed, once it has been restored successfully, the Flannel service can be restarted by:
sudo service flanneld restart
Magnum writes the configuration for Flannel in a local file on each master node. Check for this file on the master nodes by:
cat /etc/sysconfig/flannel-network.json
The content should be something like:
{ "Network": "10.100.0.0/16", "Subnetlen": 24, "Backend": { "Type": "udp" } }
where the values for the parameters must match the corresponding parameters from the ClusterTemplate.
Magnum also loads this configuration into etcd, therefore, verify the configuration in etcd by running etcdctl on the master nodes:
. /etc/sysconfig/flanneld etcdctl get $FLANNEL_ETCD_KEY/config
Each node is allocated a segment of the network space. Check for this segment on each node by:
grep FLANNEL_SUBNET /run/flannel/subnet.env
The containers on this node should be assigned an IP in this range. The nodes negotiate for their segment through etcd, and you can use etcdctl on the master node to query the network segment associated with each node:
. /etc/sysconfig/flanneld for s in `etcdctl ls $FLANNEL_ETCD_KEY/subnets` do echo $s etcdctl get $s done /atomic.io/network/subnets/10.100.14.0-24 {"PublicIP":"10.0.0.5"} /atomic.io/network/subnets/10.100.61.0-24 {"PublicIP":"10.0.0.6"} /atomic.io/network/subnets/10.100.92.0-24 {"PublicIP":"10.0.0.7"}
Alternatively, you can read the full record in ectd by:
curl http://<master_node_ip>:2379/v2/keys/coreos.com/network/subnets
You should receive a JSON snippet that describes all the segments allocated.
This network segment is passed to Docker via the parameter –bip. If this is not configured correctly, Docker would not assign the correct IP in the Flannel network segment to the container. Check by:
cat /run/flannel/docker ps -aux | grep docker
Check the interface for Flannel:
ifconfig flannel0
The IP should be the first address in the Flannel subnet for this node.
Flannel has several different backend implementations and they have specific requirements. The udp backend is the most general and have no requirement on the network. The vxlan backend requires vxlan support in the kernel, so ensure that the image used does provide vxlan support. The host-gw backend requires that all the hosts are on the same L2 network. This is currently met by the private Neutron subnet created by Magnum; however, if other network topology is used instead, ensure that this requirement is met if host-gw is used.
Current known limitation: the image fedora-21-atomic-5.qcow2 has Flannel version 0.5.0. This version has known bugs that prevent the backend vxland and host-gw to work correctly. Only the backend udp works for this image. Version 0.5.3 and later should work correctly. The image fedora-21-atomic-7.qcow2 has Flannel version 0.5.5.
To be filled in
(How to introspect k8s when heat works and k8s does not)
Additional Kubenetes troubleshooting guide is available.
To be filled in
(How to check on a swarm cluster: see membership information, view master, agent containers)
This section is intended to help with issues that developers may run into in the course of their development adventures in Magnum.
Note: This is adapted from Devstack Gate’s README which is worth a quick read to better understand the following)
Boot a VM like described in the Devstack Gate’s README .
Provision this VM like so:
apt-get update \ && apt-get upgrade \ # Kernel upgrade, as recommended by README, select to keep existing grub config && apt-get install git tmux vim \ && git clone \ && system-config/install_puppet.sh && system-config/install_modules.sh \ && puppet apply \ --modulepath=/root/system-config/modules:/etc/puppet/modules \ -e "class { openstack_project::single_use_slave: install_users => false, ssh_key => \"$( cat .ssh/authorized_keys | awk '{print $2}' )\" }" \ && echo "jenkins ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers \ && cat ~/.ssh/authorized_keys >> /home/jenkins/.ssh/authorized_keys
Compare
~/.ssh/authorized_keys and
/home/jenkins/.ssh/authorized_keys. Your original public SSH key should now be in
/home/jenkins/.ssh/authorized_keys. If it’s not, explicitly copy it (this can happen if you spin up a using
--key-name <name>, for example).
Assuming all is well up to this point, now it’s time to
reboot into the latest kernel
Once you’re done booting into the new kernel, log back in as
jenkins user to continue with setting up the simulation.
Now it’s time to set up the workspace:
export REPO_URL= export WORKSPACE=/home/jenkins/workspace/testing export ZUUL_URL=/home/jenkins/workspace-cache2 export ZUUL_REF=HEAD export ZUUL_BRANCH=master export ZUUL_PROJECT=openstack/magnum mkdir -p $WORKSPACE git clone $REPO_URL/$ZUUL_PROJECT $ZUUL_URL/$ZUUL_PROJECT \ && cd $ZUUL_URL/$ZUUL_PROJECT \ && git checkout remotes/origin/$ZUUL_BRANCH
At this point, you may be wanting to test a specific change. If so, you can pull down the changes in
$ZUUL_URL/$ZUUL_PROJECT directory:
cd $ZUUL_URL/$ZUUL_PROJECT \ && git fetch refs/changes/83/247083/12 && git checkout FETCH_HEAD
Now you’re ready to pull down the
devstack-gate scripts that will let you run the gate job on your own VM:
cd $WORKSPACE \ && git clone --depth 1 $REPO_URL/openstack-infra/devstack-gate
And now you can kick off the job using the following script (the
devstack-gate documentation suggests just copying from the job which can be found in the project-config repository), naturally it should be executable (
chmod u+x <filename>):
#!/bin/bash -xe cat > clonemap.yaml << EOF clonemap: - name: openstack-infra/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ git://git.openstack.org \ openstack-infra/devstack-gate export PYTHONUNBUFFERED=true export DEVSTACK_GATE_TIMEOUT=240 # bump this if you see timeout issues. Default is 120 export DEVSTACK_GATE_TEMPEST=0 export DEVSTACK_GATE_NEUTRON=1 # Enable tempest for tempest plugin export ENABLED_SERVICES=tempest export BRANCH_OVERRIDE="default" if [ "$BRANCH_OVERRIDE" != "default" ] ; then export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE fi export PROJECTS="openstack/magnum $PROJECTS" export PROJECTS="openstack/python-magnumclient $PROJECTS" export PROJECTS="openstack/barbican $PROJECTS" export DEVSTACK_LOCAL_CONFIG="enable_plugin magnum git://git.openstack.org/openstack/magnum" export DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer" # Keep localrc to be able to set some vars in post_test_hook export KEEP_LOCALRC=1 function gate_hook { cd /opt/stack/new/magnum/ ./magnum/tests/contrib/gate_hook.sh api # change this to swarm to run swarm functional tests or k8s to run kubernetes functional tests } export -f gate_hook function post_test_hook { . $BASE/new/devstack/accrc/admin/admin cd /opt/stack/new/magnum/ ./magnum/tests/contrib/post_test_hook.sh api # change this to swarm to run swarm functional tests or k8s to run kubernetes functional tests } export -f post_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
project-config’s magnum.yaml.
{}should be set as an environment variable
{{ }}should have those brackets changed to single brackets -
{}.
chmod u+x <filename>and run it.
project-config’s macros.yml:
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/magnum/queens/admin/troubleshooting-guide.html | 2018-06-18T03:46:01 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.openstack.org |
Retrieving a
tracking from parcelLab refers to the act of requesting the information about a certain delivery from parcelLab to be used somewhere else, e.g. in the user account within the shop interface, to indicate the progress of the delivery to the recipient. We call this information about the progress up to now the
trace. For this, also different channels can be used, of which only one is ready for production as yet:
RESTful API¶
Request¶
The API offers an endpoint to retrieve the information about a
tracking, for that tracking number and courier code need to be supplied in the URL:
GET '' QUERY + tno: String + courier: String + lang: String
Return¶
The return of a successful call of this endpoint yields status code
200 OK, and an JSON encoded response object. This object consists of a
header and
body.
Return Header¶
The
header is an array listing an overview over all trackings returned. When requesting the API like documented here with a combination of
courier and
tracking_number the array with always have
header.length === 1.
The overview for each tracking (see example) is structured like this:
Return object
courier¶
Return object
last_delivery_status¶
Return Body¶
The
body simply is an object where for each tracking in the header, identified by their
id, an array of checkpoints is given. This array has the following structure:
Example of Return¶
{ "header": [ { "id": "579f6444b6a1bb4fbaab5610", "tracking_number": "00340000000000000001", "courier": { "name": "dhl-germany", "prettyname": "DHL", "trackingurl": "", "trackingurl_label": "Klicken Sie hier für weitere Informationen zur Sendung." }, "last_delivery_status": { "status": "Zugestellt", "status_details": "Die Ware wurde erfolgreich zugestellt.", "code": "Delivered" }, "delay": false, "exception": false } ], "body": { "579f6444b6a1bb4fbaab5610": [ { "shown": true, "status": "OrderProcessed", "status_text": "Bestellung verarbeitet", "status_details": "Die Bestellung wurde verarbeitet.", "full_courier_status": "If exists, date of order, else date of import to parcelLab", "location": "Sample City", "timestamp": "2015-08-10T15:18:27.000Z" } ] } }
Tracking Website¶
We offer a free to use, high performant tracking website to be exposed to the recipient of a
tracking. This website follows a simple URL scheme, and is activated for any
tracking in the parcelLab system by default. To have it included into any notification by default, just let us know. The tracking website can be reached at either versand-status.de or paket-fuer-mich.de. The URL scheme is identical to the API endpoint, i.e.:
JavaScript Libraries¶
parcelLab becomes prepared with JavaScript libraries, which can be used to integrate the API in asynchronous fashion directly in the browser without much effort. This can be done without customization through our CDN, or with customization on your own servers.
Through parcelLab CDN¶
All required libraries are hosted on our high-performant and high-available CDN. For this, following modifications have to be done on any page, where the
trace of a
tracking should be displayed.
1. Loading depenencies¶
At the bottom of the
head of the website displaying the
trace, following references to two CSS files need to be included. The first one does some simple formatting, the second one loads the Font Awesome Icons in version
4.4.0, which is guaranteed to be compatible.
<head> ... <link href="" rel="stylesheet"> <link href="" rel="stylesheet"> </head>
At the bottom of the
body, the JavaScript references are included. This allows the website to be loaded and displayed, before any additional scripts are loaded.
<body> ... <script src="" charset="utf-8"></script> <script type="text/javascript"> var parcelLab = new ParcelLab('#parcelLab-trace-container'); // <~ where the trackings will be rendered... parcelLab.initialize(); </script> </body>
2. Preparing the content¶
Next, the container for the content to be shown needs to be defined in the
body. For this, only the central node in the DOM where the trace should be included needs to be specified. The
id (
some-dom-id in the example below) has to referred to by the included JavaScript. If the default stylesheet should be applied to the trace (this recommended), then additional
class="parcelLab-style" has to be added.
<div id="parcelLab-trace-container" class="parcelLab-style"></div>
3. Loading content¶
The code is prepared, the content container is present, now the script will automatically load and show the trace.
Identification of the tracking is read from the URL: for that you simply have to include the queries
courier and
trackingNo in the URL, analogous to the API or Tracking Website. The URL of the page then could look like this:
Another optional key is
&lang=de to specify language.
3. II. Search form (optional)¶
The plugin offers functionality to display a search form when the tracking page is opened without any tracking identifier. This search form performs a search on the field
orderNo of the trackings. For that, the search form has be to be activated, and a valid and correct
userId has to be supplied. This is the same
userId as in the credentials.
To inject these settings, the function
.initialize() has to be passed an object as parameter:
var parcelLab = new ParcelLab('#parcelLab-trace-container', { show_searchForm: true, userId: 123, }); parcelLab.initialize();
4. Voilá¶
All done. To modify any behavior or styling of the code, the hosted libraries and stylesheets can be modified as described below.
Customize JavaScript Libraries¶
To modify any behavior or style of the code, all required resources are available in developer form as Open Source at GitHub / parcelLab / parcelLab-js-plugin. Feel free to fork or clone, change and host, or create pull requests. | https://docs.parcellab.com/retrieve/ | 2018-06-18T03:21:02 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.parcellab.com |
Refer to the multinode install for a primer on how to set up a cluster. Required architecture is x86_64.
All Vespa processes have a PID file $VESPA_HOME/var/run/{service name}.pid, where {service name} is the Vespa service name, e.g. container or distributor. It is the same name which is used in the telnet administration interface in the config sentinel.
Vespa service instances have status pages for debugging and testing. Status pages are subject to change at any time - take care when automating. Procedure
$ vespa-model-inspect servicesTo find the status page port for a specific node for a specific service, pick the correct service and run:
$ vespa-model-inspect service [Options] <service-name>
/, while the container-clustercontroller status page is found at
/clustercontroller-status/v1/[clustername/]. Example:
$ vespa-model-inspect service searchnode searchnode @ myhost.mydomain.com : search search/search/cluster.search/0 tcp/myhost.mydomain.com:19110 (STATUS ADMIN RTC RPC) tcp/myhost.mydomain.com:19111 (FS4) tcp/myhost.mydomain.com:19112 (TEST HACK SRMP) tcp/myhost.mydomain.com:19113 (ENGINES-PROVIDER RPC) tcp/myhost.mydomain.com:19114 (HEALTH JSON HTTP) $ curl ... $ vespa-model-inspect service distributor distributor @ myhost.mydomain.com : content search/distributor/0 tcp/myhost.mydomain.com:19116 (MESSAGING) tcp/myhost.mydomain.com:19117 (STATUS RPC) tcp/myhost.mydomain.com:19118 (STATE STATUS HTTP) $ curl ... $ curl ...
The most trustworthy metric for overload is the prioritized partition queues on the content nodes.
In an overload situation, operations above some priority level will come in so fast
that operations below this priority level will just fill up the queue.
The metric to look out for is the
.filestor.alldisks.queuesize metric,
giving a total value for all the partitions.
While queue size is the most dependable metric, operations can be queued elsewhere too,
which may be able to hide the issue.
There are some visitor related queues:
.visitor.allthreads.queuesize and
.visitor.cv_queuesize.
Additionally, maintenance operations may be queued merely by not being created yet,
as there's no reason to keep more pending operations than is needed to get good throughput.
It may seem obvious to detect overload by monitoring CPU, memory and IO usage. However, a fully utilized resource does not necessarily indicate overload. As the content cluster supports prioritized operations, it will typically do as many low priority operations as it is able to when no high priority operations are in the queue. This means, that even if there's just a low priority reprocess or a load rebalancing effort going on after a node went down, the cluster may still use up all available resources to process these low priority tasks.
Network bandwidth consumption and switch saturation is something to look out for. These communication channels are not prioritized, and if they fill up processing low priority operations, high priority operations may not get through. If latencies gets mysterious high while no queuing is detected, the network is a candidate.
Many applications built on Vespa is used by multiple clients and faces the problem of protecting clients from others overuse or abuse. To solve this problem, use the RateLimitingSearcher to rate limit load from each client type.
Refer to the distribution algorithm. Distributor status pages can be viewed to manually inspect state metrics:
Notes:
vespa-deploy preparewill not change served configurations until
vespa-deploy activateis run.
vespa-deploy preparewill warn about all config changes that require restart.
It is possible to run multiple Vespa services on the same host.
If changing the services on a given host,
stop Vespa on the given host before running
vespa-deploy activate.
This is because the services will be allocated port numbers depending on what is running on the host.
Consider if some of the services changed are used by services on other hosts.
In that case, restart services on those hosts too. Procedure:
vespa-deploy prepareand
vespa-deploy activate
Document processing chains can be added, removed and modified at runtime. Modification includes adding/removing document processors in chains and changing names of chains and processors etc. In short,
vespa-deploy prepare
vespa-deploy activate
ls
restart docprocservice, and/or
restart docprocservice2,
restart docprocservice3and so on
quit
When a distributor stops, it will try to respond to any pending cluster state request first. New incoming requests after shutdown is commenced will fail immediately, as the socket is no longer accepting requests. Cluster controllers will thus detect processes stopping almost immediately.
The cluster state will be updated with the new state internally in the cluster controller. Then the cluster controller will wait for maximum min_time_between_new_systemstates before publishing the new cluster state - this to reduce short-term state fluctuations.
The cluster controller has the option of setting states to make other distributors take over ownership of buckets, or mask the change, making the buckets owned by the distributor restarting unavailable for the time being. Distributors restart fast, so the restarting distributor may transition directly from up to initializing. If it doesn't, current default behavior is to set it down immediately.
If transitioning directly from up to initializing, requests going through the remaining distributors will be unaffected. The requests going through the restarting distributor will immediately fail when it shuts down, being resent automatically by the client. The distributor typically restart within seconds, and syncs up with the service layer nodes to get metadata on buckets it owns, in which case it is ready to serve requests again.
If the distributor transitions from up to down, and then later to initializing, other distributors will request metadata from the service layer node to take over ownership of buckets previously owned by the restarting distributor. Until the distributors have gathered this new metadata from all the service layer nodes, requests for these buckets can not be served, and will fail back to client. When the restarting node comes back up and is marked initializing or up in the cluster state again, the additional nodes will dump knowledge of the extra buckets they previously acquired.
For requests with timeouts of several seconds, the transition should be invisible due to automatic client resending. Requests with a lower timeout might fail, and it is up to the application whether to resend or handle failed requests.
Requests to buckets not owned by the restarting distributor will not be affected. The other distributors will start to do some work though, affecting latency, and distributors will refetch metadata for all buckets they own, not just the additional buckets, which may cause some disturbance.
When a content node restarts in a controlled fashion, it marks itself in the stopping state and rejects new requests. It will process its pending request queue before shutting down. Consequently, client requests are typically unaffected by content node restarts. The currently pending requests will typically be completed. New copies of buckets will be created on other nodes, to store new requests in appropriate redundancy. This happens whether node transitions through down or maintenance state. The difference being that if transitioning through maintenance state, the distributor will not start any effort of synchronizing new copies with existing copies. They will just store the new requests until the maintenance node comes back up.
When coming back up, content nodes will start with gathering information on what buckets it has data stored for. While this is happening, the service layer will expose that it is initializing, but not done with the bucket list stage. During this time, the cluster controller will not mark it initializing in cluster state yet. Once the service layer node knows what buckets it has, it reports that it is calculating metadata for the buckets, at which time the node may become visible as initializing in cluster state. At this time it may start process requests, but as bucket checksums have not been calculated for all buckets yet, there will exist buckets where the distributor doesn't know if they are in sync with other copies or not.
The background load to calculate bucket checksums has low priority, but load received will automatically create metadata for used buckets. With an overloaded cluster, the initializing step may not finish before all buckets have been initialized by requests. With a cluster close to max capacity, initializing may take quite some time.
The cluster is mostly unaffected during restart. During the initializing stage, bucket metadata is unknown. Distributors will assume other copies are more appropriate for serving read requests. If all copies of a bucket are in an initializing state at the same time, read requests may be sent to a bucket copy that does not have the most updated state to process it.
Content cluster nodes will register in the vespa-slobrok naming service on startup. If the nodes have not been set up or fail to start required processes, the naming service will mark them as unavailable.
Effect on cluster: Calculations for how big percentage of a cluster that is available will include these nodes even if they never have been seen. If many nodes are configured, but not in fact available, the cluster may set itself offline due by concluding too many nodes are down.
vespa-slobrok requires nodes to ping it periodically. If they stop sending pings, they will be set as down and the cluster will restore full availability and redundancy by redistribution load and data to the rest of the nodes. There is a time window where nodes may be unavailable but still not set down by slobrok.
Effect on cluster: Nodes that become unavailable will be set as down after a few seconds. Before that, document operations will fail and will need to be resent. After the node is set down, full availability is restored. Data redundancy will start to restore.
A crashing node restarts in much the same node as a controlled restart. A content node will not finish processing the currently pending requests, causing failed requests. Client resending might hide these failures, as the distributor should be able to process the resent request quickly, using other copies than the recently lost one.
An example is OS disk using excessive amount of time to complete IO requests. Ends up with maximum number of files open, and as the OS is so dependent on the filesystem, it ends being able to do not much at all.
get-node-state requests from the cluster controller fetch node metrics from /proc and write this to a temp directory on the disk before responding. This causes a trashing node to time out get-node-state requests, setting the node down in the cluster state.
Effect on cluster: This will have the same effects like the not available on network issue.
A broken node may end up with processes constantly restarting. It may die during initialization due to accessing corrupt files, or it may die when it starts receiving requests of a given type triggering a node local bug. This is bad for distributor nodes, as these restarts create constant ownership transfer between distributors, causing windows where buckets are unavailable.
The cluster controller has functionality for detecting such nodes. If a node restarts in a way that is not detected as a controlled shutdown, more than max_premature_crashes, the cluster controller will set the wanted state of this node to be down.
Detecting a controlled restart is currently a bit tricky. A controlled restart is typically initiated by sending a TERM signal to the process. Not having any other sign, the content layer has to assume that all TERM signals are the cause of controlled shutdowns. Thus, if the process keep being killed by kernel due to using too much memory, this will look like controlled shutdowns to the content layer. | https://docs.vespa.ai/documentation/operations/admin-procedures.html | 2018-06-18T03:54:22 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.vespa.ai |
Contents Now Platform Capabilities Previous Topic Next Topic Assessment metric categories ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Assessment metric categories In the Assessments application, a metric category represents a theme for evaluating assessable records in a given metric type. Each category has a numeric weight value to indicate its importance relative to other categories. Within a category, records called metrics are the traits or values used to evaluate assessable records. For example, there are many categories within the Vendor metric type, including Support Rating, which contains metrics that measure the quality of vendors' customer support services. Assessable records must be associated to categories to be eligible for evaluation. Assessment administrators create categories and manage which assessable records each category is associated to. Weight categories and metricsWhen you create a metric category or metric, you must specify a weight, a numeric value that indicates the importance of the category or metric relative to other categories and metrics.Assessable record associationsOnly the assessable records associated to a category can be evaluated using metrics in that category. Manage which assessable records you evaluate for each category by creating and removing the associations.Delete a categoryWhen you delete a category, the system also deletes the associated category users and stakeholders. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/administer/assessments/concept/c_AssessmentMetricCategories.html | 2018-10-15T11:04:05 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.servicenow.com |
The Logo particle enables you to set the image and/or text you want to have appear as your logo for the site. Through this particle, you can add your logo to the site in a position you set in the Layout Manager.
You can use SVG code to define your logo/image using the SVG Code field. For example, you could enter:
<svg width="400" height="180"> <rect x="50" y="20" rx="20" ry="20" width="150" height="150" style="fill:red;stroke:black;stroke-width:5;opacity:0.5"> Sorry, your browser does not support inline SVG. </svg>
This would produce a rectangle in place of your image. This is a great way to save on bandwidth as there would be no image file at all to load, just the SVG code. | http://docs.gantry.org/gantry5/particles/logo | 2018-10-15T10:43:08 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.gantry.org |
You can create Infinite Volumes to provide a large, scalable data container with a single namespace and a single mount point by using System Manager. You can use Infinite Volumes to store large unstructured repositories of primary data that is written once and seldom used.
If you have not configured the protocols but have installed any one of these licenses, you can create only data protection (DP) volumes.
You can create only Infinite Volumes without storage classes by using System Manager. If you want to create Infinite Volumes with storage classes, you cannot use System Manager; you must use OnCommand Workflow Automation instead.
System Manager uses the default deduplication schedule. If the specified volume size exceeds the limit required for running deduplication, the volume is created and deduplication is not enabled. | http://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-900/GUID-048B3E05-F1E6-47DA-B580-3C55AB04359A.html | 2018-10-15T10:51:11 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.netapp.com |
Reference Architecture for Pivotal Cloud Foundry on GCP
Page last updated:
This guide presents a reference architecture for Pivotal Cloud Foundry (PCF) on Google Cloud Platform (GCP). This document also outlines multiple networking solutions. All these architectures are validated for production-grade PCF deployments using multiple (3+) AZs.
See PCF on GCP Requirements for general requirements for running PCF and specific requirements for running PCF on GCP.
PCF Reference Architectures
A PCF reference architecture describes a proven approach for deploying Pivotal Cloud Foundry on a specific IaaS, such as G GCP Reference Architecture
The following diagram provides an overview of a reference architecture deployment of PCF on GCP.
View a larger version of this diagram.
Base Reference Architecture Components
The following table lists the components that are part of a reference architecture deployment with three availability zones.
Alternative GCP Network Layouts for PCF
This section describes the possible network layouts for PCF deployments as covered by the reference architecture of PCF on GCP.
At a high level, there are currently two possible ways of granting public Internet access to PCF as described by the reference architecture:
- NATs provide public access to PCF internals.
- The instructions for Installing PCF on GCP Manually use this method.
- Every PCF VM receives its own public IP address (no NAT).
- The instructions for Installing PCF on GCP using Terraform use this method.
Providing each PCF VM with a public IP address is the most recommended architecture, because of increased latency due to NATs as well as extra maintenance required for NAT instances that cannot be deployed with BOSH.
However, if you require NATs, you may refer to the following section.
NAT-based Solution
This diagram illustrates the case where you want to expose only a minimal number of public IP addresses.
View a larger version of this diagram.
Public IP addresses Solution
If you prefer not to use a NAT solution, you can configure PCF on GCP to assign public IP addresses for all components. This type of deployment may be more performant since most of the network traffic between Cloud Foundry components are routed through the front end load balancer and the Gorouter.
Network Objects
The following table lists the network objects expected for each type of reference architecture deployment with three availability zones (assumes you are using NATs).
Network Communication in GCP Deployments
This section provides more background on the reasons behind certain network configuration decisions, specifically for the Gorouter.
Load Balancer to Gorouter Communications and TLS Termination
In a PCF on GCP deployment, the Gorouter receives two types of traffic:
Unencrypted HTTP traffic on port 80 that is decrypted by the HTTP(S) load balancer.
Encrypted secure web socket traffic on port 443 that is passed through the TCP WebSockets load balancer.
TLS is terminated for HTTPS on the HTTP load balancer and is terminated for WebSockets (WSS) traffic on the Gorouter.
PCF deployments on GCP use two load balancers to handle Gorouter traffic because HTTP load balancers currently do not support WebSockets.
ICMP
GCP routers do not respond to ICMP; therefore, Pivotal recommends disabling ICMP checks in BOSH Director network configuration. | https://docs.pivotal.io/pivotalcf/2-1/refarch/gcp/gcp_ref_arch.html | 2018-10-15T10:56:32 | CC-MAIN-2018-43 | 1539583509170.2 | [array(['../images/gcp-overview-arch.png', 'Gcp overview arch'],
dtype=object)
array(['../images/gcp-net-topology-base.png', 'Gcp net topology base'],
dtype=object) ] | docs.pivotal.io |
Querying on the Area
This query retrieves data based on the GeoJSON polygon example and associated with a large square footage.
Description
In this example, all documents queried are associated with any really large area. For example, the criteria could be any areas bigger than 10,000 square kilometers without caring about a specific location or query for areas bigger than 10,000 square kilometers without caring where they are.
In this case, the existing view can be queried with wildcards on the location (the first two dimensions) and an open range for the area.
Syntax
Curl syntax:
curl http://[localhost]:8092/[bucket-name]/_design/[design-doc]/_spatial/[spatial-name]?start_range=[]&end_range=[]
Example
The following example is based on the GeoJSON polygon data and the associated spatial view function.
Curl example:
curl[null,null,10000]&end_range=[null,null,null]
Alternatively, the query could have used
start_range=[-180,-90,10000]&end_range=[180,90,null] because the longitudes and latitudes have those bounds.
Response
The results contain only the Bermuda Triangle:
{]]]}}]} | http://docs.couchbase.com/server/5.5/views/sv-ex1-query-area.html | 2018-10-15T10:09:55 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.couchbase.com |
Working Offline
A common desire is the ability to leverage the full development environment provided to work offline. When attempting this users are often thwarted trying to access containers which appear to be successfully running.
This issue is due to a failure to resolve domain names to the appropriate IP address for the
container. This affects those using OS X and appears to be the result of a failure of the name
resolution system to operate when no network interface appears connected. Typical error messages
in browsers warn of being disconnected from the internet. Specifically, Chrome will include the
error code ERR_INTERNET_DISCONNECTED. In these instances, a test command like
curl -v succeeds as does attempting to use the IP address of a container
within a browser. The command
rig dns-records will display a mapping between all known
containers and IP addresses. The
docker inspect CONTAINER_NAME command can be used to view more
detailed information about a specific container including its IP address and should be used on
systems not supporting the
rig application.
Offline DNS Workaround
The work around for this issue is to use the
rig dns-records command and copy the output
to your
/etc/hosts file. You'll need to update this file any time you start or stop project
containers and you should clean entries from it when you reconnect to a network. On systems
which do not support the
rig application the
docker inspect command can be used to
build a mapping of IP addresses to domain names.
Network Changes Can Require Restart
Network changes such as connecting or disconnecting an interface or VPN can require a restart
of your Outrigger environment via
rig restart or a re-execution of the
rig dns
command to ensure routing of traffic to containers and DNS resolution is properly configured.
Additional Information
Those seeking additional information about the root cause of this issue and wishing to explore potential solutions can read further information at the following URLs. Note that none of the purported solutions other than that described above has proven successful in testing. Note that some of these links refer to DNS utilities used in older OS X versions.
-
-
-
-
This same issue affects other projects as well.
A theoretical solution to this issue would be to have a network interface that always appeared active triggering the attempt to resolve the domain name. The following post discusses creation of a virtual interface to satisfy this requirement. The mechanism mentioned by bmasterswizzle and Alex Gray did not prove effective in testing. The interface could be created successfully but never brought up to a state where OS X considered it active.
Some additional information and demonstrations of utilities for diagnostics can be found at | http://docs.outrigger.sh/common-tasks/working-offline/ | 2018-10-15T11:31:37 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.outrigger.sh |
Quickstart¶
Once CasperJS is properly installed, you can write your first script. You can use plain Javascript (or CoffeeScript with PhantomJS versions before 2.0).
Hint
If you’re not too comfortable with Javascript, a dedicated FAQ entry is waiting for you.
A minimal scraping script¶ PhantomJS: Headless WebKit with JavaScript API
What did we just do?
- we created a new Casper instance
- we started it and opened
- once the page has been loaded, we asked to print the title of that webpage (the content of its
<title>tag)
- then we opened another url,
- once the new page has been loaded, we asked to print its title too
- we executed the whole process
Now let’s scrape Google!¶
In the following example, we’ll query google for two terms consecutively, “casperjs” and “phantomjs”, aggregate the result links in a standard
Array and output the result to the console.
Fire up your favorite editor and save the javascript code below in a
googlelinks.js file:
var links = []; var casper = require('casper').create(); function getLinks() { var links = document.querySelectorAll('h3.r a'); return Array.prototype.map.call(links, function(e) { return e.getAttribute('href'); }); }); }); casper.then(function() { // aggregate results for the 'phantomjs' search links = links.concat(this.evaluate(getLinks)); }); casper.run(function() { // echo results in some pretty fashion this.echo(links.length + ' links found:'); this.echo(' - ' + links.join('\n - ')).exit(); });
Run it:
$ casperjs googlelinks.js 20 links found: - - - - - - - - - - - - - - - - - - - -
CoffeeScript version¶
You can also write Casper scripts using the CoffeeScript syntax:()
Just remember to suffix your script with the
.coffee extension.
Note
CoffeeScript is not natively supported in PhantomJS versions 2.0.0 and above. If you are going to use CoffeeScript you’ll have to transpile it into vanilla Javascript. See known issues for more details.
A minimal testing script¶
CasperJS is also a testing framework; test scripts are slightly different than scraping ones, though they share most of the API.
A simplest test script:
// hello-test.js casper.test.begin("Hello, Test!", 1, function(test) { test.assert(true); test.done(); });
Run it using the
casperjs test subcommand:
$ casperjs test hello-test.js Test file: hello-test.js # Hello, Test! PASS Subject is strictly true PASS 1 test executed in 0.023s, 1 passed, 0 failed, 0 dubious, 0 skipped.
Note
As you can see, there’s no need to create a
casper instance in a test script as a preconfigured one has already made available for you.
You can read more about testing in the dedicated section. | http://docs.casperjs.org/en/latest/quickstart.html | 2016-07-23T13:01:40 | CC-MAIN-2016-30 | 1469257822598.11 | [] | docs.casperjs.org |
public interface AdvisorAdapterRegistry
Interface for registries of Advisor adapters.
This is an SPI interface, not to be implemented by any Spring user.
Advisor wrap(Object advice) throws UnknownAdviceTypeException
Should by default at least support
MethodInterceptor,
MethodBeforeAdvice,
AfterReturningAdvice,
ThrowsAdvice.
advice- object that should be an advice
null. If the advice parameter is an Advisor, return it.
UnknownAdviceTypeException- if no registered advisor adapter can wrap the supposed advice
MethodInterceptor[] getInterceptors(Advisor advisor) throws UnknownAdviceTypeException
Don't worry about the pointcut associated with the Advisor, if it's a PointcutAdvisor: just return an interceptor.
advisor- Advisor to find an interceptor for
UnknownAdviceTypeException- if the Advisor type is not understood by any registered AdvisorAdapter.
void registerAdvisorAdapter(AdvisorAdapter adapter)
adapter- AdvisorAdapter that understands a particular Advisor or Advice types | https://docs.spring.io/spring/docs/3.0.5.RELEASE/javadoc-api/org/springframework/aop/framework/adapter/AdvisorAdapterRegistry.html | 2016-07-23T13:13:36 | CC-MAIN-2016-30 | 1469257822598.11 | [] | docs.spring.io |
Last Updated: 2013-11-01
Condition or Error
The latency of updates on the slaves is very high
Causes
First the reason and location of the delay should be identified. It is possible for replication data to have been replicated quickly, but applying the data changes is taking a long time. Using row-based replication may increase the latency due to the increased quantity of data that must be transferred.
Rectifications
Check the replication format:
shell> grep binlog_format /etc/my.cnf binlog_format=ROW
Slow slaves can be the cause, but it may require some configuration changes. | https://docs.continuent.com/tungsten-clustering-5.3/troubleshooting-ecs-high-replication-latency.html | 2019-12-05T17:19:01 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.continuent.com |
Menus Menu Item Article Create/de
From Joomla! Documentation' assign Groups to Viewing Access Levels: User Manager:Add / Edit Viewing Access Level | https://docs.joomla.org/Help39:Menus_Menu_Item_Article_Create/de | 2019-12-05T17:47:11 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.joomla.org |
Bind Context String Keys
A set of string keys that are used with the IBindCtx::RegisterObjectParam method to specify a bind context.
Remarks
Bind contexts are used to pass optional parameters to functions that have an IBindCtx* parameter. Those parameters are expressed as COM objects and might implement interfaces that are used to model the parameter data. Some bind contexts represent a Boolean value, where TRUE indicates an object that implements only IUnknown and FALSE indicates no object is present.
IShellFolder::ParseDisplayName, IShellFolder::BindToObject and IShellItem::BindToHandler take a bind context and you can pass them parameters through that bind context.
Some bind contexts are specific to a certain data source implementations or handler types.
Bind context parameters are defined for use with a specific function or method.
When requesting a property store through IShellFolder, you can specify the equivalent of GPS_DEFAULT by passing in a null IBindCtx parameter. You can also specify the equivalent of GPS_READWRITE by passing a mode of STGM_READWRITE | STGM_EXCLUSIVE in the bind context.
The property bag specified by the STR_PROPERTYBAG_PARAM bind context object contains additional values that you can access with the IPropertyBag::Read and IPropertyBag::Write methods.
See the Parsing With Parameters Sample for an example of the use of bind context values.
Requirements | https://docs.microsoft.com/en-us/windows/win32/shell/str-constants?redirectedfrom=MSDN | 2019-12-05T18:33:06 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.microsoft.com |
A tag is a key-value pair that can be added to an entity. Tags are used to group, search, filter, and focus your data in a way that's useful for troubleshooting and understanding your environment. You can use tags to effectively manage and monitor your complex modern software:
Use tags to organize all your entities. Create tags for teams, roles, or regions to know who is responsible for what.
Add tags to dashboards that visualize your entity data. Easily find dashboards related to the data or entities you care about.
You can create and manage tags with New Relic NerdGraph. Then you can use the tags to filter, facet, and organize the elements of your complex systems in New Relic One.
You must have a paid account to filter tags in the New Relic One UI. However, all account types can use the NerdGraph GraphiQL explorer at api.newrelic.com/graphiql to organize tags by API.
Organize your entities, services, and teams
Tags are used to organize every entity you monitor with New Relic. By applying tags to various entities, you can organize all your data in ways that make sense for your environment. For example, you can use tags to organize entities by the teams responsible for them, or by the applications they support.
Add, edit, and manage tags with NerdGraph
Use New Relic's NerdGraph GraphiQL explorer to create, edit, and delete tags. Once tags are created, they are available to search and filter in the New Relic One UI.
If you have not used the NerdGraph GraphiQL explorer, read the Introduction to NerdGraph for information on authentication, endpoints, and terminology.
If you're already familiar with the NerdGraph GraphiQL explorer, see the following tutorials for sample queries:
- Manage tags: add, delete, view, and replace tags for entities
- Query entities: search for and see relationships between entities
View tags and metadata for an entity
After you create tags, it's important to know which tags have been added to various entity types. You can see all the tags that have been added to an entity in two ways:
- In the UI: from the entity explorer, select an entity from the index. On the Summary page you can see all the tags that have been added to the entity, as well as the entity's guid, account ID, and App ID.
- Using the API: use NerdGraph to read the existing tags for an entity.
After you select an entity from the entity explorer, New Relic automatically shows corresponding metadata about it. Depending on the type of entity (services, hosts, browser or mobile apps, etc.), you can tag, view, facet, search, or query the metadata in several ways, such as:
- Metadata descriptions and examples
-
Tagging examples
You can use tags to organize entities across your organization in the following ways:
Formatting and parameters
Each tag is key-value pair. When using tags, be aware of the following parameters:
An entity can have a maximum of 100 key-value pairs tied to it. This is valid whether you have 100 values for one key, or 100 separate keys that each have a single value.
A key can have a maximum of 128 characters.
A value can have a maximum of 256 characters.
Use tags in New Relic One
After you apply tags using the NerdGraph GraphiQL explorer, you can use them for searching and filtering in New Relic One. Select tags to filter entities across accounts to focus on just the data and system statuses you care about.
To use tags in New Relic One, search for your tagged terms anywhere you see the
Filter with tags field. Filtering with tags will surface all entities that have the tag attached through cross-account search.
Use consistent naming for tags across your organization. Consistent tags make it easier for everyone in your account to filter across complex systems.
Use labels from APM
APM uses labels to organize your data, similar to how tags work in New Relic One. If you have pre-existing labels for APM, you can automatically use them in the same ways you use tags when searching and filtering.
To search for your pre-existing labels as tags in New Relic One:
- Go to one.newrelic.com.
- Click Entity explorer.
- In the
Filter with tagsfield, enter your APM label.
Filter by Infrastructure attributes
If you use attributes with New Relic Infrastructure, the Infrastructure agent registers some attributes as tags in New Relic One. These tags are available by default for searching and filtering in the New Relic One UI.
Examples of default tags created by Infrastructure attributes for a host include:
hostname:production-app1.my-corp.net
operatingsystem:linux
agentversion:1.1.7
Tags drawn directly from Infrastructure attributes cannot be edited with NerdGraph. However, you can add additional tags for your Infrastructure entities, and edit or remove them with NerdGraph. For example, if you want to associate a given host with a team, you could apply
team:my-team as a tag, then later edit or remove it as needed.
To search for your pre-existing attributes as tags in New Relic One:
- Go to one.newrelic.com.
- Click on the Entity explorer.
- In the
Filter with tagsfield, enter your APM or Synthetics label. | https://docs.newrelic.com/docs/new-relic-one/use-new-relic-one/core-concepts/tagging-use-tags-organize-group-what-you-monitor | 2019-12-05T18:16:08 | CC-MAIN-2019-51 | 1575540481281.1 | [array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/NR1_EExplorer_tags_0.png',
'NR1_EExplorer_tags.png Filter by tags in New Relic One'],
dtype=object) ] | docs.newrelic.com |
Menus Menu Item Search Results/fr
From Joomla! Documentation
Contents
How To Access
To create a new Search Form or Form or Search Results Menu Item link under Search.
To edit an existing Search Form or Search Results Menu Item, click its Title in Menu Manager: Menu Items.
Description
Used to create a 'Search Form or Search Results' Term. Optional, used to set a 'canned' search word, words or phrase when Menu Item Search Results is clicked.
- See Quick Tips for more details.
Advanced Tab
Basic Options
- Use Search Options. (Global/No/Yes) Show the search options.
- Use Search Areas. (Global/Show/Hide) Show the search areas checkboxes for Articles, Categories, Contacts, Newsfeeds, or Weblinks.
- Created Date. Date the item(Article, Category, Weblink, etc.) was created.
Default Search Options
- Search For. (All Words/Any Words/Exact Phrase) The type of search. Search for All Words, Any Words or Exact Phrase.
- Results Ordering: Defines what ordering results are listed in.
- Newest First: Show newest item first.
- Oldest First: Show oldest item first.
- Popularity: Show by popularity of item, number of page hits.
- Alphabetical: Show in alphabetical order.
- Category: Show in category order.. | https://docs.joomla.org/Help39:Menus_Menu_Item_Search_Results/fr | 2019-12-05T17:42:13 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.joomla.org |
Thanks up and barked, “Doc, Doc, did you hear that? We need to get the Great One. It’s time to go hunting. I hear the sounds of quail”!
Before I had a chance to answer Zeke, he just took off running after the bird. That would prove to be a problem for Zeke. His running was no match for Reggie the squirrel and the strange looking gray, black, and white bird. They kept running around the outside of the house – Reggie in front, the bird behind, and pulling up the rear Zeke.
I sat on my haunches and watched as they went by. Could that possibly be a mockingbird instead of a quail? Its tail was much longer and it had feathers on its head. I remembered my hunting school training that mockingbirds were know to mimic sounds of different birds.
Before I got the chance to say something to Zeke, he tripped by the pool and…… | https://www.hickorydocstales.com/docs-dog-days-2/ | 2019-12-05T16:59:17 | CC-MAIN-2019-51 | 1575540481281.1 | [] | www.hickorydocstales.com |
- Reference >
mongoShell Methods >
- Collection Methods >
- db.collection.findOneAndReplace()
db.collection.findOneAndReplace()¶
On this page
Definition¶
db.collection.
findOneAndReplace(filter, replacement, options)¶
New in version 3.2.
Modifies and replaces a single document based on the
filterand
sortcriteria.
The
findOneAndReplace()method has the following form:
The
findOneAndReplace()method takes the following parameters:
Behavior¶
Document Match¶
db.collection.findOneAndReplace() replaces the first matching
document in the collection that matches the
filter.
The
sort parameter can be used to influence which document is modified.Replace() supports multi-document transactions.
If the operation results in an upsert, the collection must already exist..
Replace A Document¶
The
scores collection contains documents similar to the following:
The following operation finds the first document with
score less than
20000 and replaces it:
The operation returns the original document that has been replaced:
If
returnNewDocument was true, the operation would return the replacement
document instead.
Sort and Replace A Document¶
The
scores collection contains documents similar to the following:
Sorting by
score changes the result of the operation. The following
operation sorts the result of the
filter by
score ascending, and
replaces the lowest scoring document:
The operation returns the original document that has been replaced:
See Replace A Document for the non-sorted result of this command.
Project the Returned Document¶
The
scores collection contains documents similar to the following: nothing.
A collection
myColl has the following documents:
The following operation includes the collation option:
The operation returns the following document: | https://docs.mongodb.com/master/reference/method/db.collection.findOneAndReplace/ | 2019-03-18T16:50:49 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.mongodb.com |
React Native Getting Started
This page is about React Native for mobile Apps. See React.js for building web apps.
React Native by Facebook, lets JavaScript developers build native mobile apps for iOS and Android. It uses the same design as React, letting developers compose a rich mobile UI from declarative components.
Learn more about installing the Pyze React Native Smart SDK, initializing it, using events, and setting up In-app and/or push notifications. The Pyze SDK supports Android versions 4.0.3 (minSdkVersion 15) and above for Android and iOS version 8 or above for iOS.
1. Get one or more Pyze App Keys
See instructions to get one or more Pyze App Keys depending on platforms (iOS and/or Android) you support for your React Native app.
2. Install the Pyze SDK
Install the Pyze react-native package
3. Setup & Initialize
Setup & Initialize Pyze in your React Native app
Build and Go!
You have enabled all screen flow funnels, loyalty, cohort and churn analysis, Intelligence data explorations, built-in events, auto segmentation, and much more. Use your app with the Pyze SDK and you should see data on growth.pyze.com.
In the following sections, you can add app-defined-, timed- and curated- events. To reach out to your users and create meaningful relationships, add push and in-app notifications.
4. Add events
Add Events to your React Native app.
A comprehensive overview of curated, app defined, timed and built-in events is available under Events in the api & events.
5. Build meaningful relationships with your users
Pyze delivers intelligence-driven marketing and growth automation, so you can build meaningful relationships with your users.
Enable Push Notifications and use Pyze as a provider for both Apple Push Notification Service (APNS) and Google Cloud Messaging (GCM). Instructions for iOS Push Notifications for React Native iOS and Instructions for Android Push Notifications for React Native Android.
- Enable Pyze In-App Notifications for React Native.
- Enable Personalization Intelligence™ for React Native.
App SDK API, Samples, Resources for React Native developers
Setup & Initialize
Get Pyze App Keys
Get a Pyze App Key (PAK) for iOS and Android React Native
If you see any errors while executing ‘react-native link pyze-sdk-react-native’ command, then rerun ‘npm install’ inside the project directory and run the ‘react-native link pyze-sdk-react-native’.
Now you are ready to Setup and Initialize for iOS and Androidblock of your App delegate code file. (usually
AppDelegate.m). If you do not have
application:willFinishLaunchingWithOptions:method in your class simply add it.
For iOS and tvOS, add code in
willFinishLaunchingWithOptions!
Add Events);
Enable Mobile Marketing
Push Notifications (React Native - iOS)
Apple Push Notifications
Apple provides a Apple Push Notification service (APNs) to allow app developers to reach out to their users via push notifications.
The App businesses have the option of hosting and running servers “provider” themselves to send notification content “payload” to Apple over a persistent and secure channel using HTTP/2 multiplex protocol. Apple then forwards the notification content to your app on user’s device.
Pyze customers don’t have to maintain provider servers to communicate with Apple’s push notification service. Pyze can be used to send push notifications for an app in development, an app in production provisioned through the app store, and to a combination of apps in development and production.
Prerequisites
You will need access to your Apple developer Account, Xcode development environment, macOS Keychain Access, and finally iTunes Connect to publish your push-notification enabled app.
Using Pyze as your provider for push notifications
In the following sections, we will create APNs SSL certificates, use your Keychain to convert it into a file (.p12) and upload it to growth.pyze.com. We will also create a mobile provisioning file to import into Xcode. Then we will also enable Background Modes in your project using Xcode and write code to register for remote notifications in your app. Finally, we will enable your app with push notifications.
You will be doing a lot of configuring and clicking, but only write 4-5 lines of code.
1. Generate and download APNs SSL certificates
Login into your Apple developer Account
- If you do not have a valid developer account, enroll in the program here (or click on Program at developer.apple.com)
- If you have a valid developer account, login to the Apple Developer Account (or click on Account at developer.apple.com)
Create or Edit App and enable Push Notifications
From Apple developer Account, click on Certificate, Identifiers & Profiles.
(Note: “People” resource is available to developers who you enroll as an organization, as opposed to an individual)
Under Identifiers group, click on App IDs.
If creating a new app, click on the “+” button on top right. Or, if you have already created the app, select the app and click on the Edit button.
Get the bundle ID from your app
Enter your App Id Description and suffix (bundle id) and make sure you have enabled Push Notifications.
Ensure the bundle ID that you provide matches the bundle ID that you’re using in your app. (This is a common mistake).
Click on Continue.
Generate SSL certificate and download it
Once you successfully created the App-ID, select your App-ID and click the Edit button. In the resulting page scroll down to the Apple Push Notification service SSL Certificates section.
Click on Create Certificate button for either
Development SSL certificateor
Production SSL certificate.
See
Distribution considerationsbelow.
You will be presented with instructions for creating a Certificate Signing Request (CSR) in Keychain Access.
Do not click on Continue and please read the instructions provided by Apple and/or follow along below.
In the Applications folder on your Mac, open the Utilities folder and launch Keychain Access (Or press command-space and type Keychain Access).
Within the Keychain Access Utilities App’s drop down menu, select Keychain Access > Certificate Assistant > Request a Certificate from a Certificate Authority.
In the Certificate Information window, enter the email address and a name for your private key. Leave CA Email Address field blank. In the “Request is” group, select the “Saved to disk” option. Click Continue and then save at a known location within Keychain Access to complete the CSR generating process.
Switch back to the Apple developer Account where you left of and click on Continue. Upload your Certificate Signing Request (CSR) needed to generate certificate.
Generate and then download the certificate as a .cer file
2. Generate Personal Information Exchange PKCS #12 (.p12)
Import certificates into Keychain Access and export Personal Information Exchange PKCS #12 (.p12) file
Import the certificate (.cer) you created, either by double clicking it or by choosing File > Import items in Keychain Access.
Ensure you have selected your certificate from ‘My Certificates’ under left hand side ‘Category’.
Select the certificate which you have added choose File > Export items from menu and export the certificate as Personal Information Exchange (.p12) file.
While saving the p12, it is always recommended that you create a password. Note down your password, you will need it in the later steps.
3. Create Mobile Provisioning profile to download and import into Xcode
In this we will create a mobile provisioning profile in Apple developer Account, download it and import it into Xcode. The profile contains test devices you can send push notifications to during development.
Login into Apple developer Account, click on Certificate, Identifiers & Profiles.
(Note: “People” resource is available to developers who you enroll as an organization, as opposed to an individual)
Select ‘All’ under Provisioning Profiles in Certificates, Identifiers & profiles. Click ‘+’ on top right to create a new profile
Depending on type of provisioning profile (development or production provisioning profile), select the right type. Click on Continue See Distribution considerations below.
Select App ID which you have created. Click on Continue.
Select iOS Development Certificate or iOS Production Certificate which you created and click on Continue. Select Devices pane appears, select device(s) which you would to want to test the APNs service and click on Continue Enter a profile name for your app, click Continue and Download the certificate and click Done.
Note, if Certificates don’t exist, you will be prompted to create one.
Once you ‘Download’ the mobile provision certificate, import it into Xcode by double clicking downloaded mobile provisioning certificate.
Verify time and date of import into xcode, by opening terminal and typing the following. You should see the date time of the latest imports. ls -light ~/Library/MobileDevice/Provisioning\ Profiles
Joes-MacBook-Pro-15:~ awesomejoe$ ls -light ~/Library/MobileDevice/Provisioning\ Profiles 9453527 -rw-r--r-- 1 staff 7.7K Jan 8 15:27 cafebeef-dead-bead-fade-decadeaccede.mobileprovision
4. Configure Pyze as your provider
In order to allow Pyze to send push notifications on your behalf to users of your app, please provide the iOS push certificate to Pyze.
- Login to growth.pyze.com
- Navigate to the app you want to provide keys for
Select App Profile page either from the Portfolio page or menu
- Select Push Notifications on the left menu
Upload a Push notifications certificate in .p12 format, provide p12 password (the password you generated while creating the certificate), and specify the provisioning mode: Development or Production depending on the type of certificate you are using.
- Click Save
- Select daily and weekly quota limits
5. Distribution considerations
Before you publish your app to AppStore or make a public Ad Hoc release, ensure you created a Production SSL certificate and used a Distribution mobile provisioning profile when following the steps above.
Also, ensure you have uploaded a Production .p12 certificate following steps mentioned in above section. Use Production Mode for Provisioning Mode.
Push Notifications with FCM
Google Notifications
Google provides the Firebase Cloud Messaging service (FCM) to allow app publishers to reach out to their users via push notifications.
App Publishers have the option of hosting and running servers “app servers” themselves to send notification content “payload” to FCM. Google then forwards the notification content to your app on user’s device.
Pyze customers do not have to maintain their own servers to communicate with Google’s Messaging service. Pyze can be used to send push notifications for an app in development, an app that is live in Google Play, and to a combination of apps in development and production.
Prerequisites
You must Create Firebase project in Firebase console. Click here after logging into the google account that owns your project for next steps.
Following sections
In the following sections, we will generate the google-services.json and enter the Server Key in growth.pyze.com. Then we will also enable push notifications in your Android project.
Configuring the Server API Key and Sender ID
In this section we will
- Get the Server Key and Sender ID for your app
- Enter the Server Key on growth.pyze.com
Generate the Server API Key and Sender ID
Login into the Google Account that owns your app
Go to Google Cloud Messaging and select Get a Configuration File
- Obtain the Server API Key and Sender ID from the project settings for your
Enter the Server Key and Sender ID on growth.pyze.com
Login into your Firebase Console
- Select Your Project and Click on the Settings icon next to the Project name
In the Cloud Messagin Tab, locate the Server Key and Sender ID
Login into your Pyze Account
- On the App Portfolio page, click on the App Profile icon for the app
On the App Settings page, navigate to the Push Notifications settings for the app and enter the Server key and Sender ID and also quota limits for the maximum number of push messages your app can send in a day and a week
Integrate FCM Push Notifications in your App project.6.1' } // ADD THIS AT THE BOTTOM apply plugin: 'com.google.gms.google-services'
Add the below dependency to your app’s build.gradle file:
dependencies { compile "com.google.firebase:firebase-messaging:9.6.1" }
In the Android.Manifest file, add the below between the Application Tag:
<receiver android: <service android: <intent-filter> <action android: </intent-filter> </service>
Get the configuration file from Google and place it under your app/ folder in your android studio project. Open this link for more info:
In-App Notifications
Enable In-app Notifications in your App
In-app notifications allow app publishers to reach out to app users when they use your app. In-App Notifications are deeply integrated in Pyze Growth Intelligence and allow app publishers to reach out to users from manually from Dynamic Funnels and Intelligence Explorer, and automatically based on workflows and campaigns from Growth Automation.
App publishers have full control over when to display the in-app messages and have two options: use the user interface provided by Pyze or develop their own.
Option 1. Use Pyze provided user interface
Invoke built-in User Interface from your app
You can look for In-App messages and invoke the built-in UI anywhere in your app. For example, on a button click or in your
Start method or when new scene is loaded/unloaded.
Invoke UI from your app
Call the following method, whenever you want to show notification with default pyze UI. Method accepts a callback handler method which will be invoked whenever any of the call to action button on the UI is pressed. ); }); } });
Option 2. Build your own user interface
You can provide your own user interface for in-app messages using the base level APIs
Get Count of New and Unfetched In-App Messages
To get the count of new and un-fetched in-app messages you can call the following method.
countNewUnFetchedMessages(callback);
Get Message Headers
To get the message headers call the following method. Upto 20 messages are cached within the SDK. Using PyzeInAppMessageType you can fetch new unfetched messages, or previously fetched and cached in-app messages or you can get all messages.); }); } });
Sending In-App notifications from growth.pyze.com
In-app notifications allow app publishers to reach out to app users when they use your app. In-App Notifications are deeply integrated in growth.pyze.com and allow app publishers to reach out to users from manually from Dynamic Funnels and Intelligence Explorer, and automatically based on workflows and campaigns from Growth Automation. For illustration purposes, we will send In-App Notifications from Dynamic Funnels.
Sending In-App notifications from Dynamic Funnels
Create an event sequence, specify filters. You can reach out from either Dynamic Funnels or from Recent Sequences
Create and send an In-app Notification. In this example, we will inform users of a tour we created for users
View progress in Campaign History
Enable Personalization
getTags(callback)
Get all tags assigned to the user.
Note: Tags are case sensitive, High Value and high value are different tags.
Usage
PyzePersonalizationIntelligence.getTags(function(tags){ console.log(tags); });
isTagSet(tag, callback)
Returns true if requested tag is assigned to user.
Note: Tags are case sensitive, High Value and high value are different tags
Usage
PyzePersonalizationIntelligence.isTagSet("loyal", function(tagExists) { console.log(tagExists); });
areAnyTagSet(listOfTags, callback)
Returns true if at least one tag is assigned.
Note: Tags are case sensitive, High Value and high value are different tags.
Usage
PyzePersonalizationIntelligence.areAnyTagSet(["loyal","High Value","Low value"], function(tagExists) { console.log(tagExists); });
areAllTagSet(listOfTags, callback)
Returns true if all tags requested are assigned to user.
Note: Tags are case sensitive, High Value and high value are different tags.
Usage
PyzePersonalizationIntelligence.areAllTagSet(["loyal","High Value","Low value"], function(tagExists) { console.log(tagExists); });
User Privacy
Pyze provides APIs to allow end-users to Opt out of Data Collection and also instruct the Pyze system to forget a user’s data.
setUserOptOut
Allows end-users to opt out from data collection. Opt-out can be toggled true or false.
Pyze.setUserOptOut(true)
To resume user data collection set value to false
Pyze.setUserOptOut(false)
deleteUser
Allows end-users to opt out from data collection and delete the user in the Pyze system. We recommend you confirm this action as once a user is deleted, this cannot be undone.
Pyze.deleteUser(true) | https://docs.pyze.com/react-native.html | 2019-03-18T16:46:46 | CC-MAIN-2019-13 | 1552912201455.20 | [array(['images/react-native/Events-Push-InApp.png', None], dtype=object)
array(['images/react-native/AddApp.png', None], dtype=object)
array(['images/react-native/AddAppDetails.png', None], dtype=object)
array(['images/react-native/CopyPak.png', None], dtype=object)
array(['images/react-native/AppProfileLink.png', None], dtype=object)
array(['images/react-native/AppProfilePage.png', None], dtype=object)
array(['images/react-native/Pyze-APNs.png', None], dtype=object)
array(['images/react-native/create-SSL-certificate.jpg', None],
dtype=object)
array(['images/react-native/Add-Edit-Provisioning-Profile-Prod.jpg', None],
dtype=object)
array(['images/react-native/RemoteNotifications.jpg', None], dtype=object)
array(['images/react-native/Pyze-FCM.png', None], dtype=object)
array(['images/react-native/fcm1.png', None], dtype=object)
array(['images/react-native/fcm2.png', None], dtype=object)
array(['images/react-native/fcm3.png', None], dtype=object)] | docs.pyze.com |
Voxel Graphs¶
Goal¶
The goal of voxel graphs is to create procedural worlds without using C++ and with fast iteration times - in other words, no compilation. To generate a voxel world, the densities of millions of voxels need to be queried. Blueprints are way too slow to do that. That’s why a custom graph system optimized for procedural world generation was created.
How it works¶
A voxel graph is similar to a blueprint graph: it has execution flows and data flows. The execution flow start on the Start node:
The graph is called for every voxel. To get the current voxel position, you can use the X, Y and Z nodes.
Note
The coordinates are always integers, but for convenience these nodes output floats
During the execution flow, you need to set two outputs:
- Value Output: this is the density of the current voxel. Clamped between -1 and 1
- Material Output: this is the material of the current voxel
To do so, you have access to the following nodes:
- Set Value for the density
- Set Color/Set Index/Set Double Index for your material depending on your voxel world material config
These are the basic outputs. Voxel graphs can also output custom values, as it will be detailed in the following sections.
Quick Start¶
Creating a flat world¶
- Add a Set Value node:
- Link a Z node to it:
Tip
To add a X/Y/Z node, hold the corresponding key and click on the graph
The density will be negative when Z < 0, and positive when Z > 0. As a negative density corresponds to a full voxel, this is what we want.
You can now set your voxel world World Generator property to Object and set its value to your graph. You should see a flat world.
Note
Make sure that your character has a Voxel Invoker Component
Adding some hills to it¶
For the hills, we’re going to use perlin noise:
This will give us a height. However noise output is between -1 and 1: this is too small for hills! Let’s multiply it by a Float Constant node:
We might want to control the hills height from blueprint. Let’s expose the constant:
- Click on the constant node
- On the detail panel on the left, apply the following settings:
- Your node should now look like that:
We want to have a positive density (empty) when Z > height, and a negative one (full) when Z < height.
We could do it that way:
However, this would lead to bad looking terrain:
Caution
Using ifs to switch between densities will create discontinuities
Instead, we’re going to subtract the height from the Z value:
That way, we still have a positive density when Z > height and a negative one when Z < height, but without any discontinuities!
Important
Make sure you understand the graph above. This is a recurring pattern when using voxel graphs
This gives the following terrain:
Adding some color¶
First of all, as we’re going to reuse the noise output, we’re going to add a local variable. Local variables are only syntaxic sugar - the pins are linked together in a precompiler pass - but they can clean a lot voxel graphs.
- Right click in the voxel graph and select Create Local Variable
- Select the new node
- In the detail panel on the left, set its name to Height and its type to Float
- Link the perlin noise output to it, and replace its usage by a new Height node:
Note
The Height constant was renamed to HeightScale to avoid ambiguity
RGB Config
Set your voxel world Material Config property to RGB, and the Voxel Material property to M_VoxelMaterial_Colors.
Note
If you don’t find the material make sure Show Plugin Content is enabled in the View Options dropdown menu of the selector
Add the following nodes:
Your world should now look like this:
Single Index Config
Set your voxel world Material Config property to Single Index, and the Material Collection property to ExampleCollection.
Add the following nodes:
Your world should now look like this:
Notice how while there is a blending, it’s far from perfect.
Double Index Config
Set your voxel world Material Config property to Double Index, and the Material Collection property to ExampleCollection.
Add the following nodes:
Your world should now look like this:
Notice how smooth the blending is. If you want a smaller blending distance:
| https://voxel-plugin.readthedocs.io/en/latest/docs/voxel_graphs/voxel_graphs.html | 2019-03-18T16:31:06 | CC-MAIN-2019-13 | 1552912201455.20 | [array(['../../_images/start.png', '../../_images/start.png'], dtype=object)
array(['../../_images/xyz.png', '../../_images/xyz.png'], dtype=object)
array(['../../_images/setvalue.gif', '../../_images/setvalue.gif'],
dtype=object)
array(['../../_images/setvaluez.png', '../../_images/setvaluez.png'],
dtype=object)
array(['../../_images/perlinnoise.png', '../../_images/perlinnoise.png'],
dtype=object)
array(['../../_images/multiplyfloatconstant.png',
'../../_images/multiplyfloatconstant.png'], dtype=object)
array(['../../_images/floatconstantsettings.png',
'../../_images/floatconstantsettings.png'], dtype=object)
array(['../../_images/floatconstantsyellow.png',
'../../_images/floatconstantsyellow.png'], dtype=object)
array(['../../_images/perlinnoiseif.png',
'../../_images/perlinnoiseif.png'], dtype=object)
array(['../../_images/ifterrain.png', '../../_images/ifterrain.png'],
dtype=object)
array(['../../_images/zsubtract.png', '../../_images/zsubtract.png'],
dtype=object)
array(['../../_images/subtractterrain.png',
'../../_images/subtractterrain.png'], dtype=object)
array(['../../_images/localvariableusage.gif',
'../../_images/localvariableusage.gif'], dtype=object)
array(['../../_images/localvariablefinal.png',
'../../_images/localvariablefinal.png'], dtype=object)
array(['../../_images/setcolor.png', '../../_images/setcolor.png'],
dtype=object)
array(['../../_images/colorterrain.png', '../../_images/colorterrain.png'],
dtype=object)
array(['../../_images/setindex.png', '../../_images/setindex.png'],
dtype=object)
array(['../../_images/singleindexterrain.png',
'../../_images/singleindexterrain.png'], dtype=object)
array(['../../_images/setdoubleindex.png',
'../../_images/setdoubleindex.png'], dtype=object)
array(['../../_images/doubleindexterrain.png',
'../../_images/doubleindexterrain.png'], dtype=object)
array(['../../_images/setdoubleindexsmaller.png',
'../../_images/setdoubleindexsmaller.png'], dtype=object)
array(['../../_images/doubleindexsmallerterrain.png',
'../../_images/doubleindexsmallerterrain.png'], dtype=object)] | voxel-plugin.readthedocs.io |
How Long Do You Have to Pay Spousal Support in California
How long do you pay spousal support (alimony) in an Orange County, California divorce?
Well, to answer that question, one needs to know how long the marriage lasted. The rule of thumb, at least for marriages lasting less than 10 years, is that support is generally due for one-half the duration of the marriage. There are exceptions, but they are usually made on a case by case basis. For those who have never been divorced before or left their librettos at home, a long term marriage in California is one found to be lasting 10 years or more.
As an example, if you were married for 4 years, the duration of alimony would be expected to last 2 years. This would be for the so-called “permanent support” order. For marriages lasting 10 years or more, then the supporting spouse usually will pay the supported spouse alimony for an indefinite amount of time. It is vague, yes, but the judges have the power to terminate support if a spouse does not become self-supporting after a reasonable amount of time. That, too is vague, and is interpreted on a case by case basis.
People can fall anywhere on the alimony spectrum in terms of ability to become self-supporting. A young person without any child care responsibilities and a good education might be expected to become self-supporting faster than one who is middle aged with no education or work experience, but who devoted their prime working years to raising the parties’ children. The courts have a lot of flexibility to decide whether support is terminated or reduced after a period of time.
As spousal support is unique to the parties and their individual circumstances it is difficult to make general statements about alimony. If you are curious about what the Court considers in making a long term spousal support order, a good section of the Family Code to read is section 4320. That section of the California Family Code lists all of the things the Court should consider before awarding anyone long term spousal support. It’s pretty dry reading, but well worth the effort. | http://divorce-docs.com/how-long-do-you-have-to-pay-spousal-support-in-california/ | 2017-01-16T14:53:41 | CC-MAIN-2017-04 | 1484560279189.36 | [] | divorce-docs.com |
Note that transactions can still be authorised even if the CVV and AVS responses
are: No match or failure responses. CVV and AVS responses are for indication to the
merchant only and usually do not influence the overall Authorisation result. This can vary per cardholders bank though (issuing bank).
CVV Results:
AVS Results:
Auto Decline AVS failures setting will only approve transactions that have an AVS response of Y, U, G, X or Z. | http://docs.worldnettps.com/doku.php?id=developer:integrator_guide:appendix_a&ref=sb | 2017-01-16T14:56:50 | CC-MAIN-2017-04 | 1484560279189.36 | [] | docs.worldnettps.com |
!
The Rapid Application Development for CMS Working Group is a standing team focused on maintaining and improving Joomla's Rapid Application Development layer..:- | http://docs.joomla.org/index.php?title=Production_Working_Groups&diff=105982&oldid=61880 | 2014-03-07T10:50:45 | CC-MAIN-2014-10 | 1393999642168 | [] | docs.joomla.org |
from their desk phones to their BlackBerry devices
- Move calls to their mobile numbers
- Move calls to one-time numbers
- Manually move calls from Voice over Wi-Fi to Voice over Mobile
- Manually move calls from Voice over Mobile to Voice over Wi-Fi
- Configure a call schedule
- Add participants to an active call. Users must be associated with a Cisco Unified Communications Manager 7.1 or later PBX before they can add participants to an active call.
- Device can initiate automatic handoff between Voice over Wi-Fi and Voice over Mobile
- Set their mobile numbers from their devices
- Forward incoming BlackBerry MVS calls to an internal extension or another phone
- Allow the device to use the no data coverage number to make calls when the network is experiencing congestion
- Use only the BlackBerry MVS line for making and taking calls
- Define the starting RTP port number that the device uses when making Voice over Wi-Fi calls and also define the preferred order of codecs when making the Voice over Wi-Fi calls
- Change the mobile number
- Change the call move to desk number
- Change the work line label
- Change the default line for outgoing calls
- Change the outgoing call setup sound
- Change the number that users call to access voice mail
- Change caller restrictions
- Change the default network (Wi-Fi or mobile) that users use for work calls
- Change the automatic handoff method
- Enable or disable automatic handoff
You can. | http://docs.blackberry.com/en/admin/deliverables/43928/Managing_classes_of_service_1951870_11.jsp | 2014-03-07T10:45:58 | CC-MAIN-2014-10 | 1393999642168 | [] | docs.blackberry.com |
Recent Development
For the 2.2.x branch the tiger module has:
- @completed goal@
Module Status
The tiger module's status is unknown
IP Review:
TigerDataStoreFactory.java contains a GPL license. TO_RESOLVE - this can be removed as per Chris Holmes's statement.
Outstanding Issues
(Unable to create JIRA link. There is no tiger module on JIRA) | http://docs.codehaus.org/pages/viewpage.action?pageId=57721 | 2014-03-07T10:44:32 | CC-MAIN-2014-10 | 1393999642168 | [] | docs.codehaus.org |
Griffon 0.9.3-beta-1 – "Aquila fasciata" - is a maintenance release of Griffon 0.9.
Theres a new application archetype available. It bootstraps an application in a similar way as Ubuntu's Quickly does. Here's how to use it
The generated code is fully i18n aware and customizable..
Calling
griffon list-plugins will now filter the list automatically, leaving out those plugins that do not match the current development platform you're working on..
All of the
createMVCGroup and
build>).
New versions for the following dependencies
Griffon 0.9.3-beta-1. | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=209651207 | 2014-03-07T10:45:45 | CC-MAIN-2014-10 | 1393999642168 | [] | docs.codehaus.org |
View or hide call logs in the Messages application
You can set your BlackBerry smartphone to show call logs, including missed calls, in the Messages application.
- From the home screen, press the
key.
- Press the
key > Options > Call Logs and Lists.
- To show recent and missed calls in the Messages application, select the All Calls option.
- To hide call logs in the Messages application, select the None option.
- Press the
key > Save.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/47918/1597061.jsp | 2014-04-16T08:47:41 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.blackberry.com |
Description / Features
This plugin uses the Trac XML-RPC plugin to connect to a Trac instance and display metrics about open tickets. It can also drill down to the component level.
Installation
- Install the plugin through the Update Center or download it into the SONARQUBE_HOME/extensions/plugins directory
- Restart the SonarQube server
Usage
Install and enable the Trac XML-RPC plugin on your Trac instance. You will need to give anonymous or a user account 'XML_RPC' and 'TICKET_VIEW' privileges.
A user working with trac 0.11.7 has reported on the user mailing-list that the HttpAuthPlugin should be also installed, so if you get the following error Trac: XmlRpcException (possibly missing authentication details?) install it as well.
The Trac instance URL can be specified in two places:
Your project's pom.xml file under the 'issue managment' section; for example: (note that the username/password/component has to be specified in SonarQube project settings, the plugin does not currently have the ability to read the username/password/component from the pom.xml)
- Specified in SonarQube, see Analyzing Source Code.
Known Limitations
- While the Trac instance URL can be picked up from your project's pom.xml file the username/password/component have to be specified within the project settings in SonarQube.
Change Log
Release 0.3 (2 issues)
Release 0.2 (2 issues)
| http://docs.codehaus.org/pages/viewpage.action?pageId=171081780 | 2014-04-16T08:01:07 | CC-MAIN-2014-15 | 1397609521558.37 | [array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object) ] | docs.codehaus.org |
19 drracket:eval
- current-namespace has been set to a newly created empty namespace. This namespace has the following modules shared (with namespace-attach-module) from DrRacket’s original namespace:If the gui-modules? parameter is a true value, then these modules are also shared:
read-curly-brace-as-paren is #t;
read-square-bracket-as-paren is #t;
error-print-width is set to 250;
current-ps-setup is set to a newly created ps-setup% object;
the exit-handler is set to a parameter that kills the user’s custodian;:eval:build-user-eventspace/custodian.
The input argument specifies the source of the program.
The eval-compile-time-part? argument indicates if expand is called or if expand-top-level-with-compile-time-evals is called when the program is expanded. Roughly speaking, if your tool will evaluate each expression itself by calling eval then pass #f. Otherwise, if your tool just processes the expanded program, be sure to pass #t.
This function calls front-end/complete-program to expand the program. Unlike when the Run is clicked, however, it does not call front-end/finished-complete-program..
The second argument to iter is a thunk that continues expanding the rest of the contents of the definitions window. If the first argument to iter was eof, this argument is just the primitive void.
current-custodian is set to a new custodian.
In addition, it calls dr:language-configuration:get-settings-preferences-symbol for that language-settings.
The init argument is called after the user’s parameters are all set, but before the program is run. It is called on the user’s thread. The current-directory and current-load-relative-directory parameters are not set, so if there are appropriate directories, the init argument is a good place to set them.
The kill-termination argument is called when the main thread of the eventspace terminates, no matter if the custodian was shutdown, or the thread was killed. This procedure is also called when the thread terminates normally. This procedure is called from a new, dedicated thread (i. e., not the thread created to do the expansion, nor the thread that drracket:eval:build-user-eventspace/custodian was called from.) | http://docs.racket-lang.org/tools/drracket_eval.html | 2014-04-16T07:18:10 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.racket-lang.org |
Welcome to the AEON.to API documentation!¶
This is the API documentation for AEON.to, which is a service that allows users to pay any bitcoin address anonymously using Aeon.
Note
The current version of the API is 3.
Contents¶
- Introduction
- Version 3
- Querying order parameters
- Creating a new order
- Creating a new order using a payment protocol URL
- Querying order status
- Querying order price
- Public test instance
- Problems? | https://aeonto-api.readthedocs.io/en/latest/ | 2020-05-25T00:44:25 | CC-MAIN-2020-24 | 1590347387155.10 | [] | aeonto-api.readthedocs.io |
This document gives an overview of CKAN’s authorization capabilities and model in relation to access control. The authentication/identification aspects of access control are dealt with separately in CKAN Authentication and Identification.
CKAN implements a fine-grained role-based access control system.
In a nutshell: For a particular package (or other protected object) a user can be assigned a role which specifies permitted actions (edit, delete, change permissions, etc.).
There are variety of protected objects to which access can be controlled, for example the System, Packages, Package Groups and Authorization Groups. Access control is fine-grained in that it can be set for each individual package, group or authorization group instance.
For each protected object there are a set of relevant actions such as create’, ‘admin’, ‘edit’ etc. To facilitate mapping Users and Objects with Actions, Actions are aggregated into a set of roles (e.g. an ‘editor’ role would have ‘edit’ and ‘read’ action).
A special role is taken by the System object, which serves as an authorization object for any assignments which do not relate to a specific object. For example, the creation of a package cannot be linked to a specific package instance and is therefore a system operation.
To gain further flexibility, users can be assigned to authorization groups. Authz groups are both the object of authorization (i.e. one can have several roles with regards to an authz group) and the subject of authorization (i.e. they can be assigned roles on other objects which will apply to their members.
The assignment of users and authorization groups to roles on a given protected object (such as a package) can be done by ‘admins’ via the ‘authorization’ tab of the web interface (or by system admins via that interface or the system admin interface). There is also a command-line based authorization manager, detailed below.
Although the Admin Extension provides a Web interface for managing authorization, there is a set of more powerful Paster commands commands for fine-grained control.
The roles command will list and modify the assignment of actions to roles:
$ paster --plugin=ckan roles -c my.ini list $ paster --plugin=ckan roles -c my.ini deny editor create-package $ paster --plugin=ckan roles -c my.ini allow editor create-package
This would first list all role action assignments, then remove the ‘create-package’ action from the ‘editor’ role and finally re-assign that role.
Similarly, the rights command will set the authorization roles of a specific user on a given object within the system:
$ paster --plugin=ckan rights -c my.ini list
Will list all assigned rights within the system. It is recommended to then grep over the resulting output to search for specific object roles.
Rights assignment follows a similar pattern:
$ paster --plugin=ckan rights -c my.ini make bar admin package:foo
This would assign the user named bar the admin role on the package foo. Instead of user names and package names, a variety of different entities can be the subject or object of a role assignment. Some of those include authorization groups, package groups and the system as a whole (system:`):
# make 'chef' a system-wide admin: $ paster --plugin=ckan rights -c my.ini make chef admin system # allow all members of authz group 'foo' to edit group 'bar' $ paster --plugin=ckan rights -c my.ini make agroup:foo edit \ group:bar
To revoke one of the roles assigned using make, the remove command is available:
$ paster --plugin=ckan rights -c my.ini remove bar admin package:foo
For more help on either of these commands, also refer to the paster help.
Each role has a list of permitted actions appropriate for a protected object.
Currently there are three basic roles (although you can add others if these defaults do not suit):
- reader: can read the object
- anon_editor: (anonymous i.e. not logged in) can edit and read the object
- editor: can edit, read and create new objects
- admin: admin can do anything including: edit, read, delete, update-permissions (change authorizations for that object)
When you install a new CKAN extension or upgrade your version of CKAN then new actions may be created, and permissions may given to these basic roles, according to the broad intention of the name of the roles.
It is suggested that if the broad idea of these basic roles and their actions are not suitable for your CKAN instance then you create new roles and assign them actions of your own choosing, rather than edit the roles. If the definition of the roles drift from their name then it can be confusing for admins and cause problems for CKAN upgrades and new extensions.
Actions are defined in the Action enumeration in ckan/model/authz.py and currently include: edit, change-state, read, purge, edit-permissions, create-package, create-group, create-authorization-group, read-site, read-user, create-user.
Obviously, some of these (e.g. read) have meaning for any type of Domain Object, and some (e.g. create-package) can not be associated with any particular Domain Object, so the Context for Roles with these Actions is system.
The read-site action (with System context) is designed to provide/deny access to pages not associated with Domain Objects. This currently includes:
- PackageSystem Admin’ and can do any action on any object. (A shortcut for creating a System Admin is by using the paster sysadmin command.)
- A user given the admin right for a particular object can do any action to that object.
Although ckan.net is forging ahead with the Wikipedia model of allowing anyone to add and improve metadata, some CKAN instances prefer to operate in ‘Publisher mode’ which allows edits only from authorized users.
To operate in this mode:
- Remove the rights for general public to edit existing packages and create new ones.:paster rights remove visitor anon_editor package:all paster rights remove logged_in editor package:all paster rights remove visitor anon_editor system paster rights remove logged_in editor system
- If logged-in users have already created packages in your system then you may also wish to remove admin rights. e.g.:paster rights remove bob admin package:all
- Change the default rights for newly created packages. Do this by using these values in your config (.ini file):ckan.default_roles.Package = {“visitor”: [“reader”], “logged_in”: [“reader”]} ckan.default_roles.Group = {“visitor”: [“reader”], “logged_in”: [“reader”]} ckan.default_roles.System = {“visitor”: [“reader”], “logged_in”: [“reader”]} ckan.default_roles.AuthorizationGroup = {“visitor”: [“reader”], “logged_in”: [“reader”]}
Note there is also the possibility to restrict package edits by a user’s authorization group. See
Example 1: Package ‘paper-industry-stats’:
- David Brent is an ‘admin’
- Gareth Keenan is an ‘editor’
- Logged-in is a ‘reader’ (This is a special user, meaning ‘anyone who is logged in’)
- Visitor is a ‘reader’ (Another special user, meaning ‘anyone’)
That is, Gareth and David can edit this package, but only Gareth can assign roles (privileges) to new team members. Anyone can see (read) the package.
Example 2: The current default for new packages is:
- the user who creates it is an ‘admin’
- Visitor and Logged-in are both an ‘editor’ and ‘reader’
NB: “Visitor” and “Logged-in” are special “pseudo-users” used as a way of concretely referring to the special sets of users, namely those that are a) not logged-in (“visitor”) and b) logged-in (“Logged-in”)
When a new package is created, its creator automatically become admin for it. This user can then change permissions for other users.
NB: by default any user (including someone who is not logged-in) will be able to read and write. This default can be changed in the CKAN configuration - see default_roles in CKAN Configuration.
We record tuples of the form:
- A user means someone who is logged in.
- A visitor means someone who is not logged in.
- An protected object is the subject of a permission (either a user or a pseudo-user)
- There are roles named: Admin, Reader, Writer
- A visitor visits a package page and reads the content
- A visitor visits a package page and edits the package
- Ditto 1 for a user
- Ditto 2 for a user
- On package creation if done by a user and not a visitor then user is made the ‘admin’
- An admin of a package adds a user as an admin
- An admin of a package removes a user as an admin
- Ditto for admin re. editor
- Ditto for admin re. reader
- We wish to be able assign roles to 2 specific entire groups in addition to specific users: ‘visitor’, ‘users’. These will be termed pseudo-users as we do not have AC ‘groups’ as such.
- The sysadmin alters the assignment of entities to roles for any package
- A visitor goes to a package where the editor role does not include ‘visitor’ pseudo-user. They are unable to edit the package.
- Ditto for user where users pseudo-user does not have editor role and user is not an editor for the package
- Ditto 12 re reader role.
- Ditto 13 re reader role.
- Try to edit over REST interface a package for which ‘visitor’ has Editor role, but no API is supplied. Not allowed.
Warning: not all of what is described in this conceptual overview is yet fully implemented.
- There are Users and (User) Authorization Groups
- There are actions which may be performed on “protected objects” such as Package, Group, System
- Roles aggregate actions
- UserObjectRole which assign users (or Authorization groups) a role on an object (user, role, object). We will often refer to these informally as “permissions”.
NB: there is no object explicitly named “Permission”. This is to avoid confusion: a ‘normal’ “Permission” (as in e.g. repoze.what) would correspond to an action-object tuple. This works for the case where protected objects are limited e.g. a few core subsystems like email, admin panel etc). However, we have many protected objects (e.g. one for each package) and we use roles so this ‘normal’ model does not work well.
Question: do we require for both Users and UserAuthorizationGroup to be subject of Role or not?
Ans: Yes. Why? Consider, situation where I just want to give an individual user permission on a given object (e.g. assigning authz permission for a package)? If I just have UserAuthorizationGroups one would need to create a group just for that individual. This isn’t impossible but consider next how to assign permissions to edit the Authorization Groups? One would need create another group for this but then we have recursion ad infinitum (unless this were especially encompassed in some system level permission or one has some group which is uneditable ...)
Thus, one requires both Users and UserAuthorizationGroups to be subject of “permissions”. To summarize the approximate structure we have is:
class SubjectOfAuthorization class User class UserAuthorizationGroup class ObjectOfAuthorization class Package class Group class UserAuthorizationGroup ... class SubjectRoleObject subject_of_authorization object_of_authorization role
Demo example model:
User Group Permission * Users are assigned to groups * Groups are assigned permissions
CKAN Authentication and Identification
Enter search terms or a module, class or function name. | https://docs.ckan.org/en/ckan-1.4.1/authorization.html | 2020-05-25T02:02:21 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.ckan.org |
What's New in Visual Studio 2008!)
Integrated Development Environment (IDE)
Settings Migration
Community Components
Community and Help Menus
Window Management
Class Designer
Projects and Solutions
Web Application Projects
AJAX Development
Project Designer
Deployment
Editing
New Design View and CSS Design Tools
IntelliSense for Jscript and ASP.NET AJAX
Object Browser and Find Symbol Support for Multi-targeting
WPF Designer
Data
Language-Integrated Query (LINQ)
Client Application Services
Reporting
New Report Projects
Report Wizard
Expression Editor Enhancement
ReportViewer Printing
PDF Compression
MSBuild
Other:
Target a Specific .NET Framework
Multiple Processor Capabilities
Enhanced Logging
Item Definitions
Assembly Location and Name Changes
.NET Compact Framework Version 3.5
.NET Framework Version 3.5
What's New in ADO.NET
What's New in Architecture Edition
What's New in Data
What's New in Deployment
What's New in Smart Device Projects
What's New in the Visual Basic Language
What's New in the Visual Studio Debugger
What's New in Visual Basic
What's New in Visual C#
What's New in Visual C++ 2008
What's New in Visual Studio Team System
What's New in Visual Studio Tools for Office | https://docs.microsoft.com/en-us/archive/blogs/adamga/whats-new-in-visual-studio-2008 | 2020-05-25T03:11:48 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.microsoft.com |
SF/JavaOne, Day 4, Smart User Interfaces
Unfortunately, this is just a gripe and not an actual talk about
something going on at JavaOne right now. When you pull up the
JavaOne Session Catalog,
ask for all the sessions in a single day, and then you sort by time,
you get a list of sessions that start with the afternoon session and
then list the morning sessions. Yeah. Because that makes a
whole lot of sense. Good to see that someone tried out this
interface before publishing it. | https://docs.microsoft.com/en-us/archive/blogs/cyrusn/sfjavaone-day-4-smart-user-interfaces | 2020-05-25T03:05:27 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.microsoft.com |
The User Menu¶
The user menu can be accessed via the menu area of the orcharhino management UI:
The user menu is dynamically named depending on the user currently logged in. The naming scheme goes as follows:
<first_name> <last_name>(“Example User” in the above example).
Selecting My account from the drop-down menu will take you to the edit user page for your user. This page is identical to the edit user window, which is documented in the administer menu section. (More information can be found on the users page).
Selecting Log out from the drop-down menu, will log out the user currently logged in, and take you back to the orcharhino login screen (see below).
The orcharhino login screen:
More information can be found in the user management section. | https://docs.orcharhino.com/sources/management_ui/the_user_menu.html | 2020-05-25T00:23:49 | CC-MAIN-2020-24 | 1590347387155.10 | [array(['../../_images/the_user_menu.png', 'User menu tile'], dtype=object)
array(['../../_images/login_screen.png', 'Management UI login screen'],
dtype=object) ] | docs.orcharhino.com |
Table of Contents
Product Index
Three textures for ElorOnceDark's XTech Outfit, inspired by goth and cyber fashion. With exquisite new detail sculpted in Zbrush, hair textures made with unique and exclusive high resolution photos of real dreads, and plenty of extras, you can expand your use for this outfit. It also includes seven new textures for the dreads, one of them white for easy recolouring, and 4 separate for tactile, textured epaulettes to allow for variations on the basic. | http://docs.daz3d.com/doku.php/public/read_me/index/19032/start | 2020-05-25T02:34:36 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.daz3d.com |
Chat
Contents
Chatter for Salesforce
Currently not supported. | https://docs.resco.net/wiki/Chatter | 2020-05-25T02:23:38 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.resco.net |
[−][src]Crate slog_scope
Logging scopes for slog-rs
Logging scopes are convenience functionality for slog-rs to free user from manually passing
Logger objects around.
Set of macros is also provided as an alternative to original
slog crate macros, for logging
directly to
Logger of the current logging scope.
Set global logger upfront
Warning: Since
slog-scope version 4.0.0,
slog-scope defaults to
panicking on logging if no scope or global logger was set. Because of it, it
is advised to always set a global logger upfront with
set_global_logger.
Using
slog-scope as a part of API is not advised
Part of a
slog logging philosophy is ability to freely express logging contexts
according to logical structure, rather than callstack structure. By using
logging scopes the logging context is tied to code flow again, which is less
expressive.
It is generally advised NOT to use
slog_scope in libraries. Read more in
slog-rs FAQ
#[macro_use(slog_o, slog_info, slog_log, slog_record, slog_record_static, slog_b, slog_kv)] extern crate slog; #[macro_use] extern crate slog_scope; extern crate slog_term; use slog::Drain; fn foo() { slog_info!(slog_scope::logger(), "foo"); info!("foo"); // Same as above, but more ergonomic and a bit faster // since it uses `with_logger` } fn main() { let plain = slog_term::PlainSyncDecorator::new(std::io::stdout()); let log = slog::Logger::root( slog_term::FullFormat::new(plain) .build().fuse(), slog_o!() ); // Make sure to save the guard, see documentation for more information let _guard = slog_scope::set_global_logger(log); slog_scope::scope(&slog_scope::logger().new(slog_o!("scope" => "1")), || foo() ); } | https://docs.rs/slog-scope/4.3.0/slog_scope/ | 2020-05-25T00:40:04 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.rs |
Code primer
- The “core”
- Initializing the Framework: serendipity_config.inc.php and serendipity_config_local.inc.php
- .htaccess
- serendipity.css.php
- deployment directory
- Composer / Bundled-Libs
- Internationalization
- Other files and directories
- Database layers
- Important variables and constants
- Important API functions
- Error-Handling
- Frontend-Routing: index.php
- Backend-Routing: serendipity_admin.php
- Plugins
- Themes
- Coding Guidelines
To get started with developing Serendipity, here’s a few things to get you kickstarted.
The first thing you need is an installation of Serendipity, and FTP/SSH/file access to that installation. It’s easy to setup a local Apache server with PHP and MySQL on all Linux, MacOS or Windows systems. Of course the best thing would be if you use github to check out our core code, so you can easily contribute patches or update your installation to the latest code version.
Now here are the most basic concepts you need to know. Those assume you have some basic PHP knowledge, and you are comfortable with reading the PHP code of the files alongside. The best way is always learning-by-doing when working with an existing system, so our goal is not to teach you all the basics - but rather to get you to know the basic workflows, so that you can check them out easily on your own, and know where to find what.
All our core PHP functions (serendipity_XXX) have phpDoc style comments which explain the parameters and functionality of each function, so be sure to read those. We currently have no automatted Code documentation, but you should be able to use any phpDoc compiler on our code yourself.
The “core”
Initializing the Framework: serendipity_config.inc.php and serendipity_config_local.inc.php
The user configuration for the most basic settings required to start the framework lies in serendipity_config_local.inc.php. It sets up the basic array $serendipity, and configure database credentials and the used Serendipity version.
The file serendipity_config.inc.php is the heart of our framework. It sets default variables, checks the PHP environment, loads the user configuration, includes the required files.
Whenever you want to do “something” with the Serendipity framework, all you need to do is include that file serendipity_config.inc.php in your code, and you can immediately access most of the Serendipity function calls, like this:
<?php include 'serendipity_config.inc.php'; $entries = serendipity_fetchEntries(); print_r($entries); ?>
The defined variables in this file by default are:
- $serendipity[‘versionInstalled’]: Current version number
- $serendipity[‘dbName’]: Database name
- $serendipity[‘dbPrefix’]: Database prefix (prepended before internal table names)
- $serendipity[‘dbHost’]: Database host
- $serendipity[‘dbUser’]: Database user
- $serendipity[‘dbPass’]: Database password
- $serendipity[‘dbType’]: Database type (=layer)
- $serendipity[‘dbPersistent’]: Whether to use persistant connections
On top of that, certain variables that are not included in the Serendipity Configuration panel can be configured in this file; if they are not present, the Serendipity defaults will be used. Such variables are:
- $serendipity[‘production’]: If set to “false” you can evoke extra debugging output when errors occur. If set to “debug”, it will be extra-verbose. (default: based on version, RC and alpha/betas default to false)
- $serendipity[‘allowDateManipulation’]: If set to true (default), users can change the date of entries
- $serendipity[‘max_last_modified’]: Amount of seconds how fresh an article must be, so that a comment to an entry will modify its LastModified-timestamp (default: 7 days)
- $serendipity[‘max_fetch_limit’]: In RSS-Feeds, how many entries can be fetched (default 50)
- $serendipity[‘trackback_filelimit’]: How large may an URL be to check for incoming trackback links (default 150kb)
- $serendipity[‘fetchLimit’]: How many entries to display (default 15)
- $serendipity[‘RSSfetchLimit’]: How many entries to display within RSS feed (default 15)
- $serendipity[‘use_PEAR’]: By default, Serendipity will use externally provided PEAR files (if existing). To force using the PEAR libraries bundled with Serendipity, set this variable to FALSE.
- $serendipity[‘useHTTP-Auth’]: If enabled (on by default, requires mod_php), users can log in to the blog by specifying user/password in the URL like:[email protected]/serendipity_admin.php(default true)
- $serendipity[‘cacheControl’]: (default true)
- $serendipity[‘expose_s9y’]: Whether to expose Serendipity version number (default true)
- $serendipity[‘forceBase64’]: When enabled, mails are encoded with base64 instead of imap_8bit (default false)
- $serendipity[‘use_iframe’]: When enabled, uses an iframe to save entries in the backend to prevent timeouts (default true)
- $serendipity[‘autolang’]: Default language, when autodetection fails (default “en”)
- $serendipity[‘defaultTemplate’]: Which template directory to use for fallback chaining (see below) (default “2k111”)
- $serendipity[‘template_backend’]: Which backend template to use when none is configured (default “2k11”)
-
- $serendipity[‘dashboardCommentsLimit’]: How many comments to show in the dashboard overview (default 5)
- $serendipity[‘dashboardEntriesLimit’]: How many future entries and drafts to show in the dashboard overview (default 5)
- $serendipity[‘languages’]: Holds an array of available languages
- $serendipity[‘calendars’]: Holds an array of available calendar types (gregorian and persian by default)
- $serendipity[‘charsets’]: Holds an array of supported charsets (native and UTF-8 by default)
- $serendipity[‘use_autosave’]: Whether to use local-browser autosaving feature (default true)
- $serendipity[‘imagemagick_thumb_parameters’]: Otpional parameters passed to imagemagick when creating thumbnails (default empty)
- $serendipity[‘logLevel’]: If set, enables using the Katzgrau KLogger (writes to templates_c/). If set to “debug”, be extra verbose (default: ‘Off’)
.htaccess
This file holds simple, central mod_rewrite RewriteRules (when URL-Rewriting is enabled) to match all permalink patters back to index.php (see the “Routing”-Part below).
serendipity.css.php
This file is usally called through the URL RewriteRules, and dynamically assembles the CSS statements for a selected theme as well as all plugins that have distinct CSS output.
deployment directory
Serendipity supports the concept of a “shared installation”. This keeps Serendipity as a kind of library in a central directory outside the DocumentRoot. Each blog will then only use stub-files which actually include that library file. The deployment-Directory contains exactly those stubs that point back to the library (through simply “include” calls). Note that the file names are exactly those that the core actually uses.
For more information, see Setting up a shared installation.
Composer / Bundled-Libs
The directory bundled-libs holds all of our internally used libraries that ship together with Serendipity. Using composer, we are able to update those libraries (in part). However, note that composer is NOT required to develop with Serendipity, since the libraries are all contained in our source code repository.
We currently bundle:
- PEAR for some legacy libraries:
- PEAR::Cache for some easy File/function caching
- PEAR::HTTP for some basic HTTP Request and Response classes
- PEAR::Net for some basic HTTP operations, related to PEAR::HTTP
- PEAR::Text for basic Text/Wiki operations
- PEAR::XML for XML operations
- Onyx for RSS parsing
- simplepie for advanced Atom and RSS parsing
- Smarty for our templating infrastructure
- composer for library maintenance
- katzgrau/klogger, psr as a central low-level logging facility
- zendframework/zend-db/ as an (optional) database layer intermediate
- create_release.sh is the script we use to bundle releases
- serendipity_generateFTPChecksums.php is the code used by the create_release.sh to create the *checksums.inc.php file
Internationalization
Serendipity’s translations are handled through easy .php include files inside the lang/ subdirectory. Each language has its own file according to its countrycode.
Translation files can have a “local charset” file and a UTF-8 variant in the UTF-8 subdirectory.
The file “addlang.txt” is meant for developers to hold new strings; by running “addlang.sh” this list can be integrated into each language file.
Some special constants / variables inside the language files are these:
- $i18n_filename_from: Holds an array of character replacements, which will replaced with ASCII characters when they occur within a URL. This can be used to translate umlauts etc. to “readable” variants of those, like a german “Ü” gets translated to “Ue”.
- $i18n_filename_to: This holds the actual character replacement values.
- LANG_CHARSET: Central constant that indicates the used charset of a language. Many plugins etc. use this to deduce which
- SQL_CHARSET: Default charset for database connection
- DATE_LOCALES: Used locales for a language
- DATE_FORMAT_ENTRY: The dateformat used in the language
- DATE_FORMAT_SHORT: A short dateformat used in the language
- WYSIWYG_LANG: Which language file include is used by CKEditor
- NUMBER_FORMAT_DECIMALS, NUMBER_FORMAT_DECPOINT, NUMBER_FORMAT_THOUSANDS: Some default number formatting rules
- LANG_DIRECTION: Indicates if a language uses rtl or ltr spelling
Certain language constants can use placeholders like “%s” (standard PHP Sprintf()) for later variable inclusion.
The proper language in the codeflow is loaded through “include/lang.inc.php”. This file is called twice; once for the central routine to only detect the proper language, and then a second time to actually load all language constants for the currently-logged in user.
Other files and directories
There are a couple of other files in the Serendipity root that are basic stubs which all reference the core framework:
- index.php: Frontend (see below)
- .htaccess: Webserver configuration (see above)
- serendipity_admin.php: Backend (see below)
- serendipity_config.inc.php: Framework initializing (see above)
- serendipity_config_local.inc.php: Basic configuration of the blog
- comment.php, wfwcomment.php: Used to accept and process a comment or trackback made to blog entries
- checksums.inc.php: A checksum file that holds information about the files that belong to the current Serendipity version; it is used to verify the integrity of an installation to find modified files.
- exit.php: A tracking script used for external links
- rss.php: The RSS feeds are routed through this
- serendipity_admin_image_selector.php: A mediadatabase popup screen that is called from within the backend
- *serendipity_xmlrpc.php: A basic stub for XML-RPC calls made to the blog, the actual XML-RPC client is provided through a plugin
The subdirectories contain:
- archives/: A temporary directory used to hold static files
- uploads/: Directory to contain user/media files
- bundled-libs/: External libraries like Smarty and other (see above)
- deployment/: Stub directory for shared installations (see above)
- docs/: ChangeLog and other documentation
- htmlarea/: Bundled CKEditor WYSIWYG
- include/: Internal code libraries
- include/admin/: Code workflow used for backend (see below)
- include/admin/importers/: Data export/import modules (see below)
- include/db/: Code libraries for database layers
- include/tpl/: Configuration templates for backend functionality
- lang/: PHP language files (using simple constants)
- plugins/: Plugin files
- sql/: SQL database creation files
- templates/: Theme files
- templates_c/: Compile directory for cached templates (and other temporary data)
- tests/: Draft ideas for unit tests
Database layers
The database layer offers a central framework trough include/db/db.inc.php. Serendipity uses plain SQL statements for its queries. We try to use database-agnostic standard SQL wherever possible, so that it runs on most servers.
If that is not possible, certain code-forks are deployed to use specific queries for specific database layers; however, those places are very few. The advantage of using central SQL is that it also creates very readable, easy SQL.
Core functions available globally across all database layers are:
- serendipity_db_update: Perform a query to updat ethe data of a certain table row
- serendipity_db_insert: Perform a query to insert an associative array into a specific SQL table
- serendipity_db_bool: Check whether an input value corresponds to a TRUE/FALSE option in the SQL database.
- serendipity_db_get_interval: Return a SQL statement for a time interval or timestamp, specific to certain SQL backends
- serendipity_db_implode: Operates on an array to prepare it for SQL usage.
We offer those database layers, which all implement an identical interface accross all layers:
- generic: zendframework-DB adapter
- mysql: MySQL databases
- mysqli: MySQL databases (PHP 5.5+ compatible backend, better performance)
- pdo-postgres: PDO-Postgresql driver (requires PHP PDO extension)
- pdo-sqlite: PDO-sqlite driver (requires PHP PDO extension)
- postgres: native postgresql driver (requires PHP extension)
- sqlite: native sqlite driver (requires PHP sqlite extension)
- sqlite3: native sqlite3 driver (requires PHP sqlite3 extension)
- sqlite3oo: native sqlite3 driver using Object-Oriented interface (requires PHP sqlite3 extension)
- sqlrelay: DB adapter using sqlrcon PHP extension
Those layers implement these central functions:
- serendipity_db_query: Perform a DB Layer SQL query.
- serendipity_db_escape_string: Returns a escaped string, so that it can be safely included in a SQL string encapsulated within quotes, without allowing SQL injection.
- serendipity_db_limit: Returns the option to a LIMIT SQL statement, because it varies accross DB systems
- serendipity_db_limit_sql: Return a LIMIT SQL option to the DB Layer as a full LIMIT statement
- serendipity_db_schema_import: Prepares a Serendipty query input to fully valid SQL. Replaces certain “template” variables.
- serendipity_db_connect: Connect to the configured Database
- serendipity_db_probe: Try to connect to the configured Database (during installation)
- serendipity_db_reconnect: Reconnect to the configured Database
- serendipity_db_insert_id: Returns the latest INSERT_ID of an SQL INSERT INTO command, for auto-increment columns
- serendipity_db_affected_rows: Returns the number of affected rows of a SQL query
- serendipity_db_updated_rows: Returns the number of updated rows in a SQL query
- serendipity_db_matched_rows: Returns the number of matched rows in a SQL query
- serendipity_db_begin_transaction: Tells the DB Layer to start a DB transaction.
- serendipity_db_end_transaction: Tells the DB Layer to end a DB transaction.
- serendipity_db_in_sql: Assemble and return SQL condition for a “IN (…)” clause
- serendipity_db_concat: Returns the SQL code used for concatenating strings
The database structure of Serendipity tries to be self-explanatory. For a list of all Serendipity database tables check out our Database structure documentation.
Important variables and constants
Core variables are:
- S9Y_DATA_PATH: If a shared installation is used, points to the directory where Serendipity keeps its per-user files
- S9Y_INCLUDE_PATH: Path to the core installation of Serendipity
- S9Y_PEAR_PATH: Path to the bundled PEAR libraries of Serendipity
- IS_installed: Boolean variable to indicate if Serendipity is properly installed
- IN_serendipity: Boolean variable to indicate if the Serendipity Framework is loaded
- IS_up2date: Boolean variable to indicate if the current Serendipity version matches the local files
- PATH_SMARTY_COMPILE: Path to the folder where temporary Smarty files are kept
- USERLEVEL_ADMIN (255), USERLEVEL_CHIEF (1), USERLEVEL_EDITOR (0): Constants for user permission levels
- $serendipity[‘rewrite’]: Indicates which URL rewriting method is used (none, apache errorhandling, mod_rewrite, …)
- $serendipity[‘serendipityPath’]: Contains the path to the current Serendipity installation
- $serendipity[‘serendipityHTTPPath’]: Contains the URL path to the current Serendipity installation
- $serendipity[‘baseURL’]: Contains the absolute URL to the current Serendipity installation
- $serendipity[‘serendipityAuthedUser’]: Boolean variable which indicates if a user is logged in
- $serendipity[‘user’]: The ID of the currently logged-in user
- $serendipity[‘email’]: The email address of the currently logged-in user
- $serendipity[‘smarty’]: The central Smarty object
- $serendipity[‘logger’]: The central logger object (only exists if logging is enabled)
- $serendipity[‘uriArguments’]: Contains the currently parsed URI arguments to a page call
- $serendipity[‘lang’]: Which language is currently loaded
- $serendipity[‘charset’]: Which charset is currently loaded
Some important URL variables (note that if those specific variables are submitted via POST they get automatically merged into the GET array and acts like a $_REQUEST superglobal):
- $serendipity[‘GET’][‘action’]: Indicates which action on the frontend shall be executed (i.e. “read”, “search”, “comments”, “archives”, …) and is evaluated by include/genpage.inc.php.
- $serendipity[‘GET’][‘adminModule’]: Indicates which module of the backend shall be executed (i.e. “maintenance”, “comments”, “entries”, …) and is evaluated by serendipity_admin.php, which includes the specific module from include/admin/.
- $serendipity[‘GET’][‘adminAction’]: Indicates which action of a requested module on the frontend shall be executed; the performed action is evaluated by the specific module that it is a part of.
- $serendipity[‘GET’][‘id’]: If supplied, applies to a specific entry ID (both backend and frontend)
- $serendipity[‘GET’][‘page’]: Indicates the page number (for listing entries) for the frontend
- $serendipity[‘GET’][‘lang_selected’]: Allows to change the frontend/backend language (ie “en”, “de” language keys)
- $serendipity[‘GET’][‘token’]: Some modules require token hashes to prevent XSS attacks
- $serendipity[‘GET’][‘category’]: Affects displaying entries, showing entries only belonging to this category ID (multiple categories separated by “;”).
- $serendipity[‘GET’][‘hide_category’]: Affects displaying entries, hide entries belonging to this category ID (multiple categories separated by “;”).
- $serendipity[‘GET’][‘viewAuthor’]: Affects displaying entries, only showing entries by this specific author ID (multiple authors separated by “;”)..
- $serendipity[‘GET’][‘range’]: Affects displaying entries, only showing entries by this date range
- $serendipity[‘GET’][‘subpage’]: Can execute frontend plugins (if not using their own rewrite-URL or external_plugin URL)
- $serendipity[‘GET’][‘searchTerm’]: If search is executed, holds the search term
- $serendipity[‘GET’][‘fullFeed’]: Boolean to indicate whether the full RSS feed is displayed (for rss.php)
- $serendipity[‘GET’][‘noBanner’]: Boolean to indicate if the backend should show the banner
- $serendipity[‘GET’][‘noSidebar’]: Boolean to indicate if the backend should show the menu sidebar
- $serendipity[‘GET’][‘noFooter’]: Boolean to indicate if the backend should show the footer
Note that a parameter like index.php?serendipity[subpage]=XXX gets converted to $serendipity[‘GET’][‘subpage’]=XXX. But index.php?subpage=YYY will not exist in $serendipity[‘GET’] due to it’s missing prefix.
On top of that, some global and user-specific configuration is passed through options saved in the database table serendipity_config. Those variables are defined in include/tpl/config_local.inc.php and include/tpl/config_personal.inc.php:
Local configuration
- $serendipity[‘dbNames’]: Boolean whether to use “SET NAMES” charset directive in database layer
- $serendipity[‘uploadPath’]: Path to “uploads” directory
- $serendipity[‘templatePath’]: Path to “templates” directory”
- $serendipity[‘uploadHTTPPath’]: URL-path to “uploads” directory
- $serendipity[‘autodetect_baseURL’]: Boolean whether to enable autodetection of HTPP host name
- $serendipity[‘defaultBaseURL’]: When HTTP-Hostname autodetection is turned off, contains the default URL to Serendipity
- $serendipity[‘indexFile’]: Name to index.php (used for Embedded Installation)
- $serendipity[‘permalinkStructure’]: Permalink for archives/ patterns
- $serendipity[‘permalinkAuthorStructure’]: Permalink for authors/ patterns
- $serendipity[‘permalinkCategoryStructure’]: Permalink for categories/ patterns
- $serendipity[‘permalinkFeedAuthorStructure’]: Permalink for feeds/authors/ patterns
- $serendipity[‘permalinkFeedCategoryStructure’]: Permalink for feeds/categories/ patterns
- $serendipity[‘permalinkArchivesPath’]: URL path for “archives view” detection
- $serendipity[‘permalinkArchivePath’]: URL path for “single entry” detection
- $serendipity[‘permalinkCategoriesPath’]: URL path for “category view” detection
- $serendipity[‘permalinkFeedsPath’]: URL path for “RSS feed” detection
- $serendipity[‘permalinkPluginPath’]: URL path for “external plugin” detection
- $serendipity[‘permalinkSearchPath’]: URL path for “search” detection
- $serendipity[‘permalinkAdminPath’]: URL path for “administration” detection
- $serendipity[‘permalinkAuthorsPath’]: URL path for “author view” detection
- $serendipity[‘permalinkCommentsPath’]: URL path for “comments” detection
- $serendipity[‘permalinkUnsubscribePath’]: URL path for “unsubscribe comment” detection
- $serendipity[‘permalinkDeletePath’]: URL path for “delete comment” detection
- $serendipity[‘permalinkApprovePath’]: URL path for “approve comment” detection
- $serendipity[‘blogTitle’]: Title of blog
- $serendipity[‘blogDescription’]: Subtitle of blog
- $serendipity[‘blogMail’]: E-Mail address of blog (sending/receiving)
- $serendipity[‘allowSubscriptions’]: Boolean whether to allow users to subscribe to comments via email
- $serendipity[‘allowSubscriptionsOptIn’]: Boolean whether comment subscription requires opt-in confirmation
- $serendipity[‘useCommentTokens’]: Boolean whether to allow approving/deleting comments to the author of entries via email
- $serendipity[‘calendar’]: Which calendar to use
- $serendipity[‘lang_content_negotiation’]: Boolean whether user’s browser-language is used
- $serendipity[‘enablePluginACL’]: Boolean whether configuration of plugins applies permission checks
- $serendipity[‘updateCheck’]: Boolean whether performing update checks is allowed
- $serendipity[‘archiveSortStable’]: Boolean whether pagination URLs start with the pages enumberated from first or last page
- $serendipity[‘searchsort’]: Default sort order for sorting search results
- $serendipity[‘enforce_RFC2616’]: Boolean whether for RSS feeds Conditional Get may be used (see Configuration)
- $serendipity[‘useGzip’]: Boolean whether gzip’ing pages is enabled
- $serendipity[‘enablePopup’]: Boolean whether Popups are used in the frontend (depends on the theme)
- $serendipity[‘embed’]: Boolean whether embedded mode is enabled (see Configuration)
- $serendipity[‘top_as_links’]: Boolean whether links outputted by exit/referrer tracking are clickable (anti-spam)
- $serendipity[‘trackReferer’]: Boolean whether referrer tracking is enabled
- $serendipity[‘blogReferer’]: List of referrer URL patters that shall be blocked
- $serendipity[‘useServerOffset’]: Boolean whether the timezone of the server and the authors differs
- $serendipity[‘serverOffsetHours’]: How many hours timezone difference are between server and authors
- $serendipity[‘showFuturEentries’]: Whether to show entries from the future
- $serendipity[‘enableACL’]: Boolean whether access-control checks are performed for entries, categorie and media database items
- $serendipity[‘feedFull’]: Affects how entries are displayed in RSS feeds
- $serendipity[‘feedBannerURL’]: SYNDICATION_PLUGIN_BANNERURL
- $serendipity[‘feedBannerWidth’]: Width of banner image
- $serendipity[‘feedBannerHeight’]: Height of banner image
- $serendipity[‘feedShowMail’]: Whether to reveal authors email adress in feeds
- $serendipity[‘feedManagingEditor’]: Specify the managing editor of the feeds
- $serendipity[‘feedWebmaster’]: Specify the webmaster of the feeds
- $serendipity[‘feedTtl’]: Specify the “time to live” on how often content gets updated
- $serendipity[‘feedPubDate’]: Whether to embed the publication date of a feed
- $serendipity[‘feedCustom’]: URL of a location to redirect feedreaders to
- $serendipity[‘feedForceCustom’]: Whether to force visitors to the redirected feed location even if they call the internal rss.php file
- $serendipity[‘magick’]: Boolean whether to use imagemagick for converting images
- $serendipity[‘convert’]: Path to imagemagick binary
- $serendipity[‘thumbSuffix’]: Suffix to append to converted image thumbnails
- $serendipity[‘thumbSize’]: Maximum Thumbnail dimension
- $serendipity[‘thumbConstraint’]: What to apply the maximum thumbnail dimension to
- $serendipity[‘maxFileSize’]: Maximum file size of uploaded files
- $serendipity[‘maxImgWidth’]: Maximum width for uploaded images
- $serendipity[‘maxImgHeight’]: Maximum height for uploaded images
- $serendipity[‘uploadResize’]: Boolean whether to resize images to maximum size before uploading
- $serendipity[‘onTheFlySynch’]: Boolean whether every call to the media database checks for updated files
- $serendipity[‘dynamicResize’]: Whether to allow frontend visitors to resize thumbnails on demand
- $serendipity[‘mediaExif’]: Whether to parse EXIF information of uploaded files
- $serendipity[‘mediaProperties’]: Which meta information shall be available in the media database
- $serendipity[‘mediaKeywords’]: Specifies a list of available keywords to tag media files with
Personal Configuration
- $serendipity[‘wysiwyg’]: Whether to use WYSIWYG editor
- $serendipity[‘wysiwygToolbar’]: Defines the toolbar palette of the WYSIWYG editor
- $serendipity[‘mail_comments’]: Boolean whether a user receives comments to his entries via mail
- $serendipity[‘mail_trackbacks’]: Boolean whether a user receives trackbacks to his entries via mail
- $serendipity[‘no_create’]: If set, the author has no write permissions to anything in the backend
- $serendipity[‘right_publish’]: Boolean to indicate if an author may publish entries (or only drafts)
- $serendipity[‘simpleFilters’]: Boolean whether simplified media and entry filter toolbars are shown
- $serendipity[‘enableBackendPopup’]: Boolean whether popups in the backend are shown inline or as “real” popups
- $serendipity[‘moderateCommentsDefault’]: Boolean to indicate if new entries are moderated by default
- $serendipity[‘allowCommentsDefault’]: Boolean to indicate if new entries are able to be commented by default
- $serendipity[‘publishDefault’]: Indicates if new entries are saved as drafts or publish
- $serendipity[‘showMediaToolbar’]: Boolean whether to show toolbars for media library even in “picker” mode
- $serendipity[‘use_autosave’]: Boolean to indicate if browser’s autosave feature is used for entries
Important API functions
We have created seperate bundles for specific API functions. An overview of most relevant functions and where they are defined can be found here:
List of Important API functions
Error-Handling
By default, Serendipity sets the PHP error_reporting() to E_ALL without E_NOTICE and E_STRICT to prevent unnecessary PHP error output. When $serendipity[‘production’] is set to “Debug”, E_STRICT errors will be shown.
Serendipity uses a default errorhandler (configured as $serendipity[‘errorhandler’], by default set to “errorToExceptionHandler” (defined in include/compat.inc.php). This will take care of emitting all error messages, and also ignore specific warnings that can be dealt with.
You can overwrite such an errorhandler in your serendipity_config_local.inc.php file by implementing your own function.
Frontend-Routing: index.php
All of our frontend routing is performed through the “index.php” file. Its code flow is like this:
- Load serendipity_config.inc.php to initiate the frontend (load functions, languages, database layers, central configuration, user configuration)
- Initialize Permalink patterns and lookup routines (serendipity_initPermalinks() and serendipity_permalinkPatterns()).
- Check what the current URL looks like and perform wanted action, parsing available parameters. Possibly execute plugins
- Call include/genpage.inc.php to setup the output for the required Smarty templates
- Initialize Smarty Framework, output template file
Some examples:
Archive view
The blog’s url is called by the visitor.
Through the .htaccess file, index.php get assigned to this page call. The index.php file instantiates the Serendipity framework by including the serendipity_config.inc.php file. It sets a view headers and sets up a few variables.
Now the central URL that was called is stored in $uri, and additional parameters (like the date: 2019-10-28) get evaluated and stored in $serendipity[‘uriArguments’].
Now, multiple regular expressions check the $uri for each possible scenario that could happen. This means, the central PAT_ARCHIVES rule will evaluate true, and execute its workflow.
Inside this if-statement, the $serendipity[‘uriArguments’] are parsed and operated on, so that possible categories, authors, week formats, pagination numbers or others are recognized. Those variables are stored in parameters like
$serendipity['GET']['page'] (pagination) for example.
In our case, the list only contains “2019”, “10” and “28”. Those are stored in the variables $year, $month and $day. According to the selected calendar (gregorian or persian) these variables are evaluated and passed along to $ts and $te. Those hold timestamps of the passed date minumum and maximum (here: 2019-10-28 00:00 to 2019-10-28 23:59).
Once all variables are setup, include/genpage.inc.php is called to create the usual frontend view. This file checks based on $serendipity[‘view’] which output and smarty template files it needs to call, executes possible event plugins that listen on events, and after that, assign all data to the requested smarty template.
This output is then emitted as $data from index.php to the browser.
External plugin
The routing for executing a plugin like to match a staticpage plugin’s output is very similar to the example above.
The difference is that in this case the usual routing in index.php finds no specific pattern, and then goes to the “404” routing view. Once include/genpage.inc.php operates on that page, the plugin API event hook “genpage” is executed. The staticpage plugin has registered this event hook, and performs routines on its database tables to see if there is an entry that matches the currentl url. If that is the case, it adjusts the serendipity output and passes over its content.
Backend-Routing: serendipity_admin.php
For the Serendipity backend, all HTTP calls are routed through serendipity_admin.php. This file instantiates the Serendipity framework, sets up a couple of variables and then performs a central lookup on the URL GET (or POST) variable ?serendipity[adminModule]=XXX. Before each module is included from the file in include/admin/XXX.inc.php, Serendipity performs permission checks to see if the user is authorized to access the given module.
Each of the modules (see below) performs their specific actions and evaluate the URL variable ?serendipity[adminAction] to see, which action is performed (like creating an entry, updating an entry, viewing entries, etc.).
Each module passes its output and rendering data to a backend smarty template file, which gets saved in $main_content through output-buffering and finally assigns this output to Smarty and displays the admin/index.tpl template file.
Backend Modules
The list of modules that are routable are:
- catgegory.inc.php: Category management
- configuration.inc.php: Blog configuration
- entries.inc.php: Single Entry management
- entries_ovewview.inc.php: Entry overview
- groups.inc.php: Group / Access mangement
- images.inc.php: Media database
- import.inc.php: Import/Export
- maintenance.inc.php: Maintenance of the blog
- overview.inc.php: The central dashboard
- personal.inc.php: Personal preferences
- plugins.inc.php: Plugin management
- templates.inc.php: Theme management
- upgrader.inc.php: Upgrader functionality
- users.inc.php: User management
Importers / Exporters
Serendipity supports importing from a lot of different systems. Each system is handled through a unified process put into their own files in the include/admin/importers/ directory.
Each of those files is named like the system they stem from:
- b2evolution.inc.php
- bblog.inc.php
- blogger.inc.php
- bmachine.inc.php
- geeklog.inc.php
- lifetype.inc.php
- livejournal.inc.php
- moveabletype.inc.php
- nucleus.inc.php
- nuke.inc.php
- old_blogger.inc.php
- phpbb.inc.php
- pivot.inc.php
- pmachine.inc.php
- serendipity.inc.php
- smf.inc.php
- sunlog.inc.php
- textpattern.inc.php
- voodoopad.inc.php
- wordpress-pg.inc.php
- wordpress.inc.php
- generic.inc.php: Import through a generic RSS feed
All files simple implement their own Class that extends from Serendipity_Import and uses those methods. A good example is the serendipity.inc.php file for a Serendipity importer.
- getImportNotes: Displays specific information about what the importer does
- Serendipity_Import_Serendipity (Constructor): Defines input GET/POST data as $this->data and popuplates $this->inputFields with a list of configuration options that the importer offers
- validateData: Checks if all fields are popuplated
- getInputFields: Wrapper function to return $this->inputFields (or custom variables)
- import: Central function that is called, can access input data through $this->data.
A class can now implement as many helper functions as it needs; the Serendipity importer uses one method for each kind of metadata it imports: import_cat, import_groups, import_authors and so on. All data is popuplated through another helper function import_table that performs the SQL queries for copying over data.
Plugins
Serendipity can easily be enhanced by plugins. We have coined two different terms for two kinds of plugins.
Event-Plugins are plugins that perform functionality based on events which are “fired” from the core on specific places, like when an entry is displayed, when an entry is saved and so on.
Sidebar-Plugins are simple output plugins that are displayed on the frontend of your blog within a sidebar or footer, or header.
Both kinds of plugins can be enabled through the Admin interface, and they can be put into a custom order. For sidebar plugins the order simply visually arranges the output of plugins. For event plugins, the order indicates which plugin gets executed first, which can in turn influence the “pipeline” of plugins coming after that. This is mostly important for Markup plugins that affect the rendering of blog entries: If one event plugin takes care of translating glossary terms to full links, and another plugin is used to mark internal and external links graphically, it would be important the the glossary plugin gets executed first, so that the link-marking plugin can also take care of glossary links. There is no “proper” order of plugins, it all depends on the specific combination of these.
A plugin is defined by the files in the plugins/ subdirectory. Each plugin has its own distinct directory name which must conform to a prefix “serendipity_plugin_XXX” for sidebar plugins and “serendipity_event_XXX” for event plugins. The same name must then be repeated as the filename of the .php file within that directory.
Plugin files within those directories are then only loaded, if you have activated/installed that plugin through the Admin interface.
To see how the plugin files must be coded, please refer to our Plugin API Documentation.
Themes
Historically, Serendipity used the term “Theme”, “Template” or “Style” to express the same term. We have tried to completely remove the term “Style”, and now use “Theme” to describe a collection of single smarty template files.
So whenever we say “Theme”, we mean that what an end-user selects to affect output. And when we say “Template”, we refer to an actual, single file.
Our themes are built upon single Smarty template files. Each file is responsible for a specific aspect of frontend or backend display. Serendipity implements both frontend and backend themes, so that you can basically build your own backend. The drawback to building a custom backend of course is, that anytime we add new functionality, we only add this to our internal default theme. We suggest to only make visual changes on the CSS side of things, unless you know what you are doing.
A description for how themes are built, which variables they refer to please check the Theme Documentation.
Coding Guidelines
Serendipity has been around since 2002, and code has been gradually built upon the same core. This has advantages (stability, adaptibility, compatibility), and also disadvantages (“old flair”, mixed code patterns).
Most notably it shows that Serendipity does not use specific object-oriented patterns (asside from the Plugin API), and adheres to functional approaches. This has the advantage of being really easy to understand and read.
This also means, we only have a few strict rules:
- Use 4 spaces to indent code
- Use the proper versioning on plugins
Put opening braces on the same line like the preceding logic, put closing braces on a new line:
if (condition) {
// code
}
function serendipity_function($var1, $var2, $var3) {
// code
}
- Add spaces after commas, add spaces before and after string concatenation ($var = $var1 . $var2)
- Use easy-to-read IF-statements, try to only use ternary IFs where it’s well readable
- Use single-quotes for array keys and strings
- Indent SQL statements with newlines for readability
- Add phpDoc style inline code documentation for function parameters etc.
- Prefix framework function names with serendipity_ and after that, use camelCase naming
- Try to use camelCase naming for new variables (currently, there is a mixture of function/variable names with underscore characters and camelCasing)
- Always escape HTML output of unsafe user input with serendipity_specialchars()
- If your code/plugin uses administrative tasks in the backend, make sure you use the serendipityFormToken() functions to protect against XSRF.
- Always escape database input of unsafe user input with either implicit typecasting (int)$_REQUEST[‘var’] or serendipity_db_escape_string()
- Write database-agnostic standard SQL wherever possible; if you require database-specific SQL, add codeforks with a switch($serendipity[‘dbType’]) statement.
- Try to cache results that come from foreign URLs. If your plugins displays an RSS feed, it shouldn’t be fetched each execution cycle of the plugin, but rather only every X minutes. Provide a configuration option to let the user configure his own caching period.
- Always abstract any output messages with language constants. Always include an english language file of your plugin.
- If you enhance functionality of a plugin, please add a file called “ChangeLog” documenting changes. If you fix core code or add new functionality in the core, document this in docs/NEWS.
- If you bundle foreign code, make sure you indicate the right licensing of your plugins. By default, a s9y plugin is BSD licensed.
- If your plugin has foreign code dependencies, either include those in the plugin or make sure, your plugin does not bail out with a fatal error otherwise. It should always alarm the user what’s missing.
- Closing Words: Take a look at existing plugins. What has worked in the past, might work out for you as a draft for your own plugin. | https://docs.s9y.org/docs/developers/code-primer.html | 2020-05-25T02:03:56 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.s9y.org |
deploymentInfo.status returns Succeeded when polling with get-deployment. It will poll every 15 seconds until a successful state has been reached. This will exit with a return code of 255 after 120 failed checks.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
deployment-successful --deployment-id <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--deployment-id (string)
The unique ID of a deployment pause script operations until a deployment is flagged as successful
The following wait deployment-successful example pauses until the specified deployment completes successfully.
aws deploy wait deployment-successful --deployment-id d-A1B2C3111
This command produces no output, but pauses operation until the condition is met. It generates an error if the condition is not met after 120 checks that are 15 seconds apart. | https://docs.aws.amazon.com/cli/latest/reference/deploy/wait/deployment-successful.html | 2020-05-25T02:44:41 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
. frauddetector ]
Gets the details for one or more Amazon SageMaker models that have been imported into the service. This is a paginated API. If you provide a null maxSizePerPage , this actions retrieves a maximum of 10 records per page. If you provide a maxSizePerPage , the value must be between 5 and 10. To get the next page results, provide the pagination token from the GetExternalModelsResult as part of your request. A null pagination token fetches the records from the beginning.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
get-external-models [--model-endpoint <value>] [--next-token <value>] [--max-results <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--model-endpoint (string)
The Amazon SageMaker model endpoint.
--next-token (string)
The next page token for the request.
--max-results (integer)
The maximum number of objects to return for the.
externalModels -> (list)
Gets the Amazon SageMaker models.
(structure)
The Amazon SageMaker model.
modelEndpoint -> (string)The Amazon SageMaker model endpoints.
modelSource -> (string)The source of the model.
role -> (structure)
The role used to invoke the model.
arn -> (string)The role ARN.
name -> (string)The role name.
inputConfiguration -> (structure)
The input configuration.
format -> (string)The format of the model input configuration. The format differs depending on if it is passed through to SageMaker or constructed by Amazon Fraud Detector.
isOpaque -> (boolean)For an opaque-model, the input to the model will be a ByteBuffer blob provided in the getPrediction request, and will be passed to SageMaker as-is. For non-opaque models, the input will be constructed by Amazon Fraud Detector based on the model-configuration..
nextToken -> (string)
The next page token to be used in subsequent requests. | https://docs.aws.amazon.com/cli/latest/reference/frauddetector/get-external-models.html | 2020-05-25T02:58:21 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
Client-side logging with the Azure Storage client library for Java
For instructions on how to install the binaries for the Azure Storage client libraries in your Java project, see the readme file for the project on GitHub:. This file documents any additional dependencies you must install.
You must install the optional SLF4J dependency if you are planning to use client-side logging. SLF4J is a logging façade that enables you to use many common Java logging frameworks easily from a client application: for more information about SLF4J, see the SLF4J user manual. For a simple test of how to use SLF4J with the storage SDK, place the slf4j-api and slf4j-simple JAR files in the build path for your storage client project. All storage log messages are subsequently directed to the console.
The following sample Java code shows how to switch storage client logging off by default by calling the static method setLoggingEnabledByDefault, and then use an OperationContext object to enable logging for a specific request:
// Set logging off by default. OperationContext.setLoggingEnabledByDefault(false); OperationContext ctx = new OperationContext(); ctx.setLoggingEnabled(true); // Create an operation to add a new customer to the people table. TableOperation insertCustomer1 = TableOperation.insertOrReplace(customer1); // Submit the operation to the table service. table.execute(insertCustomer1, null, ctx);
The following example shows the log messages that slf4j-simple writes to the console:
[main] INFO ROOT - {ceba5ec6...}: {Starting operation.} [main] INFO ROOT - {ceba5ec6...}: {Starting operation with location 'PRIMARY' per location mode 'PRIMARY_ONLY'.} [main] INFO ROOT - {ceba5ec6...}: {Starting request to '(PartitionKey='Harp',RowKey='Walter')' at 'Tue, 08 Jul 2014 15:07:43 GMT'.} [main] INFO ROOT - {ceba5ec6...}: {Writing request data.} [main] INFO ROOT - {ceba5ec6...}: {Request data was written successfully.} [main] INFO ROOT - {ceba5ec6...}: {Waiting for response.} [main] INFO ROOT - {ceba5ec6...}: {Response received. Status code = '204', Request ID = '8f6ce566-3760-4733-a8da-a090e642286a', Content-MD5 = 'null', ETag = 'W/"datetime'2014-07-08T15%3A07%3A41.1177234Z'"'.} [main] INFO ROOT - {ceba5ec6...}: {Processing response headers.} [main] INFO ROOT - {ceba5ec6...}: {Response headers were processed successfully.} [main] INFO ROOT - {ceba5ec6...}: {Processing response body.} [main] INFO ROOT - {ceba5ec6...}: {Response body was parsed successfully.} [main] INFO ROOT - {ceba5ec6...}: {Operation completed.}
The GUID (ceba5ec6... in the sample) is the client request ID assigned to the storage operation by the client-side storage library. | https://docs.microsoft.com/en-us/rest/api/storageservices/Client-side-Logging-with-the-Microsoft-Azure-Storage-SDK-for-Java?redirectedfrom=MSDN | 2020-05-25T02:51:04 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.microsoft.com |
Woodford
Woodford is a browser-based configuration tool for building and managing mobile projects using the Resco platform. It also allow you to manage connected mobile devices and licenses.
Contents
Availability and installation
An HTML version of Woodford is available on all major platforms and browsers, including Mac OS and the Safari web browser. Former stand-alone application is no longer updated.
Using Woodford requires a SuperUser (Administrator) license.
- compatibility with Resco Mobile CRM app
Microsoft Dynamics
If you are using Microsoft Dynamics:
- Go to Woodford download page:.
- Depending on your Dynamics version, download the appropriate solution file:
- If you are using Microsoft Dynamics CRM 2011, click Woodford for Dynamics 2011 Download.
- If you are using a newer version, click Woodford for Dynamics Download.
- Log in to your Dynamics CRM.
- Go to the Solutions section of your CRM server settings.
- Click Import and select the downloaded Woodford zip file.
- Finish the import wizard and publish all customizations..
- For sandbox environment, click Woodford for Salesforce - Sandbox.
- Log in to Salesforce.
- Select Woodford. pane of Woodford displays content depending on what's selected in the Administration menu.
This menu allows you to access the following functions:
- App projects - manage and design app projects
- Device control - manage devices using your projects
- Mobile users - manage users and licenses
- Localizations - manage languages available in the mobile app, create a new translation or tweak existing
- Geocoding - add latitude and longitude to records
- Location tracking - configure when and how to track the location of mobile users
- Mobile apps - design your own branded applications and publish them on app stores
-.
The Project menu replaces the Administration menu when you are editing an app project.
- Home screen
- Dashboard
- Social
- Branding
- Auditing
- Location tracking
- Images
- Offline HTML
- Localization
- Configuration
- Global map
- Calendar
- Route plan
- Voice control
- Schedule Board
- Entity hubs
- Events and reminders
- Theme
- Documents
- Exchange
- Inspections
New features and changes
Spring 2019
- GitHub integration: users can now commit mobile projects to this well-known software version control service or restore them from GitHub – which results in the improved life-cycle management of projects, enables progress tracking, and more.
- Export/import any artifact (Dashboard, List, View, Form, Chart) – serves for replicating design artifacts between projects or for a quick backup.
- Import Salesforce.com layouts as multiple forms – the app will choose the correct form (layout) automatically.
- Overhauled Mobile Report Designer – users can add, edit and remove reports, manage report styles and sources with this fast and user-friendly editing tool.
- Enhanced User Experience – the HTML version enables a more convenient experience for users coming alongside the improved navigation.
- Optimized publishing times – the new version brings faster publishing speed.
- Sync Dashboard, including sync conflict resolution – enables admins to help users in the field to resolve synchronization conflicts and any errors that may occur. Find out more info in this blog.
- Device control center – previously known as the Security section, where admins can see the status of each user (last sync, device Id, device OS, and other details). | https://docs.resco.net/mediawiki/index.php?title=Woodford&oldid=14 | 2020-05-25T02:50:54 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.resco.net |
:.
Decide how many instances can be down at the same time for a short period due to a voluntary disruption.
Values for
minAvailable or
maxUnavailable can be expressed as integers or as a percentage.
minAvailableto 10, then 10 Pods must always be available, even during a disruption.
.
A
PodDisruptionBudget has three fields:
: For versions 1.8 and earlier: When creating a
PodDisruptionBudgetobject using the
kubectlcommand line tool, the
minAvailablefield has a default value of 1 if neither
minAvailablenor
maxUnavailableis specified..):.
You can create the PDB object with a command like
kubectl apply beta1 kind: PodDisruptionBudget metadata:. | https://v1-15.docs.kubernetes.io/docs/tasks/run-application/configure-pdb/ | 2020-05-25T00:46:42 | CC-MAIN-2020-24 | 1590347387155.10 | [] | v1-15.docs.kubernetes.io |
The. You must have set the
resultset_is_neededflag to
appendto intercept the result set before it is returned to the client. See proxy.queries.
query: The text of the original query.
query_time: The number of microseconds required to receive the first row of a result set since the query was sent to the server.
response_time: The number of microseconds required to receive the last row of the result set since the query was sent to the server., you will want to remove the results returned from those
additional queries and return only the results from the query
originally submitted by the client.
The following example()", {resultset_is_needed = true} ) proxy.queries:append(1, packet, {resultset_is_needed = true}) proxy.queries:append(2, string.char(proxy.COM_QUERY) .. "SELECT NOW()", {resultset_is_needed = true} )”. | http://doc.docs.sk/mysql-refman-5.5/mysql-proxy-scripting-read-query-result.html | 2020-05-25T02:45:58 | CC-MAIN-2020-24 | 1590347387155.10 | [] | doc.docs.sk |
Use this information to upgrade Alfresco Content Services on a single instance and in a distributed and clustered environment.
Follow this checklist when upgrading or clustering an installation of Alfresco Content Services. For detailed step-by-step instructions for upgrading see Upgrading Alfresco Content Services.
When upgrading Alfresco Content Services, in order to configure distribution and clustering optimally, contact Alfresco Consulting or your Alfresco certified partner. | https://docs.alfresco.com/5.2/concepts/quick-upgrade.html | 2020-05-25T02:46:00 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.alfresco.com |
Spot Instance interruptions
Demand for Spot Instances can vary significantly from moment to moment, and the availability of Spot Instances can also vary significantly depending on how many unused EC2 instances are available. It is always possible that your Spot Instance might be interrupted. Therefore, you must ensure that your application is prepared for a Spot Instance interruption.
An On-Demand Instance specified in an EC2 Fleet or Spot Fleet cannot be interrupted.
Contents
Reasons for interruption
The following are the possible reasons that Amazon EC2 might interrupt your Spot Instances:.
Interruption behavior
You can specify whether Amazon EC2 should hibernate, stop, or terminate Spot Instances
when they
are interrupted. You can choose the interruption behavior that meets your needs.
The
default is to terminate Spot Instances when they are interrupted. To change the
interruption
behavior, choose an option from Interruption behavior in the
console when you are creating a Spot request, or specify
InstanceInterruptionBehavior in the launch configuration or the
launch template. To change interruption behavior in the console when you are
creating a Spot request, choose Maintain target capacity. When
you select this option, Interruption behavior will appear and
you can then specify that the Spot service terminates, stops, or hibernates Spot
Instances when they are interrupted.
Stopping interrupted Spot Instances
You can change the behavior so that Amazon EC2 stops.
After a Spot Instance is stopped by the Spot service, only the Spot service can restart the Spot Instance, and the same launch specification must be used.
For a Spot Instance launched by a
persistent Spot Instance request, the Spot
service restarts the stopped instance when capacity is available in the same
Availability Zone and for the same instance type as the stopped instance.
If instances in an EC2 Fleet or Spot Fleet are stopped and the fleet is of type
maintain, the Spot service launches replacement instances to
maintain the target capacity. The Spot service finds the best pools based on
the
specified allocation strategy (
lowestPrice,
diversified, or
InstancePoolsToUseCount); it does
not prioritize the pool with the earlier stopped instances. Later, if the
allocation strategy leads to a pool containing the earlier stopped instances,
the Spot service restarts the stopped instances to meet the target
capacity.
For example, consider a Spot Fleet with the
lowestPrice allocation
strategy. At initial launch, a
c3.large pool meets the
lowestPrice criteria for the launch specification. Later, when
the
c3.large instances are interrupted, the Spot service stops the
instances and replenishes capacity from another pool that fits the
lowestPrice strategy. This time, the pool happens to be a
c4.large pool and the Spot service launches
c4.large instances to meet the target capacity. Similarly, Spot Fleet
could move to a
c5.large pool the next time. In each of these
transitions, the Spot service does not prioritize pools with earlier stopped
instances, but rather prioritizes purely on the specified allocation strategy.
The
lowestPrice strategy can lead back to pools with earlier
stopped instances. For example, if instances are interrupted in the
c5.large pool and the
lowestPrice strategy leads
it back to the
c3.large or
c4.large pools, the earlier
stopped instances are restarted to fulfill target, an EC2 Fleet, or a Spot Fleet, the Spot service terminates any associated Spot Instances that are stopped.
While a Spot Instance is stopped, you are charged only for the EBS volumes, which are preserved. With EC2 Fleet and Spot Fleet, if you have many stopped instances, you can exceed the limit on the number of EBS volumes for your account.
Hibernating interrupted Spot Instances
You can change the behavior so that Amazon EC2 hibernates, and it must be large enough to store the instance memory (RAM) during hibernation.
The following instances are supported: C3, C4, C5, M4, M5, R3, and R4, with less than 100 GB of memory.
The following operating systems are supported: Amazon Linux 2, Amazon Linux AMI, Ubuntu with an AWS-tuned Ubuntu kernel (linux-aws) greater than 4.4.0-1041, and Windows Server 2008 R2 and later.
Install the hibernation agent on a supported operating system, or use one of the following AMIs, which already include the agent:
Amazon Linux 2
Amazon Linux AMI 2017.09.1 or later
Ubuntu Xenial 16.04 20171121 or later
Windows Server 2008 R2 AMI 2017.11.19 or later
Windows Server 2012 or Windows Server 2012 R2 AMI 2017.11.19 or later
Windows Server 2016 AMI 2017.11.19 or later
Windows Server 2019
Start the agent. We recommend that you use user data to start the agent on instance startup. Alternatively, you could start the agent manually.
Recommendation
We strongly recommend that you use an encrypted Amazon EBS volume as the root volume, because instance memory is stored on the root volume during hibernation. This ensures that the contents of memory (RAM) are encrypted when the data is at rest on the volume and when data is moving between the instance and volume. Use one of the following three options to ensure that the root volume is an encrypted Amazon EBS volume:
EBS “single-step” encryption: In a single run-instances API call, you can launch encrypted EBS-backed EC2 instances from an unencrypted AMI. For more information, see Using encryption with EBS-backed AMIs.
EBS encryption by default: You can enable EBS encryption by default to ensure all new EBS volumes created in your AWS account are encrypted. For more information, see Encryption by default.
Encrypted AMI: You can enable EBS encryption by using an encrypted AMI to launch your instance. If your AMI does not have an encrypted root snapshot, you can copy it to a new AMI and request encryption. For more information, see Encrypt an unencrypted image during copy and Copying an AMI.
When a Spot Instance is hibernated by the Spot service, the EBS volumes are preserved and instance memory (RAM) is preserved on the root volume. The private IP addresses of the instance are also preserved. Instance storage volumes and public IP addresses, other than Elastic IP addresses, are not preserved. While the instance is hibernating, you are charged only for the EBS volumes. With EC2 Fleet and Spot Fleet, if you have many hibernated instances, you can exceed the limit on the number of EBS volumes for your account.
The agent prompts the operating system to hibernate when the instance receives a signal from the Spot service. If the agent is not installed, the underlying operating system doesn't support hibernation, or there isn't enough volume space to save the instance memory, hibernation fails and the Spot service stops the instance instead.
When the Spot service hibernates a Spot Instance, you receive an interruption notice,
but you do not have two minutes before the Spot Instance is interrupted. Hibernation
begins immediately. While the instance is in the process of hibernating,
instance health checks might fail. When the hibernation process completes, the
state of the instance is
stopped.
Resuming a hibernated Spot Instance
After a Spot Instance is hibernated by the Spot service, it can only be resumed by the Spot service. The Spot service resumes the instance when capacity becomes available with a Spot price that is less than your specified maximum price.
For more information, see Preparing for instance hibernation.
For information about hibernating On-Demand Instances, see Hibernate your Windows instance.
Preparing for interruptions
Here are some best practices to follow when you use Spot Instances:
Use the default maximum price, which is the On-Demand price.
Ensure that your instance is ready to go as soon as the request is fulfilled by using an Amazon Machine Image (AMI) that contains the required software configuration. You can also use user data to run commands at start-up.
Store important data regularly in a place that isn't affected when the Spot Instance terminates. For example, you can use Amazon S3, Amazon EBS, or DynamoDB.
Divide the work into small tasks (using a Grid, Hadoop, or queue-based architecture) or use checkpoints so that you can save your work frequently.
Use Spot Instance interruption notices to monitor the status of your Spot Instances.
While we make every effort to provide this warning as soon as possible, it is possible that your Spot Instance.
Preparing for instance hibernation
You must install a hibernation agent on your instance, unless you used an AMI that already includes the agent. You must run the agent on instance startup, whether the agent was included in your AMI or you installed it yourself.
The following procedure helps you prepare a Windows instance. For directions to prepare a Linux instance, see Preparing for Instance Hibernation in the Amazon EC2 User Guide for Linux Instances.
To prepare a Windows instance
If your AMI doesn't include the agent, download the following files to the
C:\Program Files\Amazon\Hibernatefolder on your Windows instance:
Add the following command to the user data.
<powershell>."C:\Program Files\Amazon\Hibernate\EC2HibernateAgent.exe"</powershell>
Spot Instance interruption notices
The best way to protect against Spot Instance interruption is to architect your application to be fault-tolerant. In addition, you can take advantage of Spot Instance interruption notices, which provide a two-minute warning before Amazon EC2 must stop or terminate your Spot Instance. We recommend that you check for these warnings every 5 seconds.
This warning is made available as a CloudWatch event and as an item in the instance metadata on the Spot Instance.
If you specify hibernation as the interruption behavior, you receive an interruption notice, but you do not receive a two-minute warning because the hibernation process begins immediately.
EC2 Spot Instance interruption notice
When Amazon EC2 is going to interrupt your Spot Instance, it emits an event two minutes prior to the actual interruption. This event can be detected by Amazon CloudWatch Events. For more information, see the Amazon CloudWatch Events User Guide.
The following is an example of the event for Spot Instance interruption. The possible
values for
instance-action are
hibernate,
stop, and
terminate.
{ "version": "0", "id": "
12345678-1234-1234-1234-123456789012", "detail-type": "EC2 Spot Instance Interruption Warning", ", "instance-action": "
action" } }
instance-action
If your Spot Instance is marked to be stopped or terminated by the Spot service, the
instance-action item is present in your instance metadata.
Otherwise, it is not present. You can retrieve
instance-action as
follows.
PS C:\>
Invoke-RestMethod -uri
The
instance-action item specifies the action and the approximate
time, in UTC, when the action will occur.
The following example indicates the time at which this instance will be stopped.
{"action": "stop", "time": "2017-09-18T08:22:00Z"}
The following example indicates the time at which this instance will be terminated.
{"action": "terminate", "time": "2017-09-18T08:22:00Z"}
If Amazon EC2 is not preparing to stop or terminate the instance, or if you
terminated the instance yourself,
instance-action is not present
and you receive an HTTP 404 error.
termination-time
This item is maintained for backward compatibility; you should use
instance-action instead.
If your Spot Instance is marked for termination by the Spot service, the
termination-time item is present in your instance metadata.
Otherwise, it is not present. You can retrieve
termination-time as
follows.
PS C:\>
Invoke-RestMethod -uri
The
termination-time item specifies the approximate time in UTC
when the instance receives the shutdown signal. For example:
2015-01-05T18:02:00Z
If Amazon EC2 is not preparing to terminate the instance, or if you terminated the
Spot Instance yourself, the
termination-time item is either not present (so
you receive an HTTP 404 error) or contains a value that is not a time
value.
If Amazon EC2 fails to terminate the instance, the request status is set to
fulfilled. The
termination-time value remains in
the instance metadata with the original approximate time, which is now in the
past.
Billing for interrupted Spot Instances
When a Spot Instance (not in a Spot block) is interrupted, you’re charged as follows.
When a Spot Instance in a Spot block is interrupted, you’re charged as follows. | https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/spot-interruptions.html | 2020-05-25T02:56:06 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
ListReviewableHITs operation retrieves the HITs with Status equal to Reviewable or Status equal to Reviewing that belong to the Requester calling the operation.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
list-reviewable: HITs
list-reviewable-hits [--hit-type-id <value>] [--status <value>] [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--max-items <value>] [--generate-cli-skeleton <value>]
--hit-type-id (string)
The ID of the HIT type of the HITs to consider for the query. If not specified, all HITs for the Reviewer are considered
--status (string)
Can be either Reviewable or Reviewing . Reviewable is the default value.
Possible values:
- Reviewable
- Reviewing
-)
If the previous response was incomplete (because there is more data to retrieve), Amazon Mechanical Turk returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
NumResults -> (integer)
The number of HITs on this page in the filtered results list, equivalent to the number of HITs being returned by this call.
HITs -> (list)
The list of HIT elements returned by the query.
(structure)
The HIT data structure represents a single HIT, including all the information necessary for a Worker to accept and complete the HIT.
HITId -> (string)A unique identifier for the HIT.
HITTypeId -> (string)The ID of the HIT type of this HIT
HITGroupId -> (string)The ID of the HIT Group of this HIT.
HITLayoutId -> (string)The ID of the HIT Layout of this HIT.
CreationTime -> (timestamp)The date and time the HIT was created.
Title -> (string)The title of the HIT.
Description -> (string)A general description of the HIT.
Question -> (string)The data the Worker completing the HIT uses produce the results. This is either either a QuestionForm, HTMLQuestion or an ExternalQuestion data structure.
Keywords -> (string)One or more words or phrases that describe the HIT, separated by commas. Search terms similar to the keywords of a HIT are more likely to have the HIT in the search results.
HITStatus -> (string)The status of the HIT and its assignments. Valid Values are Assignable | Unassignable | Reviewable | Reviewing | Disposed.
MaxAssignments -> (integer)The number of times the HIT can be accepted and completed before the HIT becomes unavailable.
Reward -> (string)A string representing a currency amount.
AutoApprovalDelayInSeconds -> (long)The amount of time, in seconds, after the Worker submits an assignment for the HIT that the results are automatically approved by Amazon Mechanical Turk. This is the amount of time the Requester has to reject an assignment submitted by a Worker before the assignment is auto-approved and the Worker is paid.
Expiration -> (timestamp)The date and time the HIT expires.
AssignmentDurationInSeconds -> (long)The length of time, in seconds, that a Worker has to complete the HIT after accepting it.
RequesterAnnotation -> (string)An arbitrary data field the Requester who created the HIT can use. This field is visible only to the creator of the HIT.
QualificationRequirements -> (list).
(structure)
The QualificationRequirement data structure describes a Qualification that a Worker must have before the Worker is allowed to accept a HIT. A requirement may optionally state that a Worker must have the Qualification in order to preview the HIT, or see the HIT in search results.
QualificationTypeId -> (string)The ID of the Qualification type for the requirement.
Comparator -> (string)The kind of comparison to make against a Qualification's value. You can compare a Qualification's value to an IntegerValue to see if it is LessThan, LessThanOrEqualTo, GreaterThan, GreaterThanOrEqualTo, EqualTo, or NotEqualTo the IntegerValue. You can compare it to a LocaleValue to see if it is EqualTo, or NotEqualTo the LocaleValue. You can check to see if the value is In or NotIn a set of IntegerValue or LocaleValue values. Lastly, a Qualification requirement can also test if a Qualification Exists or DoesNotExist in the user's profile, regardless of its value.
IntegerValues -> (list)
The integer value to compare against the Qualification's value. IntegerValue must not be present if Comparator is Exists or DoesNotExist. IntegerValue can only be used if the Qualification type has an integer value; it cannot be used with the Worker_Locale QualificationType ID. When performing a set comparison by using the In or the NotIn comparator, you can use up to 15 IntegerValue elements in a QualificationRequirement data structure.
(integer)
LocaleValues -> (list)
The locale value to compare against the Qualification's value. The local value must be a valid ISO 3166 country code or supports ISO 3166-2 subdivisions. LocaleValue can only be used with a Worker_Locale QualificationType ID. LocaleValue can only be used with the EqualTo, NotEqualTo, In, and NotIn comparators. You must only use a single LocaleValue element when using the EqualTo or NotEqualTo comparators. When performing a set comparison by using the In or the NotIn comparator, you can use up to 30 LocaleValue elements in a QualificationRequirement data structure.
(structure)
The Locale data structure represents a geographical region or location.
Country -> (string)The country of the locale. Must be a valid ISO 3166 country code. For example, the code US refers to the United States of America.
Subdivision -> (string)The state or subdivision of the locale. A valid ISO 3166-2 subdivision code. For example, the code WA refers to the state of Washington.
RequiredToPreview -> (boolean)DEPRECATED: Use the ActionsGuarded field instead. If RequiredToPreview is true, the question data for the HIT will not be shown when a Worker whose Qualifications do not meet this requirement tries to preview the HIT. That is, a Worker's Qualifications must meet all of the requirements for which RequiredToPreview is true in order to preview the HIT. If a Worker meets all of the requirements where RequiredToPreview is true (or if there are no such requirements), but does not meet all of the requirements for the HIT, the Worker will be allowed to preview the HIT's question data, but will not be allowed to accept and complete the HIT. The default is false. This should not be used in combination with the ActionsGuarded field.
ActionsGuarded -> (string)Setting this attribute prevents Workers whose Qualifications do not meet this QualificationRequirement from taking the specified action. Valid arguments include "Accept" (Worker cannot accept the HIT, but can preview the HIT and see it in their search results), "PreviewAndAccept" (Worker cannot accept or preview the HIT, but can see the HIT in their search results), and "DiscoverPreviewAndAccept" (Worker cannot accept, preview, or see the HIT in their search results). It's possible for you to create a HIT with multiple QualificationRequirements (which can have different values for the ActionGuarded attribute). In this case, the Worker is only permitted to perform an action when they have met all QualificationRequirements guarding the action. The actions in the order of least restrictive to most restrictive are Discover, Preview and Accept. For example, if a Worker meets all QualificationRequirements that are set to DiscoverPreviewAndAccept, but do not meet all requirements that are set with PreviewAndAccept, then the Worker will be able to Discover, i.e. see the HIT in their search result, but will not be able to Preview or Accept the HIT. ActionsGuarded should not be used in combination with the RequiredToPreview field.
HITReviewStatus -> (string)Indicates the review status of the HIT. Valid Values are NotReviewed | MarkedForReview | ReviewedAppropriate | ReviewedInappropriate.
NumberOfAssignmentsPending -> (integer)The number of assignments for this HIT that are being previewed or have been accepted by Workers, but have not yet been submitted, returned, or abandoned.
NumberOfAssignmentsAvailable -> (integer)The number of assignments for this HIT that are available for Workers to accept.
NumberOfAssignmentsCompleted -> (integer)The number of assignments for this HIT that have been approved or rejected. | https://docs.aws.amazon.com/cli/latest/reference/mturk/list-reviewable-hits.html | 2020-05-25T02:48:43 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
Frontend structure
Modules and directories
The Karrot frontend is composed of modules. Each module should follow this directory structure:
src/ <module-name>/ - routes.js # routes for this app - assets/ # mostly for images used in this module - apple.png - api/ # XHR communication, to backend and other services - pickup.js - pickup.spec.js # unit test - ... - datastore/ # vuex namespaced modules and plugins - pickup.js - pickup.spec.js # unit test - ... - components/ # reusable components (atoms, molecules, organism) - PickupUser.vue - PickupUser.spec.js # unit test - Pickups.story.js # storybook story - ... - pages/ # page templates and instances - PickupsManage.vue # page connected with mapGetters and mapActions - PickupsManageUI.vue # or with vuex-connect - ...
The modules
base and
utils stand out as they don't focus on one area.
base contains all bits that could be considered the core of Karrot, while
utils are helpers that can be reused in other modules.
They should be kept as small as possible, so consider creating a new module before adding to them. | https://docs.karrot.world/frontend-structure.html | 2020-05-25T00:35:14 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.karrot.world |
Publishing app projects
Publishing is the process of making an app project available for use in Resco mobile apps.
Publish
Once you are done editing your app project in Woodford, you need to save and publish the project. Click Publish and wait for the process to finish. If the published project has the highest priority of all available projects for a particular user role, those users will now receive the updated project during synchronization.
Publish all projects
This button is useful if you are using project hierarchy. Any change in the parent project requires that all child projects are republished. To save you unnecessary clicking and to ensure that you don't accidentally omit one of the child projects, use the 'Publish All button.
For more information, see the feature introduction Webinar
Validate
When publishing, app projects are first validated, i.e., TBD.
Publishing with older version
If you publish an app project with a newer Woodford version than is the mobile app version of your users, synchronization may fail with the error Unsupported metadata version. In this case, you have two options:
- Ask your mobile users to upgrade to the most recent version of the mobile app.
- Republish the project with an older release version. As Publish Version, select a version compatible with your mobile apps, then click Publish. | https://docs.resco.net/wiki/Publishing_app_projects | 2020-05-25T02:16:50 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.resco.net |
policy". Data moves through several stages, which correspond to file directory locations. Data starts out in the hot database, located as subdirectories ("buckets") under
$SPLUNK_HOME/var/lib/splunk/defaultdb/db/. It then moves to the warm database, also located as subdirectories under
$SPLUNK_HOME/var/lib/splunk/defaultdb/db. Eventually, data is aged into the cold database
$SPLUNK_HOME/var/lib/splunk/defaultdb/colddb.
Finally, data reaches the frozen state. This can happen for a number of reasons, will automatically copy frozen buckets to the specified location before erasing the data from the index.
Add this stanza to
$SPLUNK_HOME/etc/system/local/indexes.conf:
[<index>] coldToFrozenDir = "<path to frozen archive>"
Note the following:
<index>specifies which index contains the data to archive.
<path to frozen archive>specifies the directory where the indexer will put the archived buckets.
Note: When you use Splunk Web to create a new index, you can also specify a frozen archive path for that index. See "Set up multiple indexes" for details.
How the indexer archives the frozen data depends on whether the data was originally indexed in a pre-4.2 release:
- For buckets created from version 4.2 and on, the indexer will remove all files except for the rawdata file.
- For pre-4.2 buckets, the script simply gzip's all the
.tsidxand
.datafiles in the bucket.
This difference is due to a change in the format of rawdata. Starting with 4.2, the rawdata file contains all the information erases the frozen data from the index.
You'll need to supply the actual script. Typically, the script will archive the data, but you can provide a script that performs any action you want.
Add this stanza to
$SPLUNK_HOME/etc/system/local/indexes.conf:
[<index>] coldToFrozenScript = ["<path to program that runs script>"] "<path to script>"
Note the following:
<index>specifies which index contains the data to archive.
<path to script>specifies the path to the archiving script. The script must be in
$SPLUNK_HOME/binor one of its subdirectories.
<path to program that runs script>is optional. You must set it if your script requires a program, such as python, to run it.
- If your script is located in
$SPLUNK_HOME/binand is named
myColdToFrozen.py, set the attribute like this:
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/myColdToFrozen.py"
- For detailed information on the archiving script, see the indexes.conf spec file.
The indexer ships with an example archiving script that you can edit,
$SPLUNK_HOME/bin/coldToFrozenExample.py.
Note: If using the example script, edit it to specify the archive location for your installation. Also, rename the script or move it to another location to avoid having changes overwritten when you upgrade the indexer. This is an example script and should not be applied to a production instance without editing to suit your environment and testing extensively.
The example script archives the frozen data differently, depending on whether the data was originally indexed in a pre-4.2 release:
- For buckets created from version 4.2 and on, it will remove all files except for the rawdata file.
- For pre-4.2 buckets, the script simply gzip's all the
.tsidxand
.datafiles.
This difference is due to a change in the format of rawdata. Starting with 4.2, the rawdata file contains all the information.
Sign your archives
Splunk Enterprise supports archive signing; configuring this allows you to verify integrity when you restore an archive.
Note: To use archive signing, you must specify a custom archiving script; you cannot perform archive signing if you choose to let the indexer perform the archiving automatically.
Clustered data archiving
Indexer clusters contain redundant copies of indexed data. If you archive that data using the techniques described above,! | https://docs.splunk.com/Documentation/Splunk/6.1/Indexer/Automatearchiving | 2020-05-25T01:56:13 | CC-MAIN-2020-24 | 1590347387155.10 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Access the File Plan to view, create, and edit the structure of the records management hierarchy.
- Enter the Records Management site.
- On the banner, click File Plan.
The File Plan page displays.
- Click Simple View to display only basic item details (title, modification date and time, user responsible for the modifications) for the content items. Click Detailed View to display the summary view.
The Hide Folders/Show Folders button lets you change the view to display both folders and records, or display only the records in the File Plan main view. | https://docs.alfresco.com/4.0/tasks/rm-fileplan-access.html | 2020-05-25T02:43:45 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.alfresco.com |
an array of JobListEntry objects of the specified length. Each JobListEntry object is for a job in the specified cluster and contains a job's state, a job's ID, and other information.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
list-cluster: JobListEntries
list-cluster-jobs --cluster-id <value> [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--max-items <value>] [--generate-cli-skeleton <value>]
--cluster-id (string)
The 39-character ID for the cluster that you want to list, for example CID123e4567-e89b-12d3-a456-42665544.
JobListEntries -> (list)
Each JobListEntry object contains a job's state, a job's ID, and a value that indicates whether the job is a job part, in the case of export jobs.
(structure)
Each JobListEntry object contains a job's state, a job's ID, and a value that indicates whether the job is a job part, in the case of an export job.
JobId -> (string)The automatically generated ID for a job, for example JID123e4567-e89b-12d3-a456-426655440000 .
JobState -> (string)The current state of this job.
IsMaster -> (boolean)A value that indicates that this job is a master job. A master job represents a successful request to create an export job. Master jobs aren't associated with any Snowballs. Instead, each master job will have at least one job part, and each job part is associated with a Snowball. It might take some time before the job parts associated with a particular master job are listed, because they are created after the master job is created.
JobType -> (string)The type of job.
SnowballType -> (string)The type of device used with this job.
CreationDate -> (timestamp)The creation date for this job.
Description -> (string)The optional description of this specific job, for example Important Photos 2016-08-11 .
NextToken -> (string)
HTTP requests are stateless. If you use the automatically generated NextToken value in your next ListClusterJobsResult call, your list of returned jobs will start from this point in the array. | https://docs.aws.amazon.com/cli/latest/reference/snowball/list-cluster-jobs.html | 2020-05-25T02:42:04 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
-projects ]
Deletes a project. To delete a project, it must not have any placements associated with it.
Note
When you delete a project, all associated data becomes irretrievable.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
delete-project --project-name <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--project-name (string)
The name of the empty project project from your AWS account
The following delete-project example deletes the specified project from your AWS account.
aws iot1click-projects delete-project \ --project-name AnytownDumpsters
This command produces no output.
For more information, see Using AWS IoT 1-Click with the AWS CLI in the AWS IoT 1-Click Developer Guide. | https://docs.aws.amazon.com/cli/latest/reference/iot1click-projects/delete-project.html | 2020-05-25T01:29:34 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
Difference between revisions of "Intrinsic Bundles"
Revision as of 00:23, 18 July 2016
Intrinsic bundles are hardcoded fields in the CollectiveAccess database. No matter what installation profile and configuration you use these intrinsic fields are always defined and available. Some exist to support basic application functionality. Examples include idno (the identifier for a record), locale_id (the locale/language of the record, label, etc.) and access (indicates whether a record should be accessible to the public or not). Others support optional, and in some cases legacy, functionality. An example is the item_status_id on objects, while contains an "item status" value that was widely used in pre 1.0 installations but is rarely used now.
Intrinsics are simple, non-repeating values with no associated locale. When using a intrinsic, what you see is what you get – there is no translation or multiple values.
The bundle names for various intrinsic database fields are the field names themselves. Bundle names for other user interface elements are listed in the following tables.
Contents
- 1 For Object (ca_objects)
- 2 For lots (ca_object_lots)
- 3 For entities (ca_entities)
- 4 For places (ca_places)
- 5 For occurrences (ca_occurrences)
- 6 For collections (ca_collections)
- 7 For storage locations (ca_storage_locations)
- 8 For object representations (ca_object_representations)
- 9 For lists (ca_lists)
- 10 For list items (ca_list_items)
- 11 For sets (ca_sets)
- 12 For set items (ca_set_items) | https://docs.collectiveaccess.org/index.php?title=Intrinsic_Bundles&diff=next&oldid=5860 | 2020-05-25T00:39:37 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.collectiveaccess.org |
PrerequisitesPrerequisites
You need to have a
cluster.yaml file and an
inventory.yaml file for the cluster you are going to launch.
Inventory fileInventory file
Inventory file is an Ansible inventory file.
Example:
control-plane: hosts: 10.0.0.1: ansible_host: 10.0.0.1 node_pool: control-plane 10.0.0.2: ansible_host: 10.0.0.2 node_pool: control-plane 10.0.0.3: ansible_host: 10.0.0.3 node_pool: control-plane node: hosts: 10.0.0.4: ansible_host: 10.0.0.4 node_pool: worker 10.0.0.5: ansible_host: 10.0.0.5 node_pool: worker 10.0.0.6: ansible_host: 10.0.0.6 node_pool: worker 10.0.0.7: ansible_host: 10.0.0.7 node_pool: worker bastion: {} all: vars: order: sorted control_plane_endpoint: "" ansible_user: "<username>" ansible_port: 22 version: v1beta1
On Premise ProviderOn Premise Provider
Once you have both
cluster.yaml file and
inventory.yaml file, you need to add a provisioner for your provider. Follow these steps:
- Navigate to Administration / Cloud Providers.
- Select the Add Provider button.
- Select On Premise.
- Enter a name for your provider and insert the full contents of your private ssh key, then hit Verify and Save.
Launching the clusterLaunching the cluster
You can now launch your cluster.
Go to the Clusters section and select Add Cluster.
Select Upload YAML to create a cluster.
Fill out the formFill out the form
- Enter a unique name for your cluster.
- At the top of the
cluster.yamltext field, fill the following information:
kind: ClusterProvisioner apiVersion: Konvoy.mesosphere.io/v1beta1 metadata: name: Konvoy-on-prem spec: sshCredentials: user: "<username>" provider: none ---
- Make sure the Kommander addon is disabled with
enabled: false.
spec: addons: addonsList: - name: kommander enabled: false
Select your On Premise Provider created in the previous step.
Paste the contents of your
inventory.yamlfile into the inventory field. Ensure that your inventory.yaml does not specify the following line:
ansible_ssh_private_key_file: "id_rsa"
- Select Continue.
At this point Provisioning of your cluster should start. You can track the deployment progress with Kibana or
kubectlas you normaly would in Kommander. | https://docs.d2iq.com/ksphere/kommander/1.0/tutorials/on-prem/ | 2020-05-25T01:09:44 | CC-MAIN-2020-24 | 1590347387155.10 | [array(['/ksphere/kommander/1.0/img/On-prem-provider-with-values.png',
'On Premise Provider Form with values'], dtype=object)
array(['/ksphere/kommander/1.0/img/clusters-header.png', 'Upload YAML'],
dtype=object)
array(['/ksphere/kommander/1.0/img/add-cluster.png', 'Upload YAML'],
dtype=object) ] | docs.d2iq.com |
Troubleshooting: Microsoft Office Outlook Integration and Synchronization
This topic lists some common problems that can occur when you use the Microsoft Office Outlook Add-in.
Cleaning Up Data and Configuration Information
You may have to test multiple installations and configurations of the Microsoft Dynamics NAV Synchronization Add-in.
To make sure that you are starting from a clean installation, you should delete the following files from your computer before you start a new setup. The following table describes the details.
Treating Uncompleted Tasks that Have Been Deleted
If you attempt to delete a task in Outlook before the task is completed, after you synchronize, you will receive a message noting that there is a conflict, because the Outlook version of the task has been deleted. When you address the conflict and delete it, and synchronize again, the task is added back to the Tasks folder as completed.
To work around this problem, create a task, synchronize the first time, and set the status of the task to Complete. Synchronize again, and then delete the task. Synchronize again.
Using Outlook Links Collection With Outlook 2013
When you are setting up a user and specifying APP and Task entities, you cannot use the Outlook Links collection, which is not supported by Outlook 2013. If you have configured links, when you synchronize, it will fail with the following error:
An error has occurred during the synchronization process. You can find the error details in the DynamicsNAVsync.log file.
To work around this problem, do not configure the APP and TASK entities to use the Links collection.
Verifying Information in the Change Log Setup Window
By default, the change log is activated. However, you may want to verify that the activation is valid. For example, you may have to do this when no Contacts or To-Dos are synchronized when Outlook synchronization runs.
To verify the Change Log Setup setting
In the Search box, enter Change Log Setup, and then choose the related link.
Verify that the Change Log Activated field is selected.
To validate specific Change Log Setup settings, on the Actions tab, in the Setup group, choose Tables.
The default change log setup contains information for the Salesperson/Purchaser, Contact, and To-do tables. You can make additional modifications.
Synchronizing Addresses for Contacts
When you specify a salesperson for a contact and then synchronize the contact with Outlook, information from the contact’s Country/Region Code field in Microsoft Dynamics NAV is missing from the Address field in the Outlook Contact form. When you attempt to modify information, for example, the street address, for the contact’s address in Outlook, and then synchronize that information back to Microsoft Dynamics NAV, you may encounter the following error:.
An Outlook item cannot be synchronized because the Country/Region Code field of the CONT_PERS entity cannot be processed. Try again later and if the problem persists contact your system administrator.
To fix this issue, add the country/region information to the multiline address field in the Outlook Contact form and then proceed with synchronization.
Setting Conditions
Setting conditions for the TASK and APP entities is required. Meetings and tasks need a Meeting Organizer or Task Owner, so you must create a condition.
To configure conditions
In the Search box, enter Outlook Synch. User Setup, and then choose the related link.
Select the line for the APP entity, and then select the Condition field.
In the Outlook Synch. Filters -Condition window, in the Field Name field, select Salesperson Code.
In the Value field, add the code for the salesperson.
Repeat these steps for the TASK entity.
Setting conditions is not required, but you should do it to prevent mass data transfer to a local mailbox or a public folder. In large environments with hundreds of contacts, you may want to set conditions to limit the data that is to be synchronized to every salesperson who has a mailbox that is configured for Outlook synchronization.
If there are no conditions set for the contacts entities, then you may get the following message to view the debug log:
Closing Mapi session "/o=First Organization/ou=First Administrative Group/cn=Recipients/cn=XY" because it exceeded the maximum of 250 objects of type "objtMessage
This is due to a security setting on the Exchange Server. If a large set of data, which is first triggered with 250 objects, is synchronized to Exchange Server, then Exchange Server logs an error in the event log file and does not let you add the data to the mailbox.
To work around this limitation, you can adjust the registry based on the objects that can be found in event ID 9646 in the application event log file on Exchange Server. After you have completed setting conditions, you may have
Note
By default, there is no debug log. You must enable it first. For more information, see Knowledge Base article 944237: How to enable the log file mode for the Outlook Synchronization feature in Microsoft Dynamics NAV 5.0 (requires PartnerSource account).
Locating the Error Log Created By the Outlook Synchronization Process
You may receive the following message:
An Error has occurred during the synchronization process. You can find the error details in the log file
The location of this log file depends on the operating system that you are using. You can find the path of the file by looking in the Outlook.exe.config file: c:\Program Files\Microsoft Office\Office<version number>\Outlook.exe.config.
Note
You can modify the Outlook.exe.config file to change the amount of information that is logged in the log file and to show more detailed information. For more information, see Knowledge Base article 944237: How to enable the log file mode for the Outlook Synchronization feature in Microsoft Dynamics NAV 5.0 (requires PartnerSource account).
Synchronizing Large Sets of Data
Synchronizing large amounts of data can cause issues with the connection through NAS Services and web services.
If you are synchronizing with the web services connection, then the maximum size of the web service can be changed. You can configure the maximum permitted size of a web services request in the CustomSettings.config file.
<!-- Maximum permitted size of a web services request, in kilobytes--> <add key="WebServicesMaxMsgSize" value="512"></add>
You should not regularly synchronize large sets of data, although it may be appropriate to do this during initial setup. Instead, we recommend that you change the web service size back to an appropriate level after you perform a full synchronization.
Setting Up Microsoft Outlook Integration in a Three-Machine Environment
When selecting a company, you may receive a message that resembles7170 Service Principal Names (SPN) and delegation information. They can also occur if the web service path is not set correctly in the Connection tab. For more information, see Walkthrough: Installing the Three Tiers on Three Computers. You can also see the NAV 2009 Web Services on a three machine setup blog post in the Microsoft Dynamics NAV Team Blog on MSDN.
Configuring the Outlook Profile in an Environment Other than Microsoft Exchange
When you add and configure a new user Outlook profile in an environment other than Microsoft Exchange and are working with the TASK entity, you must make sure
Displaying the Microsoft Dynamics NAV Synchronization Toolbar
After you reinstall the Microsoft Office Outlook Add-in, the Microsoft Dynamics NAV Synchronization toolbar items may not appear in Outlook on the Add-ins tab, even though the toolbar is selected in the list of available toolbars from the View menu. This occurs because an earlier version of the add-in is running during reinstallation.
To display the Microsoft Dynamics NAV Synchronization toolbar
In Outlook, on the File tab, choose Options, and then, in the Options dialog, choose Add-Ins.
In the Manage box, verify that COM Add-ins is selected, and then choose Go.
If the Microsoft Dynamics NAV Synchronization Add-in is selected, clear the check box, and then choose the OK button.
In the Manage box, verify that COM Add-ins is selected, and then choose Go.
Select the Microsoft Dynamics NAV Synchronization Add-in check box, and then choose the OK button.
Showing Synchronization
We recommend that you enable the Show Synchronization Progress option in Outlook so that a user knows when synchronization is occurring. This helps a user avoid receiving an error if Outlook is closed when synchronization is occurring.
To set rules for synchronization
In Outlook, on the Microsoft Dynamics NAV Synchronization toolbar, choose Settings.
On the General tab, select the Show synchronization progress and Show synchronization summary check boxes.
Finding Additional Information
For additional information about the set up and configuration of the Microsoft Dynamics Synchronization Add-in, see the Microsoft Dynamics NAV Help. You can search for the "Set Up Outlook Synchronization" topic as a starting point. In addition, see the Outlook Integration Installation & Setup Technical White Paper (requires PartnerSource account). Although this document describes installation for Microsoft Dynamics NAV 5.0, much of the information applies to Microsoft Dynamics NAV 2015. You can also find troubleshooting information on the NAV Developer's Blog on MSDN.
See Also
Tasks
Walkthrough: Setting Up Outlook Synchronization
Other Resources
Walkthrough: Synchronizing Information Between Outlook and Microsoft Dynamics NAV
Installation Options | https://docs.microsoft.com/en-us/previous-versions/dynamicsnav-2015/dd983823(v%3Dnav.80) | 2020-05-25T02:53:36 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.microsoft.com |
What is Shibboleth? as OAuth or OpenID connect.
It helps sites make informed authorization decisions for accessing protected resourcesand provides federated identity-based authentication and authorization that allows cross-domain Single Sign-On (SSO) and removes the need for access credentials.
Shibboleth web-based Single Sign-On (SSO) system contains three Components:
- Identity Provider (IDP) – An identity provider (IDP) creates, maintains, and manages user identities and information. Identity Providers are responsible for user authentication and providing required user information to the Service Provider (SP).
- Service Provider (SP) – Service provider (SP) receives authentications assertions from the Identity provider and authenticates the user.
- Discovery Sevice (DS) – It helps the Service Provider to discover the user’s Identity Provider. It may be located anywhere on the web and most of the time does not require.
Shibboleth SSO Workflow
The below diagram shows the common workflow of single sign-on (SSO) and interaction between User, Identity Provider (IDP) and Service Provider (SP).
Shibboleth SSO flow with miniOrange IDP
The authentication process using Identity Provider (IDP), takes place in the following steps:
- The user reaches for a Service provider (website) for accessing the resources.
- Service Provider figure outs the Identity provider (IDP) with the help of miniOrange discovery service and authenticates the user with the Identity Provider (IDP).
- Identity Provider checks if any active session is going on if it not then it asks the user to enter the credentials and the authentication request is sent to IDP.
- Identity Provider (IDP) sends an authentication response to the Service Provider (SP).
- After authenticating the user with Identity Provider (IDP) Service Provider (SP) grants access to the user.
Limitations of Shibboleth
- Support limited protocols such as SAML.
- Support and customization are not available because it is open-source, unlike other vendors who provide full support.
- It is more complex to set up and configure. The configuration is more involved.
- It only supports Supports SAML 1 and SAML 2 and features up to Shibboleth 2.4 protocols.
Shibboleth Vs miniOrange IDP
Related Articles:
- miniOrange is registered consultants who support shibboleth. Click here to know more.
- SAML Single Sign-On (SSO) Into Bamboo Using Shibboleth 2
- Shibboleth-2 As Idp For WordPress
- Single Sign On | https://docs.miniorange.com/what-is-shibboleth | 2020-05-25T01:13:28 | CC-MAIN-2020-24 | 1590347387155.10 | [array(['https://docs.miniorange.com/wp-content/uploads/sites/11/2019/10/shibboleth-sso-workflow.jpg',
'shibboleth sso workflow'], dtype=object)
array(['https://docs.miniorange.com/wp-content/uploads/sites/11/2019/09/CAS-as-an-IDP.png',
'shibboleth workflow'], dtype=object) ] | docs.miniorange.com |
According to the XACML reference architecture, PIP (Policy Info Point) is the system entity that acts as a source of attribute values. Basically if there are missing attributes in the XACML request sent by PEP (Policy Enforcement Point), PIP would find them for the PDP (Policy Decision Point) to evaluate the policy.
This topic provides instructions on how to write a simple PIP attribute finder module to plug in to the WSO2 Identity Server. There are two ways that you can write a PIP attribute finder module.
...
Overview
Content Tools
Activity | https://docs.wso2.com/pages/diffpages.action?originalId=41746546&pageId=43980037 | 2020-05-25T02:38:29 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.wso2.com |
Note
The documentation you're currently reading is for version 3.1.0. Click here to view documentation for the latest stable version.
Sensor Troubleshooting¶
If a particular sensor is not running or appears to not be working (e.g. triggers are not emitted) follow these steps to debug it:
1. Verify Sensor is Registered¶
The first step is verifying that the sensor is registered in the database. You can do that by inspecting the output of the CLI command below and making sure that your sensor is present:
st2 sensor list
If your sensor is not listed, this means it is not registered. You can
register it by running the
st2ctl script:
sudo st2ctl reload --register-sensors --register-fail-on-failure --verbose
This will register sensors for all the packs which are available on the file
system. As you can see, we also use
--register-fail-on-failure and
--verbose
flags.
This will cause the register script to exit with a non-zero exit code, and print a failure in case registration of a particular sensor fails (e.g. typo in sensor metadata file, invalid YAML, etc).
2. Verify Virtual Environment Exists¶
After confirming that the sensor has been registered, you should check that a virtual environment has been created for that sensor’s pack.
You can check this by confirming the existence of the
/opt/stackstorm/virtualenvs/<pack name> directory. If the directory and
virtual environment does not exist, you can create it using this command:
st2 run packs.setup_virtualenv packs=<pack name>
3. Checking st2sensorcontainer Logs¶
If the sensor still does not appear to be running or working, you should run the sensor container service in the foreground in debug and single sensor mode. In this mode the sensor container will only run the sensor you specified and all log messages with level DEBUG and higher will be printed directly to the console.
/opt/stackstorm/st2/bin/st2sensorcontainer --config-file=/etc/st2/st2.conf --debug --sensor-ref=pack.SensorClassName
The log output will usually give you a clue as to what is going on. Common issues include typos and syntax errors in the sensor class code, uncaught exceptions being thrown that causes the sensor to exit, etc. | https://bwc-docs.brocade.com/troubleshooting/sensors.html | 2019-09-15T12:24:14 | CC-MAIN-2019-39 | 1568514571360.41 | [] | bwc-docs.brocade.com |
FAQ Objects
This topic describes part of the functionality of Genesys Content Analyzer.
Taking a category tree and its associated standard responses as input, Knowledge Manager can produce an FAQ object. From this object Knowledge Manager can produce a .jar file, which can in turn be used to:
- Build a web application that accepts written requests and, using content analysis, returns a set of standard responses.
- Present the contents (or a selection from the contents) of the standard response library as answers to frequently-asked questions.
An FAQ object combines a category tree, a training object based on the tree, and, optionally, a model built from the training object. The model is required in order to build a web application.
FAQ objects allow you to include in your web application a means of gathering user feedback about the correctness of a returned standard response. The application then uses this feedback to update the confidence rating of that particular standard response. This functionality is exemplified in the FAQ sample in the Simple Samples that are installed along with Web API Server.
For a description of this sample and its source code, see the eServices 8.1 Web API Client Developer's Guide.
The section includes the following:
- Sample FAQ .jar File
- More About FAQ Objects
- Procedure: Creating a new FAQ object
- Full Category Tree Subtab: Configuring the Category Tree
- FAQ Category Tree Subtab: Viewing and Testing
- Procedure: Generating and testing an FAQ.jar file
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/ES/8.1.4/KMUser/FAQ | 2019-09-15T12:02:28 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.genesys.com |
_route
Enable Outbound Calls
Modified in 8.5.108.02.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/CLBCK/8.5.1/UG/EnableOutbound | 2019-09-15T12:24:35 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.genesys.com |
Event
Handler
Event Task Async Helper Handler
Event Task Async Helper Handler
Event Task Async Helper Handler
Class
Task Async Helper
Definition
Converts task-returning asynchronous methods into methods that use the asynchronous programming model used in previous versions of ASP.NET and that is based on begin and end events.
public ref class EventHandlerTaskAsyncHelper sealed
public sealed class EventHandlerTaskAsyncHelper
type EventHandlerTaskAsyncHelper = class
Public NotInheritable Class EventHandlerTaskAsyncHelper
- Inheritance
-
Remarks. | https://docs.microsoft.com/en-us/dotnet/api/system.web.eventhandlertaskasynchelper?view=netframework-4.8 | 2019-09-15T12:53:24 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.microsoft.com |
When scheduling jobs, there can only be one job doing one task. In a game, it is common to want to perform the same operation on a large number of objects. There is a separate job type called IJobParallelFor to handle this.
Note: A “ParallelFor” job is a collective term in Unity for any struct that implements the
IJobParallelFor interface.
A ParallelFor job uses a NativeArray of data to act on as its data source. ParallelFor jobs run across multiple cores. There is one job per core, each handling a subset of the workload.
IJobParallelFor behaves like
IJob, but instead of a single Execute method, it invokes the
Execute method once per item in the data source. There is an integer parameter in the
Execute method. This index is to access and operate on a single element of the data source within the job implementation.
struct IncrementByDeltaTimeJob: IJobParallelFor { public NativeArray<float> values; public float deltaTime; public void Execute (int index) { float temp = values[index]; temp += deltaTime; values[index] = temp; } }
When scheduling ParallelFor jobs, you must specify the length of the
NativeArray data source that you are splitting. The Unity C# Job System cannot know which
NativeArray you want to use as the data source if there are several in the struct. The length also tells the C# Job System how many
Execute methods to expect.
Behind the scenes, the scheduling of ParallelFor jobs is more complicated. When scheduling ParallelFor jobs, the C# Job System divides the work into batches to distribute between cores. Each batch contains a subset of
Execute methods. The C# Job System then schedules up to one job in Unity’s native job system per CPU core and passes that native job some batches to complete.
When a native job completes its batches before others, it steals remaining batches from the other native jobs. It only steals half of a native job’s remaining batches at a time, to ensure cache locality.
To optimize the process, you need to specify a batch count. The batch count controls how many jobs you get, and how fine-grained the redistribution of work between threads is. Having a low batch count, such as 1, gives you a more even distribution of work between threads. It does come with some overhead, so sometimes it is better to increase the batch count. Starting at 1 and increasing the batch count until there are negligible performance gains is a valid strategy.
Job code:
// Job adding two floating point values together public struct MyParallelJob : IJobParallelFor { [ReadOnly] public NativeArray<float> a; [ReadOnly] public NativeArray<float> b; public NativeArray<float> result; public void Execute(int i) { result[i] = a[i] + b[i]; } }
Main thread code:
NativeArray<float> a = new NativeArray<float>(2, Allocator.TempJob); NativeArray<float> b = new NativeArray<float>(2, Allocator.TempJob); NativeArray<float> result = new NativeArray<float>(2, Allocator.TempJob); a[0] = 1.1; b[0] = 2.2; a[1] = 3.3; b[1] = 4.4; MyParallelJob jobData = new MyParallelJob(); jobData.a = a; jobData.b = b; jobData.result = result; // Schedule the job with one Execute per index in the results array and only 1 item per processing batch JobHandle handle = jobData.Schedule(result.Length, 1); // Wait for the job to complete handle.Complete(); // Free the memory allocated by the arrays a.Dispose(); b.Dispose(); result.Dispose();
2018–06–15 編集レビュー を行ってパブリッシュされたページ
C# Job System exposed in 2018.1 NewIn20181 | https://docs.unity3d.com/ja/current/Manual/JobSystemParallelForJobs.html | 2019-09-15T12:19:44 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.unity3d.com |
_NAME_DIM
Description
This dimension table allows Bot Gateway Server (BGS) session facts to be described based on the name of the bot used in the session._NAME_DIM
No subject area information available.
This page was last modified on June 11, 2018, at 10:15.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/GIM/8.5.0/PDMMS/Table-BGS_BOT_NAME_DIM | 2019-09-15T11:59:38 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.genesys.com |
Custom Time Zones
You can configure custom time zones for Workforce Advisor; use the following procedure.
JDBC Data Source Error Logging in XML GeneratorChange the Time Profile of Agent Groups Metrics from 5 Minute Sliding to 30 Minute Growing
This page was last modified on March 27, 2015, at 11:43.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/PMA/8.5.1/PMADep/CustomTimeZone | 2019-09-15T12:17:50 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.genesys.com |
Constructor: account.passwordSettings
Back to constructors index
Password settings
Attributes:
Type: account_PasswordSettings
Example:
$account_passwordSettings = ['_' => 'account.passwordSettings', 'email' => 'string', 'secure_settings' => SecureSecretSettings];
Or, if you’re into Lua:
account_passwordSettings={_='account.passwordSettings', email='string', secure_settings=SecureSecretSettings}
This site uses cookies, as described in the cookie policy. By clicking on "Accept" you consent to the use of cookies. | https://docs.madelineproto.xyz/API_docs/constructors/account_passwordSettings.html | 2019-09-15T12:43:14 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.madelineproto.xyz |
Row
Updating
Row Event Args. Base Command Updating
Row Event Args. Base Command Updating
Row Event Args. Base Command Updating
Property
Event Args. Base Command
Definition
Gets or sets the IDbCommand object for an instance of this class.
protected: virtual property System::Data::IDbCommand ^ BaseCommand { System::Data::IDbCommand ^ get(); void set(System::Data::IDbCommand ^ value); };
protected virtual System.Data.IDbCommand BaseCommand { get; set; }
member this.BaseCommand : System.Data.IDbCommand with get, set
Protected Overridable Property BaseCommand As IDbCommand
Property Value
The IDbCommand to execute during the Update(DataSet).
Remarks
This method allows implementations to perform type checking for the IDbCommand object assignment. Only valid IDbCommand objects, for example a SqlCommand object, should be permitted. | https://docs.microsoft.com/en-us/dotnet/api/system.data.common.rowupdatingeventargs.basecommand?view=netframework-4.8 | 2019-09-15T13:33:17 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.microsoft.com |
.
Note
Batch-level options are referred to as connection-level options in earlier versions of SQL Server and also in connections that have disabled Multiple Active Result Sets (MARS)..
Hierarchy of Options
When an option is supported at more than one level, the following hierarchy is imposed:
- A database option overrides an instance option.
- A SET option overrides a database option.
- A hint overrides a SET option.
Note
SET options set within a dynamic SQL batch affect only the scope of that batch.
Note
SET options, such as QUOTED_IDENTIFIER and ANSI_NULLS, are persisted with stored procedure definition and, therefore, take precedence over different values explicitly set for them.
See Also
Concepts
SET Options
Database Options
Instance Options
Database Compatibility Level Option
Help and Information
Getting SQL Server 2005 Assistance | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms191203%28v%3Dsql.90%29 | 2019-09-15T12:39:24 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.microsoft.com |