Payara Server Documentation
Documentation of Payara Server Enterprise features. Includes full documentation of the features added on top of GlassFish Server 5 Open Source Edition as well as features shared with GlassFish Server or their improvements specific to Payara Server.
All documentation for GlassFish Server 5 is also valid for Payara Server Enterprise 5.28.0 unless stated otherwise.
Administrators Guide¶
Introduction¶
The admin guide is intended for use by anyone who will be responsible for configuring tests and test lists, modifying references & tolerances or managing users and user permissions. Typically this would be limited to a small group of users within the clinic.
Accessing the admin site¶
Before you start defining tests and test lists for the first time it is a good idea to begin by doing some initial configuration.
Admin Guide Contents¶
- Changing The Site Name Displayed at the Top of Pages
- Managing Users and Groups
- Editing a Notification
- Deleting a Notification
- Units
- Tests and Other QC
- Admin Tutorials
- Service Log & Parts
- Fault Log Administration
7. Communication
During the translation process, there will be a lot of occasions for communication with your collaborators; similarly, they might need to ask you questions and clarifications.
See below the two ways to communicate with your collaborators within Transifex. We recommend discussing with your collaborators/vendor which communication channel would work best.
Announcements and discussions
This is probably best for “bigger” topics such as announcements about a start of the project, or a big update of the source file, or adding contextual information about the translation project. Learn more about this feature here.
Discuss specific strings
Quite often, translators would need clarification about specific strings and will request you to provide additional information by opening an “issue” on a string level. Learn more about this feature here.
As mentioned, we recommend discussing the procedure with your collaborators/vendor team: should they @mention you or someone else in the organization (e.g. different project maintainers for different projects)? Should they mark the issue as resolved, or will you do it? Should you @mention all other language translators, or are they supposed to check open issues on their own?
Update from 6.1.x to 6.2.0¶
This page describes how you can update from OXID eShop version 6.1.x to 6.2.0. If you want to update to any other version, please switch to the appropriate version of the documentation.
Depending on your existing OXID eShop installation, you need to perform one or more of the following actions:
1. Composer update¶
Please edit your root composer.json file by updating the contents of the require and require-dev nodes:
{
    "require": {
        "oxid-esales/oxideshop-metapackage-ce": "v6.2.0"
    },
    "require-dev": {
        "oxid-esales/testing-library": "^v7.1.0",
        "incenteev/composer-parameter-handler": "^v2.0.0",
        "oxid-esales/oxideshop-ide-helper": "^v3.1.2",
        "oxid-esales/azure-theme": "^v1.4.2"
    }
}
Example: updated values for OXID eShop CE v6.2.0
Adapt the metapackage according to your edition.
Note
New version of testing-library requires php-zip extension. You might need to install it to be able to update OXID eShop from oxvm_eshop.
Clean up the tmp folder:
rm -rf source/tmp/*
For updating dependencies (necessary to update all libraries), in the project folder run:
composer update --no-plugins --no-scripts
Copy the file overridablefunctions.php from the vendor directory to the OXID eShop source directory:
cp vendor/oxid-esales/oxideshop-ce/source/overridablefunctions.php source/
For executing all necessary scripts to actually gather the new compilation, in the project folder run:
composer update #(You will be prompted whether to overwrite existing code for several components. The default value is N [no] but of course you should take care to reply with y [yes].)
Important
Composer will ask you to overwrite module and theme files. E.g.: “Update operation will overwrite oepaypal files in the directory source/modules. Do you want to overwrite them? (y/N)” If you include modules by "type": "path" in your composer.json file, as described in Best practice module setup, answer No to this question.
For executing possible database migrations, in the project folder run:
vendor/bin/oe-eshop-db_migrate migrations:migrate
2. Update of the module configurations¶
The outcome of the following steps is that you are able to configure, activate and deactivate your current modules again. Therefore the new module configuration .yaml files need to be synchronized with the configuration and activation status of your current modules. Read here for background information.
Install the update component via composer:
composer require --no-interaction oxid-esales/oxideshop-update-component:"^1.0"
Clean up the tmp folder:
rm -rf source/tmp/*
Install a default configuration for all modules which are currently inside the directory source/modules. On the command line, execute the console command:
vendor/bin/oe-console oe:oxideshop-update-component:install-all-modules
Transfer the existing configuration (module setting values, class extension chain, which modules are active) from the database to the .yaml configuration files:
vendor/bin/oe-console oe:oxideshop-update-component:transfer-module-data
Remove module data which is already present in the .yaml files from the database, to avoid duplications and errors during module activation:
vendor/bin/oe-console oe:oxideshop-update-component:delete-module-data-from-database
After this step, the module data has been removed from the database, so module functionality will temporarily stop working.
Activate all configured modules which were previously active. On the command line, execute the console command:
vendor/bin/oe-console oe:module:apply-configuration
After this step, all modules which were previously active should be active and have the correct configuration set.
Uninstall the update component via composer:
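The command itself is not preserved in this extract; assuming the standard Composer workflow and the package name used in the install step above, it would be along the lines of:
composer remove oxid-esales/oxideshop-update-component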
3. Remove old files¶
There is a list of files that are not used anymore by OXID eShop, and those files can be removed manually. If you are not using them, it is recommended to remove the listed files.
- source/xd_receiver.htm
Troubleshooting¶
- Error message: `Module directory of ModuleX could not be installed due to The variable $sMetadataVersion must be present in ModuleX/metadata.php and it must be a scalar.`
- Up to OXID eShop 6.1, modules without a metadata version in the file metadata.php were accepted. OXID eShop 6.2 requires a metadata version to be set in the module's metadata.php.
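A minimal sketch of the relevant part of such a file; the module id, title and version values are placeholders, and the metadata version shown is one accepted by OXID eShop 6.2:
<?php
// ModuleX/metadata.php - illustrative values only
$sMetadataVersion = '2.1'; // must be present and must be a scalar

$aModule = [
    'id'          => 'modulex',
    'title'       => 'Module X',
    'description' => 'Example module metadata',
    'version'     => '1.0.0',
];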
- Error message `The metadata key constrains is not supported in metadata version 2.0.`
- Up to OXID eShop 6.1, the array keys constraints and constrains were accepted in the file metadata.php. OXID eShop 6.2 only allows the key constraints. Please refer to the metadata documentation of settings.
- The extension chain in the OXID eShop admin is partly highlighted red and crossed out.
- This is not necessarily an error. Up to OXID eShop 6.1, only extensions of active modules were shown. OXID eShop 6.2 shows extensions of all installed modules (active and inactive). If a module is inactive, the extensions of this module are highlighted red and crossed out. This new behavior means you can configure the extension chain of modules which are not activated yet.
blynk (verified community library)
Summary
Build a smartphone app for your project in minutes!
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
Blynk C++ Library
If you like Blynk - give it a star, or fork it and contribute!
What is Blynk?
Download
Blynk Arduino Library
Blynk App: Google Play / App Store
Documentation
Social: Webpage / Facebook / Twitter / Kickstarter Help Center: Documentation: Community Forum: Examples Browser: Blynk for Business:
Contributing
We accept contributions from our community: stability bugfixes, new hardware support, or any other improvements. Here is a list of what you could help with.
Implementations for other platforms
- Arduino
- Node.js, Espruino, Browsers
- Lua, OpenWrt, NodeMCU
- Python, MicroPython
- OpenWrt packages
- MBED
- Node-RED
- LabVIEW
- C#
License
This project is released under The MIT License (MIT)
Browse Library Files | https://docs.particle.io/cards/libraries/b/blynk/ | 2021-05-06T00:56:46 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['https://img.shields.io/twitter/url/http/shields.io.svg?style=social',
'Tweet'], dtype=object)
array(['https://github.com/blynkkk/blynkkk.github.io/blob/master/images/GithubBanner.jpg',
'Blynk Banner'], dtype=object) ] | docs.particle.io |
- Go to Appearance>Customize>Home Page Settings>Portfolio Section
- Check the box with blue tick mark on Enable Portfolio Section
- Select Page from the drop-down box
- Select Post 1,2,3,..and so on from the drop-down box
- Enter text in Portfolio Section Read More Text text-box
- Enter link on Portfolio Page url text-box
- Click on Publish
| https://docs.prosysthemes.com/biz-ezone/home-page-settings/how-to-configure-portfolio-section/ | 2021-05-06T00:25:38 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['http://docs.prosysthemes.com/wp-content/uploads/2020/02/portfolio-section-3.png',
None], dtype=object) ] | docs.prosysthemes.com |
The Theme manager is the set of parameters where you can define the design and look of the Live Search. It has its own tab in Joomla 3.x, but if you are using it with Joomla 2.5.x it can be found in the parameter list. As you could read previously, it has 4 individual themes with different options, so you can find detailed descriptions for every theme in the Themes section. Settings in the Theme manager have no direct effect on how the module works; you will get a similar working method in all themes, only the design, structure and functionality can differ. There are 3 parameters which are the same in all themes; these are the following:
Here you can select which theme you would like to use for the Live Search:
Theme skins define many color options with predefined values. The names describe well what color package they include. After you select one, the colors will change automatically.
Font style
This parameter is also a kind of skin selector. There are many predefined values based on Google fonts, which set the font types in the Font Manager. There is an article for the Font Manager, where you can read about those options in detail.
Getting Started with Axonius
The Axonius Cybersecurity Asset Management platform allows security and other teams to:
- Get a credible, comprehensive asset management inventory
- Discover security coverage gaps
- Automatically validate and enforce security policies
The Axonius platform does this by connecting to your security and management solutions, and collecting and correlating information about devices, cloud instances, and users.
Axonius offers visibility into important details related to devices and users:
- Devices – Refer to any computing entity that has an IP address. This includes workstations, servers, local virtual instances, cloud instances and containers, IoT and more.
- Users – Refer to the identities that authenticate to and use devices.
After deploying Axonius, on every login, the Getting Started with Axonius checklist will be displayed by default.
This checklist includes a list of fundamental milestones you should complete to start benefiting from the solution.
- To review each milestone description click the expand button. Click Learn More to open the milestone designated documentation page that explains how to complete the milestone.
- To start working on the milestone, click Let's Do It.
- Completed milestones are marked with a green 'V'. The Let's Do It button is disabled and its label is renamed to Completed.
- To open the checklist, click its icon. To close it, click anywhere on the screen.
- To stop the checklist from being displayed on every login, unselect the Show this checklist on login checkbox.
- To disable the checklist, unselect the Enable Getting Started with Axonius Checklist checkbox under the Global Settings tab in the System Settings screen. For more details, see Global Settings.
NOTE
To view the latest updates on Axonius, you can find the full release notes under the Release Notes folder in the navigation menu.
'image.png'], dtype=object)
array(['https://cdn.document360.io/95e0796d-2537-45b0-b972-fc0c142c6893/Images/Documentation/image%281015%29.png',
'image.png'], dtype=object) ] | docs.axonius.com |
This is useful if your Project has a Prefab that stores unchanging data in attached MonoBehaviour scripts.
Every time you instantiate that Prefab, it will get its own copy of that data; a ScriptableObject lets you store the data once and reference it instead. Unlike MonoBehaviours, you cannot attach a ScriptableObject to a GameObject. Instead, you need to save them as Assets in your Project.
To use a ScriptableObject, create a script in your application's Assets folder and make it inherit from the ScriptableObject class (a minimal sketch follows). Then, in the Inspector, set the Spawn Manager Values field to the new SpawnManagerScriptableObject that you set up.
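A minimal sketch of the two scripts this walkthrough refers to; the member names are assumptions chosen to match the Inspector fields mentioned here (Spawn Manager Values, Entity To Spawn), and in a real project each class goes in its own file named after the class.
using UnityEngine;

// Data container asset; create instances via Assets > Create > ScriptableObjects > SpawnManagerScriptableObject.
[CreateAssetMenu(fileName = "SpawnManagerScriptableObject", menuName = "ScriptableObjects/SpawnManagerScriptableObject")]
public class SpawnManagerScriptableObject : ScriptableObject
{
    public string prefabName;      // label given to spawned instances
    public Vector3[] spawnPoints;  // positions to spawn at
}

// MonoBehaviour that spawns the referenced Prefab using the shared values.
public class Spawner : MonoBehaviour
{
    public GameObject entityToSpawn;                        // "Entity To Spawn" in the Inspector
    public SpawnManagerScriptableObject spawnManagerValues; // "Spawn Manager Values" in the Inspector

    void Start()
    {
        for (int i = 0; i < spawnManagerValues.spawnPoints.Length; i++)
        {
            GameObject entity = Instantiate(entityToSpawn, spawnManagerValues.spawnPoints[i], Quaternion.identity);
            entity.name = spawnManagerValues.prefabName;
        }
    }
}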
Set the Entity To Spawn field to any Prefab in your Assets folder, then click Play in the Editor. The Prefab you referenced in the Spawner instantiates using the values you set in the SpawnManagerScriptableObject instance.
If you’re. | https://docs.unity3d.com/2018.4/Documentation/Manual/class-ScriptableObject.html | 2020-10-19T22:36:41 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.unity3d.com |
The last step in creating your new automation is saving it with the Save Automation button on the right panel and preparing to activate it. All automations will start initially in a inactive state, meaning that they will not attempt to process anything until you are ready for them to do so.
To activate an automation, simply click the big play button just next to the Save Automation button; the automation will now be active in your builder.
You now have two options for testing your automation:
Using the Test Automation button that will appear in the right panel when an automation has been saved
The second option can be useful for testing automations that are designed to be used with a webhook as otherwise these can be difficult to test. Whichever option you choose you should notice that the actions which were requested have been executed, e.g. if there was a Create Row action in your automation this row should now exist.
If your automation does not run as expected, check that all input mustache syntax is valid and that any filters put in place will pass.
Below is a video showing the testing of a very basic automation being activated and then tested. | https://docs.budibase.com/automate/activating-and-testing | 2020-10-19T21:47:57 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.budibase.com |
New and Enhanced Features for InterSystems IRIS 2020.3
This document describes the new and enhanced features in the 2020.3 release of InterSystems IRIS® data platform, which is a continuous delivery release. The enhancements in this release make it easier to develop and deploy real-time, machine learning-enabled applications that bridge data and application silos.
Enhancements that Improve Deployment and Operations Experience
This release provides the following enhancements to the deployment and operations experience, both in the cloud and on-premises:
Configuring a Kubernetes cluster is much easier with the new InterSystems Kubernetes Operator (IKO). See “Using the InterSystems Kubernetes Operator.”
The InterSystems Cloud Manager (ICM) adds support for InterSystems API Manager deployments. See “Deploying InterSystems API Manager.”
Asynchronous mirroring support for sharded clusters.
You can now manage Work Queues from the System Management Portal.
Enhancements that Improve Developer Experience
This release provides the following enhancements to the developer experience, including new facilities, higher performance, and compatibility with recent versions of key technology stacks:
Python Gateway — this release extends the dynamic object gateway to allow you to call Python code from ObjectScript and provides forward and reverse proxy access to Python objects. In previous releases, the dynamic object gateway only supported calls to Java and .NET.
Support for JDBC and Java Gateway reentrancy.
.NET Gateway now supports .NET Core 2.1.
XEP adds support for deferred indexing and indexes can be built as a background process. See “Controlling Index Updating.”
Support for Spark 2.4.4.
IntegratedML Machine Learning
This release introduces IntegratedML, a new feature that brings best-of-breed machine learning frameworks, such as SciKit Learn, TensorFlow, and H2O, to InterSystems IRIS. This feature allows you to build and deploy machine learning models using simple SQL statements. IntegratedML emphasizes ease of use by providing a universal interface to different frameworks and streamlining the iterative process of data preparation, training and deployment.
IntegratedML will be included in future releases, and it is available in a separate kit based on InterSystems IRIS 2020.3.
Other Enhancements and Efficiency Improvements
In each release, InterSystems makes many efficiency improvements and minor enhancements. In this release these improvements include:
You can now use Transact-SQL through JDBC (a short illustrative sketch follows this list). Please see the Transact-SQL Migration Guide for more on hosting Transact-SQL applications on InterSystems IRIS.
Node.js Native API now includes the List class. See “Native API Quick Reference for Node.js.”
Java Messaging Service (JMS) adapter is able to connect to a broader range of servers.
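As an illustration of the Transact-SQL over JDBC item above, a hedged Java sketch follows; the connection URL, namespace, credentials, and table name are placeholders rather than values taken from this document, and only standard JDBC calls are used.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TSqlOverJdbc {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for an InterSystems IRIS instance.
        String url = "jdbc:IRIS://localhost:1972/USER";
        try (Connection conn = DriverManager.getConnection(url, "_SYSTEM", "password");
             Statement stmt = conn.createStatement();
             // Transact-SQL style statement sent through the standard JDBC API.
             ResultSet rs = stmt.executeQuery("SELECT TOP 5 Name FROM Sample.Person")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}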
Continuous Delivery Releases of InterSystems IRIS
InterSystems IRIS 2020.3 is a continuous delivery release of InterSystems IRIS. There are now two streams of InterSystems IRIS releases:
Extended maintenance (EM) releases — these are annual releases and provide maintenance releases. These releases are ideal for large enterprise applications where the ease of getting fixes in maintenance releases is more important than getting early access to new features.
Continuous delivery (CD) releases — these are quarterly releases that provide quick access to new features and are ideal for developing and deploying applications in the cloud or in local Docker containers.
The quarterly schedule of continuous delivery releases reduces the time between when a customer requests a feature and we deliver it to them. Having regular schedules for both the continuous delivery releases and the major extended maintenance releases provide customers with a predictable schedule that helps them plan and schedule updates.
Continuous delivery releases are provided in container format and are available on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Docker Hub, and the InterSystems WRC download site. You can run a continuous delivery release on any of these cloud platforms or a local system using Docker container. InterSystems does not provide maintenance releases for continuous delivery releases, but instead fixes issues in subsequent continuous delivery releases.
The extended maintenance releases are provided on all platforms listed in the Supported Platforms Guide, including UNIX®, Windows, the cloud platforms, and the Docker container.
If your application runs on a non-container platform, you can only use an extended maintenance release for that application but can consider using the continuous delivery releases for:
Evaluating new features and testing your custom code — this will reduce your upgrade costs when you upgrade to the next extended maintenance major release.
Using it for new projects that can be deployed in the cloud or in local containers.
In addition to providing fully-supported releases, InterSystems provides access to prerelease software for developers who want to get an early look at new features.
Central Dispatch
- Central dispatch components: Central dispatch includes work order task and agent information and a calendar view of assigned and unassigned tasks.
- Central dispatch integration with dynamic scheduling: Drag and drop a task over an assigned task in the agent calendar. Dynamic scheduling determines the best way to unassign or reassign the original task to make room for the new task.
- Working with tasks in the central dispatch calendar: Display task and SLA information and use the drag and drop feature to assign, unassign, and reassign tasks in the central dispatch calendar.
- Configuring central dispatch: View work order tasks and agent information and assign work order tasks using central dispatch. Dispatchers can view the work order task and agent information that is configured by the administrator. Dispatchers can enable or disable the fields related to task and agent information that are visible to them.
- Using central dispatch: You can display a task or a user form, assign work order tasks, and search work order tasks or field service agents using central dispatch.
identify_format¶
astropy.io.registry.identify_format(origin, data_class_required, path, fileobj, args, kwargs)[source]¶
Loop through identifiers to see which formats match.
- Parameters
- origin : str
A string "read" or "write" identifying whether the file is to be opened for reading or writing.
- data_class_required : object
The specified class for the result of read or the class that is to be written.
- path : str, other path object or None
The path to the file or None.
- fileobj : File object or None
An open file object to read the file's contents, or None if the file could not be opened.
- args : sequence
Positional arguments for the read or write function. Note that these must be provided as a sequence.
- kwargs : dict-like
Keyword arguments for the read or write function. Note that this parameter must be dict-like.
- Returns
- valid_formats : list
List of matching formats.
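A short usage sketch (not part of the original reference entry); the file name is a placeholder and the printed list depends on the file and the registered identifiers:
from astropy.table import Table
from astropy.io.registry import identify_format

# Ask the unified I/O registry which registered formats claim this file for reading.
valid_formats = identify_format("read", Table, "observations.ecsv", None, [], {})
print(valid_formats)  # e.g. ['ascii.ecsv']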
Working with MailChimp
You can connect your MailChimp account to Doki to allow you to funnel purchasers into lists and automation sequences and create mailing list sign ups that capture and tag leads in your MailChimp forms.
What do you want to accomplish?
Our MailChimp integration currently enables the following actions once connected:
- You can add a sign up form to your course landing that subscribes a user to a MailChimp form.
- You can add users to a MailChimp List triggering automation when an auto-drip schedule starts.
- You can add all users to a MailChimp List when a class schedule triggering automation for all users at the same time when a class schedule starts.
If any of these is what you want, continue below to the MailChimp integration guide.
If you need to funnel students into a MailChimp sequence when they create an account or buy a non-dripped package, we recommend using our Zapier integration instead of (or as-well-as) the MailChimp integration. In this case continue to the MailChimp + Zapier integration guide.
MailChimp Integration Guide
Connect to MailChimp
In the Doki sidebar, click the "Business" section (hint: it's the briefcase icon). Then click the "Integrations" tab. From the list of integrations, click the "Connect" button next to the MailChimp one. This will take you to a MailChimp login page. Here we'll ask you to "Connect Doki to your [MailChimp] account". This authorizes us to make requests to your account on your behalf.
After signing in and approving the connection, you'll be returned back to Doki with the MailChimp integration now expanded. You should see some information about your MailChimp account presented there now. Doki is now connected to MailChimp.
Adding a sign-up form to your course's landing page
Now that MailChimp is connected, you can add a sign-up form to your course landing pages. To do so, start by clicking into the appropriate Course you want to add a form to. From the "Marketing" tab, select "Integrations" and then click "MailChimp".
Doki will automatically fetch your MailChimp Forms from your account and populate the dropdown with them. To add a form to your course's landing page, select the form from the list and click "Save".
Now you'll have a form on your landing page that will subscribe any potential customers to the selected form when submitted. To preview the form, from the top right corner of the admin interface, click the "View" button. Scroll down your landing page until you see the form. You can see we also add a direct link to the MailChimp form next to the "Subscribe" button.
Subscribing students when a schedule starts
If you've created a course that is delivered on a drip schedule (either an auto-drip or class schedule), you can sync the delivery schedule with a MailChimp automation sequence by adding the list's ID in the "MailChimp List ID" field.
You'll want to paste the MailChimp Form ID in this field. To get your Form ID, in MailChimp, go to the "Lists" section and go to the list you want to subscribe to to trigger automation. From the "Settings" menu on the list, click the "List name and defaults" item. On the right you'll see a message that says "Some plugins and integrations may request your List ID. Typically, this is what they want: " followed by a "code" of some letters and numbers. That code is what you want to enter in the Mailchimp List ID field in Doki. Note: make sure you don't copy and paste the period at the end of that sentence. That is not part of the code.
Paste the form ID into the Doki field and click the "Update Schedule" button in the top right of the page.
What happens now depends on whether your schedule is an auto-drip schedule or a class schedule. For auto-drip schedules, Doki will subscribe the user to the MailChimp list as soon as they purchase the course. If you have any automation tied to the list, it will then be triggered by MailChimp. So you should design your automation in MailChimp to match the exact delays in your Doki schedule. So if you are delaying your first content unlock by 1 day, you can do the same for the MailChimp automation. OR, you might immediately trigger a "Welcome" email in MailChimp.
For class schedules, we don't subscribe until 12am on the day that the class starts. Everyone that has purchased the package will be subscribed to the MailChimp list at the same time, so any automation tied to that list will trigger for everyone in the class at once.
MailChimp + Zapier guide
To start, open the Working with Zapier guide in a new tab or window. You're going to follow along with steps 1 + 2 and then follow the link in step 3 to the Configuring a MailChimp Zap with Zapier guide. | https://docs.doki.io/article/10-mailchimp-integration | 2020-10-19T22:45:19 | CC-MAIN-2020-45 | 1603107866404.1 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5751e99d9033604d43dabc61/images/5810fd4ac697915f88a38c44/file-UzN9GPMVGf.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5751e99d9033604d43dabc61/images/5810fd56c697915f88a38c46/file-Vmfdsg4nD0.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5751e99d9033604d43dabc61/images/5810fe659033604deb0eb2e4/file-dP351D2x9O.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5751e99d9033604d43dabc61/images/5810fe7ac697915f88a38c5a/file-y7eSntaC3c.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5751e99d9033604d43dabc61/images/580fee849033604deb0eaa37/file-qcdf2GOHgs.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5751e99d9033604d43dabc61/images/580fee9c9033604deb0eaa38/file-VJCKcxOGsK.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5751e99d9033604d43dabc61/images/5810feb4c697915f88a38c62/file-1VGwWKB0OY.png',
None], dtype=object) ] | docs.doki.io |
The EMC Data Access API (EDAA) provides an MSA URI and interface that is used by Watch4net reporting. By default, NCM installs a CAS server and the EDAA URI is authenticated using the CAS server. The EDAA server certificate is available in the [product directory]\conf directory and must be imported into the keystore.
To establish a secure connection between Network Configuration Manager EDAA and the CAS server, do the following: | https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.1/ncm-installation-guide-10.1.1/GUID-0720CE89-2FF3-4E36-8AD0-9FEF5885C288.html | 2020-10-19T21:37:28 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.vmware.com |
Workspace ONE Access uses role-based access control to manage administrator roles. You assign the Super Admin, Directory Admin, and ReadOnly roles to directory user groups to manage administrative access to the cross-region Workspace ONE Access cluster.
You assign the Workspace ONE Access roles to the Workspace ONE Access user groups.
Procedure
- In a Web browser, log in to the Workspace ONE Access cross-region cluster by using the administration interface.
- On the main navigation bar, click Roles.
- Select the Super Admin role and click Assign.
- In the Users / groups search box, enter [email protected], select the group, and click Save.
- Repeat these steps to configure the Directory Admin and the ReadOnly Admin roles. | https://docs.vmware.com/en/VMware-Validated-Design/services/deployment-of-vrealize-suite-2019-on-vmware-cloud-foundation-310/GUID-B90E1BF8-91C9-4A5E-9D6D-8E298F4B2B23.html | 2020-10-19T22:13:57 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.vmware.com |
As a virtual infrastructure administrator, you need vRealize Operations Cloud to send email notifications to your advanced network engineers when critical alerts are generated for the mmbhost object, the host for many virtual machines that run transactional applications, where no one has yet taken ownership of the alert.
Prerequisites
- Ensure that you have at least one alert definition for which you are sending a notification. For an example of an alert definition, see Create an Alert Definition for Department Objects.
- Ensure that at least one instance of the Standard Email Plug-In is configured and running. See Add a Standard Email Plug-In for vRealize Operations Cloud Outbound Alerts.
Procedure
- In the menu, click Alerts and then in the left pane, click .
- Click Add to add a notification rule.
- In the Name text box, enter a name similar to Unclaimed Critical Alerts for mmbhost.
- In the Method area, select Standard Email Plug-In from the drop-down menu, and select the configured instance of the email plug-in.
- Configure the email options.
- In the Recipients text box, enter the email addresses of the members of your advance engineering team, separating the addresses with a semi-colon (;).
- To send a second notification if the alert is still active after a specified amount of time, enter the number of minutes in the Notify again text box.
- Type number of notifications that are sent to users in the Max Notifications text box.
- Set the Notification Status, you can either enable or disable a notification setting. Disabling a notification stops the alert notification for that setting and enabling it activates it again.
- Configure the scope of filtering criteria.
- From the Scope drop-down menu, select Object.
- Click Select an Object and enter the name of the object. In this example, type mmbhost.
- Locate and select the object in the list, and click Select.
- Configure the Notification Trigger.
- From the Notification Trigger drop-down menu, select Impact.
- From the adjacent drop-down menu, select Health.
- In the Criticality area, click Critical.
- Expand the Advanced Filters and from the Alert States drop-down menu, select Open.The Open state indicates that no engineer or administrator has taken ownership of the alert.
- Click Save.
Results. | https://docs.vmware.com/en/VMware-vRealize-Operations-Cloud/services/config-guide/GUID-AE26F747-EBC7-44DE-BACB-6B0B8593546C.html | 2020-10-19T22:32:54 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.vmware.com |
vSphere 7.0 provides various options for installation and setup. To ensure a successful vSphere deployment, you should understand the installation and setup options, and the sequence of tasks. You can deploy the vCenter Server appliance, a preconfigured virtual machine optimized for running vCenter Server and the vCenter Server components, on ESXi hosts or on vCenter Server instances.
For detailed information about the vCenter Server installation process, see vCenter Server Installation and Setup. | https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.install.doc/GUID-A71D7F56-6F47-43AB-9C4E-BAA89310F295.html | 2020-10-19T22:07:38 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.vmware.com |
. scene, throughout the entire scene. | https://docs.toonboom.com/help/harmony-16/premium/3d-integration/set-3d-model-scale-factor.html | 2018-11-13T05:07:39 | CC-MAIN-2018-47 | 1542039741219.9 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.toonboom.com |
MiXCR Immune Repertoire Analyzer @ BaseSpace®¶
If your data is deposited at the BaseSpace cloud platform you can perform repertoire extraction and calculate common analysis metrics like V/J usage without leaving the BaseSpace UI. The MiXCR Immune Repertoire Analyzer app is available to all BaseSpace users (this link works if you are logged in to BaseSpace, and this one points to the overview page available without authentication).
Input¶
The user interface of MiXCR Immune Repertoire Analyzer was specifically optimized to set up the best possible analysis pipeline with the minimal possible set of parameters, covering the majority of sequencing data types from which TCR/IG repertoires can be extracted.
The list of possible sequencing material sources includes, but is not limited to:
- all targeted TCR and IG profiling protocols (both RNA- and gDNA-derived, including multiplex-PCR and 5’RACE based techniques)
- non-enriched RNA-Seq data
- WGS
- single-cell data
Parameters¶
Starting material¶
Sets the type of starting material (RNA or Genomic DNA) of the library. This determines whether MiXCR will look for V introns.
Library type¶
This option determines whether the data will be treated as amplicon library with the same relative sequence architecture across all reads, where CDR3 is fully covered by each read. Or, randomly shred library like RNA-Seq or WGS, so MiXCR will perform assembly of target molecules from several reads.
“Random fragments” option works well for shallow libraries like RNA-Seq of solid tissue or sorted cell populations, but if the library is too rich with the target molecules (e.g. the library was additionally enriched using bait probes), using this option may drastically degrade both computational and “analytical” performance. In this case, select “Targeted TCR/IG library”; no partial CDR3 assembly will be performed, but the sequences extracted from reads with full CDR3 coverage should be enough for the analysis.
Targeted Library Parameters: 5’-end of the library¶
If you specify the library may contain V primer sequence on the 5’ end, and it was not trimmed (see “Presence of PCR primers and/or adapter sequences”), alignment of V genes will not be forcefully extended to the left side and clonotypes with the same clonal sequence (specified in “Target region”) but different V gene will not be separated into individual records.
Primers for some segments if accidentally annealed to non-target region may introduce chimeric sequences, and prevent exact V gene assignment, thus generating artificial clonotypes differing by only V gene assigned. Clonotype splitting by V gene is turned off to prevent generation of this type of artificial diversity.
Targeted Library Parameters: 3’-end of the library¶
If you specify the library may contain J or C primer sequence on the 3’ end, and it was not trimmed (see “Presence of PCR primers and/or adapter sequences”), alignment of J or C genes respectively will not be forcefully extended to the right side, and clonotypes with the same clonal sequence (specified in “Target region”) but different J or C gene will not be separated into individual records (the motivation is the same as for 5’-end).
Presence of PCR primers and/or adapter sequences¶
Specifies whether the adapter and primer sequences were trimmed for this data. This applies to V/J/C primers, 5’RACE adapters or any other sequence not originating from target molecules.
It affects V/J/C alignment parameters (see above), and also affects alignment of the V 5’UTR sequence (the 5’UTR will not be aligned if the sequence may contain 5’RACE adapter sequences, e.g. if the 5’-end of the library is not marked as containing a primer but “Presence of PCR primers and/or adapter sequences” is set to “May be present”).
Analysis Settings¶
Target region¶
Region of the sequence to assemble clones by. Only this part of the sequence will be available in the output.
Filter¶
- Filter out-of-frame CDR3s: output only in-frame reads.
- Filter out CDR3s with stop codon: don't output sequences containing stop codons in their target region.
Output¶
MiXCR Immune Repertoire Analyzer produces tab-separated tables containing comprehensive information about clonotypes detected during analysis (a short loading sketch follows the list below). This information includes:
- Clonal sequence
- Aggregated quality score values for clonal sequence
- Anchor positions inside clonal sequence
- Assigned V/D/J/C genes, along with corresponding aggregated alignment scoring
- Encoded alignments of V/J/C genes inside clonal sequence
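A minimal sketch of loading such an exported table for further work; the file name and the exact column names are assumptions, since the export layout depends on the app version:
import pandas as pd

# The clonotype table is a plain tab-separated text file.
clones = pd.read_csv("clones.txt", sep="\t")

print(clones.columns.tolist())  # check which of the columns listed above are present
print(clones.head())            # inspect the top clonotypes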
MiXCR Immune Repertoire Analyzer also contains useful statistics and corresponding charts for the clonesets, produced with VDJTools.
Graphical output is available for:
- Overall analysis statistics: total number of processed sequence, successfully aligned sequences, and number of sequences passed different clonotype assembling stages
- V and J usage
- Spectratype, marked with major clonotypes
- Spectratype, marked with V gene usage
- Quantile statistics
- And the table of top clonotypes marked with V, D and J genes
Edit Shipment Method
All the shipment methods available are listed in the parameter Shipment Method.
The steps to create a new shipment method are:
In the "Shipment Method Information" tab
- In the Shipment Name field enter the name of your shipment
- Set the Published radio button to Yes
- In the dropdown Shipment Method select the shipment method you wish to create
- In the top right toolbar, click Save. This step will load the configuration parameters of the shipment method you just created.
- Go on the Configuration tab, and configure the shipment method.
{tab=Information}
Shipment Name
Give a unique name to your shipment method. This name will appear in your checkout page.
Published
Must select YES for Published or the shipment method will NOT appear in your store.
Shipment Description
This explanation will appear in the checkout page so your customer can understand the choice(s).
Shipment Method
There is only one shipment method pre-installed in VirtueMart, so select it from the drop-down list.
Shopper Group
If your shipment method is available for all shopper groups, DO NOT SELECT any shopper group.
If you wish to restrict a shipment method to a specific shopper group, click in the box to display the drop-down menu and select it. The method will then be available only to that group.
List Order
If you use multiple shipment methods, put a 1 for this method to appear first, 2 to appear second, etc. Click the SAVE Button before moving onto the Configuration Tab so VirtueMart knows what configuration settings to show you.
{tab=Configuration}
Logo
If you have uploaded a small image to display on shipping method, select it here.
Countries
If you wish to ship to only some countries, click in the box and select the individual country names. If you will ship to any country, leave blank.
Zip range start and Zip Range end
To limit shipping to only certain zip codes, enter them here. Otherwise, leave blank.
Lowest Weight, Highest weight and Weight Unit
To limit shipping by weight, enter lowest and highest weights here. Otherwise, leave blank.
Minimum and Maximum Number of Products
To limit shipping by number of products, enter lowest and highest number of products here. Otherwise, leave blank.
Minimum and Maximum Order Amount
To limit shipping by price, enter lowest and highest prices here. Otherwise, leave blank.
Shipment Cost, Package Fee, Tax
How much will the customer pay for shipping if they meet the criteria you specified above? Will you charge a fee per package? If you are required to pay tax on shipping, select it here.
Minimum Amount for Free Shipment
If you want to allow free shipping, example on orders over $100, enter 100.00 here. Otherwise, leave blank. | http://docs.virtuemart.net/manual/shop-menu/edit-shipment-method.html | 2018-11-13T04:53:11 | CC-MAIN-2018-47 | 1542039741219.9 | [] | docs.virtuemart.net |
Access Your APIs
The AWS Mobile CLI and Amplify library make it easy to create and call cloud APIs and their handler logic from your JavaScript.
Set Up Your Backend
Create Your API
In the following examples you will create an API that is part of a cloud-enabled number guessing app. The CLI will create a serverless handler for the API behind the scenes.
To enable and configure an API
In the root folder of your app, run:
awsmobile cloud-api enable --prompt
When prompted, name the API Guesses.
? API name: Guesses
Name an HTTP path /number. This maps to a method call in the API handler.
? HTTP path name (/items): /number
Name your Lambda API handler function guesses.
? Lambda function name (This will be created if it does not already exists): guesses
When prompted to add another HTTP path, type N.
? Add another HTTP path (y/N): N
The configuration for your Guesses API is now saved locally. Push your configuration to the cloud.
awsmobile push
To test your API and handler
From the command line, run:
awsmobile cloud-api invoke Guesses GET /number
The Cloud Logic API endpoint for the Guesses API is now created.
Customize Your API Handler Logic
The AWS Mobile CLI has generated a Lambda function to handle calls to the Guesses API. It is saved locally in YOUR-APP-ROOT-FOLDER/awsmobilejs/backend/cloud-api/guesses. The app.js file in that directory contains the definitions and functional code for all of the paths that are handled for your API.
To customize your API handler
Find the handler for POST requests on the /number path. That line starts with app.post('/number',. Replace the callback function's body with the following:
// awsmobilejs/backend/cloud-api/guesses/app.js
app.post('/number', function(req, res) {
  const correct = 12;
  let guess = req.body.guess
  let result = ""

  if (guess === correct) {
    result = "correct";
  } else if (guess > correct) {
    result = "high";
  } else if (guess < correct) {
    result = "low";
  }

  res.json({ result })
});
Push your changes to the cloud.
awsmobile push
The Guesses API handler logic that implements your new number guessing functionality is now deployed to the cloud.
Connect to Your Backend
The examples in this section show how you would integrate AWS Amplify library calls using React (see the AWS Amplify documentation to use other flavors of JavaScript).
The following simple component could be added to a create-react-app project to present the number guessing game.
Make a Guess
The API module from AWS Amplify allows you to send requests to your Cloud Logic APIs right from your JavaScript application.
To make a RESTful API call
Import the API module from aws-amplify in the GuessNumber component file.
import { API } from 'aws-amplify';
Add the makeGuess function. This function uses the API module's post function to submit a guess to the Cloud Logic API.
async makeGuess() {
  const guess = parseInt(this.refs.guess.value);
  const body = { guess }
  const { result } = await API.post('Guesses', '/number', { body });
  this.setState({ guess: result });
}
Change the Guess button in the component's render function to invoke the makeGuess function when it is chosen.
<button type="submit" onClick={this.makeGuess.bind(this)}>Guess</button>
Open your app locally and test out guessing the number by running awsmobile run.
Your entire component should look like the following:
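The component listing itself was not preserved in this extract. The sketch below reassembles the pieces shown above (the API import, the makeGuess function, and the Guess button) into one possible GuessNumber component; the JSX layout and state handling are assumptions, not the original file.
import React, { Component } from 'react';
import { API } from 'aws-amplify';

class GuessNumber extends Component {
  state = { guess: null };

  // Submit the guess from the text input to the Guesses Cloud Logic API.
  async makeGuess() {
    const guess = parseInt(this.refs.guess.value);
    const body = { guess };
    const { result } = await API.post('Guesses', '/number', { body });
    this.setState({ guess: result });
  }

  render() {
    return (
      <div>
        <h1>Guess the number!</h1>
        <input ref="guess" type="text" placeholder="Enter a number" />
        <button type="submit" onClick={this.makeGuess.bind(this)}>Guess</button>
        {this.state.guess && <p>Your last guess was {this.state.guess}.</p>}
      </div>
    );
  }
}

export default GuessNumber;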
Next Steps
Learn how to retrieve specific items and more with the API module in AWS Amplify.
Learn how to enable more features for your app with the AWS Mobile CLI.
Learn more about what happens behind the scenes, see Set up Lambda and API Gateway. | https://docs.aws.amazon.com/aws-mobile/latest/developerguide/web-access-apis.html | 2018-06-18T00:15:45 | CC-MAIN-2018-26 | 1529267859904.56 | [] | docs.aws.amazon.com |
Profile Extensions
This page provides guidelines for managing Profile Extensions in UCS.
About Profile Extensions
Profile extensions are stored in the UCS database and are not JSON values as service extensions. Their management is similar to former extension management in Context Services. You must define database schemas before you can use them in your application.
Profile extensions are additional information which extend the standard contents of resources such as Customer Profile. An Extension is a record (a list of attributes) or an array of records, associated with a resource ID.
- You can define as many extension types as you need by creating an Extension Schema for each of them.
- Extension schema are created through Context Services (see List of Schema Operations), not through the Configuration Layer (Configuration Manager).
Extension records can be either:
- "single-valued": The extension contains a single record across the resource (for instance, LastName, FirstName, identifiers, etc.)
- "multi-valued": The extension can contain several values (for instance, phone numbers, e-mail addresses, etc.)
Extensions are provided at the same time and at the same level as the attributes of the resource. For instance, the following output presents a profile containing the attributes FirstName, LastName, DOB (Date Of Birth) and one multi-valued extension EmailAddress:
{
  "FirstName": "Bruce",
  "LastName": "Banner",
  "DOB": "1962-05-10",
  "EmailAddress": [
    "[email protected]",
    "[email protected]"
  ]
}
Unique Attributes
In the case of multi-valued extensions, the attributes which are part of the 'unique' list (specified in the Extension Schema) are used to identify records. The combination of these attributes' values must be unique across the related resource, and this enables UCS to identify a given record in the given extension. For example, consider a 'Bill' extension which includes the attribute bill_id. To ensure that a given service does not have two 'Bill' extensions with the same bill_id, set the following unique array in the extension schema:
unique = ["bill_id"]
The attributes of the unique list are mandatory at the extension record's creation. You need to provide values for the 'unique' attributes:
- At the creation of an extension record.
- In operations which update or delete a specific record, such as Update Record In Profile Extension or Delete Record From Profile Extension.
Limitations
- Once created, you cannot update the schema.
- When you are dealing with extensions or extension schema, make sure that you do not use one of the Unauthorized Strings as an attribute name or value.
Managing Extension Schema
Before you can start using extensions, you must create their schema.
You can create schema with the following operations:
Then, you can retrieve extension schema.
Example: Retrieving the schema for profile extensions:
GET /metadata/profiles/extensions
Result
200 OK
[
  {
    "name": "Phone",
    "type": "multi-valued",
    "attributes": [
      { "name": "PhoneType", "type": "integer", "default": 0, "mandatory": "true" },
      { "name": "prefix", "type": "string", "length": "3", "default": "555" },
      { "name": "PhoneNumber", "type": "integer", "length": 15, "mandatory": "true" },
      { "name": "description", "type": "string", "length": 32, "mandatory": "true" },
      { "name": "start_availabilty", "type": "datetime" },
      { "name": "end_availabilty", "type": "datetime", "mandatory": "false" }
    ]
  },
  {
    "name": "Address",
    "type": "single-valued",
    "attributes": [
      { "name": "AddressType", "type": "integer", "default": 0 },
      { "name": "Address", "type": "string", "length": 256 },
      { "name": "City", "type": "string", "length": 32 },
      { "name": "County", "type": "string", "length": 32 },
      { "name": "PostCode", "type": "string", "length": 10 },
      { "name": "Country", "type": "string", "length": 32 }
    ]
  }
]
Managing Extensions
Adding Extensions to a given Resource
You can add extensions when managing the resources with related operations which authorize the <extension n> attribute in the operation's body. In that case, if a former value of the extension exists for the given resource, this former extension value is replaced with the new extension value specified in the body.
Let's consider the following multi-valued extension record named 'Satisfaction'. The unique field which identifies records is "place" (the name of the proposed place for the booking).
Example: Records for a 'Satisfaction' extension
PUT /profiles/00027a52JCGY000M
{
  "FirstName": "Bruce",
  "LastName": "Banner",
  "DOB": "1962-05-10",
  "EmailAddress": [
    "[email protected]",
    "[email protected]"
  ],
  "Address": {
    "Type": 1,
    "Address": "21 JumpStreet",
    "City": "Hollywood",
    "County": "Santa Barbara",
    "PostCode": "555",
    "Country": "United States"
  }
}

POST /profiles/00027a52JCGY000M/extensions
{
  "customer_id": "00027a52JCGY000M",
  "Satisfaction": [
    {
      "rating": 2,
      "pertinence": 8,
      "usefull": true,
      "place": "Terranova mexico resort"
    },
    {
      "rating": 8,
      "pertinence": 4,
      "usefull": false,
      "place": "Fancy resort Paris"
    }
  ]
}

Example: Operation which updates the 'Satisfaction' extension
POST /profiles/00027a52JCGY000M/extensions
{
  "customer_id": "00027a52JCGY000M",
  "Satisfaction": [
    { …a single new record… }
  ]
}
As a result, the previous records 'Fancy resort Paris ' and 'Terranova mexico resort ' are lost. In this case, to add a new record to the extension, you must specify the whole extension content. For instance, note the following:
Example: Operation which updates the 'Satisfaction' extension without losing records
POST /profiles/00027a52JCGY000M/extensions
{
  "customer_id": "00027a52JCGY000M",
  "Satisfaction": [
    { …the new record… },
    {
      "rating": 2,
      "pertinence": 8,
      "usefull": true,
      "place": "Terranova mexico resort"
    },
    {
      "rating": 8,
      "pertinence": 4,
      "usefull": false,
      "place": "Fancy resort Paris"
    }
  ]
}
Retrieving Extensions
GET operations which enable to retrieve resources include the "extensions" parameter to specify a list of extensions to retrieve. By default, extensions are not returned. The following list is not exhaustive:
Deleting an Extension
To delete the extension of a given resource, use the related Update XXX Extension operation with no attributes in the operation's body.
POST /profiles/00027a52JCGY000M/extensions
[]
As explained in Role-Based Access Control, you need update privileges to clear the extension, as follows:
- Clear Profile Extension—UCS.Customer.updateProfileExtension
Layout Rendering Modes
As of Q2 2013 RadPropertyGrid exposes new property - RenderMode. It represents RadPropertyGrid's layout render mode and has two options:
Hierarchical: This is the default one and it will nest the PropertyFields into one another, when you have Nested PropertyDefinitions or grouping.
Flat: This mode represents RadPropertyGrid's new layout mechanism, which relies entirely on flat rendering of its elements. This allows the grouping process to be virtualized, which leads to very good performance when RadPropertyGrid is grouped and has a lot of data.
For compatibility reasons, Hierarchical mode is also preserved, but it is recommended to use Flat mode.
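Switching the mode is a single property setting; a minimal XAML sketch is shown below, assuming the telerik XML namespace is already mapped to the Telerik assemblies:
<telerik:RadPropertyGrid x:Name="propertyGrid1" RenderMode="Flat" />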
There are a number of benefits of using the Flat render mode. Some are listed below:
- Faster layout rendering and scrolling time for grouped RadPropertyGrid.
- Supported UI Virtualization for grouping scenarios.
Implementation of the application continuous delivery approach becomes much simpler and faster with a proper add-on for automated DevOps pipeline building. Being composed once, it ensures that the majority of Jenkins CI server configurations can be handled automatically, without your interference needed, each time it is applied. Eventually, you'll only need to perform some slight tuning according to the specifics of a particular project.
Configuration Add-on Appliance
So, let’s consider how to quickly integrate the considered CI/CD DevOps approach using the preliminary prepared Jenkins configuration template:
Add-on Installation
1. Navigate to the Jelastic dashboard (for the account your Jenkins environment is running at) and hover over the Tomcat 7 node for the appropriate environment. Click the Add-ons icon that appears to display the same-named tab with the list of suitable pluggable modules:
Locate your add-on (named Jenkins configuration in our case) and Install it with the appropriate button.
2. Wait for the process to be successfully completed and access the Jenkins dashboard (i.e. Open your environment in browser).
As you can see, our continuous integrator already has the list of jobs that have been prepared automatically.
Jenkins Post-Configurations
To complete the establishment of your Jenkins CI server, a few more slight adjustments should be applied to it:
1. At first, customize the list of jobs with your projects' data:
- specify the link to the repository with your project within the Build and Deploy action
- set the correct credentials inside the Execute shell box for the rest of the jobs
In order to access a particular job’s parameters and change them, use the Configure context menu option (can be called through clicking on the down arrow, which appears upon hovering over the corresponding job):
- {dev_user}@example.com - login (email address) of the dev user’s account
- {password} - password for the user, specified above
- {cloud_domain} - domain name of your Jelastic Platform
2. The last thing left to do is to bind the launch of the first job in our cycle (i.e. Create Environment) to commits in a repository. This can be done through the Webhook option; for example, for a GitHub repo it looks like the following:
Once the webhook is in place, each commit will trigger the chain, and the newly built application will appear at production.
With the Jelastic attachable add-on for Jenkins configuration already prepared, you can quickly apply this approach to any of your projects, deploying a new continuous delivery mechanism within a few clicks.
None], dtype=object)
array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2F02.png&x=1855&a=true&t=902e638988d88d97a4f10ec3d5524afe&scalingup=0',
None], dtype=object)
array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2F03.png&x=1855&a=true&t=902e638988d88d97a4f10ec3d5524afe&scalingup=0',
None], dtype=object)
array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2F04.png&x=1855&a=true&t=902e638988d88d97a4f10ec3d5524afe&scalingup=0',
None], dtype=object) ] | ops-docs.jelastic.com |
the openSockets
openSockets()
if line thisLine of the openSockets is "example.com:80" then beep
Use the openSockets function to find out which sockets need to be closed, or to check whether a socket is already open before reading from it or writing to it.
Value:
The openSockets function returns a list of the open sockets, one per line.
Each line of the list returned by the openSockets function is a socket identifier. A socket identifier consists of the host and port number the socket is connected to, separated by a colon. If a connection name or number was assigned when the socket was opened, it is appended to the identifier, separated from the port number by a vertical bar (|).
Note: Several of the commands and functions in the Internet library use sockets, so the openSockets function may return sockets opened by the Internet library in addition to any sockets you have opened with the open socket command.
Account Manager Help
The Outbound for PureEngage Cloud applications enable administrators and supervisors to administer and manage Outbound campaigns. Through an easy-to-use, intuitive web-based user interface, users can quickly and easily create and manage comprehensive outbound contact strategies, manage multiple contact lists, and create and apply compliance rules.
This document provides information about managing Outbound campaigns, from the perspective of an account manager.
Refer to the following help topics for more information.
Getting Started Guides
Campaigns
Lists
Compliance
Account Settings
Latest Videos
Working with Column Chooser
If you see the Column Chooser button on your dashboard's toolbar, then you can open the Column Chooser window. Use the Column Chooser window to choose which metrics to display on your dashboard, and which to hide. For example, your selection of dashboard metrics might be based on the particular aspects of team performance that are most relevant in order to meet specific operational targets. Access to the Column Chooser is tied to user roles. In some enterprises, a manager or system administrator will select the metrics for you, in which case you will not have the Column Chooser button.
Only metrics to which you have access are displayed in the Column Chooser window (see Role-Based Access and Permissions).
Selecting Time Profile Groups
The Column Chooser shows the metrics to which you have access, from all time profile groups. It enables you to select metrics from different time profile groups.
For example, the Available Metrics pane shows three entries for AHT (average handle time) for applications, one for each time profile group (Short, Medium, and Long). You can then choose to display one, two, or all three on the dashboard.
Default Columns
When the Column Chooser is launched for the first time, the set of displayed columns has not been configured in a previous session. The list of metrics on the Selected Metrics pane, and shown by default on the dashboard, corresponds to the default set of selected metrics configured in the Administration module.
Default Metrics Sort Order
The default sort order of metrics is set by the administrator.
Selecting Metrics for Dashboard Display
In the Column Chooser window, use the Select drop-down list at the top of the window to choose which set of metrics to display in the Selected Metrics pane. (The metrics listed in the Selected Metrics pane display on your dashboard.) You can add metrics to, or remove metrics from, this initial list.
Any metric in the Available Metrics pane is available for display, but is not currently displayed on the dashboard if there is no check mark beside the metric row. To add an available metric to your dashboard display, place a check mark beside the metric in the Available Metrics pane. When you click Apply, that metric will be displayed in the Selected Metrics list and will be added to your dashboard display.
To help you narrow your search for a specific metric in the Available Metrics pane, use the filters at the top of the pane. For example, you can search for metrics by channel, object type, and/or time profile group. To find a list of metrics that start with a specific letter, you can click that letter in the alphabet row, assuming the letter by which you want to filter the list is an active link (that is, there are metrics that begin with that letter in the Available Metrics pane). You can also enter some or all of a metric name, or a word in a description, into the Search field. This limits the display of metrics in the Available Metrics pane to the metric or metrics whose names or description contain your search criterion.
To remove a metric from your dashboard, you simply remove the check mark from that metric row in the Selected Metrics or Available Metrics pane. When you click Apply, that metric will be removed from the Selected Metrics pane, the check mark will be removed from that metric row in the Available Metrics pane, and the metric will be removed from your dashboard.
For related information, see Working with Metrics Libraries.
Migrating Payment Data to Spreedly
If you have credit cards already vaulted within another third party vault and would like to use them with Spreedly, there are two migration options:
- If you’d like to do a mass one-time migration where we import credit card data into the Spreedly vault, see our one-time import guide.
- If you’d like to do a self-managed gradual migration using the API for one of our gateways that accept Third Party Vaulting, this guide is for you.
Alternative Payment Methods
Alternative payment methods have different technical restrictions that may make them ineligible for migrations between service providers. If you’re considering a migration of Apple Pay or Android Pay tokens to the Spreedly vault, contact our support team to find out more. | https://docs.spreedly.com/guides/migrating/ | 2018-06-17T23:32:39 | CC-MAIN-2018-26 | 1529267859904.56 | [] | docs.spreedly.com |
eZ Studio 15.12.1 Release notes¶
The first sub-release of eZ Studio 15.12 is available as of February 2nd.
For the release notes of the corresponding (and included) eZ Platform sub-release, see eZ Platform 15.12.1 Release Notes.
Changes since 15.12¶
Summary of changes¶
- Enhanced Landing Page drag-and-drop interactions, including a better visualization of dropping blocks onto the page:
- Timeline toolbar now covers all changes in all Schedule Blocks on a given Landing Page.
- Timeline toolbar is now also available in View mode on Landing Pages:
- Added an Approval Timeline which lists all review requests for a given Content item:
- Modified template of the notification email sent to reviewers from Flex Workflow.
- Minor UI improvements (including: updated icons, labels, date picker and others):
- Added notification when copying a URL.
- Numerous bug fixes.
Full list of improvements¶
Full list of bugfixes¶
Upgrading a 15.12 Studio project¶
You can easily upgrade your existing Studio project in version 15.12 studio --unshallow to load the full history, and run the merge again.
The latter can be ignored, as it will be regenerated when we execute composer update later. The easiest way is to checkout the version from the tag, and add it to the changes:
From the upgrade-1.1.0:.1.0 branch: | https://ez-systems-developer-documentation.readthedocs-hosted.com/en/latest/releases/ez_studio_15.12.1_release_notes/ | 2018-12-10T04:52:03 | CC-MAIN-2018-51 | 1544376823303.28 | [array(['../img/LP_drag_and_drop_improved.png',
'Dropping a block onto a Landing Page'], dtype=object)
array(['../img/LP_in_view_mode.png',
'Landing Page in View mode with a Timeline'], dtype=object)
array(['../img/approval_timeline.png', 'Approval timeline screen'],
dtype=object)
array(['../img/new_datepicker.png', 'Datepicker'], dtype=object)] | ez-systems-developer-documentation.readthedocs-hosted.com |
Ktpass Overview
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
Ktpass.exe: Kerberos Keytab Setup
See Also
Concepts
Ktpass Remarks
Ktpass Syntax
Alphabetical List of Tools
How faceted search works
1) The normal search system allows you to:
- Search for all listings; by default, the All Categories and All Locations options are automatically selected (this is only available for the normal search form).
- The default search form performs a search INTO both category and location, meaning the listings have to be in BOTH the category that you select and the location that you select; if one or the other isn't selected, the listing won't be displayed.
2) The faceted search form allows you to:
- Split your categories into multiple fields so you don't end up with a super-long category/location field
- This search form won’t allow you to search for all categories/locations
- This search form performs a search INTO BOTH the CHILD category and the CHILD location
- This form will never search into the PARENT category or the PARENT location
- As with the form above, the listing that you are looking for has to be in BOTH the CHILD category and the CHILD location
So, for example, if you want to find all car dealers in Egypt, you would need to create an additional sublocation inside the "Egypt" location and call it, for example, "All around Egypt" (or anything else you might want to use). You can then assign every listing that is in Egypt to 1) its child location within Egypt (country/area/whatever) and 2) the "All around Egypt" sublocation. This way, in the faceted form you can select the Mercedes option in the child category field and then select the child location "All around Egypt" to search for them.
Changelog¶
1.18.0¶
2018-12-02
The work on geopy 2.0 has started, see the new geopy 2.0 doc section for more info. geopy 2.0 will drop support for Python 2.7 and 3.4. To ensure a smoother transition from 1.x to 2.0, make sure to check your code with warnings enabled (i.e. run python with the -Wd switch).
- ADDED: Geolake geocoder. Contributed by Yorick Holkamp. (#329)
- ADDED: BANFrance (Base Adresse Nationale) geocoder. Contributed by Sébastien Barré. (#336)
- ADDED: TomTom and AzureMaps: language param has been added to the reverse method.
- ADDED: Geonames geocoder now supports both findNearbyPlaceName and findNearby reverse geocoding methods, as chosen by a new find_nearby_type parameter of the reverse method. Contributed by svalee. (#327)
- ADDED: Geonames geocoder now supports returning a timezone for a particular Point via a new reverse_timezone method. Contributed by svalee. (#327)
- ADDED: Geonames geocoder’s reverse method now supports new parameters: lang and feature_code. Contributed by svalee. (#327)
- ADDED: Geonames now supports scheme parameter. Although the service itself doesn’t yet support https, it will be possible to enable https via this new parameter as soon as they add the support, without waiting for a new release of geopy.
- CHANGED: Geonames now builds Location.address differently: previously it looked like Kreuzberg, 16, DE, now it looks like Kreuzberg, Berlin, Germany.
- CHANGED: All warnings now specify a correct stacklevel so that the warnings point at the place in your code that triggered it, instead of the geopy internals.
- CHANGED: All warnings with UserWarning category which will be removed in geopy 2.0 now have the DeprecationWarning category.
- CHANGED: geopy.extra.rate_limiter.RateLimiter is no longer an experimental API.
- CHANGED: GoogleV3.timezone now issues a deprecation warning when at_time is a number instead of a datetime. In geopy 2.0 this will become an exception.
- CHANGED: GoogleV3.timezone method is now deprecated in favor of GoogleV3.reverse_timezone, which works exactly the same, except that it returns a new geopy.Timezone object, which is a wrapper for pytz timezone similarly to geopy.Location. This object also contains a raw response of the service. GoogleV3.timezone will be removed in geopy 2.0. (#332)
- CHANGED: Point constructor silently ignored the tail of the string if it couldn't be parsed, now it is not ignored. For example, 75 5th Avenue, NYC, USA was parsed as Point(75, 5), but now it would raise a ValueError exception.
- FIXED: GoogleV3.timezone method didn’t process errors returned by the API.
1.17.0¶
2018-09-13
- ADDED: OpenMapQuest now inherits from Nominatim. This adds support for all parameters and queries implemented in Nominatim (such as reverse geocoding). (#319)
- ADDED: Nominatim-based geocoders now support an extratags option. Contributed by Oleg. (#320)
- ADDED: Mapbox geocoder. Contributed by William Hammond. (#323)
- ADDED: ArcGIS now supports custom domain and auth_domain values. Contributed by Albina. (#325)
- ADDED: Bodies of unsuccessful HTTP responses are now logged with INFO level.
- CHANGED: Reverse geocoding methods now issue a warning for string queries which cannot be used to construct a Point instance. In geopy 2.0 this will become an exception.
- CHANGED: GoogleV3 now issues a warning when used without an API key.
- CHANGED: Parameters accepting bounding boxes have been unified to accept a pair of diagonal points across all geopy. Previous formats are still supported (until geopy 2.0) but now issue a warning when used.
- CHANGED: Path part of the API urls has been moved to class attributes in all geocoders, which allows to override them in subclasses. Bing and What3Words now store api urls internally differently.
- FIXED: TomTom and AzureMaps have been passing boolean values for typeahead in a wrong format (i.e. 0 and 1 instead of false and true).
1.16.0¶
2018-07-28
- ADDED: geopy.extra.rate_limiter.RateLimiter class, useful for bulk-geocoding a pandas DataFrame; see the short sketch after this list. See also the new Usage with Pandas doc section. (#317)
- CHANGED: Nominatim now issues a warning when the default user_agent is used against nominatim.openstreetmap.org. Please always specify a custom user-agent when using Nominatim. (#316)
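A minimal sketch of the RateLimiter usage mentioned above; the DataFrame contents are made up for illustration:

import pandas as pd
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

df = pd.DataFrame({"name": ["paris", "berlin", "london"]})

# Always set a custom user_agent (see the Nominatim warning above).
geolocator = Nominatim(user_agent="my-application")

# Wrap the geocode callable so bulk geocoding waits at least 1 second between calls.
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)

df["location"] = df["name"].apply(geocode)
df["point"] = df["location"].apply(lambda loc: tuple(loc.point) if loc else None)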
1.15.0¶
2018-07-15
- ADDED: GeocodeEarth geocoder based on Pelias (ex-Mapzen). (#309)
- ADDED: TomTom and AzureMaps (based on TomTom) geocoders. (#312)
- ADDED: HERE geocoder. Contributed by deeplook. (#304)
- ADDED: Baidu now supports authentication using SK via a new security_key option. Contributed by tony. (#298)
- ADDED: Nominatim’s and Pickpoint’s view_box option now accepts a list of Points or numbers instead of just stringified coordinates. Contributed by svalee. (#299)
- ADDED: Nominatim and Pickpoint geocoders now support a bounded option, which restricts results to the items strictly contained within the view_box. Contributed by Karimov Dmitriy. (#182)
- ADDED: proxies param of geocoders can now accept a single string instead of a dict. See the updated docs for the geopy.geocoders.options.default_proxies attribute for more details. Contributed by svalee. (#300)
- CHANGED: Mapzen has been renamed to Pelias, domain parameter has been made required. (#309)
- CHANGED: What3Words API has been updated from v1 to v2. Please note that Location.raw results have changed due to that. Contributed by Jonathan Batchelor. (#226)
- FIXED: Baidu mistakenly didn’t process the returned errors correctly. Contributed by tony. (#298)
- FIXED: proxies={} didn’t reset system proxies as expected.
1.14.0¶
2018-05-13
This release contains a lot of public API cleanup. Also make sure to check out the updated docs! A new Semver doc section has been added, explaining the geopy’s policy on breaking changes.
- ADDED: Nominatim geocoder now supports an addressdetails option in the reverse method. Contributed by Serphentas. (#285)
- ADDED: ArcGIS geocoder now supports an out_fields option in the geocode method. Contributed by Jonathan Batchelor. (#227)
- ADDED: Yandex geocoder now supports a kind option in the reverse method.
- ADDED: Some geocoders were missing format_string option. Now all geocoders support it.
- ADDED: geopy.distance.lonlat function for conveniently converting (x, y, [z]) coordinate tuples to the Point instances, which use (y, x, [z]); see the sketch after this list. Contributed by svalee. (#282)
- ADDED: geopy.geocoders.options object, which allows to configure geocoder defaults (such as User-Agent, timeout, format_string) application-wide. (#288)
- ADDED: Support for supplying a custom SSL context. See docs for geopy.geocoders.options.default_ssl_context. (#291)
- ADDED: Baidu geocoder was missing the exactly_one option in its reverse method.
- ADDED: GeocodeFarm now supports a scheme option.
- CHANGED: Baidu and Yandex geocoders now use https scheme by default instead of http.
- CHANGED: ArcGIS geocoder was updated to use the latest API. Please note that Location.raw results for geocode have changed a little due to that. Contributed by Jonathan Batchelor. (#227)
- CHANGED: Explicitly passed timeout=None in geocoder calls now issues a warning. Currently it means “use geocoder’s default timeout”, while in geopy 2.0 it would mean “use no timeout”. (#288)
- CHANGED: GoogleV3 geocode call now supports components without query being specified. (#296)
- CHANGED: GeoNames, GoogleV3, IGNFrance, OpenCage and Yandex erroneously had exactly_one=False by default for reverse methods, which must have been True. This behavior has been kept, however a warning will be issued now unless exactly_one option is explicitly specified in reverse calls for these geocoders. The default value will be changed in geopy 2.0. (#295)
- CHANGED: Point now throws a ValueError exception instead of normalizing latitude and tolerating NaN/inf values for coordinates. (#294)
- CHANGED: Vincenty usage now issues a warning. Geodesic should be used instead. Vincenty is planned to be removed in geopy 2.0. (#293)
- CHANGED: ArcGIS wkid option for reverse call has been deprecated because it was never working properly, and it won’t, due to the coordinates normalization in Point.
- FIXED: ArcGIS and What3Words did not respect exactly_one=False. Now they respect it and return a list of a single location in this case.
- FIXED: ArcGIS was throwing an exception on empty response of reverse. Now None is returned, as expected.
- FIXED: GeocodeFarm was raising an exception on empty response instead of returning None. Contributed by Arthur Pemberton. (#240)
- FIXED: GeocodeFarm had missing Location.address value sometimes.
- REMOVED: geopy.geocoders.DEFAULT_* constants (in favor of geopy.geocoders.options.default_* attributes). (#288)
- REMOVED: YahooPlaceFinder geocoder. (#283)
- REMOVED: GeocoderDotUS geocoder. (#286)
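A short sketch of the geopy.distance.lonlat helper added in this release; the coordinates are illustrative:

from geopy.distance import distance, lonlat

newport_ri_xy = (-71.312796, 41.49008)      # (x, y) == (longitude, latitude)
cleveland_oh_xy = (-81.695391, 41.499498)

# lonlat() converts (x, y, [z]) tuples into the (y, x, [z]) order that Point
# expects, so distance() receives latitude-first coordinates.
print(distance(lonlat(*newport_ri_xy), lonlat(*cleveland_oh_xy)).miles)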
1.13.0¶
2018-04-12
- ADDED: Pickpoint geocoder. Contributed by Vladimir Kalinkin. (#246)
- ADDED: Bing geocoder: additional parameters for geocoding (culture and include_country_code). Contributed by Bernd Schlapsi. (#166)
- ADDED: Point and Location instances are now picklable.
- ADDED: More accurate algorithm for distance computation geopy.distance.geodesic, which is now the default geopy.distance.distance. Vincenty usage is now discouraged in favor of the geodesic. This also adds a dependency of geopy on the geographiclib package. Contributed by Charles Karney. (#144)
- ADDED: Nominatim geocoder now supports a limit option and uses limit=1 for exactly_one=True requests. Contributed by Serphentas. (#281)
- CHANGED: Point now issues warnings for incorrect or ambiguous inputs. Some of them (namely not finite values and out of band latitudes) will be replaced with ValueError exceptions in the future versions of geopy. (#272)
- CHANGED: Point now uses fmod instead of % which results in more accurate coordinates normalization. Contributed by svalee. (#275, #279)
- CHANGED: When using http proxy, urllib’s install_opener was used, which was altering urlopen call globally. It’s not used anymore.
- CHANGED: Point now raises ValueError instead of TypeError when more than 3 arguments have been passed.
- FIXED: Point was raising an exception when compared to non-iterables.
- FIXED: Coordinates of a Point instance changed via __setitem__ were not updating the corresponding lat/long/alt attributes.
- FIXED: Coordinates of a Point instance changed via __setitem__ were not being normalized after assignment. Note, however, that attribute assignments are still not normalized. (#272)
- FIXED: Distance instances comparison was not working in Python3.
- FIXED: Yandex geocoder was sending API key with an incorrect parameter.
- FIXED: Unit conversions from feet were incorrect. Contributed by scottessner. (#162)
- FIXED: Vincenty destination function had an error in the formula implementation. Contributed by Hanno Schlichting. (#194)
- FIXED: Vincenty was throwing UnboundLocalError when difference between the two longitudes was close to 2*pi or either of them was NaN. (#187)
- REMOVED: geopy.util.NullHandler logging handler has been removed.
1.12.0¶
2018-03-13
- ADDED: Mapzen geocoder. Contributed by migurski. (#183)
- ADDED: GoogleV3 geocoder now supports a channel option. Contributed by gotche. (#206)
- ADDED: Photon geocoder now accepts a new limit option. Contributed by Mariana Georgieva.
- CHANGED: Use the IUGG mean earth radius for EARTH_RADIUS. Contributed by cffk. (#151)
- CHANGED: Use the exact conversion factor from kilometers to miles. Contributed by cffk. (#150)
- CHANGED: OpenMapQuest geocoder now properly supports api_key option and makes it required.
- CHANGED: Photon geocoder: removed osm_tag option from reverse geocoding method, as Photon backend doesn’t support it for reverse geocoding.
- FIXED: Photon geocoder was always returning an empty address.
- FIXED: Yandex geocoder was returning a truncated address (the name part of a place was missing).
- FIXED: The custom User-Agent header was not actually sent. This also fixes broken Nominatim, which has recently banned the stock urllib user agent.
- FIXED: geopy.util.get_version() function was throwing an ImportError exception instead of returning a version string.
- FIXED: Docs for constructing a geopy.point.Point were referencing latitude and longitude in a wrong order. Contributed by micahcochran and sjorek. (#207 #229)
- REMOVED: Navidata geocoder has been removed. Contributed by medecau. (#204)
1.11.0¶
2015-09-01
- ADDED: Photon geocoder. Contributed by mthh.
- ADDED: Bing supports structured query parameters. Contributed by SemiNormal.
- CHANGED: Geocoders send a User-Agent header, which by default is geopy/1.11.0. Configure it during geocoder initialization. Contributed by sebastianneubauer.
- FIXED: Index out of range error with no results using Yandex. Contributed by facciocose.
- FIXED: Nominatim was incorrectly sending view_box when not requested, and formatting it incorrectly. Contributed by m0zes.
1.10.0¶
2015-04-05
- CHANGED: GeocodeFarm now uses version 3 of the service’s API, which allows use by unauthenticated users, multiple results, and SSL/TLS. You may need to obtain a new API key from GeocodeFarm, or use None for their free tier. Contributed by Eric Palakovich Carr.
- ADDED: DataBC geocoder for use with the British Columbia government’s DataBC service. Contributed by Benjamin Trigona-Harany.
- ADDED: Placefinder’s geocode method now requests a timezone if the with_timezone parameter is true. Contributed by willr.
- FIXED: Nominatim specifies a viewbox parameter rather than the apparently deprecated view_box.
1.9.1¶
2015-02-17
- FIXED: Fix support for GoogleV3 bounds parameter. Contributed by Benjamin Trigona-Harany.
1.9.0¶
2015-02-12
- CHANGED: MapQuest geocoder removed as the API it uses is now only available to enterprise accounts. OpenMapQuest is a replacement for Nominatim-sourced data.
- CHANGED: Nominatim now uses HTTPS by default and accepts a scheme argument. Contributed by srounet.
- ADDED: Nominatim now accepts a domain argument, which allows using a different server than nominatim.openstreetmap.org. Contributed by srounet.
- FIXED: Bing was not accessible from get_geocoder_for_service. Contributed by Adrián López.
1.8.0¶
2015-01-21
- ADDED: NaviData geocoder added. Contributed by NaviData.
- CHANGED: LiveAddress now requires HTTPS connections. If you set scheme to be http, rather than the default https, you will now receive a ConfigurationError.
1.7.1¶
2015-01-05
- FIXED: IGN France geocoder’s address formatting better handles results that do not have a building number. Contributed by Thomas Gratier.
1.7.0¶
2014-12-30
- ADDED: IGN France. | https://geopy.readthedocs.io/en/latest/changelog_1xx.html | 2018-12-10T04:43:49 | CC-MAIN-2018-51 | 1544376823303.28 | [] | geopy.readthedocs.io |
Turbo has built-in session support. You can easily customize the session store and control the life cycle of a session.
The life cycle of a session
By default, the BaseBaseHandler class has a property called session. When the Tornado server prepares to serve a user request, if you call self.session yourself in prepare hooks or somewhere else before the on_finish hooks are called, a session_id will be added to the response headers, by default in a cookie.
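A minimal sketch of triggering session creation early in a request; the import path and handler layout are assumptions and may differ in your project:

from turbo.app import BaseBaseHandler   # assumed import path for the class mentioned above

class ProfileHandler(BaseBaseHandler):

    def prepare(self):
        # Accessing self.session before on_finish runs triggers session creation,
        # so a session_id is added to the response headers (a cookie by default).
        _ = self.session

    def get(self):
        self.write("ok")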
Release Checklist¶
This page describes the process of releasing new versions of Ephemeris.
This release checklist is based on the Pocoo Release Management Workflow.
This assumes a ~/.pypirc file exists with the following fields (variations are fine).

[distutils]
index-servers =
    pypi
    test

[pypi]
username:<username>
password:<password>

[test]
repository:
username:<username>
password:<password>
Review git status for missing files.
Verify the latest Travis CI builds pass.
make open-docs and review changelog.
Ensure the target release is set correctly in ephemeris/__init__.py (version will be a devN variant of target release).
make clean && make lint && make test
make release
This process will push packages to test PyPI, allow review, publish to production PyPI, tag the git repository, push the tag upstream. If custom changes to this process are needed, the process can be broken down into steps including:
make release-local
make push-release | https://ephemeris.readthedocs.io/en/latest/developing.html | 2018-12-10T04:20:05 | CC-MAIN-2018-51 | 1544376823303.28 | [] | ephemeris.readthedocs.io |
Link: Consistent access to the data you care about¶
link Release v1.2.7. (Installation)
Link was designed to deal with the growing number of databases, apis and environments needed to grow a technology and a team. Link provides a simple way to configure and connect to all of these different technologies.
Goals:
- Create an easy and simple development environment and process
- Make configuration easy for complex environments
- Allow people wrap their own apis, dbs, and other pieces of the system and plug them into link
Here is an example of grabbing data from a database and turning it into a dataframe:
In [1]: from link import lnk

In [2]: my_db = lnk.dbs.my_db

In [3]: df = my_db.select('select * from my_table').as_dataframe()

<class 'pandas.core.frame.DataFrame'>
Int64Index: 325 entries, 0 to 324
Data columns:
id         325 non-null values
user_id    323 non-null values
app_id     325 non-null values
name       325 non-null values
body       325 non-null values
created    324 non-null values
dtypes: float64(2), int64(3), object(4)
Contents: | https://link-docs.readthedocs.io/en/latest/ | 2018-12-10T05:35:29 | CC-MAIN-2018-51 | 1544376823303.28 | [] | link-docs.readthedocs.io |
Spec BlastTabularFileIn
FormattedFileIn abstraction for BlastTabular
Member Function Overview
Member Functions Inherited From FormattedFile
Interface Function Overview
bool onRecord(blastTabularIn); Returns whether the currently buffered line looks like the start of a record.
void readFooter(blastTabularIn); Read the footer (bottom-most section) of a BlastTabular file.
void readHeader(blastTabularIn); Read the header (top-most section) of a BlastTabular file.
void readRecord(blastRecord, blastTabularIn); Read a record from a file in BlastTabular format.
Interface Functions Inherited From FormattedFile
Interface Functions Inherited From FormattedFileIn
Interface Metafunction Overview
Interface Metafunctions Inherited From FormattedFile
Detailed Description
This is a FormattedFile specialization for reading BlastTabular formats. For details on how to influence the reading of files and how to differentiate between the tabular format without comment lines and the one with comment lines, see BlastIOContext. Please note that you have specify the type of the context as a template parameter to BlastTabularFileIn.
Overview
- open BlastTabularFileIn
- readHeader
- while onRecord
- readFooter
For a detailed example have a look at the Blast IO tutorial.
See Also
Interface Functions Detail
bool onRecord(blastTabularIn);
Parameters
Returns
Thrown Exceptions
Data Races
void readFooter(blastTabularIn);
Parameters
Thrown Exceptions
Data Races
void readHeader(blastTabularIn);
Parameters
Thrown Exceptions
Data Races
void readRecord(blastRecord, blastTabularIn);
Parameters
Remarks
This function will read an entire record from a blast tabular file, i.e. it will read the comment lines (if the format is COMMENTS) and 0-n BlastMatches belonging to one query.
Please note that if there are no comment lines in the file, the boundary between records is inferred from the identity of the first field, i.e. non-standard field configurations must also have Q_SEQ_ID as their first BlastMatchField.
Comment lines
The qId member of the record is read from the comment lines and the matches are resized to the expected number of matches succeeding the comments.
This function also sets many properties of blastTabularIn's BlastIOContext, including these members:
It also sets the blast program run-time parameter of the context depending on the information found in the comments. If the compile time parameter was set on the context and they are different this will result in a critical error.
Please note that for legacyFormat the fields member is always ignored, however fieldsAsStrings is still read from the comments, in case you want to process it.
In case you do not wish the fields to be read from the comments, you can set context.ignoreFieldsInComments to true. This will be prevent it from being read and will allow you to specify it manually which might be relevant for reading the match lines.
If the format is NO_COMMENTS none of the above happens and qId is derived from the first match.
Matches
A match line contains 1 - n columns or fields, 12 by default. The fields member of the context is considered when reading these fields. It is usually extracted from the comment lines but can also be set by yourself if there are no comments or if you want to overwrite the comments' information (see above). You may specify less fields than are actually present, in this case the additional fields will be discarded. The parameter is ignored if legacyFormat is set.
To differentiate between members of a BlastMatch that were read from the file and those that have not been set (e.g. both could be 0), the latter are initialized to their respective max-values.
Please note that the only transformations made to the data are the following:
In contrast to writeRecord no other transformations are made, e.g. the positions are still one-indexed and flipped for reverse strand matches. This is due to the required fields for retransformation (sequence lengths, frames) not being available in the default columns. | http://docs.seqan.de/seqan/2.1.0/specialization_BlastTabularFileIn.html | 2018-12-10T03:59:15 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.seqan.de |
You don't need to submit pull requests to contribute to the CodeBuddies community.
On Slack:
Help each other out with your coding questions.
Share useful or informative links.
Award points to other users by @username++ them.
On CodeBuddies.org:
Start a silent hangout to invite others to cowork with you
Start a teaching hangout to teach something you know
Start a collaborative hangout to pair program or work through a tutorial
Joining an organizing team
Join the #podcast channel if you're interested in producing podcast episodes (cross-posted on YouTube) as an interviewer or video editor
Join the #cb-newsletter channel if you're interested in helping out with the newsletter or Medium publication
Join #cb-sponsors-outreach if you're interested in being on the team that reaches out to sponsors
Join #cb-forum if you're interested in helping grow forum.codebuddies.org as a moderator
On our Open Collective:
Donate to our Open Collective to support us financially -- or if you know of a company who would like to sponsor, please connect us with them. :) Our finances are 100% transparent on the Open Collective, and will go to expenses like:
domain renewal
hosting costs
upgrading our database hosting
upgrading our mailgun plan (for sending transactional emails)
affording stickers to send to contributors | https://docs.codebuddies.org/community | 2018-12-10T05:15:47 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.codebuddies.org |
How to set up SSL
Secure Socket Layer (SSL) encrypts all of the communication between the web site and its visitors. This makes it so that when the buyer sends their credit card information over the internet it's kept completely private and secure. SSL is an absolute must if you plan on running an ecommerce site, no matter what CMS or plugin you use. If you're interested in the technology behind SSL check out this Wikipedia article.
How It Works
SSL works by having a digital certificate on your server that is officially and legally signed by a 3rd party company. There are a variety of companies that can create the certificates, and they've all been required to prove that they are reliable and secure.
When the browser connects to the server it validates the certificate and then creates an encrypted browsing session.
Getting A Certificate
If you want your site to run SSL you must buy one of these certificates. We recommend namecheap.com.
Important: An SSL certificate will only work on the exact domain name it was purchased for. www.example.com and example.com are different as far as SSL certificates go. You must choose one or the other and use only that one.
Installing Your Certificate
Installing an SSL certificate can be tricky. Unless you run your own server your host will either
- have an SSL management area in your hosting panel or
- install the certificate for you.
Either way the technical difficulties should be taken care of for you.
Once your certificate is installed properly your browser address bar should show a lock in it, like this:
If you click the lock you should see something similar to this:
Easy Digital Downloads Configuration
Once your SSL certificate is installed go to Downloads → Settings → Misc and enable Enforce SSL on Checkout.
This will force the browser to use the SSL version of your web site if it isn't already, ensuring that communications are encrypted.
Helper Plugins
The EDD setting above only ensures that your checkout page is using your SSL certificate. To force the rest of your site to do so as well we suggest using any one of the plugins listed below.
Related Articles
- 2Checkout Payment Gateway Setup Instructions
- Setup Documentation for CardSave Gateway
- NETbilling Payment Gateway Setup
- PayPal Adaptive Payments Setup
- PayPal Payments Advanced
- PayPal Website Payments Pro and PayPal Express Gateway
- PayPlug Payment Gateway - Set up
- BitPay Gateway Documentation
- Braintree Gateway Setup
- Check Payment Gateway Setup Documentation
- Coinbase Payment Gateway Setup Documentation
- GoCardless Payment Gateway
- PayU India Payment Gateway Setup Documentation
- Setup Documentation for Stripe Payment Gateway
- WePay Payment Gateway Documentation
- Easy Digital Downloads Misc Settings | https://docs.easydigitaldownloads.com/article/994-how-to-set-up-ssl | 2018-12-10T04:14:33 | CC-MAIN-2018-51 | 1544376823303.28 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/55c22157e4b01fdb81eb0975/file-caSGWI4p5P.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/55c22178e4b089486cad8d27/file-sHTZ55Ww3L.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/55c222e9e4b089486cad8d3a/file-uuDygbKA8q.png',
None], dtype=object) ] | docs.easydigitaldownloads.com |
Setup Database Connection Dialog
This article is relevant to entity models that utilize the deprecated Visual Studio integration of Telerik Data Access. The current documentation of the Data Access framework is available here.
The Setup Database Connection dialog is that part of the Telerik Data Access Create Model Wizard which enables you to specify the data source, connection options, and database used to generate the domain model.
The Setup Database Connection dialog will appear only if the selected domain model type is Populate from Database.
None], dtype=object) ] | docs.telerik.com |
Many companies only require that you replace certificates of services that are accessible externally. However, Certificate Manager also supports replacing solution user certificates. Solution users are collections of services, for example, all services that are associated with the vSphere Web Client. In multi-node deployments, replace the machine solution user certificate on the Platform Services Controller and the full set of solution user certificates on each management node.
About this task
When you are prompted for a solution user certificate, provide the complete signing certificate chain of the third-party CA.
The format should be similar to the following.
-----BEGIN CERTIFICATE-----
Signing certificate
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
CA intermediate certificates
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Root certificate of enterprise or external CA
-----END CERTIFICATE-----
Request a certificate for each solution user on each node from your third-party or enterprise CA. You can generate the CSR using vSphere Certificate Manager or prepare it yourself. The CSR must meet the following requirements:
Key size: 2048 bits or more (PEM encoded)
CRT format
x509 version 3
SubjectAltName must contain DNS Name=<machine_FQDN>
Each solution user certificate must have a different Subject. Consider, for example, including the solution user name (such as vpxd) or other unique identifier.
Contains the following Key Usages: Digital Signature, Non Repudiation, Key Encipherment
See also VMware Knowledge Base article 2112014, Obtaining vSphere certificates from a Microsoft Certificate Authority.
Procedure
- Start vSphere Certificate Manager and select option 5.
- Select option 2 to start certificate replacement and respond to the prompts.
vSphere Certificate Manager prompts you for the following information:
Password for [email protected].
Certificate and key for machine solution user.
If you run vSphere Certificate Manager on a Platform Services Controller node, you are prompted for the certificate and key (vpxd.crt and vpxd.key) for the machine solution user.
If you run vSphere Certificate Manager on a management node or an embedded deployment, you are prompted for the full set of certificates and keys (vpxd.crt and vpxd.key) for all solution users.
What to do next
If you are upgrading from a vSphere 5.x environment, you might have to replace the vCenter Single Sign-On certificate inside vmdir. See Replace the VMware Directory Service Certificate in Mixed Mode Environments. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-0133D240-CD49-400C-92D7-8E6AB69B3BC0.html | 2018-12-10T04:59:42 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.vmware.com |
There are two forms of the SELECT_SQL command. This section describes the free format version which allows any SQL that is valid for the particular database engine. No parsing is performed of the SQL either at compile time or runtime. The entered SQL command is passed exactly as it is to the database engine. It is the responsibility of the RDML programmer to ensure that the data returned by the database engine matches the list of fields in the FIELDS parameter. See SELECT_SQL for the other form of SELECT_SQL.
This form of the SELECT_SQL command can only be used in RDMLX functions and components.
The SELECT_SQL command is used in conjunction with the ENDSELECT command to form a "loop" to process one or more rows (records) from one or more tables (files).
For example, the following SELECT_SQL / ENDSELECT loop selects all values of product and quantity from the table ORDLIN and places them, one by one, in a list:
----> DEF_LIST NAME(#ALIST) FIELDS(#PRODUCT #QUANTITY)
--> SELECT_SQL FIELDS(#PRODUCT #QUANTITY)
| USING('SELECT "PRODUCT", "QUANTITY" FROM "MYDTALIB"."ORDLIN"')
|
| ADD_ENTRY(#ALIST)
|
---- ENDSELECT
Before attempting to use free format SELECT_SQL you must be aware of the following:
1. Information accessed via SELECT_SQL is for read only. If you use INSERT or UPDATE statements in your USING parameter you do so at your own risk.
2. SELECT_SQL does not use the IO Modules/OAMs so it bypasses the repository validation and triggers.
3. The SELECT_SQL command is primarily intended for performing complex extract/join/summary extractions from one or more SQL database tables (files) for output to reports, screens or other tables. It is not intended for use in high volume or heavy use interactive applications.
With that intention in mind, it must be balanced by the fact that SELECT_SQL is a very powerful and useful command that can vastly simplify and speed up most join/extract/summary applications, no matter whether the results are to be directed to a screen, a printer, or into another file (table).
4. The SELECT_SQL command provides very powerful database extract/join/summarize capabilities that are directly supported by the SQL database facilities. However, the current IBM i implementation of SQL may require and use significant resource in some situations. It is entirely the responsibility of the user to compare the large benefits of this command, with its resource utilization, and to decide whether it is being correctly used. One of the factors to consider is whether the USING parameter uses any non-key fields. If it does, then SELECT_SQL will probably be quicker than SELECT. Otherwise SELECT will probably be quicker. This is especially important when developing the program on Visual LANSA first with the intention of also running it on IBM i. This is because Visual LANSA has much fewer performance differences between SELECT and SELECT_SQL.
5. DO NOT break SELECT_SQL loops with GOTO commands as this may leave the SQL cursor open. You should use the LEAVE RDML command to exit SELECT_SQL loops instead.
6. This section assumes that the user is familiar with the SQL 'SELECT' command. This section is about how the SQL 'SELECT' command is accessed directly from RDML functions, not about the syntax, format and uses of the SQL 'SELECT' command.
If your command is incorrect then the following diagnosis is possible:
The extensive use of the SELECT_SQL command is not recommended for the following reasons:
REQUEST FIELD(#ANYSQL)
Select_Sql Fields(#STD_NUM) Using(#ANYSQL)
endselect.
and the end user could enter this on the screen: "delete from mylib.afile;select count(*) from mylib.afile"
Also See
7.110.1 SELECT_SQL Free Format Parameters
7.110.2 SELECT_SQL Free Format Examples
7.110.3 SELECT_SQL Free Format References
7.110.4 SELECT_SQL Free Format Coercions
Required
SELECT_SQL --- FIELDS ------- field name --------------------->
>-- USING -------- SQL select command ------------->
-----------------------------------------------------------------
Optional
>-- FROM_FILES --- file name ---------------------->
| |
------------ 20 max-----------
>-- IO_STATUS ---- field name --------------------->
*STATUS
>-- IO_ERROR ----- *ABORT -------------------------|
*RETURN
label | https://docs.lansa.com/14/en/lansa015/content/lansa/select_sql_free.htm | 2018-12-10T04:38:22 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.lansa.com |
Amazon SQS FIFO (First-In-First-Out) Queues
FIFO queues are available in the US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions. FIFO queues have all the capabilities of the standard queue.
For information about creating FIFO queues with or without server-side encryption using the AWS Management Console, the AWS SDK for Java (and the CreateQueue action), or AWS CloudFormation, see Creating an Amazon SQS Queue and Creating an Amazon SQS Queue with SSE.
FIFO (First-In-First-Out) queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can't be tolerated, for example:
Ensure that user-entered commands are executed in the right order.
Display the correct product price by sending price modifications in the right order.
Prevent a student from enrolling in a course before registering for an account.
FIFO queues also provide exactly-once processing but have a limited number of transactions per second (TPS):
By default, FIFO queues support up to 3,000 messages per second with batching. To request a limit increase, file a support request.
FIFO queues support up to 300 messages per second (300 send, receive, or delete operations per second) without batching.
Note
The name of a FIFO queue must end with the .fifo suffix. The suffix counts towards the 80-character queue name limit.
To determine whether a queue is FIFO, you can check whether the queue name ends with the suffix.
For best practices of working with FIFO queues, see Additional Recommendations for Amazon SQS FIFO Queues and Recommendations for Amazon SQS Standard and FIFO (First-In-First-Out) Queues .
For information about compatibility of clients and services with FIFO queues, see Compatibility.
Topics
Key Terms
The following key terms can help you better understand the functionality of FIFO queues. For more information, see the Amazon Simple Queue Service API Reference.
- Message Deduplication ID.
Note
Message deduplication applies to an entire queue, not to individual message groups.
Amazon SQS continues to keep track of the message deduplication ID even after the message is received and deleted.
- Message Group ID.
- Receive Request Attempt ID
The token used for deduplication of ReceiveMessage calls.
- Sequence Number
The large, non-consecutive number that Amazon SQS assigns to each message.
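The key terms above map directly onto message parameters. As an illustration only (this page itself doesn't document a specific SDK), here is a minimal sketch using the AWS SDK for Python (boto3); the queue and message values are placeholders:

import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end with the .fifo suffix.
queue_url = sqs.create_queue(
    QueueName="price-updates.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# MessageGroupId preserves ordering within a group; MessageDeduplicationId is
# optional here because content-based deduplication is enabled on the queue.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody="price update for product 123",
    MessageGroupId="product-123",
    MessageDeduplicationId="price-update-0001",
)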
To ensure that Amazon SQS preserves the order in which messages are sent and received, ensure that each producer uses a unique message group ID.
Exactly-Once Processing
Moving from a Standard Queue to a FIFO Queue
If you have an existing application that uses standard queues and you want to take advantage of the ordering or exactly-once processing features of FIFO queues, you need to configure the queue and your application correctly.
Note
You can't convert an existing standard queue into a FIFO queue. To make the move, you must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue.
Use the following checklist to ensure that your application works correctly with a FIFO queue.
By default, FIFO queues support up to 3,000 messages per second with batching. To request a limit increase, file a support request. FIFO queues support up to 300 messages per second (300 send, receive, or delete operations per second) without batching.
Compatibility
CreateObject Method
Creates an Automation object of the specified class. If the application is already running, CreateObject will create a new instance.
Note The CreateObject methods commonly used in the example code within this Help file (available when you click "Example") are made available by Microsoft Visual Basic or Microsoft Visual Basic for Applications (VBA). These examples do not use the same CreateObject method that is implemented as part of the object model in Outlook.
expression.CreateObject(ObjectName)
expression Required. An expression that returns an Application object.
ObjectName Required String. The class name of the object to create. For information about valid class names, see OLE Programmatic Identifiers.
Example
This VBScript example uses the Open event of the item to access Microsoft Internet Explorer and display the Web page.
Sub Item_Open()
    Set Web = CreateObject("InternetExplorer.Application")
    Web.Visible = True
    Web.Navigate ""
End Sub
This VBScript example uses the Click event of a CommandButton control on the item to access Microsoft Word and open a document in the root directory named "Resume.doc".
Sub CommandButton1_Click()
    Set Word = Application.CreateObject("Word.Application")
    Word.Visible = True
    Word.Documents.Open("C:\Resume.doc")
End Sub
Applies to | Application Object
See Also | Application Property | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2003/aa220083(v=office.11)?redirectedfrom=MSDN | 2020-05-25T15:59:45 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.microsoft.com |
This topic explains how to extend Virtualize's interface. Virtualize will automatically make a new responder available for configuring and sending request or response messages using that format. You can add instances of the new client/responder to your test scenarios or responder suites.
See Custom (General Procedure of Adding an Extension), (see General Procedure of Adding an Extension in Virtualize) and restart Virtualize.
- The values provided to the extension GUI are saved as a name-value String map. As a result, rearranging the fields in the form element in parasoft-extension.xml will not affect how the user values are saved; however, changing the ids will affect this. The ids are used to save/load the values so they need to be unique. If you change them, then previously-saved configurations will not load the previous values and will become empty. However, you can use a version updater to migrate old settings saved with old ids to a new set of ids.
- Only GUI fields with string values are supported in the custom form GUI. If your extension requires integers or other types, then you may convert the string content to the desired type in the extension implementation.
- Tables or lists can be implemented as comma-separated values in the string fields. | https://docs.parasoft.com/pages/?pageId=33860126&sortBy=createddate | 2020-05-25T13:44:16 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.parasoft.com |
Reference Information
This article contains the following sections:
- Accessing Technical Documentation
- Initializing the SDK Client
- Setting Authentication Credentials
- Using Digital Twins
- Using Sight Machine Data Models
- Generating Data Visualizations
Accessing Technical Documentation
You can find the complete technical documentation packaged with the SDK. The docstrings can also be accessed within Python. For example:
help(sm)
help(sm.client.Credentials)
help(sm.twin.Machine)
Initializing the SDK Client
The SDK Client provides methods for authentication and initializing new DigitalTwin and Plot objects.
To initialize the SDK Client:
- Run:
cli = sm.Client('<tenantname>', auto_login=False)
where '<tenantname>' is the name of your Sight Machine subdomain (e.g., ‘demo’ for demo.sightmachine.io). The Client will be used to inspect configurations, retrieve data, and generate visualizations.
Setting Authentication Credentials
After installing the SDK, you need to set authentication credentials. You only need to authenticate the first time that you use the SDK; the credentials will be stored for future use.
To set authentication credentials:
- Log in using the method provided by Sight Machine:
cli.login('basic', email='<[email protected]>', password='<password>')
cli.login('apikey', key='<apikey>')
Using Digital Twins
Digital Twins store the configuration for user-configured models such as Machine and Machine Type. Their most common application is to look up information that you can use to refine a query for data, such as the database-friendly name of a particular machine or machine type.
To use Digital Twins:
- Run:
dt_mch = cli.get_twin('Machine')
display(type(dt_mch))
df_mch_config = dt_mch.fetch_meta(cli.session)
display(type(df_mch_config), df_mch_config.shape)
display(df_mch_config.sort_values('Machine.source_clean').head(10))
Using Sight Machine Data Models
Sight Machine has developed a number of data models for contextualized data. Some, such as Machine Type and Machine, are user-configured models, while others, such as Cycles, Downtimes, and Defects, are generated by the AI Data Pipeline.
For information about interacting with models such as Machine, see Using Digital Twins above.
For more information about Sight Machine’s data models, reference the following:
Retrieving Data
The SDK provides a simple interface for downloading data from models such as Cycles, Downtimes, and Defects.
To retrieve data:
- Generate a query to limit the data returned.
The Sight Machine SDK supports a PyMongo-like query syntax. See the PyMongo Tutorial for examples. One notable difference is that the Sight Machine API does not support logical OR.
- You may wish to explore Digital Twins before generating a query. Get the Digital Twin for the machine that you are interested in gathering data from:
- Assemble the query:
- Use the query to fetch data. The data is returned as a pandas dataframe. The same function can be applied to any data model:
- You can now export the data, run an exploratory analysis, train a model, blend in data from other sources, or otherwise manipulate the data.
MACHINE_TYPE = 'Lasercut'
dt_lc = cli.get_twin('Machine', MACHINE_TYPE)

DATE_START = datetime(2017, 8, 6)
DATE_END = datetime(2017, 8, 7)
QUERY = {
    'endtime' : {'$gte' : DATE_START, '$lt' : DATE_END},
    'machine.source_type' : MACHINE_TYPE
}

df_lc_c = dt_lc.fetch_data(cli.session, 'cycle', QUERY, normalize=True)
display(df_lc_c.shape)
df_lc_dt = dt_lc.fetch_data(cli.session, 'downtime', QUERY, normalize=True)
df_lc_def = dt_lc.fetch_data(cli.session, 'defect', QUERY, normalize=True)
Generating Data Visualizations
You can generate basic visualizations, set chart titles, add overlays, and define panels using the SDK. Python also supports other visualization libraries. See the following:
- Generating Basic Visualizations
- Using Other Styling Methods
- Adding Overlays
- Generating Code and Customizing Plots
Generating Basic Visualizations
The SDK provides simple methods for generating basic visualizations.
Using Other Styling Methods
You can add a chart title.
To set the title of your chart:
- Run:
plt = cli.get_plot('line', df_tmp) plt.set_title('Temperature over Time') iplot(plt.plot())
Adding Overlays
The SDK offers support for adding overlays to plots using a Plotly feature called “shapes.” At present, the SDK provides methods for adding lines, rectangles, and circles/ovals.
The basic information needed to generate a shape include its location, form, and style. These attributes of the shape must be stored as columns in a dataframe, which the SDK will interpret.
To add overlays:
- Generate the x and y coordinates of the shape’s bounding box from your data. This example overlay is designed to be applied to a graph of temperature over time; select the appropriate data for your own chart.
- Set the form of the shape.
- If desired, set styling for the shape. This is optional; a default styling will be applied to any options that are not specified. Note that Plotly expects some style and layout options to be nested. To set these, insert a dictionary into each cell that contains the appropriate levels of nesting. The dataframe is treated as the outside level.
- After the shape definition dataframe is complete, apply it to your plot.
df_rect = df_cycle[['starttime','endtime']].copy(deep=True) df_rect = df_rect.rename(columns={'starttime': 'x0', 'endtime': 'x1'}) df_rect.loc[:, 'x0'] = pd.to_datetime( df_rect['x0'])df_rect.loc[:, 'x1'] = pd.to_datetime( df_rect['x1']) df_rect.loc[:, 'y0'] = 0df_rect.loc[:, 'y1'] = 1
df_rect.loc[:, 'xref'] = 'x' df_rect.loc[:, 'yref'] = 'paper' df_rect.loc[:, 'type'] = 'rect'
df_rect.loc[:, 'fillcolor'] = '#134936' df_rect.loc[:, 'opacity'] = 0.15 df_rect.reset_index(drop=True, inplace=True) df_rect['line'] = pd.Series([{ 'width': 0 } for i in range(len(df_rect.index))])
df_tmp = df_cycle[['endtime', 'temperature', 'shift']] plt = cli.get_plot('line', df_tmp) plt.add_overlay(sm.plot.Shape(df_rect, 'df_rect')) iplot(plt.plot())
For more information about Plotly shapes and the available options, see:
Generating Code and Customizing Plots
The SDK supports basic visualizations with Sight Machine styling. Advanced users can also customize the styling or other features of the plots.
The SDK generates the Python code used to make each plot. You can edit and run this code independently.
To generate the code and customize plots:
- Run:
- Copy the output into your environment (a new file, a Jupyter notebook cell, etc.) and edit as desired.
sys.stdout.write(str(plt6.generate_code()))sys.stdout.flush()
For more details, consult the Plotly reference: | https://docs.sightmachine.com/article/75-reference-information | 2020-05-25T14:56:53 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.sightmachine.com |
In addition to the statistics provided by each component, a user will want to know if any problems occur. While we could monitor.. | https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.0/bk_getting-started-with-apache-nifi/content/bulletins.html | 2017-11-17T19:10:32 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.hortonworks.com |
Object Handles
Drivers and user-mode components access most system-defined objects through handles. Handles are represented by the HANDLE opaque data type. (Note that handles are not used to access device objects or driver objects.)
For most object types, the kernel-mode routine that creates or opens the object provides a handle to the caller. The caller then uses that handle in subsequent operations on the object.
Here is a list of object types that drivers typically use, and the routines that provide handles to objects of that type.
When the driver no longer requires access to the object, it calls the ZwClose routine to close the handle. This works for all of the object types listed in the table above.
Most of the routines that provide handles take an OBJECT_ATTRIBUTES structure as a parameter. This structure can be used to specify attributes for the handle.
Drivers can specify the following handle attributes:
OBJ_KERNEL_HANDLE
The handle can only be accessed from kernel mode.
OBJ_INHERIT
Any children of the current process receive a copy of the handle when they are created.
OBJ_FORCE_ACCESS_CHECK
This attribute specifies that the system performs all access checks on the handle. By default, the system bypasses all access checks on handles created in kernel mode.
Use the InitializeObjectAttributes routine to set these attributes in an OBJECT_ATTRIBUTES structure.
For information about validating object handles, see Failure to Validate Object Handles.
Private Object Handles
Whenever a driver creates an object handle for its private use, the driver must specify the OBJ_KERNEL_HANDLE attribute. This ensures that the handle is inaccessible to user-mode applications.
Shared Object Handles
A driver that shares object handles between kernel mode and user mode must be carefully written to avoid accidentally creating security holes. Here are some guidelines:
Create handles in kernel mode and pass them to user mode, instead of the other way around. Handles created by a user-mode component and passed to the driver should not be trusted.
If the driver must manipulate handles on behalf of user-mode applications, use the OBJ_FORCE_ACCESS_CHECK attribute to verify that the application has the necessary access.
Use ObReferenceObjectByPointer to keep a kernel-mode reference on a shared handle. Otherwise, if a user-mode component closes the handle, the reference count goes to zero, and if the driver then tries to use or close the handle the system will crash.
If a user-mode application creates an event object, a driver can safely wait for that event to be signaled, but only if the application passes a handle to the event object to the driver through an IOCTL. The driver must handle the IOCTL in the context of the process that created the event and must validate that the handle is an event handle by calling ObReferenceObjectByHandle.
Send comments about this topic to Microsoft | https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/object-handles | 2017-11-17T20:21:51 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.microsoft.com |
.
Note
The name None cannot be reassigned (assignments to it, even as an attribute name, raise SyntaxError), so it can be considered a “true” constant.
The site module (which is imported automatically during startup, except if the -S command-line option is given) adds several constants to the built-in namespace. They are useful for the interactive interpreter shell and should not be used in programs. | https://docs.python.org/2.6/library/constants.html | 2017-11-17T19:15:31 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.python.org |
Syntax:
make_colour_rgb(red, green, blue);
Returns: value
GameMaker: Studio provides this function (as well as
others) to permit the user to make their own colours. This
particular function takes three component parts, the red,
the green and the blue components of the colour that
you wish to make. These values are taken as being between 0 and 255
so you can make 16,777,216 (256*256*256) colours with this! Below
you can see an image of how these components look when separated:
The image on the left is a
break-down of the individual components of the function, and then
on the right is an illustration of how changing these components
affects the end colour.
col = make_colour_rgb(100, 145, 255);
The above code uses the function to create a colour and store its value in the variable "col" for later use. | http://docs.yoyogames.com/source/dadiospice/002_reference/drawing/colour%20and%20blending/make_colour_rgb.html | 2017-11-17T19:30:35 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.yoyogames.com |
Tutorial: Basics
Setting up for the DSE Search tutorial includes creating a Cassandra node, importing data, and creating resources.
In this tutorial, you use some sample data from a health-related census.
Start DSE Search and download files
This setup assumes you started DataStax Enterprise in DSE Search mode and downloaded the sample data and tutorial files. The tutorial files include a CQL table definition, which uses a compound primary key. The partitioning key is the id column and the clustering key is the age column.
Procedure
- Download the sample data and tutorial files.
- Unzip the files you downloaded in the DataStax Enterprise installation home directory.A solr_tutorial46 directory is created that contains the following files.
- copy_nhanes.cql
The
COPYcommand you use to import data
- create_nhanes.cql
The Cassandra CQL table definition
- nhanes52.csv
The CSV (comma separated value) data
- schema.xml and solrconfig.xml
The Solr schema and configuration file for the advanced tutorial
- Take a look at these files using your favorite editor. | https://docs.datastax.com/en/datastax_enterprise/4.8/datastax_enterprise/srch/srchTutStrt.html | 2017-11-17T19:21:26 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.datastax.com |
Contents IT Service Management Previous Topic Next Topic Subscribe to my on-call calendar Add To My Docs Add selected topic Add selected topic and subtopics Subscribe to Updates Share Save as PDF Save selected topic Save selected topic and subtopics Save all topics in Contents Subscribe to my on-call calendar Subscribe to my on-call calendar You can subscribe to your on-call calendar using your personal calendar client. Before you begin Request your rota manager to send your on-call calendar subscription URL. This subscription URL is sent via an email notification. Ensure that your calendar client uses and supports the iCalendar format. Ensure that your calendar client provides the ability to subscribe to an external calendar. Attention: The end user must provide their instance credentials to authenticate themselves to use the subscription URL to view on-events. Currently, only the Calendar application on Mac OS X 8.0 onwards and the Outlook application on Windows 2013 onwards support authenticated calendar subscriptions. About this task Roles required: admin, rota_admin, itil, rota_manager Procedure On your calendar client, search for user assistance that assists you in navigating to the location where you can add a new external calendar subscription. Enter the subscription URL from the email notification you received earlier. Click Subscribe. Enter basic authentication details such as your login details for your ServiceNow® instance. Specify additional details such as how frequently the calendar is updated. Last Updated: 217 Tags: Products > IT Service Management > On-Call Scheduling; | https://docs.servicenow.com/bundle/istanbul-it-service-management/page/administer/on-call-scheduling/task/subscribe-to-my-on-call-schedule.html | 2017-11-17T19:19:04 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.servicenow.com |
BLT¶
Build, Link and Triumph
BLT is composition of CMake macros and several widely used open source tools assembled to simplify HPC software development.
BLT was released by Lawrence Livermore National Laboratory (LLNL) under a BSD-style open source license. It is developed on github under LLNL’s github organization:
BLT at a Glance¶
- Simplifies Setup
- CMake macros for:
- Creating libraries and executables
- Managing compiler flags
- Managing external dependencies
- Multi-platform support (HPC Platforms, OSX, Windows)
- Batteries included
- Built-in support for HPC Basics: MPI, OpenMP, and CUDA
- Built-in support for unit testing in C/C++ and Fortran
- Streamlines development processes
- Support for documentation generation
- Support for code health tools:
- Runtime and static analysis, benchmarking
BLT Developers¶
Developers include:
- Chris White ([email protected])
- Cyrus Harrison ([email protected])
- George Zagaris ([email protected])
- Kenneth Weiss ([email protected])
- Lee Taylor ([email protected])
- Aaron Black ([email protected])
- David A. Beckingsale ([email protected])
- Richard Hornung ([email protected])
- Randolph Settgast ([email protected])
- Peter Robinson ([email protected])
BLT User Tutorial¶
This tutorial is aimed at getting BLT users up and running as quickly as possible.
It provides instructions for:
- Adding BLT to a CMake project
- Setting up host-config files to handle multiple platform configurations
- Building, linking, and installing libraries and executables
- Setting up unit tests with GTest
- Using external project dependencies
- Creating documentation with Sphinx and Doxygen
The tutorial provides several examples that calculate the value of
by approximating the integral
using numerical
integration. The code is adapted from:.
The tutorial requires a C++ compiler and CMake, we recommend using CMake 3.8.0 or newer. Parts of the tutorial also require MPI, CUDA, Sphinx and Doxygen.
Tutorial Contents
- Setup BLT in your CMake Project
- Creating Libraries and Executables
- Portable compiler flags
- Unit Testing
- External Dependencies
- Creating Documentation
- CMake Recommendations | http://llnl-blt.readthedocs.io/en/latest/ | 2017-11-17T19:29:45 | CC-MAIN-2017-47 | 1510934803906.12 | [] | llnl-blt.readthedocs.io |
7. How app and an example works using configman¶
7.1. The minimum app¶
To illustrate the example, let’s look at an example of an app that uses
socorro_app to leverage
configman to run. Let’s look at
weeklyReportsPartitions.py
As you can see, it’s a subclass of the socorro.app.socorro_app.App
class which is a the-least-you-need wrapper for a minimal app. As you can see,
it takes care of logging and executing your
main function.
7.2. Connecting and handling transactions¶
Let’s go back to the
weeklyReportsPartitions.py cron script and take a look
at what it does.
It only really has one
configman option and that’s the
transaction_executor_class. The default value is
TransactionExecutorWithBackoff
which is the class that’s going to take care of two things:
- execute a callable that accepts an opened database connection as first and only parameter
- committing the transaction if there are no errors and rolling back the transaction if an exception is raised
- NB: if an
OperationalErroror
InterfaceErrorexception is raised,
TransactionExecutorWithBackoffwill log that and retry after configurable delay
Note that
TransactionExecutorWithBackoff is the default
transaction_executor_class but if you override it, for example by the
command line, with
TransactionExecutor no exceptions are swallowed and it
doesn’t retry.
Now, connections are created and closed by the ConnectionContext
class. As you might have noticed, the default
database_class defined in the
TransactionExecutor is
socorro.external.postgresql.connection_context.ConnectionContext as you can
see here
The idea is that any external module (e.g. Boto, PostgreSQL, etc) can define a
ConnectionContext class as per this model. What its job is is to create and
close connections and it has to do so in a contextmanager. What that means is
that you can do this:
connector = ConnectionContext() with connector() as connection: # opens a connection do_something(connection) # closes the connection
And if errors are raised within the
do_something function it doesn’t matter.
The connection will be closed.
7.3. What was the point of that?!¶
For one thing, this app being a
configman derived app means that all
configuration settings are as flexible as
configman is. You can supply
different values for any of the options either by the command line (try running
--help on the
./weeklyReportsPartitions.py script) and you can control
them with various configuration files as per your liking.
The other thing to notice is that when writing another similar cron script, all you need to do is to worry about exactly what to execute and let the framework take care of transactions and opening and closing connections. Each class is supposed to do one job and one job only.
configman uses not only basic options such as
database_password but also
more complex options such as aggregators. These are basically invariant options
that depend on each other and uses functions in there to get its stuff together.
7.4. How to override where config files are read¶
socorro_app supports multiple ways of picking up config files. The most
basic option is the –admin.conf= option. E.g.:
python myapp.py --admin.conf=/path/to/my.ini
The default if you don’t specify a
--admin.conf is that it will look for a
.ini file with the same name as the app. So if
app_name='myapp' and you
start it like this:
python myapp.py
it will automatically try to read
config/myapp.ini and if you want to
override the directory it searches in you have to set an environment variable
called
DEFAULT_SOCORRO_CONFIG_PATH like this:
export DEFAULT_SOCORRO_CONFIG_PATH=/etc/socorro python myapp.py
Which means it will try to read
/etc/socorro/myapp.ini.
NOTE: If you specify a
DEFAULT_SOCORRO_CONFIG_PATH that directory must
exist and be readable or else you will get an
IOError when you try to start
your app. | http://socorro.readthedocs.io/en/latest/architecture/socorro_app.html | 2017-11-17T18:58:30 | CC-MAIN-2017-47 | 1510934803906.12 | [] | socorro.readthedocs.io |
Product: Posh Hair
Product Code: ps_ac2340
DAZ Original: Yes
Created by: Bobbie25
Released: February 4, 2008
Required Products: Aiko 4.0 Base, Victoria 4.1 Base, Stephanie 3.0 Petite Base, Victoria 3.0 Base, Aiko 3.0 Base
You can find new icons for loading this product in the following Poser libraries:
Product Title: Posh Hair
Author: umblefugly & Bobbie25
Author E-mail: Bobbie25designs(at)yahoo.com (change (at) for @)
Product Date:01-28-08\ClassicBobHair\
\Runtime\Libraries\Hair\Posh Hair\
\Runtime\libraries\materials\Posh Hair\
\Runtime\libraries\Pose\Posh Hair\
\Runtime\textures\B25BobHair\ | http://docs.daz3d.com/doku.php/artzone/azproduct/6132 | 2017-11-17T19:07:01 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.daz3d.com |
Four main modules within Streaming Analytics Manager offer services to different personas in an organization:
The following subsections describe responsibilities for each persona. For additional information, see the following chapters in this guide:
A platform operator typically manages the Streaming Analytics Manager platform, and provisions various services and resources for the application development team. Common responsibilities of a platform operator include:
Installing and managing the Streaming Analytics Manager application platform.
Provisioning and providing access to services (e.g big data services like Kafka, Storm, HDFS, HBase) for use by the development team when building stream apps.
Provisioning and providing access to environments such as development, testing, and production, for use by the development team when provisioning stream apps..
More Information
See Managing Stream Apps for more information about creating and managing the Streaming Analytics Manager environment.
The application developer uses the Stream Builder component to design, implement, deploy, and debug stream apps in Streaming Analytics Manager.
The following subsections describe component building blocks and schema requirements.
More Information
Stream Builder offers several building blocks for stream apps: sources, processors, sinks, and custom components.
Source builder components are used to create data streams. SAM has the following sources:
Kafka
Azure Event Hub
HDFS
Processor builder components are used to manipulate events in the stream.
The following table lists processors that are available with Streaming Analytics Manager.
Sink builder components are used to send events to other systems.
Streaming Analytics Manager supports the following sinks:
Kafka
Druid
HDFS
HBase
Hive
JDBC
OpenTSDB
Notification (OOO support Kafka and the ability to add custom notifications)
Cassandra
Solr
For more information about developing custom components, see SDK Developer Persona...
More Information
See the Gallery of Superset Visualizations for visualization examples
Streaming Analytics Manager supports the development of custom functionality through the use of its SDK.
More Information
Adding Custom Builder Components | https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.0/bk_overview/content/sam-personas.html | 2017-11-17T19:19:11 | CC-MAIN-2017-47 | 1510934803906.12 | [array(['figures/3/images/construct-diagram.png', None], dtype=object)] | docs.hortonworks.com |
Blueprint information settings control who can access a blueprint and how many machines they can provision with it.
Prerequisites
Log in to the vRealize Automation console as a tenant administrator or business group manager.
Gather the following information from your fabric administrator:
Note:
The name and location of the WinPE ISO image.
The name of the WIM file, the UNC path to the WIM file, and the index used to extract the desired image from the WIM file.
The user name and password under which to map the WIM image path to a network drive on the provisioned machine.
For Dell iDRAC integrations where the image is located on a CIFS share that requires authentication, the user name, and password to access the share.
If you do not want to accept the default, K, the drive letter to which the WIM image path is mapped on the provisioned machine.
You fabric administrator might have provided this information in a build profile.
Procedure
- Select and select the type of blueprint you are creating.
- Enter a name in the Name text box.
- (Optional) : Enter a description in the Description text box.
- (Optional) : Select the Master check box to allow users to copy your blueprint.
-.
Results
Your blueprint is not finished. Do not navigate away from this page. | https://docs.vmware.com/en/vRealize-Automation/6.2/com.vmware.vra.iaas.physical.doc/GUID-A9CD4FF1-86FD-4FEF-A3F9-80DB9A71EE89.html | 2017-11-17T19:58:42 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.vmware.com |
Orchestrator uses the Mozilla Rhino 1.7R4 JavaScript engine. However, the implementation of Rhino in Orchestrator presents some limitations.
When writing scripts for workflows, you must consider the following limitations of the Mozilla Rhino implementation in Orchestrator.
When a workflow runs, the objects that pass from one workflow element to another are not JavaScript objects. What is passed from one element to the next is the serialization of a Java object that has a JavaScript image. As a consequence, you cannot use the whole JavaScript language, but only the classes that are present in the API Explorer. You cannot pass function objects from one workflow element to another.
Orchestrator runs the code in scriptable task elements in a context that is not the Rhino root context. Orchestrator transparently wraps scriptable task elements and actions into JavaScript functions, which it then runs. A scriptable task element that contains
System.log(this);does not display the global object
thisin the same way as a standard Rhino implementation does.
You can only call actions that return nonserializable objects from scripting, and not from workflows. To call an action that returns a nonserializable object, you must write a scriptable task element that calls the action by using the System.getModuleModuleName.action() method.
Workflow validation does not check whether a workflow attribute type is different from an input type of an action or subworkflow. If you change the type of a workflow input parameter, for example from VIM3:VirtualMachine to VC:VirtualMachine, but you do not update any scriptable tasks or actions that use the original input type, the workflow validates but does not run. | https://docs.vmware.com/en/vRealize-Orchestrator/6.0/com.vmware.vrealize.orchestrator-dev.doc/GUID-2BDAC8BD-8A5D-4ACE-AD4B-45E3F24DE6DB.html | 2017-11-17T19:58:24 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.vmware.com |
. Of course, this module can be used on Windows only.
Scriptom is especially interesting if you are developing Groovy shell scripts under Windows. You can combine both Groovy code and any Java library with the platform-specific features available to Windows Scripting Host or OLE COM automation from Office.
Installation
Zip bundle
The easiest way for installing Scriptom is to unzip the Zip bundle in your %GROOVY_HOME% directory.
The distribution contains the jacob.jar and jacob.dll, and the scriptom.jar. The DLL needs to be in the bin directory, or in your java.library.path to be loaded by jacob.jar.
Building from sources
If you are brave enough and prefer using the very latest fresh version from CVS Head, you can build Scriptom from sources. Checkout modules/scriptom, and use Maven to do the installation automatically. If your %GROOVY_HOME% points at the target/install directory of your groovy-core source tree, just type:
Otherwise, if you have installed Groovy in a different directory, you have two possibilities, either you change the property groovy.install.staging.dest to your %GROOVY_HOME% directory in the project.properties file, and run maven, or you can type:
Usage
Let's say we want to script Internet Explorer. First, we're going to import the ActiveX proxy class.
Then, we're going to create a GroovyObjectSupport wrapper around the ActiveXComponent class of Jacob. And now, we're ready to use properties or methods from the component:
Note however that explorer.Visible returns a proxy, if you want to get the real value of that property, you will have to use the expression explorer.Visible.value or explorer.Visible.getValue().
Limitations
For the moment, Scriptom is in a beta stage, so you may encounter some bugs or limitations with certain ActiveX or COM component, so don't hesitate to post bugs either in JIRA or on the mailing lists. There may be some issues with the mappings of certain objects returned by the component and the Java/Groovy counterpart.
An important limitation for the first release is that it is not yet possible to subscribe to events generated by the components you are scripting. In the next releases, I hope I will be able to let you define your own event handlers with closures, with something like:
But for the moment, event callbacks are not supported. the Windows Shell object
Scripting Windows Media Player
When event callbacks are supported, you will be able to subscribe to the player.statusChange event, so that you can play the wav entirely, before loading a new sample (instead of listening only to the first second of each sample).
Converts a Word document into HTML
This program takes a Word document as first parameter, and generate an HTML file with the same name, but with the .html extension. | http://docs.codehaus.org/pages/viewpage.action?pageId=18143 | 2015-02-27T04:13:21 | CC-MAIN-2015-11 | 1424936460472.17 | [] | docs.codehaus.org |
User Guide
Local Navigation
About smart tags
Organizations might add smart tags to items such as posters, flyers, or t-shirts. When you tap the smart tag reader on the back of your BlackBerry® smartphone against a smart tag, your smartphone views the smart tag and presents you with the options to view or delete the smart tag. Smart tags can contain a range of information including web addresses, telephone numbers, email addresses, coupons, graphics, media files, event details, and more.
Next topic: View, save, or delete a smart tag
Previous topic: Smart tags
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/36022/About_smart_posters_61_1439268_11.jsp | 2015-02-27T04:24:29 | CC-MAIN-2015-11 | 1424936460472.17 | [] | docs.blackberry.com |
(PECL quickhash >= Unknown)
QuickHash.
Exemplo #1 QuickHashIntHash::delete() example
<?php
$hash = new QuickHashIntHash( 1024 );
var_dump( $hash->exists( 4 ) );
var_dump( $hash->add( 4, 5 ) );
var_dump( $hash->delete( 4 ) );
var_dump( $hash->exists( 4 ) );
var_dump( $hash->delete( 4 ) );
?>
O exemplo acima irá imprimir algo similar à:
bool(false) bool(true) bool(true) bool(false) bool(false) | http://docs.php.net/manual/pt_BR/quickhashinthash.delete.php | 2015-02-27T04:08:14 | CC-MAIN-2015-11 | 1424936460472.17 | [] | docs.php.net |
numpy.random.dirichlet¶
- numpy.random.dirichlet(alpha, size=None)¶
Draw") | http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.dirichlet.html | 2015-02-27T04:07:31 | CC-MAIN-2015-11 | 1424936460472.17 | [] | docs.scipy.org |
]
Returns a list of images created by the specified pipeline.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
list-image-pipeline-images --image-pipeline-arn <value> [--filters <value>] [--max-results <value>] [--next-token <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--image-pipeline-arn (string)
The Amazon Resource Name (ARN) of the image pipeline whose images you want to view.
--filters (list)
The filters.
image pipeline pipeline images
The following list-image-pipeline-images example lists all images that were created by a specific image pipeline.
aws imagebuilder list-image-pipeline-images \ --image-pipeline-arn arn:aws:imagebuilder:us-west-2:123456789012:image-pipeline/mywindows2016pipeline
Output:
{ "requestId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111", "imagePipelineList": [ { "arn": "arn:aws:imagebuilder:us-west-2:123456789012:image-pipeline/mywindows2016pipeline", "name": "MyWindows2016Pipeline", "description": "Builds Windows 2016 Images", "platform": "Windows", "imageRecipeArn": "arn:aws:imagebuilder:us-west-2:123456789012:image-recipe/mybasicrecipe/2019.12.03", "infrastructureConfigurationArn": "arn:aws:imagebuilder:us-west-2:123456789012:infrastructure-configuration/myexampleinfrastructure", "distributionConfigurationArn": "arn:aws:imagebuilder:us-west-2:123456789012:distribution-configuration/myexampledistribution", "imageTestsConfiguration": { "imageTestsEnabled": true, "timeoutMinutes": 60 }, "schedule": { "scheduleExpression": "cron(0 0 * * SUN)", "pipelineExecutionStartCondition": "EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE" }, "status": "ENABLED", "dateCreated": "2020-02-19T19:04:01.253Z", "dateUpdated": "2020-02-19T19:04:01.253Z", "tags": { "KeyName": "KeyValue" } }, { "arn": "arn:aws:imagebuilder:us-west-2:123456789012:image-pipeline/sam", "name": "PipelineName", "platform": "Linux", "imageRecipeArn": "arn:aws:imagebuilder:us-west-2:123456789012:image-recipe/recipe-name-a1b2c3d45678/1.0.0", "infrastructureConfigurationArn": "arn:aws:imagebuilder:us-west-2:123456789012:infrastructure-configuration/infrastructureconfiguration-name-a1b2c3d45678", "imageTestsConfiguration": { "imageTestsEnabled": true, "timeoutMinutes": 720 }, "status": "ENABLED", "dateCreated": "2019-12-16T18:19:02.068Z", "dateUpdated": "2019-12-16T18:19:02.068Z", "tags": { "KeyName": "KeyValue" } } ] }
For more information, see Setting Up and Managing an EC2 Image Builder Image Pipeline Using the AWS CLI in the EC2 Image Builder Users Guide.
requestId -> (string)
The request ID that uniquely identifies this request.
imageSummaryList -> (list)
The list of images built by this pipeline.
(structure)
An image summary.
arn -> (string)The Amazon Resource Name (ARN) of the image.
name -> (string)The name of the image.
type -> (string)Specifies whether this is an AMI or container image.
version -> (string)The version of the image.
platform -> (string)The platform of the image.
osVersion -> (string)The operating system version of the instance. For example, Amazon Linux 2, Ubuntu 18, or Microsoft Windows Server 2019.
state -> (structure)
The state of the image.
status -> (string)The status of the image.
reason -> (string)The reason for the image's status.
owner -> (string)The owner of the image.
dateCreated -> (string)The date on which this image was created.
outputResources -> (structure)
The output resources produced when creating this image.
amis -> (list)
The EC2 AMIs created by this image.
(structure)
Details of an EC2 AMI.
region -> (string)The AWS Region of the EC2 AMI.
image -> (string)The AMI ID of the EC2 AMI.
name -> (string)The name of the EC2 AMI.
description -> (string)The description of the EC2 AMI. Minimum and maximum length are in characters.
state -> (structure)
Image state shows the image status and the reason for that status.
status -> (string)The status of the image.
reason -> (string)The reason for the image's status.
accountId -> (string)The account ID of the owner of the AMI.
containers -> (list)
Container images that the pipeline has generated and stored in the output repository.
(structure)
A container encapsulates the runtime environment for an application.
region -> (string)Containers and container images are Region-specific. This is the Region context for the container.
imageUris -> (list)
A list of URIs for containers created in the context Region.
(string)
tags -> (map)
The tags of the image.
key -> (string)
value -> (string)
nextToken -> (string)
The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects. | https://docs.aws.amazon.com/cli/latest/reference/imagebuilder/list-image-pipeline-images.html | 2021-05-06T14:07:35 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.aws.amazon.com |
Synthetic assets are an important part of a decentralized ecosystem. They allow decentralized infrastructure to offer assets that track the price of any system.
Right now existing protocols for synthetics (like Synthetix and UMA) require you to put your assets into a base token in order to collateralise the creation of your synthetic asset, taking on risk and losing your exposure to the underlying assets of your choice.
Instead of this, we are allowing users to use their LP tokens to create synthetic assets, so they will still have their exposure to the underlying assets and the LP fees from that trade pair.
Despite these inefficiencies the market for synthetics on Ethereum is already >$1B marketcap and >$1B TVL.
We believe BAO will increase that dramatically by making it practical to partake in synthetic asset creation.
Most people do not understand synthetics.
They think that derivatives are only good for synthetic stocks. They just want to buy stocks like $APPL or $GME on the blockchain 24/7.
While that is a good part of synthetics, it is just replicating our existing market on the blockchain. What is far more powerful and exciting is that with synthetics we can do something new.
With synthetics you can turn any quantifiable discrete event into a financial feed that people can buy and sell.
You can create an asset based on the “Unemployment Rate in Hong Kong” or “Average Annual Rainfall in Seattle” or the “Number of iPhones Sold Annually”.
These synthetic assets allow you to buy long or short depending on if you think the number will go up or down, and this creates powerful use cases.
You could now give anyone in the world performance rewards based on specific criteria.
Imagine the Mayor of a city that in addition of his base salary, could also get a bonus package from a basket of tokens representing the inverse unemployment rate, the city happiness score, and the inverse crime rate in their city. The better results they create for their city, the more their tokens are worth..
We use the terms "soft" synthetics and "hard" synthetics to refer to two different types of synthetic asset, defined below. We will implement them separately, as you can see from our roadmap.
Soft synthetics are indexes. Well known indexes in the space currently are DPI, Indexed.Finance (CC10, ORA5, DEGEN, etc.) and PieDao. The ponds feature that will be implemented by Panda is our first soft synthetic mechanism we will create, based on Balancer v2. An index of the meme coins, the "memedex" will be featured on BSC.
Hard synthetics are synthetics assets that links price to data. That data could itself be a price, like $TSLA stock price, the price of 1 once of gold, or something more exotic like the number of goals Neymar Jr scores, rainfall levels or crime rates etc. The value of the synthetic asset is backed by collateral. in BAO's case, we will be focusing on allowing the use of LP tokens as that collateral.
To enable the users to create synthetic assets, the Bao Finance team will be creating a fork of the Balancer and Synthetix ecosystems. Instead of this synthetic asset ecosystem being required to have a single token (like SNX) users will be able to create synthetic assets using a combination of bao tokens and their LP tokens as collateral.
The specific collateral cap for each asset, and the overall collateral cap will be decided upon by users.
Synthetic assets will rely on oracles for their pricing information and will support:
Uniswap V2 Oracles
Chainlink Oracles
Augur Outcome Oracles
Band Protocol Oracles
The Bao team will create and deploy the synthetic asset contracts, but the activation for this protocol will not take place until the BaoDao is handed over to the management of the community. | https://docs.bao.finance/synthetic-assets | 2021-05-06T13:09:01 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.bao.finance |
MemoryDataStore¶
We do have one
MemoryDataStore suitable for storing temporary information in memory prior to saving it out to disk.
Please be advised that it is set up to accurately mirror information being located on disk and is not performant in
any way. That said it works; and is pretty easy to stuff your data into.
This implementation is actually offered by the ``gt-main`` module, it is being documented here for consistency.
References:
MemoryDataStore (javadocs)
-
Create¶
MemoryDataStore is not fast - it is for testing. Why is it not fast do you ask? Because we use it to be strict and model what working with an external service is like (as such it is going to copy your data again and again and again).
Unlike most
DataStores we will be creating this one by hand, rather than using a factory.:
MemoryDataStore memory = new MemoryDataStore(); // you are on the honour system to only add features of the same type memory.addFeature( feature ); ...
Q: Why is this so slow?
It is slow for two reasons:
It is not indexed, every access involves looking at every feature and applying your filter to it
It duplicates each feature (so you don’t accidentally modify something outside of a transaction)
It probably duplicates each feature in order to apply the feature for extra slowness.
Q: Given me something faster
The gt-main DataUtilities offers several high performance alternatives to
MemoryDataStore.
Examples¶
Using a
MemoryDataStoreto alter content.
Thanks to Mau Macros for the following example:
SimpleFeatureSource alter( SimpleFeatureCollection collection, String typename, SimpleFeatureType featureType, final List<AttributeDescriptor> newTypes) { try { // Create target schema SimpleFeatureTypeBuilder buildType = new SimpleFeatureTypeBuilder(); buildType.init(featureType); buildType.setName(typename); buildType.addAll(newTypes); final SimpleFeatureType schema = buildType.buildFeatureType(); // Configure memory datastore final MemoryDataStore memory = new MemoryDataStore(); memory.createSchema(schema); collection.accepts( new FeatureVisitor() { public void visit(Feature feature) { SimpleFeatureBuilder builder = new SimpleFeatureBuilder(schema); builder.init((SimpleFeature) feature); for (AttributeDescriptor descriptor : newTypes) { builder.add(descriptor.getDefaultValue()); } SimpleFeature newFeature = builder.buildFeature(feature.getIdentifier().getID()); memory.addFeature(newFeature); } }, null); return memory.getFeatureSource(typename); } catch (Exception e) { e.printStackTrace(); } return null; } | https://docs.geotools.org/latest/userguide/library/data/memory.html | 2021-05-06T13:28:33 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.geotools.org |
Retrieving data returned in the response
The data returned in the response depends on the parameters sent in the payment request, the payment type, the settings of your shop and the notification format.
The data is always sent by the payment gateway using the POST method.
The first step consists in retrieving the contents received via the POST method.
Examples:
In PHP, data is stored in the super global variable $_POST,
In ASP.NET (C#), you must use the Form property of the HttpRequest class,
In Java, you must use the getParameter method of the HttpServletRequest interface.
The response consists of a field list. Each field contains a response value. The field list can be updated.
The script will have to create a loop to retrieve all the transmitted fields.
It is recommended to test the presence of the vads_hash field, which is only present during a notification.
if (empty ($_POST)){ echo 'POST is empty'; }else{ echo 'Data Received '; if (isset($_POST['vads_hash'])){ echo 'Form API notification detected'; //Signature computation //Signature verification //Order Update } } | https://docs.lyra.com/en/collect/form-payment/subscription-token/retrieving-data-returned-in-the-response.html | 2021-05-06T12:13:04 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.lyra.com |
Configuring DNSSEC for a domain.
You can protect your domain from this type of attack, known as DNS spoofing or a man-in-the-middle attack, by configuring Domain Name System Security Extensions (DNSSEC), a protocol for securing DNS traffic.
Amazon Route 53 supports DNSSEC signing as well as DNSSEC for domain registration. If you want to configure DNSSEC signing for a domain that's registered with Route 53, see Configuring DNSSEC signing in Amazon Route 53.
Topics
Overview of how DNSSEC protects your domain
When you configure DNSSEC for your domain, a DNS resolver establishes a chain of trust for responses from intermediate resolvers. The chain of trust begins with the TLD registry for the domain (your domain's parent zone) and ends with the authoritative name servers at your DNS service provider. Not all DNS resolvers support DNSSEC. Resolvers that don't support DNSSEC don't perform any signature or authenticity validation.
Here's how you configure DNSSEC for domains registered with Amazon Route 53 to protect your internet hosts from DNS spoofing, simplified for clarity:
Use the method provided by your DNS service provider to sign the records in your hosted zone with the private key in an asymmetric key pair.
Important
Route 53 supports DNSSEC signing as well as DNSSEC for domain registration. To learn more, see Configuring DNSSEC signing in Amazon Route 53.
Provide the public key from the key pair to your domain registrar, and specify the algorithm that was used to generate the key pair. The domain registrar forwards the public key and the algorithm to the registry for the top-level domain (TLD).
For information about how to perform this step for domains that you registered with Route 53, see Adding public keys for a domain.
After you configure DNSSEC, here's how it protects your domain from DNS spoofing:
Submit a DNS query, for example, by browsing to a website or by sending an email message.
The request is routed to a DNS resolver. Resolvers are responsible for returning the appropriate value to clients based on the request, for example, the IP address for the host that is running a web server or an email server.
If the IP address is cached on the DNS resolver (because someone else has already submitted the same DNS query, and the resolver already got the value), the resolver returns the IP address to the client that submitted the request. The client then uses the IP address to access the host.
If the IP address isn't cached on the DNS resolver, the resolver sends a request to the parent zone for your domain, at the TLD registry, which returns two values:
The Delegation Signer (DS) record, which is a public key that corresponds with the private key that was used to sign the record.
The IP addresses of the authoritative name servers for your domain.
The DNS resolver sends the original request to another DNS resolver. If that resolver doesn't have the IP address, it repeats the process until a resolver sends the request to a name server at your DNS service provider. The name server returns two values:
The record for the domain, such as example.com. Typically this contains the IP address of a host.
The signature for the record, which you created when you configured DNSSEC.
The DNS resolver uses the public key that you provided to the domain registrar (and the registrar forwarded to the TLD registry) to do two things:
Establish a chain of trust.
Verify that the signed response from the DNS service provider is legitimate and hasn't been replaced with a bad response from an attacker.
If the response is authentic, the resolver returns the value to the client that submitted the request.
If the response can't be verified, the resolver returns an error to the user.
If the TLD registry for the domain doesn't have the public key for the domain, the resolver responds to the DNS query by using the response that it got from the DNS service provider.
Prerequisites and maximums for configuring DNSSEC for a domain
To configure DNSSEC for a domain, your domain and DNS service provider must meet the following prerequisites:
The registry for the TLD must support DNSSEC. To determine whether the registry for your TLD supports DNSSEC, see Domains that you can register with Amazon Route 53.
The DNS service provider for the domain must support DNSSEC.
Important
Route 53 supports DNSSEC signing as well as DNSSEC for domain registration. To learn more, see Configuring DNSSEC signing in Amazon Route 53.
You must configure DNSSEC with the DNS service provider for your domain before you add public keys for the domain to Route 53.
The number of public keys that you can add to a domain depends on the TLD for the domain:
.com and .net domains – up to thirteen keys
All other domains – up to four keys
Adding public keys for a domain
When you're rotating keys or you're enabling DNSSEC for a domain, perform the following procedure after you configure DNSSEC with the DNS service provider for the domain.
To add public keys for a domain
If you haven't already configured DNSSEC with your DNS service provider, use the method provided by your service provider to configure DNSSEC.
Sign in to the AWS Management Console and open the Route 53 console at
.
In the navigation pane, choose Registered domains.
Choose the name of the domain that you want to add keys for.
At the DNSSEC status field, choose Manage keys.
Specify the following values:
- Key type
Choose whether you want to upload a key-signing key (KSK) or a zone-signing key (ZSK).
- Algorithm
Choose the algorithm that you used to sign the records for the hosted zone.
- Public key
Specify the public key from the asymmetric key pair that you used to configure DNSSEC with your DNS service provider.
Note the following:
Specify the public key, not the digest.
You must specify the key in base64 format.
Choose Add.
Note
You can only add one public key at a time. If you need to add more keys, wait until you receive a confirmation email from Route 53.
When Route 53 receives a response from the registry, we send an email to the registrant contact for the domain. The email either confirms that the public key has been added to the domain at the registry or explains why the key couldn't be added.
Deleting public keys for a domain
When you're rotating keys or you're disabling DNSSEC for the domain, delete public keys using the following procedure before you disable DNSSEC with your DNS service provider. Note the following:
If you're rotating public keys, we recommend that you wait for up to three days after you add the new public keys to delete the old public keys.
If you're disabling DNSSEC, delete public keys for the domain first. We recommend that you wait for up to three days before you disable DNSSEC with the DNS service for the domain.
If DNSSEC is enabled for the domain and you disable DNSSEC with the DNS service, DNS
resolvers that support
DNSSEC will return a
SERVFAIL error to clients, and the clients won't be able to access the endpoints
that are associated with the domain.
To delete public keys for a domain
Sign in to the AWS Management Console and open the Route 53 console at
.
In the navigation pane, choose Registered domains.
Choose the name of the domain that you want to delete keys from.
At the DNSSEC status field, choose Manage keys.
Find the key that you want to delete, and choose Delete.
Note
You can only delete one public key at a time. If you need to delete more keys, wait until you receive a confirmation email from Amazon Route 53.
When Route 53 receives a response from the registry, we send an email to the registrant contact for the domain. The email either confirms that the public key has been deleted from the domain at the registry or explains why the key couldn't be deleted. | https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-configure-dnssec.html | 2021-05-06T13:45:37 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.aws.amazon.com |
Step 4: Specify the App to Deploy to the Instance
Tell AWS OpsWorks Stacks about the app that you will deploy to the instance later in this walkthrough. In this context, AWS OpsWorks Stacks defines an app as code you want to run on an instance. (For more information, see Apps.)
The procedure in this section applies to Chef 12 and newer stacks. For information about how to add apps to layers in Chef 11 stacks, see Step 2.4: Create and Deploy an App - Chef 11.
To specify the app to deploy
In the service navigation pane, choose Apps:
The Apps page is displayed. Choose Add an app. The Add App page is displayed.
For Settings, for Name, type
MyLinuxDemoApp. (You can type a different name, but be sure to substitute it for
MyLinuxDemoAppthroughout this walkthrough.)
For Application Source, for Repository URL, type
Leave the defaults for the following:
Settings, Document root (blank)
Data Sources, Data source type (None)
Repository type (Git)
Repository SSH key (blank)
Branch/Revision (blank)
Environment Variables (blank KEY, blank VALUE, unchecked Protected Value)
Add Domains, Domain Name (blank)
SSL Settings, Enable SSL (No)
Choose Add App. AWS OpsWorks Stacks adds the app and displays the Apps page.
You now have an app with the correct settings for this walkthrough.
In the next step, you will launch the instance. | https://docs.aws.amazon.com/opsworks/latest/userguide/gettingstarted-linux-specify-app.html | 2021-05-06T12:44:17 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.aws.amazon.com |
Yes there is. What is this number? It depends.
If a specific service (like calculations service), is used in standalone web application mode, then we expose it to Tomcat without any alteration. Tomcat by default allows 200 concurrent (worker) threads. (Can be changed by server.tomcat.threads.max property. Be aware it also has a maximum number for accepted connections which is 8192, and can be changed by server.tomcat.max-connections property.) But this is also influenced by the OS settings and the available memory. (A process can not sprout endless threads.)
If the service is used in microservices system mode then our gateway service have hystrix circuit breaker installed which only allows 10 running threads and 90 waiting requests. These of course can be configured. For configuration detail please find our documentation or the hystrix's documentation. It is generally better to scale up the waiting requests rather than active ones since too many active tasks can cause throttling. If they have to scale, it is better to scale out the number of executing nodes and load balance the requests.
If the client runs a service in production for many users he is better off with many small requests than few huge ones. If the service is for a very few (even one) individual then few huge requests (with many structures) can have a performance benefit. It must be told: after a certain size the performance gain will be negligible. The number of ideal structures are also influenced by the requested method, and the "size" of the structures (and even the size of the structure representation, a.k.a. the chemical format). As a general rule of thumb I can say: requests should not take longer than 1 seconds. 1-2000 molecules can be a good number for that (but it depends). If a request takes more than 1 second it is more costly to experience any kind of error and it also limits the number of concurrent requests. In one second the total communication cost is less than 1% of the whole process, if you move beyond that it is meaningless.
It is configurable. The default configuration using h2 db, but you can change it to PostgreSQL as well. See configuration details here.
If your JChem Microservices DB configuration is using h2 backend, then it is possible to configure the service to access the embedded h2 db console. You need to add the following lines to application.properties file:
spring.h2.console.enabled=true spring.h2.console.path=/h2 spring.h2.console.settings.web-allow-others=true
After restarting the service the database console is available at the localhost's 8062 port.
Logging parameters are configured in the application.properties file. The default values are:
It is supported through external beans with introduction of a new logic (filter, endpoint, filter, logging, health check, etc.). An example is available here.
In the case molecule types with tautomerHandlingMode=GENERIC parameter, similarity search gives false results. These is no workaround at the moment, please do not execute similarity search in table having molecule type with com.chemaxon.zetor.types[n].tautomerHandlingMode=GENERIC parameter specified in the application.properties file.
From version 21.9.0 similarity search works correctly even in the case of molecule types with tautomerHandlingMode=GENERIC parameter.
If the default schema gcrdb is set in the application.properties file, the additional parameters - taken from JChem Engines cache and memory calculator like com.chemaxon.zetor.settings.molecule.cachedObjectCount are not taken into account. If you want to set these additional parameters, please set com.chemaxon.zetor.settings.scheme= mapdb | https://docs.chemaxon.com/display/docs/jchem-microservices-faq-and-known-issues.md | 2021-05-06T13:15:35 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.chemaxon.com |
Highly customizable reports can be generated through Mautic's Report menu.
Choose the data source appropriate to the report you want. Each data source has a different set of available columns, filters and graphs.
Types of data sources available are as below -
Each report can be customized to include the columns of choice. Filter data based on set criteria and/or set a specific order for the data. In addition you can also group by and select different function operators to calculate fields. Note that when you select functions operators a totals row will be added to the report. This totals row will not be exported when selecting to export a report.
Some reports have graphs available. Select the graph desired from the left list - it will move to the right and will be part of the report.
Each graph of each report is made available as a widget on the dashboard allowing complete customization of the dashboard.
Enable or disable sending reports via email by using the toggle switch.
It is possible to schedule emails which will send reports to one or more email addresses. In the To field, enter a comma-separated list of email addresses and set the frequency of sending reports by choosing day, week, or month from the drop-down list.
Since version 3.2 it is possible to send a report once. This may be helpful if the report takes some time to load. The cron job will process the request, and send the result by email.
The problem is that email attachments cannot be too large, as this could prevent them being sent. If the file is greater than 5MB (configurable file attachment limit) then the CSV file will be zipped. If the zip package is within the file attachment limit then it will be sent as an email attachment. If it is still too big the zip file will be moved to a more permanent location, and the email will contain a link to download it. The download link works only for logged in Mautic users.
If someone tries to download a compressed CSV report that had been deleted for whatever reason, Mautic will schedule the report to NOW again, and send the user the email notification when the CSV report has been created.
The one-time report export and send can be configured 2 ways:
By scheduling the email:
By a button from the report list:
The button is available only for non-scheduled reports as it would reset configuration for scheduled reports.
To be able to send scheduled reports, the following cron command is required:
php /path/to/mautic/bin/console mautic:reports:scheduler [--report=ID]
The
--report=ID argument allows you to specify a report by ID if required. For more information, see Cron jobs. | https://docs.mautic.org/en/reports | 2021-05-06T12:41:59 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.mautic.org |
You can show data markers for a line chart.
For some charts, like line charts, you can show data markers.
To show data markers, follow these steps:
While viewing your search or answer as a chart, click the chart configuration icon
on the top right.
Click Settings at the bottom of the Customize menu.
Select Data Markers. | https://docs.thoughtspot.com/6.1/end-user/search/show-data-markers.html | 2021-05-06T12:58:57 | CC-MAIN-2021-21 | 1620243988753.97 | [array(['/6.1/images/chart-config-data-markers.gif',
'Show data markers Show data markers'], dtype=object)] | docs.thoughtspot.com |
SearchIQ is in Beta.
Sometimes, notice a section under. | https://docs.thoughtspot.com/6.1/end-user/search/teach-searchiq.html | 2021-05-06T13:28:03 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.thoughtspot.com |
When installing a ThousandEyes Enterprise Agent behind a firewall or similar device (such as a router with access control lists (ACLs)), the device must be configured with rules that allow the Enterprise Agent to register with the ThousandEyes platform, execute tests, report test results, and access necessary infrastructure services such as the Domain Name Service (DNS), the Network Time Protocol (NTP) and repositories for software package updates.
This article provides a complete set of information to allow Enterprise Agent network communication to traverse a firewall or similar device. For administrators wishing to quickly install an Enterprise Agent, review the Base Rules section for the instructions required to register the agent in the ThousandEyes platform. Software updates are covered in the Installation Type Rules section.
A firewall rule or ACL for Enterprise Agent communication is specified using one or more of the following criteria:
Destination IP address(es) or DNS domain name(s)
Destination Port numbers (if the protocol is TCP or UDP)
Protocol (TCP, UDP, or ICMP)
Direction (outbound from the agent unless otherwise noted)
Notes:
To use domain names in rules or ACLs, the firewall or other filtering device must support resolution of domain names to IP addresses.
In the tables below, any destination specified only by domain name must be resolved by the customer if an IP address is required. Many common tools such as dig, drill or nslookup can be used for resolving DNS names to IP addresses. Note that third-party (non-ThousandEyes) DNS mappings may change without warning.
thousandeyes.com domain names are not currently protected by DNSSEC.
Direction assumes rules or ACLs use dynamic/stateful filtering, which permit response packets automatically. If your firewall or filter device uses static packet filters, you must create rules in both directions of the communication.
Firewalls or similar devices which use rules or ACLs are typically also capable of performing network address translation (NAT). If your Enterprise Agent is behind a NAT device, then ensure that the necessary NAT rule for your agent exists for the types of tests that the agent will run. ThousandEyes recommends creating static, "one-to-one" NAT rules for the agent as the simplest configuration for proper test function.
Some organizations require an Enterprise Agent to use a proxy server for HTTP-based communication. A proxy may be configured with an allowed list of destinations, which are normally specified by domain names (some proxies may require IP addresses or both). For environments which require a proxy, consult your organization's proxy administrator and the articles Installing Enterprise Agents in Proxy Environments and Configuring an Enterprise Agent to Use a Proxy Server.
Agents will still require configuration of rules or ACLs for non-HTTP based communication, which is typically not sent via proxy servers. Most notably, the Network layer data (overview metrics and path visualization) can only be obtained to the proxy but cannot be transmitted through a proxy to the target server. If Network layer data to the target server is desired, then the proxy will need to be bypassed and firewall rules or ACLs configured to allow the Network layer communication directly from the agent to the target.
Rules are divided into four section: 1) base rules that are required for all agents, 2) rules specific to an agent's installation type, 3) rules required for tests run by an agent, and miscellaneous rules. The latter three categories have multiple sections and subsections. To construct rules for your installation, review the relevant sections and subsections in each category to identify all needed rules.
Use the links in the list below for quick navigation to a specific section of this document.
Rules in each category are cumulative. Add base rules plus the applicable rules for your Enterprise Agent installation type and tests and to obtain the complete ruleset needed for a given agent.
The following communication is required for installation and full functionality of all Enterprise Agents.
Some organizations may not require rules for DNS and/or NTP servers if both the agent and servers are located inside the organization's network, and thus this communication is not blocked by existing rules or ACLs.
Additionally, ThousandEyes recommends permitting all ICMP error message types inbound to the agent in order to ensure full network functionality. If your firewall is fully stateful/dynamic for all ICMP error response types, then no rules are required. For firewalls which do not dynamically allow ICMP error messages in response to packets sent outbound that encounter the error conditions, we recommend allowing inbound the following:
Consult your firewall vendor's documentation or contact their technical support to determine whether you need to add rules to allow these ICMP error message responses. Be aware that explicit NAT rules may also be required for the inbound ICMP if the agent is behind a NAT IP address.
Installation types are displayed in the Add New Enterprise Agent dialog of the Enterprise Agents page. Additionally, the "Type" filter of the Enterprise Agents page displays a listing of the types of currently installed (active and deactivated) Enterprise Agents belonging to the current Account Group.
Determine the installation type of your Enterprise Agents and refer the applicable section(s) below. Some installation types fall under more than one section's set of rules (i.e. rules are cumulative, per the infobox above). For example, A ThousandEyes Virtual Appliance requires the rules in the Appliances and Containers section and the rules in the Appliances section within the Appliances and Containers section.
ThousandEyes Appliances and Containers are based on the Ubuntu Linux operating system, and require access to both the generic Ubuntu software package repositories and the ThousandEyes repository to update software automatically. The following rules are required for agents distributed in virtual machine format (Virtual Appliances and Hyper-V Appliances, Cisco- and Juniper-based agents) and for Physical Appliances and Raspberry Pi-based agents which are distributed via ISO image:
Select port 80 or port 443 depending on your organization's security requirements.
The apt.thousandeyes.com repository is located in a content delivery network (CDN), where IP addresses can change without notice. For customers requiring a static IP address for the ThousandEyes APT repository, the aptproxy.thousandeyes.com domain name will always resolve to the same IP addresses. See the ThousandEyes article Static IP Addresses for ThousandEyes Repositories for additional information.
Appliances
ThousandEyes Appliances--which include Virtual Appliances, Physical Appliances, Hyper-V Appliances, and agents installed on Cisco or Raspberry Pi platforms--provide a web-based administration interface, as well as an SSH server for command-line management. The direction of the connections are inbound to the agent (agent IP address is the destination). If web or SSH connections traverse a firewall, the following rules are required:
For more information, see How to set up the Virtual Appliance.
Raspberry Pi
The following rule is required for agents installed on Raspberry Pi 4 hardware:
Docker containers
The following rule is required only for installation of Docker container-based agents that download images maintained in the Docker registry (as installed with the default docker pull command; see Enterprise Agent Deployment Using Docker)
ThousandEyes agent software is supplied by either the ThousandEyes APT repository apt.thousandeyes.com (for Ubuntu), or the ThousandEyes YUM repository yum.thousandeyes.com (For Red Hat, CentOS and Oracle Linux). Agent software has dependencies on software packages provided in common repositories that typically are required for the operating system. Consult your operating system's documentation for the locations of these repositories and construct rules as required.
The apt.thousandeyes.com and yum.thousandeyes.com repositories are located in a content delivery network (CDN), where IP addresses can change without notice. For customers requiring a static IP address for the ThousandEyes APT or YUM repositories, the aptproxy.thousandeyes.com and yumproxy.thousandeyes.com domain names will always resolve to the same IP addresses. See the ThousandEyes article Static IP Addresses for ThousandEyes Repositories for additional information.
For all Linux package installs, if the ThousandEyes BrowserBot package has been installed (implements the Page Load and the Web Transaction test types) and if host-based Linux-based firewall software is employed then following rule is required:
Agent processes make internal network connections (i.e. not using the physical network) to the BrowserBot sandbox, which listens on port 8998/TCP of the loopback interface (normally uses IP address 127.0.0.1). Configure the host-based firewall to allow connections to the loopback IP address on the specified port.
Ubuntu
The following rules are required for Ubuntu Linux package installations:
Select port 80 or port 443 depending on your organization's security requirements.
Red Hat
The following rules are required for Red Hat Enterprise Linux, CentOS and Oracle Linux package installations:
Select port 80 or port 443 depending on your organization's security requirements.
ThousandEyes provides IaaS Enterprise Agent templates in Amazon Web Services (AWS). Amazon provides software repositories within the AWS region that the Enterprise Agent EC2 instance requires, and access is available by default from the VPC if the deployment follows the instructions provided in the article IaaS Enterprise Agent Deployment - Amazon AWS. Additionally, access to the ThousandEyes package repository should also be available by default.
However, if a firewall or similar device blocks outbound connections from the EC2 instance, then add the following rule:
Select port 80 or port 443 depending on your organization's security requirements.
Additionally, permit access to the AWS regional repository and any others listed in the /etc/apt/sources.list file of the Enterprise Agent.
For example, an agent in AWS region us-west-1 would require the following rules to access the region's repository, us-west-1.ec2.archive.ubuntu.com and to security.ubuntu.com, per the sources.list file.
Substitute the appropriate destinations from your configuration.
The protocol, port, and destination of rules to permit test traffic will depend on the type of test created and the target (destination) of the test. Normally, the direction for test rules is outbound from the agent. However, for agent-to-agent tests and RTP Stream tests, agents are both sources and target of the test, so the direction for test rules is both outbound and inbound, as indicated below.
The sections below use the default ports for the test types. For example, a Web layer test will need outbound access on TCP port 80 and/or 443 by default. If a test is configured with a non-default port number then the rule must use that port number instead of the default.
Additionally, most non-Network layer tests include Network measurements via the Perform network measurements setting (configured by default, under the Advanced Settings tab of the test configuration). When using Perform network measurements, additional rules may be required for those measurements, in addition to any rules based on the test type. Use the instructions in the Agent-to-Server section below to add any needed rule for your test's network measurements, treating the Protocol field on your test's Advanced Settings tab as the Protocol field in the agent-to-server test.
Similarly, if the Perform network measurements setting includes Collect BGP data then use the instructions in the Routing Layer section below to add any needed rules for your test's network measurements.
The Routing Layer contains the BGP test type which provides the BGP Route Visualization. The BGP Route Visualization is also part of the Perform network measurements setting of other test types. BGP data is supplied by public BGP Monitors which report data to ThousandEyes, or customers may create Private BGP Monitors using their own BGP-enabled devices to peer with ThousandEyes. If a Private BGP Monitor is used for a BGP test or as part of other tests' Network metrics, and that Private BGP Monitor traverses a firewall, then a firewall rule or ACL may be required.
Private BGP Monitors are independent of Enterprise Agents, but can provide data for tests run by Enterprise Agents, so are included in this article. If your organization uses Private BGP Monitors, review the article Inside-Out BGP Visibility for additional information.
BGP
Private BGP Monitors peer with a router in the ThousandEyes infrastructure. When the Private BGP Monitor is first configured, customers are sent the domain name of the ThousandEyes peer, along with peering instructions. If a Private BGP Monitor traverses a firewall, then the following firewall rule or ACL may be required.
The source is the customer's BGP-speaking device, not an Enterprise Agent. The destination information can be obtained from the email that ThousandEyes sends after a Private BGP Monitor is requested. ThousandEyes emails configuration information to the requestor, including the peer's domain name (for example bgpc1.thousandeyes.com) or IP address.
Network layer tests permit a choice of protocol (TCP, UDP, and/or ICMP depending on the test type). Create rules with the protocol configured in the test's Protocol setting.
Agent-to-Server
Agent-to-server tests default to TCP as the protocol and 80 as the port. If a different port number is used in the test, use that port number in the rule.
Alternatively, if ICMP is selected in the Protocol field on the Basic Configuration tab of the test settings then use the following rule.
Note that ICMP does not use port numbers, but rather Types and Codes. For the Overview metrics rule, the outbound ICMP code and type uses Type 8, Code 0 (Echo Request) and the return/inbound ICMP uses code 0, type 0 (Echo Reply). Many firewalls refer to this combination as the "ping" service or object.
For the Path Visualization, the outbound ICMP packets use type 8, code 0 (Echo Request) and the returning inbound ICMP packets use both type 0, code 0 (Echo Reply) and type 11, code 0 (Time to Live exceeded in Transit). Many firewalls refer to this combination as the "traceroute" service or object. Normally, a separate rule for traceroute is not needed because stateful firewalls associate the outbound packets (whether TCP, UDP or ICMP) with the inbound ICMP type 11 packets generated by the outbound packets. However, if a firewall cannot make this association, a second rule to allow the type 11 packets inbound to the agent may be required. Such a rule may be similar to the above ICMP rule but with the "traceroute" object, or may require a rule for ICMP type 11 to the agent as destination, from any source.
Note also that "many-to-one" network address translation may not make the association between outbound packets and incoming ICMP type 11 packets, and will block the type 11 packets. A one-to-one NAT rule will need to be configured to allow the inbound ICMP type 11 packets to be correctly translated.
Agent-to-Agent
Agent-to-agent tests default to TCP as the protocol and 49153 as the port. Alternatively, if UDP is selected in the Protocol field on the Basic Configuration tab of the test settings then use UDP. If a different port number is used in the test, use that port number in the rule.
With agent-to-agent tests, if one or both agents is behind a network address translation device, the NAT traversal feature may be required, particularly if the NAT is not a one-to-one NAT. If required, the Behind a NAT box must be checked on the Enterprise Agent's Settings page.
NAT traversal requires communication to the ThousandEyes NAT traversal service, which may require an additional rule. Use the first rule for TCP-based tests; the second for UDP-based tests.
If a many-to-one type of NAT is used, then the NAT device should meet the criteria in the article NAT Traversal for Agent-to-Agent Tests for agent-to-agent tests.
DNS Layer tests all use a destination port of 53. The port is not user-configurable. Additionally, DNS Layer tests use only one transport protocol, either UDP or TCP. Truncated responses will never result in a test switching from UDP to TCP.
DNS Server
DNS Server tests default to UDP as the protocol, as specified in the Transport field on the Advanced Settings tab of the test settings. Alternatively, TCP may be used.
DNS Trace
DNS Trace tests default to UDP as the protocol, as specified in the Transport field on the Advanced Settings tab of the test settings. Alternatively, TCP may be used.
Normally, the test must have access to all destinations in order to access all of the servers required to perform iterative queries to authoritative nameservers in the DNS hierarchy, starting from the root nameservers.
DNSSEC
DNSSEC tests are similar to DNS Trace tests, except that DNSSEC tests always use UDP as the transport protocol.
Normally, the test must have access to all destinations in order to access all of the servers required to perform iterative queries to authoritative nameservers in the DNS hierarchy, starting from the root nameservers.
Web layer tests (other than FTP Server tests) differ from other tests in that the target of the test is potentially (or likely) not the only destination for traffic from the agent. HTTP Server tests can receive HTTP redirects to domains other than the target domain name or IP address. Moreover. Page Load and Transaction tests load entire web pages which typically require connections to many destinations. For this reason, the Destination column in the Page Load and Transaction test section indicates "all destinations". If the domains to which requests are made are known, rules can be created which specify only those domains.
HTTP Server
The HTTP Server test uses port 80 by default if the test target is configured with the
http:// scheme and uses port 443 if configured with the
https:// scheme. If a non-default port number is used by the target server, use that port number in the rule.
Typically, a request using HTTP to port 80 is redirected to the HTTPS service on port 443.
Page Load and Transaction
Normally, for Page Load and Transaction tests (a.k.a. Browser-based tests) allowing the agent to access all destinations using HTTP and HTTPS is the easiest way to configure the ruleset, unless the destinations are well known and few in number.
Note that when an agent running browser-based tests is configured to use a proxy server, some amount of HTTP-based communication cannot be proxied. Specifically, HTTP-based downloads of SSL/TLS digital certificates (AIA fetching) and certificate revocation lists (CRLs) as well as the Online Certificate Status Protocol (OCSP) are not currently proxy-aware. Under certain circumstances (OCSP stapling unavailable, sites using EV certificates) a Browser-based test could experience errors if the agent cannot perform these types of communication directly. In this situation, two options exist:
Create a firewall rule or ACL which permits HTTP connections (typically using the
http:// scheme) from the agent to the site required
Create a firewall rule or ACL which responds with a TCP reset to the connection attempts from the agent
Contact ThousandEyes Customer Engineering for additional information.
FTP Server
FTP Server tests can use one of three TCP-based protocols: FTP (Active or Passive modes), FTPS or SFTP. The FTP server test uses the following ports by default:
Port 21 if the test target is configured with the ftp:// scheme, and port 20 (inbound to the agent from the target server) if Active mode is configured in the Advanced Settings tab
Port 990 if configured with the ftps:// scheme
Port 22 if configured with the sftp:// scheme
If a non-default port number is used by the target server, use that port number in the rule.
Voice Layer provide tests for control and data streams of a voice-over-IP (VoIP) call, using SIP and RTP, respectively. The SIP Server test connects to a server, proxy, session border controller (SBC) or gateway, on the customer's premises or in the cloud. The RTP Stream test is performed between two ThousandEyes Agents to assess the quality of voice data given the characteristics of the network path.
SIP Server
SIP Server tests default to TCP as the protocol and 5060 as the port number. Alternatively, if UDP is selected in the Protocol field on the Basic Configuration tab of the test settings then use UDP, or if TLS is selected in the Protocol field then use TCP as the protocol and 5061 as the port number. If a different port number is used in the test, use that port number in the rule.
Select one of the two rules above per your test's configuration.
RTP Stream
The RTP Stream test is performed between two ThousandEyes Agents--similar to an agent-to-agent test. Review the requirements for agent-to-agent test rules. RTP Stream tests default to 49152 as the port number. If a different port number is used in the test, use that port number in the rule.
Enterprise Agents may require additional rules for optional configurations, such as Kerberos/Active Directory authentication, proxy server configurations, or the Device Layer.
Enterprise Agents which use Kerberos authentication (which is used by Microsoft's Active Directory) to authenticate HTTP requests to web servers or proxies must be able to reach the Kerberos domain controller (KDC) listed in the configuration's KDC Host field on the Kerberos Settings page. If using a Kerberos configuration for an agent, and if communication to the Kerberos domain controller traverses a firewall, then the following rule is required:
The Kerberos settings default to port number 88. If a different port number is used in the configuration's KDC Port field then use that port number in the rule.
Enterprise Agents can be configured to use one or more proxy servers for tests, administrative communications, or both. These configurations may require additional firewall rules or ACLs.
Proxy Servers
An organization's proxy servers may be deployed on the same internal networks as Enterprise Agents, or the proxies may be cloud-based, including SaaS-based proxy solutions. If communication to any configured proxy server traverses a firewall, then the following rule is required:
A proxy server may use one port number for all connections or may use multiple ports for different protocols--most commonly one port for HTTP connections and a second for HTTPS connections. Review your organization's proxy configuration documentation or contact your proxy server administrators to determine what port(s) are used by all proxies that the agent will use.
PAC file Servers
When a client such as a browser or an Enterprise Agent must use multiple proxy servers (for redundancy, optimal performance or other reasons), the client can be configured to use a proxy auto-configuration (PAC) file to select a proxy to handle each HTTP request. The PAC file must be retrieved from a web server at client start-up. If communication to the PAC file's web server traverses a firewall, then the following rule is required:
Select the appropriate port number based on the scheme (
http:// or
https://) of the PAC file's URL.
ThousandEyes Device Layer feature uses the Simple Network Management Protocol (SNMP) to communicate with networked devices. Agents send SNMP GET requests to networked devices either when configured as the targets of Device Discovery, or after the devices have been discovered. If communication to the targeted device traverses a firewall, then the following rule is required:
Additional devices may be discovered without explicitly specifying an IP address in a discovery's Targets field. The discovery can occur even if those devices are blocked from the agent by a firewall, but the agent will not be able to retrieve data. For those discovered devices, a similar rule to the above will be required, using the discovered device:
ThousandEyes' Internet Insights feature aggregates data from existing tests of various types. Because no tests are specific to Internet Insights, no firewall rules or ACLs are required to use Internet Insights. | https://docs.thousandeyes.com/product-documentation/global-vantage-points/enterprise-agents/configuring/firewall-configuration-for-enterprise-agents | 2021-05-06T12:56:35 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.thousandeyes.com |
Icon
This feature is supported in wearable applications only.
The icon component inherits from the image component, which means that image functions can be used on the icon component.
For more information, see the Icon API.
Figure: Icon hierarchy
Adding an Icon Component
To create an icon component, use the
elm_icon_add() function.
Evas_Object *icon; Evas_Object *parent; icon = elm_icon_add(parent);.
Note
The signal list in the API reference can be more extensive, but only the above signals are actually supported in Tizen.
In both cases, the
event_info callback parameter is
NULL.
Note
Except as noted, this content is licensed under LGPLv2.1+.
Related Information
- Dependencies
- Tizen 2.3.1 and Higher for Wearable | https://docs.tizen.org/application/native/guides/ui/efl/wearable/component-icon/ | 2021-05-06T12:46:15 | CC-MAIN-2021-21 | 1620243988753.97 | [array(['../media/icon_tree.png', 'Icon hierarchy'], dtype=object)] | docs.tizen.org |
Date: Thu, 6 May 2021 06:06:20 -0700 (PDT) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_750639_52831510.1620306380639" ------=_Part_750639_52831510.1620306380639 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Generic properties allow you to configure messages as they're processed = by the ESB, such as marking a message as out-only (no response message will= be expected), adding a custom error message or code to the message, and di= sabling WS-Addressing headers. | https://docs.wso2.com/exportword?pageId=50500440 | 2021-05-06T13:06:20 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.wso2.com |
Administrationshandbuch
Administrators have a great responsibility to manage the application. One area where Dime.Scheduler's power and flexibility emerges is in the administration and setup views, with its many buttons and switches. Because of this, an entire manual is dedicated to the management and configuration of Dime.Scheduler.
Dime.Scheduler has many capabilities and may even seem daunting when people use it for the very first time. Even though this chapter is centered around the administration of the application, it does not guide you through common usage scenarios. For those who are getting started, or just want to achieve a certain task, we recommend you consult the guides.
How the manual is organizedHow the manual is organized
The administrator manual is a reflection of the application's administration and settings sections. There is also an entire section dedicated to the authentication and authorization of users, followed by a set of unrelated sections such as the setup of resources and back office connectors, to be completed with a chapter on monitoring and troubleshooting.
Read moreRead more | https://docs.dimescheduler.com/docs/de/intro/administrator-manual | 2021-05-06T13:25:48 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.dimescheduler.com |
Graphic Rule
This is an example of the graphic representation of equipment in an elevation drawing.
Label Rule
Every piece of equipment is labeled with a name label and a north-east coordinate label. The name label is placed in the center of the equipment, unless there is an object already in the center, or if there is not sufficient white space for the label. If the label cannot be placed on the center of the object, it is placed with a jogged leader. The coordinate label is placed on the object origin, unless there is insufficient space, in which case it will find the necessary white space. See the above graphic for an example of the labels.
Dimension Rule
Horizontal dimensions are placed on each equipment object. The point of dimensioning is the object origin. Nearby columns are also dimensioned with the equipment objects. | https://docs.hexagonppm.com/reader/hK0ODlG2hJkCr_AZXM8KrA/~ATzEXkv84lxVJlNd8BupQ | 2020-02-17T09:31:15 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.hexagonppm.com |
Simon James Design
A while back i blogged about my Sunday Sofa that i got from Simon James Design. I ended up getting frustrated with their old site ... and since i knew Simon i suggested a new site was needed. I helped them out getting their new site up ... and it has just gone live.
They make great designer furniture and now with the new site you can see nice big pictures of their stuff. | https://docs.microsoft.com/en-us/archive/blogs/cjohnson/simon-james-design | 2020-02-17T11:02:02 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.microsoft.com |
"The number of items in this list exceeds the list view threshold" when you view lists in Office 365
Problem Manage.
Feedback | https://docs.microsoft.com/en-us/sharepoint/troubleshoot/lists-and-libraries/items-exceeds-list-view-threshold?redirectSourcePath=%252fsk-sk%252farticle%252f-po%2525C4%25258Det-polo%2525C5%2525BEiek-v-tomto-zozname-prekra%2525C4%25258Duje-prahov%2525C3%2525BA-hodnotu-zobrazenia-zoznamu-chyba-pri-zobrazen%2525C3%2525AD-zoznamu-sharepoint-online-v-office-365-enterprise-2bbd4a01-a8c7-412b-9447-0a830be9e9ca | 2020-02-17T09:10:01 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.microsoft.com |
Server security and firewalls¶
General¶
OMERO has been built with security in mind. Various standard security practices have been adhered to during the development of the server and client including:
Encryption of all passwords between client and server via SSL
Full encryption of all data when requested via SSL
User and group based access control
Authentication via LDAP
Limited visible TCP ports to ease firewalling
Use of a higher level language (Java or Python) to limit buffer overflows and other security issues associated with native code
Escaping and bind variable use in all SQL interactions performed via Hibernate
Firewall configuration¶
Securing your OMERO system with so called firewalling or packet filtering can be done quite easily. By default, OMERO clients only need to connect to two TCP ports for communication with your OMERO.server: 4063 (unsecured) and 4064 (SSL). These are the IANA assigned ports for the Glacier2 router from ZeroC. Both of these values, however, are completely up to you, see SSL below.
Important OMERO ports:
TCP/4063
TCP/4064
If you are using OMERO.web, then you will also need to make your HTTP and HTTPS ports available. These are usually 80 and 443.
Important OMERO.web ports:
TCP/80
TCP/443
Example OpenBSD firewall rules¶
block in log on $ext_if from any to <omero_server_ip> pass in on $ext_if proto tcp from any to <omero_server_ip> port 4063 pass in on $ext_if proto tcp from any to <omero_server_ip> port 4064 pass in on $ext_if proto tcp from any to <omero_server_ip> port 443 pass in on $ext_if proto tcp from any to <omero_server_ip> port 80
Passwords¶
The passwords stored in the
password table are salted and hashed, so it is
impossible to recover a lost one, instead a new one must be set by an admin.
If the password for the root user is lost, the only way to reset it (in the absence of other admin accounts) is to manually update the password table. The omero command can generate the required SQL statement for you:
$ omero db password Please enter password for OMERO root user: Please re-enter password for OMERO root user: UPDATE password SET hash = 'PJueOtwuTPHB8Nq/1rFVxg==' WHERE experimenter_id = 0;
Current hashed password:
$ psql mydatabase -c " select * from password" experimenter_id | hash -----------------+-------------------------- 0 | Xr4ilOzQ4PCOq3aQ0qbuaQ== (1 row)
Change the password using the generated SQL statement:
$ psql mydatabase -c "UPDATE password SET hash = 'PJueOtwuTPHB8Nq/1rFVxg==' WHERE experimenter_id = 0;" UPDATE 1
Stored data¶
The server’s binary repository and database contain information that may
be confidential. Afford access only on a limited and necessary basis.
For example, the ReadSession warning is for
naught if the restricted administrator can read the contents of the
session table.
Java key- and truststores¶
If your server is connecting to another server over SSL, you may need to configure a truststore and/or a keystore for the Java process. This happens, for example, when your LDAP server uses SSL. See the LDAP plugin for information on how to configure the LDAP URLs. As with all configuration properties, you will need to restart your server after changing them.
To do this, you will need to configure several server properties, similar to the properties you configured during installation.
truststore path
omero config set omero.security.trustStore /home/user/.keystore.
If you don’t have one you can create it using the following:
openssl s_client -connect {{host}}:{{port}} -prexit < /dev/null | openssl x509 -outform PEM | keytool -import -alias ldap -storepass {{password}} -keystore {{truststore}} -noprompt
truststore password
omero config set omero.security.trustStorePassword secret
keystore path
omero config set omero.security.keyStore /home/user/.mystore A keystore is a database of private keys and their associated X.509 certificate chains authenticating the corresponding public keys. A keystore is mostly needed if you are doing client-side certificates for authentication against your LDAP server.
keystore password
omero config set omero.security.keyStorePassword secret
SSL¶
Especially if you are going to use LDAP authentication to your server, it is important to encrypt the transport channel between clients and the Glacier2 router to keep your passwords safe.
By default, all logins to OMERO occur over SSL using an anonymous handshake. After the initial connection, communication is un-encrypted to speed up image loading. Clients can still request to have all communications encrypted by clicking on the lock symbol. An unlocked symbol means that non-password related activities (i.e. anything other than login and changing your password) will be unencrypted, and the only critical data which is passed in the clear is your session id.
Administrators can configure OMERO such that unencrypted connections are
not allowed, and the user’s choice will be silently ignored. The SSL
and non-SSL ports are configured in the
etc/grid/default.xml
file and, as described above, default to 4064 and 4063 respectively and
can be modified using the Ports configuration
properties. For instance, to prefix all ports with 1, use
omero.ports.prefix:
$ omero config set omero.ports.prefix 1
You can disable unencrypted connections by redirecting clients to the SSL
port using the server property
omero.router.insecure:
$ omero config set omero.router.insecure "OMERO.Glacier2/router:ssl -p 4064 -h @omero.host@"
If you want to force host verification see Client Server SSL verification.
See also | https://docs.openmicroscopy.org/omero/5.6.0/sysadmins/server-security.html | 2020-02-17T11:12:08 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.openmicroscopy.org |
About Frame Markers
Frame markers are simple colour markers that you can add in the Timeline view. They can be added to any frame, regardless of whether or not it contains a drawing. They can help you organize your scene by marking important frames in your scene.
Contrary to scene markers, frame markers are placed on a specific frame in a specific layer. They can only be added to a single frame, and not to a span of frames.
Frame markers also distinguish themselves from drawing markers in that they do not mark drawings, but frames. They can even be added to frames that do not contain any drawing. They are also only visible in the Timeline view. | https://docs.toonboom.com/help/harmony-17/advanced/timing/about-frame-markers.html | 2020-02-17T10:21:15 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.toonboom.com |
<![CDATA[ ]]>Production > Effects > Particle Effects > Particle Effect Modules > Baker Composite
Baker Composite
The Baker Composite is used to composite more than one Particle-Baker, as well as any other elements that need to be combined before they pass through the Particle Visualizer. | https://docs.toonboom.com/help/harmony-11/workflow-network/Content/_CORE/_Workflow/031_Effects/107_H3_Baker_Composite.html | 2017-10-17T00:04:02 | CC-MAIN-2017-43 | 1508187820487.5 | [array(['../../../Resources/Images/_ICONS/Home_Icon.png', None],
dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stage.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/draw.png',
'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/sketch.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/scan.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageXsheet.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Effects/Har09_024_BakerComp_Network.png',
None], dtype=object) ] | docs.toonboom.com |
Responsibilities¶
When looking at security, the responsibilities are split between
- You as a developer
- Vapor Cloud as a provider
- Our provider AWS
We will here outline shortly what this means to you as a developer, and will go further into details on the next pages.
You as a developer¶
As a developer you are responsible to ensure the security in your app on the application layer.
Some examples:
- Sanitize data before doing database operations
- Enforce good password policy
- Prevent XSS (Cross-site scripting)
Vapor Cloud as a provider¶
We are responsible to ensure the security on the servers including load balancers, webservers, database servers etc.
We will also handle all monitoring on the server side, but won't monitor each individual app. If uptime is critical, we suggest you setup monitoring on your own application. is an example of a free stable monitoring system to monitor URLs
AWS¶
Everything on Vapor Cloud is hosted on AWS (Amazon Web Services). AWS have a good reputation when it comes to security, and they handle the more low-level physical security. You can read more about AWS securit here: | https://docs.vapor.cloud/advanced/security/responsibilities/ | 2017-10-16T23:56:12 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.vapor.cloud |
Sometimes we will need more than one grab per class, but we can only add one annotation type per annotatable node. This class allows for multiple grabs to be added.
For example:
@Grapes([@Grab(module='m1'), @Grab(module='m2')])class AnnotatedClass { ... }
You can override an implicit transitive dependency by providing an explicit one. E.g. htmlunit 2.6 normally uses xerces 2.9.1 but you can get 2.9.0 as follows:
Obviously, only do this if you understand the consequences.Obviously, only do this if you understand the consequences.
@Grapes([
@Grab('net.sourceforge.htmlunit:htmlunit:2.6'),
@Grab('xerces#xercesImpl;2.9.0') ])
You can also remove transitive dependencies altogether (provided you
know you don't need them) using
@GrabExclude.
For example, here is how we would not grab the
logkit and
avalon-framework transitive dependencies for Apache POI:
It is also sometimes also useful to useIt is also sometimes also useful to use
'
@GrabConfigto further adjust how dependencies are grabbed. See
@GrabConfigfor further information.
This will be pushed into the child grab annotations if the value is not set in the child annotation already.
This results in an effective change in the default value, which each @Grab can still override @default true | http://docs.groovy-lang.org/latest/html/gapi/groovy/lang/Grapes.html | 2017-10-17T00:19:49 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.groovy-lang.org |
Flashing the Arduino Dock
The Arduino Dock allows the Omega and the ATmega328P microcontroller interact with each other. Having an OS with the connectivity power of the Omega communicate easily with a microcontroller can be very powerful used effectively. We can do amazing things with the Omega communicating with the ATmega328P chip such as flashing the microcontroller wirelessly.
Programming and flashing a microcontroller mean the same thing, you are taking compiled code and uploading it to a microcontroller. The terms are often used interchangeably.
We’ll first cover how to setup your computer and Omega, and then move on to cover how to actually flash your Arduino Dock.
Prerequisites
You’ll need to first make sure that your Omega has connected to internet.
Then you’ll want to ssh into the Omega’s terminal in order to install the arduino dock package.
We’ve written a guide to connecting to your Omega’s terminal via SSH in case you don’t know how!
To install this package you’ll need to use
opkg. Enter the following commands on the command-line:
opkg update opkg install arduino-dock-2
Accessing the Omega
The Omega must be accessible via its URL where
ABCD is your Omega’s unique code.
The requirements vary depending on your Operating System:
Arduino IDE
This has to be done just once to enable flashing wirelessly.
Install the latest Arduino IDE from the good folks over at Arduino. We did all of our testing using Version 1.8.0.
Installing the Arduino Dock Device Profile
Open the Arduino IDE and go to File -> Preferences. Copy this URL to our Arduino Dock device profile:
And paste it into the Additional Boards Manager URLs section near the bottom of the window.
If you already have links to other custom boards in your IDE, click on the button on the right of the text box. You can then add the URL in a new line.
Click OK, then go to Tools -> Boards -> Board Manager (at the top of the menu). In the search bar, type “Onion” and hit Enter. When the Onion Arduino Dock entry pops up, click on Install.
Click on Close to return to the IDE. The editor will now download the settings for the Arduino Dock and make it available as a board in the Tools -> Boards menu!
Doing the Actual Flashing
Now we get to the fun part, flashing sketches to the ATmega chip!
There are two methods for flashing the ATmega328P chip using the Omega:
- Using the Arduino IDE wirelessly
- We strongly recommend this option as it will work in almost all cases.
- Compiling the file on your computer, copy it to the Omega, and then flash the chip from the command line
- Only use this method as a backup plan in case you cannot upload using the IDE
Wireless Flashing with the Arduino IDE
Thanks to the setup you did on your computer and the Arduino Dock, you can actually use the Arduino IDE on your computer to wirelessly flash Sketches to the Arduino Dock, so long as your computer and the Omega on your Arduino Dock are on the same WiFi network.
The process that takes place with this method:
- Your computer and the Arduino IDE compile the Sketch
- The compiled sketch is transferred to your Omega using SSH
- The Omega will flash the microcontroller
The Steps:
In the Arduino Tools menu, select “Onion Arduino Dock” for the Board (near the bottom of the menu), and your Omega-ABCD hostname as the Port:
If your Omega does not show up in the Port menu as a network port, restart the Arduino and wait for 30 seconds:
When your sketch is ready, hit the Upload button. Once the sketch is compiled, it will prompt you for your Omega password to upload the sketch. The password is
onioneer by default:
The IDE actually creates an SSH connection with the Omega to transfer the compiled hex file, and the Omega with then flash the ATmega microcontroller using 4 GPIOs.
Once the upload completes, the info screen will show something along the lines of:
The ATmega chip is now running your sketch, enjoy!
Note: An orange message saying
ash: merge-sketch-with-bootloader.lua: not found may appear in the info screen. You can safely ignore this message, it does not affect the sketch upload.
Manually Flashing on the Command line
Like we mentioned before this method should only be used as a backup to using the Arduino IDE. This is handy if the Arduino IDE cannot detect your Omega as a Network Port due to any connection/setup issues.
First, enable verbose output during compilation in the Arduino IDE Preferences:
Hit the verify button to compile the sketch, once it’s complete you will have to scroll to the right to find the path to the compiled hex file:
Copy this path and then transfer the file to your Omega.
For more information on transferring files to your Omega from your computer you can check out our extensive guide to transferring files to your Omega
Now that the hex file is on your Omega, you can flash it to the ATmega chip from the Omega’s terminal:
sh /usr/bin/arduino-dock flash <hex file>
For example:
# sh /usr/bin/arduino-dock flash /root/blink2.hex > Flashing application '/root/blink2.hex' ... device : /dev/i2c-0 (address: 0x29) version : TWIBOOTm328pv2.1??x (sig: 0x1e 0x95 0x0f => AVR Mega 32p) flash size : 0x7800 / 30720 (0x80 bytes/page) eeprom size : 0x0400 / 1024 writing flash : [**************************************************#?] (3210) verifying flash: [**************************************************#?] (3210) > Done
The sketch has been flashing and is running on the Arduino Dock, enjoy! | https://docs.onion.io/omega2-docs/flash-arduino-dock-wirelessly.html | 2017-10-16T23:49:38 | CC-MAIN-2017-43 | 1508187820487.5 | [array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/arduino-dock-preferences-boards-manager-urls.png',
None], dtype=object)
array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/arduino-dock-boards-manager.png',
None], dtype=object)
array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/arduino-dock-ide-board-port.png',
'Arduino IDE Tools->Port menu'], dtype=object)
array(['http://i.imgur.com/UDXIDVL.png', 'Arduino IDE Uploading Sketch'],
dtype=object)
array(['http://i.imgur.com/oPOB4Vl.png', 'Arduino IDE Upload Done'],
dtype=object)
array(['http://i.imgur.com/A6uXT6Y.png', 'Arduino IDE Preferences'],
dtype=object)
array(['http://i.imgur.com/QEiDwu8.png', 'Arduino IDE Compiled Hex file'],
dtype=object) ] | docs.onion.io |
#include <wx/tracker.h>
Add-on base class for a trackable object.
This class maintains an internal linked list of classes of type wxTrackerNode and calls OnObjectDestroy() on them if this object is destroyed. The most common usage is by using the wxWeakRef<T> class template which automates this. This class has no public API. Its only use is by deriving another class from it to make it trackable. | http://docs.wxwidgets.org/trunk/classwx_trackable.html | 2017-10-17T00:03:39 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.wxwidgets.org |
#include <wx/dragimag.h>
This class is used when you wish to drag an object on the screen, and a simple cursor is not enough.
On Windows, the Win32 API is used to achieve smooth dragging. On other platforms, wxGenericDragImage is used. Applications may also prefer to use wxGenericDragImage on Windows, too. DoDrawImage() and GetImageRect().
Default constructor.
Constructs a drag image from a bitmap and optional cursor.
Constructs a drag image from an icon and optional cursor.
Constructs a drag image from a text string and optional cursor.
Constructs a drag image from the text in the given tree control item, and optional cursor.
Constructs a drag image from the text in the given list control item, and optional cursor.
Start dragging the image, using the first window to capture the mouse and the second to specify the bounding area.
This form is equivalent to using the first form, but more convenient than working out the bounding rectangle explicitly.
You need to then call Show() and Move() to show the image on the screen. Call EndDrag() when the drag has finished.
Note that this call automatically calls CaptureMouse(). GetImageRect().
Call this when the drag has finished.
Returns the rectangle enclosing the image, assuming that the image is drawn with its top-left corner at the given point.
This function is available in wxGenericDragImage only, and may be overridden (together with DoDrawImage()) to provide a virtual drawing capability.
Call this to move the image to a new position.
The image will only be shown if Show() has been called previously (for example at the start of the drag).
You can move the image either when the image is hidden or shown, but in general dragging will be smoother if you move the image when it is shown.
Shows the image.
Call this at least once when dragging.. | http://docs.wxwidgets.org/trunk/classwx_drag_image.html | 2017-10-17T00:15:13 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.wxwidgets.org |
aggregation
An InfluxQL function that returns an aggregated value across a set of points. See InfluxQL Functions for a complete list of the available and upcoming aggregations.
Related entries: function, selector, transformation
cluster
A collection of servers running InfluxDB nodes. All nodes in a cluster have the same users, databases, retention policies, and continuous queries. See Clustering for how to set up an InfluxDB cluster.
Related entries: node, server
consensus node
A node running only the consensus service.
See Cluster Node Configuration.
Related entries: cluster, consensus service, data node, node, hybrid node
consensus service
The InfluxDB service that participates in the raft consensus group. A cluster must have at least three nodes running the consensus service (consensus or hybrid nodes), but it can have more. There should be an odd number of nodes running the consensus service in a cluster.
The number of consensus services that can fail before the cluster is degraded is ⌈n/2 + 1⌉ where
n is the number of consensus services in the cluster.
Thus, an even number of consensus services offer no additional redundancy or resiliency.
The consensus service ensures consistency across the cluster for node membership, databases, retention policies, users, continuous queries, shard metadata, and subscriptions.
See Cluster Node Configuration.
Related entries: cluster, consensus node, data service, node, hybrid node
continuous query (CQ)
An InfluxQL query that runs automatically and periodically within a database.
Continuous queries require a function in the
SELECT clause and must include a
GROUP BY time() clause.
See Continuous Queries.
Related entries: function
coordinator node
The node that receives write and query requests for the cluster.
Related entries: cluster, hinted handoff, node
data node
A node running only the data service.
See Cluster Node Configuration.
Related entries: cluster, consensus node, data service, node, hybrid node
data service
The InfluxDB service that persists time-series data to the node. A cluster must have at least one node (data or hybrid nodes) running the data service, but may have any number beyond one.
See Cluster Node Configuration.
Related entries: cluster, consensus node, consensus service, node, hybrid node: replication factor,
hinted handoff
A durable queue of data destined for a server which was unavailable at the time the data was received. Coordinating nodes temporarily store queued data when a target node for a write is down for a short period of time.
Related entries: cluster, node, server
hybrid node
A node running both the consensus and data services.
See Cluster Node Configuration.
Related entries: cluster, consensus node, consensus service, node, data node, data service.
Related entries: cluster, server..
Note that there are no query performance benefits from replication. Replication is for ensuring data availability when a data node or nodes are unavailable. See Database Management for how to set the replication factor.
Related entries: cluster, duration, node, retention policy
retention policy (RP)
The part of InfluxDB’s data structure that describes for how long InfluxDB keeps data (duration) and how many copies of those data are stored in the cluster (replication factor). RPs are unique per database and along with the measurement and tag set define a series.
When you create a database, InfluxDB automatically creates a retention policy called
default with an infinite duration and a replication factor set to the number of nodes in the cluster..
Related entries: cluster, node. | https://docs.influxdata.com/influxdb/v0.10/concepts/glossary/ | 2017-10-17T00:39:45 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.influxdata.com |
An Act to amend 111.322 (2m) (a) and 111.322 (2m) (b); and to create 103.135 and 106.54 (11) of the statutes; Relating to: prohibiting an employer from relying on or inquiring about a prospective employee's current or prior compensation and from restricting an employee's right to disclose compensation information and providing a penalty. (FE) | https://docs.legis.wisconsin.gov/2017/proposals/ab213 | 2017-10-17T00:09:54 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.legis.wisconsin.gov |
This server contains the complete user documentation for KDE (except the playground module).
You can select the release and language of the documentation you are searching for. Not all languages have all documentation translated.
The API documentation is available at api.kde.org.
Maintained by KDE Sysadmin
KDE and K Desktop Environment are trademarks of KDE e.V. | Legal! | http://docs.kde.org/development/de | 2009-07-04T00:54:36 | crawl-002 | crawl-002-009 | [] | docs.kde.org |
A set of components and procedures that allows separate systems of record to maintain record consistency and workflow integrity by using the public Ethereum Mainnet as a common frame of reference (CFR).
Even counterparties to the same business-to-business Workflow typically must not have access to -- or even awareness of -- processes that they are not directly involved in. A Workflow that can restrict access and awareness to a specific set of Parties down to a single Step or Record has achieved "atomic compartmentalization." With the Mainnet, this kind of compartmentalization can be said to be "silo-less," because, while each Step in a Workflow can restrict awareness and access, different Workflows -- even ones created and operated separately over a long period -- can integrate Steps. Workflow A can pass parameters between its Step A#33 and Workflow B's Step B#22, and if done with care, the logic and data can be made compatible. There is still work there to integrate the different Workflows, but because they are all baselined on the same Mainnet, this can be done at the application/business-process level. One need not call a network administrator to set up a new channel or figure out which private blockchain network will be primary.
The Mainnet is an always-on state machine that is maintained as a public good in such a way that it maximizes the resistance to an individual or group to gain control, lock out users from valid functions, or change history. The term, Mainnet, is capitalized to emphasize its relationship to the capitalized Internet.
Used without capitalization to distinguish a public production network from public testnets. For example, the Ethereum mainnet vs. its testnets, such as ropsten.
There are many forms of middleware. We use the term in the context of the Baseline Protocol in a particular way. Systems of record maintained by legally separate entities require a common frame of reference in order to run business process integration across them. Flow control, ensuring that two processes don't run inappropriately against the same shared state, terminating the back and forth of the two generals problem, non-repudiation, etc. In this context, the protocol is primarily about loose-coupling architecture in the transaction-processing middleware (TPM) area. It is not necessarily about schema translators, though a typical system would very likely run CRUD access between a baseline server and a system of record through translation services in a traditional Enterprise Service Bus (ESB). Unlike some RPC middleware, the Baseline Protocol is asynchronous, though it is certainly about passing parameters between functions running on two or more remote machines...and ensuring consistency between them.
WIP section - Placeholder for future updates
A set of Parties (defined in the orgRegistry) that is assembled when a Registrar factory smart contract is called to start a Workflow.
A series of Steps that constitute a coherent business process between a set of counter-parties in a WorkGroup.
A discrete baselined Record/Function/BusinessEvent/Document (e.g,. RFP, MSA, PO) within a Workflow that implements a set of baseline Tasks.
Different Workflow Steps implement either all or a subset of Tasks, such as selecting a set of counter-parties, serializing a record, sending messages, sometimes executing off-chain functions (not in Radish34, but definitely in a protocol-compliant stack), executing and sending/receiving EdDSA sigs, invoking the ZK service, sometimes ensuring that the ZK circuit (or a code package/container in a non-Radish34 implementation) of a previous Step is executed correctly (only for Steps that follow previous steps that enforce 'entanglement'), etc. The set of Tasks that a Step implements is called its LifeCycle. Most of these Tasks invoke one or more Components.
In the Baseline Protocol context, Components are just the general term for services, smart contracts, etc., like the ZK service, messenger and smart contracts. Some of these are not in the current Radish34 implementation as distinct components, but some/all should be constructed in the protocol work.
In Radish34, there isn't a coherent "Baseline Server", but in the ultimate reference implementation of the protocol, the set of off-chain Components and APIs to messaging, systems of record, etc. presumably will be packaged as the "Baseline Server".
There are several things passed to the Mainnet, primarily from the ZK Service, in the process of baselining a Workflow Step. An important one is a hash, which is stored in the Shield contract (created when setting up a WorkGroup by the Registrar "factory" smart contract, along with the orgRegistry and Validator contract). This happens when a Step successfully passes the ZK validation process. The Baseline Proof is a "proof of consistency" token, and also is used as a state-marker for managing Workflow integrity.
During the Radish34 project, the notion of a shared "codebook" was often discussed. This, in concept, refers to a collection of artifacts or library components that enable a business process to be "baselined", but in general it is extensible to any such components that are subject to a business process. In the case of Radish34, these artifacts (presumably the "importable" components) include the zk-SNARK based circuits used for off-chain ZKP-based proof generation and/or other logical components representing business logic in a workflow process. The Baseline Proof, more formally, is the hash deposited in a Shield Contract when a Workflow Step successfully completes, representing successful on-chain verification of the correctness of a logical statement (proof) and successful storage of the hash in a Merkle tree within a Shield contract on chain.
(work in progress...see Radish34 section for full documentation.)
RFP: Request for proposal is a document issued at the beginning of a business process containing a request for a goods transfer placed by an intending buying party. In some cases RFPs are issued in public, and in some cases they are issued in private in a point-to-point communication mode.
Bid/Proposal: The response to an RFP, wherein an intended selling party proposes their acceptance of the RFP and the addition of contractual terms for the intended procurement of goods. These terms vary from use case to use case; in the case of Radish34 they are represented by a volume discount structure, which typically is negotiated between the buying and selling parties.
MSA: Master Service Agreement is a document/contract issued to mark the beginning of the procurement process, issued by the buying party to the selling party, and contains the terms agreed upon in the bid/proposal. Typically, this is a material process and requires co-signing of the agreements, which must be verifiable and provable.
PO: Purchase order is a document/token issued to begin the ordering process; it needs to maintain linkage to the MSA and thereby the terms agreed upon in the MSA. | https://docs.baseline-protocol.org/basics/glossary | 2020-07-02T13:28:29 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.baseline-protocol.org
The CloudWalk Framework has a file system that can handle files of type WALK dbfile, where you can create and work with files in the following format:
key_1=buffer_2\nkey_2=buffer_2\n
Basically, it would be the format of text files in UNIX environments, where \n represents a newline. The configuration file itself (config.dat) is a WALK dbfile file.
The commands editfile, readfile and readfilebyindex can be used to work with this file type, based on keys and values, enabling a fast and efficient way to store static data on the device.
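For illustration, the contents of a small WALK dbfile could look like the following (the keys and values here are hypothetical):
merchant_name=Corner Coffee
terminal_id=12345678
currency=USD
Each line holds one key=value pair terminated by a newline, exactly as in the format shown above, so a lookup of the key currency would return the buffer USD.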
We recommend using the EditDBFile utility to manipulate your WALK dbfile files. | https://docs.cloudwalk.io/en/posxml/file-system | 2020-07-02T12:52:32 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.cloudwalk.io
You MUST set up your environment according to the steps below before installing Fluentd. Failing to do so will be the cause of many unnecessary problems.
It's HIGHLY recommended that you set up an NTP daemon (e.g. chrony, ntpd, etc.) on the node to have an accurate current timestamp. This is crucial for production-grade logging services. For AWS (Amazon Web Services) users, we recommend using Amazon Time Sync Service, an AWS-hosted NTP server. Please check AWS EC2: Setting the Time for Your Linux Instance.
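For example, on a typical systemd-based Linux distribution, chrony can be installed and enabled with commands along these lines (package and service names may differ by distribution):
$ sudo apt-get install chrony
$ sudo systemctl enable --now chrony
$ chronyc tracking
The last command is a quick way to confirm that the clock is actually being synchronized.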
Please increase the maximum number of file descriptors. You can check the current number using the ulimit -n command.
$ ulimit -n
65535
If your console shows 1024, it is insufficient. Please add the following lines to your /etc/security/limits.conf file and reboot your machine.
root soft nofile 65536
root hard nofile 65536
* soft nofile 65536
* hard nofile 65536
For high load environments consisting of many Fluentd instances, please add these parameters to your /etc/sysctl.conf file. Please either type sysctl -p or reboot your node to have the changes take effect.
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 5000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_wmem = 4096 12582912 16777216
net.ipv4.tcp_rmem = 4096 12582912 16777216
net.ipv4.tcp_max_syn_backlog = 8096
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 10240 65535
These kernel options were originally taken from the presentation "How Netflix Tunes EC2 Instances for Performance" by Brendan Gregg, Senior Performance Architect at AWS re:Invent 2017.
If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open source project under Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License. | https://docs.fluentd.org/v/0.12/articles/before-install | 2020-07-02T13:14:52 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.fluentd.org
Splunk is a great tool for searching logs, but its high cost makes it prohibitive for many organizations. This article presents a free, open source alternative that combines three tools: Fluentd, Elasticsearch, and Kibana.
Please confirm that your Java version is 8 or higher.
$ java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
Now that we've checked for prerequisites, we're now ready to install and set up the three open source tools.
To install Elasticsearch, please download and extract the Elasticsearch package as shown below.
$ curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.0.2.tar.gz
$ tar zxvf elasticsearch-5.0.2.tar.gz
$ cd elasticsearch-5.0.2
Once installation is complete, start Elasticsearch.
$ ./bin/elasticsearch
To install Kibana, download it from the official download page and extract it. Kibana is an HTML / CSS / JavaScript application. In this article, we download the Mac OS X binary.
$ curl -O https://artifacts.elastic.co/downloads/kibana/kibana-5.0.2-darwin-x86_64.tar.gz
$ tar zxvf kibana-5.0.2-darwin-x86_64.tar.gz
$ cd kibana-5.0.2-darwin-x86_64
Once installation is complete, start Kibana and run ./bin/kibana. You can modify Kibana's configuration via config/kibana.yml.
$ ./bin/kibana
Access http://localhost:5601 in your browser.
In this guide, we'll install td-agent, the stable release of Fluentd. Please refer to the td-agent installation guides for detailed installation steps.
Next, we'll install the Elasticsearch plugin for Fluentd: fluent-plugin-elasticsearch. Then, install fluent-plugin-elasticsearch as follows.
$ sudo /usr/sbin/td-agent-gem install fluent-plugin-elasticsearch --no-document
We'll configure td-agent (Fluentd) to interface properly with Elasticsearch. Please modify /etc/td-agent/td-agent.conf as shown below:
# get logs from syslog
<source>
  @type syslog
  port 42185
  tag syslog
</source>

# get logs from fluent-logger, fluent-cat or other fluentd instances
<source>
  @type forward
</source>

<match syslog.**>
  @type elasticsearch
  logstash_format true
  flush_interval 10s # for testing
</match>
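For logs to reach the syslog source above, the local syslog daemon has to forward them to port 42185. A minimal sketch, assuming rsyslogd runs on the same host, is to add the following line to /etc/rsyslog.conf and then restart rsyslogd (for example with sudo systemctl restart rsyslog, or your system's equivalent):
*.* @127.0.0.1:42185
The single @ forwards over UDP, which matches the default protocol of Fluentd's syslog input.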
Once Fluentd receives some event logs from rsyslog and has flushed them to Elasticsearch, you can search the stored logs using Kibana by accessing Kibana's index.html in your browser. Here is an image example.
To manually send logs to Elasticsearch, please use the logger command.
$ logger -t test foobar
When debugging your td-agent configuration, using filter_stdout will be useful. All the logs including errors can be found at /etc/td-agent/td-agent.log.
<filter syslog.**>
  @type stdout
</filter>

<match syslog.**>
  @type elasticsearch
  logstash_format true
  flush_interval 10s # for testing
</match>
If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open source project under Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License. | https://docs.fluentd.org/v/0.12/articles/free-alternative-to-splunk-by-fluentd | 2020-07-02T13:18:24 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.fluentd.org
List managedEBooks
Namespace: microsoft.graph
Important: Microsoft Graph APIs under the /beta version are subject to change; production use is not supported.
Note: The Microsoft Graph API for Intune requires an active Intune license for the tenant.
List properties and relationships of the managedEBook objects. The request is sent as GET /deviceAppManagement/managedEBooks and, if successful, it returns a 200 OK response code and a collection of managedEBook objects in the response body. | https://docs.microsoft.com/en-us/graph/api/intune-books-managedebook-list?view=graph-rest-beta | 2020-07-02T13:13:49 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.microsoft.com
Zendesk Source Connector for Confluent Platform¶
Zendesk Support is a system for tracking, prioritizing, and solving customer support tickets. The Kafka Connect Zendesk Source connector copies data into Apache Kafka® from various Zendesk support tables such as tickets, ticket_audits, ticket_fields, groups, organizations, satisfaction_ratings, and others, using the Zendesk Support API. Please find the list of supported Zendesk tables in the supported tables section.
Features¶
The Zendesk Source Connector offers the following features:
- Quick Turnaround: The Zendesk connector ensures that data between your Zendesk Tables and corresponding Kafka topics are synced quickly, without unnecessary lag. The poll frequency on each table has been specifically configured based on the size of the table, so that larger and more dynamic tables, like Tickets, are polled more frequently than the static tables like Organizations.
- At Least Once Delivery: The connector guarantees no loss of messages from Zendesk to Kafka. Messages may be reprocessed because of task failure or API limits, which may cause duplication.
- Schema Detection and Evolution: The connector supports automatic schema detection and backward compatible schema evolution for all supported tables.
- Real-time and Historical Lookup: The connector supports fetching all the past historical records for all tables. It can also be configured to pull in data from only a specified time in the past (see the configuration property zendesk.since).
- Automatic Retries: If a request to the Zendesk API fails because of a transient network or API problem, the connector retries it automatically; the retry behavior can be tuned with configuration properties such as retry.backoff.ms.
- Intelligent backoffs: If there are too many requests because of support API rate limits, the connector intelligently spaces out the HTTP fetch operations to ensure a smooth balance between recency, API limits, and back pressure.
- Resource Balance and throughput: Different Zendesk resources can have different rates of creation and update. Such resources can be balanced among the workers, with reduced hot-spotting, by keeping the resources in the zendesk.tables configuration sorted by the order of their expected cardinality. Also, the tasks.max, max.in.flight.requests, and max.batch.size configuration properties can be used to improve overall throughput (see the example after this list).
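For example, a connector configuration might tune these properties roughly as follows (the values shown are purely illustrative, not recommendations):
"tasks.max": 2,
"max.in.flight.requests": 8,
"max.batch.size": 100
Larger values generally increase parallelism and batch sizes at the cost of heavier load on the Zendesk API, so they should be balanced against the API rate limits mentioned above.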
Supported Tables¶
The following tables from Zendesk are supported in this version of Kafka Connect Zendesk Source connector: custom_roles, groups, group_memberships, organizations, organization_subscriptions, organization_memberships, satisfaction_ratings, tickets, ticket_audits, ticket_fields, ticket_metrics, and users.
Prerequisites¶
The following are required to run the Kafka Connect Zendesk Source Connector:
- Kafka Broker: Confluent Platform 3.3.0 or above, or Kafka 0.11.0 or above
- Kafka Connect: Confluent Platform 4.1.0 or above, or Kafka 1.1.0 or above
- Java 1.8
- Zendesk API: Support APIs should be enabled for the Zendesk account. Also either oauth2 or password mechanism should be enabled in the Zendesk account. For information, look at Using the API dashboard: Enabling password or token access.
- Zendesk account type: Certain tables, such as custom_roles, can only be accessed if the Zendesk account is an Enterprise account. Refer to Custom Agent Roles.
- Zendesk settings: Some settings may need to be enabled to ensure export is possible. For example, satisfaction_ratings can only be exported if it is enabled. Refer to Support API: Satisfaction Ratings.
Install the Zendesk Connector¶
You can install this connector by using the Confluent Hub client or by manually downloading the ZIP file. To install the latest connector version, navigate to your Confluent Platform installation directory and run:
confluent-hub install confluentinc/kafka-connect-zendesk:latest
You can install a specific version by replacing latest with a version number. For example:
confluent-hub install confluentinc/kafka-connect-zendesk:1.0.1
Install the connector manually¶
Download and extract the ZIP file for your connector and then follow the manual connector installation instructions.
Configuration Properties¶
For a complete list of configuration properties for this connector, see Zendesk Source Connector Configuration Properties.
Quick Start¶
- Prerequisite: Zendesk Developer Account
In this quick start guide, the Zendesk Connector is used to consume records from a Zendesk resource called tickets and send the records to a Kafka topic named ZD_tickets.
Install the connector through the Confluent Hub Client.
# run from your confluent platform installation directory confluent-hub install confluentinc/kafka-connect-zendesk
Check the status of all services.
confluent local status
Configure your connector by first creating a JSON file named zendesk.json with the following properties.
// substitute <> with your config
{
  "name": "ZendeskConnector",
  "config": {
    "connector.class": "io.confluent.connect.zendesk.ZendeskSourceConnector",
    "tasks.max": 1,
    "poll.interval.ms": 1000,
    "topic.name.pattern": "ZD_${entityName}",
    "zendesk.auth.type": "basic",
    "zendesk.url": "https://<sub-domain>.zendesk.com",
    "zendesk.user": "<username>",
    "zendesk.password": "<password>",
    "zendesk.tables": "tickets",
    "zendesk.since": "2019-08-01"
  }
}
Start the Zendesk Source connector by loading the connector’s configuration with the following command:
Caution
You must include a double dash (--) between the topic name and your flag. For more information, see this post.
confluent local load zendesk -- -d zendesk.json
Confirm that the connector is in a RUNNING state.
confluent local status ZendeskConnector
Create one ticket record using Zendesk API as follows.
curl https://{subdomain}.zendesk.com/api/v2/tickets.json \ -d '{"ticket": {"subject": "My printer is on fire!", "comment": { "body": "The smoke is very colorful." }}}' \ -H "Content-Type: application/json" -v -u {email_address}:{password} -X POST
Confirm the messages were delivered to the ZD_tickets topic in Kafka. Note, it may take a minute before the record populates the topic.
confluent local consume ZD_tickets -- --from-beginning | https://docs.confluent.io/current/connect/kafka-connect-zendesk/index.html | 2020-07-02T13:37:14 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.confluent.io |
In LogicalDOC, there are a variety of ways that a user can upload documents. The options are described below. It is important to remember that the user must select a folder with write permission to be able to create new documents.
Standard upload
Click on the Add documents icon on the toolbar.
An upload popup window will be shown. You can select one or more files from your PC that you want to place in the current folder. Depending on the configuration of your LogicalDOC, your files may be checked by the antivirus software, and you may not be allowed to upload them; it is also possible that some extensions or large files cannot be uploaded.
Use the option 'Immediate indexing' if you want LogicalDOC to immediately index your new documents (normally documents are indexed in the background).
If you want to upload a folder’s structure, you can upload a .zip file and select the option 'Import documents from ZIP'. However, in this case, the import will be executed separately. You will be notified of the conclusion of the import process by a system message. Once the upload is completed, click on Send and a second dialog box will allow you to enter the metadata that will be applied to the new documents.
Maximum upload size
By default, you cannot upload files larger than 100MB. This setting can be changed in Administration > Settings > GUI Settings > Upload max. size
Drop Spot
LogicalDOC supports a feature called Drop Spot that allows you to drag and drop files and folders directly from your desktop into the documents repository. To activate Drop Spot, simply click on the icon in the documents toolbar.
Now you can drag and drop your files and folders in the square and then click on Upload to start all the transfers. Once you are finished, just close the Drop Spot popup. | https://docs.logicaldoc.com/en/working-with-documents/adding-new-documents | 2020-07-02T12:44:10 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['/images/stories/en/add_documents.gif', None], dtype=object)
array(['/images/stories/en/upload.gif', None], dtype=object)] | docs.logicaldoc.com |
Release 3.4.3
Release period: 2019-01-04 to 2019-01-10
This release includes the following issues:
- Central scheduler for regular jobs
- meshMarketplace Metering Collector
Ticket Details
Central scheduler for regular jobs
Audience: Operator
Description
To avoid parallel execution of scheduled jobs on multiple nodes of meshfed, we implemented a singleton scheduler that triggers all jobs in meshfed components within one environment.
meshMarketplace Metering Collector
Audience: Operator
Component: billing
Description
In preparation for the upcoming release of metering and billing for meshMarketplace, this release includes a new meshmarketplace-collector service. This service collects service event data from all marketplaces and ingests it into the metering pipeline. | https://docs.meshcloud.io/blog/2019/01/10/Release-0.html | 2020-07-02T11:54:17 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.meshcloud.io |
Legal Service Management The ServiceNow® Legal Service Management application lets you request changes to the operation and maintenance of your legal-related cases. The legal staff can then track these requests and make the necessary changes. Any user in the system can view all open legal requests, giving your users a chance to see the legal issues that have already been reported before they submit a new request. Request templates are linked to service catalog items, specifically to record producers. When you make a request from the catalog, it uses the template associated with the catalog item to create the actual legal request. Domain separation in Legal Service Management: This is an overview of domain separation in Legal Service Management. Domain separation allows you to separate data, processes, and administrative tasks into logical groupings called domains. You can then control several aspects of this separation, including which users can see and access data. | https://docs.servicenow.com/bundle/kingston-service-management-for-the-enterprise/page/product/planning-and-policy/concept/c_LegalServiceManagement.html | 2020-07-02T11:48:03 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.servicenow.com
Installation¶
Installation with pip¶
To install the Cell Browser using pip, you will need Python 2.5+ or Python 3+ and pip. With these set up, on a Mac or any Linux system, simply run:
sudo pip install cellbrowser
On Linux, if you are not allowed to run the sudo command, you can install the Cell Browser into your user home directory:
pip install --user cellbrowser
export PATH=$PATH:~/.local/bin
You can add the second command to your ~/.profile or ~/.bashrc; this will allow you to run the Cell Browser commands without having to specify their location.
On OSX, if running sudo pip outputs command not found, you will need to set up pip first by running:
sudo easy_install pip
Installation with conda¶
If you would prefer to install the Cell Browser through bioconda, you can run:
conda install -c bioconda ucsc-cell-browser
There should be conda versions for release 0.4.23 onwards. The conda version is managed by Pablo Moreno at the EBI and is often a few releases behind. Please indicate in any bug reports if you used conda to install.
Installation with git clone¶
Pip is not required to install the Cell Browser. As an alternative to pip or conda, you can also git clone the repo and run the command line scripts under cellbrowser/src:
git clone --depth=10 https://github.com/maximilianh/cellBrowser.git
cd cellBrowser/src | https://cellbrowser.readthedocs.io/installation.html | 2020-07-02T11:17:55 | CC-MAIN-2020-29 | 1593655878753.12 | [] | cellbrowser.readthedocs.io