content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Compatibility of Couchbase Features, Couchbase Server Versions, and the Couchbase Node.js SDK
Features available in different SDK versions, and compatibility between Server and SDK.
The Couchbase Node.js Client will run on any supported LTS version of Node.js — currently, 10.x, and 12.x.
Couchbase Version/SDK Version Matrix
Couchbase SDKs are tested against a variety of different environments to ensure both backward and forward compatibility with different versions of Couchbase Server. The matrix below denotes the version of Couchbase Server, the version of the Node.js. | https://docs.couchbase.com/nodejs-sdk/3.0/project-docs/compatibility.html | 2020-03-28T18:52:05 | CC-MAIN-2020-16 | 1585370492125.18 | [] | docs.couchbase.com |
This section will help you understand how ERPNext enables you to efficiently manage the leave schedule of your organization. It also explains how employees can apply for leaves.
The number and type of leaves an Employee can apply is controlled by Leave Allocation. You can create Leave Allocation for a Leave Period based on the Company's Leave Policy. You can also allocate Additional Leaves to your employees and generate reports to track leaves taken by Employees.
Employees can also create leave requests, which their respective managers (leave approvers) can approve or reject. An Employee can select leaves from a number of leave types such as Sick Leave, Casual Leave, Privilege Leave and so on.
ERPNext is used by more than 5000 companies across the world | https://docs.erpnext.com/docs/user/manual/en/human-resources/leave-management-intro | 2020-03-28T18:58:58 | CC-MAIN-2020-16 | 1585370492125.18 | [] | docs.erpnext.com |
What benefits does Datacenters.com offer to a Provider?
Benefits of being a provider on Datacenters.com
- Traffic & Visibility
- Warm Sales Leads
- Administrative Access
- Guest Blogging
- Marketplace Access
- Presence on an industry-leading website
- Direct access to insights into users viewing your profile, marketplace, and researching on Datacenters.com
- Presence for users all the time with automated functionality and robust chatbot
If you still have questions, feel free to email us [email protected].
Last Updated: January 7, 2020 | http://docs.datacenters.com/provider-support/provider-account/what-benefits-does-datacenters-com-offer-to-a-provider | 2020-03-28T17:08:11 | CC-MAIN-2020-16 | 1585370492125.18 | [] | docs.datacenters.com |
Table of Contents
Product Index
dForce Karate Gi Textures include highly detailed and authentic textures to this dForce martial arts outfit. Both the top and pants have authentic color options accepted in real Karate outfits. As well as the basic white the Top and bottom also come in Black, Red and Blue. Following the Karate grading system the belts can be colored White, Yellow, Orange, Green, Blue, Purple, Red Brown and Black
For Iray users there is also a selection of LIE presets that add various logos to the Top and. | http://docs.daz3d.com/doku.php/public/read_me/index/59759/start | 2020-03-28T17:24:31 | CC-MAIN-2020-16 | 1585370492125.18 | [] | docs.daz3d.com |
Install Manager allows you to choose your locations for downloads and installing content. This will walk you through setting up Install Manager to use these locations.
This assumes you have a valid DAZ 3D account, know your login credentials and have already installed Install Manager. If not, please refer to: Installing Install Manager (WIP).
Multiple user accounts can be added to Install Manager. The settings you choose will be on a “Per User” basis - meaning you can set different paths for downloads and preferences for each user.
Once you have logged into Install Manager you will see the following screen.
In the top right corner is a small Gear icon. This is where you access your settings.
Click the “Gear icon” now.
Your Settings window has several Page and options:
You can click the “ + ” in the bottom left corner of the Accounts or Installation pages to add accounts/locations at any time, as needed. Remember, each account can have different download locations and preferences.
For more advanced information on these Features, please see: User Interface (UI)
We will now move to the Downloads Page to begin telling IM where we want things to go…
The Downloads and Installation pages are where most of your customization will take place. In Downloads, you tell the application where to save the files you download from your store account.
Download To Where - Click the “ … ” browse button and choose the location you want the application to save any content installers it downloads from your account. This can be on your local drive, or an external location. For help on downloading files from your account, please see Downloading With Install Manager (WIP)
Using the application Show filter - Sometimes finding what you want in your account can be a bit daunting. Especially if you have a large catalog of purchases. In this section, you can filter the products by the application(s) you choose. Different combinations will give different results. If you are expecting a file you do not see, try adding another filter to your selection. Older files are backwards compatible, with some exceptions. This means if you have DAZ Studio 4.5 selected you will see more content than if you select DAZ Studio 3. DS 3 will not read .DSF (DS 4) or .DUF (DS 4.5) files so any of this type content would be filtered out. DS 4.5 will read all files extension, so you would see a great deal more products listed.
Once you have your location added, switch over to the Installation Page…
Install To - If you use one program and have a single location set for content installation, you will see “Recommended Folder”. If you use multiple programs such as DAZ Studio, Poser and Carrara on the same machine, you may have multiple content locations mapped. If you add your various folder paths, you will be able to choose where you want your content to be installed at any given time. For more on this please see Installing With Install Manager (WIP).
Any added locations will now be available in the Install To: drop-down selection above.
It is highly recommended that you choose an install location that is different than you currently installed content.
You are now ready to begin using Install Manager. For specific help on downloading, installing and managing your content, please see these articles in the User Guide:
If you need further help with Install Manager, please start here:
If you would like more in-depth information, please read: | http://docs.daz3d.com/doku.php/public/software/install_manager/userguide/configure_install_manager/tutorials/settings_for_install_manager/start | 2020-03-28T18:06:26 | CC-MAIN-2020-16 | 1585370492125.18 | [] | docs.daz3d.com |
Account Management
- How do I register my students?
- What is teacher verification?
- How do I become a verified teacher?
- How do I get a free teacher account?
- Accounts on Grok Learning
- How do I / my students log in?
- Do I need to register my students?
- Subscriptions on Grok Learning
- How do I reset my password?
- I'm a parent, how do I sign up my children?
- How do I add my home school?
- How does the Department of Education integration work?
- SAML SSO - I'm using Azure AD (Microsoft, Cloud)
- How do I confirm my email address?
- Can I edit/reset a student's password?
- My student didn't receive the enrolment email
- Can students without email addresses still use Grok?
- How can I setup SSO between my school and Grok?
- My student can't see paid content
- How do I edit a student's details? | https://docs.groklearning.io/category/20-account-management?sort=popularity | 2020-03-28T18:10:48 | CC-MAIN-2020-16 | 1585370492125.18 | [] | docs.groklearning.io |
This document explains how you manage letters of credit when selling goods to a foreign customer.
Usually it is the seller who requests that a letter of credit should be used as payment method.
An agreement is made with the supplier. A letter of credit is received, approved and registered. The goods are delivered and the letter of credit is reported as paid (with status 85 = Paid). It can be reviewed in 'Letter of Credit. Open' (RMS100).
For detailed information on how the system is affected, see Managing Trading Risk.
When a customer invoice is created based on the customer order in 'Customer Order. Open' (OIS100), the following account entries are created:
* The letter of credit number connected to the order is used as the invoice number for this account entry, not a number from the standard invoice number series.
When the customer payment is recorded in 'Payment Received. Record' (ARS110), the following account entries are created:
See Managing Trading Risk.
This is a typical flow when using letters of credit. However, there are variations, depending on the type of letter of credit, terms and organization.
Check Buyer's Credit Status
Make a credit check to verify that the buyer is a reputable firm or individual.
Enter into Agreement with Buyer
Establish the acceptable form and terms for the transaction terms.
Review Letter of Credit and Terms Stipulated
Carefully review the letter of credit and supporting documentation that have been forwarded by the bank you use as the corresponding bank.
Contact Buyer for Correction of Documents and Terms
If any detail deviates from the original agreement or is incorrect, contact the buyer for correction.
Prepare Goods Delivery and Requested Documents
Arrange for the delivery of the goods and produce the necessary documents.
Send Documents to Company Bank after Delivery
Immediately after the goods have been placed on board the carrying vessel, send the documents to the corresponding bank.
Record Payment Received
When the buyer's bank has sent the money to your corresponding bank, record the payment in 'Payment Received. Record' (ARS110). This automatically updates the status of the letter of credit to 81 = 'Paid not confirmed' or 85 = 'Paid, confirmed', depending on the type of letter of credit. | https://docs.infor.com/help_m3beud_16.x/topic/com.infor.help.finctrlhs_16.x/c000797.html | 2020-03-28T18:52:24 | CC-MAIN-2020-16 | 1585370492125.18 | [] | docs.infor.com |
Wagtail 2.6 release notes¶
What’s new¶
Accessibility targets and improvements¶
Wagtail now has official accessibility support targets: we are aiming for compliance with WCAG2.1, AA level. WCAG 2.1 is the international standard which underpins many national accessibility laws.
Wagtail isn’t fully compliant just yet, but we have made many changes to the admin interface to get there. We thank the UK Government (in particular the CMS team at the Department for International Trade), who commissioned many of these improvements.
Here are changes which should make Wagtail more usable for all users regardless of abilities:
- Increase font-size across the whole admin (Beth Menzies, Katie Locke)
- Improved text color contrast across the whole admin (Beth Menzies, Katie Locke)
- Added consistent focus outline styles across the whole admin (Thibaud Colas)
- Ensured the ‘add child page’ button displays when focused (Helen Chapman, Katie Locke)
This release also contains many big improvements for screen reader users:
- Added more ARIA landmarks across the admin interface and welcome page for screen reader users to navigate the CMS more easily (Beth Menzies)
- Improved heading structure for screen reader users navigating the CMS admin (Beth Menzies, Helen Chapman)
- Make icon font implementation more screen-reader-friendly (Thibaud Colas)
- Removed buggy tab order customisations in the CMS admin (Jordan Bauer)
- Screen readers now treat page-level action dropdowns as navigation instead of menus (Helen Chapman)
- Fixed occurrences of invalid HTML across the CMS admin (Thibaud Colas)
- Add empty alt attributes to all images in the CMS admin (Andreas Bernacca)
- Fixed focus not moving to the pages explorer menu when open (Helen Chapman)
We’ve also had a look at how controls are labeled across the UI for screen reader users:
- Add image dimensions in image gallery and image choosers for screen reader users (Helen Chapman)
- Add more contextual information for screen readers in the explorer menu’s links (Helen Chapman)
- Make URL generator preview image alt translatable (Thibaud Colas)
- Screen readers now announce “Dashboard” for the main nav’s logo link instead of Wagtail’s version number (Thibaud Colas)
- Remove duplicate labels in image gallery and image choosers for screen reader users (Helen Chapman)
- Added a label to the modals’ “close” button for screen reader users (Helen Chapman, Katie Locke)
- Added labels to permission checkboxes for screen reader users (Helen Chapman, Katie Locke)
- Improve screen-reader labels for action links in page listing (Helen Chapman, Katie Locke)
- Add screen-reader labels for table headings in page listing (Helen Chapman, Katie Locke)
- Add screen reader labels for page privacy toggle, edit lock, status tag in page explorer & edit views (Helen Chapman, Katie Locke)
- Add screen-reader labels for dashboard summary cards (Helen Chapman, Katie Locke)
- Add screen-reader labels for privacy toggle of collections (Helen Chapman, Katie Locke)
Again, this is still a work in progress – if you are aware of other existing accessibility issues, please do open an issue if there isn’t one already.
Other features¶
- Added support for
short_descriptionfor field labels in modeladmin’s
InspectView(Wesley van Lee)
- Rearranged SCSS folder structure to the client folder and split them approximately according to ITCSS. (Naomi Morduch Toubman, Jonny Scholes, Janneke Janssen, Hugo van den Berg)
- Added support for specifying cell alignment on TableBlock (Samuel Mendes)
- Added more informative error when a non-image object is passed to the
imagetemplate tag (Deniz Dogan)
- Added ButtonHelper examples in the modelAdmin primer page within documentation (Kalob Taulien)
- Multiple clarifications, grammar and typo fixes throughout documentation (Dan Swain)
- Use correct URL in API example in documentation (Michael Bunsen)
- Move datetime widget initialiser JS into the widget’s form media instead of page editor media (Matt Westcott)
- Add form field prefixes for input forms in chooser modals (Matt Westcott)
- Removed version number from the logo link’s title. The version can now be found under the Settings menu (Thibaud Colas)
- Added “don’t delete” option to confirmation screen when deleting images, documents and modeladmin models (Kevin Howbrook)
- Added
branding_titletemplate block for the admin title prefix (Dillen Meijboom)
- Added support for custom search handler classes to modeladmin’s IndexView, and added a class that uses the default Wagtail search backend for searching (Seb Brown, Andy Babic)
- Update group edit view to expose the
Permissionobject for each checkbox (George Hickman)
- Improve performance of Pages for Moderation panel (Fidel Ramos)
- Added
process_child_objectand
exclude_fieldsarguments to
Page.copy()to make it easier for third-party apps to customise copy behavior (Karl Hobley)
- Added
Page.with_content_json(), allowing revision content loading behaviour to be customised on a per-model basis (Karl Hobley)
- Added
construct_settings_menuhook (Jordan Bauer, Quadric)
- Fixed compatibility of date / time choosers with wagtail-react-streamfield (Mike Hearn)
- Performance optimization of several admin functions, including breadcrumbs, home and index pages (Fidel Ramos)
Bug fixes¶
- ModelAdmin no longer fails when filtering over a foreign key relation (Jason Dilworth, Matt Westcott)
- The Wagtail version number is now visible within the Settings menu (Kevin Howbrook)
- Scaling images now rounds values to an integer so that images render without errors (Adrian Brunyate)
- Revised test decorator to ensure TestPageEditHandlers test cases run correctly (Alex Tomkins)
- Wagtail bird animation in admin now ends correctly on all browsers (Deniz Dogan)
- Explorer menu no longer shows sibling pages for which the user does not have access (Mike Hearn)
- Admin HTML now includes the correct
dirattribute for the active language (Andreas Bernacca)
- Fix type error when using
--chunk_sizeargument on
./manage.py update_index(Seb Brown)
- Avoid rendering entire form in EditHandler’s
reprmethod (Alex Tomkins)
- Add empty alt attributes to HTML output of Embedly and oEmbed embed finders (Andreas Bernacca)
- Clear pending AJAX request if error occurs on page chooser (Matt Westcott)
- Prevent text from overlapping in focal point editing UI (Beth Menzies)
- Restore custom “Date” icon for scheduled publishing panel in Edit page’s Settings tab (Helen Chapman)
- Added missing form media to user edit form template (Matt Westcott)
-
Page.copy()no longer copies child objects when the accessor name is included in
exclude_fields_in_copy(Karl Hobley)
- Clicking the privacy toggle while the page is still loading no longer loads the wrong data in the page (Helen Chapman)
- Added missing
is_stored_locallymethod to
AbstractDocument(jonny5532)
- Query model no longer removes punctuation as part of string normalisation (William Blackie)
- Make login test helper work with user models with non-default username fields (Andrew Miller)
- Delay dirty form check to prevent “unsaved changes” warning from being wrongly triggered (Thibaud Colas)
Upgrade considerations¶
Removed support for Python 3.4¶
Python 3.4 is no longer supported as of this release; please upgrade to Python 3.5 or above before upgrading Wagtail.
Icon font implementation changes¶
The icon font implementation has been changed to be invisible for screen-reader users, by switching to using Private Use Areas Unicode code points. All of the icon classes (
icon-user,
icon-search, etc) should still work the same, except for two which have been removed because they were duplicates:
-
icon-pictureis removed. Use
icon-imageinstead (same visual).
-
icon-file-text-altis removed. Use
icon-doc-fullinstead (same visual).
For a list of all available icons, please see the UI Styleguide. | https://docs.wagtail.io/en/v2.7/releases/2.6.html | 2020-03-28T18:08:10 | CC-MAIN-2020-16 | 1585370492125.18 | [] | docs.wagtail.io |
Unreal Engine
Satisfactory uses the Unreal Engine 4.22 by Epic Games as their Game Engine. UE4 provides a solid framework for developing fast executing native code and a interface for artists to use a more easy way for creating content.
In this section we go over some minor basics you should know. | https://docs.ficsit.app/satisfactory-modding/2.2.0/Development/UnrealEngine/index.html | 2020-08-03T19:58:17 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.ficsit.app |
Table of Contents
Product Index
CJ 8 is a tough weapon-wielding warrior with this set of barbarian inspired poses from Capsces Digital Ink. CDI Poses for CJ 8 and Genesis 8 Female includes 20 poses and seven expressions, as well as, wearable presets to easily load select “Weapons of War” weapons with material settings for rendering the weapons in Iray. With CDI Poses, your CJ 8 can crouch, kick, lean, hang, and jump, and express her true feelings with the. | http://docs.daz3d.com/doku.php/public/read_me/index/66893/start | 2020-08-03T22:23:16 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.daz3d.com |
Working with ETLs
Navigate to Administration > ETL & System Tasks > ETL tasks to view the list of all configured ETL tasks grouped by task group. You can manage or run ETL tasks, data movers, and task chains.
For more information, refer to the following sections:
Overview
Similar to the Maintaining System tasks page, each row in the table represents an ETL task, and displays details including the last execution results. For more information on the structure of the summary table, refer to Managing ETL and System tasks.
Note
Service ETL tasks do not exit, hence they do not have a Last exit value.
For more information about execution issues, analyze the log file. For more information, see Managing tasks.
You can use the Task Commands form to start or reschedule any task displayed in the list. This functionality is described in the Task management section.
Viewing, editing, and deleting an ETL task
The ETL tasks page under Administration > ETL & System Tasks > ETL tasks displays detailed information about all ETLs in BMC TrueSight Capacity Optimization, summarizing the status of their last run. For more information, see Understanding the ETL task summary table.
You can also edit the properties of an ETL task from this page. For more information, see Editing an ETL task.
You can delete a task using.
If you want to delete the entities imported by the ETL including details like Entity catalog, Lastcounter, and hierarchy transactions, from the delete drop-down click.
ETL tasks, can be classified using task groups, in order to gather processes which collect homogeneous data.
Click Lastcounter to view the Status detail table. It lists the timestamp, result of the last run, and the value of the lastcounter parameter for each data source. Click Edit Lastcounter to manually change the lastcounter value.
Additional information
The lastcounter and lookup entries are created only when the ETL task is in production mode.
Recommendation
It is strongly recommended to limit the amount of data to import. If you need to recover historical data you should do it in small chunks. For instance, you should import data for a few days at a time, and import data for subsequent days in chunks.
Working with ETL entity catalogs
Click Entity catalog in the ETL Task details page to view information about its lookup tables. You can view the list of systems, business drivers and domains imported by the ETL, including data sources and the name they will have in BMC TrueSight Capacity Optimization.
Deleting an entity catalog record
Each row displays the mapping between the data sources and BMC TrueSight Capacity Optimization name. Clickto delete individual mappings. To delete multiple mappings, press SHIFT and Click the rows you want to delete, and then click Delete selected systems or Delete selected business drivers. A confirmation pop-up will prompt you to choose one of the following actions:
- Remove the lookup reference for the selected resources.
- Remove the lookup reference and delete the selected resources; if the selected resources are not shared with other ETL tasks, this operation will change their status to "dismissed". For more information, see Life cycle and status of entities and domains.
When an ETL task encounters an entity (or domain) in the data source, it checks its own lookup tables to find a configured target. If no target is found, the object is treated as a new object and the ETL task performs the following actions:
- Creates a new entity (or domain) with a name identical to the one found in the data source.
- Adds an entry into the ETL task lookup tables to track the new association.
Additional information
New entities appear in the lookup tables and in the All Systems and Business Drivers > Newly discovered page of the Workspace section. For details, see Life cycle and status of entities and domains.
Adding a lookup table record
You can manually add a record to the Systems, Business Drivers, or Domains lookup tables. Click Add system lookup, Add business driver lookup, or Add domain lookup, and enter the required details in the popup that is displayed.
Click Add system, Add business driver, or Add domain as applicable.
Sharing Entity catalog
You can also configure an ETL to share the entity catalog of another ETL. To do so, follow these steps:
- Edit the Run configuration of the ETL.
- In the Edit run configuration page that appears, expand Entity catalog.
- Select Shared Entity Catalog and select an entity catalog from Sharing with Entity Catalog..
- Click Save.
Additional information
When two ETL tasks share the entity catalog, both of them should be able to load the same entity. Whenever a new entity is defined, one of the two ETLs will load it first, in no particular order.
Some issues might be caused if you set up the entity catalog after the first data import. An ETL task could automatically create a new entity and import its data, while it should have appended data to an existing entity. If this happens, you will have to perform an entity catalog reconciliation.
Note
The manual reconciliation of an entity in BMC TrueSight Capacity Optimization is discouraged. If manual reconciliation is performed incorrectly, it may disrupt the system. Also, the reconciliation process cannot be undone. It is strongly advised that you run an ETL task in simulation mode before executing it for the first time, to facilitate solving lookup duplication issues beforehand. For details, see Entity name resolution and Preventing duplication issues.
Lookup duplication example
The following example depicts a situation in which a lookup reconciliation is necessary.
An ETL task,
ETL_A, which accesses a data source
dsA that collects data for two systems:
sys1 and
sys2.
ETL_A runs everyday, and has been running for some time.
After its first run, it created two new entities in BMC TrueSight Capacity Optimization,
sys1 and
sys2. You later renamed these entities as
ny_sys1 and
ny_sys2 to match your BMC TrueSight Capacity Optimization naming policy.
The lookup table of
ETL_A contains the following mappings, where 301 and 302 are the unique IDs for those BMC TrueSight Capacity Optimization entities.
In your IT infrastructure there is also another data source,
dsB, which stores data for two systems,
sys2 (the same as before) and
sys3, but collects a different set of metrics from
dsA.
If you create a new ETL task,
ETL_B, which imports
dsB data from
sys2 and
sys3 into BMC TrueSight Capacity Optimization and let
ETL_B perform an automatic lookup, its lookup table will look like the following:
The BMC TrueSight Capacity Optimization Data Warehouse now has two new systems. This is a problem, since
sys2 already exists, but
etlB did not know it.
In this case,
ETL_B should share the lookup table of
ETL_A in order to assign data to the correct system in BMC TrueSight Capacity Optimization, that is
ny_sys2.
Lookup reconciliation
If a lookup duplication problem occurred, you can recover the problem. To learn how, see Lookup reconciliation and splitting in Entity catalogs.
Preventing duplication issues
To avoid these problems, the correct procedure for creating a new ETL task is:
- Create the new ETL task with simulation mode turned on and the maximum log level (10).
- Manually run the ETL task and check its execution log to find out if it created any new entities. You can use this information to understand if the automatic lookup process is safe and if you need to use shared lookup from another ETL.
- If you notice an issue, you can also manually add a line in the lookup table.
- Toggle simulation mode off.
- Run the ETL task to import new data.
This following topics help you work with and understand ETL modules.
- Configuring database connection using Perl or Java ETLs
- Determining how to aggregate data to a given resolution using SQL
- Determining how to append the output of an ETL to an existing file using FileAppenderA
- Determining how to extract compressed files
- Enabling Windows shares mounting
- Exporting the run configuration of an ETL
- Gathering ETL logs
- Handling ETL lookup name
- Importing and exporting ETL packages
- Importing macro-events
- Setting file access for parsers
- Understanding entity identification and lookup | https://docs.bmc.com/docs/TSCapacity/110/working-with-etls-674155186.html | 2020-08-03T20:56:24 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.bmc.com |
BMC
Track-It! 20.19 24 Jan
Learn what’s new or changed for Track-It! 20.19.xx and 20.18.xx, including new features, urgent issues, documentation updates, and fixes or patches.
Tip
To stay informed of changes to this space, place a watch | https://docs.bmc.com/docs/trackit2019/en/home-852580849.html | 2020-08-03T20:10:52 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.bmc.com |
Source code for MDAnalysis.topology.guessers
# -*- # """ Guessing unknown Topology information --- :mod:`MDAnalysis.topology.guessers` ============================================================================= In general `guess_atom_X` returns the guessed value for a single value, while `guess_Xs` will work on an array of many atoms. """ from __future__ import absolute_import import numpy as np import warnings import re from ..lib import distances from . import tables[docs]def guess_masses(atom_types): """Guess the mass of many atoms based upon their type Parameters ---------- atom_types Type of each atom Returns ------- atom_masses : np.ndarray dtype float64 """ validate_atom_types(atom_types) masses = np.array([get_atom_mass(atom_t) for atom_t in atom_types], dtype=np.float64) return masses[docs]def validate_atom_types(atom_types): """Vaildates the atom types based on whether they are available in our tables Parameters ---------- atom_types Type of each atom Returns ------- None .. versionchanged:: 0.20.0 Try uppercase atom type name as well """ for atom_type in np.unique(atom_types): try: tables.masses[atom_type] except KeyError: try: tables.masses[atom_type.upper()] except KeyError: warnings.warn("Failed to guess the mass for the following atom types: {}".format(atom_type))[docs]def guess_types(atom_names): """Guess the atom type of many atoms based on atom name Parameters ---------- atom_names Name of each atom Returns ------- atom_types : np.ndarray dtype object """ return np.array([guess_atom_element(name) for name in atom_names], dtype=object)[docs]def guess_atom_type(atomname): """Guess atom type from the name. At the moment, this function simply returns the element, as guessed by :func:`guess_atom_element`. See Also -------- :func:`guess_atom_element` :mod:`MDAnalysis.topology.tables` """ return guess_atom_element(atomname)NUMBERS = re.compile(r'[0-9]') # match numbers SYMBOLS = re.compile(r'[\*\+\-]') # match *, +, -[docs]def guess_atom_element(atomname): "". .. Warning: The translation table is incomplete. This will probably result in some mistakes, but it still better than nothing! See Also -------- :func:`guess_atom_type` :mod:`MDAnalysis.topology.tables` """ if atomname == '': return '' try: return tables.atomelements[atomname.upper()] except KeyError: # strip symbols and numbers no_symbols = re.sub(SYMBOLS, '', atomname) name = re.sub(NUMBERS, '', no_symbols).upper() # just in case if name in tables.atomelements: return tables.atomelements[name] while name: if name in tables.elements: return name if name[:-1] in tables.elements: return name[:-1] if name[1:] in tables.elements: return name[1:] if len(name) <= 2: return name[0] name = name[:-1] # probably element is on left not right # if it's numbers return no_symbols[docs]def guess_bonds(atoms, coords, box=None, **kwargs): r"""Guess if bonds exist between two atoms based on their distance. Bond between two atoms is created, if the two atoms are within .. math:: d < f \cdot (R_1 + R_2) of each other, where :math:`R_1` and :math:`R_2` are the VdW radii of the atoms and :math:`f` is an ad-hoc *fudge_factor*. This is the `same algorithm that VMD uses`_. Parameters ---------- atoms : AtomGroup atoms for which bonds should be guessed coords : array coordinates of the atoms (i.e., `AtomGroup.positions)`) fudge_factor : float, optional The factor by which atoms must overlap eachother to be considered a bond. Larger values will increase the number of bonds found. [0.55] vdwradii : dict, optional To supply custom vdwradii for atoms in the algorithm. 
Must be a dict of format {type:radii}. The default table of van der Waals radii is hard-coded as :data:`MDAnalysis.topology.tables.vdwradii`. Any user defined vdwradii passed as an argument will supercede the table values. [``None``] lower_bound : float, optional The minimum bond length. All bonds found shorter than this length will be ignored. This is useful for parsing PDB with altloc records where atoms with altloc A and B maybe very close together and there should be no chemical bond between them. [0.1] box : array_like, optional Bonds are found using a distance search, if unit cell information is given, periodic boundary conditions will be considered in the distance search. [``None``] Returns ------- list List of tuples suitable for use in Universe topology building. Warnings -------- No check is done after the bonds are guessed to see if Lewis structure is correct. This is wrong and will burn somebody. Raises ------ :exc:`ValueError` if inputs are malformed or `vdwradii` data is missing. .. _`same algorithm that VMD uses`: .. versionadded:: 0.7.7 .. versionchanged:: 0.9.0 Updated method internally to use more :mod:`numpy`, should work faster. Should also use less memory, previously scaled as :math:`O(n^2)`. *vdwradii* argument now augments table list rather than replacing entirely. """ # why not just use atom.positions? if len(atoms) != len(coords): raise ValueError("'atoms' and 'coord' must be the same length") fudge_factor = kwargs.get('fudge_factor', 0.55) vdwradii = tables.vdwradii.copy() # so I don't permanently change it user_vdwradii = kwargs.get('vdwradii', None) if user_vdwradii: # this should make algo use their values over defaults vdwradii.update(user_vdwradii) # Try using types, then elements atomtypes = atoms.types # check that all types have a defined vdw if not all(val in vdwradii for val in set(atomtypes)): raise ValueError(("vdw radii for types: " + ", ".join([t for t in set(atomtypes) if not t in vdwradii]) + ". These can be defined manually using the" + " keyword 'vdwradii'")) lower_bound = kwargs.get('lower_bound', 0.1) if box is not None: box = np.asarray(box) # to speed up checking, calculate what the largest possible bond # atom that would warrant attention. # then use this to quickly mask distance results later max_vdw = max([vdwradii[t] for t in atomtypes]) bonds = [] pairs, dist = distances.self_capped_distance(coords, max_cutoff=2.0*max_vdw, min_cutoff=lower_bound, box=box) for idx, (i, j) in enumerate(pairs): d = (vdwradii[atomtypes[i]] + vdwradii[atomtypes[j]])*fudge_factor if (dist[idx] < d): bonds.append((atoms[i].index, atoms[j].index)) return tuple(bonds)[docs]def guess_angles(bonds): """Given a list of Bonds, find all angles that exist between atoms. Works by assuming that if atoms 1 & 2 are bonded, and 2 & 3 are bonded, then (1,2,3) must be an angle. Returns ------- list of tuples List of tuples defining the angles. Suitable for use in u._topology See Also -------- :meth:`guess_bonds` .. versionadded 0.9.0 """ angles_found = set() for b in bonds: for atom in b: other_a = b.partner(atom) # who's my friend currently in Bond for other_b in atom.bonds: if other_b != b: # if not the same bond I start as third_a = other_b.partner(atom) desc = tuple([other_a.index, atom.index, third_a.index]) if desc[0] > desc[-1]: # first index always less than last desc = desc[::-1] angles_found.add(desc) return tuple(angles_found)[docs]def guess_dihedrals(angles): """Given a list of Angles, find all dihedrals that exist between atoms. 
Works by assuming that if (1,2,3) is an angle, and 3 & 4 are bonded, then (1,2,3,4) must be a dihedral. Returns ------- list of tuples List of tuples defining the dihedrals. Suitable for use in u._topology .. versionadded 0.9.0 """ dihedrals_found = set() for b in angles: a_tup = tuple([a.index for a in b]) # angle as tuple of numbers # if searching with b[0], want tuple of (b[2], b[1], b[0], +new) # search the first and last atom of each angle for atom, prefix in zip([b.atoms[0], b.atoms[-1]], [a_tup[::-1], a_tup]): for other_b in atom.bonds: if not other_b.partner(atom) in b: third_a = other_b.partner(atom) desc = prefix + (third_a.index,) if desc[0] > desc[-1]: desc = desc[::-1] dihedrals_found.add(desc) return tuple(dihedrals_found)[docs]def guess_improper_dihedrals(angles): "") Returns ------- List of tuples defining the improper dihedrals. Suitable for use in u._topology .. versionadded 0.9.0 """ dihedrals_found = set() for b in angles: atom = b[1] # select middle atom in angle # start of improper tuple a_tup = tuple([b[a].index for a in [1, 2, 0]]) # if searching with b[1], want tuple of (b[1], b[2], b[0], +new) # search the first and last atom of each angle for other_b in atom.bonds: other_atom = other_b.partner(atom) # if this atom isn't in the angle I started with if not other_atom in b: desc = a_tup + (other_atom.index,) if desc[0] > desc[-1]: desc = desc[::-1] dihedrals_found.add(desc) return tuple(dihedrals_found)[docs]def get_atom_mass(element): """Return the atomic mass in u for *element*. Masses are looked up in :data:`MDAnalysis.topology.tables.masses`. .. Warning:: Unknown masses are set to 0.0 .. versionchanged:: 0.20.0 Try uppercase atom type name as well """ try: return tables.masses[element] except KeyError: try: return tables.masses[element.upper()] except KeyError: return 0.0[docs]def guess_atom_mass(atomname): """Guess a mass based on the atom name. :func:`guess_atom_element` is used to determine the kind of atom. .. warning:: Anything not recognized is simply set to 0; if you rely on the masses you might want to double check. """ return get_atom_mass(guess_atom_element(atomname)) | https://docs.mdanalysis.org/1.0.0/_modules/MDAnalysis/topology/guessers.html | 2020-08-03T20:47:08 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.mdanalysis.org |
Cortana on IoT Core
Cortana is a personal digital assistant working across all your devices to help you in your daily life. She learns about you; helps you get things done by completing tasks; interacts with you using natural language in a consistent, contextual way; and always looks out for you. Cortana has a consistent visual identity, personality, and voice.
To enable hands-free intelligent assistance in your device with the Cortana Device SDK, please visit The Cortana Dev Center.
Cortana on IoT Core will focus on commercial scenarios in the future. Updates will come soon. | https://docs.microsoft.com/en-us/windows/iot-core/extend-your-app/cortanaoniotcore | 2020-08-03T21:37:22 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.microsoft.com |
Serialisation¶
This library does not enforce any particular serialisation scheme.
Every
EncryptedNumber
instance has a
public_key attribute, and serialising each
EncryptedNumber independently would be heinously inefficient when sending
a large list of instances. It is up to you to serialise in a way that is efficient for your use
case.
Basic JSON Serialisation¶
This basic serialisation method is an example of serialising a vector of encrypted numbers. Note that if you are only using the python-paillier library g will always be n + 1, so these is no need to serialise it as part of the public key.
To send a list of values encrypted against one public key, the following is one way to serialise:
>>> import json >>> enc_with_one_pub_key = {} >>> enc_with_one_pub_key['public_key'] = {'n': public_key.n} >>> enc_with_one_pub_key['values'] = [ ... (str(x.ciphertext()), x.exponent) for x in encrypted_number_list ... ] >>> serialised = json.dumps(enc_with_one_pub_key)
Deserialisation of the above scheme might look as follows:
>>> received_dict = json.loads(serialised) >>> pk = received_dict['public_key'] >>> public_key_rec = paillier.PaillierPublicKey(n=int(pk['n'])) >>> enc_nums_rec = [ ... paillier.EncryptedNumber(public_key_rec, int(x[0]), int(x[1])) ... for x in received_dict['values'] ... ]
If both parties already know public_key, then you might instead send a hash of the public key.
JWK Serialisation¶
This serialisation scheme is used by the Command Line Utility, and is based on the JSON Web Key (JWK) format. This serialisation scheme should be used to increase compatibility between libraries.
All cryptographic integers are represented as Base64UrlEncoded numbers.
Note the existence of
base64_to_int() and
int_to_base64().
“key_ops” (Key Operations) Parameter¶
Values will be “encrypt” and “decrypt” for public and private keys respectively. We decided not to add homomorphic properties to the key operations.
“kid” (Key Identifier)¶
The kid may be set to any ascii string. Useful for storing key names, generation tools, times etc.
Public Key¶
In addition to the “kty”, “kid”, “key_ops” and “alg” attributes, a public key will have:
- n The public key’s modulus - Base64 url encoded
Example of a 256 bit public key:
python -m phe.command_line genpkey --keysize 256 - | python -m phe.command_line extract - - { "kty": "DAJ", "kid": "Example Paillier public key", "key_ops": [ "encrypt" ], "n": "m0lOEwDHVA_VieL2k3BKMjf_HIgagfhNIZy1YhgZF5M", "alg": "PAI-GN1" }
Private Key¶
Note
The serialised private key includes the public key.
In addition to the “kty”, “kid”, “key_ops” and “alg” attributes, a private key will have:
- mu and lambda - The private key’s secrets. See Paillier’s paper for details.
- pub - The Public Key serialised as described above.
Example of a 256 bit private key:
python -m phe.command_line genpkey --keysize 256 - { "mu": "Dzq1_tz2qDX_-S4shia9Rw34Z9ix9b-fhPi3In76NaI", "kty": "DAJ", "key_ops": [ "decrypt" ], "kid": "Paillier private key generated by pheutil on 2016-05-24 14:18:25", "lambda": "haFTvA70KcI5XXReJUlQWRQdYHxaUS8baGQGug9dewA", "pub": { "alg": "PAI-GN1", "n": "haFTvA70KcI5XXReJUlQWoZus12aSJJ5EXAvu93xR7k", "kty": "DAJ", "key_ops": [ "encrypt" ], "kid": "Paillier public key generated by pheutil on 2016-05-24 14:18:25" } }
Warning
“kty” and “alg” values should be registered in the IANA “JSON Web Key Types” registry established by JWA. We have not registered DAJ or PAI-GN1 - however we intend to begin that conversation. | https://python-paillier.readthedocs.io/en/1.3.1/serialisation.html | 2020-08-03T21:07:01 | CC-MAIN-2020-34 | 1596439735833.83 | [] | python-paillier.readthedocs.io |
Managing a Greenplum System
Managing a Greenplum System
This section describes basic system administration tasks performed by a Greenplum Database system administrator.
- About the Greenplum Database Release Version Number
Greenplum Database version numbers and they way they change identify what has been modified from one Greenplum Database release to the next.
-.
- Accessing the Database
This topic describes the various client tools you can use to connect to Greenplum Database, and how to establish a database session.
- Configuring the Greenplum Database System
Server configuration parameters affect the behavior of Greenplum Database.
- Enabling Compression
You can configure Greenplum Database to use data compression with some database features and with some utilities.
- Enabling High Availability and Data Consistency Features
The fault tolerance and the high-availability features of Greenplum Database can be configured.
- Backing Up and Restoring Databases
This topic describes how to use Greenplum backup and restore features.
- Expanding a Greenplum System
To scale up performance and storage capacity, expand your Greenplum Database system by adding hosts to the system. In general, adding nodes to a Greenplum cluster achieves a linear scaling of performance and storage capacity.
- Monitoring a Greenplum System
You can monitor a Greenplum Database system using a variety of tools included with the system or available as add-ons.
- Routine System Maintenance Tasks
To keep a Greenplum Database system running efficiently, the database must be regularly cleared of expired data and the table statistics must be updated so that the query optimizer has accurate information.
- Recommended Monitoring and Maintenance Tasks
This section lists monitoring and maintenance activities recommended to ensure high availability and consistent performance of your Greenplum Database cluster.
Parent topic: Greenplum Database Administrator Guide | http://docs.greenplum.org/6-9/admin_guide/managing/partII.html | 2020-08-03T21:06:09 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.greenplum.org |
.
Schedules a service software update for an Amazon ES domain.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
start-elasticsearch-service-software-update --domain-name <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--domain-name (string)
The name of the domain that you want to update to the latest serviceoftwareOptions -> (structure)
The current status of the Elasticsearch service software update.
CurrentVersion -> (string)The current service software version that is present on the domain.
NewVersion -> (string)The new service software version if one is available.
UpdateAvailable -> (boolean)True if you are able to update you service software version. False if you are not able to update your service software version.
Cancellable -> (boolean)True if you are able to cancel your service software version update. False if if a service software is never automatically updated. False if a service software is automatically updated after AutomatedUpdateDate . | https://docs.aws.amazon.com/cli/latest/reference/es/start-elasticsearch-service-software-update.html | 2020-08-03T21:48:20 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.aws.amazon.com |
detector model. Any active instances of the detector model are also deleted.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
delete-detector-model --detector-model-name <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--detector-model-name (string)
The name of the detector model detector model
The following delete-detector-model example deletes the specified detector model. Any active instances of the detector model are also deleted.
aws iotevents delete-detector-model \ --detector-model-name motorDetectorModel
This command produces no output.
For more information, see DeleteDetectorModel in the AWS IoT Events API Reference. | https://docs.aws.amazon.com/cli/latest/reference/iotevents/delete-detector-model.html | 2020-08-03T21:25:22 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.aws.amazon.com |
Note: We’ve renamed our SmartConnectors to Integration Apps..
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360031657332 | 2020-08-03T21:16:57 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.celigo.com |
Tutorial: Filter Editor
- 5 minutes to read
This walkthrough is a transcript of the Filter Editor video available on the DevExpress YouTube Channel.
The grid control ships with a built-in Filter Editor dialog that allows end-users to build filter criteria of any complexity using either the tree-like filter builder interface or a text editor with hints much like Visual Studio IntelliSense. In this tutorial, you will learn how end-users can invoke this dialog, what options affect its availability and how you can invoke it from code and customize it before it's displayed.
Invoking the Filter Editor and Creating Simple Filter Criteria
To invoke the Filter Editor, right-click any column header and select Filter Editor… in the context menu.
By default, the Filter Editor displays filter criteria as a tree where individual nodes represent simple filter conditions. The root node is the logical operator combining all conditions. Any filter condition consists of three parts: a column name, criteria operator and operand value. If the grid's data is not filtered, the editor contains one incomplete filter condition for the clicked column.
Click the value box and select Medium in the dropdown list.
Click OK to close the editor and apply changes. As a result, the grid displays only records with a priority set to Medium. Now you can invoke the Filter Editor using the Edit Filter button within the Filter Panel.
To add a new filter condition, click the plus button (
) next to the root node. This can also be done by clicking the logical operator and selecting Add Condition.
Select Name in the list of available columns. Then, use the Contains comparison operator and enter the 'vendor' string in the value box. Click Apply to filter data without closing the editor.
The grid now displays records with medium priority and names containing 'vendor' in them.
Deleting Filter Conditions
Now delete all filter conditions by clicking their
buttons or by selecting Clear All in the logical operator's menu.
Constructing Complex Filter Criteria
You can now create a more complex filter criteria. To create a new condition group, click the root logical operator and select Add Group.
Change the created logical operator to OR.
Create two new conditions within this group. These conditions will select records that have a High priority or a status set to New. In the same manner, create one more OR condition group with two conditions. These conditions will select records with Created Date between January 1 and today or those where Fixed Date Is greater than April 1.
Click OK to filter data using the created criterion. You'll see the entire filter condition displayed in the filter panel.
Changing the Filter Editor Style
Now try a different filter condition editor UI. At design time, access the View's settings, expand ColumnView.OptionsFilter and set the ColumnViewOptionsFilter.DefaultFilterEditorView property to FilterEditorViewMode.Text.
Run the application and invoke the Filter Editor. Now you can type a filter string directly into the embedded Rich Text Editor. Dropdown lists of operators and field names are automatically invoked when typing a filter, much like the Visual Studio IntelliSense feature.
Locate the same property and set the editing mode to FilterEditorViewMode.VisualAndText. The Filter Editor will display both the Visual and Text editors in their own tabs.
Preventing End-Users from Invoking the Filter Editor
If you don't want end-users to invoke the dialog from the column header menu, set the ColumnViewOptionsFilter.AllowFilterEditor property to false.
Note that the filter panel's Edit Filter button has also become invisible.
Invoking and Customizing the Filter Editor in Code
Return to design-time and see how the Filter Editor can be invoked and customized in code.
In the Click event handler for the Show Filter Editor button, call the View's ColumnView.ShowFilterEditor method to invoke the Filter Editor in visual style.
private void btn_ShowFilterEditor_ItemClick(object sender, ItemClickEventArgs e) { gridView.OptionsFilter.DefaultFilterEditorView = DevExpress.XtraEditors.FilterEditorViewMode.Visual; gridView.ShowFilterEditor(null); }
Additionally, handle the View's ColumnView.FilterEditorCreated event, which is raised when the Filter Editor is about to be displayed. In the event handler, customize the value color using the FilterControl.AppearanceValueColor property of the event's FilterControlEventArgs.FilterControl parameter. Enable the FilterControl.ShowOperandTypeIcon option to allow values of one column to be compared to values in other columns or to predefined constants.
private void gridView_FilterEditorCreated(object sender, DevExpress.XtraGrid.Views.Base.FilterControlEventArgs e) { e.FilterControl.AppearanceValueColor = Color.Red; e.FilterControl.ShowOperandTypeIcon = true; }
Run the application and click the Show Filter Editor button. In the invoked editor, add a new condition and then click the operand type icon that's now displayed on the left of the remove button.
Click the value box, select Date and time constants and choose This year.
Change the comparison operator to Is less than. Add another condition that selects records where Priority is Medium. Note that the value is painted using the red color as specified in the event handler.
| https://docs.devexpress.com/WindowsForms/114645/controls-and-libraries/data-grid/getting-started/walkthroughs/filter-and-search/tutorial-filter-editor | 2020-08-03T21:32:08 | CC-MAIN-2020-34 | 1596439735833.83 | [array(['/WindowsForms/images/gridview_filtering_filtereditormenuitem119396.png',
'GridView_Filtering_FilterEditorMenuItem'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_initialfiltereditorcontent119397.png',
'GridView_Filtering_InitialFilterEditorContent'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditorwithonecondition119398.png',
'GridView_Filtering_FilterEditorWithOneCondition'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_invokingfiltereditorviaeditfilterbutton119399.png',
'GridView_Filtering_InvokingFilterEditorViaEditFilterButton'],
dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditoraddconditionmenuitem119405.png',
'GridView_Filtering_FilterEditorAddConditionMenuItem'],
dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditorwithtwoconditions119400.png',
'GridView_Filtering_FilterEditorWithTwoConditions'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditorfilteringresult119401.png',
'GridView_Filtering_FilterEditorFilteringResult'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditorclearallmenuitem119402.png',
'GridView_Filtering_FilterEditorClearAllMenuItem'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditoraddgroupmenuitem119403.png',
'GridView_Filtering_FilterEditorAddGroupMenuItem'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditorchanginglogicaloperator119406.png',
'GridView_Filtering_FilterEditorChangingLogicalOperator'],
dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditorwithcomplexcriteria119404.png',
'GridView_Filtering_FilterEditorWithComplexCriteria'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditorcomplexcriteriaresult119409.png',
'GridView_Filtering_FilterEditorComplexCriteriaResult'],
dtype=object)
array(['/WindowsForms/images/gridview_filtering_defaultfiltereditorviewproperty119410.png',
'GridView_Filtering_DefaultFilterEditorViewProperty'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditortextstyle119411.png',
'GridView_Filtering_FilterEditorTextStyle'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditorvisualandtextstyle119412.png',
'GridView_Filtering_FilterEditorVisualAndTextStyle'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_allowfiltereditorproperty119413.png',
'GridView_Filtering_AllowFilterEditorProperty'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditoroperandtypeicon119414.png',
'GridView_Filtering_FilterEditorOperandTypeIcon'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditordatetimeconstants119415.png',
'GridView_Filtering_FilterEditorDateTimeConstants'], dtype=object)
array(['/WindowsForms/images/gridview_filtering_filtereditorredvaluecolor119416.png',
'GridView_Filtering_FilterEditorRedValueColor'], dtype=object) ] | docs.devexpress.com |
Optimize provisioned throughput cost in Azure Cosmos DB
By offering a provisioned throughput model, Azure Cosmos DB delivers predictable performance at any scale. Reserving or provisioning throughput ahead of time eliminates the "noisy neighbor effect" on your performance. You specify the exact amount of throughput you need and Azure Cosmos DB guarantees the configured throughput, backed by the SLA.
You can start with a minimum throughput of 400 RU/sec and scale up to tens of millions of requests per second or even more. Each request you issue against your Azure Cosmos container or database (such as a read, a write, a query, or a stored procedure execution) has a corresponding cost that is deducted from your provisioned throughput. If you provision 400 RU/s and issue a query that costs 40 RUs, you will be able to issue 10 such queries per second. Any request beyond that gets rate-limited and should be retried. If you are using client drivers, they support automatic retry logic.
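To make the arithmetic concrete, the following sketch (Python, using the azure-cosmos SDK; the endpoint, key, and container names are placeholders, and exactly how response headers are surfaced can vary by SDK version) runs a query, reads the request charge returned by the service, and derives how many such queries per second a given provisioned throughput can sustain.

```python
from azure.cosmos import CosmosClient

# Placeholder account details -- substitute your own endpoint, key, and names.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
container = client.get_database_client("mydb").get_container_client("mycontainer")

# Run a query, then read its RU charge from the last response headers.
items = list(container.query_items(
    query="SELECT * FROM c WHERE c.status = 'open'",
    enable_cross_partition_query=True,
))
charge = float(container.client_connection.last_response_headers["x-ms-request-charge"])

provisioned_rus = 400  # RU/s configured for this container
print(f"This query cost {charge} RUs")
print(f"Sustainable rate: about {provisioned_rus / charge:.0f} such queries per second")
# For example, at 400 RU/s a 40-RU query can run ~10 times per second; requests
# beyond that are rate-limited (HTTP 429) and retried by the SDK.
```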
You can provision throughput on databases or containers and each strategy can help you save on costs depending on the scenario.
Optimize by provisioning throughput at different levels
If you provision throughput on a database, all the containers (for example, collections, tables, or graphs) within that database can share the throughput based on the load. Throughput reserved at the database level is shared unevenly, depending on the workload on a specific set of containers.
If you provision throughput on a container, the throughput is guaranteed for that container, backed by the SLA. The choice of a logical partition key is crucial for even distribution of load across all the logical partitions of a container. See Partitioning and horizontal scaling articles for more details.
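As a minimal illustration of the two levels, the sketch below (Python azure-cosmos SDK; the database names, container names, partition keys, and RU/s values are made up for the example) provisions throughput once on a database so its containers share it, and separately on an individual container so that container gets its own guaranteed throughput.

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")

# Database-level throughput: containers created without their own throughput
# setting share this 10,000 RU/s pool, divided according to their load.
shared_db = client.create_database_if_not_exists(id="SharedDatabase", offer_throughput=10000)
shared_db.create_container_if_not_exists(id="Orders", partition_key=PartitionKey(path="/customerId"))

# Container-level throughput: this container gets its own 1,000 RU/s, reserved
# for it alone and backed by the SLA.
other_db = client.create_database_if_not_exists(id="DedicatedDatabase")
other_db.create_container_if_not_exists(
    id="Telemetry",
    partition_key=PartitionKey(path="/deviceId"),
    offer_throughput=1000,
)
```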
The following are some guidelines to decide on a provisioned throughput strategy:
Consider provisioning throughput on an Azure Cosmos database (containing a set of containers) if:
You have a few dozen Azure Cosmos containers and want to share throughput across some or all of them.
You are migrating from a single-tenant database designed to run on IaaS-hosted VMs or on-premises, for example, a NoSQL or relational database, to Azure Cosmos DB, and you have many collections/tables/graphs and do not want to make any changes to your data model. Note, you might have to compromise some of the benefits offered by Azure Cosmos DB if you are not updating your data model when migrating from an on-premises database. It's recommended that you always reassess your data model to get the most in terms of performance and also to optimize for costs.
You want to absorb unplanned spikes in workloads by virtue of pooled throughput at the database level.
Instead of setting specific throughput on individual containers, you care about getting the aggregate throughput across a set of containers within the database.
Consider provisioning throughput on an individual container if:
You have a few Azure Cosmos containers. Because Azure Cosmos DB is schema-agnostic, a container can contain items that have heterogeneous schemas and does not require customers to create multiple container types, one for each entity. It is always worth considering whether grouping, say, 10-20 separate containers into a single container makes sense. With a 400 RU/s minimum per container, pooling all 10-20 containers into one could be more cost effective.
You want to control the throughput on a specific container and get the guaranteed throughput on a given container backed by SLA.
Consider a hybrid of the above two strategies:
As mentioned earlier, Azure Cosmos DB allows you to mix and match the above two strategies, so you can now have some containers within Azure Cosmos database, which may share the throughput provisioned on the database as well as, some containers within the same database, which may have dedicated amounts of provisioned throughput.
You can apply the above strategies to come up with a hybrid configuration, where you have both database level provisioned throughput with some containers having dedicated throughput.
Depending on the choice of API, you can provision throughput at different granularities: shared throughput is configured at the database (or keyspace) level, while dedicated throughput is configured on an individual container, collection, table, or graph.
By provisioning throughput at different levels, you can optimize your costs based on the characteristics of your workload. As mentioned earlier, you can programmatically and at any time increase or decrease your provisioned throughput for either individual container(s) or collectively across a set of containers. By elastically scaling throughput as your workload changes, you only pay for the throughput that you have configured. If your container or a set of containers is distributed across multiple regions, then the throughput you configure on the container or a set of containers is guaranteed to be made available across all regions.
Optimize with rate-limiting your requests
For workloads that aren't sensitive to latency, you can provision less throughput and let the application handle rate-limiting when the actual throughput exceeds the provisioned throughput. The server will preemptively end the request with
RequestRateTooLarge (HTTP status code 429) and return the
x-ms-retry-after-ms header indicating the amount of time, in milliseconds, that the user must wait before retrying the request.
HTTP Status 429, Status Line: RequestRateTooLarge x-ms-retry-after-ms :100
Retry logic in SDKs
The native SDKs (.NET/.NET Core, Java, Node.js and Python) implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is accessed concurrently by multiple clients, the next retry will succeed.
If you have more than one client cumulatively operating consistently above the request rate, the default retry count, which is currently set to 9, may not be sufficient. In such cases, the client throws a RequestRateTooLargeException with status code 429 to the application. The default retry count can be changed by setting the RetryOptions on the ConnectionPolicy instance. In the following example, MaxRetryAttemptsOnThrottledRequests is set to 3, so if a request operation is rate limited by exceeding the reserved throughput for the container, the request operation retries three times before the exception is thrown to the application. MaxRetryWaitTimeInSeconds is set to 60, so if the cumulative retry wait time in seconds since the first request exceeds 60 seconds, the exception is thrown.
ConnectionPolicy connectionPolicy = new ConnectionPolicy();
connectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 3;
connectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 60;
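If requests are still rate limited after the client's automatic retries are exhausted, the error surfaces to your application and you can apply your own back-off. The following is a minimal sketch using the Python SDK (azure-cosmos v4); the container object and query are assumed to already exist:

import time
from azure.cosmos import exceptions

def query_with_backoff(container, query, max_attempts=5):
    # Retries manually if a 429 (request rate too large) still surfaces
    # after the SDK's built-in retries are exhausted.
    for attempt in range(max_attempts):
        try:
            return list(container.query_items(query=query, enable_cross_partition_query=True))
        except exceptions.CosmosHttpResponseError as error:
            if error.status_code != 429 or attempt == max_attempts - 1:
                raise
            # Back off before retrying; the service's x-ms-retry-after-ms
            # header indicates how long it wants you to wait.
            time.sleep(2 ** attempt)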
Partitioning strategy and provisioned throughput costs
Good partitioning strategy is important to optimize costs in Azure Cosmos DB. Ensure that there is no skew of partitions, which are exposed through storage metrics. Ensure that there is no skew of throughput for a partition, which is exposed with throughput metrics. Ensure that there is no skew towards particular partition keys. Dominant keys in storage are exposed through metrics but the key will be dependent on your application access pattern. It's best to think about the right logical partition key. A good partition key is expected to have the following characteristics:
Choose a partition key that spreads workload evenly across all partitions and evenly over time. In other words, you shouldn't have some keys with the majority of the data and some keys with little or no data.
Choose a partition key that enables access patterns to be evenly spread across logical partitions. The workload is reasonably even across all the keys. In other words, the majority of the workload shouldn't be focused on a few specific keys.
Choose a partition key that has a wide range of values.
The basic idea is to spread the data and the activity in your container across the set of logical partitions, so that resources for data storage and throughput can be distributed across the logical partitions. Candidates for partition keys may include the properties that appear frequently as a filter in your queries. Queries can be efficiently routed by including the partition key in the filter predicate. With such a partitioning strategy, optimizing provisioned throughput will be a lot easier.
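For example, a minimal sketch of declaring the partition key at container creation time with the Python SDK (azure-cosmos v4); the account endpoint, key, and the choice of /customerId as the partition key path are assumptions for illustration:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com", credential="<your-key>")
database = client.create_database_if_not_exists(id="retail")

# /customerId appears in most query filters for this workload, so queries
# can be routed to a single logical partition instead of fanning out.
orders = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=400,
)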
Design smaller items for higher throughput
The request charge or the request processing cost of a given operation is directly correlated to the size of the item. Operations on large items will cost more than operations on smaller items.
Data access patterns
It is always a good practice to logically separate your data into logical categories based on how frequently you access the data. By categorizing it as hot, medium, or cold data you can fine-tune the storage consumed and the throughput required. Depending on the frequency of access, you can place the data into separate containers (for example, tables, graphs, and collections) and fine-tune the provisioned throughput on them to accommodate to the needs of that segment of data.
Furthermore, if you're using Azure Cosmos DB, and you know you are not going to search by certain data values or will rarely access them, you should store the compressed values of these attributes. With this method you save on storage space, index space, and provisioned throughput, which results in lower costs.
Optimize by changing indexing policy
By default, Azure Cosmos DB automatically indexes every property of every record. This is intended to ease development and ensure excellent performance across many different types of ad hoc queries. If you have large records with thousands of properties, paying the throughput cost for indexing every property may not be useful, especially if you only query against 10 or 20 of those properties. As you get closer to getting a handle on your specific workload, our guidance is to tune your index policy. Full details on Azure Cosmos DB indexing policy can be found here.
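For example, a workload that only ever filters on a couple of properties can exclude everything else from the index. The following sketch uses the Python SDK; the property paths and names are assumptions for illustration:

from azure.cosmos import CosmosClient, PartitionKey

# Index only the properties used in query filters; exclude the rest.
custom_indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [
        {"path": "/customerId/?"},
        {"path": "/orderDate/?"},
    ],
    "excludedPaths": [
        {"path": "/*"},
    ],
}

client = CosmosClient("https://<your-account>.documents.azure.com", credential="<your-key>")
database = client.get_database_client("retail")
container = database.create_container_if_not_exists(
    id="orders-trimmed",
    partition_key=PartitionKey(path="/customerId"),
    indexing_policy=custom_indexing_policy,
)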
Monitoring provisioned and consumed throughput
You can monitor the total number of RUs provisioned, the number of rate-limited requests, and the number of RUs you've consumed in the Azure portal's Metrics blade.
You can also set alerts to check if the number of rate-limited requests exceeds a specific threshold. See How to monitor Azure Cosmos DB article for more details. These alerts can send an email to the account administrators or call a custom HTTP Webhook or an Azure Function to automatically increase provisioned throughput.
Scale your throughput elastically and on-demand
Since you are billed for the throughput provisioned, matching the provisioned throughput to your needs can help you avoid the charges for the unused throughput. You can scale your provisioned throughput up or down any time, as needed. If your throughput needs are very predictable you can use Azure Functions and use a Timer Trigger to increase or decrease throughput on a schedule.
Monitoring the consumption of your RUs and the ratio of rate-limited requests may reveal that you do not need to keep provisioned throughout constant throughout the day or the week. You may receive less traffic at night or during the weekend. By using either Azure portal or Azure Cosmos DB native SDKs or REST API, you can scale your provisioned throughput at any time. Azure Cosmos DB’s REST API provides endpoints to programmatically update the performance level of your containers making it straightforward to adjust the throughput from your code depending on the time of the day or the day of the week. The operation is performed without any downtime, and typically takes effect in less than a minute.
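For example, a minimal sketch of adjusting a container's provisioned throughput with the Python SDK (azure-cosmos v4); the database and container names are assumptions:

from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com", credential="<your-key>")
container = client.get_database_client("retail").get_container_client("orders")

# Scale up before a bulk ingest or a known traffic peak...
container.replace_throughput(4000)

# ...and back down afterwards so you only pay for what you need.
container.replace_throughput(400)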
One scenario in which you should scale throughput up is when you ingest data into Azure Cosmos DB, for example, during data migration. Once you have completed the migration, you can scale provisioned throughput down to handle the solution's steady state.
Remember, the billing is at the granularity of one hour, so you will not save any money if you change your provisioned throughput more often than one hour at a time.
Determine the throughput needed for a new workload
To determine the provisioned throughput for a new workload, you can use the following steps:
Perform an initial, rough evaluation using the capacity planner and adjust your estimates with the help of the Azure Cosmos Explorer in the Azure portal.
It's recommended to create the containers with higher throughput than expected and then scaling down as needed.
It's recommended to use one of the native Azure Cosmos DB SDKs to benefit from automatic retries when requests get rate-limited. If you’re working on a platform that is not supported and use Cosmos DB’s REST API, implement your own retry policy using the
x-ms-retry-after-msheader.
Make sure that your application code gracefully supports the case when all retries fail.
You can configure alerts from the Azure portal to get notifications for rate-limiting. You can start with conservative limits like 10 rate-limited requests over the last 15 minutes and switch to more eager rules once you figure out your actual consumption. Occasional rate-limits are fine, they show that you’re playing with the limits you’ve set and that’s exactly what you want to do.
Use monitoring to understand your traffic pattern, so you can consider the need to dynamically adjust your throughput provisioning over the day or a week.
Monitor your provisioned vs. consumed throughput ratio regularly to make sure you have not provisioned more throughput than your containers and databases require. Having a little over-provisioned throughput is a good safety check.
Best practices to optimize provisioned throughput
The following steps help you to make your solutions highly scalable and cost-effective when using Azure Cosmos DB.
If you have significantly over-provisioned throughput across containers and databases, you should review provisioned vs. consumed RUs and fine-tune the workloads.
One method for estimating the amount of reserved throughput required by your application is to record the request unit (RU) charge associated with running typical operations against a representative Azure Cosmos container or database used by your application, and then estimate the number of operations you anticipate performing each second. Be sure to measure and include typical queries and their usage as well. To learn how to estimate RU costs of queries programmatically or using the portal, see Optimizing the cost of queries.
Another way to get operations and their costs in RUs is by enabling Azure Monitor logs, which will give you the breakdown of operation/duration and the request charge. Azure Cosmos DB provides request charge for every operation, so every operation charge can be stored back from the response and then used for analysis.
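For example, with the Python SDK the request charge of the last operation is exposed through the response headers, so you can log it per operation. A minimal sketch; the account, database, container, item id, and partition key value are placeholders:

from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com", credential="<your-key>")
container = client.get_database_client("retail").get_container_client("orders")

# Read an item, then inspect how many RUs the operation consumed.
item = container.read_item(item="order-1001", partition_key="customer-42")
charge = container.client_connection.last_response_headers["x-ms-request-charge"]
print(f"Read consumed {charge} RUs")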
You can elastically scale up and down provisioned throughput as you need to accommodate your workload needs.
You can add and remove regions associated with your Azure Cosmos account as you need and control costs.
Make sure you have an even distribution of data and workloads across the logical partitions of your containers. If you have an uneven partition distribution, it may cause you to provision more throughput than is actually needed. If you identify a skewed distribution, we recommend redistributing the workload evenly across the partitions or repartitioning the data.
If you have many containers and these containers do not require SLAs, you can use the database-based offer for the cases where the per container throughput SLAs do not apply. You should identify which of the Azure Cosmos containers you want to migrate to the database level throughput offer and then migrate them by using a change feed-based solution.
Consider using the “Cosmos DB Free Tier” (free for one year), Try Cosmos DB (up to three regions) or downloadable Cosmos DB emulator for dev/test scenarios. By using these options for test-dev, you can substantially lower your costs.
You can further perform workload-specific cost optimizations – for example, increasing batch-size, load-balancing reads across multiple regions, and de-duplicating data, if applicable.
With Azure Cosmos DB reserved capacity, you can get discounts of up to 65% with a three-year commitment. The Azure Cosmos DB reserved capacity model is an upfront commitment on the request units needed over time. The discounts are tiered such that the more request units you use over a longer period, the more your discount will be. These discounts are applied immediately. Any RUs used above your provisioned values are charged based on the non-reserved capacity cost. See Cosmos DB reserved capacity for more details. Consider purchasing reserved capacity to further lower your provisioned throughput costs.
Next steps
Next you can proceed to learn more about cost optimization in Azure Cosmos DB with the following articles:
- Learn more about Optimizing for development and testing
- Learn more about Understanding your Azure Cosmos DB bill
- Learn more about Optimizing storage cost
- Learn more about Optimizing the cost of reads and writes
- Learn more about Optimizing the cost of queries
- Learn more about Optimizing the cost of multi-region Azure Cosmos accounts | https://docs.microsoft.com/en-ca/azure/cosmos-db/optimize-cost-throughput | 2020-08-03T22:22:13 | CC-MAIN-2020-34 | 1596439735833.83 | [array(['media/optimize-cost-throughput/monitoring.png',
'Monitor request units in the Azure portal'], dtype=object)] | docs.microsoft.com |
2.2.1.17 Client Persistent Key List PDU
The Persistent Key List PDU is an RDP Connection Sequence PDU sent from client to server during the Connection Finalization phase of the RDP Connection Sequence (see section 1.3.1.1 for an overview of the RDP Connection Sequence phases). A single Persistent Key List PDU or a sequence of Persistent Key List PDUs MUST be sent after transmitting the Client Control (Request Control) PDU (section 2.2.1.16) if the client has bitmaps that were stored in a Persistent Bitmap Cache (section 3.2.1.14), the server advertised support for the Bitmap Host Cache Support Capability Set (section 2.2.7.2.1), and a Deactivation-Reactivation Sequence is not in progress (see section 1.3.1.3 for an overview of the Deactivation-Reactivation Sequence). The contents of the PDU are described by the Persistent Key List PDU Data structure (section 2.2.1.17.1).
persistentKeyListPduData (variable): The contents of the Persistent Key List PDU, as specified in section 2.2.1.17.1. | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-rdpbcgr/2d122191-af10-4e36-a781-381e91c182b7 | 2020-08-03T20:54:10 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.microsoft.com |
What's new in the iOS client
We regularly update the Remote Desktop client for iOS, adding new features and fixing issues. You'll find the latest updates on this page.
How to report issues
We're committed to making the Remote Desktop client for iOS the best it can be, so we value your feedback. You can report any issues at Help > Report an Issue.
Updates for version 10.0.7
Date published: 4/29/2020
In this update we've added the ability to sort the PC list view (available on iPhone) by name or time last connected.
Updates for version 10.0.6
Date published: 3/31/2020
It's time for a quick update with some bug fixes. Here’s what's new for this release:
- Fixed a number of VoiceOver accessibility issues.
- Fixed an issue where users couldn't connect with Turkish credentials.
- Sessions displayed in the switcher UI are now ordered by when they were launched.
- Selecting the Back button in the Connection Center now takes you back to the last active session.
- Swiftpoint mice are now released when switching away from the client to another app.
- Improved interoperability with the Windows Virtual Desktop service.
- Fixed crashes that were showing up in error reporting.
We appreciate all the comments sent to us through the App Store, in-app feedback, and email. In addition, special thanks to everyone who worked with us to diagnose issues.
Updates for version 10.0.5
Date published: 03/09/20
We've put together some bug fixes and feature updates for this 10.0.5 release. Here's what's new:
- Launched RDP files are now automatically imported (look for the toggle in General settings).
- You can now launch iCloud-based RDP files that haven't been downloaded in the Files app yet.
- The remote session can now extend underneath the Home indicator on iPhones (look for the toggle in Display settings).
- Added support for typing composite characters with multiple keystrokes, such as é.
- Added support for the iPad on-screen floating keyboard.
- Added support for adjusting properties of redirected cameras from a remote session.
- Fixed a bug in the gesture recognizer that caused the client to become unresponsive when connected to a remote session.
- You can now enter App Switching mode with a single swipe up (except when you're in Touch mode with the session extended into the Home indicator area).
- The Home indicator will now automatically hide when connected to a remote session, and will reappear when you tap the screen.
- Added a keyboard shortcut to get to app settings in the Connection Center (Command + ,).
- Added a keyboard shortcut to refresh all workspaces in the Connection Center (Command + R).
- Hooked up the system keyboard shortcut for Escape when connected to a remote session (Command + .).
- Fixed scenarios where the Windows on-screen keyboard in the remote session was too small.
- Implemented auto-keyboard focus throughout the Connection Center to make data entry more seamless.
- Pressing Enter at a credential prompt now results in the prompt being dismissed and the current flow resuming.
- Fixed a scenario where the client would crash when pressing Shift + Option + Left, Up, or Down arrow key.
- Fixed a crash that occurred when removing a SwiftPoint device.
- Fixed other crashes reported to us by users since the last release.
We'd like to thank everyone who reported bugs and worked with us to diagnose issues.
Updates for version 10.0.4
Date published: 02/03/20
It's time for another update! We want to thank everyone who reported bugs and worked with us to diagnose issues. Here's what's new in this version:
- Confirmation UI is now shown when deleting user accounts and gateways.
- The search UI in the Connection Center has been slightly reworked.
- The username hint, if it exists, is now shown in the credential prompt UI when launching from an RDP file or URI.
- Fixed an issue where the extended on-screen keyboard would extend underneath the iPhone notch.
- Fixed a bug where external keyboards would stop working after being disconnected and reconnected.
- Added support for the Esc key on external keyboards.
- Fixed a bug where English characters appeared when entering Chinese characters.
- Fixed a bug where some Chinese input would remain in the remote session after deletion.
- Fixed other crashes reported to us by users since the last release.
We appreciate all your comments sent to us through the App Store, in-app feedback, and email. We'll continue focusing on making this app better with each release.
Updates for version 10.0.3
Date published: 01/16/20
It's 2020 and time for our first release of the year, which means new features and bug fixes. Here's what is included in this update:
- Support for launching connections from RDP files and RDP URIs.
- Workspace headers are now collapsible.
- Zooming and panning at the same time is now supported in Mouse Pointer mode.
- A press-and-hold gesture in Mouse Pointer mode will now trigger a right-click in the remote session.
- Removed force-touch gesture for right-click in Mouse Pointer mode.
- The in-session switcher screen now supports disconnecting, even if no apps are connected.
- Light dismiss is now supported in the in-session switcher screen.
- PCs and apps are no longer automatically reordered in the in-session switcher screen.
- Enlarged the hit test area for the PC thumbnail view ellipses menu.
- The Input Devices settings page now contains a link to supported devices.
- Fixed a bug that caused the Bluetooth permissions UI to repeatedly appear at launch for some users.
- Fixed other crashes reported to us by users since the last release.
Updates for version 10.0.2
Date published: 12/20/19
We've been working hard to fix bugs and add useful features. Here's what's new in this release:
- Support for Japanese and Chinese input on hardware keyboards.
- The PC list view now shows the friendly name of the associated user account, if one exists.
- The permissions UI in the first-run experience is now rendered correctly in Light mode.
- Fixed a crash that happened whenever someone pressed the Option and Up or Down arrow keys at the same time on a hardware keyboard.
- Updated the on-screen keyboard layout used in the password prompt UI to make finding the Backslash key easier.
- Fixed other crashes reported to us by users since the last release.
Updates for version 10.0.1
Date published: 12/15/19
Here's what's new in this release:
- Support for the Windows Virtual Desktop service.
- Updated Connection Center UI.
- Updated in-session UI.
Updates for version 10.0.0
Date published: 12/13/19
It's been well over a year since we last updated the Remote Desktop Client for iOS. However, we're back with an exciting new update, and there will be many more updates to come on a regular basis from here on out. Here's what's new in version 10.0.0:
- Support for the Windows Virtual Desktop service.
- A new Connection Center UI.
- A new in-session UI that can switch between connected PCs and apps.
- New layout for the auxiliary on-screen keyboard.
- Improved external keyboard support.
- SwiftPoint Bluetooth mouse support.
- Microphone redirection support.
- Local storage redirection support.
- Camera redirection support (only available for Windows 10, version 1809 or later).
- Support for new iPhone and iPad devices.
- Dark and light theme support.
- Control whether your phone can lock when connected to a remote PC or app.
- You can now collapse the in-session connection bar by pressing and holding the Remote Desktop logo button.
Updates for version 8.1.42
Date published: 06/20/2018
- Bug fixes and performance improvements.
Updates for version 8.1.41
Date published: 03/28/2018
- Updates to address CredSSP encryption oracle remediation described in CVE-2018-0886.
How to report issues
We're committed to making this app the best it can be and value your feedback. You can report issues to us by navigating to Settings > Report an Issue in the client. | https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/ios-whatsnew | 2020-08-03T20:11:45 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.microsoft.com |
Table of Contents
Product Index
Make your characters even deadlier with Ninjitsu for Genesis 8 Female.
This complete Ninja outfit for Genesis 8 Female includes all traditional ninja garb such as mask, tunic, robes, undershirt, pants, and shoes, plus an included sword and sheath.
Shhh… sneak up on your target, and win glory with the Ninjitsu. | http://docs.daz3d.com/doku.php/public/read_me/index/66841/start | 2020-08-03T21:32:11 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.daz3d.com |
Product Index
Get more out of your Silk Shadow Outfit with dForce Silk Shadow Outfit textures.
These 6 new complete sets of textures and Iray materials will make your character look her best in red, blue, purple, black and metal finishes.
Get your beautiful and skilled warrior more options with dForce Silk Shadow Outfit. | http://docs.daz3d.com/doku.php/public/read_me/index/67215/start | 2020-08-03T20:12:35 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.daz3d.com |
WindowsFormsSettings.FormThickBorder Property
Gets or sets whether all XtraForm and RibbonForm forms in the application should display increased borders.
Namespace: DevExpress.XtraEditors
Assembly: DevExpress.Utils.v20.1.dll
Declaration
public static bool FormThickBorder { get; set; }
Public Shared Property FormThickBorder As Boolean
Property Value

true to display thick borders on all DevExpress forms in the application; otherwise, false.

Remarks

Thin form borders can be difficult to grab when resizing a form with the mouse in the following cases:
- When you manually disable the XtraForm.FormBorderEffect property.
- When forms are child MDI forms.
- When a user accesses the application through a Remote Desktop connection.
In these cases, you can increase form borders to broaden the resize area.
To do that, enable either the FormThickBorder or WindowsFormsSettings.MdiFormThickBorder property depending on whether you want to increase borders for all DevExpress forms, or only those that serve as child MDI forms.
Use the WindowsFormsSettings.ThickBorderWidth property to change the thick borders' width. | https://docs.devexpress.com/WindowsForms/DevExpress.XtraEditors.WindowsFormsSettings.FormThickBorder | 2020-08-03T21:59:33 | CC-MAIN-2020-34 | 1596439735833.83 | [array(['/WindowsForms/images/form-thickness-false.png', None],
dtype=object)
array(['/WindowsForms/images/form-thickness-true-2.png', None],
dtype=object) ] | docs.devexpress.com |
Create an FTP export
- To configure the FTP export, navigate to Tools menu > Flow builder and click on + Source applications.
- Select the source application as FTP from the application dropdown list and the required FTP connection.
- Click Next.
- Create or select an FTP connection.
General
- Name (required): Name your export so that you can easily reference it from other parts of the application.
- Description (optional): Add an easily-understood description to the export. This is to help other users understand what it’s doing without having to read through all the fields and settings.
How would you like to parse files?
Here we present parsing a CSV file, but you can select from many supported file types.
- File Type (required): Select the type of files being exported from the FTP Server. We support the following formats: CSV, JSON, XLSX, XML, EDI X12, Fixed width (FW), and EDIFACT. For example, choose CSV if you are exporting CSV files (i.e. files containing comma-separated values). Choose EDI X12 or EDIFACT if you are exporting EDI files in the EDI X12 or EDIFACT formats.
- Sample File (that would be parsed) (required): Select the sample file from your local computer that you would like to import. The sample file should contain all the columns/fields that you require. Celigo integrator.io uses this file to automap fields. The maximum file size allowed is currently 100 MB.
- CSV parser helper (optional): Test your CSV formatting to ensure it aligns with your needs.
- Column Delimiter: Select the column delimiter that you are using for the files being exported. We support comma, space, semicolon, pipe, or tab delimiters.
- Row Delimiter: Select the row delimiter to be used in the file to be exported.
Ex: '\n', '\r\n', '\r' (Default is CRLF).
- Trim spaces: Choose True if you would like to remove all leading and trailing whitespaces in your column data.
- File has header: Choose True if the files you are exporting contain a top-level header row. For example, if the very first row in the CSV files being exported is reserved for column names (and not actual data) then set this field to True.
- No of rows to skip: Specify the number of rows to skip if you want to skip the header rows from your file.
- Multiple rows per record: Check this box if your CSV or XLSX file contains multiple records that can be grouped together, such as line items of a sales order.
- Key columns: If multiple rows of data represent a single object (sales order line items for example), you can group these rows into a single export record. In this case, you can provide one or more columns in the source data that should be used to group related records. Typically, this would be the ID column of the parent object, such as a sales order ID. A simplified illustration follows this list.
- Directory Path (required): This field determines the location of files to be exported from the FTP host. For example, \Parent Directory\Child Directory.
- Output mode (optional): This field determines what type of information will be returned by the file export. It can be set to Records or Blob keys as per the requirement. For most use cases, you would use the Records option. Choose blob keys if you need to transfer files without modifying or transforming them. Output mode is being replaced with the concept of transferring files in the beta.
- File Name Starts With (optional): Use this field to specify a file name prefix that will be used to filter which files in the FTP folder should be exported (vs not). If you want to parse all the files in the directory, then you can leave this field as blank. For example, if you set this value to TestFile, then only files where the name starts with TestFile will be exported (e.g., TestFile.csv). This prefix is case-sensitive.
- File Name Ends With (optional): Use this field to specify a file name suffix that will be used to filter which files in the FTP folder will be exported (vs not). For example, if you set this value to "End.CSV" then only files where the name ends with "End.CSV" will be exported (like myFile-END.CSV). This suffix is case-sensitive. Also specify the file extension for this filter to work correctly.
- CSV to parse: This is being replaced with the export panel, which enables you to test an export and check formatting on-the-fly (available in our beta). In our current UI, this field will show the data from your uploaded sample file.
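To illustrate the Key columns setting above, suppose the exported CSV contains order lines like the following (the column names and values are made up for illustration):

order_id,sku,quantity
SO-1001,WIDGET-A,2
SO-1001,WIDGET-B,1
SO-1002,WIDGET-A,5

With Key columns set to order_id, the first two rows are grouped into one export record for SO-1001 (with two line items) and the third row becomes a separate record for SO-1002. Conceptually the grouped record looks something like this, although the exact field layout produced by integrator.io may differ:

{
  "order_id": "SO-1001",
  "lines": [
    { "sku": "WIDGET-A", "quantity": "2" },
    { "sku": "WIDGET-B", "quantity": "1" }
  ]
}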
Preview data
- Parsed result: This is being replaced with the export panel which enables you to test an export and check formatting on-the-fly, which is available in our beta. In our current UI, this will display the result generated after applying your settings. In the beta, you can view rich preview information to ensure that your export is working before running your flow.
Advanced settings
- Decompress files: If the files you are exporting are in a compressed format then please set this field to True.
- Leave file on server: If this field is set to True then integrator.io will NOT delete files from the export application after an export completes. The files will be left on the export application, and if the export process runs again, the same files will be exported again.
- File encoding: The file encoding indicates how the individual characters in your data are represented on the file system. Depending on the source system of the data, the encoding can take on different formats. The default encoding is utf-8.
- Page size: Specify how many records you want in each page of data. The default Page size value (when you leave this field blank) is 20.
- Data URI template: When your flow runs but has data errors, this field can be really helpful: it allows you to make sure that all the errors in your job dashboard have a link back to the original data in the export application. This field uses a handlebars template to generate the dynamic links based on the data being exported. For example, if you are exporting a customer record from Shopify, you would most likely set this field to a URL template ending in {{{id}}}. Or, if you are just exporting a CSV file from an FTP site, then this field could simply be one or more columns from the file, such as {{{internal_id}}} or {{{email}}}.
- Batch size: Set this field to limit the number of files processed in a single batch request. Setting this field will not limit the total number of files you can process in a flow. This field allows you to optimize for really big files where bigger batches might experience network timeout errors vs. very small files where processing 1000 files in a single batch improves the flow performance. 1000 is the max value.
Save your export
When you’ve tested your export and it’s ready to go, click Save.
Managing Connections using the .NET SDK with Couchbase Server
This section describes how to connect the .NET SDK to a Couchbase cluster. The simplest way is to call Cluster.ConnectAsync() with a connection string, username, and password:

var cluster = await Cluster.ConnectAsync("couchbase://localhost", "username", "password");
var bucket = await cluster.BucketAsync("travel-sample");
var collection = bucket.DefaultCollection();

// You can access multiple buckets using the same Cluster object.
var anotherBucket = await cluster.BucketAsync("beer-sample");

A Couchbase connection string is a comma-delimited list of IP addresses and/or hostnames, optionally followed by a list of parameters. The parameter list is just like the query component of a URI: the first parameter is prefixed by a question mark (?), and name-value pairs are separated by ampersands (&).

A connection string with a single seed node:

127.0.0.1

A connection string with two seed nodes:

nodeA.example.com,nodeB.example.com

A connection string with a seed node and two client settings:

127.0.0.1?io.networkResolution=external&timeout.kvTimeout=10s

The SDK also provides a NuGet package to help you get started with Dependency Injection (DI).

Additionally, in certain situations ClusterOptions.IgnoreRemoteCertificateNameMismatch may also need to be set to true.
Unable to create DSN for Microsoft Office System Driver on 64-bit versions of Windows
Note
Office 365 ProPlus is being renamed to Microsoft 365 Apps for enterprise. For more information about this change, read this blog post.
Symptoms
When attempting to create ODBC connections that utilize the Microsoft Office System Driver, such as connections to Access or Excel, on a 64-bit Operating system like Windows 7, the drivers are not visible. They are not visible in the standard ODBC Administrator dialog that is launched from the Administrative Tools dialog in the Control Panel.
Cause
This occurs when the 32-bit version of Office or the 32-bit Office System Drivers is installed on a 64-bit version of Windows. In 64-bit versions of Windows, there is a separate ODBC Administrator used to manage 32-bit drivers and DSNs.
Resolution
To locate the 32-bit Office System drivers, run the 32-bit ODBC Administrator tool from the SysWOW64 folder. For example, the default location on a Windows 7 64-bit machine is "C:\Windows\SysWOW64\odbcad32.exe".
More Information
On a 64-Bit Windows operating system, there are two versions of the ODBC Administrator tool. The 64-bit ODBC Administrator tool is the default dialog that is launched from the control panel and is used to manage the 64-bit drivers and DSNs on the machine. The second ODBC Administrator tool to manage the 32-bit drivers and DSNs on the machine can be launched from the SysWow64 folder.
To determine whether Office 2010 64-bit or 32-bit is installed, take the following steps:
- Open an Office application like Excel.
- Click on the File tab in the upper left corner.
- Select Help on the left-hand side
- Underneath "About Microsoft Excel", you will see a version number and in parentheses 32-bit or 64-bit will be listed.
Note: All Office versions prior to Office 2010 can only be installed as 32-bit applications.
As a rule of thumb: if the Office driver is 32-bit and Windows is 64-bit, use the 32-bit ODBC Administrator at %windir%\SysWOW64\odbcad32.exe; if the driver bitness matches the Windows bitness (64-bit Office on 64-bit Windows, or 32-bit Office on 32-bit Windows), use the default ODBC Administrator launched from Administrative Tools.
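The same bitness rule applies when connecting programmatically instead of through a DSN. The following sketch is not from the original article and the database path is an assumption; it only works when the Python process bitness matches the installed driver bitness (for example, 32-bit Python for the 32-bit Access driver):

import pyodbc

# A DSN-less connection using the Office System (Access) ODBC driver.
# This fails with a "driver not found" style error if the process and
# driver bitness do not match.
conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\Data\Northwind.accdb;"
)
cursor = conn.cursor()
print(cursor.execute("SELECT COUNT(*) FROM Customers").fetchone()[0])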
For more information about known issues with using the 32-bit and 64-bit ODBC Administrator tool view the following article:
942976
For more information on the 2010 Office System Drivers view the following article:
Microsoft Access Database Engine 2010 Redistributable | https://docs.microsoft.com/en-us/office/troubleshoot/office-suite-issues/cant-create-dsn-for-office-system-driver | 2020-08-03T21:48:35 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.microsoft.com |
Configure RBAC roles in Microsoft Azure
For Alert Logic to protect assets in Microsoft Azure, you must create a user account with specific permissions. Role-Based Access Control (RBAC) enables fine-grained access management for Azure accounts. Assignment of a RBAC role to the user account you create grants only the amount of access required to allow Alert Logic to monitor your environments.
This procedure requires administrative permissions in Azure, and the installation of one of the following command line interfaces:
- Azure Command Line Interface (CLI) 2.0
- Azure PowerShell
To configure your RBAC role in Azure:
Create a user account in Azure
- In the Azure Resource Menu, click Domain names, and then make note of the primary Active Directory domain name.
- In the Azure Resource Menu, click Overview.
- In the Azure Active Directory blade, under Quick tasks, click Add a user.
- Enter a Name (for example, AL Cloud Defender).
- Enter a User name. The user name should be in the form of an email address based on the Active Directory (for example, [email protected]).
- Select the Show Password check box and make note of the password.
- Click Create. Profile and Properties can be set if needed.
- Back on the Azure Active Directory blade, click Users and groups.
- Click All users, and ensure the new user name appears in alphabetical order in the list.
- Open a new browser window.
- Log in to the Azure portal as the new user.
- At the prompt, change the password for the user.
Create a custom RBAC role
RBAC roles enable fine-grained access management for Azure. After you create a user account, you must assign an RBAC role to the user. The Alert Logic RBAC role grants only the amount of access required to monitor your environments.
To create a custom RBAC role:
To create a role document:
- Create a new text file and copy the Alert Logic RBAC role into it.
- Make the following changes to the file:
- In the Name field, change the entry to the user name for the user account you just created.
- In the AssignableScopes field, change the <subscription ID> value to the Subscription ID value found on your Azure portal Subscriptions blade.
- Save the text file as a .JSON file.
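The role document is a standard Azure custom role definition in JSON. A sketch of its general shape is shown below; the Name matches the example user created earlier, <subscription ID> comes from your Subscriptions blade, and the Actions entries are placeholders, not the actual permission list provided by Alert Logic:

{
  "Name": "AL Cloud Defender",
  "IsCustom": true,
  "Description": "Access required for Alert Logic monitoring",
  "Actions": [
    "...permissions provided in the Alert Logic RBAC role..."
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription ID>"
  ]
}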
To create a custom role in Azure:
- Open either Azure CLI or Azure PowerShell and log in to your Azure account and specify the default subscription.
az login
az account set -s [your subscription name or id]
Login-AzureRmAccount
Get-AzureRmSubscription -SubscriptionName [your subscription name] | Select-AzureRmSubscription
- Create your custom role in Azure.
az role definition create --role-definition [path to the permissions file]
New-AzureRmRoleDefinition -InputFile [path to the permissions file]
- In the Azure portal, verify that the new role appears in the Roles tab in Subscriptions > Access Control (IAM).
Assign the role to the user account
Once the RBAC role is created, it must be assigned to the user account. In Azure, roles are assigned in the Access Control portion of the Subscriptions blade.
- In the Azure Navigation Menu, click Subscriptions.
- In the Subscriptions blade, select the subscription you want Alert Logic to protect and then click Access Control (IAM).
- Above the list of users, click +Add.
- In the Add access blade, select the created RBAC role from those listed.
- In the Add users blade, enter the user account name in the search field and select the user account name from the list.
- Click Select.
- Click OK. | https://legacy.docs.alertlogic.com/gsg/Azure-environ-in-Cloud-Defender.htm | 2020-08-03T21:22:13 | CC-MAIN-2020-34 | 1596439735833.83 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | legacy.docs.alertlogic.com |
CreateListener
Create a listener to process inbound connections from clients to an accelerator. Connections arrive to assigned static IP addresses on a port, port range, or list of port ranges that you specify. To see an AWS CLI example of creating a listener, scroll down to Example.
Request Syntax
{ "AcceleratorArn": "
string", "ClientAffinity": "
string", "IdempotencyToken": "
string", "PortRanges": [ { "FromPort":
number, "ToPort":
number} ], "Protocol": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- AcceleratorArn
The Amazon Resource Name (ARN) of your accelerator.
Type: String
Length Constraints: Maximum length of 255.
Required: Yes
- ClientAffinity

Client affinity lets you direct all requests from a user to the same endpoint, which is useful for stateful applications. The default value is NONE, which distributes traffic based on a flow hash; set SOURCE_IP to route requests from a given source IP address to the same endpoint.
Type: String
Valid Values:
NONE | SOURCE_IP
Required: No
- IdempotencyToken
A unique, case-sensitive identifier that you provide to ensure the idempotency—that is, the uniqueness—of the request.
Type: String
Length Constraints: Maximum length of 255.
Required: Yes
- PortRanges
The list of port ranges to support for connections from clients to your accelerator.
Type: Array of PortRange objects
Array Members: Minimum number of 1 item. Maximum number of 10 items.
Required: Yes
- Protocol
The protocol for connections from clients to your accelerator.
Type: String
Valid Values:
TCP | UDP
Required: Yes
Response Syntax
{ "Listener": { "ClientAffinity": "string", "ListenerArn": "string", "PortRanges": [ { "FromPort": number, "ToPort": number } ], "Protocol": PortRangeException
The port numbers that you specified are not valid numbers or are not unique for this accelerator.
HTTP Status Code: 400
- LimitExceededException
Processing your request would cause you to exceed an AWS Global Accelerator limit.
HTTP Status Code: 400
Example
Create a listener
The following is an example of creating a listener, and the response.
aws globalaccelerator create-listener --accelerator-arn arn:aws:globalaccelerator::012345678901:accelerator/1234abcd-abcd-1234-abcd-1234abcdefgh --port-ranges FromPort=80,ToPort=80 FromPort=81,ToPort=81 --protocol TCP --region us-west-2
{ "Listener": { "PortRanges": [ { "ToPort": 80, "FromPort": 80 }, { "ToPort": 81, "FromPort": 81 } ], "ClientAffinity": "NONE", "Protocol": "TCP", "ListenerArn": "arn:aws:globalaccelerator::012345678901:accelerator/1234abcd-abcd-1234-abcd-1234abcdefgh/listener/0123vxyz" } }
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/global-accelerator/latest/api/API_CreateListener.html | 2020-08-03T20:44:54 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.aws.amazon.com |
The Bokeh server is an optional component that can be used to provide additional capabilities, such as publishing plots for wider audiences and streaming data to automatically updating plots.
The Bokeh server is built on top of Flask, specifically as a
Flask Blueprint. You can embed the Bokeh server functionality inside
a Flask application, or deploy the server in various configurations
(described below), using this blueprint. The Bokeh library also ships
with a standalone executable
bokeh-server that you can easily run to
try out server examples, for prototyping, etc.; however, it is not intended
for production use.
The basic task of the Bokeh Server is to be a mediator between the original data and plot models created by a user, and the reflected data and plot models in the BokehJS client:
Here you can see illustrated the most useful and compelling feature of the Bokeh server: full two-way communication between the original code and the BokehJS plot. Plots are published by sending them to the server. The data for the plot can be updated on the server, and the client will respond and update the plot. Users can interact with the plot through tools and widgets in the browser, then the results of these interactions can be pulled back to the original code to inform some further query or analysis (possibly resulting in updates pushed back to the plot).
We will explore the capabilities afforded by the Bokeh server in detail below.
The core architecture of
bokeh-server develops around 2 core models:
Document
User
A User controls authentication information at the user level and both models
combined determine the authorization information regarding user
documents
that are private, so can be accessed only by the user, or public.
One thing to keep in mind when interacting with bokeh-server is that every session open to the server implies that a user is logged in to the server. More information about this can be found in the Authenticating Users paragraph below.
If Bokeh was installed by running python setup.py or using a conda package, then the
bokeh-server command should be available and you can run it from any directory.
bokeh-server
Note
This will create temporary files in the directory in which you are running it.
You may want to create a
~/bokehtemp/ directory or some such, and run the
command there
If you have Bokeh installed for development mode (see Building and Installing), then you should go into the checked-out source directory and run:
python ./bokeh-server
Note
bokeh-server accepts many input argument options that let the user customize
it’s configuration. Although we will use a few of those in this section we highly
encourage the user to run
bokeh-server -h for more details.
Now that we have learned how to run the server, it’s time to start using it!
In order to use our running
bokeh-server we need to create a plot and store it
on the server.
It’s possible to do it by using the
Document and the
Session objects.
The former can be considered as a
namespace object that holds the plot
information while the latter will take care of connecting and registering the
information on the server. It also acts as an open channel that can be used
to send/receive changes to/from the server.
As usual, the
bokeh.plotting interface provides a set of useful shortcuts
that can be used for this. The result is that creating a line plot as a static
html file is not so different than creating it on a
bokeh-server, as we can
see on the following example:
from bokeh.plotting import figure, output_server, show

output_server("line")  # THIS LINE HAS CHANGED!

p = figure(plot_width=400, plot_height=400)

# add a line renderer
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width=2)

show(p)
As mentioned before
bokeh-server does implement the concept of authentication.
At this point one could raise the following question: Really? So why wasn't I asked
to log in to register the plot I've created in the previous section?
This is a good question and the reason is because
bokeh-server defaults to
single user mode when launched. This is very important to keep in mind: when in
single user mode every request is automatically logged in as a user with username
defaultuser.
However for teams, and for plot publishing (see Publishing to the Server for
more details), it makes more sense to add an authentication layer. This way
users won’t be able to overwrite each other’s plots. To do enable multi user
mode, you need to turn on the multi_user bokeh server setting by using the
command line parameter
-m. Once this is done, all scripts that use the
bokeh server must authenticate with the bokeh server.
Once again the
Session object can be used to create or login users to the
server.
An user can be created with the following python code:
session = Session(root_url=url)
session.register(username, password)
or login with:
session = Session(root_url=url)
session.login(username, password)
Note
The session information is stored by default in the ~/.bokeh directory, so logging in is not necessary in subsequent invocations.
As mentioned earlier, when running in multi user mode, a plot must be published so that different logged users can access it. This can be done, again, using the session object as the following snipped shows:
output_server('myplot')
# make some plots
cursession().publish()
A public link to a plot on the bokeh server page can be viewed by appending
?public=true to the URL; for example, if you have the URL to a
plot,
you can generate a public link to the published plot by appending that query parameter to its URL.
Note
In addition, the autoload_server function call in bokeh.embed shown in server data also takes a public=true keyword argument, which will generate an embeddable html snippet that will load the public version of a given plot
Streaming data to automatically update plots is very straightforward
using
bokeh-server. As seen previously,
the Session object exposes
the
session.store_objects method that can be used to update objects
on the server (and consequently on the browser) from your python code.
Here’s a simple example:
import time
from random import shuffle
from bokeh.plotting import figure, output_server, cursession, show

# prepare output to server
output_server("animated_line")

p = figure(plot_width=400, plot_height=400)
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], name='ex_line')
show(p)

# create some simple animation..
# first get our figure example data source
renderer = p.select(dict(name="ex_line"))
ds = renderer[0].data_source

while True:
    # Update y data of the source object
    shuffle(ds.data["y"])
    # store the updated source on the server
    cursession().store_objects(ds)
    time.sleep(0.5)
Notice that in order to update the plot values we only need to update its
datasource and store it on the server using the
session object. | https://docs.bokeh.org/en/0.9.3/docs/user_guide/server.html | 2020-08-03T20:30:09 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.bokeh.org |
Appearance
The Appearance tab offers many color selections, the first being the plot area background color.
For Plot Colors, you can change the colors for points and some network-specific options, for desktop and VR separately. The network graph options include standard edge color, 2D Maps edge color, edge insight color (highlight color when a network insight is selected), and edge alpha (opacity).
The Grid Box Colors section allows you to change the gridbox color, axis label color, and tickmark color, also for desktop and VR separately.
Note: Users can click on the ‘Reset’ button to reset their appearance settings. | https://docs.virtualitics.com/getting-started/preferences-appearance/ | 2020-08-03T21:04:28 | CC-MAIN-2020-34 | 1596439735833.83 | [array(['https://docs.virtualitics.com/wp-content/uploads/2018/05/Appearance_.png',
None], dtype=object) ] | docs.virtualitics.com |
Buildable
Next up, we’ll make a buildable. A buildable is simply anything you will be able to place with the
building gun. Even if it is just a simple foundation, it is still a
FGBuildable.
A buildable is fairly similar to an Item with a few more values to configure and a few to ignore, plus a hologram to control the actor class.
You need to know that the center is the position that the build gun targets.
You can use this example Mesh.
This Actor is also where you can define the snapping mesh and the clearance mesh, but we won't do that for now. More information about how this works can be found on the Paintable BP-Class page.
How to change the Request List View?
To change the request list view, follow the steps given below:
1. Click MY REQUESTS in the Main Menu.
The MY REQUESTS page will be displayed.
2. Click the list view
icon.
The dropdown list will be displayed.
3. Click required list view.
The selected request list view will be displayed. The below screenshot shows My Requests list view.
| https://docs.flowz.com/article/how-to-change-the-request-list-view.html | 2020-08-03T20:49:56 | CC-MAIN-2020-34 | 1596439735833.83 | [array(['http://phpkb.178.128.146.163.nip.io:9091/assets/changetheRequestListView1.png',
None], dtype=object)
array(['http://phpkb.178.128.146.163.nip.io:9091/assets/changetheRequestListView2.png',
None], dtype=object)
array(['http://phpkb.178.128.146.163.nip.io:9091/assets/changetheRequestListView3.png',
None], dtype=object) ] | docs.flowz.com |
Exact Setup
In order to authenticate the connector, you will need a Client ID and Client Secret.
First, you should register an app with Exact.
How to register your app
- Go to the Exact Online App Centre.
- From the country menu in the upper right corner of the website, select the preferred language.
- Be aware that selecting a country defines the Exact Online website.
- Click Login.
- Enter your user name and password and click Login.
- When you are authenticated you will be redirected to the App Centre.
- Click Manage my apps
Click one of the options below:
Register for a testing app: Your app will not be visible in the App Centre and cannot be used by external clients.
Register a production app: Your app can be used by external clients. You have the option to publish your app in the App Centre, once it has been approved by your partner manager.
Enter a unique app name.
A unique SEO name is automatically generated for the app name. The name of the app you create is unique per country, which means you can have an app with the same name across multiple countries.
For Redirect URI enter: (ServiceDomain should be substituted for your actual service domain)
Your Cyclr service domain, e.g. yourcompany.cyclr.com can be found in your Cyclr Console under Settings > General Settings > Service Domain.
Scroll down to find your Client ID and Client Secret. You can use these to authenticate your Exact connector.
For more information on getting your app set up, visit the official exact docs here. | https://docs.cyclr.com/exact-connector | 2021-07-24T08:24:29 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.cyclr.com |
Mafia II Joe's Adventures - Playboy Locations Submitted by root on Sat, 02/23/2013 - 20:04 1. Mission "Witness" on the table in the room with the guards, right before the room with the chief witness. 2. Dam Road, south end of the dam, on the ground behind the round building located in the water. 3. Train Station in Dipton, under the stairway to the middle platform, in front of the main doors. 4. Mission "Going out of Business", in the garage behind the first car on the left. 5. Bruski's Scrapyard, walk under the stairs of Mike's beast, next right on the ground. 6. Greenfield safe house, on coffee table. 7. Mission "Connection", on top of the bridge over the train, next to the lever which will open the main gate (missing in the picture... sorry, took it already). 8. Mission "Saving Marty", behind the stairs, at the far end of the platform (under the stairs, unlock the gate first), on table. 9. In Marty's apartment, on the coffee table (missing in the picture... sorry, took it already). 10. Mission "A Lesson in Manners", inside the villa, on the balcony left side. 11. In the mission "Supermarket", room behind the office, after opening the safe. 12. Harry's Gunshop, when entering, in front of you on two crates. 13. Kingston, next to car service near stadium on the gound. 14. North Millville, at the end of the long Greasers road, the building in the right, on the ramp. 15. In the mission "Bet on That", after shoot out in small room left to the bookie's car. 16. Drag Strip bar on the border of Little Italy and North MillVille, on table. 17. In the mission "Piece of Cake", inside parking garage to the right, down the stairs, on the ground. 18. In the mission "Cathouse", on a couch in front of a single cat smoking, dressed in white. 19. Final mission, after auto-save there's a guy upstairs who stands on the magazine. Log in to post comments39016 reads | https://docs.gz.ro/mafia2-joe-adventures-playboy.html | 2021-07-24T08:31:10 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.gz.ro |
Embedded devices almost universally use flash memory for storage. An important property of flash memory cells is that they can only handle a certain amount of writes until they fail (wear out). Wear leveling (distributing writes across the cells) and error correction (avoiding use of failed cells) are strategies used to prolong the life of flash storage devices. These strategies are either transparently implements wear leveling and error correction., but the version of U-Boot you are using must support the file system type used for rootfs because it needs to read the Linux kernel from the file system and start the Linux boot process.
The standard Yocto Project
IMAGE_FSTYPES variable determines the image types
to create in Yocto deploy directory. The meta-mender layer appends the
mender
type to that variable, and usually
sdimg,
uefiimg or a different type ending
with
img, depending on enabled
image features
for the Yocto-build. Selecting a filesystem for the individual partition files
by setting the
ARTIFACTIMG_FSTYPE variable. We advise that you clean up
the
IMAGE_FSTYPES variable to avoid creating unnecessary image files.
Configuring storage for Mender requires setting two variables. The first
variable,
MENDER_STORAGE_DEVICE, configures the expected location on the
device of the storage device. The second,
MENDER_STORAGE_TOTAL_SIZE_MB used to define the sizes of the partitions.
Even though the default size of
MENDER_DATA_PART_SIZE_MB is
128 MB, it will try to resize the partition and filesystem image on first boot to the full size of the underlying block device which is also resized to occupy remainder of available blocks on the storage medium. This functionality relies on systemd-growfs which is not available for all filesystems. See mender-growfs-data feature for more information.
The value of <BOOT> depends on what features are enabled:
mender-ubootenabled:
/uboot
mender-gruband
mender-biosenabled:
/boot/grub
mender-grubenabled:
/boot/efi
You can override these default values in your
local.conf. For details consult Mender image variables.
Deploying a full rootfs image update will wipe all data previously stored on
that partition. To make data persist across updates, applications must use the
partition mounted on
/data. In fact, the Mender client itself uses
/data/mender to preserve data and state.
For example:
do_install() { install -d ${D}/data install -m 0644 persistent.txt ${D}/data/ }
The
meta-mender-demo layer includes a sample recipe,
hello-mender, which deploys a text file to the persistent data partition.
Keep in mind that any files you add to the
/data directory are not included
in
.mender artifacts, since they don't contain a data partition. Only
complete partitioned images (
.biosimg,
.sdimg,
.uefiimg, etc) will
contain the files.
Although it is not needed for most work with Mender, for some flashing setups,
it can be useful to have the sole data partition available as an image file.
In this case, adding
dataimg to the Yocto Project
IMAGE_FSTYPES
variable will make the resulting image file given the
.dataimg suffix. Its
filesystem type will be the value of
ARTIFACTIMG_FSTYPE.
For example:
IMAGE_FSTYPES_append = " dataimg"
Found errors? Think you can improve this documentation? Simply click the Edit link at the top of the page, and then the icon on Github to submit changes.
© 2021 Northern.tech AS | https://docs.mender.io/2.6/system-updates-yocto-project/board-integration/partition-configuration | 2021-07-24T08:34:42 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.mender.io |
We removed our free Sandbox April 25th.
You can read more on our blog.
SMTP.
Introduction¶
Our SMTP service is based on Postfix. It can receive e-mails from your instances, and forward them:
- either to their final destination (e.g., if you send an e-mail to [email protected], send it to GMail servers);
- or to a relay or smarthost like SendGrid, CritSend, or others. dotCloud uses (and recommends!) MailGun for that purpose.
If the destination server is down, instead of bouncing an error message to the sender immediately, the MTA will keep it into a queue, and retry to deliver it at regular intervals. If the delivery still fails after a while (it can be hours or even days, depending of the setup), it will give up and send you back an error message. This mechanism allows for mail servers to be down for short period of time, or to do throttling (stop accepting new messages) when they are overloaded, for instance.
Note
Why should I use a SMTP service?
If SMTP services were bundled with web application servers, you could experience the following issue:
- At normal time, you have 4 web frontends.
- Due to a peak of activity, you scale to 10 frontends.
- Instance #7 sends a notification e-mail to some user.
- The e-mail server of this user is temporarily down, causing instance #7 to spool the message.
- The activity goes back to normal, and you want to scale down.
- Ah, wait, you can’t scale down as long as you have pending messages in instances: they would be lost!
That’s why scalable web applications need to decouple things as much as possible – and SMTP servers are just one of the many places where it is required.
Note
Why should I use a service like MailGun instead of, e.g., SES?
SES is a basic outgoing email service. Mailgun is a complete email platform which one can easily build Gmail-like service on top of.
Even when looking only from outgoing email perspective, Mailgun gives developers full reputation isolation and control over their mailing queue.
Deploying¶
To include a SMTP service in your stack, just add a section like the following one in your Build File:
mailer: type: smtp
Warning
The default configuration will try to send e-mails directly to their destination. A lot of e-mail providers will systematically flag messages coming from the cloud as “spam”. You can still use this default configuration for testing purposes, but when going into production, you should rely on a third-party mail routing service.
You can use a third-party relay (also called a “smarthost”) if it supports standard SMTP routing (most services do). The parameters must be specified in the Build File.
Here is an example with MailGun:
mailer: type: smtp config: smtp_relay_server: smtp.mailgun.org smtp_relay_port: 587 smtp_relay_username: [email protected] smtp_relay_password: YourMailgunPassword
Note
The smtp_relay_port, smtp_relay_username and smtp_relay_password are optional arguments here.
Warning
The config dictionary of a service is applied when your service is launched for the first time. If you change it, you will need to destroy the concerned service and push your application again to apply the changes.
Using Your New SMTP Service¶
There are two ways to use your new SMTP service:
- by entering its address, port, etc. in the local SMTP configuration for each other service (PHP, Ruby, Python, Java, and so on); to relay them to the SMTP service.
- by sending your messages directly to it, from your code.
Note
Why can’t I send my messages directly to, e.g., SendGrid?
Well, you can. As explained above, the whole point of using your own dotCloud SMTP relay is to prevent messages from lingering in the local SMTP queue, where they could be lost if the services scales down. If you are 100% sure that SendGrid will accept your messages 100% of the time, you can configure the local SMTP relays to use SendGrid servers (or send to SendGrid directly from your code).
Get Your SMTP Service Parameters¶
You can retrieve all the parameters with “dotcloud info”, like this:
dotcloud info mailer
The output should look something like the following:
cluster: wolverine config: smtp_password: _|KZ&CKWa[ name: ramen.mailer namespace: ramen ports: - name: smtp url: smtp://dotcloud:_|KZ&CKWa[@mailer.ramen.dotcloud.com:1587 - name: ssh url: ssh://[email protected]:1586 type: smtp
In this case, the parameters are:
- login: dotcloud
- password: _|KZ&CKWa[
- host: mailer.ramen.dotcloud.com
- port: 1587
Configure the Local MTA to Use Your SMTP Service¶
All our services come with a built-in MTA listening on localhost. You can specify a smarthost (SMTP relay) for this MTA when you deploy the service.
Note
You cannot yet reconfigure a running service. You have to destroy it and re-deploy a new one for now. Sorry about that.
The configuration format is the same for all images. A short example is worth one thousand words:
www: type: php config: smtp_server: mailer.ramen.dotcloud.com smtp_port: 1587 smtp_username: dotcloud smtp_password: _|KZ&CKWa[
Note
If you want to use your SendGrid/CritSend/MailGun/etc. account, you can do it there. Just put the right parameters in the right place.
Warning
The config dictionary of a service is applied when your service is launched for the first time. If you change it, you will need to destroy the concerned service and push your application again to apply the changes.
You can then use default mail functions in your code. If a programming language or framework asks you a specific SMTP server address, use “localhost”.
Send Your Messages Directly Through Your SMTP Service¶
If you don’t want to go through the local MTA, you can use the parameters retrieved with dotcloud info. You will need to pass them to the mail() function (or its equivalent) directly in your code.
It is probably easier to configure the local MTA to relay to your SMTP service, since it means that you won’t have to change your code between different environments.
Note
For your reference, the SMTP service only supports the CRAM-MD5 authentication method. Any decent SMTP library will automatically detect and use this authentication method.
Troubleshooting¶
You can check if Postfix is running and display the status of its different queues with:
dotcloud status mailer
You can read Postfix e-mail logs with:
dotcloud logs mailer
Press Ctrl+C to stop watching the logs.
You can always restart Postfix with:
dotcloud restart mailer
Receiving Mails¶
The SMTP service cannot receive messages coming from the outside. However, if you need to handle those, we suggest you have a look at excellent third party services like MailGun. Incoming mails can be:
- routed to a URL in your app;
- stored in a mailbox, which can be accessed using an API.
Don’t hesitate to have a look at MailGun documentation for receiving and parsing mail to see how easy it is to handle incoming mails! | https://docs.dotcloud.com/services/smtp/ | 2014-03-07T11:33:52 | CC-MAIN-2014-10 | 1393999642306 | [] | docs.dotcloud.com |
User Guide
Local Navigation
Tips: Keeping your information safe
You can take some simple steps to help prevent the information on your BlackBerry® device from being compromised, such as avoiding leaving your device unattended. tasks
Next topic: Tips: Updating your software
Previous topic: Tips: Freeing and conserving storage space
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/21510/Tips_keeping_your_info_safe_60_1231379_11.jsp | 2014-03-07T11:40:21 | CC-MAIN-2014-10 | 1393999642306 | [] | docs.blackberry.com |
User Guide
- Getting started
- Messages
- Contacts
- Calendar
- Browser
- BlackBerry Bridge
- Connections
- Connect to a mobile network
-
- Documents To Go
- Print To Go
- BlackBerry App World
- Camera
- Videos
- Music
- Music Store
- Battery and power
- Applications
- Clock
- Video chat
- Video Store
- Bing Maps
- BlackBerry News
- Security
- Podcasts
- Voice notes
- Legal notice
Home > Support > BlackBerry Manuals & Help > BlackBerry Manuals > BlackBerry PlayBook tablet > User Guide BlackBerry PlayBook Tablet - 2.0
Bluetooth technology
Related information
Next topic: Connect a Bluetooth enabled device
Previous topic: I can't connect to a Wi-Fi network
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/27018/Bluetooth_1401442_11.jsp | 2014-03-07T11:36:32 | CC-MAIN-2014-10 | 1393999642306 | [] | docs.blackberry.com |
You've reached the documentation site for the Marmalade Project.
Caveat
Marmalade is extremely experimental as of now. We believe the core functionality to be reasonably stable, with decent unit test coverage. However, since this is still alpha software, the core api could and probably will change with successive releases, so exercise reasonable caution before including Marmalade in your production software plans. You might want to join the -user mailing list, too.
QuickLinks
- Go straight to the usage tips.
News
2005-Mar-08
I've been working hard on marmalade lately, getting it ready to replace jelly in maven2...in fact, I've been working too hard to update this content. Sorry.
- I've improved the usability of Marmalade by leaps and bounds, through the creation of a MarmaladeLauncher which provides easy, complete access to configuration of both the parsetime and runtime contexts at one go.
- In related (and obvious) news, Marmalade is now completely embeddable within maven2.
2004-09-09
- 1.0-alpha-1 and 1.0-alpha-2-SNAPSHOT source archives are available on the website for those of use unfortunate enough not to have CVS access. ()
- Un-optimized, single-point-of-entry execution is available in 1.0-alpha-2-SNAPSHOT via the org.codehaus.marmalade.util.LazyMansAccess class. Check it out!
Introduction
Marmalade is a second-generation, extensible XML scripting tool. In function, it is similar to Jelly, but is much smaller and more flexible. It is also similar to JSP in some (admittedly superficial) ways.
The basic idea behind Marmalade is a realization that scripting, and especially XML-based scripting, is really a combination of two simple languages. The first language is made up entirely of struture delimiters, and essentially determines how the language constructs are related to one another. The second language is really just a data access mechanism, used to retrieve data. Separating these concepts allows Marmalade to mix-and-match many different mechanisms for each, using a parser to translate native format to the Marmalade object model, and an evaluator to translate object-graph expressions into data access calls into the context of the script. In fact, given the right parser, Marmalade doesn't even have to be an XML-only scripting language...
Also, keeping the data access language flexible and context defintion reasonably simple allows for easy creation of X-to-Marmalade bridges. For example, Marmalade currently contains a Jelly compatibility layer which can be used to run any Jelly taglib from within a Marmalade script. You can even mix-and-match Jelly tags with Marmalade native tags in many cases.
Most importantly, Marmalade strives to minimize the trail of dependencies you as the developer are forced to include in your project. To that end, the Marmalade Project is best described as an umbrella project consisting of many sub-component suites, all centered on the marmalade-core library. This allows developers to take a buffet approach to Marmalade's functionality: you can use Jelly-compat, but that means including all of the libraries needed by Jelly. Marmalade's core requires only a single library other than itself in order to function (compared to 13 dependencies for Jelly). What you add from there is entirely up to you.
Mailing Lists
See Mailing lists for more information.
SVN Access
Marmalade has been migrated to Subversion. The new URL is:
Bugs, Features, What-Nots
Marmalade has an issue tracking project on the Codehaus JIRA instance, so please log any feature requests, improvements, bug reports, etc. there. The address is:
Status
2005-Mar-08
The core functionality, marmalade-core, is basically stable now. I'm only adding the random tiny new feature now and again, without changing old APIs, in order to solidify the feature set and move toward a 1.0-alpha2 release. At this point, we're basically just waiting on documentation before the release! If you'd like to test drive marmalade, please checkout from CVS or contact the marmalade-dev list, and I'll put a distro up.
2004-07-27
Forgot to document one of the nicer features of Marmalade:
- Support for pluggable expression languages, even within the same script. (see: Expression Language Framework)
2004-07-20
Marmalade currently contains the following features:
- Support for OGNL as a data access language (see: OGNL Expression Language Support)
- Support for Commons-EL (same as JSP 2.0) as a data access language marmalade-el-commons
- XML parsing via ultra-fast XPP3 XML parser (MXP1)
- Scoped script contexts
- Implementation of the JSTL-core tag library as a native Marmalade taglib marmalade-tags-jstl-core
- Implementation of the Jelly-core tag library as a native Marmalade taglib marmalade-tags-jelly-core
- Jelly compatibility layer, allowing direct use of Jelly-native taglibs within Marmalade scripts marmalade-compat-jelly
- Buffet-style functionality; include only as much as you need, and don't import dependencies you don't use
Up-and-coming features:
- Ant compatibility layer, allowing direct use of Ant tasks and types marmalade-compat-ant
- JSTL-fmt tag library as a native Marmalade taglib marmalade-tags-jstl-fmt
Currently idle (for lack of demand):
- HTTPUnit tag library (Marmalade-native) marmalade-tags-httpunit | http://docs.codehaus.org/display/MMLD/Home | 2014-03-07T11:38:57 | CC-MAIN-2014-10 | 1393999642306 | [] | docs.codehaus.org |
Public Networks
Network identifier:
public_network don't support bridging generally don't have any other features that map to public networks either. can't be found, Vagrant will ask you to pick from a list of available network interfaces. | http://docs.vagrantup.com/v2/networking/public_network.html | 2014-03-07T11:34:05 | CC-MAIN-2014-10 | 1393999642306 | [] | docs.vagrantup.com |
Time Spent in Office Visits
The hours and minutes that students spend during office visits for different types of health issues can be found on this report.To get there:
- From the PowerSchool start page, click Extended Reports under the Functions heading on the left side.
- Click the Health tab.
- Click on Time Spent In Office Visits under the Office Visits heading.
There are three sections to the page:
- The Report Controls filters the data found on the report.
- The middle of the page is a list of office visit records that are excluded from the time calculation due to errors in the visit time-in or time-out.
- If there are no errors, this section will not appear on the page.
- Click the blue Student ID to go directly to a student’s office visit page to update any records with errors.
- At the bottom of the page is the actual time reporting.
Running the report:
- Update controls as appropriate:
- Student ID-Enter a student ID and only that student’s time in the office will be display on the report.
- IEP Students drop down-Filter the report on IEP or Non-IEP
- Visit Type drop down-Filter the report to only a specific Visit Type
- Visit Outcome drop down-Filter the report to only a specific Visit Outcome
- Start Date & End Date-Filter the report to a specific time frame
- Click the Update Report button.
Reading the Report:
The report provides the total time that students spent in the nurses office based on the selected controls as well as a break-down of the average time spent per office visit of that type and the average time that each student spent in the office over multiple visits if applicable.
Example-Time from IEP Students:
Using an example data set we know:
- Nurses spent 76 hours and 21 minutes on office visits for students with IEPs
- The average length of an IEP office visit was 32 minutes.
- The average IEP patient will spend 1 hour and 13 minutes in the office for the year.
- The average IEP student in the enitre school can be expected to spend 19 minutes in the office.
Example-Time from Orthopedic Office Visits:
Using an example data set we know:
- Nurses spent 31 hours and 14 minutes on orthepedic office visits.
- The average orthopedic visit was 16 minutes.
- The average orthopedic patient will spend 24 minutes across all of his/her orthopedic visits.
- The average student in the enitre school can be expected to spend 0.8 minutes in orthopedic visits. | https://docs.glenbard.org/index.php/ps-2/admin-ps/health/time-spent-in-office-visits/ | 2021-04-10T18:19:38 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.glenbard.org |
17..
The only pre-requisite for running these is a Docker installation for your
platform. See the official Docker installation page for instructions.
The documentation assumes familiarity with the basics of Docker in general and in particular the different ways of mounting storage into containers with the desired container-side permissions.
Note
For readability, short-form (
-v) mount arguments will be used throughout
but readers should be aware that the official Docker documentation
recommends the long-form (
--mount) alternative.
17.1. Image Defaults¶
Default values for the images in the
oxfordsemantic/rdfox repository are
described below. It is possible to override all of these defaults but, for
simplicity, the documentation of each assumes the other defaults are being
used.
- User
The default user within the images is
rdfox. This determines the default server directory path within containers as
/home/rdfox/.RDFox.
- Entrypoint
The default entry point for the images is the RDFox executable embedded at container path
/opt/RDFox/RDFox. The version of the executable at this path will match the version in the tag for the image.
- Command
The default command is
daemon. This can be overridden with any valid RDFox command (see Section 16.1).
- Working directory
The default working directory for the images is
/data. This has no impact when starting in daemon mode but will be the default root directory when using the RDFox shell.
- Ports
The images expose the default port for the RDFox endpoint: 12110.
- Required superuser capabilities
In the interests of security, images in the
oxfordsemantic/rdfoxrepository have been designed to run with no superuser capabilities (see Linux kernel capabilities). The inclusion of the argument
--cap-drop ALLin docker run commands, or the equivalent in other environments, is recommended when launching images in the above repository.
17.2. Suggested Mounts¶
The following section provides suggestions on how to mount key data or storage into RDFox containers.
17 RDFox process
to locate the license without requiring additional command line arguments
whilst ensuring that the license key need not be stored within the server
directory.
If using the images in Kubernetes, where the RDFox license key may be held as a
Secret resource, mounting the secret as described above will hide the
image’s entry-point executable, leading to a failure to start. In this
situation, mapping the secret into the container’s environment via variable
RDFOX_LICENSE_CONTENT is the next most convenient option.
The methods described above work for both
oxfordsemantic/rdfox and
oxfordsemantic/rdfox-init images.
17.2.2. Mounting the Server Directory¶
When RDFox is configured to persist roles and data stores, it does so inside
the configured server directory. As described above, the default server
directory path for
oxfordsemantic/rdfox images is
/home/rdfox/.RDFox.
For use cases where persistence is enabled it is therefore most convenient to
mount storage,.
17.
17.2.3. Mounting the Shell Root Directory¶
Often, when developing applications for RDFox it is convenient to keep the
rules, base facts and shell scripts for the application in a single directory.
When using the RDFox shell in Docker, this directory must be mounted into the
container’s file system with the appropriate permissions to allow the
rdfox
user to read and write as necessary.
Users are free to use either a named volume or a bind mount for the shell root
directory. Bind mounting a host directory will often be more convenient for
this purpose however named volumes offer more flexibility to achieve the
necessary container-side permissions, particularly where the Docker client is
on Windows. The most convenient target path for the mount is the default
working directory
/data as this is the default value for the shell root
variable.
17.3. Example Docker Run Commands¶
The example commands in this section demonstrate how to run RDFox Docker images in several different configurations, starting with the simplest possible and progressing to more realistic scenarios. With the appropriate substitutions, the commands should work on any operating system where the Docker CLI is supported.
In all of the examples,
<path-to-license-file> should be replaced with an
absolute path to a valid, in-date RDFox license key file.
Example: Running Interactively with no Persistence or Endpoint.
The simplest possible
docker run command to launch RDFox in a container is:
docker run -it -v <path-to-license-file>:/opt/RDFox/RDFox.lic oxfordsemantic/rdfox sandbox
This will start the RDFox shell for interactive usage but is not very
useful. First of all, nothing from the session will be persisted since no
server directory has been mounted and the image default
daemon command
has been overridden with
sandbox mode thereby disabling role and data
store persistence. Second, since no storage has been mounted to the
/data path, there is no possibility of importing from or exporting to
the host filesystem using the shell, and we have no access to any shell
scripts. Finally, since we did not publish the 12110 port it will be
impossible to reach the RDFox endpoint from outside the container even if it
is started using
endpoint start. Nevertheless, this command may be
useful for quickly testing out RDFox functionality on data and rules simple
enough to be typed (or pasted) in.
To run as a purely in-memory data store, where all interaction will be via
the RDFox endpoint, it is possible to supply the name and password of the
first role via environment variables in order to initialize access control
without any interaction via standard input and output. Assuming variables
RDFOX_ROLE and
RDFOX_PASSWORD have been defined in the environment
where the command will run, a container \ oxfordsemantic/rdfox \ -persist-roles off -persist-ds off daemon
The RDFox server started by the above command will contain no data stores
initially. These can be created and populated via the REST API however, in
some situations, it may be desirable to do this 16.2.2.9).
17 14.13. | https://docs.oxfordsemantic.tech/docker.html | 2021-04-10T19:22:31 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.oxfordsemantic.tech |
Ver", or "credentials") are digital documents that conform to the W3C Verifiable Credential Data Model. VCs provide a standard for digitally issuing, holding, and verifying data about a subject.
Definition of Verifiable Credential
)
Often called the "Trust Triangle," this classic diagram helps conceptualize the verifiable credential model.
Components of a Credential
To break down the components of a credential, we'll use a digital driver's license as an example.
Issuer DID
As you can see from the diagram above, a VerifierVerifier - A verifier is an agent that is set up for verifications. Every organization created in the Trinsic platform is a verifier by default. TenantTenant - A tenant is synonymous with organization. It is an agent provisioned with the capability to issue or verify credentials. For the sake of clarity, our core products (Wallet / Studio) will only refer to "organizations". is assigned an issuer DID on the network your tenant is provisioned to. The issuer DID acts as a public-facing address or public key. In the self-sovereign identity space, these DIDs are usually written to a public blockchain, but other DID registries.
In the Trinsic Studio, you can find your issuer DID toward the bottom of the Organization page:
Schema
Each credential needs a template so the data can be shared and interpreted correctly. That template is called a SchemaSchema - A schema is an outline for what a credential should look like. It defines what attributes will go in the credential. Ideally, schemas will be reused as much as possible to facilitate interoperability..:
{ "$schema": "", "description": "Email", "type": "object", "properties": { "emailAddress": { "type": "string", "format": "email" } }, "required": ["emailAddress"], "additionalProperties": false }
Schemas also need to be discoverable by the verifier of the credential. Because of the Hyperledger Indy/Aries tech stack that we use, we write the schema object to the public ledger. Other implementations leverage schema.org or other publicly discoverable sources.
Schemas are nonproprietary; any issuer can view/use the schemas written by any other issuer.
We abstract schema creation away into the same action as creation of a credential definition. Keep reading to read how to create a schema and credential definition.
Credential Definition
Just having an issuer DID and a schema written to the ledger is not enough, however. You need to put them together in order to actually issue credentials. This is where. comes in.
A credential definition is a particular issuer's template to issue credentials from. There are two ways for an issuer to create a credential definition:
- Based on an existing schema: This is the ideal way to create a credential definition because the issuer can leverage an existing schema. In addition, this promotes interoperability and trust in credentials at scale (passports, transcripts, and employee ID cards should all be based on similar data models, for example).
- Create a new schema alongside the credential definition: This is the most common method of creating a credential definition in the early days because there is no existing directory of standard schemas.
Going back to our example, the New York DMV would create. with their issuer DID and the schema. They're likely to use an existing schema, ideally the same one used by other states. Once they've created a credential definition, they can then go forward with issuing New York Driver's Licenses.
Remember this:
- Credential definitions are the combination of schemas and issuer DIDs and act as a template for issuance (in fact, we call them credential templates!)!
- Credential definitions and schemas are permanent artifacts on the ledger that can't be edited.
To learn how to create schemas and credential definitions, see our API Reference docs.
Credential
In order to issue a credential, you need a credential definition (called a credential template!). Once you have the credential definition, all you need to do is to populate the values and offer it to a holder.
{ "credentialId": "string", "state": "Offered", "connectionId": "string", "definitionId": "string", "schemaId": "string", "values": { "additionalProp1": "string", "additionalProp2": "string", "additionalProp3": "string" } }
How to offer Credentials in Trinsic
Issue your first credential by following our tutorial.
Reference App
issuer-reference-app on GitHub for a demo on how to issue a business card as a credential to anyone with a mobile wallet.
Updated 4 months ago | https://docs.trinsic.id/docs/issue-credentials | 2021-04-10T19:01:10 | CC-MAIN-2021-17 | 1618038057476.6 | [array(['https://files.readme.io/d06a672-basicSSImodel2.png',
'basicSSImodel2.png'], dtype=object)
array(['https://files.readme.io/d06a672-basicSSImodel2.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/aa1b377-Untitled.png', 'Untitled.png'],
dtype=object)
array(['https://files.readme.io/aa1b377-Untitled.png',
'Click to close...'], dtype=object) ] | docs.trinsic.id |
Your users can save their own blocks and reuse them when designing content in your application. When a user saves their own block, it's only visible to them and nobody else.
Saving a Block
Users can save their blocks by clicking the save button at the bottom right of a selected row.
Once they click save, they will be asked a Category Name and optional Tags. This helps organize the blocks and make them searchable.
Enable Feature
You can enable user saved blocks by passing a user object with the unique user id. The user object must have a unique user id for the user who is using the editor. You can also pass a name and email optionally.
The save button for blocks will automatically appear once a unique user id is available.
unlayer.init({ user: { id: 1, name: 'John Doe', email: '[email protected]' } })
Updated 2 years ago | https://docs.unlayer.com/docs/user-saved-blocks | 2021-04-10T18:35:19 | CC-MAIN-2021-17 | 1618038057476.6 | [array(['https://files.readme.io/be546c4-saveblock.png', 'saveblock.png'],
dtype=object)
array(['https://files.readme.io/be546c4-saveblock.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/4de95c5-saveblockinfo.png',
'saveblockinfo.png'], dtype=object)
array(['https://files.readme.io/4de95c5-saveblockinfo.png',
'Click to close...'], dtype=object) ] | docs.unlayer.com |
Electronics Tray
Introduction
The Electronics Tray is a convenient new way to mount your underwater electronics in a 4” watertight enclosure. We designed this tray to make installing and working on your electronics as easy as possible. The Electronics Tray mounts to a 4” Watertight Enclosure O-ring Flange, so you don’t need any tools to access your electronics.
Features
Specifications
- 5mm thick machined ABS panels
- Hard anodized aluminum standoffs
- Convenient cable routing slots and holes
- Mounting Locations for:
2D Drawings
3D Model
All 3D models are provided in zip archives containing the follow file types:
- SolidWorks Part (.sldprt)
- IGES (.igs)
- STEP (.step)
- STL (.stl)
Assembly
Diagram of the Tray
Assembling the Electronics Tray
Install the four short standoffs in the four inch O-ring flange. Do not overtighten. Finger-tight plus 1/8th of a turn will secure these.
Install the rear panel on the short standoffs.
Install the long standoffs into the rear panel.
Install the main panel and front panel.
Now your Etray is ready for installing your electronics..
Attach the jumpers to the Black Screw Terminal Block if necessary for your application.
Install the 8-Circuit Barrier Blocks onto the main tray.
Install the standoffs and 9-Circuit Eurostyle Terminal Strip on top of the standoffs.
Now you are ready to finish installing your electronics! | http://docs.bluerobotics.com/etray/ | 2018-09-18T19:41:19 | CC-MAIN-2018-39 | 1537267155676.21 | [array(['/etray/cad/electronics-tray-render.png', None], dtype=object)
array(['/etray/cad/ASSEM-ETRAY-X1.png', None], dtype=object)
array(['/etray/cad/elec-tray-annotated.png', None], dtype=object)] | docs.bluerobotics.com |
The most effective way to read a large file from S3 is in a single HTTPS request, reading in all data from the beginning to the end. This is exactly the read pattern used when the source data is a CSV file or files compressed with GZIP.
ORC and Parquet files benefit from Random IO: they read footer data, seek backwards, and skip forwards, to minimize the amount of data read. This IO pattern is highly efficient for HDFS, but for object stores, making and breaking new HTTP connections, then this IO pattern is very very expensive.
By default, as soon as an application makes a backwards
seek() in a file,
the S3A connector switches into “random” IO mode, where instead
of trying to read the entire file, only the amount configured in
fs.s3a.readahead.range
is read in. This results in an IO behavior
where, at the possible expense of breaking the first HTTP connection, reading
ORC/Parquet data is efficient.
See Optimizing S3A read for different file types. | https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/bk_cloud-data-access/content/s3-read-performance.html | 2018-09-18T20:31:08 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.hortonworks.com |
OlkComboBox Object (Outlook)
A control that supports the display of a selection from a drop-down list of all choices.
Remarks
Before you use this control for the first time in the forms designer, add the Microsoft Outlook Combo Box Control to the control toolbox. You can only add this control to a form region in an Outlook form using the forms designer.
The following is an example of a combo box control that has been bound to the Sensitivity field. This control supports Microsoft Windows themes.
If the Click event is implemented but the DropButtonClick event is not implemented, then clicking the drop button will fire only the Click event.
For more information about Outlook controls, see Controls in a Custom Form. For examples of add-ins in C# and Visual Basic .NET that use Outlook controls, see code sample downloads on MSDN.
Events
Methods
Properties
See also
Outlook Object Model Reference | https://docs.microsoft.com/en-us/office/vba/api/outlook.olkcombobox | 2018-09-18T19:22:24 | CC-MAIN-2018-39 | 1537267155676.21 | [array(['../images/olcombobox_za10120277.gif', 'Combo box'], dtype=object)] | docs.microsoft.com |
Before upgrading the cluster to HDP 3.0.1, you must prepare Hive for the upgrade. The Hive pre-upgrade tool is designed to help you upgrade Hive 2 in HDP 2.6.5 and later to Hive 3 in HDP 3.0.1. Upgrading Hive in releases earlier than HDP 2.6.5 is not supported. Upgrading the on-disk layout for transactional tables changed in Hive 3 and requires running major compaction before the upgrade to ensure that Hive 3 can read the tables. The pre-upgrade tool can identify which tables or partitions need to be compacted and generates a script that you can execute in Beeline to run those compactions. Because compaction can be a time consuming and resource intensive process, you should examine this script to gauge how long the actual upgrade might take. After a major compaction runs in preparation for an upgrade, you must prevent the execution of update, delete, or merge statements against transactional tables until the upgrade is complete. If an update, delete, or merge occurs within a partition, the partition must undergo another major compaction prior to upgrade..
Running the help for the pre-upgrade command, as described below, shows you all the command options.
Before you begin
Ensure that the Hive Metastore is running. Connectivity between the tool and Hive MetaStore is mandatory. property to accommodate your data.
In kerberized cluster, enter
kinitbefore executing the pre-upgrade tool command.
When you run the pre-upgrade tool command, you might need to set
-Djavax.security.auth.useSubjectCredsOnly=falsein a Kerberized environment if you see the following types of errors after running kinit:
org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
To run the pre-upgrade tool:
SSH into the host running the Hive Metastore. You can locate this host by going to the Ambari Web UI and clicking Hosts. Click on the Filter icon and type in “Hive Metastore: All” to find each host that has a Hive Metastore Instance installed on it.
Change directories to the
/tmpdirectory.
cd /tmp
Execute the following command to download the pre-upgrade tool JAR:
wget
Export the JAVA_HOME environment variable if necessary.
export JAVA_HOME=[ path to your installed JDK ]
Take a look at the help for the pre-upgrade command by using the -help option. For example:
cd <location of downloaded pre-upgrade tool>
java -help hive-pre-upgrade-3.1.0.3.0.0.0-1634.jar
Adjust the classpath in the following example pre-upgrade command to suit your environment, and then execute a dry run of preupgrading by running the following command:
$JAVA_HOME/bin/java -Djavax.security.auth.useSubjectCredsOnly=false -cp /usr/hdp/current/hive2/lib/derby-10.10.2.0.jar:/usr/hdp/current/hive2/lib/:/usr/hdp/current/hadoop/:/usr/hdp/current/hadoop/lib/:/usr/hdp/current/hadoop-mapreduce/:/usr/hdp/current/hadoop-mapreduce/lib/:/usr/hdp/current/hadoop-hdfs/:/usr/hdp/current/hadoop-hdfs/lib/*:/usr/hdp/current/hadoop/etc/hadoop/:/tmp/hive-pre-upgrade-3.1.0.3.0.0.0-1634.jar:/usr/hdp/current/hive/conf/conf.server org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool
Examine the scripts in the output to understand what running the scripts will do.
Login to Beeline as the Hive service user, and run each generated script to prepare the cluster for upgrading.
The Hive service user is usually the hive user. This is hive by default. If you don’t know which user is the Hive service user in your cluster, go to the Ambari Web UI and click Cluster Admin > Service Accounts, and then look for
Hive User.
Register and Install Target Version | https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.1.0/bk_ambari-upgrade/content/prepare_hive_for_upgrade.html | 2018-09-18T20:32:15 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.hortonworks.com |
Manually Redeploy Cluster Topologies
How to manually redeploy cluster topologies, when configuring the Knox Gateway.
You are not required to manually redeploy clusters after updating cluster properties.
The gateway monitors the topology descriptor files in the
$gatewaydir/conf/topologies directory and automatically redeploys
the cluster if any descriptor changes or a new one is added. (The corresponding
deployment is in
$gatewaydir/data/deployments.)
However, you must redeploy the clusters after changing any of the following gateway properties or gateway-wide settings:
Time settings on the gateway host
Implementing or updating Kerberos
Implementing or updating SSL certificates
Changing a cluster alias
When making gateway-wide changes (such as implementing Kerberos or SSL), or if you change the system clock, you must redeploy all the Cluster Topologies.
When making changes that impact a single cluster, such as changing an alias or restoring from an earlier cluster topology descriptor file, you only redeploy the affected cluster.
- To verify the timestamp on the currently deployed cluster(s), visit:
cd $gatewaydir/data/deployments.
- To redeploy, enter:
- To verify that a new cluster WAR was created, enter:The system displays something similar to: | https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/configuring-proxy-knox/content/manually_redeploy_cluster_topologies.html | 2018-09-18T20:31:29 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.hortonworks.com |
Panel¶
Reference
- Mode
Object Mode
- Panel
All collections that an object has been assigned to are listed in the Properties editor.
- Add to Collection
Adds the selected objects from a collection. A pop-up lets you specify the collection to add to.
- New
+
Creates a new collection and adds the selected object(s).. | https://docs.blender.org/manual/ru/dev/scene_layout/collections/collections.html | 2020-02-17T07:50:24 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.blender.org |
The Xamarin.Forms Command Interface
In the Model-View-ViewModel (MVVM) architecture, data bindings are defined between properties in the ViewModel, which is generally a class that implements
INotifyPropertyChanged, and properties in the View, which is generally the XAML file. Sometimes an application has needs that go beyond these property bindings by requiring the user to initiate commands that affect something in the ViewModel. These commands are generally signaled by button clicks or finger taps, and traditionally they are processed in the code-behind file in a handler for the
Clicked event of the
Button or the
Tapped event of a
TapGestureRecognizer.
The commanding interface provides an alternative approach to implementing commands that is much better suited to the MVVM architecture. The ViewModel itself can contain commands, which are methods that are executed in reaction to a specific activity in the View such as a
Button click. Data bindings are defined between these commands and the
Button.
To allow a data binding between a
Button and a ViewModel, the
Button defines two properties:
Command of type
System.Windows.Input.ICommand
CommandParameter of type
Object
To use the command interface, you define a data binding that targets the
Command property of the
Button where the source is a property in the ViewModel of type
ICommand. The ViewModel contains code associated with that
ICommand property that is executed when the button is clicked. You can set
CommandParameter to arbitrary data to distinguish between multiple buttons if they are all bound to the same
ICommand property in the ViewModel.
The
Command and
CommandParameter properties are also defined by the following classes:
MenuItem and hence,
ToolbarItem, which derives from
MenuItem
TextCell and hence,
ImageCell, which derives from
TextCell
TapGestureRecognizer
SearchBar defines a
SearchCommand property of type
ICommand and a
SearchCommandParameter property. The
RefreshCommand property of
ListView is also of type
ICommand.
All these commands can be handled within a ViewModel in a manner that doesn't depend on the particular user-interface object in the View.
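For example, a TapGestureRecognizer attached to a Label can have its Command property bound in the same way as a Button. The following sketch is not taken from the samples discussed below; the MyCommand property name and the "tappedLabel" parameter are assumptions made only for illustration:

Label label = new Label
{
    Text = "Tap to execute the command"
};

// Create the gesture recognizer and bind its Command property to an
// ICommand property (assumed here to be named MyCommand) in the
// BindingContext that the Label inherits.
TapGestureRecognizer tapRecognizer = new TapGestureRecognizer();
tapRecognizer.SetBinding(TapGestureRecognizer.CommandProperty, "MyCommand");

// An optional CommandParameter distinguishes this view from any others
// that share the same command.
tapRecognizer.CommandParameter = "tappedLabel";

label.GestureRecognizers.Add(tapRecognizer);

The same bindings could equally be expressed in XAML, exactly as the Button examples later in this article do.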
The ICommand Interface
The
System.Windows.Input.ICommand interface is not part of Xamarin.Forms. It is defined instead in the System.Windows.Input namespace, and consists of two methods and one event:
public interface ICommand
{
    public void Execute (Object parameter);

    public bool CanExecute (Object parameter);

    public event EventHandler CanExecuteChanged;
}
To use the command interface, your ViewModel contains properties of type
ICommand:
public ICommand MyCommand { private set; get; }
The ViewModel must also reference a class that implements the
ICommand interface. This class will be described shortly. In the View, the
Command property of a
Button is bound to that property:
<Button Text="Execute command" Command="{Binding MyCommand}" />
When the user presses the
Button, the
Button calls the
Execute method in the
ICommand object bound to its
Command property. That's the simplest part of the commanding interface.
The
CanExecute method is more complex. When the binding is first defined on the
Command property of the
Button, and when the data binding changes in some way, the
Button calls the
CanExecute method in the
ICommand object. If
CanExecute returns
false, then the
Button disables itself. This indicates that the particular command is currently unavailable or invalid.
The
Button also attaches a handler on the
CanExecuteChanged event of
ICommand. The event is fired from within the ViewModel. When that event is fired, the
Button calls
CanExecute again. The
Button enables itself if
CanExecute returns
true and disables itself if
CanExecute returns
false.
Important
Do not use the
IsEnabled property of
Button if you're using the command interface.
The Command Class
When your ViewModel defines a property of type
ICommand, the ViewModel must also contain or reference a class that implements the
ICommand interface. This class must contain or reference the
Execute and
CanExecute methods, and fire the
CanExecuteChanged event whenever the
CanExecute method might return a different value.
You can write such a class yourself, or you can use a class that someone else has written. Because
ICommand is part of Microsoft Windows, it has been used for years with Windows MVVM applications. Using a Windows class that implements
ICommand allows you to share your ViewModels between Windows applications and Xamarin.Forms applications.
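To make the requirements concrete, here is a minimal sketch of what a hand-written implementation might look like. The SimpleCommand class name is invented for illustration; it is not the Xamarin.Forms Command class described next:

using System;
using System.Windows.Input;

class SimpleCommand : ICommand
{
    readonly Action execute;
    readonly Func<bool> canExecute;

    public SimpleCommand(Action execute, Func<bool> canExecute = null)
    {
        this.execute = execute;
        this.canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        // The parameter is ignored in this simple version.
        return canExecute == null ? true : canExecute();
    }

    public void Execute(object parameter)
    {
        execute();
    }

    // The ViewModel calls this whenever the value returned by CanExecute
    // might have changed, so bound buttons can enable or disable themselves.
    public void ChangeCanExecute()
    {
        CanExecuteChanged?.Invoke(this, EventArgs.Empty);
    }
}

The Command and Command&lt;T&gt; classes described below provide essentially this behavior, so you rarely need to write such a class yourself.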
If sharing ViewModels between Windows and Xamarin.Forms is not a concern, then you can use the
Command or
Command<T> class included in Xamarin.Forms to implement the
ICommand interface. These classes allow you to specify the bodies of the
Execute and
CanExecute methods in class constructors. Use
Command<T> when you use the
CommandParameter property to distinguish between multiple views bound to the same
ICommand property, and the simpler
Command class when that isn't a requirement.
Basic Commanding
The Person Entry page in the Data Binding Demos program demonstrates some simple commands implemented in a ViewModel.
The
PersonViewModel defines three properties named
Name,
Age, and
Skills that define a person. This class does not contain any
ICommand properties:
public class PersonViewModel : INotifyPropertyChanged
{
    string name;
    double age;
    string skills;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        set { SetProperty(ref name, value); }
        get { return name; }
    }

    public double Age
    {
        set { SetProperty(ref age, value); }
        get { return age; }
    }

    public string Skills
    {
        set { SetProperty(ref skills, value); }
        get { return skills; }
    }

    public override string ToString()
    {
        return Name + ", age " + Age;
    }

    bool SetProperty<T>(ref T storage, T value, [CallerMemberName] string propertyName = null)
    {
        if (Object.Equals(storage, value))
            return false;

        storage = value;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        return true;
    }
}
The
PersonCollectionViewModel shown below creates new objects of type
PersonViewModel and allows the user to fill in the data. For that purpose, the class defines properties
IsEditing of type
bool and
PersonEdit of type
PersonViewModel. In addition, the class defines three properties of type
ICommand and a property named
Persons of type
IList<PersonViewModel>:
public class PersonCollectionViewModel : INotifyPropertyChanged
{
    PersonViewModel personEdit;
    bool isEditing;

    public event PropertyChangedEventHandler PropertyChanged;

    ···

    public bool IsEditing
    {
        private set { SetProperty(ref isEditing, value); }
        get { return isEditing; }
    }

    public PersonViewModel PersonEdit
    {
        set { SetProperty(ref personEdit, value); }
        get { return personEdit; }
    }

    public ICommand NewCommand { private set; get; }

    public ICommand SubmitCommand { private set; get; }

    public ICommand CancelCommand { private set; get; }

    public IList<PersonViewModel> Persons { get; } =
        new ObservableCollection<PersonViewModel>();

    bool SetProperty<T>(ref T storage, T value, [CallerMemberName] string propertyName = null)
    {
        if (Object.Equals(storage, value))
            return false;

        storage = value;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        return true;
    }
}
This abbreviated listing does not include the class's constructor, which is where the three properties of type
ICommand are defined, which will be shown shortly. Notice that changes to the three properties of type
ICommand and the
Persons property do not result in
PropertyChanged events being fired. These properties are all set when the class is first created and do not change thereafter.
Before examining the constructor of the
PersonCollectionViewModel class, let's look at the XAML file for the Person Entry program. This contains a
Grid with its
BindingContext property set to the
PersonCollectionViewModel. The
Grid contains a
Button with the text New with its
Command property bound to the
NewCommand property in the ViewModel, an entry form with properties bound to the
IsEditing property, as well as properties of
PersonViewModel, and two more buttons bound to the
SubmitCommand and
CancelCommand properties of the ViewModel. The final
ListView displays the collection of persons already entered:
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:DataBindingDemos"
             x:Class="DataBindingDemos.PersonEntryPage"
             Title="Person Entry">
    <Grid Margin="10">
        <Grid.BindingContext>
            <local:PersonCollectionViewModel />
        </Grid.BindingContext>

        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="Auto" />
            <RowDefinition Height="Auto" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>

        <!-- New Button -->
        <Button Text="New"
                Grid.Row="0"
                Command="{Binding NewCommand}"
                HorizontalOptions="Start" />

        <!-- Entry Form -->
        <Grid Grid.Row="1"
              IsEnabled="{Binding IsEditing}">
            <Grid BindingContext="{Binding PersonEdit}">
                <Grid.RowDefinitions>
                    <RowDefinition Height="Auto" />
                    <RowDefinition Height="Auto" />
                    <RowDefinition Height="Auto" />
                </Grid.RowDefinitions>

                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto" />
                    <ColumnDefinition Width="*" />
                </Grid.ColumnDefinitions>

                <Label Text="Name: " Grid.Row="0" Grid.Column="0" />
                <Entry Text="{Binding Name}"
                       Grid.Row="0" Grid.Column="1" />

                <Label Text="Age: " Grid.Row="1" Grid.Column="0" />
                <StackLayout Orientation="Horizontal"
                             Grid.Row="1" Grid.Column="1">
                    <Stepper Value="{Binding Age}"
                             Maximum="100" />
                    <Label Text="{Binding Age, StringFormat='{0} years old'}"
                           VerticalOptions="Center" />
                </StackLayout>

                <Label Text="Skills: " Grid.Row="2" Grid.Column="0" />
                <Entry Text="{Binding Skills}"
                       Grid.Row="2" Grid.Column="1" />
            </Grid>
        </Grid>

        <!-- Submit and Cancel Buttons -->
        <Grid Grid.Row="2">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="*" />
                <ColumnDefinition Width="*" />
            </Grid.ColumnDefinitions>

            <Button Text="Submit"
                    Grid.Column="0"
                    Command="{Binding SubmitCommand}" />

            <Button Text="Cancel"
                    Grid.Column="1"
                    Command="{Binding CancelCommand}" />
        </Grid>

        <!-- List of Persons -->
        <ListView Grid.Row="3"
                  ItemsSource="{Binding Persons}" />
    </Grid>
</ContentPage>
Here's how it works: The user first presses the New button. This enables the entry form but disables the New button. The user then enters a name, age, and skills. At any time during the editing, the user can press the Cancel button to start over. Only when a name and a valid age have been entered is the Submit button enabled. Pressing this Submit button transfers the person to the collection displayed by the
ListView. After either the Cancel or Submit button is pressed, the entry form is cleared and the New button is enabled again.
The iOS screen at the left shows the layout before a valid age is entered. The Android screen shows the Submit button enabled after an age has been set:
The program does not have any facility for editing existing entries, and does not save the entries when you navigate away from the page.
All the logic for the New, Submit, and Cancel buttons is handled in
PersonCollectionViewModel through definitions of the
NewCommand,
SubmitCommand, and
CancelCommand properties. The constructor of the
PersonCollectionViewModel sets these three properties to objects of type
Command.
A constructor of the
Command class allows you to pass arguments of type
Action and
Func<bool> corresponding to the
Execute and
CanExecute methods. It's easiest to define these actions and functions as lambda functions right in the
Command constructor. Here is the definition of the
Command object for the
NewCommand property:
public class PersonCollectionViewModel : INotifyPropertyChanged
{
    ···

    public PersonCollectionViewModel()
    {
        NewCommand = new Command(
            execute: () =>
            {
                PersonEdit = new PersonViewModel();
                PersonEdit.PropertyChanged += OnPersonEditPropertyChanged;
                IsEditing = true;
                RefreshCanExecutes();
            },
            canExecute: () =>
            {
                return !IsEditing;
            });

        ···
    }

    void OnPersonEditPropertyChanged(object sender, PropertyChangedEventArgs args)
    {
        (SubmitCommand as Command).ChangeCanExecute();
    }

    void RefreshCanExecutes()
    {
        (NewCommand as Command).ChangeCanExecute();
        (SubmitCommand as Command).ChangeCanExecute();
        (CancelCommand as Command).ChangeCanExecute();
    }

    ···
}
When the user clicks the New button, the
execute function passed to the
Command constructor is executed. This creates a new
PersonViewModel object, sets a handler on that object's
PropertyChanged event, sets
IsEditing to
true, and calls the
RefreshCanExecutes method defined after the constructor.
Besides implementing the
ICommand interface, the
Command class also defines a method named
ChangeCanExecute. Your ViewModel should call
ChangeCanExecute for an
ICommand property whenever anything happens that might change the return value of the
CanExecute method. A call to
ChangeCanExecute causes the
Command class to fire the
CanExecuteChanged event. The
Button has attached a handler for that event and responds by calling
CanExecute again, and then enabling itself based on the return value of that method.
When the
execute method of
NewCommand calls
RefreshCanExecutes, the
NewCommand property gets a call to
ChangeCanExecute, and the
Button calls the
canExecute method, which now returns
false because the
IsEditing property is now
true.
The
PropertyChanged handler for the new
PersonViewModel object calls the
ChangeCanExecute method of
SubmitCommand. Here's how that command property is implemented:
public class PersonCollectionViewModel : INotifyPropertyChanged
{
    ···

    public PersonCollectionViewModel()
    {
        ···

        SubmitCommand = new Command(
            execute: () =>
            {
                Persons.Add(PersonEdit);
                PersonEdit.PropertyChanged -= OnPersonEditPropertyChanged;
                PersonEdit = null;
                IsEditing = false;
                RefreshCanExecutes();
            },
            canExecute: () =>
            {
                return PersonEdit != null &&
                       PersonEdit.Name != null &&
                       PersonEdit.Name.Length > 1 &&
                       PersonEdit.Age > 0;
            });

        ···
    }

    ···
}
The
canExecute function for
SubmitCommand is called every time there's a property changed in the
PersonViewModel object being edited. It returns
true only when the
Name property is more than one character long, and
Age is greater than 0. At that time, the Submit button becomes enabled.
The
execute function for Submit removes the property-changed handler from the
PersonViewModel, adds the object to the
Persons collection, and returns everything to initial conditions.
The
execute function for the Cancel button does everything that the Submit button does except add the object to the collection:
public class PersonCollectionViewModel : INotifyPropertyChanged
{
    ···

    public PersonCollectionViewModel()
    {
        ···

        CancelCommand = new Command(
            execute: () =>
            {
                PersonEdit.PropertyChanged -= OnPersonEditPropertyChanged;
                PersonEdit = null;
                IsEditing = false;
                RefreshCanExecutes();
            },
            canExecute: () =>
            {
                return IsEditing;
            });
    }

    ···
}
The
canExecute method returns
true at any time a
PersonViewModel is being edited.
These techniques could be adapted to more complex scenarios: A property in
PersonCollectionViewModel could be bound to the
SelectedItem property of the
ListView for editing existing items, and a Delete button could be added to delete those items.
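As a rough sketch of those adaptations (none of this code is in the sample; the SelectedPerson property and DeleteCommand are assumptions made for illustration), the ViewModel might expose a property bound to the ListView selection and a command that removes it:

// Hypothetical additions to PersonCollectionViewModel; not part of the sample.
PersonViewModel selectedPerson;

public PersonViewModel SelectedPerson
{
    set
    {
        if (SetProperty(ref selectedPerson, value))
        {
            // Enable or disable the Delete button as the selection changes.
            (DeleteCommand as Command).ChangeCanExecute();
        }
    }
    get { return selectedPerson; }
}

public ICommand DeleteCommand { private set; get; }

// In the constructor:
DeleteCommand = new Command(
    execute: () =>
    {
        Persons.Remove(SelectedPerson);
        SelectedPerson = null;
    },
    canExecute: () =>
    {
        return SelectedPerson != null;
    });

In the XAML, the SelectedItem property of the ListView would be bound to SelectedPerson, and a Delete Button would bind its Command property to DeleteCommand.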
It isn't necessary to define the
execute and
canExecute methods as lambda functions. You can write them as regular private methods in the ViewModel and reference them in the
Command constructors. However, this approach does tend to result in a lot of methods that are referenced only once in the ViewModel.
Using Command Parameters
It is sometimes convenient for one or more buttons (or other user-interface objects) to share the same
ICommand property in the ViewModel. In this case, you use the
CommandParameter property to distinguish between the buttons.
You can continue to use the
Command class for these shared
ICommand properties. The class defines an alternative constructor that accepts
execute and
canExecute methods with parameters of type
Object. This is how the
CommandParameter is passed to these methods.
However, when using
CommandParameter, it's easiest to use the generic
Command<T> class to specify the type of the object set to
CommandParameter. The
execute and
canExecute methods that you specify have parameters of that type.
The Decimal Keyboard page illustrates this technique by showing how to implement a keypad for entering decimal numbers. The
BindingContext for the
Grid is a
DecimalKeypadViewModel. The
Entry property of this ViewModel is bound to the
Text property of a
Label. All the
Button objects are bound to various commands in the ViewModel:
ClearCommand,
BackspaceCommand, and
DigitCommand:
<ContentPage xmlns="" xmlns: <Grid WidthRequest="240" HeightRequest="480" ColumnSpacing="2" RowSpacing="2" HorizontalOptions="Center" VerticalOptions="Center"> <Grid.BindingContext> <local:DecimalKeypadViewModel /> </Grid.BindingContext> <Grid.Resources> <ResourceDictionary> <Style TargetType="Button"> <Setter Property="FontSize" Value="32" /> <Setter Property="BorderWidth" Value="1" /> <Setter Property="BorderColor" Value="Black" /> </Style> </ResourceDictionary> </Grid.Resources> <Label Text="{Binding Entry}" Grid. <Button Text="CLEAR" Grid. <Button Text="⇦" Grid. <Button Text="7" Grid. <Button Text="8" Grid. <Button Text="9" Grid. <Button Text="4" Grid. <Button Text="5" Grid. <Button Text="6" Grid. <Button Text="1" Grid. <Button Text="2" Grid. <Button Text="3" Grid. <Button Text="0" Grid. <Button Text="·" Grid. </Grid> </ContentPage>
The 11 buttons for the 10 digits and the decimal point share a binding to
DigitCommand. The
CommandParameter distinguishes between these buttons. The value set to
CommandParameter is generally the same as the text displayed by the button except for the decimal point, which for purposes of clarity is displayed with a middle dot character.
Here's the program in action:
Notice that the button for the decimal point in all three screenshots is disabled because the entered number already contains a decimal point.
The
DecimalKeypadViewModel defines an
Entry property of type
string (which is the only property that triggers a
PropertyChanged event) and three properties of type
ICommand:
public class DecimalKeypadViewModel : INotifyPropertyChanged { string entry = "0"; public event PropertyChangedEventHandler PropertyChanged; ··· public string Entry { private set { if (entry != value) { entry = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs("Entry")); } } get { return entry; } } public ICommand ClearCommand { private set; get; } public ICommand BackspaceCommand { private set; get; } public ICommand DigitCommand { private set; get; } }
The button corresponding to the
ClearCommand is always enabled and simply sets the entry back to "0":
public class DecimalKeypadViewModel : INotifyPropertyChanged { ··· public DecimalKeypadViewModel() { ClearCommand = new Command( execute: () => { Entry = "0"; RefreshCanExecutes(); }); ··· } void RefreshCanExecutes() { ((Command)BackspaceCommand).ChangeCanExecute(); ((Command)DigitCommand).ChangeCanExecute(); } ··· }
Because the button is always enabled, it is not necessary to specify a
canExecute argument in the
Command constructor.
The logic for entering numbers and backspacing is a little tricky because if no digits have been entered, then the
Entry property is the string "0". If the user types more zeroes, then the
Entry still contains just one zero. If the user types any other digit, that digit replaces the zero. But if the user types a decimal point before any other digit, then
Entry is the string "0.".
The Backspace button is enabled only when the length of the entry is greater than 1, or if
Entry is not equal to the string "0":
public class DecimalKeypadViewModel : INotifyPropertyChanged { ··· public DecimalKeypadViewModel() { ··· BackspaceCommand = new Command( execute: () => { Entry = Entry.Substring(0, Entry.Length - 1); if (Entry == "") { Entry = "0"; } RefreshCanExecutes(); }, canExecute: () => { return Entry.Length > 1 || Entry != "0"; }); ··· } ··· }
The logic for the
execute function for the Backspace button ensures that the
Entry is at least a string of "0".
The
DigitCommand property is bound to 11 buttons, each of which identifies itself with the
CommandParameter property. The
DigitCommand could be set to an instance of the regular
Command class, but it's easier to use the
Command<T> generic class. When using the commanding interface with XAML, the
CommandParameter properties are usually strings, and that's the type of the generic argument. The
execute and
canExecute functions then have arguments of type
string:
public class DecimalKeypadViewModel : INotifyPropertyChanged { ··· public DecimalKeypadViewModel() { ··· DigitCommand = new Command<string>( execute: (string arg) => { Entry += arg; if (Entry.StartsWith("0") && !Entry.StartsWith("0.")) { Entry = Entry.Substring(1); } RefreshCanExecutes(); }, canExecute: (string arg) => { return !(arg == "." && Entry.Contains(".")); }); } ··· }
The
execute method appends the string argument to the
Entry property. However, if the result begins with a zero (but not a zero and a decimal point) then that initial zero must be removed using the
Substring function.
The
canExecute method returns
false only if the argument is the decimal point (indicating that the decimal point is being pressed) and
Entry already contains a decimal point.
All the
execute methods call
RefreshCanExecutes, which then calls
ChangeCanExecute for both
DigitCommand and
ClearCommand. This ensures that the decimal point and backspace buttons are enabled or disabled based on the current sequence of entered digits.
Adding Commands to Existing Views
If you'd like to use the commanding interface with views that don't support it, it's possible to use a Xamarin.Forms behavior that converts an event into a command. This is described in the article Reusable EventToCommandBehavior.
Asynchronous Commanding for Navigation Menus
Commanding is convenient for implementing navigation menus, such as that in the Data Binding Demos program itself. Here's part of MainPage.xaml:
<?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="" xmlns: <TableView Intent="Menu"> <TableRoot> <TableSection Title="Basic Bindings"> <TextCell Text="Basic Code Binding" Detail="Define a data-binding in code" Command="{Binding NavigateCommand}" CommandParameter="{x:Type local:BasicCodeBindingPage}" /> <TextCell Text="Basic XAML Binding" Detail="Define a data-binding in XAML" Command="{Binding NavigateCommand}" CommandParameter="{x:Type local:BasicXamlBindingPage}" /> <TextCell Text="Alternative Code Binding" Detail="Define a data-binding in code without a BindingContext" Command="{Binding NavigateCommand}" CommandParameter="{x:Type local:AlternativeCodeBindingPage}" /> ··· </TableSection> </TableRoot> </TableView> </ContentPage>
When using commanding with XAML,
CommandParameter properties are usually set to strings. In this case, however, a XAML markup extension is used so that the
CommandParameter is of type
System.Type.
Each
Command property is bound to a property named
NavigateCommand. That property is defined in the code-behind file, MainPage.xaml.cs: sets the
NavigateCommand property to an
execute method that instantiates the
System.Type parameter and then navigates to it. Because the
PushAsync call requires an
await operator, the
execute method must be flagged as asynchronous. This is accomplished with the
async keyword before the parameter list.
The constructor also sets the
BindingContext of the page to itself so that the bindings reference the
NavigateCommand in this class.
The order of the code in this constructor makes a difference: The
InitializeComponent call causes the XAML to be parsed, but at that time the binding to a property named
NavigateCommand cannot be resolved because
BindingContext is set to
null. If the
BindingContext is set in the constructor before
NavigateCommand is set, then the binding can be resolved when
BindingContext is set, but at that time,
NavigateCommand is still
null. Setting
NavigateCommand after
BindingContext will have no effect on the binding because a change to
NavigateCommand doesn't fire a
PropertyChanged event, and the binding doesn't know that
NavigateCommand is now valid.
Setting both
NavigateCommand and
BindingContext (in any order) prior to the call to
InitializeComponent will work because both components of the binding are set when the XAML parser encounters the binding definition.
Data bindings can sometimes be tricky, but as you've seen in this series of articles, they are powerful and versatile, and help greatly to organize your code by separating underlying logic from the user interface.
Related Links
Feedback | https://docs.microsoft.com/en-US/xamarin/xamarin-forms/app-fundamentals/data-binding/commanding | 2020-02-17T08:05:45 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.microsoft.com |
Release notes¶
This page contains the release notes for PennyLane.
Release 0.8.0 (current release)¶
New features since last release
Added a quantum chemistry package,
pennylane.qchem, which supports integration with OpenFermion, Psi4, PySCF, and OpenBabel. (#453)
Features include:
Generate the qubit Hamiltonians directly starting with the atomic structure of the molecule.
Calculate the mean-field (Hartree-Fock) electronic structure of molecules.
Allow to define an active space based on the number of active electrons and active orbitals.
Perform the fermionic-to-qubit transformation of the electronic Hamiltonian by using different functions implemented in OpenFermion.
Convert OpenFermion’s QubitOperator to a Pennylane
Hamiltonianclass.
Perform a Variational Quantum Eigensolver (VQE) computation with this Hamiltonian in PennyLane.
Check out the quantum chemistry quickstart, as well the quantum chemistry and VQE tutorials.
PennyLane now has some functions and classes for creating and solving VQE problems. (#467)
qml.Hamiltonian: a lightweight class for representing qubit Hamiltonians
qml.VQECost: a class for quickly constructing a differentiable cost function given a circuit ansatz, Hamiltonian, and one or more devices
>>> H = qml.vqe.Hamiltonian(coeffs, obs) >>> cost = qml.VQECost(ansatz, hamiltonian, dev, interface="torch") >>> params = torch.rand([4, 3]) >>> cost(params) tensor(0.0245, dtype=torch.float64)
Added a circuit drawing feature that provides a text-based representation of a QNode instance. It can be invoked via
qnode.draw(). The user can specify to display variable names instead of variable values and choose either an ASCII or Unicode charset. (#446)
Consider the following circuit as an example:
@qml.qnode(dev) def qfunc(a, w): qml.Hadamard(0) qml.CRX(a, wires=[0, 1]) qml.Rot(w[0], w[1], w[2], wires=[1]) qml.CRX(-a, wires=[0, 1]) return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
We can draw the circuit after it has been executed:
>>> result = qfunc(2.3, [1.2, 3.2, 0.7]) >>> print(qfunc.draw()) 0: ──H──╭C────────────────────────────╭C─────────╭┤ ⟨Z ⊗ Z⟩ 1: ─────╰RX(2.3)──Rot(1.2, 3.2, 0.7)──╰RX(-2.3)──╰┤ ⟨Z ⊗ Z⟩ >>> print(qfunc.draw(charset="ascii")) 0: --H--+C----------------------------+C---------+| <Z @ Z> 1: -----+RX(2.3)--Rot(1.2, 3.2, 0.7)--+RX(-2.3)--+| <Z @ Z> >>> print(qfunc.draw(show_variable_names=True)) 0: ──H──╭C─────────────────────────────╭C─────────╭┤ ⟨Z ⊗ Z⟩ 1: ─────╰RX(a)──Rot(w[0], w[1], w[2])──╰RX(-1*a)──╰┤ ⟨Z ⊗ Z⟩
Added
QAOAEmbeddingand its parameter initialization as a new trainable template. (#442)
Added the
qml.probs()measurement function, allowing QNodes to differentiate variational circuit probabilities on simulators and hardware. (#432)
@qml.qnode(dev) def circuit(x): qml.Hadamard(wires=0) qml.RY(x, wires=0) qml.RX(x, wires=1) qml.CNOT(wires=[0, 1]) return qml.probs(wires=[0])
Executing this circuit gives the marginal probability of wire 1:
>>> circuit(0.2) [0.40066533 0.59933467]
QNodes that return probabilities fully support autodifferentiation.
Added the convenience load functions
qml.from_pyquil,
qml.from_quiland
qml.from_quil_filethat convert pyQuil objects and Quil code to PennyLane templates. This feature requires version 0.8 or above of the PennyLane-Forest plugin. (#459)
Added a
qml.invmethod that inverts templates and sequences of Operations. Added a
@qml.templatedecorator that makes templates return the queued Operations. (#462)
For example, using this function to invert a template inside a QNode:
@qml.template def ansatz(weights, wires): for idx, wire in enumerate(wires): qml.RX(weights[idx], wires=[wire]) for idx in range(len(wires) - 1): qml.CNOT(wires=[wires[idx], wires[idx + 1]]) dev = qml.device('default.qubit', wires=2) @qml.qnode(dev) def circuit(weights): qml.inv(ansatz(weights, wires=[0, 1])) return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
Added the
QNodeCollectioncontainer class, that allows independent QNodes to be stored and evaluated simultaneously. Experimental support for asynchronous evaluation of contained QNodes is provided with the
parallel=Truekeyword argument. (#466)
Added a high level
qml.mapfunction, that maps a quantum circuit template over a list of observables or devices, returning a
QNodeCollection. (#466)
For example:
>>> def my_template(params, wires, **kwargs): >>> qml.RX(params[0], wires=wires[0]) >>> qml.RX(params[1], wires=wires[1]) >>> qml.CNOT(wires=wires) >>> obs_list = [qml.PauliX(0) @ qml.PauliZ(1), qml.PauliZ(0) @ qml.PauliX(1)] >>> dev = qml.device("default.qubit", wires=2) >>> qnodes = qml.map(my_template, obs_list, dev, measure="expval") >>> qnodes([0.54, 0.12]) array([-0.06154835 0.99280864])
Added high level
qml.sum,
qml.dot,
qml.applyfunctions that act on QNode collections. (#466)
qml.applyallows vectorized functions to act over the entire QNode collection:
>>> qnodes = qml.map(my_template, obs_list, dev, measure="expval") >>> cost = qml.apply(np.sin, qnodes) >>> cost([0.54, 0.12]) array([-0.0615095 0.83756375])
qml.sumand
qml.dottake the sum of a QNode collection, and a dot product of tensors/arrays/QNode collections, respectively.
Breaking changes
Deprecated the old-style
QNodesuch that only the new-style
QNodeand its syntax can be used, moved all related files from the
pennylane/betafolder to
pennylane. (#440)
Improvements
Added the
Tensor.prune()method and the
Tensor.non_identity_obsproperty for extracting non-identity instances from the observables making up a
Tensorinstance. (#498)
Renamed the
expt.tensornetand
expt.tensornet.tfdevices to
default.tensorand
default.tensor.tf. (#495)
Added a serialization method to the
CircuitGraphclass that is used to create a unique hash for each quantum circuit graph. (#470)
Added the
Observable.eigvalsmethod to return the eigenvalues of observables. (#449)
Added the
Observable.diagonalizing_gatesmethod to return the gates that diagonalize an observable in the computational basis. (#454)
Added the
Operator.matrixmethod to return the matrix representation of an operator in the computational basis. (#454)
Added a
QubitDeviceclass which implements common functionalities of plugin devices such that plugin devices can rely on these implementations. The new
QubitDevicealso includes a new
executemethod, which allows for more convenient plugin design. In addition,
QubitDevicealso unifies the way samples are generated on qubit-based devices. (#452) (#473)
Improved documentation of
AmplitudeEmbeddingand
BasisEmbeddingtemplates. (#441) (#439)
Codeblocks in the documentation now have a ‘copy’ button for easily copying examples. (#437)
Documentation
Bug fixes
Fixed a bug in
CVQNode._pd_analytic, where non-descendant observables were not Heisenberg-transformed before evaluating the partial derivatives when using the order-2 parameter-shift method, resulting in an erroneous Jacobian for some circuits. (#433)
Contributors
This release contains contributions from (in alphabetical order):
Juan Miguel Arrazola, Ville Bergholm, Alain Delgado Gran, Olivia Di Matteo, Theodor Isacsson, Josh Izaac, Soran Jahangiri, Nathan Killoran, Johannes Jakob Meyer, Zeyue Niu, Maria Schuld, Antal Száva.
Release 0.7.0¶
New features since last release
Custom padding constant in
AmplitudeEmbeddingis supported (see ‘Breaking changes’.) (#419)
StronglyEntanglingLayerand
RandomLayernow work with a single wire. (#409) (#413)
Added support for applying the inverse of an
Operationwithin a circuit. (#377)
Added an
OperationRecorder()context manager, that allows templates and quantum functions to be executed while recording events. The recorder can be used with and without QNodes as a debugging utility. (#388)
Operations can now specify a decomposition that is used when the desired operation is not supported on the target device. (#396)
The ability to load circuits from external frameworks as templates has been added via the new
qml.load()function. This feature requires plugin support — this initial release provides support for Qiskit circuits and QASM files when
pennylane-qiskitis installed, via the functions
qml.from_qiskitand
qml.from_qasm. (#418)
An experimental tensor network device has been added (#416) (#395) (#394) (#380)
An experimental tensor network device which uses TensorFlow for backpropagation has been added (#427)
Custom padding constant in
AmplitudeEmbeddingis supported (see ‘Breaking changes’.) (#419)
Breaking changes
The
padparameter in
AmplitudeEmbedding()is now either
None(no automatic padding), or a number that is used as the padding constant. (#419)
Initialization functions now return a single array of weights per function. Utilities for multi-weight templates
Interferometer()and
CVNeuralNetLayers()are provided. (#412)
The single layer templates
RandomLayer(),
CVNeuralNetLayer()and
StronglyEntanglingLayer()have been turned into private functions
_random_layer(),
_cv_neural_net_layer()and
_strongly_entangling_layer(). Recommended use is now via the corresponding
Layers()templates. (#413)
Improvements
Added extensive input checks in templates. (#419)
Templates integration tests are rewritten - now cover keyword/positional argument passing, interfaces and combinations of templates. (#409) (#419)
State vector preparation operations in the
default.qubitplugin can now be applied to subsets of wires, and are restricted to being the first operation in a circuit. (#346)
The
QNodeclass is split into a hierarchy of simpler classes. (#354) (#398) (#415) (#417) (#425)
Added the gates U1, U2 and U3 parametrizing arbitrary unitaries on 1, 2 and 3 qubits and the Toffoli gate to the set of qubit operations. (#396)
Changes have been made to accomodate the movement of the main function in
pytest._internalto
pytest._internal.mainin pip 19.3. (#404)
Added the templates
BasisStatePreparationand
MottonenStatePreparationthat use gates to prepare a basis state and an arbitrary state respectively. (#336)
Added decompositions for
BasisStateand
QubitStateVectorbased on state preparation templates. (#414)
Replaces the pseudo-inverse in the quantum natural gradient optimizer (which can be numerically unstable) with
np.linalg.solve. (#428)
Contributors
This release contains contributions from (in alphabetical order):
Ville Bergholm, Josh Izaac, Nathan Killoran, Angus Lowe, Johannes Jakob Meyer, Oluwatobi Ogunbayo, Maria Schuld, Antal Száva.
Release 0.6.1¶
New features since last release
Added a
print_appliedmethod to QNodes, allowing the operation and observable queue to be printed as last constructed. (#378)
Improvements
A new
Operatorbase class is introduced, which is inherited by both the
Observableclass and the
Operationclass. (#355)
Removed deprecated
@abstractpropertydecorators in
_device.py. (#374)
The
CircuitGraphclass is updated to deal with
Operationinstances directly. (#344)
Comprehensive gradient tests have been added for the interfaces. (#381)
Documentation
The new restructured documentation has been polished and updated. (#387) (#375) (#372) (#370) (#369) (#367) (#364)
Updated the development guides. (#382) (#379)
Added all modules, classes, and functions to the API section in the documentation. (#373)
Bug fixes
Replaces the existing
np.linalg.normnormalization with hand-coded normalization, allowing
AmplitudeEmbeddingto be used with differentiable parameters. AmplitudeEmbedding tests have been added and improved. (#376)
Contributors
This release contains contributions from (in alphabetical order):
Ville Bergholm, Josh Izaac, Nathan Killoran, Maria Schuld, Antal Száva
Release 0.6.0¶
New features since last release
The devices
default.qubitand
default.gaussianhave a new initialization parameter
analyticthat indicates if expectation values and variances should be calculated analytically and not be estimated from data. (#317)
Added C-SWAP gate to the set of qubit operations (#330)
The TensorFlow interface has been renamed from
"tfe"to
"tf", and now supports TensorFlow 2.0. (#337)
Added the S and T gates to the set of qubit operations. (#343)
Tensor observables are now supported within the
expval,
var, and
samplefunctions, by using the
@operator. (#267)
Breaking changes
The argument
nspecifying the number of samples in the method
Device.samplewas removed. Instead, the method will always return
Device.shotsmany samples. (#317)
Improvements
The number of shots / random samples used to estimate expectation values and variances,
Device.shots, can now be changed after device creation. (#317)
Unified import shortcuts to be under qml in qnode.py and test_operation.py (#329)
The quantum natural gradient now uses
scipy.linalg.pinvhwhich is more efficient for symmetric matrices than the previously used
scipy.linalg.pinv. (#331)
The deprecated
qml.expval.Observablesyntax has been removed. (#267)
Remainder of the unittest-style tests were ported to pytest. (#310)
The
do_queueargument for operations now only takes effect within QNodes. Outside of QNodes, operations can now be instantiated without needing to specify
do_queue. (#359)
Documentation
The docs are rewritten and restructured to contain a code introduction section as well as an API section. (#314)
Added Ising model example to the tutorials (#319)
Added tutorial for QAOA on MaxCut problem (#328)
Added QGAN flow chart figure to its tutorial (#333)
Added missing figures for gallery thumbnails of state-preparation and QGAN tutorials (#326)
Fixed typos in the state preparation tutorial (#321)
Fixed bug in VQE tutorial 3D plots (#327)
Bug fixes
Contributors
This release contains contributions from (in alphabetical order):
Shahnawaz Ahmed, Ville Bergholm, Aroosa Ijaz, Josh Izaac, Nathan Killoran, Angus Lowe, Johannes Jakob Meyer, Maria Schuld, Antal Száva, Roeland Wiersema.
Release 0.5.0¶
New features since last release
Adds a new optimizer,
qml.QNGOptimizer, which optimizes QNodes using quantum natural gradient descent. See for more details. (#295) (#311)
Adds a new QNode method,
QNode.metric_tensor(), which returns the block-diagonal approximation to the Fubini-Study metric tensor evaluated on the attached device. (#295)
Sampling support: QNodes can now return a specified number of samples from a given observable via the top-level
pennylane.sample()function. To support this on plugin devices, there is a new
Device.samplemethod.
Calculating gradients of QNodes that involve sampling is not possible. (#256)
default.qubithas been updated to provide support for sampling. (#256)
Added controlled rotation gates to PennyLane operations and
default.qubitplugin. (#251)
Breaking changes
The method
Device.supportedwas removed, and replaced with the methods
Device.supports_observableand
Device.supports_operation. Both methods can be called with string arguments (
dev.supports_observable('PauliX')) and class arguments (
dev.supports_observable(qml.PauliX)). (#276)
The following CV observables were renamed to comply with the new Operation/Observable scheme:
MeanPhotonto
NumberOperator,
Homodyneto
QuadOperatorand
NumberStateto
FockStateProjector. (#254)
Improvements
The
AmplitudeEmbeddingfunction now provides options to normalize and pad features to ensure a valid state vector is prepared. (#275)
Operations can now optionally specify generators, either as existing PennyLane operations, or by providing a NumPy array. (#295) (#313)
Adds a
Device.parametersproperty, so that devices can view a dictionary mapping free parameters to operation parameters. This will allow plugin devices to take advantage of parametric compilation. (#283)
Introduces two enumerations:
Anyand
All, representing any number of wires and all wires in the system respectively. They can be imported from
pennylane.operation, and can be used when defining the
Operation.num_wiresclass attribute of operations. (#277)
As part of this change:
Allis equivalent to the integer 0, for backwards compatibility with the existing test suite
Anyis equivalent to the integer -1 to allow numeric comparison operators to continue working
An additional validation is now added to the
Operationclass, which will alert the user that an operation with
num_wires = Allis being incorrectly.
The one-qubit rotations in
pennylane.plugins.default_qubitno longer depend on Scipy’s
expm. Instead they are calculated with Euler’s formula. (#292)
Creates an
ObservableReturnTypesenumeration class containing
Sample,
Varianceand
Expectation. These new values can be assigned to the
return_typeattribute of an
Observable. (#290)
Changed the signature of the
RandomLayerand
RandomLayerstemplates to have a fixed seed by default. (#258)
setup.pyhas been cleaned up, removing the non-working shebang, and removing unused imports. (#262)
Documentation
A documentation refactor to simplify the tutorials and include Sphinx-Gallery. (#291)
Examples and tutorials previously split across the
examples/and
doc/tutorials/directories, in a mixture of ReST and Jupyter notebooks, have been rewritten as Python scripts with ReST comments in a single location, the
examples/folder.
Sphinx-Gallery is used to automatically build and run the tutorials. Rendered output is displayed in the Sphinx documentation.
Links are provided at the top of every tutorial page for downloading the tutorial as an executable python script, downloading the tutorial as a Jupyter notebook, or viewing the notebook on GitHub.
The tutorials table of contents have been moved to a single quick start page.
Fixed a typo in
QubitStateVector. (#296)
Fixed a typo in the
default_gaussian.gaussian_statefunction. (#293)
Fixed a typo in the gradient recipe within the
RX,
RY,
RZoperation docstrings. (#248)
Fixed a broken link in the tutorial documentation, as a result of the
qml.expval.Observabledeprecation. (#246)
Bug fixes
Fixed a bug where a
PolyXPobservable would fail if applied to subsets of wires on
default.gaussian. (#277)
Contributors
This release contains contributions from (in alphabetical order):
Simon Cross, Aroosa Ijaz, Josh Izaac, Nathan Killoran, Johannes Jakob Meyer, Rohit Midha, Nicolás Quesada, Maria Schuld, Antal Száva, Roeland Wiersema.
Release 0.4.0¶
New features since last release
pennylane.expval()is now a top-level function, and is no longer a package of classes. For now, the existing
pennylane.expval.Observableinterface continues to work, but will raise a deprecation warning. (#232)
Variance support: QNodes can now return the variance of observables, via the top-level
pennylane.var()function. To support this on plugin devices, there is a new
Device.varmethod.
The following observables support analytic gradients of variances:
All qubit observables (requiring 3 circuit evaluations for involutory observables such as
Identity,
X,
Y,
Z; and 5 circuit evals for non-involutary observables, currently only
qml.Hermitian)
First-order CV observables (requiring 5 circuit evaluations)
Second-order CV observables support numerical variance gradients.
pennylane.about()function added, providing details on current PennyLane version, installed plugins, Python, platform, and NumPy versions (#186)
Removed the logic that allowed
wiresto be passed as a positional argument in quantum operations. This allows us to raise more useful error messages for the user if incorrect syntax is used. (#188)
Adds support for multi-qubit expectation values of the
pennylane.Hermitian()observable (#192)
Adds support for multi-qubit expectation values in
default.qubit. (#202)
Organize templates into submodules (#195). This included the following improvements:
Distinguish embedding templates from layer templates.
New random initialization functions supporting the templates available in the new submodule
pennylane.init.
Added a random circuit template (
RandomLayers()), in which rotations and 2-qubit gates are randomly distributed over the wires
Add various embedding strategies
Breaking changes
The
Devicemethods
expectations,
pre_expval, and
post_expvalhave been renamed to
observables,
pre_measure, and
post_measurerespectively. (#232)
Improvements
default.qubitplugin now uses
np.tensordotwhen applying quantum operations and evaluating expectations, resulting in significant speedup (#239), (#241)
PennyLane now allows division of quantum operation parameters by a constant (#179)
Portions of the test suite are in the process of being ported to pytest. Note: this is still a work in progress.
Ported tests include:
test_ops.py
test_about.py
test_classical_gradients.py
test_observables.py
test_measure.py
test_init.py
test_templates*.py
test_ops.py
test_variable.py
test_qnode.py(partial)
Bug fixes
Fixed a bug in
Device.supported, which would incorrectly mark an operation as supported if it shared a name with an observable (#203)
Fixed a bug in
Operation.wires, by explicitly casting the type of each wire to an integer (#206)
Removed code in PennyLane which configured the logger, as this would clash with users’ configurations (#208)
Fixed a bug in
default.qubit, in which
QubitStateVectoroperations were accidentally being cast to
np.floatinstead of
np.complex. (#211)
Contributors
This release contains contributions from:
Shahnawaz Ahmed, riveSunder, Aroosa Ijaz, Josh Izaac, Nathan Killoran, Maria Schuld.
Release 0.3.1¶
Bug fixes
Fixed a bug where the interfaces submodule was not correctly being packaged via setup.py
Release 0.3.0¶
New features since last release
PennyLane now includes a new
interfacessubmodule, which enables QNode integration with additional machine learning libraries.
Adds support for an experimental PyTorch interface for QNodes
Adds support for an experimental TensorFlow eager execution interface for QNodes
Adds a PyTorch+GPU+QPU tutorial to the documentation
Documentation now includes links and tutorials including the new PennyLane-Forest plugin.
Improvements
Printing a QNode object, via
print(qnode)or in an interactive terminal, now displays more useful information regarding the QNode, including the device it runs on, the number of wires, it’s interface, and the quantum function it uses:
>>> print(qnode) <QNode: device='default.qubit', func=circuit, wires=2, interface=PyTorch>
Contributors
This release contains contributions from:
Josh Izaac and Nathan Killoran.
Release 0.2.0¶
New features since last release
Added the
Identityexpectation value for both CV and qubit models (#135)
Added the
templates.pysubmodule, containing some commonly used QML models to be used as ansatz in QNodes (#133)
Added the
qml.InterferometerCV operation (#152)
Wires are now supported as free QNode parameters (#151)
Added ability to update stepsizes of the optimizers (#159)
Improvements
Removed use of hardcoded values in the optimizers, made them parameters (see #131 and #132)
Created the new
PlaceholderExpectation, to be used when both CV and qubit expval modules contain expectations with the same name
Provide the plugins a way to view the operation queue before applying operations. This allows for on-the-fly modifications of the queue, allowing hardware-based plugins to support the full range of qubit expectation values. (#143)
QNode return values now support any form of sequence, such as lists, sets, etc. (#144)
CV analytic gradient calculation is now more robust, allowing for operations which may not themselves be differentiated, but have a well defined
_heisenberg_repmethod, and so may succeed operations that are analytically differentiable (#152)
Bug fixes
Fixed a bug where the variational classifier example was not batching when learning parity (see #128 and #129)
Fixed an inconsistency where some initial state operations were documented as accepting complex parameters - all operations now accept real values (#146)
Contributors
This release contains contributions from:
Christian Gogolin, Josh Izaac, Nathan Killoran, and Maria Schuld.
Contents
Downloads | https://pennylane.readthedocs.io/en/stable/development/release_notes.html | 2020-02-17T07:02:46 | CC-MAIN-2020-10 | 1581875141749.3 | [] | pennylane.readthedocs.io |
Remote VCN Peering (Across Regions)
This topic is about remote VCN peering. In this case, remote means that the VCNs reside in different regions. If the VCNs you want to connect are in the same region, see Local VCN Peering (Within Region).
Warning
Avoid entering confidential information when assigning descriptions, tags, or friendly names to your cloud resources through the Oracle Cloud Infrastructure Console, API, or CLI.
Overview of Remote VCN Peering.
Summary of Networking Components for Remote Peering
At a high level, the Networking service components required for a remote peering include:
Two VCNs with non-overlapping CIDRs, in different regions that support remote peering. The VCNs must be in the same tenancy.Note
All VCN CIDRs Must Not Overlap
The two VCNs in the peering relationship must not have overlapping CIDRs. Also, if a particular VCN has multiple peering relationships, those other VCNs must not have overlapping CIDRs with each other. For example, if VCN-1 is peered with VCN-2 and also VCN-3, then VCN-2 and VCN-3 must not have overlapping CIDRs.
- A dynamic routing gateway (DRG) attached to each VCN in the peering relationship. Your VCN already has a DRG if you're using an IPSec VPN or an Oracle Cloud Infrastructure FastConnect private virtual circuit.
- A remote peering connection (RPC) on each DRG in the peering relationship.
- A connection between those two RPCs.
-.
Note
A given VCN can use the connected RPCs to reach only VNICs in the other VCN, and not destinations outside of the VCNs (such as the internet or your on-premises network). For example, if VCN-1 in the preceding diagram were to have an internet gateway, the instances in VCN-2 could NOT use it to send traffic to endpoints on the internet. However, be aware that VCN-2 could receive traffic from the internet via VCN-1. For more information, see Important Implications of Peering.
Spoke-to-Spoke: Remote Peering with Transit Routing
Imagine that in each region you have multiple VCNs in a hub-and-spoke layout, as shown in the following diagram. This type of layout within a region is discussed in detail in Transit Routing: Access to Multiple VCNs in the Same Region. The spoke VCNs in a given region are locally peered with the hub VCN in the same region, using local peering gateways .
You can set up remote peering between the two hub VCNs. You can then also set up transit routing for the hub VCN's DRG and LPGs, as discussed in Transit Routing: Access to Multiple VCNs in the Same Region. This setup allows a spoke VCN in one region to communicate with one or more spoke VCNs in the other region without needing a remote peering connection directly between those VCNs.
For example, you could configure routing so that resources in VCN-1-A could communicate with resources in VCN-2-A and VCN-2-B by way of the hub VCNs. That way, VCN 1-A is not required to have a separate remote peering with each of the spoke VCNs in the other region. You could also set up routing so that VCN-1-B could communicate with the spoke VCNs in region 2, without needing its own remote peerings to them.
Explicit Agreement Required from Both Sides
Peering involves two VCNs in the same tenancy that might be administered by the same party or two different ones. The two parties might both be in your company but in different departments.
Peering between two VCNs requires explicit agreement from both parties in the form of Oracle Cloud Infrastructure Identity and Access Management policies that each party implements for their own VCN's compartment .
Important Remote Peering Concepts
The following concepts help you understand the basics of VCN peering and how to establish a remote peering.
- peering
- A peering is a single peering relationship between two VCNs. Example: If VCN-1 peers with two other VCNs, then there are two peerings. The remote part of remote peering indicates that the VCNs are in different regions. For remote peering, the VCNs must be in the same tenancy.
-. The VCNs must be in the same tenancy.
- For more information about the required policies and VCN configuration, see Setting Up a Remote RPCs. In turn, the acceptor must create a particular IAM policy that gives the requestor permission to connect to RPCs in the acceptor's compartment . Without that policy, the requestor's request to connect fails.
- region subscription
- To peer with a VCN in another region, your tenancy must first be subscribed to that region. For information about subscribing, see Managing Regions.
- remote peering connection (rpc)
- A remote peering connection (RPC) is a component you create on the DRG attached to your VCN. The RPC's job is to act as a connection point for a remotely peered VCN. As part of configuring the VCNs, each administrator must create an RPC for the DRG on their VCN. A given DRG must have a separate RPC for each remote peering it establishes for the VCN (maximum 10 RPCs per tenancy). To continue with the previous example: the DRG on VCN-1 would have two RPCs to peer with two other VCNs. In the API, a RemotePeeringConnection is an object that contains information about the peering. You can't reuse an RPC to later establish another peering with it.
- connection between two rpcs
- When the requestor initiates the request to peer (in the Console or API), they're effectively asking to connect the two RPCs. This means the requestor must have information to identify each RPC (such as the RPC's region and OCID ).
- Either VCN administrator can terminate a peering by deleting their RPC. In that case, the other RPC's status switches to REVOKED. The administrator could instead render the connection non-functional by removing the route rules that enable traffic to flow across the connection (see the next section).
- routing to the drg
- As part of configuring the VCNs, each administrator must update the VCN's routing to enable traffic to flow between the VCNs. For each subnet that needs to communicate with the other VCN, you update the subnet's route table. The route rule specifies the destination traffic's CIDR and your DRG as the target. Your DRG routes traffic that matches that rule to the other DRG, DRG-1 based on the rule in Subnet A's route table. From there the traffic is routed through the RPCs to DRG-2, and then from there, on to the destination in Subnet X.
-
- Note
As mentioned earlier, a given VCN can use the connected RPCs to reach only VNICs in the other VCN, and not destinations outside of the VCNs (such as the internet or your on-premises network). Peering
If you haven't yet, read Important Implications of Peering to understand important access control, security, and performance implications for peered VCNs.
Setting Up a Remote Peering
This section covers the general process for setting up a peering between two VCNs in different regions.
The following procedure assumes that:
- Your tenancy is subscribed to the other VCN's region. If it's not, see Managing Regions.
- You already have a DRG attached to your VCN. If you don't, see Dynamic Routing Gateways (DRGs).
- Create the RPCs: Each VCN administrator creates an RPC for their own VCN's DRG.
- Share information: The administrators share the basic required information.
- Set up the required IAM policies for the connection: The administrators set up IAM policies to enable the connection to be established.
- Establish the connection: The requestor connects the two RPCs (see Important Remote Peering Concepts for the definition of the requestor and acceptor).
-. Each administrator needs to know the CIDR block or specific subnets from the other's VCN and share that in task B.
Each administrator creates an RPC for their own VCN's DRG. "You" in the following procedure means an administrator (either the acceptor or requestor).
Required IAM Policy to Create RPCs
If the administrators already have broad network administrator permissions (see Let network admins manage a cloud network), then they have permission to create, update, and delete RPCs. Otherwise, here's an example policy giving the necessary permissions to a group called RPCAdmins. The second statement is required because creating an RPC affects the DRG it belongs to, so the administrator must have permission to manage DRGs.
Allow group RPCAdmins to manage remote-peering-connections in tenancy Allow group RPCAdmins to manage drgs in tenancy
- In the Console, confirm you're viewing the compartment that contains the DRG that you want to add the RPC to. For information about compartments and access control, see Access Control.
Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateways.
- Click the DRG you're interested in.
- Under Resources, click Remote Peering Connections.
- Click Create Remote Peering Connection.
Enter the following:
- Name: A friendly name for the RPC. It doesn't have to be unique, and it cannot be changed later in the Console (but you can change it with the API). Avoid entering confidential information.
- Create in compartment: The compartment where you want to create the RPC, if different from the compartment you're currently working in.
Click Create Remote Peering Connection.
The RPC is then created and displayed on the Remote Peering Connections page in the compartment you chose.
- If you're the acceptor, record the RPC's region and OCID to later give to the requestor.
If you're the acceptor, give this information to the requestor (for example, by email or other out-of-band method):
- The region your VCN is in (the requestor's tenancy must be subscribed to this region).
- Your RPC's OCID.
- The CIDR blocks for subnets in your VCN that should be available to the other VCN. The requestor needs this information when setting up routing for the requestor VCN.
If you're the requestor, give this information to the acceptor:
- The region your VCN is in (the acceptor's tenancy must be subscribed to this region).
- The name of the IAM group that should be granted permission to create a connection in the acceptor's compartment (in the example in the next task, the group is RequestorGrp).
- The CIDR blocks for subnets in your VCN that should be available to the other VCN. The acceptor needs this information when setting up routing for the acceptor VCN.
Both the requestor and acceptor must ensure the right policies are in place. These consist of:
Policy R (implemented by the requestor):
Allow group RequestorGrp to manage remote-peering-from in compartment RequestorComp
The requestor is in an IAM group called RequestorGrp. This policy lets anyone in the group initiate a connection from any RPC remote-peering-to in compartment AcceptorComp
This policy lets the requestor connect to any RPC in the acceptor's compartment (AcceptorComp). This statement reflects the required agreement from the acceptor for the peering to be established. Policy A can be attached to either the tenancy (root compartment) or to AcceptorComp.
Both Policy R and Policy A give RequestorGrp access. However, Policy R has a resource-type called remote-peering-from, and Policy A has a resource-type called remote-peering-to. Together, these policies let someone in RequestorGrp establish the connection from an RPC in the requestor's compartment to an RPC in the acceptor's compartment. The API call to actually create the connection specifies which two RPCs.
Tip RPCs). And further, if the policy is instead written to cover the entire tenancy (
Allow group NetworkAdmin to manage virtual-network-family in tenancy), then the requestor already has all the required permissions in both compartments to establish the connection. In that case, policy A is not required.
The requestor must perform this task.
Prerequisite: The requestor must have:
- The region the acceptor's VCN is in (the requestor's tenancy must be subscribed to the region).
- The OCID of the acceptor's RPC.
- In the Console, view the details for the requestor RPC that you want to connect to the acceptor RPC.
- Click Establish Connection.
Enter the following:
- Region: The region that contains the acceptor's VCN. The drop-down list includes only those regions that both support remote VCN peering and your tenancy is subscribed to.
- Remote Peering Connection OCID: The OCID of the acceptor's RPC.
Click Establish Connection.
The connection is established and the RPC's state changes to PEERED.
As mentioned earlier, each administrator can do this task before or after the connection is established.
Prerequisite: Each administrator must have the CIDR block or specific subnets for the other VCN.
For your own VCN:
- Determine which subnets in your VCN need to communicate with the other VCN.
Update the route table for each of those subnets to include a new rule that directs traffic destined for the other VCN to your DRG:
Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
- Click the VCN you're interested in.
- Under Resources, click Route Tables.
- Click the route table you're interested in.
Click Add Route Rule and enter the following:
- Target Type: Dynamic Routing Gateway. The VCN's attached DRG is automatically selected as the target, and you don't have to specify the target yourself.
- Destination CIDR Block: The other VCN's CIDR block. If you want, you can specify a subnet or particular subset of the peered VCN's CIDR.
- Description: An optional description of the rule.
- Click Add Route Rule.
Any subnet traffic with a destination that matches the rule is routed to your DRG. For more information about setting up route rules, see Route Tables.
Tip
Without the required routing, traffic doesn't flow between the peered DRGs. If a situation occurs where you need to temporarily stop the peering, you can simply remove the route rules that enable traffic. You don't need to delete the RPCs.:
- In the Allow Rules for Ingress section, click +Add Rule.
- Leave the Stateless checkbox unchecked.
- RPCs.
Deleting an RPC terminates the peering. The RPC at the other side of the peering changes to the REVOKED state.
Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateways.
- Click the DRG you're interested in.
- Under Resources, click Remote Peering Connections.
- Click the RPC you're interested in.
- Click Terminate.
- Confirm when prompted.
After deleting an RPC (and thus terminating a peering), it's recommended you review your route tables and security rules to remove any rules that enabled traffic with the other VCN.
Using the API
For information about using the API and signing requests, see REST APIs and Security Credentials. For information about SDKs, see Software Development Kits and Command Line Interface.
To manage your RPCs and create connections, use these operations: | https://docs.cloud.oracle.com/en-us/iaas/Content/Network/Tasks/remoteVCNpeering.htm | 2020-02-17T06:57:25 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.cloud.oracle.com |
Standard enterprise governance guide
Overview of best practices
This governance guide follows the experiences of a fictional company through various stages of governance maturity. It is based on real customer experiences. The best practices are based on the constraints and needs of the fictional company.
As a quick starting point, this overview defines a minimum viable product (MVP) for governance based on best practices. It also provides links to some governance improvements that add further best practices as new business or technical risks emerge.
Warning
This MVP is a baseline starting point, based on a set of assumptions. Even this minimal set of best practices is based on corporate policies driven by unique business risks and risk tolerances. To see if these assumptions apply to you, read the longer narrative that follows this article.
Governance best practices
These best practices serve as a foundation for an organization to quickly and consistently add governance guardrails across your subscriptions.
Resource organization
The following diagram shows the governance MVP hierarchy for organizing resources.
Every application should be deployed in the proper area of the management group, subscription, and resource group hierarchy. During deployment planning, the cloud governance team will create the necessary nodes in the hierarchy to empower the cloud adoption teams.
- One management group for each type of environment (such as production, development, and test).
- Two subscriptions, one for production workloads and another for nonproduction workloads.
- Consistent nomenclature should be applied at each level of this grouping hierarchy.
- Resource groups should be deployed in a manner that considers their contents' lifecycle: everything that is developed together, managed together, and retired together belongs in the same resource group. For more information on resource group best practices, see here.
- Region selection is incredibly important and must be planned so that networking, monitoring, and auditing can be put in place for failover and failback, and so that the required SKUs are confirmed to be available in the preferred regions.
Here is an example of this pattern in use:
These patterns provide room for growth without complicating the hierarchy unnecessarily.
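As a sketch of how this hierarchy can be expressed as code, the following tenant-level Resource Manager template declares a single management group. The group names (`contoso`, `contoso-production`) are hypothetical placeholders for your own naming convention, and the `apiVersion` may need to be adjusted to a version available in your tenant.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-08-01/tenantDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      // Creates a "Production" management group under a hypothetical root group named "contoso".
      "type": "Microsoft.Management/managementGroups",
      "apiVersion": "2021-04-01",
      "name": "contoso-production",
      "properties": {
        "displayName": "Production",
        "details": {
          "parent": {
            "id": "/providers/Microsoft.Management/managementGroups/contoso"
          }
        }
      }
    }
  ]
}
```

Declaring the hierarchy this way keeps the naming convention consistent and makes it easy to repeat the same pattern for the development and test management groups.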
Note
In the event of changes to your business requirements, Azure management groups allow you to easily reorganize your management hierarchy and subscription group assignments. However, keep in mind that policy and role assignments applied to a management group are inherited by all subscriptions underneath that group in the hierarchy. If you plan to reassign subscriptions between management groups, make sure that you are aware of any policy and role assignment changes that may result. See the Azure management groups documentation for more information.
Governance of resources
A set of global policies and RBAC roles will provide a baseline level of governance enforcement. To meet the cloud governance team's policy requirements, implementing the governance MVP requires completing the following tasks:
- Identify the Azure Policy definitions needed to enforce business requirements. This can include using built-in definitions and creating new custom definitions.
- Create a blueprint definition using these built-in and custom policy definitions and the role assignments required by the governance MVP.
- Apply policies and configuration globally by assigning the blueprint definition to all subscriptions.
Identify policy definitions
Azure provides several built-in policies and role definitions that you can assign to any management group, subscription, or resource group. Many common governance requirements can be handled using built-in definitions. However, it's likely that you will also need to create custom policy definitions to handle your specific requirements.
Custom policy definitions are saved to either a management group or a subscription and are inherited through the management group hierarchy. If a policy definition's save location is a management group, that policy definition is available to assign to any of that group's child management groups or subscriptions.
Since the policies required to support the governance MVP are meant to apply to all current subscriptions, the following business requirements will be implemented using a combination of built-in definitions and custom definitions created in the root management group:
- Restrict the list of available role assignments to a set of built-in Azure roles authorized by your cloud governance team. This requires a custom policy definition (an illustrative definition is sketched after this list).
- Require the following tags on all resources: Department/Billing Unit, Geography, Data Classification, Criticality, SLA, Environment, Application Archetype, Application, and Application Owner. This can be handled using the `Require specified tag` built-in definition.
- Require that the `Application` tag for resources should match the name of the relevant resource group. This can be handled using the "Require tag and its value" built-in definition.
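A custom definition for the first requirement, restricting which role definitions may be used in role assignments, could look like the following sketch. It mirrors the pattern used in the public Azure Policy samples; the display name, description, and parameter name are illustrative, and the approved role definition IDs are supplied when the policy is assigned.

```json
{
  "properties": {
    "displayName": "Allowed role definitions for role assignments",
    "description": "Denies role assignments that use role definitions outside the approved list.",
    "mode": "All",
    "parameters": {
      "approvedRoleDefinitionIds": {
        "type": "Array",
        "metadata": {
          "displayName": "Approved role definition IDs",
          "description": "Resource IDs of the built-in roles authorized by the cloud governance team."
        }
      }
    },
    "policyRule": {
      "if": {
        "allOf": [
          {
            "field": "type",
            "equals": "Microsoft.Authorization/roleAssignments"
          },
          {
            "field": "Microsoft.Authorization/roleAssignments/roleDefinitionId",
            "notIn": "[parameters('approvedRoleDefinitionIds')]"
          }
        ]
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
```

When a definition like this is saved at the root management group, it can be assigned, directly or through the blueprint described below, to every subscription in the hierarchy.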
For information on defining custom policies see the Azure Policy documentation. For guidance and examples of custom policies, consult the Azure Policy samples site and the associated GitHub repository.
Assign Azure Policy and RBAC roles using Azure Blueprints
Azure policies can be assigned at the resource group, subscription, and management group level, and can be included in Azure Blueprints definitions. Although the policy requirements defined in this governance MVP apply to all current subscriptions, it's very likely that future deployments will require exceptions or alternative policies. As a result, assigning policy using management groups, with all child subscriptions inheriting these assignments, may not be flexible enough to support these scenarios.
Azure Blueprints allow the consistent assignment of policy and roles, application of Resource Manager templates, and deployment of resource groups across multiple subscriptions. As with policy definitions, blueprint definitions are saved to management groups or subscriptions, and are available through inheritance to any children in the management group hierarchy.
The cloud governance team has decided that enforcement of required Azure Policy and RBAC assignments across subscriptions will be implemented through Azure Blueprints and associated artifacts:
- In the root management group, create a blueprint definition named `governance-baseline`.
- Add the following blueprint artifacts to the blueprint definition:
- Policy assignments for the custom Azure Policy definitions defined at the management group root.
- Resource group definitions for any groups required in subscriptions created or governed by the Governance MVP.
- Standard role assignments required in subscriptions created or governed by the Governance MVP.
- Publish the blueprint definition.
- Assign the `governance-baseline` blueprint definition to all subscriptions.
See the Azure Blueprints documentation for more information on creating and using blueprint definitions.
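As an illustration of what these artifacts contain, the sketches below show a policy-assignment artifact (for the required-tag rule) and a role-assignment artifact as they might appear inside the `governance-baseline` blueprint. The definition ID, role definition ID, principal ID, and tag name are placeholders rather than values prescribed by this guide.

```json
{
  "kind": "policyAssignment",
  "properties": {
    "displayName": "Require the Department tag on all resources",
    "description": "Assigns the required-tag policy definition as part of the governance baseline.",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<required-tag-definition-id>",
    "parameters": {
      "tagName": {
        "value": "Department"
      }
    }
  }
}
```

A role-assignment artifact follows the same pattern:

```json
{
  "kind": "roleAssignment",
  "properties": {
    "displayName": "Grant Reader to the cloud governance team",
    "roleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/<reader-role-definition-id>",
    "principalIds": [
      "<governance-team-group-object-id>"
    ]
  }
}
```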
Secure hybrid VNet
Specific subscriptions often require some level of access to on-premises resources. This is common in migration scenarios or dev scenarios where dependent resources reside in the on-premises datacenter.
Until trust in the cloud environment is fully established, it's important to tightly control and monitor any allowed communication between the on-premises environment and cloud workloads, and to ensure that the on-premises network is secured against potential unauthorized access from cloud-based resources. To support these scenarios, the governance MVP adds the following best practices:
- Establish a secure hybrid VNet in the cloud.
- The VPN reference architecture establishes a pattern and deployment model for creating a VPN Gateway in Azure.
- Validate that on-premises security and traffic management mechanisms treat connected cloud networks as untrusted. Resources and services hosted in the cloud should only have access to authorized on-premises services.
- Validate that the local edge device in the on-premises datacenter is compatible with Azure VPN Gateway requirements and is configured to access the public internet.
- Note that VPN tunnels should not be considered production-ready circuits for anything but the simplest workloads. Anything beyond a few simple workloads requiring on-premises connectivity should use Azure ExpressRoute.
- In the root management group, create a second blueprint definition named `secure-hybrid-vnet`.
- Add the Resource Manager template for the VPN Gateway as an artifact to the blueprint definition (a combined sketch of this template and the virtual network template is shown after this list).
- Add the Resource Manager template for the virtual network as an artifact to the blueprint definition.
- Publish the blueprint definition.
- Assign the
secure-hybrid-vnetblueprint definition to any subscriptions requiring on-premises connectivity. This definition should be assigned in addition to the
governance-baselineblueprint definition.
One of the biggest concerns raised by IT security and traditional governance teams is the risk that early stage cloud adoption will compromise existing assets. The above approach allows cloud adoption teams to build and migrate hybrid solutions, with reduced risk to on-premises assets. As trust in the cloud environment increases, later evolutions may remove this temporary solution.
Note
The above is a starting point to quickly create a baseline governance MVP. This is only the beginning of the governance journey. Further evolution will be needed as the company continues to adopt the cloud and takes on more risk in the following areas:
- Mission-critical workloads
- Protected data
- Cost management
- Multicloud scenarios
Moreover, the specific details of this MVP are based on the example journey of a fictional company, described in the articles that follow. We highly recommend becoming familiar with the other articles in this series before implementing this best practice.
Iterative governance improvements
Once this MVP has been deployed, additional layers of governance can be incorporated into the environment quickly. Here are some ways to improve the MVP to meet specific business needs:
- Security Baseline for protected data
- Resource configurations for mission-critical applications
- Controls for Cost Management
- Controls for multicloud evolution
What does this guidance provide?
In the MVP, practices and tools from the Deployment Acceleration discipline are established to quickly apply corporate policy. In particular, the MVP uses Azure Blueprints, Azure Policy, and Azure management groups to apply a few basic corporate policies, as defined in the narrative for this fictional company. Those corporate policies are applied using Resource Manager templates and Azure policies to establish a small baseline for identity and security.
Incremental improvement of governance practices
Over time, this governance MVP will be used to improve governance practices. As adoption advances, business risk grows. Various disciplines within the Cloud Adoption Framework governance model will change to manage those risks. Later articles in this series discuss the incremental improvement of corporate policy affecting the fictional company. These improvements happen across three disciplines:
- Cost Management, as adoption scales.
- Security Baseline, as protected data is deployed.
- Resource Consistency, as IT Operations begins supporting mission-critical workloads.
Next steps
Now that you're familiar with the governance MVP and have an idea of the governance improvements to follow, read the supporting narrative for additional context.
Feedback | https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/govern/guides/standard/ | 2020-02-17T06:49:38 | CC-MAIN-2020-10 | 1581875141749.3 | [array(['../../../_images/govern/resource-organization.png',
'Diagram of resource organization'], dtype=object)
array(['../../../_images/govern/mid-market-resource-organization.png',
'Resource organization example for a mid-market company'],
dtype=object)
array(['../../../_images/govern/governance-mvp.png',
'Example of an incremental governance MVP'], dtype=object)
array(['../../../_images/govern/governance-improvement.png',
'Example of an incremental governance MVP'], dtype=object)] | docs.microsoft.com |
File Revision DTO v9 Represents a revision of the given file. Field Name Type Category Constraint Reference Description description String Optional length >= 0 & length <= 2048 Description of the given file revision. file Identifier Required File : 8, 9, 10 Referenced file. language String Optional length >= 0 & length <= 255 Language of the file for which the revision is defined. latestRevision Boolean Optional Indicates if the revision is latest. It's possible to have at most one latest revision for a file. revision String Required length >= 0 & length <= 255 String value indicating file revision. title String Optional length >= 0 & length <= 512 Title of the file revision. | https://docs.coresystems.net/api/dtos/filerevisiondto_v9.html | 2020-02-17T06:23:32 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.coresystems.net |
ASP.NET - how it uses Windows memory
Sooner or later you are bound to run into the dreaded OutOfMemoryException and wonder, "why me, why now?". ASP.NET and other web applications are particularly susceptible to high memory consumption because a web application is typically serving 100s or 1000s of users all at once, unlike your typical desktop application which is just serving one user.
How big is big?
The code running inside usermode processes are provided with the illusion that they are in their own private world of memory. This illusion is provided by the Windows Virtual Memory Manager. Code can request a chunk of this virtual address space through a call to the VirtualAlloc API and it is handed out in chunks of 4Kb. By default a usermode process "sees" 2Gb of address space. The other 2Gb is reserved for the kernel. This 2Gb is a pretty big place. If every pixel on my screen of 1024 x 768 pixels represents one of these 4Kb pages, 2Gb is about 2/3 of my entire screen. But in these days of data hungry web applications it is surprising how easy it is to run out.
If a system is booted with the /3Gb switch in boot.ini (only supported on Enterprise and Data Center editions of Windows 2000, and all versions of Windows XP and Windows Server 2003) a process that is linked with the /LARGEADDRESSAWARE switch can "see" 3Gb. Aspnet_wp.exe is linked in that way in version 1.1 and can take advantage of that. However you have to be careful booting a system with /3Gb as this reduces the amount of memory available to the kernel which is not always appropriate. It depends what the server is used for and what else runs on it. And that is your lot. 3Gb is all you are going to get unless you make the switch to the 64-bit world. If you are running on x64 Windows Server 2003, even if you stick to 32-bit worker processes you can still get a bit extra. Usermode now gets access to a full 4Gb of addressable space and the kernel bits are neatly tucked away out of sight. This is dicussed in this rather intersting article about how the folks at Microsoft.com switched to x64.
Whether the process "sees" 2Gb or 3Gb, there are many things that use this virtual address space. For example each DLL loaded into the process, thread local storage, stacks, native heaps and directly allocated virtual memory blocks all occupy parts of this virtual address space. Out of what is left, the .NET runtime has to allocate virtual address ranges for use by the managed heap. So in a best case, you could expect to get about 1.5Gb or 2.2Gb of managed objects allocated in a process (on standard and /3Gb booted systems respectively).
Fragmentation
Virtual memory within a process always gets fragmented to a greater or lesser extent. The free space is never all together at one place. And virtual address space fragmentation is not like disk fragmentation. You cannot just run a defragmentation tool to clean things up (despite what people will try to sell you on the internet!). Since native code works with absolute pointers to memory allocations you would have to inform everyone that has a pointer to the block if someone moved it. That just isn't going to happen. (The .NET garbage collector does do this for managed memory but that's the point - it's managed. Virtual memory is the Wild West of memory.)
Certain types of memory allocation require a certain minimum size of free block to succeed. For example the .NET runtime generally reserves virtual address space in 64Mb blocks. So it is possible to be in a situation where there are maybe 100s of Mb free but no single block bigger than 64Mb. In this situation if the .NET runtime happens to need to extend the reserved space for the managed heap (due to memory allocation pressure from the application) an OutOfMemoryException will occur.
If you want to see all those various virtual memory allocations just attach WinDBG and run the !address command. This will output details of every virtual memory allocation in the process. If you haven't got time to read all of them just add -summary to the command and you'll get the high level overview of how much is committed, free, used for native heaps and what is the size of the largest contiguous free block.
Heaps
Now 4Kb allocations are pretty inefficient for most application purposes. If you want to allocate memory to store the word "Beans" as ASCII text then you only need 6 bytes including a null terminator so why would you want 4Kb? 99.8% of the memory would be wasted. And if you then wanted to allocate memory for "Cheese" you would need another 4Kb allocation. Beans and Cheese are both good but are they that good?
Enter the concept of heap managers.
The idea of the heap manager is to act as a broker between the application code and virtual memory. It's kind of like borrowing money. If you want to borrow £5,000 to buy a car and you go out to the international money markets to borrow it, the financiers are going to turn around and say "Sorry, but we only lend in traunches of £10,000,000. So instead you go to your local bank. They've already borrowed the money in chunks of £10,000,000 and are willing to do the administration and hand it out in smaller loans to people like you and me. Well, sometimes.
Windows provides a heap manager which is used for memory allocation by 99% of non-.NET applications whether they know it or not. When a C++ application calls 'new' this is usually compiled down to a call to HeapAlloc which is the API by which you ask the heap manager for a chunk of memory. I sometimes refer to this as the native heap to distinguish it from the .NET garbage collected heap. Most .NET application processes are actually using native heap as well. Even the CLR uses a little bit of native heap for some of its internal structures. Also if your application interoperates with things like COM components or ODBC drivers then those components will be mapped into your process space and will be using native heap for their memory allocation.
The managed objects of your application are of course stored in the .NET managed heap, a.k.a the garbage collected heap. The CLR allocates address space directly with calls to VirtualAlloc. It does this (mostly) in chunks of 64Mb, commits it on an as needed basis and then efficiently subdivides that into the various managed objects the application allocates.
Damage limitation
Sometimes the use of memory in ASP.NET applications gets out of control All at once 50 users decide to request a 50Mb report that none of the developers ever anticipated would exist. And they only ever tested it with 10 users anyway. Or a bug in code hooks up an object instance to a static event handler and before you know it you've leaked all your precious memory.
To combat this, ASP.NET has a governor mechanism built in. This regularly monitors the approximate total committed memory in the process. If this reaches a specified percentage threshold of the physical RAM on the system ASP.NET decides to call it a day on that particular worker process instance and spins up a new one. New requests are routed to it and the existing requests in the old one allowed to come to a natural conclusion and then the old worker process is shutdown. This threshold is set via the memoryLimit setting in machine.config.
If the webGarden setting is set to false then only one aspnet_wp.exe process runs and the memoryLimit setting is interpreted as the percentage of physical RAM at which ASP.NET will proactively recycle the worker process. However if webGarden is true then the number of CPUs is taken into account (as you would get as many worker processes as you have CPUs).
Therefore if you have 4Gb of RAM and memoryLimit="70", this means that private bytes for the aspnet_wp.exe process would have to reach 4 * 0.7 = 2.8Gb before ASP.NET initiated a recycle. If you had webGarden ="true", if you had 4 CPUs the calculation would be 4 * 0.7 /4 = 700Mb. If you had one CPU it is actually very unlikely you will ever reach that threshold because you are likely to have run into an OutOfMemoryException due to address space fragmentation long before you managed to commit 2.8Gb of allocations. So you need to think carefully about what memoryLimit should be set to based on how much RAM you have, how many CPUs you have and whether you have enabled webGardening. In IIS6 things are somewhat different as IIS takes over the performance and health monitoring of the worker processes.
You do not really want to get to the stage where OutOfMemoryExceptions start occurring. If you do it is unlikley that the application will recover. What will most likely happen is that a bit of memory will be recovered allowing things to continue for a while but then another allocation will fail at some random place and the worker process will become unstable. Ultimately it will become unresponsive and be deemed unhealthy by either IIS or the ASP,NET ISAPI filter and be terminated.
A better option would be if the OutOfMemoryException were treated as a fatal-to-process situation and the process were immediately terminated. With .NET 1.1 and later this is possible if you set the GCFailFastOnOOM registry value to a non-zero value. ( [Updated 30/5/8] I recommend a value of 5. Any non-zero value used to be sufficient to trigger a failfast on OOM but changes to the CLR mean that is no longer the case). Note: in this case the process is rudely terminated. In memory state will be lost, but hopefully your application puts anything important in a transacted database anyway. As soon as the process is gone, ASP.NET or IIS health monitoring will spin up a new one.
These mechanisms for automatic recycling of the worker process in high memory situations are really just stop gap measures until you get around to figuring out why your application uses so much memory in the first place or why it uses more and more as time goes on. The answer to those two questions is another story entirely... | https://docs.microsoft.com/en-us/archive/blogs/dougste/asp-net-how-it-uses-windows-memory | 2020-02-17T08:12:02 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.microsoft.com |
After you’ve determined your configuration options, set up your virtual machines (VMs). The ThoughtSpot base image for booting the VMs and some other aspects of system setup will be shared with you on GCP by ThoughtSpot.
About the ThoughtSpot and Google Cloud Platform
ThoughtSpot uses a custom image to populate VMs on GCP. The base image is a Centos derived image, which will be available to you in your Google Compute Engine project for Boot disk options under Custom Images.
Ask ThoughtSpot Support for access to this image. We will need the Google account/email ID of the individual who will be signed into your organization’s GCP console. We will share ThoughtSpot’s GCP project with them so they can use the contained boot disk image for creating ThoughtSpot VMs.
Overview
Before you can create a ThoughtSpot cluster, you need to provision VMs. We’ll do this on Google Compute Engine, the GCP platform for creating and running VMs.
In a nutshell, the required configuration ThoughtSpot is:
- 64 vCPU
- 416 GB RAM
- 250 GB SSD for the boot disk, provisioned with a ThoughtSpot base image
- Two 1 TB SSDs for data
The following topics walk you through this process.
Create an instance
Go to the Compute Engine dashboard, and select the associated ThoughtSpot project.
Select VM instances on the left panel and click CREATE INSTANCE.
Provide a name for the image, choose a region, choose number of CPUs (e.g., 8 vCPUs for a cluster), and click Customize to further configure CPUs and memory.
For Machine type set the following configuration:
Configure the Boot disk.
a. Scroll down to the find the Boot disk section and click Change.
b. Click Custom Images on the tabs at the top, select a ThoughtSpot base image and configure the boot disk as follows:Note: ThoughtSpot updates these base images with patches and enhancements. If more than one image is available, the latest one is always at the top of the list. Both will work, but we recommend using the latest image because it typically contains the latest security and maintenance patches.
c. Click Select to save the boot disk configuration.
Back on the main configuration page, click to expand the advanced configuration options (Management, security, disks, networking, sole tenancy).
Attach two 1 TB SSD drives. These drives will be used for the data storage.
a. Click the Disks tab, and click Add new disk.
b. Configure the following settings for each disk.
Customize the network settings as needed, preferably use your default VPC settings.
Repeat these steps to create the necessary number of such VMs.
Prepare the VMs (ThoughtSpot Systems Reliability Team)
Before we can install a ThoughtSpot cluster, an administrator must log in to each VM via SSH as user “admin” and complete the following preparation steps:
- Run
sudo /usr/local/scaligent/bin/prepare_disks.shon every machine.
- Configure each VM based on the site-survey.
Launch the cluster
Upload the TS tarball to one of the machines and proceed with the normal cluster creation process, using tscli cluster create. | https://docs.thoughtspot.com/5.2/appliance/gcp/launch-an-instance.html | 2020-02-17T06:00:29 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.thoughtspot.com |
iopro.pyodbc¶
This project is an enhancement of the Python database module for ODBC that implements the Python DB API 2.0 specification. You can see the original project here:
The enhancements are documented in this file. For general info about the pyodbc package, please refer to the original project documentation.
This module enhancement requires:
- Python 2.4 or greater
- ODBC 3.0 or greater
- NumPy 1.5 or greater (1.7 is required for datetime64 support)
The enhancements in this module consist mainly in the addition of some new methods for fetching the data after a query and put it in a variety of NumPy containers.
Using NumPy as data containers instead of the classical list of tuples has a couple of advantages:
1) The NumPy container is much more compact, and hence, it requires much less memory, than the original approach.
2) As a NumPy container can hold arbitrarily large arrays, it requires much less object creation than the original approach (one Python object per datum retrieved).
This means that this enhancements will allow to fetch data out of relational databases in a much faster way, while consuming significantly less resources.
API additions¶
Methods¶
Cursor.fetchdictarray (size=cursor.arraysize)
This is similar to the original Cursor.fetchmany(size), but the data is returned in a dictionary where the keys are the names of the columns and the values are NumPy containers.
For example, it a SELECT is returning 3 columns with names ‘a’, ‘b’ and ‘c’ and types varchar(10), integer and timestamp, the returned object will be something similar to:
{'a': array([...], dtype='S11'), 'b': array([...], dtype=int32), 'c': array([...], dtype=datetime64[us])}
Note that the varchar(10) type is translated automatically to a string type of 11 elements (‘S11’). This is because the ODBC driver needs one additional space to put the trailing ‘0’ in strings, and NumPy needs to provide the room for this.
Also, it is important to stress that all the timestamp types are translated into a NumPy datetime64 type with a resolution of microseconds by default.
Cursor.fetchsarray (size=cursor.arraysize)
This is similar to the original Cursor.fetchmany(size), but the data is returned in a NumPy structured array, where the name and type of the fields matches to those resulting from the SELECT.
Here it is an example of the output for the SELECT above:
array([(...), (...)], dtype=[('a', '|S11'), ('b', '<i4'), ('c', ('<M8[us]', {}))])
Note that, due to efficiency considerations, this method is calling the fetchdictarray() behind the scenes, and then doing a conversion to get an structured array. So, in general, this is a bit slower than its fetchdictarray() counterpart.
Data types supported¶
The new methods listed above have support for a subset of the standard ODBC. In particular:
- String support (SQL_VARCHAR) is supported.
- Numerical types, be them integers or floats (single and double precision) are fully supported. Here it is the complete list: SQL_INTEGER, SQL_TINYINT, SQL_SMALLINT, SQL_FLOAT and SQL_DOUBLE.
- Dates, times, and timestamps are mapped to the datetime64 and timedelta NumPy types. The list of supported data types are: SQL_DATE, SQL_TIME and SQL_TIMESTAMP,
- Binary data is not supported yet.
- Unicode strings are not supported yet.
NULL values¶
As there is not (yet) a definitive support for missing values (NA) in NumPy, this module represents NA data as particular values depending on the data type. Here it is the current table of the particular values:
int8: -128 (-2**7) uint8: 255 (2**8-1) int16: -32768 (-2**15) uint16: 65535 (2**16-1) int32: -2147483648 (-2**31) uint32: 4294967295 (2**32-1) int64: -9223372036854775808 (-2**63) uint64: 18446744073709551615 (2**64-1) float32: NaN float64: NaN datetime64: NaT timedelta64: NaT (or -2**63) string: 'NA'
Improvements for 1.1 release¶
- The rowcount is not trusted anymore for the fetchdict() and fetchsarray() methods. Now the NumPy containers are built incrementally, using realloc for a better use of resources.
- The Python interpreter does not exit anymore when fetching an exotic datatype not supported by NumPy.
- The docsctrings for fetchdict() and fetchsarray() have been improved. | https://docs.anaconda.com/iopro/1.8.0/pyodbc/ | 2020-02-17T08:01:24 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.anaconda.com |
About
From the Clever Global Search page, you can enter the search term that you would like to find. This can be a complete or partial name, phone number, postcode, item description, code, or anything else that you may want to locate. is:
A flexible search criteria across both standard and custom tables and fields
User-friendly with a | https://docs.cleverdynamics.com/Clever%20Global%20Search/User%20Guide/About/ | 2020-02-17T07:18:04 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.cleverdynamics.com |
What happens when I archive my goal?
If you want to stop working on a goal you can archive it. It will set the end date of your goal to yesterday or end of last week. You can still review your performance or even re-activate your archived goal.
Open the goal you would like to archive and choose the Archive goal option from the options menu . | https://docs.goalifyapp.com/what_happens_when_i_archive_my_goal.en.html | 2020-02-17T07:41:34 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.goalifyapp.com |
NtCreateFile function
The NtCreateFile routine creates a new file or opens an existing file.
Syntax
__kernel_entry NTSYSCALLAPI NTSTATUS NtCreateFile( );
Parameters
FileHandle
A pointer to a HANDLE variable that receives a handle to the file.
DesiredAccess
A pointer to a LARGE_INTEGER that contains the initial allocation size, in bytes, for a file that is created or overwritten. If AllocationSize is NULL, no allocation size is specified. If no file is created or overwritten, AllocationSize is ignored.
FileAttributes
Type of share access, which is specified as zero or any combination of the following flags.
Device and intermediate drivers usually set ShareAccess to zero, which gives the caller exclusive access to the open file.
CreateDisposition
Specifies the action to perform if the file does or does not exist. CreateDisposition can be one of the values in the following table.
CreateOptions
Specifies the options to apply when the driver creates or opens the file. Use one or more of the flags in the following table.
EaBuffer
For device and intermediate drivers, this parameter must be a NULL pointer.
EaLength
For device and intermediate drivers, this parameter must be zero.
Return value
NtCreateFile returns STATUS_SUCCESS on success or an appropriate NTSTATUS error code on failure. In the latter case, the caller can determine the cause of the failure by checking the IoStatusBlock parameter.
Note
N NtCreateFile tries to handle this situation.
Remarks
N N N N NtCreateFile for a given file must not conflict with the accesses that other openers of the file have disallowed.
The CreateDisposition value FILE_SUPERSEDE requires that the caller have DELETE access to a existing file object. If so, a successful call to N NtCreateFile is called with a existing file and either of these CreateDisposition values, the file will N. N NtReadFile or NtWriteFile must be a multiple of the sector size.
- The Length passed to NtReadFile or NtW NtCreateFile to get a handle for the file object that represents the physical device, and pass that handle to NtQueryInformationFile. For a list of the system's FILE_XXX_ALIGNMENT values, see DEVICE_OBJECT.
- Calls to NtSet NtReadFile and NtWriteFile. Call NtQueryInformationFile or NtSetInformationFile to get or set this position.
If the CreateOptions FILE_OPEN_REPARSE_POINT flag is not specified and NtCreateFile attempts to open a file with a reparse point, normal reparse point processing occurs for the file. If, on the other hand, the FILE_OPEN_REPARSE_POINT flag is specified, normal reparse processing does not occur and NtCreateFile attempts to directly open the reparse point file. In either case, if the open operation was successful, NtCreateFile returns STATUS_SUCCESS; otherwise, the routine returns an NTSTATUS error code. N N 3.
Callers of NtCreateFile must be running at IRQL = PASSIVE_LEVEL and with special kernel APCs enabled.
Note
If the call to this function occurs in user mode, you should use the name "NtCreateFile" instead of "ZwCreateFile".
InitializeObjectAttributes
Using Nt and Zw Versions of the Native System Services Routines
Feedback | https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/nf-ntifs-ntcreatefile | 2020-02-17T08:24:16 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.microsoft.com |
Integrate real-time chat in your Android client apps with speed and efficiency. Our Android Android Android SDK is simple if you’re familiar with using external libraries or SDKs. To install the SDK using
Gradle, add the following lines to a
build.gradle file at the app level.
repositories { maven { url "" } } dependencies { implementation 'com.sendbird.sdk:sendbird-android-sdk:3.0.118' } from and write on a user device’s storage. To grant system permissions, add the following lines to your
AndroidManifest.xml file.
<uses-permission android: <!-- READ/WRITE_EXTERNAL_STORAGE permissions are required to upload or download files from/into external storage. --> <uses-permission android: <uses-permission android:
When you build your APK with
minifyEnabled true, add the following line to the module's ProGuard rules file.
-dontwarn com.sendbird.android.shadow.**
The Android SDK simplifies messaging SDK to Android’s context, thereby allowing it to respond to connection and state changes. To the
init() method, pass the App ID of your SendBird application in the dashboard to initialize the SDK.
Note: The
SendBird.init()method must be called once across your Android client app. It is recommended to initialize the Android messages.
OpenChannel.getChannel(CHANNEL_URL, new OpenChannel.OpenChannelGetHandler() { @Override public void onResult(OpenChannel openChannel, SendBirdException e) { if (e != null) { // Error. return; } openChannel.enter(new OpenChannel.OpenChannelEnterHandler() { @Override public void onResult(SendBirdException e) { if (e != null) { //, new BaseChannel.SendUserMessageHandler() { @Override public void onSent(UserMessage userMessage, SendBirdException e) { if (e != null) { // Error. return; } } }); | https://docs.sendbird.com/android/quick_start | 2020-02-17T07:19:34 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.sendbird.com |
To show own fonts you need to first add .ttf file to application. So open your App_Data/Pdf folder and copy your font to
Next open your nopCommerce Configuration/Settings/All Settings (Advanced)
Find setting pdfsettings.fontfilename and change value to your font name:
To show how it is working will add just only one string. That will affect all invoice, so will be empty invoice except that string added by me.
Result
Please remember to check “My language has diacritical marks” setting at the first tab of plugin configuration. | http://docs.nop4you.com/own-fonts | 2020-02-17T07:12:53 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.nop4you.com |
Settings¶
Soft Body¶
Reference
- Panel
- Collision Collection
If set, soft body collides with objects from the collection, instead of using objects that are on the same layer.
- Object
-.
- Simulation
- Speed
You can control the internal timing of the soft body system with this value. It sets the correlation between frame rate and tempo of the simulation. A free falling body should cover a distance of about ten meters after one second. You can adjust the scale of your scene and simulation with this correlation. If you render with 25 frames per second, you will have to set Speed to 1.3.
Soft Body Cache¶
Reference
- Panel
Soft Body physics simulations use a unified system for caching and baking. See Particle Cache and General Baking documentation for reference.
Soft Body Goal¶
Reference
- Panel
- Use Goal.
- Goal Strength
- Default
Goal weight/strength for all vertices when no Vertex Group is assigned. If you use a vertex group the weight of a vertex defines its goal.
- Minimum/Maximum
When you use a vertex group, you can use the Minimum and Maximum to fine-tune (clamp) the weight values. The lowest vertex weight will become Minimum, the highest value becomes Maximum.
Soft Body Edges¶
Reference
- Panel
- Use Edges
Allow the edges in a mesh object to act like springs. See interior forces.
- Springs
- (computationally intensive!). While Face enabled is great, and solves lots of collision errors, there does not seem to be any dampening settings for it, so parts of the soft body object near a collision mesh tend to „jitter“ as they bounce off and fall back, even when there is no motion of any meshes. Edge collision has dampening, so that can be controlled, but Deflection dampening value on a collision object does not seem to affect the face collision.
-.
- Stiff Quads
- Use Stiff Quads
For quad faces, the diagonal edges are used as springs. This stops quad faces to collapse completely on collisions (what they would do otherwise).
- Shear
Stiffness of the virtual springs created for quad faces.
Soft Body Self Collision¶
Reference
- Panel
Bemerkung
Self-Collision is working only if you have activated Use Edges.
- Self Collision
When enabled, allows you to control how Blender will prevent the soft body from intersecting with itself. Every vertex is surrounded with an elastic virtual ball. Vertices may not penetrate the balls of other vertices. If you want a good result you may have to adjust the size of these balls. Normally it works pretty well with the default options.
- Calculation Type
- Manual
The Ball Size directly sets the ball size.
- Average
The average length of all edges attached to the vertex is calculated and then multiplied with the Ball Size setting. Works well with evenly distributed vertices.
- Minimal/Maximal
The ball size is as large as the smallest/largest spring length of the vertex multiplied with the Ball Size.
- Average Min Max
Size = ((Min + Max)/2) × Ball Size.
- Ball Size
Fraction of the length of attached edges. The edge length is computed based on the choosen algorithm. This setting is the factor that is multiplied by the spring length. It is a spherical distance (radius) within which, if another vertex of the same mesh enters, the vertex starts to deflect in order to avoid a self-collision. Set this value to the fractional distance between vertices that you want them to have their own „space“. Too high of a value will include too many vertices all the time and slow down the calculation. Too low of a level will let other vertices get too close and thus possibly intersect because there will not be enough time to slow them down.
- Stiffness
How elastic that ball of personal space is. A high stiffness means that the vertex reacts immediately to another vertex enters their space.
-
- Panel
The settings in the Soft Body Solver panel determine the accuracy of the simulation.
- Step Size
- Min Step
Minimum simulation steps per frame. Increase this value, if the soft body misses fast-moving collision objects.
- Max Step. Default 0.1. allow you to control how Blender will react (deform) the soft body. | https://docs.blender.org/manual/de/dev/physics/soft_body/settings.html | 2020-02-17T06:15:20 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.blender.org |
Saves displayed online maps into internal database in temporary memory and accelerates their future loading.
Displays only cached map tiles of online maps to avoid downloading from the internet.
If an online map is activated but none of its tiles are cached and this option is checked, the map screen is empty.
Loads SQLite-based maps according to actual GPS position. There is no need to select maps manually when out of one, Locus Map switches it on automatically. To make this work, store all the maps into one folder.
There are three options:
Loads available vector maps according to actual GPS position. There is no need to select maps when out of one, Locus Map switches it on automatically.
Select one of world online or offline maps for upper zooms so that detailed zooms of your local map are loaded faster.
Enables setting size of vector maps texts (names of cities, streets, etc.) on scale from 50 to 500% of the default size.
Graphic elements displaying various non-map objects or values.
Map scale is the ratio of a distance on the map to the corresponding distance on the ground. Traditional indicator in the lower left corner of the map screen:
Circles indicate estimated beeline distance to be passed according to user's current speed in 5, 15, 30 and 60 minutes:
Circles indicate selected distances from the map screen cursor - 10, 25, 50, 100, 500 m, 1, 2, 5, 10, 25, 50, 100 and 200 km:
In case the GPS is fixed the circles have different color and indicate distance from the user's location.
Displays elevation value at the map screen cursor position:
It is based on downloaded offline elevation files (Locus Map Pro only).
Stretches a line with azimuth and distance between user's GPS location and the map screen cursor:
Displays a line from user's current position across the screen to indicate direction of motion:
Useful when trying to maintain a specific course.
Displays a line from user's current position across the screen to indicate direction he/she is pointing at with the device:
Similar function to the Show view button on the bottom screen panel.
Sets size of texts within auxuliary objects (dynamic altitude, labels etc.) from 50 to 300% of a normal size.
Tap Enable and move the slider to adjust the resolution you see on the preview window. It displays the map around position of your map cursor.
Locus Map Pro only
Enables additional shading of map based on offline elevation files.
Enables adjusting colors of active map: | https://docs.locusmap.eu/doku.php?id=manual:user_guide:maps_settings | 2020-02-17T07:18:06 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.locusmap.eu |
Ota Unicast Bootloader Client Plugin
APIs/callbacks for ota-unicast-bootloader clients. More...
Detailed Description
APIs/callbacks for ota-unicast-bootloader clients.-client.h for source code.
Function Documentation.
A callback invoked by the OTA Bootloader Client plugin to indicate that an image download has completed.
- Parameters
-
A callback invoked by the OTA Unicast Bootloader Client plugin when an image segment that is part of an image the application chose to download was received.
- Parameters
-
A callback invoked by the OTA Unicast Bootloader Client plugin to indicate that an OTA Unicast Bootloader Server has requested to perform a bootload operation at a certain point in time in the future.
- Parameters
-
- Returns
- TRUE if the application accepted the request of bootloading the specified image at the requested time, FALSE otherwise.
A callback invoked by the OTA Unicast Client plugin when the client starts receiving a new image. The application can choose to start receiving the image or ignore(refuse) it. If the application chooses to receive the image, other images sent out by another server are ignored until the client completes this download.
- Parameters
- | https://docs.silabs.com/connect-stack/2.5/group-ota-unicast-bootloader-client | 2020-02-17T08:21:33 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.silabs.com |
SimpleLocalize can be used on many different ways, you can choose between CLI, Import/Export option or create your custom workflow thanks to API access.
CLI allows you to improve your workflow by finding i18n keys in your source code on every commit push. Setup new project, configure
simplelocalize.yml file, invoke bash script. CLI will find i18n keys and send them to SimpleLocalize.
curl -s | bash
Import & Export feature allows you to translate content in Excel files or some custom software. Get an easy access to those features thanks to our web application or API
Use links below to upload or export your JSON/CSV/Excel files.
Use links below to create and fetch translations through our API | https://docs.simplelocalize.io/ | 2020-02-17T06:35:41 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.simplelocalize.io |
最近更新时间:2020-02-17 15:07:09
Door Chain provided by KS3 can control accessing via the black and white lists. The black lists are used to add the source domains to be blocked, and the white lists are used to add the source domains that are allowed to access. The Door Chain function can be manually disabled.
Obey the following rules when entering a domain name:
Because some legitimate requests do not have a referer, sometimes the requests with an empty referer cannot be rejected. In KS3, we also provide the option to manually set the referer. | https://docs.ksyun.com/documents/28021?preview=1 | 2020-02-17T07:07:37 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.ksyun.com |
Using Windows Workflow Foundation for UI flow
Matt Winkle introduced a sample solution called PageFlow which shows how to implement UI flow using the Windows Workflow Foundation. More details about the sample are at Matt's blog. Shelly Guo has a nice demo of PageFlow on Channel 9 here. I just watched it. PageFlow adds a new workflow type specifically for UI flow. In Shelly's demo, she shows using the same PageFlow for ASP.NET & WPF. Since introducing the sample, some recurring questions are being asked both internally and externally. Questions like "Why not a state machine?" and "What about WCSF / Acropolis / Codename 'foo'?" Matt answers these questions on his blog.
-Marc | https://docs.microsoft.com/en-us/archive/blogs/msdn/publicsector/using-windows-workflow-foundation-for-ui-flow | 2020-02-17T08:25:02 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.microsoft.com |
Projects Portfolio¶
The Projects Portfolio allows you to track the status of projects you have access to, including:
- projects you own
- projects you have been added to as a collaborator
- or if you are a system administrators, all projects across Domino
To open the Projects Portfolio, click Control Center in the Switch To menu.
You will find the Projects Portfolio as an option in the Control Center main menu.
The Control Center interface above shows the Projects Portfolio with the following important elements:
- This is the main menu option
This interface allows you to quickly digest the state of work in your projects. To maximize the usefulness of this tool, be sure to understand how administrators can configure meaningful project stages for their teams, and read about how users set project stages, raise and resolve blockers, and change project status. | https://docs.dominodatalab.com/en/3.6/reference/projects/Projects_Portfolio.html | 2020-02-17T07:47:34 | CC-MAIN-2020-10 | 1581875141749.3 | [array(['../../_images/Screen_Shot_2019-06-17_at_7.20.04_AM.png',
'Screen_Shot_2019-06-17_at_7.20.04_AM.png'], dtype=object)] | docs.dominodatalab.com |
How can I add a note to an activity?
You can add a short note to any activity – use the text field on the recording screen.
To add a note to an already recorded activity, switch to the goal's calendar after opening your goal. Now tap on the Edit button next to the activity you want to add a note to. You will also be able to change the recorded value. | https://docs.goalifyapp.com/how_can_i_add_a_note_to_an_activity.en.html | 2020-02-17T07:45:12 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.goalifyapp.com |
Microsoft 365 Education
Microsoft 365 is available in a variety of plans to best meet the needs of your organization. If you're looking for the differences between the Microsoft 365 and Office 365 Education plans, this article will show you which features are included in each of them.
Microsoft 365 provides a complete system, including Office 365, Windows 10, and Enterprise Mobility and Security. The following table lists the Office 365 for Education A1, A3, and A5 features along with the corresponding Microsoft 365 for Education A3 and A5 features. To compare Office 365 features across business and enterprise plans, see Compare Office 365 for Business plans, or, for a more detailed list of features, see the relevant service description at Office 365 service descriptions. To search for support articles and information, see Office Help & Training.
Services and features
Each Microsoft 365 Education plan includes a number of individual services, such as Exchange Online and SharePoint Online. The following table shows the services that are available in each Office 365 and Microsoft 365 plan so that you can choose the solution that best meets your needs. To review Office 365 services and features in greater detail, see the Office 365 Education service description.
Note
1 Includes Exchange Online Plan 1 plus supplemental features.
2 Includes Exchange Online Plan 2.
3 Includes SharePoint Online Plan 1 plus supplemental features.
4 Includes SharePoint Online Plan 2.
5 Project Online Essentials is not included but can be added for free to the Office 365 Education plan.
6 Microsoft 365 Education A5 contains Phone System, Audio Conferencing, and Calling Plan capabilities. To implement Calling Plan requires an additional plan purchase (either Domestic Calling Plan or International Calling Plan).
7 To learn more about which Azure Information Protection features are included with Office 365 plans, see Azure Information Protection.
8 Includes Intune.
9 Servers and CALs are included for Exchange, SharePoint, and Skype for Business.
10 ECAL or Core CAL, depending on the version of A3 that is purchased—with A5, the ECAL rights are included.
11 For more information about Azure Active Directory, see What is Active Directory?.
12 Office Pro Plus is required in order to apply protections and send protected emails from the Outlook Desktop.
13 Microsoft 365 Education A5 Student Use Benefit does not include Microsoft Defender Advanced Threat Protection.
Feedback | https://docs.microsoft.com/en-us/office365/servicedescriptions/office-365-platform-service-description/microsoft-365-education | 2020-02-17T06:55:43 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.microsoft.com |
AutomationPeer.IsPasswordCore Method
Microsoft Silverlight will reach end of support after October 2021. Learn more.
When overridden in a derived class, is called by IsPassword.
Namespace: System.Windows.Automation.Peers
Assembly: System.Windows (in System.Windows.dll)
Syntax
'Declaration Protected MustOverride Function IsPasswordCore As Boolean
protected abstract bool IsPasswordCore()
Return Value
Type: System.Boolean
true if the element contains sensitive content; | https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/ms600961%28v%3Dvs.95%29 | 2020-02-17T07:45:45 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.microsoft.com |
Represents the base class for classes that provide the common Y-axes functionality in Swift Plot charts.
Namespace: DevExpress.XtraCharts
Assembly: DevExpress.XtraCharts.v19.2.dll
public abstract class SwiftPlotDiagramAxisYBase : SwiftPlotDiagramAxis
Public MustInherit Class SwiftPlotDiagramAxisYBase Inherits SwiftPlotDiagramAxis
The SwiftPlotDiagramAxisYBase serves as a base class both for primary and secondary Y-axes of a Swift Plot Diagram.
For the X-axis, a similar functionality is provided by the SwiftPlotDiagramAxisXBase class. | https://docs.devexpress.com/CoreLibraries/DevExpress.XtraCharts.SwiftPlotDiagramAxisYBase | 2020-02-17T07:30:49 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.devexpress.com |
Trenches
A tasteful and, at the same time, shockingly naturalistic image of the war in Donbas.
In his directorial debut, Loup Bureau reveals the depressing daily life on the front lines as experienced by young Ukrainian soldiers waging a positional battle with separatists somewhere in the Donbas. Long, hypnotic, black and white shots wander through endless labyrinths of trenches reminiscent of World War I. While the effort put into building them is painfully real, the makeshift shelters provide only illusory cover from enemy missiles. Death lurks literally everywhere and war is nothing like a romantic adventure. The director, who spent three months in the trenches, captures their atmosphere clearly, almost tangibly, showing fear, blood, sweat, mud and tears. Although the images ultimately turn to color on the screen, there is no doubt that the time spent in the trenches will permanently scar the characters' psyches.
Konrad Wirkowski
2021 La Biennale di Venezia
2021 IDFA | https://watchdocs.pl/en/watch-docs/2021/films/trenches | 2022-06-25T13:33:19 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['/upload/thumb/2021/11/trenches_still_1-1-_auto_800x900.jpg',
'Trenches'], dtype=object) ] | watchdocs.pl |
Search and Replace Data in a Range
Sometimes, you need to search for and replace specific data in a range, ignoring any cell values outside the desired range. Aspose.Cells allows you to limit a search to a specific range. This article explains how.
Aspose.Cells provides the FindOptions.setRange() method for specifying a range when searching for data.
Suppose you want to search for the string “search” and replace it with “replace” in the range E3:H6. In the screenshot below, the string “search” can be seen in several cells but we want to replace it only in a given range, here highlighted in yellow.
Input file
After the execution of the code, the output file looks like the below. All “search” strings within the range have been replaced with “replace”.
Output file
| https://docs.aspose.com/cells/java/search-and-replace-data-in-a-range/ | 2022-06-25T13:50:22 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['search-and-replace-data-in-a-range_1.png', 'todo:image_alt_text'],
dtype=object)
array(['search-and-replace-data-in-a-range_2.png', 'todo:image_alt_text'],
dtype=object) ] | docs.aspose.com |
# Import Account From Keeper Wallet
Open waves.exchange (opens new window) app and click Log in.
Click Software, enter your password and click Continue.
Click Import accounts.
Select Keeper Wallet.
On the next screen check the Account address and Account name. If everything is correct, click Continue.
In Keeper Wallet window click Sign.
After that, you will be forwarded to your wallet. | https://docs.waves.exchange/en/waves-exchange/waves-exchange-online-desktop/online-desktop-account/online-desktop-import-keeper | 2022-06-25T14:16:27 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['/assets/img/import_01.db00a949.png', None], dtype=object)
array(['/assets/img/import_02.7f7a7f1b.png', None], dtype=object)
array(['/assets/img/import_03.ebbaf886.png', None], dtype=object)
array(['/assets/img/import_04.2ce4d6ad.png', None], dtype=object)] | docs.waves.exchange |
Using ADO.NET
- PDF for offline use
-
- Sample Code:
-
- Related Recipes:
-
- Related Links:
-
Let us know how you feel about this
Translation Quality
0/250
last updated: 2017-03
Xamarin has built-in support for the SQLite database that is available
on Android and can be exposed using familiar ADO.NET-like syntax. Using
these APIs requires you to write SQL statements that are processed by
SQLite, such as
CREATE TABLE,
INSERT and
SELECT statements.
Assembly References
To use access SQLite via ADO.NET you must add
System.Data and
Mono.Data.Sqlite
references to your Android project, as shown here:
Right-click References > Edit References... then click to select the required assemblies.
About Mono.Data.Sqlite
We will use the
Mono.Data.Sqlite.SqliteConnection class to create a
blank database file and then to instantiate
SqliteCommand objects
that we can use to execute SQL instructions against the database.
Creating a Blank Database – Call the
CreateFilemethod with a valid (ie. writeable) file path. You should check whether the file already exists before calling this method, otherwise a new (blank) database will be created over the top of the old one, and the data in the old file will be lost.
Mono.Data.Sqlite.SqliteConnection.CreateFile (dbPath);The
dbPathvariable should be determined according the rules discussed earlier in this document.
Creating a Database Connection – After the SQLite database file has been created you can create a connection object to access the data. The connection is constructed with a connection string which takes the form of
Data Source=file_path, as shown here:
var connection = new SqliteConnection ("Data Source=" + dbPath); connection.Open(); // do stuff connection.Close();
As mentioned earlier, a connection should never be re-used across different threads. If in doubt, create the connection as required and close it when you're done; but be mindful of doing this more often than required too.
Creating and Executing a Database Command – Once we have a connection we can execute arbitrary SQL commands against it. The code below shows a
CREATE TABLEstatement being executed.
using (var command = connection.CreateCommand ()) { command.CommandText = "CREATE TABLE [Items] ([_id] int, [Symbol] ntext, [Name] ntext);"; var rowcount = command.ExecuteNonQuery (); }
When executing SQL directly against the database you should take the
normal precautions not to make invalid requests, such as attempting to
create a table that already exists. Keep track of the structure of your
database so that you don't cause a
SqliteException such as SQLite
error table [Items] already exists.
Basic Data Access
The DataAccess_Basic sample code for this document looks like this when running on Android:
The code below illustrates how to perform simple SQLite operations and shows the results in as text in the application's main window.
You'll need to include these namespaces:
using System; using System.IO; using Mono.Data.Sqlite;
The following code sample shows an entire database interaction:
- Creating the database file
- Inserting some data
- Querying the data
These operations would typically appear in multiple places throughout your code, for example you may create the database file and tables when your application first starts and perform data reads and writes in individual screens in your app. In the example below have been grouped into a single method for this example:
public static SqliteConnection connection; public static string DoSomeDataAccess () { // determine the path for the database file string dbPath = Path.Combine ( Environment.GetFolderPath (Environment.SpecialFolder.Personal), "adodemo.db3"); bool exists = File.Exists (dbPath); if (!exists) { Console.WriteLine("Creating database"); // Need to create the database before seeding it with some data Mono.Data.Sqlite.SqliteConnection.CreateFile (dbPath); connection = new SqliteConnection ("Data Source=" + dbPath); var commands = new[] { "CREATE TABLE [Items] (_id ntext, Symbol ntext);", "INSERT INTO [Items] ([_id], [Symbol]) VALUES ('1', 'AAPL')", "INSERT INTO [Items] ([_id], [Symbol]) VALUES ('2', 'GOOG')", "INSERT INTO [Items] ([_id], [Symbol]) VALUES ('3', 'MSFT')" }; // Open the database connection and create table with data connection.Open (); foreach (var command in commands) { using (var c = connection.CreateCommand ()) { c.CommandText = command; var rowcount = c.ExecuteNonQuery (); Console.WriteLine("\tExecuted " + command); } } } else { Console.WriteLine("Database already exists"); // Open connection to existing database file connection = new SqliteConnection ("Data Source=" + dbPath); connection.Open (); } // query the database to prove data was inserted! using (var contents = connection.CreateCommand ()) { contents.CommandText = "SELECT [_id], [Symbol] from [Items]"; var r = contents.ExecuteReader (); Console.WriteLine("Reading data"); while (r.Read ()) Console.WriteLine("\tKey={0}; Value={1}", r ["_id"].ToString (), r ["Symbol"].ToString ()); } connection.Close (); }
More Complex Queries
Because SQLite allows arbitrary SQL commands to be run against the
data, you can perform whatever
CREATE,
INSERT,
UPDATE,
DELETE,
or
SELECT statements you like. You can read about the SQL commands
supported by SQLite at the Sqlite website. The SQL statements are run
using one of three methods on an
SqliteCommand object:
ExecuteNonQuery – Typically used for table creation or data insertion. The return value for some operations is the number of rows affected, otherwise it's -1.
ExecuteReader – Used when a collection of rows should be returned as a
SqlDataReader.
ExecuteScalar – Retrieves a single value (for example an aggregate).
EXECUTENONQUERY
INSERT,
UPDATE, and
DELETE statements will return the number of
rows affected. All other SQL statements will return -1.
using (var c = connection.CreateCommand ()) { c.CommandText = "INSERT INTO [Items] ([_id], [Symbol]) VALUES ('1', 'APPL')"; var rowcount = c.ExecuteNonQuery (); // rowcount will be 1 }
EXECUTEREADER
The following method shows a
WHERE clause in the
SELECT statement.
Because the code is crafting a complete SQL statement it must take care
to escape reserved characters such as the quote (') around strings.
public static string MoreComplexQuery () { var output = ""; output += "\nComplex query example: "; string dbPath = Path.Combine ( Environment.GetFolderPath (Environment.SpecialFolder.Personal), "ormdemo.db3"); connection = new SqliteConnection ("Data Source=" + dbPath); connection.Open (); using (var contents = connection.CreateCommand ()) { contents.CommandText = "SELECT * FROM [Items] WHERE Symbol = 'MSFT'"; var r = contents.ExecuteReader (); output += "\nReading data"; while (r.Read ()) output += String.Format ("\n\tKey={0}; Value={1}", r ["_id"].ToString (), r ["Symbol"].ToString ()); } connection.Close (); return output; }
The
ExecuteReader method returns a
SqliteDataReader object. In addition
to the
Read method shown in the example, other useful properties
include:
RowsAffected – Count of the rows affected by the query.
HasRows – Whether any rows were returned.
EXECUTESCALAR
Use this for
SELECT statements that return a single value (such as an
aggregate).
using (var contents = connection.CreateCommand ()) { contents.CommandText = "SELECT COUNT(*) FROM [Items] WHERE Symbol <> 'MSFT'"; var i = contents.ExecuteScalar (); }
The
ExecuteScalar method's return type is
object – you should
cast the result depending on the database query. The result could be an
integer from a
COUNT query or a string from a single column
query. Note that this is different to other
Execute methods that return
a reader object or a count of the number of rows. | https://docs.mono-android.net/guides/android/application_fundamentals/data/part_4_using_adonet/ | 2017-07-20T20:32:12 | CC-MAIN-2017-30 | 1500549423486.26 | [array(['Images/image5.png',
'Android references in Xamarin Studio Android references in Xamarin Studio'],
dtype=object)
array(['Images/image7.png',
'Android references in Visual Studio Android references in Visual Studio'],
dtype=object)
array(['Images/image8.png',
'Android ADO.NET sample Android ADO.NET sample'], dtype=object)] | docs.mono-android.net |
RadioButton
- PDF for offline use
-
Let us know how you feel about this
Translation Quality
0/250
last updated: 2017-04
In this section, you will create two mutually-exclusive radio buttons (enabling one disables the other), using the RadioGroup and RadioButton widgets. When either radio button is pressed, a toast message will be displayed.
Open the Resources/layout/Main.axml file and add two RadioButtons, nested in a RadioGroup (inside the LinearLayout):
<RadioGroup
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical">
    <!-- ids match the code-behind below; layout attribute values shown are typical defaults -->
    <RadioButton android:id="@+id/radio_red"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Red" />
    <RadioButton android:id="@+id/radio_blue"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Blue" />
</RadioGroup>
It's important that the RadioButtons are grouped together by the RadioGroup element so that no more than one can be selected at a time. This logic is automatically handled by the Android system. When one RadioButton within a group is selected, all others are automatically deselected.
To do something when each RadioButton is selected, we need to write an event handler:
private void RadioButtonClick (object sender, EventArgs e)
{
    RadioButton rb = (RadioButton)sender;
    Toast.MakeText (this, rb.Text, ToastLength.Short).Show ();
}
First, the sender that is passed in is cast into a RadioButton. Then a Toast message displays the selected radio button's text.
Now, at the bottom of the OnCreate() method, add the following:
RadioButton radio_red = FindViewById<RadioButton>(Resource.Id.radio_red);
RadioButton radio_blue = FindViewById<RadioButton>(Resource.Id.radio_blue);

radio_red.Click += RadioButtonClick;
radio_blue.Click += RadioButtonClick;
This captures each of the RadioButtons from the layout and adds the newly-created event handler to each.
Run the application.
Tip: If you need to change the state yourself (such as when loading a saved CheckBoxPreference), use the Checked property setter or the Toggle() method.
Portions of this page are modifications based on work created and shared by the Android Open Source Project and used according to terms described in the Creative Commons 2.5 Attribution License.
This topic describes how to configure startup options that will be used every time the Database Engine starts in SQL Server 2017, by using SQL Server Configuration Manager. For a list of startup options, see Database Engine Service Startup Options.
Before You Begin.
Using SQL Server Configuration Manager
To configure startup options
Click the Start button, point to All Programs, point to Microsoft SQL Server 2017, point to Configuration Tools, and then click SQL Server Configuration Manager.
In SQL Server Configuration Manager, click SQL Server Services.
In the right pane, right-click SQL Server (<instance name>), and then click Properties. On the Startup Parameters tab, type the parameter in the Specify a startup parameter box, and then click Add.
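For example, typical entries in the Specify a startup parameter box look like the following (shown only as illustrations; add just the options you actually need):

-m
-f
-m"SQLCMD"

Here -m starts the instance in single-user mode, -f starts it with minimal configuration, and -m"SQLCMD" limits the single-user connection to a client that identifies itself as the sqlcmd program.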
Warning
After you are finished using single-user mode, in the Startup Parameters box, select the -m parameter in the Existing Parameters box, and then click Remove. Restart the Database Engine to restore SQL Server to the typical multi-user mode.
See Also
Start SQL Server in Single-User Mode
Connect to SQL Server When System Administrators Are Locked Out
Start, Stop, or Pause the SQL Server Agent Service
Class: Aws::Greengrass::Types::AssociateServiceRoleToAccountRequest
Inherits: Struct
Defined in: gems/aws-sdk-greengrass/lib/aws-sdk-greengrass/types.rb
Overview
Note:
When making an API call, you may pass AssociateServiceRoleToAccountRequest data as a hash:
{ role_arn: "__string", }
Instance Attribute Summary
- #role_arn ⇒ String
Role arn you wish to associate with this account.
Instance Attribute Details
#role_arn ⇒ String
Role arn you wish to associate with this account.
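As an illustration, the hash shown above maps directly onto the client call that this request type backs (this assumes the standard generated client method for the operation; the region and role ARN below are placeholders):

require 'aws-sdk-greengrass'

client = Aws::Greengrass::Client.new(region: 'us-east-1')

# Associate a Greengrass service role with the current account
resp = client.associate_service_role_to_account(
  role_arn: 'arn:aws:iam::123456789012:role/GreengrassServiceRole'
)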
Object Scopes
Determining How the Mule Container Deals with Objects
Scope (also referred to as cardinality) describes how the objects are created and managed in the Mule container.
Three Scopes
Three object scopes are defined in Mule:
Singleton - only one object is created for all requests
Prototype - a new object is created for every request or every time the object is requested from the registry
Pooled - Only applies to component objects, but these are stored in a pool that guarantees that only one thread will access an object at any one time.
Rules About Object Scopes
Singleton objects must be thread-safe, since multiple threads will be accessing the same object. Therefore, any member variables need to be guarded when writing to ensure only one thread at a time changes the data.
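For instance, a singleton-scoped component that keeps a running count must guard its shared state; an atomic field is one simple way to do that (plain Java sketch; the class and method names are illustrative):

import java.util.concurrent.atomic.AtomicLong;

public class RequestCounter {
    // Shared by every request because the component is a singleton
    private final AtomicLong count = new AtomicLong();

    public long onCall(Object payload) {
        return count.incrementAndGet(); // atomic write keeps concurrent threads consistent
    }
}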
Objects that are given prototype scope are created for each request on the object, so the object does not need to be thread-safe, since there will only ever be one thread accessing it. However, the object must be stateless, since any member variables will only exist for the lifetime of the request.
Pooled objects are thread safe since Mule guarantees that only one thread will access a given object at a time. Pooled objects can't easily maintain state on the object itself since multiple instances are created. The advantage of pooled over prototype scope is for objects that are expensive to create, where creating a new instance for every message would slow down the application.
When configuring Mule through XML, Spring is used to translate the XML into the objects that define how the Mule instance will run. All top-level objects in Mule are singletons, including Service, Model, Connector, Endpoint, Agent, Security Manager, and Transaction Manager. The only object that has prototype scope is Transformer (this is because transformers can be used in different contexts and potentially in different threads at the same time).
Components (POJO objects used by your service) can either be singleton, prototype or pooled.
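As a sketch of how the three component scopes look in XML configuration (element names assume the standard Mule 3 core schema; the class names are placeholders):

<!-- Singleton: one instance shared by all requests -->
<component>
    <singleton-object class="com.acme.StatelessGreeter"/>
</component>

<!-- Prototype: a new instance for every request -->
<component>
    <prototype-object class="com.acme.StatelessGreeter"/>
</component>

<!-- Pooled: instances kept in a pool, each accessed by one thread at a time -->
<pooled-component class="com.acme.ExpensiveGreeter">
    <pooling-profile maxActive="8" initialisationPolicy="INITIALISE_ONE"/>
</pooled-component>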
Themes
This help article demonstrates, step by step, how to customize the ControlDefault theme for RadProgressBar. Navigate to ProgressIndicatorElement1 in the Controls Structure on the left side, then select UpperProgressindicatorElement in the Elements section.
Modify the applied fill repository item.
Save the theme by selecting File >> Save As.
Now, you can apply your custom theme to RadProgressBar by using the demonstrated approach in the following link: Using custom themes
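For example, once the saved theme is loaded into your project, applying it is typically a single assignment (the theme name below is a placeholder):

// Apply the custom theme to the progress bar
this.radProgressBar1.ThemeName = "MyCustomTheme";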
Welcome to APWG eCrime Research, a community of researchers from a number of fields within industry and academia working together to forge what is coalescing into a new discipline, ecrime research, drawing from a number of sciences.
A good deal of the activity of APWG eCrime Research is contained in the preparation and development of research by investigators and students developing papers and discussions for the peer-reviewed APWG eCrime Research Summit (in the Fall) and the APWG eCrime Researchers Sync-Up (in early Spring).
The research products that have proceeded from this community have been influential in the establishment and development of APWG programs in education, standards development, industrial policy, and in mapping the emerging ecrime threatscape. As such, the APWG has cultivated this community to serve the interests of its membership since the formal founding of the APWG eCrime Researchers Summit in 2006.
As of 2009, however, APWG formally articulated its intent to establish a collaborative research center for industry and academia for eCrime, a goal that has yet to be achieved but around which a growing community is expressing interest and gathering their resources to join. Announcements about that project will be forthcoming here through our eCrime Research Blog, hosted by APWG eCrime Researchers Summit Chair Randy Vaughn of Baylor University.
APWG holds inaugural eCrime Researchers Sync-up in Dublin in March 2010, for researchers in the field
IEEE-SA Funds eCrime Fighter Scholarship Program for APWG's eCrime researchers
The Run Restore page lets you launch and monitor the restore process that has been selected in the previous dialogs.
The three panels on the left side of the page (Restore From, Restore To, and Database to be Restored) contain the data previously gathered. Note that the Restore To host is displayed only if it differs from the Restore From host.
The restore process uses disk space under the temporary directory configured in the Backup Where page. The amount of disk space required depends on the number of backup images required for restoration and the backup method used.
Restores can take a long time if the database is large. The status of the restore process is shown in the right-hand panel. The restore process can be cancelled using the Cancel link that appears in the right-hand pane; ZRM will cancel the restore process only when it is safe to do so (i.e., when it will not cause data corruption or loss of data).
When a restore task is cancelled, only the processes running on the ZRM server are stopped. Any processes on the MySQL server may have to be stopped manually.
After restoration, it is important to check the database(s)/table(s) that were restored. SQL command CHECK TABLE can be used for consistency checking. Use of EXTENDED option is recommended. EXTENDED option does a full key lookup for all rows in the table and will take significant time for a large database.
mysql> CHECK TABLE <table1>, <table2> EXTENDED;
No other application can be using the database during table consistency check.
When you are restoring databases with InnoDB tables to a MySQL server that already has InnoDB tables, the common InnoDB files are overwritten (ib_data and ib_logfile*). ZRM displays a warning message :
Backup directory contains InnoDB data and InnodDB log files, During the restore process these files will be overwritten on target MySQL server. You may loose the existing data from the MySQL server. Are you sure you want to continue the Restore?
It is important to make sure there are no InnoDB tables in the MySQL server being restored to. Please see restoration of InnoDB tables section below for an alternative.
This procedure can be used to instantiate a MySQL slave.
You need to follow the procedure from the MySQL manual on how to set up replication. Instead of steps 3 and 5, you can use ZRM backups of the master server to restore the data to the slave server in step 5.
After restoring data to the replication slave, either from backup images of the MySQL master or from another MySQL replication slave, you need to configure the master server information on the slave.
Perform the MySQL CHANGE MASTER TO command on the replication slave as shown below:
mysql> CHANGE MASTER TO
    ->   MASTER_HOST='replication_master_host_name',
    ->   MASTER_USER='replication_user_name',
    ->   MASTER_PASSWORD='replication_password',
    ->   MASTER_LOG_FILE='value from next-binlog parameter from ZRM reports',
    ->   MASTER_LOG_POS=0;
MASTER_LOG_FILE
This value can be obtained from ZRM reports for the backup that was restored. Use ZRM Custom Reports page and look at Next BinLog parameter for the backup run.
MASTER_LOG_POS
The master-log-position will be zero because logs are rotated when backups are performed.
If you have performed the backup using Xtrabackup, you can restore one or more InnoDB table(s) from a database backup. You should select the tables for restoration in the Restore Where page. The ZRM restore process exports the InnoDB tables.
The exported innoDB tables will have to be manually imported into Percona MySQL server. These steps will have to be performed manually and are documented in the Percona manual.
1000 Provider Unreachable
The Issue
Nanobox is unable to connect to your hosting provider.
Possible Causes
- Your hosting provider's API may be unresponsive or returning errors.
Steps to Take
View the Full Stack Trace
The full stack trace includes meta information related to the running sequence as well as the response from your service provider. These should help to pinpoint the exact cause of the error.
Contact Your Hosting Provider
There may be a known issue or outage preventing Nanobox from communicating with your hosting provider. Contact your hosting provider for more information.
Retry the Process
Once the issue with your provider has been resolved, retry the process. The retry button is available in your dashboard, to the right of the error information.
Reach out to [email protected] and we'll try to help.
Initiates the asynchronous execution of the GetComplianceSummaryByResourceType operation.
This is an asynchronous operation using the standard naming convention for .NET 4.5 or higher. For .NET 3.5 the operation is implemented as a pair of methods using the standard naming convention of BeginGetComplianceSummaryByResourceType and EndGetComplianceSummaryByResourceType.
Namespace: Amazon.ConfigService
Assembly: AWSSDK.ConfigService.dll
Version: 3.x.y.z
Container for the necessary parameters to execute the GetComplianceSummaryByResourceType service method.
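A minimal usage sketch of the async call (the resource type filter and cancellation token are illustrative; run inside an async method):

using System.Collections.Generic;
using System.Threading;
using Amazon.ConfigService;
using Amazon.ConfigService.Model;

// inside an async method
var client = new AmazonConfigServiceClient();

var request = new GetComplianceSummaryByResourceTypeRequest
{
    // Optional: restrict the summary to specific resource types
    ResourceTypes = new List<string> { "AWS::EC2::Instance" }
};

var response = await client.GetComplianceSummaryByResourceTypeAsync(request, CancellationToken.None);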
A simple least-recently-used cache, with a fixed maximum size. Note that an item's memory will not necessarily be freed if other code maintains a reference to it, but this class will "lose track" of it all the same. Without caution, this can lead to duplicate items in memory simultaneously.
In some cases - probably due to bad metadata on the repository, or to lingering doubts about the quality of that metadata (perhaps incorporating some sort of user comments/ratings in MRM per-artifact would help?) - we will have some users who feel it would be much easier to build their projects without transitive artifact resolution.
Native Packaging for iOS and Android
This guide describes how to package a Sencha Touch app to run natively on iOS or Android devices using the Sencha Touch Native Packager tool.
Native App Packaging General Procedures
The app packaging process is very much the same whether you target iOS or Android devices. The main difference is that each environment requires you to complete a different prerequisite. Additionally, some of the details of creating the config file differ between the two environments.
Here are the basic steps for app packaging:
- Provisioning - For iOS, complete iOS provisioning on the Apple iOS provisioning portal (requires an Apple ID and password), including certificates and devices set up through the provisioning portal and Xcode. Android provisioning requires that you obtain an Android-ready certificate (debug or release) to sign your application.
- Create config file - Create a packaging configuration file for use with the Native Packager.
- Package your app - Run the packager to create a packaged <application>.app file for iOS or an .apk file for Android.
Each of these steps is described in this guide.
Required Software
Before you begin, make sure your computer is running the following:
- JRE - Sencha Cmd is written in Java and requires Java Runtime Environment version 1.6 or 1.7 (1.7 is best)
- Sencha Cmd
- Ruby 1.9.3 (or earlier): Sencha Cmd does not work with Ruby 2.0. Ruby differs by OS:
- Windows: Download Ruby 1.9.3.n from rubyinstaller.org. Get the .exe file version of the software and install it.
- Mac OS: Ruby is pre-installed. You can test which version you have with the Ruby -v command. If you have version 2.0, download the Ruby version manager (rvm). Use this command to download and install Ruby: rvm install 1.9.3 --with-gcc=clang and set your PATH variable to point to the Ruby 1.9.3 install directory.
- Ubuntu: Use sudo apt-get install ruby1.9.3 to download Ruby 1.9.3.
- iOS Packaging: Apple Xcode
- Android Packaging: Android SDK Tools and Eclipse (optional).
Step 1: Provisioning
Provisioning differs between the two environments, as follows:
iOS: Refer to the Native iOS Provisioning Guide and use the Apple iOS provisioning portal (requires an Apple ID and password) to get a development or distribution certificate and profiles. Create an App ID and provision your application. You need your App ID and App Name to package your app. Refer to the How-To section in the Apple iOS provisioning portal for help.
Android: Use the Android SDK Keytool to create a certificate to sign your Android application. The following example uses the Keytool command to generate a private key:
$ keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000
For more information, see the Android Developers Guide Signing Your Applications.
Step 2: Install the packager
The packager is included with the sencha command, which is installed to the location you specify when you install Sencha Cmd.
Step 3: Create a packaging configuration file
Create a configuration file template by running the following command in the Terminal:
sencha package generate <configTemplate.json>
<configTemplate.json> is the name of the configuration file. It cannot contain any spaces.
The configuration file should have the following format. Parameters unique to iOS or Android are noted. Note that the parameters do not have to follow any particular order in an actual config file.
{
    "applicationName": "<AppName>",
    "applicationId": "<AppID>",
    "bundleSeedId": "<String>",                                (iOS only)
    "versionString": "<AppVersion>",
    "versionCode": "<BuildNumber>",                            (Android only)
    "icon": "<Object>",
    "inputPath": "<AppPackageInputPath>",
    "outputPath": "<AppPackageOutputPath>",
    "rawConfig": "<Info.plistKeys>",                           (iOS only)
    "configuration": "<Release | Debug>",
    "notificationConfiguration": "<Release | Debug>",          (iOS only; optional)
    "platform": "<iOSSimulator | iOS | Android | AndroidEmulator>",
    "deviceType": "<iPhone | iPad | Universal>",               (iOS only)
    "certificatePath": "<CertificateLocation>",
    "certificateAlias": "<CertificateAlias>",                  (Optional)
    "certificatePassword": "<Password>",                       (Optional)
    "permissions": "<ApplicationPermissions>",                 (Android only)
    "sdkPath": "<SDKLocation>",                                (Android only)
    "androidAPILevel": "<VersionNumber>",                      (Android only)
    "minOSVersion": "<VersionNumber>",                         (iOS only)
    "orientations": "<Direction>"
}
The rest of this section provides details about each parameter, noting environment-specific settings.
applicationName
The name of your application, which a device displays to the user after the app is installed.
iOS: The application name should match the name provided in the iOS Provisioning Portal (requires an Apple ID and password), in the App IDs section. Here's an example iOS App ID, showing both the name and the ID:
App ID
This example uses the following:
- AppName: Sencha Touch Packaging
- AppID: com.Sencha.TouchPackaging
Note the App ID is the same as the one you put in the Identifier field in Xcode.
Android: The output file will have the name <AppName>.apk.
applicationId
The ID for your app. It's suggested that you use a namespace for your app, such as com.sencha.TouchPackage, as shown in the example above. For iOS, this can also be found in the provisioning portal.
bundleSeedId (iOS only)
The ten character string in front of the iOS application ID obtained from the iOS Provisioning Portal (requires an Apple ID and password). In the example shown above under applicationName, it's H8A8ADYR7H.
versionString
This is the version number of your application. Usually it takes a string such as 1.0.
versionCode (Android only)
The build number of an Android app, also called the integer version code.
icon
The icon displayed to the user along with your app name. iOS uses 57, 72, 114 and 144 pixel icons. Android uses 36, 48 and 72 pixel icons. If you package for Android, you can ignore iOS icons and vice versa.
iOS: Specifies the icon file to be used for your application. A retina icon should be specified with @2x at the end of the icon name: a regular icon name looks like icon.png, while a retina icon looks like [email protected]. If a retina icon with the @2x.png suffix exists, the packager includes the retina icon. You should also specify the target device for the app, as follows:
"icon": {
    "36": "resources/icons/Icon_Android36.png",
    "48": "resources/icons/Icon_Android48.png",
    "57": "resources/icons/Icon.png",
    "72": "resources/icons/Icon~ipad.png",
    "114": "resources/icons/[email protected]",
    "144": "resources/icons/[email protected]"
}
Refer to the Apple documentation for specific information about icon sizes.
Android: Specifies the launcher icon file to be used for your application. Refer to the Android Launcher Icons guide for more information.
inputPath
This is the location of your Sencha Touch 2 application, relative to the configuration file.
outputPath
This is the output location of the packaged application, that is where the built application file will be saved.
rawConfig (iOS only)
"Raw" keys that can be included with the info.plist configuration for iOS apps. info.plist is the name of an information property list file, a structured text file with configuration information for a bundled executable. See Information Property List Files in the iOS Developer Library for more information.
configuration
Indicates whether you are building the debug or release configuration of your application.
Debug should be used unless you are submitting your app to an online store, in which case Release should be specified.
notificationConfiguration (iOS only)
Optional for apps that use push notifications.
Debug should be used unless you are submitting your app to an online store, in which case Release should be specified. If the app doesn't use push notifications, leave this blank or remove the parameter.
platform
Indicate the platform on which your application will run.
- iOS: Options are iOSSimulator or iOS.
- Android: Options are Android or AndroidEmulator.
deviceType (iOS only)
Indicates the iOS device type that your application will run on. Available options are:
- iPhone
- iPad
- Universal
certificatePath
This is the location of your certificate. This is required when you are developing for Android or you are developing on Windows.
certificateAlias (Optional)
Indicates the name of your certificate. If this is not specified when developing on OS X, the packaging tool automatically tries to find the certificate using the applicationId.
Can be just a simple matcher. For example, if your certificate name is "iPhone Developer: Robert Dougan (ABCDEFGHIJ)", you can just enter iPhone Developer.
Not required when using a certificatePath on Windows.
certificatePassword (Optional)
Use this only if a password was specified when generating the certificate for an Android release build (on OS X or Windows) or for any iOS build on Windows. Indicates the password set for the certificate. If no password was set, leave this blank or remove the parameter.
permissions (Android only)
Array of permissions to use services called from an Android app, including coarse location, fine location, information about networks, the camera, and so on. See the complete list of permissions for Android app services.
sdkPath (Android only)
Indicates the path to the Android SDK.
androidAPILevel (Android only)
Indicates the Android API level, the version of Android SDK to use. For more information, see What is API Level in the Android SDK documentation. Be sure to install the corresponding platform API in the Android SDK manager (android_sdk/tools/android).
minOSVersion (iOS only)
Indicates the lowest iOS version required for the app to run.
orientations
Indicates the device orientations in which the application can run. Options are:
- portrait
- landscapeLeft
- landscapeRight
- portraitUpsideDown
Note: If this is omitted, the default setting is all four orientations.
Sample iOS config file
The following is an example iOS config file.
{
    "applicationName": "SenchaAPI",
    "applicationId": "com.sencha.api",
    "outputPath": "~/stbuild/app/",
    "versionString": "1.2",
    "inputPath": "~/stbuild/webapp",
    "icon": {
        "57": "resources/icons/Icon.png",
        "72": "resources/icons/Icon~ipad.png",
        "114": "resources/icons/[email protected]",
        "144": "resources/icons/[email protected]"
    },
    "rawConfig": "<key>UIPrerenderedIcon</key><true/>",
    "configuration": "debug",
    "notificationConfiguration": "debug",
    "platform": "iOS",
    "deviceType": "iPhone",
    "certificatePath": "stbuild.keystore",
    "certificateAlias": "iPhone Developer",
    "certificatePassword": "stbuild",
    "minOSVersion": "4.0",
    "bundleSeedId": "KPXFEPZ6EF",
    "orientations": [
        "portrait",
        "landscapeLeft",
        "landscapeRight",
        "portraitUpsideDown"
    ]
}
Sample Android config file
The following is an example Android config file.
{
    "applicationName": "SenchaAPI",
    "applicationId": "com.sencha.api",
    "outputPath": "~/stbuild/app/",
    "versionString": "1.2",
    "versionCode": "12",
    "inputPath": "~/stbuild/webapp",
    "icon": {
        "36": "resources/icons/Icon_Android36.png",
        "48": "resources/icons/Icon_Android48.png",
        "57": "resources/icons/Icon.png",
        "72": "resources/icons/Icon~ipad.png",
        "114": "resources/icons/[email protected]",
        "144": "resources/icons/[email protected]"
    },
    "configuration": "debug",
    "platform": "android",
    "certificatePath": "stbuild.keystore",
    "certificateAlias": "Android Developer",
    "certificatePassword": "stbuild",
    "permissions": [
        "INTERNET",
        "ACCESS_NETWORK_STATE",
        "CAMERA",
        "VIBRATE",
        "ACCESS_FINE_LOCATION",
        "ACCESS_COARSE_LOCATION",
        "CALL_PHONE"
    ],
    "sdkPath": "/android_sdk-mac_86/",
    "androidAPILevel": "7",
    "orientations": [
        "portrait",
        "landscapeLeft",
        "landscapeRight",
        "portraitUpsideDown"
    ]
}
Step 4: Run the packager to create the packaged application
After creating the config file, the next step is to package the app. Here are the procedures for packaging both debug and release versions of an app for both iOS and Android.
iOS: Package a debug application
The appropriate platform and configuration settings need to be made in the config file, for example:
platform: iOSSimulator
configuration: Debug
If platform and configuration are not set, the packaged app will not run correctly.
With these configs set properly, issue the following command in Terminal:
sencha package run <configFile.json>
In this example, which targets the iOS Simulator in the platform config parameter, successful completion of the package command launches the iOS simulator with the application running natively. Note that the deviceType identifier -- iPhone or iPad -- has to be set properly to trigger the appropriate simulator.
iOS: Package a release application
To package a signed application to run on the device, issue the following command in the terminal:
sencha package <configFile.json>
Note that an <AppName>.app is created in the specified output location. This is the application that you can use to deploy to the iOS device.
Android: Package a debug application and run it on Android Emulator
The appropriate platform and configuration settings need to be made in the config file, for example:
platform: AndroidEmulator
configuration: Debug
If platform and configuration are not set, the packaged app will not run correctly.
With these configs set properly, start the Android Emulator and issue the following command:
sencha package run <configFile.json>
In this example, which targets the Android Emulator in the platform config parameter, successful completion of the package command launches the app in the already running emulator. If package is successful, an .apk is available in the application output location for you to manually test on an Android Emulator or a device.
More information about the Android Emulator can be found in Android Developer Guide: Using the Android Emulator.
Android: Package an application for distribution
To package a signed application to run on the device, issue the following command:
sencha package <configFile.json>
An <AppName>.apk is created in the specified output location. This is the application that you can use to release for distribution.
Additional Resources
iOS Resources
- Native iOS Provisioning
- Apple iOS provisioning portal (requires an Apple ID and password)
Android Resources
- Installing the ADT Plugin for Eclipse
- Eclipse
- Managing Virtual Devices for Android Emulator, "Setting up Virtual Devices".
Key Elements of the Language
SenseTalk in a Nutshell
A One Hour Introduction for Experienced Programmers and Inquiring Minds.
SenseTalk is a powerful, high level, and easy-to-read language. It is designed to be similar to English, avoiding cryptic symbols and rigid syntax when possible, in order to be both easy to read and to write. This section briefly describes many aspects of the language.
The information presented here is intended to provide a quick overview of most of the key elements of SenseTalk. Experienced scripters and programmers may find this is enough to get them started using SenseTalk. Other users may want to skim over this section to get a quick sense of what’s ahead before moving on to the more detailed explanations in following sections.
You may also find this section to be a convenient place to turn later when all you need is a quick reminder of how things work. For each topic, references are given to the section where more complete information may be found.
Scripts
A SenseTalk script is a series of command statements. When the script is run, each statement is executed in turn. Commands usually begin with a verb, and are each written on a separate line:
put 7 into days
multiply days by 4
put days -- 28
A script is often stored as a text file on your computer.
SenseTalk is not case-sensitive: commands can be written in uppercase, lowercase, or a mixture without changing the meaning:
Put 7 into DAYS
(See Script Structure and Control Flow)
Simple Values
In SenseTalk there are simple values:
5 -- number
sixty-four -- number expressed in words
"hello" -- text string (full international Unicode text allowed)
empty -- constant
0x7F -- hexadecimal number
<3FA64B> -- binary data
Text Blocks
Multi-line blocks of text may be enclosed in {{ and }}. This type of text block is particularly useful for dynamically setting the script of an object, or for defining a block of data:
set names to {{
Harry Potter
Hermione Granger
Ron Weasley
}}
Operators
Operators combine values into expressions. A full range of common (and some uncommon) operators is available. A put command with no destination displays the value of an expression:
put 3 + 2 * 7 -- 17
put five is less than two times three -- true
put "a" is in "Magnificent" -- true
put 7 is between 5 and 12 -- true
put "poems/autumn/InTheWoods" split by "/"
-- (poems,autumn,InTheWoods)
Parentheses can be used to group operations:
put ((3 + 2) * 7) is 35 -- true
(see Expressions)
Concatenation
Text strings can be joined (concatenated) using & or &&. The & operator joins strings directly; && joins them with an intervening space:
put "red" & "orange" -- "redorange"
put "Hello" && "World" -- "Hello World"
(see Expressions)
Value assignment
Values can be stored in containers. A variable is a simple container. Either the put into or set to command may be used to assign a value to a container:
put 12 into counter
set counter to 12
Variables are created automatically when used; they do not need to be declared first.
(see Containers)
The Put Command
The put command can also append values before or after the contents of a container:
put 123 into holder -- "123"
put "X" before holder -- "X123"
put "Y" after holder -- "X123Y"
(see Containers)
Typeless Language
SenseTalk is a typeless language. Variables can hold any type of value:
put 132 into bucket
-- bucket is holding a number
put "green cheese" into bucket
-- now bucket holds a text string
Values are converted automatically as needed:
put ("1" & "2") / 3 -- 4
(see Expressions)
Unquoted Strings
Any variable that has not yet had a value stored into it will evaluate to its name. In this way, they can be used as unquoted strings, which can be convenient.
put Bread && butter -- "Bread butter"
(see Containers)
Constants
Some words have predefined values other than their names. These are commonly called “constants” but SenseTalk tries to allow you maximum freedom to use any variable names you choose, so only a very few of these are truly constant; the others can be used as variables and have their values changed.
The actual constants include true, false, up, down, end, empty, and return:
put street1 & return & street2 & return & city into address
if line 2 of address is empty then delete line 2 of address
The predefined variables include numbers and special characters like quote and tab:
put 2*pi
-- 6.283185
add one to count
put "Edward" && quote & "Red" & quote && "Jones"
-- Edward "Red" Jones
put comma after word 2 of sentence
Comments can add descriptive information. Comments are ignored by SenseTalk when your script is running. The symbols -- (two dashes in a row), —(an em dash), or // (two slashes) mark the remainder of that line as a comment:
// this script adds two numbers and returns the sum
params a,b — this line declares names for the two parameters
return a+b // return the sum (that’s all there is to it!)
For longer (or shorter) comments you may enclose the comment in (* and *). This technique is sometimes used to temporarily turn off (or “comment out”) part of a script. These "block comments" can be nested:
(*
put "the total so far is : " & total -- check the values
put "the average is : " & total / count (* a nested comment *)
*)
(See Script Structure and Control Flow)
Chunk Expressions
A chunk expression can be used to specify part of a value:
put word 2 of "green cheese"
-- "cheese"
put item 3 of "one,two,three,four"
-- "three"
put lines 1 to 3 of bigText
-- the first 3 lines
put the first 3 lines of bigText
-- also the first 3 lines
put any character of "abcdefg"
-- one letter chosen at random
Negative numbers count back from the end of a value:
put item -3 of "one,two,three,four" -- "two"
put chars 2 to -2 of "abcdefg" -- "bcdef"
Chunks of containers are also containers (you can store into them):
put "green cheese" into bucket -- "green cheese"
put "blue" into word 1 of bucket -- "blue cheese"
put "ack" into chars 3 to 4 of word 1 of bucket
-- "black cheese"
(see Chunk Expressions)
Lists
You can create a list of values by merely listing the values in parentheses separated by commas:
(1,2,3)
("John",67, "555-1234", cityName)
("bread", "milk", "tofu")
Lists may include any type of value, including other lists:
("Mary", 19, ("555-6545", "555-0684"), "Boston")
Lists can be stored in containers:
put (2,3,5,7,11,13) into earlyPrimes
List Items
Items in a list can be accessed by number. The first item is number 1:
put item 1 of ("red", "green", "blue") -- "red"
put the third item of ("c","d","e","f","g") -- "e"
List items in a container are also containers (they can be stored into):
put (12,17,32) into numbers -- (12,17,32)
put 99 into item 5 of numbers -- (12,17,32,,99)
add 4 to item 2 of numbers -- (12,21,32,,99)
(see Chunk Expressions)
Combining Lists
Lists can be joined using &&&:
put ("red", "green", "blue") into colors
-- ("red", "green", "blue")
put (12,17,32) into numbers -- (12,17,32)
put colors &&& numbers into combinedList
-- ("red","green","blue",12,17,32)
To create a list of nested lists instead of combining into a single list, just use parentheses to create a new list:
put (colors,numbers) into nestedList
-- (("red","green","blue"), (12,17,32))
(see Expressions)
Property Lists
A simple object or property list is a collection of named values (called properties). Each value in an object is identified by its property name:
(size:12, color:blue)
(name:"John", age:67, phone:"555-1234", city:"Denver")
Objects can be stored in containers:
put (size:8, color:pink) into description
(see Values, Lists, and Property Lists)
Object Properties
Properties in an object can be accessed by name:
put property width of (shape:"oval", height:12, width:16) -- 16
New properties can be created simply by storing into them:
put "red" into property color of currentShape
(see Containers)
An object’s properties can be accessed in several different ways:
put (name:"Michael") into mike -- create an object
put a new object into property cat of mike
-- create a nested object
put "Fido" into the name of mike's cat
put mike's cat's name -- Fido
put mike.name -- Michael
In addition, an object can access its own properties (or those of an object it is helping) using “me” and “my”:
put the age of me
if my name begins with "s" then ...
(see Objects and Messages)
Properties are containers – their contents can be changed:
add one to my dog's age -- it's her birthday!
(see Containers, Lists, and Property Lists)
Nesting
Lists and objects can be nested in complex ways:
(size:12, colors:(blue,orange), vendor:(name:"Jackson Industries",phone:"555-4532"))
(see Lists and Property Lists)
Ranges
A range is an expression that uses "to" or ".." to specify a range of values. A range can be stored in a variable:
put 13 to 19 into teenRange
put teenRange -- 13 to 19
A range can be explicitly converted to a list:
put teenRange as list -- (13,14,15,16,17,18,19)
Or a range can simply be treated like a list:
put item 4 of teenRange -- 16
delete item 1 of teenRange -- teenRange is now a list
Iterators
A range, a list, or a custom iterator object can be used as an iterator to obtain a sequence of values one at a time:
put "Z" to "A" into reverseAlphabet
put reverseAlphabet.nextValue -- Z
put reverseAlphabet.nextValue -- Y
put reverseAlphabet.nextValue -- X
Each Expressions
An each expression can generate a list of values of interest:
set saying to "Beauty lies beyond the bounds of reason"
put each word of saying where each begins with "b"
-- (Beauty,beyond,bounds)
Operations can be applied to each value in an each expression
put the length of each word of saying
-- (6,4,6,3,6,2,6)
put uppercase of each word of saying where the length of each is 6
-- (BEAUTY,BEYOND,BOUNDS,REASON)
(see Each Expressions)
Repeat Blocks
Repeat blocks are used to repeat a sequence of command statements several times:
repeat 3 times
play "Glass"
wait one second
end repeat
Several types of repeat loops are available, including repeat while, repeat until, repeat with and repeat with each:
repeat with each item of (1,3,5,7,9)
put it & tab & it squared & tab & the square root of it
end repeat
(See Script Structure and Control Flow)
Conditionals
if / then / else constructs let scripts choose which commands to execute based on conditions:
if hoursWorked > 40 then calculateOvertime
if lastName comes before "Jones" then
put firstName && lastName & return after firstPageList
else
put firstName && lastName & return after secondPageList
end if
(See Script Structure and Control Flow)
Calling Another Script
A script can run another script by simply using the name of the other script as a command:
simplify -- run the simplify script
(See Script Structure and Control Flow)
Parameters
Parameters can be passed to the other script by listing them after the command name:
simplify 1,2,3
(See Script Structure and Control Flow)
Run Command
If a script's name contains spaces or special characters, or it is located in another directory, the run command can be used to run it:
run "more complex" -- run the "more complex" script
Parameters can also be passed using run:
run "lib/registerStudent" "Chris Jones","44-2516"
(see Working with Messages)
Handlers
A script may include handlers that define additional behaviors:
to handle earnWages hours, rate
add hours*rate to my grossWages
end earnWages
A script can call one of its handlers as a command:
earnWages 40, 12.75
A handler in another script can be called using the run command:
run ScoreKeeper's resetScores "east", "south"
(See Script Structure and Control Flow)
Try/Catch Blocks
A script can catch any exceptions that are thrown when errors occur during execution:
try -- begin trapping errors
riskyOperation -- do something that might raise an error
catch theException
-- put code here to recover from any errors
end try
(See Script Structure and Control Flow)
Exceptions
A script may also throw an exception. If an exception thrown by a script is not caught by a try block, script execution will be stopped.
throw "error name", "error description"
(See Script Structure and Control Flow)
Local and Global Properties
Local and global properties control various aspects of script operation. They can be treated as containers, and are accessed using the word "the" followed by the property name:
set the numberFormat to "0.00"
insert "Natural" after the timeInputFormat
(see Containers)
File Access
File contents can be accessed directly:
put file "/Users/mbg/travel.log" into travelData
Files can also be treated as containers, allowing data to be written to them or modified in place. If the file does not already exist, it will be created:
put updatedTravelData into file "/Users/mbg/travel.log"
add 1 to item 2 of line updateIndex of file "/Users/mbg/updates"
(see File and Folder Interaction)
Sorting
The sort command provides expressive and flexible sorting of items in a list or of text chunks in a container:
sort callsReceivedList
sort the lines of file "donors" descending
sort the items of inventory in numeric order by the quantity of each
(see Text and Data Manipulation)
Summary
The list of features and brief descriptions provided above give only an introductory summary of the language. SenseTalk offers many other capabilities not mentioned here, plus additional options for these features. The remaining sections of this manual describe all of SenseTalk’s features in detail.
This topic was last updated on June 08, 2017, at 09:22:15 AM.
Only spaces of which you are a member are displayed in the search results.
glVertexAttribBinding — associate a vertex attribute and a vertex buffer binding for a vertex array object
vaobj
Specifies the name of the vertex array object for
glVertexArrayAttribBinding.
attribindex
The index of the attribute to associate with a vertex buffer binding.
bindingindex
The index of the vertex buffer binding with which to associate the generic vertex attribute.
glVertexAttribBinding and glVertexArrayAttribBinding establish an association between the generic vertex attribute of a vertex array object whose index is given by attribindex, and a vertex buffer binding whose index is given by bindingindex. For glVertexAttribBinding, the vertex array object affected is the one currently bound. For glVertexArrayAttribBinding, vaobj is the name of the vertex array object.
attribindex must be less than the value of GL_MAX_VERTEX_ATTRIBS and bindingindex must be less than the value of GL_MAX_VERTEX_ATTRIB_BINDINGS.
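A minimal sketch of the separate attribute format workflow this call belongs to (the binding index, stride, and buffer object are illustrative; assumes an OpenGL 4.3+ context and a loader header such as <glad/glad.h> or <GL/glcorearb.h>):

/* vao and vbo are assumed to have been created earlier. */
void setup_position_attribute(GLuint vao, GLuint vbo)
{
    glBindVertexArray(vao);

    /* Attribute 0: three floats per vertex, starting at relative offset 0. */
    glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);

    /* Route attribute 0 through vertex buffer binding point 2. */
    glVertexAttribBinding(0, 2);

    /* Attach the buffer to binding point 2 with a stride of three floats. */
    glBindVertexBuffer(2, vbo, 0, 3 * sizeof(GLfloat));

    glEnableVertexAttribArray(0);
}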
GL_INVALID_OPERATION is generated by glVertexAttribBinding if no vertex array object is bound.
GL_INVALID_OPERATION is generated by glVertexArrayAttribBinding if vaobj is not the name of an existing vertex array object.
GL_INVALID_VALUE is generated if attribindex is greater than or equal to the value of GL_MAX_VERTEX_ATTRIBS.
GL_INVALID_VALUE is generated if bindingindex is greater than or equal to the value of GL_MAX_VERTEX_ATTRIB_BINDINGS.
Copyright © 2013-2014 Khronos Group. This material may be distributed subject to the terms and conditions set forth in the Open Publication License, v 1.0, 8 June 1999.
For .NET Core and PCL this operation is only available in asynchronous form. Please refer to BatchMeterUsageAsync.
Namespace: Amazon.AWSMarketplaceMetering
Assembly: AWSSDK.AWSMarketplaceMetering.dll
Version: 3.x.y.z
Container for the necessary parameters to execute the BatchMeterUsage service method.
.NET Framework:
Supported in: 4.5, 4.0, 3.5
Portable Class Library:
Supported in: Windows Store Apps
Supported in: Windows Phone 8.1
Supported in: Xamarin Android
Supported in: Xamarin iOS (Unified)
Supported in: Xamarin.Forms
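A minimal usage sketch for the synchronous form of BatchMeterUsage (the product code, customer identifier, and dimension name are placeholders):

using System;
using System.Collections.Generic;
using Amazon.AWSMarketplaceMetering;
using Amazon.AWSMarketplaceMetering.Model;

var client = new AmazonAWSMarketplaceMeteringClient();

var request = new BatchMeterUsageRequest
{
    ProductCode = "yourProductCode",
    UsageRecords = new List<UsageRecord>
    {
        new UsageRecord
        {
            Timestamp = DateTime.UtcNow,
            CustomerIdentifier = "customer-123",
            Dimension = "ReadUnits",
            Quantity = 10
        }
    }
};

var response = client.BatchMeterUsage(request);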
parallel. pg_loader-output.sql
Cache Server Configuration
You configure the cache server in two files: gemfire.properties for server system-level configuration and cache.xml for cache-level configuration.
The configuration of the caches is part of the application development process. See Cache Initialization File. (The cache-level configuration file is generally referred to as cache.xml, but you can use any name.)
Configuration File Locations
For the GemFire cache server, the gemfire.properties file is usually stored in the current working directory. For more information, see the User's Guide.
For the cache.xml cache configuration file, a native client looks for the path specified by the cache-xml-file attribute in gfcpp.properties (see Attributes in gfcpp.properties). If the cache.xml is not found, the process starts with an unconfigured cache.
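For example, a gfcpp.properties entry pointing the native client at a cache configuration file might look like this (the path is illustrative):

cache-xml-file=/opt/myapp/config/cache.xml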
Modifying Attributes Outside the gemfire.properties File
In addition to the gemfire.properties file, you can pass attributes to the cache server on the gfsh command line. These override any settings found in the gemfire.properties file when starting the cache server.
For more information, see Configuring a Cluster in the User's Guide.