PoolStringArray¶
A pooled array of Strings.
Description¶
An array specifically designed to hold Strings. Optimized for memory usage, does not fragment the memory.
Note: This type is passed by value and not by reference. This means that when mutating a class property of type PoolStringArray or mutating a PoolStringArray within an Array or Dictionary, changes will be lost:

var array = [PoolStringArray()]
array[0].push_back("hello")
print(array) # [[]] (empty PoolStringArray within an Array)

Instead, the entire PoolStringArray property must be reassigned with = for it to be changed:

var array = [PoolStringArray()]
var pool_array = array[0]
pool_array.push_back("hello")
array[0] = pool_array
print(array) # [[hello]] (PoolStringArray with 1 element inside an Array)
Tutorials¶
Methods¶
String at the given index.
Returns the number of elements in the array.
void sort ( )
Sorts the elements of the array in ascending order.
Configuration Data
When deploying the repository there are a number of configuration options that need to be set to ensure the system functions as intended. Some of these are required for minimal functionality, and others are used for customisation.
These options will need to be set in the system management of the repository.
Your Name » System management » Configuration » Configuration data
Set by selecting Edit on the right and adding the data in valid JSON.
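For illustration, the configuration data is a single JSON object of key-value pairs along the lines of the sketch below. The keys shown are placeholders rather than real Haplo option names; the sections below list the options that actually apply.

{
  "example:repository_name": "Research Repository",
  "example:default_licence": "CC-BY-4.0",
  "example:notification_email": "repository@example.org"
}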
Options
Required
Recommended
Other | https://docs.haplo.org/app/repository/setup/configuration-data | 2022-09-24T19:46:56 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.haplo.org |
Configure Extension
What settings are required for Hello Clever's Magento 2 extension?
Basic Settings
Login to Magento Admin and go to
Stores → Configuration → Sales → Payment Methods → Hello Clever
Click on Enabled > Yes to activate the extension on your site.
You can choose the API Environment: Production is for when you're ready to use the extension on your live store, and Sandbox is for testing the payment system.
Enter the required App ID and Secret key. To obtain these, you will need to generate them from the Hello Clever Merchant Dashboard.
Click the 'Test Connection' button to verify the connection works.
Click on 'Save Config' and Hello Clever should be ready to use. | https://docs.helloclever.co/docs/magento-configure/ | 2022-09-24T18:42:11 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.helloclever.co |
5.3.2. Tagging and Releasing Components¶
Warning
The intent of this section is to list the current state of the SIMP Team’s
release processes. Since these processes are constantly being improved and
automated, you can expect this section content to evolve as well and may be
best served by reading the version from the
master branch of the
simp-doc repository.
This section describes the release procedures for SIMP. The SIMP Team releases:
- Individual Puppet modules as tar files to PuppetForge
- Individual Puppet modules as signed RPMs to packagecloud and the SIMP Archive
- Ruby gems for building and testing to RubyGems.org
- SIMP system dependencies as signed RPMs to packagecloud and the SIMP Archive
- SIMP-system ISOs to simp-project.com
SIMP component releases listed above are based off of an official
GitHub release the SIMP Team has made to a corresponding SIMP GitHub
project. In the case of a SIMP ISO, the component release tag is
for the
simp-core project, which compiles existing, released
component RPMs and dependencies into an ISO.
Note
The SIMP ISO includes RPMs for Puppet modules that are not maintained by
SIMP. When a suitable signed RPM does not already exist for such a module
(e.g.,
kmod Puppet module maintained by
camptocamp), SIMP builds a
signed RPM for that project, using one of that project's GitHub release tags.
All modules provided by the SIMP Project are directly sourced from SIMP-controlled repository forks. We do not pull directly from upstream sources.
fortinet.fortios.fortios_ips_decoder module – Configure IPS decoder in Fortinet's FortiOS and FortiGate.
New in version 2.0.0 of fortinet.fortios
Synopsis
This module is able to configure a FortiGate or FortiOS (FOS) device by allowing the user to set and modify the ips feature and decoder category.

Example task:

fortios_ips_decoder:
    vdom: "{{ vdom }}"
    state: "present"
    access_token: "<your_own_value>"
    ips_decoder:
        name: "default_name_3"
        parameter:
            - name: "default_name_5"
              value: "<your_own_value>"
Return Values
Common return values are documented here; the following are the fields unique to this module:
Collection links
Issue Tracker Homepage Repository (Sources)
MetaBiobank is a catalogue of information describing biological sample collections stored in various biobanks. MetaBiobank is for you if:
- you are a researcher looking for biological samples of interest for your studies, or
- you are a biorepository administrator who wants to spread information about your biobank's resources.
Our service provides sample collection metadata which can help you decide whether certain samples are of interest to you. Our aim is to provide a simple way to search for interesting biological samples (for researchers) while also providing an additional channel to spread information about biobank resources (for biorepository administrators).
See the svlogd documentation for more information about the files it generates.
Logrotate
/etc/gitlab/gitlab.rb:
# Below are some of the default settings
logging['logrotate_frequency'] = "daily" # rotate logs daily
logging['logrotate_maxsize'] = nil # logs will be rotated when they grow bigger than size specified for `maxsize`, even before the specified time interval (daily, weekly, monthly, or yearly)
JSON logging
Structured logs can be exported via JSON to be parsed by Elasticsearch, Splunk, or another log management system.
There are a couple of tools configured for developing AppScale Tools with Python: nose and pylint. Nose is a test runner that is invoked using the nosetests command. Nose will automatically find and run any tests it can find. Initial tests have been placed in the tests directory.
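As a quick illustration, a nose test is just a module and function whose names start with test, placed under the tests directory; nosetests finds and runs it automatically. The file and helper below are hypothetical, not part of the actual AppScale Tools test suite.

# tests/test_version.py -- hypothetical example discovered by nosetests
def parse_version(version_string):
    """Toy helper used only for this example."""
    return tuple(int(part) for part in version_string.split("."))


def test_parse_version():
    assert parse_version("1.2.3") == (1, 2, 3)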
Pylint is a “linter”, which helps keep the code free of syntax and formatting convention errors. Before pushing code upstream, make sure to run pylint in order to catch errors and keep code consistent.
Pylint has been re-configured with the following changes from the default:
- the messages in the report include their respective IDs. They are useful for configuring pylint.
- the max line width was changed from 80 to 120.
- the string module is not in the deprecated list because there are useful functions in there not anywhere else.
- there is a lint package in the project’s root that contains a module that defines a helper module for defining objects in the sh module that are used in the code, but are dynamic so pylint, by default, counts them as errors.
To make it easier to run there is a make target: pylint. Run it like so to lint the appscaletools package:
make pylint
Or to run it on another file, like bin/appscale-bootstrap:
make pylint f=bin/appscale-tools
User goals and user goal models¶
CAIRIS supports the specification, modelling, and validation of user goal models. These models are based on a subset of the Goal-oriented Requirements Language (GRL): a language for modelling intentional relationships between goals.
There are several reasons why you might find working with user goals useful.
- Expressing persona data using user goals can help elicit intentional relationships that support or refute aspects of a persona’s behaviour.
- Agent-oriented goal modelling language are popular in Requirements Engineering, making a user goal model a potential vehicle for interchange between RE methods, techniques, and tools.
- By exploring the way that user goals contribute to other user goals, it is possible to identify new requirements, threats, or vulnerabilities resulting from goals that are satisfied or denied.
User goals represent the intentional desires of actors, where actors are personas. Three types of user goals can be specified in CAIRIS:
- [Hard] goals are user goals that can be measurably satisfied.
- Soft goals are user goals with less well-defined success criteria that can be satisficed.
- Beliefs capture perceptions or opinions that are important to the actor.
User goal models can be generated in CAIRIS or, alternatively, can be exported to jUCMNav.
Adding, updating, and deleting user goals¶
Before you can create a user goal, you first need to create a document reference. If document references represent the factoids upon which a persona is based, a user goal is this factoid expressed in intentional terms.
- To create a user goal, click on the UX/User Goals menu to open the user goals table, and click on the Add button to open the user goal form.
- Enter the name of the user goal. Because they express intentions, user goals should follow the naming convention of "goal [be] achieved", e.g. "AJS task captured".
- Select the persona associated with this user goal.
- Indicate whether the user goal is a belief, [hard] goal, or soft goal.
- Select the Reference grounding the user goal. If this is a document reference, select Document as the element type and select the document reference name from the combo box. If the user goal is based on a persona characteristic, select Persona as the element type. To help you phrase the user goal, details of the document reference or persona characteristic are displayed.
- If you wish, you can override the calculated satisfaction score with an initial satisfaction value. The available values are Satisfied (100), Weakly Satisfied (50), None (0), Weakly Denied (-50), Denied (-100).
- If you wish to associate the user goal with a KAOS goal, click on the Add button in the System Goal table to select the goal.
- Click on the Create button to add a new user goal.
- Existing user goals can be modified by clicking on the user goal in the User Goals table, making the necessary changes, and clicking on the Update button.
- To delete a user goal, select the user goal to delete in the User Goals table, and click on the Delete button.
Adding, updating, and deleting user goal contributions¶
- To create a user goal contribution, click on the UX/User Goal Contributions menu to open the user goal contributions table, and click on the Add button to open the user goal contribution form.
- Depending on the reference grounding the source user goal, select Document or Persona as the source type, and select the source user goal name.
- Depending on the reference grounding the destination user goal, select Document or Persona as the destination type, and select the destination user goal name.
- Indicate whether the source user goal is the means or the end of the user goal contribution link.
- Select the strength of the contribution link. The options available are Make (100), SomePositive (50), Help (25), Hurt (-25), SomeNegative (-50), and Break (-100).
- Click on the Create button button to add a new user goal contribution.
- Existing contribution links can be modified by clicking on the contribution in the user goal contributions table, making the necessary changes, and clicking on the Update button.
- To delete a user goal contribution, select the contribution link to delete in the user goal contributions table, and click on the Delete button.
Task contributions¶
In addition to adding an initial satisfaction level for user goals, you can also set the contribution level that a task has on one or more user goals.
- To add such a contribution link, in the appropriate task, click on the Add button in the User Goal Contribution table in the Concerns folder for the appropriate task environment.
- From the Task Contribution dialog box, select the user goal concerned with the task in this environment and click Ok. The task-goal contribution link will be added when the task is created or updated.
Adding User goal elements to persona characteristics¶
User goals can be associated with persona characteristics, and their supporting grounds, warrants, or rebuttals. User goals drawn from persona characteristics are implicitly linked with user goals associated with these grounds/warrants/rebuttal elements, so adding user goals while working with persona characteristics is a good way of initially specifying user goal models.
- To add these User goal elements, open the persona characteristic you want to update, and click on the User Goal Elements folder.
- Select the Element type for the user goal. This can be either a belief, goal, soft goal, or task (tasks are relevant only if exporting to jUCMNav).
- Enter a user goal that expresses the characteristic in intentional terms.
- For each appropriate grounds, warrant, and rebuttal reference, click on the reference to open the characteristic reference dialog.
- Express the ground, warrant, or rebuttal reference in intentional terms.
- Select the element type for this synopsis; this can be a belief, goal, or soft goal.
- Given the intentional relationship between this element and the belief, goal, softgoal, or task associated with the persona characteristic, indicate whether this element is a means for achieving the characteristic element’s end by selecting Means in the Means/End combo box. Alternatively, if the characteristic’s element is a means for achieving this user goal element end then select End.
- Use the Contribution box to indicate how much this reference contributes to achieving its means or end. Possible values are Make (100), SomePositive (50), Help (25), Hurt (-25), SomeNegative (-50), and Break (-100).
- Click on the Save button to update the persona characteristic, and close the dialog.
- Click on the Update button on the persona characteristic form to save the persona characteristic.
Viewing a user goal model¶
To view the user goal model, click on the Models/User Goal model. Like other models, clicking on model nodes provides more details on the user goal or task.
Working with workbooks¶
CAIRIS can generate an Excel workbook for capturing user goals and contribution links from persona characteristics. To create a workbook, select the System/Export menu, click on the User goals (Workbook) radio button, enter the spreadsheet file to be created, and click on the Export button.
Note
If you have server access, you can also run the cairis/bin/ug2wb.py script, indicating the user account, database, and name of the XLSX file to be generated, i.e.
./ug2wb.py --user test --database default RickGoals.xlsx.
The generated Excel workbook (which is compatible with LibreOffice), contains UserGoal and UserContribution worksheets. Edited cells for both sheets are coloured green.
The UserGoal worksheet is pre-populated with read-only data on the persona characteristic or document reference name, its description, the persona it is associated with, and an indicator to whether the reference corresponds to a persona [characteristic] or document reference. When completing the worksheet, you should indicate the intentional elements associated with the persona characteristics or document references providing their grounds, warrants, or rebuttals. You should also indicate the element type (goal, softgoal, or belief), and - if you wish - the initial satisfaction level using the dropdown lists provided. When generating a CAIRIS model, new user goals will only be created if cell values for each row are complete.
The source and destination cells in the ContributionsSheet are pre-populated once user goals have been added in the UserGoal sheet, so you only need to ensure the means/end and contribution links are set. When generating a CAIRIS model, contribution links will only be created if both Source AND Destination values have been set, i.e. their associated user goals have been defined.
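The import rules above can also be checked locally before re-importing the workbook. The sketch below uses the openpyxl library; the worksheet name and the assumption that the first two columns hold Source and Destination are illustrative rather than the exact CAIRIS layout.

# Hypothetical pre-import check for the contributions worksheet.
from openpyxl import load_workbook

wb = load_workbook("RickGoals.xlsx")
ws = wb["ContributionsSheet"]

for row in ws.iter_rows(min_row=2):  # skip the header row
    source, destination = row[0].value, row[1].value
    if not source or not destination:
        print("Row %d: incomplete contribution link; it will be skipped on import" % row[0].row)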
To re-import the completed workbook back to CAIRIS, select the System/Import menu, select User goals (Workbook) from the dropdown box, select the workbook to be imported, and click on the Import button.
Note
If you have server access, you can also run the cairis/bin/wb2ug.py script, indicating the name of the XLSX file to be imported and the name of the CAIRIS model file to be created, i.e.
./wb2ug.py --xlsx RickGoals.xlsx RickGoals.xml. The resulting model can be imported into CAIRIS, but take care not to overwrite existing data.
Getting started with AWS Network Manager for Transit Gateway networks
The following tasks help you become familiar with AWS Network Manager. For more information about how Network Manager works, see How AWS Network Manager works.
In this example, you create a global network and register your transit gateway with the global network. You can also define and associate your on-premises network resources with the global network.
Tasks
Prerequisites
Before you begin, ensure that you have a transit gateway with attachments in your account or in any account within your organization. For more information, see Getting Started with Transit Gateways.
The transit gateway can be in the same AWS account as the global network or in a different AWS account within the organization.
Step 1: Create a global network
Create a global network as a container for your transit gateway.
To create a global network
Open the Network Manager console at
.
Choose Get started.
In the navigation pane, choose Global networks.
Choose Create global network.
Enter a name and description for the global network, and choose Create global network.
Step 2: Register your transit gateway
Register a transit gateway in your global network.
To register the transit gateway
Access the Network Manager console at
.
Choose Get started.
On the Global networks page, choose the global network ID.
In the navigation pane, choose Transit gateways, and then choose Register transit gateway.
From the Select account dropdown list, choose the account that you want to register the transit gateway from.
A list of transit gateways from that account appear in the Select transit gateway to register section.
Select one or more transit gateways from the list, and then choose Register transit gateway.
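If you prefer to script these two steps instead of using the console, the same operations are available through the Network Manager API. The boto3 sketch below is illustrative; the description text and the transit gateway ARN are placeholders to replace with your own values.

# Sketch: create a global network and register an existing transit gateway with it.
import boto3

# Network Manager APIs are served from the us-west-2 home Region.
nm = boto3.client("networkmanager", region_name="us-west-2")

# Step 1: create the global network container.
response = nm.create_global_network(Description="My global network")
global_network_id = response["GlobalNetwork"]["GlobalNetworkId"]

# Step 2: register a transit gateway (replace the ARN with your transit gateway's ARN).
nm.register_transit_gateway(
    GlobalNetworkId=global_network_id,
    TransitGatewayArn="arn:aws:ec2:us-east-1:123456789012:transit-gateway/tgw-0123456789abcdef0",
)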
Step 3: (Optional) Define and associate your on-premises network resources
You can define your on-premises network by creating sites, links, and devices to represent objects in your network. For more information, see the following procedures:
-
-
You associate the device with a specific site, and with one or more links. For more information, see Associate a device.
On your transit gateway, you can do the following:
Create a Site-to-Site VPN connection attachment. For more information, see Customer gateway associations.
Create a transit gateway Connect attachment, and then associate the Connect peer with the device. For more information, see Transit Gateway Connect peer associations.
You can also work with one of our Partners in the AWS Partner Network (APN) to
provision and connect your on-premises network. For more information, see
AWS Network Manager
Step 4: (Optional) Enable multi-account access
Enable multi-account access to register transit gateways from multiple accounts, allowing you to view and manage transit gateways and associated resources from those registered accounts in your global network. Onboarding to AWS Organizations is a prerequisite for enabling multi-account access for Network Manager.
Create your organization using AWS Organizations.
If you've already done this skip this step. For more information on creating an organization using AWS Organizations, see Creating and managing an organization in the AWS Organizations User Guide.
Enable multi-account on the Network Manager console.
This enables trusted access for Network Manager and allows for registering delegated administrators. For more information about enabling trusted access and registering delegated administrators, see Multi-account.
Create your global network.
For more information on creating a global network, see Create a global network.
Register transit gateways.
With multi-account enabled, you can register transit gateways from multiple accounts to your global network. For more information about registering transit gateways, see Transit gateway registrations.
Step 5: View and monitor your global network
The Network Manager console provides a dashboard for you to view and monitor both your transit gateway network objects in your global network.
To access the dashboard for your global network
Access the Network Manager console at
.
Choose Get started.
On the Global networks page, choose the global network ID.
The Overview page provides an inventory of the objects in your global network for your transit gateway network. For more information about the pages in the dashboard, see Visualize transit gateway networks.
AnimatedSprite¶
Inherits: Node2D < CanvasItem < Node < Object
Sprite node that contains multiple textures as frames to play for animation.
Description¶
AnimatedSprite is similar to the Sprite node, except it carries multiple textures as animation frames. Animations are created using a SpriteFrames resource, which allows you to import image files (or a folder containing said files) to provide the animation frames for the sprite. The SpriteFrames resource can be configured in the editor via the SpriteFrames bottom panel.
Note: You can associate a set of normal or specular maps by creating additional SpriteFrames resources with a _normal or _specular suffix. For example, having 3 SpriteFrames resources run, run_normal, and run_specular will make it so the run animation uses normal and specular maps.
Allows you the option to load, edit, clear, make unique and save the states of the SpriteFrames resource.
The texture's drawing offset.
If
true, the animation is currently playing.
The animation speed is multiplied by this value.
Method Descriptions¶
Plays the animation named anim. If no anim is provided, the current animation is played. If backwards is true, the animation will be played in reverse.
void stop ( )
Stops the current animation (does not reset the frame counter).
Image
This widget can be used to show one single image based on a URL or Base64 encoded image and mimeType. The url in the model is leading.
Model using Base64 encoded image
{ "type": "image", "mimeType": "image/png", "base64": "" }
The following MIME types are considered safe for use on web pages and supported by modern web browsers:
Update Behavior
The update logic of the widget accepts a payload which is a string or a JSON object. In case the payload is a string, the widget checks whether the string is a JSON string or a base64 string. In case the string is base64 encoded, the base64 field in the model will be updated. In case the payload is a JSON string or JSON object, the values of the fields url, base64 and mimeType will be used to change the fields in the model accordingly. In case the payload input is neither a base64 encoded nor a JSON string, the value will be used to update the url in the model.
Category
Type Trait Functions
Syntax
bool __has_trivial_copy_constructor ( typename T )
Returns true if and only if T has a trivial copy constructor.
Error if T is an incomplete type.
False (but well-formed) if a class type does not have a default constructor
The definition (from Section 20.4.4.3 of the Working Draft) notes that a type T has a trivial copy constructor if it is in the list:
A copy constructor for class X is trivial if:
C++0x interaction: false if the default constructor is defined as deleted.
C++0x interaction with default function definitions.
MoveBy lets you specify how many rows forward or back to move the cursor in a dataset. Movement is relative to the current record at the time that MoveBy is called. MoveBy also sets the BOF and EOF properties for the dataset as appropriate.
This function takes an integer parameter, the number of records to move. Positive integers indicate a forward move and negative integers indicate a backward move.
The following code moves two records backward in CustTable:
CustTable.MoveBy(-2);
CustTable->MoveBy(-2);
# Copyright (C) 2005-2008 Splunk Inc. All Rights Reserved. Version 3.0
#
# This file contains possible attributes and values you can use to configure transform
# and event signing in transforms.conf.
#
# There is a transforms.conf in $SPLUNK_HOME/etc/system/default/. To set custom configurations,
# place a transforms.conf $SPLUNK_HOME/etc/system/local/. For examples, see transforms.conf.example.
# You can enable configurations changes made to transforms.conf by typing the following search string
# in Splunk Web:
#
# | extract reload=T
#
# To learn more about configuration files (including precedence) please see the documentation
# located at.

[<unique_stanza_name>]
* Name your stanza. Use this name when configuring props.conf. For example, in a props.conf stanza, enter TRANSFORMS-<value> = <unique_stanza_name>.
* Follow this stanza name with any number of the following attribute/value pairs.
* If you do not specify an entry for each attribute, Splunk uses the default value.

REGEX = <regular expression>
* Enter a regular expression to operate on>

DELIMS = <quoted string list>
* Set delimiter characters to separate data into key-value pairs, and then to separate key from value.
* NOTE: Delimiters must be quoted with " " (to escape, use \).
* Usually, two sets of delimiter characters must be specified: The first to extract key/value pairs. The second to separate the key from the value.
* If you enter only one set of delimiter characters, then the extracted tokens: Are named with names from FIELDS, if FIELDS are entered (below). OR even tokens are used as field names while odd tokens become field values.
* Consecutive delimiter characters are consumed except when a list of field names is specified.

FIELDS = <quoted string list>
* List the names of the field values extracted using DELIMS.
* NOTE: If field names contain spaces or commas they must be quoted with " " (to escape, use \).
* Defaults to "".

#######
# KEYS:
#######
* NOTE: Keys are case-sensitive. Use the following keys exactly as they appear.
_raw : The raw text of the event.
_done : If set to any string, this is::"
* NOTE: Any KEY prefixed by '_' is not indexed by Splunk, in general.
transforms.conf.example
# Copyright (C) 2005-2008 Splunk Inc. All Rights Reserved. Version 3
# This example extracts key-value pairs which are separated by '|'
# while the key is delimited from value by '='.

[pipe_eq]
DELIMS = "|", "="

# This example extracts key-value pairs which are separated by '|'
# while the key is delimited from value.
Adding creatives
Add creatives so that you can use them when you create ads for line items. Each creativeThe media asset associated with an ad, such as an image or video file. represents a media asset that you may use in your ads.
To add a creative:
Click the Orders tab.
Click the Creatives subtab and then click Create Creative.
This opens the Create Creative screen where you need to specify details for the new creative.
In the Name field, type in a name for the creative. This name must be unique within the instance of OpenX.
In the Ad Type list, specify the type of creative that you are setting up. Select one of the following:
File. This creative uses an image file.
HTML. This creative uses HTML code.
Specify the file location or the HTML code for the creative.
For a file, choose one of the following:
Local. If the file is local to your system, browse to the location of the file. The maximum allowable file size is 10 MB, but smaller file sizes results in faster delivery.
Remote. If the creative file is remote, type in the URL for the location of the file.
For HTML, specify the following details:
(Optional) Specify the following details for the creative:
External IdentifierA free-form reference ID. For example, "Debbie's Account.". Type in an ID that references an external system you have integrated with OpenX.
Notes. Type in any additional information about the creative.
Click Create. This saves the creative and updates the list.
As a COD member we are always on the lookout for ways to help you elevate your practice.
One of these is forming partnerships with companies that share the same Chiropractic vision as we do. When we form a partnership one request that we ask is if the company would be willing to give some of our members a gift.
Some companies give away t-shirts, pens or other assorted trinkets…..
But, our friends over at MeyerDC have exceeded all of our expectations.
Here’s the deal: When you go to MeyerDC and place your next order use the coupon code COD2016
As a COD Member this code will give you 15% off your entire order and Free ground shipping (excluding tables, Vitamix and Tempurpedic).
Pretty awesome right?
We hope this token of appreciation is well received within our community and we thank MeyerDC for stepping up and helping serve the Chiropractic community. | https://circleofdocs.com/meyerdc-partners-with-circle-of-docs-and-gives-exclusive-benefits/ | 2017-09-19T20:50:40 | CC-MAIN-2017-39 | 1505818686034.31 | [array(['https://circleofdocs.com/wp-content/uploads/2016/04/Meyer15-696x426.jpg',
'Meyer15'], dtype=object) ] | circleofdocs.com |
Introduction
Rug is the backbone of Atomist’s features. Atomist is about automating away all the distractions from writing and operating great software. Rug provides the tools and infrastructure to make complete automation a reality. The model that underpins Rug helps you automate common tasks and react to changes in your development ecosystem.
The Rug ecosystem includes a programming model, runtime, test runner, and package manager. The Rug programming model is expressed as TypeScript module interfaces and classes. The Rug runtime runs as a service, accessible from any Slack that has invited the Atomist Bot (try it in Atomist Community Slack). There’s also the Rug CLI for local use, essential for Rug development.
Rug is a medium for code that modifies code. Rug helps developers automate development. When a coding task is common, tedious, nit picky, or hard to remember how to do correctly, there is value in encoding how it’s done, instead of performing the typing every time. Not only does the automation reduce mistakes, it serves as documentation for the process.
Automating your development tasks¶
It is said that good developers are lazy and like to automate their work. However, the tools to drive that automation have been somewhat sparse and crude when we consider the sheer complexity of projects nowadays.
Triggering an automated response¶
In some cases, a response to a system event should be automated so the team can focus on the things that require human attention.
In Rug, this is achieved through event handlers.
Triggering a human decision¶
Automation is fantastic but humans are the sole judges. Atomist gives you the power to implement new skills that can be triggered by a team member at when needed.
In Rug, this is achieved through command handlers.
Creating new projects¶
In a world of rapidly evolving software, creating new projects has become a task performed much more often than in the past. Meanwhile, the complexity of projects has grown dramatically with configuration required for logging, CI, dependency management…
It appears clear that automating the generation of projects is a prime for any team willing to move fast but with repeatable quality.
In Rug, this is achieved through generators.
Editing projects¶
Automating the creation of projects is a great step forward but it cannot stop there. There are tasks that are repeated on a daily basis and doing them manually can be error prone, not to mention rather boring. Let’s not forget that code quickly becomes legacy that nobody knows really about any longer.
Automating those changes is an asset for any developer who wishes to focus on delivering great software without wasting time in mundane tasks.
In Rug, this is achieved through editors.
Examples¶
What does using Rug in your team look like in practice? Here are just a few examples.
- Helping technical leads to guide development teams in best practices on various technologies from initial project creation through to the full lifecycle of a project
- Safely applying and evaluating new technologies to existing projects
- Helping open source project owners to guide their users on how to start out with, and continuously update and evolve, the software based on their work.
- Helping to apply best-practice tools and techniques from the microservices toolbox
In recent years, the DevOps trend has shown us that concerns about software does not stop once it has been delivered. Software exists thanks to those who designed and developed it but thrives thanks to those who operate it. At Atomist, we believe those two sides live in the same world and more must be done to unite them. Atomist brings everything together through event-driven development.
A common setup today is as follows:
- A project’s source code lives in GitHub
- A project is automatically built and tested in a CI service
- A project is usually automatically delivered in a forge somewhere
- A project may even be deployed automatically in an environment
- A project is then operated, monitored, and cared for in that environment for users to enjoy
- Issues are created
During all those phases, a massive number of events were triggered: a commit was pushed, a build succeeded or failed, the project was deployed, the service failed in production…
Atomist believes that all these events bring the team members together as one. However, not all events may be relevant to a team at a given time. Moreover, it seems appropriate to think that we should also automate the response to some of those events. This is why the Rug programming model has a holistic view of development and operation, allowing automation or user intervention at every step.
bool OEMDLPerceiveBondStereo(OEMolBase &mol)
Assigns wedge and hash bonds to a connection table from the OEChem stereochemistry of each atom. This function requires that the molecule have 2D coordinates. See example in Figure: Example of using OEMDLPerceiveBondStereo
Example of using OEMDLPerceiveBondStereo: before (A) and after (B) calling the OEMDLPerceiveBondStereo function
Note
This function is the opposite of the OEMDLStereoFromBondStereo function.
See also
Note
The OEMDLPerceiveBondStereo function preserves wavy bonds (OEBondStereo_Wavy). These can only be removed by calling the OEMDLClearBondStereo function.
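A minimal usage sketch with the OEChem Python toolkit follows. It assumes a licensed OpenEye installation and input that already carries 2D coordinates and OEChem atom stereochemistry; the file names are placeholders.

# Sketch: re-derive MDL wedge/hash bond annotations from perceived atom stereochemistry.
from openeye import oechem

ifs = oechem.oemolistream("input_2d.mol")
ofs = oechem.oemolostream("output_2d.mol")

mol = oechem.OEGraphMol()
while oechem.OEReadMolecule(ifs, mol):
    # Requires 2D coordinates; assigns wedge/hash bonds from atom stereochemistry.
    oechem.OEMDLPerceiveBondStereo(mol)
    oechem.OEWriteMolecule(ofs, mol)

ifs.close()
ofs.close()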
Link Management¶
The LLCP Link Management component is responsible for link activation, and deactivation, keep the appearance of a symmetric communication link, as well as frame aggregation and disaggregation.
Link Activation¶
The link activation is started when the local Medium Access Layer
(MAC) notifies the LLC that a peer device capable of executing LLCP
has come into communication range. Presently, the only MAC layer
defined is the NFC Data Exchange Protocol (DEP). In NFC-DEP, a device
capable of executing LLCP is discovered if the magic octet sequence
46666Dh is received during MAC activation.
The LLCP specification defines that a Parameter Exchange (PAX) PDU is send and received during activation. However, if the MAC layer is NFC-DEP, then the LLC Parameters that would go into the PAX PDU are transmitted as part of the NFC-DEP activation procedure and the PAX PDU is actually forbidden.
LLC Parameters received are commitments of capabilities of the sending party, not subjected to negotiation. The most important, and strictly required, parameter is the Version Number to indicate the major and minor protocol version supported. Both sides then independently determine the protocol version to run by using the lower minor version if the major versions are identical. If the major versions differ then the more advanced implementation may decide if it can fall back to the <major>.<minor> version of the peer device.
Interoperability Test Requirement
The interoperability test scenarios require that NFC devices implement at least LLCP Version 1.1.
The Maximum Information Unit Extension (MIUX) parameter indicates the maximum number of information octets that the implementation is able to receive within a single LLC PDU. The guaranteed MIU is 128 octets and the MIUX parameter only encodes the number of additional octets that are acceptable. The sum of 128 and MIUX is the Link MIU. It is highly recommended that implementations provide a Link MIU of more than 128 octets (thus send an MIUX parameter). If an implementation can afford the memory, the largest possible Link MIU of 2176 octets should be used. If a device is short on memory it should at least use a Link MIU of 248 octets, to allow full utilization of an NFC-DEP information packet.
Interoperability Test Requirement
The interoperability test scenarios require that NFC devices have a Link MIU of 248 octets or more.
The Well-Known Service List (WKS) parameter informs the peer device of the well-known service access points that are active on the device and for which LLC PDUs will be accepted. It does, however, not imply that other well-known services would not become available after link activation, for example on a platform with on-demand service activation. The main purpose of the WKS parameter is to reduce the amount of service discovery or blind communication attempts.
Interoperability Test Requirement
The interoperability test scenarios require that NFC devices send a WKS parameter during link activation.
The Link Timeout (LTO) parameter announces the maximum time the LLC may ever need from receiving to returning an LLC PDU. Said differently, a local LLC can safely assume that if, after sending an LLC PDU, the remote device's Link Timeout expires before an LLC PDU is received, communication will no longer be possible. If an LTO parameter is not received during link activation, a default value of 100 milliseconds is applied. The largest possible Link Timeout value is 2550 milliseconds, which is not an ideal upper bound on the time it takes to let the user know that communication has ended.
Interoperability Test Requirement
The interoperability test scenarios require that if an LTO parameter is received, the resulting remote Link Timeout value does not exceed 1000 milliseconds.
The Option (OPT) parameter communicates the link service class supported by the sending LLC. The link service class indicates the supported transport types: connection-less, connection-oriented, or both. If the OPT parameter is not received during link activation (or the link service class set to zero), the local LLC may behave as if both connection-less and connection-oriented transport type are supported.
Interoperability Test Requirement
The interoperability test scenarios require that NFC devices support both connection-less and connection-oriented transport type and send an OPT parameter during link activation.
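The arithmetic behind these parameters is simple enough to capture in a few lines. The sketch below derives the effective link configuration from received parameter values; the function and field names are illustrative, while the 10 millisecond LTO step, the WKS bit map (bit n corresponds to service access point n), and the OPT link service class bits follow the LLCP specification and the values quoted in this section.

# Sketch: derive the effective LLCP link configuration from received LLC Parameters.
def link_configuration(miux=None, lto=None, wks=None, opt=None):
    return {
        # Link MIU is the guaranteed 128 octets plus the announced extension (if any).
        "link_miu": 128 + (miux or 0),
        # LTO is encoded in multiples of 10 ms; without it the default is 100 ms.
        "link_timeout_ms": (lto * 10) if lto is not None else 100,
        # WKS is a bit map of well-known service access points (bit n <=> SAP n).
        "well_known_saps": [sap for sap in range(16) if wks and (wks >> sap) & 1],
        # OPT bits 0 and 1 announce connection-less and connection-oriented support.
        "connection_less": bool(opt and opt & 0x01),
        "connection_oriented": bool(opt and opt & 0x02),
    }


# Example: MIUX of 120 gives a 248 octet Link MIU; LTO of 100 gives a 1000 ms Link Timeout.
print(link_configuration(miux=120, lto=100, wks=0b10011, opt=0x03))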
- Enable near field communication between the Device In Testmode and the Device Under Test and receive the LLC Parameters.
- Verify that the VERSION parameter is received and announces a major version of 1 and a minor version of 1 or higher.
- Verify that the MIUX parameter is received with an MIU extension value of 120 or more octets (resulting in a remote Link MIU of 248 or more octets).
- Verify that the WKS parameter is received and announces presence of well-known services at service access point addresses 0, 1, and 4.
Verify that if the LTO parameter is received its value does not exceed 100 (resulting in a remote Link Timeout value of no more than 1000 milliseconds).
- Verify that the OPT parameter is received and indicates support for both connection-less and connection-oriented transport type communication.
Link Deactivation¶
The most usual way of Peer-To-Peer link termination between two NFC devices is a communication timeout due to both devices moved out of near field communication range. Nevertheless, a device may for whatever reason wish to terminate the link while communication would still be possible. The LLCP specification call this intentional link deactivation and allows a local link management component to send a Disconnect (DISC) PDU to the remote link management component. No further PDUs are then to be exchanged between the two LLCs (buffered transmissions may still be send by a MAC layer but not propagate to the LLC). Note that unlike termination of a data link connection the link management component receiving a DISC PDU will not return a Disconnected Mode (DM) PDU.
- Perform Link Activation
- Send a Disconnect (DISC) PDU with source service access point address 0 to the remote link management component at the destination service access point address 0.
- Verify that the Device Under Test does not send any further LLC PDU.
Link Symmetry¶
The LLC layer allows service users to run symmetrical communication on top a master-slave communication style MAC layer such as NFC-DEP. To applications or protocols on top of LLCP this means that service data units can be send or received at any point in time, independent of the time when the other device would eventually ask for or answer a transmission.
To achieve symmetrical communication both link management components observe the flow of outbound PDUs and send, if no other PDU is available, a Symmetry (SYMM) PDU as a substitute. The time until a SYMM PDU is sent as a substitute is critical for performance and the appearance of symmetrical communication. Generally it should be as short as possible, but if an implementation expects other PDUs to become available within a short amount of time it may well increase performance if that PDU is sent a few milliseconds later instead of delaying it until a next PDU is received from the remote LLC.
Interoperability Test Requirement
The interoperability test scenarios require that NFC devices send a SYMM PDU no later than 10 milliseconds after a PDU was received and no other PDU became available for sending.
Sometimes a concern exists that if only SYMM PDUs are exchanged with short delays it does negatively affect power consumption for no useful information exchange (apart from the fact that two devices are still in proximity which could as well regarded useful information). Without debating that concern, a viable way to reduce the exchange of only SYMM PDUs is to observe when a specific number of SYMM PDUs has been the only exchange between the two LLCs, and then increase the time between receiving and returning a SYMM PDU. Any other PDU sent or received would then restore the original conditions.
Interoperability Test Requirement
The interoperability test scenarios require that NFC devices do not increase the time between receiving and sending a SYMM PDU before at least a consecutive sequence of 10 SYMM PDUs has been received and send (5 per direction).
- Perform Link Activation
- Verify for at least 5 seconds that Symmetry (SYMM) or other PDUs are received within the time limits of the remote Link Timeout.
- Verify that the average time between an outbound and the next inbound PDU does not exceed 10 milliseconds until a sequence of 10 consecutive SYMM PDUs are sent and received (5 per direction).
- Perform Link Deactivation
Aggregation¶
Frame aggregation allows an LLC to send more than one PDU in a single transmission using Aggregated Frame (AGF) PDUs. As LLCP allows multiple conversations at the same time this does almost always significantly increase data throughput and decrease transaction delays for all communications running across the LLCP Link. It is thus highly recommended that NFC Devices implement and use frame aggregation whenever possible.
Interoperability Test Requirement
The interoperability test scenarios require that NFC devices implement and use frame aggregation.
Disaggregating AGF PDUs is mandatory for any LLCP implementation. When disaggregating, embedded PDUs are to be processed in the order they appear within the AGF PDU and treated as if they were received individually in that order.
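The framing itself is simple: each encapsulated PDU inside an AGF PDU is preceded by a two-octet length field. Below is a minimal disaggregation sketch; it assumes the AGF PDU header has already been stripped and only the information field is passed in.

# Sketch: split the information field of an AGF PDU into the encapsulated PDUs.
import struct


def disaggregate(agf_payload):
    """Yield each encapsulated PDU, in order, from an AGF information field."""
    offset = 0
    while offset < len(agf_payload):
        (length,) = struct.unpack_from(">H", agf_payload, offset)
        offset += 2
        yield agf_payload[offset:offset + length]
        offset += length


# Example: two PDUs of 3 and 2 octets packed into one AGF information field.
payload = b"\x00\x03abc" + b"\x00\x02de"
print(list(disaggregate(payload)))  # [b'abc', b'de']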
- Perform Link Activation
- Send two CONNECT PDUs with different source service access point addresses and the destination service access point address 0, aggregated into a single AGF PDU. Both CONNECT PDUs shall not contain a Service Name (SN) parameter, so they are not treated as a request to resolve and connect by service name.
- Verify that the Device Under Test returns a Disconnected Mode (DM) PDU to each of the service access points that sent a CONNECT PDU aggregated within a single AGF PDU.
- Perform Link Deactivation
Products in Murano
As is discussed in the Data Flow article, Murano internally represents real-world IoT components as the concepts of Connected Products, Applications, and Integrations.
This article has information about Products in Murano.
The following topics are related and can be reviewed for more information:
- Hardware Ports (supported hardware & libraries)
- Gateway Software
- The Device Service
- Connected Product lifecycle management
- View and record Product metrics
- Using my Product in my Application
- Murano Applications
- Murano Integrations
To make “connecting/interacting to a remote Device” a scalable effort for businesses, Murano uses the idea of a Device type (or Device model)—known as a Product—to define how Devices belonging to the Product interact. Products, therefore, have multiple Devices associated with them (one Product, many Devices). Once the behavior of the general Product is defined, it is easy to start “stamping out” thousands of Devices that will interact as required by the Product to which they belong. The representation and capabilities in Murano of a given Device are often referred to by the industry as a “Digital Twin,” a “Device Shadow,” or a “Virtual Device.”
Because there are so many possible Products variants (ranging from a simple sensor to a highly-capable gateway—each having varieties of processing, communications, storage, data representation, and control capabilities), each Murano Product allows the configuration of its capabilities. Every Device interacts with Murano as defined by its Product’s configuration of these areas:
- Communications authentication method
- Communications protocol
- Device identities/whitelist pool
- Expected communications behavior
- Expected data schema
- Firmware/content
- Time-series storage
- Key-value storage
- Cloud-assisted processing
- User/Value-Added-Reseller access/control permissions
- Automated external Integrations
It is also often the case that a Product will be used to power a variety of End User applications (Applications). For example, a Connected Pump Product may have some specific Pumps that are used in an container truck filling/dispensing application, while other Pumps (belonging to the same Product) are used in a maintenance facility fluid transfer application. In Murano, a Product is able to be used in a variety of Applications without changes to the underlying Devices – even when the Applications are created and maintained by business partners or customers.
Up Next
To get started with Murano Products, try the Device Sensor Quickstart or dive in deeper with the HVAC Tutorial.
WCCP Service Configuration¶
The service definition file is used by traffic_wccp and traffic_server directly.
The elements in the service definition file are inspired by the WCCP RFC (8/2012). There is also an older version of the RFC that shows up commonly in search results, WCCP (4/2001), which was the RFC reference used in the original WCCP support for Traffic Server several years ago.
A sample service group file is included in the source tree under traffic_wccp.
Security Section¶
In the security section, you can define a password shared between the WCCP Client and the WCCP router. This password is used to encrypt the WCCP traffic. It is optional, but highly recommended.
Attributes in this section
- option - Must be set to MD5 if you want to encrypt your WCCP traffic
- key – The same password that you set with the associated WCCP router.
Services Section¶
In the services section you can define a list of service groups. Each top level entry is a separate service group.
Service group attributes include
- name – A name for the service. Not used in the rest of the WCCP processing.
- description – A description of the service. Again, not used in the rest of the WCCP processing.
- id - The security group ID. It must match the service group ID that has been defined on the associated WCCP router. This is the true service group identifier from the WCCP perspective.
- type – This defines the type of service group either “STANDARD” or “DYNAMIC”. There is one standard defined service group, HTTP with the id of 0. The 4/2001 RFC indicates that id’s 0-50 are reserved for well known service groups. But more recent 8/2012 RFC indicates that values 0 through 254 are valid service id’s for dynamic services. To avoid differences with older WCCP routers, you probably want to avoid dynamic service ID’s 0 through 50.
- priority – This is a value from 0 to 255. The higher number is a higher priority. Well known (STANDARD) services are set to a value of 240. If there are multiple service groups that could match a given packet, the higher priority service group is applied. For example, you have service group 100 defined for packets with destination port 80, and service group 101 defined for packets with source port 1024. For a packet with destination port set to 80 and source port set to 1024, the priorities of the service groups would need to be compared to determine which service group applies.
- protocol – This is IP protocol number that should match. Generally this is set to 6 (TCP) or 17 (UDP).
- assignment – WCCP supports multiple WCCP clients supporting a single service group. However, the current WCCP client implementation in Traffic Server assumes there is only a single WCCP client, and so creates assignment tables that will direct all traffic to that WCCP client. The assignment type is either hash or mask, and if it is not set, it defaults to hash. If Traffic Server ever supports more than one cache, it will likely only support a balanced hash assignment. The mask/value assignment seems to be better suited to situations where the traffic needs to be more strongly controlled.
- primary-hash – This is the element of the packet that is used to compute the primary key. The value options are src_ip, dst_ip, src_port, or dst_port. This entry is a list, so multiple values can be specified. In that case, all the specified packet attributes will be used to compute the hash bucket. In the current implementation, the primary hash value does not matter, since the client always generates a hash table that directs all matching traffic to it. But if multiple clients are ever supported, knowledge of the local traffic distribution could be used to pick a packet attribute that will better spread traffic over the WCCP clients.
- alt-hash – The protocol supports a two level hash. This attribute is a list with the same value options as for primary-hash. Again, since the current Traffic Server implementation only creates assignment tables to a single client, specifying the alt-hash values does nothing.
- ports – This is a list of port values. Up to 8 port values may be included in a service group definition.
- port-type – This attribute can have the value of src or dst. If not specified, it defaults to dst. It indicates whether the port values should be interpreted as source ports or destination ports.
- forward – This is a list. The list of the values can be GRE or L2. This advertises how the client wants to process WCCP packets. GRE means that the packets will be delivered in a GRE tunnel. This is the default. L2 means that the client is on the same network and can get traffic delivered to it from the router by L2 routing (MAC addresses).
- return – The WCCP protocol allows a WCCP client to decline a packet and return it back to the router. The current WCCP client implementation never does this. The value options are the same as for the forward attribute.
- routers – This is the list of router addresses the WCCP client communicates with. The WCCP protocols allows for multiple WCCP routers to be involved in a service group. The multiple router scenario has at most been lightly tested in the Traffic Server implementation.
- proc-name – This attribute is only used by traffic_wccp. It is not used in the traffic_server WCCP support. This is the path to a process' PID file. The service group is advertised to the WCCP router if the process identified in the PID file is currently operational.
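Several of the attributes above have hard numeric limits (service IDs and priorities are 0 to 255, and at most 8 ports per service group). The validation sketch below simply mirrors those limits; the dictionary keys reuse the attribute names listed above and are not a prescribed file format.

# Sketch: sanity-check a service group definition against the limits described above.
def validate_service_group(group):
    errors = []
    if group.get("type") not in ("STANDARD", "DYNAMIC"):
        errors.append("type must be STANDARD or DYNAMIC")
    if not 0 <= group.get("id", -1) <= 255:
        errors.append("id must be between 0 and 255")
    if not 0 <= group.get("priority", -1) <= 255:
        errors.append("priority must be between 0 and 255")
    if len(group.get("ports", [])) > 8:
        errors.append("at most 8 port values may be listed")
    if group.get("port-type", "dst") not in ("src", "dst"):
        errors.append("port-type must be src or dst")
    return errors


example = {
    "name": "web-cache",
    "type": "DYNAMIC",
    "id": 100,
    "priority": 240,
    "protocol": 6,
    "ports": [80, 8080],
    "port-type": "dst",
}
print(validate_service_group(example) or "service group definition looks valid")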
CAMAC is physically interfaced to the VME via either the CES CBD 8210 branch highway driver, or the Wiener VC32/CC32 board set. This chapter describes a mechanism for doing single CAMAC operations on a CAMAC crate. This mechanism can be incorporated into shell scripts to do low level control.
shell command support for CAMAC operations is provided in the form of three Tcl scripts:
cesbcnaf - performs a single camac operation via a CES CBD8210
wienerebcnaf - performs a single CAMAC oepration via a VC32/CC32 board set.
bcnaf - performs a single CAMAC command allowing you to choose which interface is to be used.
To make use of the command above, you should add the bin subdirectory of your NSCLDAQ installation to your path. This is done by adding the following to your .bashrc:
Where
DAQROOT is the top level directory of the NSCLDAQ
installation you are using (e.g. /usr/opt/daq/10.0). | http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.2/c6888.html | 2017-09-19T20:47:04 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.nscl.msu.edu |
Logs and troubleshootingEstimated reading time: 15 minutes
Here is information about how to diagnose and troubleshoot problems, send logs and communicate with the Docker for Mac team, use our forums and Knowledge Hub, browse and log issues on GitHub, and find workarounds for known problems.
Docker Knowledge Hub
Looking for help with Docker for Mac? Check out the Docker Knowledge Hub for knowledge base articles, FAQs, and technical support for various subscription levels.
Diagnose problems, send feedback, and create GitHub issues
If you encounter problems for which you do not find solutions in this documentation, Docker for Mac issues on GitHub already filed by other users, or on the Docker for Mac forum, we can help you troubleshoot the log data.
Choose
–>
Diagnose & Feedback from the menu bar.
You can choose to run diagnostics only, or diagnose and send the results to the Docker Team:
- Diagnose Only - Runs diagnostics, and shows results locally. (Results are not sent to Docker, and no ID is generated.)
- Diagnose & Upload - Runs diagnostics, shows results, and auto-uploads the diagnostic results to Docker. A diagnostic ID is auto-generated. You can refer to this ID when communicating with the Docker Team. Optionally, you can open an issue on GitHub using the uploaded results and ID as a basis.
If you click Open Issues, this opens Docker for Mac issues on GitHub in your web browser in a “create new issue” template prepopulated with the following:
ID and summary of the diagnostic you just ran
System and version details
Sections where you can fill in a description of expected and actual behavior, and steps to reproduce the issue
You can also create a new issue directly on GitHub at. (The README for the repository is here.)
Click New Issue on that page (or right here ☺) to get a “create new issue” template prepopulated with sections for the ID and summary of your diagnostics, system and version details, description of expected and actual behavior, and steps to reproduce the issue.
Checking the logs
In addition to using the diagnose and feedback option to submit logs, you can browse the logs yourself.
Use the command line to view logs
To view Docker for Mac logs at the command line, type this command in a terminal window or your favorite shell.
$ syslog -k Sender Docker
Alternatively, you can send the output of this command to a file. The following
command redirects the log output to a file called
my_docker_logs.txt.
$ syslog -k Sender Docker > ~/Desktop/my_docker_logs.txt
Use the Mac Console for log queries
Macs provide a built-in log viewer. You can use the Mac Console System Log Query to check Docker app logs.
The Console lives on your Mac hard drive in
Applications >
Utilities. You
can bring it up quickly by just searching for it with Spotlight Search.
To find all Docker app log messages, do the following.
From the Console menu, choose File > New System Log Query…
- Name your search (for example
Docker)
- Set the Sender to Docker
Click OK to run the log query.
You can use the Console Log Query to search logs, filter the results in various ways, and create reports.
For example, you could construct a search for log messages sent by Docker that
contain the word
hypervisor then filter the results by time (earlier, later,
now).
The diagnostics and usage information to the left of the results provide auto-generated reports on packages.
Troubleshooting
Make sure certificates are set up correctly
Docker for Mac will ignore certificates listed under insecure registries, and
will not send client certificates to them. Commands like
docker run that
attempt to pull from the registry will produce Adding TLS certificates in the Getting Started topic.
Docker for Mac will not start if Mac user account and home folder are renamed after installing the app
If, after installing Docker for Mac, you change the name of your macOS user account and home folder, Docker for Mac will fail to start. To solve this problem, uninstall then reinstall Docker for Mac under the new user account.
See also, the discussion on the issue docker/for-mac#1209 and Do I need to reinstall Docker for Mac if I change the name of my macOS account? in the FAQs.
Volume mounting requires file sharing for any project directories outside of
/Users
If you are using mounted volumes and get runtime errors indicating an application file is not found, a volume mount is denied, or a service cannot start (e.g., with Docker Compose), you might need to enable file sharing.
Volume mounting requires shared drives for projects that live outside of the
/Users directory. Go to
–> Preferences –> File sharing and share the drive that
contains the Dockerfile and volume. Mac Mac before attempting to start containers.
Try to start the container again:
$ docker start old-container old-container
Incompatible CPU detected
Docker for Mac requires a processor (CPU) that supports virtualization and, more specifically, the Apple Hypervisor framework. Docker for Mac is only compatible with Macs that have a CPU that supports the Hypervisor framework. Most Macs built in 2010 and later support it, as described in the Apple Hypervisor Framework documentation about supported hardware:
Generally, machines with an Intel VT-x feature set that includes Extended Page Tables (EPT) and Unrestricted Mode are supported.
To check if your Mac supports the Hypervisor framework, run this command in a terminal window.
sysctl kern.hv_support
If your Mac supports the Hypervisor Framework,
the command will print
kern.hv_support: 1.
If not, the command will print
kern.hv_support: 0.
See also, Hypervisor Framework Reference in the Apple documentation, and Docker for Mac system requirements in What to know before you install.
Workarounds for common problems
IPv6 workaround to auto-filter DNS addresses - IPv6 is not yet supported on Docker for Mac, which typically manifests as a network timeout when running
dockercommands that need access to external network servers (e.g.,
docker pull busybox).
$ docker pull busybox Using default tag: latest Pulling repository docker.io/library/busybox Network timed out while trying to connect to. You may want to check your internet connection or if you are behind a proxy.
Starting with v1.12.1, 2016-09016 on the stable channel, and Beta 24 on the beta channel, a workaround is provided that auto-filters out the IPv6 addresses in DNS server lists and enables successful network accesss. For example,
2001:4860:4860::8888would become
8.8.8.8. So, the only workaround action needed for users is to upgrade to Docker for Mac stable v1.12.1 or newer, or Beta 24 or newer.
On releases with the workaround included to filter out / truncate IPv6 addresses from the DNS list, the above command should run properly:
$ docker pull busybox Using default tag: latest latest: Pulling from library/busybox Digest: sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6 Status: Image is up to date for busy box:latest
To learn more, see these issues on GitHub and Docker for Mac forums:
If Docker for Mac fails to install or start properly:
Make sure you quit Docker for Mac before installing a new version of the application (
–> Quit Docker). Otherwise, you will get an “application in use” error when you try to copy the new app from the
.dmgto
/Applications.
Restart your Mac to stop / discard any vestige of the daemon running from the previously installed version.
Run the uninstall commands from the menu.
If
dockercommands aren’t working properly or as expected:
Make sure you are not using the legacy Docker Machine environment in your shell or command window. You do not need
DOCKER_HOSTset, so unset it as it may be pointing at another Docker (e.g. VirtualBox). If you use bash,
unset ${!DOCKER_*}will unset existing
DOCKERenvironment variables you have set.
For other shells, unset each environment variable individually as described in Setting up to run Docker for Mac in Docker for Mac vs. Docker Toolbox.
- Note that network connections will fail if the macOS Firewall is set to “Block all incoming connections”. You can enable the firewall, but
bootpdmust be allowed incoming connections so that the VM can get an IP address.
- For the
hello-world-nginxexample, Docker for Mac must be running in order to get to the webserver on. Make sure that the Docker whale is showing in the menu bar, and that you run the Docker commands in a shell that is connected to the Docker for Mac Engine (not Engine from Toolbox). Otherwise, you might start the webserver container but get a “web page not available” error when you go to
localhost. For more on distinguishing between the two environments, see Docker for Mac vs. Docker Toolbox.
If you see errors like
Bind for 0.0.0.0:8080 failed: port is already allocatedor
listen tcp:0.0.0.0:8080: bind: address is already in use:
These errors are often caused by some other software on the Mac using those ports.
Run
lsof -i tcp:8080to discover the name and pid of the other process and decide whether to shut the other process down, or to use a different port in your docker app.
Known issues
- IPv6 is not yet supported on Docker for Mac. If you are using IPv6, and haven’t upgraded to Beta 24 or v1.12.1 stable or newer, you will see a network timeout when you run
dockercommands that need access to external network servers. The aforementioned releases include a workaround for this because Docker for Mac does not yet support IPv6. See “IPv6 workaround to auto-filter DNS addresses” in Workarounds for common problems.
- You might encounter errors when using
docker-compose upwith Docker for Mac (
ValueError: Extra Data). We’ve identified this is likely related to data and/or events being passed all at once rather than one by one, so sometimes the data comes back as 2+ objects concatenated and causes an error.
- Force-ejecting the
.dmgafter running
Docker.appfrom it results in an unresponsive whale in the menu bar, Docker tasks “not responding” in activity monitor, helper processes running, and supporting technologies consuming large percentages of CPU. Please reboot, and then re-start Docker for Mac. If needed,
force quitany Docker related applications as part of the reboot.
- Docker does not auto-start on login even when it is enabled in
–> Preferences. This is related to a set of issues with Docker helper, registration, and versioning.
- Docker for Mac uses the
HyperKithypervisor () in macOS 10.10 Yosemite and higher. If you are developing with tools that have conflicts with
HyperKit, such as Intel Hardware Accelerated Execution Manager (HAXM), the current workaround is not to run them at the same time. You can pause
HyperKitby quitting Docker for Mac temporarily while you work with HAXM. This will allow you to continue work with the other tools and prevent
HyperKitfrom interfering.
If you are working with applications like Apache Maven that expect settings for
DOCKER_HOSTand
DOCKER_CERT_PATHenvironment variables, specify these to connect to Docker instances through Unix sockets. For example:
export DOCKER_HOST=unix:///var/run/docker.sock
docker-compose1.7.1 performs DNS unnecessary lookups for
localunixsocket.localwhich can take 5s to timeout on some networks. If
docker-composecommands seem very slow but seem to speed up when the network is disabled (e.g. when disconnected from wifi), try appending
127.0.0.1 localunixsocket.localto the file
/etc/hosts. Alternatively you could create a plain-text TCP proxy on localhost:1234 using:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock -p 127.0.0.1:1234:1234 bobrik/socat TCP-LISTEN:1234,fork UNIX-CONNECT:/var/run/docker.sock and then `export DOCKER_HOST=tcp://localhost:1234`.. Applications that behave in this way include:
rake
ember build
- Symfony
- Magento
- Zend Framework
- PHP applications that use Composer to install dependencies in a
vendorfolder
As a work-around for this behavior, you can put vendor or third-party library directories in Docker volumes, perform temporary file system operations outside of
osxfsmounts, and use third-party tools like Unison or
rsyncto synchronize between container directories and bind-mounted directories. We are actively working on
osxfsperformance using a number of different techniques. To learn more, please see the topic on Performance issues, solutions, and roadmap.
If your system does not have access to an NTP server, then after a hibernate the time seen by Docker for Mac may be considerably out of sync with the host. Furthermore, the time may slowly drift out of sync during use. To manually reset the time after hibernation, run:
docker run --rm --privileged alpine hwclock -s
Or, to resolve both issues, you can add the local clock as a low-priority (high stratum) fallback NTP time source for the host. To do this, edit the host’s
/etc/ntp-restrict.confto add:
server 127.127.1.1 # LCL, local clock fudge 127.127.1.1 stratum 12 # increase stratum
Then restart the NTP service with:
sudo launchctl unload /System/Library/LaunchDaemons/org.ntp.ntpd.plist sudo launchctl load /System/Library/LaunchDaemons/org.ntp.ntpd.plist | https://docs.docker.com/docker-for-mac/troubleshoot/ | 2017-09-19T20:48:29 | CC-MAIN-2017-39 | 1505818686034.31 | [array(['/docker-for-mac/images/settings-diagnose.png',
'Diagnose problems'], dtype=object)
array(['/docker-for-mac/images/settings-diagnostic-results-only.png',
'Diagnostic results only'], dtype=object)
array(['/docker-for-mac/images/settings-diagnose-id.png',
'Diagnostics & Feedback'], dtype=object)
array(['/docker-for-mac/images/diagnose-issue.png',
'Create issue on GitHub'], dtype=object)
array(['/docker-for-mac/images/diagnose-d4mac-issues-template.png',
'issue template'], dtype=object)
array(['/docker-for-mac/images/console_logs.png',
'Mac Console display of Docker app search results'], dtype=object)] | docs.docker.com |
Contributing to documentation¶
You see that build script in the docs folder ? Don’t use it.
That is, unless you have followed the instructions on how to compile the JavaScript documentation and placed the sdoc_toolkit-2.4.0 in a folder named ~/UbuntuOne/SDKs.
I might give the build script some attention some day and make it more useful, but for now I have other priorities.
Writing documentation¶
You can find documentation in three places in this project:
- In Python docstring
- In JavaScript comments of the editlive widgets
- And finally in the docs/ folder
So if you submit a pull request, it’s quite easy to update the documentation for the code you are submitting. You just have to comment the code properly.
Building the JavaScript documentation¶
This is only needed if changes have been made to a JavaScript file.
Installing requirements¶
Using Ubuntu One is really not a requirement, just a convenience for me.
mkdir -p ~/Ubuntu\ One/SDKs/ && cd ~/Ubuntu\ One/SDKs/ wget unzip jsdoc_toolkit-2.4.0.zip cd jsdoc_toolkit-2.4.0
Compiling docs¶
cd django-editlive/ java -jar ~/Ubuntu\ One/SDKs/jsdoc_toolkit-2.4.0/jsdoc-toolkit/jsrun.jar \ ~/Ubuntu\ One/SDKs/jsdoc_toolkit-2.4.0/jsdoc-toolkit/app/run.js ./ \ --template=_themes/jsdoc-for-sphinx -x=js,jsx --directory=./jsdoc | http://django-editlive.readthedocs.io/en/latest/topics/develop/documentation.html | 2017-09-19T20:25:24 | CC-MAIN-2017-39 | 1505818686034.31 | [] | django-editlive.readthedocs.io |
#include <Exception.h> class
CException{
CException(const char* pszAction);
CException(const std::string& rsAction);
virtual const const char* ReasonText();
virtual const Int_t ReasonCode();
const const char* WasDoing();};
This class is the abstract base class of the exception class hierarchy.
The class hierarchy provides textual exception descriptions (that can be displayed
to a user), as well as support for a numerical code that can be easily processed
by software. The textual description is composed by the
ReasonText function in an exception specific way. Usually
the resulting message includes a description of the message along with context
information held by this base class and retrieved via
WasDoing.
CException(const char* pszAction);
CException(const std::string& rsAction);
Both of the constructors described above save their parameter as the context specific
part of the description. This string can be retrieved via
WasDoing. Normally the string should give some indication
of what the program was doing when the exception was thrown.
virtual const const char* ReasonText();
A virtual function that is supposed to return the reason the exception was thrown. For the base class this is the string Unspecified Exception.
virtual const Int_t ReasonCode ();
Returns a number that corresponds to the exception reason within a specific
class of exceptions (e.g. for a
CErrnoException this would
be the value of
errno at the time the exception was constructed).
For the base class this will always be -1.
const const char* WasDoing();
Returns the context string that was passed to our constructor. This will normally
be called by a derived class's
ReasonText function when
constructing the text. | http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.2/r45760.html | 2017-09-19T20:48:31 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.nscl.msu.edu |
Robots.txt file or subdomain: /bin/ Disallow: /cache/ Disallow: /cli/ Disallow: /components/ Disallow: /includes/ Disallow: /installation/ Disallow: /language/ Disallow: /layouts/ Disallow: /libraries/ Disallow: /logs/ Disallow: /modules/ Disallow: /plugins/ Disallow: /tmp/
Robot exclusion
You can exclude directories or block robots from your site adding Disallow rule | https://docs.joomla.org/Robots.txt_file | 2017-01-16T12:57:31 | CC-MAIN-2017-04 | 1484560279176.20 | [] | docs.joomla.org |
Changes are automatically saved when you exit the editor via the OK button.
Access
Explore the ways you can reach the VA Snippet editor, and which editor you should use.
Language
Visual Assist maintains separate VA Snippets for separate programming languages. All VA Snippets can be accessed when the editor is open.
Each time you open the VA Snippet editor, be sure you are in the context of the programming language you expect. It is easy to create or modify a VA Snippet for the wrong programming language.
When opening the VA Snippet editor from the context menu of the text editor, the context in the editor is set to the programming language of the active document.
You must recreate, or duplicate and drag-and-drop, VA Snippets you want to use in multiple languages.
Toolbar
The first four buttons in the toolbar of the editor operate on the level of VA Snippet, and are operative only when focus is in the tree. The last button, Insert Reserved String, is operative only when focus is in the code field.
Toolbar buttons are available for:
- New, to create an empty VA Snippet
- Save, to save changes since the last save or open
- Duplicate, to make a copy of the current VA Snippet
- Delete, to remove the current VA Snippet
- Insert Reserved String, to open a menu of reserved strings if focus is the Code field of the current VA Snippet
Drag and Drop
Although order has no impact on use, you can rearrange VA Snippets by dragging and dropping them in the editor.
As in the following example, you can drop a VA Snippet to move it below a highlighted one.
You can drag VA Snippets to move them from one programming language to another.
There is no way to automatically sort VA Snippets.
Modified Flag
VA Snippets that have been modified since the editor was opened appear red.
Refactoring Titles
The titles of the VA Snippets required by the refactoring and code-generation commands of Visual Assist begin with "Refactoring". Visual Assist finds these VA Snippets by title. You may change the code associated with these VA Snippets, but do not change their titles. If you change a title, Visual Assist will recreate the respective default.
You may drag-and-drop the refactoring VA Snippets without effect.
You may create a new VA Snippet with "Refactoring" in the title, but its presence will not extend the refactoring capabilities of Visual Assist.
SuggestionsForType Titles
The titles of the VA Snippets used by the Smart Suggestions feature of Visual Assist begin with "SuggestionForType". Visual Assist finds these VA Snippets by title. You may change the initial values—the "code"—associated with each entry, but a change in title may render the VA Snippet inoperative.
Context Menus
Custom context menus are available in the tree
and in the code field.
| http://docs.wholetomato.com/default.asp?W261 | 2017-01-16T12:51:19 | CC-MAIN-2017-04 | 1484560279176.20 | [array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=30156&sFileName=editorIntro.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31513&sFileName=editorLanguages.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31507&sFileName=editorDuplicate.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31509&sFileName=editorDrag.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31510&sFileName=editorModified.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31511&sFileName=editorContextMenu.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31512&sFileName=editorContextMenuCode.png',
None], dtype=object) ] | docs.wholetomato.com |
Connect Windows computers to Log Analytics
This article shows the steps to connect the Windows computers in your on-premises infrastructure directly to OMS workspaces by using a customized version of the Microsoft Monitoring Agent (MMA). You need to install and connect agents for all of the computers that you want to onboard to OMS in order for them to send data to OMS and to view and act on that data in the OMS portal. Each agent can report to multiple workspaces.
You can install agents using Setup, command line, or with Desired State Configuration (DSC) in Azure Automation.
Note
For virtual machines running in Azure you can simplify installation by using the virtual machine extension.
On computers with Internet connectivity, the agent will use the connection to the Internet to send data to OMS. For computers that do not have Internet connectivity, you can use a proxy or the OMS Gateway.
Connecting your Windows computers to OMS is straightforward using 3 simple steps:
- Download the agent setup file from the OMS portal
- Install the agent using the method you choose
- Configure the agent or add additional workspaces, if necessary
The following diagram shows the relationship between your Windows computers and OMS after you’ve installed and configured agents.
System requirements and required configuration
Before you install or deploy agents, review the following details to ensure you meet necessary requirements.
- You can only install the OMS MMA on computers running Windows Server 2008 SP 1 or later or Windows 7 SP1 or later.
- You'll need an OMS subscription. For additional information, see Get started with Log Analytics.
- Each Windows computer must be able to connect to the Internet using HTTPS or to the OMS Gateway. This connection can be direct, via a proxy, or through the OMS Gateway.
- You can install the OMS MMA on stand-alone computers, servers, and virtual machines. If you want to connect Azure-hosted virtual machines to OMS, see Connect Azure virtual machines to Log Analytics.
- The agent needs to use TCP port 443 for various resources. For more information, see Configure proxy and firewall settings in Log Analytics.
Download the agent setup file from OMS
- In the OMS portal, on the Overview page, click the Settings tile. Click the Connected Sources tab at the top.
- Click Windows Servers and then click Download Windows Agent applicable to your computer processor type to download the setup file.
- On the right of Workspace ID, click the copy icon and paste the ID into Notepad.
- On the right of Primary Key, click the copy icon and paste the key into Notepad.
Install the agent using setup
- Run Setup to install the agent on a computer that you want to manage.
- Log Analytics (OMS), Operations Manager, or you can leave the choices blank if you want to configure the agent later. Click Next.
- If you chose to connect to Azure Log Analytics (OMS), paste the Workspace ID and Workspace Key (Primary Key) that you copied into Notepad in the previous procedure and then click Next.
- If you chose to connect to Operations Manager, type the Management Group Name, Management Server name, and Management Server Port, and then click Next. On the Agent Action Account page, choose either the Local System account or a local domain account and then click Next.
On the Ready to Install page, review your choices and then click Install.
- On the Configuration completed successfully page, click Finish.
- When complete, the Microsoft Monitoring Agent appears in Control Panel. You can review your configuration there and verify that the agent is connected to Operational Insights (OMS). When connected to OMS, the agent displays a message stating: The Microsoft Monitoring Agent has successfully connected to the Microsoft Operations Management Suite service.
Install the agent using the command line
Modify and then use the following example to install the agent using the command line.
Note
If you want to upgrade an agent, you need to use the Log Analytics scripting API. See the next section to upgrade an agent.
MMASetup-AMD64.exe /Q:A /R:N /C:"setup.exe /qn ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID=<your workspace id> OPINSIGHTS_WORKSPACE_KEY=<your workspace key> AcceptEndUserLicenseAgreement=1"
Upgrade the agent and add a workspace using a script
You can upgrade an agent and add a workspace using the Log Analytics scripting API with the following PowerShell example.
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg' $mma.AddCloudWorkspace($workspaceId, $workspaceKey) $mma.ReloadConfiguration()
Note
If you've used the command line or script previously to install or configure the agent,
EnableAzureOperationalInsights was replaced by
AddCloudWorkspace.
Install the agent using DSC in Azure Automation
You can use the following script example to install the agent using DSC in Azure Automation. The example installs the 64-bit agent, identified by the
URI value. You can also use the 32-bit version by replacing the URI value. The URIs for both versions are:
- Windows 64 bit agent -
- Windows 32 bit agent -
Note
This procedure and script example will not upgrade an existing agent.
- Import the xPSDesiredStateConfiguration DSC Module from into Azure Automation.
- Create Azure Automation variable assets for OPSINSIGHTS_WS_ID and OPSINSIGHTS_WS_KEY. Set OPSINSIGHTS_WS_ID to your OMS Log Analytics workspace ID and set OPSINSIGHTS_WS_KEY to the primary key of your workspace.
- Use the script below and save it as MMAgent.ps1
- Modify and then use the following example to install the agent using DSC in Azure Automation. Import MMAgent.ps1 into Azure Automation by using the Azure Automation interface or cmdlet.
- Assign a node to the configuration. Within 15 minutes the node will check its configuration and the MMA will be pushed to the node.
Configuration MMAgent { $OIPackageLocalPath = "C: ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID=' + $OPSINSIGHTS_WS_ID + ' OPINSIGHTS_WORKSPACE_KEY=' + $OPSINSIGHTS_WS_KEY + ' AcceptEndUserLicenseAgreement=1"' DependsOn = "[xRemoteFile]OIPackage" } } }
Get the latest ProductId value
The
ProductId value in the MMAgent.ps1 script is unique to each agent version. When an updated version of each agent is published, the ProductId value changes. So, when the ProductId changes in the future, you can find the agent version using a simple script. After you have the latest agent version installed on a test server, you can use the following script to get the installed ProductId value. Using the latest ProductId value, you can update the value in the MMAgent.ps1 script.
$InstalledApplications = Get-ChildItem hklm:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall foreach ($Application in $InstalledApplications) { $Key = Get-ItemProperty $Application.PSPath if ($Key.DisplayName -eq "Microsoft Monitoring Agent") { $Key.DisplayName Write-Output ("Product ID is: " + $Key.PSChildName.Substring(1,$Key.PSChildName.Length -2)) } }
Configure an agent manually or add additional workspaces
If you've installed agents but did not configure them or if you want the agent to report to multiple workspaces, you can use the following information to enable an agent or reconfigure it. After you've configured the agent, it will register with the agent service and will get necessary configuration information and management packs that contain solution information.
- After you've installed the Microsoft Monitoring Agent, open Control Panel.
- Open Microsoft Monitoring Agent and then click the Azure Log Analytics (OMS) tab.
- Click Add to open the Add a Log Analytics Workspace box.
- Paste the Workspace ID and Workspace Key (Primary Key) that you copied into Notepad in a previous procedure for the workspace that you want to add and then click OK.
After data is collected from computers monitored by the agent, the number of computers monitored by OMS will appear in the OMS portal on the Connected Sources tab in Settings as Servers Connected.
To disable an agent
- After installing the agent, open Control Panel.
- Open Microsoft Monitoring Agent and then click the Azure Log Analytics (OMS) tab.
- Select a workspace and then click Remove. Repeat this step for all other workspaces.
Optionally, configure agents to report to an Operations Manager management group
If you use Operations Manager in your IT infrastructure, you can also use the MMA agent as an Operations Manager agent.
To configure MMA agents to report to an Operations Manager management group
- On the computer where the agent is installed, open Control Panel.
- Open Microsoft Monitoring Agent and then click the Operations Manager tab.
- If your Operations Manager servers have integration with Active Directory, click Automatically update management group assignments from AD DS.
- Click Add to open the Add a Management Group dialog box.
- In Management group name box, type the name of your management group.
- In the Primary management server box, type the computer name of the primary management server.
- In the Management server port box, type the TCP port number.
- Under Agent Action Account, choose either the Local System account or a local domain account.
- Click OK to close the Add a Management Group dialog box and then click OK to close the Microsoft Monitoring Agent Properties dialog box.
Optionally, configure agents to use the OMS Gateway
If you have servers or clients that do not have a connection to the Internet, you can still have them send data to OMS by using the OMS Gateway. When you use the Gateway, all data from agents is sent through a single server that has access to the Internet. The Gateway transfers data from the agents to OMS directly without analyzing any of the data that is transferred.
See OMS Gateway to learn more about the Gateway, including setup, and configuration.
For information about how to configure your agents to use a proxy server, which in this case is the OMS Gateway, see Configure proxy and firewall settings in Log Analytics.
Optionally, configure proxy and firewall settings
If you have proxy servers or firewalls in your environment that restrict access to the Internet, see Configure proxy and firewall settings in Log Analytics to enable your agents to communicate to the OMS service.
Next steps
- Add Log Analytics solutions from the Solutions Gallery to add functionality and gather data.
- Configure proxy and firewall settings in Log Analytics if your organization uses a proxy server or firewall so that agents can communicate with the Log Analytics service. | https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-windows-agents | 2017-01-16T12:56:49 | CC-MAIN-2017-04 | 1484560279176.20 | [array(['media/log-analytics-windows-agents/oms-direct-agent-diagram.png',
'oms-direct-agent-diagram'], dtype=object) ] | docs.microsoft.com |
How to contribute to community projects using GitHub
Preperation
This tutorial assumes you already have:
- A registered GitHub Account
- GitHub Desktop downloaded and installed
- You are logged into GitHub
Forking the repository
Go to the Rigs of Rods community website on GitHub. Click on the project you want to work on (
baker-ranch-v2 in this example).
You will now see the projects repository. The repository contains all assets bound to this project (e.g. terrain configuration files, textures and meshes for maps).
Click the
Fork button at the top right
The repository should now fork to your profile. Once it’s done, you should be redirected to your new fork:
Verify you are viewing your own repository (indicated by
your username/
project name with
forked from ... subtitle)
Cloning your local repository
Click on
Clone or download then
Open in Desktop:
After you click on
Open in Desktop a dialog will appear. This dialog will be different depending on the web browser you are using.
Make sure
GitHubDesktop.exe is highlighted and hit
Open link.
If you have no option here you need to install GitHub Desktop (see Preperation).
GitHub Desktop will open and ask you where it should put the files so you can work on them.
Select the place where you’d like to have them (Desktop in this example).
Click
Clone then wait for the download to finish.
After the download is finished, open the project’s folder by clicking
Repository ->
Show in Explorer
Making changes
In this folder you work on the project like you would usually do.
Committing to your local repository
This step assumes you are done with your changes and made sure they work and are bugfree.
So far all changes are offline on your computer. To contribute to the project you have to upload (commit) them.
Open GitHub Desktop. It will show the changes you have made. Check/uncheck the changed files you want to upload in one commit (it’s good practice to do one commit per change. If you don’t understand what this means yet don’t worry, it will come with time).
Enter a commit message explaining your change(s). If your changes are big or you want to add a comment use the description field for additional information. This is usually not necessary.
Click on
Commit to master
Your changes are now saved into the git subsystem but they are still offline…
Uploading to GitHub
You’ll see a confirmation that your commit has been saved. Notice your changes are now in the
History tab.
To upload your changes click on
Push origin:
Creating a pull request
Now your changes are online but they are not in the central repository just yet.
As of now they are only stored in your personal repository.
To contribute to the project they have to be “merged” into the central repository.
Open your personal repository on GitHub in your browser by clicking
Repository ->
View on GitHub.
Notice it’s telling you your branch is
x commits ahead of y. This means you have changes which are not in the central repository yet.
- Click on
New pull request
On the next page you can review your changes again and start a pull request. Click
Create pull request
- Enter a description for your pull request
- Since pull request can contain multiple commits and changes you may want to deliver more information in the description section (optional)
- Click “Create pull request”
You just created a pull request.
Your changes will be reviewed by the project’s admins to ensure the overall quality of the project and to prevent trolls. You can review the status of your request at your pull request page.
- open (= pending)
- merged (= merged into central repository)
- closed (= not accepted)
Often there will be a discussion about your changes. Admins will inform you about bugs, things they don’t like etc. You should listen to them and adapt your changes accordingly.
If someone replied to your pull request you will be notified by the GitHub interface. The GitHub icon at the top right of the page will have a blue dot to inform you about replies (similar to Facebook or Google+).
Keeping your local repository updated
To keep your local repository updated with the main repository:
First, you’ll need to get the url for the main repository.
To do this, go to the repository you forked from (In this example
RigsOfRods-Community/baker-ranch-v2) then click
Clone or download.
The url that appears under
Clone with HTTPS is the url. Save this as you’ll need it later.
In GitHub Desktop, click
Repository ->
Open in Command Prompt
A Command Prompt window will open.
Type these commands one at a time:
git remote add upstream url git fetch upstream git merge upstream/master --no-commit
url being the url you found earlier. Example:
The
master branch name might be different depending on the repository. For example the Community Map’s default branch is
Default.
Then in GitHub Desktop click
Push to origin. After it’s done, when you check your repository on GitHub it should now say
This branch is even with RigsOfRods-Community:master.:
| http://docs.rigsofrods.org/tools-tutorials/contributing-github/ | 2018-09-18T18:07:25 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['/images/github-1.png', '1'], dtype=object)
array(['/images/github-2.png', '2'], dtype=object)
array(['/images/github-3.png', '3'], dtype=object)
array(['/images/github-4.png', '4'], dtype=object)
array(['/images/github-5.png', '5'], dtype=object)
array(['/images/github-6.png', '6'], dtype=object)
array(['/images/github-7.png', '7'], dtype=object)
array(['/images/github-8.png', '8'], dtype=object)
array(['/images/github-9.png', '9'], dtype=object)
array(['/images/github-10.png', '10'], dtype=object)
array(['/images/github-11.png', '11'], dtype=object)
array(['/images/github-12.png', '12'], dtype=object)
array(['/images/github-13.png', '13'], dtype=object)
array(['/images/github-14.png', '14'], dtype=object)
array(['/images/github-15.png', '15'], dtype=object)
array(['/images/github-16.png', '16'], dtype=object)
array(['/images/github-17.png', '17'], dtype=object)
array(['/images/github-18.png', '18'], dtype=object)] | docs.rigsofrods.org |
Installing Vagrant from source is an advanced topic and is only recommended when using the official installer is not an option. This page details the steps and prerequisites for installing Vagrant from source.
You must have a modern Ruby (>= 2.2) in order to develop and build Vagrant. The specific Ruby version is documented in the Vagrant's
gemspec. Please refer to the
vagrant.gemspec in the repository on GitHub, as it will contain the most up-to-date requirement. This guide will not discuss how to install and manage Ruby. However, beware of the following pitfalls:
Clone Vagrant's repository from GitHub into the directory where you keep code on your machine:
$ git clone
cd into that path. All commands will be run from this path:
$ cd /path/to/your/vagrant/clone
Run the
bundle command with a required version* to install the requirements:
$ bundle install
You can now run Vagrant by running
bundle exec vagrant from inside that directory.
In order to use your locally-installed version of Vagrant in other projects, you will need to create a binstub and add it to your path.
First, run the following command from the Vagrant repo:
$ bundle --binstubs exec
This will generate files in
exec/, including
vagrant. You can now specify the full path to the
exec/vagrant anywhere on your operating system:
$ /path/to/vagrant/exec/vagrant init -m hashicorp/precise64
Note that you will receive warnings that running Vagrant like this is not supported. It's true. It's not. You should listen to those warnings.
If you do not want to specify the full path to Vagrant (i.e. you just want to run
vagrant), you can create a symbolic link to your exec:
$ ln -sf /path/to/vagrant/exec/vagrant /usr/local/bin/vagrant
When you want to switch back to the official Vagrant version, simply remove the symlink.
© 2010–2017 Mitchell Hashimoto
Licensed under the MPL 2.0 License. | http://docs.w3cub.com/vagrant/installation/source/ | 2018-09-18T18:12:05 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.w3cub.com |
M2E PRO – USER GUIDE FOR MAGENTO 2
Introduction
In the hustle and bustle of today’s retail industry, it is important that business should connect with online marketplaces. There are various reasons why many retailers use sites like Amazon, eBay, Rakuten, Sears, etc,… to scale online orders, to tap into global demand, to gain support and so on.
However, is selling on marketplaces easy to set up and grow your sales?
There are still many problems that you will face while embarking on these virtual marketplaces. Peer competition in each marketplace is just one part of the story outside the business, while inventory management is one of the nerve-racking internal issues that may cause you to fail to make a sustainable revenue. Cataloguing and updating inventory across marketplaces isn’t an easy feat. Changing the price of copious products on different channels is also the big challenge when you want to stay ahead of the competition.
So how to get at ease with all the challenges of inventory management across marketplaces?
It will be much better if you synchronize your Magento warehouses to your Amazon/eBay/Rakuten stores by M2E Pro. This module can help you effectively synchronize stock level and manage orders between Magento backend and different marketplaces. What is more, from now on with the M2E Magento Integration, your data will be managed not only in the whole Magento store but also in specific warehouses. The entire work is done smartly by just one order management system. So controlling the flow of thousands of products will become easier and smoother than ever!
Let’s open more doors for your business by expanding your presence to these marketplaces with this M2E Pro solution now!
M2E Pro - Magento - Ebay/ Amazon Marketplace Integration is one module in our Omnichannel solution for Magento retailers.
Create a new listing
Path: eBay Integration > M2E Pro
Click on Add Listing button to begin the process.
Step 1: General Settings
On the New Listing Creation page, a status bar will be displayed the whole process as below to help you create a new listing step by step.
i. General section:
(1) Title: Enter a descriptive name for your M2E Pro listing. This is used for reference within M2E Pro and will not appear on your eBay Listings.
ii. eBay settings section:
(2) Account: Use an eBay User ID you are going to list the Items from. If the eBay User ID you want to use is not available as an option – you can add one by clicking Add Another button next to the Account bulk.
(3) Marketplace: Choose the marketplace on which you want to sell the products listed on the M2E Pro Listing. The currency will be set automatically according to the marketplace you select.
iii. Magento Settings section:
(4) Magento Store View: Select a Magento Store View you want to display this M2E Pro Listing.
iv. Warehouse section:
(5) Select Warehouse: Choose a warehouse you want to assign this product listing to.
Step 2: Payment/ Shipping Settings
After specifying general information, you need to set payment methods, shipping methods, and whether you accept return for Products added on eBay with the M2E Pro Listing you‘re editing.
i. Payment tab
Choose one payment method for the product listing. If you choose PayPal, please enter your PayPal registered email. Besides, you can select other payment methods you are using.
Click on Save as New Policy button to save your settings.
ii. Shipping tab:
• Item Location section is prefilled for you based on your Magento settings. Check if the information is correct and edit it if necessary.
• Domestic Shipping section allows you to select a suitable shipping method that allows items to be delivered to Buyers.
Other shipping settings:
• Combined Shipping Profile section allows you to use flat shipping and calculated shipping rule profiles that you created on eBay. In addition, you can enable promotional shipping rule to offer buyers special discounts when they purchase multiple items or spend a certain amount.
• International Shipping (only display if you choose flat and calculated shipping method) allows you to qualify Listings to be posted in international marketplaces where the selected buyers can see it.
• Package Details section (only display if you choose the shipping method: Calculated: cost varies by Buyers location) enables you to set a measurement system, size source, dimension source and weight source for the package.
• Excluded Locations section: allows you to exclude buyers in particular locations from purchasing items.
Remember to click on Save as New Policy button to save your settings.
iii. Return tab
• Return Policy section allows you to enable return policy, and set its conditions such as the type of refund buyers will receive, the duration of which items will be returned, the fee of restocking, and so on.
Click Save as New Policy to complete this settings.
Step 3: Selling Settings
i. Price, Quantity and Format tab:
• How You Want To Sell Your Item section requires you to choose a sale type of listing from many options such as Fixed Price, Auction, and Magento Attributes. Besides, you can decide whether you want Buyers to remain Anonymous to other users in Private Listing field and elect to offer the Item exclusively to Business users.
• Quantity and Duration section: In this section, you can set the length of time your Listing will be available on eBay, and other quantity conditions by which your items will be listed on eBay.
• Taxation section: Set VAT rate and the tax will be added to or included in Item Price according to Add VAT Percentage field settings located in the Price section below.
• Price section allows you to choose whether you want to add VAT to the Price when a product is listed on eBay. In addition, you need to choose a type of price and add a value you want the Magento Price to be modified while listing on eBay.
• Donations section helps choose your favorite charity or organization and the donation percentage which you want to transfer to it.
• Best Offer section requires you to determine if you accept offers from buyers and negotiate a price. You can also manually respond the Buyers directly on eBay or use auto Accept and Decline Offers conditions.
ii. Description tab:
• Condition section let you specify which condition has the Item you are going to sell on eBay. Besides, if you want to have a custom value, add notes to provide additional details about the Item’s Condition.
• Images section allows you to set the elements of images such as location, size, gallery images, watermark,
• Details section requires you to specify the type of title and subtitle displayed.
• Description section: Select whether to show Product Description or Product Short Description as an eBay Item description. Otherwise, customize the descriptive information by choosing Custom Value and inserting Magento Attributes, Images tags or an HTML code you created.
• eBay Catalog Identifiers section allows you to specify a Magento Attribute that contains Product UPC/ EAN/ ISBN/Brand/MPN Value.
• Upgrade Tools section helps you make your Listing stand out from others These tools include Value Pack Bundle, Listing Upgrades (Bold or Highlight), Gallery Type and Hit Counter that counts the number of visitors to your eBay listing.
After completing all these above fields in step 3, click on Save as New Policy button.
Step 4: Synchronize Settings
There are several different types of Synchronization Rules that you can set on this screen:
• List Rules define which new products should be listed on eBay.
• Revise Rules define when products currently listed on eBay should be revised.
• Relist Rules define when products that have previously been listed on eBay but aren’t currently should be relisted on eBay.
Assign warehouse to all M2E product listings on the marketplace
Path: Stores > Configuration > Magestore Extension > M2esuccess Configuration
In M2E Pro Integration section:
(1) eBay store is associated with:
Select Specify Warehouse to allow a specific warehouse to link with your eBay store.
Select Global Website to link Ebay store with the primary warehouse.
(2) [Ebay] Warehouse (only available if you choose Specify Warehouse): choose a specific warehouse to link with your eBay store.
(3) Amazon store is associated with:
Select Specify Warehouse to allow a specific warehouse to link with your Amazon store.
Select Global Website to set Amazon store linking with primary warehouse.
[Amazon] Warehouse (only available if you choose Specify Warehouse): choose a specified warehouse to link with your Amazon store
Note: You have two ways to assign the warehouse to your product listing:
• The first way: Assign the warehouse to the product listing when creating a new product listing. Refer to Create a new product listing part for more details.
• The second one: Assign the warehouse to the product listing in Configuration. However, this setting is only applied to the product listings which are not assigned to any warehouse. That means if you assigned the warehouse when creating a new product listing, it will overwrite options in Configuration.
Manage orders between Magento backend and marketplaces
Path: eBay Integration > Sales section > Orders
View orders synchronization between eBay and Magento backend.
If you enable the synchronization between Magento Backend and your marketplaces, you can see the order list as below when you follow the path (eBay Integration > Sales section > Orders)
When clicking on a corresponding row of an order, you will see the overview details about eBay order.
eBay Order Details will appear on another page as below:
When an order is placed on eBay and the corresponding Magento order is created manually by admin in Magento backend, you can see the Magento order numbers below. Click on this number to view the Magento Order Details created for selected eBay Order.
The Magento Order Details will appear on another page as below:
Process eBay orders in Magento backend.
(1) Click on the corresponding row of an eBay Order that has not been created in Magento.
(2) To create order in Magento backend, click on Create Magento Order button on the top right of the page.
After that, a notification will be displayed to notify you that the order was created in Magento.
(3) Continually, click on Mark as Paid button to confirm that you have received payment for the order.
(4) After having shipped the order to the buyer, you can click Mark as Shipped to confirm that you have dispatched the order. You can also add tracking Magento order details by clicking the Magento order number for quick access to the Magento order.
Resend shipping information if necessary ( for example, you have added a tracking number or the buyer has lost the shipping information).
Notice to the *Transaction items section:
The items that form the eBay order appear in this section. If you see a link to Map to Magento Product, you will need to let M2E Pro know which Magento product in your inventory matches the eBay item.
First, click on Map to Magento Product and an Order Item Mapping page will appear as below.
Search for the corresponding product title or SKU and click on Map to This Product link on the appropriate line as above.
Click Unmap to break the link between the eBay item and a Magento product before mapping again.
Every change you made to the eBay order in this eBay View Order Details page will be automatically synchronized with the Magento View Order Details page.. | http://docs.magestore.com/Guide%20By%20Functions/Magento%202/M2E%20Pro/ | 2018-09-18T17:22:12 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['../M2Epro_Image/image001.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image003.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image005.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image007.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image009.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image011.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image013.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image015.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image017.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image019.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image021.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image023.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image025.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image027.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image029.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image031.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image033.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image035.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image037.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image039.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image041.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image043.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image045.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image047.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image049.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image051.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image053.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image055.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image057.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image059.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image061.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image063.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image065.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image067.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image069.png', 'm2e pro'], dtype=object)
array(['../M2Epro_Image/image071.png', 'm2e pro'], dtype=object)] | docs.magestore.com |
you begin a group, the coordinate system for GUI controls are set so (0,0) is the top-left corner of the group. All controls are clipped to the group.
Groups can be nested - if they are, children are clipped to their parents.
This is very useful when moving a bunch of GUI elements around on screen. A common use case is designing your menus to fit on a specific screen size, then centering the GUI on larger displays. See Also: matrix, BeginScrollView.: | https://docs.unity3d.com/ScriptReference/GUI.BeginGroup.html | 2018-09-18T17:25:37 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.unity3d.com |
The LANSA Email Setup on IBM i task includes the following steps.
Step 2. Configure Simple Mail Transfer Protocol (SMTP)
Step 3. Setup the Mail Server Framework (MSF)
Step 4. Set System Value QUTCOFFSET
Step 5. Test your Configuration
You may also wish to review the 5.1 Sample LANSA Email Function. | https://docs.lansa.com/14/en/lansa008/content/lansa/insef_000.htm | 2018-09-18T17:38:42 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.lansa.com |
, add an existing hard disk.
The Select File dialog box opens.
- In the Select File, expand a datastore, select a virtual machine folder, and select the disk to add. Click OK
The disk file appears in the Contents column. The File Type drop-down menu shows the compatibility file types for this disk.
- (Optional) Expand New Hard disk and make further customizations for the hard disk. .
- Click OK. | https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vsphere.vmc-aws-manage-vms.doc/GUID-BDDECFBC-2FD5-4E4A-ABC7-AD274F4F40B4.html | 2018-09-18T17:04:58 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.vmware.com |
WEB-BASED POINT-OF-SALE (WEBPOS) - USER GUIDE FOR MAGENTO
WebPOS will soon be a module in our Omnichannel Solution. The latest upgraded version adds convenience to the workflows described below.
(1) Name of the staff in this session
(2) Location of POS
(3) Opening Balance before starting new session
(4) Value of the currency contributing to Opening Balance (such as $100)
(5) Number of that currency unit (for example: 2)
(6) Subtotal (you will have: $100 * 2 = $200)
(7) After checking all the information above, click this button to Open New Session
Type the coin/bill value > Put in the number of those coins/bills > Click “Open Session”
If you logged in to POS but no window popped up automatically as in the picture above, you need to make a change in the back-end system. Here are the instructions:
Go to “Stores” > “Configuration” > “Magestore Extension” > “WebPOS”. Select “Yes” for this setting and don’t forget to click “Save Config” to make it work.
On the POS screen, it will be displayed like this:
- Finally, at the end of the day, POS Managers must create the Closing Balance, which means they confirm the amount of cash in the store after all transactions on that day. The system can then provide a Session Report for the Manager. It reflects two things: Cash and Payment Slip.
Now it is time for Managers to check the Closing Balance. Two situations can occur in this step:
- If the Theory and Real Balance are the same, Managers can move directly to the Set Closing Balance step and then end this workflow (Session Management).
- If the Theory and Real Balance are not the same, Managers have the two options below to solve the problem. If they accept the difference, the Manager has to record the Profit or Loss (with a reason).
Otherwise, staff have to Put Money In or Take Money Out (with a reason).
(1) Amount of cash that staff will put in
(2) Reason
(3) Name of the staff member who will do this action
(1) Amount of cash that staff will take out
(2) Reason
(3) Name of the staff member who will do this action
Finally, when the Theory Balance equals the Real Balance, POS Managers are able to “Set Closing Balance”. This ends the “General Sales Process” workflow.
(1) The value of currency in the drawer
(2) Number of those
(3) Click “Confirm” to agree with the information displayed above.
| http://docs.magestore.com/Guide%20By%20Business%20Flow/Point-of-Sale%20-POS/Magento%202/ | 2018-09-18T18:02:48 | CC-MAIN-2018-39 | 1537267155634.45 | [images: General Process, POS Order, POS Return Order, Store Cashier, General Sales Process screenshots] | docs.magestore.com |
class sunpy.util.cond_dispatch.ConditionalDispatch
Methods Summary
Methods Documentation
__call__(*args, **kwargs)
Call self as a function.
add(fun, condition=None, types=None, check=True)
Add fun to the ConditionalDispatch under the condition that the arguments must match. If condition is left out, the function is executed for every input that matches the signature. Functions are considered in the order they are added, but ones with condition=None are considered last: that is, a function with condition None serves as an else branch for that signature. Conditions must be mutually exclusive; otherwise, which function is executed depends on the order in which they were added. The function signatures of fun and condition must match (if fun is bound, the bound parameter needs to be left out of condition).
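A minimal usage sketch of the dispatch behaviour described above (illustrative only; the functions and values are made up, not taken from the sunpy documentation):

from sunpy.util.cond_dispatch import ConditionalDispatch

dispatcher = ConditionalDispatch()

# Executed only when the condition holds for the input.
dispatcher.add(lambda x: x * 2, lambda x: x > 0)

# No condition: acts as the "else" branch for the same signature.
dispatcher.add(lambda x: -x)

dispatcher(5)    # -> 10, the condition x > 0 matched
dispatcher(-3)   # -> 3, fell through to the else branch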
add_dec(condition)
from_existing(cond_dispatch)
generate_docs()
get_signatures(prefix='', start=0)
Return an iterator containing all possible function signatures. If prefix is given, use it as function name in signatures, else leave it out. If start is given, leave out first n elements.
If start is -1, leave out first element if the function was created by run_cls.
wrapper() | http://docs.sunpy.org/en/stable/api/sunpy.util.cond_dispatch.ConditionalDispatch.html | 2018-09-18T17:29:13 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.sunpy.org |
Prerequisites to manually add slave nodes
Make sure the ports are available, the database is deployed, and the correct JDK version is installed on all the nodes in the cluster.
Ensure that the new slave nodes meet the following prerequisites:
- The following operating systems are supported:
64-bit Red Hat Enterprise Linux (RHEL) 5 or 6
64-bit CentOS 5 or 6
64-bit SUSE Linux Enterprise Server (SLES) 11, SP1
At each of your hosts, ensure the following utilities are installed (an illustrative install command appears after this prerequisites list):
yum (RHEL)
zypper (SLES)
rpm
scp
curl
wget
unzip
tar
pdsh
Ensure that all of the ports are available.
To install Hive metastore or to use an external database for Oozie metastore, ensure that you deploy either a MySQL or an Oracle database in your cluster. For instructions, see "Meet Minimum System Requirements" in the Installing HDP Manually guide.
Your system must have the correct JDK installed on all of the nodes in the cluster. For more information, see "Meet Minimum System Requirements" in the Installing HDP Manually guide. | https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/administration/content/section_slave_node_prerequisites.html | 2018-09-18T18:50:41 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.hortonworks.com |
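A hedged sketch of installing the utilities listed above on a RHEL/CentOS host; the package names are assumptions (scp ships in openssh-clients, and pdsh typically comes from the EPEL repository), and SLES hosts would use zypper instead:

# run as root on each prospective slave node
yum install -y openssh-clients curl wget unzip tar
yum install -y epel-release && yum install -y pdsh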
Exporting OpenGL Frames
You can export the OpenGL frames (fast display mode) if you need a quick render for your scene. Heavier scenes containing 3D, multiple effects and camera moves can take a long time to export.
Use the Export OpenGL Frames dialog box to select the frames you want to render from the OpenGL Camera view, then save the rendered frames as image files or as a QuickTime movie.
Frames saved from OpenGL view have neither antialiasing nor special effects. To render final frames with effects, export as images or a movie.
- From the top menu, select File > Export > OpenGL Frames.
The Export OpenGL Frames dialog box opens.
- In the Output section, click Browse and select a folder for the frames.
- In the Filename field, enter a name (prefix) for the frames or revert to the default name by clicking Default.
- In the Format section, decide if you want to export individual image frames or a movie.
- Suffix: Lets you select the desired suffix. If you intend to render only a few frames, use 1 or 01, whereas if you intend to render 1000 frames, you can select 0001.
- Drawing Type: Lets you select the file type to render, such as .tga or .sgi.
- Click Movie Options to customize the Audio and Video settings for the *.mov export—see Exporting QuickTime Movies.
- From the Resolution section, select a resolution for export. If you are running some quick tests, then you might want to reduce the resolution to save time and space. You also have the option to set a Custom width and height to produce smaller or larger frames.
- In the Range section, decide whether you want to render all your frames, a range of frames, the current frame or selected frames.
- In the Options section, select the Open in Player option to view the rendered frames.
- Click OK. | https://docs.toonboom.com/help/harmony-14/advanced/export/export-opengl-frame.html | 2018-09-18T17:22:24 | CC-MAIN-2018-39 | 1537267155634.45 | [images: Export OpenGL Frames dialog screenshots] | docs.toonboom.com |
Custom Project Structure
Some advanced users like having WordPress in its own directory or moving plugins, themes or uploads into another directory. VersionPress supports several of these scenarios. Just remember that all files related to the site have to be under the project root (VP_PROJECT_ROOT).
Warning
You need to adjust your project structure before fully initializing VersionPress. The recommended procedure is:
- Customize your WordPress site structure.
- Install and activate the VersionPress plugin – do not go through the full initialization yet.
- Follow the instructions below, i.e., set some config constant like
VP_PROJECT_ROOT.
- Initialize VersionPress.
Giving WordPress its own directory
You can move WordPress into its own directory by following the instructions on the Codex. However, there is one extra step: you need to define the VP_PROJECT_ROOT constant to let VersionPress know where it should create the repository. See the configuration page for instructions.
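For illustration, a minimal sketch of such a definition, assuming WordPress lives one level below the project root; the exact path expression is an assumption, and the constant goes in wp-config.common.php as described on the configuration page:

// wp-config.common.php (hypothetical layout: WordPress in ./wp, project root one level up)
define('VP_PROJECT_ROOT', dirname(__DIR__));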
.git directory
Be sure that the
.git directory stays in the root directory if the project is already versioned.
Moving wp-content, plugins or uploads directories
It is possible to move these folders by following instructions on Codex. Be sure that you define constants referencing directories in the
wp-config.common.php and constants containing URLs in the
wp-config.php if VersionPress is already active.
Moving VPDB directory
You can also rename or move the directory where VersionPress saves all its data. Use the VP_VPDB_DIR constant to do this. See the configuration page for instructions.
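A hedged sketch of such a definition; the target path is a placeholder:

// wp-config.common.php (hypothetical new location for the VersionPress data directory)
define('VP_VPDB_DIR', __DIR__ . '/wp-content/versionpress-data');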
Note
It will NOT be possible to undo changes before moving the directory. | https://docs.versionpress.net/en/feature-focus/custom-project-structure/ | 2018-09-18T18:27:51 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.versionpress.net |
Week View
Overview
The Week view by default shows a full seven-day week at a time, which can be set to start at a predefined day (say Monday, or Sunday). To move to the next or previous week, you can use the back and forward keyboard arrows, or the SchedulerNavigator control, which also allows you to control whether to show weekends or not.
Figure 1: Week View
Set Week View
To explicitly set the Week view to be the default view which the user sees on the form:
Set ActiveViewType
this.radScheduler1.ActiveViewType = SchedulerViewType.Week;
Me.RadScheduler1.ActiveViewType = SchedulerViewType.Week
Get Week View
To get the instance of the SchedulerWeekView, use the RadScheduler ActiveView property:
note
This method returns null if the active view of the scheduler is not SchedulerWeekView.
ActiveView Property
if (this.radScheduler1.ActiveViewType == SchedulerViewType.Week)
{
    SchedulerWeekView activeWeekView = (SchedulerWeekView)this.radScheduler1.ActiveView;
}

If Me.RadScheduler1.ActiveViewType = SchedulerViewType.Week Then
    Dim activeWeekView As SchedulerWeekView = CType(Me.RadScheduler1.ActiveView, SchedulerWeekView)
End If
Showing/Hiding The Weekend
By default the weekends are shown, but you can hide them by using the ShowWeekend property:
Show Weekend
weekView.ShowWeekend = false;
weekView.ShowWeekend = False | https://docs.telerik.com/devtools/winforms/scheduler/views/week-view | 2017-11-17T23:07:45 | CC-MAIN-2017-47 | 1510934804019.50 | [images: scheduler-views-week-view screenshots] | docs.telerik.com |
Just in time for Tuesday's U.S. General Election, we decided to shine the spotlight on a few of our favorite election-themed documentaries. Executive Director Erica Ginsberg and one of our favorite Documentary Appreciation Salon Facilitators Joshua Glick both offer up their votes.
Josh's Picks: Documenting Presidential Campaigns
Presidential candidates have lived in the glow and glare of cinema’s spotlight since the dawn of the medium, yet particular elections boast unique instances of documentary artistry. I've chosen three seminal films that each captured moments of change and transformation for American campaigning. At the same time, these films essentially ushered in new and exciting ways for people to emotionally experience politics; they demonstrated the power of carefully crafted nonfiction to humanize a candidate, reveal the passion of the party machine supporting him, or mythologize the collective memory of a deceased president through giving him a cinematic afterlife. It is worth viewing or re-viewing these films both for the joy of seeing some aesthetic landmarks in the history of political media and to better understand how past precedents have shaped contemporary election-related documentaries.
McKinley at Home—Canton, O (1896)
The American Mutoscope and Biograph Company’s one-minute portrait of William McKinley was the first instance of a presidential candidate on film. The film effectively captures the visual iconography of McKinley’s “front porch” campaign, which contrasted with the Democratic candidate William Jennings Bryan’s whistle-stop speech tour. McKinley appears congenial and confident on his lawn in Canton, Ohio. With the porch in the background, McKinley moves casually toward the camera and stops to read news of his party’s nomination, delivered by his secretary. McKinley at Home was used to premiere the new Biograph motion-picture apparatus in New York, as well as to advertise the character of the Republican candidate to the American people. The yoking of nostalgic homestead with new age technology allowed McKinley to engage viewers on a mass scale and in a personal way.
How Can You See It? Watch it on YouTube or as part of Kino Video's anthology The Movies Begin: Volume 1
The Making of the President: 1960 (1963)
While Drew Associates’ Primary (1960) is often considered to be the John F. Kennedy campaign film, The Making of the President: 1960 captivated audiences and critics when it first aired on ABC on December 29, 1963. The documentary went on to win four Emmys, including “Program of the Year.” Los Angeles-based Wolper Productions and the late director Mel Stuart sought not only to register Kennedy’s magnetic personality, but romantically represent American politics as a democratic, participatory process that involved the sharing and passing of power.
The project brought together Hollywood insiders and the liberal elite of Washington. Additionally, producer David Wolper used footage from a wide variety of broadcast news sources and institutional archives. Particularly interesting are the scenes in the film of the first televised Kennedy-Nixon debate. Pulitzer Prize-winning journalist Theodore H. White’s voice-over commentary about the critical importance of Kennedy’s ability to thrive on the mass-mediated stages of contemporary televisual politics resonates with our current political climate, in which candidates strive to project their image, aura, and ideology to viewers through a multiplicity of audiovisual platforms.
How Can You See It? Watch it on Netflix or purchase the DVD through Acorn Media Group.
The War Room (1993)
American cinéma vérité documentarians D.A. Pennebaker and Chris Hegedus have made their careers finding the story behind the story: observing the private meetings, practice sessions, workshops, and rehearsals that influence and shape people’s public political or cultural performances. In tracking Bill Clinton’s 1992 presidential race in The War Room, the Pennebaker-Hegedus team focus more on the masterminds behind Clinton’s campaign—James Carville and George Stephanopoulos—who inject planning and strategizing with a high degree of rhetorical flair and poetry.
The suave, determined demeanor and calculated moves of Stephanopoulos complement the pyrotechnic quips and verbal parries and thrusts of Carville. The emotional climax of the film comes on the eve of the election, when Carville delivers a teary-eyed speech to eager staffers about the power and joy of merging one’s love and labor for the cause of public service.
How Can You See It? Watch it on Netflix or purchase the DVD through The Criterion Collection.
Erica's Picks: Getting Inside the Election Process
Perhaps it is because I grew up inside the Beltway Bubble where we eat, sleep, and breathe politics in the way that other towns do with sports teams. Every four years, it can look to outsiders like things change here, but really all that changes is the cast of characters and some of their pet issues. The basic way things get done (or don't get done) never really changes. That is perhaps why the three films which I have chosen are not so much about the presidential elections specifically, but are more about elections themselves.
Please Vote for Me (2007)
In 2007, a group of international funding institutions and broadcasters came together to support a project called Why Democracy? which sought to fund 10 documentary film projects from around the world which looked at the meaning of democracy in the post-Cold War, post-9/11 era. One of the films which came out of this was Weijun Chen's Please Vote for Me. Chen spent six months filming a third-grade classroom in his home city of Wuhan, China, where three eight-year-olds are competing in an election to select a class monitor.
The verite-style film is interesting for what it reveals about China's experimentation with democracy and the impact of the one-child policy and the growth of its middle class, with parents as involved in the success of their children as they are in the United States. However, what it also reveals is the universality of the human need for influence and power. The three candidates quickly form campaign strategies with help from other students (and overzealous parents) and work to influence, spin, and manipulate the outcome of the elections. While the film keeps us laughing at what comes out of the mouths and minds of these kids, it also quite accurately documents the human nature which is at the heart of the election process.
How Can You See It? Watch it on Netflix or purchase the DVD through Amazon
American Blackout (2006)
Ian Inaba's American Blackout looks at how African-American voters have been marginalized in the contemporary American electoral system. Inaba's film came out of the Guerilla News Network, known until then for its short music-video-style web documentaries reflecting politically progressive viewpoints. The film follows a frenzied style as a feature-length piece, effectively divided into three sections. The two bookends focus respectively on the 2000 and 2004 presidential elections and irregularities (particularly in Florida and Ohio) which disenfranchised thousands of African-American voters.
However, these bigger issue stories are anchored by the character-focused story of Congresswoman Cynthia McKinney, which provides the most damning arguments about how those who question a corrupt status quo can themselves be targeted by its proponents.
How Can You See It? Watch it on Netflix or purchase the DVD through The ConneXtion
Street Fight (2005)
In the post-Clinton era of politics, most candidates for office exercise tight control over their media image. Marshall Curry's Street Fight probably does the best job of both showcasing this and also in showing one candidate who is willing to allow pretty extensive access to the behind the scenes of a tight race. The election in question was a local one - for the Mayor of Newark, New Jersey and the candidate was Cory Booker. Booker was running against four-term incumbent Sharpe James in an election which reflected the changing of the guard in African-American urban politics from the civil rights pioneers to their children who have lived in a much different kind of racially-polarized society.
While the film touches on these issues, its main focus is on a David-vs-Goliath tale where the young upstart candidate must fight a well-established political machine. Even the filmmaker becomes involved in this battle, as he is regularly accused by the Sharpe campaign of siding with Booker. These exchanges between Curry and the Sharpe team are reminiscent of the earlier works of Michael Moore, where the filmmaker becomes the hero being thrown out by the establishment, and, like Moore, Curry finds a crack in the establishment - in this case through his regular exchanges with Sharpe's exasperated press officer. While the film's ending was hardly the end of the story for Booker, it provides a good slice of life of street-level local politicking.
How Can You See It? Watch it on SnagFilms or purchase the DVD through IndieBlitz.
Think we got it wrong? Why don't you tell us your favorite election-themed films in the comments?
| http://www.docsinprogress.org/201211our-favorite-election-themed-docs | 2017-11-17T22:46:36 | CC-MAIN-2017-47 | 1510934804019.50 | [] | www.docsinprogress.org |
Installation guide
Before installing django-contact-form, you’ll need to have a copy of Django already installed. For information on obtaining and installing Django, consult the Django download page, which offers convenient packaged downloads and installation instructions.
This release of django-contact-form supports the currently supported versions of Django, on the Python versions supported by those Django releases.
Normal installation
The preferred method of installing django-contact-form is via pip, the standard Python package-installation tool.
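Assuming pip is available on your system, the install command is simply (package name as published on the Python Package Index):

pip install django-contact-form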
Manual installation
It’s also possible to install django-contact-form manually. To do
so, obtain the latest packaged version from the listing on the Python
Package Index. Unpack the
.tar.gz file, and run:
python setup.py install
Once you’ve installed django-contact-form, you can verify
successful installation by opening a Python interpreter and typing
import contact_form.
If the installation was successful, you’ll simply get a fresh Python
prompt. If you instead see an
ImportError, check the configuration
of your install tools and your Python import path to ensure
django-contact-form installed into a location Python can import
from.
Installing from a source checkout
The development repository for django-contact-form is at <>. Presuming you have git installed, you can obtain a copy of the repository by typing:
git clone
From there, you can use normal git commands to check out the specific
revision you want, and install it using
python setup.py install.
Basic configuration and use
Once you have Django and django-contact-form installed, check out the quick start guide to see how to get your contact form up and running. | http://django-contact-form.readthedocs.io/en/1.3/install.html | 2017-11-17T23:12:56 | CC-MAIN-2017-47 | 1510934804019.50 | [] | django-contact-form.readthedocs.io |
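As a preview of that guide, the basic hookup usually looks something like the sketch below; this is a hedged example for the Django versions current at the time of this release, the URL prefix is a placeholder, and some setups may also need django.contrib.sites:

# settings.py
INSTALLED_APPS = [
    # ...your other apps...
    "contact_form",
]

# urls.py
from django.conf.urls import include, url

urlpatterns = [
    # ...your other URL patterns...
    url(r"^contact/", include("contact_form.urls")),
]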
A container tag wraps around text or other Textpattern tags.
Typical container tags are the permlink tag
<txp:permlink></txp:permlink> and all conditional tags. Container tags are used when something has to be enclosed by tags instead of being replaced by them.
A link is a good example: you have a text string (or a title tag) around which you want to wrap an HTML anchor element:
<txp:permlink>
  <txp:title />
</txp:permlink>
...content...
<txp:permlink>
  Read more...
</txp:permlink>
The example above would be rendered into something like so:
<a href="/articles/this-article-title">This article title</a>
...content...
<a href="/articles/this-article-title">Read more...</a>
Closing tags correctly
Textpattern tags behave like XML tags insofar as they must be closed correctly. Any containing tag must have both an opening tag and a corresponding closing tag (marked with a preceding slash):
<txp:some_tag> ...content... </txp:some_tag>
If the tag is a conditional tag, check to make sure that any else tag is employed correctly:
Right:
<txp:if_some_condition>
  ...true branch...
<txp:else />
  ...false branch...
</txp:if_some_condition>
Wrong:
<txp:else>
</txp:else>
</txp:else />
Single (self-closing) tags must have a single slash at the end:
<txp:some_single_tag />
Also check that the angle brackets have not been HTML encoded by mistake, e.g.:
&lt;txp:some_tag /&gt; | https://docs.textpattern.io/tags/tag-basics/self-closed-versus-container-tags | 2017-11-17T22:49:24 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.textpattern.io |
h1. Safe Mode move_uploaded_file error
Your hosting company has set PHP’s “upload_tmp_dir setting”: incorrectly. You’ll need to ask them to fix it.
The upload_tmp_dir setting must refer to a filesystem directory that is accessible and writable by the PHP server process. In Safe Mode, upload_tmp_dir must be within open_basedir. See “here”: and “here”: for technical information. | https://docs.textpattern.io/faqs/safe-mode-move_uploaded_file-error | 2017-11-17T23:01:18 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.textpattern.io |
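A hedged sketch of the php.ini change a host would typically make; the directory shown is a placeholder and must exist, be writable by the PHP process, and (under Safe Mode) sit inside open_basedir:

; php.ini
upload_tmp_dir = /var/www/example.com/tmp
open_basedir = /var/www/example.com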