Packaging your RapidSMS application for re-use¶
If you’d like others to be able to use your application, you’ll want to package it and publish it on PyPI.
You will package and publish your RapidSMS application in the same way you would any other Django application. Django provides excellent documentation on packaging your Django app, so we won’t try to write the same thing here.
We recommend using at least the following classifiers on your package:
Framework :: Django
Intended Audience :: Developers
Programming Language :: Python
Topic :: Communications
Topic :: Software Development :: Libraries :: Python Modules
Depending on your project, also consider:
Operating System :: OS Independent
Topic :: Internet :: WWW/HTTP :: Dynamic Content
Environment :: Web Environment
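These classifiers are declared in your package’s setup.py. The following is a minimal sketch only; the package name, version, and the particular classifiers chosen are placeholders rather than values required by RapidSMS:

from setuptools import setup, find_packages

setup(
    name="rapidsms-exampleapp",  # hypothetical package name
    version="0.1.0",             # hypothetical version number
    packages=find_packages(),
    classifiers=[
        "Framework :: Django",
        "Intended Audience :: Developers",
        "Programming Language :: Python",
        "Topic :: Communications",
        "Topic :: Software Development :: Libraries :: Python Modules",
    ],
)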
You’ll also need to give your package a license that allows others to use it. RapidSMS uses the BSD license and we recommend it if you don’t have a strong preference for another license.
A Pose and Prop set for Poser and DAZ Studio
Author: First Bastion
Installers For POSER and DAZ|Studio
Includes - pz2, fc2, pp2, OBJ files and Textures,
Wiki/Readme Information
FOR POSER and probably info for DS too
This set includes 24 unique poses, both climbing and hanging for M4. It also includes a duo set, one M4 grabbing hold of another M4's wrist who is hanging over a ledge.
In certain poses, the hip bone is rotated, so in those poses the pz2 contains rotational information that will cause your figure to rotate to that preset. This can be a nuisance. Usually you can simply rotate on the y-axis to bring the figure back to facing the cliff.
The smart props are for each hand and for each foot, and are pretty self-explanatory. Load M4 first, then load the smart props and they will parent to the hands and feet respectively. When bringing them to the very detailed cliff wall, you may have to make some minor rotational adjustments of the entire body/root of M4 to get the desired placement on the many possible locations on the cliff face. The foot props are particularly large so they can compensate for the placement being absorbed by the rock wall as much as necessary. These same footholds can also be used as handholds if the artist so chooses. Simply move the foothold to one of the M4 hands and parent it. This is useful for the hanging poses, which obviously don't need footholds, though footholds on some of the hanging poses can emulate a far reach upward. Experiment.
Translation information is not included in the poses, except for the duo, that are aligned at zero position so that their wrists lock in approximately the same position. Parent one M4 to the other, and then they will both move in tandem to where ever you need them to be placed.
The face and hand poses are also fairly straightforward, unless it's the first time you are using them. Choose the M4 head and click the fc2 of your choice. These face poses only use the expression channels and do not alter your other morphs. Use the eye position poses to adjust M4's eyes separately. Please note that if your character figure already has a viseme pronunciation morph on the face, the facial expression can start to look over-exaggerated.
The hand poses offer the standard 2-finger, 3-finger, 4-finger, and full-hand grips common in rock climbing, plus a few more. The Duo A + B hand poses correspond to the upper and lower Duo M4 grab hold. These are both for the specific extended hand. For all the other hand poses, choose your figure, double-click on the hand pose, and you will have the option of applying it to either the right or left hand.
Also included in the props folder is a pp2 preset to load this climbing rock cliff face in the exact position as the Great Cliff in the Hidden Waterfalls environment. This is not a requirement, just a bonus option for a related product if an artist happens to have that set. Delete or hide the GreatCliff model, and then double-click on the Climbing Cliff in the HiddenWaters Placement folder. It should fit right in since it uses a similar rock texture. Please note that this version of the cliff is significantly more detailed and will increase polycounts. Still, if you happen to have The Hidden Waterfalls and want your characters to do some climbing, this is available to you. Enjoy.
Performance: It should be noted that the climbing cliff is extremely detailed, with over 160,000 polygons. If you do replace the Great Cliff in “The Hidden Waterfalls” with this one, some people may experience a hit in performance, but only on slower machines. This level was built and rendered on a single-core PC with only 3 GB of RAM and an Nvidia 9500 accelerator card, hardly a state-of-the-art system. (Composite rendering techniques are recommended on slower machines.) Most current systems are more powerful and should not have an issue with the polycount. If you notice that it is sluggish on your system, that's the reason.
Thanks,
FirstBastion
Settings¶
The settings dialog can be invoked at any time by selecting
Settings from the
Tools menu option.
The following buttons are always available on any page of the Settings dialog. Sometimes the
Cancel
button has no effect for the page - this will be noted on the page in the area next to the buttons.
Settings that are specific to Git Extensions and apply globally will be stored in a file called
GitExtensions.settings
either in the user’s application data path or with the program.
The location is dependent on the IsPortable setting in the
GitExtensions.exe.config file that is with the program.
Settings that are specific to Git Extensions but apply to only the current repository will be stored in a file of the same
name,
GitExtensions.settings, but in either the root folder of the repository or the
.git folder of the repository,
depending on whether or not they are distributed with that repository.
The settings that are used by Git are stored in the configuration files of Git. The global settings are stored in the file called
.gitconfig in the user directory. The local settings are stored in the
.git\config file of the repository.
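For illustration, a global .gitconfig file typically contains plain key/value sections like the following; the name and email shown are placeholders rather than values required by Git Extensions:

[user]
    name = Jane Doe
    email = jane.doe@example.com
[core]
    autocrlf = true

The same sections can appear in the repository's .git\config file, in which case they override the global values for that repository only.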
Checklist¶
This page is a visual overview of the minimal settings that Git Extensions requires to work properly. Any items highlighted in red should be configured by clicking on the highlighted item.
This page contains the following settings and buttons.
Git¶
This page contains the settings needed to access git repositories. The repositories will be accessed using external tools. For Windows usually “Git for Windows” or Cygwin are used. Git Extensions will try to configure these settings automatically.
Git¶
Command used to run git (git.cmd or git.exe)¶
Needed for Git Extensions to run Git commands. Set the full command used to run git (“Git for Windows” or Cygwin). Use the
Browse button to find the executable on your file system.
The global configuration file used by git will be put in the HOME directory. On some systems the home directory is not set
or is pointed to a network drive. Git Extensions will try to detect the optimal setting for your environment. When there is
already a global git configuration file, this location will be used. If you need to relocate the home directory for git,
click the
Change HOME button to change this setting. Otherwise leave this setting as the default.
Git Extensions¶
This page contains general settings for Git Extensions.
Performance¶
Show number of changed files on commit button¶
When enabled, the number of pending commits is shown on the toolbar as a figure in parentheses next to the Commit button. Git Extensions must be stopped and restarted to activate changes to this option.
Show current working directory changes in revision graph¶
When enabled, two extra revisions are added to the revision graph. The first shows the current working directory status. The second shows the staged files. This option can cause slowdowns when browsing large repositories.
Use FileSystemWatcher to check if index is changed¶
Using the FileSystemWatcher to check index state improves the performance in some cases. Turn this off if you experience refresh problems in commit log.
Show stash count on status bar in browse window¶
When you use the stash a lot, it can be useful to show the number of stashed items on the toolbar. This option causes serious slowdowns in large repositories and is turned off by default.
Check for uncommitted changes in checkout branch dialog¶
Git Extensions will not allow you to checkout a branch if you have uncommitted changes on the current branch. If you select this option, Git Extensions will display a dialog where you can decide what to do with uncommitted changes before swapping branches.
Limit number of commits that will be loaded in list at start-up¶
This number specifies the maximum number of commits that Git Extensions will load when it is started. These commits are shown in the Commit Log window. To see more commits than are loaded, this setting will need to be adjusted and Git Extensions restarted.
Behaviour¶
Close Process dialog when process succeeds¶
When a process is finished, close the process dialog automatically. Leave this option off if you want to see the result of processes. When a process has failed, the dialog will automatically remain open.
Show console window when executing git process¶
Git Extensions uses command line tools to access the git repository. In some environments it might be useful to see the command line dialog when a process is executed. An option on the command line dialog window displayed allows this setting to be turned off.
Use patience diff algorithm¶
Use the Git ‘patience diff’ algorithm instead of the default. This algorithm is useful in situations where two files have diverged significantly and the default algorithm may become ‘misaligned’, resulting in a totally unusable conflict file.
Include untracked files in stash¶
If checked, when a stash is performed as a result of any action except a manual stash request (e.g. checking out a new branch and requesting a stash), any files not tracked by git will also be saved to the stash.
Follow exact renames and copies only¶
Follow file renames and copies for which the similarity index is 100%, that is, when a file is renamed or copied and is committed with no changes made to its content.
Open last working dir on startup¶
When starting Git Extensions, open the last used repository (bypassing the Start Page).
Play Special Startup Sound¶
Play a sound when starting Git Extensions. It will put you in a good moooooood!
Default clone destination¶
Git Extensions will pre-fill the destination directory input with the value of this setting on any form used to perform a repository clone.
Commit dialog¶
This page contains settings for the Git Extensions Commit dialog.
Behaviour¶
Provide auto-completion in commit dialog¶
Enables auto-completion in the commit dialog message box. Auto-completion words are taken from the changed files shown by the commit dialog. For each file type, a regular expression can be configured that decides which words should be considered as candidates for auto-completion. The default regular expressions are included with Git Extensions; you can override them by creating an AutoCompleteRegexes.txt file in the Git Extensions installation directory.
Show errors when staging files¶
If an error occurs when files are staged (in the Commit dialog), then the process dialog showing the results of the git command is shown if this setting is checked.
Ensure the second line of commit message is empty¶
Enforces the second line of a commit message to be blank.
Compose commit messages in Commit dialog¶
If this is unchecked, then commit messages cannot be entered in the commit dialog. When the
Commit button is clicked, a new editor window is opened where the commit message can be entered.
Number of previous messages in commit dialog¶
The number of commit messages, from the top of the current branch, that will be made available from the
Commit message combo box on the Commit dialog.
Remember 'Amend commit' checkbox on commit form close¶
Remembers the state of the ‘Amend commit’ checkbox when the Commit dialog is closed. The remembered state is restored the next time the Commit dialog is created. The ‘Amend commit’ checkbox is unchecked after each commit. So, when the Commit dialog is closed automatically after committing changes, the ‘Amend commit’ checkbox is unchecked first and its state is saved after that. Therefore the checked state is remembered only if the Commit dialog is closed by a user without committing changes.
Tick the boxes in this sub-group for any of the additional buttons that you wish to have available below the commit button. These buttons are considered additional to basic functionality and have consequences if you should click them accidentally, including resetting unrecorded work.
Appearance¶
This page contains settings that affect the appearance of the application.
General¶
Show relative date instead of full date¶
Show relative date, e.g. 2 weeks ago, instead of full date. Displayed on the
commit tab on the main Commit Log window.
Show current branch in Visual Studio¶
Determines whether or not the currently checked out branch is displayed on the Git Extensions toolbar within Visual Studio.
Auto scale user interface when high DPI is used¶
Automatically resize controls and their contents according to the current system resolution of the display, measured in dots per inch (DPI).
Truncate long filenames¶
This setting affects the display of filenames in a component of a window e.g. in the Diff tab of the Commit Log window. The options that can be selected are:
None - no truncation occurs; a horizontal scroll bar is used to see the whole filename.
Compact - no horizontal scroll bar. Filenames are truncated at both start and end to fit into the width of the display component.
Trimstart - no horizontal scroll bar. Filenames are truncated at the start only.
FileNameOnly - the path is always removed, leaving only the name of the file, even if there is space for the path.
If checked, gravatar will be accessed to retrieve an image for the author of commits. This image is displayed on the
commit tab on the main Commit Log window.
The display size of the user image.
The number of days to elapse before gravatar is checked for any changes to an author's image.
If the author has not set up their own image, then gravatar can return an image based on one of these services.
Fonts¶
Revision Links¶
You can configure here how to convert parts of a revision data into clickable links. These links will be located under the commit message on the
Commit
tab in the
Related links section.
The most common case is to convert an issue number given as part of the commit message into a link to the corresponding issue-tracker page. The screenshot below shows an example configuration for GitHub issues.
Categories¶
Lists all the currently defined Categories. Click the
Add button to add a new empty Category. The default name is ‘new’. To remove a Category select it and click the
Remove button.
Name¶
This is the Category name used to match the same categories defined on different levels of the Settings.
Enabled¶
Indicates whether the Category is enabled or not. Disabled categories are skipped while creating links.
Remote data¶
It is possible to use data from remote’s URL to build a link. This way, links can be defined globally for all repositories sharing the same URL schema.
Use remotes¶
Regex to filter which remotes to use. Leave blank to create links that do not depend on remotes. If full names of remotes are given, then matching remotes are sorted by their position in the given regex.
Revision data¶
Search pattern¶
Regular expression used for matching text in the chosen revision parts. Each matched fragment will be used to create a new link. More than one fragment can be used in a single link by using a capturing group. Matches from the Remote data group go before matches from the Revision data group. A capturing group value can be passed to a link by using zero-based indexed placeholders in a link format definition e.g. {0}.
Nested pattern¶
Nested pattern can be used when only a part of the text matched by the Search pattern should be used to format a link. When the
Nested pattern is empty, matches found by the Search pattern are used to create links.
Links: Caption/URI¶
List of links to be created from a single match. Each link consists of the
Caption to be displayed and the
URI to be opened when the link is clicked on. In addition to the standard zero-based indexed placeholders, the
%COMMIT_HASH% placeholder can be used to put the commit’s hash into the link. For example:
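A purely illustrative configuration (the repository URL below is hypothetical) that turns issue references such as #1234 in a commit message into issue-tracker links could look like this:

Search pattern: #(\d+)
Nested pattern: (left empty)
Link caption: Issue {0}
Link URI: https://github.com/example-org/example-repo/issues/{0}

A second link in the same rule could use the %COMMIT_HASH% placeholder, for example https://github.com/example-org/example-repo/commit/%COMMIT_HASH%, to open the selected commit on the hosting site.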
Colors¶
This page contains settings to define the colors used in the application.
Revision graph¶
Multicolor branches¶
Displays branch commits in different colors if checked. If unchecked, all branches are shown in the same color. This color can be selected.
Striped branch change¶
When a new branch is created from an existing branch, the common part of the history is shown in a ‘hatch’ pattern.
Draw non relatives graph gray¶
Show commit history in gray for branches not related to the current branch.
Draw non relatives text gray¶
Show commit text in gray for branches not related to the current branch.
Highlight all the revisions authored by the same author as the author of the currently selected revision (matched by email). If there is no revision selected, then the current user’s email is used to match revisions to be highlighted.
Color to show authored revisions in.
Application Icon¶
Difference View¶
Start Page¶
This page allows you to add/remove or modify the Categories and repositories that will appear on the Start Page when Git Extensions is launched. Per Category you can either configure an RSS feed or add repositories. The order of both Categories, and repositories within Categories, can be changed using the context menus in the Start Page. See Start Page for further details.
Categories¶
Lists all the currently defined Categories. Click the
Add button to add a new empty Category. The default name is ‘new’. To remove a Category select it and click Remove. This will delete the Category and any repositories belonging to that Category.
Path/Title/Description¶
For each repository defined for a Category, shows the path, title and description. To add a new repository, click on a blank line and type the appropriate information. The contents of the Path field are shown on the Start Page as a link to your repository if the Title field is blank. If the Title field is non-blank, then this text is shown as the link to your repository. Any text in the Description field is shown underneath the repository link on the Start Page.
An RSS Feed can be useful to follow repositories on GitHub, for example; see the GitHub documentation for details. You can also follow commits on public GitHub repositories by:
- In your browser, navigate to the public repository on GitHub.
- Select the branch you are interested in.
- Click on the Commits tab.
- You will find an RSS icon next to the words “Commit History”.
- Copy the link
- Paste the link into the RSS Feed field in the Settings - Start Page as shown above.
Your Start Page will then show each commit - clicking on a link will open your browser and take you to the commit on GitHub.
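As an illustration only (the repository name is hypothetical and GitHub’s URL scheme may change over time), the copied commit feed link usually has the form:

https://github.com/example-org/example-repo/commits/master.atom

Pasting a URL of this form into the RSS Feed field of a Category is all that is needed for the commits to appear on the Start Page.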
Git Config¶
This page contains some of the settings of Git that are used by and therefore can be changed from within Git Extensions.
If you change a Git setting from the Git command line using
git config then the same change in setting can be seen inside
Git Extensions. If you change a Git setting from inside Git Extensions then that change can be seen using
git config --get.
Git configuration can be global or local configuration. Global configuration applies to all repositories. Local configuration overrides the global configuration for the current repository.
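For example, the following commands show this equivalence; the editor values are placeholders only:

git config --global core.editor nano
git config --local core.editor notepad
git config --get core.editor

The first command writes the setting to the global .gitconfig file, the second overrides it in the current repository's .git\config file, and the third reads back the effective value, which is also how changes made outside Git Extensions become visible to it.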
Editor¶
Editor that git.exe opens (e.g. for editing commit message). This is not used by Git Extensions, only when you call git.exe from the command line. By default Git will use the built in editor.
Mergetool¶
Merge tool used to solve merge conflicts. Git Extensions will search for common merge tools on your system.
Path to mergetool¶
Path to merge tool. Git Extensions will search for common merge tools on your system.
Mergetool command¶
Command that Git uses to start the merge tool. Git Extensions will try to set this automatically when a merge tool is chosen. This setting can be left empty when Git supports the mergetool (e.g. kdiff3).
Keep backup (.orig) after merge¶
Check to save the state of the original file before modifying to solve merge conflicts. Refer to Git configuration setting
`mergetool.keepBackup`.
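The same merge tool settings can be made from the command line; the following is a sketch only, and the kdiff3 path is an assumption for a typical Windows installation that should be adjusted to your system:

git config --global merge.tool kdiff3
git config --global mergetool.kdiff3.path "C:/Program Files/KDiff3/kdiff3.exe"
git config --global mergetool.keepBackup false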
Difftool¶
Diff tool that is used to show differences between source files. Git Extensions will search for common diff tools on your system.
Path to difftool¶
The path to the diff tool. Git Extensions will search for common diff tools on your system.
DiffTool command¶
Command that Git uses to start the diff tool. This setting should only be filled in when Git doesn’t support the diff tool.
Path to commit template¶
A path to a file whose contents are used to pre-populate the commit message in the commit dialog.
Line endings¶
Choose how git should handle line endings when checking out and checking in files. Refer to the Git documentation on line endings for details.
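Both of the settings above correspond to ordinary Git configuration keys. A hedged example, with a placeholder template path:

git config --global commit.template C:/Users/jane/gitmessage.txt
git config --global core.autocrlf true

With core.autocrlf set to true, Git converts line endings to LF on commit and back to CRLF on checkout, which is the usual choice on Windows; use input or false if your project prefers otherwise.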
Build server integration¶
This page allows you to configure the integration with build servers. This allows the build status of each commit to be displayed directly in the revision log, as well as providing a tab for direct access to the Build Server build report for the selected commit.
General¶
Show build status summary in revision log¶
Check to show a summary of the build results with the commits in the main revision log.
AppVeyor¶
Account name¶
AppVeyor account name. You don’t have to enter it if the projects you want to query for build status are public.
API token¶
AppVeyor API token. Required if the Account name is entered. See the AppVeyor documentation for details.
Project(s) name(s)¶
Projects names separated with ‘|’, e.g. gitextensions/gitextensions|jbialobr/gitextensions
Display tests results in build status summary for every build result¶
Include tests results in the build status summary for every build result.
Display GitHub pull requests builds¶
Display build status for revisions on which GitHub pull requests are based. If you have fetched revisions from other users’ forks, GitExtensions will show a build status for those revisions for which a build was performed as part of a pull request’s check.
GitHubToken¶
Token to allow access to the GitHub API. You can generate your private token in your GitHub account settings.
Jenkins¶
TeamCity¶
Project name¶
Enter the name of the project which tracks this repository in TeamCity. Multiple project names can be entered separated by the | character.
Build Id Filter¶
Enter a regexp filter for which build results you want to retrieve in the case that your build project creates multiple builds. For example, if your project includes both devBuild and docBuild you may wish to apply a filter of “devBuild” to retrieve the results from only the program build.
SSH¶
This page allows you to configure the SSH client you want Git to use. Git Extensions is optimized for PuTTY. Git Extensions will show command line dialogs if you do not use PuTTY and user input is required (unless you have configured SSH to use authentication with key instead of password). Git Extensions can load SSH keys for PuTTY when needed.
Specify which ssh client to use¶
Configure PuTTY¶
Scripts¶
This page allows you to configure specific commands to run before/after Git actions or to add a new command to the User Menu. The top half of the page summarises all of the scripts currently defined. If a script is selected from the summary, the bottom half of the page will allow modifications to the script definition.
A hotkey can also be assigned to execute a specific script. See Hotkeys.
Enabled¶
If checked, the script is active and will be performed at the appropriate time (as determined by the On Event setting).
Ask for confirmation¶
If checked, then a popup window is displayed just before the script is run to confirm whether or not the script is to be run. Note that this popup is not displayed when the script is added as a command to the User Menu (On Event setting is ShowInUserMenuBar).
Run in background¶
If checked, the script will run in the background and Git Extensions will return to your control without waiting for the script to finish.
If checked, the script is added to the context menu that is displayed when right-clicking on a line in the Commit Log page.
Is PowerShell¶
If checked, the command is started through a powershell.exe process. If Run in background is checked, the powershell console is closed after finishing; if not, the powershell console is left open for the user to close manually.
Command¶
Enter the command to be run. This can be any command that your system can run e.g. an executable program, a .bat script, a Python command, etc. Use the
Browse button to find the command to run.
Arguments¶
Enter any arguments to be passed to the command that is run. The
Help button displays items that will be resolved by Git Extensions before executing the command, e.g. {cBranch} will resolve to the currently checked out branch, {UserInput} will display a popup where you can enter data to be passed to the command when it is run.
On Event¶
Select when this command will be executed, either before/after certain Git commands, or displayed on the User Menu bar.
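As a sketch of how these fields fit together (the script name and its purpose are invented for illustration), a User Menu entry that pushes the currently checked out branch could be defined as:

Name: Push current branch
Command: git
Arguments: push origin {cBranch}
On Event: ShowInUserMenuBar

When the menu item is clicked, {cBranch} is resolved to the currently checked out branch before the command is executed, as described for the Arguments field above.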
Hotkeys¶
This page allows you to define keyboard shortcuts to actions when specific pages of Git Extensions are displayed. The HotKeyable Items identifies a page within Git Extensions. Selecting a Hotkeyable Item displays the list of commands on that page that can have a hotkey associated with them.
The Hotkeyable Items consist of the following pages
- Commit: the page displayed when a Commit is requested via the
Commit User Menu button or the
Commands/Commit menu option.
- Browse: the Commit Log page (the page displayed after a repository is selected from the Start Page).
- RevisionGrid: the list of commits on the Commit Log page.
- FileViewer: the page displayed when viewing the contents of a file.
- FormMergeConflicts: the page displayed when merge conflicts are detected that need correcting.
- Scripts: shows scripts defined in Git Extensions and allows shortcuts to be assigned. Refer to Scripts.
Shell Extension¶
When installed, Git Extensions adds items to the context menu when a file/folder is right-clicked within Windows Explorer. One of these items
is
Git Extensions from which a further (cascaded) menu can be opened. This settings page determines which items will appear on that cascaded
menu and which will appear in the main context menu. Items that are checked will appear in the cascaded menu.
To the right side of the list of check boxes is a preview that shows you how the Git Extensions menu items will be arranged with your current choices.
By default, what is displayed in the context menu also depends on what item is right-clicked in Windows Explorer; a file or a folder
(and whether the folder is a Git repository or not). If you want Git Extensions to always include all of its context menu items,
check the box
Always show all commands.
Advanced¶
This page allows advanced settings to be modified. Clicking on the ‘+’ symbol on the tree of settings will display further settings. Refer to Confirmations.
Always show checkout dialog¶
Always show the Checkout Branch dialog when swapping branches. This dialog is normally only shown when uncommitted changes exist on the current branch.
Use last chosen "local changes" action as default action.¶
This setting works in conjunction with the ‘Git Extensions/Check for uncommitted changes in checkout branch dialog’ setting. If the ‘Check for uncommitted changes’ setting is checked, then the Checkout Branch dialog is shown only if this setting is unchecked. If this setting is checked, then no dialog is shown and the last chosen action is used.
General¶
Don’t show help images¶
In the Pull, Merge and Rebase dialogs, images are displayed by default to explain what happens with the branches and their commits and the meaning of LOCAL, BASE and REMOTE (for resolving merge conflicts) in different merge or rebase scenarios. If checked, these Help images will not be displayed.
Always show advanced options¶
In the Push, Merge and Rebase dialogs, advanced options are hidden by default and shown only after you click a link or checkbox. If this setting is checked then these options are always shown on those dialogs.
Check for release candidate versions¶
Include release candidate versions when checking for a newer version.
Use Console Emulator for console output in command dialogs¶
Using the Console Emulator for console output in command dialogs may be useful when the running command requires user input, e.g. push or pull using ssh, or confirming gc.
Confirmations¶
This page allows you to turn off certain confirmation popup windows.
Don’t ask to confirm to¶
Amend last commit¶
If checked, do not display the popup warning about the rewriting of history when you have elected to amend the last committed change.
Commit when no branch is currently checked out¶
When committing changes and no branch is currently checked out, GitExtensions warns you and proposes to check out or create a branch. Enable this option to continue working with no warning.
Apply stashed changes after successful pull¶
In the Pull dialog, if
Auto stash is checked, then any changes will be stashed before the pull is performed. Any stashed changes are then re-applied after the pull is complete. If this setting is checked, the stashed changes are applied with no confirmation popup.
Apply stashed changes after successful checkout¶
In the Checkout Branch dialog, if
Stash is checked, then any changes will be stashed before the branch is checked out. If this setting is checked, then the stashed changes will be automatically re-applied after successful checkout of the branch with no confirmation popup.
Add a tracking reference for newly pushed branch¶
When you push a local branch to a remote and it doesn’t have a tracking reference, you are asked to confirm whether you want to add such a reference. If this setting is checked, a tracking reference will always be added if it does not exist.
Push a new branch for the remote¶
When pushing a new branch that does not exist on the remote repository, a confirmation popup will normally be displayed. If this setting is checked, then the new branch will be pushed with no confirmation popup.
Update submodules on checkout¶
When you check out a branch from a repository that has submodules, you will be asked to update the submodules. If this setting is checked, the submodules will be updated without asking.
Resolve conflicts¶
If enabled, then when conflicts are detected GitExtensions will start the Resolve conflicts dialog automatically without any prompt.
Commit changes after conflicts have been resolved¶
Enable this option to start the Commit dialog automatically after all conflicts have been resolved.
Confirm for the second time to abort a merge¶
When aborting a merge, rebase, or other operation that caused conflicts to be resolved, the user is warned about the consequences of aborting and asked whether to continue. If the user chooses to continue the abort, they are asked a second time to confirm that they are sure they want to abort. Enable this option to skip this second confirmation.
Detailed¶
This page allows detailed settings to be modified. Clicking on the ‘+’ symbol on the tree of settings will display further settings.
Push window¶
Get remote branches directly from the remote¶
Git caches remote data locally. This data is updated each time a fetch operation is performed. For better performance, GitExtensions uses the locally cached remote data to fill out controls on the Push dialog. Enable this option if you want GitExtensions to use remote data received directly from the remote server.
Merge window¶
Add log messages¶
If enabled, then in addition to branch names, git will populate the log message with one-line descriptions from at most the given number of actual commits that are being merged. See the --log[=<n>] option of git merge.
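On the command line the equivalent behaviour is sketched below, where feature-branch is a placeholder branch name and 20 is the maximum number of one-line descriptions added to the merge commit message:

git merge --log=20 feature-branch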
Browse repository window¶
Show revision details next to the revision list¶
Enable to move the commit details panel from the tab pages at the bottom of the window to the top right corner.
Console emulator¶
Show the Console tab¶
Show the Console tab in the Browse Repository window.
Console settings¶
Console style¶
Choose one of the predefined ConEmu color schemes. See the ConEmu documentation for details.
Diff Viewer¶
Remember the 'Ignore whitespaces' preference¶
Remember in the GitExtensions settings the latest chosen value of the ‘Ignore whitespaces’ preference. Use the remembered value the next time GitExtensions is opened.
Remember the 'Show nonprinting characters' preference¶
Remember in the GitExtensions settings the latest chosen value of the ‘Show nonprinting characters’ preference. Use the remembered value the next time GitExtensions is opened.
Remember the 'Show entire file' preference¶
Remember in the GitExtensions settings the latest chosen value of the ‘Show entire file’ preference. Use the remembered value the next time GitExtensions is opened.
Remember the 'Number of context lines' preference¶
Remember in the GitExtensions settings the latest chosen value of the ‘Number of context lines’ preference. Use the remembered value the next time GitExtensions is opened.
Omit uninteresting changes from combined diff¶
Includes the git --cc switch when generating a diff. See the --cc option in the git diff documentation.
Open Submodule Diff in separate window¶
If enabled, then double-clicking on a submodule in the Diff file list opens a new instance of GitExtensions with the submodule as the selected repository. If disabled, the File history window is opened for the double-clicked submodule.
Plugins¶
Plugins provide extra functionality for Git Extensions.
Auto compile SubModules¶
This plugin proposes (confirmation required) that you automatically build submodules after they are updated via the GitExtensions Update submodules command.
Periodic background fetch¶
This plugin keeps your remote tracking branches up-to-date automatically by fetching periodically.
Arguments of git command to run¶
Enter the git command and its arguments into the edit box. The default command is
fetch --all, which will fetch all branches from all remotes. You can modify the command if you would prefer, for example, to fetch only a specific remote, e.g.
fetch upstream.
Fetch every (seconds)¶
Enter the number of seconds to wait between each fetch. Enter 0 to disable this plugin.
Refresh view after fetch¶
If checked, the commit log and branch labels will be refreshed after the fetch. If you are browsing the commit log and comparing revisions you may wish to disable the refresh to avoid unexpected changes to the commit log.
Create local tracking branches¶
This plugin will create local tracking branches for all branches on a remote repository. The remote repository is specified when the plugin is run.
Delete obsolete branches¶
This plugin allows you to delete obsolete branches i.e. those branches that are fully merged to another branch. It will display a list of obsolete branches for review before deletion.
Delete obsolete branches older than (days)¶
Select branches created greater than the specified number of days ago.
Find large files¶
Finds large files in the repository and allows you to delete them.
Gerrit Code Review¶
The Gerrit plugin provides integration with Gerrit for GitExtensions. This plugin has been based on the git-review tool.
For more information see:
GitFlow¶
The GitFlow plugin provides high-level repository operations for Vincent Driessen’s branching model
For more information see:
Github¶
This plugin will create an OAuth token so that some common GitHub actions can be integrated with Git Extensions.
For more information see:
Impact Graph¶
This plugin shows in a graphical format the number of commits and counts of changed lines in the repository performed by each person who has committed a change.
Statistics¶
This plugin provides various statistics (and a pie chart) about the current Git repository. For example, number of commits by author, lines of code per language.
Gource¶
Gource is a software version control visualization tool.
For more information see:
Proxy Switcher¶
This plugin can set/unset the value for the http.proxy git config file key as per the settings entered here.
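The underlying key can also be managed manually from the command line; a sketch with a placeholder proxy host and port:

git config --global http.proxy http://proxy.example.com:8080
git config --global --unset http.proxy

The first command sets the proxy and the second removes it, which is essentially what the plugin automates based on the settings entered here.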
Release Notes Generator¶
This plugin will generate ‘release notes’. This involves summarising all commits between the specified from and to commit expressions when the plugin is started. This output can be copied to the clipboard in various formats.
Create Bitbucket Pull Request¶
If your repository is hosted on Atlassian Bitbucket Server then this plugin will enable you to create a pull request for Bitbucket from Git Extensions
For more information see:
A policy contains one or more access rules. Each rule consists of settings that you can configure to manage user access to their Workspace ONE portal as a whole or to specific Web and desktop applications.
A policy rule can be configured to take actions such as block, allow, or step-up authenticate users based on conditions such as network, device type, AirWatch device enrollment and compliant status, or application being accessed.
Network Range
For each rule, you determine the user base by specifying a network range. A network range consists of one or more IP ranges. You create network ranges from the Identity & Access Management tab, Setup > Network Ranges page, before configuring the policy rules for applications.
Device Type
Select the type of device that the rule manages. The client types are Web Browser, Workspace ONE App, iOS, Android, Windows 10, OS X, and All Device Types.
You can configure rules to designate which type of device can access content and all authentication requests coming from that type of device use the policy rule.
Authentication Methods
In the policy rule, you set the order that authentication methods are applied. The authentication methods are applied in the order they are listed. The first identity provider instance that meets the authentication method and network range configuration in the policy is selected. The user authentication request is forwarded to the identity provider instance for authentication. If authentication fails, the next authentication method in the list is selected.
Authentication Session Length
For each rule, you set the number of hours that this authentication is valid. The re-authenticate after value determines the maximum time users have since their last authentication event to access their portal or to start a specific application. For example, a value of 4 in a Web application rule gives users four hours to start the Web application unless they initiate another authentication event that extends the time.
Custom Access Denied Error Message
When users attempt to sign in and fail because of invalid credentials or a misconfiguration, you can display a custom access denied error message. For example, in a rule for mobile devices that you want to manage, if a user tries to sign in from an unenrolled device, you can create the following custom error message: Enroll your device to access corporate resources by clicking the link at the end of this message. If your device is already enrolled, contact support for help.
The default experience for users who log in to the Workspace ONE portal from VMware Identity Manager is to select the domain to which they belong on the first login page that displays.
VMware Identity Manager displays the authentication page based on the access policy rules configured for that domain.
Users are identified uniquely by both their user name and domain. Because users select their domain first, users that have the same user name but in different domains can log in successfully. For example, you can have a user jane in domain eng.example.com and another user jane in domain sales.example.com.
You create a super user account to manage the vSphere infrastructure. The super user account has the same privileges as the [email protected] account. After the bring-up process is complete, the password for the [email protected] account is rotated to a random password, but the password for the super user account does not change. You can, thus, login to SDDC Manager with the super user name and password without having to look up the rotated password for the administrator account.
Procedure
- Type a user name and password for the super user.
The password must be between 8 and 20 characters long and must contain at least one each of the following:
lowercase letter
uppercase letter
number
special character such as ! or @
- Click NEXT.
You must run some PowerShell plug-in workflows to complete the process of adding physical machines and non-vSphere virtual machines to desktop pools using the Horizon vRealize Orchestrator plug-in.
About this task
As an alternative to running the PowerShell workflows listed in this procedure and the Register Machines to Pool workflow, you can run the Add Physical Machines to Pool workflow, available in the Workflows/Example folder.
Prerequisites
Verify that you have the vRealize Orchestrator Plug-In for Microsoft Windows PowerShell, which contains the workflows required for this procedure.
Verify that you have administrator credentials for the Orchestrator server. The account must be a member of the vRealize Orchestrator Admin group configured to authenticate through vCenter Single Sign-On.
Run the Register Machines to Pool workflow to register all machine DNS names into manual unmanaged desktop pools in View. The Register Machines to Pool workflow returns a token (one for each registered DNS) that will be pushed into the Windows Registry of the machines when you run the PowerShell command described in this procedure.
Procedure
- Log in to Orchestrator as an administrator.
- Click the Workflows view in Orchestrator.
- In the workflows hierarchical list, select Add a PowerShell host workflow. and navigate to the
- Right-click the Add a PowerShell host workflow and select Start workflow.
- Provide the host name and fully qualified domain name of the physical machine and click Next.
If the machine is not in a domain, you can use the IP address. If you do not supply the port number, the default port is used.
- Complete the form that appears and click Next.
- Complete the form that appears.
- Click Submit to run the workflow.
- When the workflow finishes, right-click the Invoke a PowerShell Script workflow, located in the PowerShell folder, and select Start workflow.
- Select the host you just added and click Next.
- (Optional) Add the Identity registry key.
- Check whether the hklm:\SOFTWARE\VMware, Inc.\VMware VDM\Agent\Identity registry key exists.
- If the registry key does not exist, run the following command:
New-Item -Path "hklm:\SOFTWARE\VMware, Inc.\VMware VDM\Agent" -Name Identity
- In the Script text area, enter the following command:
New-ItemProperty -Path "hklm:\SOFTWARE\VMware, Inc.\VMware VDM\Agent\Identity" -Name Bootstrap -PropertyType String –Value “TokenReturnedByWorkflow” –Force
For TokenReturnedByWorkflow, use the token returned by the Register Machines to Pool workflow that you previously executed to register machine DNS names.
- Click Submit to run the workflow.
Results
The View Agent token on the machine is now paired with the View Connection Server instance.
RightScale Optima enables you to analyze, report, forecast, and optimize costs across all of your clouds and cloud accounts.
You will need to connect your cloud accounts to RightScale in order to view cloud costs. If you have already connected your cloud accounts in RightScale Cloud Management, then you are ready to begin using Optima.
The Instant Analyzer in Optima.
Optima lets you stay on top of your cloud spend with Scheduled Reports. Use the Instant Analyzer to filter your data (by application, team, user, or any other dimension) and then schedule a daily, weekly or monthly report to be delivered to your email inbox. You’ll get an easy to understand comparison that shows how much your cloud spend is increasing or decreasing.
One of the challenges of managing cloud costs is forecasting future cloud spend. The Scenario Builder in Optima lets you specify your expected growth in instances for your cloud accounts or workloads.
For more information see the Guides or Reference sections.
Teamspaces
A teamspace appears as an application in the instance application navigator. The teamspace includes module links that come from the Project and Portfolio Suite applications, such as the Project, Idea, and Demand applications. Use teamspaces to provide functional and data separation of these applications between different teams in your organization. The following is an example teamspace for a marketing team:
Figure 1. Example teamspace
Teamspace activation
You must activate a teamspace plugin to use the teamspace feature. The following teamspace plugins are available:
- Project Management TeamSpace 1 (com.snc.ppm_teamspace_1)
- Project Management TeamSpace 2 (com.snc.ppm_teamspace_2)
- Project Management TeamSpace 3 (com.snc.ppm_teamspace_3)
- Project Management TeamSpace 4 (com.snc.ppm_teamspace_4)
- Project Management TeamSpace 5 (com.snc.ppm_teamspace_5)
The teamspaces loaded with these plugins contain the same components, but the components have different prefixes. For example, teamspace 1 includes a project table named Teamspace_1 Project [tsp1_project] and teamspace 5 includes a project table named Teamspace_5 Project [tsp5_project]. You can enable any or all of these teamspaces and assign the teamspace-specific roles to the relevant users in the group that should use the teamspace.
Teamspace customization
You can customize the Project and Demand portions of a teamspace without affecting other teamspaces. All of the following customizations to Project or Demand within a teamspace are supported:
- Data model changes, such as adding a field to the Project or Demand form
- Changes to business rules, UI actions, UI policies, security rules, data policies, and workflows
- Changes to shared roles, such as project_manager, demand_manager, and so on
- Form and list layouts, list controls, and related lists
- Dictionary overrides
Related topics:
- Teamspace features: Teamspaces incorporate several features from the Project, Idea, and Demand applications.
- Activate teamspaces: You can activate one or all teamspace plugins to use the teamspaces feature.
- Installed with teamspaces: The tables and roles that are installed with project teamspaces are prefixed with an abbreviation based on the name of the teamspace.
- Configure teamspace settings: After you activate a teamspace plugin, you can configure teamspace settings.
Difference between revisions of "Managing Component Updates (Component release files)"
From Joomla! Documentation
Revision as of 18:00, 31 May 2011
Contents
Articles in this series
Component Release Files
Included in this example are zip files for three releases. You will need to download all three. One of the releases requires version 3.0; the intent is that this will cause an abort of the install or update.
Version 1.1.1
The zip file can be downloaded from democompupdate_111.zip. Diffs of democompupdate_10.zip and democompupdate_111.zip, and democompupdate_13.zip.
Revision history of "JLDAP::setDN/11.1"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 19:21, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JLDAP::setDN/11.1 to API17:JLDAP::setDN without leaving a redirect (Robot: Moved page)
Revision history of "JTableContent:: getAssetParentId/11.1"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 21:20, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JTableContent:: getAssetParentId/11.1 to API17:JTableContent:: getAssetParentId without leaving a redirect (Robot: Moved page)
Difference between revisions of "Writing System Tests for Joomla! - Part 1"
From Joomla! Documentation
Latest revision as of 06:19, 17 May 2013
Contents
- 1 Introduction
- 2 Overview
- 3 Planning the Test
- 4 Writing System Tests with Selenium IDE
- 5 Converting Selenium IDE Tests to PHP
- 6 Creating a Basic System Test in PHP
- 7 More Advanced Topics
Introduction
As documented at Running Automated Tests for the Joomla CMS, automated system tests are included with the Joomla CMS.
As of May 17, 2013, the latest version of Selenium IDE is 2.0.0. You can get it from the Selenium website. Click on the download link and Firefox will install it automatically and then re-start.
Preparing your Environment
You can download the basic and intermediate HTML files here. To run them on your system you will need to edit the URLs in the "open" statements to match your test URL.
Example + 0001 + Test, resulting in
Example0001Test
class Example0001Test extends SeleniumJoomlaTestCase
- Remove the function setUp()
Creating a Basic System Test in PHP
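As a minimal sketch (the URL and the text being checked are placeholders that must be adapted to your test site, and the available helper methods depend on your version of SeleniumJoomlaTestCase), a basic system test might look like this:

<?php

class Example0001Test extends SeleniumJoomlaTestCase
{
    function testFrontPageLoads()
    {
        // Hypothetical test URL; replace with the URL of your Joomla test site.
        $this->open('http://localhost/joomla/');
        // Hypothetical check; assert that some expected text is present on the page.
        $this->assertTrue($this->isTextPresent('Home'));
    }
}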
More Advanced Topics
To learn about more advanced topics related to system testing, please see the article Writing System Tests for Joomla! - Part 2. | https://docs.joomla.org/index.php?title=Writing_System_Tests_for_Joomla!_-_Part_1&diff=99410&oldid=28590 | 2015-04-18T06:05:00 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
User Guide
About the key store
The key store on your BlackBerry smartphone might store the following items:
- Personal certificates or PGP keys (public and private key pairs)
- Certificates that you download using a certification authority profile or the BlackBerry Desktop Software
- Root certificates that are included in the BlackBerry Desktop Software
- Certificates that you download from an LDAP-enabled server or DSML-enabled server
- PGP public keys that you download from an LDAP-enabled server
- Certificates or PGP public keys that you import from your smartphone or a media card
- Certificates or PGP public keys that you add from a message.
Sonar is the ultimate open source platform to manage code quality.
Title
Implement a fully new COPY/PASTE detector algorithm
Keywords
CPD, Algorithm, Simian
Description
The current COPY/PASTE detector is based on PMD/CPD. CPD is a pretty good open source implementation, but it has two main drawbacks: there is not much activity on this library, and it requires a lot of memory, so it cannot be used to analyse millions of lines of code.
Mentor(s)
Freddy Mallet, Evgeny Mandrikov
Constraint
It should be possible to analyse any amount of source code with a limited amount of memory.
Materials
Below is a list of links to some materials which can be very useful during preparation of a proposal, so we decided to share them with you:
---
Implement a Sonar Wallboard
Report, Wallboard, Widgets
Sonar is well-known for its user-friendly interface but doesn't yet provide any mechanism to configure and display a Wallboard.
Simon Brandhof
Reuse the Sonar widget/dashboard mechanism
Implement a plugin to cover a new language like Scala, Pyton, ...
Plugin, Language
Through its plugin ecosystem, Sonar is already able to analyse C, Java, COBOL, Flex, Groovy, ... source code. But lot of exiting languages are not yet covered.
Simon Brandhof, Freddy Mallet
Make the C Rules engine a popular Open Source rule engine for C source code
Plugin, AST, C, Rule
The C Rules engine is still a pretty young plugin which is based on the SonarSource C Plugin. This plugin doesn't yet contain a lot of rules/checks but is ready to grow in order to embed best practices and market standards.
Freddy Mallet
Create Sonarpedia.org
Community, Social, Descriptions
Stackoverflow is a good example of a social community which lets you get answers to any kind of question about programming. Sonarpedia could be similar but dedicated to Quality and Security questions.
By default, when a patron places a hold on a title, the hold targeter will search for copies to fill the hold only at circulating libraries that are open. Copies at closed libraries are not targeted to fill holds. When turned on, this feature enables Evergreen to target copies that have closed circulating libraries to fill holds. Two new org unit settings control this feature.
Use the following setting to target copies for holds at closed circulating libraries:
Use the following setting to target copies for holds IF AND ONLY IF the circulating library is the hold’s pickup library.
This document contains information for an outdated version and may not be maintained any more. If some of your projects still use this version, consider upgrading as soon as possible.
3.2.0-rc1
See 3.2.0 changelog for more information on what is new in 3.2
Change Log
Bugfixes
--18 8b638f5 Using undefined var in ModelAdmin (Loz Calver)
- 2015-08-09 cf9d2d1 Fix duplicate primary key crash on duplicate (Damian Mooyman)
- 2015-08-07 1f0602d Fixed regression from ClassInfo case-sensitivity fix. (Sam Minnee) | https://docs.silverstripe.org/en/3.2/changelogs/rc/3.2.0-rc1 | 2017-03-23T06:08:03 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.silverstripe.org |
To test this feature, setup the rules that you want, then setup items/users
with barcodes that should match. Then try scanning the short version of
those barcodes in the various supported access points.
To report a problem with this documentation or provide feedback, please contact the DIG mailing list.
© 2008-2015 GPLS and others. The Evergreen Project is
a member of the Software
Freedom Conservancy. | http://docs.evergreen-ils.org/2.7/_testing.html | 2017-03-23T06:21:57 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.evergreen-ils.org |
Ticket #1130 (closed defect: wontfix)
the stat() system call hangs when the argument is a remote folder
Description
I built OpenMoko? and QEMU using MokoMakefile? and I am trying to write a C++
appliction for OpenMoko?.
I established a connection between the emulated device and the host PC using pppd,
and i mounted a folder from the host to the device.
When calling stat() on the remote folder mounted as local folder, the program
hangs and does not respond to anything, not even to Ctrl-C.
When calling stat() on a local folder (one from the emulated device), it works fine.
Change History
Note: See TracTickets for help on using tickets. | http://docs.openmoko.org/trac/ticket/1130 | 2017-05-23T00:59:16 | CC-MAIN-2017-22 | 1495463607245.69 | [] | docs.openmoko.org |
These are the possible values that can be or'ed together to set the Program warning mask
Enables warnings when the parser determines that the argument types of a function or method call are such that the operation is guaranteed to produce a constant value.
The default warning mask.
This warning is made up of the following values combined with binary or:
Enables a warning when deprecated code is used.
Enables a warning when a program declares a local variable more than once in the same block; note that this is not a warning but rather an error when assume-local or new-style parse options are set.
Indicates that the embedded code has declared the same global variable more than once.
Enables a warning when an immediate hash is declared and at least one of the keys is repeated.
Enables a warning.
Enables a warning when a function or method call is made with more arguments than are used by the function or method.
Indicates that the embedded code performs some operation that is guaranteed to produce no result (for example, using the [] operator on an integer value)
The default warning mask for user modules.
This warning is made up of the following values combined with binary or:
Indicates that the embedded code is calling an unknown method in a class.
This warning may generate false positives; it may be vaild operation if the calling method is only called from a derived class that actually implements the method. In this case, the cast<> operator can be used to eliminate the warning.
Enables a warning when a function or method call is made with no side effects and the return value is ignored.
Indicates that the embedded code referenced an undeclared variable that will be assumed to be a global variable.
Indicates that the embedded code tried to enable or disable an unknown warning.
Indicates that code cannot be reached (for example; code in the same local block after an unconditional return or thread_exit statement)
This warning is raised when a variable is declared in a block but never referenced.
This warning means that the embedded code tried to change the warning mask, but it was locked, so the warning mask was actually unchanged. | https://docs.qore.org/current/lang/html/group__warning__constants.html | 2017-05-23T01:00:47 | CC-MAIN-2017-22 | 1495463607245.69 | [] | docs.qore.org |
Welcome to BrowserMob Proxy’s documentation!¶
Python client for the BrowserMob Proxy 2.0 REST API.
How to install¶
BrowserMob Proxy is available on PyPI, so you can install it with
pip:
$ pip install browsermob-proxy
Or with easy_install:
$ easy_install browsermob-proxy
Or by cloning the repo from GitHub:
$ git clone git://github.com/AutomatedTester/browsermob-proxy-py.git
Then install it by running:
$ python setup.py install
How to use with selenium-webdriver¶
Manually: server.stop() driver.quit()
How to Contribute¶
Getting Started¶
- Fork the repository on GitHub - well... duh :P
- Create a virtualenv: virtualenv venv
- Activate the virtualenv: . venv/bin/activate
- Install the package in develop mode: python setup.py develop
- Install requirements: pip install -r requirements.txt
- Run the tests to check that everything was successful: `py.test tests
Making Changes¶
Create a topic branch from where you want to base your work. * This is usually the master branch. * Only target release branches if you are certain your fix must be on that
branch.
- To quickly create a topic branch based on master; git checkout -b /my_contribution master. Please avoid working directly on the master branch.
Make commits of logical units.
Check for unnecessary whitespace with git diff –check before committing.
Make sure you have added the necessary tests for your changes.
Run _all_ the tests to assure nothing else was accidentally broken.
Submitting Changes¶
- Push your changes to a topic branch in your fork of the repository.
- Submit a pull request to the main repository
- After feedback has been given we expect responses within two weeks. After two weeks will may close the pull request if it isn’t showing any activity
Contents: | http://browsermob-proxy-py.readthedocs.io/en/latest/ | 2017-05-23T01:15:50 | CC-MAIN-2017-22 | 1495463607245.69 | [] | browsermob-proxy-py.readthedocs.io |
Ticket #1331 (closed defect: community)
Can't reply to "Unknown sender"
Description
In the messages program, there is no way to simply reply to a message from an
unknown sender (means not in contacts).
Right now, if you select the message, and press the write icon, you only get the
number in text of the message, not as the number you're writing to.
Change History
Note: See TracTickets for help on using tickets. | http://docs.openmoko.org/trac/ticket/1331 | 2017-05-23T01:00:58 | CC-MAIN-2017-22 | 1495463607245.69 | [] | docs.openmoko.org |
Release Notes
Local Navigation
Install the COP file for the BlackBerry MVS
- Upload the digitally-signed BlackBerry Mobile Voice System COP file to an SFTP server.
- In the Cisco Unified Communications Manager UI, in the Navigation drop-down list , click Cisco Unified OS Administration.
- Click GO.
- Log in using the OS or Platform administration account.
- Click Software Upgrades > Install/Upgrade.
- In the Source list, select Remote Filesystem.
- In the Directory field, enter the path to the directory on the SFTP server where you uploaded the BlackBerry MVS COP file.
- In the Server field, enter the IP address or host name of the SFTP server.
- In the User Name field, enter the user name for the SFTP sever.
- In the User Password field, enter the password for the SFTP server.
- In the Transfer Protocol list, select SFTP.
- Click Next.
- In the Options/Upgrades list, select the digitally-signed BlackBerry MVS COP file.
- Click Next.
- Verify the MD5 hash value for the digitally-signed BlackBerry MVS COP file against the Cisco website.
- Click Next.
- When the BlackBerry MVS COP file installation is finished, restart all Cisco Unified Communications Manager nodes in the cluster, starting with the Publisher server.
Previous topic: Using the Cisco Options Package (COP) file
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/admin/deliverables/43942/BBMVS_CUCM_Int_Guide_install_COP_file_SP1_1883489_11.jsp | 2014-10-20T08:43:34 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.blackberry.com |
Article:
Click Parameters from the Toolbar in the Articles List View in the Administrator.:
Developers Notes
The filtering parameters in config.xml have the new parameter menu="hide". This hides the filters from the Menu Item's Component pane as you do not want cascading overrides to occur at the menu item level. | http://docs.joomla.org/index.php?title=Help15:Screen.content.15&diff=5006&oldid=5003 | 2014-10-20T09:22:06 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.joomla.org |
Horizontal: 900px; height: 25px; }
The "auto" value in the "margin" attribute centers the item, so this would create a menu area that is 900: 900 h1 { padding: .75em 0 .52em 0; font-size: 0.8em; font-weight: bold; color: #0033CC; background-color: transparent; text-align: center; }
This would cause text in an h1 tag within the main_menu div tag of the html code to be centered in the horizontal menu. It would also have some padding above and below it.
--daganray 09:54, 15 November 2009 (UTC) | http://docs.joomla.org/index.php?title=Horizontal_centering&oldid=17701 | 2014-10-20T08:35:15 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.joomla.org |
Hibernate.orgCommunity Documentation
4.0.1.GA.
Start by creating new Maven project using the Maven archetype plugin as follows:
Example 1.2. Using Maven's archetype plugin to create a sample project using Hibernate Validator
mvn archetype:generate \ -DarchetypeCatalog= \ -DgroupId=com.mycompany \ -DartifactId=beanvalidation-gettingstarted \ -Dversion=1.0-SNAPSHOT \ -Dpackage=com.mycompany
When presented with the list of available archetypes in the JBoss
Maven Repository select the
hibernate-validator-quickstart-archetype. After Maven
has downloaded all dependencies confirm the settings by just pressing
enter. Maven will create your project in the directory
beanvalidation-gettingstarted. or look at further examples referenced in Chapter 7, Further reading. To get a deeper understanding of the Bean Validation.ConstraintPayload; asign custom payload
objects to a constraint. This attribute is not used by the API
itself.
An examle for a custom payload could be the definition of a severity.
public class Severity { public static class Info extends ConstraintPayload {}; public static class Error extends ConstraintPayload {}; }
(messge, explicitely.
The passed-in
ConstraintValidatorContext
could be used to raise any custom validation errors, but as we are fine
with the default behavior, we can ignore that parameter for now.
Finally we need to specify the error message, that shall be used,
in case a
@CheckCase constraint is violated. To
do so, we add the following to our custom
ValidationMessages.properties (see also Section 2.2.4, “Message interpolation”)
Example 3.6..7..ConstraintPayload;.9. overriden allwow, )
Valid.7, “TraversableResolver interface”).
Example 5.7..8, “Providing a custom TraversableResolver”.
Example 5.9, “Providing a custom ConstraintValidatorFactory”).
Example 5.10. seperated, fully specified classnames seperated,.2.2.1, “Hibernate event-based validation” to your
project and register it manually.
When working with JSF2 or JBoss Seam and Hibernate Validator (Bean Validation) is present in the runtime environment validation is triggered for every field in the application. ??? furhter! | http://docs.jboss.org/hibernate/validator/4.0.1/reference/en/html_single/ | 2014-10-20T08:07:50 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.jboss.org |
This help page or section is in the process of an expansion or major restructuring. You are welcome to assist in its construction by editing it as well. If this help page or section has not been edited in several days, please remove this template.
This page was last edited by Tom Hutchison (talk| contribs) 7 months ago. (Purge)
Use Special:Upload to upload files, to view or search previously uploaded images go to the list of uploaded files, uploads and deletions are also logged in the upload log. | http://docs.joomla.org/Help:Uploading_Files | 2014-10-20T09:21:16 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.joomla.org |
Redrings (talk| contribs) 5 years::.# Two Using the Selectors. | http://docs.joomla.org/index.php?title=Content_creators&diff=15060&oldid=15059 | 2014-10-20T09:14:08 | CC-MAIN-2014-42 | 1413507442288.9 | [] | docs.joomla.org |
Main platform components
Let's go through the main components of the FlowX.AI platform:
FlowX.AI
Engine
- is the core of the platform. It runs the business processes, coordinating integrations and the UI
FlowX.AI
Designer
- is a collaborative, no-code, web-based application development environment that enables users to create web and mobile applications without having to know how to code:
Develop processes based on
BPMN 2.0
Configure user interfaces for the processes for both generated and custom screens
Define business rules and validations via DMN files or via the
MVEL
scripting language
Create integration connectors in a visual manner
Create data models for your applications
FlowX.AI
Plugins
- the platform comes with some ready made integrations, such as a document management solution, a plugin for sending various types of notifications, an OCR plugin and a CRM
Process
Renderer
SDKs - used in the Web (Angular), iOS, and Android applications in order to render the process screens and orchestrate the custom components
Overview
An overview of the most important components of the platform
The Engine
We call it the engine because it’s a nice analogy, once deployed on an existing stack, FlowX.AI becomes the core of your digital operating model.
With it you can:
create any type of external or internal facing application
redesign business processes from analog, paper-based ones to fully digital and automated ones,
manage data and
manage integrations. So you can hook it up to existing CRMs, ERPs, KYC, transaction data and many more.
FlowX.AI Engine runs the business processes, coordinating integrations and the omnichannel UI. It is a Kafka-based event driven platform, that is able to orchestrate, generate and integrate with any type of legacy system, without expensive or risky upgrades.
This is extremely important because often, digital apps used by a bank’s clients for example, are limited by the load imposed by the core banking system. And the customers see blocked screens and endlessly spinning flywheels. FlowX.AI buffers this load, offering a 0.2s response time, thus the customer never has to wait for data to load.
FlowX Designer
The platform has
no-code/low-code capabilities
, meaning applications can be developed in a visual way, available for anyone with a powerful business idea. So we’re talking about business analysts, product managers - people without advanced programming skills, but also experienced developers.
The process visual designer works on BPMN 2.0 standard - meaning that the learning curve for business analysts or product managers is quite fast. Thus, creating new applications (e.g. onboarding an SME client for banks) or adding new functionality (allow personal data changes in an app) takes only 10 days, instead of 6 to 8 months.
However, we do support custom CSS or custom screens. Because we’re aware each brand is different and each has its own CI, so you need to have the ability to create UIs that respect your brand guidelines.
AI - Rendered SDKs
Also, we provide web and native mobile SDKs, so that every app you create is automatically an omnichannel one: it can be displayed in a browser, embedded in an internet banking interface, or in a mobile banking app. Or even deployed as a standalone app in Google Play or AppStore.
Unlike other no-code/low-code platform which provide templates or building blocks for the UI, ours is generated on the fly, as a business analyst creates the process, and the data points. This feature reduces the need to use UX/UI expertise, the UI being generated respecting state-of the art UI frameworks.
Industry solutions & Plugins
In order to help you develop faster solutions, we’ve built our platform with extensibility in mind. This is why we've developed both industry-specific solutions and general plugins that extend the functionality of the platform in a more general way.
Industry Solutions
Our Finance solutions suite includes customer or employee journeys - such as loan origination, onboarding, mortgage or MiFID. These are like a template based on best industry practices, that you can take, customize and integrate with your process and data, and launch in a matter of days.
Plugins
Plugins are bits of functionality that allow you to expand the functionality of the platform - for example we have OCR plugins, notification plugin, chat and so on. On our roadmap, we’re looking to enhance this plugins library with 3rd party providers, so stay tuned for more.
Integrations
Connecting your legacy systems or third party apps to the engine is easily done through custom integrations. These can be developed using your preferred tech stack, the only requirement is that they connect to Kafka. These could be anything from legacy APIs, custom file exchange solutions, or RPA
Intro to Redis
Next - Getting started
Step by step example
Last modified
18d ago
Copy link
Contents
Overview
The Engine
FlowX Designer
AI - Rendered SDKs
Industry solutions & Plugins | https://docs.flowx.ai/getting-started/main-platform-components | 2021-11-27T14:02:41 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.flowx.ai |
-23
Welcome to tonight's release! Some quick announcements before the details of the release.
We're excited to open registration for
ThousandEyes Connect
in Silicon Valley. Join us on July 12th to meet industry peers, share insights and learn from networking experts. Past speakers have included leaders from Cisco, Intuit, Oracle, and Zendesk. And if that's not enough, we'll give you a cool
t-shirt
for attending!
Lots of great new
ThousandEyes blog
posts to tell you about.
Senior Product Manager, Gopi Gopalakrishnan, has the
end-of-quarter review
of the major new features released over the past three months. If you need to catch up on all the good stuff that's new in ThousandEyes, this blog post is for you!
Senior Product Manager, Archana Kesavan's post
Webex Monitoring
provides details for monitoring the performance and availability of the many facets that comprise the Cisco Webex service. Take a peek into the Webex data center architecture and see how your meeting works. If you depend on Webex for business collaboration, you won't want to miss this info.
Exciting news from Senior Site Reliabiity Engineer, Raúl Benencia via the ThousandEyes blog. In his debut post, Raúl describes Shoelaces, a software tool which was written by his team to help manage ThousandEyes' infrastructure. Shoelaces works with DHCP and TFTP to provide bootstrapping of remote servers. The tool is lightweight, flexible and is now
freely available
under the Apache Public License 2.0! If you provision remote servers, check it out!
On to the release...
Cloud Agents
Another addition to our growing number of Cloud Agents in broadband ISP networks--this one in the Verizon network, in Seattle Washington. Additionally, we have a new Cloud Agent in Takamatsu, Japan. Konnichiwa!
Test enhancements
End to end loss
Previously, when a test performed TCP-based end to end loss measurements for the Overview, the ThousandEyes Agent would first attempt to use TCP selective acknowledgement (SACK) to send 50 packets to the target. If the test target did not support TCP SACK, the Agent would fall back to sending 50 separate synchronization (SYN) packets. This behavior is now configurable in the test's Advanced Settings. The behavior of each setting:
Prefer SACK
: The previous behavior: try SACK mode and fall back to SYN mode if needed. This is the default.
Force SACK
: Use SACK mode only
Force SYN
: Use SYN mode only
Not all targets will support TCP SACK. If a test is configured to use
Force SACK
mode but the target does not support selective acknowledgements, then the test will display the error "Target doesn't support SACK mode" and will not transmit 50 packets to the target.
For customers interested in learning about TCP selective acknowledgements, a great place to start is Jeremy Stretch's
TCP Selective Acknowledgments (SACK)
from the PacketLife blog.
Account Settings
We've revamped some of the tabs in the
Account Settings
page to better group the information as we add more functionality. The Security & Authentication and Usage tabs have been replaced with the Organization and Billing tabs.
The new Organization tab contains information that applies to a customer's entire organization, such as the single sign-on (SSO) configuration and password expiry, as well as the organization name and time zone settings in the app. Additionally, the number of licenses used by Enterprise and Endpoint Agents are displayed here, along with past, present and projected Cloud Unit usage, broken out by Account Group, Test type and individual Test.
The new Billing tab has information on the customer's subscription, including total licenses and Cloud Units, billing address and contact, payment method and invoice history.
Device Layer
Bulk Actions
In order to make easy the management of large numbers of devices, we're adding a feature to perform bulk actions for Device Layer devices.
Each device entry on the Devices tab of the Device Settings page now has a checkbox. When one or more devices' boxes are checked, a bar at the bottom of the page will appear, providing the ability to configure all checked devices. Configuration options include M
onitor
and
Unmonitor
buttons, and an
Edit Devices
button which provides the following actions:
Notifications for Slack and Hipchat
When Device Layer's Notifications for new interfaces or devices are sent to Slack or Hipchat, the color of the text is now blue (informational) instead of red (error).
Bug fixes & minor features
Here's the bug fixes and minor features in this week's release:
Fixed an issue where Target to Source path trace data was not shown in some bidirectional Agent to Agent tests.
When configuring a report, the Custom Date Range selector now consistently applies the chosen dates.
The login field for app.thousandeyes.com now strips trailing whitespace from a username, rather than returning an error.
Updated the Activity Log's Time selector with the new design used in Reports and Dashboards.
In the Endpoint Data's Session Details, changed the WiFi access point icon color from blue to grey, consistent with the ethernet connection icon.
When opening a newly created Endpoint Data Saved Event (a.k.a. snapshot), if the data has not finished saving, a "This saved event is still being generated" message is now displayed.
Questions and comments
Have feedback or questions about tonight's release? Want to comment on your experience trying Shoelaces?
Send us an email
!
Release Notes: 2018-06-06
Release Notes: 2018-05-09
Last modified
1yr ago
Copy link
Contents
Cloud Agents
Test enhancements
End to end loss
Account Settings
Device Layer
Bulk Actions
Notifications for Slack and Hipchat
Bug fixes & minor features
Questions and comments | https://docs.thousandeyes.com/archived-release-notes/2018/2018-05-23-release-notes | 2021-11-27T14:21:00 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.thousandeyes.com |
See Also: UIAcceleration Members
The iOS accelerometer (see UIKit.UIAccelerometer) reports acceleration events as a triplet of vectors. Looking at the face of an iOS device, positive X is to the right, positive Y is to the top, and positive Z is towards the viewer. The value of each vector is a double whose units are g-force.
Application developers should not rely on the accuracy of a single acceleration event but rather should rather average or otherwise interpolate from a series of readings.
The UIEventSubtype.MotionShake event subtype can be used to detect a shake gesture. (See UIKit.UIEvent.) | http://docs.go-mono.com/monodoc.ashx?link=T%3AUIKit.UIAcceleration | 2021-11-27T13:44:36 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.go-mono.com |
pasture
A Rust library for working with point cloud data. It features:
- Fine-grained support for arbitrary point attributes, similar to PDAL, but with added type safety
- A very flexible memory model, natively supporting both Array-of-Structs (AoS) and Struct-of-Arrays (SoA) memory layouts
- Support for reading and writing various point cloud formats with the
pasture-iocrate
- A growing set of algorithms with the
pasture-algorithmscrate
To this end,
pasture chooses flexibility over simplicity. If you are looking for something small and simple, for example to work with LAS files, try a crate like
las. If you are planning to implement high-performance tools and services that will work with very large point cloud data,
pasture is what you are looking for!
Usage
Add this to your
Cargo.toml:
[dependencies] pasture-core = "0.1.0" # You probably also want I/O support pasture-io = "0.1.0"
Development
pasture is in the early stages of development and is not yet stable.
License
pasture is distributed under the terms of the Apacke License (Version 2.0). See LICENSE for details. | https://docs.rs/crate/pasture-core/0.2.0 | 2021-11-27T15:36:29 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.rs |
java.lang.Object
com.atlassian.jira.functest.framework.util.form.FormParameterUtilcom.atlassian.jira.functest.framework.util.form.FormParameterUtil
public class FormParameterUtil
This class is used to modify form requests that are to be submitted It does this by modifying the request on a form before submitting it. If you attempt to resunbmit a form then an assertion error ewill be thrown. also note that for duration from construction to form submission the validation of parameter values in the test is disabled. You have to work with the returned DOM, rather than with the tester, as the state of the request and response objects will not be in sync
public FormParameterUtil(net.sourceforge.jwebunit.WebTester tester, String formIdentifierOrName, String submitButtonName)
tester- the currently executing WebTester
formIdentifierOrName- either the id or the name of the form you want to modify
submitButtonName- the name of the submit button, if null then the default button is used
public void addOptionToHtmlSelect(String select, String[] value)
select- the name of the select to which to add an option
value- a string array containing the option values you want to submit
public void replaceOptionsinHtmlSelect(String select, String[] value)
select- the name of the select to which to repalce all options
value- a string array containing the option values you want to submit
public void setFormElement(String formElementName, String value)
formElementName- the name of the formElement
value- a string containing the value you want to submit
public void setParameters(Map<String,String[]> parameters)
parameters- A map containing the names and values of the parmeters to be submited
public Node submitForm()
DomNodeCopieris used to return a shallow copy of the Document | https://docs.atlassian.com/software/jira/docs/api/6.3.4/com/atlassian/jira/functest/framework/util/form/FormParameterUtil.html | 2021-11-27T14:45:29 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.atlassian.com |
mars.learn.metrics.fbeta_score¶
- mars.learn.metrics.fbeta_score(y_true, y_pred, *, beta, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn')[source]¶
Compute the F-beta score
The F-beta score is the weighted harmonic mean of precision and recall, reaching its optimal value at 1 and its worst value at 0.
The beta parameter determines the weight of recall in the combined score.
beta < 1lends more weight to precision, while
beta > 1favors recall (
beta -> 0considers only precision,
beta -> +infonly recall).
Read more in the User Guide.
- Parameters
y_true (1d array-like, or label indicator array / sparse matrix) – Ground truth (correct) target values.
y_pred (1d array-like, or label indicator array / sparse matrix) – Estimated targets as returned by a classifier.
beta (float) – Determines the weight of recall in the combinedbeta_score – F-beta score of the positive class in binary classification or weighted average of the F-beta score of each class for the multiclass task.
- Return type
float (if average is not None) or array of float, shape = [n_unique_labels]
References
- 1
R. Baeza-Yates and B. Ribeiro-Neto (2011). Modern Information Retrieval. Addison Wesley, pp. 327-328.
- 2
Wikipedia entry for the F1-score
Examples
>>> from mars.learn.metrics import fbeta_score >>> y_true = [0, 1, 2, 0, 1, 2] >>> y_pred = [0, 2, 1, 0, 0, 1] >>> fbeta_score(y_true, y_pred, average='macro', beta=0.5) 0.23... >>> fbeta_score(y_true, y_pred, average='micro', beta=0.5) 0.33... >>> fbeta_score(y_true, y_pred, average='weighted', beta=0.5) 0.23... >>> fbeta_score(y_true, y_pred, average=None, beta=0.5) array([0.71..., 0. , 0. ])
Notes
When
true positive + false positive == 0or
true positive + false negative == 0, f-score returns 0 and raises
UndefinedMetricWarning. This behavior can be modified with
zero_division. | https://docs.pymars.org/en/latest/reference/learn/generated/mars.learn.metrics.fbeta_score.html | 2021-11-27T13:39:24 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.pymars.org |
Date: Thu, 14 Jun 2018 07:20:17 +0800 From: blubee blubeeme <[email protected]> To: Valeri Galtsev <[email protected]> Cc: [email protected] Subject: Re: How to detect single user mode in FreeBSD ? Message-ID: <CALM2mEk9oVaswyAPLoe1G+gO1Ftyw0NOt_0eksn68m5mE0tHtQ@mail.gmail.com> In-Reply-To: <[email protected]>>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help. > > Valeri > -- > ++++++++++++++++++++++++++++++++++++++++ >" >
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=161121+0+/usr/local/www/mailindex/archive/2018/freebsd-questions/20180617.freebsd-questions | 2021-11-27T15:44:23 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.freebsd.org |
lightkurve.correctors.corrector.Corrector¶
- class lightkurve.correctors.corrector.Corrector(original_lc: lightkurve.lightcurve.LightCurve)[source]¶
Abstract base class documenting the required structure of classes designed to remove systematic noise from light curves.
- Attributes
- original_lcLightCurve
The uncorrected light curve. Must be passed into (or computed by) the constructor method.
- corrected_lcLightCurve
Corrected light curve. Must be updated upon each call to the
correct()method.
- cadence_masknp.array of dtype=bool
Boolean array with the same length as
original_lc. True indicates that a cadence should be used to fit the noise model. By setting certain cadences to False, users can exclude those cadences from informing the noise model, which will help prevent the overfitting of those signals (e.g. exoplanet transits). By default, the cadence mask is True across all cadences.
Methods
- __init__(original_lc: lightkurve.lightcurve.LightCurve) None [source]¶
Constructor method.
The constructor shall: * accept all data required to run the correction (e.g. light curves, target pixel files, engineering data). * instantiate the
original_lcproperty.
Methods
Attributes | https://docs.lightkurve.org/reference/api/lightkurve.correctors.corrector.Corrector.html | 2021-11-27T14:24:16 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.lightkurve.org |
Uri.
Check Host Name(String) Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Determines whether the specified host name is a valid DNS name.
public: static UriHostNameType CheckHostName(System::String ^ name);
public static UriHostNameType CheckHostName (string name);
public static UriHostNameType CheckHostName (string? name);
static member CheckHostName : string -> UriHostNameType
Public Shared Function CheckHostName (name As String) As UriHostNameType
Parameters
The host name to validate. This can be an IPv4 or IPv6 address or an Internet host name.
Returns
The type of the host name. If the type of the host name cannot be determined or if the host name is
null or a zero-length string, this method returns Unknown.
Examples
The following example checks whether the host name is valid.
Console::WriteLine( Uri::CheckHostName( "" ) );
Console.WriteLine(Uri.CheckHostName(""));
Console.WriteLine(Uri.CheckHostName(""))
Remarks
The CheckHostName method checks that the host name provided meets the requirements for a valid Internet host name. It does not, however, perform a host-name lookup to verify the existence of the host. | https://docs.microsoft.com/en-us/dotnet/api/system.uri.checkhostname?view=netframework-4.8 | 2021-11-27T15:42:50 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.microsoft.com |
RadSchedulerDialog
RadSchedulerDialog is a basic dialog class. It is inherited by the following RadScheduler's dialogs:
- DeleteRecurringAppointmentDialog: it is shown when you try to delete a recurring appointment allowing the user to delete just a single occurrence or the entire series.
- EditAppointmentDialog: it is shown when an appointment is about to be edited.
- EditRecurrenceDialog: it is shown when you try to add a recurrence rule to an appointment.
- OpenRecurringAppointmentDialog: it is shown when you try to edit a recurring appointment allowing you to specify whether to edit the specific occurrence or the entire series. | https://docs.telerik.com/devtools/winforms/controls/scheduler/dialogs/radschedulerdialog | 2021-11-27T15:02:47 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.telerik.com |
operator_load_balancer_scheme: internalin your cluster configuration file before creating your cluster), you can use VPC Peering to enable your Cortex CLI to connect to your cluster operator from another VPC so that you may run
cortexcommands. Note that because the operator validates that the CLI user is an active IAM user in the same AWS account as the Cortex cluster, it is usually unnecessary to configure the operator's load balancer to be internal.
api_load_balancer_scheme: internalin your cluster configuration file before creating your cluster) and you disabled API Gateway for your API (i.e. you set
api_gateway: nonein the
networkingfield of your api configuration), you can use VPC Peering to enable prediction requests from another VPC.
kubernetes.io/service-nametag: | https://docs.cortex.dev/v/0.22/guides/vpc-peering | 2021-11-27T14:00:02 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.cortex.dev |
You can manually enable protection domain monitoring using the NetApp Element Configuration extension point. You can select a protection domain threshold based on node or chassis domains.
A chassis domain emphasizes the resiliency of the cluster to withstand a chassis-level failure. A node domain emphasizes a select group of nodes, potentially across chassis. A chassis domain requires more potential capacity resources than a node domain to be resilient to failure. When a protection domain threshold is exceeded, a cluster no longer has sufficient capacity to heal from failure while also maintaining undisrupted data availability. | http://docs.netapp.com/hci/topic/com.netapp.doc.hci-vcp-ug-16p1/GUID-DFC68D4A-E715-4D64-9451-446407887A3D.html | 2021-11-27T13:59:58 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.netapp.com |
NCryptAlgorithmName structure (ncrypt.h)
The NCryptAlgorithmName structure is used to contain information about a CNG algorithm.
Syntax
typedef struct _NCryptAlgorithmName { LPWSTR pszName; DWORD dwClass; DWORD dwAlgOperations; DWORD dwFlags; } NCryptAlgorithmName;
pszName
A pointer to a null-terminated Unicode string that contains the name of the algorithm. This can be one of the standard CNG Algorithm Identifiers or the identifier for another registered algorithm.
dwClass
A DWORD value that defines which algorithm class this algorithm belongs to. This can be one of the following values.
dwAlgOperations
A DWORD value that defines which operational classes this algorithm belongs to. This can be a combination of one or more of the following values.
dwFlags
A set of flags that provide more information about the algorithm. | https://docs.microsoft.com/en-us/windows/win32/api/ncrypt/ns-ncrypt-ncryptalgorithmname | 2021-11-27T15:36:02 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.microsoft.com |
You're viewing Apigee Edge documentation.
View Apigee X documentation.
On Saturday, March 25, 2017, we began releasing a new cloud version of Apigee Edge for Public Cloud.
Bugs fixed
The following bug is fixed in this release. This list is primarily for users checking to see if their support tickets have been fixed. It's not designed to provide detailed information for all users. | https://docs.apigee.com/release/notes/17031501-apigee-edge-public-cloud-release-notes-ui?authuser=0&hl=ja | 2021-11-27T13:41:06 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.apigee.com |
Troubleshooting Windows Subsystem for Linux
We have covered some common troubleshooting scenarios associated with WSL below, but please consider searching the issues filed in the WSL product repo on GitHub as well.
File an issue, bug report, feature request.
Installation issues
Installation failed with error 0x80070003
- The Windows Subsystem for Linux only runs on your system drive (usually this is your
C:drive). Make sure that distributions are stored on your system drive:
- Open Settings -> System -->
- Ensure.
Error: Windows Subsystem for Linux has no installed distributions.
- If you receive this error after you have already installed WSL distributions:
- Run the distribution at least once before invoking it from the command line.
- Check whether you may be running separate user accounts. Running your primary user account with elevated permissions (in admin mode) should not result in this error, but you should ensure that you aren't accidentally running the built-in Administrator account that comes with Windows. This is a separate user account and will not show any installed WSL distributions by design. For more info, see Enable and Disable the Built-in Administrator Account.
- The WSL executable is only installed to the native system directory. When you’re running a 32-bit process on 64-bit Windows (or on ARM64, any non-native combination), the hosted non-native process actually sees a different System32 folder. (The one a 32-bit process sees on x64 Windows is stored on disk at \Windows\SysWOW64.) You can access the “native” system32 from a hosted process by looking in the virtual folder:
\Windows\sysnative. It won’t actually be present on disk, mind you, but the filesystem path resolver will find it..
Common issues
I'm on Windows 10 version 1903 and I still do not see options for WSL 2
This is likely because your machine has not yet taken the backport for WSL 2. The simplest way to resolve this is by going to Windows Settings and clicking 'Check for Updates' to install the latest updates on your system. See the full instructions on taking the backport.
If you hit 'Check for Updates' and still do not receive the update you can install KB KB4566116 manually.
Error: 0x1bc when
wsl --set-default-version 2
This may happen when 'Display Language' or 'System Locale' setting is not English.
wsl --set-default-version 2 Error: 0x1bc For information on key differences with WSL 2 please visit
The actual error for
0x1bc is:
WSL 2 requires an update to its kernel component. For information please visit
For more information, please refer to issue 5749
Cannot access WSL files from Windows
A 9p protocol file server provides the service on the Linux side to allow Windows to access the Linux file system. If you cannot access WSL using
\\wsl$ on Windows, it could be because 9P did not start correctly.
To check this, you can check the start up logs using:
dmesg |grep 9p, and this will show you any errors. A successful output looks like the following:
[ 0.363323] 9p: Installing v9fs 9p2000 file system support [ 0.363336] FS-Cache: Netfs '9p' registered for caching [ 0.398989] 9pnet: Installing 9P2000 support
Please see this Github thread for further discussion on this issue.
Can't start WSL 2 distribution and only see 'WSL 2' in output
If your display language is not English, then it is possible you are seeing a truncated version of an error text.
C:\Users\me>wsl WSL 2
To resolve this issue, please visit and install the kernel manually by following the directions on that doc page.
command not found when executing windows .exe in linux
Users can run Windows executables like notepad.exe directly from Linux. Sometimes, you may hit "command not found" like below:
$ notepad.exe -bash: notepad.exe: command not found
If there are no win32 paths in your $PATH, interop isn't going to find the .exe.
You can verify it by running
echo $PATH in Linux. It's expected that you will see a win32 path (for example, /mnt/c/Windows) in the output.
If you can't see any Windows paths then most likely your PATH is being overwritten by your Linux shell.
Here is a an example that /etc/profile on Debian contributed to the problem:
if [ "`id -u`" -eq 0 ]; then PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" else PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games" fi
The correct way on Debian is to remove above lines. You may also append $PATH during the assignment like below, but this lead to some other problems with WSL and VSCode..
if [ "`id -u`" -eq 0 ]; then PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH" else PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:$PATH" fi
For more information, see issue 5296 and issue 5779.
"Error: 0x80370102 The virtual machine could not be started because a required feature is not installed."
Please enable the Virtual Machine Platform Windows feature and ensure virtualization is enabled in the BIOS.
Check the Hyper-V system requirements
If your machine is a VM, please enable nested virtualization manually. Launch powershell with admin, and run:
Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true
Please follow guidelines from your PC's manufacturer on how to enable virtualization. In general, this can involve using the system BIOS to ensure that these features are enabled on your CPU. Instructions for this process can vary from machine to machine, please see this article from Bleeping Computer for an example.
Restart your machine after enabling the
Virtual Machine Platformoptional component.
Additionally, if you have 3rd party hypervisors installed (Such as VMware or VirtualBox) then please ensure you have these on the latest versions which can support HyperV (VMware 15.5.5+ and VirtualBox 6+) or are turned off.
Learn more about how to Configure Nested Virtualization when running Hyper-V in a Virtual Machine.
Bash loses network connectivity once connected to a VPN
If after connecting to a VPN on Windows, bash loses network connectivity, try this workaround from within bash. This workaround will allow you to manually override the DNS resolution through
/etc/resolv.conf.
- Take a note of the DNS server of the VPN from doing
ipconfig.exe /all
- Make a copy of the existing resolv.conf
sudo cp /etc/resolv.conf /etc/resolv.conf.new
- Unlink the current resolv.conf
sudo unlink /etc/resolv.conf
sudo mv /etc/resolv.conf.new /etc/resolv.conf
- Open
/etc/resolv.confand
a. Delete the first line from the file, which says "# This file was automatically generated by WSL. To stop automatic generation of this file, remove this line.".
b. Add the DNS entry from (1) above as the very first entry in the list of DNS servers.
c. Close the file.
Once you have disconnected the VPN, you will have to revert the changes to
/etc/resolv.conf. To do this, do:
cd /etc
sudo mv resolv.conf resolv.conf.new
sudo ln -s ../run/resolvconf/resolv.conf resolv.conf
Starting WSL or installing a distribution returns an error code
Follow these instructions to collect detailed logs and file an issue on our GitHub.
Updating WSL
There are two components of Windows Subsystem for Linux that can require updating.
To update the Windows Subsystem for Linux itself, use the command
wsl --updatein PowerShell or CMD.
To update the specific Linux distribution user binaries, use the command:
apt-get update | apt-get upgradein the Linux distribution that you are seeking to update.
Apt-get upgrade
"Error: 0x80040306" on installation
This has to do with the fact that we do not support legacy console. To turn off legacy console:
- Open cmd.exe
- Right click title bar -> Properties -> Uncheck Use legacy console
- Click OK
"Error: 0x80040154" after Windows update
The Windows Subsystem for Linux feature may be disabled during a Windows update. If this happens the Windows feature must be re-enabled. Instructions for enabling the Windows Subsystem for Linux can be found in the Manual Installation Guide.
Changing the display language
Installation issues after Windows system restore
- Delete the
%windir%\System32\Tasks\Microsoft\Windows\Windows Subsystem for Linuxfolder.
Note: Do not do this if your optional feature is fully installed and working.
- Enable the WSL optional feature (if not already)
- Reboot
- lxrun /uninstall /full
- Install bash
No internet access in WSL
Some users have reported issues with specific firewall applications blocking internet access in WSL. The firewalls reported are:
- Kaspersky
- AVG
- Avast
- Symantec Endpoint Protection
In some cases turning off the firewall allows for access. In some cases simply having the firewall installed looks to block access.
If you are using Microsoft Defender Firewall, unchecking "Blocks all incoming connections, including those in the list of allowed apps." allows for access.
Permission Denied error when using ping
For Windows Anniversary Update, version 1607, administrator privileges in Windows are required to run ping in WSL. To run ping, run Bash on Ubuntu on Windows as an administrator, or run bash.exe from a CMD/PowerShell prompt with administrator privileges.
For later versions of Windows, Build 14926+, administrator privileges are no longer required.
Bash is hung
If while working with bash, you find that bash is hung (or deadlocked) and not responding to inputs, help us diagnose the issue by collecting and reporting a memory dump. Note that these steps will crash your system. Do not do this if you are not comfortable with that or save your work prior to doing this.
To collect a memory dump
Change the memory dump type to "complete memory dump". While changing the dump type, take a note of your current type.
Use the steps to configure crash using keyboard control.
Repro the hang or deadlock.
Crash the system using the key sequence from (2).
The system will crash and collect the memory dump.
Once the system reboots, report the memory.dmp to [email protected]. The default location of the dump file is %SystemRoot%\memory.dmp or C:\Windows\memory.dmp if C: is the system drive. In the email, note that the dump is for the WSL or Bash on Windows team.
Restore the memory dump type to the original setting.
Check your build number
To find your PC's architecture and Windows build number, open
Settings > System > About
Look for the OS Build and System Type fields.
To find your Windows Server build number, run the following in PowerShell:
systeminfo | Select-String "^OS Name","^OS Version"
Confirm WSL is enabled
You can confirm that the Windows Subsystem for Linux is enabled by running the following in PowerShell:
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
OpenSSH-Server connection issues
Trying to connect your SSH server is failed with the following error: "Connection closed by 127.0.0.1 port 22".
Make sure your OpenSSH Server is running:
sudo service ssh status
and you've followed this tutorial:
Stop the sshd service and start sshd in debug mode:
sudo service ssh stop sudo /usr/sbin/sshd -d
Check the startup logs and make sure HostKeys are available and you don't see log messages such as:
debug1: sshd version OpenSSH_7.2, OpenSSL 1.0.2g 1 Mar 2016 debug1: key_load_private: incorrect passphrase supplied to decrypt private key debug1: key_load_public: No such file or directory Could not load host key: /etc/ssh/ssh_host_rsa_key debug1: key_load_private: No such file or directory debug1: key_load_public: No such file or directory Could not load host key: /etc/ssh/ssh_host_dsa_key debug1: key_load_private: No such file or directory debug1: key_load_public: No such file or directory Could not load host key: /etc/ssh/ssh_host_ecdsa_key debug1: key_load_private: No such file or directory debug1: key_load_public: No such file or directory Could not load host key: /etc/ssh/ssh_host_ed25519_key
If you do see such messages and the keys are missing under
/etc/ssh/, you will have to regenerate the keys or just purge&install openssh-server:
sudo apt-get purge openssh-server sudo apt-get install openssh-server
"The referenced assembly could not be found." when enabling the WSL optional feature
This error is related to being in a bad install state. Please complete the following steps to try and fix this issue:
If you are running the enable WSL feature command from PowerShell, try using the GUI instead by opening the start menu, searching for 'Turn Windows features on or off' and then in the list select 'Windows Subsystem for Linux' which will install the optional component.
Update your version of Windows by going to Settings, Updates, and clicking 'Check for Updates'
If both of those fail and you need to access WSL please consider upgrading in place by reinstalling Windows 10 using installation media and selecting 'Keep Everything' to ensure your apps and files are preserved. You can find instructions on how to do so at the Reinstall Windows 10 page.
Correct (SSH related) permission errors
If you're seeing this error:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Permissions 0777 for '/home/artur/.ssh/private-key.pem' are too open.
To fix this, append the following to the the
/etc/wsl.conf file:
[automount] enabled = true options = metadata,uid=1000,gid=1000,umask=0022
Please note that adding this command will include metadata and modify the file permissions on the Windows files seen from WSL. Please see the File System Permissions for more information.
Running Windows commands fails inside a distribution
Some distributions available in Microsoft Store are yet not fully compatible to run Windows commands out of the box. If you get an error
-bash: powershell.exe: command not found running
powershell.exe /c start . or any other Windows command, you can resolve it following these steps:
- In your WSL distribution run
echo $PATH.
If it does not include:
/mnt/c/Windows/system32something is redefining the standard PATH variable.
- Check profile settings with
cat /etc/profile.
If it contains assignment of the PATH variable, edit the file to comment out PATH assignment block with a # character.
- Check if wsl.conf is present
cat /etc/wsl.confand make sure it does not contain
appendWindowsPath=false, otherwise comment it out.
- Restart distribution by typing
wsl -tfollowed by distribution name or run
wsl --shutdowneither in cmd or PowerShell.
Unable to boot after installing WSL 2
We are aware of an issue affecting users where they are unable to boot after installing WSL 2. While we fully diagnose those issue, users have reported that changing the buffer size or installing the right drivers can help address this. Please view this Github issue to see the latest updates on this issue.
WSL 2 errors when ICS is disabled
Internet Connection Sharing (ICS) is a required component of WSL 2. The ICS service is used by the Host Network Service (HNS) to create the underlying virtual network which WSL 2 relies on for NAT, DNS, DHCP, and host connection sharing.
Disabling the ICS service (SharedAccess) or disabling ICS through group policy will prevent the WSL HNS network from being created. This will result in failures when creating a new WSL version 2 image, and the following error when trying to convert a version 1 image to version 2.
There are no more endpoints available from the endpoint mapper.
Systems that require WSL 2 should leave the ICS service (SharedAccess) in it's default start state, Manual (Trigger Start), and any policy that disables ICS should be overwritten or removed. While disabling the ICS service will break WSL 2, and we do not recommend disabling ICS, portions of ICS can be disabled using these instructionsng-application-guard-)
Using older versions of Windows and WSL
There are several differences to note if you're running an older version of Windows and WSL, like the Windows 10 Creators Update (Oct 2017, Build 16299) or Anniversary Update (Aug 2016, Build 14393). We recommend that you update to the latest Windows version, but if that's not possible, we have outlined some of the differences below.
Interoperability command differences:
bash.exehas been replaced with
wsl.exe. Linux commands can be run from the Windows Command Prompt or from PowerShell, but for early Windows versions, you man need to use the
bashcommand. For example:
C:\temp> bash -c "ls -la". The WSL commands passed into
bash -care forwarded to the WSL process without modification. File paths must be specified in the WSL format and care must be taken to escape relevant characters. For example:
C:\temp> bash -c "ls -la /proc/cpuinfo"or
C:\temp> bash -c "ls -la \"/mnt/c/Program Files\"".
- To see what commands are available for a particular distribution, run
[distro.exe] /?. For example, with Ubuntu:
C:\> ubuntu.exe /?.
- Windows path is included in the WSL
$PATH.
- When calling a Windows tool from a WSL distribution in an earlier version of Windows 10, you will need to specify the directory path. For example, to call the Windows Notepad app from your WSL command line, enter:
/mnt/c/Windows/System32/notepad.exe
- To change the default user to
rootuse this command in PowerShell:
C:\> lxrun /setdefaultuser rootand then run Bash.exe to log in:
C:\> bash.exe. Reset your password using the distributions password command:
$ passwd usernameand then close the Linux command line:
$ exit. From Windows command prompt or Powershell, reset your default user back to your normal Linux user account:
C:\> lxrun.exe /setdefaultuser username.
Uninstall legacy version of WSL
If you originally installed WSL on a version of Windows 10 prior to Creators update (Oct 2017, Build 16299), we recommend that you migrate any necessary files, data, etc. from the older Linux distribution you installed, to a newer distribution installed via the Microsoft Store. To remove the legacy distribution from your machine, run the following from a Command Line or PowerShell instance:
wsl --unregister Legacy. You also have the option to manually remove the older legacy distribution by deleting the
%localappdata%\lxss\ folder (and all it's sub-contents) using Windows File Explorer or with PowerShell:
rm -Recurse $env:localappdata/lxss/. | https://docs.microsoft.com/en-us/windows/wsl/troubleshooting?redirectedfrom=MSDN | 2021-11-27T16:23:13 | CC-MAIN-2021-49 | 1637964358189.36 | [array(['media/troubleshooting-virtualdisk-compress.png',
'Screenshot of WSL distro property settings'], dtype=object)
array(['media/system.png', 'Screenshot of Build and System Type fields'],
dtype=object) ] | docs.microsoft.com |
Full-text index restrictions¶
Caution
This topic introduces the restrictions for full-text indexes. Please read the restrictions very carefully before using the full-text indexes.
For now, full-text search has the following limitations:
Currently, full-text search supports
LOOKUPstatements only.
The maximum indexing string length is 256 bytes. The part of data that exceeds 256 bytes will not be indexed.
If there is a full-text index on the tag/edge type, the tag/edge type cannot be deleted or modified.
One tag/edge type can only have one full-text index.
The type of properties must be
string.
Full-text index can not be applied to search multiple tags/edge types.
Sorting for the returned results of the full-text search is not supported. Data is returned in the order of data insertion.
Full-text index can not search properties with value
NULL.
Altering Elasticsearch indexes is not supported at this time.
The pipe operator is not supported.
WHEREclauses supports full-text search only working on single terms.
Full-text indexes are not deleted together with the graph space.
Make sure that you start the Elasticsearch cluster and Nebula Graph at the same time. If not, the data writing on the Elasticsearch cluster can be incomplete.
Do not contain
'or
\in the vertex or edge values. If not, an error will be caused in the Elasticsearch cluster storage.
It may take a while for Elasticsearch to create indexes. If Nebula Graph warns no index is found, wait for the index to take effect (however, the waiting time is unknown and there is no code to check).
Nebula Graph clusters deployed with K8s do not support the full-text search feature. | https://docs.nebula-graph.io/2.6.1/4.deployment-and-installation/6.deploy-text-based-index/1.text-based-index-restrictions/ | 2021-11-27T14:34:11 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.nebula-graph.io |
Ticket #1228 (closed defect: fixed)
SIM PIN entry has no 'back' button
Description
There is no apparent way to start over when entering the PIN.
If there is, please enlighten stupid users like myself who didn't find that
button. - And consider this a UI design bug in that case :-)
Change History
comment:3 Changed 11 years ago by erin_yueh@…
- Status changed from new to closed
- Resolution set to fixed
yeah, we can use * key to delete one digit
Note: See TracTickets for help on using tickets.
Actually the * button serves this function, but it really isn't very obvious
(just like # button is the same as "OK" button on many phones for pin entry). | https://docs.openmoko.org/trac/ticket/1228 | 2019-04-18T12:46:51 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.openmoko.org |
Presentation
Advanced Tracking Wizard allows tracking tags to be easily inserted into your PrestaShop shop. You do not need any advanced knowledge. Thanks to the simple interface, you can create your own trackers or integrate existing trackers, e.g. Kelkoo, Criteo, Twenga, Zanox, Sociomantic, Net Affiliation or Webgains.
You can also add a different tracker by clicking Create tracker.
Configuration
You will need to be able to access its configuration panel to set the settings for the module. To this end, go to the back office of your shop and navigate to Modules. Look for Advanced Tracking Wizard and click Configure.
You are now in the admin interface for the module. This is where you can set the parameters of the module to suit your needs.
Create Tracker
This allows you to add a tracking code to your PrestaShop shop. You can add different tracking tools to those that are natively offered by the module or add your own code (advanced user). Name it, select the page, add the available tags in the places required, then activate it and you’re done.
The details of these options are given below.
Internal Name
This allows you to set an internal name for your tracker.
You can, for example, use this to differentiate between different trackers that apply to the same page but use different code.
Page
This allows you to choose the page on which you want the tracker being edited to be used.
The list of tags available will change depending on the page selected. e.g. a cart ID will not be available on a Product page but will be available on a Cart page.
You are presented a table for Category and Product pages so that you can restrict the tracker to a specific category/product, should you so desire. Leave this table empty to include everything.
Example :
To restrict the tracker, just search for the category or categories/product(s) in the search box in the right-hand column, then click the plus sign to add the restriction.
To delete a restriction, click the minus sign for the category/product to be deleted.
Enter your tracking code below
This is where you enter the code for your tracker.
Then click the tags available to insert them at the cursor position.
Activate tracker
This allows you to activate or deactivate the tracker once it has been created.
You can then edit this setting in the list of trackers (this list can be found on the home page of the configuration panel for the module), by clicking the
icon.
When the icon is green, the tracker is activated; when the icon is grey the tracker is deactivated.
Load a tracker
This option allows you to load a tracker from a specialised site such as Kelkoo, Criteo, Twenga, Zanox, Sociomantic, Net Affiliation or Webgains.
For each of these sites you will need your personal ID provided by the site to be able to load the tracker.
Finally, you can choose whether or not to activate a tracker that you have loaded.
NB: the trackers available here are those that have native support. You can add a tracker other than those given here by using the Create Tracker button. | https://docs.presta-module.com/en/advanced-tracking-wizard-2/ | 2019-04-18T12:24:57 | CC-MAIN-2019-18 | 1555578517639.17 | [array(['https://docs.presta-module.com/wp-content/uploads/2016/08/Configure-button-1-1024x60.jpg',
'Configure button'], dtype=object)
array(['https://docs.presta-module.com/wp-content/uploads/2016/08/Configure-TAB-1024x499.jpg',
'Configure TAB'], dtype=object)
array(['https://docs.presta-module.com/wp-content/uploads/2016/08/Internal-Name.jpg',
'Internal Name'], dtype=object)
array(['https://docs.presta-module.com/wp-content/uploads/2016/08/Page.jpg',
'Page'], dtype=object)
array(['https://docs.presta-module.com/wp-content/uploads/2016/08/Page-category.jpg',
'Page category'], dtype=object)
array(['https://docs.presta-module.com/wp-content/uploads/2015/09/ajouter-restriction.jpg',
'ajouter-restriction'], dtype=object)
array(['https://docs.presta-module.com/wp-content/uploads/2015/09/retirer-restriction.jpg',
'retirer-restriction'], dtype=object)
array(['https://docs.presta-module.com/wp-content/uploads/2016/08/Code.jpg',
'Code'], dtype=object)
array(['https://docs.presta-module.com/wp-content/uploads/2016/08/Activate-Tracker.jpg',
'Activate Tracker'], dtype=object)
array(['https://docs.presta-module.com/wp-content/uploads/2016/08/Load-a-tracker.jpg',
'Load a tracker'], dtype=object) ] | docs.presta-module.com |
Contents IT Service Management Previous Topic Next Topic State model and transitions Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share State model and transitions Change Management offers a state model to move and track change requests through several states. Figure 1. Example of state transitions for a normal change request The following table provides a list of all the states that a change request can progress through. Email notifications can be sent to the user who requested the change when it progresses to the following states: Scheduled, Implement, Review, and Canceled. Table 1. Change states State Description New Change request is not yet submitted for review and authorization. A change requester can save a change request as many times as necessary while building out the details of the change prior to submission. Assess Peer review and technical approval of the change details are performed during this state. Authorize Change Management and the CAB schedule the change and provide final authorization to proceed. Scheduled The change is fully scheduled and authorized, and is waiting for the planned start date. An email notification is sent to the user who requested the change. Implement The planned start date has approached and the actual work to implement the change is being conducted. An email notification is sent to the user, who requested the change. Review The work has been completed. The change requester determines whether the change was successful. A post-implementation review can be conducted during this state. An email notification is sent to the user who requested the change. Closed All review work is complete. The change is closed with no further action required. Canceled A change can be canceled at any point when it is no longer required. However, a change cannot be canceled from a Closed state. An email notification is sent to the user who requested the change. Normal, standard, and emergency changes progress through states in different ways. State progress for different changes Normal changes progress through all states. Standard changes are considered to be pre-authorized, so they bypass the Assess and Authorize states that trigger approval records. Approving these changes progress the change to the next appropriate state. Rejecting these changes send them back to New state. Emergency changes are similar to standard changes, except that they must be authorized. Modify the state of a change request In case of Normal change request, you can modify the state of a change request from Assess state to New state before the request is approved, by clicking Revert to New from the Context menu. In case of Emergency change request, you can modify the state of a change request from Authorized state to New state before the request is approved, by clicking Revert to New from the Context menu.Note: When you revert to New from the Assess state or the Authorized state, the workflow is restarted and all pending approvals are cancelled. Modify change request type A new ACL for change_request.type has been added that allows modification of the Type field in change request when the change request is in a New state and no approvals have been generated yet for it. In case of Standard change request, you can modify the type of the change request from Standard to Normal or Emergency, if the state of a change request is New. 
In case of Normal or Emergency change request, you can modify the type of the change request from Normal to Emergency or vice versa if the state of a change request is New. If a Normal or Emergency change request is rejected, the state of the change request is set to New. As the state of the change request is New, you can modify the type of the change request again. For example, if your Emergency change request is rejected on the grounds that the change request is Normal, you can modify the Type of the change request to Normal and resubmit the change request. State progression for normal, standard, and emergency changesEach change request model progresses through a number of state values in a specific order.Add a state to the state modelYou can add a new state to the existing state model for different change types based on the requirements of your organization.Configure state model transitionsYou can use script includes or UI policies to configure state models and the criteria for moving change requests from one state to another.Modify the email notification for change request state changesThere is a change request email notification, which, if active, sends a notification to the user when the state progresses to Scheduled, Implement, Review, or Canceled. You can modify the change request notification to specify when to send it, who receives it, and what it contains.Tutorial: add a new change management stateThis tutorial provides an example of adding a new state to the existing state model. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/london-it-service-management/page/product/change-management/concept/c_ChangeStateModel.html | 2019-04-18T13:18:48 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.servicenow.com |
API
An Application Programming Interface (API) is a set of subroutine definitions, protocols, and tools for building computer applications. APIs allow developers to integrate applications, such as Interana, with their technologies by only exposing objects or actions the developer needs.
For web development, an API is usually a set of Hypertext Transfer Protocol (HTTP) request messages, combined with response messages, typically in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format.
Related terms
- CLI
- JSON
- SDK | https://docs.interana.com/lexicon/API | 2019-04-18T13:30:41 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.interana.com |
Radar is the location platform for mobile apps. Radar helps companies build better products and make better decisions with location data.
In order to enable mParticle’s integration with Radar, you will need an account with Radar to obtain your Publishable API Key, found on the Organization page in the Radar dashboard.
mParticle’s Radar integration requires that you add the Radar kit to your iOS or Android app.
If the Run Automatically setting is
enabled, Radar will automatically track users if location permissions have been granted.
disabled, you can call Radar methods directly to track users.
The source code for each kit is available if you would like to learn exactly how the method mapping occurs:
Add the Radar Kit to your iOS or Android app. See the Cocoapods and Gradle examples below, and reference the Apple SDK and Android SDK GitHub pages to read more about kits.
//Sample Podfile target '<Your Target>' do pod 'mParticle-Radar', '~> 6' end
//Sample build.gradle dependencies { compile ('com.mparticle:android-radar-kit:4.+') }
For more information, see Radar’s SDK and mParticle integration documentation.
Was this page helpful? | https://docs.mparticle.com/integrations/radar/event/ | 2019-04-18T12:17:31 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.mparticle.com |
The VolumeCentroid command reports the coordinates of and places a point object at the volume centroid of surfaces, polysurfaces, and meshes.
Report the volume of closed surfaces, polysurfaces, or meshes.
Report the volume moments of inertia of surfaces and polysurfaces.
Mass Properties
Analyze an object's mass properties
Rhinoceros 6 © 2010-2019 Robert McNeel & Associates. 12-Apr-2019 | http://docs.mcneel.com/rhino/6/help/en-us/commands/volumecentroid.htm | 2019-04-18T13:11:41 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.mcneel.com |
This site presents the documentation and OpenAPI Specification (AOS) for the Evidence Repository JSON-LD REST API
The ClinGen Evidence Repository is an expert curated repository for variant interpretation/classification that have been submitted in the context of a specific condition/disease and an interpretation guideline. The repository allows for retrieving Scientific Evidence and Provenance information Ontology (SEPIO) compliant JSON-LD documents which contain all the tags (evidence codes) and supporting evidence submitted by the expert panel for a specific variant/condition pair. The reasoning engine is run on all the evidence codes to generate a classification such as Pathogenic, Likely Pathogenic, etc.
The repository supports several ACMG-AMP derived guidelines published by the ClinGen expert panels. | https://erepo.docs.stoplight.io/ | 2019-04-18T13:20:29 | CC-MAIN-2019-18 | 1555578517639.17 | [] | erepo.docs.stoplight.io |
User Guide
Local Navigation
Import pictures to the Pictures application
Importing pictures from other folders on your BlackBerry smartphone or media card into the Pictures application allows you to have access to your pictures, while maintaining your existing folder structures and file locations.
- On the Home screen, click the Media icon > Pictures icon.
- Press the
key > Import Pictures. Folders that contain pictures that aren't saved in the Picture Library folder or Camera Pictures folder appear.
- Select the checkboxes smartphone storage space or media card, including any files that aren't
pictures and that aren't visible in the folders in the Pictures application,
highlight the folder.
Press the
key >
Delete. The folders that you imported are
deleted from their locations on your smartphone storage space or media card.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/36023/Import_pictures_to_picture_library_61_1571281_11.jsp | 2014-12-18T15:41:47 | CC-MAIN-2014-52 | 1418802767247.82 | [] | docs.blackberry.com |
When drilling down from the project dashboard, the resource viewer is the ultimate place to view detailed information on a given file (or unit test file) on different axes.
Viewing Source Code
When drilling down into measures, you will eventually reach the file level and will be able to browse the source code.
SonarQube
The features described below are available since version<<
Issues Tab
The Issues tab displays issues directly in the source code.
.
You can directly manage the issues: comment, assign, link to an action plan, etc. See the Issues documentation page for more details. code_13<<
Issues Tab
The Issues tab displays issues on unit tests:
| http://docs.codehaus.org/pages/viewpage.action?pageId=231736026 | 2014-12-18T15:29:06 | CC-MAIN-2014-52 | 1418802767247.82 | [array(['/download/attachments/111706389/resource-viewer.png?version=1&modificationDate=1339007414456&api=v2',
None], dtype=object)
array(['/download/attachments/111706389/source-tab.png?version=2&modificationDate=1339008425162&api=v2',
None], dtype=object)
array(['/download/attachments/111706389/coverage_tab_overview.png?version=1&modificationDate=1362472195126&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/111706389/coverage_tab_columns.png?version=1&modificationDate=1362471226946&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/111706389/coverage_tab_test_covering_line.png?version=2&modificationDate=1362473518037&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/111706389/coverage_tab_covered_by_test.png?version=1&modificationDate=1362472740078&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/111706389/issues_tab_source_code.png?version=1&modificationDate=1370425405046&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/111706389/duplications-tab.png?version=2&modificationDate=1339056054068&api=v2',
None], dtype=object)
array(['/download/attachments/111706389/tab-dependencies.png?version=2&modificationDate=1339056494390&api=v2',
None], dtype=object)
array(['/download/attachments/111706389/tab-lcom4.png?version=1&modificationDate=1339056557446&api=v2',
None], dtype=object)
array(['/download/attachments/111706389/tests-overview.png?version=1&modificationDate=1338973436627&api=v2',
None], dtype=object)
array(['/download/attachments/111706389/tests-source.png?version=1&modificationDate=1338973509186&api=v2',
None], dtype=object)
array(['/download/attachments/111706389/tests-tests.png?version=1&modificationDate=1338973530038&api=v2',
None], dtype=object)
array(['/download/attachments/111706389/tests_tab_covered_lines.png?version=1&modificationDate=1362473551428&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/111706389/issues_tab_unit_tests.png?version=1&modificationDate=1370425903477&api=v2&effects=drop-shadow',
None], dtype=object) ] | docs.codehaus.org |
.2 (or later) release 2.2 release (or later) and install it.
As you follow the download instructions and setup wizard, make sure you install the beer-sample default bucket. It contains beer and brewery sample data, which you use with the examples.
If you already have Couchbase Server 2.2 it to be
easier, you can use a dependency manager such as Maven. Since the Java SDK 1.2.0 release,
all Couchbase-related dependencies are published.3.2.jar, or latest version available
spymemcached-2.10.5.jar
commons-codec-1.5.jar
httpcore-4.3.jar
netty-3.5.5.Final.jar
httpcore-nio-4.3.jar
jettison-1.1.jar
Previous releases are also available as zip archives as well as through Maven Central : * Couchbase Java Client 1.3.1 * Couchbase Java Client 1.3.2</version> </dependency>
If you program in Scala and want to manage your dependencies through sbt, then you can do it with these additions to your build.sbt file:
libraryDependencies += "couchbase" % "couchbase-client" % "1.3.2"
For Gradle you can use the following snippet:
repositories { mavenCentral() } dependencies { compile "com.couchbase.client:couchbase-client:1.3.2" }.3.2 very fair:
2012-12-03 18:57:45.777 INFO com.couchbase.client.CouchbaseConnection: Added {QA sa=/127.0.0.1:11210, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue 2012-12-03 18:57:45.788 INFO com.couchbase.client.CouchbaseConnection: Connection state changed for sun.nio.ch.SelectionKeyImpl@76f8968f 2012-12-03 18:57:45.807 INFO com.couchbase.client.ViewConnection: Added localhost to connect queue 2012-12-03 18:57:45.808 INFO com.couchbase.client.CouchbaseClient: viewmode property isn't defined. Setting viewmode to production mode couchbase! 2012-12-03 18:57:45.925 INFO com.couchbase.client.CouchbaseConnection: Shut down Couchbase client 2012-12-03 18:57:45.929 INFO com.couchbase.client.ViewConnection: Node localhost has no ops in the queue 2012-12-03 18:57:45.929 INFO com.couchbase.client.ViewNode: I/O reactor terminated for localhost.262 INFO net.spy.memcached.protocol.binary.BinaryMemcachedNodeImpl: Removing cancelled operation: SASL auth operation.2>couchbase</groupId> <artifactId>couchbase-client</artifactId> <version>1.3.2</version> </dependency> <dependency> <groupId>com.google.code.gson</groupId> <artifactId>gson</artifactId> <version>2.2.2< and Couchbase Manual, Using the Views Editor.
clients using a consistent interface. The interface between your Java
application and your Couchbase or Memcached servers is provided through the
instantiation of a single object class,
CouchbaseClient.
Creating a new object based on this class opens the connection to each configured server the local host and the
default bucket:
List<URI> uris = new LinkedList<URI>(); uris.add(URI.create("")); try { client = new CouchbaseClient(uris, "default", ""); } catch (Exception e) { System.err.println("Error connecting to Couchbase: " + e.getMessage()); System.exit(0); }
The format of this constructor is:
CouchbaseClient(URIs,BUCKETNAME,BUCKETPASSWORD)
Where:
URIS is a
List of URIs to the Couchbase nodes. The format of the URI is the
hostname, port and path
/pools.
BUCKETNAME is the name of the bucket on the cluster that you want to use.
Specified as a
String.
BUCKETPASSWORD is the password for this bucket. Specified as a
String.
The returned
CouchbaseClient object can be used as with any other
CouchbaseClient object.
If you want to use SASL to provide secure connectivity to your Couchbase server,
create a
CouchbaseConnectionFactory that defines the SASL
connection type, user bucket, and password.
The connection to Couchbase uses the underlying protocol for SASL. This is
similar to the earlier example except that it uses the
CouchbaseConnectionFactory class.
List<URI> baseURIs = new ArrayList<URI>(); baseURIs.add(base); CouchbaseConnectionFactory cf = new CouchbaseConnectionFactory(baseURIs, "userbucket", "password"); client = new CouchbaseClient((CouchbaseConnectionFactory) cf);
A final approach to creating the connection is using the
CouchbaseConnectionFactoryBuilder and
CouchbaseConnectionFactory classes.((CouchbaseConnectionFactory) cf);
For example, the following code snippet sets the
OpTimeOut value to 10000 ms before creating the connection as we saw in the code above.
cfb.setOpTimeout(10000);
These parameters can be set at run time by setting a property on the command line (such as -DopTimeout=1000 ) or via properties in a file cbclient.properties in that order of precedence. optional timeout
period and unit specification. The following example shuts down the active connection
to all the configured servers after 60 seconds:
client.shutdown(60, TimeUnit.SECONDS);
The unit specification relies on the
TimeUnit object enumerator, which
supports the following values:
The method returns a Boolean value that indicates whether the shutdown request completed successfully.
You also can shut down an active connection immediately by using the
shutdown()
method to your Couchbase object instance. For example:
client.shutdown();
In this form the
shutdown() method returns no value. unecessarily .
The 1.3.2 release is the second bug fix release in the 1.3 series.
Fixes in 1.3.2
Known Issues in 1.3.1 release is the first bug fix release in the 1.3 series. It fixes a regression introduced in 1.3.0.
Fixes in 1.3.1
CouchbaseClient.asyncQuery(...)is called and a listener is attached to the future, it is now only called once instead of twice. This makes sure operations done in the listener are not performed twice without any external guards against this in place.
Known Issues in 1.3.0 release is the first minor release in the 1.3 series. It features a rewrite of the underlying view connection management and provides real asynchronous futures when used in combination with persistence and replication constraints.
New Features and Behavior Changes in 1.3.0
Note
The underlying
httpcore and
httpcore-nio dependencies have been upgraded to a newer version to facilitate the connection pool mechanisms provided. If you don't use a dependency management tool like Maven, make sure to replace the JARs with the ones provided in the download archive.
From an application developer perspective, you don't need to make any changes to your codebase (it's completely backward compatible), but some configuration flags have been introduced to make it more configurable:
CouchbaseConnectionFactoryBuilder.setViewWorkerSize(int workers): the number of view workers (defaults to 1) can be tuned if a very large numbers of view requests is fired off in parallel.
CouchbaseConnectionFactoryBuilder.setViewConnsPerNode(int conns): the maximum number of parallel open view connections per node in the cluster can be also tuned (defaults to 10).
The number of threads needed per
CouchbaseClient object has been decreased to a minimum of 2 (one orchestrator and one worker) instead of N (where N is the number of nodes in the cluster).
Because the current codebase now does fewer DNS lookups, it also fixes the long-standing issue reported in JCBC-151. While the issue is ultimately environmental, the code now helps mitigate the issue as much as possible.
As a result of the upgrade, the
ClusterManager class has also been refactored to use the upgraded underlying httpcore(nio) libraries. See JCBC-390.
Before this change, every overloaded PersistTo/ReplicateTo method did return a OperationFuture, but had to block until the observe condition was done - rendering the future essentially useless. This limitation has now been removed. The underlying codebase utilizes the built-in listeners to listen on the original mutation operation and then be notified on a listener when the operation is complete to do the actual - blocking - observe operation. Since this happens on a separate thread pool, the application itself is not blocked until instructed by the developer.
The main fact to be aware of is that a previously synchronous operation like
client.set("key", "value", PersistTo.MASTER) is now non-blocking until
get() is called explicitly. To honor the fact that disk-based operations potentially take longer to complete, a new observe time-out has been introduced that is set to 5 seconds by default (instead the 2.5 seconds with normal operations). It is configurable through the newly added
CouchbaseConnectionFactoryBuilder.setObsTimeout(long timeout) method.
CouchbaseConnectionFactoryBuilder.setObsPollMax(int maxPoll) has been deprecated and is ignored because it can be calculated out of the observe time-out and the still available observe interval settings.
CouchbaseClientobject, now an INFO-level log message gets printed that shows all configuration settings in effect. This helps greatly when diagnosing issues without turning on debug logging or looking at the code directly.
Fixes in 1.3.0
Known Issues in 1.3.0
.get(1, TimeUnit.MINUTES))is ignored if it is higer than the default
obsTimeoutsetting on the
CouchbaseConnectionFactory. The workaround here is to set a higher value through the
CouchbaseConnectionFactoryBuilderand then just use
.get()or a possibly lower timeout setting. | http://docs.couchbase.com/couchbase-sdk-java-1.3/ | 2014-12-18T15:21:46 | CC-MAIN-2014-52 | 1418802767247.82 | [] | docs.couchbase.com |
scipy.signal.cheby1¶
- scipy.signal.cheby1(N, rp, Wn, btype='low', analog=False, output='ba')[source]¶
Chebyshev type I digital and analog filter design.
Design an Nth order digital or analog Chebyshev type I filter and return the filter coefficients in (B,A) or (Z,P,K) form.
Notes
The Chebyshev type I filter maximizes the rate of cutoff between the frequency response’s passband and stopband, at the expense of ripple in the passband and increased ringing in the step response.
Type I filters roll off faster than Type II (cheby2), but Type II filters do not have any ripple in the passband.
The equiripple passband has N maxima or minima (for example, a 5th-order filter has 3 maxima and 2 minima). Consequently, the DC gain is unity for odd-order filters, or -rp dB for even-order filters.
Examples
Plot the filter’s frequency response, showing the critical points:
>>> from scipy import signal >>> import matplotlib.pyplot as plt
>>> b, a = signal.cheby1(4, 5, 100, 'low', analog=True) >>> w, h = signal.freqs(b, a) >>> plt.plot(w, 20 * np.log10(abs(h))) >>> plt.xscale('log') >>> plt.title('Chebyshev Type I frequency response (rp=5)') >>> plt.xlabel('Frequency [radians / second]') >>> plt.ylabel('Amplitude [dB]') >>> plt.margins(0, 0.1) >>> plt.grid(which='both', axis='both') >>> plt.axvline(100, color='green') # cutoff frequency >>> plt.axhline(-5, color='green') # rp >>> plt.show() | http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.cheby1.html | 2014-12-18T15:27:29 | CC-MAIN-2014-52 | 1418802767247.82 | [] | docs.scipy.org |
.
One set of configuration limits apply to. | https://docs.apigee.com/api-platform/reference/limits?hl=he-IL | 2021-11-27T14:44:30 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.apigee.com |
Python
Python 3.7
Fedora 29 intoduces Python 3.7, which adds numerous new features and optimizations compared to version 3.6, which was the default Python 3 stack in Fedora 28. Notable changes include:
asyncand
awaitare now reserved keywords.
The
asynciomodule has received new features and significant usability and performance improvements.
The
timemodule has gained support for functions with nanosecond resolution.
See What’s new in Python 3.7 and Features for 3.7 for more information. If you have your own Python apps, see Porting to Python 3.7 for information about compatibility-breaking changes and how to fix your applications.
/usr/bin/python is now a separate package
The unversioned python command from
/usr/bin/python has been moved into a separate
python-unversioned-command package.
You will get it by default when you install the
python2 package, but you are able to remove it.
Use the
python3 command if you need Python 3, and the
python2 command if you need Python 2.
The
python command continues to mean Python 2, but it is not guaranteed to be present.
See the Change page for detailed information and justification for this change.
/usr/bin/virtualenv is now in the python3-virtualenv package
The
virtualenv command now comes from the
python3-virtualenv package, as opposed to earlier releases where the command was in the
python2-virtualenv.
This effectively switches the command to Python 3; if you run
virtualenv without any additional options, it will create Python 3 environments. Use
virtualenv -p python2.7 to get the previously default behavior.
Ansible now uses Python3 by default
The
ansible package in Fedora is switching to use Python 3 by default, instead of Python 2. See Automation for details.
No more automagic Python bytecompilation
The current way of automatic Python byte-compiling of files outside Python-specific directories is too magical and error-prone. It is built on heuristics that are increasingly wrong. This change provides a way to opt out of it, and adjusts the guidelines to prefer explicit bytecompilation of such files. Later, the old behavior will either become opt-in only, or cease to exist.
Note that bytecompilation in Python-specific directories (e.g.
/usr/lib/python3.6/) is not affected.
See the Fedora Wiki change page for detailed documentation.
Update comps groups to use Python 3
Multiple package groups have been updated to use
python3 by default instead of
python2.
See Distribution-wide Changes for more information. | https://docs.fedoraproject.org/sq/fedora/f29/release-notes/developers/Development_Python/ | 2021-11-27T14:32:14 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.fedoraproject.org |
Build, deploy and test
#Build
Let's start building our template project! Simply run:
In the output window, you'll see that our smart contract was compiled, and our Polywrap wrapper was built and output to the
./build/* folder. It contains the following files:
This directory's contents will be uploaded to decentralized storage, and enable any Polywrap Client to download, query, and execute your Polywrap's functionality within the application.
The
mutation.wasm and
query.wasm files are the WebAssembly files that are compiled from AssemblyScript.
The
schema.graphql file contain the APIs schema, consisting of custom types and callable methods (query and mutation).
Lastly, the
web3api.yaml manifest file describes the layout of the package.
#Deploy
To deploy our Polywrap wrapper and associated smart contracts for testing, let's first setup a test environment. Simply run:
This will stand-up an Ethereum node, as well as an IPFS node.
tip
In the future, test environments will be easily configurable to include any nodes your Polywrap wrapper requires.
Next, let's deploy the
SimpleStorage.sol smart contract, and the
simplestorage.eth Polywrap by running:
#Test
With our Polywrapper live at
simplestorage.eth on our test network, it's now time to test it out!
This is where our query recipes come in handy. Run
yarn test to see this in action.
In the output window, you'll see a combination of input queries, and returned results from the Polywrapper. In this query recipe, we send a combination of
set.graphql and
get.graphql queries which modify the
SimpleStorage.sol contract's stored value.
Now that we've built the template Polywrapper, let's add custom functionality to the template in the next section! | https://docs.polywrap.io/guides/create-as-wrapper/build-deploy-test/ | 2021-11-27T14:02:35 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.polywrap.io |
If set to false, the importer will not resample curves when possible.
Read more about animation curve resampling.
Notes:
- Some unsupported FBX features (such as PreRotation or PostRotation on transforms) will override this setting. In these situations, animation curves will still be resampled even if the setting is disabled. For best results, avoid using PreRotation, PostRotation and GetRotationPivot.
- This option was introduced in Version 5.3. Prior to this version, Unity's import behaviour was as if this option was always enabled. Therefore enabling the option gives the same behaviour as pre-5.3 animation import. | https://docs.unity3d.com/kr/2019.1/ScriptReference/ModelImporter-resampleCurves.html | 2021-11-27T15:18:09 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.unity3d.com |
You're viewing Apigee Edge documentation.
View Apigee X documentation.
>.
Samples
Local call to an internal proxy
.
URL as a variable
.
Google geocoding / define request
<ServiceCallout name="ServiceCallout-GeocodingRequest1"> ..
Call target servers
.
A.
Custom error handling table describes attributes that policy can populate the request message sent to the external service.resource.resource. | https://docs.apigee.com/api-platform/reference/policies/service-callout-policy?authuser=1 | 2021-11-27T14:19:53 | CC-MAIN-2021-49 | 1637964358189.36 | [array(['https://docs.apigee.com/api-platform/images/icon_policy_service-callout.jpg?authuser=1',
None], dtype=object) ] | docs.apigee.com |
NCBI SRA Import
1. Navigate to the Upload Page
From anywhere on the app, you can expand the navigation menu by clicking on the hamburger icon
on the upper left side of the screen.
2. Navigate to the SRA Upload
Navigate to the NCBI SRA tab
You have the ability to choose the sample type, and folder location.
NCBI SRA import is currently available for SRR shotgun metagenomic samples. 16S Functionality is currently in development.
3. Uploading Multiple Samples
You can provide a comma separated list to upload multiple samples of a single type.
If you are using the
- Select the "Accession List" that will download a text file.
- Open or copy the contents of the text file to your preferred spreadsheet application
- Convert the vertical list to a horizontal list. Cutting and Pasting the Transpose of the vertical list is the quickest way to accomplish this task.
- Export as a CSV
- Open the CSV as a text file
- Copy & Paste the contents into the NCBI SRA Upload text field
Updated 4 months ago | https://docs.cosmosid.com/docs/ncbi-sra-import | 2021-11-27T14:32:52 | CC-MAIN-2021-49 | 1637964358189.36 | [array(['https://d3omwy4q2qio9v.cloudfront.net/items/0d2D3n430K1o0r3R0g2K/Screen%20Shot%202019-04-02%20at%202.39.48%20PM.png',
'Hamburger alt text'], dtype=object)
array(['https://p-AeFvB6.t2.n0.cdn.getcloudapp.com/items/YEuOjKgg/a5e3dec9-8cad-48df-b2af-9fc64541d34e.jpg?source=client&v=973837a43a58b917f91737783e4994e9',
None], dtype=object)
array(['https://p-AeFvB6.t2.n0.cdn.getcloudapp.com/items/QwuApPjR/5f306587-db58-45b2-a10a-15b678cd582a.jpeg?source=client&v=748cc497ce2dc49896519413c4af7fdd',
None], dtype=object)
array(['https://p-AeFvB6.t2.n0.cdn.getcloudapp.com/items/12u4XQxv/61f556c4-6b63-4df4-b94e-309f03dcdba7.jpeg?v=480e0eb95ba31baf0f6b5341b5b2ff2e',
None], dtype=object) ] | docs.cosmosid.com |
Connect to Nebula Graph¶
Nebula Graph supports multiple types of clients, including a CLI client, a GUI client, and clients developed in popular programming languages. This topic provides an overview of Nebula Graph clients and basic instructions on how to use the native CLI client, Nebula Console.
Nebula Graph clients¶
You can use supported clients or console to connect to Nebula Graph.
Use Nebula Console to connect to Nebula Graph¶
Prerequisites¶
- You have started the Nebula Graph services. For how to start the services, see Start and Stop Nebula Graph.
- The machine you plan to run Nebula Console on has network access to the Nebula Graph services.
Steps¶
On the nebula-console page, select a Nebula Console version and click Assets.
Note
We recommend that you select the latest release.
In the Assets area, find the correct binary file for the machine where you want to run Nebula Console and download the file to the machine.
(Optional) Rename the binary file to
nebula-consolefor convenience.
Note
For Windows, rename the file to
nebula-console.exe.
On the machine to run Nebula Console, grant the execute permission of the nebula-console binary file to the user.
Note
For Windows, skip this step.
$ chmod 111 nebula-console
In the command line interface, change the working directory to the one where the nebula-console binary file is stored.
Run the following command to connect to Nebula Graph.
- For Linux or macOS:
$ ./nebula-console -addr <ip> -port <port> -u <username> -p <password> [-t 120] [-e "nGQL_statement" | -f filename.nGQL]
- For Windows:
> nebula-console.exe -addr <ip> -port <port> -u <username> -p <password> [-t 120] [-e "nGQL_statement" | -f filename.nGQL]
The description of the parameters is as follows.
You can find more details in the Nebula Console Repository.
Nebula Console commands¶
Nebula Console can export CSV file, DOT file, and import too.
Note
The commands are case insensitive.
Export a CSV file¶
CSV files save the return result of a executed command.
Note
- A CSV file will be saved in the working directory, i.e., what linux command
pwdshow;
- This command only works for the next query statement.
The command to export a csv file.
nebula> :CSV <file_name.csv>
Export a DOT file¶
DOT files save the return result of a executed command, and the result information is different from CSV files.
Note
- A DOT file will be saved in the working directory, i.e., what linux command
pwdshow;
- You can copy the contents of DOT file, and paste in GraphvizOnline, to visualize the excution plan;
- This command only works for the next query statement.
The command to export a DOT file.
nebula> :dot <file_name.dot>
For example,
nebula> :dot a.dot nebula> PROFILE FORMAT="dot" GO FROM "player100" OVER follow;
Importing a testing dataset¶
The testing dataset is named
nba. Details about schema and data can be seen by commands
SHOW.
Using the following command to import the testing dataset,
nebula> :play nba
Run a command multiple times¶
Sometimes, you want to run a command multiple times. Run the following command.
nebula> :repeat N
For example,
nebula> :repeat 3 nebula> GO FROM "player100" OVER follow; +-------------+ | follow._dst | +-------------+ | "player101" | | "player125" | +-------------+ Got 2 rows (time spent 2602/3214 us) Fri, 20 Aug 2021 06:36:05 UTC +-------------+ | follow._dst | +-------------+ | "player101" | | "player125" | +-------------+ Got 2 rows (time spent 583/849 us) Fri, 20 Aug 2021 06:36:05 UTC +-------------+ | follow._dst | +-------------+ | "player101" | | "player125" | +-------------+ Got 2 rows (time spent 496/671 us) Fri, 20 Aug 2021 06:36:05 UTC Executed 3 times, (total time spent 3681/4734 us), (average time spent 1227/1578 us)
Sleep to wait¶
Sleep N seconds.
It is usually used when altering schema. Since schema is altered in async way, and take effects in the next heartbeat cycle.
nebula> :sleep N
Disconnect Nebula Console from Nebula Graph¶
You can use
:EXIT or
:QUIT to disconnect from Nebula Graph. For convenience, Nebula Console supports using these commands in lower case without the colon (":"), such as
quit.
nebula> :QUIT Bye root!
How can I install Nebula Console from the source code¶
To download and compile the latest source code of Nebula Console, follow the instructions on the nebula console GitHub page. | https://docs.nebula-graph.io/2.6.1/4.deployment-and-installation/connect-to-nebula-graph/ | 2021-11-27T14:05:29 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.nebula-graph.io |
You can upgrade vRealize Automation in vRealize Suite Lifecycle Manager.
Prerequisites
- Ensure that you have upgraded the earlier versions of vRealize Suite Lifecycle Manager to the latest. For more information on upgrading your vRealize Suite Lifecycle Manager, see Upgrade vRealize Suite Lifecycle Manager 8.x.
- Ensure that you have upgraded the earlier version of VMware Identity Manager to 3.3.2 or later. For more information on VMware Identity Manager upgrade, see Upgrade VMware Identity Manager.
- Verify that you have already installed vRealize Automation 8.0, 8.0.1, 8.1, 8.2, or 8.3.
- Perform the binary mapping of the vRealize Automation upgrade ISO from Local, myvmware or NFS share. For more information on binary mapping, see Configure Product Binaries.
- Increase the CPU, memory, and storage as per the system requirements of vRealize Automation 8.4. For more information, see the Hardware Requirements of vRealize Automation 8.4 Reference Architecture.
Procedure
- On the Lifecycle Operations page, click Manage Environments.
- Navigate to a vRealize Automation instance.
- Click View Details and click Upgrade.
A pop-up menu is appears to alert you to perform an inventory sync.
- Click Trigger Inventory Sync of the product before you upgrade.Note: At times, there can be a drift or a change in the environment outside of Lifecycle Manager and for Lifecycle Manager to be aware of the current state of the system, the inventory requires to be up-to-date.
- If the product inventory is already synced and up-to-date, then click Proceed Upgrade.
- After the inventory is synced, select the vRealize Automation version to 8.4.
- To select the Repository Type, you can either select vRealize Suite LCM Repository, only if you have mapped the ISO Binary map, or you can select the Repository URL with a private upgrade Repository URL.
- If you selected the Repository URL, enter the unauthenticated URL, and then click Next.
- Click Pre-Check.Pre-check validates the following criteria:
- If the source vRealize Automation versions are one of 8.0.0 or 8.0.1, ensure follow the steps given in the KB article 78325 before you upgrade to restore expired root accounts.
- SSH enabled - Verifies that SSH for the root user is enabled.
- Version check - Verifies if the target version selected for upgrade is compatible with the current vRealize Automation version.
- Disk space on root, data, and services log partition - Verifies if the required amount of free disk space is available in the root, data, and services log partition.
- CPU and Memory Check - Verifies if the required amount say 12 CPU and 42 GB Memory resources available in each vRealize Automation nodes before upgrade.
- vCenter property existence check - Verifies if the vCenter details are present as part of each node in the Lifecycle Manager inventory. Since a snapshot is taken during the upgrade process, it is important to have the right vCenter details within the Lifecycle Manager inventory.
- vRealize Automation VMs managed object reference ID retrieval check - Verifies if the managed object reference ID of the VM can be retrieved from the details available in the Lifecycle Manager inventory. This is required as you perform snapshot-related operations on the VMs, finding the VM using the same.
- Click Next and Submit.You can navigate to the Request Details page to view the progress of the upgrade status. You can enable the multi-tenancy for vRealize Automation, refer to Tenant Management in vRealize Suite Lifecycle Manager. | https://docs.vmware.com/en/VMware-vRealize-Suite-Lifecycle-Manager/8.4/com.vmware.vrsuite.lcm.8.4.doc/GUID-62A2C4A9-98BF-44A5-9C23-950016A615EA.html | 2021-11-27T15:35:07 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.vmware.com |
Maintainer Documentation¶
This section contains some information that’s useful for Pencil maintainers.
Creating a New Release¶
There’s a
release.sh script that lives in the
build/ directory. This
script automates:
- Creating a release branch
- Updating the version number
- Sectioning off the changelog
- Updating distribution-specific files
- Creating a release commit & tag
- Pushing the branch to origin
- Creating a release on Github
- Uploading the built packages to the Github release
You will need
git,
curl,
sed and
jshon. Then you can just pass
the new version number to the script:
cd build ./release.sh 2.4.42
Once the script is complete, you will have to manually merge the release branch
into the
master and
develop branches, then delete the release branch:
git checkout master git merge release-v2.4.42 git push origin git checkout develop git merge release-v2.4.42 git push origin git push origin :release-v2.4.42 git branch -d release-v2.4.42 | https://pencil-prototyping.readthedocs.io/en/develop/maintainers/index.html | 2021-11-27T14:50:55 | CC-MAIN-2021-49 | 1637964358189.36 | [] | pencil-prototyping.readthedocs.io |
1. What is Fifengine?.
It’s a very flexible game creation framework and not tied to any genre, but geared towards an RTS or RPG using an isometric or top-down view style.
This manual describes how to use FIFE to power your game.
1.1. Features
This chapter describes all the features the engine currently supports in the master branch. Features are organized into categories for easy reference.
As we continue to add features to FIFE, we will update this list to keep it up to date.
Logger
Module specific logging functionality
Logging according to priority levels
Logging output to file and stdout
Map Editor
Multi Map Interface (edit multiple maps at the same time)
Object Selector
Deleting instances from map layer
Placing instances on map layer
Saving maps
Undo/Redo support for instances
Object Editor plugin
Light Editor plugin
Camera Editor plugin
Image Atlas Creator
Create/Edit image atlases
cross-platform
written in C++ using the Qt framework
The following event types are supported and can be bound to scripting functionality:
Mouse events
Keyboard events
Widget events
Custom commands
Support for following font formats:
Ingame console with support for executing Python code / scripts
Fully customizable GUIs via
Python wrapper via our PyChan extension
XML layouts
Skinable
-
-
General
Support for all formats implemented by SDL_image:
BMP, GIF, JPEG, LBM, PCX, PNG, PNM, TGA, TIFF, WEBP, XCF, XPM, XV
Color key support
Take ingame screenshots via hotkey
Pooling of image resources, resulting enhanced performance and reduced memory consumption
Image atlases (many images contained in one file)
Animations
Multiple definable key frame animations per object
Effects
Lighting (OpenGL renderer only)
Fog of War (OpenGL renderer only)
Maps
3D geometry definition (defined by tilt, rotation and scale)
Support for different tile and object grids with independent geometries
Multiple layers per map
All variations of square and hex shape geometries
Multiple cameras / views per map
Custom XML-based map file format
Pathfinding
Exchangable pathfinding backends:
Route path finder
Support for different renderers (RendererBackends):
SDL
OpenGL
Various resolutions
Bit-depth (16, 24, 32bit)
Window mode (fullscreen & windowed)
Colorkey for fast transparency effects
Transparency for tiles & objects
Colorkey for fast transparency effects
Lighting effects
Fog of War
Custom Isometric views defined by angle and tilt of camera
Top down/side views
Correct z-order sorting of map instances
Support for different renderers:
Blocking renderer
Cell selection renderer
Coordinate renderer
Floating text renderer
Grid renderer
Instance renderer
Quadtree renderer
Light renderer (OpenGL only)
Static layer support which renders an entire layer as one texture
Support for reading files on platforms with different byte orders
Read support for ZIP archives
Lazy loading of files for decreased load times
1.2. Games using Fifengine
The following projects are using Fife as their engine.
We work closely with both Unknown Horizons to continue to improve fife.
If you are developing a game with Fifengine and want it posted here, please let us know!
1.3. Media
You’ll find various media items in this section, including screenshots and videos.
1.3.1. Screenshots
There is a media page that can be found here:
Also check out some screenshots on our archived wiki here:
1.3.2. Videos
1.4. License
The source code (*.cpp, *.h & *.py) of Fifengine is licensed under the GNU LESSER GENERAL PUBLIC LICENSE Version 2.1.
See GNU Lesser General Public License v2.1 (LGPL-2.1) Explained for more info.
Can I use FIFE to make a commercial product?
You can create commercial games with FIFE without needing to pay us any fee.
The following basic rules apply concerning the used LGPL:
Third-Party Content Licenses
Third-party content, such as assets, images, and sounds might come from different sources.
Therefore each client directory comes with a separate LICENSE file that states the origin of the content, the author and the actual license it was published under.
2. Getting started
2.1. Installation
This chapter explains how to install fifengine. | http://docs.fifengine.net/user-manual/en/ | 2021-11-27T14:24:10 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.fifengine.net |
About
Dr. Julapalli is founder and president of Integral Cardiovascular Center which he started in late 2010 in north Houston. He has built an active cardiology practice that attempts to bring an integrally informed approach to the practice of cardiology/medicine. He is on staff at all the major hospitals in the area. Dr. Julapalli graduated from Rice University with a B. A. in Biological Sciences, completed medical school at the University of Texas - San Antonio, and finished residency and fellowship training in Internal Medicine, Cardiology and Interventional Cardiology at University of Texas – Houston, home to the world's largest medical center. Prior to and during fellowship, he helped create and implement changes in protocols that have transformed the way acute myocardial infarction is treated in the community.
Board certification: American Board of Internal Medicine
Hospital affiliation: Methodist The Woodlands; Memorial Hermann The Woodlands | https://app.uber-docs.com/Specialists/SpecialistProfile/Vinay-Julapalli-MD/Integral-Cardiovascular-Center | 2021-11-27T14:05:30 | CC-MAIN-2021-49 | 1637964358189.36 | [] | app.uber-docs.com |
metadata for all crawlers defined in the customer account.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
get-crawlers: Crawlers
get-crawlers [-lers -> (list)
A list of crawler metadata.
(structure)
Specifies a crawler program that examines a data source and uses classifiers to try to determine its schema. If successful, the crawler records metadata concerning the data source in the Glue Data Catalog.)
RecrawlPolicy -> .
SchemaChangePolicy -> (structure)
The policy that specifies update and delete behaviors for the crawler.
UpdateBehavior -> (string)The update behavior when the crawler finds a changed schema.
DeleteBehavior -> (string)The deletion behavior when the crawler finds a deleted object.
LineageConfiguration -> (structure)
A configuration that specifies whether data lineage is enabled for the crawler.
CrawlerLineageSettings -> (string)
Specifies whether data lineage is enabled for the crawler. Valid values are:
- ENABLE: enables data lineage for the crawler
- DISABLE: disables data lineage for the crawler (see Time-Based Schedules for Jobs and Crawlers . For example, to run something every day at 12:15 UTC, you would Include and Exclude Patterns .
CrawlerSecurityConfiguration -> (string)The name of the SecurityConfiguration structure to be used by this crawler.
NextToken -> (string)
A continuation token, if the returned list has not reached the end of those defined in this customer account. | https://docs.aws.amazon.com/cli/latest/reference/glue/get-crawlers.html | 2021-11-27T16:08:20 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.aws.amazon.com |
TCP flags (tcpflags)
Description
Returns the flags or control bits value of a TCP segment. This field contains the following 9 1-bit flags in this order:
How does it work in the search window?
Select Create column in the search window toolbar, then select the TCP flags operation. You need to specify one argument:
The data type of the values in the new column is integer.
How does it work in LINQ?
Use the operator
as... and add the operation syntax to create the new column. This is the syntax for the TCP flags operation:
tcpflags(packet) | https://docs.devo.com/confluence/ndt/v7.1.0/searching-data/building-a-query/operations-reference/packet-group/tcp-flags-tcpflags | 2021-11-27T14:50:20 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.devo.com |
Koji Infrastructure SOP
Koji and plague are our buildsystems. They share some of the same machines to do their work.
Contents
Contact Information
- Owner
Fedora Infrastructure Team
#fedora-admin, sysadmin-build group
- Persons
mbonnet, dgilmore, f13, notting, mmcgrath, SmootherFrOgZ
- Servers
koji.fedoraproject.org
buildsys.fedoraproject.org
xenbuilder[1-4]
hammer1, ppc[1-4]
- Purpose
Build packages for Fedora.
Description
Users submit builds to koji.fedoraproject.org or buildsys.fedoraproject.org. From there it gets passed on to the builders.
Add packages into Buildroot
Some contributors may have the need to build packages against fresh built packages which are not into buildroot yet. Koji has override tags as a Inheritance to the build tag in order to include them into buildroot which can be set by:
koji tag-pkg dist-$release-override <package_nvr>
Troubleshooting and Resolution
Restarting Koji
If for some reason koji needs to be restarted, make sure to restart the koji master first, then the builders. If the koji master has been down for a short enough time the builders do not need to be restarted.:
service httpd restart service kojira restart service kojid restart
kojid won’t start or some builders won’t connect
In the event that some items are able to connect to koji while some are not, please make sure that the database is not filled up on connections. This is common if koji crashes and the db connections aren’t properly cleared. Upon restart many of the connections are full so koji cannot reconnect. Clearing old connections is easy, guess about how long it the new koji has been up and pick a number of minutes larger then that and kill those queries. From db3 as postgres run:
echo "select procpid from pg_stat_activity where usename='koji' and now() - query_start \ >= '00:40:00' order by query_start;" | psql koji | grep "^ " | xargs kill
OOM (Out of Memory) Issues
Out of memory issues occur from time to time on the build machines. There are a couple of options for correction. The first fix is to just restart the machine and hope it was a one time thing. If the problem continues please choose from one of the following options.
Increase Memory
The xen machines can have memory increased on their corresponding xen hosts. At present this is the table:
Edit
/etc/xen/xenbuilder[1-4] and add more memory.
Decrease weight
Each builder has a weight as to how much work can be given to it. Presently the only way to alter weight is actually changing the database on db3:
$ sudo su - postgres -bash-2.05b$ psql koji koji=# select * from host limit 1; id | user_id | name | arches | task_load | capacity | ready | enabled ---+---------+------------------------+-----------+-----------+----------+-------+--------- 6 | 130 | ppc3.fedora.redhat.com | ppc ppc64 | 1.5 | 4 | t | t (1 row) koji=# update host set capacity=2 where name='ppc3.fedora.redhat.com';
Simply update capacity to a lower number.
Disk Space Issues
The builders use a lot of temporary storage. Failed builds also get left on the builders, most should get cleaned but plague does not. The easiest thing to do is remove some older cache dirs.
Step one is to turn off both koji and plague:
/etc/init.d/plague-builder stop /etc/init.d/kojid stop
Next check to see what file system is full:
df -h
Typically just / will be full. The next thing to do is determine if we have any extremely large builds left on the builder. Typical locations include /var/lib/mock and /mnt/build (/mnt/build actually is on the local filesystem):
du -sh /var/lib/mock/* /mnt/build/*
/var/lib/mock/dist-f8-build-10443-1503
classic koji build
/var/lib/mock/fedora-6-ppc-core-57cd31505683ef1afa533197e91608c5a2c52864
classic plague build
If nothing jumps out immediately, just start deleting files older than one week. Once enough space has been freed start koji and plague back up:
/etc/init.d/plague-builder start /etc/init.d/kojid start | https://docs.fedoraproject.org/bn/infra/sysadmin_guide/koji/ | 2021-11-27T15:17:38 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.fedoraproject.org |
Title
I Can Explain! Understanding Perceptions of Eyewitnesses as a Function of Type of Explanation and Inconsistent Confidence Statements
Document Type
Dissertation
Abstract
In the current study, 126 undergraduate students read a case summary describing an armed robbery of a convenience store, involving one eyewitness, and then viewed one of five brief videotapes of an eyewitness identification procedure. Confidence ratings were manipulated as 80% v. 100%: Type of explanation offered for changes in confidence consisted of social, memory-based or none. Results indicated increased perceptions of eyewitnesses were associated with confidence consistency, rather than type of explanation. Perhaps providing any explanation for changes in confidence drew attention to the inconsistency and magnified its effect on perceptions. Further, when the eyewitness provided one estimate of confidence, participants perceived them as more credible compared to confidence inflation condition. Implications for these results at trial are discussed.
Recommended Citation
Paiva, Melissa, "I Can Explain! Understanding Perceptions of Eyewitnesses as a Function of Type of Explanation and Inconsistent Confidence Statements" (2009). Psychology Theses. 2. | https://docs.rwu.edu/psych_thesis/2/ | 2021-11-27T14:43:58 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.rwu.edu |
Bioinformatics Concepts
Background
CosmosID provides a platform to upload, process, and manage your metagenomic samples. To help you understand how our bioinformatics analysis works we will define a few terms.
kmer - a kmer is a nucleotide sequence of a certain length. It is common in genomics to select all possible kmers of a fixed length for each read in a sample, for example.
wgs - whole genome shotgun sequencing - conserved ribosomal RNA gene or genes, not the entire genome for identification.
More details on CosmosID
CosmosID uses a kmer based approach to identify microorganisms in metagenomic samples.
Specifically, for metagenomics, CosmosID identifies unique and shared kmers in reference genomes and stores them in our reference database. When a sample is submitted for analysis, we search the kmers in the sample against the kmers in our database to find matches that help us identify the microbes present in the sample.
CosmosID Curated Databases).
Another advantage of using the CosmosID databases is that they have been cleaned to remove contaminating sequences that are commonly found in DNA repositories. Additionally, we frequently update our databases to include new genomes that have been added to the sequencing space.
Types of Databases
Organism databases:
- Bacteria
- Viruses
- Fungi
- Protists
- Respiratory Viruses
Gene databases:
- Antibiotic Resistance
- Virulence Factor
Identification at Different Taxonomic Levels
Figure 1: ID and abundance at each taxonomic level
In Figure 1 you can see how kmers are mapped to taxonomic levels. Kmers are identified that are unique to each reference in the CosmosID database. Identification is made at the lowest taxonomic level possible, depending on which kmers are found in the sequenced sample.
Updated over 2 years ago | https://docs.cosmosid.com/docs/bioinformatics-concepts | 2021-11-27T13:41:54 | CC-MAIN-2021-49 | 1637964358189.36 | [array(['https://files.readme.io/402e4a4-Screen_Shot_2019-05-07_at_3.45.57_PM.png',
'Screen Shot 2019-05-07 at 3.45.57 PM.png'], dtype=object)
array(['https://files.readme.io/402e4a4-Screen_Shot_2019-05-07_at_3.45.57_PM.png',
'Click to close...'], dtype=object) ] | docs.cosmosid.com |
Configurations¶
Nebula Graph builds the configurations based on the gflags repository. Most configurations are flags. When the Nebula Graph service starts, it will get the configuration information from Configuration files by default. Configurations that are not in the file apply the default values.
Note
- Because there are many configurations and they may change as Nebula Graph develops, this topic will not introduce all configurations. To get detailed descriptions of configurations, follow the instructions below.
- It is not recommended to modify the configurations that are not introduced in this topic, unless you are familiar with the source code and fully understand the function of configurations.
Legacy version compatibility
In the topic of 1.x, we provide a method of using the
CONFIGS command to modify the configurations in the cache. However, using this method in a production environment can easily cause inconsistencies of configurations between clusters and the local. Therefore, this method will no longer be introduced in the topic of 2.x.
Get the configuration list and descriptions¶
Use the following command to get all the configuration information of the service corresponding to the binary file:
<binary> --help
For example:
# Get the help information from Meta $ /usr/local/nebula/bin/nebula-metad --help # Get the help information from Graph $ /usr/local/nebula/bin/nebula-graphd --help # Get the help information from Storage $ /usr/local/nebula/bin/nebula-storaged --help
The above examples use the default storage path
/usr/local/nebula/bin/. If you modify the installation path of Nebula Graph, use the actual path to query the configurations.
Get configurations¶
Use the
curl command to get the value of the running configurations.
Legacy version compatibility
The
curl commands and parameters in Nebula Graph v2.x. are different from Nebula Graph v1.x.
For example:
# Get the running configurations from Meta curl 127.0.0.1:19559/flags # Get the running configurations from Graph curl 127.0.0.1:19669/flags # Get the running configurations from Storage curl 127.0.0.1:19779/flags
Note
In an actual environment, use the real host IP address instead of
127.0.0.1 in the above example.
Configuration files¶
Nebula Graph provides two initial configuration files for each service,
<service_name>.conf.default and
<service_name>.conf.production. Users can use them in different scenarios conveniently. The default path is
/usr/local/nebula/etc/.
The configuration values in the initial configuration file are for reference only and can be adjusted according to actual needs. To use the initial configuration file, choose one of the above two files and delete the suffix
.default or
.production to make it valid.
Caution
To ensure the availability of services, the configurations of the same service must be consistent, except for the local IP address
local_ip. For example, three Storage servers are deployed in one Nebula Graph cluster. The configurations of the three Storage servers need to be the same, except for the IP address.
The initial configuration files corresponding to each service are as follows.
Each initial configuration file of all services contains
local_config. The default value is
true, which means that the Nebula Graph service will get configurations from its configuration files and start it.
Caution
It is not recommended to modify the value of
local_config to
false. If modified, the Nebula Graph service will first read the cached configurations, which may cause configuration inconsistencies between clusters and cause unknown risks.
Modify configurations¶
By default, each Nebula Graph service gets configurations from its configuration files. Users can modify configurations and make them valid according to the following steps:
Use a text editor to modify the configuration files of the target service and save the modification.
Choose an appropriate time to restart all Nebula Graph services to make the modifications valid. | https://docs.nebula-graph.io/2.6.1/5.configurations-and-logs/1.configurations/1.configurations/ | 2021-11-27T14:39:37 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.nebula-graph.io |
Dapp Developers
Complete information including easy tutorials you need to build, deploy, and manage apps on Polygon
Validators
Learn how to stake with Polygon, and setup you own nodes to maintain the network and earn rewards
Integration
Key information for projects looking to integrate with Polygon. Wallets, developer tools, oracles and more - get all the info you need | https://docs.polygon.technology | 2021-11-27T14:39:09 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.polygon.technology |
Home > Journals > RR > Vol. 3 (2007) > Iss. 1
Article Title
Interpreter of maladies: a commonplace for cultures
Abstract
Imagine living a double life – being pulled in all different directions, between your past and your present, your family and your friends, your two different cultures. Jhumpa Lahiri knows that double existence and shows individuals living it in her book Interpreter of Maladies. Interpreter of Maladies is a collection of short stories that focuses on Indian and American cultures and the people who get caught between the two.
Recommended Citation
Tetreault, Cora
(2008)
"Interpreter of maladies: a commonplace for cultures,"
Reason and Respect: Vol. 3
:
Iss.
1
, Article 9.
Available at: | https://docs.rwu.edu/rr/vol3/iss1/9/ | 2021-11-27T14:47:19 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.rwu.edu |
PLINK
From UABgrid Documentation
Latest revision as of 09:32, 4 April 2012
This page is a Generic stub.
You can help by expanding this page..
PLINK is a free, open-source whole genome association analysis toolset, designed to perform a range of basic, large-scale analysis in a computationally efficient manner.
The PLINK web site also has a tutorial section that users should read through.
Please see this page for PLINK citing instructions.
To load PLINK into your environment, use the following module command:
module load plink/plink
The following commands are available
- plink - The plink executable is the primary binary for this software. Click here for the command line reference.
- gplink - This is a java based GUI for PLINK that provides the following functionality:
- is a GUI that allows construction of many common PLINK operations
- provides a simple project management tool and analysis log
- allows for data and computation to be on a separate server (via SSH)
- facilitates integration with Haploview
Running gplink: You should NOT run gplink from the cheaha login node (head node), only from the compute nodes using the qrsh command. The qrsh command will provide a shell on a compute node complete with X forwarding. For example:
[jsmith@cheaha ~]$ qrsh Rocks Compute Node Rocks 5.1 (V.I) Profile built 13:06 21-Nov-2008 Kickstarted 13:13 21-Nov-2008 [jsmith@compute-0-10 ~]$ module load plink/plink [jsmith@compute-0-10 ~]$ gplink
You should see the gPLINK window open. If you get an error similar to "No X11 DISPLAY variable was set", make sure your initial connection to Cheaha had X forwarding enabled.
If you want to use the PLINK R plugin functionality, please see this page for instructions. You'll need to install the Rserve package to use the plugin, for example:
install.packages("Rserve")
|- |pvm |3.4.5 |/usr/bin/pvm |PVM3 (Parallel Virtual Machine) is a library and daemon that allows distributed processing environments to be constructed on heterogeneous machines and architectures. | https://docs.uabgrid.uab.edu/tgw/index.php?title=PLINK&diff=cur&oldid=3992 | 2021-11-27T15:51:39 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.uabgrid.uab.edu |
Import raw visit data with actions and visitor properties
The Import API allow you to bulk fill events and visitor properties into your Woopra instance in a single transaction.
This endpoint accepts an http body of line-separated JSON. This means, a single JSON object represented as a string (without the outer quotes) on each line. For Example:
{"visitor":{"email":"[email protected]"}, "actions":[{"time":1444077951001, "name":"purchase", "properties":{"price":20, "currency":"$"}]} {"visitor":{"email":"[email protected]", "username": "test2"}, "actions":[{"time":1444077951891, "name":"signup", "properties":{"campaign": true}}]} ...
Importing Visits
Each line in the body that you POST to /import should be a visit, a.k.a. session. The visit object should include a nested actions array, and a visitor object. At minimum, the visitor object should have an identifier so that Woopra knows to which person profile this visit belongs. The actions array is an array of event objects, each of which has a
name, a
time (in UNIX milliseconds), and a
properties object with the custom event properties.
It is very conceivable that a visitor will have multiple lines in this file as they have done a number of visits, each expressed on its own line, and each containing an array of actions performed on this visit.
Bulk Updating Visitor Properties
You can bulk update visitor properties without tracking any actions by only including visitor information on each line. So each line would omit the
actions array, and other visit properties, and look like:
{ "visitor": { "email": "<email>", "account_level": "enterprise", "property1": "prop1Value" } }
A note on generated visit properties
NOTE: while you cannot send generated visit properties when doing real-time tracking (becasue they are generated) the tracking servers do not generate these fields on imports, and thus, you can send them. Imports go straight into the events database as is (more or less) and the logic that we run before write time at the end of a normal session in real-time tracking, is not run in the case of imported sessions via this endpoint. They are written as is.
A note on Engagement
Similarly, the Woopra system does not run engagement on imported data. This means that labels, triggers, and other automations are not evaluated for imported events. | https://docs.woopra.com/reference/the-import-api | 2021-11-27T13:56:37 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.woopra.com |
Reasoning¶
OWL reasoners can be used to check the consistency of an ontology, and to deduce new fact in the ontology, typically be reclassing Individuals to new Classes, and Classes to new superclasses, depending on their relations.
Several OWL reasoners exist; Owlready2 includes:
- a modified version of the HermiT reasoner, developed by the department of Computer Science of the University of Oxford, and released under the LGPL licence.
- a modified version of the Pellet reasoner, released under the AGPL licence.
HermiT and Pellet are written in Java, and thus you need a Java Vitual Machine to perform reasoning in Owlready2.
HermiT is used by default.
Configuration¶
Under Linux, Owlready should automatically find Java.
Under windows, you may need to configure the location of the Java interpreter, as follows:
>>> from owlready2 import * >>> import owlready2 >>> owlready2.JAVA_EXE = "C:\\path\\to\\java.exe"
Setting up everything¶
Before performing reasoning, you need to create all Classes, Properties and Instances, and to ensure that restrictions and disjointnesses / differences have been defined too.
Here is an example creating a ‘reasoning-ready’ ontology:
>>> from owlready2 import * >>> onto = get_ontology("") >>> with onto: ... class Drug(Thing): ... def take(self): print("I took a drug") ... class ActivePrinciple(Thing): ... pass ... class has_for_active_principle(Drug >> ActivePrinciple): ... python_name = "active_principles" ... class Placebo(Drug): ... equivalent_to = [Drug & Not(has_for_active_principle.some(ActivePrinciple))] ... def take(self): print("I took a placebo") ... class SingleActivePrincipleDrug(Drug): ... equivalent_to = [Drug & has_for_active_principle.exactly(1, ActivePrinciple)] ... def take(self): print("I took a drug with a single active principle") ... class DrugAssociation(Drug): ... equivalent_to = [Drug &Different([acetaminophen, amoxicillin, clavulanic_acid]) >>> drug1 = Drug(active_principles = [acetaminophen]) >>> drug2 = Drug(active_principles = [amoxicillin, clavulanic_acid]) >>> drug3 = Drug(active_principles = []) >>> close_world(Drug)
Running the reasoner¶
The reasoner (HermiT) is simply run by calling the sync_reasoner() global function:
>>> sync_reasoner()
By default, sync_reasoner() places all inferred facts in a special ontology, ‘’. You can control in which ontology the inferred facts are placed using the ‘with ontology’ statement (remember, all triples asserted inside a ‘with ontology’ statement go inside this ontology). For example, for placing all inferred facts in the ‘onto’ ontology:
>>> with onto: ... sync_reasoner()
This allows saving the ontology with the inferred facts (using onto.save() as usual).
The reasoner can also be limited to some ontologies:
>>> sync_reasoner([onto1, onto2,...])
If you also want to infer object property values, use the “infer_property_values” parameter:
>>> sync_reasoner(infer_property_values = True)
To use Pellet instead of HermiT, just use the sync_reasoner_pellet() function instead.
In addition, Pellet also supports the inference of data property values, using the “infer_data_property_values” parameter:
>>> sync_reasoner(infer_property_values = True, infer_data_property_values = True)
Results of the automatic classification¶
Owlready automatically gets the results of the reasoning from HermiT and reclassifies Individuals and Classes, i.e Owlready changes the Classes of Individuals and the superclasses of Classes.
>>>.
Inconsistent classes and ontologies¶
In case of inconsistent ontology, an OwlReadyInconsistentOntologyError is raised.
Inconcistent classes may occur without making the entire ontology inconsistent, as long as these classes have no individuals. Inconsistent classes are inferred as equivalent to Nothing. They can be obtained as follows:
>>> list(default_world.inconsistent_classes())
In addition, the consistency of a given class can be tested by checking for Nothing in its equivalent classes, as follows:
>>> if Nothing in Drug.equivalent_to: ... print("Drug is inconsistent!")
Note
To debug an inconsistent ontology the
explain command of the Pellet reasoner can provide some useful information.
The output of this command is shown if for
sync_reasoner_pellet(...) the keyword argument
debug has a value >=2 (default is 1).
However, note that the additional call to
pellet explain might take more time than the reasoning itself.
Querying inferred classification¶
The .get_parents_of(), .get_instances_of() and .get_children_of() methods of an ontology can be used to query the hierarchical relations, limited to those defined in the given ontology. This is commonly used after reasoning, to obtain the inferred hierarchical relations.
- .get_parents_of(entity) accepts any entity (Class, property or individual), and returns the superclasses (for a class), the superproperties (for a property), or the classes (for an individual). (NB for obtaining all parents, independently of the ontology they are asserted in, use entity.is_a).
- .get_instances_of(Class) returns the individuals that are asserted as belonging to the given Class in the ontology. (NB for obtaining all instances, independently of the ontology they are asserted in, use Class.instances()).
- .get_children_of(entity) returns the subclasses (or subproperties) that are asserted for the given Class or property in the ontology. (NB for obtaining all children, independently of the ontology they are asserted in, use entity.subclasses()).
Here is an example:
>>> inferences = get_ontology("") >>> with inferences: ... sync_reasoner() >>> inferences.get_parents_of(drug1) [onto.SingleActivePrincipleDrug] >>> drug1.is_a [onto.has_for_active_principle.only(OneOf([onto.acetaminophen])), onto.SingleActivePrincipleDrug] | https://owlready2.readthedocs.io/en/latest/reasoning.html | 2021-11-27T14:48:24 | CC-MAIN-2021-49 | 1637964358189.36 | [] | owlready2.readthedocs.io |
Internal use only. Base class for diagnostic_updater::Updater and self_test::Dispatcher. The class manages a collection of diagnostic updaters. It contains the common functionality used for producing diagnostic updates and for self-tests.
Definition at line 145 of file _diagnostic_updater.py.
Definition at line 168 of file _diagnostic_updater.py.
Add a task to the DiagnosticTaskVector. Usage: add(task): where task is a DiagnosticTask add(name, fn): add a DiagnosticTask embodied by a name and function
Definition at line 179 of file _diagnostic_updater.py.
Allows an action to be taken when a task is added. The Updater class uses this to immediately publish a diagnostic that says that the node is loading.
Definition at line 172 of file _diagnostic_updater.py.
Removes a task based on its name. Removes the first task that matches the specified name. (New in version 1.1.2) @param name Name of the task to remove. @return Returns true if a task matched and was removed.
Definition at line 195 of file _diagnostic_updater.py.
Definition at line 170 of file _diagnostic_updater.py.
Definition at line 169 of file _diagnostic_updater.py. | http://docs.ros.org/en/kinetic/api/diagnostic_updater/html/classdiagnostic__updater_1_1__diagnostic__updater_1_1DiagnosticTaskVector.html | 2021-11-27T15:40:03 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.ros.org |
Using RapidDiag
The RapidDiag app is provided to assist the Splunk Administrator with collecting diagnostic information from one or more Splunk Enterprise instances simultaneously. What makes RapidDiag unique from the
diag command is the ability to use distributed search to run diagnostic collections across multiple nodes, while leveraging both operating system (OS) level tools and Splunk Enterprise tools to collect troubleshooting information.
When should I use RapidDiag?
RapidDiag offers a way to collect the data from OS-level tools or other sources automatically, and collect the results in one file. It is designed to ease data collection tasks when working with Splunk Support on troubleshooting an issue.. This allows RapidDiag collections to access to the search tier, indexers or cluster peers, and supporting nodes such as the cluster manager node.
- Manager node: The cluster manager node is configured with search access to the cluster peers.
- Search Head: A search head is configured with search access to the indexers or cluster peers.
The RapidDiag app includes command line support (CLI) and help. Use
splunk cmd rapidDiag -h to review the supported CLI commands. However, the CLI is for single instance use only..
Using a task template
In RapidDiag, a task template is a series of data collection tasks bundled together and named for their troubleshooting use case. The data collection tasks define OS and Splunk Enterprise tools used to collect the data. For example, the "File reading" template will generate multiple data collection tasks using the tools: for.
- Open RapidDiag.
- On the Task Templates page, select your indexers in the Peer Node dropdown.
- Choose the "Indexer Health" template. Select "Next."
- On the Review page, review the settings for the collectors.
- Select "Start Collecting."
- On the Task Manager page,.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/8.1.5/Troubleshooting/Rapiddiag | 2021-11-27T15:51:38 | CC-MAIN-2021-49 | 1637964358189.36 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Generally, when you import may show from the context menu.. | https://docs.kde.org/stable5/en/kmymoney/kmymoney/details.ledgers.match.html | 2021-11-27T15:09:17 | CC-MAIN-2021-49 | 1637964358189.36 | [array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)] | docs.kde.org |
WSO2 API Manager includes separate Web applications as the API Publisher and the API Store. The root context of the API Manager is set to go to the API Publisher by default. For example, assume that the API Manager is hosted on a domain named
apis.com with default ports. The URLs of the API Store and API Publisher will be as follows:
- API Store -
- API Publisher -
If you open the root context, which is in your browser, it directs to the API Publisher by default. You can set this to go to the API Store as follows:
- Open the bundle
<AM_HOME>/repository/components/plugins/
org.wso2.am.styles_1.x.x.jar.
- Open the
component.xmlfile that is inside
META-INFdirectory.
Change the
<context-name>element, which points to publisher by default, to store:
<context> <context-id>default-context</context-id> <context-name>store</context-name> <protocol>http</protocol> <description>API Publisher Default Context</description> </context>
- Compress the JAR and put it back in the
<API-M_HOME>/repository/components/pluginsdirectory.
- Restart the server.
Open the default context () again in a browser and note that it directs to the API Store.
Tip: If you want to configure the API Publisher and Store to pass proxy server requests, configure a reverse proxy server. | https://docs.wso2.com/pages/viewpage.action?pageId=97563879 | 2021-11-27T14:13:52 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.wso2.com |
HOTFIXES SOP
From time to time we have to quickly patch a problem or issue in applications in our infrastructure. This process allows us to do that and track what changed and be ready to remove it when the issue is fixed upstream.
Ansible based items:
For ansible, they should be placed after the task that installs the package to be changed or modified. Either in roles or tasks.
hotfix tasks should be called "HOTFIX description" They should also link in comments to any upstream bug or ticket. They should also have tags of hotfix
The process is:
Create a diff of any files changed in the fix.
Check in the _original_ files and change to role/task
Check in now your diffs of those same files.
ansible will replace the files on the affected machines completely with the fixed versions.
If you need to back it out, you can revert the diff step, wait and then remove the first checkin
Example:
<task that installs the httpd package> # # install hash randomization hotfix # See bug # - name: hotfix - copy over new httpd init script copy: src="{{ files }}/hotfix/httpd/httpd.init" dest=/etc/init.d/httpd owner=root group=root mode=0755 notify: - restart apache tags: - config - hotfix - apache | https://docs.fedoraproject.org/ru/infra/sysadmin_guide/hotfix/ | 2021-11-27T15:51:26 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.fedoraproject.org |
Register an Inventory Beacon
FlexNet Manager Suite 2020 R2 (On-Premises)
Registering the beacon sets up its communications to FlexNet Manager Suite.
Complete this process after installing the inventory beacon software.
Note: If you are installing a hierarchy of inventory beacons, so that some 'child' beacons report to 'parent' beacons rather than to the central application server(s), you must register them top down through the hierarchy. Parent beacons must be registered before their children. As well, for a parent inventory beacon, you must choose between:
You may implement your choice of web server only after you have registered
this inventory beacon.
- Using an IIS web server, or the built-in simplified web server, to manage communications with downstream devices (whether inventory devices or child inventory beacons)
- Using Windows authentication, anonymous authentication, or a local account on the parent inventory beacon to run the web service managing those communications.
To register an inventory beacon:
- Run the inventory beacon interface (for example, ).Tip: Remember that running the inventory beacon requires an account with administrator privileges.By default, the interface should first display the Parent connection page (linked from the Beacon configuration group in the navigation bar). Ensure that this page displays.
- Ensure the Enable parent connection check box is selected.This enables the controls in this page.
- Click Download configuration.A new window opens in your web browser, pre-populated with the unique identification (GUID) of this inventory beacon in the Unique ID field. (This means that each inventory beacon must download its own configuration file, and you cannot share configuration files between inventory beacons.)Tip: The URL used by the web browser is the one registered as part of the installation of the inventory beacon.
- Does this inventory beacon connect directly to the central application server(s), or does it report to another inventory beacon in your hierarchy?
- If this beacon reports to the central application server(s), skip the Parent beacon field, ensuring that it is empty.
- If this beacon reports to another inventory beacon in your hierarchy, identify that higher beacon in the Parent beacon field:
- If you already know its name, enter (part of) the name in the field; but if you are not sure, leave the field blank.
- Click .A fly-down lists the available inventory beacons (matching your text entry, if you used one).
- Ensure that the appropriate parent beacon is selected (with the check box on its left end), and click Select.The, and in the same tab, click Import configuration.
- Browse to the file you saved in step 9 , and click Open.The configuration file is loaded, and populates the connection details in the Parent connection page.
- Does this inventory beacon connect directly to the central application server (in which case, the Current parent drop-down shows Application server), or does it report to another inventory beacon in your hierarchy?
- For a child inventory beacon reporting to another inventory beacon, you must enter the User Name and Password credentials for the account you created on the parent that runs the web service managing uploads/downloads.
- For a top-level inventory beacon that connects directly to the central application server, the User Name and Password credentials have been automatically populated from the configuration file. Do not under any circumstances modify these values (or any others in this dialog). Should the values become corrupted, you can repeat this process, being certain to import the new configuration file (which will contain a different password for the application server).
- When the configuration details are changed, the inventory beacon runs a background check on the connection, and displays the results on the page. If there are problems listed, you can address these and (if required) click Test connection to retry.This testing validates the downloaded credentials and the communication channel. When the connection test displays success, your beacon is registered and this process is completed. See concluding comments below about next steps.
-.
- Is a 'parent' inventory beacon through which others are to upload, or
- Is to collect inventory uploaded by installed instances of FlexNet inventory agent, or by the zero footprint inventory collection method (as defined in Gathering FlexNet Inventory, available through the title page of online help)
FlexNet Manager Suite (On-Premises)
2020 R2 | https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/tasks/ConfigureInventoryBeacon.html | 2021-11-27T15:20:47 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.flexera.com |
Updating a Dashboard Template
FlexNet Manager Suite 2020 R2 (On-Premises)
If any updates are required on a dashboard template, you can update a dashboard template by replacing a previously created dashboard template with your current personal Management Dashboard. Updating a dashboard template requires a role with the access right Configure FlexNet Manager Platform properties (located under the Administration section of when editing a role on the page).
Note: You can update any dashboard template (except the out of the box FNMS Default Dashboard), even if the dashboard template was not created by you. Also, updating a dashboard template does not affect any users who have used the dashboard template in their own Management Dashboard. Users need to load the new dashboard template into their Management Dashboard if they want your latest changes.
To update a dashboard template:
- Click the Manage Dashboard Templates button.The Manage Dashboard Templates dialog appears.
- From the drop-down list, click Save this dashboard as an existing template.
- From the Template Name drop-down list, select the relevant dashboard template.
- To update in the template name drop-down, click Save.
FlexNet Manager Suite (On-Premises)
2020 R2 | https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/tasks/MgmtDash-UpdatingDashTemplate.html | 2021-11-27T15:20:11 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.flexera.com |
Depending on which kind of data you need to work with in Gekkobrain, you will need an extractor to help you with this task. An extractor is a small piece of ABAP code you will need in your SAP system. We have gathered all Gekkobrain extractors in one transport, in order to make sure that you have all the tools needed in your system. You will need to download a specific transport depending on your SAP release.
You can monitor any ABAP system in your landscape. Typically you might start up with your ECC system, but you can also target other SAP systems as well. For example if you have a separate HCM, CRM or BI system or even a PI/PO system and if those are ABAP stacks you can set those up as well. The only requirement is that they are have the SAP_ABA application component installed and that the version of SAP_ABA is supported by the extractors. The versions we support is seen in the list of available packages/transport files for download in the download area.
You locate the download area in the menu.
Download the appropriate zip-package depending on your SAP version. It contains a co-file and a data file just like a usual externally defined transport.
You add the transport files to your Transportation Management System by adding the 2 files to the /cofiles/ and /datafiles/ subdirectories of your SAP development systems filesystem. Then proceed to import them using transaction STMS. The downloaded packages should be uploaded to your SAP Development or Sandbox system.
Its important to note that none of the extractors that you install needs to be imported to your productive system. Its a flexibility that many appreciate because it allows for the many extractors to be upgraded and deinstalled more easily should you wish to discontinue using Gekkobrains software. Its also normal for customers to start off not transporting the software to production but as a release date for production import comes up, then release the software to production.
There are operational concerns for any SAP installation and Gekkobrain offers this flexibility to adapt to any companys policy regarding third party software.
Not deploying the transports all the way to production means that an RFC connection from Dev to production, your Ops system, is required for the extractor to be able to function. RFC and local mode are detailed later in the documentation.
In order to download the Extract Framework, you need to download a package that will match the version of your SAP system.
Depending on which kinds of project you run in Gekkobrain - you should import the following transport: | https://docs.gekkobrain.com/doc_extractor/ | 2021-11-27T13:35:19 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.gekkobrain.com |
AL Table Proxy Generator
Note
Effective November 2020:
- Common Data Service has been renamed to Microsoft Dataverse. Learn more
- Some terminology in Microsoft Dataverse has been updated. For example, entity is now table and field is now column. Learn more
The AL Table Proxy Generator tool can be used to generate one or more tables for integration with Microsoft Dataverse. When one or more tables are present in Microsoft Dataverse, but not in Dynamics 365 Business Central, the tool can be run to generate integration or proxy tables for the specified table or tables.
An integration or proxy table is a table that represents a table in Microsoft Dataverse. The integration table includes fields that correspond to columns in the Microsoft Dataverse table. The integration table acts as a link or connector between the Business Central table and the Microsoft Dataverse table.
Note
Microsoft Dataverse and Business Central store dates in different formats. In Business Central, all users see the same date across all time zones, whereas Microsoft Dataverse-based apps render the dates based on the current user's time zone.
The AL Table Proxy Generator tool does not support time zones for dates and converts dates from Microsoft Dataverse to the Business Central format.
The AL Table Proxy Generator tool is available with the AL Language extension. Look for the altpgen.exe tool in the equivalent folder of
c:\users\<username>\.vscode\extensions\<al extension version>\bin.
Generating proxy tables
- Start Windows PowerShell as an administrator.
- From the command prompt, write
.\altpgen.exefollowed by the parameters as described below.
-Project -PackageCachePath -ServiceURI -Entities -BaseId -[TableType]
- The table or tables are generated in the folder of the specified AL project.
Parameters
Specifying tables
The
Entities parameter specifies the logical names of the table(s) to create in AL. To know which ones to specify you need to check the main table relationships in Microsoft Dataverse. For more information, see Table relationships overview. You specify all tables that you want created, including the related tables, in the
Entities parameter separated by commas.
Related tables
An example could be, that you want to generate an AL proxy table for the CDS Worker Address (cdm_workeraddress).
If you run the altpgen tool and only specify
cdm_workeraddress, the tool will not generate the
Worker lookup field, because no related table
Worker is specified.
If you, in the
Entities parameter specify
cdm_workeraddress, cdm_worker, the
Worker lookup field will be generated. Furthermore, if your symbols contain the
cdm_worker table definition, the
Worker table will not be created as it's already in your symbols. If your symbols do not contain the
cdm_worker table, the
Worker table will be created together with the
Worker Address table.
Creating a new integration table
The following example starts the process for creating a new integration table in the specified AL project. When complete, the output path contains the Worker.al file that contains the description of the 50000 CDS Worker integration table. This table is set to the table type CDS.
.\altpgen -project:"C:\myprojectpath" -packagecachepath:"C:\mypackagepath" -serviceuri:"" -entities:cdm_worker,cdm_workeraddress -baseid:50000 -tabletype:CDS
See Also
Custom Integration with Microsoft Dataverse | https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/devenv-al-table-proxy-generator | 2021-11-27T16:24:17 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.microsoft.com |
As you create or edit your vRealize Automation Cloud cloud templates, use the most appropriate security resource options to meet your objectives.
Cloud agnostic security group resource
Cloud.SecurityGroupresource type. The default resource displays as:
Cloud_SecurityGroup_1: type: Cloud.SecurityGroup properties: constraints: [] securityGroupType: existing
You specify a security group resource in a cloud template design as either existing (
securityGroupType: existing) or on-demand (
securityGroupType: new).
You can add an existing security group to your cloud template or you can use an existing security group that has been added to a network profile.
For NSX-V and NSX-T, as well as NSX-T with the policy manager switch enabled in combination with VMware Cloud on AWS, you can add an existing security group or define a new security group as you design or modify your cloud template. On-demand security groups are supported for NSX-T and NSX-V, and VMware Cloud on AWS when used with NSX-T policy manager.
For all cloud account types except Microsoft Azure, you can associate one or more security groups to a machine NIC. A Microsoft Azure virtual machine NIC (machineName) can only be associated to one security group.
By default, the security group property
securityGroupType is set to
existing. To create an on-demand security group, enter
new for the
securityGroupType property. To specify firewall rules for an on-demand security group, use the
rules property in the
Cloud.SecurityGroup section of the security group resource.
Existing security groups
Existing security groups are created in a source cloud account resource such as NSX-T or Amazon Web Services. They are data collected by vRealize Automation Cloud from the source. You can select an existing security group from a list of available resources as part of a vRealize Automation Cloud network profile. In a cloud template design, you can specify an existing security group either inherently by its membership in a specified network profile or specifically by name using the
securityGroupType: existing setting in a security group resource. If you add a security group to a network profile, add at least one capability tag to the network profile. On-demand security group resources require a constraint tag when used in a cloud template design.
You can associate a security group resource in your cloud template design to one or more machine resources.
On-demand security groups
You can define on-demand security groups as you define or modify a cloud template design by using the
securityGroupType: new setting in the security group resource code.
You can use an on-demand security group for NSX-V and NSX-T, as well as Amazon Web Services when used with NSX-T Policy type, to apply a specific set of firewall rules to a networked machine resource or set of grouped resources. Each security group can contain multiple named firewall rules. You can use an on-demand security group to specify services or protocols and ports. Note that you can specify either a service or a protocol but not both. You can specify a port in addition to a protocol. You cannot specify a port if you specify a service. If the rule contains neither a service or a protocol, the default service value is Any.
You can also specify IP addresses and IP ranges in firewall rules. Some firewall rule examples are shown in Networks, security resources, and load balancers in vRealize Automation Cloud.
- Allow (default) - Allows the network traffic that is specified in this firewall rule.
- Deny - Blocks the network traffic that is specified in this firewall rule. Actively tells the client that the connection is rejected.
- Drop - Rejects the network traffic that is specified in this firewall rule. Silently drops the packet as if the listener is not online.
access: Allowand an
access: Denyfirewall rule, see Networks, security resources, and load balancers in vRealize Automation Cloud.
Firewall rules support either IPv4 or IPv6 format CIDR values for source and destination IP addresses. For an example design that uses IPv6 CIDR values in a firewall rule, see Networks, security resources, and load balancers in vRealize Automation Cloud.
On-demand and existing security groups for VMware Cloud on AWS
You can define an on-demand security group for a VMware Cloud on AWS machine in a cloud template by using the
securityGroupType: new setting in the security group resource code.
resources: Cloud_SecurityGroup_1: type: Cloud.SecurityGroup properties: name: vmc-odsg securityGroupType: new rules: - name: datapath direction: inbound protocol: TCP ports: 5011 access: Allow source: any
You can also define an existing security group for a networked VMware Cloud on AWS machine and optionally include constraint tagging, as shown in the following examples:
Cloud_SecurityGroup_2: type: Cloud.SecurityGroup properties: constraints: [xyz] securityGroupType: existing
Cloud_SecurityGroup_3: type: Cloud.SecurityGroup properties: securityGroupType: existing constraints: - tag: xyz
- If a security group is associated with one or more machines in the deployment, a delete action displays a message stating that the security group cannot be deleted.
- If a security group is not associated with any machine in the deployment, a delete action displays a message stating that the security group will be deleted from this deployment and the action cannot be undone. An existing security group is deleted from the cloud template, while an on-demand security group is destroyed.
Using NSX-V security tags and NSX-T VM tags
You can see and use NSX-V security tags and NSX-T and NSX-T with Policy VM tags from managed resources in vRealize Automation Cloud cloud templates.
NSX-V and NSX-T security tags are supported for use with vSphere. NSX-T security tags are also supported for use with VMware Cloud on AWS.
As with VMs deployed to vSphere, you can configure machine tags for a VM to be deployed on VMware Cloud on AWS. You can also update the machine tag after initial deployment. These machine tags allow vRealize Automation Cloud to dynamically assign a VM to an appropriate NSX-T security group during deployment.
key: nsxSecurityTagand a tag value in the compute resource in the cloud template, as shown in the following example, provided that the machine is connected to an NSX-V network:
tags: - key: nsxSecurityTag - value: security_tag_1 - key: nsxSecurityTag - value: security_tag_2
The specified value must correspond to an NSX-V security tag. If there are no security tags in NSX-V that match the specified
nsxSecurityTag key value, the deployment will fail.
NSX-V security tagging requires that the machine is connected to an NSX-V network. If the machine is connected to a vSphere network, the NSX-V security tagging is ignored. In either case, the vSphere machine is also tagged.
NSX-T does not have a separate security tag. Any tag specified on the compute resource in the cloud template results in the deployed VM being associated with all tags that are specified in NSX-T. For NSX-T, including NSX-T with Policy, VM tags are also expressed as a key value pair in the cloud template. The
key setting equates to the
scope setting in NSX-T and the
value setting equates to the
Tag Name specified in NSX-T.
To avoid confusion, do not use a
nsxSecurityTag key pairs when for NSX-T. If you specify an
nsxSecurityTag key value pair for use with NSX-T, including NSX-T with Policy, the deployment creates a VM tag with an empty Scope setting with a Tag name that matches the
value specified. When you view such tags in NSX-T, the Scope column will be empty.
Using app isolation policies in on-demand security group firewall rules
You can use an app isolation policy to only allow internal traffic between the resources that are provisioned by the cloud template. With app isolation, the machines provisioned by the cloud template can communicate with each other but cannot connect outside the firewall. You can create an app isolation policy in the network profile. You can also specify app isolation in a cloud template design by using an on-demand security group with a Deny firewall rule or a private or outbound network.
An app isolation policy is created with a lower precedence. If you apply multiple policies, the policies with the higher weight will take precedence.
When you create an application isolation policy, an auto-generated policy name is generated. The policy is also made available for reuse in other cloud template designs and iterations that are specific to the associated resource endpoint and project. The app isolation policy name is not visible in the cloud template but it is visible as a custom property on the project page () after the cloud template design is deployed.
For the same associated endpoint in a project, any deployment that requires an on-demand security group for app isolation can use the same app isolation policy. Once the policy is created, it is not deleted. When you specify an app isolation policy, vRealize Automation Cloud searches for the policy within the project and relative to the associated endpoint - If it finds the policy it reuses it, if it does not find the policy, it creates it. The app isolation policy name is only visible after its initial deployment in the project's custom properties listing.
Using security groups in iterative cloud template development
- In the Cloud Assembly template designer, detach the security group from all its associated machines in the cloud template.
- Redeploy the template by clicking Update an existing deployment.
- Remove the existing security group constraint tags and/or securityGroupType properties in the template.
- Add new security group constraint tags and/or securityGroupType properties in the template.
- Associate the new security group constraint tags and/or securityGroupType property instances to the machines in the template.
- Redeploy the template by clicking Update an existing deployment.
Available day 2 operations
For a list of common day 2 operations that are available for cloud template and deployment resources, see What actions can I run on Cloud Assembly deployments.
Learn more
For information about using a security group for network isolation, see Security resources in vRealize Automation Cloud.
For information about using security groups in network profiles, see Learn more about network profiles in vRealize Automation Cloud and Using security group settings in network profiles and cloud template designs in vRealize Automation Cloud.
For examples of using security groups in cloud templates, see Networks, security resources, and load balancers in vRealize Automation Cloud. | https://docs.vmware.com/en/VMware-Cloud-Assembly/services/Using-and-Managing/GUID-6058C1F7-4761-470F-84EC-FCFEF0A68D56.html | 2021-11-27T15:32:55 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.vmware.com |
The example vRealize Automation Cloud Assembly project enables the users who can provision, and configures how much provisioning is possible.
Projects define the user and provisioning settings.
- Users and their role level of permission
- Priority for deployments as they are being provisioned to a cloud zone
- Maximum number of deployment instances per cloud zone
Procedure
- Go to .
- Click New Project, and enter the name WordPress.
- Click Users, and click Add Users.
- Add email addresses and roles for the users.
To successfully add a user, a VMware Cloud Services administrator must have enabled access to vRealize Automation Cloud Assembly for the user.
Remember that addresses shown here are only examples.
- [email protected], Member
- [email protected], Member
- [email protected], Administrator
- Click Provisioning, and click Add Cloud Zone.
- Add the cloud zones that the users can deploy to.
- Click Create.
- Go to , and open a zone that you created earlier.
- Click Projects, and verify that WordPress is a project that is allowed to provision to the zone.
- Check the other zones that you created.
What to do next
Create a basic cloud template. | https://docs.vmware.com/en/vRealize-Automation/8.6/Using-and-Managing-Cloud-Assembly/GUID-16E8D4F1-465D-4792-B61B-4D25A59DC54F.html | 2021-11-27T14:45:33 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.vmware.com |
Changelog
Note
Changes that are in the devel branch but not yet released are marked as DRAFT.
Ksconf 0.9
Highlights:
- Last version to support Python 2! It’s time.
API Changes
- Removed match_bwlist(). FilteredList and derived classes should be used instead.
- Updated interface for compare_cfgs and compare_stanzas. (1) Removed the preserve_empty parameter and (2) Replaced the awkwardly named allow_level0 parameter with a new replace_level attribute that can be set to global, stanza, or key. This new option can be used to control the level of detail in the output.
Ksconf v0.9.0 (2021-08-12)
Features & Enhancements:
- Add new --keep-existing option for ksconf combine to preserve certain files that exist within the target directory but not within any source. Similarly, the new --disable-cleanup option will prevent any files from being removed. This is useful, for example, if using ksconf combine to write apps into deployment-apps, where Splunk automatically creates a local app.conf file, and the deletion and recreation of that file can result in unnecessary app re-deployments. These new options can be used together; for example, one useful pattern is to use --disable-cleanup to block all removal while perfecting/testing --keep-existing patterns (a combined usage sketch follows this list).
- Add support for previewing stanza changes with ksconf promote by combining the --stanza X and --summary options at the same time. Thanks to guilhemmarchand for the suggestion. (#89)
- New CLI args for ksconf diff. (1) New --detail option to specify how to handle certain 'replace' levels, which impacts the way certain changes are represented. (2) New --format json for a more parseable output format. Note: This json format shouldn't be considered stable at this time. If you have ideas about how this could be used, please reach out.
- Allow enabling/disabling TTY colors via an environment variable. The new --disable-color option will disable color; to disable more widely, add something like export KSCONF_TTY_COLOR=off to your bashrc profile or Windows environment variables.
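Below is a rough, combined sketch of the new options above. It is not taken from the official ksconf documentation; the app name org_app, the paths, the stanza name, and the exact pattern given to --keep-existing are assumptions, so check each command's --help for authoritative usage:

    # Combine layered sources into deployment-apps, keeping the local/app.conf
    # that Splunk itself creates in the target; skip all cleanup while testing.
    ksconf combine src/org_app.d/10-common src/org_app.d/20-prod \
        --target /opt/splunk/etc/deployment-apps/org_app \
        --keep-existing local/app.conf \
        --disable-cleanup

    # Preview what promoting a single stanza would change, without writing anything.
    ksconf promote --summary --stanza "my_saved_search" \
        org_app/local/savedsearches.conf org_app/default/savedsearches.conf

    # Machine-readable diff output (see --help for the values accepted by --detail).
    ksconf diff --format json org_app/default/props.conf org_app/local/props.conf

    # Turn off TTY colors everywhere instead of passing --disable-color each time.
    export KSCONF_TTY_COLOR=off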
Bug fixes:
- Fixed layer detection bugs for dir.d mode for layers. (1) Layers that weren't immediately under the source directory were not detected, and (2) layers existing beyond a symlink were not detected. This change was targeted at ksconf combine but may fix other similar issues.
- Fixed #91, where ksconf diff wouldn't correctly handle empty stanzas in the second input file (reversing the order would sometimes work around the issue). This was resolved by enabling some improved empty stanza handling in the conf comparison algorithms that were updated back in 0.7.10 but never globally applied.
Documentation improvements
- New git tip: Use a gitdir: pointer to relocate the .git dir to avoid replicating it when a directory like master-apps is a git working copy (see the sketch after this list).
- Additional quick use case in the cheatsheet page. Demonstrate how ksconf could be used to list all "apps" present on a deployment server from the serverclass.conf file.
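A hedged sketch of the gitdir: pointer approach mentioned above; the paths are made-up examples and assume master-apps is already a git working copy:

    # Relocate the repository metadata out of the working copy.  git leaves a
    # one-line .git *file* behind pointing at the real location, so anything that
    # copies master-apps no longer replicates the whole .git directory.
    cd /opt/splunk/etc/master-apps
    git init --separate-git-dir /opt/git-meta/master-apps.git
    cat .git
    # gitdir: /opt/git-meta/master-apps.git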
API Change:
- Replaced use of match_bwlist() with the FilteredListSplunkGlob class, which allows old code to be cleaned up and, technically, there are some expanded capabilities because of this (like many filters now supporting glob-style wildcard syntax, but this hasn't been documented and may be left as an Easter egg; because who reads changelogs?)
- Dropped tty_color(), which had already been replaced with the TermColor class.
Ksconf 0.8
Highlights:
- New command ksconf package is designed for both Splunk developers and admins
- New module ksconf.builder helps build Splunk apps using a pipeline; or when external Python libraries are bundled into an app
- Legit layer support with built-in layer filtering capabilities is available in several commands
- Python 3! Head’s up: We’ll be dropping support for Python 2 in an upcoming release
Note
Come chat about ksconf on GitHub discussions even if it’s to say we should use some other forum to stay in touch.
What’s new:
- The new ksconf package command supports the creation of Splunk app .spl files from a source directory. The package command can be used by admins to transfer apps around an organization, while keeping the local folder intact, or by a developer who wants local to be automatically merged into default. The app version can be set based on the latest git tag by simply saying --set-version={{git_tag}} (a usage sketch follows this list).
- The ksconf.builder Python module is a API-only first for ksconf! This build library allow caching of expensive deterministic build operations, and has out-of-the-box support for frequent build steps like adding Python modules locally using
pip. As the first feature with no CLI support, I’m exceeded to get input from the broader community on this approach. Of course this is just an experimental first release. As always, feedback welcome!
- Native support for layers! It’s official, layers are now a proper ksconf feature, not just an abstract concept that you could throw together yourself given enough time and effort. This does mean that ksconf has to be more opinionated, but the design supports switching layer methods, which can be extended over time to support new different strategies as they emerge and are embraced by the community. Supports layers filtering as a native feature. This has always been technically possible, but awkward to implement yourself. Layer support is currently available in ksconf combine and ksconf package commands.
- Moving to Python 3 soon. In preparation for the move to Python 3, I’ve added additional backport libraries to be installed when running Python 2. Support for Python 2 will be dropped in a future release, and anyone still on Splunk 7 who can’t get a Python 3 environment will have to use an older version of ksconf. Also note that when jumping to Python 3, we will likely be requiring Python 3.6 or newer right out of the gate. (This means dropping Python 2.7, 3.4 and 3.5 all at the same time.) Whoohoo for f-strings!
- CLI option abbreviation has been disabled. This could be a breaking change for existing scripts. Hopefully no one was relying on this already, but in order to prevent long-term CLI consistency issues as new CLI arguments are added, this feature has been disabled for all version of Python. This feature is only available, and was enabled by default, starting in Python 3.5.
- Removed insensitive language. Specifically the terms ‘whitelist’ and ‘blacklist’ have been replaced, where possible. Fortunately, these terms were not used in any CLI arguments, so there should be no user-facing changes as a result of this.
- Removed support for building a standalone executable (zipapp). This packaging option was added in v0.4.3, and deprecated in v0.6.0 once the Splunk app install option became available. I’m pretty sure this won’t be missed.
API Changes
- NEW API
ksconf.builderThe documentation for this module needs work, and the whole API should be considered quite experimental. The easiest way to get started is to look at the Build Example.
- NEW Context manager
update_conf. This enables super easy conf editing in Python with just a few lines of code. See docs API docs for a usage example.
Developer changes:
- Formatting via autopep8 and isort (enforced by pre-commit)
- Better flake8 integration for bulk checking (run via:
tox -e flake8,flake8-unittest)
Ksconf v0.8.7 (2020-04-29)¶
- Support combining
*.conf.specfiles in
ksconf combine, thus allowing
README.dto be it’s own layer.
- Fixed potential
unarchiveissue with older version of git where
git add --all DIRis more explicit, but equivalent to the modern day,
git add DIR.
Ksconf v0.8.6 (2020-04-20)¶
- Fixed
install.pySplunk app CLI install helper script to support referencing a specific version of Python. This is needed on Splunk 8.0 if you’d like to use Python 3 (or Splunk 8.1 if you want to use Python 2.7, but please don’t.) I suppose this would also work with using a custom Python interpreter other than the ones Splunk ships with, but then why not install with
pip, right? (Thanks to guilhem.marchand for bringing this issue to my attention.)
- Updated docs regarding changes to the use of
install.pyand fixed a bunch of spelling mistakes and other minor doc/comment tweaks.
- Fixed ASCII art issue.
Ksconf v0.8.5 (2020-04-07)¶
- Fixed packaging issue where external dependencies were missing. This doesn’t impact the Splunk package install, or anyone running Python 3.6 or later.
Ksconf v0.8.4 (2020-03-22)¶
- CLI change: Replaced short option for
--allowlistto be
-a, before it was
-w. I assume this was left over early development where the argument was initial called
--whitelist, but at this point
-wis just confusing. Normally, I’d keep
-wfor a period of time and issue a deprecation warning. However, given that 0.8.0 was released less than a week ago, and that ksconf package is an “alpha” feature, I’m going to make this change without prior warning.
- Add some safety checks to the package command to check for app naming issues (where the app folder doesn’t match
[package] idvalue in
app.conf), and hidden files and directories.
- Add new
{{app_id}}variable that’s usable with the ksconf package command.
- Added a new optional argument to
copy_files()called
targetfor additional control over the destination path of artifacts copied into the build folder.
- Minor tweak to unhandled exceptions. The name of the exception class is now show, and may be helpful in some situations.
- When using
make_missingin
update_conf, missing directories will now be created too.
- Additional fixes to the Ksconf for Splunk App
build.pyscript: Now explicitly creating a top-level
ksconffolder. It’s likely that this was the root cause of several other issues.
Ksconf v0.8.3 (2021-03-20)¶
- Fixed bugs created by v0.8.2 (yanked on pypi)
- Properly resolved issues with Splunk app building process.
- Open issue uncovered where
ksconf packagecan produce a tarball that’s unusable by Splunkbase.
Ksconf v0.8.1 (2021-03-20)¶
- Fixed some build issues with the Splunk app. (The splunk app is now built with
ksconf packageand the
ksconf.builder)
- Minor doc fix up; you know, the stuff typically found minutes after any new release :-)
Ksconf v0.8.0 (2021-03-19)¶
In addition to the 0.8 summary above, 0.8.0 specifically includes the following changes:
- Add automatic layer support. Currently the two supported layer schemes are (1) explicit layers (really this will
disableautomatic layer detection), and (2) the
dir.dformat which uses the
default.d/##-layer-namestyle directory support, which we previously promoted in the docs, but never really fully supported in a native way. This new
dir.ddirectory layout support also allows for multiple
*.dfolders in a single tree (so not just
default.d), and if your apps have different layer-points in different apps, it’s all handled transparently.
- Layer selection support was added to the
combinecommand. This allows you to
--includeand
--excludelayers as you see fit. See the docs for more details and examples of this new functionality. This works for both the new
dir.ddirectories and the explicit layers, though moving to the
dir.dformat is highly encouraged.
- New cheatsheet example: Using
ksconf packageand
splunk install apptogether.
- Updated the combine behavior to optimize for the situation where there is only a single conf input file provided. This behavior leaves any
.confor
.metafile untouched so there’s no sorting/normalizing or banner. See #64.
- Eliminated an “unknown command” error when one of the ksconf python modules has a SyntaxError. The new behavior isn’t perfect (you may still see “unrecognized arguments”), but overall it’s still a step in the right direction.
Ksconf 0.7.x¶
New functionality, massive documentation improvements, metadata support, and Splunk app install fixes.
Release v0.7.10 (2021-03-19)¶
- Fixed bug where empty stanzas in the local file could result in deletion in default with
ksconf promote. Updated diff interface to improve handling of empty stanzas, but wider support is still needed across other commands; but this isn’t a high priority.
Release v0.7.9 (2020-09-23)¶
- Fixed bug where empty stanzas could be removed from
.conffiles. This can be detrimental for
capability::*entries in
authorize.conf, for example. A big thanks to nebffa for tracking down this bug!
Release v0.7.8 (2020-06-19)¶
- New automatic
promotemode is now available using CLI arguments! This allows stanzas to be selected for promotion from the CLI in batch and interactive modes. This implementation borrows (and shares code) with the
ksconf filtercommand so hopefully the CLI arguments look familiar. It’s possible to promote a single stanza, a stanza wildcard, regex or invert the matching logic and promote everything except for the named stanza (blocklist). Right now
--stanzais the only supporting matching mode, but more can be added as needed. A huge thanks to mthambipillai for providing a pull-request with an initial implementation of this feature!
- Added a new summary output mode (
ksconf promote --summary) that will provide a quick summary of what content could be promoted. This can be used along side the new
--stanzafiltering options to show the names of stanzas that can be promoted.
- Replaced insensitive terminology with race-neutral terms. Specifically the terms ‘blacklist’ and ‘whitelist’ have been replaced. NOTE: This does not change any CLI attributes, but in a few cases the standard output terminology is slightly different. Also terminology in
.conffiles couldn’t be updated as that’s controlled by Splunk.
- Fixed bug in the
unarchivecommand where a
localefolder was blocked as a
localfolder and where a nested
defaultfolder (nested under a Python package, for example) could get renamed if
--default-dirwas used, now only the top-most
defaultfolder is updated. Also fixed an unlikely bug triggered when
default/app.confis missing.
- Fixed bug with
minimizewhen the required
--targetargument is not given. This now results in a reminder to the user rather than an unhandled exception.
- Splunk app packaging fix. Write access to the app was previously not granted due to a spelling mistake in the metadata file.
Release v0.7.7 (2020-03-05)¶
- Added new
--follow-symlinkoption to the
combinecommand so that input directory structures with symbolic links can be treated the same as proper directories.
- Corrected Windows issue where wildcard (glob) patterns weren’t expanded by for
checkand
sort. This is primarily a difference in how a proper shells (e.g., bash, csh, zsh) handle expansion natively vs CMD on Windows does not. However, since this is typically transparently handled by many CLI tools, we’ll follow suite. (BTW, running ksconf from the GIT Bash prompt is a great alternative.) Only the most minimalistic expansion rules will be available, (so don’t expect
{props,transforms,app}.confto work anytime soon), but this should be good enough for most use cases. Thanks to SID800 for reporting this bug.
- Fixed issues with the
unarchivecommand when
gitis not installed or an app is being unarchived (installed/upgrade) into a location not managed by Git. Note that additional output is now enabled when the
KSCONF_DEBUGenvironmental variable is set (in lieu of a proper verbose mode). Bug report provided by SID800.
- Enhanced
ksconf --versionoutput to include Git executable path and version information; as well as a platform dump. (Helpful for future bug reporting.)
- Added feature to disable the marker file (safety check) automatically created by the
combinecommand for use in automated processing workflows.
- Updated
pre-commitdocumentation and sample configurations to use
revrather than
shaas the means of identifying upstream tags or revisions. Recent releases of
pre-commitwill warn you about this during each run.
- Fixed a temporary file cleanup issue during certain in-place file replacement operations. (If you found any unexpected
*.tmpfiles, this could have been the cause.)
Release v0.7.6 (2019-08-15)¶
- Fresh review and cleanup of all docs! (A huge thank you to Brittany Barnett for this massive undertaking)
- Fixed unhandled exception when encountering a global stanza in metadata files.
- Expand some error messages, sanity checks, and added a new session token (
--session-key) authentication option for
rest-publish.
Release v0.7.5 (2019-07-03)¶
- Fixed a long-term bug where the diff output of a single-line attribute change was incorrectly represented in the textual output of ‘ksconf diff’ and the diff output in other commands. This resolves a combination of bugs, the first half of which was fixed in 0.7.3.
- Allow
make_docsscript to run on Windows, and other internal doc build process improvements.
Release v0.7.4 (2019-06-07)¶
- Inline the
sixmodule to avoid elusive bootstrapping cases where the module couldn’t be found. This primarily impacts
pre-commitusers. The
ksconf.ext.*prefix is being used for this, and any other inlined third party modules we may need in the future.
- Other minor docs fixes and internal non-visible changes.
Release v0.7.3 (2019-06-05)¶
- Added the new ksconf xml-format command.
- The
ksconf xml-formatcommand brings format consistency to your XML representations of Simple XML dashboards and navigation files by fixing indentation automatically adding
<![CDATA[ ... ]]>blocks, as needed, to reduce the need for XML escaping, resulting in more readable source.
- Additionally, a new pre-commit hook named ksconf-xml-format was added to leverage this new functionality. It looks specifically for xml views and navigation files based on path. This may also include Advanced XML, which hasn’t been tested; So if you use Advanced XML, proceed with caution.
- Note that this adds
lxmlas a packaging dependency which is needed for pre-commit hooks, but not strictly required at run time for other ksconf commands. This is NOT ideal, and may change in the future in attempts to keep ksconf as light-weight and standalone as possible. One possible alternative is setting up a different repo for pre-commit hooks. Python packaging and distribution tips welcome.
- Fixed data loss bug in
promote(interactive mode only) and improved some UI text and prompts.
- Fixed colorization of
ksconf diffoutput where certain lines failed to show up in the correct color.
- Fixed bug where debug tracebacks didn’t work correctly on Python 2.7. (Enable using
KSCONF_DEBUG=1.)
- Extended the output of
ksconf --versionto show the names and version of external modules, when present.
- Improved some resource allocation in corner cases.
- Tested with Splunk 7.3 (numeric similarity in version numbers is purely coincidental)
Attention
API BREAKAGE
The
DiffOp output values for
DIFF_OP_INSERT and
DIFF_OP_DELETE have been changed in a backwards-compatible breaking way.
The values of
a and
b were previously reversed for these two operations, leading to some code confusion.
Release v0.7.2 (2019-03-22)¶
- Fixed bug where
filterwould crash when doing stanza matching if global entries were present. Global stanzas can be matched by searching for a stanza named
default.
- Fixed broken
pre-commitissue that occurred for the
v0.7.1tag. This also kept
setup.pyfrom working if the
sixmodule wasn’t already installed. Developers and pre-commit users were impacted.
Release v0.7.1 (2019-03-13)¶
- Additional fixes for UTF-8 BOM files which appear to happen more frequently with
localfiles on Windows. This time some additional unit tests were added so hopefully there are few regressions in the future.
- Add the
ignore-missingargument to ksconf merge to prevent errors when input files are absent. This allows bashisms
Some_App/{{default,local}}/savedsearches.confto work without errors if the local or default file is missing.
- Check for incorrect environment setup and suggest running sourcing
setSplunkEnvto get a working environment. See #48 for more info.
- Minor improvements to some internal error handling, packaging, docs, and troubleshooting code.
Release v0.7.0 (2019-02-27)¶
Attention
For anyone who installed 0.6.x, we recommend a fresh install of the Splunk app due to packaging changes. This shouldn’t be an issue in the future.
General changes:
- Added new ksconf rest-publish command that supersedes the use of
rest-exportfor nearly every use case. Warning: No unit-testing has been created for this command yet, due to technical hurdles.
- Added Cheat Sheet to the docs.
- Massive doc cleanup of hundreds of typos and many expanded/clarified sections.
- Significant improvement to entrypoint handling and support for conditional inclusion of 3rd party libraries with sane behavior on import errors, and improved warnings. This information is conveniently viewable to the user via
ksconf --version.
- Refactored internal diff logic and added additional safeties and unit tests. This includes improvements to TTY colorization which should avoid previous color leaks scenarios that were likely if unhandled exceptions occur.
- New support for metadata handling.
- CLI change for
rest-export: The
--userargument has been replaced with
--ownerto keep clean separation between the login account and object owners. (The old argument is still accept for now.)
Splunk app changes:
- Modified installation of python package installation. In previous releases, various
.dist-infofolders were created with version-specific names leading to a mismatch of package versions after upgrade. For this reason, we suggest that anyone who previously installed 0.6.x should do a fresh install.
- Changed Splunk app install script to
install.py(it was
bootstrap_bin.py). Hopefully this is more intuitive.
- Added Windows support to
install.py.
- Now includes the Splunk Python SDK. Currently used for
rest-publishbut will eventually be used for additional functionally unique to the Splunk app.
Ksconf 0.6.x¶
Add deployment as a Splunk app for simplicity and significant docs cleanup.
Release v0.6.2 (2019-02-09)¶
- Massive rewrite and restructuring of the docs. Highlights include:
- Reference material has been moved out of the user manual into a different top-level section.
- Many new topics were added, such as
- A new approach for CLI documentation. We’re moving away from the WALL OF TEXT thing. (Yeah, it was really just the output from
--help). That was limiting formatting, linking, and making the CLI output way too long.
- Refreshed Splunk app icons. Add missing alt icon.
- Several minor internal cleanups. Specifically the output of
--versionhad a face lift.
Release v0.6.1 (2019-02-07)¶
- (Trivial) Fixed some small issues with the Splunk App (online AppInspect)
Release v0.6.0 (2019-02-06)¶
- Add initial support for building ksconf into a Splunk app.
- App contains a local copy of the docs, helpful for anyone who’s working offline.
- Credit to Sarah Larson for the ksconf logos.
- No
ksconffunctionality exposed to the Splunk UI at the moment.
- Docs/Sphinx improvements (more coming)
- Begin work on cleaning up API docs.
- Started converting various document pages into reStructuredText for greatly improved docs.
- Improved PDF fonts and fixed a bunch of sphinx errors/warnings.
- Refactored the install docs into 2 parts. With the new ability to install ksconf as a Splunk app it’s quite likely that most of the wonky corner cases will be less frequently needed, hence all the more exotic content was moved into the “Advanced Install Guide”, tidying things up.
Ksconf 0.5.x¶
Add Python 3 support, new commands, support for external command plugins, tox and vagrant for testing.
Release v0.5.6 (2019-02-04)¶
- Fixes and improvements to the
filtercommand. Found issue with processing from stdin, inconsistency in some CLI arguments, and finished implementation for various output modes.
- Add logo (fist attempt).
Release v0.5.5 (2019-01-28)¶
- New ksconf filter command added for slicing up a conf file into smaller pieces. Think of this as GREP that’s stanza-aware. Can also allow or block attributes, if desirable.
- Expanded
rest-exportCLI capabilities to include a new
--deleteoption, pretty-printing, and now supports stdin by allowing the user to explicitly set the file type using
--conf.
- Refactored all CLI unittests for increased readability and long-term maintenance. Unit tests now can also be run individually as scripts from the command line.
- Minor tweaks to the
snapshotoutput format, v0.2. This feature is still highly experimental.
Release v0.5.4 (2019-01-04)¶
- New commands added:
- ksconf snapshot will dump a set of configuration files to a JSON formatted file. This can be used used for incremental “snapshotting” of running Splunk apps to track changes overtime.
- ksconf rest-export builds a series of custom
curlcommands that can be used to publish or update stanzas on a remote instance without file system access. This can be helpful when pushing configs to Splunk Cloud when all you have is REST (splunkd) access. This command is indented for interactive admin not batch operations.
- Added the concept of command maturity. A listing is available by running
ksconf --version
- Fix typo in
KSCONF_DEBUG.
- Resolving some build issues.
- Improved support for development/testing environments using Vagrant (fixes) and Docker (new). Thanks to Lars Jonsson for these enhancements.
Release v0.5.3 (2018-11-02)¶
- Fixed bug where
ksconf combinecould incorrectly order directories on certain file systems (like ext4), effectively ignoring priorities. Repeated runs may resulted in undefined behavior. Solved by explicitly sorting input paths forcing processing to be done in lexicographical order.
- Fixed more issues with handling files with BOM encodings. BOMs and encodings in general are NOT preserved by ksconf. If this is an issue for you, please add an enhancement issue.
- Add Python 3.7 support
- Expand install docs specifically for offline mode and some OS-specific notes.
- Enable additional tracebacks for CLI debugging by setting
KSCONF_DEBUG=1in the environment.
Release v0.5.2 (2018-08-13)¶
- Expand CLI output for
--helpand
--version
- Internal cleanup of CLI entry point module name. Now the ksconf CLI can be invoked as
python -m ksconf, you know, for anyone who’s into that sort of thing.
- Minor docs and CI/testing improvements.
Release v0.5.1 (2018-06-28)¶
- Support external ksconf command plugins through custom entry_points, allowing for others to develop their own custom extensions as needed.
- Many internal changes: Refactoring of all CLI commands to use new entry_points as well as pave the way for future CLI unittest improvements.
- Docs cleanup / improvements.
Ksconf 0.4.x¶
Ksconf 0.4.x switched to a modular code base, added build/release automation, PyPI package
registration (installation via
pip install and, online docs.
Release v0.4.10 (2018-06-26)¶
- Improve file handling to avoid “unclosed file” warnings. Impacted
parse_conf(),
write_conf(), and many unittest helpers.
- Update badges to report on the master branch only. (No need to highlight failures on feature or bug-fix branches.)
Release v0.4.8 (2018-06-05)¶
- Massive cleanup of docs: revamped install guide, added ‘standalone’ install procedure and developer-focused docs. Updated license handling.
- Updated docs configuration to dynamically pull in the ksconf version number.
- Using the classic ‘read-the-docs’ Sphinx theme.
- Added additional PyPi badges to README (GitHub home page).
Release v0.4.4-v0.4.7 (2018-06-04)¶
- Deployment and install fixes (It’s difficult to troubleshoot/test without making a new release!)
Release v0.4.3 (2018-06-04)¶
- Rename PyPI package
kintyre-splunk-conf
- Add support for building a standalone executable (zipapp).
- Revamp install docs and location
- Add GitHub release for the standalone executable.
Release v0.4.0 (2018-05-19)¶
- Refactor entire code base. Switched from monolithic all-in-one file to clean-cut modules.
- Versioning is now discoverable via
ksconf --version, and controlled via git tags (via
git describe --tags).
Module layout¶
ksconf.conf.*- Configuration file parsing, writing, comparing, and so on
ksconf.util.*- Various helper functions
ksconf.archive- Support for decompressing Splunk apps (tgz/zip files)
ksconf.vc.git- Version control support. Git is the only VC tool supported for now. (Possibly ever)
ksconf.commands.<CMD>- Modules for specific CLI functions. I may make this extendable, eventually.
Ksconf 0.3.x¶
First public releases.
Release v0.3.2 (2018-04-24)¶
- Add AppVeyor for Windows platform testing
- Add codecov integration
- Created ConfFileProxy.dump()
Ksconf legacy releases¶
Ksconf started in a private Kintyre repo. There are no official releases; all git history has been rewritten.
Release legacy-v1.0.1 (2018-04-20)¶
- Fixes to blocklist support and many enhancements to
ksconf unarchive.
- Introduces parsing profiles.
- Lots of bug fixes to various subcommands.
- Added automatic detection of ‘subcommands’ for CLI documentation helper script.
Release legacy-v1.0.0 (2018-04-16)¶
- This is the first public release. First work began Nov 2017 (as a simple conf ‘sort’ tool, which was imported from yet another repo.) Version history was extracted/rewritten/preserved as much as possible.
- Mostly stable features.
- Unit test coverage over 85%
- Includes pre-commit hook configuration (so that other repos can use this to run
ksconf sortand
ksconf checkagainst their conf files. | https://ksconf.readthedocs.io/en/latest/changelog.html | 2021-11-27T13:43:35 | CC-MAIN-2021-49 | 1637964358189.36 | [] | ksconf.readthedocs.io |
throughput mode or the amount of provisioned throughput of an existing file system.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
update-file-system --file-system-id <value> [--throughput-mode <value>] [--provisioned-throughput-in-mibps <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--file-system-id (string)
The ID of the file system that you want to update.
--throughput-mode (string)
(Optional) Updates the file system's throughput mode. If you're not updating your throughput mode, you don't need to provide this value in your request. If you are changing the ThroughputMode to provisioned , you must also set a value for ProvisionedThroughputInMibps .
Possible values:
- bursting
- provisioned
--provisioned-throughput-in-mibps (double)
(Optional) Sets the amount of provisioned throughput, in MiB/s, for the file system. Valid values are 1-1024. If you are changing the throughput mode to provisioned, you must also provide the amount of provisioned throughput. Required if ThroughputMode is changed to provisioned.
OwnerId -> (string)
The Amazon Web Services Name tag. For more information, see CreateFileSystem . If the file system has a Name tag, field, and the time at which that size was determined in its Timestamp field. The Timestamp value is the integer number of seconds since 1970-01-01T00:00:00Z. The SizeInBytes value doesn't represent the size of a consistent snapshot of the file system, but it is eventually consistent when there are no writes to the file system. That is, SizeInBytes represents field, Key Management ServiceMode set to provisioned .
AvailabilityZoneName -> (string)
Describes the Amazon Web Services is an Availability Zone ID for the us-east-1 Amazon Web Services Region, and it has the same location in every Amazon Web Services account.
Tags -> (list)
The tags associated with the file system, presented as an array of Tag objects.
(structure)
A tag is a key-value pair. Allowed characters are letters, white space, and numbers that can be represented in UTF-8, and the following characters:+ - = . _ : / .
Key -> (string)The tag key (String). The key can't start with aws: .
Value -> (string)The value of the tag key. | https://docs.aws.amazon.com/cli/latest/reference/efs/update-file-system.html | 2021-11-27T13:59:58 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.aws.amazon.com |
When you configure Site Recovery Manager to use a shared recovery site, Site Recovery Manager supports the same operations as it does in a standard one-to-one configuration. Using Site Recovery Manager with a shared recovery site is subject to some limitations.
- Site Recovery Manager supports point-to-point replication. Site Recovery Manager does not support replication to multiple targets, even in a multi-site configuration.
- For each shared recovery site customer, you must install Site Recovery Manager Server once at the customer site and again at the recovery site.
- You must specify the same Site Recovery Manager extension ID when you install the Site Recovery Manager Server instances on the protected site and on the shared recovery site. For example, you can install the first pair of sites with the default Site Recovery Manager extension ID, then install subsequent pairs of sites with custom extension IDs.
- You must install each Site Recovery Manager Server instance at the shared recovery site on its own host machine. You cannot install multiple instances of Site Recovery Manager Server on the same host machine.
- Each Site Recovery Manager Server instance on the protected site and on the shared recovery site requires its own database.
- A single shared recovery site can support a maximum of ten protected sites. You can run concurrent recoveries from multiple sites. See Operational Limits of Site Recovery Manager for the number of concurrent recoveries that you can run with array-based replication and with vSphere Replication.
- In a large Site Recovery Manager environment, you might experience timeout errors when powering on virtual machines on a shared recovery site. See Timeout Errors When Powering on Virtual Machines on a Shared Recovery Site.
- When connecting to Site Recovery Manager on the shared recovery site, every customer can see all of the Site Recovery Manager extensions that are registered with the shared recovery site, including company names and descriptions. All customers of a shared recovery site can have access to other customers’ folders and potentially to other information at the shared recovery site. | https://docs.vmware.com/en/Site-Recovery-Manager/8.5/com.vmware.srm.install_config.doc/GUID-F7055BDF-E923-44BB-9795-92F94E2EFF87.html | 2021-11-27T14:19:32 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.vmware.com |
You can edit a report whenever any modification needs to be done, such as changing the query, visualization, panel tile, panel description, and changing default settings of the visualization of reports and so on.
To edit a report follow the procedure:
Procedure
- Click Interface Discards drop-down button.
- Click Edit.
Figure 1. Edit Report
- Make the necessary changes in the report.
- Click Save Dashboard, to save the report. | https://docs.vmware.com/en/VMware-Telco-Cloud-Operations/1.2.0/config-guide-120/GUID-1BB6FF90-19D0-414E-8B31-4EA4EACFC5AC.html | 2021-11-27T14:16:05 | CC-MAIN-2021-49 | 1637964358189.36 | [] | docs.vmware.com |
Property Element (Module)
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
Specifies a custom property value to implement within the file.
<Property Name = "Text" Value = "Text"> </Property>
Attributes
Child Elements
Parent Elements
Occurrences
Example
For an example of how this element is used, see Modules.
Microsoft.Win32.RegistryKey#4 | https://docs.microsoft.com/fr-fr/previous-versions/office/developer/sharepoint-services/cc264281(v=office.12) | 2019-01-16T01:50:36 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.microsoft.com |
Advanced Install¶
If you’ve completed the Quick Install guide, you can skip right to the Tutorial. This section is intended for users who wish to have more control over the install process or to compile from source code. If you’re looking for a simple solution, see the Quick Install guide.
Without Anaconda¶
If you already have Python 3.x installed from python.org or anywhere else, you can use your existing distribution instead of Anaconda (or Miniconda). Note that this does require manually installing some dependencies.
Windows
Install the Visual C++ 2015 Runtime.
Install numpy, scipy and matplotlib binaries from Christoph Gohlke.
Pybinding is available as a binary wheel on PyPI. Install it with:
pip3 install pybinding
Linux
Building pybinding from source is the only option on Linux.
Make sure you have gcc and g++ v5.0 or newer. To check, run
g++ --versionin your terminal. Refer to instruction from your Linux distribution in case you need to upgrade. Alternatively, you can use clang v3.5 or newer for compilation instead of gcc.
Install CMake >= v3.1 from their website or your package manager, e.g.
apt-get install cmake.
Install numpy, scipy and matplotlib with the minimal versions as stated previously. The easiest way is to use your package manager, but note that the main repositories tend to keep outdated versions of SciPy packages. For instructions on how to compile the latest packages from source, see.
Install pybinding using pip:
pip3 install pybinding
macOS
All the required SciPy packages and pybinding are available as binary wheels on PyPI, so the installation is very simple:
pip3 install pybinding
Note that pip will resolve all the SciPy dependencies automatically.
Compiling from source¶
If you want to get the latest version (the master branch on GitHub), you will need to compile it from source code. Before you proceed, you’ll need to have numpy, scipy and matplotlib. They can be installed either using Anaconda or following the procedure in the section just above this one. Once you have everything, follow the steps below to compile and install pybinding.
Windows
Install Visual Studio 2015 Community. The Visual C++ compiler is required, so make sure to select it during the customization step of the installation (C++ may not be installed by default).
-
Build and install pybinding. The following command will instruct pip to download the latest source code from GitHub, compile everything and install the package:
pip3 install git+
Linux
You’ll need gcc/g++ >= v5.0 (or clang >= v3.5) and CMake >= v3.1. See the previous section for details. If you have everything, pybinding can be installed from the latest source code using pip:
pip3 install git+
macOS
-
Install CMake:
brew install cmake
Build and install pybinding. The following command will instruct pip to download the latest source code from GitHub, compile everything and install the package:
pip3 install git+
For development¶
If you would like to work on the pybinding source code itself, you can install it in an editable development environment. The procedure is similar to the “Compiling from source” section with the exception of the final step:
Clone the repository using git (you can change the url to your own GitHub fork):
git clone --recursive
Tell pip to install in development mode:
cd pybinding pip3 install -e . | http://docs.pybinding.site/en/v0.9.2/install/advanced.html | 2019-01-16T01:27:57 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.pybinding.site |
tkp.utility.sigmaclip – Generic sigma clipping routine¶
Generic kappa-sigma clipping routine.
Note: this does not replace the specialized sigma_clip function in utilities.py
tkp.utility.sigmaclip.
calcmean(data, errors=None)[source]¶
Calculate the mean and the standard deviation of the mean
tkp.utility.sigmaclip.
calcsigma(data, errors=None, mean=None, axis=None, errors_as_weight=False)[source]¶
Calculate the sample standard deviation
Kwargs:
- errors (numpy.ndarray, None): Eerrors for the data. Errors
- needs to be the same shape as data (this is different than for numpy.average). If you want to use weights instead of errors as input, set errors_as_weight=True. If not given, all errors (and thus weights) are assumed to be equal to 1.
- mean (float): Provide mean if you don’t want the mean to be
- calculated for you. Pay careful attention to the shape if you provide ‘axis’.
- axis (int): Specify axis along which the mean and sigma are
- calculated. If not provided, calculations are done over the whole array
errors_as_weight (bool): Set to True if errors are weights.
tkp.utility.sigmaclip.
clip(data, mean, sigma, siglow, sighigh, indices=None)[source]¶
Perform kappa-sigma clipping of data around mean
Kwargs:indices (numpy.ndarray): data selection by indices
tkp.utility.sigmaclip.
sigmaclip(data, errors=None, niter=0, siglow=3.0, sighigh=3.0, use_median=False)[source]¶
Remove outliers from data which lie more than siglow/sighigh sample standard deviations from mean.
Kwargs:
- errors (numpy.ndarray, None): Errors associated with the data
- values. If None, unweighted mean and standard deviation are used in calculations.
- niter (int): Number of iterations to calculate mean & standard
- deviation, and reject outliers, If niter is negative, iterations will continue until no more clipping occurs or until abs(‘niter’) is reached, whichever is reached first.
- siglow (float): Kappa multiplier for standard deviation. Std *
- siglow defines the value below which data are rejected.
- sighigh (float): Kappa multiplier for standard deviation. Std *
- sighigh defines the value above which data are rejected.
use_median (bool): Use median of data instead of mean. | http://tkp.readthedocs.io/en/release2.1/devref/tkp/utility/sigmaclip.html | 2018-06-18T05:19:40 | CC-MAIN-2018-26 | 1529267860089.11 | [] | tkp.readthedocs.io |
Error getting tags :
error 404Error getting tags :
error 404
openControl
on openControl
startProgressAnimation
databaseQuery
end openControl
Handle the openControl message to change a group's objects or perform other updates, when a card with the group on is visited.
For groups with their backgroundBehavior property set to true, the openControl message is sent immediately after the openBackground message is sent to the card being opened. For non-background groups, it is sent after the openCard message.
For nested groups, the openControl message is sent to the parent group first, if it is passed or not handled by the parent group, then it passes though each child group in reverse layer order (i.e from highest to lowest). | http://docs.runrev.com/Message/openControl | 2018-06-18T05:49:54 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.runrev.com |
Translation helpers¶
- class co.translation.MutableTranslationTable(size)¶
A mutable version of TranslationTable with insert, delete and substitute methods for updating the translation table with the corresponding mutations.
- exception co.translation.OverlapError¶
OverlapError is raised when a mutation is applied to a position in a sequence that has been altered by a previous mutation.
In strict mode, an OverlapError is fired more frequently, such as when a deletion is applied to a range that has previously been modified by an insertion.
- class co.translation.TranslationTable(source_size, target_size, source_start, source_end, target_start, target_end, chain)¶
This class is inspired by the UCSC chain format for pairwise alignments documented here:
TranslationTable encodes an alignment between two sequences, source and target.
The alignment is encoded in a chain of tuples in the format (ungapped_size, ds, dt), where ungapped_size refers to regions that align, and the gaps dt and ds each refer to regions present only in the other sequence.
- alignment_str()¶
Returns a string representation of the alignment between source and target coordinates.
Warning
This function should only be used for debugging purposes.
- le(position)¶
le() attempts to return the coordinate in the target sequence that corresponds to the position parameter in the source sequence. If position falls into a gap in the target sequence, it will instead return the last coordinate in front of that gap. | http://co.readthedocs.io/en/latest/translation.html | 2018-06-18T05:47:44 | CC-MAIN-2018-26 | 1529267860089.11 | [] | co.readthedocs.io |
Special consideration apply to network adapters, both physical and VMkernel, that are associated with an iSCSI adapter. and considerations when managing iSCSI-bound virtual and physical network adapters:
Make sure that the VMkernel network adapters are assigned addresses on the same subnet as the iSCSI storage portal they connect to.
iSCSI adapters using VMkernel adapters are not able to connect to iSCSI ports on different subnets, even if those ports are discovered by the iSCSI adapters. make changes that might break association of VMkernel adapters and physical network adapters. You can break the association if you remove one of the adapters or the vSphere switch that connects them, or change the 1:1 network policy for their connection. | https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-66AF5FA0-7A95-4730-BB30-55F98C481BFF.html | 2018-06-18T05:59:45 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.vmware.com |
DSE Search configuration file (solrconfig.xml)
solrconfig.xml is the primary DSE search configuration file.
- For DataStax Enterprise configuration, see DataStax Enterprise configuration file (dse.yaml).
-. Do not make schema changes on production systems.
Parameters
You might need to modify the following parameters to tune DSE Search. For full details, see the Apache Solr Reference Guide.
- autoSoftCommit
- See Configuring and tuning indexing performance.
- if ever done. You can force a column to change type by using force="true". For example:
After changing the type mapping, you must reload the Solr core with re-indexing.
<dseTypeMappingVersion force = "true">1</dseTypeMappingVersion>
- dseUpdateRequestProcessorChain
- You can output transformer API is an option to the input/output transformer support in Solr.
- fieldOutputTransformer
- The field output transformer API is an option to the input/output transformer support in Solr. See Field input/output (FIT) transformer API and an Introduction to DSE Field Transformers.
- can stop - Supported for RT and NRT indexing. Specify a positive number > 0 and defaults to number of available processors. This parameter regulates how many tasks are created to apply deletes during soft/hard commit in parallel. - See. You must change this value on all cores and then restart the nodes to make the change effective. See Changing maxBooleanClauses.
- mergeScheduler
- The default mergeScheduler settings are not appropriate for DSE Search near real time (NRT) indexing production use on a typical size server. DataStax recommends these settings as a starting point, and then adjust as appropriate to your environment:
maxThreadCount= to the number of CPU cores divided by 2
maxMergeCount=
maxThreadCount* 2
<indexConfig> ... <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"> <int name="maxThreadCount">12</int> <int name="maxMergeCount">24</int> </mergeScheduler> ...
-. See Configuring and tuning indexing performance.
- requestHandler
- The correct search handler is required for CQL Solr queries in DSE Search.
When you automatically generate resources, the solrconfig.xml file already contains the request handler for running CQL Solr queries in DSE Search. If you do not automatically generate resources and want to run CQL Solr queries using custom resources, the CqlSearchHandler handler is automatically inserted:
<requestHandler class="com.datastax.bdp.search.solr.handler.component.CqlSearchHandler" name="solr_query" />
For recommendations for the basic configuration for the search handler, and an example that shows adding a search component, see Configuring search components.
In this example, to configure the Data Import Handler, you can add a request handler element that contains the location of data-config.xml and data source connection information.
- rt
To enable live indexing (also known as RT), add
<rt>true</rt>to the <indexConfig> attribute.
See Configuring and tuning indexing performance.
<indexConfig> <rt>true</rt> ...
- SolrFilterCache
- The DSE Search configurable filter cache, SolrFilterCache, can reliably bound the filter cache memory usage for a Solr core. This implementation contrasts with the default Solr implementation which defines bounds for filter cache usage per segment. See Configuring filter cache for searching.
- updateHandler
- You can configure per-document or per-field TTL. See Expiring a DSE Search column. | https://docs.datastax.com/en/datastax_enterprise/4.8/datastax_enterprise/srch/configSolrconfigXml.html | 2017-08-16T15:04:54 | CC-MAIN-2017-34 | 1502886102307.32 | [] | docs.datastax.com |
This article includes frequently asked questions about Azure Site Recovery. If you have questions after reading this article, post them on the Azure Recovery Services Forum.
General
What does Site Recovery do?
Site Recovery contributes to your business continuity and disaster recovery (BCDR) strategy, by orchestrating and automating replication of Azure VMs between regions, on-premises virtual machines and physical servers to Azure, and on-premises machines to a secondary datacenter. Learn more.
Does Site Recovery support the Azure Resource Manager model?
Site Recovery is available in the Azure portal with support for Resource Manager. Site Recovery supports legacy deployments in the Azure classic portal. You can't create new vaults in the classic portal, and new features aren't supported.
Can I replicate Azure VMs?
Yes, you can replicate supported Azure VMs between Azure regions. Learn more.
What do I need in Hyper-V to orchestrate replication with Site Recovery?
For the Hyper-V host server, what you need depends on the deployment scenario. Check out the Hyper-V prerequisites in:
- Replicating Hyper-V VMs (without VMM) to Azure
- Replicating Hyper-V VMs (with VMM) to Azure
- Replicating Hyper-V VMs to a secondary datacenter
- If you're replicating to a secondary datacenter, read about Supported guest operating systems for Hyper-V VMs.
- If you're replicating to Azure, Site Recovery supports all the guest operating systems that are supported by Azure.
Can I protect VMs when Hyper-V is running on a client operating system?
No, VMs must be located on a Hyper-V host server that's running on a supported Windows server machine. If you need to protect a client computer, you could replicate it as a physical machine to Azure or to a secondary datacenter.
What workloads can I protect with Site Recovery?
You can use Site Recovery to protect most workloads running on a supported VM or physical server. Site Recovery provides application-aware replication and integrates with Microsoft applications such as SharePoint, Exchange, Dynamics, SQL Server, and Active Directory, and works closely with leading vendors, including Oracle, SAP, IBM, and Red Hat. Learn more about workload protection.
Do Hyper-V hosts need to be in VMM clouds?
If you want to replicate to a secondary datacenter, then Hyper-V VMs must be on Hyper-V host servers located in a VMM cloud. If you want to replicate to Azure, then you can replicate VMs on Hyper-V host servers with or without VMM clouds. Read more.
Can I deploy Site Recovery with VMM if I only have one VMM server?
Yes. You can either replicate VMs in Hyper-V servers in the VMM cloud to Azure, or you can replicate between VMM clouds on the same server. For on-premises to on-premises replication, we recommend that you have a VMM server in both the primary and secondary sites.
What physical servers can I protect?
You can replicate physical servers running Windows and Linux to Azure or to a secondary site. Learn about operating system requirements. The same requirements apply whether you're replicating physical servers to Azure, or to a secondary site.
Note that physical servers will run as VMs in Azure if your on-premises server goes down. Failback to an on-premises physical server isn't currently supported. For a machine protected as physical, you can only fail back to a VMware virtual machine. Check the exact requirements for replicating VMware servers and VMs to Azure, or to a secondary site.
What charges do I incur while using Azure Site Recovery?
When you use Site Recovery, you incur charges for the Site Recovery license, Azure storage, storage transactions, and outbound data transfer. Learn more.
The Site Recovery license is per protected instance, where an instance is a VM, or a physical server.
- If a VM disk replicates to a standard storage account, the Azure storage charge is for the storage consumption. For example, if the source disk size is 1 TB, and 400 GB is used, Site Recovery creates a 1 TB VHD in Azure, but the storage charged is 400 GB (plus the amount of storage space used for replication logs).
- Site Recovery doesn't create VMs (and therefore doesn't incur compute charges) until a test failover or a failover. While in the replication state, storage charges under the "Page blob and disk" category of the Storage pricing calculator are incurred. These charges are based on the storage type (premium or standard) and the data redundancy type (LRS, GRS, RA-GRS, and so on).
- If the option to use managed disks on a failover is selected, charges for managed disks apply after a failover or test failover. Managed disk charges do not apply during replication.
- If the option to use managed disks on a failover is not selected, storage charges under the "Page blob and disk" category of the Storage pricing calculator are incurred after failover. These charges are based on the storage type (premium or standard) and the data redundancy type (LRS, GRS, RA-GRS, and so on).
- Storage transactions are charged during steady-state replication and for regular VM operations after a failover / test failover. But these charges are negligible.
Costs are also incurred during a test failover, where charges for the VM, storage, egress, and storage transactions apply.
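As a rough illustration of how these components combine, the sketch below estimates a monthly charge for one protected instance. All rates in it are hypothetical placeholders (they are not Azure prices), and the formula only mirrors the cost components described above: license per protected instance, consumed storage, storage transactions, and egress. Use the Azure pricing calculator for real figures.

```python
# Rough monthly cost sketch for one protected instance.
# All rates below are hypothetical placeholders, not Azure prices.

def estimate_monthly_cost(
    protected_instances: int,
    consumed_storage_gb: float,           # charged on consumption, e.g. 400 GB used of a 1 TB disk
    storage_transactions: int,
    egress_gb: float,
    license_per_instance: float = 25.0,   # hypothetical per-instance license rate
    storage_per_gb: float = 0.05,         # hypothetical per-GB standard storage rate
    per_10k_transactions: float = 0.004,  # hypothetical rate per 10,000 transactions
    egress_per_gb: float = 0.08,          # hypothetical outbound data transfer rate
) -> float:
    """Mirror the cost components listed above; replace the rates with real pricing."""
    license_cost = protected_instances * license_per_instance
    storage_cost = consumed_storage_gb * storage_per_gb
    transaction_cost = (storage_transactions / 10_000) * per_10k_transactions
    egress_cost = egress_gb * egress_per_gb
    return license_cost + storage_cost + transaction_cost + egress_cost

# Example: one VM with a 1 TB disk of which 400 GB is used.
print(f"~${estimate_monthly_cost(1, 400, 2_000_000, 50):.2f}/month (hypothetical rates)")
```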
Security
Is replication data sent to the Site Recovery service?
No, Site Recovery doesn't intercept replicated data, and doesn't have any information about what's running on your virtual machines or physical servers. Replication data is exchanged between on-premises Hyper-V hosts, VMware hypervisors, or physical servers and Azure storage or your secondary site. Site Recovery has no ability to intercept that data. Only the metadata needed to orchestrate replication and failover is sent to the Site Recovery service.
Site Recovery is ISO 27001:2013, 27018, HIPAA, DPA certified, and is in the process of SOC2 and FedRAMP JAB assessments.
Does Site Recovery encrypt replication?
For virtual machines and physical servers, replicating between on-premises sites encryption-in-transit is supported. For virtual machines and physical servers replicating to Azure, both encryption-in-transit and encryption-at-rest (in Azure) are supported.
Replication
Can I replicate over a site-to-site VPN to Azure?
Azure Site Recovery replicates data to an Azure storage account, over a public endpoint. Replication isn't over a site-to-site VPN. You can create a site-to-site VPN, with an Azure virtual network. This doesn't interfere with Site Recovery replication.
Can I use ExpressRoute to replicate virtual machines to Azure?
Yes, ExpressRoute can be used to replicate virtual machines to Azure. Azure Site Recovery replicates data to an Azure storage account over a public endpoint. You need to set up public peering to use ExpressRoute for Site Recovery replication. After the virtual machines have failed over to an Azure virtual network, you can access them using the private peering set up with that virtual network.
Are there any prerequisites for replicating virtual machines to Azure?
Virtual machines you want to replicate to Azure should comply with Azure requirements.
Your Azure user account needs to have certain permissions to enable replication of a new virtual machine to Azure.
Can I replicate Hyper-V generation 2 virtual machines to Azure?
Yes. Site Recovery converts from generation 2 to generation 1 during failover. At failback the machine is converted back to generation 2. Read more.
Can I automate Site Recovery scenarios with an SDK?
Yes. You can automate Site Recovery workflows using the Rest API, PowerShell, or the Azure SDK. Currently supported scenarios for deploying Site Recovery using PowerShell:
- Replicate Hyper-V VMs in VMMs clouds to Azure PowerShell Resource Manager
- Replicate Hyper-V VMs without VMM to Azure PowerShell Resource Manager
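For example, the supported PowerShell paths are linked above; the same automation can be driven from any language through the Site Recovery Azure Resource Manager REST API. The sketch below lists the replication-protected items in a Recovery Services vault from Python. It is a minimal sketch, not a complete implementation: it assumes the `azure-identity` and `requests` packages are installed, the placeholder names are yours to fill in, and the `api-version` value shown is an assumption — check the current Site Recovery REST API reference for the version to use.

```python
# Minimal sketch: enumerate replication-protected items in a Recovery Services vault
# via the Azure Resource Manager REST API. The api-version below is an assumption;
# verify it against the current Site Recovery REST API reference.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"      # placeholders - replace with your values
RESOURCE_GROUP = "<resource-group>"
VAULT_NAME = "<recovery-services-vault>"
API_VERSION = "2021-02-10"                 # assumed; confirm before use

# Acquire an ARM token using whatever credential is available in the environment.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.RecoveryServices/vaults/{VAULT_NAME}"
    f"/replicationProtectedItems"
)
resp = requests.get(url, params={"api-version": API_VERSION},
                    headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

# Print each protected item with whatever health/state properties the API returns.
for item in resp.json().get("value", []):
    props = item.get("properties", {})
    print(item["name"], props.get("replicationHealth"), props.get("protectionState"))
```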
If I replicate to Azure, what kind of storage account do I need?
- Azure classic portal: If you're deploying Site Recovery in the Azure classic portal, you'll need a standard geo-redundant storage account. Premium storage isn't currently supported. The account must be in the same region as the Site Recovery vault.
- Azure portal: If you're deploying Site Recovery in the Azure portal, you'll need an LRS or GRS storage account. We recommend GRS so that data is resilient if a regional outage occurs, or if the primary region can't be recovered. The account must be in the same region as the Recovery Services vault. Premium storage is now supported for VMware VM, Hyper-V VM, and physical server replication, when you deploy Site Recovery in the Azure portal.
How often can I replicate data?
- Hyper-V: Hyper-V VMs can be replicated every 30 seconds (except for premium storage), 5 minutes or 15 minutes. If you've set up SAN replication then replication is synchronous.
- VMware and physical servers: A replication frequency isn't relevant here. Replication is continuous.
Can I extend replication from an existing recovery site to a tertiary site?
Extended or chained replication isn't supported. Request this feature in feedback forum.
Can I do an offline replication the first time I replicate to Azure?
This isn't supported. Request this feature in the feedback forum.
Can I exclude specific disks from replication?
This is supported when you're replicating VMware VMs and Hyper-V VMs to Azure, using the Azure portal.
Can I replicate virtual machines with dynamic disks?
Dynamic disks are supported when replicating Hyper-V virtual machines. They are also supported when replicating VMware VMs and physical machines to Azure. The operating system disk must be a basic disk.
Can I add a new machine to an existing replication group?
Adding new machines to an existing replication group is supported. To do so, select the replication group on the Replicated items blade, right-click or open its context menu, and then select the appropriate option.
Can I throttle bandwidth allotted for Hyper-V replication traffic?
Yes. You can read more about throttling bandwidth in the deployment articles:
- Capacity planning for replicating VMware VMs and physical servers
- Capacity planning for replicating Hyper-V VMs in VMM clouds
- Capacity planning for replicating Hyper-V VMs without VMM
Failover
If I'm failing over to Azure, how do I access the Azure virtual machines after failover?
You can access the Azure VMs over a secure Internet connection, over a site-to-site VPN, or over Azure ExpressRoute. You'll need to prepare a number of things in order to connect. Learn more.
If I fail over to Azure how does Azure make sure my data is resilient?
Azure is designed for resilience. Site Recovery is already engineered for failover to a secondary Azure datacenter, in accordance with the Azure SLA if the need arises. If this happens, we make sure your metadata and vaults remain within the same geographic region that you chose for your vault.
If I'm replicating between two datacenters what happens if my primary datacenter experiences an unexpected outage?
You can trigger an unplanned failover from the secondary site. Site Recovery doesn't need connectivity from the primary site to perform the failover.
Is failover automatic?
Failover isn't automatic. You initiate failovers with a single click in the portal, or you can use Site Recovery PowerShell to trigger a failover. Failing back is a simple action in the Site Recovery portal.
To automate failover, you could use on-premises Orchestrator or Operations Manager to detect a virtual machine failure, and then trigger the failover using the SDK (a minimal REST sketch follows the links below).
- Read more about recovery plans.
- Read more about failover.
- Read more about failing back VMware VMs and physical servers
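As an illustration of the "trigger the failover using the SDK" step, the sketch below posts an unplanned-failover request for one replication-protected item through the Resource Manager REST API. It reuses the assumptions from the earlier listing sketch (`azure-identity`, `requests`, an assumed `api-version`), and the request-body fields shown (`failoverDirection`, `sourceSiteOperations`, `providerSpecificDetails`) should be verified against the REST reference for your replication scenario before use. Integration with a monitoring tool such as Orchestrator or Operations Manager is left out.

```python
# Minimal sketch: request an unplanned failover for one replication-protected item.
# The api-version and the request-body fields are assumptions to verify against the
# Site Recovery REST API reference for your scenario (Hyper-V, VMware, or physical).
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"      # placeholders - replace with your values
RESOURCE_GROUP = "<resource-group>"
VAULT_NAME = "<recovery-services-vault>"
FABRIC = "<fabric-name>"
CONTAINER = "<protection-container-name>"
ITEM = "<replication-protected-item-name>"
API_VERSION = "2021-02-10"                 # assumed; confirm before use

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.RecoveryServices/vaults/{VAULT_NAME}"
    f"/replicationFabrics/{FABRIC}"
    f"/replicationProtectionContainers/{CONTAINER}"
    f"/replicationProtectedItems/{ITEM}/unplannedFailover"
)
body = {
    "properties": {
        "failoverDirection": "PrimaryToRecovery",
        "sourceSiteOperations": "NotRequired",
        # providerSpecificDetails depends on the replication provider in use;
        # the instanceType value here is a placeholder, not a real provider name.
        "providerSpecificDetails": {"instanceType": "<provider-specific>"},
    }
}
resp = requests.post(url, params={"api-version": API_VERSION},
                     headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
# The failover job runs asynchronously; track it via the async-operation header
# returned by ARM, or via the vault's Site Recovery jobs view.
print(resp.status_code, resp.headers.get("Azure-AsyncOperation"))
```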
If my on-premises host isn't responding or has crashed, can I fail back to a different host?
Yes, you can use alternate location recovery to fail back from Azure to a different host. Read more about the alternate location recovery options for VMware and Hyper-V virtual machines.
Service providers
I'm a service provider. Does Site Recovery work for dedicated and shared infrastructure models?
Yes, Site Recovery supports both dedicated and shared infrastructure models.
For a service provider, is the identity of my tenant shared with the Site Recovery service?
No. Tenant identity remains anonymous. Your tenants don't need access to the Site Recovery portal. Only the service provider administrator interacts with the portal.
Will tenant application data ever go to Azure?
When replicating between service provider-owned sites, application data never goes to Azure. Data is encrypted in-transit, and replicated directly between the service provider sites.
If you're replicating to Azure, application data is sent to Azure storage but not to the Site Recovery service. Data is encrypted in-transit, and remains encrypted in Azure.
Will my tenants receive a bill for any Azure services?
No. Azure's billing relationship is directly with the service provider. Service providers are responsible for generating specific bills for their tenants.
If I'm replicating to Azure, do we need to run virtual machines in Azure at all times?
No. Data is replicated to an Azure storage account in your subscription. When you perform a test failover (DR drill) or an actual failover, Site Recovery automatically creates virtual machines in your subscription.
Do you ensure tenant-level isolation when I replicate to Azure?
Yes.
What platforms do you currently support?
We support Azure Pack, Cloud Platform System, and System Center based (2012 and higher) deployments. Learn more about Azure Pack and Site Recovery integration.
Do you support single Azure Pack and single VMM server deployments?
Yes, you can replicate Hyper-V virtual machines to Azure, or between service provider sites. Note that if you replicate between service provider sites, Azure runbook integration isn't available.
Next steps
- Read the Site Recovery overview | https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-faq | 2017-08-16T15:37:18 | CC-MAIN-2017-34 | 1502886102307.32 | [array(['media/site-recovery-faq/add-server-replication-group.png',
'Add to replication group'], dtype=object) ] | docs.microsoft.com |