implementation of the Scala compiler. Change 75 to 61 and look at the updated output in the console.
Adding a dependency
Changing gears a bit, let’s look at how to use published libraries to add extra functionality to our apps.
- Open up build.sbt and add the following line:
libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "1.1.0"
Here,
libraryDependencies is a set of dependencies, and by using
+=,
we’re adding the scala-parser-combinators dependency to the set of dependencies that sbt will go
and fetch when it starts up. Now, in any Scala file, you can import classes,
objects, etc, from scala-parser-combinators with a regular import.
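As a minimal sketch (not part of the official tutorial), once sbt has fetched the dependency you could define a tiny parser in any Scala source file of the project:
import scala.util.parsing.combinator._

object DigitParser extends RegexParsers {
  // parses one or more digits into an Int
  def number: Parser[Int] = """\d+""".r ^^ { _.toInt }
}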
You can find more published libraries on
Scaladex, the Scala library index, where you
can also copy the above dependency information for pasting into your
build.sbt
file.
Next steps
Continue to the next tutorial in the getting started with IntelliJ series, and learn about testing Scala code in IntelliJ with ScalaTest.
or
- Continue learning Scala interactively online on Scala Exercises.
- Learn about Scala’s features in bite-sized pieces by stepping through our Tour of Scala.
This section is to be read with the
app(4) and
application(3) manual pages in Kernel.
When you have written code implementing some specific functionality you might want to make the code into an application, that is, a component that can be started and stopped as a unit, and which can also be reused in other systems.
To do this, create an
application callback module, and describe how the application is to be started and stopped.
Then, an application specification is needed, which is put in an
application resource file. Among other things, this file specifies which modules the application consists of and the name of the callback module.
If you use
systools, the Erlang/OTP tools for packaging code (see
Releases), the code for each application is placed in a separate directory following a pre-defined
directory structure.
How to start and stop the code for the application, that is, the supervision tree, is described by two callback functions:
start(StartType, StartArgs) -> {ok, Pid} | {ok, Pid, State}
stop(State)
start is called when starting the application and is to create the supervision tree by starting the top supervisor. It is expected to return the pid of the top supervisor and an optional term, State, which defaults to []. This term is passed as is to stop.
StartType is usually the atom normal. It has other values only in the case of a takeover or failover, see Distributed Applications.
StartArgs is defined by the key mod in the application resource file.
stop/1 is called after the application has been stopped and is to do any necessary cleaning up. The actual stopping of the application, that is, the shutdown of the supervision tree, is handled automatically as described in Starting and Stopping Applications.
Example of an application callback module for packaging the supervision tree from
Supervisor Behaviour:
-module(ch_app).
-behaviour(application).
-export([start/2, stop/1]).

start(_Type, _Args) ->
    ch_sup:start_link().

stop(_State) ->
    ok.
A library application that cannot be started or stopped, does not need any application callback module.
To define an application, an application specification is created, which is put in an application resource file, or in short an
.app file:
{application, Application, [Opt1,...,OptN]}.
Application, an atom, is the name of the application. The file must be named
Application.app.
Opt is a tuple {Key,Value}, which defines a certain property of the application. All keys are optional. Default values are used for any omitted keys.
The contents of a minimal
.app file for a library application
libapp looks as follows:
{application, libapp, []}.
The contents of a minimal
.app file
ch_app.app for a supervision tree application like
ch_app looks as follows:
{application, ch_app, [{mod, {ch_app,[]}}]}.
The key
mod defines the callback module and start argument of the application, in this case
ch_app and
[], respectively. This means that the following is called when the application is to be started:
ch_app:start(normal, [])
The following is called when the application is stopped.
ch_app:stop([])
When using
systools, the Erlang/OTP tools for packaging code (see Section
Releases), the keys
description,
vsn,
modules,
registered, and
applications are also to be specified:
description - A short description of the application, a string. Defaults to "".
vsn - Version number, a string. Defaults to "".
modules - All modules introduced by this application. systools uses this list when generating boot scripts and tar files. A module must be defined in only one application. Defaults to [].
registered - All names of registered processes in the application. systools uses this list to detect name clashes between applications. Defaults to [].
applications - All applications that must be started before this application is started. systools uses this list to generate correct boot scripts. Defaults to []. Notice that all applications have dependencies to at least Kernel and STDLIB.
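As an illustration, a fuller ch_app.app that fills in these keys could look as follows; the description text, version string and module list shown here are placeholders rather than part of the minimal example above:
{application, ch_app,
 [{description, "Channel allocator"},
  {vsn, "1"},
  {modules, [ch_app, ch_sup, ch3]},
  {registered, [ch3]},
  {applications, [kernel, stdlib, sasl]},
  {mod, {ch_app,[]}}
 ]}.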
For details about the syntax and contents of the application resource file, see the
app manual page in Kernel.
When packaging code using
systools, the code for each application is placed in a separate directory,
lib/Application-Vsn, where
Vsn is the version number.
This can be useful to know, even if
systools is not used, since Erlang/OTP is packaged according to the OTP principles and thus comes with a specific directory structure. The code server (see the
code(3) manual page in Kernel) automatically uses code from the directory with the highest version number, if more than one version of an application is present.
Any directory structure for development will suffice as long as the released directory structure adheres to the description below, but it is encouraged that the same directory structure also be used in a development environment. The version number should be omitted from the application directory name since this is an artifact of the release step.
Some sub-directories are required. Some sub-directories are optional, meaning that they should only be used if the application itself requires them. Finally, some sub-directories are recommended, meaning that their use, as described here, is encouraged. For example, both documentation and tests are encouraged to exist in an application for it to be deemed a proper OTP application.
─ ${application}
  ├── doc
  │   ├── internal
  │   ├── examples
  │   └── src
  ├── include
  ├── priv
  ├── src
  │   └── ${application}.app.src
  └── test
src - Required. Contains the Erlang source code, the source of the .app file and internal include files used by the application itself. Additional sub-directories within src can be used as namespaces to organize source files. These directories should never be deeper than one level.
priv - Optional. Used for application specific files.
include - Optional. Used for public include files that must be reachable from other applications.
doc - Recommended. Any source documentation should be placed in sub-directories here.
doc/internal - Recommended. Any documentation that describes implementation details about this application, not intended for publication, should be placed here.
doc/examples - Recommended. Source code for examples on how to use this application should be placed here. It is encouraged that examples are sourced to the public documentation from this directory.
doc/src - Recommended. All source files for documentation, such as Markdown, AsciiDoc or XML-files, should be placed here.
test - Recommended. All files regarding tests, such as test suites and test specifications, should be placed here.
Other directories in the development environment may be needed. If source code from languages other than Erlang is used, for instance C-code for NIFs, that code should be placed in a separate directory. By convention it is recommended to prefix such directories with the language name, for example
c_src for C,
java_src for Java or
go_src for Go. Directories with a _src suffix indicate that they are part of the application and of the compilation step. The final build artifacts should target the
priv/lib or
priv/bin directories.
The
priv directory holds assets that the application needs during runtime. Executables should reside in
priv/bin and dynamically-linked libraries should reside in
priv/lib. Other assets are free to reside within the
priv directory, but it is recommended that they do so in a structured manner.
Source files from other languages that generate Erlang code, such as ASN.1 or Mibs, should be placed in directories, at the top level or in
src, with the same name as the source language, for example
asn1 and
mibs. Build artifacts should be placed in their respective language directory, such as
src for Erlang code or
java_src for Java code.
The
.app file for release may reside in the
ebin-directory in a development environment but it is encouraged that this is an artifact of the build step. By convention a
.app.src file is used, which resides in the
src directory. This file is nearly identical to the
.app file but certain fields may be replaced during the build step, such as the application version.
Directory names should not be capitalized.
It is encouraged to omit empty directories.
A released application must follow a certain structure.
─ ${application}-${version}
  ├── bin
  ├── doc
  │   ├── html
  │   ├── man[1-9]
  │   ├── pdf
  │   ├── internal
  │   └── examples
  ├── ebin
  │   └── ${application}.app
  ├── include
  ├── priv
  │   ├── lib
  │   └── bin
  └── src
src - Optional. Contains the Erlang source code and internal include files used by the application itself. This directory is no longer required in a released application.
ebin - Required. Contains the Erlang object code, the beam files. The .app file must also be placed here.
priv - Optional. Used for application specific files. code:priv_dir/1 is to be used to access this directory.
priv/lib - Recommended. Any shared-object files that are used by the application, such as NIFs or linked-in-drivers, should be placed here.
priv/bin - Recommended. Any executable that is used by the application, such as port-programs, should be placed here.
include - Optional. Used for public include files that must be reachable from other applications.
bin - Optional. Any executable that is a product of the application, such as escripts or shell-scripts, should be placed here.
doc - Optional. Any released documentation should be placed in sub-directories here.
doc/man1 - Recommended. Man pages for Application executables.
doc/man3 - Recommended. Man pages for module APIs.
doc/man6 - Recommended. Man pages for Application overview.
doc/html - Optional. HTML pages for the entire Application.
doc/pdf - Optional. PDF documentation for the entire Application.
The
src directory could be useful to release for debugging purposes but is not required. The
include directory should only be released if the application has public include files.
The only documentation that is recommended to be released in this way are the man pages. HTML and PDF will normally be distributed in some other manner.
It is encouraged to omit empty directories.
When an Erlang runtime system is started, a number of processes are started as part of the Kernel application. One of these processes is the application controller process, registered as
application_controller.
All operations on applications are coordinated by the application controller. It is interacted through the functions in the module
application, see the
application(3) manual page in Kernel. In particular, applications can be loaded, unloaded, started, and stopped.
When an application is started, the application controller first loads it using application:load/1. It checks the value of the applications key, to ensure that all applications that are to be started before this application are running, and then starts the application by calling the callback function start/2 with the start argument defined by the mod key.
An application is stopped, but not unloaded, by calling application:stop/1. The supervision tree is then shut down: the top supervisor tells all its child processes to shut down, and so on; the entire tree is terminated in reversed start order. The application master then calls the application callback function stop/1 in the module defined by the mod key.
An application can be configured using configuration parameters. These are a list of {Par, Val} tuples specified by a key env in the application resource file. Par is to be an atom.
Val is any term. The application can retrieve the value of a configuration parameter by calling
application:get_env(App, Par) or a number of similar functions, see the
application(3) manual page in Kernel. The values in the .app file can be overridden by values in a system configuration file, which is a file that contains configuration parameters for relevant applications:
[{Application1, [{Par11,Val11},...]}, ..., {ApplicationN, [{ParN1,ValN1},...]}].
The system configuration is to be called
Name.config and Erlang is to be started with the command-line argument
-config Name. For details, see the
config(4) manual page in Kernel.
Example:
A file
test.config is created with the following contents:
[{ch_app, [{file, "testlog"}]}].
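Given the file name test.config, the node is then started with the corresponding -config argument, for example:
% erl -config test
After start-up, application:get_env(ch_app, file) returns {ok, "testlog"} rather than any value defined in the .app file.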
The value of file overrides the value of file as defined in the .app file. When using release handling, exactly one system configuration file is to be used and that file is to be called sys.config.
The values in the .app file and in the system configuration file can be overridden directly from the command line:
% erl -ApplName Par1 Val1 ... ParN ValN
A start type is defined when starting the application:
application:start(Application, Type)
application:start(Application) is the same as calling
application:start(Application, temporary). The type can also be
permanent or
transient:
- If a permanent application terminates, all other applications and the runtime system are also terminated.
- If a transient application terminates with reason normal, this is reported but no other applications are terminated. If a transient application terminates abnormally, that is, with any other reason than normal, all other applications and the runtime system are also terminated.
- If a temporary application terminates, this is reported but no other applications are terminated.
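For example, to start the ch_app application from the earlier example so that its termination also terminates the runtime system, one could call:
application:start(ch_app, permanent).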
An application can always be stopped explicitly by calling
application:stop/1. Regardless of the mode, no other applications are affected.
The transient mode is of little practical use, since when a supervision tree terminates, the reason is set to
shutdown, not
normal.
© 2010–2017 Ericsson AB
Licensed under the Apache License, Version 2.0. | http://docs.w3cub.com/erlang~20/doc/design_principles/applications/ | 2018-10-15T13:48:34 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.w3cub.com |
Datastore 2.3.0 User Guide
Add comments to query results
To add comments to one or many query results:
- In the main screen, from panel one or panel two, select one or more query results.
- From the panel corresponding to the selected results, click More, then select Add comment. A dialog appears.
- Add your comment in the input area and click Save.
A special column in the result grid, next to the checkbox column, shows an icon for every result line that has comments. This column is visible if at least one result has comments.
Caution: Comments cannot be added to aggregated query results.
To ensure navigation to/from the Comment objects and Object Type or Element, read Navigate to/from Comment objects.
Comment structure
The maximum allowed length for one comment is 512 characters. Space, tabulation, and carriage return are permitted.
Fire & Water
Fire and Water are our state of the art development environments for programmers using the Elements compiler on the Mac and on Windows, respectively.
While sharing a lot of common concepts, infrastructure and internal code, each version of the IDE is designed specifically and natively for the platform it runs on – the Mac (Fire) and Windows (Water), respectively.
Both Fire and Water support all four of the Elements languages (Oxygene, RemObjects C#, RemObjects Swift (Silver) and Iodine (Java)), and each supports developing for all four Elements target platforms, including .NET, Cocoa, Java/Android SDK and Island.
Fire and Water are written from the ground up to be a fresh new look at what a development environment could look like. They take some of the best ideas from other IDEs we love and use, including Xcode and Visual Studio, and combine them with unique new ideas that we believe will help improve developer workflow.
One of the fundamental principles of Fire and Water is that they will never get in your way. They are written to be lean and mean, always responsive and mode-less. That means that you will never be pulled out of your flow.
- Fire first shipped with Elements 8.3 in early 2016.
- Water is currently in usable beta and will officially ship later in 2018.
Getting Started
The topics in this section will help you get started working in Fire and Water.
- Navigation will guide you through finding your way around the IDE. Fire and Water are designed around easy and seamless navigation, and understanding a few core concepts will get you productive in no time.
- Code Editor introduces you to the most important part of the IDE, the place where you'll write code for your awesome apps.
- Fire, Water and the Elements compiler have a few prerequisites you may need to install in order to have all the tools you need to get started with the platform.
Discussing Fire and Water, and Reporting Feedback
We have separate forums on our Talk site for discussing Fire and Water, reporting bugs, and giving feedback:
- Fire Discussion Forum
- Water BETA Discussion Forum (login needed while in beta)
The installation process of the Smart2Pay Plugin requires first downloading the WordPress platform, WooCommerce plugin, and then uploading the Smart2Pay WooCommerce plugin and installing Smart2Pay PHP-SDK.
In order to download the WordPress platform, please go to the download page of and follow the installation instructions.
In order to install the WooCommerce plugin, please go to and follow the installation instructions.
In order to install Smart2Pay WooCommerce plugin, you need to follow the next steps:
- Download the archive from GitHub in your installation folder.
- Upload woocommerce-smart2pay directory to the plugins directory from your WordPress dashboard.
- Go to Plugins setting page and activate WooCommerce Smart2Pay.
- From your WordPress dashboard go to WooCommerce area -> Settings -> Checkout section and navigate to Payment Gateways area where Smart2Pay – alternative payment methods option should be available and click Enable button.
- Click the Save Changes button to complete the installation of the WooCommerce plugin.
Please note: Upon download, the system will rename the directory into woocommerce-master. You will have to change the name of the directory into the correct form woocommerce-smart2pay.
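If you are working from a shell, the rename described in the note above can be done with a command along these lines; the wp-content/plugins path is the standard WordPress location and may differ in your installation:
mv wp-content/plugins/woocommerce-master wp-content/plugins/woocommerce-smart2pay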
The vCenter Server migration wizard prompts you for the deployment and migration information when migrating a vCenter Server instance, a vCenter Single Sign-On instance, or a Platform Services Controller instance from Windows to an appliance. It is a best practice to keep a record of the values that you entered in case you must power off the appliance and restore the source installation.
You can use this worksheet to record the information that you need for migrating a vCenter Server instance with an vCenter Single Sign-On or Platform Services Controller from Windows to an appliance.
The user name that you use to log in to the machine from which you want run the GUI installer, the path to the vCenter Server Appliance installer, and your values including the passwords, must contain only ASCII characters. Extended ASCII and non-ASCII characters are unsupported.
Local OS users existing on source windows machine are not migrated to the target vCenter Server Appliance and must be recreated after migration is complete. If any local OS user names are used to log in to the vCenter Single Sign-On, you must recreate them and reassign permissions in the Platform Services Controller appliance.
If the source vCenter Server machine is joined to an Active Directory domain, the account you use must have permissions to rejoin the machine to the domain.
Commands¶
Commands in HexChat are prefixed with / and to escape them you can type it twice e.g. //
HexChat will first try to run plugin commands, then user commands, then client commands, and lastly send it directly to the server.
User Commands¶
User commands can be used to create aliases, to run multiple commands at a time, or more complex custom commands. They are set in.
An alias is just a shortcut referring to an existing command, for example /j refers to /join &2
Naming two user commands the same thing will run both in the order they are listed.
For more complex commands you can use these codes:
- %c Current channel
- %e Current network
- %m Machine info
- %n Your nick
- %t Time/date
- %v HexChat version
- %<num> Word
- &<num> Word from end of line
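As a purely hypothetical illustration, a user command named SAYINFO could combine several of these codes; the name and wording are made up:
Name: SAYINFO
Command: say I am %n running HexChat %v on %m, currently in %c on %e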
To install and use the vRealize Automation plug-in, your system must meet certain functional prerequisites.
vRealize Automation
You must have access to a vRealize Automation server. Version 7.0 of the plug-in works with vRealize Automation 7.0.
For information about setting up vRealize Automation, see vRealize Automation Installing vRealize Automation 7.0.
vRealize Orchestrator Server
Version 7.0 of the plug-in works with vRealize Orchestrator.
Search Engine Optimisation
From Joomla! Documentation
Search Engine Optimization (SEO) is the process of improving the volume and quality of traffic to a Website from search engines.
The main article about SEO is Making your site Search Engine Friendly, which lists steps that lead to a better search engine ranking.
PyCrypto is a dependency of Paramiko which provides the low-level (C-based) encryption algorithms used to run SSH.
Difference between revisions of "Components Banners Banners"
From Joomla! Documentation
Screenshot
Column Headers
In the table containing Banners, these are the different columns shown below. Click on the column heading on the banner manager screen.
- Name. The name of the Banner. Editing Option - 'click' on the name to open the Banner for editing.
- Purchase Type. The purchase type of the banner. This is used to indicate how the banner client purchased the display time.
- Batch. Batch processes the selected banners. Works with one or multiple banners selected.
- Options. Opens the Options window where settings such as default parameters can be edited.
- Help. Opens this help screen.
'Help30-colheader-Order-Ascending-DisplayNum.png'], dtype=object)] | docs.joomla.org |
Overview¶
Start here for all things
distlib.
Distlib evolved out of
packaging¶
Distlib is a library which implements low-level functions that relate to
packaging and distribution of Python software. It consists in part of
the functions in the
packaging Python package, which was intended to be
released as part of Python 3.3, but was removed shortly before Python
3.3 entered beta testing.
What was the problem with
packaging?¶
The
packaging software just wasn’t ready for inclusion in the Python
standard library. The amount of work needed to get it into the desired
state was too great, given the number of people able to work on the project,
the time they could devote to it, and the Python 3.3 release schedule.
The approach taken by
packaging was seen to be a good one: to ensure
interoperability and consistency between different tools in the packaging
space by defining standards for data formats through PEPs, and to do away
with the ad hoc nature of installation encouraged by the
distutils
approach of using executable Python code in
setup.py. Where custom
code was needed, it could be provided in a standardised way using
installation hooks.
While some very good work was done in defining PEPs to codify some of the
best practices,
packaging suffered from some drawbacks, too:
Not all the PEPs may have been functionally complete, because some important use cases were not considered – for example, built (binary) distributions for Windows.
It continued the command-based design of
distutils, which had resulted in
distutilsbeing difficult to extend in a consistent, easily understood, and maintainable fashion.
Some important features required by distribution authors were not considered – for example:
- Access to data files stored in Python packages.
- Support for plug-in extension points.
- Support for native script execution on Windows.
These features are supported by third-party tools (like
setuptools/
Distribute) using
pkg_resources, entry points and console scripts.
There were a lot of rough edges in the
packagingimplementation, both in terms of bugs and in terms of incompletely implemented features. This can be seen (with the benefit of hindsight) as due to the goals being set too ambitiously; the project developers bit off more than they could chew.
How Distlib can help¶
The idea behind Distlib is expressed in this python-dev mailing-list post, though a different name was suggested for the library..
How you can help¶
If you have some time and the inclination to improve the state of Python packaging, then you can help by trying out Distlib, raising issues where you find problems, contributing feedback and/or patches to the implementation, documentation, and underlying PEPs.
Main features¶
Distlib offers sub-modules covering version handling, metadata, resources, scripts, wheels, locators and package indexes. To work with the project, you can download a release from PyPI, or clone the source repository or download a tarball from it.
Coverage results are available at:
Continuous integration test results are available at:
The source repository for the project is on BitBucket:
You can leave feedback by raising a new issue on the issue tracker (BitBucket registration not necessary, but recommended).
Next steps¶
You might find it helpful to look at the Tutorial, or the API Reference. | https://distlib.readthedocs.org/en/latest/overview.html | 2015-10-04T09:09:10 | CC-MAIN-2015-40 | 1443736673081.9 | [] | distlib.readthedocs.org |
An alias address is an IP address that is assigned to a specified network interface after the interface has been configured with an address. With multiple addresses, the second and subsequent addresses are referred to as aliases. When only a single address is assigned to an interface, it is not called an alias, regardless of how it was initially assigned.
In versions of Mac OS X and Mac OS X Server up through 10.1.1, alias addresses are not supported by any of the GUI, and so must be set up and maintained using startup scripts, or directly, using command-line tools. This HowTo deals with Mac OS X in general, and not with the Server product, so details may be slightly different for the latter. In particular, there may be existing mechanisms in the Server product to handle aliases which we will not touch upon here. Updates to this information are solicited.
The command used to manipulate network address configurations is ifconfig(8), and it is documented in a man page by that name (without the "(8)", which refers to the section of the manual containing the page).
There are two situations to handle when adding an alias, and they both deal with subnets as they exist on the interface in question, at the time the alias is added.
In this case, the alias address can be assigned with the netmask that goes with the new address.
For example, suppose there is a single address assigned to "en1", say, 192.168.32.25, and its netmask is 255.255.255.0 (this is sometimes written as 192.168.32.25/24; the 24 indicating 24 consecutive '1' bits in the netmask).
If the alias address is given to you (by the network admin, for example) as 192.168.64.25/24, then you would use the netmask 255.255.255.0. In this case, the new address is on a different subnet from the original address (192.168.32.0 and 192.168.64.0). The following shows how to assign the alias.
ifconfig en1 inet 192.168.64.25 netmask 255.255.255.0 alias
Again, suppose there is a single address assigned to "en1", as above.
Let's also suppose that the network admin gives you 192.168.32.27/24 to use as an alias. Here, you should use the netmask 255.255.255.255. In this case, the new address is on the same subnet as the original address (192.168.32.0). The following shows how to assign the alias.
ifconfig en1 inet 192.168.32.27 netmask 255.255.255.255 alias
To go into a little more depth, for those with an urge to know, the choice of netmask is dictated by the way the system keeps track of "routes" internally. In order to correctly determine where to send a packet, interfaces are tagged with pairs of "address, netmask". Since aliases are, in a sense, duplicate tags, the system needs to know whether the subnet represented by the tag is new, and this is indicated by the netmask. With a "normal" netmask, the system is told this is a new subnet, and it can then set up internal tables correctly. If the subnet is not new, the tables will get set up incorrectly if the netmask is "normal".
With a netmask of 255.255.255.255, the system knows this is a duplicate of an existing subnet, and therefore will assign the address as if it were assigned to the loopback interface, with the "point-to-point" mask.
Using the values from Case 2, assigning a netmask of 255.255.255.0 will, in most cases, appear to work. However, the internal tables will not be set up correctly, and if the alias is removed, problems may ensue. This can be overcome by adding appropriate host routes when the alias is added, and removing the routes when the alias is removed. However, the system works correctly with the "point-to-point" netmask, and maintenance is easier.
If we add the alias with netmask 255.255.255.0, ifconfig will report an EXISTS error. This can be ignored, and the alias is successfully added. The error indicates that subnets match. At this point, the user must add a host route as follows:
route add -host 192.168.32.27 -interface 127.0.0.1
This will set up the routing tables in a way that keeps the system's tables correct. Removing the alias will require that you remove this route by hand, as well.
If an alias address (or indeed, any address) is to be removed from an interface, the ifconfig command is again used. If the address is an alias, the command is
ifconfig DEV inet IP.AD.DR.SS netmask NE.TM.AS.KK -alias
where DEV is the network device ("en1" in the above examples), and the IP address and netmask values match the values used to assign the alias.
If the address is not an alias (i.e., is the only IP address assigned), the form of the command is slightly different:
ifconfig DEV inet IP.AD.DR.SS netmask NE.TM.AS.KK delete
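To make the two removal forms concrete with the example values used earlier: the alias added in Case 2 would be removed with the first command below, while the interface's original (sole) address would be removed with the second.
ifconfig en1 inet 192.168.32.27 netmask 255.255.255.255 -alias
ifconfig en1 inet 192.168.32.25 netmask 255.255.255.0 delete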
In Mac OS X 10.1.2, aliases are fully supported by the system. The GUI allows the creation of aliases which are stored in the network configuration database, and are installed on reboot.
Using the values from the example above, first open the Network Preferences panel. Then duplicate the device for which you want to assign an alias, as described above, and configure it as you would any other port.
In particular, you should provide router and nameserver information so that if the original address is removed, the network will continue to function. If the alias resides on an existing subnet, you should duplicate the values for that original. If the subnet is new, you will have to provide at least a different router address, if you want the system to continue working with that subnet, once the original is removed.
Finally, select and drag the 'copy' to its desired position in the hierarchy of ports, and "apply" the changes you've made.
Important: unlike the situation where you do this "by hand", the GUI support does not require the point-to-point netmask used in the case of common subnets. The system will actually do all the work involved in configuring in this case. Thus, you can assign the "correct" netmask when configuring an alias address in the GUI.
Until the release of Darwin 5.2, the user was on his own to support alias address assignment. It was easy to configure network devices with alias addresses, but the admin had to figure out precisely where to put the commands in the various startup scripts in order to get "persistence".
With Darwin 5.2, the system supports persistent alias address assignment, so you no longer have to go through the "find a place to put this script" dance.
Use iftab as you normally would, except that you add the 'alias' keyword to the entry for an alias address. At boot time, the system will install the desired alias addresses. As usual, the entry mimics the arguments to ifconfig:
en0 inet 192.168.32.27 netmask 255.255.255.0 alias
in all cases.
As in Mac OS X 10.1.2, the system handles the subnet issue by itself, installing routes as appropriate for the existing and new configuration information. | http://docs.huihoo.com/darwin/opendarwin/articles/network_config/ar01s03.html | 2015-10-04T09:09:27 | CC-MAIN-2015-40 | 1443736673081.9 | [] | docs.huihoo.com |
Difference between revisions of "JRequest::getString"
From Joomla! Documentation
JRequest::getString
Description
Fetches and returns a given filtered variable.
public static function getString ( $name, $default = '', $hash = 'default', $mask = 0 )
- Returns
- Defined on line 271 of libraries/joomla/environment/request.php
- Referenced by
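For illustration only (the parameter name is made up), a component could fetch a filtered string from the POST data using the signature above:
$keyword = JRequest::getString('keyword', '', 'post');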
See also
JRequest::getString source code on BitBucket
Class JRequest
Subpackage Environment
- Other versions of JRequest::getString
Tobiko Configuration Guide¶
Configure Tobiko Framework¶
In order to make sure Tobiko tools can connect to OpenStack services via Rest API configuration parameters can be passed either via environment variables or via a ini configuration file (referred here as tobiko.conf). Please look at Authentication Methods for more details.
To be able to execute scenario test cases, there are some OpenStack resources that have to be created before running test cases. Please look at Setup Required Resources for more details.
tobiko.conf¶
Tobiko tries to load tobiko.conf file from one of below locations:
current directory:
./tobiko.conf
user home directory:
~/.tobiko/tobiko.conf
system directory:
/etc/tobiko/tobiko.conf
Configure Logging¶
Tobiko can configure the logging system to write messages to a log file. You can edit the options below in tobiko.conf to enable it:
[DEFAULT]
# Whether to allow debugging messages to be written out or not
debug = true
# Name of the file where log messages will be appended.
log_file = tobiko.log
# The base directory used for relative log_file paths.
log_dir = .
Authentication Methods¶
Tobiko uses OpenStack client to connect to OpenStack services.
Authentication Environment Variables¶
To configure how Tobiko can connect to services you can use the same environment variables you would use for OpenStack Python client CLI.
Currently supported variables are:
# Identity API version
export OS_IDENTITY_API_VERSION=3
# URL to be used to connect to OpenStack Identity Rest API service
export OS_AUTH_URL=
# Authentication username (name or ID)
export OS_USERNAME=admin
export OS_USER_ID=...
# Authentication password
export OS_PASSWORD=...
# Project-level authentication scope (name or ID)
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_PROJECT_ID=...
export OS_TENANT_ID=...
# Domain-level authorization scope (name or ID)
export OS_DOMAIN_NAME=Default
export OS_DOMAIN_ID=...
# Domain name or ID containing user
export OS_USER_DOMAIN_NAME=Default
export OS_USER_DOMAIN_ID=...
# Domain name or ID containing project
export OS_PROJECT_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_ID=...
# ID of the trust to use as a trustee user
export OS_TRUST_ID=...
Authentication Configuration¶
You can also configure the same authentication parameters by editing ‘keystone’ section in tobiko.conf file. For example:
[keystone]
# Identity API version
api_version = 3
# URL to be used to connect to OpenStack Identity Rest API service
auth_url =
# Authentication username (name or ID)
username = admin
# Authentication password
password = ...
# Project-level authentication scope (name or ID)
project_name = admin
# Domain-level authorization scope (name or ID)
domain = default
# Domain name or ID containing user
user_domain_name = default
# Domain name or ID containing project
project_domain_name = default
# ID of the trust to use as a trustee user
trust_id = ...
Proxy Server Configuration¶
The first thing to make sure is Tobiko can reach OpenStack services. In case OpenStack is not directly accessible from where test cases or Tobiko CLI are executed it is possible to use an HTTP proxy server running on a network that is able to reach all OpenStack Rest API service. This can be performed by using below standard environment variables:
export http_proxy=http://<proxy-host>:<proxy-port>/
export https_proxy=http://<proxy-host>:<proxy-port>/
export no_proxy=127.0.0.1,...
For convenience it is also possible to specify the same parameters via tobiko.conf:
[http]
http_proxy = http://<proxy-host>:<proxy-port>/
https_proxy = http://<proxy-host>:<proxy-port>/
no_proxy = 127.0.0.1,...
Because Tobiko test cases could execute local commands (like for example ping) to reach network services we have to specify in tobiko.conf file a shell (like OpenSSH client) to be used instead of the default local one (‘/bin/sh’):
[shell]
command = /usr/bin/ssh <proxy-host>
Please make sure it is possible to execute commands on local system without having to pass a password:
/usr/bin/ssh <proxy-host> echo 'Yes it works!'
To achieve this, please follow one of the many guides available on the Internet.
Setup Required Resources¶
To be able to execute Tobiko scenario test cases, there are some OpenStack resources that have to be created before running test cases.
Install required Python OpenStack clients:
pip install --upgrade \
    -c \
    python-openstackclient \
    python-glanceclient \
    python-novaclient \
    python-neutronclient
You need to make sure the authentication environment variables are properly set:
source openstackrc
openstack image list
openstack flavor list
openstack network list
Get an image for Nova instances created by Tobiko:
wget
openstack image create cirros \
    --file cirros-0.4.0-x86_64-disk.img \
    --disk-format qcow2 \
    --container-format bare \
    --public
Create a flavor to be used with above image:
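The flavor-creation command itself is not shown on this page; a minimal flavor sized for the CirrOS image could be created along these lines (the name and sizes are illustrative):
openstack flavor create m1.tiny \
    --vcpus 1 \
    --ram 64 \
    --disk 1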
What’s Next¶
To know how to run Tobiko scenario test cases you can look at Tobiko Test Cases Execution Guide | https://tobiko.readthedocs.io/en/0.1.0/user/config.html | 2022-08-08T02:16:56 | CC-MAIN-2022-33 | 1659882570741.21 | [] | tobiko.readthedocs.io |
A multiplatform feature flag is a flag in a CloudBees Feature Management app that serves different values for multiple applications connected to CloudBees Feature Management through SDKs. This flag is separately configured for each of these applications, or platforms.
If you have both iOS and Android applications, for example, you can use the same feature flag for both, and configure the flag separately on each platform to control each application differently.
Refer to SDK installation for a full list of CloudBees Feature Management SDKs.
Enabling multiplatform control
With multiplatform control, you can control which platforms serve the default flag configuration, and which serve a different configuration.
To enable multiplatform control:
From the CloudBees Feature Management Home page, select All apps from the left pane, and then select the app where multiple SDKs are initialized.
From the left pane, select the environment, and then select a flag.
Select Platform Management in the Audience tab.
Figure 1. Platform Management is displayed.
Switch Default to Explicit for the platforms whose flag configuration you want to override.
Select Update.
Figure 2. The Python platform is set to Explicit.
You have enabled multiplatform control for your flag. You can select Platform Management to:
Set a platform to Explicit to override the default flag configuration.
Set a platform to Default to use the default flag configuration.
Configuring a flag for a specific platform
You can create a flag configuration for a platform that will override the default flag configuration.
To configure a flag for a specific platform:
Navigate to the flag you want to configure.
In the Audience tab, select the dropdown menu next to Platform Management, and then select a platform. Only those platforms set to Explicit in Platform Management are displayed.
Figure 3. The Python and .NET platforms are set to Explicit and can be selected from the Platform dropdown menu. The JavaScript Browser platform is set to Default, so it is not explicitly listed.
Configure the flag as desired for the specified platform.
The flag is served to the platform according to its unique configuration. | https://docs.cloudbees.com/docs/cloudbees-feature-management/latest/feature-flags/multiplatform-feature-flags | 2022-08-08T00:58:30 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.cloudbees.com |
Configis set to
en_US.UTF-8and initialize the database as follows:
- RHEL 7:
echo 'LC_ALL="en_US.UTF-8"' >> /etc/locale.conf sudo su -l postgres -c "postgresql-setup.value. Setting
wal_buffersto be approximately 3% of
shared_buffersup. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/installation/topics/cdpdc-configuring-starting-postgresql-server.html | 2022-08-08T01:45:32 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.cloudera.com |
Policy Update Mode
Policy Update Modes are the same for both Windows and macOS computers.
Update modes can be used to quickly set all applications to update automatically, update manually using the grid, schedule updates, or version freeze all applications. Additional customization can be done to change update modes for individual applications.
- Semi-Automatic: Ensures that all applications in the Deploy library will always stay up to date if they are installed on the computers. Customize this to change the way a particular application updates. Windows Updates will install based on the scheduled time set in the General Settings of the policy (daily, weekly, or monthly).
- Manual: Applications will only update if manually triggered from the Applications grid in the Deploy console.
- Scheduled: All application updates occur only during the scheduled period. The schedule can be set in the General Settings of the Policy. Tasks can be scheduled once a week, once a day, or once a month.
- Version Freeze: Freezes all application updates by default. To version freeze specific applications, select Manual, Scheduled, or Semi-Automatic mode and check the version freeze option beside the applications that need to be version frozen. All Windows Updates will be disabled.
- Custom: Any combination of one or more Semi-Automatic, Manual, Scheduled, and Version Freeze update modes makes it a custom mode.
Instancing
In the musical “A Chorus Line,” the most well-known scene is when about 50 identical-looking young women line up left-to-right across the stage, and they all kick-left-kick-right in unison. To implement this in Panda3D, you might do this:
for i in range(50):
    dancer = Actor.Actor("chorus-line-dancer.egg", {"kick":"kick.egg"})
    dancer.loop("kick")
    dancer.setPos(i*5,0,0)
    dancer.reparentTo(render)
Here is the scene graph that we just created:
This works fine, but it is a little expensive. Animating a model involves a lot of per-vertex matrix calculations. In this case, we’re animating 50 copies of the exact same model using 50 copies of the exact same animation. That’s a lot of redundant calculations. It would seem that there must be some way to avoid calculating the exact same values 50 times. There is: the technique is called instancing.
The idea is this: instead of creating 50 separate dancers, create only one dancer, so that the engine only has to update her animation once. Cause the engine to render her 50 times, by inserting her into the scene graph in 50 different places. Here is how it is done:
dancer = Actor.Actor("chorus-line-dancer.egg", {"kick":"kick.egg"})
dancer.loop("kick")
dancer.setPos(0,0,0)
for i in range(50):
    placeholder = render.attachNewNode("Dancer-Placeholder")
    placeholder.setPos(i*5, 0, 0)
    dancer.instanceTo(placeholder)
Here is a diagram of the scene graph we just created:
It’s not a tree anymore, it is a directed acyclic graph. But the renderer still traverses the graph using a recursive tree-traversal algorithm. As a result, it ends up traversing the dancer node 50 times. Here is a diagram of the depth-first traversal that the renderer takes through the graph. Note that this is not a diagram of the scene graph - it’s a diagram of the renderer’s path through the scene graph:
In other words, the renderer visits the dancer actor 50 times. It doesn’t even notice that it’s visiting the same actor 50 times, rather than visiting 50 different actors. It’s all the same to the renderer.
There are 50 placeholder nodes, lined up across the stage. These are called dummy nodes. They don’t contain any polygons, they’re little tiny objects used mainly for organization. In this case, I’m using each placeholder as a platform on which a dancer can stand.
The position of the dancer is (0,0,0). But that’s relative to the position of the parent. When the renderer is traversing placeholder 1’s subtree, the dancer’s position is treated as relative to placeholder 1. When the renderer is traversing placeholder 2’s subtree, the dancer’s position is treated as relative to placeholder 2. So although the position of the dancer is fixed at (0,0,0), it appears in multiple locations in the scene (on top of each placeholder).
In this way, it is possible to render a model multiple times without storing and animating it multiple times.
Advanced Instancing
Now, let’s go a step further:
dancer = Actor.Actor("chorus-line-dancer.egg", {"kick":"kick.egg"})
dancer.loop("kick")
dancer.setPos(0,0,0)
chorusline = NodePath('chorusline')
for i in range(50):
    placeholder = chorusline.attachNewNode("Dancer-Placeholder")
    placeholder.setPos(i*5,0,0)
    dancer.instanceTo(placeholder)
This is the exact same code as before, except that instead of putting the 50
placeholders beneath
render, I
put them beneath a dummy node called
chorusline. So my line of dancers
is not part of the scene graph yet. Now, I can do this:
for i in range(3):
    placeholder = render.attachNewNode("Line-Placeholder")
    placeholder.setPos(0,i*10,0)
    chorusline.instanceTo(placeholder)
Here is the scene graph I just created:
But when the renderer traverses it using a recursive tree-traversal algorithm, it will see 3 major subtrees (rooted at a line-placeholder), and each subtree will contain 50 placeholders and 50 dancers, for a grand total of 150 apparent dancers.
Instancing: an Important Caveat
Instancing saves panda quite a bit of CPU time when animating the model. But that doesn’t change the fact that the renderer still needs to render the model 150 times. If the dancer is a 1000 polygon model, that’s still 150,000 polygons.
Note that each instance has its own bounding box, each is occlusion-culled and frustum-culled separately.
The NodePath: a Pointer to a Node plus a Unique Instance ID
If I had a pointer to the chorus-line dancer model, and I tried to ask the question “where is the dancer,” there would be no well-defined answer. The dancer is not in one place, she is in 150 places. Because of this, the data type pointer to node does not have a method that retrieves the net transform.
This is very inconvenient. Being able to ask “where is this object located” is fundamental. There are other incredibly useful queries that you cannot perform because of instancing. For example, you cannot fetch the parent of a node. You cannot determine its global color, or any other global attribute. All of these queries are ill-defined, because a single node can have many positions, many colors, many parents. Yet these queries are essential. It was therefore necessary for the Panda3D designers to come up with some way to perform these queries, even though a node can be in multiple locations at the same time.
The solution is based on the following observation: if I had a pointer to the chorus line-dancer model, and I also had a unique identifier that distinguishes one of the 150 instances from all the others, then I could meaningfully ask for the net transform of that particular instance of the node.
Earlier, it was noted that a NodePath contains a pointer to a node, plus some
administrative information. The purpose of that administrative information is
to uniquely identify one of the instances. There is no method
PandaNode.get_net_transform(), but there is a method
NodePath.getNetTransform(). Now you know why.
To understand how NodePath got its name, think about what is necessary to uniquely identify an instance. Each of the 150 dancers in the graph above corresponds to a single path through the scene graph. For every possible path from root to dancer, there exists one dancer-instance in the scene. In other words, to uniquely identify an instance, you need a list of nodes that starts at the leaf and goes up to the root.
The administrative information in a NodePath is a list of nodes. You can fetch
any node in the list, using the
NodePath.node(i) method. The first one,
node(0), is the node to which the NodePath points.
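As a small illustration building on the chorus-line code above, the NodePath returned by instanceTo() identifies one particular instance, so per-instance queries are well defined even though the underlying node is shared:
instance = dancer.instanceTo(placeholder)
print(instance.getNetTransform())  # the net transform of this particular instance
print(instance.getPos(render))     # its position relative to render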
Normalize alerts with correlation search templates in ITSI
IT Service Intelligence (ITSI) ships with several predefined correlation search templates to help you normalize alerts from common third-party systems. Leverage these searches when creating a correlation search to bring third-party alerts into ITSI and normalize them as notable events. For more information about correlation searches, see Overview of correlation searches in ITSI.
Prerequisites
Access correlation search templates
All third-party search templates are available within the correlation search creation workflow. To leverage a template, perform the following steps:
- From the ITSI main menu, click Configuration > Correlation Searches.
- Click Create New Search > Create Correlation Search.
- Provide a name and description for the search.
- For Search Type, choose Predefined.
- Click Select a Search and choose from one of the predefined search templates described below.
- Click Select an index and choose an index to use for the search.
- Configure the rest of the correlation search to normalize the third-party alert fields. For instructions, see Ingest third-party alerts into ITSI.
Available correlation search templates
Choose from the following correlation search templates to bring third-party alerts into ITSI:
The available templates include AppDynamics and other common third-party monitoring tools.
FIPS 140-2 accreditation validates that an encryption solution meets a specific set of requirements designed to protect the cryptographic module from being cracked, altered, or otherwise tampered with. When FIPS 140-2 mode is enabled, any secure communication to or from vRealize Operations Manager 8.4 uses cryptographic algorithms or protocols that are allowed by the United States Federal Information Processing Standards (FIPS). FIPS mode turns on the cipher suites that comply with FIPS 140-2. Security related libraries that are shipped with vRealize Operations Manager 8.4 are FIPS 140-2 certified. However, the FIPS 140-2 mode is not enabled by default. FIPS 140-2 mode can be enabled if there is a security compliance requirement to use FIPS certified cryptographic algorithms with the FIPS mode enabled.
Enable FIPS during the initial cluster deployment
- Ensure a new deployment of a vRealize Operations Manager cluster.
- Ensure that the flag is appropriately used during the deployment of cluster nodes (OVF/OVA).
- Navigate to https://<VROPS IP>/admin/index.action.
- Login as an admin user.
- Take the cluster offline to activate the Administrator Settings page.
- Open the Administrator Settings tab in the left panel.
- Click FIPS Setting section.under the
- Bring the cluster online.
Verify that FIPS mode is Enabled
- Navigate to https://<VROPS IP>/admin/index.action.
- Login as the admin user.
- Open the Administrator Settings tab from the left panel.
- A FIPS 140-2 Status message appears. | https://docs.vmware.com/en/vRealize-Operations/8.3/com.vmware.vcom.scg.doc/GUID-DD4D5DA2-E1F7-47D0-8AAE-647F772BD801.html | 2022-08-08T00:44:17 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.vmware.com |
estimator.prob.drop
estimator.prob.drop#
- estimator.prob.drop(n, h, k, fail=0, rotations=False)[source]#
Probability that
krandomly sampled components have
failnon-zero components amongst them.
- Parameters
n – LWE dimension n > 0
h – number of non-zero components
k – number of components to ignore
fail – we tolerate
failnumber of non-zero components amongst the k ignored components
rotations – consider rotations of the basis to exploit ring structure (NTRU only) | https://lattice-estimator.readthedocs.io/en/latest/_apidoc/estimator.prob/estimator.prob.drop.html | 2022-08-08T02:17:44 | CC-MAIN-2022-33 | 1659882570741.21 | [] | lattice-estimator.readthedocs.io |
10.2
Japanese & German language OCR and support for special characters
12 November 2019
Version 10.2 (named Lovelace) is a significant update. There are a large number of improvements and fixes in this version.
10.2 also sees the introduction of 2 highly requested features: support for text using ‘special characters’ and Japanese & German language OCR support.
The list of changes in 10.2 are very large, so here’s a brief overview of the improvements.
Features
- New: Support for UTF-8 text. Including importing, exporting and OCR files with special characters.
- New language support for OCR, including Japanese and German.
Improvements
- Multiple improvements to importing raster and vector PDF files.
- Multiple user interface enhancements focussed on improving the overall usability of the software.
- Experience much faster image loading speeds on Windows.
- The processing speed of Clean Image tools (Raster Effects) are significantly improved.
Fixes
- Multiple fixes related to application stability. | https://docs.scan2cad.com/article/105-10-2 | 2022-08-08T00:20:19 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.scan2cad.com |
Password best practices for users
Use the following best practices to create strong passwords that protect your $PONYDOCSPRODUCT deployment..! | https://docs.splunk.com/Documentation/Splunk/9.0.0/Security/Passwordbestpracticesforusers | 2022-08-08T01:02:42 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Genesys™ Products and Components EOL Life Cycle Table
From Genesys Documentation
Contents
Provides the End of Life (EOL) statuses for Genesys Engage and PureConnect suites, products, and components; one table for those on the EOL track. Genesys™ on-premises EOL refers to its products or components that have reached its maturity and entered the retirement phase in their product life cycle for a variety of reasons, such as technology changes.
Important
- To download a PDF of the document, click the link below. Be aware that the EOL Announcement links are not active in the PDF version of this document.
- For End of Platform Support (EOPS), see Supported Operating Environment – Discontinued Supported.
- For Deprecations of features or products available from Genesys Engage cloud, see Feature Deprecations. Genesys™ deprecation refers to the retirement of a feature/service under the Genesys Engage cloud or Genesys Cloud platforms or the retirement of certain versions or products due to their obsolescence.
- See Genesys Engage & PureConnect Lifecycle Policy for information on the Genesys End-of-Life Policy and process.
Download a EOL PDF
To download a PDF of the document, click the link: Download PDF.
Be aware that the EOL Announcement links are not active in the PDF version of this document. | https://all.docs.genesys.com/System/EOL | 2021-01-16T03:41:52 | CC-MAIN-2021-04 | 1610703499999.6 | [] | all.docs.genesys.com |
Genesys IVR Recording (EE21) for Genesys Engage cloud the system to pause and resume a recording is configured as part of the VXML scripts within the IVR, based on your requirements.
Distribution Logic
N/A
User Interface & Reporting
Customer Interface Requirements
N/A
Agent Desktop Requirements
N/A
Reporting
Real-time Reporting
N/A
Historical Reporting
Historical reporting is provided by templates in the SpeechMiner UI (business interface), which is part of Genesys Interaction Recording. As this is a compliance use case, the number of calls recorded per service/business line/customer segment is not relevant. The assumption is that 100% of calls are recorded.
- If IVR is used to collect payment information or other customer-sensitive data, then use case Genesys Selective Recording (EE30) or Genesys Compliance Recording (EE29) needs to be used as well.
- Workforce Desktop Edition
- Customization of other desktop applications to enable Dynamic Recording
- High Availability for the Apache Load Balancer
- Provisioning of recordings from other vendors
Interdependencies
All required, alternate, and optional use cases are listed here, as well as any exceptions.
On-premises Assumptions
- The Record IVR Interactions – Base package supports 100% of voice recording at the IVR Extension level only (no other recording methods)
- This use case supports Genesys GVP only–no 3rd-party IVRs
- Apache is the only load balancer currently supported for GIR
- GIR MCPs are not shared with GVP
Cloud Assumptions
- The Record IVR Interactions – Base package supports 100% of voice recording at the IVR Extension level only (no other recording methods)
- The following activities are out of scope:
- Configuration of Network at its final state: SBC, Media Gateways, VLANs, Firewalls, NAT, Trunking Services, etc.
- Configuration of external storage system (such as SAN / NAS)
- Provisioning of recordings from other vendors
Related Documentation
Document Version
- v 1.0.3 | https://all.docs.genesys.com/UseCases/Current/GenesysEngage-cloud/EE21 | 2021-01-16T03:42:16 | CC-MAIN-2021-04 | 1610703499999.6 | [] | all.docs.genesys.com |
Genesys Speech Analytics (EE22) for Genesys Engage cloud.
Contents
- 1 What's the challenge?
- 2 What's the solution?
- 3 Use Case Overview
- 4 Use Case Definition
- 5 User Interface & Reporting
- 6 Assumptions
- 7 Related Documentation
Use Case Overview.
Business and Distribution Logic
Business Logic
See the user guide for search and discovery functionality.
Distribution Logic prerequisites for this use case on PureConnect are Genesys Voice Recording (EE07) and Genesys Voice and Screen Recording (EE08)
UConnector for PureConnect is required to utilize Genesys Intelligence Analytics on PureConnect
Languages
Languages currently available on Premise include: English, Spanish, German, French, Brazilian Portuguese, Italian, Korean, Japanese, Mandarin, Arabic, Turkish, Cantonese, Dutch, Canadian French, Russian.
Check with product team for specific dialects and planned dates for new languages.
Customer Assumptions
Interdependencies
All required, alternate, and optional use cases are listed here, as well as any exceptions.
On-premises Assumptions
Cloud Assumptions
Available in the Cloud for customers using Genesys Interaction Recording. Note that the only language available in the Cloud is English.
Related Documentation
Document Version
- v 1.1.5 | https://all.docs.genesys.com/UseCases/Current/GenesysEngage-cloud/EE22 | 2021-01-16T03:55:14 | CC-MAIN-2021-04 | 1610703499999.6 | [] | all.docs.genesys.com |
ResourceTag
Tags are key-value pairs that can be associated with Amazon SWF state machines and activities.
Tags may only contain unicode letters, digits, whitespace, or these symbols:
_ . : / = + - @.
Contents
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/amazonswf/latest/apireference/API_ResourceTag.html | 2021-01-16T04:03:50 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.aws.amazon.com |
Analyzing AR System Log Analyzer output
AR System Log Analyzer provides a summary of API and SQL activity captured in BMC Remedy AR System logs. This tool is useful for investigating performance issues reported with BMC Remedy AR System applications. The tool is similar to database performance tools. This section describes how to use the tool to generate the output file and analyze it.
Enhancements
The AR System Log Analyzer provides the following enhancements and features:
- Ability to filter the log file based on specific user
- Ability to filter the log file during a specific time period
- Better handling of the AR 7.6.x and AR 8.0 logs
- Ability to get the statistics of queued API calls, with TOP N list
Ability to collect report statistics on the long queued API calls
A logging parameter called Queue Delay shows the length of the time an API call has to stay in the queue. For more information about this parameter, see Setting server statistics options.
Before you begin
Before using AR System Log Analyzer, ensure that you have performed the following tasks:
- Collect details of the performance issue experienced by a user or applications.
- Enable logs to investigate the performance issue.
- Capture the reported performance issue behavior in the logs. | https://docs.bmc.com/docs/ars1902/analyzing-ar-system-log-analyzer-output-836458716.html | 2021-01-16T03:41:37 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.bmc.com |
How to merge source code files
GroupDocs.Comparison provides an ability to merge source code files by using the ComparisonAction properties:
- ComparisonAction.Accept accepts the found changes and adds them to the file without highlighting;
- ComparisonAction.Reject cancels found changes and removes them from the result file.
The following are the steps to apply/reject changes to resultant.
Example of merge source code file by using GroupDocs.Comparison
For example, you need to compare and merge several versions of source code files and you need to accept or discard changes made by different persons.
The differences show that two methods are written in the source.cs file: AddNumbers and Sum.
If you did not use ComparisonAction, then in the resulting file, all changes will be committed, and these methods will be removed, but if you need to control the merging of files, the ComparisonAction property will help you with this.
Example of using ComparisonAction
The following code samples demonstrate how to merge two source code files.
using (Comparer comparer = new Comparer(sourcePath)) { comparer.Add(targetPath); comparer.Compare(resultPath); ChangeInfo[] changes = comparer.GetChanges(); for (int i = 0; i < 10; i++) { changes[i].ComparisonAction = ComparisonAction.Accept; } for (int i = 10; i < changes.Length; i++) { changes[i].ComparisonAction = ComparisonAction.Reject; } comparer.ApplyChanges(File.Create(resultPath), new ApplyChangeOptions { Changes = changes }); }
The result of merging files
As a result, we get a merged source code file where the deleted elements are marked in red, the added – in blue, and the modified – in green.
Also, you will receive a file in HTML format with changed places in the code.
As you can see from the resulting files, only one of the two methods was removed.. | https://docs.groupdocs.com/comparison/net/how-to-merge-source-code-files/ | 2021-01-16T03:07:03 | CC-MAIN-2021-04 | 1610703499999.6 | [array(['comparison/net/images/how-to-merge-source-code-file-source.png',
None], dtype=object)
array(['comparison/net/images/how-to-merge-source-code-file-target.png',
None], dtype=object) ] | docs.groupdocs.com |
Authcore is a universal platform for secure and frictionless sign-in. It protects all user accounts with advanced security features and integrates with Intel SGX to reduce the time and internal resources to implement a reliable data protection solution.
User's private key was secured by a key management service. client authenticate through SPAKE2 zero-knowledge proofs and key management service. The key management service also uses strong encryption to protect the data. This design is to guarantee that only users can access their private key. providing a cost convenience experience. Users who are familiar with blockchain can back up their own private key. | https://docs.like.co/user-guide/liker-id/what-is-authcore | 2021-01-16T02:00:09 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.like.co |
Other Documentations
How to do WPML with eventon for front-end
March 15, 2019
You can set upto 3 WPML custom languages(by default) that works with 3 customize languages that is supported by eventon for front-end.
Step 1:
Identify the language code associated in each language in WPML eg. ‘en’, ‘es’, ‘fr’, ‘ de’
Step 2:
Go into myeventon> Language and select a language and type in correct language text for matching WPML language.
Eg. if you are using L1 in eventon as WPML first language (english) then you should write english translations in EventON L1. If EventON L2 is for WPML 2nd language (Spanish), then your eventON L2 must be in spanish and so forth.
Step 3:
Create the shortcode variables.
Example:
If Language #1 is French and language code for WPML for this is “fr”, your shortcode variable will look like wpml_l1=’fr’
Step 4:
Similar to above step, create other shortcode variables for other language you are using in association with wpml in eventON language.
Example: If Language #2 is Spanish, and language code in WPML is “es” then the shortcode variables to add is wpml_l2=’es’
Your end shortcode should look something like below:
[add_eventon wpml_l1=’fr’ wpml_l2=’es’]
Further Customization
If you wish you add support for more than 3 wpml language please use pluggable filter below to add those additional language values. – place this code in your theme functions.php file. | https://docs.myeventon.com/documentations/wpml-eventon-front-end/ | 2021-01-16T02:20:36 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.myeventon.com |
Legacy alarms are triggered when system attributes reach alarm threshold values. If you want to reduce or clear the count of legacy alarms on the Dashboard, you can acknowledge the alarms.
If an alarm from the legacy system is currently active, the Health panel on the Dashboard includes a Legacy alarms link. The number in parentheses indicates how many legacy alarms are currently active.
When you acknowledge an alarm, it is no longer included in the count of legacy alarms unless the alarm is triggered at the next severity level or it is resolved and occurs again. | https://docs.netapp.com/sgws-114/topic/com.netapp.doc.sg-troubleshooting/GUID-62D36C7B-66DF-4207-B350-2055349CABF1.html?lang=en | 2021-01-16T02:57:20 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.netapp.com |
Genesys Personalized Task Distribution (BO04) for Genesys Engage on premises.
Contents
- 1 What's the challenge?
- 2 What's the solution?
- 3 Use Case Overview
- 4 Use Case Definition
- 5 User Interface & Reporting
- 6 Assumptions
- 7 Related Documentation
Use Case Overview.
Business and Distribution Logic
Business Logic
In addition to the business logic from BO02, this use case makes the attributes from the customer context available within the business rules for task classification and prioritization as additional custom attributes. For example, the original source system may not include the customer segment for a specific task. A lookup in the CRM database can add this information to the task and can in turn be used within the prioritization rules to ensure that your VIP customers are handled with higher priority.
Distribution Logic
In addition to the distribution logic from BO02, this use case adds context-based routing, which uses the captured contextual data from third-party systems to enhance the task distribution. The attributes are made available to set up rules within the system. In addition to the standard and custom attributes supported in BO02, this use case adds further custom attributes to be used in routing rules to define the required employee skill to handle the specific interaction.
User Interface & Reporting
Customer Interface Requirements
N/A
Agent Desktop Requirements
In addition to the employee desktop requirements from BO02, this use case requires the display of eternal contextual data fetched from the third-party systems or Genesys context services. These will be displayed as additional Case Data in Workspace.
Reporting
Real-time Reporting
The iWD out-of-the-box Pulse templates can provide the following reports.
IWD Agent Activity
A report presenting agent or agent group activity as it relates to the processing of iWD work items of the type contacts. It is possible to report separately on not-ready reason codes in the relevant KPIs.
IWD Queue Activities
A report presenting agent or agent group activity as it relates to the processing of iWD work items of the type contacts. It is possible to report separately on not-ready reason codes in the relevant KPIs.The following graphic shows a typical dashboard configured with iWD templates for work item monitoring.
Historical Reporting
Using CX Insights, Genesys provides some out-of-the-box reports and metrics, including these out-of-the-box customizable reports:
- Capture Reports
- Capture Point Business Value
- Capture Point Task Duration
- Classification Reports
- Customer Segment Service Level
- Intraday Backlog Summary
- Process Volume
- Resource Reports
- Resource Performance
- Queue Reports
- Queue Priority Range
- Queue Task Duration
- Task Detail Reports
Assumptions
General Assumptions
This use case requires:
- Genesys Work Distribution (BO02) for GenesysEngage-onpremises
- This use case can coexist with Genesys Task Distribution-Workgroup (BO03) for GenesysEngage-onpremises for Genesys Engage to provide more personalization.
Customer Assumptions
Interdependencies
All required, alternate, and optional use cases are listed here, as well as any exceptions.
On-premises Assumptions
- Network communication between Genesys and the source of external contextual data is enabled
- The source of external context data shall comply with the default external context adapter (PS asset).
- This use case is available in Premise
Cloud Assumptions
- This use case is not available in the cloud
Related Documentation
Document Version
- v 1.2.4 | https://all.docs.genesys.com/UseCases/Public/GenesysEngage-onpremises/BO04 | 2021-01-16T03:10:45 | CC-MAIN-2021-04 | 1610703499999.6 | [] | all.docs.genesys.com |
Configuring Spark logging options
Configure Spark logging options.
logback.xmlThe location of the logback.xml file depends on the type of installation:
Log directories
- Executor logs
- SPARK_WORKER_DIR/worker-n/application_id/executor_id/stderr
- SPARK_WORKER_DIR/worker-n/application_id/executor_id/stdout
- Spark Master/Worker logs
- Spark Master: the global system.log
- Spark Worker: SPARK_WORKER_LOG_DIR/worker-n/worker.log
The default SPARK_WORKER_LOG_DIR location is /var/log/spark/worker.
- Default log directory for Spark SQL Thrift server
- The default log directory for starting the Spark SQL Thrift server is $HOME/spark-thrift-server.
- Spark Shell and application logs
- Spark Shell and application logs are output to the console.
- SparkR shell log
- The default location for the SparkR shell is $HOME/.sparkR.log
- Log configuration file
- Log configuration files are located in the same directory as spark-env.sh.
Procedure
- Configure logging options, such as log levels, in the following files:
- If you want to enable rolling logging for Spark executors, add the following options to spark-daemon-defaults.conf.
Enable rolling logging with 3 log files retained before deletion. The log files are broken up by size with a maximum size of 50,000 bytes.
spark.executor.logs.rolling.maxRetainedFiles 3 spark.executor.logs.rolling.strategy size spark.executor.logs.rolling.maxSize 50000The default location of the Spark configuration files depends on the type of installation:
- Package installations and Installer-Services: /etc/dse/spark/
- Tarball installations and Installer-No Services: installation_location/resources/spark/conf
- Configure a safe communication channel to access the Spark user interface.Note: When user credentials are specified in plain text on the dse command line, like
dse -u username -p password, the credentials are present in the logs of Spark workers when the driver is run in cluster mode.
The Spark Master, Spark Worker, executor, and driver logs might include sensitive information. Sensitive information includes passwords and digest authentication tokens for Kerberos guidelines mode that are passed in the command line or Spark configuration. DataStax recommends using only safe communication channels like VPN and SSH to access the Spark user interface.Tip: Authentication credentials can be provided in several ways, see Connecting to authentication enabled clusters. | https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/spark/sparkLogging.html | 2021-01-16T02:14:13 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.datastax.com |
This filter makes an image look like an old photo: blurred, with a jagged border, toned with a brown shade, and marked with spots.
If checked, a Gaussian blur will be applied to the image, making it less clear.
When you choose a border size > 0, the Fuzzy Border filter will be applied to the image, adding a white, jagged border.
If checked, the filter reproduces the effect of aging in old, traditional black-and-white photographs, toned with sepia (shades of brown).[16] To achieve this effect, the filter desaturates the image, reduces brightness and contrast, and modifies the color balance.[17]
When you check this option, the image will be marked with spots.
Figure 17.269. Example for the “Mottle” option
A plain white image mottled (without Defocus or Sepia)
If checked, the filter creates a new window containing a copy of the image with the filter applied. The original image remains unchanged.
[16] See Wikipedia [WKPD-SEPIA].
[17] Compare Section 8.2, “Color Balance”. | https://docs.gimp.org/en/script-fu-old-photo.html | 2021-01-16T03:31:57 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.gimp.org |
Breaking: #72334 - Removed utf8 conversion in EXT:recycler¶
See Issue #72334
Description¶
The recycler module previously handled conversions of labels to and from UTF-8 in order to send proper UTF-8 encoded data via JavaScript. The TYPO3 backend is running with UTF-8 since TYPO3 4.5.
The logic and the according functions have been removed as they are not needed anymore.
Impact¶
The following methods have been removed:
RecyclerUtility::getUtf8String() RecyclerUtility::isNotUtf8Charset() RecyclerUtility::getCurrentCharset()
Affected Installations¶
Any TYPO3 instance directly accessing any of the mentioned
RecyclerUtility
methods above via a custom extension. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/8.0/Breaking-72334-RemovedUtf8ConversionInEXTrecycler.html | 2021-01-16T03:44:56 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.typo3.org |
WandB is absolute heaven.
With next to no modification of my code, I'm able to visualize metrics in real time when training my models on multiple systems and can look at their performance relative to each other, and previous runs, anytime on any device. Sweeps also allows me to train my models with different parameter settings on multiple systems in unison via a Bayesian approach to optimally test a range of input arguments and find the best solution.
I can't imagine going back to how I did things before.
WandB has become the critical tool for us in bringing together the work of remote researchers across several continents. On a recent paper, we started using a wandb report as the central hub for collaboration, where everyone could see the most recent experimental results and essentially the latest draft of the paper. The flexibility and clarity of wand reports have enabled us to collaborate in ways that used to only be possible in person.
Since machine learning is a very experimental process, meaning you try something, see if it works and if not you try something else. I plan on building a number of different models to see which one works best. To track the results of each different model, I set up Weights & Biases, a tool for tracking deep learning experiments.
My team enjoys using this helpful tool. ML is always experimental in nature, in industry and in research and wandb is like a diary to record the whole ML development journey from the baseline to the SOTA. And the best thing is that you can share the journey with others.
10/10. It's really great software. Helps me a ton with my work. Also, sweeps handle a lot of my work.
Between architecture, hyper parameters and general problem approach, I always found myself going in circles trying to keep track of results, configs and the code version I was using in an experiment. After looking around for a tool/dashboard to help with experiment management I landed on WandB. With one init line I could track many of the metrics I was interested in. Wandb one of the few tools that makes it so easy that it’s a no brainer to try for yourself.
Helpful. Easy to use. Integrates cleanly.
I went from storing all of my experiment results in JSON files and plotting them with matplotlib and seaborn, to have all the work taken care by W&B. Viewing the results so fast has helped me to identify bugs in my code at an early stage, it's a game changer. Thank you!
I am impressed daily by wandb and the amazing visualization tools. Now I present all my works and experiments through wandb— no more slides. I also, send progress report to my supervisor using wandb. The synchronization with TensorBoard helps me to further use embedding projector. So far, I have explored all the wandb tools, but I can't wait to explore more.
TensorBoard can be a nightmare when training on multiple machines. I have to run TensorBoard locally on a master machine and sync logs between computers to visualize real-time results. With wandb, this is so easy. Wandb is going to change everything for me.
Great interface. Does what I want it to do without much effort.
Effortless python usage, and excellent visualization of my experiments. Minimal hassle, maximum benefit.
Your tool is very easy to set up (well documented!) and works great.
Awesome UI for logging. Cross-platform(library) support. Excellent customer service. Free for an academic use. Easy for beginners etc.
It helps me a lot to collaborate with my co-workers in different countries. They can easily find the running experiments and the logs. It's so cool that we can get rid of the tedious terminal loss.
10/10. It is purely awesome!
I really like the visualizations and it's very easy to integrate with the pipeline.
Great tool! Still a lot to improve but looks like you have an entire team fully working on that!
-Dumb-easy api really helped. Developing with tensorboardx which I had experience before was more trickier than learning this from the scratch.
-Summarizing all the loggings on the cloud helps me keep focused on important things.
-Sweep: plug-n-play version of hyperparam tuning methodologies
It's just amazing, you guys are doing an amazing job over there! I'm telling about it to all my friends, everyone should use wandb!
I just love how we can make reports and share our work. I use this for kaggle mainly and I know how difficult was it for me to compile my results for the team.
Awesome, easy to use and intuitive!
I love you guys. Your system is awesome! Please, keep your service. Thank you guys.
This platform is absolutely amazing, makes life so much easier. Thank you so much to the developers of this platform! | https://docs.wandb.ai/company/testimonials | 2021-01-16T03:06:01 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.wandb.ai |
- NFS VHD Storage
- Software iSCSI Storage
- Hardware HBA Storage
- StorageLink Storage
- ISO Storage
A Software iSCSI SR uses a shared Logical Volume Manager (LVM) on a SAN attached LUN over iSCSI. iSCSI is supported using the open-iSCSI software iSCSI initiator or by using a supported iSCSI Host Bus Adapter (HBA).
Note that dynamic multipathing support is available for iSCSI storage repositories. By default, multipathing uses round robin mode load balancing, so both routes will have active traffic on them during normal operation. You enable and disable storage multipathing in XenCenter via the Multipathing tab on the server's Properties dialog; see Storage Multipathing. | https://docs.citrix.com/fr-fr/xencenter/6-5/xs-xc-storage/xs-xc-storage-pools-add/xs-xc-storage-pools-add-iscsi.html | 2018-07-16T05:07:38 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.citrix.com |
HR Service Management Non-Scoped Welcome to the non-scoped version of HR Service Management. This version is only available for customers that went live with HR in Helsinki or earlier. Beginning with Istanbul, new features and functionality are no longer supported and require migration to the scoped version of HR. You will deploy: HR Service Management Non-Scoped HR Service Management Non-Scoped allows you to standardize the documentation, interaction, and fulfillment of employee inquiries and requests. You can also deploy: HR Service Portal Non-Scoped HR Service Portal Non-Scoped provides a single place for employees to quickly and easily get all the HR services they need. You can also deploy any of the following supporting applications: HR Performance Analytics Non-Scoped measures key performance indicators to track HR performance over time. It requires a separate premium subscription to the Performance Analytics application. HR Workday Integration Non-Scoped synchronizes employee profile information between the non-scoped version of HR Service Management and Workday. | https://docs.servicenow.com/bundle/istanbul-hr-service-delivery/page/product/human-resources-global/concept/global-hr-service-delivery.html | 2018-07-16T04:55:36 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.servicenow.com |
Restore a service application (Search Server 2010)
Si applica a: Search Server 2010
Ultima modifica dell'argomento: 2016-12-01
There are situations in which you might have to restore a specific service application instead of restoring the complete farm. Some service applications — for example, the Business Data Connectivity service application the Search service application — provide data to other services and sites. As a result, users might experience some service interruption until the recovery process is finished.
For information about how to simultaneously restore all the service applications in a farm, see Restore a farm (Search Server 2010).
Importante
You cannot back up from one version of Microsoft Search Server and restore to another version of Search Server.
Procedures in this topic:
Use Windows PowerShell to restore a service application
Use Central Administration to restore a service application
Use SQL Server tools to restore a service application
Use Windows PowerShell to restore a service application
You can use Windows PowerShell to restore a service application.
To restore a service application by using Windows PowerShell
Verify that you meet the following minimum requirements: vedere Add-SPShellAdmin.
In the SharePoint 2010 Management Shell, at the Windows PowerShell command prompt, type the following command:
Restore-SPFarm -Directory <BackupFolder> -Item <ServiceApplicationName> -RecoveryMethod <Option> -BackupId <GUID> -Verbose
Where:
<BackupFolder> is the path of the backup that you want to use.
<ServiceApplicationName> is the name of the service application that you want to restore. To display the names of service applications, type the following command:
Backup-SPFarm -ShowTree.
<Option> is one of the following:
Overwrite, to restore a service application to the same farm.
New, to restore to a different farm such as a recovery farm.
<GUID> is the identity of the specific backup that you want to use. If you do not use the BackupId parameter, the most recent backup is used.
Nota
If you are not logged on as the Farm account, you are prompted for the Farm account’s credentials.
To view the progress of the operation, use the Verbose parameter.
You cannot restore a service application from a configuration-only backup. operation failed. At line: <line> char:<column>. + Recover-SPFarm <<<< <Error Message>
If there are errors or warnings, or if the operation does not finish successfully, review the Sprestore.log file in the backup folder.
Use Central Administration to restore a service application
Use the following procedure to restore a service application by using the SharePoint Central Administration Web site.
To restore a service application by using Central Administration
Verify that the user account performing this procedure is a member of the Farm Administrators.
Nota service application, and then click Next.
On the Restore from Backup — Step 3 of 3: Select Restore Options page, in the Restore Component section, make sure that Farm\<Service application> appears in the Restore the following content list.
In the Restore Only Configuration Settings section, make sure that the Restore content and configuration settings option is selected.
In the Restore Options section, select the Type of restore option. Use the Same configuration setting unless you are migrating the service application. If you select this option, a dialog box will appear that asks you to confirm the operation. Click OK.
Nota
If the Restore Only Configuration Settings section does not appear, the backup that you selected is a configuration-only backup. You must select another backup..
Use SQL Server tools to restore a service application
You cannot restore the complete service application by using SQL Server tools. However, you can use SQL Server tools to restore the databases that are associated with the service application. To restore the complete service application, use either Windows PowerShell or Central Administration.
To restore a service application by using SQL Server tools
Verify that the user account, and then click Restore.
In the Restore Database dialog box, select the kind of recovery that you want to perform from the Restore type list.
For more information about which recovery type to use, see Overview of Recovery Models () in SQL Server 2005 Books Online.
In the Restore component area, click Database.
Either use the default name provided or specify a name for the recovery set in the Name text box.
Specify the expiration date for the recovery set. The date determines how long, or when, the recovery set can be overwritten by any later recoveries that have the same name. By default, the recovery set is set to never expire (0 days).
In the Destination area, specify where you want to store the recovery.
Click OK to restore the database.
Repeat steps 2-10 for each database that is associated with the service application. | https://docs.microsoft.com/it-it/previous-versions/office/search-server-2010/ff428105(v=office.14) | 2018-07-16T05:17:05 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.microsoft.com |
Reset Linux User Password activity The Reset Linux User Password activity resets the password for a given user on a Linux computer. This activity requires that the user executing the command be able to run the chpasswd command and, if expiring the password immediately, to run chage with sudo privileges. designer, which gives workflow administrators the ability to store input and output variables in the databus. Input variables Table 1. Reset Linux User Password input variables Variable Description hostname IP address of the target Linux machine. user Name of the user whose password is being reset. password New password set for this user. The password is a workflow variable that is encrypted either as a password2 field or by calling the encryption method of a Packages.com.glide.util.Encrypter object. force_change Indicates if this password is temporary and to force the named user to change the password at login. Output variables Table 2. Reset Linux User Password output variables Variable Description return_code Indicates whether or not the user password reset action was successful. error_message Describes any error that occurred during password reset. If no error occurred, this value is null. Conditions Table 3. Reset Linux User Password conditions Variable Description Success Activity successfully changed specified user's password Failure Activity failed to change specified user's password. | https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/administer/orchestration-activities/reference/r_ResetLinuxUserPasswordActivity.html | 2018-07-16T05:04:24 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.servicenow.com |
Recently Viewed Topics
Configure Nessus
Begin Browser Portion of the Nessus Setup
On the Welcome to Nessus page, select the link at the end of the Please connect via SSL statement. You will be redirected and you will continue with the remaining installation steps.
Caution: When accessing Nessus via a web browser, you will encounter a message related to a security certificate issue: a connection privacy problem, an untrusted site, an unsecure connection, or similar security related message. This is expected and normal behavior; Nessus provides a self-signed SSL certificate.
Refer to the Security Warnings section for steps necessary to bypass the SSL warnings.
- Accept, then disable privacy settings.
- On the Welcome to Nessus page, select the Continue button.
Create Nessus System Administrator Account
On the Initial Account Setup page, in the Username field, type the username that will be used for this Nessus System Administrator’s account.
Note: After setup, you can create additional Nessus System Administrator accounts.
- Next, in the Password field, type the password that will be used for this Nessus System Administrator’s account.
- In the Confirm Password field, re-enter the Nessus System Administrator account’s password.
- Finally, select the Continue button.
Select Nessus Registration | https://docs.tenable.com/nessus/6_7/Content/ConfigureNessus.htm | 2018-07-16T05:02:13 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.tenable.com |
New in version 2.2.
The below requirements are needed on the host that executes this module.
Note
- name: create a tag digital_ocean_tag: name: production state: present - name: tag a resource; creating the tag if it does not exist digital_ocean_tag: name: "{{ item }}" resource_id: "73333005" state: present with_items: - staging - dbserver - name: untag a resource digital_ocean_tag: name: staging resource_id: "73333005" state: absent # Deleting a tag also untags all the resources that have previously been # tagged with it - name: remove a tag digital_ocean_tag: name: dbserver state: absent
Common return values are documented here, the following are the fields unique to this module: | https://docs.ansible.com/ansible/latest/modules/digital_ocean_tag_module.html | 2018-07-16T04:47:36 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.ansible.com |
Incident.Incident promotion UI actionsUI actions add links in the Incident form context menu to promote incidents to problems, changes, or requests. Administrators can customize incident promotion behavior. | https://docs.servicenow.com/bundle/istanbul-it-service-management/page/product/incident-management/concept/incident-configuration.html | 2018-07-16T04:49:22 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.servicenow.com |
Edge Encryption components Edge. Proxy application When going through the Edge Encryption proxy server, the Edge Encryption plugin allows you to specify which fields, patterns, and attachments should be encrypted. You can also manage encryption rules to encrypt specific requests and schedule mass encryption jobs. Proxy server The Edge Encryption proxy server uses encryption rules to identify in an HTTP request what, if anything, needs to be encrypted and encrypts it before forwarding the request to the instance. For decryption, the Edge Encryption proxy server looks at the HTTP responses for any encrypted data and decrypts it before sending the response back to the client. In order for this to happen, all HTTP requests and responses must go through the Edge Encryption proxy server. This includes any requests originating from a browser, as well as any SOAP or REST requests. Proxy database If using order preserving encryption or encryption patterns, your proxy servers rely on a MySQL database located in your network. All proxy servers in your network must use the same database. The proxy database contains these tables.Table 1. Proxy database tables Name Description db_id Unique database ID edge_token_map Encryption pattern data token_map Order-preserving encryption data Backing up your proxy database Because encryption patterns rely on tokenization, clear text values are stored in your proxy database. If the database is lost, clear text values cannot be restored. It is critical that you maintain regular backups. To avoid data loss, back up your proxy database according to ServiceNow recommendations. Back up your database every 24 hours. Retain MySQL database binary log files for at least two days. After a backup has been restored, use the binary log to regenerate any data lost since the most recent backup. Refer to MySQL database backup best practices for your database version. | https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/administer/edge-encryption/concept/c_EdgeEncryptionProxy.html | 2018-07-16T04:31:54 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.servicenow.com |
About Drawing Tools
T-SBFND-008-001
Vector Drawing
Bitmap Drawing
Drawing Tools.
NOTE: By default, the Tools toolbar is displayed vertically on the left side of the interface. For convenience, you can display the Tools toolbar horizontally—see Tools Toolbar. | https://docs.toonboom.com/help/storyboard-pro-5-5/storyboard/drawing/about-drawing-tool.html | 2018-07-16T04:19:29 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.toonboom.com |
Azure AD Connect sync: Directory extensions
You can use directory extensions to extend the schema in Azure Active Directory (Azure AD) with your own attributes from on-premises Active Directory. This feature enables you to build LOB apps by consuming attributes that you continue to manage on-premises. These attributes can be consumed through Azure AD Graph API directory extensions or Microsoft Graph. You can see the available attributes by using Azure AD Graph Explorer and Microsoft Graph Explorer, respectively.
At present, no Office 365 workload consumes these attributes.
You configure which additional attributes you want to synchronize in the custom settings path in the installation wizard.
The installation shows the following attributes, which are valid candidates:
- User and Group object types
- Single-valued attributes: String, Boolean, Integer, Binary
- Multi-valued attributes: String, Binary
Note
Azure AD Connect supports synchronizing multi-valued Active Directory attributes to Azure AD as multi-valued directory extensions. But no features in Azure AD currently support the use of multi-valued directory extensions.
The list of attributes is read from the schema cache that's created during installation of Azure AD Connect. If you have extended the Active Directory schema with additional attributes, you must refresh the schema before these new attributes are visible.
An object in Azure AD can have up to 100 attributes for directory extensions. The maximum length is 250 characters. If an attribute value is longer, the sync engine truncates it.
During installation of Azure AD Connect, an application is registered where these attributes are available. You can see this application in the Azure portal.
The attributes are prefixed with the extension _{AppClientId}_. AppClientId has the same value for all attributes in your Azure AD tenant.
These attributes are now available through the Azure AD Graph API. You can query them by using Azure AD Graph Explorer.
Or you can query the attributes through the Microsoft Graph API, by using Microsoft Graph Explorer.
Note
You need to ask for the attributes to be returned. Explicitly select the attributes like this:.
For more information, see Microsoft Graph: Use query parameters.
Next steps
Learn more about the Azure AD Connect sync configuration.
Learn more about Integrating your on-premises identities with Azure Active Directory. | https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnectsync-feature-directory-extensions | 2018-07-16T05:01:14 | CC-MAIN-2018-30 | 1531676589179.32 | [array(['media/active-directory-aadconnectsync-feature-directory-extensions/extension2.png',
'Schema extension wizard'], dtype=object)
array(['media/active-directory-aadconnectsync-feature-directory-extensions/extension3new.png',
'Schema extension app'], dtype=object)
array(['media/active-directory-aadconnectsync-feature-directory-extensions/extension4.png',
'Azure AD Graph Explorer'], dtype=object) ] | docs.microsoft.com |
Documentation
Ucommerce includes full API reference documentation and lots of helpful articles to help you build your e-commerce site as effortlessly as possible..).
With the OOTB configuration that Ucommerce ships with, anonymous user is not allowed access and you will therefore get this screen.
You can log in by using your server authentication here. However, you can also change the anonymousUserAccessMode or use an API key.
You can change the OOTB setting for RavenDB by creating a new component like the one shown below.
<configuration> <components> <component id="SearchSessionProvider" service="UCommerce.Search.RavenDB.IRavenDbStoreProvider, UCommerce" type="UCommerce.Search.RavenDB.RavenDbStoreProvider, UCommerce"> <parameters> <anonymousUserAccessMode>None</anonymousUserAccessMode> <ravenDatabasePath>~/App_Data/RavenDatabases/Ucommerce</ravenDatabasePath> </parameters> </component> </components> </configuration>
You can read more about component in this article.
The anonymousUserAccessMode paramenter sets the level of access that you want the studio to run with.
The ravenDatabasePath is the place where the document store will be stored on you server.. | https://docs.ucommerce.net/ucommerce/v7.13/how-to/access-ravendb-studio.html | 2018-07-16T04:32:55 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.ucommerce.net |
Puppet Enterprise User's Guide
A newer version is available; see the version menu above for details.
Welcome! This is the user’s guide for Puppet Enterprise 2.7.
- If you are new to Puppet Enterprise, begin with the quick start guide to create a small proof-of-concept deployment and experience the core Puppet Enterprise tools and workflows. This guided walkthrough will take approximately 30 minutes.
- To install Puppet Enterprise, see the following pages:
- To see what’s new since the last release, see New Features.
- The Deployment Guide has a ton of information to help you set up and deploy Puppet Enterprise in accordance with the best practices and methods used by Puppet Labs’ professional services engineers.
Otherwise, use the navigation to the left to move between this guide’s sections and chapters. | https://docs.puppet.com/pe/2.7/ | 2018-07-16T04:30:18 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.puppet.com |
You can install multiple Mirage servers on the Mirage Management system. When the server is installed, it registers itself with the Mirage Management server and appears in the servers list.
About this task
See the VMware Mirage Installation Guide.
Procedure
- Double-click the Mirage.server.x64.buildnumber.msi file.
The server installation starts.
- Repeat the process for each server to install on the Mirage Management system. | https://docs.vmware.com/en/VMware-Mirage/5.9/com.vmware.mirage.admin/GUID-CBA8AFF1-5DC4-402E-9F93-0976408E7966.html | 2018-07-16T05:07:24 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.vmware.com |
Documentation
Ucommerce includes full API reference documentation and lots of helpful articles to help you build your e-commerce site as effortlessly as possible.
When your site is ready to go live and sell a bunch of goodies, you need to active your license whether you're running on a free, pro or enterprise license. To do so you must first apply for the license through our website.
Please note that if you have a staging site, and you want to activate it, you can follow this guide to get rid of the developer banner without activating the license:
Staging sites without the developer banner
To active your license you need to:
When you do the first request the license will be evaluated. Licenses can be activated up to three times on different servers, but should only be used for the site that the license is bought for.. | https://docs.ucommerce.net/ucommerce/v5/how-to/active-a-license.html | 2018-07-16T04:40:17 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.ucommerce.net |
Troubleshooting the Installation
This page covers various troubleshooting topics, such as help for troubleshooting a problematic installation.
Unable to Register the Console Agent
When running Mule ESB server and the console in a Tomcat environment you may have a problem registering the Mule server instance with the console agent. This problem occurs when you try to register the server through the console screen. After entering the server name and the URL for the Mule agent, the console displays a message indicating that the console could not register the Mule server you specified, and that the server host system did not send a valid HTTP response. This problem usually occurs if you had previously unregistered the same server instance, but for some reason the unregistration process did not complete properly. There may be other reasons why the Mule server instance may internally be still registered.
When this occurs, you need to do the following to fix the problem:
Delete the
truststore.jksfile from the
<MULE_HOME>/.mule/.agentdirectory. It is not necessary to stop the Mule server to delete this file.
Unable to Register Server
Getting this exception when trying to register Mule Server using MMC
Could not register server <serverName>: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Solution: Try deleting
truststore.jks from the
<MULE_HOME>/.mule/.agent folder, or delete the
.mule/.agent folder completely. It is not necessary to stop the Mule server to delete this file.
Using a Custom Agent Configuration
You may want to configure a different default port for agent communication, or a different server ID before running Mule.
You can change the agent configuration from that of the default URL, which is displayed when registering a new server instance. The agent configuration determines the bind port for the server instance. You may wish to change the agent URL if you want to start multiple instances of Mule ESB within the same box and connect the console to these different instances, or if you want to connect to remote server instances.
Unless a port is specified, the console will look in the 7777-7877 port range and bind to the first free port by default. When you start Mule from a command line, you can change the port to which the server binds. You specify the new port as a switch or option in the command used to start Mule, as follows:
For example:
You may also specify a custom port range, as follows:
This will force the agent to bind on the first available port starting from 7783 until 7883.
You can also specify these arguments in the
wrapper.conf file. For organizations with strict policies on available ports, this option should be used.
In addition, when you change the agent bind port to accommodate multiple Mule instances, you also must start Mule from the
bin directory that corresponds to the particular Mule instance. For example, you might run a second instance of Mule as follows, where this second instance is installed at
/opt/second_mule:
Starting Multiple Mule Agent Instances
You can start more than one Mule agent instance, but you have to bind each agent to its own port and start a separate listener for each agent.
For example, if you start two Mule agent instances, one instance uses the default bind port of 7777, as described in the above section. For the second agent instance, you set up the port to which the agent binds as port 7778. You also must specify a listener class for this second agent instance. Add the following code to the
web.xml file:
Note that the same two properties, the bind port and listener class, are supported for standalone Mule ESB servers and those servers configured via the
web.xml file.
Support for MMC When Used with Tcat
For the management console to work with a Tcat server, you need to modify the value of the variable
PermGenSize from 128 to 256. You should increase the size of
PermGenSize to 256 before you deploy the management console WAR file to Tcat. After modifying
PermGenSize, start Tcat and then deploy the management console WAR file. Both Tcat and the management console should now work together with no problems.
If after changing
PermGenSize and starting Tcat, you get an
OutOfMemoryError message when deploying the console WAR file, then you need to take the following steps. These steps walk you through registering the Tcat local agent, deploying the management console WAR file, logging into the console and registering the console agent.
Install Tcat version 6.2.1 or greater.
Edit the batch file for starting Tcat,
catalina.bat. You want to move the line referencing
JAVA_OPTS at the top of the file to line 126 in the file. Then, change the setting for `PermGenSize`within that line from 126m to 256m. When you’re done, line 126 should look as follows:
Start Tcat.
Log into Tcat and register the local Tcat agent. Use this URL for the agent (eg.).
Create a new package that contains the mmc war file and the Local Tcat Agent.
Click the Save and Deploy buttons.
Click the Local Tcat Agent Servers link.
Click the Applications tab.
Click the Go To link.
Log into the Mule management console and start a local Mule ESB instance on which the console agent has been deployed.
Click the New Server button in the console and register the console agent using its URL (eg.).
Configuring a Custom Folder for mmc-data
To specify a new folder for
mmc-data, use the following parameter in the Mule startup command:
For example: | https://docs.mulesoft.com/mule-management-console/v/3.7/troubleshooting-tips | 2018-07-16T04:55:01 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.mulesoft.com |
Difference between revisions of "Editing a template with Template Manager"
From Joomla! Documentation
Revision as of 10:44, 29 November 2013
To edit a template's files with the Template Manager you must first access the Template manager.
Contents
Access the Template>
Two methods to Access the Template Manager Customisation Feature
Since Joomla! 3.2, there are two methods available for accessing the Template Manager: Cutomise Template. The Cutomise Template interface allows for editing the actual code found in the template files, creating layout overrides and template file manipulation.
One-Click or Switch to Template View
File:30-Template-manager-template-templates-view.png
Remember: Styles refers to changing the available parameters of a template, such as color, font-color, logo, etc. These are dependent on the parameters a template maker made available and are a convenience for quick changes.
To access the Template Cutomise feature:
- Click the template name in the column Template
- Styles will be highlighted, click on Templates which will turn the view to Template Manager:Templates.
Further Reading
- See How to use the Template Manager for instructions on how to use the Cutomise Template features. | https://docs.joomla.org/index.php?title=J3.2:Editing_a_template_with_Template_Manager&diff=105862&oldid=105860 | 2015-06-30T05:52:53 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Difference between revisions of "JMail::setSubject"::setSubject
Description
Set the email subject.
Description:JMail::setSubject [Edit Descripton]
public function setSubject ($subject)
- Returns Returns this object for chaining.
- Defined on line 123 of libraries/joomla/mail/mail.php
- Since
See also
JMail::setSubject source code on BitBucket
Class JMail
Subpackage Mail
- Other versions of JMail::setSubject
SeeAlso:JMail::setSubject [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=JMail::setSubject/11.1&diff=57319&oldid=47876 | 2015-06-30T05:13:42 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Difference between revisions of "JED history"
From Joomla! Documentation
Revision as of 00:30, 10 April 2012
Contents
- 1 Special dates
- 2 JED Team History
- 3 Team Leaders
JED History: Joomla! Extensions Directory facts and dates[1]. (Dates are in DD-MM-YYYY)
Special dates
JED started on basis of this list at Joomla! forum Ken actually started the list back when we were Mambo. It was then that a few of us began discussion an actual Directory as something that we should be working on.
A Directory is born
- Directory Conception: 15-10-2005
- Directory Born: 05-03-2006
- Birth certificate: It's Time to Celebrate the Freedom - Yet Again!
Directory Baptism:
proposal by ot2sen - 12-08-2006
- full name: Joomla! Extensions Directory
- nick: JED
Extension numbers
JED launched with 314 extensions [2],
- 314 extensions - 05-03-2006
- 1000 extensions mark - 16-09-2006
- 2000 extensions mark - 28-08-2007
- 3000 extensions mark - 20-01-2008
- 4000 extensions mark - 17-10-2008
- 5000 extensions mark - 27-05-2010
GPL only
- March 31 2009: the Joomla! Extensions Directory (JED) no longer accept non GPL
- July 1 2009: only GPL extensions are allowed to be listed
Announcement: JED to be GPL Only by July 2009
Extension Versions
- No 1.0 extensions: March 31 2009, JED no longer accepts Joomla! 1.0 compatible-only extensions. However, accepts extensions compatible with Joomla! 1.0 and Joomla! 1.5
- Announcement: JED Will Phase Out Joomla 1.0 Extensions in June 2009
- No 1.5 legacy only for new extensions -
- Open for 1.6 extensions -
- Open for 2.5 extensions - Jan, 24, 2012
- End for 1.6 and 1.7 extensions: April 30, 2012 - The JED and Version Support
- End of 1.5 only extensions: April 1, 2012 - The JED and Version Support
JED Team History
Actual JED Team
Team Profiles:
Manager
- mlipscomb (Matt Lipscomb) - (appointed by CLT 06-08-2010)
Editors:
- Horus_68 (Paulo Izidoro) - Joined: 08-01-2008 /
- astgeorge (Aaron St.George) - Joined: 14-06-2008 /
- LaFrance (Pierre Gazzola) - Joined: 10-01-2010
- Seblod (Sebastian Lapoux) - Joined: ??
- Andy Sharman - Joined: ??
- Herdboy.789 (Mustaq Sheikh) - Joined: ??
- Nathan Gifford - Joined: ??
- Costas Kakousis - Joined: ??
- Daniel Dimitrov - Joined: 22-02-2012
Development
- Dknight (Lee Cher Yeong) - Joined: 15-10-2005
Joined in 2012
- Daniel Dimitrov - Joined: 22-02-2012
Joined in 2011
- natselection (Nathan Gifford) - Joined: ??
- Seblod (Sebastian Lapoux) - Joined: ??
- Andy Sharman - Joined: ??
- Herdboy.789 (Mustaq Sheikh) - Joined: ??
- Eduardo Lomeli - Joined: ??
- Costas.Kakousis - Joined: ??
- hooduku - Joined: ??
- Sudhi - Joined: ??
Joined in 2010
- LaFrance (Pierre Gazzola) - Joined: 10-01-2010
- ot2sen (Ole Bang Ottosen) - Joined (appointed by CLT): 09-03-2010 / Left 25-03-2010
- Danayel (Daniel Chapman) - Joined: 03-08-2010 / Left 20-09-2010
- jeffchannell (Jeff Channels) - Joined: 17-08-2010 / Left 11-09-2010
- mlipscomb (Matt Lipscomb) - (appointed by CLT 06-08-2010)
- porwig (Paul Orwig) - (appointed by CLT 06-08-2010) Left: 31-12-2011)
Joined in 2009
- astgeorge (Aaron St.George) - Joined: before 12-02-2009
- tj.baker (TJ Baker) - Joined: 31-12-2009 / Left: 02-08-2010
- nonumber (Peter Van Westen) - Joined: 15-09-2009 / Left: 3-09-2010
Joined in 2008
- Horus_68 (Paulo Izidoro) - Joined: 08-01-2008 /
- timothy.stiffler (Timothy Stiffler) - Joined: 21-01-2008 / Left: 16-04-2008
- astgeorge (Aaron St.George) - Joined: after 14-06-2008 /
- Toni Marie - Joined: 10-03-2008 / Left: 06-08-2010
- alledia (Steve Burge) - Joined: 10-03-2008 / Left: 20-04-2010
- mindiam (Ben Rogers) - Joined: 29-03-2008 / Left: 20-08-2008
- Geoff (Geoff Cheung)- Joined: 09-09-2008 / Left: ??
- normdouglas (Norm Douglas) - Joined: 26-12-2009 / Left: 09-09-2010
- vdrover (Victor Drover) - Joined: 26-12-2008 / Left: 06-04-2012
Joined in 2007
- doctorj (Josh) - Joined: 02-07-2007 / Left: 27-08-2007
- ircmaxell (Anthony Ferrara) - Joined: 13-12-2007 / Left: 29-05-2008 (?)
Joined in 2006
- Tonie (Tonie de Wilde) - Joined: 05-03-2006 / Left: 29-04-2008
- Vimes - Joined: 08-08-2006 / Left: 15-06-2007
- LorenzoG (Lorenzo Garcia) - Joined: 16-10-2006 / Left: 12-04-2010
Pre-launch team - 2005
Since 15-10-2005 until the directory launch on 05-03-2006 [3]
- gsbe (Graham Spice) - Joined: 15-10-2005 / Left: 03-05-2006
- ot2sen (Ole Bang Ottosen) - Joined: 15-10-2005 / Left: 10-03-2008
- dknight (Lee Cher Yeong) - Joined: 15-10-2005
- rhuk (Andy Miller) - Joined: 15-10-2005 / Left: 05-03-2006
Pre-launch helpers
original site template: rhuk
code contributes: Andrew Eddie
- manuman (Shayne Bartlett) - Joined: 15-10-2005 / Left: 05-03-2006
- DeanMarshall (Dean Marshall) - Joined: 15-10-2005 / Left: 05-03-2006
- brad (Brad Baker) - Joined: 15-10-2005 / Left: 05-03-2006
- kenmcd (Ken McDonald) was the original 'collector' of extensions
Team Leaders
Users in leadership roles at JED team
- Tonie (From 05-03-2006 to 29-04-2008)
- LorenzoG (From 29-04-2008 to 12-04-2010)
- Toni Marie (From 12-04-2010 to 06-08-2010)
- Co-managers (CLT appointment): mlipscomb (From 06-08-2010 to 31-12-2011) and porwig (From 06-08-2010 to 31-12-2011)
- Manager: mlipscomb (From 31-12-2011 to actual date )
Referências
- ↑ The dates presented in this document are confirmed by different documents, public and private. The persons involved also contribute to revise this document. Our thanks to all
- ↑ At the time with an estimated value of over $139,500 for the 310 free “310 extensions x 15 hours development per extension x $30/hour = $139,500. A ridiculously low estimate of the value of the extensions is well over $100,000 - for FREE”:
- ↑ The 1st JED TEAM started his functions on 05 Mar 2006 (the directory opening day) "The Directory team which consists of Andy [rhuk], Graham [Graham Spice], ot2sen [Ole Bang Ottosen] and Lee [dknight] | https://docs.joomla.org/index.php?title=JED_history&diff=66102&oldid=30974 | 2015-06-30T05:41:36 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Changes related to "How to find your absolute path"
← How to find your absolute path
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20130703104030&target=How_to_find_your_absolute_path | 2015-06-30T06:41:13 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Help Center
Local Navigation
- Quick Help
- Shortcuts
- Phone
- Voice commands
- Messages
- Files
- Media
- Ring tones, sounds, and alerts
- Browser
- message to all meeting participants
- Conference call meetings
- Synchronizing calendar
- Calendar options
- Calendar shortcuts
- Troubleshooting:
Show tasks in the calendar
Next topic: Calendar shortcuts
Previous topic: Change how long your device stores calendar entries
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/22178/View_tasks_in_a_calendar_60_1137757_11.jsp | 2015-06-30T05:21:47 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.blackberry.com |
Static Files¶
In Pylons, the application's "public" directory is configured as a static overlay on "/", so that URL "/images/logo.png" goes to "pylonsapp/public/images/logo.png". This is done using a middleware. Pyramid does not have an exact equivalent but it does have a way to serve static files, and add-on packages provide additional ways.
Static view¶
This is Pyramid's default way to serve static files. As you'll remember from the main function in an earlier chapter:
config.add_static_view('static', 'static', cache_max_age=3600)
This tells Pyramid to publish the directory "pyramidapp/static" under URL "/static", so that URL "/static/images/logo.png" goes to "pyramidapp/static/images/logo.png".
It's implemented using traversal, which we haven't talked about much in this Guide. Traversal-based views have a view name which serves as a URL prefix or component. The first argument is the view name ("static"), which implies it matches URL "/static". The second argument is the asset spec for the directory (relative to the application's Python package). The keyword arg is an option which sets the HTTP expire headers to 3600 seconds (1 hour) in the future. There are other keyword args for permissions and such.
Pyramid's static view has the following advantages over Pylons:
- It encourages all static files to go under a single URL prefix, so they're not scattered around the URL space.
- Methods to generate URLs are provided:
request.static_url()and
request.static_path().
- The deployment configuration (INI file) can override the base URL ("/static") to serve files from a separate static media server ("").
- The deployment configuration can also override items in the static directory, pointing to other subdirectories or files instead. This is called "overriding assets" in the Pyramid manual.
It has the following disadvantages compared to Pyramid:
- Static URLs have the prefix "/static".
- It can't serve top-level file URLs such as "/robots.txt" and "/favicon.ico".
You can serve any URL directory with a static view, so you could have a separate view for each URL directory like this:
config.add_images_view('images', 'static/images') config.add_stylesheets_view('stylesheets', 'static/stylesheets') config.add_javascript_view('javascript', 'static/javascript')
This configures URL "/images" pointing to directory "pyramidapp/static/images", etc.
If you're using Pyramid's authorization system, you can also make a separate view for files that require a certain permission:
config.add_static_view("private", "private", permission="admin")
Generating static URLs¶
You can generate a URL to a static file like this:
href="${request.static_url('static/images/logo.png')}
Top-level file URLs¶
So how do you get around the problem of top-level file URLs? You can register normal views for them, as shown later below. For "/favicon.ico", you can replace it with an HTTP header in your site template:
<link rel="shortcut icon" href="${request.static_url('pyramidapp:static/favicon.ico')}" />
The standard Pyramid scaffolds actually do this. For "/robots.txt", you may decide that this actually belongs to the webserver rather than the application, and so you might have Apache serve it directly like this:
Alias /robots.txt /var/www/static/norobots.txt
You can of course have Apache serve your static directory too:
Alias /static /PATH-TO/PyramidApp/pyramidapp/static
But if you're using mod_proxy you'll have to disable proxying that directory early in the virtualhost configuration:
Alias ProxyPass /static !
If you're using RewriteRule in combination with other path directives like Alias, read the RewriteRule flags documentation (especially "PT" and "F") to ensure the directives cooperate as expected.
External static media server¶
To make your configuration flexible for a static media server:
# In INI file static_assets = "static" # -OR- static_assets = ""
Main function:
config.add_static_view(settings["static_assets"], "zzz:static")
Now it will generate "" or "" depending on the configuration.
Static route¶
This strategy is available in Akhet. It overlays the static directory on top of "/" like Pylons does, so you don't have to change your URLs or worry about top-level file URLs.
This registes a static route matching all URLs, and a view to serve it. Actually, the route will have a predicate that checks whether the file exists, and if it doesn't, the route won't match the URL. Still, it's good practice to register the static route after your other routes.
If you have another catchall route before it that might match some static URLs, you'll have to exclude those URLs from the route as in this example:
config.add_route("main", "/{action}", path_info=r"/(?!favicon\.ico|robots\.txt|w3c)") config.add_static_route('zzz', 'static', cache_max_age=3600)
The static route implementation does not generate URLs to static files, so you'll have to do that on your own. Pylons never did it very effectively either.
Other ways to serve top-level file URLs¶
If you're using the static view and still need to serve top-level file URLs, there are several ways to do it.
A manual file view¶
This is documented in the Pyramid manual in the Static Assets chapter.
Or if you're really curious how to configure the view for traversal without a route:
@view_config(name="favicon.ico")
pyramid_assetviews¶
"pyramid_assetviews" is a third-party package for top-level file URLs.
Of course, if you have the files in the static directory they'll still be visible as "/static/robots.txt" as well as "/robots.txt". If that bothers you, make another directory outside the static directory for them. | http://docs.pylonsproject.org/projects/pyramid-cookbook/en/latest/pylons/static.html | 2015-06-30T05:17:14 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.pylonsproject.org |
Changes related to "J2.5:Logging in or out of the Administrator back-end"
← J2.5:Logging in or out of the Administrator back-end
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20140513074042&target=J2.5%3ALogging_in_or_out_of_the_Administrator_back-end | 2015-06-30T06:00:52 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
If your cluster does not have access to the Internet, or you are creating a large cluster and you want to conserve bandwidth, you need to provide access to the bits using an alternative method.
Set up the local mirror repositories as needed for HDP and HDP Utils..
Be sure to remember the local mirror repository Base URLs for each operating system from the Deploying HDP In Production Data Centers with Firewalls steps. You will use these Base URLs during the cluster install.
For example, if your system includes hosts running CentOS 6, to point to the HDP 2.0.6 repositories, your local repository Base URL would look something like this:
http://{your.hosted.local.repository}/HDP-2.0.6/repos/centos6
Configure which JDK to use and how the JDK will be downloaded and installed.
If you have not already installed the JDK on all hosts, and plan to use Oracle JDK 1.6,
download jdk-6u31-linux-x64.bin to
/var/lib/ambari-server/resources.
If you plan to use a JDK other than Oracle JDK 1.6, you must install the JDK on each host in your cluster and use the -j flag when running
ambari-server setup. See JDK Requirements for more information on supported JDKs.
If you have already installed the JDK on all your hosts, you must use the -j flag when running
ambari-server setup | http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_using_Ambari_book/content/ambari-chap1-6.html | 2015-06-30T05:15:11 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.hortonworks.com |
Exact SymPy expressions can be converted to floating-point approximations (decimal numbers) using either the .evalf() method or the N() function. N(expr, <args>) is equivalent to sympify(expr).evalf(<args>).
>>> from sympy import * >>> N(sqrt(2)*pi) 4.44288293815837 >>> (sqrt(2)*pi).evalf() 4.44288293815837
By default, numerical evaluation is performed to an accuracy of 15 decimal digits. You can optionally pass a desired accuracy (which should be a positive integer) as an argument to evalf or N:
>>> N(sqrt(2)*pi, 5) 4.4429 >>> N(sqrt(2)*pi, 50) 4.4428829381583662470158809900606936986146216893757
Complex numbers are supported:
>>> N(1/(pi + I), 20) 0.28902548222223624241 - 0.091999668350375232456*I
If the expression contains symbols or for some other reason cannot be evaluated numerically, calling .evalf() or N() returns the original expression, or in some cases a partially evaluated expression. For example, when the expression is a polynomial in expanded form, the coefficients are evaluated:
>>> x = Symbol('x') >>> (pi*x**2 + x/3).evalf() 3.14159265358979*x**2 + 0.333333333333333*x
You can also use the standard Python functions float(), complex() to convert SymPy expressions to regular Python numbers:
>>> float(pi) 3.1415926535... >>> complex(pi+E*I) (3.1415926535...+2.7182818284...j)
If these functions are used, failure to evaluate the expression to an explicit number (for example if the expression contains symbols) will raise an exception.
There is essentially no upper precision limit. The following command, for example, computes the first 100,000 digits of π/e:
>>> N(pi/E, 100000) ...
This shows digits 999,951 through 1,000,000 of pi:
>>> str(N(pi, 10**6))[-50:] '95678796130331164628399634646042209010610577945815'
High-precision calculations can be slow. It is recommended (but entirely optional) to install gmpy (), which will significantly speed up computations such as the one above.
Floating-point numbers in SymPy are instances of the class Float. A Float can be created with a custom precision as second argument:
>>> Float(0.1) 0.100000000000000 >>> Float(0.1, 10) 0.1000000000 >>> Float(0.125, 30) 0.125000000000000000000000000000 >>> Float(0.1, 30) 0.100000000000000005551115123126
As the last example shows, some Python floats are only accurate to about 15 digits as inputs, while others (those that have a denominator that is a power of 2, like .125 = 1/4) are exact. To create a Float from a high-precision decimal number, it is better to pass a string, Rational, or evalf a Rational:
>>> Float('0.1', 30) 0.100000000000000000000000000000 >>> Float(Rational(1, 10), 30) 0.100000000000000000000000000000 >>> Rational(1, 10).evalf(30) 0.100000000000000000000000000000
The precision of a number determines 1) the precision to use when performing arithmetic with the number, and 2) the number of digits to display when printing the number. When two numbers with different precision are used together in an arithmetic operation, the higher of the precisions is used for the result. The product of 0.1 +/- 0.001 and 3.1415 +/- 0.0001 has an uncertainty of about 0.003 and yet 5 digits of precision are shown.
>>> Float(0.1, 3)*Float(3.1415, 5) 0.31417
So the displayed precision should not be used as a model of error propagation or significance arithmetic; rather, this scheme is employed to ensure stability of numerical algorithms.
N and evalf can be used to change the precision of existing floating-point numbers:
>>> N(3.5) 3.50000000000000 >>> N(3.5, 5) 3.5000 >>> N(3.5, 30) 3.50000000000000000000000000000
When the input to N or evalf is a complicated expression, numerical error propagation becomes a concern. As an example, consider the 100’th Fibonacci number and the excellent (but not exact) approximation \(\varphi^{100} / \sqrt{5}\) where \(\varphi\) is the golden ratio. With ordinary floating-point arithmetic, subtracting these numbers from each other erroneously results in a complete cancellation:
>>> a, b = GoldenRatio**1000/sqrt(5), fibonacci(1000) >>> float(a) 4.34665576869e+208 >>> float(b) 4.34665576869e+208 >>> float(a) - float(b) 0.0
N and evalf keep track of errors and automatically increase the precision used internally in order to obtain a correct result:
>>> N(fibonacci(100) - GoldenRatio**100/sqrt(5)) -5.64613129282185e-22
Unfortunately, numerical evaluation cannot tell an expression that is exactly zero apart from one that is merely very small. The working precision is therefore capped, by default to around 100 digits. If we try with the 1000’th Fibonacci number, the following happens:
>>> N(fibonacci(1000) - (GoldenRatio)**1000/sqrt(5)) 0.e+85
The lack of digits in the returned number indicates that N failed to achieve full accuracy. The result indicates that the magnitude of the expression is something less than 10^84, but that is not a particularly good answer. To force a higher working precision, the maxn keyword argument can be used:
>>> N(fibonacci(1000) - (GoldenRatio)**1000/sqrt(5), maxn=500) -4.60123853010113e-210
Normally, maxn can be set very high (thousands of digits), but be aware that this may cause significant slowdown in extreme cases. Alternatively, the strict=True option can be set to force an exception instead of silently returning a value with less than the requested accuracy:
>>> N(fibonacci(1000) - (GoldenRatio)**1000/sqrt(5), strict=True) Traceback (most recent call last): ... PrecisionExhausted: Failed to distinguish the expression: -sqrt(5)*GoldenRatio**1000/5 + from zero. Try simplifying the input, using chop=True, or providing a higher maxn for evalf Traceback (most recent call last): ... PrecisionExhausted: Failed to distinguish the expression:
If we add a term so that the Fibonacci approximation becomes exact (the full form of Binet’s formula), we get an expression that is exactly zero, but N does not know this:
>>> f = fibonacci(100) - (GoldenRatio**100 - (GoldenRatio-1)**100)/sqrt(5) >>> N(f) 0.e-104 >>> N(f, maxn=1000) 0.e-1336
In situations where such cancellations are known to occur, the chop options is useful. This basically replaces very small numbers in the real or imaginary portions of a number with exact zeros:
>>> N(f, chop=True) 0 >>> N(3 + I*f, chop=True) 3.00000000000000
In situations where you wish to remove meaningless digits, re-evaluation or the use of the round method are useful:
>>> Float('.1', '')*Float('.12345', '') 0.012297 >>> ans = _ >>> N(ans, 1) 0.01 >>> ans.round(2) 0.01
If you are dealing with a numeric expression that contains no floats, it can be evaluated to arbitrary precision. To round the result relative to a given decimal, the round method is useful:
>>> v = 10*pi + cos(1) >>> N(v) 31.9562288417661 >>> v.round(3) 31.956
Sums (in particular, infinite series) and integrals can be used like regular closed-form expressions, and support arbitrary-precision evaluation:
>>> var('n x') (n, x) >>> Sum(1/n**n, (n, 1, oo)).evalf() 1.29128599706266 >>> Integral(x**(-x), (x, 0, 1)).evalf() 1.29128599706266 >>> Sum(1/n**n, (n, 1, oo)).evalf(50) 1.2912859970626635404072825905956005414986193682745 >>> Integral(x**(-x), (x, 0, 1)).evalf(50) 1.2912859970626635404072825905956005414986193682745 >>> (Integral(exp(-x**2), (x, -oo, oo)) ** 2).evalf(30) 3.14159265358979323846264338328
By default, the tanh-sinh quadrature algorithm is used to evaluate integrals. This algorithm is very efficient and robust for smooth integrands (and even integrals with endpoint singularities), but may struggle with integrals that are highly oscillatory or have mid-interval discontinuities. In many cases, evalf/N will correctly estimate the error. With the following integral, the result is accurate but only good to four digits:
>>> f = abs(sin(x)) >>> Integral(abs(sin(x)), (x, 0, 4)).evalf() 2.346
It is better to split this integral into two pieces:
>>> (Integral(f, (x, 0, pi)) + Integral(f, (x, pi, 4))).evalf() 2.34635637913639
A similar example is the following oscillatory integral:
>>> Integral(sin(x)/x**2, (x, 1, oo)).evalf(maxn=20) 0.5
It can be dealt with much more efficiently by telling evalf or N to use an oscillatory quadrature algorithm:
>>> Integral(sin(x)/x**2, (x, 1, oo)).evalf(quad='osc') 0.504067061906928 >>> Integral(sin(x)/x**2, (x, 1, oo)).evalf(20, quad='osc') 0.50406706190692837199
Oscillatory quadrature requires an integrand containing a factor cos(ax+b) or sin(ax+b). Note that many other oscillatory integrals can be transformed to this form with a change of variables:
>>> init_printing(use_unicode=False, wrap_line=False, no_global=True) >>> intgrl = Integral(sin(1/x), (x, 0, 1)).transform(x, 1/x) >>> intgrl oo / | | sin(x) | ------ dx | 2 | x | / 1 >>> N(intgrl, quad='osc') 0.504067061906928
Infinite series use direct summation if the series converges quickly enough. Otherwise, extrapolation methods (generally the Euler-Maclaurin formula but also Richardson extrapolation) are used to speed up convergence. This allows high-precision evaluation of slowly convergent series:
>>> var('k') k >>> Sum(1/k**2, (k, 1, oo)).evalf() 1.64493406684823 >>> zeta(2).evalf() 1.64493406684823 >>> Sum(1/k-log(1+1/k), (k, 1, oo)).evalf() 0.577215664901533 >>> Sum(1/k-log(1+1/k), (k, 1, oo)).evalf(50) 0.57721566490153286060651209008240243104215933593992 >>> EulerGamma.evalf(50) 0.57721566490153286060651209008240243104215933593992
The Euler-Maclaurin formula is also used for finite series, allowing them to be approximated quickly without evaluating all terms:
>>> Sum(1/k, (k, 10000000, 20000000)).evalf() 0.693147255559946
Note that evalf makes some assumptions that are not always optimal. For fine-tuned control over numerical summation, it might be worthwhile to manually use the method Sum.euler_maclaurin.
Special optimizations are used for rational hypergeometric series (where the term is a product of polynomials, powers, factorials, binomial coefficients and the like). N/evalf sum series of this type very rapidly to high precision. For example, this Ramanujan formula for pi can be summed to 10,000 digits in a fraction of a second with a simple command:
>>> f = factorial >>> n = Symbol('n', integer=True) >>> R = 9801/sqrt(8)/Sum(f(4*n)*(1103+26390*n)/f(n)**4/396**(4*n), ... (n, 0, oo)) >>> N(R, 10000)111745 02841027019385211055596446229489549303819644288109756659334461284756482337867831 ...
The function nsimplify attempts to find a formula that is numerically equal to the given input. This feature can be used to guess an exact formula for an approximate floating-point input, or to guess a simpler formula for a complicated symbolic input. The algorithm used by nsimplify is capable of identifying simple fractions, simple algebraic expressions, linear combinations of given constants, and certain elementary functional transformations of any of the preceding.
Optionally, nsimplify can be passed a list of constants to include (e.g. pi) and a minimum numerical tolerance. Here are some elementary examples:
>>> nsimplify(0.1) 1/10 >>> nsimplify(6.28, [pi], tolerance=0.01) 2*pi >>> nsimplify(pi, tolerance=0.01) 22/7 >>> nsimplify(pi, tolerance=0.001) 355 --- 113 >>> nsimplify(0.33333, tolerance=1e-4) 1/3 >>> nsimplify(2.0**(1/3.), tolerance=0.001) 635 --- 504 >>> nsimplify(2.0**(1/3.), tolerance=0.001, full=True) 3 ___ \/ 2
Here are several more advanced examples:
>>> nsimplify(Float('0.130198866629986772369127970337',30), [pi, E]) 1 ---------- 5*pi ---- + 2*E 7 >>> nsimplify(cos(atan('1/3'))) ____ 3*\/ 10 -------- 10 >>> nsimplify(4/(1+sqrt(5)), [GoldenRatio]) -2 + 2*GoldenRatio >>> nsimplify(2 + exp(2*atan('1/4')*I)) 49 8*I -- + --- 17 17 >>> nsimplify((1/(exp(3*pi*I/5)+1))) ___________ / ___ 1 / \/ 5 1 - - I* / ----- + - 2 \/ 10 4 >>> nsimplify(I**I, [pi]) -pi ---- 2 e >>> n = Symbol('n') >>> nsimplify(Sum(1/n**2, (n, 1, oo)), [pi]) 2 pi --- 6 >>> nsimplify(gamma('1/4')*gamma('3/4'), [pi]) ___ \/ 2 *pi | http://docs.sympy.org/dev/modules/evalf.html | 2015-06-30T05:13:13 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.sympy.org |
The following example should help make things clearer: (its files should be present in the jython Demo dir)
>>> | http://docs.pushtotest.com/jythondocs/jreload.html | 2008-05-16T04:32:55 | crawl-001 | crawl-001-008 | [] | docs.pushtotest.com |
Installation¶
Requirements¶
Server:
- Python 2.7 or later ()
- routes 2.0 or later ()
- webob 1.3 or later ()
- PyYaml 3.0 or later ()
Client:
Note
We recommend using a Linux distribution which has Python 2.7 as its standard Python install (e.g. yum in Centos requires Python 2.6 and a dual Python install can be fairly tricky and buggy). This guide was written based ZTPServer v1.1.0 installed on Fedora 20.
Installation Options¶
Turn-key VM Creation¶
The turn-key VM option leverages Packer to auto generate a VM on your local system. Packer.io automates the creation of the ZTPServer VM. All of the required packages and dependencies are installed and configured. The current Packer configuration allows you to choose between VirtualBox or VMWare as your hypervisor and each can support Fedora 20 or Ubuntu Server 12.04.
VM Specification:
- 7GB Hard Drive
- 2GB RAM
- Hostname ztps.ztps-test.com
- eth0 (NAT) DHCP
- eth1 (hostonly) 172.16.130.10
- Firewalld/UFW disabled
- Users
- root/eosplus
- ztpsadmin/eosplus
- Python 2.7.5 with PIP
- DHCP installed with Option 67 configured (eth1 only)
- BIND DNS server installed with zone ztps-test.com
- wildcard forwarding rule passing all other queries to 8.8.8.8
- SRV RR for im.ztps-test.com
- rsyslog-ng installed; Listening on UDP and TCP (port 514)
- ejabberd (XMPP server) configured for im.ztps-test.com
- XMPP admin user: ztpsadmin/eosplus
- httpd installed and configured for ZTPServer (mod_wsgi)
- ZTPServer installed
- ztpserver-demo repo files pre-loaded
See the Packer VM code and documentation as well as the ZTPServer demo files for the Packer VM.
PyPI Package (pip install)¶
ZTPServer may be installed as a PyPI package.
This option assumes you have a server with Python and pip pre-installed. See installing pip.
Once pip is installed, type:
bash-3.2$ pip install ztpserver
The pip install process will install all dependencies and run the install script, leaving you with a ZTPServer instance ready to configure.
Manual installation¶
Download source:
- Latest Release on GitHub
- Active Stable: (GitHub) (ZIP) (TAR)
- Development: (GitHub) (ZIP) (TAR)
Once the above system requirements are met, you can use the following git command to pull the develop branch into a local directory on the server where you want to install ZTPServer:
bash-3.2$ git clone
Or, you may download the zip or tar archive and expand it.
bash-3.2$ wget bash-3.2$ tar xvf <filename> or bash-3.2$ unzip <filename>
Change in to the ztpserver directory, then checkout the release desired:
bash-3.2$ cd ztpserver bash-3.2$ git checkout v1.1.0
Execute
setup.py to build and then install ZTPServer:
[user@localhost ztpserver]$ sudo python setup.py build running build running build_py ... [root@localhost ztpserver]# sudo python setup.py install running install running build running build_py running install_lib ...
Upgrading¶
Upgrading ZTP Server is based on the method of installation:
PyPI (pip):
sudo pip install --upgrade ztpserver
Manual, Packer-VM, GitHub installs:
cd ztpserver/ sudo ./utils/refresh_ztps -b <branch>
The ztpserver/ directory, above, should be a git repository (where the files were checked out). The
branchidentifier may be any version identifier (1.3.2, 1.1), or an actual branch on github such as
master(released), or
develop(development).
RPM:
sudo rpm -Uvh ztpserver-<version>.rpm
Additional services¶
Note
If using the Turn-key VM Creation, all of the steps, below, will have been completed, please reference the VM documentation.
Allow ZTPServer Connections In Through The Firewall¶
Be sure your host firewall allows incoming connections to ZTPServer. The standalone server runs on port TCP/8080 by default.
Firewalld examples:
- Open TCP/<port> through firewalld
bash-3.2$ firewall-cmd --zone=public --add-port=<port>/tcp [--permanent]
- Stop firewalld
bash-3.2$ systemctl stop firewalld
- Disable firewalld
bash-3.2$ systemctl disable firewalld
Note
If using the Turn-key VM Creation, all the steps from below will be been completed automatically.
Configure the DHCP Service¶
Set up your DHCP infrastructure to server the full path to the ZTPServer bootstrap file via option 67. This can be performed on any DHCP server. Below you can see how you can do that for ISC dhcpd.
Get dhcpd:
- RedHat:
-
bash-3.2$ sudo yum install dhcp
- Ubuntu:
-
bash-3.2$ sudo apt-get install isc-dhcp-server
Add a network (in this case 192.168.100.0/24) for servicing DHCP requests for ZTPServer:
subnet 192.168.100.0 netmask 255.255.255.0 { range 192.168.100.200 192.168.100.205; option routers 192.168.100.1; option domain-name-servers <ipaddr>; option domain-name "<org>"; # Only return the bootfile-name to Arista devices class "Arista" { match if substring(option vendor-class-identifier, 0, 6) = "Arista"; # Interesting bits: # Relay agent IP address # Option-82: Agent Information # Suboption 1: Circuit ID # Ex: 45:74:68:65:72:6e:65:74:31 ==> Ethernet1 option bootfile-name "http://<ztp_hostname_or_ip>:<port>/bootstrap"; } }
Enable and start the dhcpd service¶
RedHat (and derivative Linux implementations)
bash-3.2# sudo /usr/bin/systemctl enable dhcpd.service
bash-3.2# sudo /usr/bin/systemctl start dhcpd.service
Ubuntu (and derivative Linux implementations)
bash-3.2# sudo /usr/sbin/service isc-dhcp-server start
Check that /etc/init/isc-dhcp-server.conf is configured for automatic startup on boot.
Edit the global configuration file located at
/etc/ztpserver/ztpserver.conf (if needed). See the Global configuration file options for more information. | https://ztpserver.readthedocs.io/en/develop/install.html | 2022-06-25T14:55:55 | CC-MAIN-2022-27 | 1656103035636.10 | [] | ztpserver.readthedocs.io |
Ensure AWS EC2 instances aren't automatically made public with a public IP
Error: AWS EC2 instances aren't automatically given public IPs
Bridgecrew Policy ID: BC_AWS_PUBLIC_12
Checkov Check ID: CKV_AWS_88
Severity: HIGH
AWS EC2 instances aren't automatically made public and given public IP addresses
Description
A public IP address is an IPv4 address that is reachable from the Internet. You can use public addresses for communication between your instances and the Internet. Each instance that receives a public IP address is also given an external DNS hostname.
We recommend you control whether your instance receives a public IP address as required.
Fix - Runtime
AWS Console
To change the policy using the AWS Console, follow these steps:
- Log in to the AWS Management Console at.
- Open the Amazon VPC console.
- In the navigation pane, select Subnets.
- Select a subnet, then select Subnet Actions > Modify auto-assign IP settings.
- Select auto-assign public IPv4 address. When selected, requests a public IPv4 address for all instances launched into the selected subnet. Select or clear the setting as required.
- Click Save.
Fix - Buildtime
Terraform
- Resource: aws_instance
- Argument: associate_public_ip_address - (Optional) Associate a public ip address with an instance in a VPC. Boolean value.
resource "aws_instance" "bar" { ... - associate_public_ip_address = true }
CloudFormation
- Resource: AWS::EC2::Instance / AWS::EC2::LaunchTemplate
- Argument: NetworkInterfaces.AssociatePublicIpAddress - (Optional) Associate a public ip address with an instance in a VPC. Boolean value.
Resources: EC2Instance: Type: AWS::EC2::Instance Properties: ... NetworkInterfaces: - ... - AssociatePublicIpAddress: true EC2LaunchTemplate: Type: AWS::EC2::LaunchTemplate Properties: LaunchTemplateData: ... NetworkInterfaces: - ... - AssociatePublicIpAddress: true
Updated 7 months ago
Did this page help you? | https://docs.bridgecrew.io/docs/public_12 | 2022-06-25T13:27:40 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.bridgecrew.io |
Information for "Menus Menu Item Contact Single Contact/id" Basic information Display titleHelp310:Menus Menu Item Contact Single Contact/id Default sort keyMenus Menu Item Contact Single Contact/id Page length (in bytes)11,292 NamespaceHelp310 Page ID17379020:57, 5 July 2017 Latest editorFuzzyBot (talk | contribs) Date of latest edit05:39, 11 April 2022 Total number of edits204 Total number of distinct authors2 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded template (1)Template used on this page: Template:Cathelp (view source) Retrieved from "" | https://docs.joomla.org/index.php?title=Help310:Menus_Menu_Item_Contact_Single_Contact/id&action=info | 2022-06-25T14:42:09 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.joomla.org |
You can calibrate and adjust the transmitter using the following tools:
- Push buttons on the transmitter component board
- Serial line commands
- Portable humidity meter HM70
A calibrator kit is needed for calibration against saturated salt solutions. The HMK15 Humidity Calibrator and pre-measured certified salts are available from Vaisala. For further information, please contact your Vaisala representative.
Vaisala Service Centers also offer accredited calibrations for humidity and temperature.
You can also remove the HMP110 series probe and replace it with a new one. The old probe can be adjusted using another transmitter body, if you have one available. | https://docs.vaisala.com/r/M211280EN-D/en-US/GUID-517893C0-D0B1-498B-8407-9406546C8AE3 | 2022-06-25T13:53:33 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.vaisala.com |
The MMCM reference clock can be dynamically switched using the CLKINSEL pin. The switching is done asynchronously. After the clock switches, the MMCM is likely to lose LOCKED and automatically lock onto the new clock. Therefore, after the clock switches, the MMCM must be reset. The MMCM clock MUX switching is shown in the following figure. The CLKINSEL signal directly controls the MUX. No synchronization logic is present.
Figure 1. Input Clock Switching
Missing Input Clock or Feedback Clock
When the input clock or feedback clock is lost, the CLKINSTOPPED or CLKFBSTOPPED status signal is asserted. The MMCM deasserts the LOCKED signal. After the clock returns, the CLKINSTOPPED signal is deasserted and a RESET must be applied. | https://docs.xilinx.com/r/en-US/am003-versal-clocking-resources/Reference-Clock-Switching | 2022-06-25T13:37:37 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.xilinx.com |
Finding and downloading the COVIDcast data
Simply follow these step-by-step instructions:
Create a directory on the computer where you run
ysqlshto hold the files for this case study. Call it, for example, "covid-data-case-study".
Go to the COVIDcast site and select the “Export Data” tab. That will bring you to this screen:
In Section #1, “Select Signal”, select “Facebook Survey Results” in the “Data Sources” list and select “People Wearing Masks” in the “Signals” list.
In Section #2, “Specify Parameters”, choose the range that interests you for “Date Range” (This case study used "2020-09-13 - 2020-11-01".) Select “States” for the “Geographic Level”.
In Section #3, "Get Data” hit the “CSV” button.
Then repeat, leaving all choices unchanged except for the choice in the “Signals” list. Select “COVID-Like Symptoms” here.
Then repeat again, again leaving all choices unchanged except for the choice in the “Signals” list. Select “COVID-Like Symptoms in Community” here.
This will give you three files with names like these:
covidcast-fb-survey-smoothed_wearing_mask-2020-09-13-to-2020-11-01.csv covidcast-fb-survey-smoothed_cli-2020-09-13-to-2020-11-01.csv covidcast-fb-survey-smoothed_hh_cmnty_cli-2020-09-13-to-2020-11-01.csv
The naming convention is obvious. The names will reflect your choice of date range.
Create a directory called "csv-files" on your "covid-data-case-study" directory and move the
.csvfiles to this from your "downloads" directory. Because you will not edit these files, you might like to make them all read-only to be sure that you don't make any accidental changes when you use a text editor or a spreadsheet app to inspect them. | https://docs.yugabyte.com/preview/api/ysql/exprs/aggregate_functions/covid-data-case-study/download-the-covidcast-data/ | 2022-06-25T13:14:25 | CC-MAIN-2022-27 | 1656103035636.10 | [array(['/images/section_icons/api/ysql.png',
'Download the COVIDcast data Download the COVIDcast data'],
dtype=object)
array(['/images/api/ysql/exprs/aggregate_functions/covid-data-case-study/covidcast-real-time-covid-19-indicators.png',
'Download the COVIDcast Facebook Survey Data'], dtype=object) ] | docs.yugabyte.com |
Trailer Fashion Babylon Skip to tickets Directed By: Gianluca Matarrese Nightvision 2022 France English, French, German North American Premiere 87 minutes Nightvision: Future cult classics From the front rows of Paris fashion shows to decadent after-parties, drag queen Violet Chachki, musician Casey Spooner and style icon Michelle Elie share their impressions of the vapid publicity theatre and crushing court system of the fashion world. Loving the glitter but loathing the grind of its dream machine, they navigate the choppy waters of living a fantasy while at the same time not being able to pay rent, reconciling the illusion and delusion of fashion as concept, communication, commodity, environment and state of mind. Fashion Babylon presents a prism that tests whether you can live in a dream or just project onto one. Bringing the immaterial into the material world demands a relationship to fashion as art, but also to the fashion world as artifice, where the difference between artist and arriviste, gift and service, muse and mascot, is totally diffuse. But fashion is a show, after all—featuring appearances by Jean-Paul Gaultier, Karli Kloss, Céline Dion, Chloë Sevigny, Miuccia Prada and more. Angie Driscoll Read less Credits Director(s) Gianluca Matarrese Producer(s) Dominique Barneaud Executive Producer(s) Casey Spooner Writer(s) Gianluca Matarrese Editor(s) Tess Gomet Cinematography Gianluca Matarrese Composer Cantautoma Sound Davide Giorgio Read less Back to What's On See More & Save Buy your Festival ticket packages and passes today! Share | https://hotdocs.ca/whats-on/hot-docs-festival/films/2022/fashion-babylon | 2022-06-25T13:52:41 | CC-MAIN-2022-27 | 1656103035636.10 | [] | hotdocs.ca |
You are viewing documentation for Kubernetes version: v1.23
Kubernetes v1.23 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
Generating Reference Pages for Kubernetes Components and Tools
This page shows how to build the Kubernetes component and tool reference pages.
Before you begin
Start with the Prerequisites section in the Reference Documentation Quickstart guide.
Follow the Reference Documentation Quickstart to generate the Kubernetes component and tool reference pages.
What's next
Last modified May 30, 2020 at 3:10 PM PST: add en pages (ecc27bbbe7) | https://v1-23.docs.kubernetes.io/docs/contribute/generate-ref-docs/kubernetes-components/ | 2022-06-25T14:02:34 | CC-MAIN-2022-27 | 1656103035636.10 | [] | v1-23.docs.kubernetes.io |
You are viewing documentation for Kubernetes version: v1.23
Kubernetes v1.23 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
kubeadm token
Boot.
kubeadm token create
Create bootstrap tokens on the server
Synopsis]
Options
Options inherited from parent commands
kubeadm token delete
Delete bootstrap tokens on the server
Synopsis
This command will delete a list of bootstrap tokens for you.
The [token-value] is the full Token of the form "[a-z0-9]{6}.[a-z0-9]{16}" or the Token ID of the form "[a-z0-9]{6}" to delete.
kubeadm token delete [token-value] ...
Options
Options inherited from parent commands
kubeadm token generate
Generate and print a bootstrap token, but do not create it on the server
Synopsis]
Options
Options inherited from parent commands
kubeadm token list
List bootstrap tokens on the server
Synopsis
This command will list all bootstrap tokens for you.
kubeadm token list [flags]
Options
Options inherited from parent commands
What's next
- kubeadm join to bootstrap a Kubernetes worker node and join it to the cluster | https://v1-23.docs.kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-token/ | 2022-06-25T13:29:32 | CC-MAIN-2022-27 | 1656103035636.10 | [] | v1-23.docs.kubernetes.io |
Setting Page Options
Setting Page Options
Page setup options are fully supported in Aspose.Cells. This article explains how to set page options with Aspose.Cells and shows code samples for the PageSetup property that is used to set the page setup options of the worksheet. In fact, this PageSetup property is an object of the PageSetup class used to set different page layout options for a printed worksheet. The PageSetup class provides various properties used to set page setup options. Some of these properties are discussed below.
Page Orientation
Page orientation can be set to portrait or landscape using the PageSetup class' Orientation property. The Orientation property accepts one of the pre-defined values in the PageOrientationType enumeration, listed below.
Scaling Factor
It is possible to reduce or enlarge a worksheet’s size by adjusting the scaling factor with the PageSetup.Zoom property.
FitToPages Options
To fit the contents of the worksheet to a specific number of pages, use the PageSetup class' FitToPagesTall and FitToPagesWide properties. These properties are also used to scale worksheets.
Paper Size
Set the paper size that the worksheets will be printed to using the PageSetup class' PaperSize property. The PaperSize property accepts one of the pre-defined values in the PaperSizeType enumeration, listed below.
Print Quality
Set the print quality of the worksheets to be printed with the PageSetup class' PrintQuality property. The measuring unit for print quality is Dots Per Inches (DPI).
First Page Number
Start the numbering of worksheet pages using the PageSetup class' FirstPageNumber property. The FirstPageNumber property sets the page number of the first worksheet page and the next pages are numbered in ascending order. | https://docs.aspose.com/cells/net/setting-page-options/ | 2022-06-25T14:15:10 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.aspose.com |
Breaking: #87081 - Language update (scheduler) task doesn’t work after upgrading to TYPO3 >= v9.2¶
See Issue #87081
Description¶
The language update command was moved away from ext:lang and was rewritten as a Symfony Console Commmand.
Affected Installations¶
An installation is affected if a language update scheduler task was created and exists before an upgrade to TYPO3 >= 9.2. | https://docs.typo3.org/c/typo3/cms-core/main/en-us/Changelog/9.2/Breaking-87081-LanguageUpdateSchedulerTaskDoesNotWorkAfterTYPO3Upgrade.html | 2022-06-25T14:53:46 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.typo3.org |
Beam Design Strategy
This section provides users with information on how MASS™ obtains a beam design. Specifically, the section discusses the design philosophy employed in the moment and deflection, shear, and bearing design steps. This information allows the designer to manipulate the beam module with ease.
Moment and Deflection Design Strategy
During the moment design step the factored moment, Mf , is compared with the moment resistance, Mr. In order achieve a successful design, Mr ≥ Mf . To meet this requirement the program performs the design by iterating through the bar size, number of bars per layer, number of layers of reinforcement, unit strength, and unit size, in that order. This is summarized in
Note: MASS can only iterate through the properties that are checked-on. Hence, de-selecting all masonry properties and reinforcement configurations, except for one of each type, converts the program into an analysis tool, rather than a design tool. The program simply determines moment resistance, Mr, and compares that value to the factored moment, Mf.
Figure 3‑75: Moment and Deflection Design Iteration Hierarchy for Out-of-Plane Wall Module
The program begins by iterating up through the reinforcement configurations (bar sizes, # of bars/layer, and # of layers of reinforcement), while fixing: the unit strength and the unit size. For the initial configuration, the program uses the weakest and smallest unit size selected by users. If the masonry unit properties are left unchanged by users, the default values are: 10 cm, 15 MPa for concrete block; and 10 cm, 55 MPa for clay brick. The number of layers of reinforcement (at the top and bottom of the beam), by default, is one layer (Layer 1). By default, the program can use 1, 2, or 3 bars per layer (at the top and the bottom of the beam).
The program creates a list of all possible reinforcement configurations, based on the parameters selected by users and those that fit within the current block size or brick cavity given the bar separation, side cover, top cover and knockout depth constraints entered by users. The program then sorts the possible steel configurations in terms of total steel area, and cycles through the steel configurations from lowest to highest area. If two steel areas are the same, within 0.1 %, the program chooses the configuration with the greater number of smaller bars, instead of fewer larger bars; and, for the same size and number of bars, the program chooses the configuration with the least number of layers.
The program iterates through the compression steel configurations before the tension steel to start the sequence off with the zero-compression steel condition, which is desirable cost and constructability. The program sets the compression steel configuration to the one with the minimum amount of steel, which in most cases is none. It then cycles through all possible tension steel configurations. If design is not achieved it increments the compression steel area, and then cycles through all the tension steel configurations again.
Note: If the compression steel is not tied, the program does not cycle through the compression steel configurations, as the compression steel does not contribute to the beam moment resistance. The program returns to the next strength if all tension steel configurations failed due to the steel not yielding and/or not providing enough moment resistance.
The iteration continues until a design is achieved, or the compression steel configurations are exhausted. In the latter case, the program iterates up to the next weakest block strength, and begins reiterating through the reinforcement configurations, while fixing the block size.
If a successful design is not achieved with the strongest unit size permitted, the program iterates up to the next smallest unit size, and begins reiterating through the reinforcement configurations. This iteration procedure continues until a successful design is reached, or all block sizes are exhausted.
Note: Due to the large number of iterations the program performs, it may take up to several minutes to reach a successful design. If the program requires more than several seconds to reach a solution, the smaller weaker unit, or smaller bar sizes do not provide enough capacity. In this case, users can easily de-select some of the early iterations in the midst of the design process. This significantly speeds up the program.
If a successful design is found, the program proceeds to design for the deflection of the beam. During the deflection design step the live plus long term deflection, Δ LivePlusLT , is calculated using the beam properties that provided a successful moment design. This deflection is compared with the maximum allowable deflection Δ LivePlusLT Limit (governed by CSA S304-14: 11.4.5). The program also allows users to set an additional total deflection limit, Δ Total Limit , and compares this value with the total immediate plus long term deflection. To achieve a successful deflection design:
ΔLivePlusLT ≤ Δ LivePlusLT Limit and ΔTotal ≤ Δ Total Limit.
In most cases a successful moment design also produces a successful deflection design. However, if the design fails in deflection, the program returns to the moment design iteration procedure. If the moment and deflection design step is successful, the program can proceed to the shear design step.
Shear Design Strategy
During the shear design step the shear resistance (at each cell i ), Vri, is calculated using the beam properties that provided a successful moment and deflection design. The shear resistance (at each cell i ) is compared with the factored shear (at each cell i ), Vfi.
To achieve a successful shear design, Vri ≥ Vfi. To meet this requirement the program performs the design by iterating through the stirrup spacing, stirrup configurations, and stirrup size, in that order. This is summarized in
Figure 3‑76: Moment and Deflection Design Iteration Hierarchy for Beam Module
The program begins by iterating through the stirrup spacing, while fixing: the stirrup configuration, and the stirrup size. For the initial configuration, the program uses the unit strength and size obtained from the moment and deflection design. If the vertical steel properties are left unchanged by users, the program starts by selecting the ‘None’ stirrup configuration. In this case, the beam is unreinforced (for shear purposes), and thus the stirrup spacing and stirrup size are not a consideration. If this configuration does not yield a successful shear design, the program attempts the design with single leg stirrup configuration, the largest stirrup spacing allowed by design, and the smallest stirrup size.
In MASS™ vertical shear steel can be spaced at variable (Figure 3-43) or uniform (Figure 3-44) intervals along the length of the beam. When uniform spacing is selected, the program uses the smallest spacing calculated from all the cells along the beam. When variable spacing is selected the program first verifies (cell-by-cell) if shear reinforcement is required. If the factored shear is equal or less than half the shear resistance of the masonry, Vf ≤ Vm/2 , no stirrups are required (CSA S304-14: 11.3.4.7.1). If the factored shear is greater than half the shear resistance of the masonry (Vf > Vm/2), then the spacing of the vertical shear steel is iterated as previously described, starting with the maximum spacing, Smax, allowed based on the minimum area of shear reinforcement required by CSA S304-14: 11.3.4.7.2 .
If a successful design is not achieved with the smallest stirrup spacing, the program iterates to the next stirrup configuration, and begins iterating through stirrup spacing again, while fixing the stirrup size. If a successful design is not achieved with the last stirrup configuration, the program iterates up the next stirrup size, and begins iterating through stirrup spacing again. This iteration procedure continues until a successful design is reached, or all stirrup sizes are exhausted.
If the program has exhausted all vertical steel possibilities (stirrup spacing, configuration, and size) and a solution cannot be found, the program reports a failed design, and returns to the moment and deflection design step to increase the unit strength and/or size. If a successful design is found, the program proceeds to design for bearing (supports, and point loads).
Bearing Design Strategy
Unlike in moment, deflection, and shear designs, the program does not iterate through masonry units properties, or reinforcement possibilities. In bearing design, the program only verifies the bearing resistance of the beam, based on initial user inputs (bearing length) and the beam design obtained from the moment, deflection, and shear design. The program simply informs users if the bearing area is sufficient to withstand the maximum reaction force at the supports, or the maximum point load applied.
Continue Reading: Out-of-Plane Walls
Was this post helpful? | https://docs.masonryanalysisstructuralsystems.com/beams/beam-design-strategy/ | 2022-06-25T13:44:37 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.masonryanalysisstructuralsystems.com |
Use this example to handle a flow info record.
Unlearned flow information
When a flow is unlearned, i.e. removed from the flow table in the SmartNIC, information about the flow is generated. See Flow unlearning for more information about how a flow can be unlearned. A flow info record includes the following:
- User-defined flow ID.
- Number of bytes and frames received in the flow.
- RX time stamp of the last frame in the flow.
- Union of all TCP flags observed over the life of the flow.
- Cause of the flow unlearning.
Note: TCP flags are collected either from the inner layer or the outer layer, determined by the IpProtocol configuration in NTPL. If IpProtocol is set to Inner, inner layer TCP sessions are tracked. If it is set to Outer, outer layer TCP sessions are tracked. If it is set to None (default), TCP sessions are not tracked.
Flow info record
A flow info record can be retrieved using the NT_FlowRead function with the NtFlowInfo_s structure. NtFlowInfo_s is defined as follows:
typedef struct NtFlowInfo_s {
  uint64_t packetsA; /*!< Packet counter for set A */
  uint64_t octetsA;  /*!< Byte/octet counter for set A */
  uint64_t packetsB; /*!< Packet counter for set B */
  uint64_t octetsB;  /*!< Byte/octet counter for set B */
  uint64_t ts;       /*!< Time stamp in UNIX_NS format of last seen packet */
  uint64_t id;       /*!< The 64-bit user-defined ID from the flow programming operation */
  uint16_t flagsA;   /*!< Bitwise OR of TCP flags for packets in set A */
  uint16_t flagsB;   /*!< Bitwise OR of TCP flags for packets in set B */
  uint8_t  cause;    /*!< Unlearn cause (0: Software, 1: Timeout, 2: TCP flow termination) */
} NtFlowInfo_t;
Note: The flow info record of an unlearned flow is generated only if the gfi field of NtFlow_s was set to 1 when learning the flow. See API: Learn a Flow for more information about flow learning.
The associated flow can be identified using the id field which contains the user-defined flow identification. See Programming Key ID, key set ID and flow ID.
Two counter sets, A and B are supported. Two separate counter sets can be utilized handling counters for upstream traffic and downstream traffic individually for example. NtFlowInfo_s contains values for both counter sets. By default counter set A is used. It can be configured which counter set to be used using NTPL as shown in the following example.
Assign[StreamId=(0..3); Color=0; Descriptor=DYN4, ColorBits=FlowID] = Upstream and \\ Key(kd, KeyID=1, CounterSet=CSA) == 4 Assign[StreamId=(0..3); Color=0; Descriptor=DYN4, ColorBits=FlowID] = Downstream and \\ Key(kd, KeyID=1, FieldAction=Swap, CounterSet=CSB) == 4In this example, counter set A is used for upstream traffic and counter set B is used for downstream traffic. See the full NTPL command example and descriptions in Flow Management over Network TAP Configuration.
The flagsA and flagsB fields contain nine 1-bit of flags (control bits) for a TCP flow. It can be utilized to track status changes of the TCP session.
The cause field contains what caused the flow unlearning operation. The returned value is one of the following:
- 0: Unlearned by NT_FlowWrite in the application.
- 1: Unlearned due to a timeout event because the flow was inactive during the defined period. The time can be configured using the FlowTimeOut parameter in /opt/napatech3/config/ntservice.ini. The default value is 120000 milliseconds as shown in the following example.
FlowTimeOut = 120000 # 0 .. 4294967295Note: Set FlowTimeOut to 0 to disable the timeout feature.
- 2: Unlearned automatically when the TCP session was terminated. This applies to a TCP flow if the tau field of NtFlow_s was set to 1 when the flow was learned. See API: Learn a Flow.
Note: Automatic TCP flow unlearning can be controlled on a per-flow basis by setting the tau field of NtFlow_s when learning a flow. See API: Learn a Flow
Note: The IpProtocol parameter must be set to either Inner or Outer in NTPL commands to enable tracking of TCP sessions for automatic TCP flow unlearning. If IpProtocol is set to Inner, inner layer TCP sessions are tracked. If IpProtocol is set to Outer, outer layer TCP sessions are be tracked. If it is set to None (default), TCP sessions are not tracked, and TCP flows are not automatically unlearned.
Read flow info records of unlearned flows
The following code snippet shows how to read flow info records of unlearned flows.
NtFlowInfo_t flowInfo; // For each element in the flow stream queue, print the flow info record. while(NT_FlowRead(flowStream, &flowInfo, 0) == NT_SUCCESS) { std::cout << "NT_FlowRead of flow ID " << flowInfo.id << std::endl << "CSA: Packages: " << flowInfo.packets_a << ", Octets: " << flowInfo.octets_a << std::endl << "CSB: Packages: " << flowInfo.packets_b << ", Octets: " << flowInfo.octets_b << std::endl << "Time stamp: " << flowInfo.ts << std::endl << "TCP flags A: " << flowInfo.flags_a << ", TCP flags B: " << flowInfo.flags_b << std::endl; switch(flowInfo.cause) { case 0: std::cout << "Unlearn cause: Software" << std::endl; break; case 1: std::cout << "Unlearn cause: Timeout" << std::endl; break; case 2: std::cout << "Unlearn cause: TCP flow termination" << std::endl; break; default: std::cout << "Unlearn cause: Not supported" << std::endl; break; } }After a flow is unlearned, the SmartNIC delivers the flow info record to the host. Up to 65536 records can be stored in the host memory. If the host memory is full, new records from the SmartNIC will be dropped. It is therefore good practice calling NT_FlowRead until no more records are available as shown in this example.
Note: If receiving flow info records is not needed, set the gfi field of NtFlow_s to 0 when learning flows. See API: Learn a Flow.
Note: Flow info records are read from the same flow stream that originally learned the flows. | https://docs.napatech.com/r/Stateful-Flow-Management/API-Read-Flow-Info-of-an-Unlearned-Flow | 2022-06-25T14:05:32 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.napatech.com |
The high performance XtremIO All Flash Array (AFA) offers Block Storage services to OpenStack. Using the driver, OpenStack Block Storage hosts can connect to an XtremIO Storage cluster.
This section explains how to configure and connect the block storage nodes to an XtremIO storage cluster.
Edit the
cinder.conf file by adding the configuration below under
the [DEFAULT] section of the file in case of a single back end or
under a separate section in case of multiple back ends (for example
[XTREMIO]). The configuration file is usually located under the
following path
/etc/cinder/cinder.conf.
For a configuration example, refer to the configuration Configuration example.
Configure the driver name by setting the following parameter in the
cinder.conf file:
For iSCSI:
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
For Fibre Channel:
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
To retrieve the management IP, use the show-xms CLI command.
Configure the management IP by adding the following parameter:
san_ip = XMS Management IP
In XtremIO version 4.0, a single XMS can manage multiple cluster back ends. In such setups, the administrator is required to specify the cluster name (in addition to the XMS IP). Each cluster must be defined as a separate back end.
To retrieve the cluster name, run the show-clusters CLI command.
Configure the cluster name by adding the following parameter:
xtremio_cluster_name = Cluster-Name
Note
When a single cluster is managed in XtremIO version 4.0, the cluster name is not required.
OpenStack Block Storage requires an XtremIO XMS user with administrative privileges. XtremIO recommends creating a dedicated OpenStack user account that holds an administrative user role.
Refer to the XtremIO User Guide for details on user account management.
Create an XMS account using either the XMS GUI or the add-user-account CLI command.
Configure the user credentials by adding the following parameters:
san_login = XMS username san_password = XMS username password
Configuring multiple storage back ends enables you to create several back-end storage solutions that serve the same OpenStack Compute resources.
When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.
To support thin provisioning and multipathing in the XtremIO Array, the following parameters from the Nova and Cinder configuration files should be modified as follows:
Thin Provisioning
All XtremIO volumes are thin provisioned. The default value of 20 should be
maintained for the
max_over_subscription_ratio parameter.
The
use_cow_images parameter in the
nova.conf file should be set to
False as follows:
use_cow_images = False
Multipathing
The
use_multipath_for_image_xfer parameter in the
cinder.conf file
should be set to
True as follows:
use_multipath_for_image_xfer = True
Limit the number of copies (XtremIO snapshots) taken from each image cache.
xtremio_volumes_per_glance_cache = 100
The default value is
100. A value of
0 ignores the limit and defers to
the array maximum as the effective limit.
To enable SSL certificate validation, modify the following option in the
cinder.conf file:
driver_ssl_cert_verify = true
By default, SSL certificate validation is disabled.
To specify a non-default path to
CA_Bundle file or directory with
certificates of trusted CAs:
driver_ssl_cert_path = Certificate path
The XtremIO Block Storage driver supports CHAP initiator authentication and discovery.
If CHAP initiator authentication is required, set the CHAP Authentication mode to initiator.
To set the CHAP initiator mode using CLI, run the following XMCLI command:
$ modify-chap chap-authentication-mode=initiator
If CHAP initiator discovery is required, set the CHAP discovery mode to initiator.
To set the CHAP initiator discovery mode using CLI, run the following XMCLI command:
$ modify-chap chap-discovery-mode=initiator
The CHAP initiator modes can also be set via the XMS GUI.
Refer to XtremIO User Guide for details on CHAP configuration via GUI and CLI.
The CHAP initiator authentication and discovery credentials (username and password) are generated automatically by the Block Storage driver. Therefore, there is no need to configure the initial CHAP credentials manually in XMS.
You can update the
cinder.conf file by editing the necessary parameters as
follows:
[Default] enabled_backends = XtremIO [XtremIO] volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver san_ip = XMS_IP xtremio_cluster_name = Cluster01 san_login = XMS_USER san_password = XMS_PASSWD volume_backend_name = XtremIOAFA
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/newton/config-reference/block-storage/drivers/emc-xtremio-driver.html | 2022-06-25T15:00:13 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.openstack.org |
The ExclusionListsV[X] views are security zone constrained views that contain object lists and the objects (database, table, join index, or hash index) included in each list. The exclusion object lists specify objects that are not to be moved. ExclusionListsVX returns only the objects to which the user has access.
DictionaryDatabase
DictionaryDatabase is an optional column to define where the data dictionary tables are stored. This should be set when running AnalyzeSP on a system that is different from the system where the Exclusions are defined. The contents of Maps, Dbase, TVM, TVFields, DBQLStepTbl, ObjectUsage, and DatabaseSpace for the objects to analyze need to be copied from the source system to DictionaryDatabase on the system where AnalyzeSP and CreateExclusionListSP will be run. Note that the following views need to be created in DictionaryDatabase: Tables2V, Databases2V, MapsV, QryLogStepsV, ColumnsV, InsertUseCountV, UpdateUseCountV, DeleteUseCountV, and DiskSpaceV. Finally, the rows written to ActionsTbl need to be copied to the source system. | https://docs.teradata.com/r/Teradata-VantageTM-Data-Dictionary/July-2021/Views-Reference/ExclusionListsV-X/Usage-Notes | 2022-06-25T14:11:20 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.teradata.com |
The MMCME5_ADV primitive provides three inputs and one output for dynamic fine phase shifting. Each CLKOUT and the CLKFBOUT divider can be individually selected for phase shifting. The dynamic phase shift amount is common to all the output clocks selected.
The variable phase shift is controlled by the PSEN, PSINCDEC, PSCLK, and PSDONE ports (see the following figure). The phase of the MMCM output clock(s) increments/decrements according to the interaction of PSEN, PSINCDEC, PSCLK, and PSDONE from the initial or previously performed dynamic phase shift. PSEN, PSINCDEC, and PSDONE are synchronous to PSCLK. When PSEN is asserted for one PSCLK clock period, a phase shift increment/decrement is initiated. When PSINCDEC is High, an increment is initiated and when PSINCDEC is Low, a decrement is initiated. Each increment adds to the phase shift of the MMCM clock outputs by 1/32nd of the VCO period. Similarly, each decrement decreases the phase shift by 1/32nd of the VCO period. PSEN must be active for one PSCLK period. PSDONE is High for exactly one-clock period when the phase shift is complete. The number of PSCLK cycles (12) is deterministic. After initiating the phase shift by asserting PSEN and the completion of the phase shift signaled by PSDONE, the MMCM output clocks gradually drift from their original phase shift to an increment/decrement phase shift in a linear fashion. The completion of the increment or decrement is signaled when PSDONE asserts High. After PSDONE has pulsed High, another increment/decrement can be initiated. There is no maximum phase shift or phase shift overflow. An entire clock period (360°) can always be phase shifted regardless of frequency. When the end of the period is reached, the phase shift wraps around round-robin style. In the case where there is no additional phase shift initiated (PSEN stays Low), PSDONE continues to repeat a one-cycle High pulse every 32 PSCLK cycles. | https://docs.xilinx.com/r/en-US/am003-versal-clocking-resources/Dynamic-Phase-Shift-Interface-in-the-MMCM | 2022-06-25T14:20:29 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.xilinx.com |
.
It also contains implementations of the
OperationContext interface for various kinds of cache operations.
KeyOperationContext: Implementation for operations that require a key for the operation.
It provides a getKey method to obtain the key. Also provided are setCallbackArg and getCallbackArg methods that can be used to set/get an optional callback argument.
The operations returned as KeyOperationContext are
OperationContext.OperationCode.DESTROY
and
OperationContext.OperationCode.CONTAINS_KEY.
KeyValueOperationContext: Implementation for operations that require both key and value for the operation.
It extends the
KeyOperationContext implementation providing getValue and setValue methods
in addition to those provided by the KeyOperationContext class.
The operations returned as KeyValueOperationContext are
OperationContext.OperationCode.GET and
OperationContext.OperationCode.PUT.
For the GET operation this is used to both the pre and post operation cases.
InterestOperationContext: Implementation for register and unregister of interest in a region.
It defines a sub-class
InterestType that encapsulates different kinds of interest viz.
KEY, LIST, REGULAR_EXPRESSION, FILTER_CLASS and OQL_QUERY.
It provides getInterestType method to get the interest type,
getInterestResultPolicy method to get the
InterestResultPolicy of the request,
isUnregister method that returns true if this is an unregister operation, and getKey/setKey methods to get/set the key being registered.
The key may be a single key, a list of keys, a regular expression string or an OQL
Query.
QueryOperationContext: Implementation for a cache query operation for both the pre and post operation cases.
It provides getQuery to get the query string as well as a modifyQuery method to be able to modify it.
A utility getRegionNames method is also provided to obtain the list of regions as referenced by the query string.
For the results in the post operation phase, getQueryResult allows getting the result and setQueryResult allows modification of the result.
RegionOperationContext: Implementation for the region level operation for the pre operation case.
It provides getCallbackArg and setCallbackArg methods to get/set the optionally callback argument.
The operations returned as RegionOperationContext are
OperationContext.OperationCode.REGION_CLEAR
and
OperationContext.OperationCode.REGION_DESTROY.
DestroyOperationContext: Implementation for
OperationContext.OperationCode.DESTROY operation
having the key object for both the pre-operation and post-operation updates.
CloseCQOperationContext: Implementation for
OperationContext.OperationCode.CLOSE_CQ operation
for the pre-operation case.
ExecuteCQOperationContext: Implementation for
OperationContext.OperationCode.EXECUTE_CQ operation
for both the pre-operation and post-operation case.
ExecuteFunctionOperationContext: Implementation for
OperationContext.OperationCode.EXECUTE_FUNCTION operation
for the pre-operation case.
GetOperationContext: Implementation for
OperationContext.OperationCode.GET operation
having the key object for the pre-operation case and both key-value objects for the post-operation case.
InvalidateOperationContext: Implementation for
OperationContext.OperationCode.INVALIDATE region
operation having the key object for the pre-operation case and post-operation case.
PutOperationContext: Implementation for
OperationContext.OperationCode.PUT operation
having the key and value objects for the pre-operation case and post-operation case.
PutAllOperationContext: Implementation for
OperationContext.OperationCode.KEY_SET operation
having the key and value objects for the pre-operation case and post-operation case.
RegionCreateOperationContext: Implementation for
OperationContext.OperationCode.REGION_CREATE
operation for the pre-operation case and post-operation case.
StopCQOperationContext: Implementation for
OperationContext.OperationCode.STOP_CQ operation
for the pre-operation case.
RegisterInterestOperationContext: Implementation for
OperationContext.OperationCode.REGISTER_INTEREST operation
for the pre-operation case, which derives from
InterestOperationContext
UnregisterInterestOperationContext: Implementation for
OperationContext.OperationCode.UNREGISTER_INTEREST operation
for the pre-operation case, which derives from
InterestOperationContext | http://gemfire-95-javadocs.docs.pivotal.io/org/apache/geode/cache/operations/package-summary.html | 2019-05-19T08:59:10 | CC-MAIN-2019-22 | 1558232254731.5 | [] | gemfire-95-javadocs.docs.pivotal.io |
Redis¶
Redis is a key-value store engine used by most components of the CartoDB application stack to store configuration and cache.
In contrast to the PostgreSQL metadata (which is used only by the CartoDB editor), the metadata stored in Redis is shared among the CartoDB editor, the SQL API, and the Maps API.
Important
Even though also:
- Database 0: Table and visualization metadata, including map styles and named maps.
- Database 3: OAuth credentials metadata.
- Database 5: Metadata about the users, including API keys and database_hosts.
At this moment CartoDB requires redis 3.x version. | https://cartodb.readthedocs.io/en/v4.11.106/components/redis.html | 2019-05-19T09:46:44 | CC-MAIN-2019-22 | 1558232254731.5 | [] | cartodb.readthedocs.io |
Administering Open Firmware/EFI Passwords
You can administer Open Firmware or EFI passwords to ensure the security of managed computers.
There are two ways to set and remove an Open Firmware/EFI password: using a policy or using Jamf Remote.
Requirements
The “setregproptool” binary must be present on each computer and any alternate boot volume(s) used to set firmware. For models “Late 2010” or later with macOS 10.9.x or earlier, the binary must be obtained and placed on the computer. (For more information, see the Setting EFI Passwords on Mac Computers (Models Late 2010 or Later) Knowledge Base article.)
Setting or Removing an Open Firmware/EFI Password set or remove an Open Firmware/EFI password.
Click the Accounts tab.
Select the Set Open Firmware/EFI Password checkbox.
Do one of the following:
To set the password, choose "command" from the Security Level pop-up menu and enter and verify the password.
To remove the password, choose "none" from the Security Level pop-up menu.. | https://docs.jamf.com/10.0.0/jamf-pro/administrator-guide/Administering_Open_Firmware_EFI_Passwords.html | 2019-05-19T08:55:56 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.jamf.com |
As of Tue, 04/16/2019 - 21:07, this link is reporting 404 - Not Found.
Lemon CMS is a simple to use web-based content management system that allows you easily, quickly and dynamically update the content on your website. The CMS uses a file based system making it simple to port into any website. | http://docs.ongetc.com/?q=content/lemon-cms | 2019-05-19T08:40:30 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.ongetc.com |
Transfer CFT 3.2.2 Local Administration Guide About Transfer CFT Copilot ( Transfer CFT UI): provides frames, menus and icons you can use to manage the processes Additionally this section offers a Command Index, which is a summary of all commands with their syntax and default values. Related Links | https://docs.axway.com/bundle/Transfer_CFT_322_UsersGuide_LocalAdministration_allOS_en_HTML5/page/Content/central_governance/c_intro_userinterfaces.htm | 2019-05-19T08:45:30 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.axway.com |
All content with label as5+hot_rod+infinispan+jboss_cache+listener+non-blocking+release+scala.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist,, batch, hash_function, configuration, buddy_replication, loader, xa, write_through, cloud, mvcc, notification, tutorial, xml, jbosscache3x, distribution, cachestore, data_grid, cacheloader, resteasy, cluster, br, development, websocket, async, transaction, interactive, xaresource, build, searchable, demo, installation, cache_server, client, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, hotrod, snapshot, repeatable_read, webdav, docs, batching, consistent_hash, store, jta, faq, 2lcache, jsr-107, jgroups, lucene, locking, rest
more »
( - as5, - hot_rod, - infinispan, - jboss_cache, - listener, - non-blocking, - release, - scala )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/as5+hot_rod+infinispan+jboss_cache+listener+non-blocking+release+scala | 2019-05-19T09:26:23 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.jboss.org |
All content with label cloud+gridfs+infinispan+installation+interface+jsr-107+loader+repeatable_read.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, intro, archetype, lock_striping, jbossas,, index, events, hash_function, configuration, batch, buddy_replication,, infinispan_user_guide, standalone, webdav, hotrod, snapshot, docs, consistent_hash, batching, store, jta, faq, 2lcache, lucene, jgroups, locking, rest, hot_rod
more »
( - cloud, - gridfs, - infinispan, - installation, - interface, - jsr-107, - loader, - repeatable_read )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/cloud+gridfs+infinispan+installation+interface+jsr-107+loader+repeatable_read | 2019-05-19T09:57:59 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.jboss.org |
Contents IT Service Management Previous Topic Next Topic Define help information for a service catalog variable Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Define help information for a service catalog variable Enter help information for a variable to help users determine what info they must provide for a service catalog variable. Navigate to Service Catalog > Catalog Definitions > Maintain Items and click the catalog item with the variable that you want to provide help for. In the Variables related list, click the variable. In the Annotation section of the Variable form, select the Show Help check box. The Help, Help text, and Instructions fields appear. In the Help tag field, enter the short descriptive text that appears between the question and the responses. For example, Click here for help or Preview. In the Help text field, enter the help text that displays for users who clicks the Help tag on the page for the catalog item. Note: The Help and Help text fields do not support HTML tags. Figure 1. Example of help information Related TasksDefine a question choiceRelated ReferenceService catalog variable attributes On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-it-service-management/page/product/service-catalog-management/task/t_DefineHelpInformation.html | 2019-05-19T08:56:27 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.servicenow.com |
Table of contents
What is Subiz Referral Program?
How does the referral program work?
How to join?
Manage referral account
Commission Program?
How to receive money?
Is it compulsory to install Subiz on my website?
Note
1, What is Subiz Referral Program?
Referral Program creates the opportunity of cooperating between Subiz and partners based on seeking Subiz’s potential customers.
This program is designed for businesses/people dealing in online activities : professional website design company, Internet Marketing company, Marketing staff, web developer, etc.
2, How does the referral program work?
Subiz provides you a referral link with tracking code. Share the link with as many friends as you want! When customers click on the link and register, Subiz will record. And when they pay for Subiz, you will receive a corresponding commission.
3, How to join?
Access your Dashboard >> Reports tab >> Referral
Fill in your email which Subiz can contact you well and Click “Register”
Check your email to make sure that you are accepted in this program.
4, Manage your Referral account
Check Total register, Total paid accounts chart. You can filter by the date as you want.
Then, Click “Payment” to access your referral account information.
Check your referral income:
Check your referral link and commission:
5, Program commission?
10-15% any kind of customer payments (initial charges, extension, adding more agents, etc) will be put aside for you.
This commission info will also display at Reports >> Referral >> Payment
(See more at “How to manage your Referral account”)
If you are working well, we will offer a higher commission rate.
6, Receive money?
On the 5th of every month, Subiz will check and determine the income you receive. The payment will be made within 3-5 working days later.
However, If you reach to minimum payment limit ($20 minimum), you can request us to receive money anytime.
7, Is it compulsory to install Subiz on my website?
We encourage you to install Subiz on website in order to experience and know more about Subiz before referring to others. However, this is not mandatory. So, the only thing you have to do is that registering succesfully an account on Subiz.com
Note:
- Follow the international rules of referral program
- Do not make advertisements that compete with Subiz (directly or indirectly)
- You can write recommendations about Subiz on Website, Blog, Email…
- Subiz appreciates that you make direct recommendation to customers and help them understand about Subiz
- Subiz do not take responsibility of compensation if you do not follow referral rules | https://docs.subiz.com/category/payment/ | 2019-05-19T08:46:42 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.subiz.com |
During syslog configuration, you can opt to send Console events, Device events, or both. Any events generated by the AirWatch Console are sent to your SIEM tool according to the scheduler settings. Syslog can be configured for both on-premises and SaaS deployments.
To configure syslog:
- Navigate to Hub > Reports & Analytics > Events > Syslog.
On the General tab, configure the following syslog settings:
On the Advanced tab, configure the following settings:
- Select Save and use the Test Connection button to ensure successful communication between the AirWatch Console and the SIEM tool. | https://docs.vmware.com/en/VMware-AirWatch/9.2/vmware-airwatch-guides-92/GUID-AW92-Configure_Syslog.html | 2019-05-19T08:17:18 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.vmware.com |
The Nodes menu provides a comprehensive view of all of the nodes that are used across your cluster. You can view a graph that shows the allocation percentage rate for CPU, memory, or disk.
Figure 1 - Agent Nodes tab
By default, all of your nodes are displayed in List view, sorted by health. You can filter nodes by service type, health, regions, and zones. You can also sort the nodes by number of tasks or percentage of CPU, GPU, memory, or disk space allocated.
You can switch to Grid view to see a “donuts” percentage visualization.
Figure 2 - Nodes grid view
Clicking on a node opens the Nodes side panel, which provides CPU, GPU, memory, and disk usage graphs and lists all tasks on the node. Use the dropdown or a custom filter to sort tasks and click on details for more information. Click on a task listed on the Nodes side panel to see detailed information about the task’s CPU, GPU, memory, and disk usage and the task’s files and directory tree.
Clicking on the Masters tab opens the Masters Nodes view.
Figure 3 - Masters Nodes tab
The Masters Nodes tab shows information about the masters in the cluster. You can see the leader and non-leaders in the cluster, with their corresponding IP and port, region, version, started time, and elected time. | http://docs-review.mesosphere.com/1.12/gui/nodes/ | 2019-05-19T09:24:58 | CC-MAIN-2019-22 | 1558232254731.5 | [array(['/1.12/img/nodes-ee-dcos-1-12.png', 'Nodes'], dtype=object)
array(['/1.12/img/nodes-donuts-ee-dcos-1-12.png', 'Nodes'], dtype=object)
array(['/1.12/img/nodes-masters-ee-dcos-1-12.png', 'Nodes'], dtype=object)] | docs-review.mesosphere.com |
Running Sync Tables¶
If you are working with the Sync Tables feature, you must run a rake task to trigger the synchronization of the dataset. This rake retrieves all sync tables that should get synchronized, and puts the synchronization tasks at Resque:
bundle exec rake cartodb:sync_tables[true]
You might want to set up a cron so that this task is executed periodically in an automated way. | https://cartodb.readthedocs.io/en/v4.11.9/operations/run_sync_tables.html | 2019-05-19T09:47:40 | CC-MAIN-2019-22 | 1558232254731.5 | [] | cartodb.readthedocs.io |
Create Account
For details on how to create a new account please check Create a new Account.
Apply App SID and App Key
For details on how to get App Key and App SID please check Create New App and Get App Key and SID.
Free Plan
Our free plan allows you to use Cloud APIs as you would normally. It only applies the limitation to the data that can be processed with the APIs. For details on the free plan please check Cloud APIs FAQs. For pricing please check Cloud APIs Pricing.
Paid Plan
The free plan simply becomes paid plan when you upgrade your plan for any paid account. Please follow below steps to upgrade your free plan to paid plan
1- Login to.
2- Click on Upgrade Plan.
3- Follow instructions after clicking on Buy Now.
3- Same App Key and App SID will be used for the free plan as well. | https://docs.aspose.cloud/pages/diffpagesbyversion.action?pageId=5439507&selectedPageVersions=3&selectedPageVersions=2 | 2019-05-19T09:32:30 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.aspose.cloud |
How to: Create a Workflow. Only one of the topics in this section is required to complete the tutorial; you should pick the style that interests you and follow that step. However, you may complete all of the topics if desired.
Note
Each topic in the Getting Started tutorial depends on the previous topics. To complete this topic, you must first complete How to: Create an Activity.
Note
To download a completed version of the tutorial, see Windows Workflow Foundation (WF45) - Getting Started Tutorial.
In This Section
How to: Create a Sequential Workflow
Describes how to create a sequential workflow using the Sequence activity.
How to: Create a Flowchart Workflow
Describes how to create a flowchart workflow using the Flowchart activity.
How to: Create a State Machine Workflow
Describes how to create a state machine workflow using the StateMachine activity.
See also
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/dotnet/framework/windows-workflow-foundation/how-to-create-a-workflow | 2019-05-19T08:42:30 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.microsoft.com |
Contents Now Platform Administration Previous Topic Next Topic Enforcing unique numbering Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Enforcing. The following script references a script created in Configure left padding of a system number in a table. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/london-platform-administration/page/administer/field-administration/concept/c_EnforcingUniqueNumbering.html | 2019-05-19T08:57:29 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.servicenow.com |
Multiple resolutions¶ to different screen sizes.
Resizing¶
There are several types of devices, with several types of screens, which
in turn have different pixel density and resolutions. Handling all of
them can be a lot of work, so Godot tries to make the developer’s life a
little easier. The Viewport
node has several functions to handle resizing, and the root node of the
scene tree is always a viewport (scenes loaded are instanced as a child
of it, and it can always be accessed by calling
get_tree().get_root() or
get_node("/root")).
In any case, while changing the root Viewport params is probably the most flexible way to deal with the problem, it can be a lot of work, code and guessing, so Godot provides a simple set of parameters in the project settings to handle multiple resolutions.
Stretch settings¶
Stretch settings are located in the project settings, it’s just a bunch of configuration variables()). | https://docs.godotengine.org/en/stable/tutorials/viewports/multiple_resolutions.html | 2019-05-19T08:31:24 | CC-MAIN-2019-22 | 1558232254731.5 | [array(['../../_images/screenres.png', '../../_images/screenres.png'],
dtype=object)
array(['../../_images/stretchsettings.png',
'../../_images/stretchsettings.png'], dtype=object)
array(['../../_images/stretch.png', '../../_images/stretch.png'],
dtype=object)
array(['../../_images/stretch_demo_scene.png',
'../../_images/stretch_demo_scene.png'], dtype=object)] | docs.godotengine.org |
The Cascading Style Sheet class to be applied to the weblet when the mouse is moved over it.
Default value
Blank.
Valid values
Any valid class name from the current Cascading Style Sheet, in single quotes. A list of available classes can be selected from by clicking the corresponding dropdown button in the property sheet. A shipped class of 'std_rad_button_mouseover' is supplied. | https://docs.lansa.com/14/en/lansa087/content/lansa/wamengb2_2370.htm | 2019-05-19T09:03:34 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.lansa.com |
In this step you will learn how to use the Auto Complete prompter when entering a SELECT command to retrieve a number of fields from the xEmployee table. You will see how Auto Complete will show you all parameters for commands. You will begin by deleting the SELECT statement created with the Command Assistant and you will code the following commands:
Select Fields(#xEmployeeGivenNames #xEmployeeCity #xEmployeeIdentification #xEmployeePostalCode #xEmployeeSurname) From_File(xEmployee)
Endselect
1. From the File menu select Options. Select the Source options. Make sure that Auto Complete has been set to Prompter.
Hint: The Visual LANSA status bar along the bottom edge of window contains information such as Line and Column position in the editor, the install directory, partition and current user. It also contains setting information such as Audit Off, DirectX and 2015Gray. These three items provide a direct link to open the Settings Dialog. Clicking them will open the Settings Dialog.
2. Click the OK button to close the LANSA Settings dialog.
3. In the GetData routine, delete the SELECT and ENDSELECT statements.
The GetData routine should now be empty.
4. Within the GetData routine, type S on a blank line. Auto Complete shows a list of commands starting with S:
5. Position the cursor to SELECT and press enter to select the command.
The Auto Complete Prompter adds the SELECT command with its mandatory parameters to the line. The ENDSELECT command is also added. The cursor is positioned to the From_File parameter.
6. Type xe in the From_File parameter.
Auto Complete displays a list of files starting with xe.
7. Position the cursor to the xEmployee table and press Enter. The FROM_FILE parameter will be completed.
8. Position the cursor in the Fields parameter. Type #xe. The Auto Complete drop-down will show the fields beginning xe.
9. Position the cursor on the xEmployeeGivenNames field and press enter. The sequence of the fields in the Fields parameter is not significant. Your code should now look like the following:
10. Continue by adding a space followed by #xe in the Fields parameter. Select the field xEmployeeIndentification and press Enter.
11. Repeat step 10 to enter fields xEmployeeCity, xEmployeeIdentification, xEmployeePostalCode and xEmployeeSurname into the Fields parameter. Your code should look like the following:
The SELECT command definition is complete.
You may want to try to use the AutoComplete setting Inline which completes your code on the same line as you type. When you are learning RDML, the recommended setting for AutoComplete is Prompter. You may find the Inline option faster once you develop some RDML programming skills.
You can turn off AutoComplete and then access it when required by using the Ctrl+Space keys anywhere on a command line.
12. Delete the GetData method routine. Compile the form iiiDragandDrop. | https://docs.lansa.com/14/en/lansa095/content/lansa/edttut01_0155.htm | 2019-05-19T09:25:28 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.lansa.com |
HTTP and HTTPS ports and routing are provided automatically for all web
applications deployed to Helion Stackato (unless processes: web: is set to
~).
If your application requires additional TCP or UDP ports, use the Harbor service to allocate them.
Additional ports are provisioned like any other data service. To request
a port with the
stackato client:
$ stackato create-service harbor debug-port
To request a port from Harbor in the
manifest.yml file, add it to a
services: block. For example:
name: port-test
memory: 256
services:
  my-port: harbor
This creates a TCP port tunnel which the application can access on the
host and port specified in the
$STACKATO_SERVICES environment variable.
The example above might create the following
my-port object in
$STACKATO_SERVICES:
{
    "my-port": {
        "hostname": "192.168.68.111",
        "host": "192.168.68.111",
        "port": 30134,
        "name": "cf7f868a-8b7b-4ac8-ab4d-9fd87efb7c09",
        "node_id": "harbor_node_1",
        "int_port": 4100,
        "ext_hostname": "ports.example.com",
        "ext_host": "15.185.104.122"
    }
}
This provides the following information:
- hostname: The internal hostname (if configured) of the node providing the service (the Harbor node). If none is configured by the admin, this will show the internal IP address.

- host: The internal IP address of the Harbor node.

- port: The external port number exposed by the service. Connections from external clients and other internal applications (those not directly bound to the service) will connect with this port number.

- name: The service instance ID (Helion Stackato internal reference).

- node_id: The Harbor node ID (Helion Stackato internal).

- int_port: The port on the application container which forwards to Harbor (see also Harbor Environment Variables). Applications bound to the service should connect to this port.

The port may or may not be accessible from outside of the Helion Stackato VM or cluster, depending on how the Harbor service is configured by the Admin. If Harbor is set up to allow public port access, the following two settings will also be displayed:

- ext_hostname: The public hostname (if configured) exposing the port.

- ext_host: The public IP address exposing the port.
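As a minimal sketch of how an application might consume these values at runtime (assuming a Python app and the my-port service from the example above; the variable names are illustrative, not required by Harbor):

import json
import os
import socket

# STACKATO_SERVICES is a JSON object with one entry per bound service.
services = json.loads(os.environ["STACKATO_SERVICES"])
harbor = services["my-port"]          # the service name used in manifest.yml

internal_port = harbor["int_port"]    # port used inside the application container
external_port = harbor["port"]        # port external clients connect to

# Bind a plain TCP server socket on the container-side port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", internal_port))
server.listen(5)
print("Serving on container port %d; clients use %s:%d" % (
    internal_port, harbor.get("ext_host", harbor["host"]), external_port))

The same lookup works for any bound Harbor service; only the service name key changes.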
Note
To remotely check the settings and credentials of any Helion Stackato service, run the stackato service command.
If there is only one Harbor service, the
STACKATO_HARBOR environment
variable can be used to get the internal port number.
If there is more than one Harbor service,
STACKATO_HARBOR is not
available. Instead, a custom
STACKATO_HARBOR_<SERVICE_NAME>
environment variable will be created for each harbor service
(service name upper-cased with hyphens replaced by underscores).
For example, if your
manifest.yml file configures the following services:
services:
  udp-port: harbor
  tcp-port: harbor
Two environment variables would be created:
STACKATO_HARBOR_UDP_PORT
and
STACKATO_HARBOR_TCP_PORT.
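A small sketch of deriving the variable name at runtime, assuming a Python app and the service names above (the helper name is illustrative, and the variable is assumed to contain a bare port number):

import os

def harbor_port(service_name):
    # Service name upper-cased, with hyphens replaced by underscores.
    var = "STACKATO_HARBOR_" + service_name.upper().replace("-", "_")
    return int(os.environ[var])

udp_port = harbor_port("udp-port")   # reads STACKATO_HARBOR_UDP_PORT
tcp_port = harbor_port("tcp-port")   # reads STACKATO_HARBOR_TCP_PORT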
This naming scheme can be used in conjunction with the
STACKATO_APP_NAME_UPCASE environment variable. For example, in an
app with the following harbor services defined:
services:
  udp-${name}: harbor
  tcp-${name}: harbor
The Harbor port number for the UDP service could be accessed within the container with a construct such as:
UDP_SERVICE_NAME=STACKATO_HARBOR_UDP_${STACKATO_APP_NAME_UPCASE}
UDP_SERVICE_PORT=${!UDP_SERVICE_NAME}
Harbor supports both the TCP and UDP protocols. When you provision a
service with Harbor, it will create a TCP-enabled port by default. If you
want to have a UDP port provisioned instead, prefix your service name with
udp. For example:
$ stackato create-service harbor udp-debug-port
If you have an application that requires both TCP and UDP, you can prefix
your service name with either
multi- or
both-. For example:
$ stackato create-service harbor both-debug-port
Harbor will then create UDP and TCP proxies for your application, so applications like DNS can use both protocols on the same provisioned port.
Harbor recognizes when you have multiple instances of your app running, and will update the available app backends accordingly.
The ZNC sample application allows you to deploy an IRC bouncer on Helion Stackato using the Harbor port service.
SSL termination of HTTPS to applications hosted on Helion Stackato normally happens at the Router.
There is currently no mechanism for users to add SSL certs for their own applications to the Router, but you can expose an external HTTPS interface via the Harbor port service which uses your SSL certs.
To do this, upload the SSL certificates and keys along with your application, and expose your application server directly on the TCP port provided by Harbor.
Note
When using this approach, the hostname or IP address of the app will be the one provided by the Harbor node and the client will connect using the Harbor-assigned port number, not port 443.
For example, an application running through the port service might have
a URL such as https://ports.example.com:30134 (using the ext_hostname and port values from the example above).
You can set up aliases to this URL using DNS, but the explicit port specification must always be added.
If you are using a framework such as Python or Perl which sets up
uWSGI (or any other framework that provides its own intermediate web
server) Harbor can provision an HTTPS server in the app container that
forwards HTTPS requests to the framework's HTTP server. To do this, add
the suffix
https to the name of your Harbor service. For example:
name: harbor-test-app
services:
  custom-cert-https: harbor
Put your server certificate and key (named
harbor.crt and
harbor.key respectively) in a directory called
certs in the
application's root directory. For example:
app_root
├── certs
│   ├── harbor.crt
│   └── harbor.key
└── ...
Alternatively, use a standalone or buildpack setup which provisions its own intermediate web server instead.
If your application uses multiple SSL certificates, use the following naming scheme:
<harbor service name>.key
<harbor service name>.crt
For example:
app_root
├── certs
│   ├── harbor-https-custom-1.crt
│   ├── harbor-https-custom-2.key
│   └── ...
└── ...
The proxy will look for these certs before reverting to
harbor.crt and
harbor.key.
Important
Using Harbor in this way does not take advantage of any load balancing set up for regular web traffic through the Routers and Load Balancer.
If you have multiple instances of your app routing through a Harbor TCP port as above, connections will be distributed via round-robin. | https://docs.stackato.com/user/services/port-service.html | 2019-05-19T09:19:55 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.stackato.com |