Dataset columns: content (string, 0 to 557k chars) · url (string, 16 to 1.78k chars) · timestamp (timestamp[ms]) · dump (string, 9 to 15 chars) · segment (string, 13 to 17 chars) · image_urls (string, 2 to 55.5k chars) · netloc (string, 7 to 77 chars)
Sponsorship keeps our open-source projects running. We want our open-source projects to run independently of our business, Terl, because that way: they are not tied to the longevity of our business; they are not tied to the financial incentives of our business; they remain open-source forever; and they live a long time. You can find out more about why we've decided to have supporters on our supporters information page. To become a sponsor you must be willing to commit at least £5,000 per month for at least one year, though that number is entirely flexible; the higher the number, the more benefits we provide. You can also become a supporter if that number's a bit too high. Sponsors receive organisation-wide 12-hour turnaround on support, bug, and feature requests across all our open-source projects, plus a wide array of other benefits such as: discounts on all our products and services; free one-to-one support from our devs (up to 5 hours of consultations per month); merchandise; and some advertisement of your products and services on a yearly basis (e.g. reviews). Nothing here yet! Be the first! Talk to us at [email protected]. We don't bite.
https://docs.lazycode.co/lazysodium/sponsors
2019-02-16T05:03:35
CC-MAIN-2019-09
1550247479885.8
[]
docs.lazycode.co
Basic LINQ Query Operations (C#) This topic gives a brief introduction to LINQ query expressions and some of the typical kinds of operations that you perform in a query. More detailed information is in the following topics: Standard Query Operators Overview (C#); Walkthrough: Writing Queries in C#. Note: if you are already familiar with a query language such as SQL or XQuery, you can skip most of this topic. Read about the from clause in the next section to learn about the order of clauses in LINQ query expressions. The from clause specifies the data source (customers) and the range variable (cust): // queryAllCustomers is an IEnumerable<Customer> var queryAllCustomers = from cust in customers select cust; Note: for non-generic data sources such as ArrayList, the range variable must be explicitly typed. For more information, see How to: Query an ArrayList with LINQ (C#) and from clause. Filtering and ordering: var queryLondonCustomers3 = from cust in customers where cust.City == "London" orderby cust.Name ascending select cust; To order the results in reverse order, from Z to A, use the orderby…descending clause. For more information, see orderby clause. Grouping and joining: var innerJoinQuery = from cust in customers join dist in distributors on cust.City equals dist.City select new { CustomerName = cust.Name, DistributorName = dist.Name }; Selecting (projections) is done with the select clause.
https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/basic-linq-query-operations
2019-02-16T06:13:43
CC-MAIN-2019-09
1550247479885.8
[]
docs.microsoft.com
trident.LightRay¶ class trident.LightRay(parameter_filename, simulation_type=None, near_redshift=None, far_redshift=None, use_minimum_datasets=True, max_box_fraction=1.0, deltaz_min=0.0, minimum_coherent_box_fraction=0.0, time_data=True, redshift_data=True, find_outputs=False, load_kwargs=None)[source]¶ A 1D object representing the path of a light ray passing through a simulation. LightRays can be either simple, where they pass through a single dataset, or compound, where they pass through consecutive datasets from the same cosmological simulation. One can sample any of the fields intersected by the LightRay object as it passes through the dataset(s). For compound rays, the LightRay stacks together multiple datasets in a time series in order to approximate a LightRay's path through a volume and redshift interval larger than a single simulation data output. The outcome is something akin to a synthetic QSO line of sight. Once the LightRay object is set up, use LightRay.make_light_ray to begin making rays. Different randomizations can be created with a single object by providing different random seeds to make_light_ray. Parameters
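As a minimal sketch of the workflow just described (the dataset path, field names, and endpoint coordinates are assumed example values, not part of the documentation above), a simple ray through a single dataset might be built like this:

```python
# A minimal sketch with assumed example values: build a simple LightRay
# through one dataset and sample two fields along the path.
from trident import LightRay

lr = LightRay("enzo_cosmology_plus/RD0009/RD0009")  # hypothetical dataset path

lr.make_light_ray(
    seed=12345,                          # different seeds give different rays
    start_position=[0.0, 0.0, 0.0],      # ray endpoints in code units (assumed)
    end_position=[1.0, 1.0, 1.0],
    fields=["temperature", "density"],   # fields to sample along the ray
    data_filename="lightray.h5",         # where the sampled ray is written
)
```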
https://trident.readthedocs.io/en/latest/generated/trident.LightRay.html
2019-02-16T06:08:33
CC-MAIN-2019-09
1550247479885.8
[]
trident.readthedocs.io
Tools¶ Doctrine Console¶ The Doctrine Console is a Command Line Interface tool for simplifying common tasks during the development of a project that uses Doctrine PHPCR-ODM. It is built on the Symfony Console Component. If you have not set up the console yet, take a look at the Console Setup section. Command Overview¶ There are many commands, for example to import and export data, modify data in the repository, query or dump data from the repository, or work with PHPCR workspaces. Run the console without any arguments to see a list of all commands. The commands are self-documenting; see the next section for how to get help. Note: PHPCR-ODM specific commands start with doctrine:. The commands that start with only phpcr: come from the phpcr-utils and are not specific to Doctrine PHPCR-ODM. If you use the PHPCR-ODM bundle in Symfony2, all commands will be prefixed with doctrine:phpcr. Getting documentation of a command¶ Type ./vendor/bin/phpcrodm on the command line and you should see an overview of the available commands, or use the --help flag to get information on the available commands. If you want to know more about the use of the register command for example, call: ./vendor/bin/phpcrodm help doctrine:phpcr:register-system-node-types PHPCR implementation specific commands¶ Jackrabbit specific commands¶ If you are using jackalope-jackrabbit, you also have a command to start and stop the Jackrabbit server: jackalope:run:jackrabbit - start and stop the Jackrabbit server. Register system node types¶ This command needs to be run once on a new repository to prepare it for use with the PHPCR-ODM. Failing to do so will result in errors when you try to store a document that uses a node type other than nt:unstructured, like a file or folder. Adding your own commands¶ You can also add your own commands on top of the Doctrine supported tools by adding them to your binary. To include a new command in the console, either build your own console file or copy bin/phpcrodm.php into your project and add things as needed. Read more on the Symfony Console Component in the official Symfony documentation.
http://docs.doctrine-project.org/projects/doctrine-phpcr-odm/en/latest/reference/tools.html
2016-09-25T01:58:21
CC-MAIN-2016-40
1474738659753.31
[]
docs.doctrine-project.org
This screen is where you can view and edit permissions for items that are part of this Contact Category. The tree on the left represents the user groups that have been defined for your Joomla! site, while the right part displays permissions for the currently selected group and lets you edit those permissions. At the top right you will see the toolbar:
http://docs.joomla.org/index.php?title=Help16:Components_Contact_Categories_Edit&diff=77187&oldid=34253
2013-12-05T07:03:20
CC-MAIN-2013-48
1386163041297
[]
docs.joomla.org
Making software and applications available on a network drive To make the BlackBerry® Device Software or applications available for users to install on or add to their BlackBerry devices, you must save the BlackBerry Device Software and applications to a network drive and create a software index. You can maintain only one version of software or an application on the network drive at a time.
http://docs.blackberry.com/en/admin/deliverables/12934/Make_sw_apps_avail_on_network_drive_189002_11.jsp
2013-12-05T06:43:46
CC-MAIN-2013-48
1386163041297
[]
docs.blackberry.com
While it is true that PHP4 allows only one parameter, it is not true that PHP5 allows only 2 parameters. The correct function signature for PHP5 should be: resource ibase_blob_open([ resource link_identifier, ] string blob_id) If a link_identifier is not provided, then the "default" link_identifier will be used. The default link identifier is assigned every time you call ibase_connect or ibase_pconnect, so if you have multiple connections it will be whichever was connected LAST.
http://docs.php.net/manual/es/function.ibase-blob-open.php
2013-12-05T06:43:24
CC-MAIN-2013-48
1386163041297
[]
docs.php.net
This page is under construction. Basic Operations APIs: receive() - for a queue, receive is the standard JMS receive method. Shared Features Features that are used across APIs are listed in this section. Message Selector - In JMS, a Message Selector is a String that defines filtering conditions for receiving messages; refer to the JMS Javadoc for details. In Groovy Messaging Service, a Closure works similarly to a map: it is passed the destination name or destination, and is expected to return a String to be used as the message selector. (TODO: message selector is not implemented) When there are multiple destination and message selector combinations,
http://docs.codehaus.org/pages/viewpage.action?pageId=110166041
2013-12-05T06:43:24
CC-MAIN-2013-48
1386163041297
[]
docs.codehaus.org
Zoom in to or out from a webpage On a webpage, press the Menu key > Zoom. After you finish, to turn off zoom mode, press the Escape key.
http://docs.blackberry.com/en/smartphone_users/deliverables/36023/Zoom_in_to_a_web_page_60_1065587_11.jsp
2013-12-05T06:46:08
CC-MAIN-2013-48
1386163041297
[array(['menu_key_bb_bullets_39752_11.jpg', 'Menu'], dtype=object) array(['escape_key_arrow_curve_up_left_39748_11.jpg', 'Escape'], dtype=object) ]
docs.blackberry.com
Can I get the Austral Addaline 35 as a Wall Mounted clothesline? Yes; in fact, the initial price displayed above is for the wall-mounted version, which is standard for any of the Fold Down type clotheslines. To have it ground mounted, you will need to add the Ground Mount Kit option to your order; this option is located on the right, below the price displayed on the product page.
https://docs.lifestyleclotheslines.com.au/article/90-can-i-get-the-austral-addaline-35-as-a-wall-mounted-clothesline
2020-09-18T20:13:42
CC-MAIN-2020-40
1600400188841.7
[]
docs.lifestyleclotheslines.com.au
Create a new customer contact As an account manager, you can create a new contact when you receive an email from a customer whose details are not in the Microsoft Outlook contacts list. Before you begin Role required: sn_customerservice.contact_manager or sn_customerservice.proxy_contact Procedure Open an email message you received from the contact. On the Microsoft Outlook Home tab, click View in ServiceNow. The contact details are not available, and the Outlook add-in panel displays the No contact was found message. Click the more actions icon and select Create Contact. Fill in the contact details in the case form and click Submit. You can also click the Pop-out icon to create a contact from the CSM portal page.
https://docs.servicenow.com/bundle/orlando-customer-service-management/page/product/customer-service-management/task/create-new-contact.html
2020-09-18T21:17:11
CC-MAIN-2020-40
1600400188841.7
[]
docs.servicenow.com
You do not have to be the owner to share saved pinboards; any user can share them, based on the access levels the user has. You can share a pinboard from the list of pinboards on the main pinboards page, or from the pinboard itself. Share from the Pinboards page To share a pinboard from the main pinboards page, follow these steps. Configure the pinboard to look as it must appear when you share it. Save the pinboard by clicking the ellipsis icon and selecting Save. Click the sharing icon. If users do not have access to the underlying data source, a yellow warning symbol appears; if you click on it, it tells you to enable access. If you own the underlying data source, refer to share uploaded data. If you do not own the data source, ThoughtSpot sends an email to the owner or your ThoughtSpot administrator to tell them to share the data. To stop sharing with a user or group, click the x icon. You can send an email notification and an optional message. Click Share.
https://docs.thoughtspot.com/6.0/admin/data-security/share-pinboards.html
2020-09-18T19:46:39
CC-MAIN-2020-40
1600400188841.7
[array(['/6.0/images/sharing-pinboards.gif', 'Save and share pinboards Save and share pinboards'], dtype=object)]
docs.thoughtspot.com
Running a Groovy Service After creating and populating your Groovy service, you may want to run it. Running a Groovy service is a pretty straightforward process; just do the following: From the Navigator view, right-click on the Groovy class or script which contains the service you want to run, then click Invoke in Browser. Invoke in HTTP Client Aside from invoking a service via the browser, you can also invoke services via the HTTP Client. The HTTP Client is a neat API development tool in Martini Desktop that allows developers to compose advanced requests for testing their APIs. Select the service you want to run from the appearing dialog. Extra services Aside from the services explicitly declared in your script, you may find other services by checking the "Show hidden services" checkbox. These services are from your class or script's superclass. Thus, if you have used inheritance, you will also see the services of the inherited class. Click OK (or Invoke, in Martini Online). A new browser tab will open up displaying the details of the service you are about to invoke. The newly launched tab displays an interface we know as the service invoker. This page contains the description of the service you are trying to invoke (retrieved from the provided Groovydoc), the parameters of the service and their descriptions, request and response type options, and the invoke URL for the service. If your service has input parameters, you may populate them with data. When you're ready to finally run your service, simply click the Invoke button. Use the run icon to run Groovy services You can run Groovy services in Martini Desktop by opening the file and clicking the run icons beside the line numbers. You will find these icons across method signatures. Doing so will open the invoke window for the service.
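Because the service invoker exposes a plain invoke URL, a Groovy service can also be called from any HTTP client outside Martini. Below is a minimal Python sketch; the URL, parameter name, and credentials are hypothetical placeholders rather than values from the Martini documentation.

```python
# Hypothetical sketch: invoking a Groovy service through its invoke URL.
# The host, path, parameter name, and credentials below are assumptions.
import requests

INVOKE_URL = "http://localhost:8080/api/examplePackage/exampleService"  # hypothetical

response = requests.post(
    INVOKE_URL,
    params={"inputParameter": "hello"},      # service input parameters, if any
    headers={"Accept": "application/json"},  # pick the response type you want
    auth=("user", "password"),               # placeholder credentials
    timeout=30,
)
response.raise_for_status()
print(response.json())
```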
https://docs.torocloud.com/martini/latest/developing/groovy/running/
2020-09-18T21:22:21
CC-MAIN-2020-40
1600400188841.7
[array(['../../../placeholders/img/coder-studio/compressed/groovy-service-running-http.png', 'Invoking a Groovy service via the service invoker'], dtype=object) array(['../../../placeholders/img/coder/compressed/groovy-service-running-http.png', 'Invoking a Groovy service via the service invoker'], dtype=object) array(['../../../placeholders/img/common/compressed/groovy-service-in-service-invoker.png', 'Service invoker'], dtype=object) array(['../../../placeholders/img/coder-studio/groovy-service-running-from-editor-line.png', 'Invoking services from the editor'], dtype=object) ]
docs.torocloud.com
https://docs.flexie.io/
2020-02-16T23:43:11
CC-MAIN-2020-10
1581875141430.58
[]
docs.flexie.io
While a traditional CRM gathers data about prospects via traditional channels and usually pushes them into a funnel, a Social CRM is today's term for a customer-focused CRM. As companies are increasingly present on social networks, customer communication may take place across distributed channels, and all of those need to be tracked and logged in a CRM system automatically. The best CRM system today is one that integrates easily, connects with distributed social networks, and brings all the information together in a centralized hub. Communication channels can be very diverse: you can text your customers, call them, send them emails, and have all of this interaction history in a single place. Evolution of the idea Traditional Customer Relationship Management systems were, years ago, a corporate domain for automating business processes in marketing, sales and customer support. The big impact of social networks created the need for a CRM for all small and medium-sized businesses. Even a single person who runs a business can organize their marketing and sales with a CRM. Gathering prospects from diversified channels, building customer relationships through their preferred channel, and bringing all that information into a CRM automatically is the evolution from Traditional to Social CRM. In the digital era, everyone has a voice through the internet. The customer is placed at the center, and keeping them happy is very important for a business. Companies can engage with the customer directly, find out more about their interests, and help with their services. A Social CRM is an enhanced traditional CRM, more open to communication and to building what its name suggests: relationships. In a Social CRM, the customer is the one who controls the conversation, which may take place in a Facebook group or a Twitter post, where it can attract opinions and be shared multiple times. While a Traditional CRM usually is sales oriented and goes directly to preparation of the sales pitch, a Social CRM engages to find out the customer's interests and preferences first. "Don't find customers for your products, find products for your customers" - Seth Godin A Social CRM analyses customers' buying patterns in depth and then applies the right marketing campaign, offering exactly what the customer is asking for. All of this is very easy in Flexie CRM, and that is what makes it so special. You can be connected through unlimited channels and still have the information structured inside Flexie CRM. To stay updated with the latest features, news and how-to articles and videos, please join our group on Facebook, Flexie CRM Academy, and subscribe to our YouTube channel, Flexie CRM.
https://docs.flexie.io/docs/marketing-trends/social-crm-a-must-for-your-business/
2020-02-16T23:43:16
CC-MAIN-2020-10
1581875141430.58
[]
docs.flexie.io
9.1.3 September 4th, 2018 Men & Mice announces the release of Version 9.1.3 of the Men & Mice Suite. This is a maintenance release containing various fixes and improvements.
Bug fixes
- An issue was fixed where, in some cases for xDNS zones, an additional record would be created
- An issue was fixed with the Web application installer where it would in some cases not uninstall properly
- An issue was fixed with the Web application where the filtering sidebar collapsed state was not persisted
- An issue was fixed with authentication on the Web application where, in the case of a failed login, an external authentication dialog would be shown
- An issue was fixed in the Web interface where in some cases DNS records would not be shown in the IP address properties
- An issue was fixed where changing zone options for forward zones on Windows Server 2016 would always return an error
- An issue was fixed where it was not possible to create A records that had the same name as the DNS zone for AD-integrated zones
- An issue was fixed on the Men & Mice appliances regarding SNMP trap behavior
Improvements
- BIND on the DNS & DHCP appliances was upgraded to 9.11.4-P1
https://docs.menandmice.com/display/MM910/9.1.3
2020-02-16T22:53:31
CC-MAIN-2020-10
1581875141430.58
[]
docs.menandmice.com
Health insurance isn’t insurance, it’s an annuity. Can you afford it? So the waters have parted and America will now provide “affordable” health insurance for all. In an earlier blog post, I explained why health insurance is different from any other kind of insurance: everyone who is insured collects on it, and not just a little, but for hundreds of thousands of dollars. In that respect, buying health insurance is really more like buying an annuity. We’ve all seen the figures on US healthcare spending; somewhere between $2.2 and $2.5 trillion a year, or roughly $8,000 each year for every man, woman, and child. Statistics also tell us that most of this is spent on people with chronic conditions like high blood pressure, diabetes, heart failure, cancer, etc. (but let’s face it, every one of us will end up with one or more chronic conditions eventually). We also know that an insane amount of money gets spent in America during the last few weeks or months of a person’s life. Looking at these figures another way, we could say that current spending levels on healthcare suggest that each of us will use about $600,000 worth of care over our lifetime ($8,000 x 75 years). So if we had a completely level playing field, each of us would need to save more than half a million dollars (in today’s dollars) to pay for our medical care. OK. I know this is very simplistic thinking. In fact, it is way too simplistic and far too frightening to share with the American people. But the point is this: unless we do something to fundamentally rein in the cost of care, or rein in our expectations about what everyone is entitled to, we are doomed to indebtedness beyond hope of reprieve. If we are going to entitle everyone to equal care, then we had better do something about the cost curve. Most European nations do a better job. Depending on the country, they spend between half and two-thirds of what we spend on care in America. Technology is part of the solution. And frankly, I see much more focus on the role of technology in providing healthcare services when I look outside the US than I see within it. Bending the cost curve will also necessitate a recalibration of expectations about care. Everyone won’t get everything they want or need. The world over, that is just a fact of life. So, now that I’ve had a chance to vent, I’ll just calm down and blissfully go along knowing that we have solved the healthcare crisis in America and that everyone is now protected from the harsh realities of life. I feel better already. Bill Crounse, MD Senior Director, Worldwide Health Microsoft
https://docs.microsoft.com/en-us/archive/blogs/healthblog/health-insurance-isnt-insurance-its-an-annuity-can-you-afford-it
2020-02-16T23:21:57
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
Database Changes¶ When changing code for this project you may add, remove or modify columns in the SQL database. Those changes must also be made in the sql directory. Data¶ Some tables may require data; place it in the directory sql/data. The files must be named with the insert prefix followed by the table name. So if you create the new table domaindrivers you have to: - Create a file at sql/mysql/domaindrivers.sql - Optionally create a file at sql/data/insert_domaindrivers.sql with the insertions SQLite¶ SQLite definitions are used for testing and are created from the MySQL files. Once the MySQL file is created, add the new table name to the sql/mysql/Makefile and run make. It requires
https://ravada.readthedocs.io/en/latest/devel-docs/database_changes.html
2020-02-16T21:49:28
CC-MAIN-2020-10
1581875141430.58
[]
ravada.readthedocs.io
Product Index Makaio is a high quality character for Lee 7 for use in Daz Studio 4.8 and up. The head and body are uniquely custom sculpted in ZBrush, and the skin was created using high quality photo references for depth and detail. Please Note: While Makaio was designed with Daz Studio 4.8 (and up) in mind, and its amazing Iray render engine, he will also work and look great using the 3Delight render engine.
http://docs.daz3d.com/doku.php/public/read_me/index/29328/start
2020-02-16T22:23:17
CC-MAIN-2020-10
1581875141430.58
[]
docs.daz3d.com
This package is the main dependency of our framework. With it, you can start your NodeJs application and scale it in a non-intrusive, non-imperative way, maintaining everything over a Dependency Injection pattern working together with the Module/Component abstraction provided by Appt. We assume you got here after seeing the Getting Started page. If you haven't, we strongly recommend stepping back and taking five minutes to read it, to get used to some key concepts we're going to apply here. $ npm install @appt/core --save
import { Module, Component, TDatabase } from '@appt/core';
import { myCustomSettings } from '@appt/core/config';
https://docs.apptjs.com/packages/appt-core
2020-02-16T22:44:39
CC-MAIN-2020-10
1581875141430.58
[]
docs.apptjs.com
OpenID Connect details¶ OpenID Connect (OIDC) is a simple standardized identity (authentication) layer on top of OAuth 2.0. After a successful login, the user agent is in possession of an access token and an ID token. The access token looks the same as for plain OAuth2. The ID token is a signed JSON Web Token with info about the user. The ID token only includes the minimum information required by the standard. More information, including the user’s name and email, can be obtained from the userinfo endpoint described in the reference doc. Discovery and configuration¶ All you need to know in order to configure your OpenID Connect client for the Feide platform is available through the discovery endpoint. Client registration¶ In order to access the Feide APIs, you need to register your application and obtain the OIDC/OAuth credentials for your application. You may register your application using the dashboard. When registering a client, you need to know the redirect URI endpoint of your application. Registration of new application (screenshot) Scopes¶ In order to use OpenID Connect you need to have the openid scope. This can be selected in the dashboard. OpenID Connect also defines a few other standard scopes; of these, the profile and email scopes are supported. The application should include openid in the scope parameter of the authorization request. If you are using an OIDC library, this is probably already taken care of. OpenID Specifications¶ Supported features¶ - Authorization Code flow response_type: code - Implicit grant flow response_type: id_token token - Hybrid flow response_type: code id_token - ID token signed with PKI (RS256) - Proof Key for Code Exchange (PKCE) may be used with authorization code flow and hybrid flow. code_challenge_method: S256 Implicit grant flow response_type: id_token token is also still supported, but due to weak security it should be avoided for new clients. Existing clients should migrate to authorization code flow with PKCE. Dynamic registration is not supported. Requiring a specific authentication level¶ The application can request a specific authentication level via the optional acr_values parameter. Currently this only works with the Feide login provider, and the only supported value is urn:mace:feide.no:auth:level:fad08:3, which triggers Feide MFA. An organization can also request a specific authentication level for some or all users; see the multifactor authentication deployment guide. Login hints - bypassing the login discovery page¶ The default behaviour when a client sends the end user to Feide for authentication is that the user first meets either the account chooser or the ID-provider discovery page. Sometimes the client may want to let the user bypass the discovery page / account chooser and go to a specific ID provider. This is possible by using the OpenID login_hint parameter to the authorization endpoint. The following prerequisites need to be met to use this functionality: - The client owner needs to configure the client to not require user interaction. This can be done using the Developer Dashboard. Uncheck the checkbox «Require user interaction». - Consider the security implications of allowing Single Sign-on to automatically log users in to your site without user interaction. - Make sure that the openid scope is enabled for the client and included in the authentication request. 
If not, the request is interpreted as a plain OAuth request, and then the login hint functionality is not supported. The login_hint parameter is sent as part of the authentication request to Feide. A login hint can also name the user who is expected to log in; if the user tries to log in with another account, the user will get a warning that the user was not expected, but the userID is not enforced beyond that. Supported login_hint values:
idporten - Automatically send user to ID-porten.
eidas - Automatically send user to eIDAS (European authentication framework).
edugain|urn:mace:entity - Automatically send user to the Identity Provider with entity ID urn:mace:entity via eduGAIN.
openidp - Automatically send user to Feide's OpenIdP (guest accounts).
facebook - Automatically send user to Facebook.
twitter - Automatically send user to Twitter.
linkedin - Automatically send user to LinkedIn.
ID Token JWT decoded:
{
  "iss": "auth.dataporten.no",
  "aud": "5ac8753f-8296-41bf-b985-59d89769005e",
  "sub": "76a7a061-3c55-430d-8ee0-6f82ec42501f",
  "iat": 1449065432,
  "exp": 1449069032,
  "auth_time": 1449065364
}
The example above shows what the ID token includes when only the openid scope is enabled. All times are in seconds since 1970-01-01 00:00:00 UTC. - iss - Issuer - aud - Audience - the client ID - sub - Subject - the internal ID of the authenticated user. This ID is stable but opaque, not releasing any additional information about the user. - iat - Issued at - time issued (in seconds since 1970-01-01T0:0:0 UTC) - exp - Expiration time (in seconds since 1970-01-01T0:0:0 UTC) - auth_time - time when the end-user authentication occurred The attributes acr, at_hash, c_hash and nonce may also be present. See the OIDC standard for info about these. The client must validate the ID token. In particular, iss has to be auth.dataporten.no and aud has to match the client ID. See the OIDC standard for full details about ID Token Validation. To see what is inside an ID token, the online JWT debugger at jwt.io is useful.
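To make the authorization request concrete, here is a minimal Python sketch that assembles an authorization URL with the openid scope, a login_hint, and acr_values. The endpoint path and redirect URI are assumptions for illustration; the client ID is the example value from the decoded ID token above.

```python
# Minimal sketch: building a Feide OIDC authorization request URL.
# The endpoint path and redirect_uri are placeholder assumptions.
from urllib.parse import urlencode
import secrets

AUTH_ENDPOINT = "https://auth.dataporten.no/oauth/authorization"  # assumed endpoint

params = {
    "response_type": "code",                               # authorization code flow
    "client_id": "5ac8753f-8296-41bf-b985-59d89769005e",   # example client ID from the doc
    "redirect_uri": "https://example.org/callback",        # hypothetical
    "scope": "openid profile email",                       # openid is required for OIDC
    "state": secrets.token_urlsafe(16),                    # CSRF protection
    "login_hint": "idporten",                              # bypass the discovery page
    "acr_values": "urn:mace:feide.no:auth:level:fad08:3",  # request Feide MFA
}

print(f"{AUTH_ENDPOINT}?{urlencode(params)}")
```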
https://docs.feide.no/reference/oauth_oidc/openid_connect_details.html
2020-02-16T23:33:38
CC-MAIN-2020-10
1581875141430.58
[array(['../../_images/registerclient2.png', 'Registration of new application screenshot'], dtype=object)]
docs.feide.no
Getting access to the customer portal¶ To manage your organization's integrations with Feide, your organization needs access to the Feide customer portal. If your organization is already a service provider or home organization in Feide, you should already have access. If not, you need to register as a service provider in Feide. The first step is to fill out the application form. The Feide administrators that need access also need to register accounts at the Feide OpenIdP. Once you have filled out the application form and the administrators have registered accounts at the Feide OpenIdP, send an email to [email protected] with the usernames of the Feide administrators. E.g.: Subject: Usernames for administrators at [organization] These are the usernames for the Feide administrators: - Name: username - Other Name: otherusername Once we have that information, we will register your organization in the customer portal and give you access.
https://docs.feide.no/service_providers/getting_started/customer_portal.html
2020-02-16T23:32:53
CC-MAIN-2020-10
1581875141430.58
[]
docs.feide.no
Ribbon, ribbon... Where did my favorite commands go? As Richard posted, there's no school in the Puget Sound area today. I'm thankful for Outlook Web Access and Live Meeting today, as they allow us to dial in for broad meetings that we've had on that schedule for months now. Now back to our regularly scheduled programme... The Office Help and How-to site has a section on using the new user interface in Microsoft Office 2007 and how it can help ease the way you work. As we dogfooded Office 2007 this summer, the only thing I added to the Quick Access Toolbar was "Save as..." given I needed to save docs in the Office 2003 format for those who hadn't yet made the leap. I don't bother with it any more: nice to see that I'm now receiving documents in Office 2007 format from some of the last teams to move. ;) From the Office site:
https://docs.microsoft.com/en-us/archive/blogs/mthree/ribbon-ribbon-where-did-my-favorite-commands-go
2020-02-16T23:21:52
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
About the Software Inventory Client Agent Applies To: System Center Configuration Manager 2007, System Center Configuration Manager 2007 R2, System Center Configuration Manager 2007 R3, System Center Configuration Manager 2007 SP1, System Center Configuration Manager 2007 SP2 Software inventory is the process of gathering software information from client computers in a Microsoft System Center Configuration Manager 2007 site. The information gathered can include data on the operating system, installed programs, and any files you want to inventory or collect. Configuration Manager 2007 stores this data in the site database, where you can use the information in queries to generate and view reports, or to build software-specific collections. For example, you can create a collection of all computers that are running Microsoft Windows® XP and that have Microsoft Office 2003 installed. To enable and configure software inventory client agent settings for a site, you use the General tab of the software inventory client agent properties. For more information about enabling and configuring the software inventory client agent, see Software Inventory Client Agent Properties: General Tab. See Also Tasks How to Configure Software Inventory for a Site How to Exclude Folders From Software Inventory Concepts About Collecting Software Inventory For additional information, see Configuration Manager 2007 Information and Support. To contact the documentation team, email [email protected].
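The collection example above is typically expressed as a WQL membership query. The following sketch is illustrative only; the inventory class and property names are assumptions that should be verified against your site's schema before use.

```
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
inner join SMS_G_System_OPERATING_SYSTEM
    on SMS_G_System_OPERATING_SYSTEM.ResourceID = SMS_R_System.ResourceId
inner join SMS_G_System_ADD_REMOVE_PROGRAMS
    on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId
where SMS_G_System_OPERATING_SYSTEM.Caption like "%Windows XP%"
    and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName like "Microsoft Office%2003%"
```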
https://docs.microsoft.com/en-us/previous-versions/system-center/configuration-manager-2007/bb681072(v=technet.10)?redirectedfrom=MSDN
2020-02-16T21:22:32
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
The bipartiteSBM User Guide¶ The bipartiteSBM is a Python library for fast community inference under the bipartite Stochastic Block Model (biSBM), using an MCMC sampler or the Kernighan-Lin algorithm. It estimates the number of communities (as well as the partition) for a bipartite network. The bipartiteSBM utilizes the Minimum Description Length principle to determine a point estimate of the numbers of communities in the biSBM that best compresses the model and data. Several test examples are included. Supported and tested on Python>=3.6. If you have any questions, please contact [email protected].
Contents: Quick start · Dataset · Module documentation · Frequently Asked Questions (FAQ) · Additional resources
Acknowledgements¶ The bipartiteSBM is inspired and supported by the following great humans: Daniel B. Larremore, Tiago de Paula Peixoto, Jean-Gabriel Young, Pan Zhang, and Jie Tang. Thanks to Valentin Haenel, who helped debug and fix the Numba code.
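To give a feel for the inference loop described above, here is a rough Python sketch. The import path, class name, and methods are hypothetical placeholders (consult the Module documentation for the real entry points); only the overall flow — feed in a bipartite edge list, search over community counts, read off the MDL point estimate — reflects the text.

```python
# Hypothetical sketch of the inference workflow; the module and class names
# below are placeholders, not the library's documented API.
from bipartitesbm import OptimalKs  # hypothetical import path

# A bipartite edge list: (type-a node, type-b node) pairs.
edgelist = [(0, 3), (0, 4), (1, 3), (1, 5), (2, 4), (2, 5)]

# The heuristic searches over (K_a, K_b), the numbers of communities on the
# two sides, keeping the point estimate that minimizes description length.
solver = OptimalKs(edgelist, na=3, nb=3)  # na/nb: nodes of each type (assumed)
solver.minimize()                         # MCMC or Kernighan-Lin refinement

print(solver.best_ka_kb())                # estimated (K_a, K_b)
print(solver.best_partition())            # node-to-community assignment
```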
https://docs.netscied.tw/additional-resources/usage/make-plots.html
2020-02-16T22:16:47
CC-MAIN-2020-10
1581875141430.58
[]
docs.netscied.tw
Released on: Wednesday, March 7, 2018 - 13:20 Fixes - Fixed an issue where the Java agent 4.3.0 would not report data depending on the locale setting of the JVM. Java agent 4.3.0 failed to send event data if the JVM locale is set to use a comma as the decimal separator; you would see NumberFormatException in the agent log. - Fixed an issue where the agent could cause an application deadlock when two tokens are created and linked from the other's thread. This affects users of the New Relic token API and also users of the Hystrix framework. - Fixed an issue where the agent would not capture JMX DataSource information from Tomcat when JNDI GlobalNamingResources is used. Fixes in 4.7.0 - None Fixes in 4.8.0 -
https://docs.newrelic.co.jp/docs/release-notes/agent-release-notes/java-release-notes/java-agent-3471
2020-02-16T22:21:16
CC-MAIN-2020-10
1581875141430.58
[]
docs.newrelic.co.jp
4.4.0. A modern, feature-rich and highly tunable Java client library for Apache Cassandra® (2.1+), DataStax Enterprise (4.7+), and DataStax Apollo, using exclusively Cassandra's binary protocol and Cassandra Query Language (CQL).
https://docs.datastax.com/en/developer/java-driver/4.4/
2020-02-16T21:41:00
CC-MAIN-2020-10
1581875141430.58
[]
docs.datastax.com
So what does Cluster Recovery actually recover anyway? The Windows Server 2003 Resource Kit introduced a handy, easy-to-use utility that even warranted its own download link on Microsoft.com/downloads. However, the name “Cluster Recovery” can lead to misunderstanding, and taken at face value makes quite a bold statement. So what exactly does this handy utility actually recover? ClusterRecovery.exe in fact performs two completely independent but important tasks. The first functionality probably suits its name the best, in that it recovers lost cluster resource checkpoints. To understand this functionality, a bit of background technical detail is necessary here. The Microsoft server clustering service provides the ability to cluster various server features, roles, applications and services. These “resources” may have registry configuration keys stored in various locations under HKLM\Software and/or HKLM\System that need to be maintained and synchronized between all nodes of a cluster. This synchronization is necessary to ensure stability and predictable behavior of the resource across the entire cluster. The clustering service provides a method for replicating these keys between server nodes, and this automated process includes saving these registry keys onto the quorum device under the default folder location <quorumdrive>:\MSCS\{Resource GUID}. These folders and files (0000000n.CPR) are what we call Resource Checkpoints. Should these checkpoint files ever get lost or deleted for various reasons, we can use ClusterRecovery.exe to simply recreate these checkpoint files. The second functionality that Cluster Recovery provides could arguably be the more valuable; however, it may also be the most misunderstood. The option to “Replace a physical disk resource” doesn’t actually do any disk replacement for you; it does, however, automate a very tedious clean-up job on all of your other resources if you have had to replace one or more clustered storage volumes. Here is what you need to know before using Cluster Recovery if you have had a disk failure or simply need to replace or migrate your clustered disks to new disks. First you will need to present/attach your new LUNs, partition and format the disks, and restore all data to your new disks. Then, using Cluster Administrator, create a new Physical Disk resource to manage the new disk. Once that is complete you are still left with an original disk resource that, more than likely, will have multiple other resources that are configured to be dependent upon it. In the case of file server clusters this could be quite a few File Share resources. Here is where Cluster Recovery comes into play. More precisely, it will analyze the cluster resources to find any resource that is dependent upon your specified original disk resource and move that dependency to the new disk resource. It will also rename your original disk resource to “<Original Name> (lost)” and rename the new disk resource to “<Original Name>”. For example, if you have a Physical Disk resource “Disk U:” with a File Share resource named “User Shares - U:\Users” that is dependent upon “Disk U:”, and now you have added a larger “New DISK U:” to the cluster to replace the original, the following is what will occur through the use of Cluster Recovery:
Before:
[Disk U:] <-- original resource
  |_______ [User Shares - U:\Users]
[New DISK U:] <-- new resource
After:
[Disk U: (lost)] <-- original resource
[Disk U:] <-- new resource
  |_______ [User Shares - U:\Users]
You can download Cluster Recovery here. 
Notes: Once again, you must add, format and restore any data and manage the drive letters manually, in addition to creating the cluster resource yourself. Also, there is no 64-bit version as of this writing, and the 32-bit download should not be used with 64-bit operating systems. Author: Chris Allen Microsoft Enterprise Platforms Support Support Escalation Engineer
https://docs.microsoft.com/en-us/archive/blogs/askcore/so-what-does-cluster-recovery-actually-recover-anyway
2020-02-16T23:33:07
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
In this scenario, you store an incoming message in an In-Memory Message Store. You then use a Message Processor to retrieve the message from the store and deliver it to the back-end service. Message Store - Right-click on SampleServices in the Project Explorer and navigate to New -> Message Store. Select Create a new message-store artifact and fill in the information in the following table. Deploying the Artifacts to WSO2 Enterprise Integrator. You have now explored how WSO2 EI can be used to implement store-and-forward messaging using Message Stores, Message Processors and the Store mediator.
https://docs.wso2.com/display/EI600/Storing+and+Forwarding+Messages
2020-02-16T22:46:59
CC-MAIN-2020-10
1581875141430.58
[]
docs.wso2.com
Run Ravada in development mode¶ Once it is installed, you have to run the two Ravada daemons. One is the web frontend, and the other one runs as root and manages the virtual machines. It is good practice to run each one in a different terminal: The web frontend runs with the morbo tool that comes with Mojolicious. It auto-reloads itself if it detects any change in the source code: ~/src/ravada$ morbo -v ./rvd_front.pl The backend runs as root because it has to deal with the VM processes. It won't reload automatically when there is a change, so it has to be restarted manually when the code is modified: ~/src/ravada$ sudo ./bin/rvd_back.pl --debug Stop system Ravada¶ You may have another copy of Ravada if you installed the package release. rvd_back will complain if it finds there is another daemon running. Stop it with: $ sudo systemctl stop rvd_back; sudo systemctl stop rvd_front
https://ravada.readthedocs.io/en/latest/devel-docs/run.html
2020-02-16T21:51:32
CC-MAIN-2020-10
1581875141430.58
[]
ravada.readthedocs.io
Old Configuration Files¶ Tahoe-LAFS releases before v1.3.0 had no tahoe.cfg file, and used distinct files for each item listed below. If Tahoe-LAFS v1.9.0 or above detects the old configuration files at start-up, it emits a warning and aborts the start-up. (This was issue ticket #1385.) Note: the functionality of [node]ssh.port and [node]ssh.authorized_keys_file was previously (before Tahoe-LAFS v1.3.0) combined, controlled by the presence of a BASEDIR/authorized_keys.SSHPORT file, in which the suffix of the filename indicated which port the ssh server should listen on, and the contents of the file provided the ssh public keys to accept. Support for these files has been removed completely. To ssh into your Tahoe-LAFS node, add [node]ssh.port and [node]ssh.authorized_keys_file statements to your tahoe.cfg, as sketched below. Likewise, the functionality of [node]tub.location is a variant of the now (since Tahoe-LAFS v1.3.0) unsupported BASEDIR/advertised_ip_addresses. The old file was additive (the addresses specified in advertised_ip_addresses were used in addition to any that were automatically discovered), whereas the new tahoe.cfg directive is not (tub.location is used verbatim).
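For concreteness, here is a minimal sketch of the replacement tahoe.cfg statements described above; the port number, key file path, and location string are assumed example values, not defaults from the Tahoe-LAFS documentation.

```
# Sketch of the tahoe.cfg statements described above (example values assumed).
[node]
ssh.port = 8022
ssh.authorized_keys_file = ~/.ssh/authorized_keys
# Replaces the old advertised_ip_addresses file; used verbatim, not additive.
tub.location = example.org:3456
```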
https://tahoe-lafs.readthedocs.io/en/latest/historical/configuration.html
2020-02-16T22:58:10
CC-MAIN-2020-10
1581875141430.58
[]
tahoe-lafs.readthedocs.io
= Version 2.4.8 | 22nd January 2020 =
* Add: Support for testimonial carousel shortcode.
* Fix: Out of date template notice.
* Remove: data-vocabulary.org from breadcrumbs.
= Version 2.4.7 | 19th December 2019 =
* Fix: Issue where product search shows transparent header when it shouldn’t.
* Fix: Issue with some undefined variables.
= Version 2.4.6 | 13th November 2019 =
* Fix: Breadcrumb bug.
* Fix: Lightbox Gallery Block.
= Version 2.4.5 | 11th October 2019 =
* Fix: Elementor bug.
= Version 2.4.4 | 9th October 2019 =
* Update: Breadcrumbs, new class, better Rank Math support.
* Update: Allow Call to Action to have larger padding.
* Update: Elementor Pro individual header and footer support.
* Update: Rev Slider.
* Fix: Chrome notice with nicescroll.
= Version 2.4.3 | 17th April 2019 =
* Update: Portfolio Type not selectable.
= Version 2.4.2 | 16th April 2019 =
* Fix: indivisible product issue.
= Version 2.4.1 | 15th April 2019 =
* Update: Woo 3.6 support.
* Update: Fix home page not showing fullwidth.
= Version 2.4.0 | 26th March 2019 =
* Update: Split button target.
= Version 2.3.9 | 21st March 2019 =
* Update: Custom 404 WPML support.
* Update: Tribe Events title support.
* Fix: CSS bug.
* Fix: PHP issue with 5.2.
= Version 2.3.8 | 20th February 2019 =
* Update: Woocommerce 3.5.5 support.
= Version 2.3.7 | 11th February 2019 =
* Update: Shrink causing header offset in some browsers.
* Update: Bundled plugin versions.
* Update: Kadence Blocks notice.
= Version 2.3.6 | 29th January 2019 =
* Update: Sticky to work with Elementor.
* Update: Nav issue with other plugins.
= Version 2.3.5 | 8th January 2019 =
* Update: Sticky to work with Elementor.
* Update: Menu issue with Elementor.
* Update: IT language files.
= Version 2.3.3 | 5th December 2018 =
* Update: WordPress 5 ready.
= Version 2.3.2 | 26th November 2018 =
* Update: Google Map widget issue.
= Version 2.3.1 | 1st November 2018 =
* Update: Woocommerce notice.
* Update: Gutenberg support.
= Version 2.3.0 | 24th October 2018 =
* Update: Woocommerce notice.
* Update: Contact for delayed load.
= Version 2.2.9 | 8th October 2018 =
* Update: Staff has force row option.
* Update: Staff grid uses intrinsic loading.
* Update: Portfolio single image has filter for height.
= Version 2.2.8 | 24th September 2018 =
* Update: Customization link.
* Update: Importer not installing correctly.
= Version 2.2.7 | 6th August 2018 =
* Update: Some code cleanup.
* Update: Gutenberg support.
* Update: Dashboard.
= Version 2.2.6 | 13th July 2018 =
* Bug: Footer issue.
= Version 2.2.5 | 5th July 2018 =
* Add: Mobile menu option.
* Add: Better Elementor support.
* Add: Improve lazy load.
= Version 2.2.4 | 13th June 2018 =
* Fix: Button issue.
= Version 2.2.3 | 12th June 2018 =
* Update: Kadence Recaptcha support.
* Fix: Ajax save issue.
= Version 2.2.2 | 30th May 2018 =
* Update: Instagram icon.
= Version 2.2.1 | 30th May 2018 =
* Update: WC files for 3.4.1.
* Add: Option to auto download Google fonts and load locally.
= Version 2.2.0 | 22nd May 2018 =
* Update: WC files for 3.4.0.
* Add: Option for name not being required in contact form.
* Add: Option for privacy policy checkbox in testimonial form.
= Version 2.1.9 | 15th May 2018 =
* Fix: Portfolio carousel title order issue.
* Update: Portfolio bottom carousel will pull from all equal-level portfolio types.
* Update: srcset for blog header image. 
= Version 2.1.8 | 7th May 2018 =
* Update: Woo template files archive-product.php and single-product.php.
* Update: Woocommerce hooks to change Woo templates functionality.
* Add: Check if category has children before outputting filter.
* Add: Option for consent checkbox with contact form.
* Fix: Minor translation issue.
= Version 2.1.7 | 27th April 2018 =
* Fix: Cart text.
* Fix: CSS Stripe saved card alignment.
* Add: Mobile topbar search icon focus.
= Version 2.1.6 | 20th April 2018 =
* Fix: Issue with PHP 7.2.4.
* Add: anonymizeIP to Google Analytics settings.
* Fix: Small issue with schema.
* Fix: Validator updates.
= Version 2.1.5 | 27th March 2018 =
* Update: Testimonial input.
* Update: Add auto focus when search icon clicked.
* Update: Fix checkbox alignment.
* Add: New action hooks.
= Version 2.1.4 | 5th March 2018 =
* Update: Breadcrumb event check.
* Update: Fix parallax on iPad Pro.
= Version 2.1.3 | 20th February 2018 =
* Update: Slight change to breadcrumb HTML.
* Update: Woo templates.
= Version 2.1.1 | 17th January 2018 =
* Add Row Separators.
* Update testimonial input just to make sure no extras are made.
* Update support for new Woo 3.3.
* Fix issue with image menu image size.
* Update: Breadcrumbs.
= Version 2.1.0 | 4th December 2017 =
* Update: Redux Framework.
= Version 2.0.9 | 21st November 2017 =
* Add: Option for h2 tag in product.
* Fix: Issue with page transitions.
* Add: Order to portfolio shortcode.
* Update: JS cart spinner script.
* Update: Archive title to work with Polylang issues.
= Version 2.0.8 | 9th October 2017 =
* Add: Search results, updating support for attachments.
* Update: Spinner CSS support.
* Update: Notice with product navs.
* Update: Typo in image widget.
= Version 2.0.7 | 20th September 2017 =
* Fix: Issue with local scroll.
* Update: Image widget has a lot more controls.
= Version 2.0.6 | 12th September 2017 =
* Fix: Issue with UpdraftPlus and CMB.
* Fix: Issue with carousel not saving categories.
* Add: nocookie=true option to YouTube shortcode.
= Version 2.0.5 | 5th August 2017 =
* Fix: Issue with blog photo grid on archive pages.
* Fix: Issue with past defaults and typo.
* Update: Blog grid shortcode 1 column.
* Update: Image menu for better retina support.
= Version 2.0.4 | 21st July 2017 =
* Fix: Issue with blog grid.
* Fix: Issue with defaults for portfolio carousel.
* Fix: Issue with filter on mobile.
= Version 2.0.3 | 11th July 2017 =
* Fix: Photo Grid settings.
= Version 2.0.2 | 11th July 2017 =
* Fix: Post defaults.
* Fix: Rev update.
* Update: Shortcodes:
** Staff allows 1 column and turning off name and content.
** Blog has tag, author_name options.
** Carousel has showtype option.
* Add: Accordion widget.
* Fix: Issue with infinite scroll.
= Version 2.0.1 | 30th June 2017 =
* Fix: Page title metabox.
* Update: Custom 404 page to force pagebuilder layouts to always take.
= Version 2.0.0 | 28th June 2017 =
* Update: CMB2.
* Update: Ready for WC 3.1.0.
* Update: Infinite scroll 3.0!
* Update: Slick slider added; started replacing all carousels and sliders.
* Update: Gallery lightboxes pull in captions first, then alt if nothing else.
* Move: Bootstrap JS to its own file for better plugin compatibility.
* Add: Option for Google Tag output right after body open.
= Version 1.9.9 | 19th June 2017 =
* Update: PL lang.
* Update: Rev Slider.
* Update: The_title output in attributes.
= Version 1.9.8 | 11th June 2017 =
* Fix: Pagebuilder padding issue.
* Fix: Admin select issue.
= Version 1.9.7 | 29th May 2017 =
* Update: Admin CSS. 
= Version 1.9.6 | 13th May 2017 =
* Update: Icon select in admin.
= Version 1.9.5 | 2nd April 2017 =
* Fix: Portfolio video lightbox issue.
= Version 1.9.4 | 19th April 2017 =
* Update: Select2 issue.
= Version 1.9.3 | 16th April 2017 =
* Update: Select2 issue.
* Update: Issue with variations.
= Version 1.9.2 | 12th April 2017 =
* Update: WC notice issue.
* Fix: Issue with product image init zoom.
* Update: Schema.
= Version 1.9.1 | 7th April 2017 =
* Update: WC notice issue.
* Update: Lang files.
= Version 1.9.0 | 5th April 2017 =
* Update: Order issue with portfolio posts.
* Update: Select2 issue admin.
* Update: Featured products for Woo 3.0.
= Version 1.8.9 | 4th April 2017 =
* Update: Clearing floats issue.
* Update: Select2 issue.
* Update: Photon issue.
= Version 1.8.8 | 30th March 2017 =
* Update: Select2 issue.
* Update: Rev Slider.
* Update: Woocommerce 3.0 ready.
* Update: Portfolio links to have more clickable space.
= Version 1.8.7 | 13th March 2017 =
* Update: Prep for Woocommerce 2.7.
* Fix: Issue in very old PHP.
* Update: Image sizing.
* Update: Admin scripts.
* Update: WPML config.
* Update: Cart CSS.
* Update: Schema.
= Version 1.8.6 | 6th February 2017 =
* Update: Shortcodes for PHP 7.1.
* Update: Function for really old PHP versions.
* Update: Select2.
* Fix: Few small plugin conflicts.
= Version 1.8.5 | 4th January 2017 =
* Update: Admin CSS.
* Fix: Portfolio tag sidebar issue.
* Update: Schema.
* Update: My Account.
= Version 1.8.4 | 16th December 2016 =
* Update: Link in portfolio post.
* Update: z-index in admin.
= Version 1.8.3 | 12th December 2016 =
* Fix: Out of stock notice with certain languages.
* Fix: Issue with blog shortcode inside tabs.
* Fix: Issue with match height and infinite scroll.
* Add: GIF spinner for checkout.
= Version 1.8.2 | 13th October 2016 =
* Add: Read more link in custom excerpts.
* Update: Schema.
* Add: Staff posts to carousel shortcode.
* Update: Few CSS tweaks.
* Update: Staff shortcode.
= Version 1.8.1 | 30th September 2016 =
* Fix: Staff links issue.
= Version 1.8.0 | 21st September 2016 =
* Fix: Older PHP users.
= Version 1.7.9 | 19th September 2016 =
* Fix: PHP notice with testimonial widget.
* Add: Filters for custom post type ‘map_meta_cap’ and ‘capability_type’.
* Add: Hook for breadcrumbs.
* Update: Icons.
* Update: Staff templates.
* Add: Email and phone to staff posts.
= Version 1.7.8 | 19th August 2016 =
* Small fix: Excerpt parsing issue with page builder and split content.
* Update: Language files.
= Version 1.7.7 | 18th August 2016 =
* Small fix: Change fitrows for home blog.
= Version 1.7.6 | 18th August 2016 =
* Add: Option to set blog grid posts with a match height.
* Add: Equal heights to icon menu.
* Add: Valign middle to simple box shortcode gen.
* Add: Animate-in classes.
* Add: Start delay for typed text.
* Update: Infinite scroll for photo grid.
* Update: Clear instance option.
* Update: Added support for Menu Nav Role plugin.
= Version 1.7.5 | 4th August 2016 =
* Update: Fix issue with Polylang 2.0.
= Version 1.7.4 | 19th July 2016 =
* Update: Is-draggable check for mobile in Google Maps.
* Update: Fix issue with radio boxes overflowing.
* Update: Woo files.
* Add: Table options in shortcodes.
= Version 1.7.3 | 12th July 2016 =
* Update: Various hooks, filters.
* Update: Woo radio button JS (added more compatibility with other Woo extensions).
* Fix: Woo category output issue.
* Add: Filter to make SiteOrigin editor work better when using shortcodes. 
= Version 1.7.2 | 8th July 2016 =
* Fix: Issue with photo grid on front page.
* Update: Portfolio carousel issue.
* Update: Shop slider options.
* Add: Getting Started page.
= Version 1.7.1 | 27th June 2016 =
* Add: SoundCloud to social.
* Update: API status.
* Update: Rev Slider.
* Update: Google Maps.
* Update: Hooks.
* Update: Home blog photo grid.
* Fix/rework: Image sizes.
* Fix: Select2 admin issue with some plugins.
* Fix: Default post summary issue.
* Fix: Portfolio grid issue.
* Fix: Issue with sidebar on products.
= Version 1.7.0 | 17th June 2016 =
* Fix: Woo update.
= Version 1.6.9 | 11th June 2016 =
* Fix: Gallery issue in accordions.
* Update: Gallery output even for really small images.
= Version 1.6.8 | 8th June 2016 =
* Fix: Carousel single column.
* Fix: Animate-in issue.
* Update: Gallery captions.
= Version 1.6.7 | 8th June 2016 =
* Fix: Captions stuck on WordPress gallery.
= Version 1.6.6 | 8th June 2016 =
* Update: IT language file.
* Fix: Some servers not activating.
* Fix: Shop header issue.
= Version 1.6.5 | 7th June 2016 =
* Add: Gallery caption default setting.
* Add: API inputs.
* Add: Sitewide footer shortcode.
* Add: Staff filter.
* Add: Custom 404 page.
* Add: Rocket Lazy Load support in grids.
* Update: Theme options.
* Update: Rev Slider.
* Update: Kadence Slider (LARGE UPDATE).
* Update: Intrinsic padding for grids.
* Update: Support for WC 2.6.
* Update: Language files.
* Update: Responsive support for embedded videos.
* Fix: Testimonial notification email.
= Version 1.6.4 | 5th April 2016 =
* Update: Icon box default color.
* Update: srcset.
* Update: Gallery options.
* Update: Mosaic gallery improvements.
* Update: Ready for WP 4.5.
= Version 1.6.3 | 22nd March 2016 =
* Update: Kadence Slider.
* Add: Filter for skins path.
* Update: Third-party plugins API.
* Update: Hide a Rev Slider notice.
* Add: Header and footer scripts.
= Version 1.6.2 | 3rd March 2016 =
* Update: Kadence Slider.
* Update: Arrow CSS.
* Fix: Language strings.
* Add: Infinite scroll to categories.
* Add: New language strings to theme options.
* Update: Related and up-sells in Woocommerce.
* Update: Variations JS.
* Update: Responsive images for shop output.
= Version 1.6.1 | 16th February 2016 =
* Fix: Theme options issue.
* Fix: Various formatting.
* Fix: Issue with no featured image and header image set.
* Update: Analytics output.
* Update: Carousel autoplay.
* Update: Woocommerce file.
* Update: Widget issue.
* Update: Touch JS for image carousel.
* Add: Landing Page template.
* Add: Product navigation.
* Add: Feature option for first post in Pinnacle: Recent Posts.
= Version 1.6.0 | 4th February 2016 =
* Fix: Mobile carousel lightbox issue.
* Fix: Issue with category.
= Version 1.5.9 | 3rd February 2016 =
* Fix: Mobile carousel lightbox issue.
* Fix: Portfolio header defaults.
= Version 1.5.8 | 2nd February 2016 =
* Add: New posts carousel shortcode.
* Fix: Ajax issue.
* Update: Woocommerce template.
* Update: PHP 7 notice.
* Update: Mosaic gallery lightbox caption.
= Version 1.5.7 | 6th January 2016 =
* Update: Parallax on certain screens.
= Version 1.5.6 | 6th January 2016 =
* Update: Woocommerce update for Dolphin.
* Update: Shortcode filter.
* Update: Workaround for certain servers with Select2.
* Update: Rev Slider.
* Update: Testimonial grid options.
= Version 1.5.4 | 18th December 2015 =
* Update: Home image carousel for link option.
* Update: Schema for posts.
* Update: Single image lightbox.
* Update: RevSlider.
* Update: New updater API.
* Update: Gallery option to use alt. 
* Update: Single Portfolio Post with new actions. = Version 1.5.3 | 10th December 2015 = * Add: option for mobile to show on tablet. * Add: Typed Shortcode. * Add: Portfolio Tag Support. * Add: Mosaic Gallery option. * Add: html tag option for call to action widget. * Add: New split content widget. * Update: Comment Output. * Update: Woocomerce integration for better support of variation plugins. * Update: Flex slider script. * Update: rev_slider. * Update: product title output if hidden (schema). * Fixed: Portfolio Shortcode. = Version 1.5.2 | 9th November 2015 = * Update: Portfolio shortcode option. * Update: Theme options. = Version 1.5.1 | 21st October 2015 = * Update: Schema. * Update: Sidebar Default settings. * Update: AQ Size Notice. * Update: Shortcodes. * Update: Rev Slider. * Update: Plugin notice to only show for admin and added an quick filter. * Update: Simplify Meta title. * Update: Parallax on archive pages. * Update: Add page title filter. * Update: Portfolio type page options. * Add: Breadcrumb Shortcode. = Version 1.5.0 | 24th September 2015 = * Update: vimeo Shortcode. * Bug Fix: Topbar widget area. = Version 1.4.9 | 23nd September 2015 = * Update: Rev Slider. * Update: Theme options. * Update: A few structure codes. * Update: Radio Button js. * Move: Plugins out of theme. * Add: Tag description as the subtitle output. * Add: Tag output option on blog posts. * Add: View Details to Language Settings. * Add: Breadcrumbs when slider is used on Category pages. * Fix: Max output in related products. * Fix: Css bug in visited link. = Version 1.4.8 | 27th August 2015 = * Fix: Delete Slide Issue. * Remove: CDN Scripts. * Add: alt tag in virtue image widget. * Fix: Alt tag in product image = Version 1.4.7 | 18th August 2015 = * Fix: Slider update = Version 1.4.6 | 18th August 2015 = * Fix: Small css error. * Fix: ajax radio variations. = Version 1.4.5 | 17th August 2015 = * Add: Default ratio portfolio setting. * Update: Variation sanitation. * Update: wootab with gallery. = Version 1.4.4 | 13th August 2015 = * Fix: Udpate Slider issue. * Fix: Menu css issue. = Version 1.4.3 | 11th August 2015 = * Fix: Carousel Issue. = Version 1.4.2 | 11th August 2015 = * Fix: Theme options blog defaults. * Fix: Issue with scroll amount in carousels. * Update: Theme options code. * Update: rev slider. * Update: wp_widget for 4.3 release. * Update: Wootemplate files. = Version 1.4.1 | 2nd August 2015 = * Fix: Odd Gallery Issue. = Version 1.4.0 | 30th July 2015 = * Fix: Map on contact page. = Version 1.3.9 | 30th July 2015 = * Fix: Portfolio shortcode. * Add: Portfolio post image grid column option. * Add: Call to action tag option for the home page. * Add: Shop Excerpt off switch. = Version 1.3.8 | 6th July 2015 = * Fix: Odd Gallery Issue. = Version 1.3.7 | 6th July 2015 = * Fix: Typo Default Text string. * Fix: Google Analytics not being called. * Fix: Breadcrumb current title micro format. * Fix: Staff shortcode limit content issue. * Fix: Inconsistent grid with no margin. * Add: = Version 1.3.6 | 26th June 2015 = * Fix: Extensions Turn off. * Fix: Custom Product Tabs. * Fix: Mobile product search in menu. = Version 1.3.5 | 21st June 2015 = * Fix: WordPress Gallery. * Fix: Portfolio Permalink. = Version 1.3.4 | 20th June 2015 = * Update: Theme Options Panel * Add: Options for removing portfolio, products, staff from admin. * Add: New image menu feature. * Add: Option for Prooduct Search in main menu * Add: Option to use custom image ratio on custom carousel. 
= Version 1.3.3 | 14th May 2015 = * Update: Product sidebar default * Fix: Post image carousel * Add: Action hook after header. = Version 1.3.2 | 12th May 2015 = * Fix: Resizer. = Version 1.3.1 | 7th May 2015 = * Add: Lightbox off option. * Add: Xing to social widget. * Add: Shortcode slider overide on single post pages. * Add: Comments on pages option. * Update: Cyclone Slider Updater. * Update: Bundled RevSlider. * Update: Image size attributes for grid. * Update: TGM script for recommending plugins. * Update: Customizer Options. * Update: Add lightbox php script. * Update: Lots of code restructure. * Update: Resizer to work better with jetpack photon… not that I really recommend using it. * Update: Next post links with title on tooltip. = Version 1.3.0 | 16th April 2015 = * Fix: Testimonial issue. * Update: Shortcode script force clear cache issue. = Version 1.2.9 | 14th April 2015 = * Add: Page Title Filter. * Add: Filter for Schema HTML. * Add: YouTube option for author info. * Add: Option to turn off google scripts for multiple maps on a page. * Add: Line height option for call to action widget. * Update: Theme options. * Update: Flexslider init delay for grid. * Fix: Product Categories layout. * Fix: Issue with shortcode gen. = Version 1.2.8 | 4th April 2015 = * Add: Hard crop option for post excerpts. * Update: Theme options, fix for php 5.2. * Update: Schema code throughout. * Fix: Issue with undefined item in shortcode gen. * Fix: Ipad Mobile Menu. * Fix: Shop placeholder image. * Fix: Check for revslider. * Fix: Check for woocommerce. * Fix: Styling for 2 column Shop. * Fix: Header CSS issue. * Fix: Scroll Gallery Issue. = Version 1.2.7 | 20th March 2015 = * Add: Collapsible submenus for mobile menu. * Update: Pot with missing language strings. * Update: Cyclone Slider. * Update: Theme options, fix for php 5.2. = Version 1.2.6 | 11th March 2015 = * Bug Fix: Product Flip CSS * Add: Recent and Similar Carousel column options. = Version 1.2.5 | 10th March 2015 = * Bug Fix: Home Page Slider background. * Bug Fix: Porfolio Image Carousel * Bug Fix: Product Flip on IE 10 and 11 = Version 1.2.4 | 5th March 2015 = * Bug Fix: Theme options save bug. * Add: Option for isostyle fitRow in gallery. * Add: Better Page pagination support. * Add: Four Column Image Grid option to portfolio options. * Update: Inline Javascript in portfolio. = Version 1.2.3 | 26th Feb 2015 = * Update: Theme options, faster now with ajax. * Update: Langauge Settings. * Update: CSS styles with some widgets. * Add: Blog Home template. * Add: Shortcode offset options. * Fix: Staff post issue. * Fix: page-builder updates. = Version 1.2.2 = * Update: Woocommerce 2.3 fixes. * Add: quantity input buttons back. (woocomerce removed in 2.3). = Version 1.2.1 = * Update: Contact Form = Version 1.2.0 = * Update: Parallax JS * Add filter hook for sidebar. * Add a filter for site title. = Version 1.1.9 = * Add: Comment box filter option. * Add: no page title option for shop page. * Add: Force fitrows option for shop. * Fix: mqtranslate issue with contact form. * Fix: CSS for the image split. * Fix: Metabox CSS issue. * Bug Fix: Woocommerce CRM icon issue. = Version 1.1.8 = * Small Fix * Fix full Carousel on Windows * Update Smooth Scroll = Version 1.1.7 = * Add option to set how many portfolio items show on a category page. * Update for new pagebuilder. * Small Fix for menu settings with center logo. * Fix for portfolio home with no margin. * Fix for post header. * Fix for firefox hover. * Fix for firefox columns. 
* Small update to contact form. * Update gallery shortcode. * Update Tabs and Accordions. = Version 1.1.6 = * Small Fix for gallery posts. * Remove to the top title. * Fix for home page sidebar if shop page is selected. * Update Theme Options. * CSS Fix. = Version 1.1.5 = * Small Fix for full blog posts * Child of for portfolio category shortcode. = Version 1.1.4 = * CSS fixes * single post fix = Version 1.1.3 = * Update for carousel * Update for search results * Update shortcode slider option for any page. * Add latest posts carousel = Version 1.1.2 = * Smooth scroll update. * Soldout issue. * Add image split shortcode. * Change logo output (faster, better for mobile) * Fix Category issue. * Fix portfolio type filter. = Version 1.1.1 = * Hotfix for categories. = Version 1.1.0 = * Fix for blog category hide. * Fix for hide topbar * Remove theme wrapper * Small css fix. * Update rev-slider * Fix arrow scroll for shrink header * Button Border options * Default image options * Update Kadence Slider = Version 1.0.9 = * Add portfolio video lightbox option. * Add Header text scroll fade. * Fix for theme options icon menu. * Fix for header background. * Fix for out of stock products. = Version 1.0.8 = * Language updates. * Add option for icon menu button to show without hover. * Add hook for header overlay. * Add option for center header style. * Add Search results header style. * Fix for visual editor widget. * Fix product page title settings. * Fix issue with css box not saving data correclty. * Fix for category description. * Fix for ajax cart in header. = Version 1.0.7 = * Fix for color background. * Small css fix. * Add my account link. * Add hover style for button. * Update language files = Version 1.0.6 = * small fix for sale carousel. * small fix for custom portfolio header. = Version 1.0.5 = * Update fix for icon shortcode. * Small fix for shortcode generator. * Add portfolio category shortcode. * Update Theme Options = Version 1.0.4 = * Admin css update. * Staff shortcode gen update. * Support for Woocommerce german market plugin. * Support for events plugin. * Fix single product issue. * Menu sub fix = Version 1.0.3 = * Small change in header * Small Menu fixes * Update Kadence Slider = Version 1.0.2 = * Fix page sidebar issue. * Fix small issue with header. * Fix IE menu issue. * Theme options small typo fix. * Update math on contact form. * Update Theme options. = Version 1.0.1 = * Update Theme Options. * Remove double breadcrumb color. * Re-work how fullscreen carousel/portfolio columns work. Created two new grid levels. * Re-work javascript to work better with w3 total cache – minify. * Fix language issue. = Version 1.0.0 = * Initial Release
http://docs.kadencethemes.com/pinnacle-premium/premium-changelog/
2020-02-16T21:32:32
CC-MAIN-2020-10
1581875141430.58
[]
docs.kadencethemes.com
About Internet Exchange
Equinix Internet Exchange enables customers to exchange internet traffic through public peering on the largest peering platform in the world. Equinix Internet Exchange (IX) enables networks, content providers, and large enterprises to exchange internet traffic using our global peering solution. This solution, the world's largest, spans more than 35 peering exchange points across the globe, with traffic peaks that exceed 10 terabits per second (Tbps). The Equinix Internet Exchange is a Layer 2 platform that enables interconnection (peering) between multiple networks in an operationally efficient and cost-effective manner, while also providing high availability, performance, advanced security, and unlimited scalability.
https://docs.equinix.com/en-us/Content/Interconnection/IX/IX-intro.htm
2020-02-16T23:15:04
CC-MAIN-2020-10
1581875141430.58
[]
docs.equinix.com
How to Configure Protected Accounts
Applies To: Windows Server 2012 R2
Through Pass-the-hash (PtH) attacks, an attacker can authenticate to a remote server or service by using the underlying NTLM hash of a user's password (or other credential derivatives). Microsoft has previously published guidance to mitigate pass-the-hash attacks. Windows Server 2012 R2 includes new features to help mitigate such attacks further. For more information about other security features that help protect against credential theft, see Credentials Protection and Management. This topic explains how to configure the following new features: Protected Users, authentication policies, and authentication policy silos.
There are additional mitigations built in to Windows 8.1 and Windows Server 2012 R2 to help protect against credential theft, which are covered in the following topics:
Protected Users
Members of the Protected Users group who are signed on to Windows 8.1 devices and Windows Server 2012 R2 hosts can no longer use:
- Default credential delegation (CredSSP) - plaintext credentials are not cached even when the Allow delegating default credentials policy is enabled
- Windows Digest - plaintext credentials are not cached even when they are enabled
- NTLM - NTOWF is not cached
- Kerberos long term keys - the Kerberos ticket-granting ticket (TGT) is acquired at logon and cannot be re-acquired automatically
If the domain functional level is Windows Server 2012 R2, members of the group can no longer:
- Authenticate by using NTLM authentication
- Use Data Encryption Standard (DES) or RC4 cipher suites in Kerberos pre-authentication
- Be delegated by using unconstrained or constrained delegation
- Renew user tickets (TGTs) beyond the initial 4-hour lifetime
To add users to the group, you can use UI tools such as Active Directory Administrative Center (ADAC) or Active Directory Users and Computers, a command-line tool such as Dsmod group, or the Windows PowerShell Add-ADGroupMember cmdlet. Accounts for services and computers should not be members of the Protected Users group. Membership for those accounts provides no local protections because the password or certificate is always available on the host.
Warning: The authentication restrictions have no workaround. You should never add all highly privileged accounts to the Protected Users group until you have thoroughly tested the potential impact.
Members of the Protected Users group must be able to authenticate by using Kerberos with Advanced Encryption Standard (AES). This method requires AES keys for the account in Active Directory. The built-in Administrator does not have an AES key unless the password was changed on a domain controller that runs Windows Server 2008 or later. Additionally, any account that has a password which was changed at a domain controller running an earlier version of Windows Server is locked out. Therefore, follow these best practices:
- Do not test in domains unless all domain controllers run Windows Server 2008 or later.
- Change the password for all domain accounts that were created before the domain was created. Otherwise, these accounts cannot be authenticated.
- Change the password for each user before adding the account to the Protected Users group, or ensure that the password was changed recently on a domain controller that runs Windows Server 2008 or later.
Requirements for using protected accounts
Protected accounts have the following deployment requirements:
- To provide client-side restrictions for Protected Users, hosts must run Windows 8.1 or Windows Server 2012 R2.
A user only has to sign on with an account that is a member of a Protected Users group. In this case, the Protected Users group can be created by transferring the PDC emulator role to a domain controller that runs Windows Server 2012 R2.
- To provide domain controller-side restrictions for Protected Users, that is, to restrict usage of NTLM authentication and apply other restrictions, the domain functional level must be Windows Server 2012 R2. For more information about functional levels, see Understanding Active Directory Domain Services (AD DS) Functional Levels.
Troubleshoot events related to Protected Users
This section covers new logs to help troubleshoot events that are related to Protected Users, and how Protected Users can affect troubleshooting of either ticket-granting ticket (TGT) expiration or delegation issues.
New logs for Protected Users
Two new operational administrative logs are available to help troubleshoot events that are related to Protected Users: Protected User – Client Log and Protected User Failures – Domain Controller Log. These new logs are located in Event Viewer and are disabled by default. To enable a log, click Applications and Services Logs, click Microsoft, click Windows, click Authentication, click the name of the log, and then click Action (or right-click the log) and click Enable Log. For more information about events in these logs, see Authentication Policies and Authentication Policy Silos.
Troubleshoot TGT expiration
Normally, the domain controller sets the TGT lifetime and renewal based on the domain policy, as configured in the Group Policy Management Editor. For Protected Users, the following settings are hard-coded:
- Maximum lifetime for user ticket: 240 minutes
- Maximum lifetime for user ticket renewal: 240 minutes
Troubleshoot delegation issues
Previously, if a technology that uses Kerberos delegation was failing, the client account was checked to see if Account is sensitive and cannot be delegated was set. However, if the account is a member of Protected Users, it might not have this setting configured in Active Directory Administrative Center (ADAC). As a result, check both the setting and the group membership when you troubleshoot delegation issues.
Audit authentication attempts
To audit authentication attempts explicitly for the members of the Protected Users group, you can continue to collect security log audit events or collect the data in the new operational administrative logs. For more information about these events, see Authentication Policies and Authentication Policy Silos.
Provide DC-side protections for services and computers
Accounts for services and computers cannot be members of Protected Users. This section explains which domain controller-based protections can be offered for these accounts:
- Reject NTLM authentication: only configurable via NTLM block policies.
- Reject Data Encryption Standard (DES) in Kerberos pre-authentication: Windows Server 2012 R2 domain controllers do not accept DES for computer accounts unless they are configured for DES only, because every version of Windows released with Kerberos also supports RC4.
- Reject RC4 in Kerberos pre-authentication: not configurable.
Note: Although it is possible to change the configuration of supported encryption types, it is not recommended to change those settings for computer accounts without testing in the target environment.
- Restrict user tickets (TGTs) to an initial 4-hour lifetime: use authentication policies.
- Deny delegation with unconstrained or constrained delegation: to restrict an account, open Active Directory Administrative Center (ADAC) and select the Account is sensitive and cannot be delegated check box.
Authentication policies
Authentication Policies is a new container in AD DS that contains authentication policy objects. Authentication policies can specify settings that help mitigate exposure to credential theft, such as restricting TGT lifetime for accounts or adding other claims-related conditions. In Windows Server 2012, Dynamic Access Control introduced an Active Directory forest-scope object class called Central Access Policy to provide an easy way to configure file servers across an organization. In Windows Server 2012 R2, a new object class called Authentication Policy (objectClass msDS-AuthNPolicies) can be used to apply authentication configuration to account classes in Windows Server 2012 R2 domains. Active Directory account classes are:
- User
- Computer
- Managed Service Account and group Managed Service Account (GMSA)
Quick Kerberos refresher
The Kerberos authentication protocol consists of three types of exchanges, also known as subprotocols:
- The Authentication Service (AS) exchange (KRB_AS_*)
- The Ticket-Granting Service (TGS) exchange (KRB_TGS_*)
- The Client/Server (AP) exchange (KRB_AP_*)
The AS exchange is where the client uses the account's password or private key to create a pre-authenticator to request a ticket-granting ticket (TGT). This happens at user sign-on or the first time a service ticket is needed. The TGS exchange is where the account's TGT is used to create an authenticator to request a service ticket. This happens when an authenticated connection is needed. The AP exchange typically occurs as data inside the application protocol and is not impacted by authentication policies. For more detailed information, see How the Kerberos Version 5 Authentication Protocol Works.
Overview
Authentication policies complement Protected Users by providing a way to apply configurable restrictions to accounts and by providing restrictions for accounts for services and computers. Authentication policies are enforced during either the AS exchange or the TGS exchange.
You can restrict initial authentication, or the AS exchange, by configuring:
- A TGT lifetime
- Access control conditions to restrict user sign-on, which must be met by devices from which the AS exchange is coming
You can restrict service ticket requests through a ticket-granting service (TGS) exchange by configuring:
- Access control conditions which must be met by the client (user, service, computer) or device from which the TGS exchange is coming
Requirements for using authentication policies
Restrict a user account to specific devices and hosts
A high-value account with administrative privilege should be a member of the Protected Users group. By default, no accounts are members of the Protected Users group. Before you add accounts to the group, configure domain controller support and create an audit policy to ensure that there are no blocking issues.
Configure domain controller support
The user's account domain must be at Windows Server 2012 R2 domain functional level (DFL). Ensure that all the domain controllers run Windows Server 2012 R2, and then use Active Directory Domains and Trusts to raise the DFL to Windows Server 2012 R2.
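If you prefer to script these preparation steps, the following Windows PowerShell commands are a sketch of the equivalent actions; the domain name contoso.com and the account name testuser01 are placeholder values, not names from this topic. This command raises the domain functional level, which has the same effect as using Active Directory Domains and Trusts:
PS C:\> Set-ADDomainMode -Identity contoso.com -DomainMode Windows2012R2Domain
This command adds an account to the Protected Users group, as described earlier in this topic:
PS C:\> Add-ADGroupMember -Identity "Protected Users" -Members testuser01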
To configure support for Dynamic Access Control
In the Default Domain Controllers Policy, click Enabled to enable Key Distribution Center (KDC) support for claims, compound authentication and Kerberos armoring under Computer Configuration | Administrative Templates | System | KDC. Under Options, in the drop-down list box, select Always provide claims.
Note: Supported can also be configured, but because the domain is at Windows Server 2012 R2 DFL, having the DCs always provide claims will allow user claims-based access checks to occur when using non-claims-aware devices and hosts to connect to claims-aware services.
Warning: Configuring Fail unarmored authentication requests will result in authentication failures from any operating system which does not support Kerberos armoring, such as Windows 7 and previous operating systems, or operating systems beginning with Windows 8 which have not been explicitly configured to support it.
Create a user account audit for authentication policy with ADAC
Open Active Directory Administrative Center (ADAC).
Note: The selected Authentication node is visible for domains which are at Windows Server 2012 R2 DFL. If the node does not appear, then try again by using a domain administrator account from a domain that is at Windows Server 2012 R2 DFL.
Click Authentication Policies, and then click New to create a new policy. Authentication policies must have a display name and are enforced by default. To create an audit-only policy, click Only audit policy restrictions.
Authentication policies are applied based on the Active Directory account type. A single policy can apply to all three account types by configuring settings for each type. Account types are:
- User
- Computer
- Managed Service Account and Group Managed Service Account
If you have extended the schema with new principals that can be used by the Key Distribution Center (KDC), then the new account type is classified from the closest derived account type.
To configure a TGT lifetime for user accounts, select the Specify a Ticket-Granting Ticket lifetime for user accounts check box and enter the time in minutes. For example, if you want a 10-hour maximum TGT lifetime, enter 600. If no TGT lifetime is configured, then if the account is a member of the Protected Users group, the TGT lifetime and renewal is 4 hours; otherwise, TGT lifetime and renewal are based on the domain policy (for a domain with default settings, see the Group Policy Management Editor).
To restrict the user account to selected devices, click Edit to define the conditions that are required for the device. In the Edit Access Control Conditions window, click Add a condition.
Add computer account or group conditions
To configure computer accounts or groups, in the drop-down list, change the Member of each list box to Member of any.
Note: This access control defines the conditions of the device or host from which the user signs on. In access control terminology, the computer account for the device or host is the user, which is why User is the only option.
Click Add items. To change object types, click Object Types. To select computer objects in Active Directory, click Computers, and then click OK. Type the names of the computers to which the user should be restricted, and then click Check Names. Click OK and create any other conditions for the computer account. When done, click OK, and the defined conditions will appear for the computer account.
Add computer claim conditions
To configure computer claims, drop down Group to select the claim. Claims are only available if they are already provisioned in the forest. Type the name of the OU to which user account sign-on should be restricted. When done, click OK, and the box will show the defined conditions.
Troubleshoot missing computer claims
If the claim has been provisioned but is not available, it might only be configured for Computer classes. Suppose you wanted to restrict authentication based on the organizational unit (OU) of the computer, which was already configured, but only for Computer classes. For the claim to be available to restrict user sign-on to the device, select the User check box.
Provision a user account with an authentication policy with ADAC
From the user account, click Policy. Select the Assign an authentication policy to this account check box. Then select the authentication policy to apply to the user.
Configure Dynamic Access Control support on devices and hosts
You can configure TGT lifetimes without configuring Dynamic Access Control (DAC). DAC is only needed for checking AllowedToAuthenticateFrom and AllowedToAuthenticateTo. Using either Group Policy or the Local Group Policy Editor, enable Kerberos client support for claims, compound authentication and Kerberos armoring under Computer Configuration | Administrative Templates | System | Kerberos.
Troubleshoot authentication policies
Determine the accounts that are directly assigned an authentication policy
The Accounts section in the authentication policy shows the accounts that have the policy directly applied.
Use the Authentication Policy Failures – Domain Controller administrative log
A new Authentication Policy Failures – Domain Controller administrative log under Applications and Services Logs > Microsoft > Windows > Authentication has been created to make it easier to discover failures due to authentication policies. The log is disabled by default. To enable it, right-click the log name and click Enable Log. The new events are very similar in content to the existing Kerberos TGT and service ticket auditing events. For more information about these events, see Authentication Policies and Authentication Policy Silos.
Manage authentication policies by using Windows PowerShell
This command creates an authentication policy named testAuthenticationPolicy. The UserAllowedToAuthenticateFrom parameter specifies the devices from which users can authenticate, by an SDDL string in the file named someFile.txt.
PS C:\> New-ADAuthenticationPolicy testAuthenticationPolicy -UserAllowedToAuthenticateFrom (Get-Acl .\someFile.txt).sddl
This command gets all authentication policies that match the filter that the Filter parameter specifies.
PS C:\> Get-ADAuthenticationPolicy -Filter "Name -like 'testADAuthenticationPolicy*'" -Server Server02.Contoso.com
This command modifies the description and the UserTGTLifetimeMins properties of the specified authentication policy.
PS C:\> Set-ADAuthenticationPolicy -Identity ADAuthenticationPolicy1 -Description "Description" -UserTGTLifetimeMins 45
This command removes the authentication policy that the Identity parameter specifies.
PS C:\> Remove-ADAuthenticationPolicy -Identity ADAuthenticationPolicy1
This command uses the Get-ADAuthenticationPolicy cmdlet with the Filter parameter to get all authentication policies that are not enforced. The result set is piped to the Remove-ADAuthenticationPolicy cmdlet.
PS C:\> Get-ADAuthenticationPolicy -Filter 'Enforce -eq $false' | Remove-ADAuthenticationPolicy
Authentication policy silos
Authentication Policy Silos is a new container (objectClass msDS-AuthNPolicySilos) in AD DS for user, computer, and service accounts. They help protect high-value accounts. While all organizations need to protect members of the Enterprise Admins, Domain Admins and Schema Admins groups because those accounts could be used by an attacker to access anything in the forest, other accounts may also need protection. Some organizations isolate workloads by creating accounts that are unique to them and by applying Group Policy settings to limit local and remote interactive logon and administrative privileges. Authentication policy silos complement this work by creating a way to define a relationship between user, computer and managed service accounts. Accounts can only belong to one silo. You can configure an authentication policy for each type of account in order to control:
- Non-renewable TGT lifetime
- Access control conditions for returning the TGT (note: cannot apply to systems because Kerberos armoring is required)
- Access control conditions for returning a service ticket
Additionally, accounts in an authentication policy silo have a silo claim, which can be used by claims-aware resources such as file servers to control access.
A new security descriptor can be configured to control issuing a service ticket based on:
- User, user's security groups, and/or user's claims
- Device, device's security group, and/or device's claims
Getting this information to the resource's DCs requires Dynamic Access Control:
- User claims: Windows 8 and later clients supporting Dynamic Access Control; account domain supports Dynamic Access Control and claims
- Device and/or device security group: Windows 8 and later clients supporting Dynamic Access Control; resource configured for compound authentication
- Device claims: Windows 8 and later clients supporting Dynamic Access Control; device domain supports Dynamic Access Control and claims; resource configured for compound authentication
Authentication policies can be applied to all members of an authentication policy silo instead of to individual accounts, or separate authentication policies can be applied to different types of accounts within a silo. For example, one authentication policy can be applied to highly privileged user accounts, and a different policy can be applied to service accounts. At least one authentication policy must be created before an authentication policy silo can be created.
Note: An authentication policy can be applied to members of an authentication policy silo, or it can be applied independently of silos to restrict a specific account scope. For example, to protect a single account or a small set of accounts, a policy can be set on those accounts without adding the accounts to a silo.
You can create an authentication policy silo by using Active Directory Administrative Center or Windows PowerShell. By default, an authentication policy silo only audits silo policies, which is equivalent to specifying the WhatIf parameter in Windows PowerShell cmdlets. In this case, policy silo restrictions do not apply, but audits are generated to indicate whether failures would occur if the restrictions were applied.
To create an authentication policy silo by using Active Directory Administrative Center
Open Active Directory Administrative Center, click Authentication, right-click Authentication Policy Silos, click New, and then click Authentication Policy Silo.
In Display name, type a name for the silo. In Permitted Accounts, click Add, type the names of the accounts, and then click OK. You can specify users, computers, or service accounts. Then specify whether to use a single policy for all principals or a separate policy for each type of principal, and the name of the policy or policies.
Manage authentication policy silos by using Windows PowerShell
This command creates an authentication policy silo object and enforces it.
PS C:\> New-ADAuthenticationPolicySilo -Name newSilo -Enforce
This command gets all the authentication policy silos that match the filter that is specified by the Filter parameter. The output is then passed to the Format-Table cmdlet to display the name of each policy and the value for Enforce on each policy.
PS C:\> Get-ADAuthenticationPolicySilo -Filter 'Name -like "*silo*"' | Format-Table Name, Enforce -AutoSize
Name Enforce
---- -------
silo True
silos False
This command uses the Get-ADAuthenticationPolicySilo cmdlet with the Filter parameter to get all authentication policy silos that are not enforced, and pipes the result of the filter to the Remove-ADAuthenticationPolicySilo cmdlet.
PS C:\> Get-ADAuthenticationPolicySilo -Filter 'Enforce -eq $False' | Remove-ADAuthenticationPolicySilo
This command grants access to the authentication policy silo named Silo to the user account named User01.
PS C:\> Grant-ADAuthenticationPolicySiloAccess -Identity Silo -Account User01
This command revokes access to the authentication policy silo named Silo for the user account named User01. Because the Confirm parameter is set to $False, no confirmation message appears.
PS C:\> Revoke-ADAuthenticationPolicySiloAccess -Identity Silo -Account User01 -Confirm:$False
This example first uses the Get-ADComputer cmdlet to get all computer accounts that match the filter that the Filter parameter specifies. The output of this command is passed to Set-ADAccountAuthenticationPolicySilo to assign the authentication policy silo named Silo and the authentication policy named AuthenticationPolicy02 to them.
PS C:\> Get-ADComputer -Filter 'Name -like "newComputer*"' | Set-ADAccountAuthenticationPolicySilo -AuthenticationPolicySilo Silo -AuthenticationPolicy AuthenticationPolicy02
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn518179(v=ws.11)?redirectedfrom=MSDN
2020-02-16T21:45:10
CC-MAIN-2020-10
1581875141430.58
[array(['images/dn518179.21a525d4-2296-4f6c-9e61-ce1a45b7f182%28ws.11%29.jpeg', None], dtype=object) array(['images/dn518179.fb01bd2b-42a4-4d0a-93c2-49f823c30534%28ws.11%29.jpeg', None], dtype=object) array(['images/dn518179.09b474fb-9ab0-4d9a-9a8d-531eae9cf3b5%28ws.11%29.jpeg', None], dtype=object) array(['images/dn518179.a377b0c7-3cb3-4e06-b6a5-d7b1b46682a1%28ws.11%29.jpeg', None], dtype=object) array(['images/dn518179.182567e8-d392-487e-9477-b1002ca2a1f5%28ws.11%29.jpeg', None], dtype=object) array(['images/dn518179.d2846bfd-46a3-4ea2-97fa-c65103297821%28ws.11%29.jpeg', None], dtype=object) array(['images/dn518179.411e6ba9-9fd5-44eb-a6cb-f4ca026bc0cf%28ws.11%29.jpeg', None], dtype=object) array(['images/dn518179.f4f8dacb-e1e9-449a-a370-59c80a0b52d8%28ws.11%29.jpeg', None], dtype=object) ]
docs.microsoft.com
The need to transition between multiple UI screens is fairly common. In this page we will explore a simple way to create and manage those transitions using animation and State Machines to drive and control each screen.
The high-level idea is that each of our screens will have an Animator Controller with two states (Open and Closed) and a boolean Parameter (Open). To transition between screens you will only need to close the currently open Screen and open the desired one. To make this process easier we will create a small class, ScreenManager, that will keep track of the open Screen and take care of closing any already open Screen for us. The button that triggers the transition will only have to ask the ScreenManager to open the desired screen.
If you plan to support controller/keyboard navigation of UI elements, it's important to keep a few things in mind. It's important to avoid having Selectable elements outside the screen, since that would enable players to select offscreen elements; we can prevent that by deactivating any off-screen hierarchy. We also need to make sure that when a new screen is shown, we set an element from it as selected; otherwise the player would not be able to navigate to the new screen. We will take care of all that in the ScreenManager class below.
Let's take a look at the most common and minimal setup for the Animation Controller to do a Screen transition. The controller will need a boolean parameter (Open) and two states (Open and Closed); each state should have an animation with only one keyframe, this way we let the State Machine do the transition blending for us.
Now we need to create the transition between both states. Let's start with the transition from Open to Closed and set the condition properly: we want to go from Open to Closed when the parameter Open is set to false. Now we create the transition from Closed to Open and set the condition to go from Closed to Open when the parameter Open is true.
With all the above set up, the only thing missing is for us to set the parameter Open to true on the Animator of the screen we want to transition to, and Open to false on the Animator of the currently open screen. To do that, we will create a small script:
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.EventSystems;
using System.Collections;
using System.Collections.Generic;
public class ScreenManager : MonoBehaviour
{
    //Screen to open automatically at the start of the Scene
    public Animator initiallyOpen;
    //Currently Open Screen
    private Animator m_Open;
    //Hash of the parameter we use to control the transitions.
    private int m_OpenParameterId;
    //The GameObject Selected before we opened the current Screen.
    //Used when closing a Screen, so we can go back to the button that opened it.
    private GameObject m_PreviouslySelected;
    //Animator State and Transition names we need to check against.
    const string k_OpenTransitionName = "Open";
    const string k_ClosedStateName = "Closed";
    public void OnEnable()
    {
        //We cache the Hash of the "Open" Parameter, so we can feed it to Animator.SetBool.
        m_OpenParameterId = Animator.StringToHash (k_OpenTransitionName);
        //If set, open the initial Screen now.
        if (initiallyOpen == null)
            return;
        OpenPanel(initiallyOpen);
    }
    //Closes the currently open panel and opens the provided one.
    //It also takes care of handling the navigation, setting the new Selected element.
    public void OpenPanel (Animator anim)
    {
        if (m_Open == anim)
            return;
        //Activate the new Screen hierarchy so we can animate it.
        anim.gameObject.SetActive(true);
        //Save the currently selected button that was used to open this Screen. (CloseCurrent will modify it)
        var newPreviouslySelected = EventSystem.current.currentSelectedGameObject;
        //Move the Screen to front.
        anim.transform.SetAsLastSibling();
        CloseCurrent();
        m_PreviouslySelected = newPreviouslySelected;
        //Set the new Screen as the open one.
        m_Open = anim;
        //Start the open animation
        m_Open.SetBool(m_OpenParameterId, true);
        //Set an element in the new screen as the new Selected one.
        GameObject go = FindFirstEnabledSelectable(anim.gameObject);
        SetSelected(go);
    }
    //Finds the first Selectable element in the provided hierarchy.
    static GameObject FindFirstEnabledSelectable (GameObject gameObject)
    {
        GameObject go = null;
        var selectables = gameObject.GetComponentsInChildren<Selectable> (true);
        foreach (var selectable in selectables) {
            if (selectable.IsActive () && selectable.IsInteractable ()) {
                go = selectable.gameObject;
                break;
            }
        }
        return go;
    }
    //Closes the currently open Screen
    //It also takes care of navigation.
    //Reverting selection to the Selectable used before opening the current screen.
    public void CloseCurrent()
    {
        if (m_Open == null)
            return;
        //Start the close animation.
        m_Open.SetBool(m_OpenParameterId, false);
        //Reverting selection to the Selectable used before opening the current screen.
        SetSelected(m_PreviouslySelected);
        //Start Coroutine to disable the hierarchy when closing animation finishes.
        StartCoroutine(DisablePanelDelayed(m_Open));
        //No screen open.
        m_Open = null;
    }
    //Coroutine that will detect when the Closing animation is finished and then deactivate the
    //hierarchy.
    IEnumerator DisablePanelDelayed(Animator anim)
    {
        bool closedStateReached = false;
        bool wantToClose = true;
        while (!closedStateReached && wantToClose)
        {
            if (!anim.IsInTransition(0))
                closedStateReached = anim.GetCurrentAnimatorStateInfo(0).IsName(k_ClosedStateName);
            wantToClose = !anim.GetBool(m_OpenParameterId);
            yield return new WaitForEndOfFrame();
        }
        if (wantToClose)
            anim.gameObject.SetActive(false);
    }
    //Make the provided GameObject selected
    //When using the mouse/touch we actually want to set it as the previously selected and
    //set nothing as selected for now.
    private void SetSelected(GameObject go)
    {
        //Select the GameObject.
        EventSystem.current.SetSelectedGameObject(go);
        //If we are using the keyboard right now, that's all we need to do.
        var standaloneInputModule = EventSystem.current.currentInputModule as StandaloneInputModule;
        if (standaloneInputModule != null && standaloneInputModule.inputMode == StandaloneInputModule.InputMode.Buttons)
            return;
        //Since we are using a pointer device, we don't want anything selected.
        //But if the user switches to the keyboard, we want to start the navigation from the provided game object.
        //So here we set the current Selected to null, so the provided gameObject becomes the Last Selected in the EventSystem.
        EventSystem.current.SetSelectedGameObject(null);
    }
}
Let's hook up this script: create a new GameObject (we can rename it "ScreenManager", for instance) and add the component above to it. You can assign an initial screen to it; this screen will be open at the start of your scene.
Now for the final part, let's make the UI buttons work. Select the button that should trigger the screen transition and add a new action under the On Click () list in the Inspector.
Drag the ScreenManager GameObject we just created to the ObjectField, select ScreenManager->OpenPanel (Animator) in the dropdown, and drag and drop the panel you want to open when the user clicks the button onto the last ObjectField.
This technique only requires each screen to have an AnimatorController with an Open parameter and a Closed state to work - it doesn't matter how your screen or State Machine are constructed. This technique also works well with nested screens, meaning you only need one ScreenManager for each nested level.
The State Machine we set up above has the default state of Closed, so all of the screens that use this controller start as closed. The ScreenManager provides an initiallyOpen property so you can specify which screen is shown first.
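The transition can also be triggered from code instead of from a Button's On Click () list, since OpenPanel (Animator) is public. The small component below is a sketch of that idea; the field names and the Escape-key shortcut are illustrative assumptions, not part of the tutorial above.
using UnityEngine;
public class OpenScreenShortcut : MonoBehaviour
{
    //Reference to the ScreenManager created in this tutorial.
    public ScreenManager screenManager;
    //Animator of the screen this shortcut should open (hypothetical example).
    public Animator targetScreen;
    void Update()
    {
        //Ask the ScreenManager to open the target screen when Escape is pressed.
        if (Input.GetKeyDown(KeyCode.Escape))
            screenManager.OpenPanel(targetScreen);
    }
}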
https://docs.unity3d.com/560/Documentation/Manual/HOWTO-UIScreenTransition.html
2020-02-16T23:20:48
CC-MAIN-2020-10
1581875141430.58
[]
docs.unity3d.com
CleanSPDA
Description
SharePoint DataArchiver Cleanup Tool. This is a tool to clean up certain custom list columns created by the Commvault Qinetix SharePoint DataArchiver product. Currently, two columns, "Archived" and "Commvault Stub", are created on every SharePoint list (DocLibrary, PicLibrary, etc.) processed by the DataArchiver. On uninstall of the DataArchiver product, these columns are not removed automatically by the uninstaller. The CleanSPDA tool provides this facility as a command-line utility.
Platform
SharePoint DataArchiver
Usage
The usage of cleanSPDA.exe is:
CleanSPDA [-force] [ [-vs <vs url>] | [-site <site url>] | [-web <web url>]] [-lib <library name>] [-col <column name>] [-log <logfile path>]
The parameters are:
- force : optional; if specified, non-empty lists will be processed; by default, non-empty lists are skipped.
- target options: the following three mutually exclusive options provide the tool with the starting point of processing:
  - -vs <vs url> : optional. Specifies a virtual server. All lists contained by this virtual server will be processed.
  - -site <site url> : optional. Specifies a top-level site collection. All lists contained by this site collection will be processed.
  - -web <web url> : optional. Specifies a subsite. All lists contained by this subsite will be processed.
  If none of the above target options is specified, the entire web server will be processed.
- -lib <library name> : optional. Specifies a SharePoint list to be processed. If not specified, all lists will be processed.
- -col <column name> : optional. Specifies the column names to remove; multiple -col instances can be specified. If not specified, defaults to the columns "Archived" and "Commvault Stub".
- -log <logfile path> : optional. Specifies the full path of the log file created. If not specified, the log file used is ./cleanSPDA.log.
Special Notes for SharePoint V3/MOSS 2007
If this tool is being installed on a SharePoint platform built on .NET Framework 2.0, the file CleanSPDA.exe.config also needs to be copied into the folder where the binary CleanSPDA.exe is placed. If this tool is being installed on a SharePoint platform built on .NET Framework 1.x, then CleanSPDA.exe.config should not be present in the folder where the binary CleanSPDA.exe is placed.
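Example
A hypothetical invocation that processes one site collection, forces processing of non-empty lists, removes the two default columns explicitly, and writes to a custom log; the URL and paths are placeholder values:
CleanSPDA -force -site http://sharepoint/sites/example -col "Archived" -col "Commvault Stub" -log c:\logs\cleanSPDA.log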
http://docs.snapprotect.com/netapp/v10/article?p=features/resource_pack_utils/readmes/rm_win_clean_spda.htm
2017-09-19T15:26:59
CC-MAIN-2017-39
1505818685850.32
[]
docs.snapprotect.com
Quickstart Intro
In this quick tutorial we will show you how to set up two Reach devices as a base and a rover with a correction link over Wi-Fi.
Tip: If you encounter any issues performing these steps, we will be happy to help at our community forum.
This tutorial only covers one use case. To get more information, follow these links:
Powering up
Take the Micro-USB <--> USB cable that comes with the package. Plug the Micro-USB end of the cable into the Micro-USB port on Reach and plug the other end into a 5V power source such as a USB power bank, USB wall adapter or USB port of a computer.
Danger: Do not plug in two power supplies at the same time, as it may damage the device. You can read more on power supply here.
Connecting and placing GPS antenna
Plug the antenna cable into the MCX socket on Reach. Place the antenna on a ground plane. It could be a cut piece of metal > 100 mm in diameter, the roof of a car or the metal roof of a building.
Warning: There should be no obstacles near the antenna that could block the sky view higher than 30 degrees above the horizon. Do not test the device indoors or near buildings, and do not cover the sky view for the antennas with laptops, cars or yourself. RTK requires good satellite visibility and reception. A guide on how to properly place the antennas is available in the Antenna Placement section.
Connecting to Reach
When Reach is powered for the first time, it will create a Wi-Fi hotspot. Open the list of Wi-Fi networks on your smartphone, tablet or laptop. Connect to a network named reach:xx:xx (e.g. reach:66:ac). Type the network password: emlidreach.
Setting up Wi-Fi
After connecting to the network hosted by Reach, open a web browser on your smartphone, tablet or laptop.
- Type either or in the address bar and you will see ReachView Updater.
Note: If your interface looks different, you need to reflash the Reach device with the v2.3 image by following this guide. You only need to do this if your device was purchased before 1 March 2017.
- Press the plus button and enter your Wi-Fi network name, security type and password. Press the Save button.
- Press on your added network and click Connect.
- After that, the Reach device will attempt to connect to your Wi-Fi network.
Tip: If your device did not connect to the Wi-Fi network, it will switch to hotspot mode. You can find Reach on or. Check your network name and password and try again.
Accessing Reach device in a network
After connecting the Reach device to an existing Wi-Fi network, you will need to identify its IP. For this you can use:
Reach will show up as a "Murata Manufacturing" device in these apps. Put the Reach IP in the address bar and go. Read more on resolving IP addresses in the ReachView section.
- After that you will see ReachView Updater again, which will install the latest updates. Press the Reboot and go to the app! button. Wait while the device reboots. In about a minute, refresh the page with the ReachView app.
Working with ReachView app
Interface walkthrough
The ReachView menu consists of 9 tabs, but we only need three of them to start work:
Status tab, which shows current satellite levels, RTK parameters, coordinates and map.
Base mode tab, which is used to set correction output type, base coordinates and RTCM3 messages.
Correction input tab, which is used to set base correction for the rover.
Setting up base station
Connect to the Reach you want to use as a base. Navigate to the Base mode tab and turn on the Correction output toggle. Wait until the base averages its position in the Base coordinates box.
You can see a bar chart with satellite levels, RTK parameters, positioning mode and solution status, the current coordinates of the rover and base in LLH format, velocity and a map. In this quick tutorial, positioning mode is set to "Kinematic", which is the main RTK mode.
- If everything has been set up correctly, Solution status will be Float and you should see grey bars near the satellite level bars. Float means that base corrections are now taken into consideration and positioning is relative to the base coordinates, but the integer ambiguity is not resolved. If you see "-" or Single in the Solution status box at this step, that means that some settings are incorrect. "-" means there is no information for the software to process: either not enough time has passed or the antenna is not placed correctly. Single means that the rover has found a solution relying on its own receiver and base corrections are not taken into consideration yet. If the rover is started in single mode, this will also be the result.
- If everything has been set up correctly and the base and rover have good sky visibility, you should see Solution status change to Fix in a few minutes. Fix means that positioning is relative to the base and the integer ambiguity is resolved. Now you can see green points on the map below. Orange points show a Float solution, red a Single solution. You're ready to go!
More reading
Congratulations on finishing the quickstart tutorial! Continue to learn about setting up different correction links in the ReachView section.
https://docs.emlid.com/reach/quickstart/
2017-09-19T15:03:22
CC-MAIN-2017-39
1505818685850.32
[array(['../img/reach/quickstart/reach_view_updater_main.png', None], dtype=object) array(['../img/reach/quickstart/reach_view_updater_wifi.png', None], dtype=object) array(['../img/reach/quickstart/reach_view_updater_wifi_connect.png', None], dtype=object) array(['../img/reach/quickstart/fing.png', None], dtype=object) array(['../img/reach/quickstart/reach_view_updater_finish.png', None], dtype=object) array(['../img/reach/quickstart/reach_view_loading.png', None], dtype=object) array(['../img/reach/quickstart/reach_view_status_menu.png', None], dtype=object) array(['../img/reach/quickstart/reach_view_base_mode_menu.png', None], dtype=object) array(['../img/reach/quickstart/reach_view_correction_input_tab.png', None], dtype=object) array(['../img/reach/quickstart/reach_view_correction_input_tcp.png', None], dtype=object) array(['../img/reach/quickstart/reach_view_status_menu_correction.png', None], dtype=object) array(['../img/reach/quickstart/reach_view_status_menu_fix.png', None], dtype=object) ]
docs.emlid.com
CvSpf
Description
This utility displays Windows system protected files (SPF).
Usage
CvSpf /? : get usage help
CvSpf /f c:\windows\my.dll : check whether the passed-in file is protected
CvSpf /p c:\MyFolder : print any protected files in the root of MyFolder
CvSpf /p c:\MyFolder /r : print any protected files in MyFolder, recursing into MyFolder
CvSpf /a : print all SPF files
CvSpf /e : print only SPF files that exist on the local machine
CvSpf /c : print just the counts of SPF files
http://docs.snapprotect.com/netapp/v10/article?p=features/resource_pack_utils/readmes/rm_win_cvspf.htm
2017-09-19T15:26:32
CC-MAIN-2017-39
1505818685850.32
[]
docs.snapprotect.com
To collect data from an object, you might need to add an object or edit an existing object in your environment. For example, you might need to add objects for an adapter that does not support autodiscovery, or change the maintenance schedule of an existing object.
Where You Find Manage Objects
In the left pane, select. Click the plus sign to add an object or the pencil to edit the selected object. Items that appear in the window depend on the object that you are editing. Not all options can be changed.
https://docs.vmware.com/en/vRealize-Operations-Manager/6.5/com.vmware.vcom.core.doc/GUID-AEF85BDB-F8E1-4B67-BAD1-97F5C0D827AE.html
2017-09-19T15:38:09
CC-MAIN-2017-39
1505818685850.32
[]
docs.vmware.com
Slack Users
- How do I schedule messages for future delivery in Slack?
- What can I do with Convergely?
- What is Convergely?
- How do I create a poll in Slack?
- What permissions does Convergely require from my Slack account?
- Keo Slash Commands
- How do I create tasks in Slack?
- How many polls can I create?
- Who is Keo?
- Using the "schedule a message" feature, who can I send my messages to?
- Where can I see a list of the messages that I scheduled through the app?
- What type of questions can I ask using the polls feature?
- What can Keo do?
- How do I set up Convergely with my Slack account?
- Can I choose where I want to use Annotate?
- What information from my Slack Account is being used?
http://docs.convergely.com/category/139-slack-users
2017-09-19T15:14:11
CC-MAIN-2017-39
1505818685850.32
[]
docs.convergely.com
Create a Virtual Machine - VM Lifecycle Management - Microsoft Hyper-V
Overview
The Virtual Machines module allows you to create and customize your own virtual machine. In addition, you can make snapshots and restore files from backups of your virtual machine. You must use the CommCell Console that is associated with the Web Console to back up your virtual machine. Only you and your administrator can view the contents of your virtual machine.
Prerequisites
Obtain the Naming Pattern for naming virtual machines from your administrator. You will need this information as you proceed.
Create a Virtual Machine
- Log in to the Web Console. For instructions, see Accessing the Web Console.
- Click Virtual Machines.
- At the top-right of the page, click Create Virtual Machine.
- From the Select Virtual Machine Pool list, select the name of a Hyper-V virtual machine pool.
- Select a template from the list, and then click Next.
- If you do not see a User Details page, continue to the next step. If you see the User Details page, enter a password for the virtual machine, and then click Next. The user name is entered by default.
- On the Resources page, define the memory size, number of CPUs, and number of NICs for the virtual machine.
  - In the Memory box, enter the amount of memory for the virtual machine. You can also drag the customization bar to refine the memory amount.
  - In the Number of CPUs box, enter the number of CPUs for the virtual machine.
  - In the Number of NICs box, enter the number of NICs for the virtual machine.
  - Click Next.
- On the Disks page, in the Max Disk Size box, enter the amount of disk space for the virtual machine, and then click Next. You can also drag the customization bar to refine the disk size.
- On the Summary page, click Finish.
Once the virtual machine is created, it will appear in the My Virtual Machines list. The creation process may take several moments to complete. An email that confirms the creation of your virtual machine is automatically sent to you.
Immediately after you create a new virtual machine, you cannot select it or run operations, such as Start and Stop, on it. If you log out of the Web Console and then log in again, the permissions are reset. You can then select and run operations on the newly created virtual machine.
Congratulations - You have successfully created your first virtual machine. If you want to further explore this feature's capabilities, read the Advanced section of this documentation by clicking Next.
http://docs.snapprotect.com/netapp/v10/article?p=products/vs_ms/vm_provisioning/vmlm_hyper_v_user_create_vm.htm
2017-09-19T15:23:35
CC-MAIN-2017-39
1505818685850.32
[]
docs.snapprotect.com
catalog.get_project (SSISDB Database)
Retrieves the binary stream of a project that has been deployed to the Integration Services server.
Syntax
catalog.get_project [ @folder_name = ] folder_name , [ @project_name = ] project_name
Arguments
[ @folder_name = ] folder_name
The name of the folder that contains the project. folder_name is nvarchar(128).
[ @project_name = ] project_name
The name of the project. project_name is nvarchar(128).
Return Code Value
0 (success)
Result Sets
The binary stream of the project is returned as varbinary(MAX). No results are returned if the folder or project is not found.
Permissions
This stored procedure requires one of the following permissions:
- READ permissions on the project
- Membership to the ssis_admin database role
- Membership to the sysadmin server role
Errors and Warnings
The following list describes some conditions that may cause the get_project stored procedure to raise an error:
- The project does not exist
- The folder does not exist
- The user does not have the appropriate permissions
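Example
The following call is a sketch that retrieves the binary stream of a hypothetical project named LoadDW in a folder named ETL; both names are placeholders, not values from this reference:
EXEC [SSISDB].[catalog].[get_project] @folder_name = N'ETL', @project_name = N'LoadDW';
The returned varbinary(MAX) stream can be saved to disk as an .ispac file by a client application.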
https://docs.microsoft.com/it-it/sql/integration-services/system-stored-procedures/catalog-get-project-ssisdb-database
2017-09-19T15:13:37
CC-MAIN-2017-39
1505818685850.32
[array(['../../includes/media/yes.png', 'yes'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object)]
docs.microsoft.com
App creation¶
Mayan EDMS apps are essentially Django apps with some extra code to register navigation, permissions, and other relationships.
App modules¶
__init__.py
Should be empty if possible. No initialization code should go here; use the ready() method of the MayanAppConfig class in the apps.py module.
admin.py
Standard Django app module to define how models are presented in the admin interface.
api_views.py
REST API views go here. Mayan EDMS uses Django REST Framework API view classes.
apps.py
Contains the MayanAppConfig subclass as required by Django 1.7 and up. This is the place to define the app name and translatable verbose name, as well as code to be executed when the modules of the app are ready.
classes.py
Holds Python classes to be used internally or externally: any class defined by the app that is not a model.
events.py
Defines event class instances that are later committed to a log by custom code.
exceptions.py
Custom exceptions defined by the app.
fields.py
Place any custom form field classes you define here.
forms.py
Standard Django app module that holds custom form classes.
handlers.py
Contains the signal handlers: functions that will process a given signal emitted from this or other apps. Connect the handler functions to the corresponding signals in the ready() method of the MayanAppConfig subclass in apps.py.
links.py
Defines the links to be used by the app. Imports only from the navigation app and the local permissions.py file.
literals.py
Stores magic numbers, module choices (if static), settings defaults, and constants. Should contain only capital-case variables. Must not import from any other module.
managers.py
Standard Django app module that holds custom model managers. These act as model class methods to perform actions on a series of model instances or utilitarian actions on external model instances.
models.py
Standard Django app module that defines the ORM persistent data schema.
permissions.py
Defines the permissions used to validate user access by links and views. Imports only from the permissions app. Link or view conditions, such as testing for the is_staff or is_super_user flag, are defined in this same module.
runtime.py
Use this module when you need the same instance of a class for the entire app. This module acts as a shared memory space for the other modules of the app or other apps.
serializers.py
Holds the Django REST Framework serializers used by the api_views.py module.
settings.py
Defines the configuration settings instances that the app will use.
signals.py
Any custom-defined signal goes here.
statistics.py
Provides functions that compute statistical information on the app's data.
tasks.py
Code to be executed in the background or as an out-of-process action.
tests/ directory
Holds test modules. There should be one test_*.py module for each aspect being tested, for example: test_api.py, test_views.py, test_parsers.py, test_permissions.py. Any shared constant data used by the tests should be added to tests/literals.py.
utils.py
Holds utilitarian code that doesn't fit in any other app module or that is used by several modules in the app; anything used internally by the app that is not a class or a literal (should be as little as possible).
widgets.py
HTML widgets go here. This should be the only place with presentation directives in the app (aside from the templates).
Views¶
The module common.generics provides the custom generic class-based views to be used.
The basic views used to create, edit, view, and delete objects in Mayan EDMS are SingleObjectCreateView, SingleObjectDetailView, SingleObjectEditView, and SingleObjectListView. These views handle aspects relating to view permissions, object permissions, post-action redirection, and template context generation.
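A minimal sketch of what a list view built on these generics might look like; the model, the permission, and the object_permission attribute name are hypothetical, used only to illustrate the pattern:

# Hypothetical app view using the template's generic views.
from common.generics import SingleObjectListView

from .models import Vehicle                        # hypothetical model
from .permissions import permission_vehicle_view   # hypothetical permission


class VehicleListView(SingleObjectListView):
    model = Vehicle
    # Attribute name assumed; these views filter results by object
    # permissions as described above.
    object_permission = permission_vehicle_view

    def get_extra_context(self):
        # One of the aspects these views handle: template context generation.
        return {'title': 'Vehicles'}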
https://mayan.readthedocs.io/en/latest/topics/app_creation.html
2017-03-23T04:12:06
CC-MAIN-2017-13
1490218186774.43
[]
mayan.readthedocs.io
Part 3 - iOS Backgrounding Techniques
Overview
Let's examine the different ways to perform background processing on iOS in more detail. In the following sections, we will cover how to:
- Register a task to run in the background.
- Register an entire application for backgrounding privileges.
- Update an application's content from the background.
In this guide, we will explore the following iOS features alongside the existing backgrounding options:
- Opportunistic Background Tasks - Preserve battery life by running background tasks in opportunistic chunks when the device is awake for other processing.
- Background Transfer Service - Reliably upload and download files regardless of network status or file size.
- Background Fetch - Refresh an application from the background at system-determined intervals.
- Remote Notifications - Use push notifications to trigger content updates in the background before the user opens the application, with an option to notify the user or update silently.
- Background UI Updates - Prepare the application UI for the user, and update the application's snapshot, all from the background.
Sections
- iOS Backgrounding with Tasks
- Registering Apps to Run in the Background
- Updating an Application in the Background
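As a concrete starting point for the first section, here is a minimal Xamarin.iOS (C#) sketch of registering a finite-length background task; it assumes using UIKit; and using System.Threading.Tasks;, and SaveState() is a placeholder for your own work:

nint taskId = 0;

// Ask iOS for extra execution time when the app moves to the background.
taskId = UIApplication.SharedApplication.BeginBackgroundTask(() => {
    // Expiration handler: the system is about to reclaim our time.
    UIApplication.SharedApplication.EndBackgroundTask(taskId);
});

Task.Run(() => {
    SaveState(); // placeholder for the work you need to finish
    UIApplication.SharedApplication.EndBackgroundTask(taskId);
});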
https://docs.mono-android.net/guides/ios/application_fundamentals/backgrounding/part_3_ios_backgrounding_techniques/
2017-03-23T04:22:48
CC-MAIN-2017-13
1490218186774.43
[]
docs.mono-android.net
Release Note 20150113 Table of Contents - Features & Improvements - Bug Fixes Features & Improvements This is a summary of the new features and improvements introduced in this release: Backend: Upgraded Presto to v0.89 We upgraded the Presto Engine to the currently latest version, v0.89. Please refer to the official Presto v0.89 release note page for more information on what changed. Backend: Hive's INSERT INTO Default Time Field Hive queries taking advantage of the INSERT INTO clause (to write the result back into a Treasure Data table more efficiently) now use the query's scheduled time, or the current time if not available, if a time column is not found in the result produced by the query. Previously it was mandatory for the query to contain the time column in order for the INSERT INTO clause to carry on successfully and be able to write the result into a Treasure Data table. Bug Fixes These are the most important Bug Fixes made in this release: APIs: Performance Improvement on User Model Update - [Problem] When a user with Administrative permissions creates a database, it is not recorded as its owner. [Solution] This was found to be an API issue. Only restricted users and the account Owner were recorded as owners of a database; regular Administrators (non-Owner administrators) were not. We modified the logic to cover this use case as well and mark the Administrator user as owner of the databases it created. Console: Tutorial Not Functioning - [Problem] After last release's deployment, the tutorial was no longer working. [Solution] This was due to last release's refactoring of the Databases and Tables pages, in which the hooks required by the Tutorial flow to function properly were modified inadvertently. We modified the hooks to match the Databases and Tables hooks, and that fixed the tutorial flow. Backend: Free User's Query Failures - [Problem] On January 6th, queries issued by accounts on a Free plan failed because of 'Array Out of Bound' exceptions. [Solution] This problem was due to a fix introduced for the Treasure Data time index filtering capability on the Hive version on the cluster free accounts are allocated to. When the problem was reported, the fix was reverted to mitigate the impact. Later in the week the code was modified to address the problem causing the exception and solve the initial limitation affecting Free accounts. Backend: Presto Conversion of Floating Point Numbers - [Problem] Certain Presto WHERE clauses where the comparison value is a floating point number smaller than 1 may see the comparison value truncated to 0. [Solution] This was found to be a problem in the Presto query optimizer for time index filtering, which casts the floating point comparison argument to integer, thus making any number smaller than 1 (e.g. 0.67) a 0. We modified the query optimizer's logic to not attempt to optimize WHERE clauses where the reference column/field is not the time column. Backend: Treasure Data Result Exports Never Complete If the Session Is Deleted - [Problem] When a Bulk Import session associated with a Treasure Data Export from a query is forcibly deleted, the Result Export keeps running indefinitely and never completes. [Solution] This problem is due to the Bulk Import Commit worker retrying to commit the Bulk Import when the session is not found. This is undesirable. We modified the logic to avoid retrying when the session is not found (that is, was deleted) or the status of the session is already 'committed'.
https://docs.treasuredata.com/articles/releasenote-20150113
2017-03-23T04:20:41
CC-MAIN-2017-13
1490218186774.43
[]
docs.treasuredata.com
dumpscript¶ The dumpscript command generates a standalone Python script that will repopulate the database using objects. The advantage of this approach is that it is easy to understand, and more flexible than directly populating the database, or using XML. Why?¶ There are a few benefits to this: - less drama with model evolution: foreign keys handled naturally without IDs, new and removed columns are ignored - edit the script to create 1,000s of generated entries using for loops, generated names, python modules etc. For example, an edited script can populate the database with test data: for i in xrange(2000): poll = Poll() poll.question = "Question #%d" % i poll.pub_date = date(2001,01,01) + timedelta(days=i) poll.save() Real databases will probably be bigger and more complicated so it is useful to enter some values using the admin interface and then edit the generated scripts. Features¶ - ForeignKey and ManyToManyFields (using python variables, not object IDs) - Self-referencing ForeignKey (and M2M) fields - Sub-classed models - ContentType fields and generic relationships (but see issue 43) - Recursive references - AutoFields are excluded - Parent models are only included when no other child model links to it - Individual models can be referenced What it can't do (yet!)¶ - Ideal handling of generic relationships (ie no AutoField references): issue 43 - Intermediate join tables: issue 48 - GIS fields: issue 72 How?¶ To dump the data from all the models in a given Django app (appname): $ ./manage.py dumpscript appname > scripts/testdata.py To dump the data from just a single model (appname.ModelName): $ ./manage.py dumpscript appname.ModelName > scripts/testdata.py To reset a given app, and reload with the saved data: $ ./manage.py reset appname $ ./manage.py runscript testdata Note: Runscript needs scripts to be a module, so create the directory and an __init__.py file. Caveats¶ Naming conflicts¶ Please take care that when naming the output files these filenames do not clash with other names in your import path. For instance, if the appname is the same as the script name, an ImportError can occur because rather than importing the application modules it tries to load the modules from the dumpscript file itself. Examples: # Wrong $ ./manage.py dumpscript appname > dumps/appname.py # Right $ ./manage.py dumpscript appname > dumps/appname_all.py # Right $ ./manage.py dumpscript appname.Somemodel > dumps/appname_somemodel.py
http://django-extensions.readthedocs.io/en/latest/dumpscript.html
2017-03-23T04:11:42
CC-MAIN-2017-13
1490218186774.43
[]
django-extensions.readthedocs.io
We provide your instances with IP addresses and DNS hostnames. These can vary depending on whether you launched the instance in the EC2-Classic platform or in a virtual private cloud (VPC). For information about the EC2-Classic and EC2-VPC platforms, see Supported Platforms. For information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Contents You can use private IP addresses and internal DNS hostnames for communication between instances in the same network (EC2-Classic or a VPC). Private IP addresses are not reachable from the Internet. For more information about private IP addresses, see RFC 1918. When you launch an instance, we allocate a private IP address for the instance using DHCP. Each instance that you launch into a VPC has a default network interface. The network interface specifies the primary private IP address for the instance. If you don't select a primary private IP address, we select an available IP address in the subnet's range. You can specify additional private IP addresses, known as secondary private IP addresses. Unlike primary private IP addresses, secondary private IP addresses can be reassigned from one instance to another. For more information, see Multiple Private IP Addresses. Each instance is provided an internal DNS hostname that resolves to the private IP address of the instance in EC2-Classic or your VPC. We can't resolve the DNS hostname outside the network that the instance is in. If you create a custom firewall configuration in EC2-Classic, you must allow inbound traffic from port 53 (with a destination port from the ephemeral range) from the address of the Amazon DNS server; otherwise, internal DNS resolution from your instances fails. If your firewall doesn't automatically allow DNS query responses, then you'll need to allow traffic from the IP address of the Amazon DNS server. To get the IP address of the Amazon DNS server on Linux, use the following command: grep nameserver /etc/resolv.conf. For instances launched in EC2-Classic, a private IP address is associated with the instance until it is stopped or terminated. For instances launched in a VPC, a private IP address remains associated with the network interface when the instance is stopped and restarted, and is released when the instance is terminated. You can use public IP addresses and external DNS hostnames for communication between your instances and the Internet or other AWS products, such as Amazon Simple Storage Service (Amazon S3). Public IP addresses are reachable from the Internet. When you launch an instance in EC2-Classic, we automatically assign a public IP address to the instance. You cannot modify this behavior. When you launch an instance into EC2-VPC, you can control whether your instance receives a public IP address. The public IP address is assigned to the eth0 network interface (the primary network interface). When you launch an instance into a VPC, your subnet has an attribute that determines whether instances launched into that subnet receive a public IP address. By default, we don't automatically assign a public IP address to an instance that you launch in a nondefault subnet. Therefore, if you want an instance in a nondefault subnet to communicate with the Internet, you must either enable the public IP addressing feature during launch, or associate an Elastic IP address with the primary or any secondary private IP address assigned to the network interface for the instance. 
You can also modify the public IP addressing attribute of a nondefault subnet to specify that instances that are launched into that subnet should receive a public IP address. For more information, see Modifying Your Subnet's Public IP Addressing Behavior in the Amazon VPC User Guide. Note T2 instance types can only be launched into a VPC. If you use the Amazon EC2 launch wizard to launch a T2 instance type in your EC2-Classic account, and you have no VPCs, the launch wizard creates a nondefault VPC for you, and modifies the subnet's attribute to automatically request a public IP address for your instance. For more information about T2 instance types, see T2 Instances. A public IP address is assigned to your instance from Amazon's pool of public IP addresses, and is not associated with your AWS account. When a public IP address is disassociated from your instance, it is released back into the public IP address pool, and you cannot reuse it. You cannot manually associate or disassociate a public IP address from your instance. Instead, in certain cases, we release the public IP address from your instance, or assign it a new one: We release the public IP address for your instance when it's stopped or terminated. Your stopped instance receives a new public IP address when it's restarted. We release the public IP address for your instance when you associate an Elastic IP address (EIP) with your instance, or when you associate an EIP with the primary network interface (eth0) of your instance in a VPC. When you disassociate the EIP from your instance, it receives a new public IP address. If the public IP address of your instance in a VPC has been released, it will not receive a new one if there is more than one network interface attached to your instance. If you require a persistent public IP address that can be associated to and from instances as you require, use an Elastic IP address (EIP) instead. You can allocate your own EIP, and associate it to your instance. For more information, see Elastic IP Addresses (EIP).. If your instance is in a VPC and you assign it an Elastic IP address, it receives a DNS hostname if DNS hostnames are enabled. For more information, see Using DNS with Your VPC in the Amazon VPC User Guide. The private IP address and public IP address for an instance are directly mapped to each other through network address translation (NAT). For more information about NAT, see RFC 1631: The IP Network Address Translator (NAT). Note Instances that access other instances through their public NAT IP address are charged for regional or Internet data transfer, depending on whether the instances are in the same region. The following table summarizes the differences between IP addresses for instances launched in EC2-Classic, instances launched in a default subnet, and instances launched in a nondefault subnet. You can use the EC2 console to determine the private IP addresses, public IP addresses, and EIPs of your instances. To determine your instance's IP addresses using the console Open the Amazon EC2 console. Click Instances in the navigation pane. Select an instance. The console displays information about the instance in the lower pane. Get the public IP address from the Public IP field. If an EIP has been associated with the instance, get the EIP from the Elastic IP field. Get the private IP address from the Private IP field. You can also determine the public and private IP addresses of your instances using instance metadata. For more information, see Instance Metadata and User Data. 
To determine your instance's IP addresses using instance metadata Connect to the instance. Use the following command to access the private IP address: $ GET http://169.254.169.254/latest/meta-data/local-ipv4 Use the following command to access the public IP address: $ GET http://169.254.169.254/latest/meta-data/public-ipv4 Note that if an EIP is associated with the instance, the value returned is that of the EIP. Important You can't manually disassociate the public IP address from your instance after launch. Instead, it's automatically released in certain cases, after which you cannot reuse it. For more information, see Public IP Addresses and External DNS Hostnames. If you require a persistent public IP address that you can associate or disassociate at will, assign an Elastic IP address to the instance after launch instead. For more information, see Elastic IP Addresses (EIP). To access the public IP addressing feature when launching an instance Open the Amazon EC2 console. Click Launch Instance. Choose an AMI and click its Select button, then choose an instance type and click Next: Configure Instance Details. On the Configure Instance Details page, select a VPC from the Network list. An Auto-assign Public IP list is displayed. Select Enable or Disable to override the default setting for the subnet. The following rules apply: A public IP address can only be assigned to a single network interface with the device index of eth0. The Auto-assign Public IP list is not available if you're launching with multiple network interfaces, and is not available for the eth1 network interface. You can only assign a public IP address to a new network interface, not an existing one. Follow the steps on the next pages of the wizard to complete your instance's setup. For more information about the wizard configuration options, see Launching an Instance. On the final Review Instance Launch page, review your settings, and then click Launch to choose a key pair and launch your instance. On the Instances page, select your new instance and view its public IP address in the Public IP field in the details pane. The public IP addressing feature is only available during launch. However, whether you assign a public IP address to your instance during launch or not, you can associate an Elastic IP address with your instance after it's launched. For more information, see Elastic IP Addresses (EIP). You can also modify your subnet's public IP addressing behavior. For more information, see Modifying Your Subnet's Public IP Addressing Behavior. To enable or disable the public IP addressing feature, use one of the methods in the table below. For more information about these command line interfaces, see Accessing Amazon EC2.
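For example, with the AWS CLI (one of the documented command line methods), you can enable the feature at launch; the AMI, subnet, and key pair names below are placeholders:

$ aws ec2 run-instances --image-id ami-1a2b3c4d --instance-type t2.micro --subnet-id subnet-1a2b3c4d --key-name MyKeyPair --associate-public-ip-address

Use --no-associate-public-ip-address instead to disable the feature for that launch.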
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html
2014-10-20T12:57:51
CC-MAIN-2014-42
1413507442900.2
[]
docs.aws.amazon.com
A null pointer dereference is a prime example of a bug (hopefully the urgency is self-evident). Potential bugs are a bit more subtle, but no less important. Instances of this sin are tracked with the issues mechanism. Typically bugs and potential bugs will show up as Blocker or Critical issues, although that's fully configurable. Coding Standards Breaches: Use the differential views to monitor new issues. You can set your coding standards (active coding rules, severity, etc.) through the Quality Profiles administration page.
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=229738267
2014-10-20T13:22:50
CC-MAIN-2014-42
1413507442900.2
[]
docs.codehaus.org
This document provides detailed instructions only for users who updated to 3.1.2. Please check the version number in your installation before following these instructions. The version is visible on the bottom right of your admin view or can be found by going to the System>System Information menu. If you need additional support or have problems updating please visit the Joomla forums.
http://docs.joomla.org/index.php?title=J3.1:Detailed_instructions_for_updating_from_3.1.2_to_3.1.4&oldid=101859
2014-10-20T13:48:14
CC-MAIN-2014-42
1413507442900.2
[]
docs.joomla.org
javax.persistence.EntityManager getTargetEntityManager() throws IllegalStateException
Throws:
IllegalStateException - if no underlying EntityManager is available
http://docs.spring.io/spring/docs/3.0.0.RC1/javadoc-api/org/springframework/orm/jpa/EntityManagerProxy.html
2014-10-20T13:07:34
CC-MAIN-2014-42
1413507442900.2
[]
docs.spring.io
I can't send text messages Depending on your wireless service plan, this feature might not be supported. Try the following actions: - Verify that your BlackBerry smartphone is connected to the wireless network. If you're not in a wireless coverage area, your smartphone should send the messages when you return to a wireless coverage area. - Verify that fixed dialing is turned off.
http://docs.blackberry.com/en/smartphone_users/deliverables/38326/1489020.jsp
2014-10-20T13:15:18
CC-MAIN-2014-42
1413507442900.2
[]
docs.blackberry.com
A half-normal continuous random variable. Continuous random variables are defined from a standard form and may require some shape parameters to complete their specification. Any optional keyword parameters can be passed to the methods of the RV object as given below:
Notes
The probability density function for halfnorm is:
halfnorm.pdf(x) = sqrt(2/pi) * exp(-x**2/2)
for x > 0.
Examples
>>> from scipy.stats import halfnorm
>>> numargs = halfnorm.numargs
>>> [ ] = [0.9,] * numargs
>>> rv = halfnorm()
>>> prb = halfnorm.cdf(x, )
>>> h = plt.semilogy(np.abs(x - halfnorm.ppf(prb, )) + 1e-20)
Random number generation
>>> R = halfnorm.rvs(size=100)
Methods
http://docs.scipy.org/doc/scipy-0.11.0/reference/generated/scipy.stats.halfnorm.html
2014-10-20T13:05:30
CC-MAIN-2014-42
1413507442900.2
[]
docs.scipy.org
Do you spend a lot of time preparing your data for use? Flywheel Gears reduce data-management overhead by automating many routine, time-consuming tasks. Gears are applications that run in Flywheel. They come in two flavors depending on their functionality: Utility or Analysis. For example, to convert all files from DICOM to NIfTI format or produce a report on QA metrics like displacement and signal spikes, you can use a Utility Gear. Or use an Analysis Gear to run popular tools such as Freesurfer. Learn more about how Gears can reduce your data-management overhead and allow you to focus on the research. To view the available Gears in your environment, click Installed Gears in the left navigation menu. Site Admins can add more gears in two ways: The Gear Exchange (only Site Admins can install Gear Exchange gears). Develop your own Gear: You can also create your own custom gear to upload to your site. See our Gear Developer tutorial for more information on how to build your first gear. Navigate to a session, and click the checkbox. Select an acquisition. Tip: Batch run Gears. To run a Gear against multiple sessions or acquisitions at the same time, use batch processing. Click Run Gear. Select Utility or Analysis. Choose a gear from the list. Select any inputs. Tip: You can also add a tag to the Gear job. Tagging Gear jobs is helpful for tracking. Select the Configuration tab to edit any additional settings. Click Run Gear. View the Gear's progress by clicking the Provenance tab. If you do not see the gear, click Refresh. Once the job is complete you can view detailed logs, including the input and output files for the Gear. If you ran an Analysis Gear, you can view the output in the Analyses tab. However, manually running a gear each time you have new data can be time consuming. To simplify running Gears, create Gear rules to automate jobs. Gear Rules allow you to automate a project's data processing. Whenever the conditions of the gear rule are met, the rule triggers a specific gear to run. For example, whenever a new DICOM file is added to the project, Flywheel will automatically run the DICOM MR Classifier. Correctly classifying data and extracting metadata greatly increases the value of raw data and allows you to do more with it inside Flywheel. For example, now that your data is properly classified, you can create more gear rules to automate converting DICOM files to NIfTI using the dcm2niix: DICOM to NIfTI conversion Gear.
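Gears can also be launched programmatically; a hypothetical sketch using the Flywheel Python SDK, where the API key, project path, and gear name are placeholders and the run() signature should be checked against the SDK reference for your site:

import flywheel

fw = flywheel.Client('my-api-key')                      # placeholder key
gear = fw.lookup('gears/dcm2niix')                      # assumed gear name
session = fw.lookup('my-group/my-project/my-session')   # placeholder path

# Queue a job for every acquisition in the session.
for acq in session.acquisitions():
    job_id = gear.run(destination=acq)
    print('queued', job_id)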
https://docs.flywheel.io/hc/en-us/articles/360008411014-Using-gears
2022-05-16T18:57:58
CC-MAIN-2022-21
1652662512229.26
[]
docs.flywheel.io
Download model and job usage data
Overview
Modzy provides users the ability to view individual model usage data and job-specific data, and to download it for your own custom use case.
Filtering Data
To download data for jobs run for a particular model or user, first navigate to the top section named "Operations." Under Operations, select the "Jobs" tab. On this page, you have the option to search by several fields, including Job ID, Model Name, or User. You can also customize the date range and job status displayed. The default view shown is the most recent jobs run within your team, in descending order.
Downloading Job Results
To download job-specific data, click into any single job. This takes you to the Job Details page, with detailed information about the job, including model name, user, start time, queued time, processing time, and total time. Each input has a dropdown with corresponding outputs, along with the status of each input. To download all results for that job, click the "Download all results" button. This downloads a JSON file of each input/output within that completed job.
Downloading Job History
To download one or multiple jobs that you've filtered, select the boxes to the left of the job ID on the Jobs page. The modal at the bottom of the screen will then prompt you to download the job(s) in either JSON or CSV.
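The same job results can be fetched over the REST API; a hypothetical Python sketch where the route, header format, API key, and job ID are assumptions to verify against the Modzy API reference:

import requests

API_KEY = 'my-api-key'  # placeholder
JOB_ID = 'my-job-id'    # placeholder

resp = requests.get(
    f'https://app.modzy.com/api/results/{JOB_ID}',      # assumed route
    headers={'Authorization': f'ApiKey {API_KEY}'},     # assumed header
)
resp.raise_for_status()
results = resp.json()   # one entry per input in the completed job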
https://docs.modzy.com/docs/download-model-and-job-usage-data
2022-05-16T19:16:34
CC-MAIN-2022-21
1652662512229.26
[]
docs.modzy.com
This page describes the different ways you can control the camera and interact with content at runtime in the Collab Viewer Template, in both desktop and VR modes. Desktop Controls Toolbar You can use the toolbars at the top of the window to teleport, switch navigation modes, and save the current session. Switch between modes quickly with the following hotkeys: Activate Fly mode by pressing U on your keyboard. Activate Walk mode by pressing I on your keyboard. Activate Orbit mode by pressing O on your keyboard. Activate VR mode by pressing P on your keyboard. Common Desktop Controls The following controls work the same way in all desktop movement modes: Fly mode, Walk mode, and Orbit mode. Fly Mode Controls In addition to the Common Desktop Controls, the following controls work in Fly mode. Walk Mode Controls In addition to the Common Desktop Controls, the following controls work in Walk mode. Orbit Mode Controls In addition to the Common Desktop Controls, the following controls work in Orbit mode. VR Controls The Interaction Menu The Interaction Menu offers you several commands and modes for interacting with the content in your scene at runtime. To open the Interaction Menu, press Spacebar in any desktop mode. See VR Controls for how to open the menu on your VR controller. When an interaction mode is active, such as Xray Apply, the name of the mode appears in the lower right of the viewport. Explode Option The Interaction Menu displays an Explode command for the first Explode_BP Actor it finds in your Level, if any. You can use this option to toggle between the default and exploded positions for the Actors managed by that Explode_BP Actor. For details, see Setting Up Explode Animations. The Collab Viewer template includes an Explode_Gears Blueprint that explodes and rebuilds the transmission assembly inside the building: Testing in VR in the Unreal Editor When you launch a packaged or standalone version of the Collab Template with VR set up on your computer, you'll be able to switch to VR mode using the icon in the Toolbar. However, if you want to use VR controls while testing your Project in the Unreal Editor, you'll need to follow these steps: Find the BP_CollaborativeViewer_GameInstance Asset in the Content Browser under CollaborativeViewer/Blueprints/GameInstance. Double-click the Asset to open it in the Blueprint Editor. In the My Blueprint panel, select the NavigationMode variable. In the Details panel, under the Default Value section, select VR for the Navigation Mode option. Compile and Save the Blueprint. To launch the preview, use the drop-down arrow next to the Play button in the Toolbar to select VR Preview. Remember to turn this setting back off before you package your application! If you don't, the resulting package will not work as you expect.
https://docs.unrealengine.com/4.26/en-US/Resources/Templates/CollabViewer/Interaction/
2022-05-16T19:25:55
CC-MAIN-2022-21
1652662512229.26
[]
docs.unrealengine.com
Users can restore a previous version of an existing file or a deleted file from snapshots stored on the Mirage server. The restore is based on files and directories included in CVD snapshots, in accordance with the upload policies currently in effect. See Working with Upload Policies. When the CVD contains Encrypted File System (EFS) files, the files are recovered in their original encrypted form. Only EFS files that the recovering user encrypted are restored from the CVD. Unauthorized files are filtered from the restore. The file restore operation generates an audit event on the Mirage server for management and support purposes. Files are restored with their original Access Control Lists (ACLs).
https://docs.vmware.com/en/VMware-Mirage/5.9/com.vmware.mirage.admin/GUID-EE017E46-9FC9-45BE-8C4D-4609FFFA86DD.html
2022-05-16T19:03:42
CC-MAIN-2022-21
1652662512229.26
[]
docs.vmware.com
Datainjection¶
- Sources link:
- Download:
Features¶
This plugin allows data import into GLPI using CSV files. It allows you to create injection models for future re-use. It was created in order to:
- Import data coming from other asset management software
- Inject electronic delivery forms
Data that can be imported using the plugin:
- Inventory data (except software and licenses),
- Management data (contract, contact, supplier),
- Configuration data (user, group, entity).
Install the Plugin¶
- Uncompress the archive.
- Move the datainjection directory to the <GLPI_ROOT>/plugins directory
- Navigate to the Configuration > Plugins page,
- Install and activate the plugin.
Configuration¶
You can access the datainjection configuration from Tools > File injection.
Click here to manage models
Create new model¶
First, you need to create a model; in this example we import Computers.
Click here to create a new model
And fill in the form:
- Name: define a model name
- Visibility: whether the model is private or visible to other users
- Entity / sub entity: model visibility per entity
- Type of data: type of data to import
- Allow lines creation: yes or no
- Allow lines update: yes or no
- Allow creation of dropdowns: if a dropdown value does not exist, create it
- Dates format: date format in the CSV file
- Allow update of existing fields: yes or no
- Float format: float format in the CSV file
- Try to establish network connection if possible: yes or no
- Port unicity criteria: define the unicity field for ports
After model creation it is possible to:
* Define if a header is present
* Change the file delimiter: default -> ";"
Inject your CSV file¶
Send your CSV file with the computer data to GLPI. Content of the CSV file used in this documentation:
Name;Type;Model;Manufacturer;serial
Desktop-ARTY;Desktop;Dell Inspiron;Samsung;567DFG45DFG
Laptop-QUER;Laptop;Dell XPS;Samsung;345UKB78DGH
Mapping CSV columns and object fields¶
For each column of your CSV file you must select the table and the corresponding field in GLPI.
Note
You need to define a link field. The plugin will search on this link field to know whether an object needs to be added or updated.
The dropdown list contains other tables, which allows importing, for example, the financial and administrative information during a computer import. Each type of data (Computer, Monitor, User) has different options for importing other data.
Additional data¶
You can define additional data to be imported; it will be requested during import. Each item can be flagged as mandatory.
Execute import¶
You can access the models from Tools > File injection.
Select a model, select a CSV file, and run the import
https://glpi-plugins.readthedocs.io/en/latest/datainjection/index.html
2022-05-16T17:38:33
CC-MAIN-2022-21
1652662512229.26
[]
glpi-plugins.readthedocs.io
Structural Element Links Introduction A link is an object that links a source node to a target. Currently that target may be either another structural node or a zone, although more targets may be added in the future. Each link utilizes the local system of its source node, and all link properties are specified with respect to this local system. Links implement the interactions that occur between the different types of elements and the grid. In most cases it will not be necessary to create or modify links, because they will be created and their properties set automatically by the elements that utilize them. However, if one wishes to introduce a plastic hinge with full rotational freedom (such that two different rotation angles can develop on each side of the hinge point), then one must create two separate nodes at this point, and create a node-to-node link between them and specify a normal-yield spring in the appropriate rotational degree-of-freedom, and set the stiffness and yield strength of this spring equal to that of the plastic hinge. For these, and other more complex situations, we provide the following interface to the link logic. Each link contains six different possible attach conditions for each of the six degrees-of-freedom (three translational, and three rotation). The possible conditions are free (no force transmitted), rigid (rigidly connected to the target location), and deformable (forces generated by a one-dimensional force-displacement law based on relative motion). The behavior of a deformable condition is governed by a link model, of which there are currently four options: linear, shear-yield, normal-yield, and pile-yield. Note that a link can only have one condition or model for each degree-of-freedom. Recursive chaining of rigid connections is allowed. For the specific case of embedded liners, two links per node are possible. In this case, only one rigid connection per degree-of-freedom across all links is allowed. Note that link model properties are, by default, assigned automatically based on the properties of the elements that are connected to their host nodes. Unless the interaction type is incompatible, this will override any properties set manually via the command line or FISH. This happens any time the code performs a “geometry update,” which is at the start of cycling, and, if in large strain mode, at the start of every update interval during cycling. Because of this, the easiest way to customize link properties is to set the properties in the elements attached to their host node. Otherwise, the user must make certain to override the default values after every update occurs by using a properly timed FISH callback (see the fish callback command). When selecting with the group range element, links are considered to be a member of a group by default if either they, the node they are connected to, or any element the node is connected to is a member of that group. The by keyword may be used to restrict this to a specific type of object. Whenever a link attempts to establish a connection to a target zone, it will search for a non-null zone for which the source node lies within a distance \(d\) of the zone’s boundary. The value of \(d\) is obtained from the global value of zone tolerance (see the structure link tolerance-contact command) multiplied by zone size, where zone size is the maximum \(x\)-, \(y\)-, or \(z\)-dimension of the zone bounding box. 
But note that such a nearby zone will be used only if the source node does not lie within or on the boundary of any non-null zone. If the source node lies within the d-boundary of a zone, then the weighting functions used to transfer information from the link to the zone will correspond with the location on the zone surface that is nearest to the node location. Link Model Properties Structural link deformable models are, by default, created automatically when links are created. The default type used in each degree-of-freedom depends on the element type (see Default link attachment conditions for element types). Properties of deformable models are also, by default, set during initialization based on the properties of the elements they are connected to. However, it is possible to override these defaults, both by changing the model and by changing the properties of existing models; see the structure link attach and structure link property commands. The following properties are available to each of the currently available deformable link models. Figure 1 illustrates the system. Manually setting link properties As stated above, deformable link model properties are set automatically by the program (see the structure link property command for a full list). The struct.was.updated FISH intrinsic can be used to determine if an actual full geometry update occurred at the start of that cycle.
http://docs.itascacg.com/flac3d700/common/sel/doc/manual/sel_manual/links/links.html
2022-05-16T18:41:48
CC-MAIN-2022-21
1652662512229.26
[]
docs.itascacg.com
Exporting data from Castle There are multiple ways for you to consume Castle's data, for instance when you're looking to feed it into your other log and security management tools: - Ingest the response of the inline Risk and Filter APIs. This will offer you a 1:1 mapping of all the Risk and Filter calls you send to Castle and lets you run additional queries on our risk scores and signals. - Subscribe to webhooks to get alerted when a policy triggers Deny or Challenge. This will only trigger once per device so it will be less data than the inline APIs, but might be relevant for alerting use-cases. - Use the Devices API to fetch the list of devices for a specific user. - Manually export up to 1,000 events from the Event view in the dashboard. You can always run any query first to reduce the result set down to something that fits within the 1,000 limitation.
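A sketch of the first option, ingesting the inline Risk API response with Python's requests library; the endpoint and payload follow Castle's documented shape, but treat the field values as placeholders and verify the names against the API reference:

import requests

resp = requests.post(
    'https://api.castle.io/v1/risk',
    auth=('', 'YOUR_API_SECRET'),               # placeholder secret
    json={
        'type': '$login',
        'status': '$succeeded',
        'request_token': 'token-from-client',   # placeholder
        'user': {'id': 'user-123'},             # placeholder
        'context': {'ip': '203.0.113.4', 'headers': {}},
    },
)
data = resp.json()
# Feed the score and signals into your own log/SIEM pipeline.
print(data.get('risk'), data.get('signals'))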
https://docs.castle.io/docs/exporting-data-from-castle
2022-05-16T18:51:14
CC-MAIN-2022-21
1652662512229.26
[]
docs.castle.io
Overview Explainability is a model's ability to return inference details that explain how it came to a result. The details returned depend on the explainable input type. Some types include image, audio, and video. This feature is tested during model deployment. Check out the model container specifications for more details. Whitebox explainability This category includes models that have a built-in explainability algorithm. These models return JSON outputs with mask values for explainable results. Mask values include the prediction results and the explainability results (pixel values that motivate the prediction made by the model). With whitebox explainability, job results return three objects: a results object, an explainability object, and a model type object. The model type object, modelType, describes the type of data the model can process. The results object, result, contains the prediction results in a classPredictions array. The explainability object, explanation, varies with each model type as described below. To check if a model has whitebox explainability, send a request to get model details and look for the built-in-explainability feature. Explanation by model type imageClassification In this case, the explanation object contains parameters that provide details on how the model came to the results: a maskRLE array and the dimension parameters height and width. The maskRLE follows a column-major order (Fortran order). { "modelType": "", "result": { "classPredictions": [] }, "explanation": { "maskRLE": [], "dimensions": { "height": "", "width": "" } } } textClassification Text classification models with the explainability feature return JSON outputs with word importance values for explainable results. These include the prediction results and explainability results (score values that motivate the prediction made by the model). { "modelType": "textClassification", "result": { "classPredictions": [] }, "explanation": { "wordImportances": {}, "explainableText": {} } }
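A sketch of turning a run-length-encoded mask back into a 2-D array. It assumes maskRLE alternates run lengths of 0s and 1s, starting with 0s; verify the exact encoding against the API reference. The column-major (Fortran) order noted above is handled by numpy's order='F':

import numpy as np

def decode_mask(mask_rle, height, width):
    flat = np.zeros(height * width, dtype=np.uint8)
    pos, value = 0, 0                  # start with a run of 0s (assumed)
    for run in mask_rle:
        flat[pos:pos + run] = value
        pos += run
        value = 1 - value              # alternate runs of 0s and 1s
    # Rebuild the 2-D mask in column-major (Fortran) order.
    return flat.reshape((height, width), order='F')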
https://docs.modzy.com/reference/explainability
2022-05-16T18:00:52
CC-MAIN-2022-21
1652662512229.26
[]
docs.modzy.com
KEYTRANSFER SPEC: Data transfer proofs¶
shortname: KEYTRANSFER
name: Data transfer proofs
type: Standard
status: Valid
version: 0.1
editor: Sami Mäkelä <[email protected]>
contributors: Aitor Argomaniz <[email protected]>
- KEYTRANSFER SPEC: Data transfer proofs
This SPEC describes an addition to the ACCESS SPEC, namely new flows with improved guarantees about data transfer. This enables using public storage for encrypted data.
Motivation¶
Nevermined manages access control over digital assets. The protocol has been based on authenticating and authorizing consumers to get access to components created and registered in an ecosystem by a publisher. When this access control needs to be implemented on top of storage solutions with some authorization mechanism (like Amazon S3 or similar), it's easier: the gateway ensures that only authorized users can get access. But when you want to use storage without any access control capabilities (like a file available at a public HTTP URL, or on IPFS or Filecoin), anyone with access to the URL can reach the file, bypassing any access control mechanism that Nevermined provides. In this kind of scenario, the only way to protect the content is to encrypt it, and only allow decryption when the user fulfills some conditions. This SPEC is about the definition of the solution that allows building robust and scalable access control on top of publicly available data.
Main ideas¶
If the unencrypted asset is described by its hash, there are at least two use cases for being able to prove that access to the asset with a given hash is transferred:
- A third party might have reviewed the data and can confirm its properties. Additionally it will show that each recipient gets the same data.
- If compute attestation is available, for it to work both participants need access to the data.
To save resources, instead of showing that all the data is transferred, we assume that there is publicly available encrypted data and only the symmetric key will be transferred (the hash of the symmetric key is known by all participants). This does not impact the above use cases:
- The third party can instead validate the combination of the key and the encrypted data.
- The compute attestation will use the encrypted data hash and the key instead of the plain data hash as a starting point.
Note that with interactive proofs it would be enough for the sender to send the decrypted key signed with their ECDSA or similar key. An advantage of using snarks is that the keys can be used multiple times.
Actors and Technical Components¶
- PUBLISHERS - Provide access to assets and/or services
- CONSUMERS - Want to get access to assets and/or services
- PROVIDER - When the publisher is not 100% 'online', it can delegate some responsibilities to a provider for making data available on behalf of the publisher
Flows¶
Publishing Assets¶
To set up the asset metadata: For the files, the first file represents the key, and the url attribute contains the plain text key (probably have to change). These parts of the metadata are encrypted so they won't show up publicly when querying the gateway. The public parts that have to be added to additionalInformation are:
poseidonHash: Poseidon hash of the key.
providerKey.x and providerKey.y: the Babyjubjub public key of the provider.
Service agreement¶
For the service agreement, the following data is needed:
- Address of the provider and consumer.
- Asset ID.
- Poseidon hash of the data.
- Babyjubjub keys of the provider and consumer.
- Payment information.
Before entering into the agreement, the consumer should already have downloaded the publicly available encrypted data. Other parts of the flow are the same as the normal access flow, but the final on-chain fulfillment of the transfer is different. The provider first has to compute a shared secret using ECDH from his private key and the consumer's public key. This secret is used to encrypt the key using MiMC. The encrypted key is then sent on-chain with a SNARK proof of correctness. Similarly, the consumer first has to compute a shared secret using ECDH from his private key and the provider's public key. The encrypted key is read from the chain and then decrypted using MiMC. The hash of the result will be the same as was given beforehand. Here is the complete flow including the different actors: In the case of an issue, here you can find the flow for how to manage the dispute resolution: Accessing from gateway¶ In this scenario, the gateway acts as a PROVIDER. This is especially useful when the PUBLISHER doesn't want to be online running a service to respond to CONSUMER requests. In these kinds of scenarios, the PUBLISHER delegates to the PROVIDER running a gateway the capability of releasing the decryption key to the CONSUMER when the conditions are fulfilled. Accessing documents using the gateway mostly works the same way as the normal flow, but the consumer has to send its Babyjubjub public key too. The data must be checked so that the gateway won't send invalid proofs to the net (it's possible to get the key from the calldata of the fulfill method if one has the corresponding key). Additionally we can check that the eth address corresponds to the Babyjubjub public key. This isn't absolutely necessary, but is needed if we want the gateway to return the data transfer key (or perhaps the data as plain text).
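A high-level sketch of the two sides of the key transfer described above. The primitives (Babyjubjub ECDH, MiMC, Poseidon) are hypothetical helper functions, not a real library API, and a production implementation would also produce the SNARK proof of correctness:

def provider_encrypt(provider_priv, consumer_pub, key):
    secret = ecdh(provider_priv, consumer_pub)   # hypothetical Babyjubjub ECDH
    return mimc_encrypt(key, secret)             # posted on-chain with the proof

def consumer_decrypt(consumer_priv, provider_pub, encrypted_key, expected_hash):
    secret = ecdh(consumer_priv, provider_pub)   # same shared secret, other side
    key = mimc_decrypt(encrypted_key, secret)    # hypothetical MiMC decryption
    # Must match the Poseidon hash agreed in the service agreement.
    assert poseidon_hash(key) == expected_hash
    return key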
https://docs.nevermined.io/architecture/specs/keytransfer/
2022-05-16T17:30:20
CC-MAIN-2022-21
1652662512229.26
[]
docs.nevermined.io
The CRM service enables service providers to deliver a robust customer relationship management solution to customers through Microsoft Dynamics CRM 2011. Customers enjoy a 360-degree view of their customers along with automated workflows, ease of reporting, and a granular security structure. The CRM service supports Internet-facing deployments (IFDs), which makes CRM 2011 organizations available from the Internet. Additionally, the service is deployed for use with Active Directory Federated Services (AD FS). When you install and configure the ADFS and CRM 2011 services, you will need to supply this group and account information to enable Services Manager to work with your CRM deployment. For more information about creating CRM Deployment Administrators and assigning permissions, see the article "Creating a New CRM Deployment Administrator Account" on the Microsoft TechNet web site (). For more information about assigning permissions for deploying CRM, see the article "How to assign the minimum permissions to a deployment administrator in Microsoft Dynamics CRM 4.0" on the Microsoft Support web site (). After configuring CRM 2011, verify that user connections are successful and there are no certificate errors. Test the environment by creating an organization using CRM 2011 Deployment Manager and, afterward, browsing to the site.
https://docs.citrix.com/ja-jp/cloudportal-services-manager/11-5/ccps-plan-overview/ccps-plan-services/ccps-plan-crm.html
2018-04-19T13:26:58
CC-MAIN-2018-17
1524125936969.10
[]
docs.citrix.com
The sort filter sorts an array: {% for user in users|sort %} ... {% endfor %} Note Internally, Twig uses the PHP asort function to maintain index association. It supports Traversable objects by transforming those to arrays. © 2009–2017 by the Twig Team Licensed under the three clause BSD license. The Twig logo is © 2010–2017 SensioLabs
http://docs.w3cub.com/twig~1/filters/sort/
2018-04-19T13:40:25
CC-MAIN-2018-17
1524125936969.10
[]
docs.w3cub.com
Welcome to the Bug Squad
From Joomla! Documentation
We test and fix bugs in the current Joomla release and we help debug new major releases.
Bug Tracking Process
For a description of the way the JBS works, see the Joomla! Maintenance Procedures article.
More Information
There is a lot of great information for JBS members on the Joomla wiki site under the heading Category:Bug Squad. Also, and very importantly, we treat everyone with respect and consideration and take the Joomla code of conduct very seriously.
https://docs.joomla.org/index.php?title=Welcome_to_the_Bug_Squad&oldid=14461
2015-08-28T03:34:38
CC-MAIN-2015-35
1440644060173.6
[]
docs.joomla.org
Code 01060 From Joomla! Documentation Contents One line summary Advanced search capabilities including specific content types including third party extensions, section and category, weighted searches, Ajax support, multiple tabbed results sorted differently include most recent, relevance, external API's like Google and Yahoo Description Skills needed Difficulty Work Product Licensing All code must be created using the [GNU General Public License version] Documentation written for this task must be made available under the Joomla! Electronic Documentation License. Possible mentor Return to Google Summer of Code 2008
https://docs.joomla.org/index.php?title=Code_01060&oldid=3386
2015-08-28T03:55:55
CC-MAIN-2015-35
1440644060173.6
[]
docs.joomla.org
Difference between revisions of "Content Article Manager Edit"
From Joomla! Documentation
Revision as of 07:31, 5 August 2012:
- Reset button. Press this button to change the Hits to 0.
Article, Image, Page Break, Read More, and Toggle Editor Buttons. Five buttons:
- Toggle Editor. This button toggles the editor between the TinyMCE editor and "No Editor", a basic code view editor.
Metadata Information
- In combination with the Related Articles module, to display Articles that share at least one keyword in common. For example, if the current Article displayed has the keywords "cats, dogs, monkeys", any other Articles with at least one of these keywords will show in the Related Articles module.
- For help on using TinyMCE and other editors: Content editors
https://docs.joomla.org/index.php?title=Help17:Content_Article_Manager_Edit&diff=70481&oldid=60317
2015-08-28T04:02:02
CC-MAIN-2015-35
1440644060173.6
[]
docs.joomla.org
Control Plane Aserto edge authorizers can connect to the Aserto control plane to receive policy and directory updates and commands. Edge authorizers must use client certificates from satellite connections to connect to the control plane. Configuration The Aserto CLI can be used to configure certificates. The list-connections sub-command lists a tenant's existing satellite connections: aserto control-plane list-connections Each of the listed connections has an id field, which can be used to retrieve certificate data, including the certificate and private key: aserto control-plane client-cert <satellite-connection-id> For more details on how to configure the certificate see the edge authorizers section of this documentation. Commands To list the edge authorizer instances connected to the control plane: aserto control-plane list-instance-registrations Each entry in the resulting list will have an id field, a policy-id field indicating what policy instance the edge is configured to run, and a remote_host field which can be used to identify the individual edge instance. The value of the latter is the $HOSTNAME environment variable of the edge host, and will be overridden with the $ASERTO_HOSTNAME environment variable, if it exists. The discovery sub-command causes an edge authorizer to immediately fetch configuration from the control plane. aserto control-plane discovery <instance-registration-id> The edge-dir-sync sub-command causes an edge authorizer to immediately synchronize its local directory state (if synchronization is enabled). aserto control-plane edge-dir-sync <instance-registration-id>
https://docs.aserto.com/docs/command-line-interface/aserto-cli/control-plane
2022-06-25T06:57:59
CC-MAIN-2022-27
1656103034877.9
[]
docs.aserto.com
Troubleshooting the Citrix ADC cluster

If a failure occurs in a Citrix ADC cluster, the first step in troubleshooting is to get information on the cluster instance. You can get the information by running the show cluster instance <clId> and show cluster node <nodeId> commands on the cluster nodes, and by checking the logs for the commands that were run, the status of those commands, and the state changes.

Check the newnslog files. Use the newnslog files, available in the /var/nslog/ directory of each node, to identify the events that have occurred on the cluster nodes. You can view multiple newnslog files. You can then send the report to technical support.
https://docs.citrix.com/en-us/citrix-adc/13/clustering/cluster-troubleshooting.html
2022-06-25T07:43:17
CC-MAIN-2022-27
1656103034877.9
[]
docs.citrix.com
Menu Designer

Adding Main Menu items

There are multiple methods for building menus using the RadItem Collection Editor or the RadMenu designer. To add a new main menu item:
- Click the RadMenu area labeled Type here, and type your top level menu item directly into the entry space provided. When you're finished, click ESC to abandon your edits or Enter to accept the edits and create a new RadMenuItem.
- Click the drop-down arrow to the right of the existing main menu items, and select Add RadMenuItem, Add RadMenuComboItem or Add RadMenuSeparatorItem to create an item of the corresponding type. Once the menu item is created you can use the Smart Tag to configure the Text, Image properties and edit the Items collection for the menu item.
- Click the RadMenu control, open its Smart Tag menu, and select Edit Items. Add a new RadMenuItem in the RadItem Collection Editor.

The menu designer is decorated with rightward and downward pointing arrow buttons. Right-pointing arrows indicate Smart Tags for the RadMenu, and the down-pointing arrows let you add a particular menu item type, i.e. RadMenuItem, RadMenuComboItem or RadMenuSeparatorItem.

Adding Sub Menu Items

To add a new sub-menu item to a main menu item, use one of these procedures:

Select the main menu item, click in its Items property, click the ellipsis button, and then use the RadItem Collection Editor.

Click a main menu item in the designer to invoke the Add new item option. Add new will allow you to select from RadMenuItem, RadMenuComboItem or RadMenuSeparatorItem. Select one of these menu item types to create it and add it below the selected menu item.

Each RadMenuItem can have its own items to allow menu designs that require multiple levels of hierarchy.

Removing Menu Items

To remove a main menu or sub-menu item, select the item and press Delete or right click the menu item and select Delete from the context menu.
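Menus can also be assembled in code rather than in the designer. The following is a minimal sketch — the control name radMenu1 and the item texts are placeholders — showing the programmatic equivalent of adding a main menu item with a sub-item and a separator:

// Build a "File" menu with an "Open" sub-item at runtime.
RadMenuItem fileMenu = new RadMenuItem("File");
RadMenuItem openItem = new RadMenuItem("Open");

fileMenu.Items.Add(openItem);                    // sub-menu item
fileMenu.Items.Add(new RadMenuSeparatorItem());  // separator below "Open"
radMenu1.Items.Add(fileMenu);                    // top-level menu item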
https://docs.telerik.com/devtools/winforms/controls/menus/menu/design-time/menu-designer
2022-06-25T08:31:31
CC-MAIN-2022-27
1656103034877.9
[array(['images/menus-menu-design-time-menu-designer001.png', 'menus-menu-design-time-menu-designer 001'], dtype=object) array(['images/menus-menu-design-time-menu-designer002.png', 'menus-menu-design-time-menu-designer 002'], dtype=object) array(['images/menus-menu-design-time-menu-designer003.png', 'menus-menu-design-time-menu-designer 003'], dtype=object) array(['images/menus-menu-design-time-menu-designer004.png', 'menus-menu-design-time-menu-designer 004'], dtype=object) array(['images/menus-menu-design-time-menu-designer006.png', 'menus-menu-design-time-menu-designer 006'], dtype=object)]
docs.telerik.com
Information for "Vyatka" Basic information Display titleUser:Vyatka Default sort keyVyatka Page length (in bytes)303 NamespaceUser Page ID34079 Page content languageen - English Page content modelwikitext User ID99458Vyatka (talk | contribs) Date of page creation11:43, 19 April 2014 Latest editorVyatka (talk | contribs) Date of latest edit11:49, 19 April 2014 Total number of edits2 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ""
https://docs.joomla.org/index.php?title=User:Vyatka&action=info
2022-06-25T07:48:41
CC-MAIN-2022-27
1656103034877.9
[]
docs.joomla.org
Overview of Event Analytics in ITSI

Splunk IT Service Intelligence (ITSI) Event Analytics ingests events from across your IT landscape and from other monitoring silos to provide a unified operational console of all your events and service-impacting issues. You can also integrate with incident management tools and helpdesk applications to accelerate incident investigation and automate remedial actions.

Event Analytics is equipped to handle huge numbers of events coming into ITSI at once. Because these events might be related to each other, they must be grouped together so you can identify the underlying problem. Event Analytics provides a way to deal with this huge volume and variety of events. Aggregation policies reduce your event noise by grouping notable events based on their similarity and displaying them in Episode Review. An episode is a collection of notable events grouped together based on a set of predefined rules. An episode represents a group of events occurring as part of a larger sequence, or an incident or period considered in isolation. Aggregation policies let you focus on key event groups and perform actions based on certain trigger conditions, such as consolidating duplicate events, suppressing alerts, or closing episodes when a clearing event is received.

Other ITSI episode management features include a Python-based notable event action SDK, which lets you define secondary, post-episode actions such as adding tags, adding comments, viewing episode activities, changing owner, status, and severity, and so on.

Event Analytics workflow

ITSI Event Analytics is designed to make event storms manageable and actionable. After data is ingested into ITSI from multiple data sources, it's processed through correlation searches to create notable events. ITSI generates notable events when a correlation search or multi-KPI alert meets specific conditions that you define. Notable event aggregation policies group the events into meaningful episodes. Use Episode Review to view episode details and identify issues that might impact the performance and availability of your IT services. You can then take actions on the episodes, such as running a script, pinging a host, or creating tickets in external systems. The following image illustrates the Event Analytics workflow:

You can also leverage Event Analytics to monitor your internal services and KPIs. Service and KPI data is ingested through correlation searches or multi-KPI alerts. Once events are created, they proceed through the following workflow:

Step 1: Ingest events through correlation searches
The data itself comes from Splunk indexes, but ITSI only focuses on a subset of all Splunk Enterprise data. This subset is generated by correlation searches. A correlation search is a specific type of saved search that generates notable events from the search results. For instructions, see Overview of correlation searches in ITSI.

Step 2: Configure aggregation policies to group events into episodes
Once notable events start coming in, they need to be organized so you can start gaining value from them. Configure an aggregation policy to group related notable events into episodes. For more information, see Overview of aggregation policies in ITSI.

Step 3: Set up automated actions to take on episodes
You can run actions on episodes either automatically using aggregation policies or manually in Episode Review. For more information, see Configure episode action rules in ITSI.
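To make Step 1 concrete, a correlation search is ultimately just SPL scheduled against your indexes. The following is a hypothetical sketch — the index, sourcetype, and threshold are placeholder values, not taken from ITSI — of the kind of search that could generate notable events:

index=web_logs sourcetype=access_combined status>=500
| stats count BY host
| where count > 100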
https://docs.splunk.com/Documentation/ITSI/4.13.0/EA/AboutEA
2022-06-25T07:59:09
CC-MAIN-2022-27
1656103034877.9
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
High resolution 3D models of furniture Twinmotion’s Chair & Tables Pack 1 is a collection of 3D models related to the furniture category that has been set up for use in Unreal Engine. This asset pack contains 144 high resolution 3D models as well as blueprints for the composed assets. UE-Only Content - Licensed for Use Only with Unreal Engine-based Products Features:
https://docs.unrealengine.com/marketplace/en-US/product/twinmotion-chairs-tables-pack-1
2022-06-25T08:10:18
CC-MAIN-2022-27
1656103034877.9
[]
docs.unrealengine.com
Browsing Geographie, Hydrologie by Contributor "Thieken, Annegret H." Now showing items 1-2 of 2 Short contribution on adaptive behaviour of flood-prone companies: A pilot study of Dresden-Laubegast, Germany Integrated flood management strategies consider property-level precautionary measures as a vital part. Whereas this is a well-researched topic for residents, little is known about the adaptive behaviour of flood-prone ... The behavioral turn in flood risk management, its assumptions and potential implications Kuhlicke, Christian ; Seebauer, Sebastian ; Hudson, Paul ; Begg, Chloe ; Bubeck, Philip ; Dittmer, Cordula ; Grothmann, Torsten ; Heidenreich, Anna ; Kreibich, Heidi ; Lorenz, Daniel F. ; Masson, Torsten ; Reiter, Jessica ; Thaler, Thomas ; Thieken, Annegret H. ; Bamberg, Sebastian (Wiley Interdisciplinary Reviews: Water, 2020-03-09)Recent policy changes highlight the need for citizens to take adaptive actions to reduce flood‐related impacts. Here, we argue that these changes represent a wider behavioral turn in flood risk management (FRM). The ...
https://e-docs.geo-leo.de/handle/11858/Geography/browse?type=author&value=Thieken%2C+Annegret+H.
2022-06-25T07:04:10
CC-MAIN-2022-27
1656103034877.9
[]
e-docs.geo-leo.de
Basis

3×3 matrix datatype.

Description

3×3 matrix used for 3D rotation and scale. Almost always used as an orthogonal basis for a Transform. Contains 3 vector fields X, Y and Z as its columns, which are typically interpreted as the local basis vectors of a transformation. For such use, it is composed of a scaling and a rotation matrix, in that order (M = R.S). Can also be accessed as an array of 3D vectors. These vectors are normally orthogonal to each other, but are not necessarily normalized (due to scaling). For more information, read the "Matrices and transforms" documentation article.

Tutorials

Properties

Methods

Constants

IDENTITY = Basis( 1, 0, 0, 0, 1, 0, 0, 0, 1 ) --- The identity basis, with no rotation or scaling applied. This is identical to calling Basis() without any parameters. This constant can be used to make your code clearer, and for consistency with C#.

FLIP_X = Basis( -1, 0, 0, 0, 1, 0, 0, 0, 1 ) --- The basis that will flip something along the X axis when used in a transformation.

FLIP_Y = Basis( 1, 0, 0, 0, -1, 0, 0, 0, 1 ) --- The basis that will flip something along the Y axis when used in a transformation.

FLIP_Z = Basis( 1, 0, 0, 0, 1, 0, 0, 0, -1 ) --- The basis that will flip something along the Z axis when used in a transformation.

Property Descriptions

The basis matrix's X vector (column 0). Equivalent to array index 0.
The basis matrix's Y vector (column 1). Equivalent to array index 1.
The basis matrix's Z vector (column 2). Equivalent to array index 2.

Method Descriptions

Constructs a pure rotation basis matrix from the given quaternion.

Constructs a pure rotation basis matrix from the given Euler angles (in the YXZ convention: when composing, first Y, then X, and Z last), given in the vector format as (X angle, Y angle, Z angle). Consider using the Quat constructor instead, which uses a quaternion instead of Euler angles.

Constructs a pure rotation basis matrix, rotated around the given axis by phi, in radians. The axis must be a normalized vector.

Constructs a basis matrix from 3 axis vectors (matrix columns).

Returns the determinant of the basis matrix. If the basis is uniformly scaled, its determinant is the square of the scale. A negative determinant means the basis has a negative scale. A zero determinant means the basis isn't invertible, and is usually considered invalid.

Returns the basis's rotation in the form of Euler angles (in the YXZ convention: when decomposing, first Z, then X, and Y last). The returned vector contains the rotation angles in the format (X angle, Y angle, Z angle). Consider using the get_rotation_quat method instead, which returns a Quat quaternion instead of Euler angles.

This function considers a discretization of rotations into 24 points on the unit sphere, lying along the vectors (x,y,z) with each component being either -1, 0, or 1, and returns the index of the point best representing the orientation of the object. It is mainly used by the GridMap editor. For further details, refer to the Godot source code.

Returns the basis's rotation in the form of a quaternion. See get_euler if you need Euler angles, but keep in mind quaternions should generally be preferred to Euler angles.

Assuming that the matrix is the combination of a rotation and scaling, returns the absolute value of scaling factors along each axis.

Returns the inverse of the matrix.

Returns true if this basis and b are approximately equal, by calling is_equal_approx on each component. Note: For complicated reasons, the epsilon argument is always discarded. Don't use the epsilon argument, it does nothing.

Returns the orthonormalized version of the matrix (useful to call from time to time to avoid rounding error for orthogonal matrices). This performs a Gram-Schmidt orthonormalization on the basis of the matrix.

Introduces an additional rotation around the given axis by phi (radians). The axis must be a normalized vector.

Introduces an additional scaling specified by the given 3D scaling factor.

Assuming that the matrix is a proper rotation matrix, slerp performs a spherical-linear interpolation with another rotation matrix.

Transposed dot product with the X axis of the matrix.
Transposed dot product with the Y axis of the matrix.
Transposed dot product with the Z axis of the matrix.

Returns the transposed version of the matrix.

Returns a vector transformed (multiplied) by the matrix.

Returns a vector transformed (multiplied) by the transposed basis matrix. Note: This results in a multiplication by the inverse of the matrix only if it represents a rotation-reflection.
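As an illustrative GDScript sketch (not part of the reference above), the axis-angle constructor, xform, and orthonormalized described here fit together like this:

# Build a pure rotation of 90 degrees around the Y axis.
var b = Basis(Vector3(0, 1, 0), PI / 2)

# Transform a vector by the basis; (1, 0, 0) rotates to approximately (0, 0, -1).
var v = b.xform(Vector3(1, 0, 0))

# Re-orthonormalize after accumulating many small rotations to limit rounding error.
b = b.orthonormalized()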
https://docs.godotengine.org/pl/stable/classes/class_basis.html
2022-06-25T07:23:21
CC-MAIN-2022-27
1656103034877.9
[]
docs.godotengine.org
The <rich:notify> component serves for advanced user interaction, using notification boxes to give the user instant feedback on what's happening within the application. Each time this component is rendered, a floating notification box is displayed in the selected corner of the browser screen.

Authors: Lukas Fryc, Brian Leathem
https://docs.jboss.org/richfaces/4.5.X/4.5.14.Final/vdldoc/rich/notify.html
2022-06-25T07:21:03
CC-MAIN-2022-27
1656103034877.9
[]
docs.jboss.org
mars.tensor.arctanh

mars.tensor.arctanh(x, out=None, where=None, **kwargs)

Inverse hyperbolic tangent, element-wise.

Returns: a tensor of the same shape as x.

Return type: Tensor

References

[1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions", 10th printing, 1964, pp. 86.
[2] Wikipedia, "Inverse hyperbolic function".

Examples

>>> import mars.tensor as mt
>>> mt.arctanh([0, -0.5]).execute()
array([ 0. , -0.54930614])
https://docs.pymars.org/en/latest/user_guide/tensor/generated/mars.tensor.arctanh.html
2022-06-25T08:45:05
CC-MAIN-2022-27
1656103034877.9
[]
docs.pymars.org
Event batching

This topic describes how the Optimizely C# SDK uses the event processor to batch impressions and conversion events into a single payload before sending it to Optimizely.

The Optimizely Full Stack C# SDK batches events out of the box:

using OptimizelySDK;

class App
{
    static void Main(string[] args)
    {
        string sdkKey = args[0];
        // Returns Optimizely Client
        OptimizelyFactory.NewDefaultInstance(sdkKey);
    }
}

By default, batch size is 10 and flush interval is 30 seconds.

Advanced Example

using OptimizelySDK;

class App
{
    static void Main(string[] args)
    {
        string sdkKey = args[0];

        ProjectConfigManager projectConfigManager = HttpProjectConfigManager.builder()
            .WithSdkKey(sdkKey)
            .Build();

        BatchEventProcessor batchEventProcessor = new BatchEventProcessor.Builder()
            .WithMaxBatchSize(10)
            .WithFlushInterval(TimeSpan.FromSeconds(30))
            .Build();

        Optimizely optimizely = new Optimizely(
            projectConfigManager,
            .. // Other Params
            ..batchEventProcessor
        );
    }
}

For more information, see Initialize SDK.

Side effects

The table lists other Optimizely functionality that may be triggered by using this class.

Registering LogEvent listener

To register a LogEvent listener:

NotificationCenter.AddNotification(
    NotificationType.LogEvent,
    new LogEventCallback((logevent) => {
        // Your code here
    })
);

LogEvent

The LogEvent object is created using EventFactory. It represents the batch of impression and conversion events we send to the Optimizely backend.

Dispose Optimizely on application exit

If you enable event batching, it's important that you call the Close method (optimizely.Dispose()) prior to exiting. This ensures that queued events are flushed as soon as possible to avoid any data loss.

Important: Because the Optimizely client maintains a buffer of queued events, we recommend that you call Dispose() on the Optimizely instance before shutting down your application or whenever dereferencing the instance.
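One simple way to guarantee the Dispose() call happens — sketched here as an assumption about your application structure, not something prescribed by the SDK documentation — is to register it against the process-exit event:

using System;
using OptimizelySDK;

class App
{
    static void Main(string[] args)
    {
        Optimizely optimizely = OptimizelyFactory.NewDefaultInstance(args[0]);

        // Flush any queued events when the process exits.
        AppDomain.CurrentDomain.ProcessExit += (sender, e) => optimizely.Dispose();

        // ... application logic ...
    }
}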
https://docs.developers.optimizely.com/experimentation/v3.1.0-full-stack/docs/event-batching-csharp
2022-06-25T07:29:56
CC-MAIN-2022-27
1656103034877.9
[]
docs.developers.optimizely.com
Setting Custom Data

Discussed here is how to set user attributes and tags, as well as log user events, and their relevant APIs.

User Attributes

You can assign custom attributes to your users and they will show up on your Instabug dashboard with each report. These attributes can later be used to filter reports in your dashboard.

This is where additional user attributes appear in your bug reports.

To add a new user attribute, use the following method.

Instabug.setUserAttribute("Age", "18");
Instabug.setUserAttribute("Logged", "True");

You can also retrieve the current value of a certain user attribute, or retrieve all user attributes.

// Getting attribute
Instabug.getUserAttribute("Logged in", function(attribute) {
  // `attribute` is the return value
});

// Loading all attributes
Instabug.getAllUserAttributes(function (allAttributes) {
  // `allAttributes` Object containing all keys and attributes
});

Or remove the current value of a certain user attribute.

Instabug.removeUserAttribute("Completed IAP");

User Events

You can log custom user events throughout your application. Custom events are automatically included with each report.

Instabug.logUserEventWithName("OnFeedbackButtonClicked");

Tags

You can add custom tags to your bug and crash reports. These tags can later be used to filter reports or set custom rules from your dashboard.

This is where tags appear in your bug reports.

The example below demonstrates how to add tags to a report.

Instabug.appendTags(["Tag 1", "Tag 2"]);

Adding tags before sending reports

Sometimes it's useful to be able to add a tag to a bug report before it's been sent. In these cases, the perfect solution would be to use the event handlers of the bug reporting class. You can find more details here.

You can also get all the currently set tags as follows.

Instabug.getTags(function (tags) {
  // `tags` is the returned values
});

Last, you can reset all the tags.

Experiments

Only in Crash Reporting: This feature is currently only available for crash occurrences.

In certain scenarios, you might find that you're rolling out different experiments to different users, where your user base would be seeing different features depending on what's enabled for them. In scenarios such as these, you'll want to keep track of the enabled experiments for each user, of which there could be many, and even filter by them. This is currently possible using the experiments methods provided by the SDK.

You can have up to a limit of 600 experiments, with no duplicates. Each experiment can consist of 70 characters at most. Experiments are not removed at the end of the session or if logOut is called.

Minimum SDK Requirement: Please note that the experiments feature is available starting from SDK version 10.13.0.

Add Experiment

You can use the below method to add experiments to the next report:

Instabug.addExperiments(['exp1']);

Remove Experiment

You can use the below method to remove certain experiments from the next report:

Instabug.removeExperiments(['exp1']);

Clear Experiments

You can use the below method to clear experiments from the next report:

Instabug.clearAllExperiments();

You now have more information than ever about each bug and crash report, so we suggest you read up more on bug and crash reporting.
https://docs.instabug.com/docs/react-native-set-custom-data
2022-06-25T08:43:20
CC-MAIN-2022-27
1656103034877.9
[array(['https://files.readme.io/c523b79-Bug_Report_Content_-_Bug_Details.png', 'Bug Report Content - Bug Details.png This is where additional user attributes appear in your bug reports.'], dtype=object) array(['https://files.readme.io/c523b79-Bug_Report_Content_-_Bug_Details.png', 'Click to close... This is where additional user attributes appear in your bug reports.'], dtype=object) array(['https://files.readme.io/57b77da-Bug_Report_Content_-_Tags.png', 'Bug Report Content - Tags.png This is where tags appear in your bug reports.'], dtype=object) array(['https://files.readme.io/57b77da-Bug_Report_Content_-_Tags.png', 'Click to close... This is where tags appear in your bug reports.'], dtype=object) array(['https://files.readme.io/d3ce480-Tags_Copy_2.png', 'Tags Copy 2.png'], dtype=object) array(['https://files.readme.io/d3ce480-Tags_Copy_2.png', 'Click to close...'], dtype=object) ]
docs.instabug.com
Mapped drive connection to network share may be lost

This article provides solutions to an issue where the mapped drive may be disconnected if you map a drive to a network share.

Cause

This behavior occurs because the systems can drop idle connections after a specified time-out period (by default, 15 minutes) to prevent wasting server resources on unused sessions. The connection can be re-established quickly, if necessary.

Resolution

To resolve this behavior, change the default time-out period on the shared network computer. To do this, use one of the following methods.

Method 1: Using Registry Editor

Warning: You can't use this method to turn off the autodisconnect feature of the Server service. You can only use this method to change the default time-out period for the autodisconnect feature.

Click Start, click Run, type regedit, and then locate the following registry value:
- Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanworkstation\parameters
- Value: KeepConn
- Data type: REG_DWORD
- Range: 1 to 65535 (sec)
- Default value: 600 sec = 10 mins

Method 2: Using

Did this fix the problem?

Check whether the problem is fixed. If the problem is fixed, you are finished with this section. If the problem is not fixed, you can contact support.

More information

Some earlier programs may not save files or access data when the drive is disconnected. However, these programs function normally before the drive is disconnected.

For more information about how to increase the default time-out period, see Server service configuration and tuning.
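As a convenience, the Method 1 registry change can also be applied from an elevated command prompt. This one-liner is a sketch; the 3600-second value is an arbitrary example within the documented 1 to 65535 range, not a Microsoft recommendation:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanworkstation\parameters" /v KeepConn /t REG_DWORD /d 3600 /f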
https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drive-connection-to-network-share-lost
2022-06-25T07:45:55
CC-MAIN-2022-27
1656103034877.9
[]
docs.microsoft.com
Sharing a Ping

Once you've created the Ping, you can easily share its unique link with your members. To get started, click Pings in the sidebar and then select the Ping you want to share.

To share the Ping, click Share link in the top right. To copy the link, click Copy and we'll add the unique link to your clipboard so you can share it with your members on your Facebook group, your email newsletter, or social media - wherever your members hang out and however you communicate with them.

Alternatively, we make it easy for you to email your members with the link so you can share it without leaving Payzip. To do this, click Next.
https://docs.payzip.co.uk/article/106-sharing-a-ping
2022-06-25T07:46:22
CC-MAIN-2022-27
1656103034877.9
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c753a7af164f7b537cdf5d/file-XwZEq9wzIV.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c753d3af164f7b537cdf5f/file-JDpBORisCS.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c2120d96768369c70bc70c/file-Lx6sbO9Boz.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c2134c4173c622df92a62d/file-owBZT8znIL.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c2137ca6d12c2cd643e868/file-CdjYErGkVf.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c213d76264f06fc02ffc77/file-AOsSZkDRna.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c216884173c622df92a647/file-X1h1SYvIVD.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c2186dbf1166357a3ffc91/file-MA1nNHdN0N.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c219d6a6d12c2cd643e89b/file-SalmtLHlEe.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c21b8b6264f06fc02ffcbb/file-DSEAm7Z74W.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c21bcf96768369c70bc75d/file-GoAlW2BF2k.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/60a4d71713fd125a39b45055/images/60c21c5fbf1166357a3ffcab/file-X7lokyqn75.png', None], dtype=object) ]
docs.payzip.co.uk
Unity SDK

The project is on GitHub. You can add the package to your Unity project via "Package Manager" -> "add package from Git URL" -> enter the repository URL (be sure to include the .git part in the URL).

Step 3 - Initialize the SDK

After installation, you will need to initialize the SDK using a Client SDK key from the "API Keys" tab on the Statsig console. These Client SDK Keys are intended to be embedded in client side applications. If need be, you can invalidate or create new SDK Keys for other applications/SDK integrations.

Info: Do NOT embed your Server Secret Key in client side applications.

In addition to the SDK key, you should also pass in a Statsig user for feature gate targeting and experimentation grouping purposes. The 3rd parameter is optional and allows you to pass in a StatsigOptions to customize the SDK.

It is important to make sure API calls to Statsig are made from the main thread to ensure everything functions correctly. Operations that take longer, like network requests, are made asynchronously so they will not block the main thread.

using StatsigUnity;

await Statsig.Initialize(
    "client-sdk-key",
    new StatsigUser
    {
        UserID = "some_user_id",
        Email = "[email protected]"
    },
    new StatsigOptions // optional parameters to customize your Statsig client, see "Statsig Options" section below to see details on available options
    {
        EnvironmentTier = EnvironmentTier.Development,
        InitializeTimeoutMs = 5000,
    }
);

Checking gates:

if (Statsig.CheckGate("show_new_loading_screen"))
{
    // Gate is on, show new loading screen
}
else
{
    // Gate is off, show old loading screen
}

Reading dynamic configs:

var config = Statsig.GetConfig("awesome_product_details");

// The 2nd parameter is the default value to be used in case the given parameter name does not exist on
// the Dynamic Config object. This can happen when there is a typo, or when the user is offline and the
// value has not been cached on the client.
string itemName = config.Get<string>("product_name", "Awesome Product v1");
double price = config.Get<double>("price", 10.0);
bool shouldDiscount = config.Get<bool>("discount", false);

Then we have Experiments, which you can use to run A/B/n experiments and use advanced features like layers (coming soon) to avoid collision and enable quicker iterations with parameter reuse.

var expConfig = Statsig.GetExperiment("new_player_promo");

string promoTitle = expConfig.Get<string>("title", "Welcome to Statsig! Use discount code WELCOME10OFF for 10% off your first purchase!");
double discount = expConfig.Get<double>("discount", 0.1);
...
double price = msrp * (1 - discount);

To log an event, give a name for the event, and you can additionally provide some value and/or an object of metadata to be logged together with the event:

Statsig.LogEvent("purchase", "new_player_pack", new Dictionary<string, string>()
{
    { "price", "9.99" }
});

Statsig User

You should provide a StatsigUser object whenever possible when initializing the SDK, passing as much information as possible in order to take advantage of advanced gate and config conditions (like country or OS/browser level checks). Most of the time, the userID field is needed in order to provide a consistent experience for a given user (see logged-out experiments to understand how to correctly run experiments for logged-out users). If the user is logged out at the SDK init time, you can leave the userID out for now, and we will use a stable device ID that we create and store in the local storage for targeting purposes.

Besides userID, we also have ip, userAgent, country, locale and appVersion as top-level fields on StatsigUser. In addition, you can pass any key-value pairs in an object/dictionary to the custom field and be able to create targeting based on them.

Once the user logs in or their attributes change, make sure to call UpdateUser with the updated userID and/or any other updated user attributes:

// if you want to update the existing user, or change to a different user, call UpdateUser.
// The API makes a network request to fetch values for the new user.
await Statsig.UpdateUser(
    new StatsigUser
    {
        UserID = "new_user_id",
        Email = "[email protected]"
    }
);

Statsig Options

Initialize() takes an optional parameter options in addition to clientKey and user that you can provide to customize the Statsig client. Here are the current options and we are always adding more to the list:

EnvironmentTier
- type is an enum EnvironmentTier with available values Production | Development | Staging;
- you can use this to set the environment tier that the user is in, and targeting rules for that specific tier will apply. This is useful if you only want to enable a feature in the Development tier, for example.
- the default value is null, and only users who have null or Production as their environment tier will be included in Pulse metric calculation.

InitializeTimeoutMs
- takes a non-negative integer, and the default value is 5000;
- this option specifies the maximum time Statsig.Initialize() will take to complete;
- the Initialize API makes a network request to fetch the latest feature gate and experiment values for the user, and this option is useful if your game waits for the API to return, but you do not want it to wait for too long in case the user has poor connectivity.

Shutting down Statsig

In order to save users' data and battery usage, as well as prevent logged events from being dropped, we keep event logs in client cache and flush periodically. Because of this, some events may not have been sent when your app shuts down. To make sure all logged events are properly flushed or saved locally, you should tell Statsig to shutdown when your app is closing:

// the function is async, and you can choose to await it so that we make sure all the events that are yet to be flushed get flushed
await Statsig.Shutdown();

How do I run experiments for logged out users?
See the guide on device level experiments.

Is the SDK thread safe (for multi-threaded languages)?
Even though we try to make the SDK thread safe whenever we can, we make no guarantees that is always the case. Therefore, please make sure to always call Statsig APIs from the same thread.
https://docs.statsig.com/client/unitySDK
2022-06-25T07:15:58
CC-MAIN-2022-27
1656103034877.9
[]
docs.statsig.com
1.1.4¶ 09/04/2018 Graphite 1.1.4 is now available for usage. Please note that this is a bugfix release for the stable Graphite 1.1.x branch and it’s recommended for production usage. It also contains some improvements backported from the master branch. Main features¶ - Django 2 and Python 3.7 support for carbon and graphite-web - SSL transport for carbon (carbon-c-relay compatible) - new parameters DESTINATIONS_POOL_REPLICAS, MAX_RECEIVER_CONNECTIONS for carbon - improving performance for big responses from cluster hosts (see REMOTE_BUFFER_SIZE in config) for graphite-web - many other improvements and bug fixes, please see full list below (with PR number and contributor name) Thanks a lot for all Graphite contributors and users! You are the best! Source bundles are available from GitHub: - - - - Graphite can also be installed from PyPI via pip. PyPI bundles are here: Upgrading¶ Please upgrade whisper, carbon and graphite-web - they contain valuable bugfixes and improvements. If you are using carbonate it also should be upgraded. New features¶ Graphite-Web¶ - Django 2 support (@piotr1212 #2278) - improve performance of keepLastValue (@DanCech #2285) - handle highest/lowest thresholds specified as numeric strings (@DanCech #2294) - Fail hard (@clusterfudge #2303) - Allow history tracking in composer (@yuzawa-san #2304) - Better support for variant in ceres (@tharvik #2307) - Document timestamp -1 in carbon protocol (@piotr1212 #2309) - Efficient reading (@clusterfudge #2314) - Added Skyline to the Monitoring section (@earthgekko #2319) - Add support for WhiteNoise 4 (@piotr1212 #2333) - Add Django 2.1 and Python 3.7 to Travis tests (@piotr1212 #2336 ) - add doc note about equal sign in rewrite rule’s pattern (@YevhenLukomskyi #2339) Carbon¶ - Add hint about whisper-resize.py near retention settings, (@helmo #775 #777) - Improve error message for parse errors in storage-schemas (@piotr1212 #780) - Support two new config options to enable SSL transport. 
(@postwait #793) - Add Python 3.7 testing to Travis (@piotr1212 #795) - carbon.conf.example: add more TCP_KEEPALIVE (@iksaif #798) - Allow strategies to return None (@iksaif #799) - Add stop accept() when we reach MAX_RECEIVER_CONNECTIONS (@iksaif #800) - Add DESTINATIONS_POOL_REPLICAS (@iksaif #801) Whisper¶ - Include tests in PyPI distributions (@sbraz, #253) - Add Python 3.7 testing to Travis (@piotr1212 #257) Bug Fixes¶ Graphite-Web¶ - skip ceres tests if not installed (@piotr1212 #2276) - Fix LDAP email address (@kajla #2277) - fix typo: matric -> metric in feeding-carbon.rst (@ngash #2281) - Fixing typo in docs (@deniszh #2284) - hashing: bisect fix for py3 (@piotr1212 #2291) - Generate error when find query is empty (@deniszh #2295) - replace raise StopIteration with return pep-0479 (@piotr1212 #2300) - carbonlink: set the type of the recv buffer explicitly to bytes (@piotr1212 #2301) - clarify ‘maxDataPoints’ (@Dieterbe #2302) - Fix get_real_metric_path for paths where an intermediate directory is a symlink (@yadsirhc #2326) - Latest whitenoise is not supported by graphite (@ellisvlad #2331) - convert prefetched values generator to reusable iterator (@TimWhalen #2322) - backport v4 compatibility (@deniszh #2340) Carbon¶ - Don’t leak file descriptors in instrumentation (@deejay1, #770) - Fix logging on py3 Twisted > 16 (@piotr1212, #774) - hashing, fix bisect on py3 (@piotr1212 #778) - replace raise StopIteration with return pep-0479 (@piotr1212, #779) - rewrite is handled in pipeline now (@DanCech, #790) - aggregator: hide “Allocating new metric” (@iksaif @796) - Fix compatibility issues (@deniszh #802) - import setUpRandomResolver only for new Twisted (@deniszh, #806) Whisper¶ - Make rrd2whisper.py run with Python 3 (@msk, #254) - E722 do not use bare except (@piotr1212, #255) - backport v4 compatibility (@deniszh #259)
https://graphite.readthedocs.io/en/stable/releases/1_1_4.html
2019-07-16T03:08:11
CC-MAIN-2019-30
1563195524475.48
[]
graphite.readthedocs.io
Express deployment hardware and software requirements Applies To: Windows Azure Pack A Windows Azure Pack express deployment is installed on single physical or virtual machine. Minimum hardware requirements Software requirements Note Before you install any of the Windows Azure Pack components, you must install the following software as described in Installing Windows Azure Pack software prerequisites. After complying with these prerequisites, you can Install an express deployment of Windows Azure Pack.
https://docs.microsoft.com/en-us/previous-versions/azure/windows-server-azure-pack/dn469325(v%3Dtechnet.10)
2019-07-16T02:30:20
CC-MAIN-2019-30
1563195524475.48
[]
docs.microsoft.com
Configure the distributed management console

What is the distributed management console?

The distributed management console lets you view detailed performance information about your Splunk Enterprise deployment. The topics in this chapter describe the available dashboards and alerts. The available dashboards provide insight into your deployment's indexing performance, search performance, operating system resource usage, Splunk Enterprise app key value store performance, and license usage.

Find the distributed management console

From anywhere in Splunk Web, click Settings, and then click the Distributed Management Console icon on the left. The distributed management console (DMC) is visible only to admin users.

You can leave the DMC in standalone mode on your Splunk Enterprise instance, which means that you can navigate to the DMC on your individual instance in your deployment and see that particular instance's performance. Or you can go through the configuration steps, still in standalone mode, which lets you access the default platform alerts. Finally, if you go through the configuration steps for distributed mode, you can log into one instance and view performance information for every instance in the deployment.

Which instance should host the console?

After you have configured the DMC in distributed mode, you can navigate to it on only one instance in your deployment and view the console information for your entire deployment. You have several options for where to host the distributed management console. The instance you choose must be provisioned as a search head. See "Reference hardware" in the Capacity Planning Manual. For security and some performance reasons, only Splunk Enterprise administrators should have access to this instance.

Important: Except for the case of a standalone, non-distributed Splunk Enterprise deployment, do not host the DMC on an indexer. In a deployment with an indexer cluster, host the DMC on the master node. See "System requirements" in the Managing Indexes and Clusters Manual. As an alternative, you can host the DMC on a search head node in the cluster. If you do so, however, you cannot use the search head to run any non-DMC searches.

In a deployment with multiple indexer clusters, host the DMC on a dedicated license master. You can configure the monitoring console on your license master if the following are true:
- Your license master can handle the search workload, that is, it meets or exceeds the search head reference hardware requirements. See "Reference hardware" in the Capacity Planning Manual.
- Only Splunk Enterprise admins can access your dedicated license master.

For a deployment with a search head cluster, see "System requirements and other deployment considerations for search head clusters" in the Distributed Search Manual. The distributed management console is not supported in a search head pooled environment.

Configure your DMC to monitor a deployment

Prerequisites
- Have a functional Splunk Enterprise deployment. See "Distributed Splunk Enterprise overview" in the Distributed Deployment Manual. Any instance that you want to monitor must be running Splunk Enterprise 6.1 or higher.
- Check whether your deployment is healthy, that is, that all peers are up.
- Make sure that each instance in the deployment (each search head, license master, and so on) has a unique server.conf serverName value and inputs.conf host value.
- Forward internal logs (both $SPLUNK_HOME/var/log/splunk and $SPLUNK_HOME/var/log/introspection) to indexers from all other instance types. See "Forward search head data" in the Distributed Search Manual. Without this step, many dashboards will lack data; a sketch of the forwarding configuration follows at the end of this topic. These other instance types include:
  - Search heads.
  - License masters.
  - Cluster masters.
  - Deployment servers.
- The user setting up the Distributed Management Console needs the "admin_all_objects" capability.

Add instances as search peers
1. Log into the instance on which you want to configure the distributed management console.
2. In Splunk Web, select Settings > Distributed search > Search peers.
3. Add each search head, deployment server, license master, and standalone indexer as a distributed search peer to the instance hosting the distributed management console. You do not need to add clustered indexers, but you must add clustered search heads.

Set up DMC in distributed mode
1. Log into the instance on which you want to configure the distributed management console. The instance by default is in standalone mode, unconfigured.
2. In Splunk Web, select Distributed management console > Setup.
3. Turn on distributed mode at the top left.
4. Check that:
- The columns labeled instance and machine are populated correctly and populated with values that are unique within a column. Note: If your deployment has nodes running Splunk Enterprise 6.1.x (instead of 6.2.0+), their instance (host) and machine values will not be populated.
  - To find the value of machine, typically you can log into the 6.1.x instance and run hostname on *nix or Windows. Here machine represents the FQDN of the machine.
  - To find the value of instance (host), use btool: splunk cmd btool inputs list default.
  - When you know these values, in the Setup page, click Edit > Edit instance. A popup presents you with two fields to fill in: Instance (host) name and Machine name.
- The server roles are correct, with the primary or major roles. For example, a search head that is also a license master should have both roles marked. If not, click Edit to correct.
- A cluster master is identified if you are using indexer clustering. If not, click Edit to correct. Caution: Make sure anything marked an indexer is really an indexer.
5. (Optional) Set custom groups. Custom groups are tags that map directly to distributed search groups. You don't need to add groups the first time you go through DMC setup (or ever). You might find groups useful, for example, if you have multisite indexer clustering (each group can consist of the indexers in one location) or an indexer cluster plus standalone peers. Custom groups are allowed to overlap. That is, one indexer can belong to multiple groups. See distributed search groups in the Distributed Search Manual.
6. Click Save.
7. (Optional) Set up platform alerts.

If you add another node to your deployment later, return to Setup and check that the items in step 4 are accurate.

Configure on a single instance

On a single Splunk Enterprise instance operating by itself, you must configure standalone mode before you can use platform alerts. To configure:
1. Navigate to the Setup page in DMC.
2. Check that search head, license master, and indexer are listed under Server Roles, and nothing else. If not, click Edit.
3. Click Apply Changes to complete setup.
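As referenced in the prerequisites above, forwarding internal logs from a non-indexer instance is done in outputs.conf. The following is a minimal sketch — the group name and the indexer host and port are placeholders you would replace with your own values, and the exact settings should be confirmed in "Forward search head data" in the Distributed Search Manual:

# $SPLUNK_HOME/etc/system/local/outputs.conf on a search head, license master, etc.
[indexAndForward]
index = false

[tcpout]
defaultGroup = dmc_forwarding
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:dmc_forwarding]
server = indexer1.example.com:9997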
https://docs.splunk.com/Documentation/Splunk/6.2.8/Admin/ConfiguretheMonitoringConsole
2019-07-16T02:35:36
CC-MAIN-2019-30
1563195524475.48
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Introduction

Follow these instructions to create your first website:
- using a fresh copy of CakePHP 3.x
- called mycake3.app
- with an Nginx virtual host
- with two databases (one for testing purposes)

1. Login to your Virtual Machine

Make sure that you are in the cakebox folder on your local machine before running:

vagrant ssh

2. Provision the website

Inside your Virtual Machine run:

cakebox application add mycake3.app

3. Update your hosts file

Open the hosts file on your local machine so you can tell your local system where to find the new website:
- On Mac OS-X systems: /private/etc/hosts
- On Linux systems: /etc/hosts
- On Windows systems: c:\windows\system32\drivers\etc\hosts

Note: Windows users MUST run Notepad as an Administrator (right mouse button on c:\windows\notepad.exe) and then use the File > Open menu options to open the hosts file or they won't be able to save the updated file.

Add the following line and save the updated file.

10.33.10.10 cakebox mycake3.app

You might want to test if your update was successful by running ping mycake3.app on your local machine. On Mac/Linux the output should look similar to:

PING mycake3.app (10.33.10.10) 56(84) bytes of data.
64 bytes from mycake3.app (10.33.10.10): icmp_seq=1 ttl=64 time=0.016 ms
64 bytes from mycake3.app (10.33.10.10): icmp_seq=2 ttl=64 time=0.022 ms
64 bytes from mycake3.app (10.33.10.10): icmp_seq=3 ttl=64 time=0.022 ms

On Windows it should look like this:

Pinging mycake3.app [10.33.10.10] with 32 bytes of data:
Reply from 10.33.10.10: bytes=32 time=1ms TTL=64
Reply from 10.33.10.10: bytes=32 time<1ms TTL=64
Reply from 10.33.10.10: bytes=32 time<1ms TTL=64

Done! That's all there's to it. You can now open the browser on your local system and browse to http://mycake3.app. If things went well you should see something similar to this:

Editing Code

You can use the editor on your local machine to update the (php) source files used by the new website. Just take a look inside the cakebox/Apps folder on your local machine. If things went well you should see a subfolder named mycake3.app containing all source files. Launch your local editor and make some changes. Changes to local files are automatically synchronized to your box so if you refresh the web page you should see your changes applied.

Closing Note

Remember that you can provision as many applications as you like. They will all run parallel inside your box so feel free to create another website to get comfortable with the process. As a closing note you might want to run cakebox application add --help inside your virtual machine to display a list of options you can use to, for example:
- provision a different framework flavor like Laravel or Yii
- use HHVM instead of Nginx
https://cakebox.readthedocs.io/en/latest/tutorials/creating-your-first-website/
2019-07-16T02:17:21
CC-MAIN-2019-30
1563195524475.48
[array(['../../img/fresh-install-cake3.png', 'Cakebox Overview'], dtype=object) ]
cakebox.readthedocs.io
.NET Framework technologies unavailable on .NET Core

Several technologies available to .NET Framework libraries aren't available for use with .NET Core, such as AppDomains, Remoting, Code Access Security (CAS), and Security Transparency. If your libraries rely on one or more of these technologies, consider the alternative approaches outlined below. For more information on API compatibility, the CoreFX team maintains a List of behavioral changes/compat breaks and deprecated/legacy APIs at GitHub.

Just because an API or technology isn't currently implemented doesn't imply it's intentionally unsupported. You should first search the GitHub repositories for .NET Core to see if a particular issue you encounter is by design, but if you cannot find such an indicator, please file an issue in the dotnet/corefx repository issues at GitHub to ask for specific APIs and technologies. Porting requests in the issues are marked with the port-to-core label.

AppDomains

Application domains (AppDomains) isolate apps from one another. AppDomains require runtime support and are generally quite expensive. Creating additional app domains is not supported, and we don't plan on adding this capability in the future. For code isolation, we recommend separate processes or using containers as an alternative. For the dynamic loading of assemblies, we recommend the new AssemblyLoadContext class. To make code migration from .NET Framework easier, consult the .NET Core/corefx GitHub repository, making sure to select the branch that matches your implemented version.

Remoting

.NET Remoting was identified as a problematic architecture. It's used for cross-AppDomain communication, which is no longer supported. Also, Remoting requires runtime support, which is expensive to maintain. For these reasons, .NET Remoting isn't supported on .NET Core, and we don't plan on adding support for it in the future. For communication across processes, consider inter-process communication (IPC) mechanisms as an alternative to Remoting, such as the System.IO.Pipes or the MemoryMappedFile class. Across machines, use a network-based solution as an alternative. Preferably, use a low-overhead plain text protocol, such as HTTP; the Kestrel web server is one such HTTP-based option.

Code Access Security (CAS)

There are too many cases in the .NET Framework and the runtime where an elevation of privileges occurs to continue treating CAS as a security boundary. In addition, Security Transparency is not supported by .NET Core.
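To illustrate the AssemblyLoadContext alternative mentioned under AppDomains, here is a minimal sketch. The plugin path is a placeholder, and the isCollectible flag — which enables unloading, the closest analog to tearing down an AppDomain — requires .NET Core 3.0 or later:

using System.Reflection;
using System.Runtime.Loader;

// Load a plugin assembly into its own collectible context.
var context = new AssemblyLoadContext("PluginContext", isCollectible: true);
Assembly plugin = context.LoadFromAssemblyPath(@"/path/to/plugin.dll");

// ... use the plugin via reflection ...

// Unload the context when done (unloading completes asynchronously).
context.Unload();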
https://docs.microsoft.com/en-us/dotnet/core/porting/net-framework-tech-unavailable
2019-07-16T02:39:48
CC-MAIN-2019-30
1563195524475.48
[]
docs.microsoft.com
Americommerce americommerce.com Configuration Instructions - 1 - Sign into your Rejoiner Account and click Implementation. Copy your Rejoiner Site ID. - 2 - From your Americommerce dashboard, click Tools > Apps & Add-ons > Rejoiner - 3 - Using your Rejoiner account credentials from Step 1, complete the Account Settings form and check Enable Rejoiner Testing Instructions - 1 - Walk through the testing guidelines outlined here.
https://docs.rejoiner.com/article/114-americommerce
2019-07-16T03:22:20
CC-MAIN-2019-30
1563195524475.48
[]
docs.rejoiner.com
Start Here - Segments

In this tutorial, we'll learn about segmentation and how it can be used to target specific customer groups with automated email campaigns.

CONTENTS

Overview

Segments are groups of customers that share specific traits or characteristics. Segments can be used to target specific groups of customers who abandon their cart or who purchase from your website. Each segment you create has a name and a set of filters that can be updated at any time.

Structure

Segments are structured as a CNF (conjunctive normal form) formula:
- Segment (conjunction of Clauses, i.e. AND)
- Clause (disjunction of Filters, i.e. OR)
- Filter (single rule that consists of filter type, operator, user value)

In addition to this, each Clause may define a strategy for matching Cart Items (this only applies to filters that use them, e.g. cart item price):
- Match Any (default) – at least one cart item must be matched by the Clause.
- Match All – each cart item must be matched by the Clause.

Filter Types

Cart Item
These filters use data collected from setCartItem calls via our JavaScript API. Note that these calls must be correctly implemented for these filters to work.

Cart Data
These filters use data collected from the setCartData call via our JavaScript API. Note that these calls must be correctly implemented for these filters to work.

Customer
Filters in this category refer to the customer's activity in the past.

Events
Filters in this category refer to events in a customer's activity in the past.

Lists
Filters in this category refer to a contact's status within a specific list.

Filters based on interactions with previous Rejoiner campaigns.

Browsing
Filters based on events from a user's browsing activity.

Operator Types

Filters are assigned to the following operators (this depends on the data type that is used in the filter):
- Numeric operator
- Text operator
- Date operator

Each operator has some unique comparison choices (e.g. RegExp – regular expression – matching in text operator, or relative and absolute date comparisons in date operator).
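To make the CNF structure concrete, here is a hypothetical segment; the filter names are invented for illustration and are not actual Rejoiner filter types:

Segment "High-value cart abandoners" =
    (cart total > 100 OR item count >= 3)   -- Clause 1: two Filters OR'd together
AND (country = "US")                        -- Clause 2: a single Filter

A customer matches the segment only if every Clause matches (AND), and a Clause matches if at least one of its Filters does (OR).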
https://docs.rejoiner.com/article/41-start-here-segments
2019-07-16T03:21:28
CC-MAIN-2019-30
1563195524475.48
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/54623c9ae4b075529d654585/images/555204aee4b027e1978dfe43/file-JmMWW3otYv.png', None], dtype=object) ]
docs.rejoiner.com
Commands.AddCommandBar Method (String, vsCommandBarType, Object, Int32)

Definition

Creates a command bar that is saved and available the next time the environment is started.

[System.Runtime.InteropServices.DispId(12)]
public object AddCommandBar (string Name, EnvDTE.vsCommandBarType Type, object CommandBarParent, int Position = 1);

Parameters
- Name: Required. A name for the new command bar.
- Type: Required. A vsCommandBarType constant that determines the type of command bar.
- CommandBarParent: Optional. An Office CommandBar object to which the new command bar is to be added. (This is required, however, if Type is vsCommandBarTypeMenu.)
- Position: Optional. The index position, starting at one, in the command bar to place the new command bar.

Returns

A CommandBar object.

Remarks

The CommandBar object is a member of the Microsoft Office object model. The command bar added with this method is a permanent command bar, which is available in every session of the IDE whether or not the VSPackage is loaded. You should use this method to add a command bar only if you want a permanent command bar. You should call this method only once, when your VSPackage is loaded for the first time. If you want a temporary command bar, which appears only when the VSPackage is actually loaded, you must use the DTE.CommandBars.Add method when you load the VSPackage, and then call the DTE.CommandBars.Remove method when it is unloaded. Since a permanent command bar appears even when the VSPackage is not loaded, you should be sure to remove it when the VSPackage is uninstalled. Therefore, you must use an MSI to install and uninstall your VSPackage, and add a custom action to your uninstall program. For more information about menus and commands, see Commands, Menus, and Toolbars.
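A minimal usage sketch follows; the toolbar name is a placeholder, the cast assumes a project reference to the Microsoft.VisualStudio.CommandBars assembly, and the dte variable is an EnvDTE.DTE instance obtained from your VSPackage:

using EnvDTE;
using Microsoft.VisualStudio.CommandBars;

CommandBar bar = (CommandBar)dte.Commands.AddCommandBar(
    "MyToolbar",                               // Name
    vsCommandBarType.vsCommandBarTypeToolbar,  // Type
    null,                                      // CommandBarParent (not needed for toolbars)
    1);                                        // Position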
https://docs.microsoft.com/en-us/dotnet/api/envdte.commands.addcommandbar?redirectedfrom=MSDN&view=visualstudiosdk-2017
2017-11-18T02:58:03
CC-MAIN-2017-47
1510934804518.38
[]
docs.microsoft.com
With View Administrator, you can create a pool of Windows, but not Linux, desktop machines automatically. With vSphere PowerCLI, you can develop scripts that automate the deployment of a pool of Linux desktop machines. The sample scripts that are provided are for illustration purposes only. VMware does not accept responsibility for issues that might arise when you use and enhance the sample scripts.
https://docs.vmware.com/en/VMware-Horizon-6/6.1.1/com.vmware.horizon-view.linuxdesktops.doc/GUID-8EC5A02E-E619-47E0-B897-412F2F27DC48.html
2017-11-18T03:02:54
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
You can select the security protocols and cryptographic algorithms that are used to encrypt communications between Horizon Client and Horizon servers, or between Horizon Client and the agent in the remote desktop. These options are also used to encrypt the USB channel (communication between the USB service daemon and the agent).

With the default setting, cipher suites use 128- or 256-bit AES, remove anonymous DH algorithms, and then sort the current cipher list in order of encryption algorithm key length. By default, TLS v1.0, TLS v1.1, and TLS v1.2 are enabled. SSL v2.0 and v3.0 are not supported.

If TLS v1.0 and RC4 are disabled, USB redirection does not work when users are connected to Windows XP desktops. Be aware of the security risk if you choose to make this feature work by enabling TLS v1.0 and RC4.

If you configure a security protocol for Horizon Client that is not enabled on the server to which the client connects, a TLS/SSL error occurs and the connection fails. At least one of the protocols that you enable in Horizon Client must also be enabled on the remote desktop. Otherwise, USB devices cannot be redirected to the remote desktop.

On the client system, you can use either configuration file properties or command-line options for these settings:
- To use configuration file properties, use the view.sslProtocolString and view.sslCipherString properties.
- To use command-line configuration options, use the --sslProtocolString and --sslCipherString options.

For more information, see Using the Horizon Client Command-Line Interface and Configuration Files and look up the property and option names in the table in Horizon Client Configuration Settings and Command-Line Options.
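As a purely illustrative sketch — the exact property syntax and accepted values should be verified in the configuration-file documentation referenced above, and the protocol and cipher strings below are example values only — restricting the client to TLS v1.1 and v1.2 might look like:

# Horizon Client configuration file (illustrative values)
view.sslProtocolString = "TLSv1.1 TLSv1.2"
view.sslCipherString = "AES:!aNULL:@STRENGTH"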
https://docs.vmware.com/en/VMware-Horizon-Client-for-Linux/4.4/com.vmware.horizon-client.linux-44.doc/GUID-7C4410AF-E25C-4A85-9F59-846C0DC6ABD6.html
2017-11-18T03:03:32
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
You can install Horizon Client on all models of iPad and iPhone. The iOS device on which you install Horizon Client, and the peripherals it uses, must meet certain system requirements.

iPad and iPhone models
iPhone 4, 4s, 5, 5s, 5c, 6, 6 Plus, 6s, and 6s Plus
iPad 2, iPad (3rd generation), iPad (4th generation), iPad mini, iPad mini 3, iPad mini 4, iPad mini with Retina display, iPad Air, iPad Air 2, and iPad Pro
Horizon Client 3.4 and later include 64-bit processor support for iPhone 5s, 6, and 6 Plus, and iPad Air, iPad Air 2, iPad mini 2, and iPad mini 3.

Operating systems
iOS 6.0 and later, including iOS 9.x

External keyboards
(Optional) iPad Keyboard Dock and Apple Wireless Keyboard (Bluetooth)

Smart card authentication
See Smart Card Authentication Requirements.

Touch ID authentication
See Touch ID Authentication Requirements.

Connection Server, Security Server, and View Agent or Horizon Agent
Latest maintenance release of View 5.3.x and later releases. VMware recommends that you use a security server so that your iOS clients do not require a VPN connection. (Horizon Client 4.0 or later and Horizon Agent 7.0 or later)
https://docs.vmware.com/en/VMware-Horizon-Client-for-iOS/4.0/com.vmware.horizon.ios-client-doc/GUID-A05941AE-0287-4B72-B7E7-F42EF8FB9307.html
2017-11-18T03:03:35
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
The default authentication method for admin users to log in from the System directory is Password (Local Directory). The default access policy is configured with Password (Local Directory) as a fallback method so that admins can log in to the VMware Identity Manager admin console and the Workspace ONE portal. If you create access policies for specific Web and desktop applications that system admins are entitled to, those policies must also include Password (Local Directory) as a fallback authentication method. Otherwise, the admins cannot log in to the application.
https://docs.vmware.com/en/VMware-Identity-Manager/2.9.1/com.vmware.wsp-administrator_29/GUID-14FA4721-BB62-4ADD-AA46-8CF53EF1D9C5.html
2017-11-18T03:03:46
CC-MAIN-2017-47
1510934804518.38
[array(['images/GUID-8BF3CD2D-051E-4484-8FEF-3E33C2B5F17B-low.png', None], dtype=object) ]
docs.vmware.com
New Features

- OpenStack - OpenStack Kilo is now officially supported in RightScale. For more information, see the OpenStack documentation on our docs site.
- Dashboard Sidebar Redesign - Along with resolving minor bugs in how the sidebar hides/shows on certain views within Cloud Management, the new sidebar design is part of a continued effort to align the look and feel of RightScale's applications for a consistent user experience.
- Disabling RightScripts - We have added support for disabling RightScripts from the boot sequence when launching a Server/Array through the CM API 1.5, giving you greater control over the boot behavior of your RightScale servers.
- EC2 Instance Tenancy Option - In EC2, you can now select the tenancy for a given instance when launching it (instead of relying on the VPC tenancy setting), giving you more flexibility around workload placement in AWS.
- EC2 g2 Instances - We have added support for the `g2.8xlarge` instance type in EC2.
- GCE Preemptible Instances - We have added support for launching instances and configuring servers/arrays to use `preemptible` instances in GCE. Using these instances can drastically reduce costs, but they come with a variety of limitations -- see GCE documentation for more information. This setting is now available in the Cloud Management Dashboard and in the API and CAT through the `cloud_specific_attributes` hash of an instance.
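As a hypothetical illustration of the last item, here is how the preemptible setting might be passed through the `cloud_specific_attributes` hash using CM API 1.5 from Python. The shard URL, server ID, and the exact attribute key are assumptions; consult the API 1.5 reference for the authoritative names.

```python
# Hypothetical sketch: setting the GCE preemptible flag through the
# cloud_specific_attributes hash with CM API 1.5 and Python requests.
# The shard URL, server ID, and "preemptible" key are assumptions.
import requests

API = "https://us-3.rightscale.com"
session = requests.Session()
session.headers["X-API-Version"] = "1.5"

# Authenticate with an OAuth refresh token (one of the documented schemes).
token = session.post(API + "/api/oauth2", data={
    "grant_type": "refresh_token",
    "refresh_token": "YOUR_REFRESH_TOKEN",
}).json()["access_token"]
session.headers["Authorization"] = "Bearer " + token

# Mark the server's next instance as preemptible, then launch it.
session.put(API + "/api/servers/123/next_instance", data={
    "instance[cloud_specific_attributes][preemptible]": "true",
})
resp = session.post(API + "/api/servers/123/launch")
resp.raise_for_status()
```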
http://docs.rightscale.com/release-notes/cloud-management/2015/12/02.html
2017-11-18T02:39:56
CC-MAIN-2017-47
1510934804518.38
[]
docs.rightscale.com
Title: Creating a Religious Properties Database for the City of New Bedford: an Analysis of Best Practices and Available Systems

Document Type: Document

Abstract: This policy analysis was written to provide the city of New Bedford, the Waterfront Historic Area League, the Inter-church Council of Greater New Bedford, and the congregations with possible database systems to consider in creating their historic religious properties database. It also presents the best methodology to use when choosing a database: deciding who will be involved in the selection process, determining a budget, and listing the mandatory requirements the database must meet are all important parts of the decision-making process.

Recommended Citation: Cardarelli, Elizabeth C., "Creating a Religious Properties Database for the City of New Bedford: an Analysis of Best Practices and Available Systems" (2014). Historic Preservation Capstone Projects. 2.

Included in: Historic Preservation and Conservation Commons
https://docs.rwu.edu/hp_capstone_project/2/
2017-11-18T02:35:54
CC-MAIN-2017-47
1510934804518.38
[]
docs.rwu.edu
Apply conditions to tasks

Apply conditions to execution plan tasks so that a task runs only when its condition is met. Figure 1. Applying Conditions to Execution Plan Tasks. In this example, the Deliver to IT Labs step does not run if the request itself is in Atlanta. There is no need to deliver something to the IT lab if it is already there.

Related Tasks: Define task templates
Related Concepts: Create task templates
Related Reference: Use condition scripts to run tasks
https://docs.servicenow.com/bundle/jakarta-it-service-management/page/product/service-catalog-management/reference/r_ApplyCondExecPlanTasks.html
2017-11-18T03:07:45
CC-MAIN-2017-47
1510934804518.38
[]
docs.servicenow.com
Query Console is an interactive web-based query development tool for writing and executing ad-hoc queries in XQuery, Server-Side JavaScript, SQL and SPARQL. Query Console enables you to quickly test code snippets, debug problems, profile queries, and run administrative XQuery scripts. The following terms and definitions cover the primary Query Console components: Using Query Console, you can: The query editor in Query Console includes features such as The workspaces and queries created in Query Console are stored in MarkLogic Server, so they are available to you from any computer with access to your MarkLogic Server instance. For example, you can create workspaces and queries on your desktop computer and use them from a lab machine with access to the same MarkLogic Server instance. You should only have one Query Console session active at a time for any given MarkLogic user. Query Console saves state to MarkLogic Server. If a user has multiple Query Console sessions active concurrently, the state can become inconsistent. The picture below summarizes key Query Console UI features. For more information on using specific features, see the Query Console Walkthrough.
http://docs.marklogic.com/guide/qconsole/intro
2017-11-18T02:53:32
CC-MAIN-2017-47
1510934804518.38
[]
docs.marklogic.com