|
Thread Rules
1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace Facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
On October 09 2016 02:56 phar wrote:
On October 08 2016 22:01 mantequilla wrote: What's too "specific" about automating a web app's deployment? You mean thousands of companies in the world are deploying their web apps to the cloud by hand, or through homemade solutions they wrote, and paying six figures to do it? And I am not asking for step-by-step solutions, just asking what kind of tool or API or whatever they are using.
Again, it's not a deployment problem. Automating deployment for a webpage is not hard, and Azure makes it even easier by providing hooks for e.g. continuous deployment with whatever you're already using (GitHub, or what have you). The issue is that someone's asking for a whitelabeling solution and trying to phrase it as a deployment problem. They're asking the wrong questions; that's why they're not getting great answers.
Honestly, I don't know the term "whitelabeling"; when I search for it, what comes up is changing the name of someone else's program and remarketing it as your own.
By deployment I mean provisioning is included too: setting up the infrastructure as well as doing the actual deployment of the code. If I could automate that, I could create as many instances as I want. I guess I could automate it using Azure's REST API plus some unreliable FTP scripts, closely imitating what I do manually (roughly like the sketch below), but that does not seem like a healthy solution to me.
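Something like this is what I have in mind, just to illustrate; the subscription ID, token handling, and api-version here are all placeholders, and a real setup would also need an App Service plan and actual code deployment on top:
[code]
import requests

# Placeholder values -- none of these are real.
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "customers"
TOKEN = "<bearer token from Azure AD>"

def create_site(site_name):
    """Create one more Web App through the Azure Resource Manager REST API."""
    url = ("https://management.azure.com/subscriptions/" + SUBSCRIPTION +
           "/resourceGroups/" + RESOURCE_GROUP +
           "/providers/Microsoft.Web/sites/" + site_name)
    body = {"location": "westeurope", "properties": {}}
    resp = requests.put(url,
                        params={"api-version": "2016-08-01"},  # version may differ
                        headers={"Authorization": "Bearer " + TOKEN},
                        json=body)
    resp.raise_for_status()
[/code]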
|
On October 09 2016 05:44 mantequilla wrote:
On October 09 2016 02:56 phar wrote: [...] The issue is that someone's asking for a whitelabeling solution and trying to phrase it as a deployment problem. [...]
I don't know the term "whitelabeling" honestly [...] By deployment I mean provisioning is included too: setting up the infrastructure as well as doing the actual deployment of the code. If I could automate that, I could create as many instances as I want. [...]
Why are you trying to do this? One of the benefits of online software is having multiple customers share the same infrastructure. Do you think Salesforce makes a new DB per customer? No, they all share the same infrastructure and DB with custom user IDs; maybe slap a CNAME on a domain and use a different splash screen.
Maybe there is some weird data-privacy reason why you would want to do this, but honestly, based on your questions, I wouldn't trust any service you are hosting to protect my privacy.
Hosting all these servers also sounds like a pain when you need to upgrade customers to new versions of your software as they roll out.
phar keeps saying whitelabeling, which applies to some extent (though you didn't mention whether customers actually resell the site you host as their own, which is what whitelabeling is about; for example, Clevo is a whitelabel laptop maker, since they make laptops that other PC makers sell under their own brands).
You can get what you want to some extent through configuration management software, as mentioned, or by taking advantage of something host-specific like CloudFormation. I'd also look into something like Heroku, which is super expensive but, depending on your usage pattern and deadline, might get you out the door. Containers are a popular way now to get the kind of thing you want done (customer clicks, ends up on an isolated host). Containers wouldn't involve provisioning entirely new infrastructure, just new containers. For example, say you are using Docker: a customer signing up causes a new container to be brought up, and you add a new subdomain under your root domain (customer1.app.example.com). A rough sketch of that flow is below.
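Roughly like this, using the Docker SDK for Python; the image name, customer_id, and domain are all made up for illustration:
[code]
# Hypothetical sign-up hook using the Docker SDK for Python (pip install docker).
import docker

APP_IMAGE = "myapp:latest"
ROOT_DOMAIN = "app.example.com"

def provision_customer(customer_id):
    """Bring up an isolated container for a new customer and
    return the subdomain it should be reachable at."""
    client = docker.from_env()
    client.containers.run(
        APP_IMAGE,
        detach=True,
        name="customer-" + customer_id,
        labels={"customer": customer_id},
        environment={"CUSTOMER_ID": customer_id},
    )
    # Pointing customer1.app.example.com at the container (DNS plus a
    # reverse proxy) is left out here; that part is host-specific.
    return customer_id + "." + ROOT_DOMAIN
[/code]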
But really, I'd reconsider the reason you are doing this. A customer might want to host it on-site (in which case what you're doing isn't it), but if it is not on-site, I'm not sure what the benefit is of giving everyone their own infrastructure.
|
On October 09 2016 06:42 teamamerica wrote:
On October 09 2016 05:44 mantequilla wrote: [...] If I could automate that, I could create as many instances as I want. [...]
Why are you trying to do this? [...]
I bet mantequilla isn't interested in arguing about this here; it's just what his boss is asking for. You would have to argue with that boss.
Here's the post where the idea is explained:
http://www.teamliquid.net/forum/general/134491-the-big-programming-thread?page=774#15477
This seems to be about selling a self-contained thingy to each customer, nothing getting shared between them.
I remember an older post about the boss pushing the adoption of new Microsoft stuff; I think it might have been this one, but I feel there were some other posts as well:
http://www.teamliquid.net/forum/general/134491-the-big-programming-thread?page=765#15286
|
A pretty big part of this is a boss issue, yes. He's very experienced (20-25 years), but not very up to date with newer technologies like cloud and single-page apps. Sometimes he comes up with a solution like this, and since I have only 2-3 years of experience, I have two options:
1) Do as he asks, no matter how hard or wrong it feels, and just make it work.
2) Gather enough knowledge and present strong arguments for why his method is wrong and what should be done instead.
Sometimes 1 is easier, since it's hard to convince him without strong points and I'm not that knowledgeable. Sometimes 2 is easier, when what he wants is obviously not the method everyone uses.
And documentation or books do not always suffice, so I ask people. I may have read 200-300 pages of text about Azure and still not be sure what to do. We already have a big legacy app, and I need to consider its architecture too when making decisions.
Often someone with actual experience can sum things up in 3-4 sentences in a way that 300 pages of text can't. So I thank everyone who answers my questions.
|
In my experience, bosses are very sensitive to money. Just calculate how much it costs to have running instances per customer, add the overhead of administering which instances belong to which customer, and of course how much it costs to keep all instances updated with newer versions (or, god help you, different versions per customer), including support desk stuff.
And include the costs of the alternative setups argued for here, for comparison.
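With entirely made-up numbers: 40 customers on their own small $50/month instances is already 40 × $50 = $2,000/month before counting any admin or support time, versus a few hundred per month for one shared multi-tenant deployment. That's the kind of line item bosses respond to.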
|
Our benevolent government blocked fucking GitHub (Dropbox and Google Drive for good measure as well) because a hacker group leaked some emails. I guess the links were on a site hosted on GitHub Pages or something. Be glad you don't live in the Middle East.
I can't work on one of my websites because of this. VPNs and proxies, here we go again.
|
Is anyone here familiar with Python code profiling with Spyder? How do I save the analysis results? :<
|
Regarding the Docker image of MongoDB:
- If I mount volumes at /data/db and /data/configdb: everything works fine, I can persist data on the host machine.
- If I mount a volume at /data: I can't persist data, and nobody knows why.
Fuck this shit.
Well, at least I solved my problem.
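(My best guess at the why, in case anyone hits the same thing: the official mongo image declares VOLUME /data/db and /data/configdb in its Dockerfile, so Docker creates anonymous volumes at exactly those paths, and those take precedence over a volume mounted at the parent /data.)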
|
Didn't feel like declaring a variable, but still had to advance the iterator by 4 elements. Tell me if there is a more readable way to do it pls xD
std::copy( digits.begin(), (++(++(++(++digits.begin())))), gamefield.begin() );
|
On October 10 2016 23:25 JWD[9] wrote: Didn't feel like declaring a variable, but still had to advance the iterator by 4 elements. Tell me if there is a more readable way to do it pls xD std::copy( digits.begin(), (++(++(++(++digits.begin())))), gamefield.begin() );
digits.begin() + 4 should work.
|
On October 10 2016 23:32 beg wrote: [...] digits.begin() + 4 should work.
Oh my gosh, thank you!
|
On October 10 2016 23:35 JWD[9] wrote: [...] Oh my gosh, thank you!
You can also use std::advance if you don't have a random-access iterator.
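(Or std::next(digits.begin(), 4) from <iterator>, which does the same advance but returns the iterator, so it fits inline in the std::copy call.)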
|
I was in an interview today, and one of the technical questions was to design a data structure with push, pop, and min methods. The catch is that they all needed to work in O(1) time. I was not able to find a good answer to the question.
I know all about stacks, queues, and priority queues, but apparently this is a popular question, and the trick is to use a second stack that keeps the running minimum, something like the sketch below.
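A minimal reconstruction of that two-stack idea in Python (my code, not the interviewer's):
[code]
class MinStack:
    """Stack with O(1) push, pop, and min: a second stack mirrors the
    first, holding the minimum of everything at or below each position."""

    def __init__(self):
        self._items = []
        self._mins = []   # _mins[i] == min(_items[:i + 1])

    def push(self, value):
        self._items.append(value)
        self._mins.append(value if not self._mins else min(value, self._mins[-1]))

    def pop(self):
        self._mins.pop()
        return self._items.pop()

    def min(self):
        return self._mins[-1]
[/code]
So after pushing 3, 1, 2, min() is 1; pop twice and min() is back to 3, all in constant time.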
|
Right... And why would you want that? What is the use case for a stack with a min method?
|
On October 11 2016 03:01 supereddie wrote: Right... And why would you want that? What is the use case for a stack with a min method?
Maybe they have reeeeaaaaaalllyy shitty old software that they expect YOU to maintain. If you are not down for some horrible programming exercises, there is no way you are ever going to get anything done in that job.
|
On October 11 2016 02:41 Isualin wrote: I was in an interview today, and one of the technical questions was to design a data structure with push, pop, and min methods, all working in O(1) time. [...]
Which company, may I ask? Is it in Turkey?
|
On October 11 2016 05:02 mantequilla wrote: [...] Which company, may I ask? Is it in Turkey?
It was a small US-based startup in Ankara, not a fancy company, but they still have a 4-step interview process. Seemed like overkill to me.
|
On October 11 2016 03:01 supereddie wrote: Right... And why would you want that? What is the use case for a stack with a min method?
Obviously for the same reason that you need to know how to program FizzBuzz. Interview questions are often unrelated to actual code; they are exercises to show the basic analytical skills needed for programming. I think this one might go a bit overboard, but if you know all about stacks, you should be able to come up with the answer. Of course, it also depends on how much time they give you, nerves, pressure, etc., so it's better to practice some of the "common" exercises first. Most of them are reasonably easy to answer if you are a practiced programmer and are given a bit of time to think about the problem.
|
On October 10 2016 08:10 maybenexttime wrote: Is anyone here familiar with Python code profiling with Spyder? How do I save the analysis results? :<
So I translated my code from MATLAB to Python. I paste it below in case anyone would like to suggest improvements:
+ Show Spoiler +
[code]
#-- IMPORTS --#

import argparse              # This will import the argparse command-line parameters module.
import textwrap as _textwrap
import math                  # This will import the math module.
import numpy as np           # This will import the numerical array module.
import matplotlib.pyplot as plt

#-- FUNCTION DEFINITIONS --#

# Definition of the help text wrapping class:
class MultilineFormatter(argparse.HelpFormatter):
    def _fill_text(self, text, width, indent):
        text = self._whitespace_matcher.sub(' ', text).strip()
        paragraphs = text.split('|n ')
        multiline_text = ''
        for paragraph in paragraphs:
            formatted_paragraph = _textwrap.fill(
                paragraph, width,
                initial_indent=indent, subsequent_indent=indent) + '\n\n'
            multiline_text = multiline_text + formatted_paragraph
        return multiline_text

#---------------------------------------------------------------------------------------
def calculate_variance(x, y, x_foc, y_foc, x_min, x_max, y_min, y_max):
    # Calculation of the number of focal points:
    m = len(x_foc)  # Number of focal points = number of iterations in the first loop.

    # Calculation of the number of scales:
    scale = np.arange(1, 45 + 1, 1)  # All scales.
    #scale = 1                       # Only one scale.
    l = len(scale)

    # Preallocation of memory for all focal points:
    variance_avg_scale = np.zeros((m, 180))

    # Calculation of the number of analyzed points:
    n = len(x)

    # Calculation of transect properties:
    transect_angles = np.arange(0, 180, 1)
    slopes = np.tan(np.deg2rad(transect_angles))

    # Selection of the focal point:
    for j in range(m):
        # Creation of local coordinates /vectorized:
        i = np.arange(0, n, 1)  # One iteration for each (x, y) point.
        x_local = x[i] - x_foc[j]
        y_local = y[i] - y_foc[j]

        # Limits of the studied area in local coordinates:
        x_min_local = x_min - x_foc[j]
        x_max_local = x_max - x_foc[j]
        y_min_local = y_min - y_foc[j]
        y_max_local = y_max - y_foc[j]

        # Translation to local polar coordinates:
        [angle, radius] = cart_to_pol(x_local, y_local)
        angle = np.rad2deg(angle)

        # Consolidation of coordinates data:
        data = np.vstack((angle, radius))

        # Calculation of the transect area:
        transect_area = np.zeros(180)
        for p in range(180):
            transect_angle = transect_angles[p]
            slope = slopes[p]
            transect_area[p] = calculate_transect_area(
                x_min_local, x_max_local, y_min_local, y_max_local,
                transect_angle, slope)

        # Preallocation of memory for all scales:
        variance_scale = np.zeros((l, 180))

        # Selection of scale:
        for k in range(l):
            # Transect width:
            transect_width = scale

            # Preallocation of memory for all positions:
            wavelet_transform = np.zeros(180)
            variance_normalized = np.zeros(180)

            # Selection of angular position:
            for p in range(180):
                # Finding the points inside the specified transect:
                points_observed = observe_points(data, p, transect_width[k])

                # Extraction of the polar coordinates of the observed points:
                angle_observed = points_observed[0, :]
                radius_observed = points_observed[1, :]

                # Calculation of the number of observed points:
                observations = len(angle_observed)

                # Calculation of scaled wavelets for each point within the transect /vectorized:
                if observations == 0:
                    scaled_wavelet = [0]  # Has to be '[0]' instead of '0' because '0' cannot
                                          # be used in sum (TypeError: 'int' object is not
                                          # iterable).
                else:
                    o = np.arange(0, observations, 1)
                    t = (angle_observed[o] - transect_angles[p]) / scale[k]
                    scaled_wavelet = radius_observed * wavelet_function(t)
                wavelet_transform[p] = sum(scaled_wavelet) / scale[k]  # No density measure.
                variance_normalized[p] = wavelet_transform[p]**2 / transect_area[p]
            variance_scale[k, :] = variance_normalized
        variance_avg_scale[j, :] = sum(variance_scale) / l  # All specified scales.
        #variance_avg_scale[j,:] = variance_scale           # Only one scale (1 degree).
    variance_avg_foc = sum(variance_avg_scale) / m
    return variance_avg_foc

#---------------------------------------------------------------------------------------
def cart_to_pol(x, y):
    angle = arctan(y, x)
    radius = np.sqrt(x**2 + y**2)
    return [angle, radius]

#---------------------------------------------------------------------------------------
def arctan(y, x):
    n = len(x)
    angle = np.zeros(n)
    for i in range(n):
        angle[i] = math.atan2(y[i], x[i])
        if angle[i] < 0:
            angle[i] = angle[i] + math.pi
    return angle

#---------------------------------------------------------------------------------------
def observe_points(data, p, transect_width):
    # Finding the points inside the specified transect:
    log_ind = np.logical_or(
        np.logical_and((p - transect_width/2) <= data[0, :],
                       data[0, :] <= (p + transect_width/2)),
        np.logical_and((p - transect_width/2 + 180) <= data[0, :],
                       data[0, :] <= (p + transect_width/2 + 180)))
    points_observed = data[:, log_ind]
    return points_observed

#---------------------------------------------------------------------------------------
def calculate_transect_area(x_min_local, x_max_local, y_min_local, y_max_local,
                            transect_angle, slope):
    # Slope variants:
    if (transect_angle == 0) or (transect_angle == 180):  # Horizontal line.
        x_lim_1 = x_min_local
        x_lim_2 = x_max_local
        y_lim_1 = 0
        y_lim_2 = 0
    elif transect_angle == 90:  # Vertical line.
        x_lim_1 = 0
        x_lim_2 = 0
        y_lim_1 = y_min_local
        y_lim_2 = y_max_local
    elif 0 < transect_angle < 90:
        # Boundary coordinates:
        if y_min_local/slope >= x_min_local:
            x_lim_1 = y_min_local/slope
            y_lim_1 = y_min_local
        else:
            x_lim_1 = x_min_local
            y_lim_1 = slope*x_min_local
        if y_max_local/slope <= x_max_local:
            x_lim_2 = y_max_local/slope
            y_lim_2 = y_max_local
        else:
            x_lim_2 = x_max_local
            y_lim_2 = slope*x_max_local
    else:
        # Boundary coordinates:
        if y_min_local/slope >= x_max_local:
            x_lim_1 = x_max_local
            y_lim_1 = slope*x_max_local
        else:
            x_lim_1 = y_min_local/slope
            y_lim_1 = y_min_local
        if y_max_local/slope <= x_min_local:
            x_lim_2 = x_min_local
            y_lim_2 = slope*x_min_local
        else:
            x_lim_2 = y_max_local/slope
            y_lim_2 = y_max_local
    transect_area = x_lim_1**2 + y_lim_1**2 + x_lim_2**2 + y_lim_2**2
    return transect_area  # This is not the real area of the transect; it's the area
                          # divided by omega/2, but that holds for all transects and
                          # omega is constant.

#---------------------------------------------------------------------------------------
def wavelet_function(t):
    # Wavelet function normalized to have unit energy (Mexican hat):
    mexican_hat = 2/(3**(1/2)) * math.pi**(-1/4) * (1 - 4*t**2) * np.exp(-2*t**2)
    return mexican_hat

#-- PARAMETERS --#

parser = argparse.ArgumentParser(
    description='This program quantifies the degree of banding in the microstructure '
                'by means of angular wavelet analysis. The only valid name of the '
                'input data is "coordinates.txt". \n',
    formatter_class=MultilineFormatter)

# Optional arguments:
parser.add_argument("-p", "--edge_parameter",  # Edge parameter limiting the focal points area.
                    type=float, default=0.25,
                    help='Edge parameter determines the boundaries of the focal points '
                         'area, as measured from the edge of the analyzed image. The '
                         'default value of the edge parameter is 0.25.')
parser.add_argument("-f", "--focal_points",
                    action='store_true',
                    help='Shows points within the focal points area.')
args = parser.parse_args()

#-- IMPORT OF DATA --#

D = np.loadtxt("coordinates.txt", delimiter=' ')

#-- EXTRACTION OF COORDINATES --#

x_extracted = D[:, 0]
y_extracted = D[:, 1]

# Transposition into vectors:
x = x_extracted.T
y = y_extracted.T

# Verification of vector lengths:
if len(x) != len(y):
    print('The imported data is corrupted: the numbers of the x and y coordinates '
          'do not match')

#-- BOUNDARY CONDITIONS --#

# Limits of the analyzed space:
x_min = min(x)
x_max = max(x)
y_min = min(y)
y_max = max(y)

# Edge region parameters for each coordinate:
edge_parameter = args.edge_parameter
edge_parameter_x = edge_parameter*(x_max - x_min)
edge_parameter_y = edge_parameter*(y_max - y_min)

# Limits of the focal point space:
x_foc_min = x_min + edge_parameter_x
x_foc_max = x_max - edge_parameter_x
y_foc_min = y_min + edge_parameter_y
y_foc_max = y_max - edge_parameter_y

#-- EXTRACTION OF FOCAL POINTS --#

# Generation of focal points:
in_focal_area = np.logical_and(np.logical_and(x > x_foc_min, x < x_foc_max),
                               np.logical_and(y > y_foc_min, y < y_foc_max))
x_foc = x[in_focal_area]
y_foc = y[in_focal_area]

# Focal points plot:
if args.focal_points:
    fig = plt.figure()
    ax1 = fig.add_subplot(111)
    ax1.scatter(x, y, s=1, color='black')        # All points.
    ax1.scatter(x_foc, y_foc, s=1, color='red')  # Focal points.
    plt.show()

#-- ANALYSIS --#

overall_variance = calculate_variance(x, y, x_foc, y_foc, x_min, x_max, y_min, y_max)

#-- OUTPUT --#

# Original variance peak:
print('Original variance peak:')
print(max(overall_variance))

# Plot:
plt.plot(np.arange(0, 180, 1), overall_variance)
plt.show()
[/code]
What profiling method would you recommend? I tried Spyder's built-in profiler (from Anaconda), but I couldn't find a way to save the analysis... From what I remember, the calculation took around 10 minutes, compared to roughly 3 minutes in MATLAB (and around 1-2 minutes in the software I am basing my analysis on). What stood out was that show(), while being called only once, took something like 280 seconds to finish.
I also tried using cProfile, but I am not sure how to read the output file. It's gibberish when saved as .txt, and when printed inside cmd it's way too long and much of it is lost. If I understand the output correctly, the total calculation time as measured by cProfile is only 5 minutes. Is it possible that Spyder is adding that much overhead?
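(Side note on the gibberish: the file cProfile writes with -o is a binary stats dump, not text; as far as I can tell it's meant to be loaded with pstats, e.g.:
[code]
import pstats

# Assumes the profile was saved with:  python -m cProfile -o profile.out script.py
stats = pstats.Stats("profile.out")
stats.strip_dirs()              # Drop long directory prefixes.
stats.sort_stats("cumulative")  # Sort by cumulative time per function.
stats.print_stats(20)           # Print only the top 20 entries.
[/code]
That keeps the output short enough to actually read in cmd.)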
|